Migrate thnn_conv_depthwise2d from THC to ATen by peterbell10 · Pull Request #62281 · pytorch/pytorch · GitHub

Conversation

@peterbell10
Collaborator

@peterbell10 peterbell10 commented Jul 27, 2021

Stack from ghstack:


Differential Revision: D29943062

Closes gh-24646, gh-24647

There is no `TensorIterator` equivalent to these kernels so this is just
migrating the existing kernels over to the ATen style.

I've benchmarked for contiguous tensors with this script:
```python
import torch
shape = (10, 10, 100, 100)
x = torch.randn(*shape, device='cuda')
w = torch.randn((10, 1, 5, 5), device='cuda')

for _ in range(100):
    torch.nn.functional.conv2d(x, w, groups=10)
```
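For reference, `groups=10` with a weight of shape `(10, 1, 5, 5)` makes this a depthwise convolution: each input channel is convolved with its own single filter, which is exactly what the migrated kernels implement. A small sketch (names chosen here for illustration) checking that against an explicit per-channel `conv2d`:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(10, 10, 100, 100)   # (N, C, H, W)
w = torch.randn(10, 1, 5, 5)        # one 5x5 filter per input channel

# Depthwise conv: groups == in_channels, one filter per group.
y = F.conv2d(x, w, groups=10)

# Equivalent reference: convolve each channel independently and stack.
y_ref = torch.cat(
    [F.conv2d(x[:, c:c + 1], w[c:c + 1]) for c in range(10)], dim=1
)
print(torch.allclose(y, y_ref, atol=1e-5))
```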

and similarly for the backward passes. The timings agree to within measurement error.

|                   | Master (us) | This PR (us) |
|------------------:|:-----------:|:------------:|
|           Forward |    133.5    |     133.6    |
|  Backward (input) |    1,102    |     1,119    |
| Backward (weight) |    2,220    |     2,217    |
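As an aside on collecting numbers like these: a bare Python loop over a CUDA op can be misleading because kernel launches are asynchronous, so timings need explicit synchronization. One way to handle that (not necessarily how the figures above were gathered) is `torch.utils.benchmark`, which takes care of warmup and CUDA synchronization; this sketch falls back to CPU when no GPU is available:

```python
import torch
from torch.utils import benchmark

# Falls back to CPU when CUDA is unavailable, so the sketch stays runnable.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(10, 10, 100, 100, device=device)
w = torch.randn(10, 1, 5, 5, device=device)

# Timer synchronizes CUDA between runs, avoiding skew from async launches.
t = benchmark.Timer(
    stmt='torch.nn.functional.conv2d(x, w, groups=10)',
    globals={'x': x, 'w': w},
)
m = t.timeit(100)
print(f'{m.mean * 1e6:.1f} us per call')
```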

[ghstack-poisoned]
@facebook-github-bot
Contributor

facebook-github-bot commented Jul 27, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit f02533e (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



1 job timed out:

  • pytorch_xla_linux_bionic_py3_6_clang9_test

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

```
git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD
```

This comment was automatically generated by Dr. CI.

peterbell10 added a commit that referenced this pull request Jul 27, 2021
ghstack-source-id: a0555c8
Pull Request resolved: #62281
@peterbell10 peterbell10 requested a review from ngimel July 27, 2021 18:37
@peterbell10 peterbell10 added the `module: porting` (Issues related to porting TH/THNN legacy to ATen native) and `open source` labels Jul 27, 2021
@ngimel
Collaborator

ngimel commented Jul 27, 2021

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@albanD albanD removed their request for review July 27, 2021 20:31
@facebook-github-bot
Contributor

@ngimel merged this pull request in 9776e1f.

@facebook-github-bot facebook-github-bot deleted the gh/peterbell10/107/head branch July 31, 2021 14:17
