step 0 of cuDNN v8 convolution API integration by zasdfgbnm · Pull Request #51390 · pytorch/pytorch · GitHub

Conversation


@zasdfgbnm zasdfgbnm commented Jan 30, 2021

This PR is step 0 of adding PyTorch convolution bindings that use the cuDNN frontend, the recommended way of using the cuDNN v8 API. The frontend is supposed to have faster release cycles: if, for example, a specific kernel is found to have a bug, it can be reported and blocked in the cuDNN frontend, and frameworks can simply update the submodule rather than wait for a full cuDNN release.

The work is not complete, and this PR is only step 0.

What this PR does:

  • Add cudnn-frontend as a submodule.
  • Modify cmake to build that submodule.
  • Add bindings for convolution forward in Conv_v8.cpp, which is disabled by a macro by default.
  • Tested manually by enabling the macro and running test_nn.py. All tests pass except those mentioned below.

What this PR doesn't do:

  • Only convolution forward for now; backward will continue to use the v7 API.
  • No 64-bit-indexing support for some configurations. This is a known cuDNN issue that will be fixed in a later cuDNN version. PyTorch will not implement a workaround for this issue; instead, the v8 API should be disabled on affected cuDNN versions.
  • No test beyond PyTorch's unit tests.
    • Not tested for correctness on real models.
    • Not benchmarked for performance.
  • Benchmark cache is not thread-safe. (This is marked as FIXME in the code, and will be fixed in a follow-up PR)
  • cuDNN benchmark is not supported.
  • There are failing tests, which will be resolved later:
    FAILED test/test_nn.py::TestNNDeviceTypeCUDA::test_conv_cudnn_nhwc_cuda_float16 - AssertionError: False is not true : Tensors failed to compare as equal!With rtol=0.001 and atol=1e-05, found 32 element(s) (out of 32) whose difference(s) exceeded the margin of error (in...
    FAILED test/test_nn.py::TestNNDeviceTypeCUDA::test_conv_cudnn_nhwc_cuda_float32 - AssertionError: False is not true : Tensors failed to compare as equal!With rtol=1.3e-06 and atol=1e-05, found 32 element(s) (out of 32) whose difference(s) exceeded the margin of error (...
    FAILED test/test_nn.py::TestNNDeviceTypeCUDA::test_conv_large_cuda - RuntimeError: CUDNN_BACKEND_OPERATION: cudnnFinalize Failed cudnn_status: 9
    FAILED test/test_nn.py::TestNN::test_Conv2d_depthwise_naive_groups_cuda - AssertionError: False is not true : Tensors failed to compare as equal!With rtol=0 and atol=1e-05, found 64 element(s) (out of 64) whose difference(s) exceeded the margin of error (including 0 an...
    FAILED test/test_nn.py::TestNN::test_Conv2d_deterministic_cudnn - RuntimeError: not supported yet
    FAILED test/test_nn.py::TestNN::test_ConvTranspose2d_groups_cuda_fp32 - RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
    FAILED test/test_nn.py::TestNN::test_ConvTranspose2d_groups_cuda_tf32 - RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
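
The 64-bit-indexing limitation mentioned above can be illustrated with a quick calculation: configurations that index tensors with 32-bit signed integers break once a tensor holds more than 2^31 - 1 elements. A rough sketch of such a guard (plain Python; the function names here are illustrative, not the actual PyTorch code):

```python
INT32_MAX = 2**31 - 1  # largest offset a 32-bit signed index can address

def numel(shape):
    """Total number of elements for a tensor shape such as (N, C, H, W)."""
    n = 1
    for d in shape:
        n *= d
    return n

def needs_64bit_indexing(*shapes):
    """True if any tensor in the convolution exceeds 32-bit indexing."""
    return any(numel(s) > INT32_MAX for s in shapes)

# 2 x 64 x 4096 x 4096 = 2,147,483,648 elements -- one past INT32_MAX,
# so 32-bit indexing would overflow for this input tensor.
print(needs_64bit_indexing((2, 64, 4096, 4096), (64, 64, 3, 3)))
```

A check along these lines explains the `test_conv_large_cuda` failure above: until the cuDNN fix lands, large-tensor cases have to fall back or be disabled.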
    

Although this is not a complete implementation of the cuDNN v8 API bindings, I would like to merge it first. Doing so lets the work proceed in small, incremental steps that are easier to develop and review.
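
One of the follow-ups noted above, making the benchmark cache thread-safe, amounts to guarding the plan lookup table with a lock. A minimal sketch in Python (the actual code is C++; the class and method names here are illustrative only):

```python
import threading

class BenchmarkCache:
    """Maps a convolution-parameter key to a chosen execution plan.

    A lock makes concurrent lookups and inserts safe; without it, two
    threads could race while growing the underlying dict.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}

    def find(self, key):
        # Returns the cached plan, or None on a cache miss.
        with self._lock:
            return self._cache.get(key)

    def insert(self, key, plan):
        with self._lock:
            self._cache[key] = plan

cache = BenchmarkCache()
key = ("conv2d", (1, 3, 224, 224), (64, 3, 7, 7))
cache.insert(key, "plan_0")
print(cache.find(key))
```

Keying on the full set of convolution parameters (shapes, strides, dtype, etc.) is what lets a benchmarked plan be reused across calls.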

jerryzh168 added a commit that referenced this pull request Feb 1, 2022
…r in cudnn"


Summary:
This is the initial PR adding eager-mode quantized GPU operator support. We will start
with convolution, following the cuDNN fp32 conv code and the example cuDNN frontend code:
#51390
https://github.com/NVIDIA/cudnn-frontend/blob/main/samples/fusion_sample.cpp#L557

TODO:
1. Support bias and relu; support more flexible parameters
2. Use the packed_params API

Test Plan:
```
> USE_EXPERIMENTAL_CUDNN_V8_API=1 python setup.py install
> python test/test_quantization.py TestQuantizedConv.test_qconv2d_cudnn
```

debug command:
```
CUDNN_LOGINFO_DBG=1 CUDNN_LOGWARN_DBG=1 CUDNN_LOGERR_DBG=1 CUDNN_LOGDEST_DBG=stdout python test/test_quantization.py TestQuantizedConv.test_qconv2d_cudnn > log
```

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D33409155](https://our.internmc.facebook.com/intern/diff/D33409155)

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Feb 4, 2022
Summary:
Pull Request resolved: #70622

This is the initial PR adding eager-mode quantized GPU operator support. We will start
with convolution, following the cuDNN fp32 conv code and the example cuDNN frontend code:
#51390
https://github.com/NVIDIA/cudnn-frontend/blob/main/samples/fusion_sample.cpp#L557

Test Plan:
python test/test_quantization.py TestQuantizedConv.test_qconv2d_cudnn

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33409155

fbshipit-source-id: cb5183d274993fcd2c3ab6de8ae022baa9f89f7f
eqy added a commit to eqy/pytorch that referenced this pull request Feb 4, 2022
eqy added a commit to eqy/pytorch that referenced this pull request Mar 1, 2022