Switch to CUDA implementation if batch size >= 65536 for affine_grid by vishwakftw · Pull Request #16403 · pytorch/pytorch

Conversation

@vishwakftw
Contributor

@vishwakftw vishwakftw commented Jan 26, 2019

Changelog:

  • Add a condition that switches to the native CUDA implementation of affine_grid when the batch size is at least 65536

Fixes #16365

cc: @fmassa
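The change can be sketched as a dispatch condition: the cuDNN affine grid generator cannot handle batches of 65536 or more, so the wrapper falls back to the native CUDA path above that threshold. A minimal illustration of that dispatch logic (simplified pseudocode, not the actual PyTorch source; the function and constant names here are hypothetical):

```python
# Sketch of the dispatch condition described in the changelog.
# CUDNN_MAX_BATCH and choose_affine_grid_backend are illustrative
# names, not identifiers from the PyTorch codebase.
CUDNN_MAX_BATCH = 65536  # batches at or above this size crash cuDNN's generator


def choose_affine_grid_backend(batch_size: int, cudnn_available: bool) -> str:
    """Pick the backend a dispatcher like this PR's would use."""
    if cudnn_available and batch_size < CUDNN_MAX_BATCH:
        return "cudnn"
    # Fall back to the native CUDA implementation for huge batches
    # (or when cuDNN is unavailable).
    return "native"


print(choose_affine_grid_backend(1024, True))    # cudnn
print(choose_affine_grid_backend(65536, True))   # native
```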

Contributor

@facebook-github-bot facebook-github-bot left a comment


@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@soumith
Member

soumith commented Jan 26, 2019

closed via 8c81a72

@soumith soumith closed this Jan 26, 2019
@soumith soumith added this to the 1.0.1 milestone Jan 26, 2019
@vishwakftw vishwakftw deleted the affine-grid-huge-batches branch February 3, 2019 17:00
soumith pushed a commit that referenced this pull request Feb 4, 2019
Switch to CUDA implementation if batch size >= 65536 for affine_grid (#16403)

Summary:
Changelog:

- Append a condition that switches to the native CUDA implementation for affine_grid

Fixes #16365

Differential Revision: D13832192

Pulled By: soumith

fbshipit-source-id: 3f484e6673d71e3ba7627b170cb8f1611e12b9b2
@soumith soumith added the cherry-picked This PR was cherry-picked onto a release branch from master label Feb 4, 2019

Labels

cherry-picked — This PR was cherry-picked onto a release branch from master
open source

Projects

None yet

Development

Successfully merging this pull request may close these issues.

torch.nn.functional.affine_grid on GPU crashes when batch size >= 256*256

5 participants