Add batch rule for `native_dropout_backward` by guilhermeleobas · Pull Request #140140 · pytorch/pytorch · GitHub

Conversation

@guilhermeleobas
Collaborator

@guilhermeleobas guilhermeleobas commented Nov 8, 2024

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Nov 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/140140

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 7554e87 with merge base 83e36a6 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@guilhermeleobas guilhermeleobas added the module: functorch (Pertaining to torch.func or pytorch/functorch) and release notes: torch.func (release notes category for torch.vmap or torch.func.* APIs) labels Nov 8, 2024
[ghstack-poisoned]
guilhermeleobas added a commit that referenced this pull request Nov 8, 2024
Fixes: #122432

ghstack-source-id: 93ba434
Pull Request resolved: #140140
@guilhermeleobas guilhermeleobas marked this pull request as ready for review November 8, 2024 16:32
UNARY_POINTWISE_RANDOM_LEADING_FLOAT(normal, float_Tensor);

m.impl("native_dropout", native_dropout_batching_rule); // needs special casing because cuda version doesn't call bernoulli
m.impl("native_dropout_backward", native_dropout_backward_batch_rule);
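For context, `native_dropout_backward` is a pointwise op: it scales the incoming gradient by the saved dropout mask and a scale factor (typically `1 / (1 - p)`). A minimal Python sketch of that math, using plain lists and hypothetical helper names (not the ATen implementation), shows why its batch rule is simple — batching a pointwise op is just applying the same computation per batch element:

```python
def dropout_backward(grad_output, mask, scale):
    # grad_input = grad_output * mask * scale, elementwise
    return [g * m * scale for g, m in zip(grad_output, mask)]

def vmap_dropout_backward(batched_grad, batched_mask, scale):
    # Because the op is pointwise, the "batch rule" is just a map over
    # the batch dimension; no dimension bookkeeping is needed.
    return [dropout_backward(g, m, scale)
            for g, m in zip(batched_grad, batched_mask)]

grad = [[1.0, 2.0], [3.0, 4.0]]   # batch of 2 gradients
mask = [[1.0, 0.0], [0.0, 1.0]]   # saved dropout masks
scale = 2.0                        # 1 / (1 - p) with p = 0.5
print(vmap_dropout_backward(grad, mask, scale))  # [[2.0, 0.0], [0.0, 8.0]]
```

This same structure is also why the op can be decomposed (as suggested below): the backward is expressible entirely in terms of existing elementwise ops.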
Collaborator Author

@guilhermeleobas guilhermeleobas Nov 11, 2024


This should be OP_DECOMPOSE and moved to BatchRulesDecompositions.cpp.

Contributor

@zou3519 zou3519 left a comment


looks fine, just move to BatchRulesDecomposition please

[ghstack-poisoned]
@guilhermeleobas
Collaborator Author

Hit an error for CompositeImplicitAutograd:

AssertionError: The registrations in BatchedDecompositions.cpp must be for CompositeImplicitAutograd operations. If your operation aten::native_dropout_backward is not CompositeImplicitAutograd, then please register it to the FuncTorchBatched key in another file.

[ghstack-poisoned]
guilhermeleobas added a commit that referenced this pull request Nov 11, 2024
Fixes: #122432

ghstack-source-id: 93ba434
Pull Request resolved: #140140
@guilhermeleobas
Collaborator Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk (Trigger trunk jobs on your pull request) label Nov 12, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
@github-actions github-actions bot deleted the gh/guilhermeleobas/75/head branch December 14, 2024 02:11