[export] Apply move_to_device_pass to all submodules #159992
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159992
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit 819dd82 with merge base 9fd5b5f. UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D79578038
```python
ep = torch.export.export(M(), (torch.ones(3),))
ep = move_to_device_pass(ep, "cuda")
ep.graph_module.submod_1.recompile()
```
Where does the submod_1 come from?
From the autocast: the autocast region is captured as a higher-order op, and its body is outlined into the `submod_1` submodule of `ep.graph_module`.
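For illustration, here is a minimal sketch of how an autocast region can end up as such a submodule after export; the example model, the exact higher-order op, and the `submod_1` name are assumptions that may vary across PyTorch versions and export modes:

```python
# Hypothetical sketch: how a higher-order-op submodule such as `submod_1`
# can appear when exporting a model that uses an autocast region.
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x):
        # The autocast context manager is captured as a higher-order op whose
        # body is outlined into a child graph module of ep.graph_module.
        with torch.autocast("cpu", dtype=torch.bfloat16):
            return x + torch.ones(3)

ep = export(M(), (torch.ones(3),))
ep.graph_module.print_readable()  # look for a call into e.g. submod_1
```

Under that assumption, `submod_1` is simply the outlined autocast body, which is why it also needs the device rewrite and a `recompile()` afterwards, as in the test snippet above.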
```python
else v,
node.meta.get("val"),
)
for m in ep.graph_module.modules():
```
Can you maybe add some description to the PR summary on when we need to apply the pass to all submodules?
updated!
But... we should apply this pass to all submodules, no?
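For context, a simplified, hypothetical sketch of the overall idea behind running the pass over every submodule; the helper names and metadata handling below are illustrative and not this PR's actual implementation:

```python
# Simplified, hypothetical version of the idea: rewrite device references in
# the node metadata of *every* graph module, not only the top-level one.
import torch
from torch.fx import GraphModule
from torch.utils._pytree import tree_map

def _rewrite_val_devices(gm: GraphModule, target: torch.device) -> None:
    # Move any tensor recorded under node.meta["val"] to the target device,
    # leaving non-tensor leaves untouched (the "else v" branch in the diff).
    for node in gm.graph.nodes:
        if "val" in node.meta:
            node.meta["val"] = tree_map(
                lambda v: v.to(target) if isinstance(v, torch.Tensor) else v,
                node.meta["val"],
            )

def _move_all_graph_modules(ep, device: str):
    target = torch.device(device)
    # The key change: iterate over all submodules so that higher-order-op
    # bodies (e.g. an autocast submodule) are updated and recompiled too.
    for m in ep.graph_module.modules():
        if isinstance(m, GraphModule):
            _rewrite_val_devices(m, target)
            m.recompile()
    return ep
```

Iterating over `ep.graph_module.modules()` is what ensures higher-order-op bodies are visited, which is the behavioral change this PR makes.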
Force pushed from 435ee4e to 312bf2e (Compare)

Summary: Pull Request resolved: pytorch#159992
Test Plan: CI
Rollback Plan:
Reviewed By: yiming0416
Differential Revision: D79578038
This pull request was exported from Phabricator. Differential Revision: D79578038
Summary: Pull Request resolved: pytorch#159992
Test Plan: CI
Rollback Plan:
Reviewed By: yiming0416
Differential Revision: D79578038
Force pushed from 312bf2e to 819dd82 (Compare)
This pull request was exported from Phabricator. Differential Revision: D79578038
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Previously, move_to_device_pass was only applied to the top-level graph, so when the program contains higher-order ops (HOOs) the pass was not applied to the HOO submodules. This PR modifies the pass to run on all submodules.

Pull Request resolved: pytorch#159992
Approved by: https://github.com/yiming0416
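As a rough end-to-end illustration of the fixed behavior, here is a sketch assuming a CUDA build; the example model, the submodule naming, and the `val` metadata layout are assumptions, not taken from the PR's tests:

```python
# Rough end-to-end check of the new behavior (assumes a CUDA build; the model,
# submodule naming, and "val" metadata layout are assumptions for this sketch).
import torch
from torch.export import export
from torch.export.passes import move_to_device_pass

class M(torch.nn.Module):
    def forward(self, x):
        with torch.autocast("cpu", dtype=torch.bfloat16):
            return x + torch.ones(3)

if torch.cuda.is_available():
    ep = move_to_device_pass(export(M(), (torch.ones(3),)), "cuda")
    # With the pass applied to every submodule, tensors recorded in the
    # higher-order-op submodule's metadata should also be on CUDA now.
    for name, sub in ep.graph_module.named_modules():
        if not isinstance(sub, torch.fx.GraphModule):
            continue
        for node in sub.graph.nodes:
            val = node.meta.get("val")
            if isinstance(val, torch.Tensor):
                assert val.device.type == "cuda", f"{name}:{node.name} not moved"
```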