[export] Move example inputs in move_to_device_pass #162301
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/162301
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit 0e56d70 with merge base 081cab0:
BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D81812366
Force-pushed from 2a76b7b to 0e20504
Summary: If I have an ExportedProgram (EP) that was exported on CPU but want to AOTI-compile it for CUDA, I need to use `move_to_device_pass`. But `torch._inductor.aoti_compile_and_package()` directly uses the `example_inputs` attached to the EP, so we should move the example inputs as well if applicable.

Test Plan: buck2 run mode/dev-nosan caffe2/test:test_export -- -r test_move_device_example_inputs

Rollback Plan:

Reviewed By: angelayi

Differential Revision: D81812366
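For context, a minimal sketch of the workflow this PR fixes; the toy module and input shape are illustrative assumptions, not taken from the PR:

```python
# Requires a CUDA-enabled PyTorch build; the module below is a stand-in.
import torch
from torch.export import export
from torch.export.passes import move_to_device_pass
from torch._inductor import aoti_compile_and_package

class Add(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Export on CPU; the resulting EP carries CPU example_inputs.
ep = export(Add(), (torch.randn(4),))

# Retarget the exported program to CUDA. With this PR, the attached
# example_inputs are moved to the target device as well.
ep_cuda = move_to_device_pass(ep, "cuda")

# aoti_compile_and_package() reads ep_cuda.example_inputs directly, so
# without this fix the example inputs would still live on CPU here.
package_path = aoti_compile_and_package(ep_cuda)
```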
Force-pushed from 0e20504 to e1016a7
Force-pushed from e1016a7 to 0e56d70
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
If I have an ExportedProgram (EP) that was exported on CPU and want to AOTI-compile it for CUDA, I need to use `move_to_device_pass`. But `torch._inductor.aoti_compile_and_package()` directly uses the `example_inputs` attached to the EP, so we should move the example inputs as well if applicable.

Test Plan:
buck2 run mode/dev-nosan caffe2/test:test_export -- -r test_move_device_example_inputs

Rollback Plan:

Differential Revision: D81812366
Pull Request resolved: pytorch#162301
Approved by: https://github.com/angelayi
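For illustration, a rough sketch of what moving the example inputs amounts to. The helper `move_example_inputs` is hypothetical, not the actual implementation inside `move_to_device_pass`, and the real pass also retargets the program's graph and weights:

```python
# Hypothetical helper; ExportedProgram.example_inputs is an (args, kwargs)
# pair, so we map every tensor leaf in both pytrees onto the target device.
import torch
from torch.utils._pytree import tree_map

def move_example_inputs(example_inputs, device):
    args, kwargs = example_inputs
    move = lambda t: t.to(device) if isinstance(t, torch.Tensor) else t
    return tree_map(move, args), tree_map(move, kwargs)

# e.g. CPU example inputs rewritten for CUDA:
args, kwargs = move_example_inputs(((torch.randn(4),), {}), "cuda")
```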