Fix bugs blocking flipping the default layout constraint for custom ops #135391
Conversation
For regular PyTorch ops, the default layout constraint tag is always flexible_layout.

Test Plan:
- The next PR up in the stack. The PRs are split because the next one is riskier.

[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135391
Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)
As of commit 4c9d796 with merge base b7eb725:
FLAKY - The following job failed but was likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
…or custom ops"

Fixes two things:
- For regular PyTorch ops, the default layout constraint tag is always flexible_layout. This was a bug with #135238.
- Mark the new quantized _wrapped_linear_prepack ops as flexible_layout. The metas for these are incorrect; I didn't want to fix them (and changing the default requires the metas to actually be correct).

Test Plan:
- The next PR up in the stack. The PRs are split because the next one is riskier.

[ghstack-poisoned]
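As a hedged illustration of the tagging mechanism the second fix relies on, here is a minimal sketch of declaring a custom op with a layout-constraint tag from Python. The `mylib::my_linear_prepack` namespace, schema, and body are invented for this example; only the `flexible_layout` and `needs_fixed_stride_order` tag names come from the PR, and the exact `torch.library` calls may vary between PyTorch versions (the real change tags the quantized `_wrapped_linear_prepack` ops in the native op registrations instead).

```python
import torch

# Hypothetical namespace and op, for illustration only.
lib = torch.library.Library("mylib", "DEF")

# flexible_layout lets Inductor choose whatever input strides it likes;
# needs_fixed_stride_order would force it to keep the eager stride order.
lib.define(
    "my_linear_prepack(Tensor weight, Tensor? bias) -> Tensor",
    tags=(torch.Tag.flexible_layout,),
)

@torch.library.impl(lib, "my_linear_prepack", "CompositeExplicitAutograd")
def my_linear_prepack(weight, bias):
    # Placeholder body; a real prepack op would repack the weight.
    return weight.contiguous()
```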
SGTM!
@pytorchbot merge
Merge started
Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
By default, Inductor should respect the stride order of input Tensors to custom operators.

Test Plan:
- new tests

Pull Request resolved: #135239
Approved by: https://github.com/albanD
ghstack dependencies: #135391
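A hedged sketch of the kind of behavior those new tests could check; the `mylib::stride_spy` op, the `f` wrapper, and the channels-last setup are all invented here, and the actual tests live in #135239.

```python
import torch
from torch.library import custom_op

# Invented custom op that checks the strides it actually receives.
@custom_op("mylib::stride_spy", mutates_args=())
def stride_spy(x: torch.Tensor) -> torch.Tensor:
    # With stride order respected, Inductor should hand us the original
    # channels-last layout rather than a contiguous copy.
    assert x.is_contiguous(memory_format=torch.channels_last)
    return x.clone()

@stride_spy.register_fake
def _(x: torch.Tensor) -> torch.Tensor:
    return torch.empty_like(x)

@torch.compile(fullgraph=True)
def f(x):
    return stride_spy(x + 1)

x = torch.randn(2, 3, 8, 8).to(memory_format=torch.channels_last)
f(x)  # passes if Inductor preserves the input's stride order
```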
…ps (pytorch#135391)

Fixes two things:
- For regular PyTorch ops, the default layout constraint tag is always flexible_layout. This was a bug with pytorch#135238.
- Mark the new quantized _wrapped_linear_prepack ops as flexible_layout. The metas for these are incorrect; I didn't want to fix them (and changing the default requires the metas to actually be correct).

Test Plan:
- The next PR up in the stack. The PRs are split because the next one is riskier.

Pull Request resolved: pytorch#135391
Approved by: https://github.com/albanD
Stack from ghstack (oldest at bottom):
Fixes two things:

- For regular PyTorch ops, the default layout constraint tag is always flexible_layout. This was a bug with Add Inductor config for default stride behavior #135238 (see the config sketch after this list).
- Mark the new quantized _wrapped_linear_prepack ops as flexible_layout. The metas for these are incorrect; I didn't want to fix them (and changing the default requires the metas to actually be correct).

Test Plan:

- The next PR up in the stack. The PRs are split because the next one is riskier.
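A hedged sketch of the config knob involved: the attribute name `custom_op_default_layout_constraint` and its string values are assumptions based on #135238 ("Add Inductor config for default stride behavior") and may not match a given PyTorch build.

```python
import torch._inductor.config as inductor_config

# Assumed knob from #135238 (name and values may differ by PyTorch version).
# "flexible_layout": Inductor may reorder strides of custom-op inputs.
# "needs_fixed_stride_order": Inductor keeps the eager stride order.
print(inductor_config.custom_op_default_layout_constraint)

# Flipping the default, which the next PR in this stack does globally:
inductor_config.custom_op_default_layout_constraint = "needs_fixed_stride_order"
```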
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang