Fix bugs blocking flipping the default layout constraint for custom ops by zou3519 · Pull Request #135391 · pytorch/pytorch · GitHub

Conversation

@zou3519
Contributor

@zou3519 zou3519 commented Sep 6, 2024

Stack from ghstack (oldest at bottom):

Fixes two things:

  • For regular PyTorch ops, the default layout constraint tag is always
    flexible_layout. This was a bug introduced by #135238 (Add Inductor
    config for default stride behavior).
  • Mark the new quantized _wrapped_linear_prepack ops as flexible_layout
    (see the sketch after this list). The metas for these are incorrect;
    I didn't want to fix them, and flipping the default requires the metas
    to actually be correct.
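
For illustration, a minimal sketch of what tagging an op with
flexible_layout looks like. The operator "mylib::my_op" is made up for this
example; only the tag itself mirrors what this PR applies to the
_wrapped_linear_prepack ops:

```python
import torch
import torch.library

# Hypothetical op, for illustration only; the flexible_layout tag tells
# Inductor it may freely manipulate the strides of this op's inputs.
torch.library.define(
    "mylib::my_op",
    "(Tensor x) -> Tensor",
    tags=(torch._C.Tag.flexible_layout,),
)

# Minimal CPU implementation so the op is callable.
@torch.library.impl("mylib::my_op", "cpu")
def _(x):
    return x.clone()
```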

Test Plan:

  • The next PR up in the stack. The PRs are split because the next one is
    riskier.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang

@pytorch-bot

pytorch-bot bot commented Sep 6, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135391

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 4c9d796 with merge base b7eb725:

FLAKY - One job failed, but it was likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@zou3519 zou3519 changed the title Fix default layout constraint for non-custom ops Fix bugs blocking flipping the default layout constraint for custom ops Sep 9, 2024
@pytorch-bot pytorch-bot bot added the release notes: quantization release notes category label Sep 9, 2024
@zou3519 zou3519 requested review from albanD and removed request for digantdesai, jerryzh168, jianyuh, kimishpatel and salilsdesai September 9, 2024 14:37
@zou3519 zou3519 added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 9, 2024
Collaborator

@albanD albanD left a comment


SGTM!

@zou3519 zou3519 added topic: not user facing topic category and removed release notes: quantization release notes category labels Sep 9, 2024
@zou3519
Contributor Author

zou3519 commented Sep 9, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

pytorchmergebot pushed a commit that referenced this pull request Sep 10, 2024
By default, Inductor should respect the stride order of input Tensors to
custom operators.

Test Plan:
- new tests

Pull Request resolved: #135239
Approved by: https://github.com/albanD
ghstack dependencies: #135391
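
For context, the flipped default can still be overridden globally. A minimal
sketch, assuming the config knob added in #135238 is named
custom_op_default_layout_constraint in torch/_inductor/config.py (the name
and values here are from recollection, not verified against the diff):

```python
import torch._inductor.config as inductor_config

# Opt custom ops without an explicit layout tag back into the old
# behavior, where Inductor may freely choose their input strides.
inductor_config.custom_op_default_layout_constraint = "flexible_layout"
```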
tolleybot pushed a commit to tolleybot/pytorch that referenced this pull request Sep 14, 2024
tolleybot pushed a commit to tolleybot/pytorch that referenced this pull request Sep 14, 2024
Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Sep 20, 2024
Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Sep 20, 2024
@github-actions github-actions bot deleted the gh/zou3519/1065/head branch October 12, 2024 02:05