[core IR] Add lift_fresh, split.Tensor, and unbind decompositions to core ATen decomp table #110102
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110102. Note: Links to docs will display an error until the docs builds have been completed. ⏳ No Failures, 1 Pending as of commit 230b584 with merge base 34ded74. This comment was automatically generated by Dr. CI and updates every 15 minutes.
aten.leaky_relu_backward,
aten.lerp,
aten.lerp_,
aten.lift_fresh,
The lift_fresh decomposition is just a noop, and it has a note saying that it's not correct under autograd. Do we want to add it as a core decomp?
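For reference, the decomposition in question is essentially an identity function. A minimal sketch of that shape (assuming `torch._decomp.register_decomposition` accepts a custom `registry` dict, and registering into a throwaway table rather than the real one):

```python
import torch
from torch._decomp import register_decomposition

aten = torch.ops.aten
scratch_table = {}  # throwaway registry so the global decomp table is untouched

# Sketch: lift_fresh decomposed as a noop that simply returns its input.
# The upstream decomposition carries a note that this is not correct under
# autograd, which is what this review thread is about.
@register_decomposition(aten.lift_fresh, registry=scratch_table)
def lift_fresh_noop(self):
    return self
```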
My understanding is that the core decompositions are registered to the "post_autograd" decomposition table (see torch/_decomp/__init__.py), which presumably means these decompositions are meant to be used when autograd is not a factor.
This is a good point, though; I'd like to get additional confirmation that what I said above makes sense.
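As a concrete illustration of the "post_autograd" usage, the core table returned by `torch._decomp.core_aten_decompositions()` is handed to a tracer after autograd has already been handled. A small sketch (the exact shape of the decomposed graph may vary by PyTorch version):

```python
import torch
from torch._decomp import core_aten_decompositions
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    # unbind is one of the ops this PR adds to the core decomposition table
    return torch.unbind(x, dim=0)

decomp_table = core_aten_decompositions()
gm = make_fx(f, decomposition_table=decomp_table)(torch.randn(3, 4))
print(gm.graph)  # expect unbind to be rewritten in terms of simpler core ops
```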
Hmm @ezyang - today, when we see a tensor constant during tracing, it turns into a `constant_lifted = aten.lift_fresh.default(self._tensor_constant0)` call in the graph. Do you think lift_fresh should be a first-class concept for backends?
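A quick way to observe this behavior (hedged: depending on the tracer and PyTorch version, the lifted call may show up as `aten.lift_fresh` or its functional variant `aten.lift_fresh_copy`):

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    # the tensor literal below becomes a _tensor_constant attribute that gets lifted
    return x + torch.tensor([1.0, 2.0])

gm = make_fx(f)(torch.randn(2))
gm.graph.print_tabular()  # look for a lift_fresh / lift_fresh_copy node on the constant
```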
In a functionalized graph the no-op decomp is OK.
In a graph with mutation, it is required to clone the constant, because there may be a mutation that modifies the module attribute.
I don't really have an opinion outside of this requirement.
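A self-contained illustration of that requirement (plain eager code, not the actual decomposition machinery): if `lift_fresh` is a pure noop and the graph then mutates the lifted value, the original constant is corrupted; a `clone` preserves it.

```python
import torch

const = torch.zeros(3)  # stands in for a module's self._tensor_constant0

# noop "decomposition": the lifted value aliases the constant itself
y = const
y.add_(1)
print(const)  # tensor([1., 1., 1.]) -- the stored constant was mutated

const = torch.zeros(3)
# clone-based handling: mutation stays local to the graph
z = const.clone()
z.add_(1)
print(const)  # tensor([0., 0., 0.]) -- the stored constant is preserved
```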
We guarantee that a graph of core ATen IR is always functional (right, @SherlockNoMad?). Then it sounds reasonable for core ATen to no-op lift_fresh.
@pytorchbot merge

Merge started: Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: New commits were pushed while merging. Please rerun the merge command. (Details for Dev Infra team: raised by workflow job.)
@pytorchbot merge

Merge started: Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot -m "Breaks internal CI" -c ghfirst
❌ 🤖 pytorchbot command failed: Try …
@pytorchbot revert -m "Breaks internal CI" -c ghfirst
@pytorchbot successfully started a revert job. Check the current status here.
@SS-JIA your PR has been successfully reverted.
…ions to core ATen decomp table (#110102)" This reverts commit 22e706f. Reverted #110102 on behalf of https://github.com/atalman due to "Breaks internal CI".
 tts_angular,pass,10
 vgg16,pass,8
-vision_maskrcnn,fail_accuracy,42
+vision_maskrcnn,fail_accuracy,39
Any idea why this PR affected the number of graph breaks in vision_maskrcnn? 🤔
So the actual cause of this was #110160.
Not sure why it was showing a failure for this PR though...
…ore ATen decomp table (#110323) Summary: This is a reland of [github PR #110102](#110102). The original PR had to be unlanded due to internal CI failures. This diff applies some small fixes to the failing tests to adjust to the new decompositions. Note that `lift_fresh` will not be decomposed for now, since it was found that [constant propagation looks specifically for `lift_fresh`](https://github.com/pytorch/pytorch/blob/13af952f94e611f67ac306233898b753a9cf73d2/torch/fx/experimental/proxy_tensor.py#L381-L386). Therefore, decomposing `lift_fresh` would interfere with constant propagation during export. Test Plan: GitHub CI and internal CI. Differential Revision: D49761321. Pull Request resolved: #110323. Approved by: https://github.com/jansel
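For context, a paraphrased sketch (hypothetical helper name, not the verbatim `proxy_tensor.py` source at the linked lines) of the kind of special-casing that makes decomposing `lift_fresh` problematic for export-time constant propagation:

```python
import torch

aten = torch.ops.aten

def is_lifted_constant(func):
    # Hypothetical paraphrase: tracing-time constant propagation keys off
    # lift_fresh (or its functional variant) to detect a tensor constant
    # entering the graph; decomposing lift_fresh away removes that signal.
    return func in (aten.lift_fresh.default, aten.lift_fresh_copy.default)
```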
Stack from ghstack (oldest at bottom):
Context
Add existing decomps for `lift_fresh`, `split.Tensor`, and `unbind` to the core ATen decomposition table. Do not use them in Inductor, since Inductor currently lowers these directly.

One note though is that `lift_fresh`'s decomposition has a note saying it's not correct under autograd. However, my understanding is that these decompositions are registered to the `"post_autograd"` decomposition table, meaning autograd wouldn't be a factor. Would like some confirmation that this premise is correct.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
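A quick way to check which of these entries ended up in the core table on a given build (membership may differ by PyTorch version, and per the reland note above `lift_fresh` was ultimately left out):

```python
import torch
from torch._decomp import core_aten_decompositions

table = core_aten_decompositions()
aten = torch.ops.aten
print(aten.unbind.int in table)          # expected True after this change
print(aten.split.Tensor in table)        # expected True after this change
print(aten.lift_fresh.default in table)  # expected False after the reland
```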