[ATen] Support multi dim any and all reductions #110310
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/110310.
Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit aeaa808 with merge base 4f79161. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
This adds a new overload to `all` and `any` with support for multiple reduction dims.

```
all.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
any.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
```
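For intuition, the intended behavior of a dim-tuple reduction can be sketched with NumPy, whose `axis` tuple and `keepdims` semantics these overloads are assumed to mirror (a sketch, not the PR's implementation):

```python
import numpy as np

# A (2, 2, 2) boolean tensor with a single True element.
x = np.array([[[True, False], [False, False]],
              [[False, False], [False, False]]])

# any over dims (0, 2): both axes are reduced in one call, leaving axis 1.
r = np.any(x, axis=(0, 2))
print(r.tolist())  # [True, False]

# keepdim=True analogue: reduced axes remain as size-1 dims.
rk = np.any(x, axis=(0, 2), keepdims=True)
print(rk.shape)    # (1, 2, 1)
```

With `dim=None` (the default in the schema above), the reduction runs over all dims, matching the existing `all`/`any` full reductions.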
test/onnx/test_fx_op_consistency.py (outdated)

```diff
     "amax",
     "amin",
-    "any",
+    # "any",  # onnxscript doesn't handle aten::any.dims
```
Apparently this is testing out-of-tree code, so it can't be fixed in this PR:
https://github.com/microsoft/onnxscript/blob/f8046e17490a3222d4ee4bca346b4449a2d30ae4/onnxscript/function_libs/torch_lib/ops/core.py#L404
cc @BowenBao what is the best way forward here?
cc @justinchuby
The current best thing to do is to xfail it here: https://github.com/pytorch/pytorch/pull/110310/files/73dd8600a8097f98b675510024e01ea1a3d6af5d#diff-db2f78a51511bb172cbfde1b2f68272b8b33049abe2571cded27bcd0f3ae5fa4R176 and track it with an issue, e.g.

```python
xfail(
    "any", reason="reason and link to issue"
),
```

I will discuss with the team to find a better solution forward. Thanks!
Okay, if it's a pre-existing issue then I'll just add a skip.
If the error is a hard crash, you will have to add it to the skip list instead of xfail.
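For context on the xfail-vs-skip distinction: an xfail marker can only intercept Python-level failures; a hard crash (e.g. a segfault in native code) kills the process before any marker can record the result, so the test must be skipped entirely. A minimal illustration with the stdlib `unittest` (hypothetical test name, not code from this PR):

```python
import unittest

class Demo(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # A Python-level failure is caught and recorded as an
        # "expected failure" rather than failing the run.
        self.assertEqual(1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
result = unittest.TestResult()
suite.run(result)
print(len(result.expectedFailures))  # 1
print(result.wasSuccessful())        # True
```

A process abort inside `test_known_bug` would never reach `result`, which is why crashing ops go on the skip list.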
SGTM
Implement aten::{all,any}.dims according to pytorch/pytorch#110310. Tests will be enabled after the PyTorch PR is merged.
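One common way a backend without a native multi-dim reduction can implement `any.dims` is to chain single-dim reductions over the requested axes, reducing the highest axis first so lower axis numbers stay valid. This is a NumPy sketch of that decomposition, not the actual onnxscript code (`any_dims` is a hypothetical helper name):

```python
import numpy as np

def any_dims(x, dims, keepdim=False):
    # Reduce one axis at a time; sorting descending means removing an
    # axis never shifts the index of an axis we still need to reduce.
    out = x
    for d in sorted(dims, reverse=True):
        out = np.any(out, axis=d, keepdims=keepdim)
    return out

x = np.arange(8).reshape(2, 2, 2) > 5  # only elements 6 and 7 are True

# The chained form matches the one-shot multi-dim reduction.
assert (any_dims(x, (0, 2)) == np.any(x, axis=(0, 2))).all()
assert any_dims(x, (0, 2), keepdim=True).shape == (1, 2, 1)
```

The same decomposition works for `all.dims`, since logical-and is likewise associative and commutative across axes.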
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased.
This adds a new overload to `all` and `any` with support for multiple reduction dims.

```
all.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
any.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
```

Pull Request resolved: pytorch#110310
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/justinchuby

Pull Request resolved: pytorch#110311
Approved by: https://github.com/lezcano
ghstack dependencies: pytorch#110310
Stack from ghstack (oldest at bottom):

This adds a new overload to `all` and `any` with support for multiple reduction dims.