[ATen] Support multi dim any and all reductions by peterbell10 · Pull Request #110310 · pytorch/pytorch

Conversation

peterbell10 (Collaborator) commented Sep 29, 2023

Stack from ghstack (oldest at bottom):

This adds a new overload to `all` and `any` with support for multiple reduction dims.

```
all.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
any.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
```
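For example, the new overload lets you reduce over several dimensions in one call (a minimal usage sketch; shapes follow from the schema above, assuming the tuple `dim` argument dispatches to the new overload with the same semantics as the single-dim overloads):

```
import torch

x = torch.zeros(2, 3, 4, dtype=torch.bool)
x[0, 1, 2] = True

# Reduce over dims 0 and 2 in a single call via the new any.dims overload.
out = torch.any(x, dim=(0, 2))                      # shape: (3,)
out_keep = torch.any(x, dim=(0, 2), keepdim=True)   # shape: (1, 3, 1)

# dim=None (the default) still reduces over all dimensions.
assert torch.any(x).item()
```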

pytorch-bot (bot) commented Sep 29, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110310

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit aeaa808 with merge base 4f79161:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Sep 30, 2023
peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Sep 30, 2023
@peterbell10 peterbell10 marked this pull request as ready for review October 2, 2023 12:16
@peterbell10 peterbell10 requested a review from lezcano October 2, 2023 12:16
@pytorch-bot pytorch-bot bot added the release notes: onnx torch.onnx related changes that should show up in the release notes label Oct 2, 2023
"amax",
"amin",
"any",
# "any", - onnxscript doesn't handle aten::any.dims

cc @BowenBao what is the best way forward here?

justinchuby (Collaborator) commented Oct 4, 2023

The current best thing to do is to xfail it here: https://github.com/pytorch/pytorch/pull/110310/files/73dd8600a8097f98b675510024e01ea1a3d6af5d#diff-db2f78a51511bb172cbfde1b2f68272b8b33049abe2571cded27bcd0f3ae5fa4R176 and track it with an issue.

e.g.

```
xfail(
    "any", reason="reason and link to issue"
),
```

I will discuss with the team to find a better solution forward. Thanks!

peterbell10 (Collaborator, author) commented:

Okay, if it's a pre-existing issue then I'll just add a skip.

albanD (Collaborator) commented Oct 11, 2023

If the error is a hard crash, you will have to add it to the skip list instead of xfail.
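For illustration, a skip entry might look like the following (a hedged sketch mirroring the xfail example above; the exact `skip` helper name, signature, and reason text are assumptions based on the test file's conventions):

```
skip(
    "any", reason="hard crash for aten::any.dims in onnxscript; link to tracking issue"
),
```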

albanD (Collaborator) left a comment:
SGTM

justinchuby added a commit to microsoft/onnxscript that referenced this pull request Oct 13, 2023
Implement aten::{all,any}.dims according to pytorch/pytorch#110310.

Tests will be enabled after the PyTorch PR is merged.
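For context, a multi-dim `any` can be decomposed into repeated single-dim reductions; the sketch below (a minimal reference illustration, not the actual onnxscript or ATen implementation) checks that equivalence against the new overload:

```
import torch

def any_dims(x: torch.Tensor, dims, keepdim: bool = False) -> torch.Tensor:
    """Reference decomposition of any.dims into single-dim reductions."""
    out = x
    for d in dims:
        out = out.any(dim=d, keepdim=True)  # keep dims so indices stay stable
    if not keepdim:
        # Squeeze the reduced dims, highest index first so positions don't shift.
        for d in sorted((d % x.dim() for d in dims), reverse=True):
            out = out.squeeze(d)
    return out

x = torch.rand(2, 3, 4) > 0.5
assert torch.equal(any_dims(x, (0, 2)), torch.any(x, dim=(0, 2)))
```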
peterbell10 (Collaborator, author) commented:

@pytorchbot rebase

pytorchmergebot (Collaborator) commented:

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

pytorchmergebot (Collaborator) commented:

Successfully rebased gh/peterbell10/626/orig onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via ghstack checkout https://github.com/pytorch/pytorch/pull/110310)

pytorchmergebot pushed a commit that referenced this pull request Oct 13, 2023
ghstack-source-id: a6bdff4
Pull Request resolved: #110310
pytorchmergebot pushed a commit that referenced this pull request Oct 24, 2023
@facebook-github-bot facebook-github-bot deleted the gh/peterbell10/626/head branch October 28, 2023 14:24
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023
Pull Request resolved: pytorch#110310
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/justinchuby
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023

Labels

ciflow/mps (Run MPS tests, subset of trunk), Merged, open source, release notes: python_frontend, topic: new features


8 participants