Fix Dispatching not considering List[Optional[Tensor]] for dispatch #60787
Conversation
Fixes #60461. Previously, when one called `self.index(indices)` with a regular `self` Tensor and a `BatchedTensor` `indices`, the dispatcher would not dispatch to the Batched key. This is because the dispatcher did not extract dispatch keys from `indices`. Similar to #58283 and #58296, this PR modifies the dispatcher to extract dispatch keys from List[Optional[Tensor]] arguments. We do this for both boxed and unboxed kernels.

Test Plan:
- Run the test case in https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740 (requires functorch). After this PR, it raises `RuntimeError: Batching rule not implemented for aten::index.Tensor. We could not generate a fallback.`, which shows that dispatch happened on the Batched key.
- Taking suggestions for how to write a test for this in core.
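For illustration, here is a minimal sketch of what "extracting dispatch keys from a List[Optional[Tensor]] argument" means, assuming a libtorch build. `collect_index_keys` is a hypothetical helper, not the dispatcher's actual code: it folds the key set of every defined tensor in the list into the key set used to select a kernel, which is why a `BatchedTensor` passed only as an index can now steer dispatch.

```cpp
// Illustrative sketch only; collect_index_keys is not part of PyTorch.
#include <ATen/ATen.h>
#include <ATen/core/List.h>
#include <c10/core/DispatchKeySet.h>

c10::DispatchKeySet collect_index_keys(
    const at::Tensor& self,
    const c10::List<c10::optional<at::Tensor>>& indices) {
  // Start from the keys of the "regular" self tensor.
  c10::DispatchKeySet keys = self.key_set();
  // Fold in the keys of every defined tensor in the Tensor?[] argument;
  // nullopt / undefined entries contribute nothing. Before this PR, the
  // dispatcher skipped this argument entirely, so a BatchedTensor index
  // was invisible to dispatch.
  for (size_t i = 0; i < indices.size(); ++i) {
    c10::optional<at::Tensor> idx = indices.get(i);
    if (idx.has_value() && idx->defined()) {
      keys = keys | idx->key_set();
    }
  }
  return keys;
}
```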
💊 Dr. CI: As of commit 1dba987 (more details on the Dr. CI page and at hud.pytorch.org/pr/60787), 3 new CI failures were recognized by patterns; they do not appear to be due to upstream breakages.
LGTM!
Re how best to add a test to core, that's a good question. I don't know of a place where we systematically test the coverage of this logic (hence the gaps we've been finding). Probably the most closely related tests are in https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/core/op_registration/op_registration_test.cpp if you're down, but I wouldn't hold up landing this in any case.
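For what it's worth, here is one possible shape for such a test, sketched under assumptions: the test name and placement are made up, it relies only on `aten::index` taking a `Tensor?[]` argument, and it only smoke-tests the code path on CPU rather than asserting on key extraction directly (that would need an index tensor carrying a non-default dispatch key).

```cpp
// Sketch only: exercises an op with a Tensor?[] argument end to end.
#include <ATen/ATen.h>
#include <ATen/core/List.h>
#include <gtest/gtest.h>

TEST(OperatorRegistrationTest, OptionalTensorListArgumentDispatches) {
  at::Tensor self = at::randn({3, 4});

  c10::List<c10::optional<at::Tensor>> indices;
  indices.push_back(c10::nullopt);              // keep dim 0 untouched
  indices.push_back(at::arange(2, at::kLong));  // select columns 0 and 1

  at::Tensor out = at::index(self, indices);    // aten::index takes Tensor?[]
  EXPECT_EQ(out.sizes(), (at::IntArrayRef{3, 2}));
}
```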
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
… dispatch" Fixes #60461. Previously, when one calls `self.index(indices)` using a regular `self` Tensor and a `BatchedTensor` indices the dispatcher would not dispatch to the Batched key. This is because the dispatcher did not extract dispatch keys from `indices`. Similar #58283 and #58296, this PR modifies the dispatcher to extract dispatch keys from List[Optional[Tensor]] arguments. We do this for both boxed and unboxed kernels. Test Plan: - run the test case in https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740 (requires functorch). After this PR, it raises `RuntimeError: Batching rule not implemented for aten::index.Tensor. We could not generate a fallback.`, which shows that dispatch happened on the Batched key. - Taking suggestions for how to write a test for this in core Differential Revision: [D29438611](https://our.internmc.facebook.com/intern/diff/D29438611) [ghstack-poisoned]
Fixes #60461. Previously, when one calls `self.index(indices)` using a regular `self` Tensor and a `BatchedTensor` indices the dispatcher would not dispatch to the Batched key. This is because the dispatcher did not extract dispatch keys from `indices`. Similar #58283 and #58296, this PR modifies the dispatcher to extract dispatch keys from List[Optional[Tensor]] arguments. We do this for both boxed and unboxed kernels. Test Plan: - run the test case in https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740 (requires functorch). After this PR, it raises `RuntimeError: Batching rule not implemented for aten::index.Tensor. We could not generate a fallback.`, which shows that dispatch happened on the Batched key. - Taking suggestions for how to write a test for this in core ghstack-source-id: 2c2a432 Pull Request resolved: #60787
|
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator. |
Follow-up to #60787. It turns out that the original PR was wrong for unboxed kernels; we recently ran into this in pytorch/functorch#124. For unboxed kernels, the correct type for a Tensor?[] argument is `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

Test Plan:
- Assert that pytorch/functorch#124 actually works.
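To make the calling-convention point concrete, here is a hedged sketch (the `wrapped_index` name is made up): an unboxed kernel or wrapper for an op with a `Tensor?[]` argument receives that argument as `c10::List<c10::optional<at::Tensor>>`, so any per-argument logic in the unboxed path, including dispatch-key extraction, has to be written against the `List` type rather than `ArrayRef<optional<Tensor>>`.

```cpp
// Illustrative signature only; wrapped_index is not a real PyTorch function.
#include <ATen/ATen.h>
#include <ATen/core/List.h>

at::Tensor wrapped_index(
    const at::Tensor& self,
    const c10::List<c10::optional<at::Tensor>>& indices) {
  // The unboxed calling convention for Tensor?[] is List<optional<Tensor>>,
  // not ArrayRef<optional<Tensor>>; at::index's C++ API takes the same type.
  return at::index(self, indices);
}
```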
Fix Dispatching not considering List[Optional[Tensor]] for dispatch (#66506)

Summary: Pull Request resolved: #66506. Follow-up to #60787. It turns out that the original PR was wrong for unboxed kernels; we recently ran into this in pytorch/functorch#124. For unboxed kernels, the correct type for a Tensor?[] argument is `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

Test Plan: assert that pytorch/functorch#124 actually works.

Reviewed By: bdhirsh
Differential Revision: D31609714
Pulled By: zou3519
fbshipit-source-id: bb91cafd32fb3c1b7d1e4f966b46b5d973b50df2
Fix Dispatching not considering List[Optional[Tensor]] for dispatch

Relanding the original PR. Its body was as follows:

Follow-up to #60787. It turns out that the original PR was wrong for unboxed kernels; we recently ran into this in pytorch/functorch#124. For unboxed kernels, the correct type for a Tensor?[] argument is `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

Test Plan:
- Assert that pytorch/functorch#124 actually works.
Fix Dispatching not considering List[Optional[Tensor]] for dispatch (#68073)

Summary: Pull Request resolved: #68073. Relanding the original PR. Its body was as follows:

Follow-up to #60787. It turns out that the original PR was wrong for unboxed kernels; we recently ran into this in pytorch/functorch#124. For unboxed kernels, the correct type for a Tensor?[] argument is `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

ghstack-source-id: 144204580

Test Plan: assert that pytorch/functorch#124 actually works.

Reviewed By: gchanan
Differential Revision: D32313601
Pulled By: zou3519
fbshipit-source-id: 8028d5f34eecabc53d603bd54d6b6748b5db461a
Stack from ghstack:

Fixes #60461.

Previously, when one calls `self.index(indices)` using a regular `self` Tensor and a `BatchedTensor` `indices`, the dispatcher would not dispatch to the Batched key. This is because the dispatcher did not extract dispatch keys from `indices`.

Similar to #58283 and #58296, this PR modifies the dispatcher to extract dispatch keys from List[Optional[Tensor]] arguments. We do this for both boxed and unboxed kernels.

Test Plan:
- Run the test case in https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740 (requires functorch). After this PR, it raises `RuntimeError: Batching rule not implemented for aten::index.Tensor. We could not generate a fallback.`, which shows that dispatch happened on the Batched key.

Differential Revision: D29438611