Fix Dispatching not considering List[Optional[Tensor]] for dispatch by zou3519 · Pull Request #60787 · pytorch/pytorch · GitHub

Conversation

@zou3519
Contributor

@zou3519 zou3519 commented Jun 25, 2021

Stack from ghstack:

Fixes #60461.

Previously, when one called `self.index(indices)` with a regular `self`
Tensor and a `BatchedTensor` `indices`, the dispatcher would not dispatch
to the Batched key. This is because the dispatcher did not extract
dispatch keys from `indices`.

Similar to #58283 and #58296, this PR modifies the dispatcher to extract
dispatch keys from `List[Optional[Tensor]]` arguments. We do this for both
boxed and unboxed kernels.
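For intuition, the key-extraction change can be sketched as a toy in Python (this is an illustrative model only, not the dispatcher's actual C++ code; `ToyTensor` and `compute_dispatch_keys` are invented names):

```python
# Toy model of dispatch-key extraction. The dispatcher unions the dispatch
# keys of every Tensor argument to decide which kernel to call; the fix
# extends that scan into List[Optional[Tensor]] arguments such as `indices`.
class ToyTensor:
    def __init__(self, dispatch_keys):
        self.dispatch_keys = frozenset(dispatch_keys)

def compute_dispatch_keys(args):
    keys = set()
    for arg in args:
        if isinstance(arg, ToyTensor):
            keys |= arg.dispatch_keys
        elif isinstance(arg, (list, tuple)):     # a List[Optional[Tensor]] argument
            for item in arg:
                if isinstance(item, ToyTensor):  # None entries are skipped
                    keys |= item.dispatch_keys   # previously never collected
    return keys

self_t = ToyTensor({"CPU"})                      # regular tensor
indices = [None, ToyTensor({"CPU", "Batched"})]  # batched tensor inside a list
assert "Batched" in compute_dispatch_keys([self_t, indices])
```

With the scan in place, the highest-priority key collected (here, Batched) wins, which is why the repro in the test plan now reaches the Batched kernel instead of the plain CPU one.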

Test Plan:

  • run the test case in
    https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740
    (requires functorch). After this PR, it raises `RuntimeError: Batching rule not implemented for aten::index.Tensor. We could not generate a fallback.`, which shows that dispatch happened on the Batched key.
  • Taking suggestions for how to write a test for this in core

Differential Revision: D29438611

@facebook-github-bot
Contributor

facebook-github-bot commented Jun 25, 2021

💊 CI failures summary and remediations

As of commit 1dba987 (more details on the Dr. CI page and at hud.pytorch.org/pr/60787):


  • 4/4 failures possibly* introduced in this PR
    • 1/4 non-scanned failure(s)

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_py3_clang5_asan_test1 (1/3)

Step: "Run tests"

Jul 07 18:34:32 SUMMARY: UndefinedBehaviorSanit.../jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in
Jul 07 18:34:32     #9 0x5581d404c8f2 in PyEval_EvalCode /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Python/ceval.c:731
Jul 07 18:34:32     #10 0x5581d40b4cd5 in run_mod /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Python/pythonrun.c:1025
Jul 07 18:34:32     #11 0x5581d40b6d5d in PyRun_StringFlags /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Python/pythonrun.c:949
Jul 07 18:34:32     #12 0x5581d40b6dbb in PyRun_SimpleStringFlags /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Python/pythonrun.c:445
Jul 07 18:34:32     #13 0x5581d40b7926 in run_command /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Modules/main.c:301
Jul 07 18:34:32     #14 0x5581d40b7926 in Py_Main /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Modules/main.c:749
Jul 07 18:34:32     #15 0x5581d3ff1196 in main /home/builder/ktietz/cos6/ci_cos6/python_1622833237666/work/Programs/python.c:69
Jul 07 18:34:32     #16 0x7f26208e783f in __libc_start_main /build/glibc-S7Ft5T/glibc-2.23/csu/../csu/libc-start.c:291
Jul 07 18:34:32     #17 0x5581d408133d in _start (/opt/conda/bin/python3.6+0x1a733d)
Jul 07 18:34:32 
Jul 07 18:34:32 SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in 
Jul 07 18:34:32 + retcode=1
Jul 07 18:34:32 + set -e
Jul 07 18:34:32 + return 1
Jul 07 18:34:32 + [[ pytorch-linux-xenial-py3-clang5-asan-test1 == *-NO_AVX-* ]]
Jul 07 18:34:32 + [[ pytorch-linux-xenial-py3-clang5-asan-test1 == *-NO_AVX2-* ]]
Jul 07 18:34:32 + '[' -n https://github.com/pytorch/pytorch/pull/60787 ']'
Jul 07 18:34:32 + [[ pytorch-linux-xenial-py3-clang5-asan-test1 != *coverage* ]]
Jul 07 18:34:32 ++ mktemp
Jul 07 18:34:32 + DETERMINE_FROM=/tmp/tmp.Lgwkq8qpaf
Jul 07 18:34:32 + file_diff_from_base /tmp/tmp.Lgwkq8qpaf

See CircleCI build pytorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_build (2/3)

Step: "Build"

Jul 07 20:29:56 ERROR 2021-07-07T15:47:30Z: scc...eof ((socklen_t)))\n ^\n" }
Jul 07 20:29:56 ERROR 2021-07-07T15:47:23Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:332:2: error: \'struct sockaddr\' has no member named \'sa_len\'\n x.sa_len = 0;\n  ^\n" }
Jul 07 20:29:56 
Jul 07 20:29:56 ERROR 2021-07-07T15:47:26Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:366:10: error: \'RTLD_MEMBER\' undeclared (first use in this function); did you mean \'RTLD_NEXT\'?\n   (void) RTLD_MEMBER;\n          ^~~~~~~~~~~\n          RTLD_NEXT\nconftest.c:366:10: note: each undeclared identifier is reported only once for each function it appears in\n" }
Jul 07 20:29:56 
Jul 07 20:29:56 ERROR 2021-07-07T15:47:27Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c:361:9: error: unknown type name \'not\'\n         not a universal capable compiler\n         ^~~\nconftest.c:361:15: error: expected \'=\', \',\', \';\', \'asm\' or \'__attribute__\' before \'universal\'\n         not a universal capable compiler\n               ^~~~~~~~~\nconftest.c:361:15: error: unknown type name \'universal\'\n" }
Jul 07 20:29:56 
Jul 07 20:29:56 ERROR 2021-07-07T15:47:27Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:367:4: error: unknown type name \'not\'; did you mean \'ino_t\'?\n    not big endian\n    ^~~\n    ino_t\nconftest.c:367:12: error: expected \'=\', \',\', \';\', \'asm\' or \'__attribute__\' before \'endian\'\n    not big endian\n            ^~~~~~\n" }
Jul 07 20:29:56 
Jul 07 20:29:56 ERROR 2021-07-07T15:47:28Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:378:4: error: \'struct stat\' has no member named \'st_mtimespec\'; did you mean \'st_mtim\'?\n st.st_mtimespec.tv_nsec = 1;\n    ^~~~~~~~~~~~\n    st_mtim\n" }
Jul 07 20:29:56 
Jul 07 20:29:56 ERROR 2021-07-07T15:47:30Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:402:24: error: expected expression before \')\' token\n if (sizeof ((socklen_t)))\n                        ^\n" }
Jul 07 20:29:56 
Jul 07 20:29:56 =========== If your build fails, please take a look at the log above for possible reasons ===========
Jul 07 20:29:56 Compile requests                   11919
Jul 07 20:29:56 Compile requests executed           6485
Jul 07 20:29:56 Cache hits                          3982
Jul 07 20:29:56 Cache hits (C/C++)                  3746
Jul 07 20:29:56 Cache hits (CUDA)                    236
Jul 07 20:29:56 Cache misses                        2435
Jul 07 20:29:56 Cache misses (C/C++)                2105
Jul 07 20:29:56 Cache misses (CUDA)                  330

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_test (3/3)

Step: "Run tests"

Jul 07 18:16:30 Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
Jul 07 18:16:30   if ((math.isinf(a) or math.isinf(b)) and a != b):
Jul 07 18:16:30 ok (0.075s)
Jul 07 18:16:30   test_cond_cpu_float32 (__main__.TestLinalgCPU) ... 
Jul 07 18:16:30 Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
Jul 07 18:16:30 
Jul 07 18:16:30 Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
Jul 07 18:16:30 ok (0.055s)
Jul 07 18:16:30   test_cond_cpu_float64 (__main__.TestLinalgCPU) ... 
Jul 07 18:16:30 Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
Jul 07 18:16:30 
Jul 07 18:16:30 Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
Jul 07 18:16:30 ok (0.052s)
Jul 07 18:16:31   test_cond_errors_and_warnings_cpu_complex128 (__main__.TestLinalgCPU) ... ok (0.066s)
Jul 07 18:16:31   test_cond_errors_and_warnings_cpu_complex64 (__main__.TestLinalgCPU) ... ok (0.065s)
Jul 07 18:16:31   test_cond_errors_and_warnings_cpu_float32 (__main__.TestLinalgCPU) ... ok (0.066s)
Jul 07 18:16:31   test_cond_errors_and_warnings_cpu_float64 (__main__.TestLinalgCPU) ... ok (0.064s)
Jul 07 18:16:31   test_cross_cpu_float32 (__main__.TestLinalgCPU) ... ok (0.004s)
Jul 07 18:16:31   test_cross_errors_cpu (__main__.TestLinalgCPU) ... ok (0.036s)
Jul 07 18:16:31   test_cross_with_and_without_dim_cpu_float32 (__main__.TestLinalgCPU) ... ok (0.003s)
Jul 07 18:16:31   test_det_cpu_complex128 (__main__.TestLinalgCPU) ... ok (0.028s)
Jul 07 18:16:31   test_det_cpu_float64 (__main__.TestLinalgCPU) ... ok (0.020s)

2 jobs timed out:

  • pytorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_build
  • pytorch_linux_xenial_py3_6_gcc5_4_test

Preview docs built from this PR

This comment was automatically generated by Dr. CI.

zou3519 added a commit that referenced this pull request Jun 25, 2021
ghstack-source-id: 1dc29ae
Pull Request resolved: #60787
@zou3519 zou3519 requested review from Chillee, bhosmer and ezyang June 28, 2021 13:27

@bhosmer bhosmer left a comment


LGTM!

Re how best to add a test to core, that's a good question. I don't know of a place where we systematically test the coverage of this logic (hence the gaps we've been finding). Probably the most closely related tests are in https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/core/op_registration/op_registration_test.cpp if you're down, but I wouldn't hold up landing this in any case.

@zou3519
Contributor Author

zou3519 commented Jun 28, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zou3519 added a commit that referenced this pull request Jul 7, 2021
ghstack-source-id: 2c2a432
Pull Request resolved: #60787
@zou3519
Contributor Author

zou3519 commented Jul 7, 2021

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@zou3519 merged this pull request in 4937d9f.

@facebook-github-bot facebook-github-bot deleted the gh/zou3519/359/head branch July 11, 2021 14:16
zou3519 added a commit that referenced this pull request Oct 12, 2021
Followup to #60787

It turns out that the original PR was wrong for unboxed kernels. We
recently ran into this in
pytorch/functorch#124

For unboxed kernels, the correct type for a `Tensor?[]` argument is
actually `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

Test Plan:
- assert that pytorch/functorch#124
actually works

[ghstack-poisoned]
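The mismatch being fixed can be illustrated with a toy model in Python (`OwningList`, `ArrayRefView`, and `call_unboxed` are invented names for illustration; in the real C++ dispatcher a wrongly declared unboxed signature is undefined behavior rather than a catchable error):

```python
from dataclasses import dataclass

@dataclass
class OwningList:    # toy stand-in for List<optional<Tensor>> (owning container)
    items: list

@dataclass
class ArrayRefView:  # toy stand-in for ArrayRef<optional<Tensor>> (non-owning view)
    items: list

def call_unboxed(kernel, declared_type, arg):
    # The unboxed calling convention passes a Tensor?[] argument as an
    # owning list. If the kernel's declared parameter type does not match
    # what is actually passed, the call is wrong; this toy surfaces the
    # mismatch as an explicit error.
    if not isinstance(arg, declared_type):
        raise TypeError(f"kernel declared {declared_type.__name__}, "
                        f"got {type(arg).__name__}")
    return kernel(arg)

def index_kernel(indices):
    # Count the non-None entries, standing in for real index handling.
    return sum(1 for i in indices.items if i is not None)

arg = OwningList([None, "batched-tensor"])
assert call_unboxed(index_kernel, OwningList, arg) == 1    # correct declaration
try:
    call_unboxed(index_kernel, ArrayRefView, arg)          # wrong declaration
except TypeError:
    pass
```

The follow-up correspondingly changes the declared unboxed type for `Tensor?[]` to the owning `List<optional<Tensor>>`.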
facebook-github-bot pushed a commit that referenced this pull request Nov 9, 2021
…66506)

Summary:
Pull Request resolved: #66506

Followup to #60787

It turns out that the original PR was wrong for unboxed kernels. We
recently ran into this in
pytorch/functorch#124

For unboxed kernels, the correct type for a `Tensor?[]` argument is
actually `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

Test Plan:
- assert that pytorch/functorch#124
actually works

Reviewed By: bdhirsh

Differential Revision: D31609714

Pulled By: zou3519

fbshipit-source-id: bb91cafd32fb3c1b7d1e4f966b46b5d973b50df2
zou3519 added a commit that referenced this pull request Nov 9, 2021
…ispatch

Relanding the original PR. Its body was as follows:

Followup to #60787

It turns out that the original PR was wrong for unboxed kernels. We
recently ran into this in
pytorch/functorch#124

For unboxed kernels, the correct type for a `Tensor?[]` argument is
actually `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.

Test Plan:
- assert that pytorch/functorch#124
actually works

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Nov 29, 2021
…ispatch (#68073)

Summary:
Pull Request resolved: #68073

Relanding the original PR. Its body was as follows:

Followup to #60787

It turns out that the original PR was wrong for unboxed kernels. We
recently ran into this in
pytorch/functorch#124

For unboxed kernels, the correct type for a `Tensor?[]` argument is
actually `List<optional<Tensor>>`, not `ArrayRef<optional<Tensor>>`.
ghstack-source-id: 144204580

Test Plan:
- assert that pytorch/functorch#124
actually works

Reviewed By: gchanan

Differential Revision: D32313601

Pulled By: zou3519

fbshipit-source-id: 8028d5f34eecabc53d603bd54d6b6748b5db461a
PaliC added a commit that referenced this pull request Nov 30, 2021
…ispatch (#68073)
