[AOTInductor] ProxyExecutor supports Tuple of Tensor and List[Tensor] in returns #110187
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110187
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 unrelated failures) As of commit 0d4fe24 with merge base bc047ec. BROKEN TRUNK: the following jobs failed but were also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D49710320
LGTM. Thanks.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
ProxyExecutor supports custom ops that return a tuple mixing Tensor and List[Tensor], e.g.

```
"fn_with_mix_outputs(Tensor t, Tensor[] tensors) -> (Tensor, Tensor[])"
```

Example: `out7, [out8, out9] = torch.ops.fb.fn_with_mix_outputs(out5, [out6, out4])` gets compiled into

```cpp
AtenTensorHandle buf11_handle;  // output buffer
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf11_handle));
RAIIAtenTensorHandle buf11(buf11_handle);
AtenTensorHandle buf12_handle;  // output buffer
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf12_handle));
RAIIAtenTensorHandle buf12(buf12_handle);
AtenTensorHandle buf13_handle;  // output buffer
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf13_handle));
RAIIAtenTensorHandle buf13(buf13_handle);
AtenTensorHandle tensor_args_var_7[] = {buf8.get(), buf9.get(), buf6.get(), buf11.get(), buf12.get(), buf13.get()};
int64_t int_args_var_8[] = {};
aoti_torch_proxy_executor_call_function(proxy_executor, 3, 0, int_args_var_8, 6, tensor_args_var_7);
```

Serialized extern node:

```json
{
  "name": "buf10",
  "node": {
    "target": "fb::fn_with_mix_outputs",
    "inputs": [
      { "name": "t", "arg": { "asTensor": { "name": "buf8" } } },
      { "name": "tensors", "arg": { "asTensors": [ { "name": "buf9" }, { "name": "buf6" } ] } }
    ],
    "outputs": [
      { "asTensor": { "name": "buf11" } },
      { "asTensors": [ { "name": "buf12" }, { "name": "buf13" } ] }
    ],
    "metadata": {}
  }
}
```

Test Plan: Test

Differential Revision: D49710320
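To illustrate the flattening the generated C++ performs: the six-entry `tensor_args_var_7` array splices list arguments inline, with input handles first and output handles after. The sketch below is a stdlib-only, hypothetical model of that layout; `TensorHandle` and `flatten_args` are illustrative names, not PyTorch APIs.

```python
# Hypothetical sketch (stdlib only) of how mixed Tensor / List[Tensor]
# arguments are flattened into one tensor-handle array, mirroring
# tensor_args_var_7 in the generated code above.
from dataclasses import dataclass
from typing import List, Union


@dataclass(frozen=True)
class TensorHandle:
    """Stand-in for an AtenTensorHandle: just a buffer name here."""
    name: str


Arg = Union[TensorHandle, List[TensorHandle]]


def flatten_args(inputs: List[Arg], outputs: List[Arg]) -> List[str]:
    """Flatten inputs followed by outputs into one flat handle list."""
    flat: List[str] = []
    for arg in list(inputs) + list(outputs):
        if isinstance(arg, TensorHandle):
            flat.append(arg.name)                  # single Tensor: one handle
        else:
            flat.extend(h.name for h in arg)       # Tensor[]: splice the list
    return flat


# Inputs: t=buf8, tensors=[buf9, buf6]; outputs: buf11, [buf12, buf13]
flat = flatten_args(
    inputs=[TensorHandle("buf8"), [TensorHandle("buf9"), TensorHandle("buf6")]],
    outputs=[TensorHandle("buf11"), [TensorHandle("buf12"), TensorHandle("buf13")]],
)
print(flat)  # ['buf8', 'buf9', 'buf6', 'buf11', 'buf12', 'buf13']
```

The flat list has 6 tensor handles and 0 int args, matching the `(..., 0, int_args_var_8, 6, tensor_args_var_7)` call shape in the generated code.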
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
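As a companion sketch, the serialized extern node shown earlier can be reproduced with a small builder: a single buffer serializes as `asTensor`, a list as `asTensors`. The helper names (`as_arg`, `make_extern_node`) are illustrative, not PyTorch's actual serializer API.

```python
# Hypothetical sketch of building the serialized extern-node JSON shown
# earlier for a call with mixed Tensor / Tensor[] inputs and outputs.
import json
from typing import List, Union


def as_arg(value: Union[str, List[str]]) -> dict:
    # A single buffer name serializes as "asTensor"; a list as "asTensors".
    if isinstance(value, str):
        return {"asTensor": {"name": value}}
    return {"asTensors": [{"name": n} for n in value]}


def make_extern_node(name, target, inputs, outputs):
    return {
        "name": name,
        "node": {
            "target": target,
            "inputs": [{"name": k, "arg": as_arg(v)} for k, v in inputs],
            "outputs": [as_arg(v) for v in outputs],
            "metadata": {},
        },
    }


node = make_extern_node(
    "buf10",
    "fb::fn_with_mix_outputs",
    inputs=[("t", "buf8"), ("tensors", ["buf9", "buf6"])],
    outputs=["buf11", ["buf12", "buf13"]],
)
print(json.dumps(node, indent=2))
```

Note how the output list `[buf12, buf13]` stays grouped under one `asTensors` entry in the serialized form, even though it is spliced flat in the tensor-argument array at call time.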