[AOTInductor] ProxyExecutor supports Tuple of Tensor and List[Tensor] in returns by SherlockNoMad · Pull Request #110187 · pytorch/pytorch · GitHub

Conversation

@SherlockNoMad
Contributor

@SherlockNoMad SherlockNoMad commented Sep 27, 2023

Summary:
ProxyExecutor now supports custom ops that return a tuple mixing Tensor and List[Tensor],
e.g. `fn_with_mix_outputs(Tensor t, Tensor[] tensors) -> (Tensor, Tensor[])`
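
For illustration, an op with this mixed-return schema can be registered and called from Python. This is a sketch, not the internal `fb::` op: it uses a hypothetical `mylib` namespace and a trivial kernel.

```python
import torch

# Hypothetical stand-in for the internal fb:: op: same mixed-return schema,
# registered in a scratch "mylib" namespace with a trivial kernel.
lib = torch.library.Library("mylib", "DEF")
lib.define("fn_with_mix_outputs(Tensor t, Tensor[] tensors) -> (Tensor, Tensor[])")

def fn_with_mix_outputs(t, tensors):
    # Return one Tensor plus a List[Tensor], matching the schema above.
    return t + 1, [x * 2 for x in tensors]

lib.impl("fn_with_mix_outputs", fn_with_mix_outputs, "CompositeExplicitAutograd")

out, outs = torch.ops.mylib.fn_with_mix_outputs(torch.ones(2), [torch.ones(2), torch.ones(2)])
```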

Example:
`out7, [out8, out9] = torch.ops.fb.fn_with_mix_outputs(out5, [out6, out4])`
is compiled into

```
    AtenTensorHandle buf11_handle;  // output buffer
    AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf11_handle));
    RAIIAtenTensorHandle buf11(buf11_handle);
    AtenTensorHandle buf12_handle;  // output buffer
    AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf12_handle));
    RAIIAtenTensorHandle buf12(buf12_handle);
    AtenTensorHandle buf13_handle;  // output buffer
    AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf13_handle));
    RAIIAtenTensorHandle buf13(buf13_handle);
    AtenTensorHandle tensor_args_var_7[] = {buf8.get(), buf9.get(), buf6.get(), buf11.get(), buf12.get(), buf13.get()};
    int64_t int_args_var_8[] = {};
    aoti_torch_proxy_executor_call_function(proxy_executor, 3, 0, int_args_var_8, 6, tensor_args_var_7);
```
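
The generated code flattens the mixed inputs and outputs into one flat array of tensor handles. A plain-Python sketch of that flattening order (an assumption mirroring the buffer names in the snippet, not the actual Inductor codegen):

```python
# Sketch of the flattening the codegen performs: a structure mixing single
# tensors and Tensor[] lists becomes one flat list of tensor handles.
def flatten_args(args):
    flat = []
    for a in args:
        if isinstance(a, list):
            flat.extend(a)  # each element of a Tensor[] gets its own slot
        else:
            flat.append(a)  # a single Tensor contributes one handle
    return flat

# Buffer names from the generated snippet: inputs buf8, [buf9, buf6];
# outputs buf11, [buf12, buf13] -> 6 tensor args and 0 int args.
tensor_args = flatten_args(["buf8", ["buf9", "buf6"]]) + flatten_args(["buf11", ["buf12", "buf13"]])
```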

Serialized extern node

```
    {
      "name": "buf10",
      "node": {
        "target": "fb::fn_with_mix_outputs",
        "inputs": [
          {
            "name": "t",
            "arg": {
              "asTensor": {
                "name": "buf8"
              }
            }
          },
          {
            "name": "tensors",
            "arg": {
              "asTensors": [
                {
                  "name": "buf9"
                },
                {
                  "name": "buf6"
                }
              ]
            }
          }
        ],
        "outputs": [
          {
            "asTensor": {
              "name": "buf11"
            }
          },
          {
            "asTensors": [
              {
                "name": "buf12"
              },
              {
                "name": "buf13"
              }
            ]
          }
        ],
        "metadata": {}
      }
    }
```

Test Plan: Test

Differential Revision: D49710320

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler

@pytorch-bot

pytorch-bot bot commented Sep 27, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110187

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 0d4fe24 with merge base bc047ec:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D49710320


SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 28, 2023
… in returns (pytorch#110187)



Contributor

@chenyang78 chenyang78 left a comment


LGTM. Thanks.

@facebook-github-bot
Contributor

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 30, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.
