[ONNX] Support symbolic arguments in onnx exporter #157734
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/157734
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit ce23365 with merge base 2e14069. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
    if isinstance(spec.arg, graph_signature.ConstantArgument):
        # If dynamic is set to a constant input, it becomes a
        # symbolic argument, which is not a tensor.
        if isinstance(
Should we include them?
They don't have tensor_meta though. Is there other info we might want from them?
Maybe just say they are symbolic?
As a string? Currently, io_spec supports only tensor_meta.
PTAL
I think this is only for printing iirc? If so it's fine as long as we are able to display them in the report.
Thanks!
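A hypothetical sketch of the outcome discussed here (the helper name and structure are assumptions, not the PR's actual code): the report falls back to a plain "symbolic" marker when an argument carries no tensor_meta, so symbolic inputs can still be displayed.

    import torch

    def _describe_argument(arg) -> str:
        # Symbolic (SymInt/SymFloat/SymBool) arguments have no tensor_meta,
        # so fall back to a string marker that the report can display.
        if isinstance(arg, (torch.SymInt, torch.SymFloat, torch.SymBool)):
            return "symbolic"
        tensor_meta = getattr(arg, "tensor_meta", None)
        return str(tensor_meta) if tensor_meta is not None else repr(arg)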
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Hi @titaiwangms, it looks like this broke the Amazon Linux 2023 test (https://github.com/pytorch/test-infra/actions/runs/16525830777/job/46738788918) when numpy is not installed.
@pytorchmergebot revert -c nosignal -m "Broke test on amazon linux 2023 when numpy not installed"
@pytorchbot successfully started a revert job. Check the current status here.

Reverting PR 157734 failed
Reason: Command
Details for Dev Infra team
Raised by workflow job
    -def _to_ort_value(tensor: torch.Tensor) -> ort.OrtValue:
    +def _to_ort_value(input: torch.Tensor | int | float | str | bool) -> ort.OrtValue:
@titaiwangms we can hide the numpy import here
Thanks for the help.
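A minimal sketch of the pattern suggested above (not the PR's exact code; the onnxruntime import placement and scalar handling are assumptions): move the numpy import into the function body so that `import torch.onnx` succeeds even when numpy is absent.

    import torch

    def _to_ort_value(input):  # torch.Tensor | int | float | str | bool
        # Imported lazily: numpy and onnxruntime are only required when this
        # helper actually runs, not when torch.onnx is merely imported.
        import numpy as np
        import onnxruntime as ort

        if isinstance(input, torch.Tensor):
            return ort.OrtValue.ortvalue_from_numpy(input.numpy(force=True))
        # Python scalars and strings are wrapped as 0-d numpy arrays.
        return ort.OrtValue.ortvalue_from_numpy(np.array(input))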
One should not expect numpy to be present during onnx import. Forward fix for #157734. Added a regression test to the `test_without_numpy` function. Test plan: run `python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch; import torch.onnx"` with and without this fix. Pull Request resolved: #159177. Approved by: https://github.com/atalman, https://github.com/justinchuby, https://github.com/titaiwangms, https://github.com/cyyever, https://github.com/Skylion007, https://github.com/andrewboldi
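A hedged sketch of that regression check, following the test plan above (the `fake_numpy` directory is assumed to contain a stub that makes `import numpy` fail):

    import subprocess
    import sys

    # Shadow the real numpy with a failing stub, then verify that importing
    # torch.onnx still works in that environment.
    subprocess.check_call([
        sys.executable,
        "-c",
        "import sys; sys.path.insert(0, 'fake_numpy'); "
        "import torch; import torch.onnx",
    ])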
Prior to this PR, torch.onnx.export(..., dynamo=True, verify=True, report=True) did not support symbolic arguments. An example of such an input is sketched below:
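A minimal sketch (the model and names are illustrative assumptions, not taken from the PR, and the exact dynamic_shapes spec for a scalar input is also assumed): a forward() that takes a Python scalar alongside a tensor, with the scalar marked dynamic so it becomes a symbolic argument.

    import torch

    class Model(torch.nn.Module):
        def forward(self, x: torch.Tensor, scale: int):
            return x * scale

    # Marking the scalar dynamic makes it a symbolic (SymInt) argument in the
    # exported program instead of a baked-in constant.
    onnx_program = torch.onnx.export(
        Model(),
        (torch.randn(2, 3), 4),
        dynamic_shapes=({0: torch.export.Dim("batch")}, torch.export.Dim.DYNAMIC),
        dynamo=True,
        verify=True,
        report=True,
    )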
Symbolic arguments resemble constant arguments in that neither carries tensor_meta. Besides, torch.export.export supports model inputs that include constants, which differs from the legacy issue #99534, where we tried to get the FX graph directly from dynamo export. Thus, `_remove_non_tensor` is deleted from args processing. NOTE: if a ConstantArgument shows up in the exported_program, it is kept to align the number of inputs with the nn.Module, but it is irrelevant to the model graph, which is why that input is omitted in the ONNX model.
The test `test_constant_argument_user_input_is_omitted_in_onnx_graph` needs #157719.