Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support by liqunfu · Pull Request #118828 · pytorch/pytorch · GitHub

Conversation

liqunfu
Collaborator

@liqunfu liqunfu commented Feb 1, 2024

Start to fix #114801

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
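For readers arriving from #114801: the user-facing entry point this PR extends is torch.onnx.export's opset_version argument with the TorchScript-based exporter. A minimal usage sketch (the model and input below are placeholders, not code from this PR):

import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x)

model = TinyModel().eval()
dummy_input = torch.randn(2, 4)

# With symbolic_opset19.py and symbolic_opset20.py in place, the
# TorchScript-based exporter can be asked to target opset 19 or 20.
torch.onnx.export(model, (dummy_input,), "tiny_model.onnx", opset_version=20)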
@pytorch-bot

pytorch-bot bot commented Feb 1, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/118828

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 15c31d0 with merge base 37e5632 (image):

NEW FAILURE - The following job has failed:

  • pull / linux-jammy-py3.8-gcc11 / build (gh)
    Could not assume role with user credentials: User: arn:aws:sts::308535385114:assumed-role/gh-ci-github-action-runners-runner-role/i-060fd8478151edf83 is not authorized to perform: sts:TagSession on resource: arn:aws:iam::391835788720:role/gha-pytorch-ci-artifacts-role

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: onnx torch.onnx related changes that should show up in the release notes label Feb 1, 2024
@thiagocrepaldi thiagocrepaldi added module: onnx Related to torch.onnx onnx-triaged triaged by ONNX team labels Feb 1, 2024
@liqunfu liqunfu marked this pull request as draft February 1, 2024 16:48
… tests of opset > 17

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
liqunfu and others added 4 commits February 8, 2024 20:57
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
@thiagocrepaldi thiagocrepaldi requested a review from albanD March 12, 2024 14:54
Collaborator

@thiagocrepaldi thiagocrepaldi left a comment

Overall, it looks good. Thank you for that!

My biggest concern is the if-opset check within the symbolic_opsetXXX.py files. Each file already belongs to a specific opset. If the same op needs different implementations for different opsets, those implementations must be "duplicated" across the corresponding symbolic_opsetXXX.py files.

Maybe the tests https://github.com/pytorch/pytorch/blob/main/test/onnx/test_utility_funs.py#L276 and https://github.com/pytorch/pytorch/blob/main/test/onnx/test_utility_funs.py#L260 need to be removed?
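For context on the dispatch convention behind this concern: the exporter resolves an op to the implementation registered for the highest opset that does not exceed the export target, so each symbolic_opsetXXX.py only needs to (re)define an op when its lowering actually changes at that opset. A toy, self-contained sketch of that fallback rule (not the real torch.onnx registry; all names here are illustrative):

from typing import Callable, Dict

_registry: Dict[str, Dict[int, Callable]] = {}

def register(op_name: str, opset: int):
    # Record an implementation of op_name introduced at a given opset.
    def deco(fn):
        _registry.setdefault(op_name, {})[opset] = fn
        return fn
    return deco

def dispatch(op_name: str, target_opset: int) -> Callable:
    # Pick the implementation registered for the highest opset <= target_opset.
    candidates = _registry[op_name]
    best = max(o for o in candidates if o <= target_opset)
    return candidates[best]

@register("reduce_mean", 9)
def reduce_mean_opset9():
    return "axes passed as a node attribute"

@register("reduce_mean", 18)
def reduce_mean_opset18():
    return "axes passed as a Constant input"

print(dispatch("reduce_mean", 17)())  # falls back to the opset-9 implementation
print(dispatch("reduce_mean", 20)())  # picks the opset-18 implementation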

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
@thiagocrepaldi thiagocrepaldi added the suppress-bc-linter Suppresses the failures of API backward-compatibility linter (Lint/bc_linter) label Mar 14, 2024
@thiagocrepaldi
Collaborator

Suppressing the bc-linter check because overload_by_arg_count is only being moved to another file, not removed. It is not supposed to be public, either.

liqunfu added 4 commits March 15, 2024 00:47
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
@liqunfu liqunfu changed the title Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20 (Partially) Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20 (Partially), extend opset 18 support Mar 15, 2024
@liqunfu liqunfu changed the title Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20 (Partially), extend opset 18 support Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support Mar 15, 2024
Collaborator

@thiagocrepaldi thiagocrepaldi left a comment

The PR is nearly there :)

Comment on lines +2792 to +2798
else:
    mean = g.op(
        "ReduceMean",
        input,
        g.op("Constant", value_t=torch.tensor(axes, dtype=torch.long)),
    )

Collaborator

should we move this to symbolic_opset18.py or create a helper at symbolic_helper.py?

Collaborator Author

The implementation of this method is not moved to a helper because it uses the opset 9 add, sub, pow, and sqrt. Moving it would either introduce a circular dependency or require those methods to be refactored/moved as well. For this reason, it is better to leave this method here.

Collaborator

Normally I would go with refactoring add, sub, pow, and sqrt to keep things consistent, but TorchScript is being deprecated, so it is probably OK to keep this as-is.

Comment on lines +2815 to +2820
else:
    variance = g.op(
        "ReduceMean",
        pow(g, numerator, two_cst),
        g.op("Constant", value_t=torch.tensor(axes, dtype=torch.long)),
    )
Collaborator

should we move this to symbolic_opset18.py or create a helper at symbolic_helper.py?

Collaborator Author

Same as the comment above; for the same reason, this method stays here.
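For context on the two branches discussed above: from opset 18 onward, ReduceMean takes axes as an input tensor rather than a node attribute, which is why the else branches build a Constant node. A hedged sketch of the two call forms (the _reduce_mean_by_opset helper is hypothetical; g, input, and axes stand for the graph context and values already in scope in the real symbolic functions):

import torch

def _reduce_mean_by_opset(g, input, axes, opset_version):
    # Hypothetical helper, for illustration only.
    if opset_version >= 18:
        # Opset >= 18: axes is supplied as a second input (a Constant node),
        # matching the branches quoted in this review.
        return g.op(
            "ReduceMean",
            input,
            g.op("Constant", value_t=torch.tensor(axes, dtype=torch.long)),
        )
    # Opset <= 17: axes is supplied as a node attribute.
    return g.op("ReduceMean", input, axes_i=list(axes))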

@thiagocrepaldi
Collaborator

@pytorchbot merge -f "unrelated flake CI test"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge while ignoring current failures. This will allow currently pending tests to finish and report their signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@thiagocrepaldi
Collaborator

Excellent work, thank you!

pytorch-bot bot pushed a commit that referenced this pull request Apr 22, 2024
…0, extend opset 18 support (#118828)

Start to fix #114801

Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull Request resolved: #118828
Approved by: https://github.com/thiagocrepaldi
pytorchmergebot pushed a commit that referenced this pull request Jun 16, 2025
Update the default opset for the TorchScript exporter to 18 to match the dynamo exporter, because support was actually added and tested in #118828. In the next version we should plan to update to opset 21 or higher. This change also removes the hard limit on the TorchScript exporter for more flexibility.
Pull Request resolved: #156023
Approved by: https://github.com/Skylion007
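As a usage note on this follow-up (a sketch; the toy model is a placeholder): once the default moves to 18, the TorchScript-based exporter targets opset 18 when opset_version is omitted.

import torch

model = torch.nn.Linear(4, 2).eval()
dummy_input = torch.randn(1, 4)

# After #156023, omitting opset_version makes the TorchScript-based
# exporter target opset 18 by default.
torch.onnx.export(model, (dummy_input,), "linear.onnx")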

Labels

  • Merged
  • module: onnx (Related to torch.onnx)
  • onnx-triaged (triaged by ONNX team)
  • open source
  • release notes: onnx (torch.onnx related changes that should show up in the release notes)
  • suppress-bc-linter (Suppresses the failures of API backward-compatibility linter (Lint/bc_linter))
  • topic: new features (topic category)
  • triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

torch.onnx.export to support opset 20.

10 participants