Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support #118828
Conversation
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/118828
Note: Links to docs will display an error until the docs builds have been completed. ❌ 1 New Failure as of commit 15c31d0 with merge base 37e5632. The following job has failed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
… tests of opset > 17
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Overall, it looks good. Thank you for that!
My biggest concern is the opset check (`if opset ...`) within the symbolic_opsetXXX.py files. Each file already belongs to a specific opset. If the same op has different implementations for different opsets, those implementations must be "duplicated" across the different symbolic_opsetXXX.py files.
Maybe the tests https://github.com/pytorch/pytorch/blob/main/test/onnx/test_utility_funs.py#L276 and https://github.com/pytorch/pytorch/blob/main/test/onnx/test_utility_funs.py#L260 need to be removed?
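To illustrate the concern, here is a hypothetical dispatch sketch (the names below are illustrative, not PyTorch's actual registration API): each symbolic_opsetXXX.py file only redefines ops whose ONNX signature changed in that opset, and lookup falls back through earlier opsets, so no per-op `if opset` check and no duplication is needed.

```python
# Hypothetical sketch (not PyTorch's actual registry): each opset table only
# holds ops whose ONNX definition changed in that opset; lookup walks down
# to the newest earlier implementation.
_OPSET_TABLES = {}  # opset version -> {op name -> symbolic function}

def register(opset, name):
    def deco(fn):
        _OPSET_TABLES.setdefault(opset, {})[name] = fn
        return fn
    return deco

def resolve(name, target_opset):
    # Walk downward from the target opset to the oldest supported one (9 here).
    for version in range(target_opset, 8, -1):
        impl = _OPSET_TABLES.get(version, {}).get(name)
        if impl is not None:
            return impl
    raise KeyError(f"no symbolic for {name} at opset <= {target_opset}")

# An op unchanged since opset 9 is defined once...
@register(9, "reduce_mean")
def reduce_mean_9():
    return "axes as attribute"

# ...and redefined only where its signature changed (opset 18 made axes an input).
@register(18, "reduce_mean")
def reduce_mean_18():
    return "axes as input"
```

With this scheme, exporting at opset 19 or 20 picks up the opset-18 `reduce_mean` automatically, while opset 17 and below still resolve to the opset-9 version.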
Suppressing bc-linter check because …
The PR is nearly there :)
```python
else:
    mean = g.op(
        "ReduceMean",
        input,
        g.op("Constant", value_t=torch.tensor(axes, dtype=torch.long)),
    )
```
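For context: from opset 18 onward, ONNX `ReduceMean` takes its reduction axes as a second tensor input rather than as the `axes_i` attribute used through opset 17, which is why the snippet above builds a `Constant` node for `axes`. Numerically the node computes an ordinary mean over those axes; a small NumPy sketch with illustrative values:

```python
import numpy as np

# Illustrative input; `axes` mirrors the axes tensor passed to ReduceMean above.
x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
axes = (2,)

# ReduceMean with keepdims (the ONNX default) is a plain mean over the axes.
mean = x.mean(axis=axes, keepdims=True)
print(mean.shape)  # (2, 3, 1)
```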
should we move this to symbolic_opset18.py or create a helper at symbolic_helper.py?
The implementation of this method is not moved to the helper because it uses the opset 9 add, sub, pow, and sqrt. Moving it would either introduce a circular dependency, or those methods would need to be refactored/moved as well. For this reason, it is better to leave this method here.
Normally I would go with refactoring add, sub, pow and sqrt to keep things consistent, but TorchScript is being deprecated, so maybe it is ok to keep as-is
```python
else:
    variance = g.op(
        "ReduceMean",
        pow(g, numerator, two_cst),
        g.op("Constant", value_t=torch.tensor(axes, dtype=torch.long)),
    )
```
should we move this to symbolic_opset18.py or create a helper at symbolic_helper.py?
Same as the above comment; that is the reason this method stays here.
@pytorchbot merge -f "unrelated flake CI test"

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Excellent work, thank you!
…0, extend opset 18 support (#118828)

Start to fix #114801

Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull Request resolved: #118828
Approved by: https://github.com/thiagocrepaldi
Update the default opset for the torchscript exporter to 18 to match the dynamo exporter, because support was actually added and tested in #118828. In the next version we should plan to update to opset 21 or higher. This change also removes the hard limit on the torchscript exporter for more flexibility.

Pull Request resolved: #156023
Approved by: https://github.com/Skylion007