[fx] fix qualified name for methods of torch.Tensor #162224
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/162224
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit 51e597e with merge base b04e922.
BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Is Dynamo still generating an FX graph with non-existent methods? If so, shouldn't the fix be somewhere there?
```python
if isinstance(
    func, (types.MethodDescriptorType, types.WrapperDescriptorType)
) and func is getattr(torch.Tensor, func.__name__, None):
if (
```
@isuruf do you have an explanation of what the bug was and what caused the error?
For example, did Dynamo put an incorrect function in the graph and did that cause something (AOTAutograd?) to later error out during tracing?
It's a printing problem: if we don't get the right qualified name, we will generate FX code that isn't actually runnable.
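A minimal sketch of that failure mode, assuming the qualified name is built from `__module__` and `__name__` (the `naive_qualified_name` helper below is hypothetical, not the FX implementation):

```python
# Hypothetical helper: gluing __module__ and __name__ together yields
# 'torch._tensor.split', a name that generated code cannot resolve.
import importlib
import torch

def naive_qualified_name(func):
    return f"{func.__module__}.{func.__name__}"

func = torch.Tensor.split
name = naive_qualified_name(func)            # 'torch._tensor.split'
module_name, _, attr = name.rpartition(".")
module = importlib.import_module(module_name)
# Expected False, assuming torch/_tensor.py has no module-level `split`,
# so any generated code referencing this name would fail to run.
print(hasattr(module, attr))

# __qualname__ keeps the enclosing class, so this name does resolve:
print(f"{func.__module__}.{func.__qualname__}")  # 'torch._tensor.Tensor.split'
```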
```python
        and func.__qualname__ == f"Tensor.{func.__name__}"
    ):
        return f"torch.Tensor.{func.__name__}"
    name = func.__name__
```
Using `name = func.__qualname__` here is another solution to this problem. The issue is that for `func = torch._tensor.Tensor.split`, we need `__qualname__` rather than `__name__`:

```python
>>> import torch
>>> func = torch.Tensor.split
>>> func.__module__
'torch._tensor'
>>> func.__name__
'split'
>>> func.__qualname__
'Tensor.split'
```

However, I'm not sure whether using `__qualname__` is okay here. @benjaminglass1 mentioned that `__qualname__` can be incorrect with pybind11 >= 3.
@isuruf If this only parses torch.Tensor and it works for you on latest main, then you should be fine. We're already on pybind11 == 3.0.1; I thought this was a more generic function when I provided that feedback.
It's not only torch.Tensor; it parses quite a few types, including lambdas.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
```python
        isinstance(func, (types.MethodDescriptorType, types.WrapperDescriptorType))
        and func is getattr(torch.Tensor, func.__name__, None)
    ) or (
        func.__module__ == torch._tensor
```
Suggested change:

```diff
-        func.__module__ == torch._tensor
+        func.__module__ == torch._tensor.__name__
```

`callable.__module__` is `str | None` rather than the module object.
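A quick illustration of that point (real torch attributes; printed results are what I would expect):

```python
import torch

# __module__ is the defining module's name (a str), not the module object,
# so comparing against the module object is always False.
func = torch.Tensor.split
print(type(func.__module__))                      # <class 'str'>
print(func.__module__ == torch._tensor)           # False: str vs. module object
print(func.__module__ == torch._tensor.__name__)  # True: 'torch._tensor'
```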
Fixes pytorch#160077, pytorch#154721
Pull Request resolved: pytorch#162224
Approved by: https://github.com/ezyang
Stack from ghstack (oldest at bottom):
Fixes #160077, #154721
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben