[dtensor] Add propagate_tensor_meta function that skips cache if _are_we_tracing #161334
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/161334
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (10 unrelated failures) As of commit c94282a with merge base 2f0de0f:
- FLAKY: the following jobs failed but were likely due to flakiness present on trunk.
- BROKEN TRUNK: the following jobs failed but were also failing on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Add a comment to not use _propagate_tensor_meta directly
Force-pushed from 86cdb4b to 92c690e (Compare).
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
        """
        return self._propagate_tensor_meta_non_cached(op_schema)

    def propagate_tensor_meta(
PR to rename: #161744
Rename the wrapper `propagate_tensor_meta` added in #161334 to make it clearly private, and rename the existing LRU function to accommodate.

Pull Request resolved: #161744
Approved by: https://github.com/bdhirsh
[dtensor] Add propagate_tensor_meta function that skips cache if _are_we_tracing (pytorch#161334)

Fixes an issue where the log softmax handler checked the tensor metadata cache without checking for tracing or symints. Probably best to merge this after pytorch#160798, but not strictly blocking.

Pull Request resolved: pytorch#161334
Approved by: https://github.com/xmfan
Fixes an issue where the log softmax handler checked the tensor metadata cache without checking for tracing or symints.
Probably best to merge this after #160798, but not strictly blocking.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta