Reduce random reads for offset metadata when calling torch.load under FakeTensorMode #157931
Conversation
Reduce random reads for offset metadata when calling torch.load under FakeTensorMode [ghstack-poisoned]
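For context, the code path this PR touches is torch.load invoked while FakeTensorMode is active, where the loader consults checkpoint metadata (including per-storage offsets) without reading the tensor bytes themselves. A minimal sketch of that usage, with an illustrative file name and shape not taken from the PR:

```python
# Minimal sketch of the affected code path: loading a checkpoint under
# FakeTensorMode. The file name and tensor shape are illustrative.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

torch.save({"weight": torch.randn(1024, 1024)}, "ckpt.pt")

with FakeTensorMode():
    state_dict = torch.load("ckpt.pt")

# Tensors come back as FakeTensors: shape/dtype metadata without real storage.
print(type(state_dict["weight"]), state_dict["weight"].shape)
```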
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/157931
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit ae48fda with merge base d9426a8.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Change sounds good.
It would be good to see whether some one-off benchmarking shows the improvement from this.
After confirming with @teja-rao, plus some microbenchmarking I did, this did not seem to be that useful, as most of the cost seems to come from FakeTensor overhead / opening the file, but I am still landing the PR at his request.
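For reference, a one-off microbenchmark of the kind discussed above could be as simple as the sketch below; the checkpoint layout, file name, and iteration count are made up, and per the comment above most of the measured time is expected to come from FakeTensor overhead and opening the file rather than the offset reads.

```python
# Rough sketch of a one-off microbenchmark (not the exact one that was run):
# time repeated torch.load calls under FakeTensorMode and compare the numbers
# before and after the change.
import time

import torch
from torch._subclasses.fake_tensor import FakeTensorMode

torch.save({f"layer{i}.weight": torch.randn(256, 256) for i in range(200)}, "bench.pt")

iters = 20
start = time.perf_counter()
for _ in range(iters):
    with FakeTensorMode():
        torch.load("bench.pt")
elapsed = time.perf_counter() - start
print(f"avg fake load: {elapsed / iters * 1e3:.2f} ms")
```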
@pytorchbot merge
@pytorchbot merge -r
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):

We already test the _get_offset functionality with the TORCH_SERIALIZATION_DEBUG flag that is set in CI, so I didn't add more testing specifically for FakeTensor; a sketch of how that flag is exercised is below.
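A minimal sketch, assuming TORCH_SERIALIZATION_DEBUG is read from the environment as the comment above describes: with the flag set (as it is in CI), the serialization code cross-checks the offsets that _get_offset computes during torch.load. The save/load below is illustrative, not the actual CI test.

```python
import os

# Set the flag before torch is imported, in case it is read at import time.
os.environ["TORCH_SERIALIZATION_DEBUG"] = "1"

import torch

torch.save({"w": torch.randn(8, 8)}, "ckpt.pt")
torch.load("ckpt.pt")  # offset bookkeeping is validated while the flag is set
```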