Solving pickle error when saving CyclicLR state_dict #110931
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110931
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 8f4d1a1 with merge base c77a4a4.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 090cb14 to 841dd16.
Thanks for the fix, could you add a test case to ensure this behavior won't regress in test_lrschedulers.py?
Also, lint! Oh it looks like they've all been canceled. Maybe repushing will fix it.
test/optim/test_lrscheduler.py (Outdated)
Should this be None in both situations? I would expect it to not be None here...
Yes, this should be None in both situations, since scale_fn is a lambda function and not a callable object with an internal state that can be pickled.
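For context, the distinction above can be demonstrated with the standard `pickle` module alone; this is a minimal sketch independent of PyTorch (the `ScaleFn` class and `lambda_is_picklable` helper are hypothetical, not part of the PR):

```python
import pickle


class ScaleFn:
    """A callable object whose internal state pickles cleanly."""

    def __init__(self, gamma):
        self.gamma = gamma

    def __call__(self, x):
        return self.gamma ** x


def lambda_is_picklable():
    # A callable object round-trips through pickle, state included.
    restored = pickle.loads(pickle.dumps(ScaleFn(0.99)))
    assert restored(2) == 0.99 ** 2

    # A lambda does not: pickle serializes functions by qualified name,
    # and a lambda has no importable name.
    scale_fn = lambda x: 0.99 ** x
    try:
        pickle.dumps(scale_fn)
        return True
    except (pickle.PicklingError, AttributeError):
        return False


print(lambda_is_picklable())  # False
```

This is why the test expects scale_fn to survive a state_dict round trip only in the callable-object case.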
Should there be a third case where a custom callable is passed in then?
I've added the third case now.
just one comment!
Nice! Sorry for the delay, I have just gotten back from traveling. Looks good to merge once CI is green (lint seems to be failing).
Force-pushed from d75c88c to 1c5b043.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: Command (Details for Dev Infra team: raised by workflow job).
Argh, I am sorry. Could you locally fix the merge conflicts and kick off a @pytorchbot merge on the rebased copy?
Force-pushed from 1c5b043 to 8f4d1a1.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
How to reproduce:
Error:
Fix: Save scale_fn to the state dict only if it is a callable object, not if it is a function or lambda.
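The fix pattern can be sketched outside PyTorch with a hypothetical `Scheduler` class standing in for `torch.optim.lr_scheduler.CyclicLR` (the actual PR edits that class directly): the state dict drops scale_fn when it is a plain function or lambda and keeps it when it is a callable object.

```python
import pickle
import types


class Scheduler:
    """Hypothetical stand-in for CyclicLR illustrating the fix pattern."""

    def __init__(self, scale_fn=None):
        self.scale_fn = scale_fn

    def state_dict(self):
        state = dict(self.__dict__)
        # Keep scale_fn only if it is a callable object; plain functions
        # and lambdas are not reliably picklable, so store None instead.
        if isinstance(state.get("scale_fn"), types.FunctionType):
            state["scale_fn"] = None
        return state

    def load_state_dict(self, state):
        fn = self.scale_fn  # preserve the configured fn if the dict holds None
        self.__dict__.update(state)
        if self.scale_fn is None:
            self.scale_fn = fn


# With a lambda, scale_fn is dropped from the state dict,
# so pickling the state dict no longer raises.
sched = Scheduler(scale_fn=lambda x: 0.95 ** x)
pickle.dumps(sched.state_dict())
```

Note that `types.FunctionType` matches both `def` functions and lambdas, while instances of classes defining `__call__` fall through and are saved with their state intact.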