Customized pin_memory for PackedSequence #18079
Conversation
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
        self.tgt = self.tgt.pin_memory()
        return self

    def is_pinned(self):
This is a nice touch to make the test cleaner. Do you envision making it a hard/backend requirement that users supply an is_pinned in addition to a pin_memory method when defining a custom batch class they'd like to pin? It doesn't look like this PR establishes such a requirement.
For the record, I don't think we need to impose such a requirement (anyone who knows how to define a pin_memory method for their custom batch probably knows how they'd like to check whether it has been pinned, and we don't need to force them to additionally define an is_pinned method).
I agree with you. I don't think we should make this a hard requirement either, especially since this logic is not needed for the DataLoader's memory-pinning logic :).
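For reference, here is a minimal sketch of the kind of custom batch class being discussed. It assumes a wrapper with two tensor fields; the names `SimpleCustomBatch`, `inp`, and `tgt` mirror the snippet quoted above but are otherwise illustrative, and the `is_pinned` method is an optional convenience rather than something the DataLoader requires.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

class SimpleCustomBatch:
    """Illustrative custom batch wrapper with two tensor fields."""
    def __init__(self, data):
        transposed = list(zip(*data))
        self.inp = torch.stack(transposed[0], 0)
        self.tgt = torch.stack(transposed[1], 0)

    # The DataLoader's pin-memory logic only needs this method:
    # it is called on each batch when pin_memory=True.
    def pin_memory(self):
        self.inp = self.inp.pin_memory()
        self.tgt = self.tgt.pin_memory()
        return self

    # Optional convenience for tests/assertions; not a DataLoader requirement.
    def is_pinned(self):
        return self.inp.is_pinned() and self.tgt.is_pinned()

def collate_wrapper(batch):
    return SimpleCustomBatch(batch)

# Usage sketch (pinning requires a CUDA-enabled build at runtime):
dataset = TensorDataset(torch.arange(10.0).view(10, 1),
                        torch.arange(10.0).view(10, 1))
loader = DataLoader(dataset, batch_size=2,
                    collate_fn=collate_wrapper, pin_memory=True)
```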
Can't speak to the RNN parts with certainty, but it looks like this should be fine given the test, and it doesn't affect existing custom batch pinning (i.e. @slayton58's pinning in mrcnn).
@mcarilli Yes. Also, the current behavior is that if one pins a PackedSequence, it gets converted into a plain tuple and is thus useless, so I don't think anyone is relying on that :)
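For context, a short usage sketch of what this change enables. It assumes a CUDA-enabled build (pinned memory is a CUDA feature), and the sequences are constructed in decreasing length order as pack_sequence expects.

```python
import torch
from torch.nn.utils.rnn import pack_sequence

# Build a PackedSequence from variable-length sequences.
seqs = [torch.randn(5, 3), torch.randn(3, 3), torch.randn(2, 3)]
packed = pack_sequence(seqs)

# With the customized pin_memory, the result stays a PackedSequence
# instead of degrading to a plain tuple.
pinned = packed.pin_memory()
print(type(pinned).__name__)    # PackedSequence
print(pinned.data.is_pinned())  # True
```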
Summary: fixes pytorch/pytorch#18078
Pull Request resolved: pytorch/pytorch#18079
Reviewed By: ezyang
Differential Revision: D14521192
Pulled By: zou3519
fbshipit-source-id: cec773a3a6f2c405a0d9701e213b7caf81649181
fixes #18078