Customized pin_memory for PackedSequence by ssnl · Pull Request #18079 · pytorch/pytorch · GitHub

Conversation

@ssnl (Collaborator) commented Mar 15, 2019

fixes #18078

@ssnl ssnl requested review from ezyang and zou3519 March 17, 2019 18:32
@facebook-github-bot (Contributor) left a comment

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

        self.tgt = self.tgt.pin_memory()
        return self

    def is_pinned(self):
@mcarilli (Collaborator) commented Mar 19, 2019

This is a nice touch to make the test cleaner. Do you envision making it a hard/backend requirement that users supply an is_pinned method in addition to a pin_memory method when defining a custom batch class they'd like to pin? It doesn't look like this PR establishes such a requirement.

For the record, I don't think we need to impose such a requirement: anyone who knows how to define a pin_memory method for their custom batch probably knows how they'd like to check whether it has been pinned, so we don't need to force them to additionally define an is_pinned method.
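The custom-batch pattern being discussed can be sketched as follows. This is an illustrative class (the name `SimpleCustomBatch` and the `src`/`tgt` fields follow the style of the diff hunk above, not necessarily the PR's exact test code): `pin_memory` is what the DataLoader's pinning thread calls, while `is_pinned` is purely optional, as the reviewers agree.

```python
import torch

class SimpleCustomBatch:
    """Illustrative custom batch type usable with DataLoader(pin_memory=True)."""

    def __init__(self, data):
        # data: list of (src, tgt) sample pairs
        transposed = list(zip(*data))
        self.src = torch.stack(transposed[0], 0)
        self.tgt = torch.stack(transposed[1], 0)

    def pin_memory(self):
        # Called by the DataLoader's pinning worker; must return the batch.
        self.src = self.src.pin_memory()
        self.tgt = self.tgt.pin_memory()
        return self

    def is_pinned(self):
        # Optional convenience for callers/tests; the DataLoader never calls it.
        return self.src.is_pinned() and self.tgt.is_pinned()

batch = SimpleCustomBatch([(torch.randn(3), torch.randn(3)) for _ in range(4)])
if torch.cuda.is_available():  # pinning requires a CUDA-enabled build
    batch = batch.pin_memory()
```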

@ssnl (Collaborator, Author) replied

I agree with you. I don't think we should make this a hard requirement either, especially since this logic is not needed for the dataloader's memory pinning logic :).
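Why is_pinned is unnecessary for the dataloader becomes clear from the shape of its pinning dispatch. The sketch below is a simplified, hypothetical version of that logic (not the exact `torch.utils.data._utils` internals): tensors are pinned directly, containers recursively, and any object exposing a `pin_memory()` method is asked to pin itself; `is_pinned()` is never consulted.

```python
import torch

def pin_memory_recursive(data):
    """Simplified sketch of a DataLoader-style recursive pinning dispatch."""
    if isinstance(data, torch.Tensor):
        return data.pin_memory()
    if isinstance(data, dict):
        return {k: pin_memory_recursive(v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(pin_memory_recursive(x) for x in data)
    if hasattr(data, "pin_memory"):
        # Custom batch classes only need to supply pin_memory().
        return data.pin_memory()
    return data  # non-pinnable leaf (ints, strings, ...) passes through
```

Note that a naive generic container branch like the tuple case here is exactly how a namedtuple subclass such as PackedSequence could get flattened into a plain tuple, which is the behavior this PR fixes by giving PackedSequence its own pin_memory.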

@mcarilli (Collaborator)

Can't speak to the RNN parts with certainty, but it looks like this should be fine given the test, and it doesn't affect existing custom batch pinning (i.e., @slayton58's pinning in mrcnn).

@ssnl (Collaborator, Author) commented Mar 19, 2019

@mcarilli Yes. Also, the current behavior is that pinning a PackedSequence converts it into a plain tuple, which makes the result useless, so I don't think anyone is relying on that :)
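The fixed behavior described above can be sketched like this: with a PackedSequence-specific pin_memory in place, pinning preserves the PackedSequence type instead of degrading it to a tuple. (The tensor shapes and lengths below are illustrative.)

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, PackedSequence

# Three variable-length sequences, padded to length 5, batch dim second
# (the default batch_first=False layout): (max_len, batch, features).
seqs = torch.randn(5, 3, 2)
lengths = torch.tensor([5, 4, 2])  # must be sorted descending by default

packed = pack_padded_sequence(seqs, lengths)

if torch.cuda.is_available():  # pinning requires a CUDA-enabled build
    pinned = packed.pin_memory()
    # With this PR, the result is still a PackedSequence, not a plain tuple,
    # and its flat data tensor lives in pinned (page-locked) memory.
    assert isinstance(pinned, PackedSequence)
    assert pinned.data.is_pinned()
```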

@ssnl ssnl deleted the packed_sequence_pin_memory branch March 19, 2019 21:17
zdevito pushed a commit to zdevito/ATen that referenced this pull request Mar 19, 2019
Summary:
fixes pytorch/pytorch#18078
Pull Request resolved: pytorch/pytorch#18079

Reviewed By: ezyang

Differential Revision: D14521192

Pulled By: zou3519

fbshipit-source-id: cec773a3a6f2c405a0d9701e213b7caf81649181

Development

Successfully merging this pull request may close these issues.

PackedSequence should have custom pin_memory for data loading

6 participants