[STACK] Disallow changing the device of a tensor via set_. by gchanan · Pull Request #18832 · pytorch/pytorch · GitHub

Conversation

@gchanan
Contributor

@gchanan gchanan commented Apr 4, 2019

Stack from ghstack:

This is necessary to cache the device on a TensorImpl.

Differential Revision: D14766231

This is necessary to cache the device on a TensorImpl.
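The invariant this PR establishes can be sketched with a small toy model (hypothetical names, not the actual PyTorch internals): once a TensorImpl caches its device at construction, letting set_ swap in a storage from a different device would leave that cache stale, so the mismatch must be rejected.

```python
# Hypothetical sketch, not PyTorch source: models why set_ must not change
# a tensor's device once the device is cached on the TensorImpl.

class Storage:
    """Minimal stand-in for a torch Storage: a buffer pinned to one device."""
    def __init__(self, device: str, size: int = 0):
        self.device = device
        self.size = size

class TensorImpl:
    """Caches its device at construction, like the change this stack enables."""
    def __init__(self, storage: Storage):
        self.storage = storage
        self.device = storage.device  # cached once; no lookup through storage later

    def set_(self, new_storage: Storage):
        # Swapping in a storage on another device would make the cached
        # device stale, so it is disallowed.
        if new_storage.device != self.device:
            raise RuntimeError(
                f"set_ cannot change device from {self.device} "
                f"to {new_storage.device}")
        self.storage = new_storage
        return self

t = TensorImpl(Storage("cpu", 8))
t.set_(Storage("cpu", 16))  # same device: allowed
try:
    t.set_(Storage("cuda:0", 8))  # different device: rejected
except RuntimeError as e:
    print(e)
```

With this rule in place, `t.device` can be read from the cached field with no risk of it disagreeing with the underlying storage.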
Collaborator

@dzhulgakov dzhulgakov left a comment


I think you got your stacks messed up: there's a duplicate PR for this one. But stamping, since the restore issue was fixed in another PR.

facebook-github-bot pushed a commit that referenced this pull request Apr 4, 2019
Summary:
Pull Request resolved: #18831
ghimport-source-id: 2741e0d

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18833 [STACK] Cache device on TensorImpl; clean up TensorImpl constructors.
* #18832 [STACK] Disallow changing the device of a tensor via set_.
* **#18831 [STACK] Stop swapping in Storages of the wrong device for Tensors.**

This is necessary to support device caching, see #18751 and #18578.

In library code, we potentially swap in Storages with the wrong device when device_guard is False. This happens as follows with "view-like" operations:
1) We allocate a tensor on the 'wrong' device (because device_guard is false).
2) We swap out the 'wrong' storage with the 'right' storage using e.g. THCTensor_setStorage.

Instead, we can just construct the Tensor with the correct Storage from the beginning.  This is what we do with 'view'.

Note there are two other "view-like" cases where this happens:
1) unfold
2) set_()

Because these aren't performance critical, I just added the device_guard instead of applying the above correction.

For completeness, this also includes a test that all `device_guard: false` functions behave properly under these conditions.
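The allocate-then-swap pattern and the fix can be sketched with a toy Storage/Tensor model (hypothetical names, not PyTorch internals): under `device_guard: false`, the ambient device used for the initial allocation may differ from the device of the tensor being viewed, so swapping storages afterwards leaves a stale cached device.

```python
# Hypothetical sketch: why constructing a view with the right Storage up front
# is safer than allocating on the wrong device and swapping storages after.

class Storage:
    def __init__(self, device: str):
        self.device = device

class Tensor:
    def __init__(self, storage: Storage):
        self.storage = storage
        self.device = storage.device  # device cached at construction

def current_device() -> str:
    # Stand-in for the ambient device when device_guard is false; it may
    # not match the device of the tensor being viewed.
    return "cpu"

def view_buggy(base: Tensor) -> Tensor:
    # 1) allocate on the 'wrong' (ambient) device ...
    out = Tensor(Storage(current_device()))
    # 2) ... then swap in the base's storage (a la THCTensor_setStorage).
    out.storage = base.storage  # cached device is now stale!
    return out

def view_fixed(base: Tensor) -> Tensor:
    # Construct the Tensor with the correct Storage from the beginning.
    return Tensor(base.storage)

base = Tensor(Storage("cuda:0"))
bad = view_buggy(base)   # bad.device disagrees with bad.storage.device
good = view_fixed(base)  # device and storage stay consistent
```

Adding a device_guard to unfold and set_ (the non-performance-critical cases) sidesteps the problem by making the ambient device correct during allocation.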

Reviewed By: dzhulgakov

Differential Revision: D14766232

fbshipit-source-id: 0865c3ddae3f415df5da7a9869b1ea9f210e81bc
@gchanan
Contributor Author

gchanan commented Apr 4, 2019

@dzhulgakov I did a previous ghstack but accidentally ctrl-C'ed in the middle. Even though the PRs looked good, I was worried some internal state was messed up, so I decided to just close them and try again.

[STACK] Disallow changing the device of a tensor via set_.

This is necessary to cache the device on a TensorImpl.

gh-metadata: pytorch pytorch 18832 gh/gchanan/10/head
zdevito pushed a commit to zdevito/ATen that referenced this pull request Apr 4, 2019
Summary:
Pull Request resolved: pytorch/pytorch#18832
ghimport-source-id: fde4ad90541ba52dfa02bdd83466f17e6541e535

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18833 [STACK] Cache device on TensorImpl; clean up TensorImpl constructors.
* **#18832 [STACK] Disallow changing the device of a tensor via set_.**
* #18831 [STACK] Stop swapping in Storages of the wrong device for Tensors.

This is necessary to cache the device on a TensorImpl.

Differential Revision: D14766231

fbshipit-source-id: bba61634b2d6252ac0697b96033c9eea680956e8
@facebook-github-bot
Contributor

This pull request has been merged in 8732a1b.

gchanan added a commit that referenced this pull request Apr 4, 2019
facebook-github-bot pushed a commit that referenced this pull request Apr 5, 2019
Summary:
Pull Request resolved: #18833
ghimport-source-id: 6f2be25

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18833 [STACK] Cache device on TensorImpl; clean up TensorImpl constructors.**
* #18832 [STACK] Disallow changing the device of a tensor via set_.
* #18831 [STACK] Stop swapping in Storages of the wrong device for Tensors.

1) We cache device on TensorImpl.  This means we can access the device without a virtual function and allows us to more easily extend TensorImpls (because they don't need to figure out how to store the Device for themselves).

2) Clean up TensorImpl APIs.  We had a constructor that took a TensorTypeId and an allocator and would allocate a Storage based on the recognized types of TensorTypeIds.  Instead, we just have two different constructors: one for types with a storage, one without.
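The constructor split described above can be sketched as follows (a hypothetical Python model; the real code is C++ and the names are illustrative): one construction path for TensorImpls backed by a Storage, whose device comes from that Storage, and one for storage-less impls that state their device directly.

```python
# Hypothetical sketch of the TensorImpl constructor split: one path for
# impls backed by a Storage, one for storage-less impls.

class Storage:
    def __init__(self, device: str):
        self.device = device

class TensorImpl:
    def __init__(self, device: str, storage=None):
        self.storage = storage
        self.device = device  # cached once; no virtual dispatch to read it

    @classmethod
    def with_storage(cls, storage: Storage):
        # Types backed by memory derive their device from the Storage.
        return cls(storage.device, storage)

    @classmethod
    def without_storage(cls, device: str):
        # e.g. opaque backend tensors that manage memory themselves.
        return cls(device, None)

dense = TensorImpl.with_storage(Storage("cpu"))
opaque = TensorImpl.without_storage("cuda:0")
```

Splitting the constructors removes the need to infer "does this TensorTypeId get a Storage?" inside a single constructor, and keeps the device cache valid from the moment the impl exists.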

Reviewed By: dzhulgakov

Differential Revision: D14766230

fbshipit-source-id: 745b8db84dcd6cb58f1a8675ad3ff8d033bc50df
@ezyang ezyang deleted the gh/gchanan/10/head branch May 30, 2019 15:14
