Add missing _lazy_init in cuda python module by albanD · Pull Request #4907 · pytorch/pytorch

Conversation

@albanD
Collaborator

@albanD albanD commented Jan 29, 2018

Current master raises an error when querying the blas handle (the message asks to set the blas handle, which seems like the wrong error message since it cannot be set from Python) and segfaults in the empty_cache() function.
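
For illustration, a minimal sketch of the failure mode and the effect of the fix, assuming a CUDA-capable machine; the calls below go through the public torch.cuda wrappers, while the actual change adds the missing _lazy_init() calls inside torch/cuda/__init__.py:

```python
import torch

# Pre-fix behavior on master (as described above): neither wrapper initialized
# CUDA before touching C-side state, so
#   torch.cuda.current_blas_handle() raised a misleading "set the blas handle" error
#   torch.cuda.empty_cache()         segfaulted

# With the missing _lazy_init() calls in place, both work on first use:
handle = torch.cuda.current_blas_handle()  # cublasHandle_t pointer, returned as an int
torch.cuda.empty_cache()                   # releases unused cached memory, no crash
print(hex(handle))
```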

@apaszke apaszke merged commit 7a47790 into pytorch:master Jan 29, 2018
@apaszke
Contributor

apaszke commented Jan 29, 2018

Thanks Alban!

@albanD albanD deleted the fix_missing_cuda_init branch January 29, 2018 17:22
@ngimel
Collaborator

ngimel commented Jan 29, 2018

Please note that _lazy_init messes up context creation (see #4903) and memory use on GPU 0.

@apaszke
Contributor

apaszke commented Jan 29, 2018

Yeah, but what can we do if you ask for a blas handle? 😕 But I agree it would be slightly nicer to make empty_cache a no-op if CUDA hasn't been initialized. @albanD could you please change that?
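
For context, a minimal sketch of what such a guard could look like; the `_initialized` flag and the `torch._C._cuda_emptyCache` binding name are assumptions mirroring torch/cuda/__init__.py, not the actual follow-up patch:

```python
import torch

# Hypothetical sketch of the suggested follow-up (not the actual patch):
# make empty_cache() a no-op when CUDA has never been initialized, instead of
# forcing _lazy_init() and creating a context on GPU 0 just to free nothing.
# In torch/cuda/__init__.py a module-level flag already tracks this; it is
# mimicked here so the snippet stands alone.
_initialized = False  # flipped to True by _lazy_init() in the real module

def empty_cache():
    if not _initialized:
        return  # nothing was ever cached through the allocator, nothing to free
    torch._C._cuda_emptyCache()  # assumed name of the C binding used by the real wrapper
```

The point of the guard is simply to avoid creating a CUDA context (and the associated memory use on GPU 0 that @ngimel mentioned) when there is nothing to free.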

@albanD
Collaborator Author

albanD commented Jan 30, 2018

@apaszke sure.
