Make torch.cuda.empty_cache() a no-op when cuda is not initialized by albanD · Pull Request #4936 · pytorch/pytorch · GitHub

Conversation

@albanD (Collaborator) commented Jan 30, 2018

cc: @ngimel @apaszke

current_blas_handle() still calls _lazy_init(), as do all the current_*() functions that return something tied to the currently set device.
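
To make the distinction concrete, here is a small hedged sketch of the behavior this PR describes, using today's public torch.cuda API (torch.cuda.is_initialized() postdates this PR and is assumed here purely as the check): empty_cache() returns without creating a CUDA context, whereas current_blas_handle() goes through _lazy_init() and initializes one.

```python
import torch

# Illustrative sketch, not the PR's diff: on a machine with a CUDA
# device, calling empty_cache() before any GPU work is a no-op that
# leaves the context uninitialized, while current_blas_handle()
# still triggers _lazy_init().
if torch.cuda.is_available():
    torch.cuda.empty_cache()
    print(torch.cuda.is_initialized())   # False: nothing to flush yet

    handle = torch.cuda.current_blas_handle()
    print(torch.cuda.is_initialized())   # True: _lazy_init() ran
```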
