Reverts force_gpu_half changes from #3660 by colesbury · Pull Request #5000 · pytorch/pytorch · GitHub

Conversation

@colesbury
Member

@colesbury colesbury commented Feb 1, 2018

The test_cuda.py setup purports to test half tensors, but actually just
re-tests FloatTensors, because the keys in type_map were str instead of
type. Testing HalfTensors is more involved: it requires looser precision
settings and the exclusion of some unimplemented methods.

We should fully test half CUDA tensors. This change just deletes the
duplicate tests of FloatTensor.
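The str-vs-type key mismatch can be illustrated with a small stand-alone sketch. The names below (type_map, HalfTensor, the tolerance values) are illustrative stand-ins, not the actual test_cuda.py code:

```python
# Sketch of the bug: a mapping intended to pair tensor types with
# per-type test precisions, but keyed by *strings* instead of types.
type_map = {
    'torch.cuda.HalfTensor': 1e-2,  # intended entry, keyed by str
}

class HalfTensor:
    """Stand-in for torch.cuda.HalfTensor."""
    pass

# The test looks the precision up by the *type* object, so the str key
# never matches and the lookup silently falls back to the float default.
precision = type_map.get(HalfTensor, 1e-5)
print(precision)  # falls back to the default, not the half-specific 1e-2
```

Because the lookup never hits the half-specific entry, the "half" tests ran with float-level settings, i.e. they were duplicates of the FloatTensor tests.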

See #3660

cc @apaszke

@ezyang
Contributor

ezyang commented Feb 3, 2018

CC @ailzhang

@ailzhang
Contributor

ailzhang commented Feb 6, 2018

Thanks @ezyang! This looks good to me. I've just started changing test_cuda.py and will use torch.FloatTensor as the corresponding CPU baseline for torch.cuda.HalfTensor, since most operations are not implemented in torch.HalfTensor.
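Why a float baseline needs a loosened tolerance: IEEE-754 binary16 carries only about three decimal digits of precision, so comparing half results against a float reference at a typical float tolerance like 1e-5 would fail even for a perfectly correct op. A stdlib-only sketch (using struct's 'e' format to round-trip through binary16; the tolerance values are illustrative):

```python
import struct

def to_half(x: float) -> float:
    """Round-trip a Python float through IEEE-754 binary16,
    mimicking the precision loss of a half-precision tensor."""
    return struct.unpack('e', struct.pack('e', x))[0]

baseline = 1.2345678          # "CPU float baseline" value
half = to_half(baseline)      # what a half tensor would actually store
err = abs(half - baseline)

# binary16 has a 10-bit mantissa: spacing near 1.0 is 2**-10 ~= 1e-3,
# so the rounding error alone exceeds a float-level tolerance of 1e-5.
print(err > 1e-5)   # too strict for half comparisons
print(err < 1e-2)   # a looser half-level tolerance still passes
```

This is the reason half-tensor tests need their own precision entries rather than inheriting the float defaults.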

@colesbury colesbury merged commit 85e22b5 into pytorch:master Feb 7, 2018
@colesbury colesbury deleted the partial_revert_3660 branch February 7, 2018 20:33
