Fix triu and tril for zero-strided inputs on GPU by albanD · Pull Request #4962 · pytorch/pytorch

Conversation


@albanD albanD commented Jan 31, 2018

Fix #4840
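
For context, a zero-strided input is typically produced by expand(), which broadcasts a dimension without copying, so that dimension gets stride 0. A minimal sketch of the kind of input involved in #4840 (the values are illustrative, not the exact repro from the issue):

    import torch

    # expand() creates a view whose broadcast dimension has stride 0,
    # so many logical elements alias the same memory location.
    x = torch.arange(3.0).view(3, 1).expand(3, 3)
    print(x.stride())  # (1, 0) -- the second dimension is zero-strided

    # Before this fix, triu on such an input could give different
    # results on the GPU than on the CPU.
    cpu_result = torch.triu(x)
    gpu_result = torch.triu(x.cuda()).cpu()
    print(torch.equal(cpu_result, gpu_result))  # True once the fix is in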

This assumes that the output tensor for the operation (when doing an in-place operation or using the out= flag) does not have a zero-strided dimension.
@apaszke IIRC this is an assumption we make all the time, right? If so, do we want to enforce it explicitly at the Python API level (one linear check of the strides should be cheap compared to the Python wrapping)? Or do we want to state it explicitly somewhere in the documentation?
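
A minimal sketch of the linear stride check mentioned above, as it might look at the Python API level (the helper name is hypothetical, not an existing function in the codebase):

    def _check_out_strides(out):
        # Hypothetical guard: reject in-place / out= tensors that have
        # a zero-strided dimension, since writing through aliased
        # memory produces ill-defined results.
        if any(stride == 0 for stride in out.stride()):
            raise RuntimeError(
                "out tensor must not have a zero-strided dimension")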


precision = custom_precision.get(name, TestCuda.precision)
for inplace in (True, False):
    if inplace and no_inplace:
        continue  # skip the in-place variant for ops that do not support it


@soumith soumith merged commit 6c197c2 into pytorch:master Jan 31, 2018
ssnl added a commit to ssnl/pytorch that referenced this pull request Jan 31, 2018
soumith pushed a commit that referenced this pull request Jan 31, 2018
* Revert "Clarify grad_input_mask documentation in derivatives.yaml (#4963)"

This reverts commit 6f3266b.

* Revert "fix triu and tril for zero-strided inputs on gpu (#4962)"

This reverts commit 6c197c2.

* Revert "Add mutex for CPU RNG and move TH to C++ (#4041)"

This reverts commit 96239dd.

* Revert "Support multivariate TransformedDistributions (#4937)"

This reverts commit ca5071d.

* Revert "Only check that arguments are Variables in VariableType (#4943)"

This reverts commit d444379.

* Revert "torch.set_num_threads sets MKL option too (#4949)"

This reverts commit 2aaeec0.
@soumith soumith added the 0.3.1 label Feb 5, 2018

Successfully merging this pull request may close these issues: Inconsistent results from torch.triu on GPU compared to CPU (#4840).
