Add check for slice shape match in index_copy_ and index_add_. by yongjik · Pull Request #4342 · pytorch/pytorch · GitHub

Conversation

@yongjik
Copy link
Contributor

@yongjik yongjik commented Dec 24, 2017

Emits a warning if slices have the same size but different shapes. (It
shouldn't be allowed, but it was, so some code might be unknowingly depending on
the behavior.)

Also refactored argument checking code, including index_fill_.

This fixes #4213.

@yongjik
Copy link
Contributor Author

yongjik commented Dec 24, 2017

For example, the following code used to silently corrupt memory; it now raises an
error:

A = torch.zeros(4, 4).cuda()
B = torch.zeros(3, 5).cuda()
idxs = torch.LongTensor([2, 1, 0]).cuda()
A.index_add_(0, idxs, B)  # or index_copy_
# RuntimeError: invalid argument 2: Source/destination tensor have different
# slice sizes (4 vs 5) at (...)/aten/src/THC/generic/THCTensorIndex.cu:46

Also, the following code generates a warning. (I'm not sure if anybody is
intentionally using this behavior, but I decided to play it safe.)

C = torch.zeros(3, 2, 2).cuda()
A.index_add_(0, idxs, C)  # or index_copy_
# Warning: source/destination slices have same size but different shape for
# an index operation.  This behavior is deprecated.
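The rule behind the two outcomes above can be sketched in pure Python (a hypothetical rendition for illustration only; the actual check lives in the CUDA code under aten/src/THC/): a hard error when the total slice sizes differ, and a deprecation warning when the sizes match but the per-dimension shapes do not.

```python
import warnings
from math import prod

def check_index_slices(dst_shape, src_shape, dim):
    """Hypothetical sketch of the slice check added in this PR.

    A "slice" is the tensor shape with dimension `dim` removed;
    index_copy_/index_add_ move whole slices between src and dst,
    so those slices must agree.
    """
    dst_slice = tuple(s for d, s in enumerate(dst_shape) if d != dim)
    src_slice = tuple(s for d, s in enumerate(src_shape) if d != dim)
    dst_size, src_size = prod(dst_slice), prod(src_slice)
    if dst_size != src_size:
        # Mirrors the new error: e.g. A (4, 4) vs B (3, 5) on dim 0 -> 4 vs 5.
        raise RuntimeError(
            f"Source/destination tensor have different slice sizes "
            f"({dst_size} vs {src_size})")
    if dst_slice != src_slice:
        # Mirrors the new warning: e.g. A (4, 4) vs C (3, 2, 2) on dim 0,
        # slices (4,) and (2, 2) both hold 4 elements but differ in shape.
        warnings.warn(
            "source/destination slices have same size but different shape "
            "for an index operation. This behavior is deprecated.")
```

Note that `check_index_slices((4, 4), (3, 5), 0)` raises, `check_index_slices((4, 4), (3, 2, 2), 0)` only warns, and matching slices pass silently.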

@yongjik
Copy link
Contributor Author

yongjik commented Dec 30, 2017

Hi, did anyone have a chance to look at this? Thanks!

@soumith
Copy link
Member

soumith commented Dec 30, 2017

@pytorchbot add to whitelist

@soumith soumith merged commit d7da504 into pytorch:master Jan 4, 2018
@soumith
Copy link
Member

soumith commented Jan 4, 2018

thanks a lot @yongjik !

@yongjik yongjik deleted the 1220-idxbug branch January 22, 2018 04:15
@soumith soumith added the 0.3.1 label Feb 4, 2018
soumith pushed a commit that referenced this pull request Feb 7, 2018
@mingyuliutw
Copy link

I think there might be something wrong with the index_copy_ function in the 0.4.0 release.
Given the variables below, my goal is to copy the first and second columns of B into the first and second columns of A.

A = torch.zeros((3, 5))
B = torch.zeros((3, 2))
idxs = torch.LongTensor([0, 1])

The following command generates an error message (a dimension mismatch):

A.index_copy_(1, idxs, B)

But when I transpose A and B and change the dim argument from 1 to 0, it works:

torch.transpose(A, 1, 0).index_copy_(0, idxs, torch.transpose(B, 1, 0))
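For what it's worth, under the slice rule this PR enforces, the dim=1 call above looks valid: removing dimension 1 leaves a (3,) slice on both sides. A minimal, torch-free sketch of that shape arithmetic (the helper name is hypothetical):

```python
def slice_shape(shape, dim):
    # Shape of one slice: the tensor shape with dimension `dim` removed.
    return tuple(s for d, s in enumerate(shape) if d != dim)

# A.index_copy_(1, idxs, B) with A of shape (3, 5), B of shape (3, 2):
# both slices are (3,), so the call should be accepted.
assert slice_shape((3, 5), 1) == slice_shape((3, 2), 1) == (3,)

# The transposed workaround is the same check with dim 0 on (5, 3) and (2, 3):
assert slice_shape((5, 3), 0) == slice_shape((2, 3), 0) == (3,)
```

So both spellings describe the same copy; an error on the first form but not the second would suggest a regression in the 0.4.0 check rather than a misuse.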

Successfully merging this pull request may close these issues.

index_add_ & index_copy_ do not properly check tensor sizes on GPU
