It seems that x.index_fill_() can write to memory outside x when x is a CUDA tensor.
If x is a non-CUDA tensor, we get the expected out-of-range error:
>>> import torch
>>> x = torch.Tensor([1,1,1])
>>> x.index_fill_(0, torch.LongTensor([100]), -1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: invalid argument 2: out of range at /opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/TH/generic/THTensor.c:861
In contrast, when x is a CUDA tensor, index_fill_() does not raise any error:
>>> a = torch.Tensor([1,1,1]).cuda()
>>> a.index_fill_(0, torch.LongTensor([100]).cuda(), -1)
1
1
1
[torch.cuda.FloatTensor of size 3 (GPU 0)]
It's hard to share the whole code, but I have noticed that this out-of-bounds write affected the behavior of an existing network, so I'm afraid this op can change arbitrary memory on the GPU, which is dangerous. Could you check this out?