Closed
Labels
module: sparse (Related to torch.sparse), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
The following test does not work when MKL is not enabled:
```python
@onlyCPU
@dtypes(torch.float, torch.double)
def test_csr_matvec(self, device, dtype):
    side = 100
    for index_dtype in [torch.int32, torch.int64]:
        csr = self.genSparseCSRTensor((side, side), 1000, device=device, dtype=dtype, index_dtype=index_dtype)
        vec = torch.randn(side, dtype=dtype, device=device)

        res = csr.matmul(vec)
        expected = csr.to_dense().matmul(vec)
        self.assertEqual(res, expected)

        bad_vec = torch.randn(side + 10, dtype=dtype, device=device)
        with self.assertRaisesRegex(RuntimeError, "mv: expected"):
            csr.matmul(bad_vec)
```

Produces:
```
ERROR: test_csr_matvec_cpu_float64 (__main__.TestSparseCSRCPU)
Traceback (most recent call last):
  File "/home/alexander/git/pytorch/pytorch_dev/torch/testing/_internal/common_device_type.py", line 297, in instantiated_test
    raise rte
  File "/home/alexander/git/pytorch/pytorch_dev/torch/testing/_internal/common_device_type.py", line 292, in instantiated_test
    result = test_fn(self, *args)
  File "/home/alexander/git/pytorch/pytorch_dev/torch/testing/_internal/common_device_type.py", line 729, in only_fn
    return fn(slf, device, *args, **kwargs)
  File "/home/alexander/git/pytorch/pytorch_dev/test/test_sparse_csr.py", line 220, in test_csr_matvec
    res = csr.matmul(vec)
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
Expected behavior
No runtime errors: the CSR matrix-vector product should succeed even when MKL is not available.
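For reference, the semantics the test checks (a CSR matvec agreeing with the dense product) do not depend on MKL and can be sketched with a plain NumPy loop over the CSR arrays. This is an illustrative reference implementation, not PyTorch's actual kernel; the `crow_indices`/`col_indices`/`values` names mirror PyTorch's CSR tensor accessors.

```python
import numpy as np

def csr_matvec(crow_indices, col_indices, values, vec):
    """Reference CSR sparse matrix-vector product:
    out[i] = sum of values[k] * vec[col_indices[k]]
    over the nonzeros k stored for row i."""
    n_rows = len(crow_indices) - 1
    out = np.zeros(n_rows, dtype=np.result_type(values, vec))
    for i in range(n_rows):
        # crow_indices[i]:crow_indices[i + 1] is the slice of nonzeros in row i
        for k in range(crow_indices[i], crow_indices[i + 1]):
            out[i] += values[k] * vec[col_indices[k]]
    return out

# 2x3 example matrix [[1, 0, 2], [0, 3, 0]] in CSR form
crow = np.array([0, 2, 3])
col = np.array([0, 2, 1])
vals = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 1.0])
print(csr_matvec(crow, col, vals, v))  # [3. 3.]
```

A non-MKL fallback along these lines (vectorized, of course) would avoid touching the raw data pointer of the sparse tensor, which is what triggers the `RuntimeError` above.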