torch.nn.BatchNorm1d Segmentation Fault with mixed CPU/GPU · Issue #15826 · pytorch/pytorch · GitHub

@buotex

Description


🐛 Bug

Most torch functions raise a user-friendly exception when given tensors on incompatible devices; torch.nn.BatchNorm1d, however, segfaults instead.

To Reproduce

Steps to reproduce the behavior:

  1. Create a BatchNorm1d layer on the GPU
  2. Apply it to a CPU tensor

This segfaults:

>>> torch.nn.BatchNorm1d(1).cuda()(torch.rand(5,1))
Segmentation fault (core dumped)
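Until this is fixed, the crash can be avoided by moving the input to the module's device before the call. A minimal sketch (the helper name is hypothetical, not part of PyTorch):

```python
import torch

def apply_on_module_device(module, x):
    # Hypothetical helper: look up the device of the module's parameters
    # and move the input there, so the call never mixes CPU and GPU tensors.
    device = next(module.parameters()).device
    return module(x.to(device))

bn = torch.nn.BatchNorm1d(1)  # replace with .cuda() to reproduce the report
out = apply_on_module_device(bn, torch.rand(5, 1))
print(out.shape)  # torch.Size([5, 1])
```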

Expected behavior

The opposite case throws the expected error:

>>> torch.nn.BatchNorm1d(1)(torch.rand(5,1).cuda())
RuntimeError: Tensor for argument #2 'weight' is on CPU, but expected it to be on GPU (while checking arguments for cudnn_batch_norm)
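For comparison, the kind of check the CPU→GPU path already performs could be sketched in Python like this (a hypothetical wrapper, not PyTorch's actual implementation), turning the segfault into a RuntimeError:

```python
import torch

def checked_forward(module, x):
    # Hypothetical device-consistency check: raise instead of crashing
    # when the module's parameters and the input live on different devices.
    param_device = next(module.parameters()).device
    if param_device != x.device:
        raise RuntimeError(
            f"Expected input on {param_device}, but got tensor on {x.device}"
        )
    return module(x)

bn = torch.nn.BatchNorm1d(1)
y = checked_forward(bn, torch.rand(5, 1))  # matching devices: runs normally
```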

Environment

PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Quadro M2000
Nvidia driver version: 384.130
cuDNN version: Probably one of the following:
/usr/local/MATLAB/R2017b/bin/glnxa64/libcudnn.so.5.1.5

Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl-service 1.1.2 py37he904b0f_5
[conda] mkl_fft 1.0.6 py37hd81dba3_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.0.0 py3.7_cuda9.0.176_cudnn7.4.1_1 pytorch
[conda] torchvision 0.2.1 py_2 pytorch


    Labels

    high priority, module: crash (problem manifests as a hard crash, as opposed to a RuntimeError)
