Improve LSTM documentation for proj_size > 0 by rodrigoberriel · Pull Request #65102 · pytorch/pytorch · GitHub

Conversation

@rodrigoberriel
Contributor

Fixes #65053. Although the documentation states that:

If ``proj_size > 0`` is specified, LSTM with projections will be used. This changes
the LSTM cell in the following way. First, the dimension of :math:`h_t` will be changed from
``hidden_size`` to ``proj_size`` (dimensions of :math:`W_{hi}` will be changed accordingly).
Second, the output hidden state of each layer will be multiplied by a learnable projection
matrix: :math:`h_t = W_{hr}h_t`. Note that as a consequence of this, the output
of LSTM network will be of different shape as well. See Inputs/Outputs sections below for exact
dimensions of all variables. You can find more details in https://arxiv.org/abs/1402.1128.

It seems that the definition of ``weight_ih_l[k]`` could be improved by specifying what happens when ``k > 0`` and ``proj_size > 0``. Since ``proj_size`` is only used by LSTM, no changes are needed for the other RNNs.
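For illustration, the documented shapes can be sketched with a small helper (hypothetical, not part of the patch) that mirrors the standard PyTorch LSTM parameterization, where each weight stacks the four gates along the first dimension:

```python
# Hypothetical helper illustrating the documented shape of
# LSTM.weight_ih_l[k]; not part of the actual patch.
def weight_ih_shape(k, input_size, hidden_size, proj_size=0, bidirectional=False):
    """Expected shape of weight_ih_l[k] for an LSTM."""
    num_directions = 2 if bidirectional else 1
    if k == 0:
        in_features = input_size
    else:
        # Layers k > 0 consume the previous layer's hidden state. With
        # proj_size > 0 that state has size proj_size per direction;
        # otherwise it has size hidden_size per direction.
        real_hidden = proj_size if proj_size > 0 else hidden_size
        in_features = real_hidden * num_directions
    # The four gate matrices (i, f, g, o) are stacked along dim 0.
    return (4 * hidden_size, in_features)

# proj_size > 0 shrinks the input of every layer after the first:
print(weight_ih_shape(0, input_size=10, hidden_size=20, proj_size=5))  # (80, 10)
print(weight_ih_shape(1, input_size=10, hidden_size=20, proj_size=5))  # (80, 5)
```

This is exactly the `k > 0` case the PR spells out: without the projection the second layer would instead see an input of size `hidden_size` (here 20) per direction.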

@facebook-github-bot
Contributor

facebook-github-bot commented Sep 15, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit ecb3c07 (more details on the Dr. CI page):


  • 6/6 failures introduced in this PR

🕵️ 4 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 1, 2, linux.8xlarge.nvidia.gpu) (1/4)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-16T00:44:55.7172862Z RuntimeError: CUDA error: device-side assert triggered
2021-09-16T00:44:52.5860404Z   File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 493, in synchronize
2021-09-16T00:44:52.5861402Z     return torch._C._cuda_synchronize()
2021-09-16T00:44:52.5862280Z RuntimeError: CUDA error: device-side assert triggered
2021-09-16T00:44:52.5863660Z CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2021-09-16T00:44:52.5864702Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2021-09-16T00:44:55.7164411Z /var/lib/jenkins/workspace/aten/src/ATen/native/cuda/TensorCompare.cu:161: _assert_async_cuda_kernel: block: [0,0,0], thread: [0,0,0] Assertion `input[0] != c10::complex<float>(0, 0)` failed.
2021-09-16T00:44:55.7168917Z Traceback (most recent call last):
2021-09-16T00:44:55.7169737Z   File "<string>", line 4, in <module>
2021-09-16T00:44:55.7171173Z   File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 493, in synchronize
2021-09-16T00:44:55.7171980Z     return torch._C._cuda_synchronize()
2021-09-16T00:44:55.7172862Z RuntimeError: CUDA error: device-side assert triggered
2021-09-16T00:44:55.7173880Z CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2021-09-16T00:44:55.7174872Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2021-09-16T00:44:55.9260769Z ok (12.508s)
2021-09-16T00:44:55.9330206Z   test_gather_bool (__main__.TestCuda) ... ok (0.007s)
2021-09-16T00:44:55.9374078Z   test_get_device_index (__main__.TestCuda) ... ok (0.004s)
2021-09-16T00:44:55.9433697Z   test_get_set_rng_state_all (__main__.TestCuda) ... ok (0.006s)
2021-09-16T00:44:55.9642158Z   test_grad_scaling_accumulation (__main__.TestCuda) ... ok (0.021s)
2021-09-16T00:44:56.0066851Z   test_grad_scaling_autocast (__main__.TestCuda) ... ok (0.042s)
2021-09-16T00:44:56.0334602Z   test_grad_scaling_clipping (__main__.TestCuda) ... ok (0.027s)
2021-09-16T00:44:56.0596875Z   test_grad_scaling_clipping_separate_unscale (__main__.TestCuda) ... ok (0.026s)

See GitHub Actions build linux-bionic-py3.8-gcc9-coverage / test (default, 2, 2, linux.2xlarge) (2/4)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-15T23:12:32.5337481Z CONTINUE_THROUGH_ERROR: false
2021-09-15T23:12:32.5331825Z   IN_WHEEL_TEST: 1
2021-09-15T23:12:32.5332247Z   CUSTOM_TEST_ARTIFACT_BUILD_DIR: build/custom_test_artifacts
2021-09-15T23:12:32.5332885Z   ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine
2021-09-15T23:12:32.5333411Z   PR_LABELS: []
2021-09-15T23:12:32.5334401Z   DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.8-gcc9:74e757e8b0cf750d2f91db6aa4c29640abce32ea
2021-09-15T23:12:32.5335565Z   JOB_BASE_NAME: linux-bionic-py3.8-gcc9-coverage-test
2021-09-15T23:12:32.5336125Z   TEST_CONFIG: default
2021-09-15T23:12:32.5336427Z   SHARD_NUMBER: 2
2021-09-15T23:12:32.5336727Z   NUM_TEST_SHARDS: 2
2021-09-15T23:12:32.5337078Z   PYTORCH_IGNORE_DISABLED_ISSUES: 65053
2021-09-15T23:12:32.5337481Z   CONTINUE_THROUGH_ERROR: false
2021-09-15T23:12:32.5337796Z   SHM_SIZE: 1g
2021-09-15T23:12:32.5338072Z   PR_NUMBER: 65102
2021-09-15T23:12:32.5338368Z ##[endgroup]
2021-09-15T23:12:45.9110238Z Processing ./dist/torch-1.10.0a0+git80ccc9f-cp38-cp38-linux_x86_64.whl
2021-09-15T23:12:45.9369396Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.8/site-packages (from torch==1.10.0a0+git80ccc9f) (3.10.0.2)
2021-09-15T23:12:46.1785800Z Installing collected packages: torch
2021-09-15T23:12:52.3303862Z Successfully installed torch-1.10.0a0+git80ccc9f
2021-09-15T23:12:52.4018740Z ++++ dirname .jenkins/pytorch/common.sh
2021-09-15T23:12:52.4025327Z +++ cd .jenkins/pytorch
2021-09-15T23:12:52.4026367Z +++ pwd -P

See GitHub Actions build win-vs2019-cuda10.2-py3 / build (3/4)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-09-15T23:44:29.1177922Z C:\Program Files (...>> &)': attempting to reference a deleted function
2021-09-15T23:44:29.0860941Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch/csrc/jit/codegen/cuda/ir_base_nodes.h(63): note: see declaration of 'torch::jit::fuser::cuda::swap'
2021-09-15T23:44:29.0862943Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\jit\codegen\cuda\fusion.cpp(120): error C2065: 'swap': undeclared identifier
2021-09-15T23:44:29.0864870Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\jit\codegen\cuda\fusion.cpp(127): error C2065: 'swap': undeclared identifier
2021-09-15T23:44:29.0866897Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\jit\codegen\cuda\fusion.cpp(134): error C2065: 'swap': undeclared identifier
2021-09-15T23:44:29.0868323Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-09-15T23:44:29.0869324Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-09-15T23:44:29.0869942Z 
2021-09-15T23:44:29.1078926Z [5546/6320] C:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\win_tmp\bin\sccache-cl.exe   /TP -DIDEEP_USE_MKL -DMAGMA_V2 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DTORCH_CUDA_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899 -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\ATen\cuda 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\cub 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\win_tmp\magma\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include" -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/pytorch-1239454899/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda.dir\__\torch\csrc\jit\codegen\cuda\execu
2021-09-15T23:44:29.1111424Z FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/torch/csrc/jit/codegen/cuda/executor_kernel_arg.cpp.obj 
2021-09-15T23:44:29.1142593Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\win_tmp\bin\sccache-cl.exe   /TP -DIDEEP_USE_MKL -DMAGMA_V2 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DTORCH_CUDA_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899 -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\cmake\..\third_party\cub -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\build\win_tmp\magma\include -I"C:\Program Files\NVIDIA GPU 
Computing Toolkit\CUDA\v10.2\include" -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1239454899\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/pytorch-1239454899/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda.dir\__\torch\csrc\jit\codegen\cuda\executor_kernel_a
2021-09-15T23:44:29.1177922Z C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29333\include\xutility(4393): error C2280: 'std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> &std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>>::operator =(const std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> &)': attempting to reference a deleted function
2021-09-15T23:44:29.1182668Z C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29333\include\memory(2687): note: see declaration of 'std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>>::operator ='
2021-09-15T23:44:29.1188208Z C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29333\include\memory(2687): note: 'std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> &std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>>::operator =(const std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> &)': function was explicitly deleted
2021-09-15T23:44:29.1193032Z C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29333\include\vector(1127): note: see reference to function template instantiation '_OutIt *std::_Copy_unchecked<_Iter,std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>>*>(_InIt,_InIt,_OutIt)' being compiled
2021-09-15T23:44:29.1196251Z         with
2021-09-15T23:44:29.1196698Z         [
2021-09-15T23:44:29.1198018Z             _OutIt=std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> *,
2021-09-15T23:44:29.1199753Z             _Iter=std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> *,
2021-09-15T23:44:29.1201532Z             _InIt=std::unique_ptr<torch::jit::fuser::cuda::SchedulerEntry,std::default_delete<torch::jit::fuser::cuda::SchedulerEntry>> *
2021-09-15T23:44:29.1202587Z         ]
2021-09-15T23:44:29.1206204Z C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29333\include\vector(1142): note: see reference to function template instantiation 'void std::vector<torch::jit::fuser::cuda::FusionHeuristics::SchedulerEntryOwningPtr,std::allocator<torch::jit::fuser::cuda::FusionHeuristics::SchedulerEntryOwningPtr>>::_Assign_range<_Iter>(_Iter,_Iter,std::forward_iterator_tag)' being compiled

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (distributed, 1, 1, linux.8xlarge.nvidia.gpu) (4/4)

Step: "Test PyTorch" (full log | diagnosis details | 🔁 rerun)

2021-09-16T01:19:02.3461109Z AssertionError: Fa... dtypes. Got dtypes torch.float32 and torch.int64.
2021-09-16T01:19:02.3451278Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 111, in wrapper
2021-09-16T01:19:02.3452128Z     return func(*args, **kwargs)
2021-09-16T01:19:02.3453112Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 2848, in wrapper
2021-09-16T01:19:02.3453917Z     return func(*args, **kwargs)
2021-09-16T01:19:02.3455111Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 4653, in test_post_localSGD_optimizer_parity
2021-09-16T01:19:02.3456178Z     self.assertEqual(p1.data, p2.data)
2021-09-16T01:19:02.3457264Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1875, in assertEqual
2021-09-16T01:19:02.3458267Z     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-09-16T01:19:02.3459134Z   File "/opt/conda/lib/python3.6/unittest/case.py", line 682, in assertTrue
2021-09-16T01:19:02.3459896Z     raise self.failureException(msg)
2021-09-16T01:19:02.3461109Z AssertionError: False is not true : Tensors failed to compare as equal!Attempted to compare equality of tensors with different dtypes. Got dtypes torch.float32 and torch.int64.
2021-09-16T01:19:02.3462022Z 
2021-09-16T01:19:02.3462261Z 
2021-09-16T01:19:02.3463074Z ✅ 534 Passed
2021-09-16T01:19:02.3463575Z 💨 197 Skipped
2021-09-16T01:19:02.3464078Z 🚨 1 Failed
2021-09-16T01:19:02.3633953Z ##[group]Run # Remove any previous test reports if they exist
2021-09-16T01:19:02.3635425Z rm -f test-reports-*.zip
2021-09-16T01:19:02.3636082Z zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml'

2 failures not recognized by patterns:

Job | Step | Action
CircleCI pytorch_linux_xenial_py3_clang7_onnx_build | Build | 🔁 rerun
CircleCI pytorch_linux_xenial_py3_6_gcc5_4_build | Build | 🔁 rerun

This comment was automatically generated by Dr. CI.

Contributor

@jbschlosser jbschlosser left a comment


LGTM! Thanks for the fix

@facebook-github-bot
Contributor

@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@jbschlosser merged this pull request in 83878e1.

@rodrigoberriel rodrigoberriel deleted the improve-lstm-doc branch September 16, 2021 13:47


Development

Successfully merging this pull request may close these issues.

LSTM.weight_ih_l[k] dimensions with proj_size

4 participants