Revert "Revert D28833086: beef up at::_ops API" by bdhirsh · Pull Request #60214 · pytorch/pytorch · GitHub

Conversation

@bdhirsh
Contributor

@bdhirsh bdhirsh commented Jun 17, 2021

Reland of #59115, but with a fix for Windows CUDA builds (example failure in master here: https://github.com/pytorch/pytorch/runs/2852662871)

This is identical to the original PR except for one change in `tools/codegen/gen.py`: `static constexpr` -> `static CONSTEXPR_CONST_EXCEPT_WIN_CUDA` (I had to make a new macro that is a slight adjustment to an existing one).

This took a while to figure out, until I tracked down a previous PyTorch PR that ran into a similar issue: #40675

This reverts commit 6d0fb85.

Stack from ghstack:

Differential Revision: D29213932

relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

[ghstack-poisoned]
@facebook-github-bot
Contributor

facebook-github-bot commented Jun 17, 2021

💊 CI failures summary and remediations

As of commit 902b609 (more details on the Dr. CI page and at hud.pytorch.org/pr/60214):


  • 3/4 failures introduced in this PR
  • 1/4 broken upstream at merge base acf791e on Jun 17 from 1:09pm to 7:18pm

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (1/3)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Jun 24 18:24:05 AssertionError: "weight tensor ...nsion is 4, the 1th output dimension is 3. vs. OK)
Jun 24 18:24:05 *** End stack trace ***
Jun 24 18:24:05 
Jun 24 18:24:05 
Jun 24 18:24:05 During handling of the above exception, another exception occurred:
Jun 24 18:24:05 
Jun 24 18:24:05 Traceback (most recent call last):
Jun 24 18:24:05   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 397, in instantiated_test
Jun 24 18:24:05     result = test_fn(self, *args)
Jun 24 18:24:05   File "/var/lib/jenkins/workspace/xla/test/../../test/test_nn.py", line 16007, in test_nll_loss_invalid_weights
Jun 24 18:24:05     F.nll_loss(x, t, weight=weight)
Jun 24 18:24:05 AssertionError: "weight tensor should be defined either for all 3 classes or no classes" does not match "/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/tensorflow/compiler/xla/xla_client/debug_macros.h:27 : Check failed: status.status() == ::tensorflow::Status::OK() (Invalid argument: Input dimension should be either 1 or equal to the output dimension it is broadcasting into; the 0th operand dimension is 4, the 1th output dimension is 3. vs. OK)
Jun 24 18:24:05 *** Begin stack trace ***
Jun 24 18:24:05 	tensorflow::CurrentStackTrace[abi:cxx11]()
Jun 24 18:24:05 	xla::Shape const* ConsumeValue<xla::Shape const*>(tensorflow::StatusOr<xla::Shape const*>&&)
Jun 24 18:24:05 	torch_xla::XlaHelpers::ShapeOfXlaOp(xla::XlaOp)
Jun 24 18:24:05 	torch_xla::ir::ops::InferOutputShape(absl::lts_20210324::Span<xla::Shape const>, std::function<xla::XlaOp (absl::lts_20210324::Span<xla::XlaOp const>)> const&)
Jun 24 18:24:05 	
Jun 24 18:24:05 	torch_xla::ir::Node::GetOpShape(std::function<xla::Shape ()> const&) const
Jun 24 18:24:05 	torch_xla::ir::Node::Node(torch_xla::ir::OpKind, absl::lts_20210324::Span<torch_xla::ir::Value const>, std::function<xla::Shape ()> const&, unsigned long, absl::lts_20210324::uint128)
Jun 24 18:24:05 	torch_xla::ir::ops::NllLoss::NllLoss(torch_xla::ir::Value const&, torch_xla::ir::Value const&, absl::lts_20210324::optional<torch_xla::ir::Value> const&, torch_xla::ReductionMode, int)
Jun 24 18:24:05 	torch_xla::XLATensor::nll_loss(torch_xla::XLATensor const&, torch_xla::XLATensor const&, torch_xla::XLATensor const&, long, int)

See CircleCI build binary_windows_wheel_3_7_cu102_build (2/3)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

ERROR: Command errored out with exit status 1: ...pmgkyp1ij' Check the logs for full command output.
      config_settings, requirements=['wheel'])
    File "C:\Users\circleci\AppData\Local\Temp\pip-build-env-rru106ve\overlay\Lib\site-packages\setuptools\build_meta.py", line 135, in _get_build_requires
      self.run_setup()
    File "C:\Users\circleci\AppData\Local\Temp\pip-build-env-rru106ve\overlay\Lib\site-packages\setuptools\build_meta.py", line 150, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 219, in <module>
      from tools.build_pytorch_libs import build_caffe2
  ModuleNotFoundError: No module named 'tools'
  Getting requirements to build wheel: finished with status 'error'
WARNING: Discarding file:///C:/w/b/windows/pytorch. Command errored out with exit status 1: 'C:\w\b\windows\conda\envs\py37\python.exe' 'C:\w\b\windows\conda\envs\py37\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\circleci\AppData\Local\Temp\tmpmgkyp1ij' Check the logs for full command output.
ERROR: Command errored out with exit status 1: 'C:\w\b\windows\conda\envs\py37\python.exe' 'C:\w\b\windows\conda\envs\py37\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\circleci\AppData\Local\Temp\tmpmgkyp1ij' Check the logs for full command output.
Exception information:
Traceback (most recent call last):
  File "C:\w\b\windows\conda\envs\py37\lib\site-packages\pip\_internal\cli\base_command.py", line 180, in _main
    status = self.run(options, args)
  File "C:\w\b\windows\conda\envs\py37\lib\site-packages\pip\_internal\cli\req_command.py", line 204, in wrapper
    return func(self, options, args)
  File "C:\w\b\windows\conda\envs\py37\lib\site-packages\pip\_internal\commands\wheel.py", line 143, in run
    reqs, check_supported_wheels=True
  File "C:\w\b\windows\conda\envs\py37\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 104, in resolve
    req, requested_extras=()

See CircleCI build pytorch_libtorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_build (3/3)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Jun 24 15:04:47 rm: cannot remove '/var/lib/jenkins/sccache_error.log': No such file or directory
Jun 24 15:04:47 ++++ extract_trap_cmd
Jun 24 15:04:47 ++++ printf '%s\n' ''
Jun 24 15:04:47 +++ printf '%s\n' cleanup
Jun 24 15:04:47 ++ trap -- '
Jun 24 15:04:47 cleanup' EXIT
Jun 24 15:04:47 ++ [[ pytorch-libtorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-build != *pytorch-win-* ]]
Jun 24 15:04:47 ++ which sccache
Jun 24 15:04:47 ++ sccache --stop-server
Jun 24 15:04:47 ++ true
Jun 24 15:04:47 ++ rm /var/lib/jenkins/sccache_error.log
Jun 24 15:04:47 rm: cannot remove '/var/lib/jenkins/sccache_error.log': No such file or directory
Jun 24 15:04:47 ++ true
Jun 24 15:04:47 ++ [[ -n '' ]]
Jun 24 15:04:47 ++ [[ pytorch-libtorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-build == *rocm* ]]
Jun 24 15:04:47 ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log
Jun 24 15:04:47 ++ SCCACHE_IDLE_TIMEOUT=1200
Jun 24 15:04:47 ++ RUST_LOG=sccache::server=error
Jun 24 15:04:47 ++ sccache --start-server
Jun 24 15:04:47 sccache: Starting the server...
Jun 24 15:04:47 ++ sccache --zero-stats
Jun 24 15:04:47 Compile requests                      0

2 jobs timed out:

  • pytorch_libtorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_build
  • pytorch_libtorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_build

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@bdhirsh bdhirsh requested a review from ezyang June 17, 2021 22:23
@bdhirsh
Contributor Author

bdhirsh commented Jun 17, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bdhirsh bdhirsh requested a review from bhosmer June 17, 2021 22:25
@bdhirsh bdhirsh added ci/all and removed ci/all labels Jun 17, 2021
bdhirsh added a commit that referenced this pull request Jun 17, 2021
relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

ghstack-source-id: f6b3d0f
Pull Request resolved: #60214
@bdhirsh
Contributor Author

bdhirsh commented Jun 17, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bdhirsh
Contributor Author

bdhirsh commented Jun 18, 2021

Update 1: I had to make a new macro (a slight variation of an existing one) to handle `constexpr const` properly: `CONSTEXPR_CONST_EXCEPT_WIN_CUDA`.

Update 2: it looks like, in the brief window when my original PR was landed, some small binary size regressions were detected internally. I need to figure out where those came from before re-landing this. Inspecting the OSS `libtorch_cpu.so` showed that we were correctly adding two new symbols for each op, `at::_ops::add_Tensor::call` and `at::_ops::add_Tensor::redispatch`, and removing/inlining the corresponding previous symbols, `at::add` and `at::redispatch::add`, so the source of the regression isn't obvious from the symbol table alone. Maybe internal testing will help narrow it down.
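
For context, a minimal sketch of the kind of per-operator struct the `at::_ops` codegen produces. The members and signatures below are illustrative assumptions based on the symbol names above, not the generated code verbatim:

```cpp
// Illustrative sketch only: assumed shape of a generated at::_ops entry.
#include <ATen/core/Tensor.h>
#include <c10/core/DispatchKeySet.h>

namespace at { namespace _ops {

struct add_Tensor {
  // (this PR replaces the `constexpr` here with a macro for Windows NVCC; see below)
  static constexpr const char* name = "add";
  static constexpr const char* overload_name = "Tensor";

  // Defined out of line in libtorch: dispatches through the dispatcher.
  static at::Tensor call(const at::Tensor& self, const at::Tensor& other,
                         const at::Scalar& alpha);

  // Defined out of line in libtorch: re-dispatches below the given dispatch keys.
  static at::Tensor redispatch(c10::DispatchKeySet ks, const at::Tensor& self,
                               const at::Tensor& other, const at::Scalar& alpha);
};

}} // namespace at::_ops
```

The previously out-of-line `at::add` and `at::redispatch::add` wrappers then become thin inline calls into `at::_ops::add_Tensor::call` and `::redispatch`, which is why those are the two symbols that show up per op in `libtorch_cpu.so`.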

bdhirsh added a commit that referenced this pull request Jun 18, 2021
relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

ghstack-source-id: ac6ce82
Pull Request resolved: #60214
bdhirsh added a commit that referenced this pull request Jun 22, 2021
relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

ghstack-source-id: 514fa0b
Pull Request resolved: #60214
@bdhirsh
Contributor Author

bdhirsh commented Jun 22, 2021

Okay, issue (1) is harder to solve than I thought.

My proposed fix (it's in the PR)

For everything except Windows NVCC builds we do this:

struct add_Tensor {
    static constexpr const char* name = "add";
};

And for Windows NVCC builds we do this:

struct add_Tensor {
    // const instead of constexpr
    static const char* name;
};
// defined outside of the struct definition
const char* add_Tensor::name = "add";

I added a note with more details in Macros.h

The problem

I'd like to write code like the following:

struct add_Tensor {
    static constexpr const char* name = "add";
};

But Windows NVCC apparently has a bug (present in VS2017, fixed in VS2019) where it can't deal with `static constexpr`. The suggested workaround in the Visual Studio issue tracker is to use `static const` instead (https://developercommunity.visualstudio.com/t/static-constexpr-gives-member-may-not-be-initializ/444344).

The next reasonable alternative to try is this (we actually have some code that does this with an existing CONSTEXPR_EXCEPT_WIN_CUDA macro):

struct add_Tensor {
    static const char* name = "add";
};

The above fails per the C++ standard: an in-class initializer is only allowed for `constexpr` static members or `const` static members of integral/enum type, and a `const char*` member is neither (see the same Visual Studio thread linked above).

I tried out those two options and a few others in Compiler Explorer - see the examples with descriptions in https://godbolt.org/z/Tn73xdYGz.

It would be great if we could suppress the original Windows NVCC compiler error, but I haven't figured out a way to do that. The alternative I added at the top works fine, but accesses to `at::_ops::add_Tensor::name` will be a little slower under Windows NVCC, so we just need to make sure it isn't used anywhere perf-critical.
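
To make the two variants concrete, here is a minimal sketch of one way such a macro pair could be wired up. The macro names `STATIC_CONSTEXPR_STR_DECL` / `STATIC_CONSTEXPR_STR_DEF` are hypothetical placeholders; the actual change uses `CONSTEXPR_CONST_EXCEPT_WIN_CUDA` in Macros.h plus the codegen, and its exact definition may differ from this sketch:

```cpp
// Sketch only: hypothetical macros illustrating "constexpr everywhere,
// except Windows NVCC (VS2017)", as described above.
#if defined(_MSC_VER) && defined(__CUDACC__)
// Windows NVCC: plain `static const` member, defined out of line in a .cpp.
#define STATIC_CONSTEXPR_STR_DECL(field, val) static const char* field
#define STATIC_CONSTEXPR_STR_DEF(cls, field, val) const char* cls::field = val
#else
// Everywhere else: a true compile-time constant; no out-of-line definition.
#define STATIC_CONSTEXPR_STR_DECL(field, val) \
  static constexpr const char* field = val
#define STATIC_CONSTEXPR_STR_DEF(cls, field, val)  // expands to nothing
#endif

// Generated header:
struct add_Tensor {
  STATIC_CONSTEXPR_STR_DECL(name, "add");
};

// Generated .cpp (only produces a definition on Windows NVCC builds;
// elsewhere the trailing semicolon is a harmless empty declaration):
STATIC_CONSTEXPR_STR_DEF(add_Tensor, name, "add");
```

Everywhere except Windows NVCC, readers of `name` still see a compile-time constant; on Windows NVCC the value is an ordinary global that has to be loaded from memory at the access site and can't be used in constant expressions, which is the small slowdown mentioned above.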

@bdhirsh
Contributor Author

bdhirsh commented Jun 22, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

bdhirsh added a commit that referenced this pull request Jun 22, 2021
relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

ghstack-source-id: 2863525
Pull Request resolved: #60214
@ezyang
Contributor

ezyang commented Jun 22, 2021

This is fine; string access is guarded behind static calls that are executed only once anyway.
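
To illustrate the point, below is roughly the shape of a call site that consumes the name string, sketched against the public dispatcher API. The function name `call_add` and the literal strings are assumptions for illustration, not the actual generated code from this PR:

```cpp
// Sketch: the operator-name string is only read when the dispatcher handle is
// first looked up, and that lookup is cached in a function-local static, so a
// non-constexpr `name` on Windows NVCC costs something once per process, not
// once per call.
#include <ATen/core/Tensor.h>
#include <ATen/core/dispatch/Dispatcher.h>

at::Tensor call_add(const at::Tensor& self, const at::Tensor& other,
                    const at::Scalar& alpha) {
  static auto op = c10::Dispatcher::singleton()
      .findSchemaOrThrow("aten::add", "Tensor")  // name strings consumed here
      .typed<at::Tensor(const at::Tensor&, const at::Tensor&, const at::Scalar&)>();
  return op.call(self, other, alpha);
}
```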

@bdhirsh
Contributor Author

bdhirsh commented Jun 22, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

bdhirsh added a commit that referenced this pull request Jun 22, 2021
relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

ghstack-source-id: 6c979dc
Pull Request resolved: #60214
@bdhirsh
Contributor Author

bdhirsh commented Jun 22, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

bdhirsh added a commit that referenced this pull request Jun 22, 2021
relanding this PR, but with a fix for windows cuda builds

This reverts commit 6d0fb85.

ghstack-source-id: 9f9363a
Pull Request resolved: #60214
@bdhirsh
Contributor Author

bdhirsh commented Jun 22, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bdhirsh
Contributor Author

bdhirsh commented Jun 23, 2021

Still waiting for some internal binary size tests to finish running, but after talking to @ezyang I moved the incriminating functions (ones I had previously moved out of Functions.cpp and TensorMethods.cpp into Functions.h and TensorBody.h) back into their original files, and so far I haven't seen any failed tests.

There are a few functions that I had moved into the header files that I'm keeping there for now, like `Tensor::cpu()` and `Tensor::options()`, because they seem small and useful enough that we might get some perf wins without sacrificing much binary size. If more tests fail, I'll try moving those back to .cpp files too.
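
For illustration, a minimal sketch of the header-inline vs. out-of-line tradeoff being weighed here; the helper names and simplified body are assumptions, not the actual generated `TensorBody.h` / `TensorMethods.cpp` code:

```cpp
// Sketch only: simplified stand-ins for something like Tensor::options().
#include <ATen/core/Tensor.h>
#include <c10/core/TensorOptions.h>

// Option A: defined inline in the generated header. Callers can inline the
// body (a small perf win for tiny functions), but the code is duplicated in
// every translation unit that uses it, nudging binary size up.
inline at::TensorOptions options_inline(const at::Tensor& t) {
  return at::TensorOptions().dtype(t.dtype()).device(t.device()).layout(t.layout());
}

// Option B: declared in the header, defined once inside libtorch. One copy of
// the code (smaller binaries), but every call crosses the library boundary.
at::TensorOptions options_out_of_line(const at::Tensor& t);
// In a .cpp file inside libtorch:
// at::TensorOptions options_out_of_line(const at::Tensor& t) { /* same body */ }
```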

bdhirsh added a commit that referenced this pull request Jun 24, 2021
Pull Request resolved: #60214


Relanding this PR, but with a fix for windows cuda builds (example failure in master here: https://github.com/pytorch/pytorch/runs/2852662871)

This is identical to the original PR except for one change in `tools/codegen/gen.py`: `static constexpr` -> `static CONSTEXPR_EXCEPT_WIN_CUDA`

This actually took a while to figure out, until I tracked down a previous pytorch PR that encountered a similar issue: #40675

This reverts commit 6d0fb85.

Differential Revision: [D29213932](https://our.internmc.facebook.com/intern/diff/D29213932/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D29213932/)!
ghstack-source-id: 132130129
bdhirsh added a commit that referenced this pull request Jun 24, 2021
Pull Request resolved: #60214


Relanding this PR, but with a fix for windows cuda builds (example failure in master here: https://github.com/pytorch/pytorch/runs/2852662871)

This is identical to the original PR except for one change in `tools/codegen/gen.py`: `static constexpr` -> `static CONSTEXPR_EXCEPT_WIN_CUDA`

This actually took a while to figure out, until I tracked down a previous pytorch PR that encountered a similar issue: #40675

This reverts commit 6d0fb85.

Differential Revision: [D29213932](https://our.internmc.facebook.com/intern/diff/D29213932/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D29213932/)!
ghstack-source-id: 132300710
@facebook-github-bot
Contributor

@bdhirsh merged this pull request in 7bc8645.

@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/126/head branch June 28, 2021 14:17
asuhan pushed a commit to asuhan/pytorch that referenced this pull request Jun 28, 2021
Summary:
Pull Request resolved: pytorch#60214

Relanding this PR, but with a fix for windows cuda builds (example failure in master here: https://github.com/pytorch/pytorch/runs/2852662871)

This is identical to the original PR except for one change in `tools/codegen/gen.py`: `static constexpr` -> `static CONSTEXPR_EXCEPT_WIN_CUDA`

This actually took a while to figure out, until I tracked down a previous pytorch PR that encountered a similar issue: pytorch#40675

This reverts commit 6d0fb85.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D29213932

Pulled By: bdhirsh

fbshipit-source-id: b90c7c10e5a51f8d6173ddca673b418e5774c248
asuhan pushed a commit that referenced this pull request Jun 30, 2021