[jit] add isinstance static type checking for jit #15076
Conversation
torch/csrc/jit/script/compiler.cpp
Outdated
Do we want isinstance to be a thing for both the Python and string frontend? If we just want it in Python then adding it as a SugaredValue to script/init.cpp is probably a better path.
test/test_jit.py
Outdated
Could you add a test for `isinstance(x: Optional[int], int)`?
I don't know that we need to support that behavior, but maybe throw an error here?
Hmm, `Optional[int]` in this case is pretty interesting. Python's isinstance is a runtime check, so it will return True if we pass x=1; in the JIT we have static typing, so it will return False in this case...
Added a test case that throws an error for optional types.
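(For reference, here is the plain-Python behavior being contrasted with the JIT's static check; this is ordinary Python, not TorchScript, and only illustrates the Optional[int] discussion above.)

```python
from typing import Optional

def check(x: Optional[int]) -> bool:
    # Plain Python ignores the annotation at runtime;
    # isinstance only looks at the concrete value passed in.
    return isinstance(x, int)

print(check(1))     # True: the runtime value is an int
print(check(None))  # False: the runtime value is None
```

Under the JIT's static typing the `Optional[int]` case cannot be answered from the declared type alone, hence the error added above.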
torch/csrc/jit/script/compiler.h
Outdated
Easily confused with Instruction or something; can you spell out the whole thing, e.g. IsInstanceValue?
torch/csrc/jit/script/compiler.cpp
Outdated
I don't think this handles something recursive, like
isinstance(data, (list, tuple))
(as in the constructor of PackedSequence)
@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
// handle the case for recursive tuple classinfo
// return true if obj is an instance of any of the types
for (Expr e : TupleLiteral(classinfo).inputs()) {
  if (isInstanceCheck(obj, e)) {
I don't understand this logic. Shouldn't it be recursing through the tuple, and if any of the types mismatch, return false, otherwise return true?
So the semantics of `isinstance(x, (list, tuple))` are: if any of the types match, return True; otherwise return False. See the reference here: https://docs.python.org/3/library/functions.html#isinstance
thanks
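(For reference, a small runnable example of the "any match" semantics described above, including the nested-tuple case:)

```python
data = [1, 2, 3]
print(isinstance(data, (list, tuple)))        # True: matches list
print(isinstance("hi", (list, tuple)))        # False: matches neither
print(isinstance(3.0, (int, (float, str))))   # True: nested tuples are searched recursively
```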
};

// matched against for special handling of getattr expressions
struct TORCH_API GetAttrValue : SugaredValue {
This is never used outside of compiler.cpp; why did it get exposed?
Yes, these are actually never used outside of compiler.cpp. I just wanted to group all those SugaredValue definitions in a single place for easy reference, since the different ones are scattered across too many places. If compiler.h is meant to contain only the definitions that will be used outside, then I can move them back into compiler.cpp.
};

// matched against for special handling of isinstance expressions
struct TORCH_API IsInstanceValue : SugaredValue {
This is never used outside of compiler.cpp; why is it exposed?
  return NamedValue(attr.range(), attr.name().name(), emitExpr(attr.value()));
});
}
Weird function; maybe instead:
void checkSpecialApply(const Apply& apply, size_t expected_inputs)
The current name suggests it is used for all apply expressions, and it is weirdly hardcoded for the case where there are 2 inputs.
Per offline conversation, let's just land this and defer the comments until after the compiler.cpp move.
* silence unreachable code warnings (#15036)
Summary:
Stack:
:black_circle: **#15036 silence unreachable code warnings** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D13411100/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15036
Differential Revision: D13414712
Pulled By: li-roy
fbshipit-source-id: d4aa84571fa94c66f3c5bfa9575a10c6ee398f9e
* tox.ini -> .flake8 (#15065)
Summary:
We were only using this file to configure flake8, and fbcode linters do not recognize tox.ini which causes spurious linter warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15065
Differential Revision: D13420774
Pulled By: suo
fbshipit-source-id: e43a46befa36862c8b3c0a90074aec6a66531492
* Update onnx coverage script for more accurate result (#15029)
Summary:
The coverage of scalar-input test cases was not accurate. This patch fixes that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15029
Differential Revision: D13419764
Pulled By: zrphercule
fbshipit-source-id: a14a5cbef432bea8c9126156f5deb1125e1aeb47
* Issue 14984: Remove divide by zero error in index_put_ (#14986)
Summary:
No check for zero index tensor was done in the accumulate=True (serial) case in the new TensorIterator code since https://github.com/pytorch/pytorch/pull/13420.
https://github.com/pytorch/pytorch/issues/14984
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14986
Differential Revision: D13417861
Pulled By: colesbury
fbshipit-source-id: e6ed1af8f708b53a35803fc157ed1f043169ec89
* Suppress warnings on generated tests
Summary: Removes all warnings spew for the TestJitGenerated tests
Differential Revision: D13420919
fbshipit-source-id: f251c12f923088ccc5daa2984c15003a67cbd1c1
* Split off fuser tests in test_jit.py to their own test case (#15072)
Summary:
This PR creates TestFuser inside test_jit.py to be a home for graph fuser
specific tests.
This was a useful exercise because now that all the fuser tests are in
one place, I can spot redundant and bitrotting tests for cleanup in a
future PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15072
Differential Revision: D13421458
Pulled By: zou3519
fbshipit-source-id: 80b1a7712feff75a0c186d1664601c4edbbca694
* re-enable copy of python files, but be careful that the copy is only … (#14982)
Summary:
…done once
This allows the no-op build to work correctly even when BUILD_CAFFE2_OPS is on.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14982
Differential Revision: D13413960
Pulled By: zdevito
fbshipit-source-id: 6e5412a8c375af8a47c76f548cdd31cff15f3853
* add gloo scatter support on GPU (#14917)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14917
as titled
Reviewed By: pietern
Differential Revision: D13271560
fbshipit-source-id: 0187a3390f8ebd72a2c074e7a651432159d427c0
* Remove deprecated variable_tensor_functions (#15003)
Summary:
Removing the deprecated functions in `torch/csrc/variable_tensor_functions.h` (like `torch::CPU`) and corresponding implementations from `torch/csrc/torch.cpp` from master after the release.
ezyang gchanan soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15003
Differential Revision: D13418086
Pulled By: goldsborough
fbshipit-source-id: a0accdf6f7b0efa1ec07ac7b74b86ff2da37543f
* Add error type to raise statement
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15039
Differential Revision: D13419566
Pulled By: zou3519
fbshipit-source-id: f67a3aebce937e3e640e91e81eb3e184cfdf269c
* Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
Summary:
```
This diff changes the HIPification of ATen to be out-of-place.
We now have the following mappings:
- ATen/cuda => ATen/hip
- ATen/native/cuda => ATen/native/hip
- ATen/native/sparse/cuda => ATen/native/sparse/hip
- THC => THH
- THCUNN => THHUNN
The build system is adjusted to know about these new build paths,
and HIPify is taught how to adjust include paths and
THC_GENERIC_FILE appropriately. ATen_hip is now built as
the ATen_hip library, rather than reusing ATen_cuda.
However, despite these new filepaths, none of the identifiers in ATen
have actually changed. So, e.g., THHGeneral.h still defines functions
named THC_blahblah, and HIP still shows up as CUDA in PyTorch itself.
We'll tackle this in a subsequent PR; this diff is just to get the files
out-of-place.
Minor extra improvements:
- Don't edit tmp_install when hipifying
- HIP no longer builds native_cudnn_cpp; it was unnecessary
- Caffe2_HIP_INCLUDES is now Caffe2_HIP_INCLUDE, for consistency
with all the other variables.
- HIP build now properly respects ATEN_CUDA_FILES_GEN_LIB (it
did not previously.)
- You can now override file extension matching in pyHIPIFY
by explicitly specifying its full name in the matching list.
This is used so we can HIPify CMakeLists.txt in some situations.
A little bit of string and ceiling wax:
- gen.py grows a --rocm flag so that it knows to generate CUDA
files which actually refer to the HIP headers (e.g., THH.h)
We'll get rid of this eventually and generate real HIP files,
but not for this PR.
- Management of HIP dependencies is now completely deleted
from the ATen CMakeLists.txt. The old code was dead (because
it was shoveled in ATen_CUDA_DEPENDENCY_LIBS and promptly
ignored by the Caffe2 build system) and didn't actually work.
```
Stacked on https://github.com/pytorch/pytorch/pull/14849 review last commit only
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14866
Differential Revision: D13419475
Pulled By: ezyang
fbshipit-source-id: cb4c843df69a1d8369314c9fab1b7719520fa3db
* Add at::scalar_tensor factory function, use it instead of Type.scalar… (#15074)
Summary:
…_tensor.
This is part of a long series of paring down the Type interface.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15074
Differential Revision: D13421482
Pulled By: gchanan
fbshipit-source-id: 84010ee71fef2cb74d32d5de7858d8ed9f36b885
* Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795
Reviewed By: ezyang
Differential Revision: D13336856
fbshipit-source-id: 5375d0e42312ff7564f4df06210a5e49542d59e3
* Fix include paths for TensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14816
Reviewed By: ezyang
Differential Revision: D13348040
fbshipit-source-id: a7204d89c2dd277d13093b0ed862f40b53dee82f
* Move UndefinedTensorImpl to c10 (meh) (#14817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14817
unfortunately, we still need this.
Reviewed By: ezyang
Differential Revision: D13348041
fbshipit-source-id: e8dcc89f5c71bd1ea2c9813990dac6e58e63b1fd
* Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818
Reviewed By: ezyang
Differential Revision: D13348042
fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
* add gloo support for gather on GPU (#14916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14916
as titled
Reviewed By: pietern
Differential Revision: D13267832
fbshipit-source-id: 3b89d08af93f74941f17ff892c33fc2a4a023c19
* Pre-commit flake8/clang-tidy (#15102)
Summary:
Provide a pre-commit hook that does flake8 and clang tidy checks. Enables the clang-tidy script to run in parallel to make it fast enough to be used in a pre-commit hook.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15102
Reviewed By: soumith
Differential Revision: D13429629
Pulled By: zdevito
fbshipit-source-id: bd52fe5652f29b033de8d9926d78350b2da4c2fc
* Update the output format for benchmark_helper. It outputs the dimensi… (#15108)
Summary:
…on first and all the values in the next line. This way, it can output arbitrary blob
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15108
Reviewed By: llyfacebook
Differential Revision: D13429346
Pulled By: sf-wind
fbshipit-source-id: 5e0bba2a46fbe8d997dfc3d55a698484552e3af8
* Fix serialization (#15033)
Summary:
Fixes a bug where a hierarchy of submodules, in which one submodule doesn't have any parameters but its submodules do, doesn't get properly (de-)serialized and loaded. This had to do with the fact that the old protobuf format couldn't store empty parameters.
Fixes https://github.com/pytorch/pytorch/issues/14891
soumith ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15033
Differential Revision: D13411322
Pulled By: goldsborough
fbshipit-source-id: 2ef73b2aa93fa9e46b1cbe1fd47d9f134d6016d5
* Remove linker and dlopen flags that allowed undefined symbols in rocm build (#15091)
Summary:
Previously the undefined symbols were caused by disabled_modules in tools/amd_build/disabled_features.json (now it's cleared).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15091
Differential Revision: D13429595
Pulled By: bddppq
fbshipit-source-id: b341e83f9e5a8d16440a364e837b045a8a4fd6e1
* Add EmptyNameScope to allow you jump out from current scope. (#14631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14631
adding an empty name scope to allow people to jump out from the current namescope.
This could be useful when you want to access a blob from a parent or sibling scope.
Facebook:
e.g.: we encountered a potential use case in D13124249 (it's a large diff, please search for EmptyNameScope in that diff), where we need to access a blob declared in the root namescope from a device namescope (the device namescope has been used by the parallel_GPU API). `EmptyNameScope` can help us do that with ease.
I referenced to `EmptyDeviceScope` D6103412 while implementing this one.
Reviewed By: yinghai
Differential Revision: D13272240
fbshipit-source-id: d4cde5abcc2336e456b6c6ef086266ef94d86da8
* Use c10::to_string that works cross platform (#15117)
Summary:
Fix master breakage introduced in #15108
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15117
Differential Revision: D13430568
Pulled By: bddppq
fbshipit-source-id: ce10bc552f085d1bf0afbc13119991bee014ac95
* Don't setup x86_64-linux-gnu-gcc as an sccache wrapper. (#15078)
Summary:
When I do this setup in a local Docker development environment,
I get the following error:
x86_64-linux-gnu-gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Somehow, gcc seems to get confused when it gets run from the wrong
directory. Best not to do it.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15078
Differential Revision: D13432143
Pulled By: ezyang
fbshipit-source-id: b18e15f493503a4c8205c85f92a214e49762a7bc
* fix some tests that I accidentally disabled (#15077)
Summary:
While moving these scenarios into `_test_dim_ops` I accidentally left an empty loop in the actual tests, causing them to do nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15077
Differential Revision: D13428759
Pulled By: umanwizard
fbshipit-source-id: 08f53068981d9192c1408878b168e9053f4dc92e
* Add better support for bools in the graph fuser (#15057)
Summary:
Fixes #15038.
aten::_cast_Float(tensor, non_blocking) support was added in #14336.
Its second argument is a bool, but because we don't support generating values
of type bool in the fuser codegen, the codegen errored out.
aten::_cast_Float in the fuser never actually uses its non_blocking
argument, so another way to fix this would be to have a special op for a
fused cast but I thought that we might have fusible ops that do take
bool arguments in the future so this would be good to have.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15057
Differential Revision: D13432091
Pulled By: zou3519
fbshipit-source-id: 455fe574f5f080aca9a112e346b841a2534a8dc3
* Ensure there aren't variables in checked_tensor_unwrap, checked_tenso… (#15105)
Summary:
…r_list_unwrap.
These functions use unsafeGetTensorImpl(), which doesn't work with Variables (in a silent way that may blow up later).
So let's do early checking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15105
Reviewed By: ezyang
Differential Revision: D13429149
Pulled By: gchanan
fbshipit-source-id: b85f6f5b7cdb9a6dd0c40205b924c840a3920ba0
* fix infinite loop when get_max_threads is nonzero but num_threads is 1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15114
Differential Revision: D13431891
Pulled By: umanwizard
fbshipit-source-id: f968b8e50cf776c346d4a28d72b12e7856c95839
* Kill Type.storage. (#15075)
Summary:
It's not used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15075
Reviewed By: ezyang
Differential Revision: D13422487
Pulled By: gchanan
fbshipit-source-id: 272aa0a10e96f3ffb97d571490b517f972b9dcf7
* Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14248
This diff also introduces a horrifying hack to override CUDA's DeviceGuardImpl
with a HIPGuardImplMasqueradingAsCUDA, to accommodate PyTorch's current
behavior of pretending CUDA is HIP when you build with ROCm enabled.
Reviewed By: bddppq
Differential Revision: D13145293
fbshipit-source-id: ee0e207b6fd132f0d435512957424a002d588f02
* Stop erroneously running aten::warn (#15124)
Summary:
Fixes #15119. Before this PR, we were propagating constants through
aten::warn AND running it as a part of shape analysis.
This caused aten::warn to be run regardless of if it is
supposed to be run dynamically. This PR adds an exclusion for aten::warn
in constant propagation and shape analysis, similar to that of prim::RaiseException.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15124
Differential Revision: D13432815
Pulled By: zou3519
fbshipit-source-id: 15ab533ce2accb2da3fd4e569070c7979ce61708
* Move numa.{h, cc} to c10/util (#15024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15024
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: dzhulgakov
Differential Revision: D13380559
fbshipit-source-id: abc3fc7321cf37323f756dfd614c7b41978734e4
* Move adaptive avg pooling 2d to ATen native (#14714)
Summary:
adaptive_avg_pool1d, adaptive_avg_pool2d, and adaptive_avg_pool3d are neural network functions that are currently implemented in our legacy THNN (CPU) / THCUNN (CUDA) libraries. It is generally better if these live in our new library ATen, since it is more feature complete and reduces cognitive overhead.
This change currently moves adaptive_avg_pool1d and adaptive_avg_pool2d to ATen.
timed relevant cpu tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 6.273s
OK (skipped=7)
real 0m7.164s
user 3m1.289s
sys 0m0.905s
```
compared to master:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 7.232s
OK (skipped=7)
real 0m8.065s
user 3m34.714s
sys 0m2.440s
```
also timed relevant cuda tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 21.049s
OK
real 0m24.106s
user 0m20.890s
sys 0m4.026s
```
compared to master
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 23.021s
OK
real 0m27.095s
user 0m20.121s
sys 0m3.668s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14714
Differential Revision: D13384084
Pulled By: xnder
fbshipit-source-id: 344442103ccbbda72d3c010d2feea00e9985d226
* Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.
* Adds `my_script_module._get_method('forward').schema()` method to get function schema from a `ScriptModule`
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912
Differential Revision: D13385928
Pulled By: driazati
fbshipit-source-id: e391691b2f87dba6e13be05d4aa3ed2f004e31da
* Minor documentation mistake (#15068)
Summary:
keepdim is an optional parameter for torch.max()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15068
Differential Revision: D13437745
Pulled By: zou3519
fbshipit-source-id: b5198c7d4ae17758cd136f6e5aecc6cb5838f174
* Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
Summary:
This is an optimized implementation that does the following:
1. Create an empty Tensor of the correct size.
2. Fill the Tensor with the correct values.
The following three designs to fill in the Tensor result in roughly the same performance. Hence, the 2nd option is taken for simpler code, and to return contiguous tensors.
1. Sequential: fill row coordinates first, then columns. This results in two for-loop and more arithmetic operations.
2. Interleaved: fill in index coordinates one by one, which jumps between the two output Tensor rows in every iteration.
3. Transpose: create a n X 2 Tensor, fill the Tensor sequentially, and then transpose it.
(Benchmark screenshot: https://user-images.githubusercontent.com/16999635/49769172-07bd3580-fc94-11e8-8164-41839185e9f9.png)
NOTE:
This implementation returns a 2D tensor, instead of a tuple of two tensors. It means that users will not be able to do the following:
```python
x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
x[i] # need to first convert the 2D tensor into a tuple of two 1D tensors.
```
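As a workaround, here is a hedged sketch of how the returned 2D index tensor can still be used for indexing, by splitting it into its two rows first (assuming the interface described in this PR):

```python
import torch

x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)   # shape (2, N): row 0 holds row indices, row 1 holds column indices
vals = x[i[0], i[1]]           # advanced indexing with two 1D index tensors
x[i[0], i[1]] = 0              # e.g. zero out the lower triangle in place
```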
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14904
Reviewed By: zou3519
Differential Revision: D13433027
Pulled By: mrshenli
fbshipit-source-id: 41c876aafcf584832d7069f7c5929ffb59e0ae6a
* Optimize CPU GenerateProposals op by lazily generating anchors (3-5x faster) (#15103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15103
There are two main optimizations in this diff:
1. Previously we generated all anchors for every single spatial grid first, and then applied NMS to pick 2000 anchors according to RPN_PRE_NMS_TOP_N. First sorting the scores, picking the top 2000, and then lazily generating only the corresponding anchors is much faster.
2. Transposing bbox_deltas from (num_anchors * 4, H, W) to
(H, W, num_anchors * 4) was also quite slow, taking about 20ms in the RRPN case when there are lots of anchors, while it's negligible for the RPN case (about 0.1 ms). Instead of transposing, performing all operations in the (num_anchors, H, W) format speeds things up.
For regular RPN scenario, this gives 5x speedup from 5.84ms to 1.18ms a case
with 35 anchors over a 600x600 image.
For rotated boxes with 245 anchors, the runtime goes down from 80ms to 27ms per iteration.
Reviewed By: newstzpz
Differential Revision: D13428688
fbshipit-source-id: 6006b332925e01a7c9433ded2ff5dc9e6d96f7d3
* use ROCm 1.9.2 fp16 capabilities in rocBLAS and MIOpen interfaces (#14994)
Summary:
* relax MIOpen if statement to allow fp16/fp32 mixed precision training now supported by ROCm 1.9.2
* use gemm_ex API of rocBLAS in ROCm 1.9.2 instead of the previous hgemm API
* with this: enable all but one half test in test_nn
While there, fix also:
* a group convolution issue w/ MIOpen pertaining to initializing MIOpen on multi-GPU systems properly we detected while working on this
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14994
Differential Revision: D13439869
Pulled By: bddppq
fbshipit-source-id: 75e4eb51a59488882e64b5eabdc30555b25be25e
* Add back c2 string_utils include header to benchmark_helper
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15143
Differential Revision: D13439694
fbshipit-source-id: 78698b66d52a0178118cbf3e79a7a5ad1763d47b
* Export defs.bzl to open source for pytorch (#15132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15132
Pull Request resolved: https://github.com/facebook/fbshipit/pull/64
Reviewed By: dzhulgakov
Differential Revision: D13424093
fbshipit-source-id: bbebef964b9f3aef8f59cd394eca068680c36b5a
* docs: minor spelling tweaks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15148
Differential Revision: D13443708
Pulled By: suo
fbshipit-source-id: 5e3ec0afd3416ab8ce207f2d04105c49e1c04611
* don't compile dnnlowp.cc in avx2 option (#15147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15147
Forgot to take out dnnlowp.cc from avx2 list in a previous diff.
Reviewed By: dskhudia
Differential Revision: D13440686
fbshipit-source-id: 9ada98b6e885c7d5f22c91a735ff60304480b4cb
* Autoformat build_variables.py (#15152)
Summary:
autoformat `tools/build_variables.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15152
Differential Revision: D13445343
Pulled By: goldsborough
fbshipit-source-id: fd63588de114cb92deda03fa1a0b36f5f9082b2f
* Fix resize for edge case tensors (#14874)
Summary:
Certain tensor shapes failed when being resized. This pull request addresses the bug found in #13404.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14874
Differential Revision: D13429788
Pulled By: soumith
fbshipit-source-id: 8aa6451dbadce46d6d1c47a01cb26e6559bcfc8c
* Implementation of ChannelShuffle Op for MKLDNN (#15106)
Summary:
The speed-up of a single operation is up to 3X.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15106
Differential Revision: D13429596
Pulled By: bddppq
fbshipit-source-id: f8d987cafeac9bef9c3daf7e43ede8c6a4ee2ce5
* support casting to string (#15110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15110
support casting to string on CPU
Reviewed By: intermilan
Differential Revision: D13429381
fbshipit-source-id: b737a1ba1237b10f692d5c42b42a544b94ba9fd1
* Remove "early-release beta" disclaimer from README (#15136)
Summary:
Now that PyTorch 1.0 is out, this should be updated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15136
Differential Revision: D13447377
Pulled By: soumith
fbshipit-source-id: bd4e662c53d0699f25d4d90c1b4c1e182b4427c2
* Disable strict-overflow flag to avoid compilation error (#14977)
Summary:
Disable strict-overflow flag to avoid compilation error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14977
Differential Revision: D13447577
Pulled By: soumith
fbshipit-source-id: 1957bd5aa3c7b79219da3dd53560464977c89526
* minimize header file includes from _avx2.cc (#14950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14950
Minimize the number of headers included from _avx2.cc files to avoid accidental compilation of functions defined in the header files reused by other translation units, which can lead to illegal instruction errors.
Reviewed By: dskhudia
Differential Revision: D13394483
fbshipit-source-id: 67149a6fb51f7f047e745bfe395cb6dd4ae7c1ae
* Removes THCNumerics usages in RNN.cu (#15085)
Summary:
We don't need THCNumerics here since at::Half can be implicitly converted to float and the cuda math dispatches are handled by `/usr/local/cuda/include/crt/math_functions.hpp` and `cmath`. ATen should be free of THCNumerics after this and when porting kernels from THC, one should not use THCNumerics.
Should close: https://github.com/pytorch/pytorch/issues/11878
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15085
Differential Revision: D13447558
Pulled By: soumith
fbshipit-source-id: 4ff5cbf838edcd01e2d1397e4d7f4f920e9e9fc3
* Reuse KernelSpec for FusionGroups with equivalent graphs (#14541)
Summary:
Before this PR, loop unrolling + the graph fuser was creating multiple
FusionGroups with the same bodies (with different variable names) for
JIT LSTMs. Each FusionGroup got registered to a separate fusion key;
each key resulted in a different compilation for the same
specializations.
This PR makes it so that when registering FusionGroups with the fusion
compiler, the compiler first checks the KernelSpec cache to see if the
FusionGroup's graph exists already. If it does, then return the
corresponding KernelSpec's key to share compiled kernels.
In addition, graphs in the KernelSpec cache are canonicalized before
being cached. I added a flag to the canonicalize pass to remove unique
names of values.
This shortens the compile time for a JIT LSTM (seq_len of 100, loop
unroll factor of 8) from 5.3s to 2.3s. Most of this compile time is
running the graph fuser and/or fusion compiler; while this PR
makes it so that there is only one unique kernel in the forward pass,
there are a lot of different kernels (6) in the backward pass
(after loop unrolling) that should be investigated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14541
Differential Revision: D13324487
Pulled By: zou3519
fbshipit-source-id: b841d82ed35a959b5cfc72db033bf5a7b42cc4fb
* Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on Python side).
I've added quite a bunch of tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.
The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.
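The metaclass trick mentioned above can be illustrated with a small, self-contained sketch; `CppModule` and `_ModuleMeta` below are hypothetical stand-ins for the real pybind11 bindings and metaclass, not the actual implementation:

```python
class CppModule:
    """Stand-in for a pybind11-bound C++ frontend module."""
    pass

class _ModuleMeta(type):
    def __instancecheck__(cls, obj):
        # Also treat bound C++ modules as instances of the Python Module class.
        return isinstance(obj, CppModule) or super().__instancecheck__(obj)

class Module(metaclass=_ModuleMeta):
    pass

print(isinstance(CppModule(), Module))  # True, via the metaclass hook
print(isinstance(object(), Module))     # False
```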
apaszke zdevito
CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481
Differential Revision: D12981996
Pulled By: goldsborough
fbshipit-source-id: 147370d3596ebb0e94c82cec92993a148fee50a7
* Unify SparseTensorImpl::size_ and TensorImpl::sizes_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15130
Differential Revision: D13434981
Pulled By: VitalyFedyunin
fbshipit-source-id: 98bd4d66834a3c3d2ea577adb0c8413852da095d
* Fix bincount for non-contiguous inputs on CPU (#15109)
Summary:
Fixes #15058.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15109
Differential Revision: D13447448
Pulled By: soumith
fbshipit-source-id: 56e8d42934538fb00465105a2c5ccfeb7c18a651
* Use a pool of per-thread cudnn handles for each device, updated (#15080)
Summary:
Rebased version of https://github.com/pytorch/pytorch/pull/14861, hopefully addressing ezyang's comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15080
Differential Revision: D13440858
Pulled By: ezyang
fbshipit-source-id: 1c6af5c53538b81c6b92cf1dda231ed333f28035
* Fix typo (#15045)
Summary:
Simple typo fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15045
Reviewed By: dzhulgakov
Differential Revision: D13413509
Pulled By: houseroad
fbshipit-source-id: be66700c30d038368b1433232a4e3fd9299c83d6
* Delete defunct USE_SIMPLE_BASE_CTOR_DTOR (#15144)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15144
Differential Revision: D13440872
Pulled By: ezyang
fbshipit-source-id: 2b1d73fac0c63729ba01d8f129642334ae9d9cf3
* Kill non-forward, non-backward functions generated from nn.yaml (#15127)
Summary:
Updating binding to legacy functions.
Remove unused declarations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15127
Differential Revision: D13433405
Pulled By: VitalyFedyunin
fbshipit-source-id: 58544d38affd20818742338c9eb789d9d14ccbaa
* Fix old tensor OutputTensorCopyFrom usage in ImageInput operator (#15094)
Summary:
cc jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15094
Differential Revision: D13451898
Pulled By: bddppq
fbshipit-source-id: 27906be62fb88aaa13c257441a2e35a285b445ee
* Use std::vector instead of alloca to work around hcc crash
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15175
Differential Revision: D13453708
Pulled By: bddppq
fbshipit-source-id: f8c147ae9f679e395fee9d4c73ebcca052c9a752
* Tensor construction codemod(ResizeLike) - 5/7 (#15084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15084
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419711
fbshipit-source-id: dd2b740c3f13d8087085bafc5571aaf908d1af42
* Tensor construction codemod(ResizeLike) - 6/7 (#15137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15137
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419736
fbshipit-source-id: f4ad7b9582c2f809258169b7fef9adbca7063d99
* Replace non-printable-ascii characters in ProtoDebugString (#14918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14918
When ProtoBuf-Lite is in use, ProtoDebugString just calls SerializeAsString.
This produces binary output, which is not a very suitable "debug" string.
Specifically, we've observed it causing problems when calling code tries to
add the debug string to a Java exception message (which requires valid UTF-8).
Now, we replace all non-ASCII bytes with "?".
This is not a very fast implementation, but generating debug strings shouldn't
be a performance-sensitive operation in any application.
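A rough Python sketch of the sanitization idea (the actual implementation is C++ inside ProtoDebugString; the function name below is hypothetical):

```python
def sanitize_debug_string(raw: bytes) -> str:
    # Keep printable ASCII and common whitespace; replace everything else with '?'.
    return "".join(
        chr(b) if (32 <= b < 127 or b in (9, 10, 13)) else "?"
        for b in raw
    )

print(sanitize_debug_string(b"tensor\x00\xff name"))  # tensor?? name
```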
Reviewed By: dzhulgakov
Differential Revision: D13385540
fbshipit-source-id: 8868172baf20efaf53fecf7d666a6980f59b64f5
* Tensor construction codemod(ResizeLike) - 4/7 (#15088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15088
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419682
fbshipit-source-id: 3e59403bc1c0e71e5cb66df932ed0c6a0a72e643
* Remove _finfo; replace _finfo usage with torch.finfo (#15165)
Summary:
This PR removes the usage of _finfo defined in torch.distributions.utils and changes the call sites
to use torch.finfo instead
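For reference, torch.finfo exposes the floating-point limits that the private helper used to provide, e.g.:

```python
import torch

info = torch.finfo(torch.float32)
print(info.eps)   # smallest x such that 1.0 + x != 1.0
print(info.tiny)  # smallest positive normal number
print(info.max)   # largest representable value
```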
Differential Revision: D13451936
Pulled By: soumith
fbshipit-source-id: 6dbda3a6179d9407bc3396bf1a2baf3e85bc4cf2
* Run ONNX cuda backend test cases via ROCm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15069
Differential Revision: D13427757
Pulled By: bddppq
fbshipit-source-id: ba0273d75986cd5b146f7041a83c63ddf9c6c0cf
* Remove disabled_features in hipify
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15098
Reviewed By: ezyang
Differential Revision: D13453762
Pulled By: bddppq
fbshipit-source-id: e177042c78f5bf393163d660c25b80285353853d
* Add missing caffe2_hip extension in setup.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15189
Reviewed By: orionr
Differential Revision: D13457644
Pulled By: bddppq
fbshipit-source-id: c2363e9b8fd21709b62777e5b2199f01ec1c65f8
* Enable performance-unnecessary-value-param in .clang-tidy (#15026)
Summary:
This PR fixes around 250 places in the codebase where we were making unnecessary copies of objects (some large, some small).
ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15026
Differential Revision: D13458784
Pulled By: goldsborough
fbshipit-source-id: be5148b2ce09493588d70952e6f6d6ff5ec5199b
* Remove TensorImpl -> Type dependency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15086
Reviewed By: dzhulgakov
Differential Revision: D13425628
fbshipit-source-id: 08a8a774d17b071367454e027012a02f96d177d4
* Support torch.tensor in script (#14913)
Summary:
Adding support for torch.tensor in script.
The input list is typed as t[], because it can be arbitrarily nested. I added a compile-time check that the inner type of the list is a bool, float, or int.
Also adds specialization for Boolean Lists, which already existed at the ivalue level but had not been added to the compiler yet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14913
Differential Revision: D13407930
Pulled By: eellison
fbshipit-source-id: d17f1195a22149d5b0d08d76c89a7fab8444f7c5
* For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem (#15113)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15113
cv::rotatedRectangleIntersection has a known float underflow bug that would cause failure in ```CV_Assert(intersection.size() <= 8)```
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem.
Otherwise, when ```USE_CPP_GENERATE_PROPOSALS = true```, the training would fail.
Reviewed By: viswanathgs
Differential Revision: D13429770
fbshipit-source-id: 5e95d059f3c668f14059a0a83e8e53d8554cdb99
* Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656
This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.
dzhulgakov: Please comment on if you think it still makes sense to land this even though it's not blocking anymore since we're going to move at::CopyBytes anyhow.
ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.
Reviewed By: ezyang
Differential Revision: D13287688
fbshipit-source-id: 17800ca8a79ab1633f23be58d96f99a160d8ed24
* Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858
This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor
Reviewed By: ezyang
Differential Revision: D13365817
fbshipit-source-id: bc73b27a793602cb14200dcdf357aa63233da43c
* add erf and erfc to fuser/autodiff
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15139
Differential Revision: D13455690
Pulled By: soumith
fbshipit-source-id: b06e5f5d362869c2e5fa11a52f9450d77c30d4cb
* Fix numpy conversion for int8 tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15194
Differential Revision: D13459270
Pulled By: li-roy
fbshipit-source-id: 605534add263860a3ad9a7fa70888301ee0bf8e4
* Fix derivative for mvlgamma (#15049)
Summary:
Fixes #15015.
Added tests to validate derivative.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15049
Reviewed By: soumith
Differential Revision: D13434117
Pulled By: zou3519
fbshipit-source-id: 4a292600af9eb08b67c0f8b5482e9512aac95e72
* caffe2 - easy - Create test_util to make it easier to write C++ unit tests (#15014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15014
Currently it looks like many of the simple operations such as comparing tensors, creating tensors, fetching tensors... are too verbose and take effort to write correctly in unit tests.
Easy to use utilities are often more important to increase productivity writing unit tests. While caffe2 python unit tests are relatively easier to write at the moment, the C++ side seems lacking.
In this change I create a test_util, started with assertsTensorEquals, getTensor, createTensor, and we can start putting more easy to use utilities there.
Reviewed By: salexspb
Differential Revision: D13370461
fbshipit-source-id: bee467a127e1d032ef19482f98aa5c776cf508c0
* caffe2 - easy - test utils to create operator (#15180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15180
Test utils to create an operator
On top of D13370461
Reviewed By: ZolotukhinM
Differential Revision: D13382773
fbshipit-source-id: a88040ed5a60f31d3e73f1f958219cd7338dc52e
* caffe2 - easy - test utils to fill tensors (#15019)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15019
Put some utils to fill tensors to test_utils
Reviewed By: salexspb
Differential Revision: D13386691
fbshipit-source-id: 51d891aad1ca12dc5133c0352df65b8db4f96edb
* caffe2 - easy - test utils to compare tensors in two workspaces (#15181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15181
Add test utils to compare tensors in two workspaces
Reviewed By: ZolotukhinM
Differential Revision: D13387212
fbshipit-source-id: e19d932a1ecc696bd0a08ea14d9a7485cce67bb2
* caffe2 - easy - test utils for tensor assertion (#15020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15020
Add test utils for assertion of a tensor (sizes and values)
Reviewed By: salexspb
Differential Revision: D13401146
fbshipit-source-id: bc385df074043e03ea884940b5631b96de4a607e
* caffe2 - easy - utils to set argument of operator (#15022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15022
Add setArgument testing utils to make it easy to set argument for an operator
Reviewed By: yinghai
Differential Revision: D13405225
fbshipit-source-id: b5c1859c6819d53c1a44718e2868e3137067df36
* caffe2 - make DataRandomFiller usable in unit tests (#15027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15027
- Make DataRandomFiller able to accept input_dims and input_types for only non intermediate inputs. Add a helper to fill input directly to a workspace
Reviewed By: highker
Differential Revision: D13408345
fbshipit-source-id: 5fc54d33da12e3f0a200e79380d4c695b0339b17
* Revert D13407930: [pytorch][PR] Support torch.tensor in script
Differential Revision:
D13407930
Original commit changeset: d17f1195a221
fbshipit-source-id: f4458872c48ec4a2c9983b21ed90bcdc0ae665b7
* Tensor construction codemod(ResizeLike) - 3/7 (#15122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15122
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision: D13419643
fbshipit-source-id: 65b5a037b94d458b944d51f790ba2829db1fb530
* Better tests/support for Python/C++ inter-op (#15193)
Summary:
Methods like `module.named_modules()` return a container of `shared_ptr<nn::Module>`. Currently the `nn::Module` base class does not have Python bindings. This PR fixes this, and adds more unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15193
Differential Revision: D13458713
Pulled By: goldsborough
fbshipit-source-id: 4091fe1b96a1be8db14c6a4307fbacc2b41ff6fe
* Refactor caffe2 CI scripts and add benchmark scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14575
Differential Revision: D13468049
Pulled By: bddppq
fbshipit-source-id: e73bc8742c8a03f498816eee8a72b06a3e19fe48
* Enable all clang-tidy performance checks (#15198)
Summary:
This PR adds the final set of clang-tidy checks we should add for our codebase: a last set of performance-related checks. Most fixes here are around changing `auto` to `const auto&` in a few places where unnecessary copies were made, and adding `reserve()` calls before loops doing repeated `push_back()`. Also a few cases of calling `std::string::find` with a single-character string literal instead of a single char, which uses a less efficient string search algorithm meant for searching larger substrings.

ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15198
Differential Revision: D13468797
Pulled By: goldsborough
fbshipit-source-id: 2bed1ea1c7c162b7f3e0e1026f17125e88c4d5b2
* Remove __forceinline__ hipification step. (#15229)
Summary:
The HIP definition now correctly contains the inline attribute.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15229
Differential Revision: D13470962
Pulled By: bddppq
fbshipit-source-id: 34f8361bda5f3dce20a2eeb530c3a25d1b1bdd06
* Fix jit doc codeblocks and tables (#15227)
Summary:
Some of the codeblocks were showing up as normal text and the "unsupported modules" table was formatted incorrectly
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15227
Differential Revision: D13468847
Pulled By: driazati
fbshipit-source-id: eb7375710d4f6eca1d0f44dfc43c7c506300cb1e
* enabled tests in test_nn, test_cuda and test_sparse (#15232)
Summary:
tests work on ROCm 1.9.2 as present on CI (fp16 bringup, hipMemset and sparse improvements)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15232
Differential Revision: D13470991
Pulled By: bddppq
fbshipit-source-id: 45acc4f9ea5baaaf7672b86eb022948055779925
* Revert D13440858: [pytorch][PR] Use a pool of per-thread cudnn handles for each device, updated
Differential Revision:
D13440858
Original commit changeset: 1c6af5c53538
fbshipit-source-id: fda42ea75000d4a4e9c4a8eeaaa5518f7ad9c298
* Do not ifdef __launch_bounds__ out for ROCm. (#15228)
Summary:
The compiler understands it and profits from knowing it by not using too many VGPRs, as it otherwise assumes the default workgroup size of 256.
Fixes a problem in bringup of ROCm 2.0 on gfx906.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15228
Differential Revision: D13470950
Pulled By: bddppq
fbshipit-source-id: f9aa44c7c95299a099c0ea9317b9044cc056acc5
* fix an issue where two rules build the same .py files
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15230
Differential Revision: D13471625
Pulled By: zdevito
fbshipit-source-id: a982413a308c7a9bb5b6a82fe96fd3de44f555aa
* Preserve module hierarchy on traced modules (#15101)
Summary:
We need this, for example, to properly call `_unpack` when we have a traced module in the hierarchy
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15101
Differential Revision: D13468467
Pulled By: jamesr66a
fbshipit-source-id: c2b6740b12cde6e23395d12e42d4fc2c4c7ca3f2
* record unit time in torch.cuda.event (#15221)
Summary: Record unit of time for torch.cuda.Event's elapsed_time
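A minimal usage sketch of the API in question; `elapsed_time` reports the interval in milliseconds:

```python
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
torch.randn(1000, 1000, device="cuda").mm(torch.randn(1000, 1000, device="cuda"))
end.record()

torch.cuda.synchronize()          # wait for the recorded work to finish
print(start.elapsed_time(end))    # elapsed time in milliseconds
```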
Differential Revision: D13467646
Pulled By: zou3519
fbshipit-source-id: 4f1f4ef5fa4bc5a1b4775dfcec6ab155e5bf8d6e
* Build c10 HIP test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15233
Reviewed By: ezyang
Differential Revision: D13471002
Pulled By: bddppq
fbshipit-source-id: b42c3bc2b9db672ce50a52eb700cc6ed13d3535f
* Start unittesting our main observer (#15191)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15191
OSS:
just splitting out basic flags from a unit test. So I can extend them in another test where I need to add additional flags.
Reviewed By: yinghai
Differential Revision: D13159184
fbshipit-source-id: 9823e792cf0ed8d0379235c44564862b7d784845
* FP16MomentumSGDUpdate Op fix and enable for ROCm (#15150)
Summary:
1. Fix a bug in FP16MomentumSGDUpdate operator
2. Enable operator for ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15150
Differential Revision: D13473145
Pulled By: bddppq
fbshipit-source-id: 4c5c5f30cb9bba658e3639dbe193fa08a304d306
* Supply static shape info to Reshape when doing onnxGetCompatibility (#15242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15242
Newer versions of ONNX Reshape get shape info from a tensor. Hence for static backends, we need to provide this info to it when doing `onnxGetCompatibility` too.
Reviewed By: jackm321
Differential Revision: D13471959
fbshipit-source-id: 8a58e28edd900b6ad54a1dbd63ff2579fbe0e820
* Add several features to converting images to blobs (#15204)
Summary:
Several enhancements are implemented:
* Resize the images to be within a boundary between min-size and max-size (which can apply to either height or width); see the sketch after this list. It tries to resize the minimum side to match min-size while keeping the aspect ratio. However, if in that case the maximum side would exceed max-size, then resize the maximum side to be equal to max-size (so the minimum side ends up less than min-size). The min/max sizes are specified in the scale argument, in comma-separated form. If one of the sizes is -1, then that size is not a restriction.
* Change the OpenCV resize function arguments from using cv::Size() to the x, y scale. Theoretically they should be the same. But in reality, the two ways of specifying them may result to different resized outputs.
* Once the image is read in, change the data to floats. That means, after resize and other preprocessing steps, the float values are preserved (not truncated to int).
* It is possible to convert data in text format to the blob format.
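A sketch of the resizing rule described in the first bullet; the function name and rounding below are illustrative, not the operator's actual code:

```python
def compute_resized_dims(height, width, min_size, max_size):
    # Scale so the shorter side matches min_size while keeping the aspect ratio,
    # but cap the scale so the longer side does not exceed max_size.
    scale = float(min_size) / min(height, width) if min_size > 0 else 1.0
    if max_size > 0 and max(height, width) * scale > max_size:
        scale = float(max_size) / max(height, width)
    return int(round(height * scale)), int(round(width * scale))

print(compute_resized_dims(480, 640, min_size=600, max_size=1000))   # (600, 800)
print(compute_resized_dims(480, 1280, min_size=600, max_size=1000))  # (375, 1000): capped by max_size
```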
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15204
Reviewed By: llyfacebook
Differential Revision: D13467225
Pulled By: sf-wind
fbshipit-source-id: 7da34a72d43a9603cd7ab953f5821c1222d0178f
* Create parser.cpp (#15238)
Summary:
Moves implementation into .cpp file. Parser was getting included in several compilation units.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15238
Differential Revision: D13474635
Pulled By: zdevito
fbshipit-source-id: 7dc824eea8f506d6c8ae1aa67aeec0c34d5285fc
* Tensor method rename dims()->sizes() (#15246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15246
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: igorsugak
Differential Revision: D13470369
fbshipit-source-id: ce995beab7c64bebe8b234fb5e6d015940ec2952
* Mention Jacobian-vector product in the doc of torch.autograd (#15197)
Summary:
A friend of mine is learning deep learning and PyTorch, and he is confused by the following piece of code from the tutorial https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients :
```python
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
He doesn't know where the following line comes from:
```python
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
```
What are we computing? Why don't we compute "the gradient of `y` w.r.t `x`"?
In the tutorial, it only says
> You can do many crazy things with autograd!
This does not explain anything. It seems to be hard for some beginners of deep learning to understand why we ever do backward with an external gradient fed in, and what the meaning of doing so is. So I modified the tutorial in https://github.com/pytorch/tutorials/pull/385
and the docstring correspondingly in this PR, explaining the Jacobian vector product. Please review this PR and https://github.com/pytorch/tutorials/pull/385 together.
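For reference, the quantity computed by `y.backward(v)` is the vector-Jacobian product: with $y = f(x)$, autograd accumulates

$$x.\mathrm{grad} \mathrel{+}= J^{\top} v, \qquad J_{ij} = \frac{\partial y_i}{\partial x_j}$$

so the `gradients` tensor passed in weights how much each component of `y` contributes to `x.grad`.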
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15197
Differential Revision: D13476513
Pulled By: soumith
fbshipit-source-id: bee62282e9ab72403247384e4063bcdf59d40c3c
* value-based mark and sweep DCE (#14910)
Summary:
This makes DCE more granular by tracking live values/aliases through the graph (rather than just nodes). So we can be more aggressive in DCE around control flow blocks. For example, in:
```
%a0 = aten::foo()
%b = aten::foo()
%a2, %b2 = prim::If(%cond) {
  block0() {
    %a1 = aten::foo(%a0)
    %b1 = aten::foo(%b)
  } -> (%a1, %b1)
}
return (%a2)
```
we will now dce all the `%b` stuff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14910
Differential Revision: D13476445
Pulled By: suo
fbshipit-source-id: 2bf5db19711c07dde946697a4f4b270bd8baf791
* fix cholesky call in potrs example (#15215)
Summary:
Cholesky by default returns the lower triangular matrix, see [docs](https://pytorch.org/docs/stable/torch.html#torch.cholesky).
However `torch.potrs` by default requires the upper triangular matrix. The naming of the variable `u` suggests that the example expects the upper to be returned, so I've added the flag to make that happen in the example.
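A minimal sketch of the corrected pattern, using the APIs as they exist at the time of this change (the matrix construction here is only for illustration):
```python
import torch

a = torch.randn(3, 3)
a = a.t().mm(a) + 1e-3 * torch.eye(3)   # make a symmetric positive definite
b = torch.randn(3, 2)

u = torch.cholesky(a, upper=True)       # upper factor, matching potrs' default
x = torch.potrs(b, u)                   # solves a x = b
print(torch.allclose(a.mm(x), b, atol=1e-3))
```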
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15215
Differential Revision: D13476468
Pulled By: soumith
fbshipit-source-id: 7b68035f435a2b1be4d363b3f63e407394af949d
* Fix a typo in the assert
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15265
Reviewed By: llyfacebook
Differential Revision: D13477029
Pulled By: sf-wind
fbshipit-source-id: 9c5571a583c01f9701625541ebec0c836cb923f2
* Delete ffi documentation (#15220)
Summary: Deleting FFI documentation since its deprecated.
Differential Revision: D13477329
Pulled By: soumith
fbshipit-source-id: 0b3d485eb7cef1f05b6b397dff50f21a49d6409e
* Trivial comment correction in dataloader (#15276)
Summary:
Trivial comment correction in dataloader
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15276
Differential Revision: D13477324
Pulled By: soumith
fbshipit-source-id: 2a74a014999655d129311d611f2a09411339cb13
* Refactor hotpatch_vars and apply it to libtorch (#14976)
Summary:
Fixes #14801.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14976
Differential Revision: D13485381
Pulled By: soumith
fbshipit-source-id: 0af3c2e1b90988d56f6f85632328d1e4b788ffd2
* Fix tensor printing bug in Python 2 (#12732)
Summary:
`rsplit` doesn't accept keyword arguments in Python 2, so this line raises an error
Fixes #15135
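A small illustration of the incompatibility (the string here is arbitrary):
```python
s = 'torch.nn.Module'
s.rsplit('.', 1)                   # positional arguments work on Python 2 and 3
s.rsplit(sep='.', maxsplit=1)      # TypeError on Python 2: rsplit takes no keyword arguments
```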
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12732
Differential Revision: D10458630
Pulled By: driazati
fbshipit-source-id: a63e42fbc0e39e4291480775b516c98122ec05a1
* Tighten up invariants regarding StreamId. (#15125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15125
I realized that it is really bad juju if you fake a StreamId
out of thin air, because in general this isn't going to work.
So, make the constructor a lot scarier.
Most "faking StreamId out of thin air" happens because someone
just wants to put something on the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432800
fbshipit-source-id: a86991d6fc1d8aa4e54e8175e5f06f90856238e6
* Adding ONNX export for torch.expand and torch.ne (#15050)
Summary:
`torch.expand` and `torch.ne` are used often in models and this PR adds ONNX export support for them. ArmenAg has created issue https://github.com/pytorch/pytorch/issues/10882 for this.
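A rough sketch of a model exercising both ops during export (the module and shapes are made up for illustration):
```python
import io
import torch

class Model(torch.nn.Module):
    def forward(self, x, y):
        return x.expand(4, 3) != y      # torch.expand and torch.ne in one graph

torch.onnx.export(Model(), (torch.randn(1, 3), torch.randn(4, 3)), io.BytesIO())
```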
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15050
Differential Revision: D13453036
Pulled By: houseroad
fbshipit-source-id: 4724b4ffcebda6cd6b2acac51d6733cb27318daf
* Minor fixes in .jenkins/caffe2/bench.sh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15304
Differential Revision: D13493876
Pulled By: bddppq
fbshipit-source-id: 7146eb2587e526af65b4b0290c25bd55653a3088
* Fix for issue 14829 (#14908)
Summary:
* Modify the testcase as outlined in the issue
* Issue url: https://github.com/pytorch/pytorch/issues/14829
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14908
Differential Revision: D13490360
Pulled By: ezyang
fbshipit-source-id: ff11a72e19b49223652182e82c2b4e65fe444ca7
* Don't enforce docstrings on bool dispatch (#15306)
Summary:
Allows 2 functions that are boolean dispatched to have no docstrings (the only case that will fail now is if both functions have docstrings)
Fixes #15281
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15306
Differential Revision: D13494884
Pulled By: driazati
fbshipit-source-id: 65fec39ae03a7d6a68ad617c9b270faeb1617930
* Replace SwitchToDevice(0) with SwitchToDevice() (#15126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15126
I want to make people stop manufacturing StreamId from thin air,
and a first step is to make people use the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432922
fbshipit-source-id: 9f0d8d70646c50d979bde5ba3c3addeebac48a3d
* Fix the missing caffe2 proto files for Windows (#15157)
Summary:
Fixes #15156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15157
Differential Revision: D13490420
Pulled By: orionr
fbshipit-source-id: 4387d707f634a5975238af915b1befb2277f8ec7
* add isinstance static type checking for jit (#15076)
Summary:
This PR adds isinstance to do static type checking in the JIT.
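A rough sketch of the intended usage (the function below is hypothetical; isinstance is resolved against the static type at compile time, including the tuple-of-types form):
```python
import torch
from typing import List

@torch.jit.script
def count(x: List[int]):
    # checked statically against the declared type of x, not at runtime
    if isinstance(x, (list, tuple)):
        return len(x)
    return 0
```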
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15076
Differential Revision: D13471067
Pulled By: wanchaol
fbshipit-source-id: d39b7ed5db9fcca4b503659d02cf7795950ea8ea
* Bicubic interpolation for nn.functional.interpolate (#9849)
Summary:
Addresses #918; interpolation results should be similar to TF's.
* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`
The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can be refactored/moved to ATen all at once when #10482 is resolved
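A minimal sketch of the new mode (input shape chosen arbitrarily):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
y = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=False)
print(y.shape)  # torch.Size([1, 3, 16, 16])
```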
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849
Differential Revision: D9007525
Pulled By: driazati
fbshipit-source-id: 93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
* Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15064
Reviewed By: orionr
Differential Revision: D13474801
Pulled By: pjh5
fbshipit-source-id: 9d3664c3a3a1b6c2d9f083f8476fe3b037296b98
* Allow future type parsing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14887
Differential Revision: D13490984
Pulled By: highker
fbshipit-source-id: 165fe995867be273793f983154aa6cbce13e4396
* Port nn fold and unfold to c++
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14597
Reviewed By: ezyang
Differential Revision: D13272227
fbshipit-source-id: 6eccab5ff5830a977398a96393b778095120edc6
* caffe2/python/task: added __repr__ methods to all task definitions (#15250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15250
This adds `__repr__` methods to all of the classes under task.py. This makes the objects much easier to interact with when using them in an interactive manner, such as in a Jupyter notebook.
The default `__repr__` method just returns the object ID which is very unhelpful.
Reviewed By: hanli0612
Differential Revision: D13475758
fbshipit-source-id: 6e1b166ec35163b9776c797b6a2e0d002560cd29
* Add a correctness check for C++ types to custom operators (#15247)
Summary:
The JIT uses `int64_t` for its integer type and `double` for its floating point type, but users quite often want to write `int` or `float` and that currently fails in not-so-nice ways for custom ops. This PR adds a simple `static_assert` to catch these common failure cases.
zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15247
Differential Revision: D13493941
Pulled By: goldsborough
fbshipit-source-id: c1cd0d10ab5838c75f167c0bdb57e45a0bc1344e
* Fix _apply in nn.Module (#15305)
Summary:
Fixes an issue that arose from https://github.com/pytorch/pytorch/pull/13481 where `.shared_memory()` couldn't be called. Effectively undoes all changes to `nn.Module` from that PR and solve the relevant problem in a different way (the goal was to be able to call `._apply()` on the Python wrapper for a C++ module).
soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15305
Differential Revision: D13493937
Pulled By: goldsborough
fbshipit-source-id: 4cb8687f90fc8709a536c5e7eacd0dc8edf6f750
* Reenable OpenMP by reverting the following two commits. (#15315)
Summary:
Revert "Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)"
This reverts commit a84e873bb156080ea76ab182171b1f3b4d5395f6.
Revert "Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)"
This reverts commit 8901935ad42fe9bf093d1106ea43606008a4024d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15315
Differential Revision: D13495852
Pulled By: ezyang
fbshipit-source-id: bcd3f60088b14831c53d3c171f10cd1ab6b35dee
* [TensorIterator fixing mean to output correct result for half precisi… (#14878)
Summary:
…on](#12115)
mean is calculated in two steps, sum()/numel(). For half precision, data gets
cast back to half after sum().
We fused the division into the reduction kernel by adding pre_op/post_op.
This allows torch.ones(65536).cuda().half().mean() to return the correct
result.
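A runnable form of the example from the summary (guarded since it needs a CUDA device):
```python
import torch

if torch.cuda.is_available():
    x = torch.ones(65536).cuda().half()
    print(x.mean())   # now returns 1 instead of an incorrect value
```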
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14878
Differential Revision: D13491159
Pulled By: soumith
fbshipit-source-id: e83802e1628b6d2615c45e18d7acf991d143a09e
* Allow tracing with fork/wait (#15184)
Summary:
There is still a limitation: if a script module is somewhere
in the trace, its inputs/outputs can only be tensors or tuples of
tensors.
resolves #15052
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15184
Differential Revision: D13457691
Pulled By: highker
fbshipit-source-id: 8fe46afc41357a0eb8eadd83f687b31d074deb0e
* improve script/no script save error (#15321)
Summary:
Improves the error message for #15116
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15321
Differential Revision: D13499379
Pulled By: zdevito
fbshipit-source-id: b8dc0a83efabff74199f4aab2ee98aa41c42608b
* Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: 4bf66581d07d839f459869bc9c6428011063cc5b
* Revert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
Differential Revision:
D13383102
Original commit changeset: c434f0e0ddff
fbshipit-source-id: 690f46ca0710954fa591a5ea77535e9759db4de5
* caffe2 mobile opengl (#15322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15322
caffe2 mobile opengl code is not used, deleting it to reduce complications when we perform other changes
Reviewed By: Maratyszcza
Differential Revision: D13499943
fbshipit-source-id: 6479f6b9f50f08b5ae28f8f0bc4a1c4fc3f3c3c2
* Method returns a single argument (#15289)
Summary:
This PR changes Method (just Method not all graphs) to always have a single
return argument.
This is part 1 in a set of changes that will enable us to have better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.
This change makes it so that Method and Python handle multiple returns in the same way:
* 0 - None
* 1 - <single value>
* many - Tuple[...]
The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed…
* tox.ini -> .flake8 (#15065)
Summary:
We were only using this file to configure flake8, and fbcode linters do not recognize tox.ini which causes spurious linter warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15065
Differential Revision: D13420774
Pulled By: suo
fbshipit-source-id: e43a46befa36862c8b3c0a90074aec6a66531492
* Update onnx coverage script for more accurate result (#15029)
Summary:
The coverage of scalar-input test cases was not accurate. This patch fixes that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15029
Differential Revision: D13419764
Pulled By: zrphercule
fbshipit-source-id: a14a5cbef432bea8c9126156f5deb1125e1aeb47
* Issue 14984: Remove divide by zero error in index_put_ (#14986)
Summary:
No check for an empty (zero-element) index tensor was done in the accumulate=True (serial) case in the new TensorIterator code since https://github.com/pytorch/pytorch/pull/13420.
https://github.com/pytorch/pytorch/issues/14984
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14986
Differential Revision: D13417861
Pulled By: colesbury
fbshipit-source-id: e6ed1af8f708b53a35803fc157ed1f043169ec89
* Supress warnings on generated tests
Summary: Removes all warnings spew for the TestJitGenerated tests
Differential Revision: D13420919
fbshipit-source-id: f251c12f923088ccc5daa2984c15003a67cbd1c1
* Split off fuser tests in test_jit.py to their own test case (#15072)
Summary:
This PR creates TestFuser inside test_jit.py to be a home for graph fuser
specific tests.
This was a useful exercise because now that all the fuser tests are in
one place, I can spot redundant and bitrotting tests for cleanup in a
future PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15072
Differential Revision: D13421458
Pulled By: zou3519
fbshipit-source-id: 80b1a7712feff75a0c186d1664601c4edbbca694
* re-enable copy of python files, but be careful that the copy is only … (#14982)
Summary:
…done once
This allows no-op builds to work correctly even when BUILD_CAFFE2_OPS is on.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14982
Differential Revision: D13413960
Pulled By: zdevito
fbshipit-source-id: 6e5412a8c375af8a47c76f548cdd31cff15f3853
* add gloo scatter support on GPU (#14917)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14917
as titled
Reviewed By: pietern
Differential Revision: D13271560
fbshipit-source-id: 0187a3390f8ebd72a2c074e7a651432159d427c0
* Remove deprecated variable_tensor_functions (#15003)
Summary:
Removing the deprecated functions in `torch/csrc/variable_tensor_functions.h` (like `torch::CPU`) and corresponding implementations from `torch/csrc/torch.cpp` from master after the release.
ezyang gchanan soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15003
Differential Revision: D13418086
Pulled By: goldsborough
fbshipit-source-id: a0accdf6f7b0efa1ec07ac7b74b86ff2da37543f
* Add error type to raise statement
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15039
Differential Revision: D13419566
Pulled By: zou3519
fbshipit-source-id: f67a3aebce937e3e640e91e81eb3e184cfdf269c
* Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
Summary:
```
This diff changes the HIPification of ATen to be out-of-place.
We now have the following mappings:
- ATen/cuda => ATen/hip
- ATen/native/cuda => ATen/native/hip
- ATen/native/sparse/cuda => ATen/native/sparse/hip
- THC => THH
- THCUNN => THHUNN
The build system is adjusted to know about these new build paths,
and HIPify is taught how to adjust include paths and
THC_GENERIC_FILE appropriately. ATen_hip is now built as
the ATen_hip library, rather than reusing ATen_cuda.
However, despite these new filepaths, none of the identifiers in ATen
have actually changed. So, e.g., THHGeneral.h still defines functions
named THC_blahblah, and HIP still shows up as CUDA in PyTorch itself.
We'll tackle this in a subsequent PR; this diff is just to get the files
out-of-place.
Minor extra improvements:
- Don't edit tmp_install when hipifying
- HIP no longer builds native_cudnn_cpp; it was unnecessary
- Caffe2_HIP_INCLUDES is now Caffe2_HIP_INCLUDE, for consistency
with all the other variables.
- HIP build now properly respects ATEN_CUDA_FILES_GEN_LIB (it
did not previously.)
- You can now override file extension matching in pyHIPIFY
by explicitly specifying its full name in the matching list.
This is used so we can HIPify CMakeLists.txt in some situations.
A little bit of string and ceiling wax:
- gen.py grows a --rocm flag so that it knows to generate CUDA
files which actually refer to the HIP headers (e.g., THH.h)
We'll get rid of this eventually and generate real HIP files,
but not for this PR.
- Management of HIP dependencies is now completely deleted
from the ATen CMakeLists.txt. The old code was dead (because
it was shoveled in ATen_CUDA_DEPENDENCY_LIBS and promptly
ignored by the Caffe2 build system) and didn't actually work.
```
Stacked on https://github.com/pytorch/pytorch/pull/14849 review last commit only
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14866
Differential Revision: D13419475
Pulled By: ezyang
fbshipit-source-id: cb4c843df69a1d8369314c9fab1b7719520fa3db
* Add at::scalar_tensor factory function, use it instead of Type.scalar… (#15074)
Summary:
…_tensor.
This is part of a long series of paring down the Type interface.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15074
Differential Revision: D13421482
Pulled By: gchanan
fbshipit-source-id: 84010ee71fef2cb74d32d5de7858d8ed9f36b885
* Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795
Reviewed By: ezyang
Differential Revision: D13336856
fbshipit-source-id: 5375d0e42312ff7564f4df06210a5e49542d59e3
* Fix include paths for TensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14816
Reviewed By: ezyang
Differential Revision: D13348040
fbshipit-source-id: a7204d89c2dd277d13093b0ed862f40b53dee82f
* Move UndefinedTensorImpl to c10 (meh) (#14817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14817
unfortunately, we still need this.
Reviewed By: ezyang
Differential Revision: D13348041
fbshipit-source-id: e8dcc89f5c71bd1ea2c9813990dac6e58e63b1fd
* Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818
Reviewed By: ezyang
Differential Revision: D13348042
fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
* add gloo support for gather on GPU (#14916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14916
as titled
Reviewed By: pietern
Differential Revision: D13267832
fbshipit-source-id: 3b89d08af93f74941f17ff892c33fc2a4a023c19
* Pre-commit flake8/clang-tidy (#15102)
Summary:
Provide a pre-commit hook that does flake8 and clang tidy checks. Enables the clang-tidy script to run in parallel to make it fast enough to be used in a pre-commit hook.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15102
Reviewed By: soumith
Differential Revision: D13429629
Pulled By: zdevito
fbshipit-source-id: bd52fe5652f29b033de8d9926d78350b2da4c2fc
* Update the output format for benchmark_helper. It outputs the dimensi… (#15108)
Summary:
…on first and all the values in the next line. This way, it can output arbitrary blob
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15108
Reviewed By: llyfacebook
Differential Revision: D13429346
Pulled By: sf-wind
fbshipit-source-id: 5e0bba2a46fbe8d997dfc3d55a698484552e3af8
* Fix serialization (#15033)
Summary:
Fixes a bug where a hierarchy of submodules, in which one submodule doesn't have any parameters but its own submodules do, doesn't get properly (de-)serialized and loaded. This had to do with the fact that the old protobuf format couldn't store empty parameters.
Fixes https://github.com/pytorch/pytorch/issues/14891
soumith ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15033
Differential Revision: D13411322
Pulled By: goldsborough
fbshipit-source-id: 2ef73b2aa93fa9e46b1cbe1fd47d9f134d6016d5
* Remove linker and dlopen flags that allowed undefined symbols in rocm build (#15091)
Summary:
Previously the undefined symbols were caused by disabled_modules in tools/amd_build/disabled_features.json (now it's cleared).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15091
Differential Revision: D13429595
Pulled By: bddppq
fbshipit-source-id: b341e83f9e5a8d16440a364e837b045a8a4fd6e1
* Add EmptyNameScope to allow you jump out from current scope. (#14631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14631
adding an empty name scope to allow people to jump out of the current name scope.
This could be useful when you want to access a blob from a parent or sibling scope.
Facebook:
e.g.: we encountered a potential use case in D13124249 (it's a large diff, please search for EmptyNameScope in that diff), where we need to access a blob declared in the root name scope from a device name scope (device name scopes are used by the parallel_GPU API). `EmptyNameScope` can help us do that with ease.
I referenced `EmptyDeviceScope` (D6103412) while implementing this one.
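A hypothetical Caffe2 Python sketch of how this could be used; the exact location of EmptyNameScope is assumed here to be next to core.NameScope:
```python
from caffe2.python import core

with core.NameScope("model"):
    # blobs created here are prefixed with "model/"
    with core.EmptyNameScope():   # assumed import location
        # blobs created here live in the root scope, with no prefix
        pass
```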
Reviewed By: yinghai
Differential Revision: D13272240
fbshipit-source-id: d4cde5abcc2336e456b6c6ef086266ef94d86da8
* Use c10::to_string that works cross platform (#15117)
Summary:
Fix master breakage introduced in #15108
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15117
Differential Revision: D13430568
Pulled By: bddppq
fbshipit-source-id: ce10bc552f085d1bf0afbc13119991bee014ac95
* Don't setup x86_64-linux-gnu-gcc as an sccache wrapper. (#15078)
Summary:
When I do this setup in a local Docker development environment,
I get the following error:
x86_64-linux-gnu-gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Somehow, gcc seems to get confused when it gets run from the wrong
directory. Best not to do it.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15078
Differential Revision: D13432143
Pulled By: ezyang
fbshipit-source-id: b18e15f493503a4c8205c85f92a214e49762a7bc
* fix some tests that I accidentally disabled (#15077)
Summary:
While moving these scenarios into `_test_dim_ops` I accidentally left an empty loop in the actual tests, causing them to do nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15077
Differential Revision: D13428759
Pulled By: umanwizard
fbshipit-source-id: 08f53068981d9192c1408878b168e9053f4dc92e
* Add better support for bools in the graph fuser (#15057)
Summary:
Fixes #15038.
aten::_cast_Float(tensor, non_blocking) support was added in #14336.
Its second argument is a bool, but because we don't support generating values
of type bool in the fuser codegen, the codegen errored out.
aten::_cast_Float in the fuser never actually uses its non_blocking
argument, so another way to fix this would be to have a special op for a
fused cast but I thought that we might have fusible ops that do take
bool arguments in the future so this would be good to have.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15057
Differential Revision: D13432091
Pulled By: zou3519
fbshipit-source-id: 455fe574f5f080aca9a112e346b841a2534a8dc3
* Ensure there aren't variables in checked_tensor_unwrap, checked_tenso… (#15105)
Summary:
…r_list_unwrap.
These functions use unsafeGetTensorImpl(), which doesn't work with Variables (in a silent way that may blow up later).
So let's do early checking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15105
Reviewed By: ezyang
Differential Revision: D13429149
Pulled By: gchanan
fbshipit-source-id: b85f6f5b7cdb9a6dd0c40205b924c840a3920ba0
* fix infinite loop when get_max_threads is nonzero but num_threads is 1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15114
Differential Revision: D13431891
Pulled By: umanwizard
fbshipit-source-id: f968b8e50cf776c346d4a28d72b12e7856c95839
* Kill Type.storage. (#15075)
Summary:
It's not used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15075
Reviewed By: ezyang
Differential Revision: D13422487
Pulled By: gchanan
fbshipit-source-id: 272aa0a10e96f3ffb97d571490b517f972b9dcf7
* Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14248
This diff also introduces a horrifying hack to override CUDA's DeviceGuardImpl
with a HIPGuardImplMasqueradingAsCUDA, to accommodate PyTorch's current
behavior of pretending CUDA is HIP when you build with ROCm enabled.
Reviewed By: bddppq
Differential Revision: D13145293
fbshipit-source-id: ee0e207b6fd132f0d435512957424a002d588f02
* Stop erroneously running aten::warn (#15124)
Summary:
Fixes #15119. Before this PR, we were propagating constants through
aten::warn AND running it as a part of shape analysis.
This caused aten::warn to be run regardless of if it is
supposed to be run dynamically. This PR adds an exclusion for aten::warn
in constant propagation and shape analysis, similar to that of prim::RaiseException.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15124
Differential Revision: D13432815
Pulled By: zou3519
fbshipit-source-id: 15ab533ce2accb2da3fd4e569070c7979ce61708
* Move numa.{h, cc} to c10/util (#15024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15024
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: dzhulgakov
Differential Revision: D13380559
fbshipit-source-id: abc3fc7321cf37323f756dfd614c7b41978734e4
* Move adaptive avg pooling 2d to ATen native (#14714)
Summary:
adaptive_avg_pool1d, adaptive_avg_pool2d, and adaptive_avg_pool3d are neural network functions that are currently implemented in our legacy THNN (CPU) / THCUNN (CUDA) libraries. It is generally better if these live in our new library ATen, since it is more feature complete and reduces cognitive overhead.
This change moves adaptive_avg_pool1d and adaptive_avg_pool2d to ATen.
timed relevant cpu tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 6.273s
OK (skipped=7)
real 0m7.164s
user 3m1.289s
sys 0m0.905s
```
compared to master:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 7.232s
OK (skipped=7)
real 0m8.065s
user 3m34.714s
sys 0m2.440s
```
also timed relevant cuda tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 21.049s
OK
real 0m24.106s
user 0m20.890s
sys 0m4.026s
```
compared to master
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 23.021s
OK
real 0m27.095s
user 0m20.121s
sys 0m3.668s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14714
Differential Revision: D13384084
Pulled By: xnder
fbshipit-source-id: 344442103ccbbda72d3c010d2feea00e9985d226
* Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.
* Adds a `my_script_module._get_method('forward').schema()` method to get the function schema from a `ScriptModule` (a usage sketch follows this list)
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
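For the first bullet above, a rough usage sketch (the module definition is made up for illustration):
```python
import torch

class MyModule(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        return x + 1

m = MyModule()
print(m._get_method('forward').schema())   # prints the function schema
```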
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912
Differential Revision: D13385928
Pulled By: driazati
fbshipit-source-id: e391691b2f87dba6e13be05d4aa3ed2f004e31da
* Minor documentation mistake (#15068)
Summary:
keepdim is an optional parameter for torch.max()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15068
Differential Revision: D13437745
Pulled By: zou3519
fbshipit-source-id: b5198c7d4ae17758cd136f6e5aecc6cb5838f174
* Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
Summary:
This is an optimized implementation that does the following:
1. Create an empty Tensor of the correct size.
2. Fill the Tensor with the correct values.
The following three designs to fill in the Tensor result in roughly the same performance. Hence, the 2nd option is taken for simpler code, and to return contiguous tensors.
1. Sequential: fill row coordinates first, then columns. This results in two for-loop and more arithmetic operations.
2. Interleaved: fill in index coordinates one by one, which jumps between the two output Tensor rows in every iteration.
3. Transpose: create an n x 2 Tensor, fill the Tensor sequentially, and then transpose it.
<img width="352" alt="screen shot 2018-12-10 at 3 54 39 pm" src="https://user-images.githubusercontent.com/16999635/49769172-07bd3580-fc94-11e8-8164-41839185e9f9.png">
NOTE:
This implementation returns a 2D tensor, instead of a tuple of two tensors. It means that users will not be able to do the following:
```python
x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
x[i] # need to first convert the 2D tensor into a tuple of two 1D tensors.
```
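Instead, the two coordinate rows can be used directly for indexing; a minimal sketch:
```python
import torch

x = torch.arange(9.).reshape(3, 3)
i = torch.tril_indices(3, 3)      # 2 x N tensor of (row, col) coordinates
print(x[i[0], i[1]])              # lower-triangular values
```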
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14904
Reviewed By: zou3519
Differential Revision: D13433027
Pulled By: mrshenli
fbshipit-source-id: 41c876aafcf584832d7069f7c5929ffb59e0ae6a
* Optimize CPU GenerateProposals op by lazily generating anchors (3-5x faster) (#15103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15103
There are two main optimizations in this diff:
1. Previously we generated all anchors for every single spatial grid position first, and then applied NMS to pick 2000 anchors according to RPN_PRE_NMS_TOP_N. First sorting the scores, picking the top 2000, and only then lazily generating the corresponding anchors is much faster.
2. Transposing bbox_deltas from (num_anchors * 4, H, W) to (H, W, num_anchors * 4) was also quite slow, taking about 20ms in the RRPN case when there are lots of anchors, while it is negligible for the RPN case (about 0.1 ms). Instead of transposing, performing all operations in the (num_anchors, H, W) format speeds things up.
For the regular RPN scenario, this gives a 5x speedup, from 5.84ms to 1.18ms, for a case with 35 anchors over a 600x600 image.
For rotated boxes with 245 anchors, the runtime goes down from 80ms to 27ms per iteration.
Reviewed By: newstzpz
Differential Revision: D13428688
fbshipit-source-id: 6006b332925e01a7c9433ded2ff5dc9e6d96f7d3
* use ROCm 1.9.2 fp16 capabilities in rocBLAS and MIOpen interfaces (#14994)
Summary:
* relax MIOpen if statement to allow fp16/fp32 mixed precision training now supported by ROCm 1.9.2
* use gemm_ex API of rocBLAS in ROCm 1.9.2 instead of the previous hgemm API
* with this: enable all but one half test in test_nn
While there, also fix:
* a group convolution issue with MIOpen pertaining to properly initializing MIOpen on multi-GPU systems, which we detected while working on this
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14994
Differential Revision: D13439869
Pulled By: bddppq
fbshipit-source-id: 75e4eb51a59488882e64b5eabdc30555b25be25e
* Add back c2 string_utils include header to benchmark_helper
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15143
Differential Revision: D13439694
fbshipit-source-id: 78698b66d52a0178118cbf3e79a7a5ad1763d47b
* Export defs.bzl to open source for pytorch (#15132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15132
Pull Request resolved: https://github.com/facebook/fbshipit/pull/64
Reviewed By: dzhulgakov
Differential Revision: D13424093
fbshipit-source-id: bbebef964b9f3aef8f59cd394eca068680c36b5a
* docs: minor spelling tweaks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15148
Differential Revision: D13443708
Pulled By: suo
fbshipit-source-id: 5e3ec0afd3416ab8ce207f2d04105c49e1c04611
* don't compile dnnlowp.cc in avx2 option (#15147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15147
Forgot to take out dnnlowp.cc from avx2 list in a previous diff.
Reviewed By: dskhudia
Differential Revision: D13440686
fbshipit-source-id: 9ada98b6e885c7d5f22c91a735ff60304480b4cb
* Autoformat build_variables.py (#15152)
Summary:
autoformat `tools/build_variables.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15152
Differential Revision: D13445343
Pulled By: goldsborough
fbshipit-source-id: fd63588de114cb92deda03fa1a0b36f5f9082b2f
* Fix resize for edge case tensors (#14874)
Summary:
Certain tensor shapes failed when being resized. This pull request addresses the bug found in #13404.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14874
Differential Revision: D13429788
Pulled By: soumith
fbshipit-source-id: 8aa6451dbadce46d6d1c47a01cb26e6559bcfc8c
* Implementation of ChannelShuffle Op for MKLDNN (#15106)
Summary:
The speed-up of this single operation is up to 3X.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15106
Differential Revision: D13429596
Pulled By: bddppq
fbshipit-source-id: f8d987cafeac9bef9c3daf7e43ede8c6a4ee2ce5
* support casting to string (#15110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15110
support casting to string on CPU
Reviewed By: intermilan
Differential Revision: D13429381
fbshipit-source-id: b737a1ba1237b10f692d5c42b42a544b94ba9fd1
* Remove "early-release beta" disclaimer from README (#15136)
Summary:
Now that PyTorch 1.0 is out, this should be updated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15136
Differential Revision: D13447377
Pulled By: soumith
fbshipit-source-id: bd4e662c53d0699f25d4d90c1b4c1e182b4427c2
* Disable strict-overflow flag to avoid compilation error (#14977)
Summary:
Disable strict-overflow flag to avoid compilation error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14977
Differential Revision: D13447577
Pulled By: soumith
fbshipit-source-id: 1957bd5aa3c7b79219da3dd53560464977c89526
* minimize header file includes from _avx2.cc (#14950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14950
Minimize the number of headers included from _avx2.cc files to avoid accidental compilation of functions defined in header files reused by other translation units, which can lead to illegal instruction errors.
Reviewed By: dskhudia
Differential Revision: D13394483
fbshipit-source-id: 67149a6fb51f7f047e745bfe395cb6dd4ae7c1ae
* Removes THCNumerics usages in RNN.cu (#15085)
Summary:
We don't need THCNumerics here since at::Half can be implicitly converted to float and the cuda math dispatches are handled by `/usr/local/cuda/include/crt/math_functions.hpp` and `cmath`. ATen should be free of THCNumerics after this and when porting kernels from THC, one should not use THCNumerics.
Should close: https://github.com/pytorch/pytorch/issues/11878
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15085
Differential Revision: D13447558
Pulled By: soumith
fbshipit-source-id: 4ff5cbf838edcd01e2d1397e4d7f4f920e9e9fc3
* Reuse KernelSpec for FusionGroups with equivalent graphs (#14541)
Summary:
Before this PR, loop unrolling + the graph fuser was creating multiple
FusionGroups with the same bodies (with different variable names) for
JIT LSTMs. Each FusionGroup got registered to a separate fusion key;
each key resulted in a different compilation for the same
specializations.
This PR makes it so that when registering FusionGroups with the fusion
compiler, the compiler first checks the KernelSpec cache to see if the
FusionGroup's graph exists already. If it does, then return the
corresponding KernelSpec's key to share compiled kernels.
In addition, graphs in the KernelSpec cache are canonicalized before
being cached. I added a flag to the canonicalize pass to remove unique
names of values.
This shortens the compile time for a JIT LSTM (seq_len of 100, loop
unroll factor of 8) from 5.3s to 2.3s. Most of this compile time is
running the graph fuser and/or fusion compiler; while this PR
makes it so that there is only one unique kernel in the forward pass,
there are a lot of different kernels (6) in the backward pass
(after loop unrolling) that should be investigated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14541
Differential Revision: D13324487
Pulled By: zou3519
fbshipit-source-id: b841d82ed35a959b5cfc72db033bf5a7b42cc4fb
* Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on Python side).
I've added quite a bunch of tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.
The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.
apaszke zdevito
CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481
Differential Revision: D12981996
Pulled By: goldsborough
fbshipit-source-id: 147370d3596ebb0e94c82cec92993a148fee50a7
* Unify SparseTensorImpl::size_ and TensorImpl::sizes_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15130
Differential Revision: D13434981
Pulled By: VitalyFedyunin
fbshipit-source-id: 98bd4d66834a3c3d2ea577adb0c8413852da095d
* Fix bincount for non-contiguous inputs on CPU (#15109)
Summary:
Fixes #15058.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15109
Differential Revision: D13447448
Pulled By: soumith
fbshipit-source-id: 56e8d42934538fb00465105a2c5ccfeb7c18a651
* Use a pool of per-thread cudnn handles for each device, updated (#15080)
Summary:
Rebased version of https://github.com/pytorch/pytorch/pull/14861, hopefully addressing ezyang's comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15080
Differential Revision: D13440858
Pulled By: ezyang
fbshipit-source-id: 1c6af5c53538b81c6b92cf1dda231ed333f28035
* Fix typo (#15045)
Summary:
Simple typo fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15045
Reviewed By: dzhulgakov
Differential Revision: D13413509
Pulled By: houseroad
fbshipit-source-id: be66700c30d038368b1433232a4e3fd9299c83d6
* Delete defunct USE_SIMPLE_BASE_CTOR_DTOR (#15144)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15144
Differential Revision: D13440872
Pulled By: ezyang
fbshipit-source-id: 2b1d73fac0c63729ba01d8f129642334ae9d9cf3
* Kill non-forward, non-backward functions generated from nn.yaml (#15127)
Summary:
Update bindings to legacy functions.
Remove unused declarations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15127
Differential Revision: D13433405
Pulled By: VitalyFedyunin
fbshipit-source-id: 58544d38affd20818742338c9eb789d9d14ccbaa
* Fix old tensor OutputTensorCopyFrom usage in ImageInput operator (#15094)
Summary:
cc jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15094
Differential Revision: D13451898
Pulled By: bddppq
fbshipit-source-id: 27906be62fb88aaa13c257441a2e35a285b445ee
* Use std::vector instead of alloca to work around hcc crash
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15175
Differential Revision: D13453708
Pulled By: bddppq
fbshipit-source-id: f8c147ae9f679e395fee9d4c73ebcca052c9a752
* Tensor construction codemod(ResizeLike) - 5/7 (#15084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15084
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419711
fbshipit-source-id: dd2b740c3f13d8087085bafc5571aaf908d1af42
* Tensor construction codemod(ResizeLike) - 6/7 (#15137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15137
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419736
fbshipit-source-id: f4ad7b9582c2f809258169b7fef9adbca7063d99
* Replace non-printable-ascii characters in ProtoDebugString (#14918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14918
When ProtoBuf-Lite is in use, ProtoDebugString just calls SerializeAsString.
This produces binary output, which is not a very suitable "debug" string.
Specifically, we've observed it causing problems when calling code tries to
add the debug string to a Java exception message (which requires valid UTF-8).
Now, we replace all non-ASCII bytes with "?".
This is not a very fast implementation, but generating debug strings shouldn't
be a performance-sensitive operation in any application.
Reviewed By: dzhulgakov
Differential Revision: D13385540
fbshipit-source-id: 8868172baf20efaf53fecf7d666a6980f59b64f5
* Tensor construction codemod(ResizeLike) - 4/7 (#15088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15088
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419682
fbshipit-source-id: 3e59403bc1c0e71e5cb66df932ed0c6a0a72e643
* Remove _finfo; replace _finfo usage with torch.finfo (#15165)
Summary:
This PR removes the usage of _finfo defined in torch.distributions.utils and changes the call sites
to use torch.finfo instead
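For reference, a minimal sketch of the public replacement:
```python
import torch

info = torch.finfo(torch.float32)
print(info.eps, info.tiny)   # machine epsilon and smallest positive normal number
```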
Differential Revision: D13451936
Pulled By: soumith
fbshipit-source-id: 6dbda3a6179d9407bc3396bf1a2baf3e85bc4cf2
* Run ONNX cuda backend test cases via ROCm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15069
Differential Revision: D13427757
Pulled By: bddppq
fbshipit-source-id: ba0273d75986cd5b146f7041a83c63ddf9c6c0cf
* Remove disabled_features in hipify
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15098
Reviewed By: ezyang
Differential Revision: D13453762
Pulled By: bddppq
fbshipit-source-id: e177042c78f5bf393163d660c25b80285353853d
* Add missing caffe2_hip extension in setup.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15189
Reviewed By: orionr
Differential Revision: D13457644
Pulled By: bddppq
fbshipit-source-id: c2363e9b8fd21709b62777e5b2199f01ec1c65f8
* Enable performance-unnecessary-value-param in .clang-tidy (#15026)
Summary:
This PR fixes around 250 places in the codebase where we were making unnecessary copies of objects (some large, some small).
ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15026
Differential Revision: D13458784
Pulled By: goldsborough
fbshipit-source-id: be5148b2ce09493588d70952e6f6d6ff5ec5199b
* Remove TensorImpl -> Type dependency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15086
Reviewed By: dzhulgakov
Differential Revision: D13425628
fbshipit-source-id: 08a8a774d17b071367454e027012a02f96d177d4
* Support torch.tensor in script (#14913)
Summary:
Adding support for torch.tensor in script.
The input list is typed as t[], because it can be arbitrarily nested. I added a compile-time check that the inner type of the list is a bool, float, or int.
Also adds specialization for Boolean Lists, which already existed at the ivalue level but had not been added to the compiler yet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14913
Differential Revision: D13407930
Pulled By: eellison
fbshipit-source-id: d17f1195a22149d5b0d08d76c89a7fab8444f7c5
* For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem (#15113)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15113
cv::rotatedRectangleIntersection has a known float underflow bug that would cause failure in ```CV_Assert(intersection.size() <= 8)```
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem.
Otherwise, when ```USE_CPP_GENERATE_PROPOSALS = true```, the training would fail.
Reviewed By: viswanathgs
Differential Revision: D13429770
fbshipit-source-id: 5e95d059f3c668f14059a0a83e8e53d8554cdb99
* Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656
This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.
dzhulgakov: Please comment on if you think it still makes sense to land this even though it's not blocking anymore since we're going to move at::CopyBytes anyhow.
ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.
Reviewed By: ezyang
Differential Revision: D13287688
fbshipit-source-id: 17800ca8a79ab1633f23be58d96f99a160d8ed24
* Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858
This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor
Reviewed By: ezyang
Differential Revision: D13365817
fbshipit-source-id: bc73b27a793602cb14200dcdf357aa63233da43c
* add erf and erfc to fuser/autodiff
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15139
Differential Revision: D13455690
Pulled By: soumith
fbshipit-source-id: b06e5f5d362869c2e5fa11a52f9450d77c30d4cb
* Fix numpy conversion for int8 tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15194
Differential Revision: D13459270
Pulled By: li-roy
fbshipit-source-id: 605534add263860a3ad9a7fa70888301ee0bf8e4
* Fix derivative for mvlgamma (#15049)
Summary:
Fixes #15015.
Added tests to validate derivative.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15049
Reviewed By: soumith
Differential Revision: D13434117
Pulled By: zou3519
fbshipit-source-id: 4a292600af9eb08b67c0f8b5482e9512aac95e72
* caffe2 - easy - Create test_util to make it easier to write C++ unit tests (#15014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15014
Currently it looks like many of the simple operations such as comparing tensors, creating tensors, fetching tensors... are too verbose and take effort to write correctly in unit tests.
Easy-to-use utilities are important for productivity when writing unit tests. While Caffe2 Python unit tests are relatively easy to write at the moment, the C++ side seems lacking.
In this change I create a test_util, starting with assertsTensorEquals, getTensor, and createTensor, and we can start putting more easy-to-use utilities there.
Reviewed By: salexspb
Differential Revision: D13370461
fbshipit-source-id: bee467a127e1d032ef19482f98aa5c776cf508c0
* caffe2 - easy - test utils to create operator (#15180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15180
Test utils to create an operator
On top of D13370461
Reviewed By: ZolotukhinM
Differential Revision: D13382773
fbshipit-source-id: a88040ed5a60f31d3e73f1f958219cd7338dc52e
* caffe2 - easy - test utils to fill tensors (#15019)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15019
Add some utils for filling tensors to test_utils
Reviewed By: salexspb
Differential Revision: D13386691
fbshipit-source-id: 51d891aad1ca12dc5133c0352df65b8db4f96edb
* caffe2 - easy - test utils to compare tensors in two workspaces (#15181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15181
Add test utils to compare tensors in two workspaces
Reviewed By: ZolotukhinM
Differential Revision: D13387212
fbshipit-source-id: e19d932a1ecc696bd0a08ea14d9a7485cce67bb2
* caffe2 - easy - test utils for tensor assertion (#15020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15020
Add test utils for assertion of a tensor (sizes and values)
Reviewed By: salexspb
Differential Revision: D13401146
fbshipit-source-id: bc385df074043e03ea884940b5631b96de4a607e
* caffe2 - easy - utils to set argument of operator (#15022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15022
Add setArgument testing utils to make it easy to set arguments for an operator
Reviewed By: yinghai
Differential Revision: D13405225
fbshipit-source-id: b5c1859c6819d53c1a44718e2868e3137067df36
* caffe2 - make DataRandomFiller usable in unit tests (#15027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15027
- Make DataRandomFiller able to accept input_dims and input_types for only non-intermediate inputs. Add a helper to fill inputs directly into a workspace
Reviewed By: highker
Differential Revision: D13408345
fbshipit-source-id: 5fc54d33da12e3f0a200e79380d4c695b0339b17
* Revert D13407930: [pytorch][PR] Support torch.tensor in script
Differential Revision:
D13407930
Original commit changeset: d17f1195a221
fbshipit-source-id: f4458872c48ec4a2c9983b21ed90bcdc0ae665b7
* Tensor construction codemod(ResizeLike) - 3/7 (#15122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15122
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision: D13419643
fbshipit-source-id: 65b5a037b94d458b944d51f790ba2829db1fb530
* Better tests/support for Python/C++ inter-op (#15193)
Summary:
Methods like `module.named_modules()` return a container of `shared_ptr<nn::Module>`. Currently the `nn::Module` base class does not have Python bindings. This PR fixes this, and adds more unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15193
Differential Revision: D13458713
Pulled By: goldsborough
fbshipit-source-id: 4091fe1b96a1be8db14c6a4307fbacc2b41ff6fe
* Refactor caffe2 CI scripts and add benchmark scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14575
Differential Revision: D13468049
Pulled By: bddppq
fbshipit-source-id: e73bc8742c8a03f498816eee8a72b06a3e19fe48
* Enable all clang-tidy performance checks (#15198)
Summary:
This PR adds the final set of clang-tidy checks we should add for our codebase: a last set of performance-related checks. Most fixes here are around changing `auto` to `const auto&` in a few places where unnecessary copies were made, and adding `reserve()` calls before loops doing repeated `push_back()`. Also a few cases of calling `std::string::find` with a single-character string literal instead of a single char, which uses a less efficient string search algorithm meant for searching larger substrings.

ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15198
Differential Revision: D13468797
Pulled By: goldsborough
fbshipit-source-id: 2bed1ea1c7c162b7f3e0e1026f17125e88c4d5b2
* Remove __forceinline__ hipification step. (#15229)
Summary:
The HIP definition now correctly contains the inline attribute.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15229
Differential Revision: D13470962
Pulled By: bddppq
fbshipit-source-id: 34f8361bda5f3dce20a2eeb530c3a25d1b1bdd06
* Fix jit doc codeblocks and tables (#15227)
Summary:
Some of the codeblocks were showing up as normal text and the "unsupported modules" table was formatted incorrectly
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15227
Differential Revision: D13468847
Pulled By: driazati
fbshipit-source-id: eb7375710d4f6eca1d0f44dfc43c7c506300cb1e
* enabled tests in test_nn, test_cuda and test_sparse (#15232)
Summary:
tests work on ROCm 1.9.2 as present on CI (fp16 bringup, hipMemset and sparse improvements)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15232
Differential Revision: D13470991
Pulled By: bddppq
fbshipit-source-id: 45acc4f9ea5baaaf7672b86eb022948055779925
* Revert D13440858: [pytorch][PR] Use a pool of per-thread cudnn handles for each device, updated
Differential Revision:
D13440858
Original commit changeset: 1c6af5c53538
fbshipit-source-id: fda42ea75000d4a4e9c4a8eeaaa5518f7ad9c298
* Do not ifdef __launch_bounds__ out for ROCm. (#15228)
Summary:
The compiler understands it and profits from knowing it by not using too many
VGPRs, as it otherwise defaults to a workgroup size of 256.
Fixes a problem in bringup of ROCm 2.0 on gfx906.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15228
Differential Revision: D13470950
Pulled By: bddppq
fbshipit-source-id: f9aa44c7c95299a099c0ea9317b9044cc056acc5
* fix an issue where two rules build the same .py files
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15230
Differential Revision: D13471625
Pulled By: zdevito
fbshipit-source-id: a982413a308c7a9bb5b6a82fe96fd3de44f555aa
* Preserve module hierarchy on traced modules (#15101)
Summary:
We need this, for example, to properly call `_unpack` when we have a traced module in the hierarchy
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15101
Differential Revision: D13468467
Pulled By: jamesr66a
fbshipit-source-id: c2b6740b12cde6e23395d12e42d4fc2c4c7ca3f2
* record unit time in torch.cuda.event (#15221)
Summary: Record unit of time for torch.cuda.Event's elapsed_time
Differential Revision: D13467646
Pulled By: zou3519
fbshipit-source-id: 4f1f4ef5fa4bc5a1b4775dfcec6ab155e5bf8d6e
* Build c10 HIP test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15233
Reviewed By: ezyang
Differential Revision: D13471002
Pulled By: bddppq
fbshipit-source-id: b42c3bc2b9db672ce50a52eb700cc6ed13d3535f
* Start unittesting our main observer (#15191)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15191
OSS:
Just splitting out basic flags from a unit test, so I can extend them in another test where I need to add additional flags.
Reviewed By: yinghai
Differential Revision: D13159184
fbshipit-source-id: 9823e792cf0ed8d0379235c44564862b7d784845
* FP16MomentumSGDUpdate Op fix and enable for ROCm (#15150)
Summary:
1. Fix a bug in FP16MomentumSGDUpdate operator
2. Enable operator for ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15150
Differential Revision: D13473145
Pulled By: bddppq
fbshipit-source-id: 4c5c5f30cb9bba658e3639dbe193fa08a304d306
* Supply static shape info to Reshape when doing onnxGetCompatibility (#15242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15242
Newer version ONNX Reshape gets shape info from a tensor. Hence for static backend, we need to provide this info to it when doing `onnxGetCompatibility` too.
Reviewed By: jackm321
Differential Revision: D13471959
fbshipit-source-id: 8a58e28edd900b6ad54a1dbd63ff2579fbe0e820
* Add several features to converting images to blobs (#15204)
Summary:
Several enhancements are implemented:
* Resize the images to lie within a boundary between min-size and max-size (which can apply to height and width). It tries to resize the minimum side to match min-size while keeping the aspect ratio. However, if the maximum side would then exceed max-size, it resizes the maximum side to equal max-size instead (and the minimum side ends up smaller than min-size). The min/max sizes are specified in the scale argument, in comma-separated form. If one of the sizes is -1, that size is not a restriction.
* Change the OpenCV resize function arguments from using cv::Size() to the x, y scale. Theoretically they should be the same, but in reality the two ways of specifying them may result in different resized outputs.
* Once the image is read in, change the data to floats. That means, after resize and other preprocessing steps, the float values are preserved (not truncated to int).
* It is possible to convert data in text format to the blob format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15204
Reviewed By: llyfacebook
Differential Revision: D13467225
Pulled By: sf-wind
fbshipit-source-id: 7da34a72d43a9603cd7ab953f5821c1222d0178f
* Create parser.cpp (#15238)
Summary:
Moves implementation into .cpp file. Parser was getting included in several compilation units.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15238
Differential Revision: D13474635
Pulled By: zdevito
fbshipit-source-id: 7dc824eea8f506d6c8ae1aa67aeec0c34d5285fc
* Tensor method rename dims()->sizes() (#15246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15246
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: igorsugak
Differential Revision: D13470369
fbshipit-source-id: ce995beab7c64bebe8b234fb5e6d015940ec2952
* Mention Jacobian-vector product in the doc of torch.autograd (#15197)
Summary:
A friend of me is learning deep learning and pytorch, and he is confused by the following piece of code from the tutorial https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients :
```python
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
He doesn't know where the following line comes from:
```python
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
```
What are we computing? Why don't we compute "the gradient of `y` w.r.t `x`"?
In the tutorial, it only says
> You can do many crazy things with autograd!
Which does not explain anything. It seems to be hard for some beginners of deep learning to understand why we ever call backward with an external gradient fed in and what the meaning of doing so is. So I modified the tutorial in https://github.com/pytorch/tutorials/pull/385
and the docstring correspondingly in this PR, explaining the Jacobian vector product. Please review this PR and https://github.com/pytorch/tutorials/pull/385 together.
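For reference, a minimal sketch (values assumed, not taken from the tutorial) of the semantics being documented: for `y = f(x)`, calling `y.backward(v)` accumulates the Jacobian-vector product `J^T v` into `x.grad`.
```python
import torch

# For y = f(x), y.backward(v) accumulates J^T @ v into x.grad, where J = dy/dx.
x = torch.randn(3, requires_grad=True)
y = x * 2                                  # here J = 2 * I, so J^T @ v == 2 * v
v = torch.tensor([0.1, 1.0, 0.0001])
y.backward(v)
print(torch.allclose(x.grad, 2 * v))       # True
```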
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15197
Differential Revision: D13476513
Pulled By: soumith
fbshipit-source-id: bee62282e9ab72403247384e4063bcdf59d40c3c
* value-based mark and sweep DCE (#14910)
Summary:
This makes DCE more granular by tracking live values/aliases through the graph (rather than just nodes). So we can be more aggressive in DCE around control flow blocks. For example, in:
```
%a0 = aten::foo()
%b = aten::foo()
%a2, %b2 = prim::If(%cond) {
  block0() {
    %a1 = aten::foo(%.0)
    %b1 = aten::foo(%b)
  } -> (%a1, %b1)
}
return (%a2)
```
we will now dce all the `%b` stuff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14910
Differential Revision: D13476445
Pulled By: suo
fbshipit-source-id: 2bf5db19711c07dde946697a4f4b270bd8baf791
* fix cholesky call in potrs example (#15215)
Summary:
Cholesky by default returns the lower triangular matrix, see [docs](https://pytorch.org/docs/stable/torch.html#torch.cholesky).
However `torch.potrs` by default requires the upper triangular matrix. The naming of the variable `u` suggests that the example expects the upper to be returned, so I've added the flag to make that happen in the example.
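A small sketch of the corrected pattern (shapes and values assumed; `torch.potrs` was the solver API at the time, later renamed `torch.cholesky_solve`):
```python
import torch

# Build a symmetric positive-definite matrix and a right-hand side.
a = torch.randn(3, 3)
a = a @ a.t() + 3 * torch.eye(3)
b = torch.randn(3, 2)

u = torch.cholesky(a, upper=True)   # ask for the upper-triangular factor explicitly
x = torch.potrs(b, u)               # potrs expects the upper factor by default
print(torch.allclose(a @ x, b, atol=1e-5))  # True
```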
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15215
Differential Revision: D13476468
Pulled By: soumith
fbshipit-source-id: 7b68035f435a2b1be4d363b3f63e407394af949d
* Fix a typo in the assert
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15265
Reviewed By: llyfacebook
Differential Revision: D13477029
Pulled By: sf-wind
fbshipit-source-id: 9c5571a583c01f9701625541ebec0c836cb923f2
* Delete ffi documentation (#15220)
Summary: Deleting FFI documentation since it's deprecated.
Differential Revision: D13477329
Pulled By: soumith
fbshipit-source-id: 0b3d485eb7cef1f05b6b397dff50f21a49d6409e
* Trivial comment correction in dataloader (#15276)
Summary:
Trivial comment correction in dataloader
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15276
Differential Revision: D13477324
Pulled By: soumith
fbshipit-source-id: 2a74a014999655d129311d611f2a09411339cb13
* Refactor hotpatch_vars and apply it to libtorch (#14976)
Summary:
Fixes #14801.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14976
Differential Revision: D13485381
Pulled By: soumith
fbshipit-source-id: 0af3c2e1b90988d56f6f85632328d1e4b788ffd2
* Fix tensor printing bug in Python 2 (#12732)
Summary:
`rsplit` doesn't have kwargs in Python 2 so this line raises an error
Fixes #15135
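A portable sketch of the incompatibility (values assumed, not the exact line from the printing code): pass `maxsplit` positionally so the call works on both interpreters.
```python
s = "1.2345e-05"
mantissa, exponent = s.rsplit("e", 1)   # works on Python 2 and Python 3
# s.rsplit("e", maxsplit=1)             # TypeError on Python 2: rsplit takes no keyword arguments
print(mantissa, exponent)               # 1.2345 -05
```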
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12732
Differential Revision: D10458630
Pulled By: driazati
fbshipit-source-id: a63e42fbc0e39e4291480775b516c98122ec05a1
* Tighten up invariants regarding StreamId. (#15125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15125
I realized that it is really bad juju if you fake a StreamId
out of thin air, because in general this isn't going to work.
So, make the constructor a lot scarier.
Most "faking StreamId out of thin air" happens because someone
just wants to put something on the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432800
fbshipit-source-id: a86991d6fc1d8aa4e54e8175e5f06f90856238e6
* Adding ONNX export for torch.expand and torch.ne (#15050)
Summary:
`torch.expand` and `torch.ne` are used often in models and this PR adds ONNX export support for them. ArmenAg has created issue https://github.com/pytorch/pytorch/issues/10882 for this.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15050
Differential Revision: D13453036
Pulled By: houseroad
fbshipit-source-id: 4724b4ffcebda6cd6b2acac51d6733cb27318daf
* Minor fixes in .jenkins/caffe2/bench.sh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15304
Differential Revision: D13493876
Pulled By: bddppq
fbshipit-source-id: 7146eb2587e526af65b4b0290c25bd55653a3088
* Fix for issue 14829 (#14908)
Summary:
* Modify the testcase as outlined in the issue
* Issue url: https://github.com/pytorch/pytorch/issues/14829
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14908
Differential Revision: D13490360
Pulled By: ezyang
fbshipit-source-id: ff11a72e19b49223652182e82c2b4e65fe444ca7
* Don't enforce docstrings on bool dispatch (#15306)
Summary:
Allows 2 functions that are boolean dispatched to have no docstrings (the only case that will fail now is if both functions have docstrings)
Fixes #15281
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15306
Differential Revision: D13494884
Pulled By: driazati
fbshipit-source-id: 65fec39ae03a7d6a68ad617c9b270faeb1617930
* Replace SwitchToDevice(0) with SwitchToDevice() (#15126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15126
I want to make people stop manufacturing StreamId from thin air,
and a first step is to make people use the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432922
fbshipit-source-id: 9f0d8d70646c50d979bde5ba3c3addeebac48a3d
* Fix the missing caffe2 proto files for Windows (#15157)
Summary:
Fixes #15156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15157
Differential Revision: D13490420
Pulled By: orionr
fbshipit-source-id: 4387d707f634a5975238af915b1befb2277f8ec7
* add isinstance static type checking for jit (#15076)
Summary:
This PR adds isinstance to do static type checking in JIT.
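A minimal sketch of the behavior this enables (annotations and function body assumed, not taken from the tests): inside `torch.jit.script`, `isinstance` is resolved against the static type of the value, and a tuple classinfo matches if any of its types match.
```python
import torch
from typing import List, Tuple

@torch.jit.script
def static_check(x: List[int], y: Tuple[int, int]) -> int:
    hits = 0
    if isinstance(x, list):             # statically known to be a list
        hits += 1
    if isinstance(y, (list, tuple)):    # tuple classinfo: true if any type matches
        hits += 1
    return hits

print(static_check([1, 2], (3, 4)))     # 2
```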
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15076
Differential Revision: D13471067
Pulled By: wanchaol
fbshipit-source-id: d39b7ed5db9fcca4b503659d02cf7795950ea8ea
* Bicubic interpolation for nn.functional.interpolate (#9849)
Summary:
Addresses #918, interpolation results should be similar to tf
* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`
The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can be refactored/moved to ATen all at once when #10482 is resolved
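A small usage sketch (input values assumed): the new mode is selected the same way as the existing ones.
```python
import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)  # NCHW input
y = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=False)
print(y.shape)  # torch.Size([1, 1, 8, 8])
```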
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849
Differential Revision: D9007525
Pulled By: driazati
fbshipit-source-id: 93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
* Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15064
Reviewed By: orionr
Differential Revision: D13474801
Pulled By: pjh5
fbshipit-source-id: 9d3664c3a3a1b6c2d9f083f8476fe3b037296b98
* Allow future type parsing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14887
Differential Revision: D13490984
Pulled By: highker
fbshipit-source-id: 165fe995867be273793f983154aa6cbce13e4396
* Port nn fold and unfold to c++
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14597
Reviewed By: ezyang
Differential Revision: D13272227
fbshipit-source-id: 6eccab5ff5830a977398a96393b778095120edc6
* caffe2/python/task: added __repr__ methods to all task definitions (#15250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15250
This adds `__repr__` methods to all of the classes under task.py. This makes the objects much easier to interact with when using them in an interactive manner, such as in a Jupyter notebook.
The default `__repr__` method just returns the object ID which is very unhelpful.
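A generic sketch of the kind of `__repr__` being added (illustrative class, not the actual task.py code):
```python
class Task(object):
    def __init__(self, name, num_instances):
        self.name = name
        self.num_instances = num_instances

    def __repr__(self):
        # Surface the interesting fields instead of the default "<Task object at 0x...>".
        return "Task(name={!r}, num_instances={})".format(self.name, self.num_instances)

print(Task("reader", 4))  # Task(name='reader', num_instances=4)
```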
Reviewed By: hanli0612
Differential Revision: D13475758
fbshipit-source-id: 6e1b166ec35163b9776c797b6a2e0d002560cd29
* Add a correctness check for C++ types to custom operators (#15247)
Summary:
The JIT uses `int64_t` for its integer type and `double` for its floating point type, but users quite often want to write `int` or `float` and that currently fails in not-so-nice ways for custom ops. This PR adds a simple `static_assert` to catch these common failure cases.
zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15247
Differential Revision: D13493941
Pulled By: goldsborough
fbshipit-source-id: c1cd0d10ab5838c75f167c0bdb57e45a0bc1344e
* Fix _apply in nn.Module (#15305)
Summary:
Fixes an issue that arose from https://github.com/pytorch/pytorch/pull/13481 where `.shared_memory()` couldn't be called. Effectively undoes all changes to `nn.Module` from that PR and solve the relevant problem in a different way (the goal was to be able to call `._apply()` on the Python wrapper for a C++ module).
soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15305
Differential Revision: D13493937
Pulled By: goldsborough
fbshipit-source-id: 4cb8687f90fc8709a536c5e7eacd0dc8edf6f750
* Reenable OpenMP by reverting the following two commits. (#15315)
Summary:
Revert "Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)"
This reverts commit a84e873bb156080ea76ab182171b1f3b4d5395f6.
Revert "Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)"
This reverts commit 8901935ad42fe9bf093d1106ea43606008a4024d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15315
Differential Revision: D13495852
Pulled By: ezyang
fbshipit-source-id: bcd3f60088b14831c53d3c171f10cd1ab6b35dee
* [TensorIterator fixing mean to output correct result for half precisi… (#14878)
Summary:
…on](#12115)
mean is calculated in two steps, sum()/numel(). For half precision, data gets
cast back to half after sum().
We fused the division into the reduction kernel by adding pre_op/post_op.
This allows us to do torch.ones(65536).cuda().half().mean() to return correct
result.
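A quick check of the example from the summary (requires a CUDA device); without the fused division, summing 65536 ones overflows fp16's maximum of 65504 before the division happens.
```python
import torch

if torch.cuda.is_available():
    # tensor(1., device='cuda:0', dtype=torch.float16)
    print(torch.ones(65536).cuda().half().mean())
```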
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14878
Differential Revision: D13491159
Pulled By: soumith
fbshipit-source-id: e83802e1628b6d2615c45e18d7acf991d143a09e
* Allow tracing with fork/wait (#15184)
Summary:
There is still a limitation on this: if a script module is somewhere
in the trace, the inputs/outputs can only be tensors or tuples of
tensors.
resolves #15052
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15184
Differential Revision: D13457691
Pulled By: highker
fbshipit-source-id: 8fe46afc41357a0eb8eadd83f687b31d074deb0e
* improve script/no script save error (#15321)
Summary:
Improves the error message for #15116
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15321
Differential Revision: D13499379
Pulled By: zdevito
fbshipit-source-id: b8dc0a83efabff74199f4aab2ee98aa41c42608b
* Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: 4bf66581d07d839f459869bc9c6428011063cc5b
* Revert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
Differential Revision:
D13383102
Original commit changeset: c434f0e0ddff
fbshipit-source-id: 690f46ca0710954fa591a5ea77535e9759db4de5
* caffe2 mobile opengl (#15322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15322
caffe2 mobile opengl code is not used, deleting it to reduce complications when we perform other changes
Reviewed By: Maratyszcza
Differential Revision: D13499943
fbshipit-source-id: 6479f6b9f50f08b5ae28f8f0bc4a1c4fc3f3c3c2
* Method returns a single argument (#15289)
Summary:
This PR changes Method (just Method not all graphs) to always have a single
return argument.
This is part 1 in a set of changes that will enable us to have better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.
This change makes it so that Method and Python handle multiple returns in the same way:
* 0 - None
* 1 - <single value>
* many - Tuple[...]
The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed values.
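A small sketch of the convention above as it appears from Python (scripted function assumed; the same mapping is described for Method):
```python
import torch

@torch.jit.script
def min_and_max(x):
    return x.min(), x.max()   # "many" return values surface as a Tuple

out = min_and_max(torch.tensor([3.0, 1.0, 2.0]))
print(type(out), out)         # <class 'tuple'> (tensor(1.), tensor(3.))
```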
Notes:
* inferTypeFrom is renamed to be more accurate and discourage use.
* This has uncovered some bugs in other components, which are noted in
the diff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15289
Differential Revision: D13481649
Pulled By: zdevito
fbshipit-source-id: 0e2242a40bb28cca2d0e8be48bede96195e4858c
* Fix the (reduce)min and (reduce)max ONNX exporting (#1…
* tox.ini -> .flake8 (#15065)
Summary:
We were only using this file to configure flake8, and fbcode linters do not recognize tox.ini which causes spurious linter warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15065
Differential Revision: D13420774
Pulled By: suo
fbshipit-source-id: e43a46befa36862c8b3c0a90074aec6a66531492
* Update onnx coverage script for more accurate result (#15029)
Summary:
The coverage of scalar-input test cases was not accurate. This patch fixes that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15029
Differential Revision: D13419764
Pulled By: zrphercule
fbshipit-source-id: a14a5cbef432bea8c9126156f5deb1125e1aeb47
* Issue 14984: Remove divide by zero error in index_put_ (#14986)
Summary:
No check for zero index tensor was done in the accumulate=True (serial) case in the new TensorIterator code since https://github.com/pytorch/pytorch/pull/13420.
https://github.com/pytorch/pytorch/issues/14984
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14986
Differential Revision: D13417861
Pulled By: colesbury
fbshipit-source-id: e6ed1af8f708b53a35803fc157ed1f043169ec89
* Supress warnings on generated tests
Summary: Removes all warnings spew for the TestJitGenerated tests
Differential Revision: D13420919
fbshipit-source-id: f251c12f923088ccc5daa2984c15003a67cbd1c1
* Split off fuser tests in test_jit.py to their own test case (#15072)
Summary:
This PR creates TestFuser inside test_jit.py to be a home for graph fuser
specific tests.
This was a useful exercise because now that all the fuser tests are in
one place, I can spot redundant and bitrotting tests for cleanup in a
future PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15072
Differential Revision: D13421458
Pulled By: zou3519
fbshipit-source-id: 80b1a7712feff75a0c186d1664601c4edbbca694
* re-enable copy of python files, but be careful that the copy is only … (#14982)
Summary:
…done once
This allows no-op builds to work correctly even when BUILD_CAFFE2_OPS is on.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14982
Differential Revision: D13413960
Pulled By: zdevito
fbshipit-source-id: 6e5412a8c375af8a47c76f548cdd31cff15f3853
* add gloo scatter support on GPU (#14917)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14917
as titled
Reviewed By: pietern
Differential Revision: D13271560
fbshipit-source-id: 0187a3390f8ebd72a2c074e7a651432159d427c0
* Remove deprecated variable_tensor_functions (#15003)
Summary:
Removing the deprecated functions in `torch/csrc/variable_tensor_functions.h` (like `torch::CPU`) and corresponding implementations from `torch/csrc/torch.cpp` from master after the release.
ezyang gchanan soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15003
Differential Revision: D13418086
Pulled By: goldsborough
fbshipit-source-id: a0accdf6f7b0efa1ec07ac7b74b86ff2da37543f
* Add error type to raise statement
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15039
Differential Revision: D13419566
Pulled By: zou3519
fbshipit-source-id: f67a3aebce937e3e640e91e81eb3e184cfdf269c
* Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
Summary:
```
This diff changes the HIPification of ATen to be out-of-place.
We now have the following mappings:
- ATen/cuda => ATen/hip
- ATen/native/cuda => ATen/native/hip
- ATen/native/sparse/cuda => ATen/native/sparse/hip
- THC => THH
- THCUNN => THHUNN
The build system is adjusted to know about these new build paths,
and HIPify is taught how to adjust include paths and
THC_GENERIC_FILE appropriately. ATen_hip is now built as
the ATen_hip library, rather than reusing ATen_cuda.
However, despite these new filepaths, none of the identifiers in ATen
have actually changed. So, e.g., THHGeneral.h still defines functions
named THC_blahblah, and HIP still shows up as CUDA in PyTorch itself.
We'll tackle this in a subsequent PR; this diff is just to get the files
out-of-place.
Minor extra improvements:
- Don't edit tmp_install when hipifying
- HIP no longer builds native_cudnn_cpp; it was unnecessary
- Caffe2_HIP_INCLUDES is now Caffe2_HIP_INCLUDE, for consistency
with all the other variables.
- HIP build now properly respects ATEN_CUDA_FILES_GEN_LIB (it
did not previously.)
- You can now override file extension matching in pyHIPIFY
by explicitly specifying its full name in the matching list.
This is used so we can HIPify CMakeLists.txt in some situations.
A little bit of string and ceiling wax:
- gen.py grows a --rocm flag so that it knows to generate CUDA
files which actually refer to the HIP headers (e.g., THH.h)
We'll get rid of this eventually and generate real HIP files,
but not for this PR.
- Management of HIP dependencies is now completely deleted
from the ATen CMakeLists.txt. The old code was dead (because
it was shoveled in ATen_CUDA_DEPENDENCY_LIBS and promptly
ignored by the Caffe2 build system) and didn't actually work.
```
Stacked on https://github.com/pytorch/pytorch/pull/14849 review last commit only
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14866
Differential Revision: D13419475
Pulled By: ezyang
fbshipit-source-id: cb4c843df69a1d8369314c9fab1b7719520fa3db
* Add at::scalar_tensor factory function, use it instead of Type.scalar… (#15074)
Summary:
…_tensor.
This is part of a long series of paring down the Type interface.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15074
Differential Revision: D13421482
Pulled By: gchanan
fbshipit-source-id: 84010ee71fef2cb74d32d5de7858d8ed9f36b885
* Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795
Reviewed By: ezyang
Differential Revision: D13336856
fbshipit-source-id: 5375d0e42312ff7564f4df06210a5e49542d59e3
* Fix include paths for TensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14816
Reviewed By: ezyang
Differential Revision: D13348040
fbshipit-source-id: a7204d89c2dd277d13093b0ed862f40b53dee82f
* Move UndefinedTensorImpl to c10 (meh) (#14817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14817
unfortunately, we still need this.
Reviewed By: ezyang
Differential Revision: D13348041
fbshipit-source-id: e8dcc89f5c71bd1ea2c9813990dac6e58e63b1fd
* Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818
Reviewed By: ezyang
Differential Revision: D13348042
fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
* add gloo support for gather on GPU (#14916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14916
as titled
Reviewed By: pietern
Differential Revision: D13267832
fbshipit-source-id: 3b89d08af93f74941f17ff892c33fc2a4a023c19
* Pre-commit flake8/clang-tidy (#15102)
Summary:
Provide a pre-commit hook that does flake8 and clang tidy checks. Enables the clang-tidy script to run in parallel to make it fast enough to be used in a pre-commit hook.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15102
Reviewed By: soumith
Differential Revision: D13429629
Pulled By: zdevito
fbshipit-source-id: bd52fe5652f29b033de8d9926d78350b2da4c2fc
* Update the output format for benchmark_helper. It outputs the dimensi… (#15108)
Summary:
…on first and all the values in the next line. This way, it can output arbitrary blobs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15108
Reviewed By: llyfacebook
Differential Revision: D13429346
Pulled By: sf-wind
fbshipit-source-id: 5e0bba2a46fbe8d997dfc3d55a698484552e3af8
* Fix serialization (#15033)
Summary:
Fixes a bug where a hierarchy of submodules, in which one submodule doesn't have any parameters but its own submodules do, doesn't get properly (de-)serialized and loaded. This had to do with the fact that the old protobuf format couldn't store empty parameters.
Fixes https://github.com/pytorch/pytorch/issues/14891
soumith ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15033
Differential Revision: D13411322
Pulled By: goldsborough
fbshipit-source-id: 2ef73b2aa93fa9e46b1cbe1fd47d9f134d6016d5
* Remove linker and dlopen flags that allowed undefined symbols in rocm build (#15091)
Summary:
Previously the undefined symbols were caused by disabled_modules in tools/amd_build/disabled_features.json (now it's cleared).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15091
Differential Revision: D13429595
Pulled By: bddppq
fbshipit-source-id: b341e83f9e5a8d16440a364e837b045a8a4fd6e1
* Add EmptyNameScope to allow you jump out from current scope. (#14631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14631
adding an empty name scope to allow people to jump out from the current namescope.
This could be useful when you want to access a blob from a parent or sibling scope.
Facebook:
e.g.: we encountered a potential use case in D13124249 (it's a large diff, please search for EmptyNameScope in that diff), where we need to access a blob declared in the root namescope from a device namescope (device namescopes are used by the parallel_GPU API). `EmptyNameScope` can help us do that with ease.
I referenced to `EmptyDeviceScope` D6103412 while implementing this one.
Reviewed By: yinghai
Differential Revision: D13272240
fbshipit-source-id: d4cde5abcc2336e456b6c6ef086266ef94d86da8
* Use c10::to_string that works cross platform (#15117)
Summary:
Fix master breakage introduced in #15108
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15117
Differential Revision: D13430568
Pulled By: bddppq
fbshipit-source-id: ce10bc552f085d1bf0afbc13119991bee014ac95
* Don't setup x86_64-linux-gnu-gcc as an sccache wrapper. (#15078)
Summary:
When I do this setup in a local Docker development environment,
I get the following error:
x86_64-linux-gnu-gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Somehow, gcc seems to get confused when it gets run from the wrong
directory. Best not to do it.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15078
Differential Revision: D13432143
Pulled By: ezyang
fbshipit-source-id: b18e15f493503a4c8205c85f92a214e49762a7bc
* fix some tests that I accidentally disabled (#15077)
Summary:
While moving these scenarios into `_test_dim_ops` I accidentally left an empty loop in the actual tests, causing them to do nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15077
Differential Revision: D13428759
Pulled By: umanwizard
fbshipit-source-id: 08f53068981d9192c1408878b168e9053f4dc92e
* Add better support for bools in the graph fuser (#15057)
Summary:
Fixes #15038.
aten::_cast_Float(tensor, non_blocking) support was added in #14336.
Its second argument is a bool, but because we don't support generating values
of type bool in the fuser codegen, the codegen errored out.
aten::_cast_Float in the fuser never actually uses its non_blocking
argument, so another way to fix this would be to have a special op for a
fused cast but I thought that we might have fusible ops that do take
bool arguments in the future so this would be good to have.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15057
Differential Revision: D13432091
Pulled By: zou3519
fbshipit-source-id: 455fe574f5f080aca9a112e346b841a2534a8dc3
* Ensure there aren't variables in checked_tensor_unwrap, checked_tenso… (#15105)
Summary:
…r_list_unwrap.
These functions use unsafeGetTensorImpl(), which doesn't work with Variables (in a silent way that may blow up later).
So let's do early checking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15105
Reviewed By: ezyang
Differential Revision: D13429149
Pulled By: gchanan
fbshipit-source-id: b85f6f5b7cdb9a6dd0c40205b924c840a3920ba0
* fix infinite loop when get_max_threads is nonzero but num_threads is 1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15114
Differential Revision: D13431891
Pulled By: umanwizard
fbshipit-source-id: f968b8e50cf776c346d4a28d72b12e7856c95839
* Kill Type.storage. (#15075)
Summary:
It's not used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15075
Reviewed By: ezyang
Differential Revision: D13422487
Pulled By: gchanan
fbshipit-source-id: 272aa0a10e96f3ffb97d571490b517f972b9dcf7
* Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14248
This diff also introduces a horrifying hack to override CUDA's DeviceGuardImpl
with a HIPGuardImplMasqueradingAsCUDA, to accommodate PyTorch's current
behavior of pretending CUDA is HIP when you build with ROCm enabled.
Reviewed By: bddppq
Differential Revision: D13145293
fbshipit-source-id: ee0e207b6fd132f0d435512957424a002d588f02
* Stop erroneously running aten::warn (#15124)
Summary:
Fixes #15119. Before this PR, we were propagating constants through
aten::warn AND running it as a part of shape analysis.
This caused aten::warn to be run regardless of whether it is
supposed to be run dynamically. This PR adds an exclusion for aten::warn
in constant propagation and shape analysis, similar to that of prim::RaiseException.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15124
Differential Revision: D13432815
Pulled By: zou3519
fbshipit-source-id: 15ab533ce2accb2da3fd4e569070c7979ce61708
* Move numa.{h, cc} to c10/util (#15024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15024
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: dzhulgakov
Differential Revision: D13380559
fbshipit-source-id: abc3fc7321cf37323f756dfd614c7b41978734e4
* Move adaptive avg pooling 2d to ATen native (#14714)
Summary:
adaptive_avg_pool1d, adaptive_avg_pool2d, and adaptive_avg_pool3d are neural network functions that are currently implemented in our legacy THNN (CPU) / THCUNN (CUDA) libraries. It is generally better if these live in our new library ATen, since it is more feature complete and reduces cognitive overhead.
This change moves adaptive_avg_pool1d and adaptive_avg_pool2d to ATen.
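A usage sketch of one of the moved functions (shapes assumed); the Python-facing API is unchanged by the port.
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 17, 23)
y = F.adaptive_avg_pool2d(x, (5, 7))   # any input spatial size -> requested output size
print(y.shape)                          # torch.Size([1, 3, 5, 7])
```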
timed relevant cpu tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 6.273s
OK (skipped=7)
real 0m7.164s
user 3m1.289s
sys 0m0.905s
```
compared to master:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 7.232s
OK (skipped=7)
real 0m8.065s
user 3m34.714s
sys 0m2.440s
```
also timed relevant cuda tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 21.049s
OK
real 0m24.106s
user 0m20.890s
sys 0m4.026s
```
compared to master
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 23.021s
OK
real 0m27.095s
user 0m20.121s
sys 0m3.668s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14714
Differential Revision: D13384084
Pulled By: xnder
fbshipit-source-id: 344442103ccbbda72d3c010d2feea00e9985d226
* Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.
* Adds `my_script_module._get_method('forward').schema()` method to get function schema from a `ScriptModule` (see the sketch after this list)
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
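A sketch of the schema call named above (module definition assumed; the `_get_method('forward').schema()` spelling is quoted from this summary and may differ in later releases):
```python
import torch

class MyModule(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        return x + 1

m = MyModule()
print(m._get_method('forward').schema())  # prints the function schema for forward
```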
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912
Differential Revision: D13385928
Pulled By: driazati
fbshipit-source-id: e391691b2f87dba6e13be05d4aa3ed2f004e31da
* Minor documentation mistake (#15068)
Summary:
keepdim is an optional parameter for torch.max()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15068
Differential Revision: D13437745
Pulled By: zou3519
fbshipit-source-id: b5198c7d4ae17758cd136f6e5aecc6cb5838f174
* Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
Summary:
This is an optimized implementation that does the following:
1. created an empty Tensor of correct size.
2. fill the Tensor with correct values.
The following three designs to fill in the Tensor result in roughly the same performance. Hence, the 2nd option is taken for simpler code, and to return contiguous tensors.
1. Sequential: fill row coordinates first, then columns. This results in two for-loop and more arithmetic operations.
2. Interleaved: fill in index coordinates one by one, which jumps between the two output Tensor rows in every iteration.
3. Transpose: create a n X 2 Tensor, fill the Tensor sequentially, and then transpose it.
<img width="352" alt="screen shot 2018-12-10 at 3 54 39 pm" src="https://user-images.githubusercontent.com/16999635/49769172-07bd3580-fc94-11e8-8164-41839185e9f9.png">
NOTE:
This implementation returns a 2D tensor, instead of a tuple of two tensors. It means that users will not be able to do the following:
```python
x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
x[i] # need to first convert the 2D tensor into a tuple of two 1D tensors.
```
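A small sketch of the conversion mentioned in the comment above (values assumed): split the 2D index tensor into per-dimension index tensors before using it for advanced indexing.
```python
import torch

x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
lower = x[i[0], i[1]]     # or equivalently x[tuple(i)]
print(lower.shape)        # torch.Size([6])
```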
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14904
Reviewed By: zou3519
Differential Revision: D13433027
Pulled By: mrshenli
fbshipit-source-id: 41c876aafcf584832d7069f7c5929ffb59e0ae6a
* Optimize CPU GenerateProposals op by lazily generating anchors (3-5x faster) (#15103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15103
There are two main optimizations in this diff:
1. Previously, we generated all anchors for every single spatial grid first, and then applied
NMS to pick 2000 anchors according to RPN_PRE_NMS_TOP_N. First sorting the
scores, picking the top 2000, and then lazily generating only the
corresponding anchors is much faster.
2. Transposing bbox_deltas from (num_anchors * 4, H, W) to
(H, W, num_anchors * 4) was also quite slow - taking about 20ms in the RRPN
case when there are lots of anchors, while it's negligible for the RPN case (around
0.1 ms). Instead of transposing, performing all operations in the
(num_anchors, H, W) format speeds things up.
For regular RPN scenario, this gives 5x speedup from 5.84ms to 1.18ms a case
with 35 anchors over a 600x600 image.
For rotated boxes with 245 anchors, the runtime down from 80ms to 27ms per
iter.
Reviewed By: newstzpz
Differential Revision: D13428688
fbshipit-source-id: 6006b332925e01a7c9433ded2ff5dc9e6d96f7d3
* use ROCm 1.9.2 fp16 capabilities in rocBLAS and MIOpen interfaces (#14994)
Summary:
* relax MIOpen if statement to allow fp16/fp32 mixed precision training now supported by ROCm 1.9.2
* use gemm_ex API of rocBLAS in ROCm 1.9.2 instead of the previous hgemm API
* with this: enable all but one half test in test_nn
While there, also fix:
* a group convolution issue w/ MIOpen, pertaining to properly initializing MIOpen on multi-GPU systems, that we detected while working on this
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14994
Differential Revision: D13439869
Pulled By: bddppq
fbshipit-source-id: 75e4eb51a59488882e64b5eabdc30555b25be25e
* Add back c2 string_utils include header to benchmark_helper
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15143
Differential Revision: D13439694
fbshipit-source-id: 78698b66d52a0178118cbf3e79a7a5ad1763d47b
* Export defs.bzl to open source for pytorch (#15132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15132
Pull Request resolved: https://github.com/facebook/fbshipit/pull/64
Reviewed By: dzhulgakov
Differential Revision: D13424093
fbshipit-source-id: bbebef964b9f3aef8f59cd394eca068680c36b5a
* docs: minor spelling tweaks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15148
Differential Revision: D13443708
Pulled By: suo
fbshipit-source-id: 5e3ec0afd3416ab8ce207f2d04105c49e1c04611
* don't compile dnnlowp.cc in avx2 option (#15147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15147
Forgot to take out dnnlowp.cc from avx2 list in a previous diff.
Reviewed By: dskhudia
Differential Revision: D13440686
fbshipit-source-id: 9ada98b6e885c7d5f22c91a735ff60304480b4cb
* Autoformat build_variables.py (#15152)
Summary:
autoformat `tools/build_variables.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15152
Differential Revision: D13445343
Pulled By: goldsborough
fbshipit-source-id: fd63588de114cb92deda03fa1a0b36f5f9082b2f
* Fix resize for edge case tensors (#14874)
Summary:
Certain tensor shapes failed when being resized. This pull request addresses the bug found in #13404.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14874
Differential Revision: D13429788
Pulled By: soumith
fbshipit-source-id: 8aa6451dbadce46d6d1c47a01cb26e6559bcfc8c
* Implementation of ChannelShuffle Op for MKLDNN (#15106)
Summary:
The speed-up of a single operation is up to 3X.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15106
Differential Revision: D13429596
Pulled By: bddppq
fbshipit-source-id: f8d987cafeac9bef9c3daf7e43ede8c6a4ee2ce5
* support casting to string (#15110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15110
support casting to string on CPU
Reviewed By: intermilan
Differential Revision: D13429381
fbshipit-source-id: b737a1ba1237b10f692d5c42b42a544b94ba9fd1
* Remove "early-release beta" disclaimer from README (#15136)
Summary:
Now that PyTorch 1.0 is out, this should be updated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15136
Differential Revision: D13447377
Pulled By: soumith
fbshipit-source-id: bd4e662c53d0699f25d4d90c1b4c1e182b4427c2
* Disable strict-overflow flag to avoid compilation error (#14977)
Summary:
Disable strict-overflow flag to avoid compilation error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14977
Differential Revision: D13447577
Pulled By: soumith
fbshipit-source-id: 1957bd5aa3c7b79219da3dd53560464977c89526
* minimize header file includes from _avx2.cc (#14950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14950
Minimize the number of headers included from _avx2.cc files to avoid accidental compilation of functions defined in header files that are reused by other translation units, which can lead to illegal instruction errors.
Reviewed By: dskhudia
Differential Revision: D13394483
fbshipit-source-id: 67149a6fb51f7f047e745bfe395cb6dd4ae7c1ae
* Removes THCNumerics usages in RNN.cu (#15085)
Summary:
We don't need THCNumerics here since at::Half can be implicitly converted to float and the cuda math dispatches are handled by `/usr/local/cuda/include/crt/math_functions.hpp` and `cmath`. ATen should be free of THCNumerics after this and when porting kernels from THC, one should not use THCNumerics.
Should close: https://github.com/pytorch/pytorch/issues/11878
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15085
Differential Revision: D13447558
Pulled By: soumith
fbshipit-source-id: 4ff5cbf838edcd01e2d1397e4d7f4f920e9e9fc3
* Reuse KernelSpec for FusionGroups with equivalent graphs (#14541)
Summary:
Before this PR, loop unrolling + the graph fuser was creating multiple
FusionGroups with the same bodies (with different variable names) for
JIT LSTMs. Each FusionGroup got registered to a separate fusion key;
each key resulted in a different compilation for the same
specializations.
This PR makes it so that when registering FusionGroups with the fusion
compiler, the compiler first checks the KernelSpec cache to see if the
FusionGroup's graph exists already. If it does, then return the
corresponding KernelSpec's key to share compiled kernels.
In addition, graphs in the KernelSpec cache are canonicalized before
being cached. I added a flag to the canonicalize pass to remove unique
names of values.
This shortens the compile time for a JIT LSTM (seq_len of 100, loop
unroll factor of 8) from 5.3s to 2.3s. Most of this compile time is
running the graph fuser and/or fusion compiler; while this PR
makes it so that there is only one unique kernel in the forward pass,
there are a lot of different kernels (6) in the backward pass
(after loop unrolling) that should be investigated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14541
Differential Revision: D13324487
Pulled By: zou3519
fbshipit-source-id: b841d82ed35a959b5cfc72db033bf5a7b42cc4fb
* Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on Python side).
I've added quite a bunch of tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.
The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.
apaszke zdevito
CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481
Differential Revision: D12981996
Pulled By: goldsborough
fbshipit-source-id: 147370d3596ebb0e94c82cec92993a148fee50a7
* Unify SparseTensorImpl::size_ and TensorImpl::sizes_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15130
Differential Revision: D13434981
Pulled By: VitalyFedyunin
fbshipit-source-id: 98bd4d66834a3c3d2ea577adb0c8413852da095d
* Fix bincount for non-contiguous inputs on CPU (#15109)
Summary:
Fixes #15058.
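A repro-style sketch (values assumed): a strided view is non-contiguous, and `bincount` on it should match `bincount` on a contiguous copy.
```python
import torch

x = torch.tensor([0, 9, 1, 9, 2, 9, 3, 9])[::2]   # non-contiguous view: [0, 1, 2, 3]
print(x.is_contiguous())                            # False
print(torch.bincount(x))                            # tensor([1, 1, 1, 1])
```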
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15109
Differential Revision: D13447448
Pulled By: soumith
fbshipit-source-id: 56e8d42934538fb00465105a2c5ccfeb7c18a651
* Use a pool of per-thread cudnn handles for each device, updated (#15080)
Summary:
Rebased version of https://github.com/pytorch/pytorch/pull/14861, hopefully addressing ezyang's comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15080
Differential Revision: D13440858
Pulled By: ezyang
fbshipit-source-id: 1c6af5c53538b81c6b92cf1dda231ed333f28035
* Fix typo (#15045)
Summary:
Simple typo fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15045
Reviewed By: dzhulgakov
Differential Revision: D13413509
Pulled By: houseroad
fbshipit-source-id: be66700c30d038368b1433232a4e3fd9299c83d6
* Delete defunct USE_SIMPLE_BASE_CTOR_DTOR (#15144)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15144
Differential Revision: D13440872
Pulled By: ezyang
fbshipit-source-id: 2b1d73fac0c63729ba01d8f129642334ae9d9cf3
* Kill non-forward, non-backward functions generated from nn.yaml (#15127)
Summary:
Updating binding to legacy functions.
Remove unused declarations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15127
Differential Revision: D13433405
Pulled By: VitalyFedyunin
fbshipit-source-id: 58544d38affd20818742338c9eb789d9d14ccbaa
* Fix old tensor OutputTensorCopyFrom usage in ImageInput operator (#15094)
Summary:
cc jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15094
Differential Revision: D13451898
Pulled By: bddppq
fbshipit-source-id: 27906be62fb88aaa13c257441a2e35a285b445ee
* Use std::vector instead of alloca to work around hcc crash
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15175
Differential Revision: D13453708
Pulled By: bddppq
fbshipit-source-id: f8c147ae9f679e395fee9d4c73ebcca052c9a752
* Tensor construction codemod(ResizeLike) - 5/7 (#15084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15084
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419711
fbshipit-source-id: dd2b740c3f13d8087085bafc5571aaf908d1af42
* Tensor construction codemod(ResizeLike) - 6/7 (#15137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15137
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419736
fbshipit-source-id: f4ad7b9582c2f809258169b7fef9adbca7063d99
* Replace non-printable-ascii characters in ProtoDebugString (#14918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14918
When ProtoBuf-Lite is in use, ProtoDebugString just calls SerializeAsString.
This produces binary output, which is not a very suitable "debug" string.
Specifically, we've observed it causing problems when calling code tries to
add the debug string to a Java exception message (which requires valid UTF-8).
Now, we replace all non-ASCII bytes with "?".
This is not a very fast implementation, but generating debug strings shouldn't
be a performance-sensitive operation in any application.
Reviewed By: dzhulgakov
Differential Revision: D13385540
fbshipit-source-id: 8868172baf20efaf53fecf7d666a6980f59b64f5
* Tensor construction codemod(ResizeLike) - 4/7 (#15088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15088
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419682
fbshipit-source-id: 3e59403bc1c0e71e5cb66df932ed0c6a0a72e643
* Remove _finfo; replace _finfo usage with torch.finfo (#15165)
Summary:
This PR removes the usage of _finfo defined in torch.distributions.utils and changes the call sites
to use torch.finfo instead
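A usage sketch of the public replacement; `torch.finfo` exposes the machine limits that the private `_finfo` helper provided.
```python
import torch

f = torch.finfo(torch.float32)
print(f.eps, f.tiny, f.max)   # machine epsilon, smallest normal number, largest value
```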
Differential Revision: D13451936
Pulled By: soumith
fbshipit-source-id: 6dbda3a6179d9407bc3396bf1a2baf3e85bc4cf2
* Run ONNX cuda backend test cases via ROCm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15069
Differential Revision: D13427757
Pulled By: bddppq
fbshipit-source-id: ba0273d75986cd5b146f7041a83c63ddf9c6c0cf
* Remove disabled_features in hipify
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15098
Reviewed By: ezyang
Differential Revision: D13453762
Pulled By: bddppq
fbshipit-source-id: e177042c78f5bf393163d660c25b80285353853d
* Add missing caffe2_hip extension in setup.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15189
Reviewed By: orionr
Differential Revision: D13457644
Pulled By: bddppq
fbshipit-source-id: c2363e9b8fd21709b62777e5b2199f01ec1c65f8
* Enable performance-unnecessary-value-param in .clang-tidy (#15026)
Summary:
This PR fixes around 250 places in the codebase where we were making unnecessary copies of objects (some large, some small).
ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15026
Differential Revision: D13458784
Pulled By: goldsborough
fbshipit-source-id: be5148b2ce09493588d70952e6f6d6ff5ec5199b
* Remove TensorImpl -> Type dependency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15086
Reviewed By: dzhulgakov
Differential Revision: D13425628
fbshipit-source-id: 08a8a774d17b071367454e027012a02f96d177d4
* Support torch.tensor in script (#14913)
Summary:
Adding support for torch.tensor in script.
The input list is typed as t[], because it can be arbitrarily nested. I added a compile-time check that the inner type of the list is a bool, float, or int.
Also adds specialization for Boolean Lists, which already existed at the ivalue level but had not been added to the compiler yet
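A minimal sketch of the feature described (function body assumed): `torch.tensor` called inside a scripted function with a nested literal list.
```python
import torch

@torch.jit.script
def make_tensor():
    return torch.tensor([[1.0, 2.0], [3.0, 4.0]])   # nested list of floats

print(make_tensor())
```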
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14913
Differential Revision: D13407930
Pulled By: eellison
fbshipit-source-id: d17f1195a22149d5b0d08d76c89a7fab8444f7c5
* For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem (#15113)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15113
cv::rotatedRectangleIntersection has a known float underflow bug that would cause failure in ```CV_Assert(intersection.size() <= 8)```
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem.
Otherwise, when ```USE_CPP_GENERATE_PROPOSALS = true```, the training would fail.
Reviewed By: viswanathgs
Differential Revision: D13429770
fbshipit-source-id: 5e95d059f3c668f14059a0a83e8e53d8554cdb99
* Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656
This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.
dzhulgakov: Please comment on if you think it still makes sense to land this even though it's not blocking anymore since we're going to move at::CopyBytes anyhow.
ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.
Reviewed By: ezyang
Differential Revision: D13287688
fbshipit-source-id: 17800ca8a79ab1633f23be58d96f99a160d8ed24
* Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858
This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor
Reviewed By: ezyang
Differential Revision: D13365817
fbshipit-source-id: bc73b27a793602cb14200dcdf357aa63233da43c
* add erf and erfc to fuser/autodiff
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15139
Differential Revision: D13455690
Pulled By: soumith
fbshipit-source-id: b06e5f5d362869c2e5fa11a52f9450d77c30d4cb
* Fix numpy conversion for int8 tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15194
Differential Revision: D13459270
Pulled By: li-roy
fbshipit-source-id: 605534add263860a3ad9a7fa70888301ee0bf8e4
* Fix derivative for mvlgamma (#15049)
Summary:
Fixes #15015.
Added tests to validate derivative.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15049
Reviewed By: soumith
Differential Revision: D13434117
Pulled By: zou3519
fbshipit-source-id: 4a292600af9eb08b67c0f8b5482e9512aac95e72
* caffe2 - easy - Create test_util to make it easier to write C++ unit tests (#15014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15014
Currently it looks like many of the simple operations such as comparing tensors, creating tensors, fetching tensors... are too verbose and take effort to write correctly in unit tests.
Easy-to-use utilities are important for increasing productivity when writing unit tests. While caffe2 Python unit tests are relatively easy to write at the moment, the C++ side seems lacking.
In this change I create a test_util, started with assertsTensorEquals, getTensor, createTensor, and we can start putting more easy to use utilities there.
Reviewed By: salexspb
Differential Revision: D13370461
fbshipit-source-id: bee467a127e1d032ef19482f98aa5c776cf508c0
* caffe2 - easy - test utils to create operator (#15180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15180
Test utils to create an operator
On top of D13370461
Reviewed By: ZolotukhinM
Differential Revision: D13382773
fbshipit-source-id: a88040ed5a60f31d3e73f1f958219cd7338dc52e
* caffe2 - easy - test utils to fill tensors (#15019)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15019
Put some utils to fill tensors to test_utils
Reviewed By: salexspb
Differential Revision: D13386691
fbshipit-source-id: 51d891aad1ca12dc5133c0352df65b8db4f96edb
* caffe2 - easy - test utils to compare tensors in two workspaces (#15181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15181
Add test utils to compare tensors in two workspaces
Reviewed By: ZolotukhinM
Differential Revision: D13387212
fbshipit-source-id: e19d932a1ecc696bd0a08ea14d9a7485cce67bb2
* caffe2 - easy - test utils for tensor assertion (#15020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15020
Add test utils for assertion of a tensor (sizes and values)
Reviewed By: salexspb
Differential Revision: D13401146
fbshipit-source-id: bc385df074043e03ea884940b5631b96de4a607e
* caffe2 - easy - utils to set argument of operator (#15022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15022
Add setArgument testing utils to make it easy to set arguments for an operator
Reviewed By: yinghai
Differential Revision: D13405225
fbshipit-source-id: b5c1859c6819d53c1a44718e2868e3137067df36
* caffe2 - make DataRandomFiller usable in unit tests (#15027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15027
- Make DataRandomFiller able to accept input_dims and input_types for only non-intermediate inputs. Add a helper to fill inputs directly into a workspace
Reviewed By: highker
Differential Revision: D13408345
fbshipit-source-id: 5fc54d33da12e3f0a200e79380d4c695b0339b17
* Revert D13407930: [pytorch][PR] Support torch.tensor in script
Differential Revision:
D13407930
Original commit changeset: d17f1195a221
fbshipit-source-id: f4458872c48ec4a2c9983b21ed90bcdc0ae665b7
* Tensor construction codemod(ResizeLike) - 3/7 (#15122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15122
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision: D13419643
fbshipit-source-id: 65b5a037b94d458b944d51f790ba2829db1fb530
* Better tests/support for Python/C++ inter-op (#15193)
Summary:
Methods like `module.named_modules()` return a container of `shared_ptr<nn::Module>`. Currently the `nn::Module` base class does not have Python bindings. This PR fixes that and adds more unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15193
Differential Revision: D13458713
Pulled By: goldsborough
fbshipit-source-id: 4091fe1b96a1be8db14c6a4307fbacc2b41ff6fe
* Refactor caffe2 CI scripts and add benchmark scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14575
Differential Revision: D13468049
Pulled By: bddppq
fbshipit-source-id: e73bc8742c8a03f498816eee8a72b06a3e19fe48
* Enable all clang-tidy performance checks (#15198)
Summary:
This PR adds the final set of clang-tidy checks for our codebase: a last batch of performance-related checks. Most fixes here change `auto` to `const auto&` in a few places where unnecessary copies were made, and add `reserve()` calls before loops doing repeated `push_back()`. There are also a few cases of calling `std::string::find` with a single-character string literal instead of a single char, which uses a less efficient string search algorithm meant for larger substrings.

ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15198
Differential Revision: D13468797
Pulled By: goldsborough
fbshipit-source-id: 2bed1ea1c7c162b7f3e0e1026f17125e88c4d5b2
* Remove __forceinline__ hipification step. (#15229)
Summary:
The HIP definition now correctly contains the inline attribute.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15229
Differential Revision: D13470962
Pulled By: bddppq
fbshipit-source-id: 34f8361bda5f3dce20a2eeb530c3a25d1b1bdd06
* Fix jit doc codeblocks and tables (#15227)
Summary:
Some of the code blocks were showing up as normal text, and the "unsupported modules" table was formatted incorrectly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15227
Differential Revision: D13468847
Pulled By: driazati
fbshipit-source-id: eb7375710d4f6eca1d0f44dfc43c7c506300cb1e
* enabled tests in test_nn, test_cuda and test_sparse (#15232)
Summary:
These tests work on ROCm 1.9.2 as present on CI (fp16 bringup, hipMemset, and sparse improvements).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15232
Differential Revision: D13470991
Pulled By: bddppq
fbshipit-source-id: 45acc4f9ea5baaaf7672b86eb022948055779925
* Revert D13440858: [pytorch][PR] Use a pool of per-thread cudnn handles for each device, updated
Differential Revision:
D13440858
Original commit changeset: 1c6af5c53538
fbshipit-source-id: fda42ea75000d4a4e9c4a8eeaaa5518f7ad9c298
* Do not ifdef __launch_bounds__ out for ROCm. (#15228)
Summary:
The compiler understands it and profits from knowing it by not using too
many VGPRs, since it otherwise assumes the default workgroup size of 256.
Fixes a problem in bringup of ROCm 2.0 on gfx906.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15228
Differential Revision: D13470950
Pulled By: bddppq
fbshipit-source-id: f9aa44c7c95299a099c0ea9317b9044cc056acc5
* fix an issue where two rules build the same .py files
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15230
Differential Revision: D13471625
Pulled By: zdevito
fbshipit-source-id: a982413a308c7a9bb5b6a82fe96fd3de44f555aa
* Preserve module hierarchy on traced modules (#15101)
Summary:
We need this, for example, to properly call `_unpack` when we have a traced module in the hierarchy
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15101
Differential Revision: D13468467
Pulled By: jamesr66a
fbshipit-source-id: c2b6740b12cde6e23395d12e42d4fc2c4c7ca3f2
* record unit time in torch.cuda.event (#15221)
Summary: Record unit of time for torch.cuda.Event's elapsed_time
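For reference, a minimal sketch of the timing pattern this doc change covers (elapsed_time reports milliseconds); the workload below is purely illustrative:
```python
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
torch.randn(1024, 1024, device='cuda').matmul(torch.randn(1024, 1024, device='cuda'))
end.record()

torch.cuda.synchronize()          # wait for the recorded events to complete
print(start.elapsed_time(end))    # elapsed time in milliseconds
```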
Differential Revision: D13467646
Pulled By: zou3519
fbshipit-source-id: 4f1f4ef5fa4bc5a1b4775dfcec6ab155e5bf8d6e
* Build c10 HIP test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15233
Reviewed By: ezyang
Differential Revision: D13471002
Pulled By: bddppq
fbshipit-source-id: b42c3bc2b9db672ce50a52eb700cc6ed13d3535f
* Start unittesting our main observer (#15191)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15191
OSS:
Just splitting out basic flags from a unit test, so I can extend them in another test where I need to add additional flags.
Reviewed By: yinghai
Differential Revision: D13159184
fbshipit-source-id: 9823e792cf0ed8d0379235c44564862b7d784845
* FP16MomentumSGDUpdate Op fix and enable for ROCm (#15150)
Summary:
1. Fix a bug in FP16MomentumSGDUpdate operator
2. Enable operator for ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15150
Differential Revision: D13473145
Pulled By: bddppq
fbshipit-source-id: 4c5c5f30cb9bba658e3639dbe193fa08a304d306
* Supply static shape info to Reshape when doing onnxGetCompatibility (#15242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15242
Newer versions of ONNX Reshape get shape info from a tensor. Hence, for the static backend, we need to provide this info when doing `onnxGetCompatibility` too.
Reviewed By: jackm321
Differential Revision: D13471959
fbshipit-source-id: 8a58e28edd900b6ad54a1dbd63ff2579fbe0e820
* Add several features to converting images to blobs (#15204)
Summary:
Several enhancements are implemented:
* Resize the images to fall within a boundary between min-size and max-size (which can apply to height or width). It tries to resize the smaller dimension to match min-size while keeping the aspect ratio. However, if that would make the larger dimension exceed max-size, it resizes the larger dimension to equal max-size instead (and the smaller dimension ends up below min-size). The min/max sizes are specified in the scale argument, in comma-separated form. If one of the sizes is -1, that size is not a restriction.
* Change the OpenCV resize function arguments from using cv::Size() to the x, y scale. Theoretically they should be the same, but in practice the two ways of specifying them may result in different resized outputs.
* Once the image is read in, convert the data to floats. That way, after resizing and other preprocessing steps, the float values are preserved (not truncated to int).
* It is possible to convert data in text format to the blob format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15204
Reviewed By: llyfacebook
Differential Revision: D13467225
Pulled By: sf-wind
fbshipit-source-id: 7da34a72d43a9603cd7ab953f5821c1222d0178f
* Create parser.cpp (#15238)
Summary:
Moves the implementation into a .cpp file; the parser was getting included in several compilation units.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15238
Differential Revision: D13474635
Pulled By: zdevito
fbshipit-source-id: 7dc824eea8f506d6c8ae1aa67aeec0c34d5285fc
* Tensor method rename dims()->sizes() (#15246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15246
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: igorsugak
Differential Revision: D13470369
fbshipit-source-id: ce995beab7c64bebe8b234fb5e6d015940ec2952
* Mention Jacobian-vector product in the doc of torch.autograd (#15197)
Summary:
A friend of mine is learning deep learning and PyTorch, and he is confused by the following piece of code from the tutorial https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients :
```python
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
He doesn't know where the following line comes from:
```python
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
```
What are we computing? Why don't we compute "the gradient of `y` w.r.t. `x`"?
The tutorial only says
> You can do many crazy things with autograd!
which does not explain anything. It can be hard for deep learning beginners to understand why we ever call backward with an external gradient fed in and what doing so means. So I modified the tutorial in https://github.com/pytorch/tutorials/pull/385
and the docstring correspondingly in this PR, explaining the Jacobian-vector product. Please review this PR and https://github.com/pytorch/tutorials/pull/385 together.
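As a sketch (not taken from the PR itself), the relationship the new docstring describes can be checked directly; the values below are illustrative:
```python
import torch

# For y = f(x), y.backward(v) fills x.grad with J^T v, where J is the
# Jacobian dy/dx. Here f(x) = 2*x, so J = 2*I and x.grad should equal 2*v.
x = torch.randn(3, requires_grad=True)
y = x * 2
v = torch.tensor([0.1, 1.0, 0.0001])
y.backward(v)
print(x.grad)  # equals 2 * v
```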
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15197
Differential Revision: D13476513
Pulled By: soumith
fbshipit-source-id: bee62282e9ab72403247384e4063bcdf59d40c3c
* value-based mark and sweep DCE (#14910)
Summary:
This makes DCE more granular by tracking live values/aliases through the graph (rather than just nodes). So we can be more aggressive in DCE around control flow blocks. For example, in:
```
%a0 = aten::foo()
%b = aten::foo()
%a2, %b2 = prim::If(%cond) {
  block0() {
    %a1 = aten::foo(%a0)
    %b1 = aten::foo(%b)
  } -> (%a1, %b1)
}
return (%a2)
```
we will now dce all the `%b` stuff (only `%a2` is actually used).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14910
Differential Revision: D13476445
Pulled By: suo
fbshipit-source-id: 2bf5db19711c07dde946697a4f4b270bd8baf791
* fix cholesky call in potrs example (#15215)
Summary:
Cholesky by default returns the lower triangular matrix; see the [docs](https://pytorch.org/docs/stable/torch.html#torch.cholesky).
However, `torch.potrs` by default requires the upper triangular matrix. The naming of the variable `u` suggests the example expects the upper factor to be returned, so I've added the flag to make that happen in the example.
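For illustration, a minimal sketch of the corrected pattern using the APIs as they existed at the time (the matrix and right-hand side here are made up):
```python
import torch

# Build an illustrative symmetric positive-definite system A x = b.
A = torch.randn(3, 3)
A = A @ A.t() + 3 * torch.eye(3)
b = torch.randn(3, 1)

u = torch.cholesky(A, upper=True)  # upper factor, as torch.potrs expects by default
x = torch.potrs(b, u)              # solves A x = b using the factor u
```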
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15215
Differential Revision: D13476468
Pulled By: soumith
fbshipit-source-id: 7b68035f435a2b1be4d363b3f63e407394af949d
* Fix a typo in the assert
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15265
Reviewed By: llyfacebook
Differential Revision: D13477029
Pulled By: sf-wind
fbshipit-source-id: 9c5571a583c01f9701625541ebec0c836cb923f2
* Delete ffi documentation (#15220)
Summary: Deleting the FFI documentation since it's deprecated.
Differential Revision: D13477329
Pulled By: soumith
fbshipit-source-id: 0b3d485eb7cef1f05b6b397dff50f21a49d6409e
* Trivial comment correction in dataloader (#15276)
Summary:
Trivial comment correction in dataloader
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15276
Differential Revision: D13477324
Pulled By: soumith
fbshipit-source-id: 2a74a014999655d129311d611f2a09411339cb13
* Refactor hotpatch_vars and apply it to libtorch (#14976)
Summary:
Fixes #14801.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14976
Differential Revision: D13485381
Pulled By: soumith
fbshipit-source-id: 0af3c2e1b90988d56f6f85632328d1e4b788ffd2
* Fix tensor printing bug in Python 2 (#12732)
Summary:
`rsplit` doesn't accept keyword arguments in Python 2, so this line raises an error.
Fixes #15135
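A minimal illustration of the incompatibility (the string value is made up):
```python
s = '1.0000e+00'
s.rsplit('e', 1)             # works on both Python 2 and Python 3
s.rsplit('e', maxsplit=1)    # TypeError on Python 2: rsplit takes no keyword arguments
```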
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12732
Differential Revision: D10458630
Pulled By: driazati
fbshipit-source-id: a63e42fbc0e39e4291480775b516c98122ec05a1
* Tighten up invariants regarding StreamId. (#15125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15125
I realized that it is really bad juju if you fake a StreamId
out of thin air, because in general this isn't going to work.
So, make the constructor a lot scarier.
Most "faking StreamId out of thin air" happens because someone
just wants to put something on the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432800
fbshipit-source-id: a86991d6fc1d8aa4e54e8175e5f06f90856238e6
* Adding ONNX export for torch.expand and torch.ne (#15050)
Summary:
`torch.expand` and `torch.ne` are often used in models, and this PR adds ONNX export support for them. ArmenAg has created issue https://github.com/pytorch/pytorch/issues/10882 for this.
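A rough sketch of what the new export support enables (the module and file name below are made up, not from the PR):
```python
import torch

class ExpandNe(torch.nn.Module):
    def forward(self, x):
        # Both ops are now exportable to ONNX.
        return x.expand(2, 3).ne(0)

# Export a small example graph containing expand and ne.
torch.onnx.export(ExpandNe(), torch.randn(1, 3), "expand_ne.onnx")
```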
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15050
Differential Revision: D13453036
Pulled By: houseroad
fbshipit-source-id: 4724b4ffcebda6cd6b2acac51d6733cb27318daf
* Minor fixes in .jenkins/caffe2/bench.sh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15304
Differential Revision: D13493876
Pulled By: bddppq
fbshipit-source-id: 7146eb2587e526af65b4b0290c25bd55653a3088
* Fix for issue 14829 (#14908)
Summary:
* Modify the testcase as outlined in the issue
* Issue url: https://github.com/pytorch/pytorch/issues/14829
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14908
Differential Revision: D13490360
Pulled By: ezyang
fbshipit-source-id: ff11a72e19b49223652182e82c2b4e65fe444ca7
* Don't enforce docstrings on bool dispatch (#15306)
Summary:
Allows 2 functions that are boolean dispatched to have no docstrings (the only case that will fail now is if both functions have docstrings)
Fixes #15281
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15306
Differential Revision: D13494884
Pulled By: driazati
fbshipit-source-id: 65fec39ae03a7d6a68ad617c9b270faeb1617930
* Replace SwitchToDevice(0) with SwitchToDevice() (#15126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15126
I want to make people stop manufacturing StreamId from thin air,
and a first step is to make people use the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432922
fbshipit-source-id: 9f0d8d70646c50d979bde5ba3c3addeebac48a3d
* Fix the missing caffe2 proto files for Windows (#15157)
Summary:
Fixes #15156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15157
Differential Revision: D13490420
Pulled By: orionr
fbshipit-source-id: 4387d707f634a5975238af915b1befb2277f8ec7
* add isinstance static type checking for jit (#15076)
Summary:
This PR adds isinstance to do static type checking in JIT.
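A rough sketch of the kind of code this enables (not taken from the PR's tests; the exact accepted forms may differ):
```python
import torch
from typing import List

@torch.jit.script
def is_sequence(x):
    # type: (List[int]) -> bool
    # With static typing, isinstance is resolved against the declared type
    # of x at compile time rather than checked at runtime.
    return isinstance(x, (list, tuple))
```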
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15076
Differential Revision: D13471067
Pulled By: wanchaol
fbshipit-source-id: d39b7ed5db9fcca4b503659d02cf7795950ea8ea
* Bicubic interpolation for nn.functional.interpolate (#9849)
Summary:
Addresses #918; interpolation results should be similar to TensorFlow's.
* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`
The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can be refactored/moved to ATen all at once when #10482 is resolved
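A minimal usage sketch of the new mode (tensor sizes are illustrative):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
# 'bicubic' joins the existing 'nearest' and 'bilinear' modes for 4-D inputs.
y = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=False)
print(y.shape)  # torch.Size([1, 3, 16, 16])
```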
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849
Differential Revision: D9007525
Pulled By: driazati
fbshipit-source-id: 93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
* Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15064
Reviewed By: orionr
Differential Revision: D13474801
Pulled By: pjh5
fbshipit-source-id: 9d3664c3a3a1b6c2d9f083f8476fe3b037296b98
* Allow future type parsing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14887
Differential Revision: D13490984
Pulled By: highker
fbshipit-source-id: 165fe995867be273793f983154aa6cbce13e4396
* Port nn fold and unfold to c++
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14597
Reviewed By: ezyang
Differential Revision: D13272227
fbshipit-source-id: 6eccab5ff5830a977398a96393b778095120edc6
* caffe2/python/task: added __repr__ methods to all task definitions (#15250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15250
This adds `__repr__` methods to all of the classes under task.py. This makes the objects much easier to interact with when using them in an interactive manner, such as in a Jupyter notebook.
The default `__repr__` method just returns the object ID which is very unhelpful.
Reviewed By: hanli0612
Differential Revision: D13475758
fbshipit-source-id: 6e1b166ec35163b9776c797b6a2e0d002560cd29
* Add a correctness check for C++ types to custom operators (#15247)
Summary:
The JIT uses `int64_t` for its integer type and `double` for its floating point type, but users quite often want to write `int` or `float` and that currently fails in not-so-nice ways for custom ops. This PR adds a simple `static_assert` to catch these common failure cases.
zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15247
Differential Revision: D13493941
Pulled By: goldsborough
fbshipit-source-id: c1cd0d10ab5838c75f167c0bdb57e45a0bc1344e
* Fix _apply in nn.Module (#15305)
Summary:
Fixes an issue that arose from https://github.com/pytorch/pytorch/pull/13481 where `.shared_memory()` couldn't be called. Effectively undoes all changes to `nn.Module` from that PR and solves the relevant problem in a different way (the goal was to be able to call `._apply()` on the Python wrapper for a C++ module).
soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15305
Differential Revision: D13493937
Pulled By: goldsborough
fbshipit-source-id: 4cb8687f90fc8709a536c5e7eacd0dc8edf6f750
* Reenable OpenMP by reverting the following two commits. (#15315)
Summary:
Revert "Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)"
This reverts commit a84e873bb156080ea76ab182171b1f3b4d5395f6.
Revert "Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)"
This reverts commit 8901935ad42fe9bf093d1106ea43606008a4024d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15315
Differential Revision: D13495852
Pulled By: ezyang
fbshipit-source-id: bcd3f60088b14831c53d3c171f10cd1ab6b35dee
* TensorIterator: fixing mean to output correct result for half precision (#14878)
Summary:
See #12115.
mean is calculated in two steps, sum()/numel(). For half precision, the data gets
cast back to half after sum().
We fused the division into the reduction kernel by adding pre_op/post_op.
This allows torch.ones(65536).cuda().half().mean() to return the correct
result.
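The motivating case from the summary, as a sketch (requires a CUDA device):
```python
import torch

# The sum of 65536 ones is 65536, which is above fp16's max representable
# value (~65504), so casting the sum back to half before dividing by numel()
# loses the correct result. With the division fused into the reduction,
# mean() returns 1.0 as expected.
x = torch.ones(65536).cuda().half()
print(x.mean())
```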
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14878
Differential Revision: D13491159
Pulled By: soumith
fbshipit-source-id: e83802e1628b6d2615c45e18d7acf991d143a09e
* Allow tracing with fork/wait (#15184)
Summary:
There is still a limitation: if a script module is somewhere
in the trace, its inputs/outputs can only be tensors or tuples of
tensors.
resolves #15052
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15184
Differential Revision: D13457691
Pulled By: highker
fbshipit-source-id: 8fe46afc41357a0eb8eadd83f687b31d074deb0e
* improve script/no script save error (#15321)
Summary:
Improves the error message for #15116
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15321
Differential Revision: D13499379
Pulled By: zdevito
fbshipit-source-id: b8dc0a83efabff74199f4aab2ee98aa41c42608b
* Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: 4bf66581d07d839f459869bc9c6428011063cc5b
* Revert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
Differential Revision:
D13383102
Original commit changeset: c434f0e0ddff
fbshipit-source-id: 690f46ca0710954fa591a5ea77535e9759db4de5
* caffe2 mobile opengl (#15322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15322
The caffe2 mobile OpenGL code is not used; deleting it reduces complications when we make other changes.
Reviewed By: Maratyszcza
Differential Revision: D13499943
fbshipit-source-id: 6479f6b9f50f08b5ae28f8f0bc4a1c4fc3f3c3c2
* Method returns a single argument (#15289)
Summary:
This PR changes Method (just Method, not all graphs) to always have a single
return argument.
This is part 1 in a set of changes that will enable better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.
This change makes it so that Method and Python handle multiple returns in the same way:
* 0 - None
* 1 - <single value>
* many - Tuple[...]
The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed values.
Notes:
* inferTypeFrom is renamed to be more accurate and discourage use.
* This has uncovered some bugs in other components, which are noted in
the diff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15289
Differential Revision: D13481649
Pulled By: zdevito
fbshipit-source-id: 0e2242a40bb28cca2d0e8be48bede96195e4858c
* Fix the (reduce)min and (reduce)max ONNX exporting (#15241)
Summary:
max and reducemax are smashed together; we need to support the one-input case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15241
Reviewed By: yinghai
Differential Revision: D13473312
Pulled By: houseroad
fbshipit-source-id: 9b8c847286a2631b006ca900271bc0d26574101a
* Add (Un)Fold modules to standard library (#14759)
Summary:
Depends on #14597 for the corresponding aten ops.
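For context, a small usage sketch of the modules being added (shapes are illustrative):
```python
import torch
from torch import nn

x = torch.randn(1, 3, 8, 8)

unfold = nn.Unfold(kernel_size=3)
fold = nn.Fold(output_size=(8, 8), kernel_size=3)

patches = unfold(x)   # (1, 3*3*3, L), where L is the number of sliding blocks
y = fold(patches)     # back to (1, 3, 8, 8); overlapping patches are summed
print(patches.shape, y.shape)
```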
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14759
Differential Revision: D13325356
Pulled By: driazati
fbshipit-source-id: 99e39449c1ccfa293de05672c31a11e580bdd11f
* Port torch.linspace to ATen and parallelize it on CPU.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15320
Reviewed By: ezyang
Differential Revision: D13498995
Pulled By: gchanan
fbshipit-source-id: fba655d51d978fffaa53a5e4cae4a99ebfb0eddc
* fix clang-tidy script for python 3
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15360
Differential Revision: D13509668
Pulled By: suo
fbshipit-source-id: a3448a115eaac8dd4c3f179901a23bdbc5098408
* add dense vector to id_list operator (#15090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15090
as title
step 2 of the linked task
Reviewed By: ellie-wen
Differential Revision: D13425977
fbshipit-source-id: f3538ed68f42470ba39c5b779af764d4a5591a9d
* Minor cleanup for TestFuser tests (#15134)
Summary:
Changelog:
- Change some expect tests that didn't have to be expect tests; use self.assertAllFused instead.
- Some of the fuser tests weren't using self.assertAllFused.
- Minor test renames
cc apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15134
Differential Revision: D13507481
Pulled By: zou3519
fbshipit-source-id: dd0788530a60bb5ed2f42b961fae3db2b4404b64
* Replace resize_dim() with set_sizes_and_strides() in (#15348)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15348
We have a function resize_dim() on TensorImpl in c10/core/TensorImpl.h…