Update intro doc for derivatives.yaml by albanD · Pull Request #60614 · pytorch/pytorch · GitHub

Conversation


@albanD albanD commented Jun 24, 2021

Clarify some phrasing and document the findings on the different non-differentiable states.


facebook-github-bot commented Jun 24, 2021

💊 CI failures summary and remediations

As of commit 32237ff (more details on the Dr. CI page and at hud.pytorch.org/pr/60614):


  • 2/2 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Jun 24 17:07:39 AssertionError: "weight tensor ...nsion is 4, the 1th output dimension is 3. vs. OK)
Jun 24 17:07:39 *** End stack trace ***
Jun 24 17:07:39 
Jun 24 17:07:39 
Jun 24 17:07:39 During handling of the above exception, another exception occurred:
Jun 24 17:07:39 
Jun 24 17:07:39 Traceback (most recent call last):
Jun 24 17:07:39   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 397, in instantiated_test
Jun 24 17:07:39     result = test_fn(self, *args)
Jun 24 17:07:39   File "/var/lib/jenkins/workspace/xla/test/../../test/test_nn.py", line 16007, in test_nll_loss_invalid_weights
Jun 24 17:07:39     F.nll_loss(x, t, weight=weight)
Jun 24 17:07:39 AssertionError: "weight tensor should be defined either for all 3 classes or no classes" does not match "/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/tensorflow/compiler/xla/xla_client/debug_macros.h:27 : Check failed: status.status() == ::tensorflow::Status::OK() (Invalid argument: Input dimension should be either 1 or equal to the output dimension it is broadcasting into; the 0th operand dimension is 4, the 1th output dimension is 3. vs. OK)
Jun 24 17:07:39 *** Begin stack trace ***
Jun 24 17:07:39 	tensorflow::CurrentStackTrace[abi:cxx11]()
Jun 24 17:07:39 	xla::Shape const* ConsumeValue<xla::Shape const*>(tensorflow::StatusOr<xla::Shape const*>&&)
Jun 24 17:07:39 	torch_xla::XlaHelpers::ShapeOfXlaOp(xla::XlaOp)
Jun 24 17:07:39 	torch_xla::ir::ops::InferOutputShape(absl::lts_20210324::Span<xla::Shape const>, std::function<xla::XlaOp (absl::lts_20210324::Span<xla::XlaOp const>)> const&)
Jun 24 17:07:39 	
Jun 24 17:07:39 	torch_xla::ir::Node::GetOpShape(std::function<xla::Shape ()> const&) const
Jun 24 17:07:39 	torch_xla::ir::Node::Node(torch_xla::ir::OpKind, absl::lts_20210324::Span<torch_xla::ir::Value const>, std::function<xla::Shape ()> const&, unsigned long, absl::lts_20210324::uint128)
Jun 24 17:07:39 	torch_xla::ir::ops::NllLoss::NllLoss(torch_xla::ir::Value const&, torch_xla::ir::Value const&, absl::lts_20210324::optional<torch_xla::ir::Value> const&, torch_xla::ReductionMode, int)
Jun 24 17:07:39 	torch_xla::XLATensor::nll_loss(torch_xla::XLATensor const&, torch_xla::XLATensor const&, torch_xla::XLATensor const&, long, int)

1 failure not recognized by patterns:

Job: CircleCI pytorch_linux_bionic_py3_8_gcc9_coverage_test1
Step: Run tests
Action: 🔁 rerun

This comment was automatically generated by Dr. CI.

# function is not differentiable with respect to that argument for
# example. You should either:
# - Do not specify any formula for this argument
# - Specify explicitely that this argument is "non_differentiable". Note that in this case,
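For context, the doc excerpt above describes how a non-differentiable argument can be declared in `tools/autograd/derivatives.yaml`. A minimal sketch of such an entry is shown below; the operator name and formula are hypothetical, but the `non_differentiable` marker is the mechanism the excerpt refers to:

```yaml
# Hypothetical derivatives.yaml entry (illustrative only).
# "self" gets a backward formula; "indices" is integer-valued,
# so it is explicitly marked as non-differentiable.
- name: my_op(Tensor self, Tensor indices) -> Tensor
  self: grad
  indices: non_differentiable
```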

nit: explicitly

@codecov

codecov bot commented Jun 24, 2021

Codecov Report

Merging #60614 (b9eac93) into master (da030c5) will decrease coverage by 0.26%.
The diff coverage is n/a.

❗ Current head b9eac93 differs from pull request most recent head 32237ff. Consider uploading reports for the commit 32237ff to get more accurate results.

@@            Coverage Diff             @@
##           master   #60614      +/-   ##
==========================================
- Coverage   76.11%   75.85%   -0.27%     
==========================================
  Files        2058     2060       +2     
  Lines      205791   207824    +2033     
==========================================
+ Hits       156637   157643    +1006     
- Misses      49154    50181    +1027     
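The percentages in the Codecov diff follow directly from the hits/lines counts in the table (coverage = hits / lines). A quick sanity-check sketch, using the numbers above (note Codecov's own delta rounds to -0.27%, while a naive subtraction of the rounded percentages gives -0.26%):

```python
# Reproduce the Codecov percentages from the raw hits/lines counts above.
base_hits, base_lines = 156_637, 205_791   # master (da030c5)
head_hits, head_lines = 157_643, 207_824   # this PR's head

base_cov = 100 * base_hits / base_lines
head_cov = 100 * head_hits / head_lines

print(f"base:  {base_cov:.2f}%")           # ≈ 76.11%
print(f"head:  {head_cov:.2f}%")           # ≈ 75.85%
print(f"delta: {head_cov - base_cov:.2f}%")
```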

@facebook-github-bot

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@albanD merged this pull request in a3ebc40.
