[fix] torch.{lin, log}space(): properly examine passed dtype by kshitij12345 · Pull Request #53685 · pytorch/pytorch

Conversation

@kshitij12345
Collaborator

Fixes #53171
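
For context, the root cause (a hedged sketch; exact warning text and behavior vary by version): the generated Python bindings substituted the default dtype whenever dtype was omitted, so the C++ op could not tell the first two calls below apart, and the complex-dtype warning in {lin,log}space could fire incorrectly.

    import torch

    # Under the old generated bindings, both calls reached the op with
    # dtype == torch.get_default_dtype(), hiding whether dtype was passed.
    a = torch.linspace(0, 1, steps=5)                       # dtype omitted
    b = torch.linspace(0, 1, steps=5, dtype=torch.float32)  # dtype explicit

    # With complex endpoints (assuming the build accepts complex Scalars),
    # that ambiguity is what made the inference warning misfire:
    c = torch.linspace(0, 5j, steps=5)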

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 10, 2021

💊 CI failures summary and remediations

As of commit 4bc39f0 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

1 failure not recognized by patterns:

Job: GitHub Actions Windows CI (pytorch-win-vs2019-cpu-py3) / render_test_results, Step: Set up job

This comment was automatically generated by Dr. CI.

if (_r.isNone(4)) {
// aten::logspace(Scalar start, Scalar end, int? steps=None, float base=10.0, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
const auto options = TensorOptions()
.dtype(_r.scalartypeOptional(5))
Collaborator Author

Everything is the same here w.r.t. the generated code, except for _r.scalartypeOptional(5) instead of _r.scalartype(5).
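
In rough Python terms (an illustrative sketch, not the actual binding code; the function names below are stand-ins for the parser methods):

    import torch

    def scalartype(dtype=None):
        # _r.scalartype(): an omitted dtype is silently replaced by the
        # default dtype, so "no dtype" is indistinguishable from the default.
        return dtype if dtype is not None else torch.get_default_dtype()

    def scalartype_optional(dtype=None):
        # _r.scalartypeOptional(): an omitted dtype stays unset (None here,
        # c10::nullopt in C++), so the op can run its own inference.
        return dtype

With the optional variant, an unset dtype flows through to linspace_logspace_infer_options, which can then choose between the default dtype and a complex dtype inferred from start/end.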

@bhosmer

Might want to add a comment a la https://github.com/pytorch/pytorch/blob/master/tools/autograd/templates/python_torch_functions.cpp#L104 - helps explain why this needs to be a manual binding

if (_r.isNone(3)) {
// aten::linspace(Scalar start, Scalar end, int? steps=None, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
const auto options = TensorOptions()
.dtype(_r.scalartypeOptional(4))
Collaborator Author

Everything is the same here w.r.t. the generated code, except for _r.scalartypeOptional(4) instead of _r.scalartype(4).

@bhosmer

Ditto re explanatory comment

@kshitij12345 kshitij12345 marked this pull request as ready for review March 31, 2021 09:38
@kshitij12345 kshitij12345 requested review from anjali411 and mruberry and removed request for albanD and soulitzer March 31, 2021 09:39
@gchanan gchanan added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Mar 31, 2021
@mruberry
Collaborator

mruberry commented Apr 5, 2021

@anjali411 Would you take the lead on reviewing this?

@anjali411
Contributor

anjali411 commented May 10, 2021

@kshitij12345 sorry, I missed the notification for this PR before. This PR looks great overall. We should probably set the manual_cpp_binding flag to True, right? (edit: this is not correct)

  const auto steps_ = steps.value_or(100);
  TORCH_CHECK(steps_ >= 0, "number of steps must be non-negative");
- auto result_options = linspace_logspace_infer_options(start, end, options);
+ auto result_options = linspace_logspace_infer_options(start, end, options, "torch.linspace()");
@anjali411 (Contributor) May 10, 2021

@kshitij12345 so if the user doesn't specify the dtype, the dtype is not set until here (or the line below), in which case we create a tensor with the default dtype?

Collaborator Author

Apologies, I missed the comment. Thanks @bhosmer for pointing it out. I have updated linspace_logspace_infer_options.

@facebook-github-bot
Contributor

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

static PyObject * THPVariable_numel(PyObject* self_, PyObject* args, PyObject* kwargs);

// linspace
static PyObject * THPVariable_linspace(PyObject* self_, PyObject* args, PyObject* kwargs)
Collaborator

@bhosmer would you like to check these custom bindings?

@mruberry
Collaborator

Hey @kshitij12345, this looks correct but I'll need more time to review the new handwritten bindings; I cc'd @bhosmer to see if he'd like to take a look or suggest an alternative approach, too

With the branch cut I'll need some extra time to review the new bindings carefully; would you ping me on this next week? I hate to make you wait but it's an incredibly busy time

@mruberry
Collaborator

Linking with #56335, which also updates lin and log space

@kshitij12345
Collaborator Author

@mruberry Sure, I understand. Will ping you here next week!

@mruberry @anjali411 Thanks for looking into it :)

@bhosmer left a comment

Hey @mruberry @anjali411 @kshitij12345 - yeah, unfortunately the codegen still locks you into the dtype defaulting behavior for ops that take TensorOptions args, so manual bindings are the way to make it work for now. Couple small suggestions inline, otherwise LGTM
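
Concretely, the user-visible effect of the manual bindings (a hedged sketch of the intended behavior, not output from this exact build):

    import torch

    torch.linspace(0, 4, steps=5)                       # no dtype -> default dtype
    torch.linspace(0, 4, steps=5, dtype=torch.float64)  # explicit dtype honored as-is
    # Complex endpoints: the op can now infer a complex dtype itself,
    # because the binding no longer patches in the default dtype first.
    torch.linspace(0, 4j, steps=5)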


  }

- return result_options;
+ return options;
@bhosmer

Per @anjali411's comment below, the other change introduced here besides the error checking is that the options we return will no longer have the default dtype patched in, in the non-complex case.

It may be that the current downstream consumer (empty) happens to implement the same defaulting behavior atm, but I think it'll be more robust if we don't depend on that, and instead do it here.
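
In other words (an illustrative Python sketch; the real helper is C++ and the name below is hypothetical), the helper should resolve the dtype itself instead of leaving it for empty() to default:

    import torch

    def linspace_logspace_infer_dtype(start, end, dtype=None):
        # Hypothetical sketch of the suggested defaulting logic.
        if dtype is not None:
            return dtype                       # explicit dtype always wins
        if isinstance(start, complex) or isinstance(end, complex):
            # complex counterpart of the current default float dtype
            return (torch.complex128 if torch.get_default_dtype() == torch.float64
                    else torch.complex64)
        return torch.get_default_dtype()       # patch in the default here, not downstream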

Collaborator Author

Updated. Thanks!!

@codecov

codecov bot commented May 13, 2021

Codecov Report

Merging #53685 (6215555) into master (6ee9466) will increase coverage by 0.36%.
The diff coverage is 100.00%.

❗ Current head 6215555 differs from pull request most recent head 3cdac1f. Consider uploading reports for the commit 3cdac1f to get more accurate results

@@            Coverage Diff             @@
##           master   #53685      +/-   ##
==========================================
+ Coverage   76.44%   76.80%   +0.36%     
==========================================
  Files        2022     1986      -36     
  Lines      202375   198202    -4173     
==========================================
- Hits       154701   152237    -2464     
+ Misses      47674    45965    -1709     

@kshitij12345
Collaborator Author

> would you ping me on this next week?

@mruberry gentle ping :)

@kshitij12345
Collaborator Author

@mruberry Gentle ping :)

@anjali411 (Contributor) left a comment

LGTM! Thanks @kshitij12345, this is ready to land once you rebase it and update the comment for more context! :D

@facebook-github-bot
Contributor

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@anjali411 anjali411 added the module: complex (Related to complex number support in PyTorch) label Jun 8, 2021
@kshitij12345
Collaborator Author

@anjali411 gentle ping :)

@facebook-github-bot
Contributor

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in c902609.


Labels

cla signed · Merged · module: complex (Related to complex number support in PyTorch) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[complex] {lin,log}space: raises warning incorrectly

7 participants