torch.complex and torch.polar #39617
Conversation
@anjali411 @ezyang Hi, I saw the update about having the polar keyword, but was wondering: is something like what I wrote on the right track to implement it?
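For context, a minimal usage sketch of the two ops under discussion (signatures as they later merged; printed values are illustrative):

```python
import math
import torch

# torch.complex builds a complex tensor from real and imaginary parts.
real = torch.tensor([1.0, 2.0])
imag = torch.tensor([3.0, 4.0])
print(torch.complex(real, imag))  # tensor([1.+3.j, 2.+4.j])

# torch.polar builds one from magnitude and phase:
# abs * (cos(angle) + 1j * sin(angle)).
magnitude = torch.tensor([1.0, 2.0])
angle = torch.tensor([math.pi / 2, math.pi])
print(torch.polar(magnitude, angle))  # ~tensor([0.+1.j, -2.+0.j])
```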
💊 CI failures summary and remediations

As of commit a481888 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI.
Hi @anjali411 @mruberry, I made some updates. A couple of questions/comments: …
However, would it be better to have specific …?
Also see https://github.com/pytorch/pytorch/pull/40246/files, which might give you some ideas for manipulating the complex types.
@mruberry @anjali411 Sorry for just getting back to this. I think backward won't work as-is, since it would be fitting a complex gradient into a float input tensor. This made me wonder how this function should work. Should it be able to accept complex inputs as well (and just use their real components)? It also made me wonder whether this function would really be useful, since my guess is that once x and y are both complex, the speed advantage over x + 1j * y wouldn't be that great.
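To make the mismatch concrete, a minimal sketch (how the backward should map this dtype gap is exactly the open question above):

```python
import torch

x = torch.randn(3, requires_grad=True)  # real (float32) leaf
y = torch.randn(3, requires_grad=True)

z = torch.complex(x, y)  # complex64 output
print(z.dtype, x.dtype)  # torch.complex64 torch.float32

# Any gradient flowing back into complex() has z's dtype (complex64),
# while x.grad must stay float32 -- so the backward formula has to map
# the complex gradient down to a real one rather than pass it through.
```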
@wjgan7 I think the backward definition should be similar to that of … The function should only accept floating point tensors (cc. @mruberry). For CPU, you can define a function in this file: … As Mike mentioned, let's skip the Vec256 implementation for this PR.
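A small sketch of the floating-point-only contract described here (the exact error text is an assumption):

```python
import torch

# Matching float dtypes are required and determine the output dtype:
# float32 inputs -> complex64, float64 inputs -> complex128.
real = torch.tensor([1.0, 2.0])             # float32
imag = torch.tensor([3.0, 4.0])             # float32
print(torch.complex(real, imag).dtype)      # torch.complex64

real64 = torch.tensor([1.0, 2.0], dtype=torch.float64)
imag64 = torch.tensor([3.0, 4.0], dtype=torch.float64)
print(torch.complex(real64, imag64).dtype)  # torch.complex128

# Integer inputs are rejected:
try:
    torch.complex(torch.tensor([1, 2]), torch.tensor([3, 4]))
except RuntimeError as e:
    print(e)  # e.g. "Expected both inputs to be ... Float or Double tensors"
```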
For the backward formula, I think we can just wait on the formula Anjali is going to put in the doc and apply it here. That will contain all the real-to-complex/complex-to-real changes that need to be applied.
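For reference, a sketch of one consistent convention (it matches the conjugate-Wirtinger convention PyTorch later documented; treating it as the formula meant here is an assumption, since the doc was still pending). For $z = \mathrm{complex}(x, y) = x + iy$ with real inputs $x, y$ and a real-valued loss $L$, the incoming gradient $g$ at $z$ carries $\partial L/\partial \operatorname{Re}(z)$ in its real part and $\partial L/\partial \operatorname{Im}(z)$ in its imaginary part, so

$$\frac{\partial L}{\partial x} = \operatorname{Re}(g), \qquad \frac{\partial L}{\partial y} = \operatorname{Im}(g),$$

i.e. the backward just splits the complex gradient back into its real and imaginary components.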
@anjali411 @albanD Sounds good, I can just follow what @mruberry said about the CUDA implementation and write tests for that then. |
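A hypothetical test sketch along those lines (the test names and structure are assumptions, not the PR's actual tests):

```python
import torch
import unittest

class TestComplexConstructors(unittest.TestCase):
    def test_complex_matches_composite(self):
        devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
        for device in devices:
            x = torch.randn(10, device=device)
            y = torch.randn(10, device=device)
            # The fused op should agree with the composite expression.
            self.assertTrue(torch.allclose(torch.complex(x, y), x + 1j * y))

    def test_polar_matches_composite(self):
        devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
        for device in devices:
            a = torch.rand(10, device=device)
            theta = torch.randn(10, device=device)
            expected = torch.complex(a * torch.cos(theta), a * torch.sin(theta))
            self.assertTrue(torch.allclose(torch.polar(a, theta), expected))

if __name__ == "__main__":
    unittest.main()
```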
@wjgan7 any updates?
For some reason the old branch isn't compiling properly anymore.

@anjali411 Apologies for the delays, I'll try to get an update soon.
LGTM after the autograd tests have been moved from test_torch.py to test_autograd.py.
cc. @mruberry to take a final look!
Synced with @albanD offline. We shouldn't add any new autograd formulas until the JAX vs. TF autograd issue is resolved, so we should just add not_implemented("fn_name") in the derivatives.yaml definitions.
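That is, entries along the lines of `real: not_implemented("complex")` for each differentiable input (exact entry shape assumed from the `not_implemented("fn_name")` pattern above). The user-visible effect would be something like the sketch below; the exact error text is an assumption:

```python
import torch

real = torch.tensor([1.0, 2.0], requires_grad=True)
imag = torch.tensor([3.0, 4.0], requires_grad=True)

z = torch.complex(real, imag)  # forward works fine

try:
    # Any backward pass reaching the complex() node hits the
    # not_implemented formula and raises instead of silently
    # computing a potentially wrong gradient.
    z.abs().sum().backward()
except RuntimeError as e:
    print(e)  # e.g. "the derivative for 'complex' is not implemented"
```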
@anjali411 Not sure if I can view the failing CircleCI tests. Do you know what the issue is?
@wjgan7 They are just upstream failures; if you rebase on the latest master, CI should be green.
LGTM :) thanks for the awesome work @wjgan7
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks good but please fix the "{out}" in the two doc updates.
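To make the "{out}" issue concrete, a minimal sketch of the docstring-templating pattern used in torch/_torch_docs.py (simplified; `common_args` here is illustrative):

```python
# torch's doc strings are templates: placeholders like {out} are filled
# in with shared descriptions before the docstring is registered.
common_args = {
    "out": "out (Tensor, optional): the output tensor.",
}

doc_template = r"""
complex(real, imag, *, out=None) -> Tensor

Keyword arguments:
    {out}
"""

# Forgetting the substitution (or mangling the braces) leaves a literal
# "{out}" in the rendered docs -- the issue flagged above.
print(doc_template.format(**common_args))
```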
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 merged this pull request in e437308.
@anjali411 @mruberry Thanks for all the reviews, I learned a lot!
For #35312 and #38458 (comment).