OpInfo for torch.nn.functional.normalize
#62635
Conversation
💊 CI failures summary and remediations

As of commit 4c7d80d (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚

❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm.
```python
# RuntimeError: aliasOp != torch::jit::getOperatorAliasMap().end()
# INTERNAL ASSERT FAILED at "../torch/csrc/jit/passes/utils/check_alias_annotation.cpp":159,
# please report a bug to PyTorch.
SkipInfo('TestJit', 'test_variant_consistency_jit'),
```
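For context, skips like this one live inside the operator's `OpInfo` entry in `torch.testing._internal.common_methods_invocations`. A minimal sketch of what such an entry could look like follows; the `sample_inputs_normalize` helper, the dtype subset, and the exact arguments are illustrative assumptions, not the code merged by this PR:

```python
import torch
from torch.testing._internal.common_methods_invocations import (
    OpInfo, SampleInput, SkipInfo)

# Hypothetical sampler: the merged PR defines its own, with many more
# shape / p / dim combinations.
def sample_inputs_normalize(op_info, device, dtype, requires_grad, **kwargs):
    t = torch.randn(3, 4, device=device, dtype=dtype,
                    requires_grad=requires_grad)
    return [SampleInput(t, kwargs={'p': 2.0, 'dim': 1})]

normalize_opinfo = OpInfo(
    'nn.functional.normalize',
    dtypes=(torch.float32, torch.float64),  # illustrative subset only
    sample_inputs_func=sample_inputs_normalize,
    skips=(
        # The JIT alias-annotation check fails because normalize
        # decomposes into norm/div in the scripted graph; see the
        # discussion below.
        SkipInfo('TestJit', 'test_variant_consistency_jit'),
    ),
)
```

Keeping the skip inside the entry lets every other OpInfo-generated test (gradcheck, eager variant consistency, and so on) still run for the op.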
cc @eellison
From what I've understood so far, it's probably because normalize calls norm and torch.div (or the / operator) in its definition:
pytorch/torch/nn/functional.py, lines 4437 to 4441 in 88af4d8:

```python
    denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input)
    return input / denom
else:
    denom = input.norm(p, dim, keepdim=True).clamp_min_(eps).expand_as(input)
    return torch.div(input, denom, out=out)
```
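As a quick sanity check of that decomposition, a small sketch (shapes and p chosen arbitrarily) reproduces F.normalize by hand along the out=None path quoted above:

```python
import torch
import torch.nn.functional as F

x = torch.randn(3, 4)
eps = 1e-12  # default eps of F.normalize

# Same computation as the out=None branch above.
denom = x.norm(2, dim=1, keepdim=True).clamp_min(eps).expand_as(x)
assert torch.allclose(x / denom, F.normalize(x, p=2, dim=1))
```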
Hence the graph created from:
pytorch/torch/testing/_internal/jit_metaprogramming_utils.py, lines 488 to 491 in 88af4d8:

```python
torch._C._jit_pass_inline(CU.the_method.graph)
torch._C._jit_pass_constant_propagation(CU.the_method.graph)
torch._C._jit_check_alias_annotation(CU.the_method.graph, tuple(tensors), aten_name)
```
doesn't contain a node for aten::normalize, but instead has the nodes for norm and div. This leads to the loop here finishing without returning a node (no if condition is satisfied):

pytorch/torch/csrc/jit/passes/utils/check_alias_annotation.cpp
Lines 151 to 153 in 88af4d8:

```cpp
for (const auto node : g.nodes()) {
  if (node->kind() == opName) {
    return node;
```
And hence the final failure here:
```cpp
AT_ASSERT(aliasOp != torch::jit::getOperatorAliasMap().end());
```
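To see the decomposition concretely, one can script a small wrapper around normalize and inspect the node kinds after running the same passes as the harness above. This is a sketch, not part of the PR, and the exact set of node kinds may vary across PyTorch versions:

```python
import torch
import torch.nn.functional as F

@torch.jit.script
def fn(x: torch.Tensor) -> torch.Tensor:
    return F.normalize(x, p=2.0, dim=1)

graph = fn.graph
# Same passes the test harness applies before the alias-annotation check.
torch._C._jit_pass_inline(graph)
torch._C._jit_pass_constant_propagation(graph)

kinds = {node.kind() for node in graph.nodes()}
print('aten::normalize' in kinds)  # False: there is no such node to match
print(sorted(kinds))               # shows aten::norm, aten::div, ... instead
```

Since the loop in check_alias_annotation.cpp never finds a node whose kind matches aten::normalize, the later AT_ASSERT fires, which is exactly the internal assert the skip above works around.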
Was just trying to see what's happening. :) In case it helps!
@zou3519 would you make sure this covers the cases you need and shepherd this in?
…//github.com/krshrimali/pytorch into opinfo/high_priority/nn/functional/normalize
Yes, can do!
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@krshrimali could you rebase this please?
Thanks for the ping, @zou3519 - I've rebased the branch. Also removed …
ping @zou3519
Sorry, I missed this notification. @krshrimali could you please rebase this again? (it looks like there are merge conflicts)

Thanks, @zou3519 for taking a look. I understand, and I should have pinged again but I missed it as well. I have fixed the merge conflicts. :) Thanks again!

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: See pytorch/functorch#78 and #54261. cc: mruberry zou3519 Chillee

Pull Request resolved: #62635
Reviewed By: H-Huang
Differential Revision: D30136503
Pulled By: zou3519
fbshipit-source-id: 258c069f30d9c2a51ed27dadf94f3703b9432a4a
See pytorch/functorch#78 and #54261
cc: @mruberry @zou3519 @Chillee