[CPU] Expand torch.special.i1 to Half and BF16
#137899
Conversation
To match behavior of torch.special.i0. Noticed while looking at the failures in #137849.
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/137899
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure as of commit 030ed3a with merge base f8a5b71. The following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Review comment on the new overload in aten/src/ATen/native/Math.h:

    }
    // Upcast bfloat16/half input to float for numerical accuracy purposes
    inline c10::BFloat16 calc_i1(c10::BFloat16 a) { return calc_i1(static_cast<float>(a)); }
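For context, a minimal self-contained sketch of the upcast-overload pattern this hunk follows (the names `HalfLike` and `calc_i1_sketch` are placeholders, not ATen's types or its actual Bessel series): the reduced-precision overload forwards to the float implementation and narrows back on return.

```cpp
#include <iostream>

// Placeholder stand-in for c10::Half / c10::BFloat16; the real c10 types
// convert to and from float in an equivalent way.
struct HalfLike {
  float v;
  explicit operator float() const { return v; }
};

// Full-precision implementation (placeholder body standing in for the series).
inline float calc_i1_sketch(float x) { return x; }

// Reduced-precision overload, mirroring the diff above: upcast to float,
// compute there for accuracy, then narrow back to the storage type.
inline HalfLike calc_i1_sketch(HalfLike a) {
  return HalfLike{calc_i1_sketch(static_cast<float>(a))};
}

int main() {
  std::cout << static_cast<float>(calc_i1_sketch(HalfLike{0.5f})) << "\n";
  return 0;
}
```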
Why aren't these just template instantiations?
Because the generic implementation in pytorch/aten/src/ATen/native/Math.h (line 1466 in 929797d) is only enabled for built-in floating-point types:

    inline typename std::enable_if<std::is_floating_point<T>::value, T>::type
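To spell that out with a compilable sketch (names here are illustrative, not ATen's): the template is SFINAE-gated on `std::is_floating_point`, which is `false` for class types such as `c10::Half` and `c10::BFloat16`, so the template is never viable for them and explicit overloads are the straightforward fix.

```cpp
#include <type_traits>

// Illustrative stand-in for c10::Half: a class type wrapping storage bits.
struct HalfLike { unsigned short bits; };

// Same gating pattern as the quoted Math.h line: the template only exists
// for built-in floating-point types (float, double, long double).
template <typename T>
typename std::enable_if<std::is_floating_point<T>::value, T>::type
calc_i1_sketch(T x) { return x; /* placeholder body */ }

int main() {
  static_assert(std::is_floating_point<float>::value, "enabled for float");
  static_assert(!std::is_floating_point<HalfLike>::value,
                "false for class types, so the template is never viable here");
  (void)calc_i1_sketch(1.0f);         // OK
  // (void)calc_i1_sketch(HalfLike{}); // would not compile: no matching function
  return 0;
}
```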
The corresponding hunk for calc_i0 in the same file:

    -// Upcast bfloat16 input to float for numerical accuracy purposes
    +// Upcast bfloat16/half input to float for numerical accuracy purposes
     inline c10::BFloat16 calc_i0(c10::BFloat16 a) { return calc_i0(static_cast<float>(a)); }
    +inline c10::Half calc_i0(c10::Half a) { return calc_i0(static_cast<float>(a)); }
Same concern
@malfet Need to update optests like:
I sometimes deliberately delay those to see that our OpInfo is comprehensive enough to fail with unexpected successes...

@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 1 mandatory check(s) failed. The first few are: … Dig deeper by viewing the failures on hud.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 1 mandatory check(s) failed. The first few are: … Dig deeper by viewing the failures on hud.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 1 mandatory check(s) failed. The first few are: … Dig deeper by viewing the failures on hud.
@pytorchbot merge -i "cpp_threads failure is clearly unrelated"

❌ 🤖 pytorchbot command failed: Try …

@pytorchbot merge -i

Merge started. Your change will be merged while ignoring the following 1 checks: pull / linux-jammy-py3.10-clang15-asan / test (default, 6, 6, lf.linux.4xlarge). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
To match behavior of torch.special.i0. Noticed while looking at the failures in #137849.

Also, add explicit high-precision template specializations for calc_i0 and calc_i1 for BFloat16 and Half.

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
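As a usage-level illustration of what the change enables (a sketch, assuming the generated ATen C++ binding `at::special_i1` mirrors `torch.special.i1`): Half and BFloat16 CPU inputs should now be accepted, with the computation upcast to float internally, as they already were for i0.

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  // Half-precision input on CPU; before this change, i1 (unlike i0)
  // did not accept this dtype on CPU.
  at::Tensor x = at::linspace(0, 4, 5, at::dtype(at::kHalf));
  at::Tensor y = at::special_i1(x);  // evaluated via float upcast, returned as Half
  std::cout << y << std::endl;
  return 0;
}
```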