Enabled AT_USE_JITERATOR() for tan and tanh kernels
#102427
Conversation
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/102427
Note: links to docs will display an error until the docs builds have been completed.
You can merge normally! (3 unrelated failures) As of commit 8b29a12 with merge base c323944. FLAKY: the following jobs failed but were likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@mruberry @peterbell10 @ngimel Can you let me know if I need to format my code or update documentation anywhere for this change?
The test failures look real, e.g.
test_unary_ufuncs.py::TestUnaryUfuncsCUDA::test_reference_numerics_extremal_asinh_cuda_complex64 - AssertionError: Tensor-likes are not close!
@peterbell10 Could you take a look at this again?

@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

Rebase failed due to Command. Raised by https://github.com/pytorch/pytorch/actions/runs/5280395913

@parth-desai would you mind resolving the merge conflicts? I'm hoping a rebase would resolve the CI issues.
(force-pushed 0e65d90 to dd03d66)
@peterbell10 I merged the latest main into my branch. Please review again.

This test failure looks related: It's strange though, because this PR should only affect complex dtypes, but the test is for

@peterbell10 I couldn't reproduce the error on my local machine. I even tried the CI Docker image as well as the build scripts. Any suggestions on fixing this test?
(force-pushed dd03d66 to bc8f744)

@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here
Just FYI, main often has some test failures from PRs that need to be reverted. You're best rebasing against viable/strict, which should be the most recent commit where CI passed. That should improve the signal-to-noise on CI.
I've set the keep-going flag so you should see all the test failures, and it looks like there are still quite a lot of relevant CI failures for complex dtypes.
You should remove these comments as well.
The bfloat16 failure is due to this toleranceOverride taking precedence over the one above for bfloat16. I can get the test to pass locally by copying bfloat16's tol term from the override above into this one.
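The precedence issue described here can be sketched in isolation. This is a hypothetical, simplified model, not PyTorch's actual toleranceOverride/OpInfo machinery: if per-dtype tolerance dicts are merged in order, the last entry for a given dtype wins, so a later override silently shadows an earlier, broader one for any dtype they both mention.

```python
# Simplified, hypothetical model of tolerance-override precedence
# (not PyTorch's real OpInfo code): later dicts shadow earlier ones.

def merge_overrides(*overrides):
    """Merge per-dtype (atol, rtol) dicts; later dicts take precedence."""
    merged = {}
    for ov in overrides:
        merged.update(ov)  # a later entry for the same dtype replaces the earlier one
    return merged

# Earlier, broader override covering several dtypes at once.
base = {"float16": (1e-3, 1e-3), "bfloat16": (1e-2, 1e-2)}
# Later, narrower override that also mentions bfloat16.
later = {"bfloat16": (1e-5, 1e-5)}

merged = merge_overrides(base, later)
print(merged["bfloat16"])  # the later entry wins: (1e-05, 1e-05)
print(merged["float16"])   # dtypes only in the earlier dict are untouched: (0.001, 0.001)
```

This is why copying the bfloat16 tolerance term into the later override fixes the test: the shadowing entry then carries the intended values itself.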
(force-pushed bc8f744 to 5b1e327)

@peterbell10 Thanks for the pointer. I didn't realize my environment had issues and my build was not running CUDA tests. Anyway, I have fixed the
(force-pushed f77ead2 to 6884e84)

@peterbell10 The failures now seem to be CI infrastructure failures. Can you take a look?

@pytorchbot drci

@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: 2 mandatory check(s) failed. The first few are:
Dig deeper by viewing the failures on hud
@huydhn Any suggestions on how to fix these?

@pytorchbot merge -f 'Bypass unrelated issues as Dr.CI has classified them as flaky'

Sorry, I was about to force merge your change but got side-tracked by other tasks.

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

And thank you for your first contribution to PyTorch (IIRC) :)

Thank you @huydhn @peterbell10 for your help.
This PR fixes the test failures for the jiterator implementation of the tan and tanh unary kernels, as mentioned in #100842. The failures were fixed by adjusting tolerances, but some failures in test_unary_ufuncs.py required adjusting input values as well. Since the jiterator kernels use libstdc++, the supported input range is smaller than that of the thrust implementation.
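The narrower input range mentioned above can be illustrated numerically; this is a sketch of the underlying math, not the kernel code itself. tanh saturates toward ±1 very quickly as the real part grows, so extremal inputs carry no extra precision information, while any implementation that internally evaluates exp(2x) in float32 overflows once x exceeds roughly ln(FLT_MAX)/2 ≈ 44.4, even though the true result there is simply ±1. Restricting the test inputs keeps them out of that overflow-prone region.

```python
import cmath
import math

# tanh(x + iy) saturates toward 1 as x grows; beyond a modest real part the
# exact value is indistinguishable from 1 at float32 precision.
for x in (5.0, 20.0, 100.0):
    z = cmath.tanh(complex(x, 0.3))
    print(f"tanh({x} + 0.3j) = {z}")

# An implementation that routes through exp(2x) in float32 overflows
# once exp(2x) exceeds FLT_MAX (~3.4e38), i.e. for x > ln(FLT_MAX)/2,
# even though tanh(x) itself is ~1.0 there.
flt_max = 3.4028235e38
overflow_threshold = math.log(flt_max) / 2
print(f"float32 exp(2x) overflows for x > {overflow_threshold:.1f}")
```

This is consistent with the extremal-value test failures: the reference implementation and the jitted kernel agree on moderate inputs, and the test inputs had to be adjusted where one side would overflow in its intermediate computation.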