[BE] Raise NotImplementedError
#155470
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155470
Note: Links to docs will display an error until the docs builds have been completed. ⏳ No Failures, 2 Pending as of commit 31d2853 with merge base 9f5153b. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Sounds good to me, though I'm not sure how BC-breaking it actually is.
@pytorchbot merge
Merge started: Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@malfet please add the release notes bc-breaking note to the PR description
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
@pytorchbot merge -f "This seems fine?"
Merge started: Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).
@pytorchbot revert -m "foreach tests are failing on ROCm because we are not running the same on CUDA" |
❌ 🤖 pytorchbot command failed.
@pytorchbot revert -m "foreach tests are failing on ROCm because we are not running the same on CUDA" -c nosignal |
@pytorchbot successfully started a revert job. Check the current status here.
@malfet your PR has been successfully reverted. |
This reverts commit 5ab6a3f. Reverted #155470 on behalf of https://github.com/malfet due to foreach tests failing on ROCm because we are not running the same on CUDA.
@pytorchbot merge -f "Trunk is red, Tests are green, Merging this PR— Best I've seen!"
Merge started: Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).
By introducing `check_for_unsupported_clamp_dtypes`, similar to `check_for_unsupported_isin_dtypes`. Pull Request resolved: #155930. Approved by: https://github.com/albanD, https://github.com/janeyx99, https://github.com/clee2000. ghstack dependencies: #155470.
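The idea behind such a helper is to gate dtypes before dispatch, so unsupported combinations raise `NotImplementedError` instead of a generic `RuntimeError`. A rough pure-Python sketch of the pattern follows; the dtype set and the `clamp` stand-in are illustrative assumptions, not the actual ATen implementation:

```python
# Illustrative sketch: centralize the dtype check so unsupported dtype
# combinations fail early with NotImplementedError.
# NOTE: the dtype set below is a hypothetical placeholder, not ATen's real list.
UNSUPPORTED_CLAMP_DTYPES = {"complex64", "complex128", "bool"}


def check_for_unsupported_clamp_dtypes(dtype: str) -> None:
    """Raise NotImplementedError for dtypes clamp has no kernel for."""
    if dtype in UNSUPPORTED_CLAMP_DTYPES:
        raise NotImplementedError(
            f"clamp is not implemented for dtype {dtype}"
        )


def clamp(values, lo, hi, dtype="float32"):
    """Stand-in for the op: check dtypes first, then do the work."""
    check_for_unsupported_clamp_dtypes(dtype)
    return [min(max(v, lo), hi) for v in values]


print(clamp([1, 5, 9], 2, 8))  # -> [2, 5, 8]
```

Centralizing the check in one helper keeps the error type and message consistent across every code path that reaches the op.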
Stack from ghstack (oldest at bottom):
Raise `NotImplementedError` (#155470) when an op is unimplemented for a specific dtype, which makes more sense than a `RuntimeError`.

Example:

Release notes (bc-breaking): After this release, a `NotImplementedError` exception will be raised when an ATen operation is called on a combination of input tensor dtypes it has not been implemented for. Also marks a few more unary ops as unimplemented, to satisfy foreach-testing error-reporting consistency between CPU and CUDA.
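On the BC-breaking question: in Python, `NotImplementedError` is a subclass of `RuntimeError`, so existing `except RuntimeError` handlers continue to catch the new exception. A minimal sketch of the caller-facing difference, using a hypothetical stand-in op rather than the actual ATen code:

```python
# NotImplementedError subclasses RuntimeError, so code that caught the old
# exception type keeps working after the change.
assert issubclass(NotImplementedError, RuntimeError)


def some_op(dtype):
    """Hypothetical stand-in for an ATen op lacking a kernel for `dtype`."""
    if dtype == "complex32":
        raise NotImplementedError(f"some_op is not implemented for {dtype}")
    return f"some_op result for {dtype}"


# An old-style handler still catches the new, more specific exception:
try:
    some_op("complex32")
except RuntimeError as e:
    caught = type(e).__name__

print(caught)  # -> NotImplementedError
```

Only code that matched on the exact exception type (e.g. `type(e) is RuntimeError`) or parsed the error string would observe a difference.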
cc @albanD