torch/ao/quantization/utils.py: Moving eps to targeted device to avoid device mismatch issue #135204
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/135204
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 7a4033b with merge base 5b442e8.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…, move eps to the device of the other tensors to avoid a device mismatch issue. 2) PyTorch upstream PR: pytorch#135204. This pull request is merged. Change-Id: I52cad8dda1df4952bbdb2e4bc8eb39d9ab3e800f. Signed-off-by: internal developer <developer@habana.ai>
MOTIVATION
We recently ran some quantization tests on devices other than CPU (e.g., CUDA and Intel Gaudi devices, identified as 'hpu'). We hit a device mismatch error because eps is a tensor created on the CPU, while the other tensors (min_val_neg, max_val_pos, scale, zero_point) are moved to the target device.
CHANGES
Move eps to the device of the other tensors before it is used, so that all operands in the qparams computation live on the same device (see the sketch below).
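As a rough illustration (not the exact diff in torch/ao/quantization/utils.py), the fix amounts to moving the eps tensor onto the same device as the min/max statistics before it enters the scale computation. The helper name `_compute_scale` and its signature are hypothetical and only mirror the general shape of the qparams code:

```python
import torch

def _compute_scale(min_val: torch.Tensor, max_val: torch.Tensor,
                   quant_min: int, quant_max: int,
                   eps: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper sketching the scale computation; names are illustrative.
    min_val_neg = torch.min(min_val, torch.zeros_like(min_val))
    max_val_pos = torch.max(max_val, torch.zeros_like(max_val))

    device = min_val_neg.device
    # The fix: ensure eps lives on the same device as the other tensors
    # before it is combined with them. Without this, an eps tensor created
    # on the CPU clashes with min/max tensors on CUDA or HPU.
    eps = eps.to(device)

    scale = (max_val_pos - min_val_neg) / float(quant_max - quant_min)
    scale = torch.max(scale, eps)
    return scale


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    min_val = torch.tensor(-1.0, device=device)
    max_val = torch.tensor(1.0, device=device)
    eps = torch.tensor(torch.finfo(torch.float32).eps)  # created on CPU
    print(_compute_scale(min_val, max_val, -128, 127, eps))
```

Without the `eps.to(device)` line, running the example on a CUDA or HPU device would raise a device mismatch error in `torch.max(scale, eps)`.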