Fix mismatched tensor metadata between FakeTensor and Intel XPU concrete tensor when running F.logsigmoid
#141333
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/141333
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 5 Unrelated Failures as of commit d47f443 with merge base 5deca07.
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
UNSTABLE - The following jobs failed, were likely due to flakiness present on trunk, and have been marked as unstable:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "topic: not user facing"

@pytorchbot label "module: xpu"
Could you add a unit test for XPU?

These cases will be tested in the torch-xpu-ops project.
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 112253c to 9469010 (compare).
Please fix the lint issue.
```diff
     min = torch.minimum(self.new_zeros(()), self)
     z = torch.exp(-torch.abs(self))
-    if self.is_cuda:
+    if self.is_cuda or self.is_xpu:
```
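The guarded branch above decides whether the decomposition returns an empty buffer. A minimal scalar sketch of that logic (pure Python with `math`; the function name and the string `device` argument are illustrative simplifications, not the real tensor-based decomposition):

```python
import math

def log_sigmoid_forward_sketch(x: float, device: str):
    """Scalar sketch of the log_sigmoid_forward decomposition's
    buffer sizing. Hypothetical simplification: the real code operates
    on tensors, and `device` here is a plain string."""
    min_val = min(0.0, x)
    z = math.exp(-abs(x))
    if device in ("cuda", "xpu"):
        # GPU-like kernels recompute z in backward, so the buffer stays empty.
        buffer = []
    else:
        # The CPU path saves z for reuse in the backward pass.
        buffer = [z]
    output = min_val - math.log1p(z)
    return output, buffer
```

For `x = 0` both paths compute `logsigmoid(0) = -ln 2`, but only the CPU path returns a nonempty buffer, which is exactly the size mismatch the fake tensor must reproduce.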
Add `is_xpu` to the tensor docs; refer to Lines 6688 to 6693 in 6e61ff4:

```python
add_docstr_all(
    "is_cuda",
    r"""
Is ``True`` if the Tensor is stored on the GPU, ``False`` otherwise.
""",
)
```
`is_xpu` is already documented in this file; see https://github.com/pytorch/pytorch/blob/main/torch/_tensor_docs.py#L6716
Add `is_xpu` to the `.pyi` stubs; refer to Lines 1205 to 1208 in 5ca75ac:

```python
"is_cpu": ["is_cpu: _bool"],
"is_cuda": ["is_cuda: _bool"],
"is_leaf": ["is_leaf: _bool"],
"is_nested": ["is_nested: _bool"],
```
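A sketch of where the suggested `is_xpu` stub entry would slot in, mirroring its neighbors. The dict name `simple_conversions` is hypothetical; only the entry format follows the quoted snippet.

```python
# Hypothetical container for the quoted stub entries; the `is_xpu`
# line is the suggested addition, formatted like its neighbors.
simple_conversions = {
    "is_cpu": ["is_cpu: _bool"],
    "is_cuda": ["is_cuda: _bool"],
    "is_xpu": ["is_xpu: _bool"],  # suggested addition
    "is_leaf": ["is_leaf: _bool"],
    "is_nested": ["is_nested: _bool"],
}
```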
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 63a480e to 92fa4de (compare).
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 92fa4de to 7ce274e (compare).
@pytorchbot rebase |
|
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here |
|
Successfully rebased |
|
@pytorchbot rebase |
|
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here |
|
@pytorchbot merge -f "this looks fine to force land"

Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fix mismatched tensor metadata between FakeTensor and Intel XPU concrete tensor when running `F.logsigmoid` (pytorch#141333)

Fixes pytorch#141332

`F.logsigmoid` returns two outputs: `output` and `buffer`. On the CPU path, `buffer` stores intermediate values that are reused when computing gradients, so it is returned with nonzero size. On the CUDA and XPU paths the buffer is unused, so XPU `F.logsigmoid` returns a zero-size `buffer`, just like CUDA. The root cause of the issue is that the code in `decompositions.py` (ref: https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py#L2803) only handled the CUDA case; when a fake tensor on an XPU device reached it, it took the CPU path and returned a `buffer` with nonzero size, which conflicts with the implementation of the Intel XPU concrete tensor. This PR adds a condition to handle the XPU case so that the two returned buffer sizes match.

Pull Request resolved: pytorch#141333
Approved by: https://github.com/guangyey, https://github.com/EikanWang, https://github.com/ezyang
cc @gujinghui @EikanWang @fengyuan14 @guangyey
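The mismatch described above can be summarized as a device check in the decomposition's dispatch. A minimal, self-contained sketch (pure Python; the helper name, the `patched` flag, and scalar sizes are illustrative, not PyTorch API) of the buffer size the fake tensor reports before and after the fix:

```python
def fake_buffer_numel(device: str, input_numel: int, patched: bool) -> int:
    """Hypothetical helper distilling the dispatch logic: before the
    patch only CUDA got the empty buffer; after it, XPU does too."""
    gpu_like = ("cuda", "xpu") if patched else ("cuda",)
    return 0 if device in gpu_like else input_numel

# Concrete XPU kernels always return an empty buffer (numel == 0).
CONCRETE_XPU_BUFFER_NUMEL = 0

# Before the fix the fake tensor disagreed with the concrete tensor;
# after the fix the two buffer sizes match.
assert fake_buffer_numel("xpu", 8, patched=False) != CONCRETE_XPU_BUFFER_NUMEL
assert fake_buffer_numel("xpu", 8, patched=True) == CONCRETE_XPU_BUFFER_NUMEL
```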