Implement traceable torch.tensor when you have SymInt/SymFloat inputs #109515
Conversation
Signed-off-by: Edward Z. Yang <ezyang@meta.com> [ghstack-poisoned]
Update on "Implement traceable torch.tensor when you have SymInt/SymFloat inputs": I just ported the C++ torch.tensor implementation to Python, swapping out the inner bits to successively stack tensors together, so that we can trace through `scalar_tensor`. Signed-off-by: Edward Z. Yang <ezyang@meta.com> cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx chenyang78 aakhundov kadeng [ghstack-poisoned]
    )
    type_inference = dtype is None
    new_tensor = _internal_new_from_data(
        {"device": "cpu"},  # TODO: use torch.get_default_tensor_type
why device cpu??
Because that's the default device when you don't specify one.
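For context, a quick check of that default behavior (illustrative only, not part of the diff):

```python
import torch

# With no device argument, torch.tensor places the result on the current
# default device, which is CPU unless it has been changed.
t = torch.tensor([1.0, 2.0])
print(t.device)  # cpu
```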
    zero_ = _make_inplace(zero)


    def _isStorage(obj):
hyper-nit: camelCase
    )
    var = data
    if copy_variables:
        var = var.detach()
sorry, why detach and not clone?
Honestly, I have no idea why the original code does that. The logic was added in #14097, but we didn't discuss it in review. If I had to guess, it's to handle the case where var has requires_grad=True: internally we might do a .to() call, and we want to avoid generating a backward graph when that happens.
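A minimal sketch of that guess (illustrative variable names, not the actual helper):

```python
import torch

data = torch.randn(3, requires_grad=True)

# Without detach, autograd records the conversion and the result stays
# connected to data's graph.
converted = data.to(torch.float64)
print(converted.requires_grad)  # True

# Detaching first yields a view that shares storage but tracks no
# gradients, so the subsequent .to() builds no backward graph.
detached = data.detach().to(torch.float64)
print(detached.requires_grad)  # False
```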
Stamping because this is a port, but I have questions.
    if cur_item is obj:
        raise TypeError("new(): self-referential lists are incompatible")
    """
    item_scalarType = _infer_scalar_type(cur_item)  # recurse!
I know it's ported, but camelCase
| """ | ||
| item_scalarType = _infer_scalar_type(cur_item) # recurse! | ||
| if scalarType is not None: | ||
| scalarType = torch.promote_types(scalarType, item_scalarType) |
more camels
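For readers following along, a rough sketch of what that recursion does (simplified; the real _infer_scalar_type in this PR handles more input kinds):

```python
import torch

def infer_scalar_type(obj):
    # Leaves map directly to a dtype; sequences recurse over their items
    # and combine the element dtypes with torch.promote_types.
    if isinstance(obj, bool):  # check bool before int, since bool is an int subclass
        return torch.bool
    if isinstance(obj, int):
        return torch.int64
    if isinstance(obj, float):
        return torch.get_default_dtype()
    if isinstance(obj, torch.Tensor):
        return obj.dtype
    scalar_type = None
    for item in obj:
        item_type = infer_scalar_type(item)  # recurse!
        scalar_type = (
            item_type
            if scalar_type is None
            else torch.promote_types(scalar_type, item_type)
        )
    return scalar_type

print(infer_scalar_type([[1, 2], [3.0, 4]]))  # torch.float32 with default settings
```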
    type_inference = dtype is None
    new_tensor = _internal_new_from_data(
        {"device": "cpu"},  # TODO: use torch.get_default_tensor_type
        dtype if dtype is not None else torch.get_default_dtype(),
nit: use the type_inference bool instead of dtype is not None
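Roughly what the nit is asking for, as a fragment mirroring the quoted lines (sketch only, same behavior; the remaining arguments are elided):

```python
type_inference = dtype is None
new_tensor = _internal_new_from_data(
    {"device": "cpu"},  # TODO: use torch.get_default_tensor_type
    torch.get_default_dtype() if type_inference else dtype,
    ...
)
```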
    var = data
    if copy_variables:
        var = var.detach()
    inferred_scalar_type = var.dtype if type_inference else scalar_type
This is the same as the inferred_scalar_type = _infer_scalar_type(data) if type_inference else scalar_type line below, since that line routes to .dtype anyway. Combine them and move it up?
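In other words, the suggestion is that both spots could share one computation, something like this (sketch of the reviewer's idea, not a tested change):

```python
# _infer_scalar_type(data) already returns data.dtype for tensors, so the
# earlier var.dtype branch could reuse this single line, hoisted up:
inferred_scalar_type = _infer_scalar_type(data) if type_inference else scalar_type
```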
I resolved all the PR comments that would not have caused divergence from the original code.
@pytorchbot merge -f "known master breakage only"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
I just ported the C++ torch.tensor implementation to Python, swapping out the inner bits to successively stack tensors together, so that we can trace through scalar_tensor.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng
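To make the approach concrete, here is a minimal sketch of the stacking idea described above (simplified; the actual port also handles dtype/device inference, copying, storages, and so on):

```python
import torch

def tensor_from_nested(data, dtype=None):
    # Leaves become 0-d tensors via scalar_tensor, which the tracer can
    # follow even when the leaf is a SymInt/SymFloat; nested sequences are
    # built by recursing and stacking the per-item results.
    if isinstance(data, (bool, int, float)):
        return torch.scalar_tensor(data, dtype=dtype)
    return torch.stack([tensor_from_nested(item, dtype=dtype) for item in data])

print(tensor_from_nested([[1, 2], [3, 4]], dtype=torch.float32))
# tensor([[1., 2.],
#         [3., 4.]])
```

Building tensors by stacking scalars is slower than the eager C++ fast path, but it keeps every step expressible as traceable ops, which is the point of the port.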