Implement traceable torch.tensor when you have SymInt/SymFloat inputs by ezyang · Pull Request #109515 · pytorch/pytorch

Conversation

@ezyang
Contributor

@ezyang ezyang commented Sep 18, 2023

Stack from ghstack (oldest at bottom):

I just ported the C++ `torch.tensor` implementation to Python, swapping out the inner bits to successively stack tensors together, so that we can trace through `scalar_tensor`.
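To make "successively stack" concrete, here is a minimal sketch of the idea, not the PR's actual code (the helper name `_tensor_from_data` is made up for illustration): leaves become 0-d tensors via `scalar_tensor`, which stays traceable when the leaf is a SymInt/SymFloat, and each nesting level is reassembled with `torch.stack`.

```python
import torch

def _tensor_from_data(data):
    # Hypothetical sketch: recurse into nested sequences, turn each
    # leaf into a 0-d tensor with scalar_tensor, and rebuild every
    # nesting level with torch.stack.
    if isinstance(data, (list, tuple)):
        return torch.stack([_tensor_from_data(item) for item in data])
    return torch.scalar_tensor(data)

t = _tensor_from_data([[1.0, 2.0], [3.0, 4.0]])
print(t.shape)  # torch.Size([2, 2])
```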

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

[ghstack-poisoned]
@pytorch-bot
pytorch-bot bot commented Sep 18, 2023

Update on "Implement traceable torch.tensor when you have SymInt/SymFloat inputs"


I just ported the C++ torch.tensor implementation to Python, swapping out the inner bits to successively stack tensors together, so that we can trace through `scalar_tensor`.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx chenyang78 aakhundov kadeng

[ghstack-poisoned]
@ezyang ezyang added the ciflow/trunk (Trigger trunk jobs on your pull request) label Sep 18, 2023
    )
    type_inference = dtype is None
    new_tensor = _internal_new_from_data(
        {"device": "cpu"},  # TODO: use torch.get_default_tensor_type
Collaborator

why device cpu??

Contributor Author

Because that's the default device when you don't specify one.
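A quick illustration of that default (standard PyTorch behavior; `torch.set_default_device` has existed since 2.0):

```python
import torch

# With no device argument, factory functions allocate on the
# default device, which is CPU out of the box:
print(torch.tensor([1, 2]).device)  # cpu

# Changing the default changes what torch.tensor picks up:
torch.set_default_device("meta")
print(torch.tensor([1, 2]).device)  # meta
torch.set_default_device("cpu")  # restore
```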

zero_ = _make_inplace(zero)


def _isStorage(obj):
Collaborator

hypernit: camelCase

    )
    var = data
    if copy_variables:
        var = var.detach()
Collaborator

sorry, why detach and not clone?

Contributor Author

You know, I have no idea why the original code does that. The logic was added in #14097, but we didn't discuss it in review. If I had to guess, it's to deal with the situation where `var` has `requires_grad=True`: on the inside we might do a `.to()` call, and we want to avoid generating a backwards graph when that happens.
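If that guess is right, the behavior is easy to demonstrate with plain autograd (a standalone repro, not code from this PR):

```python
import torch

x = torch.randn(3, requires_grad=True)

# A dtype-converting .to() on a tensor that requires grad is
# recorded by autograd, so a backward node is created:
y = x.to(torch.float64)
print(y.grad_fn is not None)  # True

# Detaching first severs the graph, so the same .to() builds no
# backward graph at all:
z = x.detach().to(torch.float64)
print(z.grad_fn)        # None
print(z.requires_grad)  # False
```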

Collaborator

@voznesenskym voznesenskym left a comment

Stamp because it's a port, but I have questions.

        if cur_item is obj:
            raise TypeError("new(): self-referential lists are incompatible")
        """
        item_scalarType = _infer_scalar_type(cur_item)  # recurse!
Collaborator

I know it's ported, but camelCase

"""
item_scalarType = _infer_scalar_type(cur_item) # recurse!
if scalarType is not None:
scalarType = torch.promote_types(scalarType, item_scalarType)
Collaborator

more camels
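For context on the promotion loop under discussion: `torch.promote_types` folds each item's inferred dtype into a running result, which is what gives mixed nested lists a sensible final dtype.

```python
import torch

# promote_types follows PyTorch's standard type-promotion lattice:
print(torch.promote_types(torch.int64, torch.float32))  # torch.float32
print(torch.promote_types(torch.bool, torch.int32))     # torch.int32

# This is why a mixed int/float nested list infers a floating dtype:
print(torch.tensor([[1, 2], [3.5, 4.0]]).dtype)  # torch.float32
```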

    type_inference = dtype is None
    new_tensor = _internal_new_from_data(
        {"device": "cpu"},  # TODO: use torch.get_default_tensor_type
        dtype if dtype is not None else torch.get_default_dtype(),
Collaborator

nit: use the `type_inference` bool instead of `dtype is not None`

    var = data
    if copy_variables:
        var = var.detach()
    inferred_scalar_type = var.dtype if type_inference else scalar_type
Collaborator

this is the same as the line `inferred_scalar_type = _infer_scalar_type(data) if type_inference else scalar_type` below, because that line routes to `.dtype` anyway. Combine and move up?

@ezyang
Contributor Author

ezyang commented Sep 19, 2023

I resolved all the PR comments that would not have caused divergence from the original code.

Update on "Implement traceable torch.tensor when you have SymInt/SymFloat inputs"


I just ported the C++ torch.tensor implementation to Python, swapping out the inner bits to successively stack tensors together, so that we can trace through `scalar_tensor`.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx chenyang78 aakhundov kadeng

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Sep 19, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 2e082a2
Pull Request resolved: #109515
@ezyang
Contributor Author

ezyang commented Sep 19, 2023

@pytorchbot merge -f "known master breakage only"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here
