chunk_size should always be int64_t for Foreach functors by janeyx99 · Pull Request #156872 · pytorch/pytorch

Conversation

@janeyx99 (Contributor) commented Jun 25, 2025

@pytorch-bot bot commented Jun 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/156872

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 4414075 with merge base 070aa59:

UNSTABLE - One job is marked as unstable, possibly due to flakiness on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: foreach_frontend release notes category label Jun 25, 2025
janeyx99 added a commit that referenced this pull request Jun 25, 2025
@janeyx99 janeyx99 added the topic: bug fixes topic category label Jun 25, 2025
@janeyx99 janeyx99 requested review from albanD and ngimel June 25, 2025 19:46
@ngimel (Collaborator) commented Jun 25, 2025

I'm not sure if all foreach functors strictly require this, but probably it doesn't hurt.

See #156261 (comment)

Testing is a valid question; it is pretty expensive to test such large tensors for all these ops.
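For rough scale (back-of-the-envelope numbers, not from the thread): the 64-bit path only matters once a tensor's element count exceeds INT_MAX, i.e. more than 2^31 ≈ 2.1 billion elements. At float32 that is 2^31 × 4 bytes = 8 GiB for a single input tensor, before counting the rest of the tensor list or any outputs, so exercising this path across every foreach op is costly in both memory and CI time.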




janeyx99 added a commit that referenced this pull request Jun 25, 2025
@albanD (Collaborator) left a comment

I see you're not changing any caller into these operators? Does that mean that we used to have a warning somewhere that we were narrowing the argument, which we ignored?

janeyx99 added a commit that referenced this pull request Jun 27, 2025
@janeyx99 (Contributor, Author) commented

> I see you're not changing any caller into these operators? Does that mean that we used to have a warning somewhere that we were narrowing the argument, which we ignored?

We don't have a warning, but these functors are called here, where kChunkSize is an int64_t: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/MultiTensorApply.cuh#L106
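To make the narrowing concrete, here is a minimal sketch (hypothetical functor and numbers, not the actual ATen code): if the functor parameter is `int` while the caller passes an `int64_t` kChunkSize, the argument is silently truncated, and offset math such as `chunk_idx * chunk_size` happens in 32 bits, which can overflow once a tensor has more than INT_MAX elements.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for a Foreach functor; the real ones live under
// ATen/native/cuda and are invoked from MultiTensorApply.cuh.
struct PointwiseFunctor {
  // Before: `int chunk_size` silently narrowed the caller's int64_t and made
  // chunk_idx * chunk_size a 32-bit multiply. With int64_t it stays 64-bit.
  void operator()(int64_t chunk_idx, int64_t chunk_size) const {
    int64_t offset = chunk_idx * chunk_size;  // no overflow for huge tensors
    std::printf("chunk %lld starts at element %lld\n",
                static_cast<long long>(chunk_idx),
                static_cast<long long>(offset));
  }
};

int main() {
  constexpr int64_t kChunkSize = 65536;  // in the spirit of MultiTensorApply's kChunkSize
  // 40000 * 65536 = 2,621,440,000 > INT_MAX, so int parameters would overflow here.
  PointwiseFunctor()(40000, kChunkSize);
  return 0;
}
```

Since the call sites already pass int64_t, widening the functor parameters is source-compatible, which is consistent with the observation above that no callers needed to change.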

@janeyx99 (Contributor, Author) commented

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jun 27, 2025
@pytorchmergebot (Collaborator) commented

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@github-actions github-actions bot deleted the gh/janeyx99/273/head branch July 28, 2025 02:21
pytorchmergebot pushed a commit that referenced this pull request Oct 21, 2025
zhudada0120 pushed a commit to zhudada0120/pytorch that referenced this pull request Oct 22, 2025

Labels

ciflow/trunk · Merged · release notes: foreach_frontend · topic: bug fixes
