correctly keep track of processed tensors for foreach reductions by ngimel · Pull Request #140103 · pytorch/pytorch

Conversation

@ngimel (Collaborator) commented Nov 8, 2024

Fixes #140066

@ngimel ngimel requested review from eqy and syed-ahmed as code owners November 8, 2024 04:20
@pytorch-bot pytorch-bot bot added the release notes: cuda release notes category label Nov 8, 2024
@pytorch-bot bot commented Nov 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/140103


✅ No Failures

As of commit cbbb78f with merge base 43f0fe6 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@janeyx99 janeyx99 requested a review from mruberry as a code owner November 8, 2024 16:51
@janeyx99 janeyx99 added ciflow/trunk Trigger trunk jobs on your pull request release notes: foreach_frontend release notes category and removed release notes: cuda release notes category labels Nov 8, 2024
@ngimel (Collaborator, Author) commented Nov 8, 2024

I'm not sure the inputs here are large enough to trigger the bug. We need to test two cases:

  1. Enough chunks across all the tensors to trigger multiple kernel launches. That requires tensors totalling
     more than 65536 * 320 elements today, given that we launch at most 320 blocks per kernel.
  2. Enough tensors to trigger multiple kernel launches. I don't know the exact threshold off the top of my head, but it should be at least 50-100 tensors.

By the looks of it, the foreach inputs here generate at most ~10 small-ish tensors, so they might not trigger either case. We might be better off just writing one-off tests for this, wdyt?
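As a rough sketch of case 1, the launch count can be estimated with back-of-the-envelope arithmetic. This is plain Python, not the actual dispatch logic; the chunk size (65536 elements) and the 320-block cap are assumptions taken from the numbers in the comment above:

```python
# Assumed constants from the comment above (not read from PyTorch itself):
CHUNK_SIZE = 65536   # elements processed per chunk/block, assumed
MAX_BLOCKS = 320     # max blocks per kernel launch, assumed

def launches_needed(tensor_sizes):
    """Estimate how many kernel launches a foreach reduction over
    tensors of these sizes (in elements) would need."""
    # ceil-divide each tensor into chunks, then ceil-divide chunks into launches
    total_chunks = sum(-(-n // CHUNK_SIZE) for n in tensor_sizes)
    return -(-total_chunks // MAX_BLOCKS)

# A single tensor exactly at the threshold fits in one launch;
# one extra element spills a chunk into a second launch:
print(launches_needed([65536 * 320]))      # 1
print(launches_needed([65536 * 320 + 1]))  # 2
```

Under these assumptions, a test input only exercises the multi-launch path once the total element count crosses 65536 * 320, which is why the ~10 small-ish tensors above likely never hit it.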

Contributor

Yeah, that'd probably be easiest; the meta tests just failed too, which makes the decision easier.

Contributor

Eventually we probably want both tests

@janeyx99 janeyx99 force-pushed the ngimel/foreach_norm branch from 544a37d to dacd07b Compare November 8, 2024 19:23
@janeyx99
Copy link
Contributor

janeyx99 commented Nov 8, 2024

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced debugging: check the merge workflow status.

zero000064 pushed a commit to zero000064/pytorch that referenced this pull request Nov 14, 2024
Ryo-not-rio pushed a commit to Ryo-not-rio/pytorch that referenced this pull request Dec 2, 2024
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
@github-actions github-actions bot deleted the ngimel/foreach_norm branch December 9, 2024 02:15

Labels: ciflow/trunk, Merged, release notes: foreach_frontend

Successfully merging this pull request may close these issues: _foreach_norm produces wrong results