[IntraNodeComm] fix a recent breakage by yifuwang · Pull Request #141200 · pytorch/pytorch · GitHub

Conversation

@yifuwang yifuwang (Collaborator) commented Nov 21, 2024

Stack from ghstack (oldest at bottom):

  • Pass group_name to CUDASymmetricMemory::alloc() instead of CUDASymmetricMemory::rendezvous(). We can only move the argument to rendezvous() once all the underlying operators do the same.
  • Added float to the allowlist for intra-node all-reduces.
  • Added a warning when IntraNodeComm::rendezvous() is performed with overlapping devices among participants. (A sketch of this warning and the new dtype allowlist follows below.)

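For illustration, here is a minimal C++ sketch (assuming libtorch headers) of the two behavioral changes above: the widened dtype allowlist and the overlapping-device warning. The helper names checkAllReduceDtype and warnOnOverlappingDevices are hypothetical and not the PR's actual functions; only the TORCH_CHECK condition mirrors the diff reviewed further down.

```cpp
#include <unordered_set>
#include <vector>

#include <ATen/ATen.h>
#include <c10/util/Exception.h>

// Dtype allowlist for the intra-node one-shot all-reduce: after this PR,
// both float and bf16 pass the check (same condition as in the diff below).
void checkAllReduceDtype(const at::Tensor& input) {
  TORCH_CHECK(
      input.dtype() == at::kBFloat16 || input.dtype() == at::kFloat,
      "oneShotAllReduce only supports float and bf16 for now");
}

// Hypothetical helper illustrating the new warning: if two participants in a
// rendezvous report the same device index, emit a warning rather than failing
// silently later.
void warnOnOverlappingDevices(const std::vector<int>& deviceIndices) {
  std::unordered_set<int> seen;
  for (int idx : deviceIndices) {
    if (!seen.insert(idx).second) {
      TORCH_WARN(
          "IntraNodeComm::rendezvous() called with overlapping devices "
          "among participants (device ", idx, " appears more than once).");
      return;
    }
  }
}

int main() {
  checkAllReduceDtype(at::zeros({8}, at::kFloat)); // passes after this PR
  warnOnOverlappingDevices({0, 1, 1, 2});          // emits the warning once
  return 0;
}
```
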
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o

@pytorch-bot pytorch-bot bot added the oncall: distributed and release notes: distributed (c10d) labels Nov 21, 2024
@pytorch-bot pytorch-bot bot commented Nov 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/141200

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 2f51ba1 with merge base 161425f:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

  • linux-binary-manywheel / manywheel-py3_9-cuda12_6-test / test (gh) (similar failure)
    RuntimeError: cuDNN version incompatibility: PyTorch was compiled against (9, 5, 1) but found runtime version (9, 1, 0). PyTorch already comes bundled with cuDNN. One option to resolving this error is to ensure PyTorch can find the bundled cuDNN. one possibility is that there is a conflicting cuDNN in LD_LIBRARY_PATH.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

yifuwang pushed a commit that referenced this pull request Nov 21, 2024
ghstack-source-id: e57edc3
Pull Request resolved: #141200
@yifuwang yifuwang (Collaborator, Author) commented

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label Nov 25, 2024
@pytorchmergebot pytorchmergebot (Collaborator) commented

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@kwen2501 kwen2501 (Contributor) left a comment

Thanks for the quick fix!

Comment on lines -13 to +14
-      input.dtype() == at::kBFloat16,
-      "oneShotAllReduce only supports bf16 for now");
+      input.dtype() == at::kBFloat16 || input.dtype() == at::kFloat,
+      "oneShotAllReduce only supports float and bf16 for now");
Contributor

Nice fix. Was hitting this issue last week. It unblocks me now :)

pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024

Pull Request resolved: pytorch#141200
Approved by: https://github.com/weifengpy, https://github.com/kwen2501
@github-actions github-actions bot deleted the gh/yifuwang/175/head branch December 26, 2024 02:04

Labels

ciflow/trunk · Merged · oncall: distributed · release notes: distributed (c10d)

4 participants