[PGNCCL] Record device index for GPU guarding during NCCLComm method calls by kwen2501 · Pull Request #141270 · pytorch/pytorch

Conversation

@kwen2501 (Contributor) commented Nov 21, 2024

Stack from ghstack (oldest at bottom):

Motivation

`ncclCommInitRank` needs a GPU guard (documented in NCCL).

`ncclCommAbort`, `ncclCommFinalize`, and `ncclCommDestroy` may also need a GPU guard (undocumented in NCCL); otherwise, an extra CUDA context may be created (or, worse, a hang may occur). Both effects have been seen before in our tests.
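For context, here is a minimal, hypothetical sketch of the guarded init pattern (a standalone illustration, not this PR's diff; `initNcclComm` and its parameters are made up for the example):

```cpp
#include <c10/cuda/CUDAGuard.h>
#include <c10/util/Exception.h>
#include <nccl.h>

// Pin the current CUDA device on this thread before calling into NCCL.
// Without the guard, ncclCommInitRank runs against whichever device is
// current and may create an extra CUDA context on the wrong GPU.
ncclComm_t initNcclComm(
    c10::DeviceIndex deviceIndex,
    int numRanks,
    ncclUniqueId commId,
    int rank) {
  c10::cuda::CUDAGuard gpuGuard(deviceIndex);
  ncclComm_t comm = nullptr;
  ncclResult_t res = ncclCommInitRank(&comm, numRanks, commId, rank);
  TORCH_CHECK(
      res == ncclSuccess,
      "ncclCommInitRank failed: ",
      ncclGetErrorString(res));
  return comm;
}
```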

Solution

This PR records a device index during `NCCLComm` object creation, so that we can add a GPU guard in the `NCCLComm` methods that call into the above NCCL APIs.
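A minimal sketch of the solution's shape, assuming hypothetical names (`NCCLCommSketch` is illustrative only; the real `NCCLComm` class in torch/csrc/distributed/c10d/NCCLUtils.hpp differs in detail):

```cpp
#include <c10/cuda/CUDAGuard.h>
#include <nccl.h>

class NCCLCommSketch {
 public:
  // Record the device index once, at creation time.
  NCCLCommSketch(ncclComm_t comm, c10::DeviceIndex deviceIndex)
      : ncclComm_(comm), deviceIndex_(deviceIndex) {}

  void abort() {
    // Guard so the abort runs with the comm's own device current;
    // otherwise NCCL may create an extra CUDA context, or hang.
    c10::cuda::OptionalCUDAGuard gpuGuard(deviceIndex_);
    ncclCommAbort(ncclComm_);
  }

  void destroy() {
    c10::cuda::OptionalCUDAGuard gpuGuard(deviceIndex_);
    ncclCommDestroy(ncclComm_);
  }

 private:
  ncclComm_t ncclComm_{nullptr};
  c10::DeviceIndex deviceIndex_{-1};
};
```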

Note

This is not a bug fix, just a safety improvement.

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o

@pytorch-bot pytorch-bot bot added oncall: distributed Add this issue/PR to distributed oncall triage queue release notes: distributed (c10d) release notes category labels Nov 21, 2024
pytorch-bot bot commented Nov 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/141270

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 90b8360 with merge base 740d1eb:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

…omm method calls"

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k c-p-i-o

[ghstack-poisoned]
kwen2501 added a commit that referenced this pull request Nov 21, 2024
@kwen2501 kwen2501 added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 22, 2024
…omm method calls"


### Motivation
`ncclCommInitRank` needs GPU guard (documented in NCCL).

`ncclCommAbort`, `ncclCommFinalize` and `ncclCommDestroy` may also need GPU guard (undocumented in NCCL); otherwise, extra CUDA context may be created (or worse, hang); both effects have been seen before in our tests.

### Solution
This PR records a device index during `NCCLComm` object creation, so that we can add GPU guard in `NCCLComm`'s method calling which direct to the above NCCL APIs.

### Note
This is not a bug fix. Just a safety improvement.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k c-p-i-o

[ghstack-poisoned]
```cpp
LOG(INFO) << "Rank " << source->rank_ << ": split from parent comm "
          << source->repr() << " with color_id " << color_id << " and rank "
          << rank;
at::cuda::OptionalCUDAGuard gpuGuard(source->deviceIndex_);
```
Collaborator commented:
Will this fail if called with a device index of -1? So we would never init a comm with an uninitialized device index, correct?

Contributor Author replied:
Yes, all of our created comms should have a recorded device index. If this line can fail by itself, that is great (better than a silent pass-through).
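To make that expectation concrete, here is a hypothetical hardening of the hunk above (not part of this PR) that fails loudly on an unrecorded index instead of relying on the guard's own error path:

```cpp
// Hypothetical: assert the index was recorded at comm creation, rather
// than depending on OptionalCUDAGuard's behavior for -1.
TORCH_INTERNAL_ASSERT(
    source->deviceIndex_ >= 0,
    "NCCLComm has no recorded device index");
at::cuda::OptionalCUDAGuard gpuGuard(source->deviceIndex_);
```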

@kwen2501 (Contributor Author)

@pytorchbot merge -f "CI was previously green; rebased accidentally"

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge while ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

auto& devName = it.first;
auto& ncclComm = it.second;
at::cuda::OptionalCUDAGuard gpuGuard;
at::DeviceIndex deviceIndex = getIndexFromDeviceKey(devName);
Contributor commented:
Shall we also remove `getIndexFromDeviceKey`?

Contributor Author replied:
Oh, do you mean this function has no use now? If so, yeah, we should remove it.
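A sketch of what that cleanup might look like (hypothetical: it assumes the comm exposes its recorded index via a `getDeviceIndex()` accessor and an `abort()` wrapper, which this PR's recorded `deviceIndex_` would enable):

```cpp
for (auto& it : devNCCLCommMap_) {
  auto& ncclComm = it.second;
  // Use the index recorded on the comm itself instead of parsing the
  // device-key string, leaving getIndexFromDeviceKey with no callers.
  at::cuda::OptionalCUDAGuard gpuGuard(ncclComm->getDeviceIndex());
  ncclComm->abort();
}
```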

pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
…calls (pytorch#141270)


Pull Request resolved: pytorch#141270
Approved by: https://github.com/eqy
ghstack dependencies: pytorch#141374
Esquains pushed a commit to Esquains/study1 that referenced this pull request Dec 15, 2024
…calls

[ghstack-poisoned]

ghstack-source-id: 4ce17c2
Pull Request resolved: pytorch/pytorch#141270
@github-actions github-actions bot deleted the gh/kwen2501/99/head branch January 1, 2025 02:09