[NCCL][Profiler] Add functionality to call dump function of NCCL profiler plugin by zhiyongww · Pull Request #137523 · pytorch/pytorch · GitHub

Conversation

@zhiyongww commented Oct 8, 2024

Summary:
NCCL 2.23.4 introduces a profiler plugin feature that traces collective, point-to-point (p2p), proxyOp, and other events.

This diff adds the following behavior: when NCCL times out, the flight recorder also invokes the profiler plugin's dump function, so the traces collected by the plugin are dumped alongside the flight recorder's debug info.
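Mechanically (per the implementation in ProcessGroupNCCL.cpp), the dump path dlopens the plugin shared object and resolves a dump entry point by name. Below is a rough Python/ctypes illustration of that flow; the `NCCL_PROFILER_PLUGIN_FUN` override and the default symbol name `ncclProfilerPluginDump` are specific to this diff rather than part of the NCCL profiler contract (see the revert discussion at the end of this thread):

```
import ctypes
import os

def dump_profiler_plugin() -> None:
    # NCCL_PROFILER_PLUGIN names the plugin .so (also logged by ProcessGroupNCCL).
    plugin_path = os.environ.get("NCCL_PROFILER_PLUGIN")
    if plugin_path is None:
        return  # no plugin configured; flight recorder dumps proceed without it
    # NCCL_PROFILER_PLUGIN_FUN optionally overrides the dump symbol name.
    symbol = os.environ.get("NCCL_PROFILER_PLUGIN_FUN", "ncclProfilerPluginDump")
    lib = ctypes.CDLL(plugin_path)       # dlopen the plugin
    dump = getattr(lib, symbol, None)    # dlsym the dump entry point
    if dump is not None:
        dump.restype = None
        dump()  # invoked when the flight recorder dumps debug info
```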

Test Plan:

```
import torch
import torch.distributed as dist

# Assumes the driver script defines dev, WORLD_SIZE, RANK, and the loop index i.
tensor = torch.tensor([dist.get_rank()], dtype=torch.int32, device=dev)
# Create a list with as many elements as the world size (i.e., the number of
# ranks); all_gather populates it with the tensor from every rank.
gathered_tensors = [torch.zeros_like(tensor) for _ in range(WORLD_SIZE)]
# Rank 0 stops participating after iteration 10, so the remaining ranks hang in
# all_gather and the NCCL watchdog eventually times out.
if i <= 10 or RANK != 0:
    dist.all_gather(gathered_tensors, tensor)
```
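For completeness, the run assumes the flight recorder and the profiler plugin are enabled through environment variables. A hedged sketch of that setup; the values are illustrative, `TORCH_NCCL_TRACE_BUFFER_SIZE` and `TORCH_NCCL_DUMP_ON_TIMEOUT` are the usual flight recorder knobs, and `NCCL_PROFILER_PLUGIN` is the plugin path that also shows up in the logs below:

```
import os

# Set before dist.init_process_group(); in practice these are usually exported
# by the job launcher rather than set in the script itself.
os.environ["TORCH_NCCL_TRACE_BUFFER_SIZE"] = "2000"  # enable flight recorder (N entries)
os.environ["TORCH_NCCL_DUMP_ON_TIMEOUT"] = "1"       # dump debug info on watchdog timeout
os.environ["NCCL_PROFILER_PLUGIN"] = "/path/to/libnccl_profiler_plugin.so"  # placeholder path
```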

My script triggers the flight recorder:

```
trainer/0 [0]:E0927 12:07:22.643702 1012209 ProcessGroupNCCL.cpp:1356] [PG ID 0 PG GUID 0(default_pg) Rank 0] ProcessGroupNCCL preparing to dump debug info.
trainer/0 [0]:I0927 12:07:22.643784 1012209 ProcessGroupNCCL.cpp:392] NCCL_PROFILER_PLUGIN: /data/users/zhiyongww/fbsource/fbcode/scripts/nbahl/libnccl_profiler_plugin.so
trainer/0 [0]:I0927 12:07:22.643805 1012209 plugin.cpp:559] Profiler start dump
trainer/0 [0]:I0927 12:07:22.645249 1012209 ProcessGroupNCCL.cpp:1363] [PG ID 0 PG GUID 0(default_pg) Rank 0] ProcessGroupNCCL dumping nccl trace to /tmp/nccl_trace_rank_0
trainer/0 [0]:I0927 12:07:22.645418 1012209 NCCLUtils.cpp:348] Finished writing NCCLPG debug info to /tmp/nccl_trace_rank_0
```

Content from /tmp/nccl_trace_rank_0: P1614645283
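The dump file can also be inspected offline. A minimal sketch, assuming the flight recorder's usual pickled-dict format (keys such as "entries" hold the recorded collectives; adjust if the format differs):

```
import pickle

# Load the flight recorder dump written for rank 0 (path from the logs above).
with open("/tmp/nccl_trace_rank_0", "rb") as f:
    trace = pickle.load(f)
print(list(trace.keys()) if isinstance(trace, dict) else type(trace))
```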

Differential Revision: D61929401

cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o

pytorch-bot bot commented Oct 8, 2024

This appears to be a diff that was exported from Phabricator, but the PR author does not have sufficient permissions to run CI. @zhiyongww, please do step 2 of the internal wiki to get write access so you do not need CI approvals in the future. If you think this is a mistake, please contact the PyTorch Dev Infra team.

linux-foundation-easycla bot commented Oct 8, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: zhiyongww / name: zhiyongww (162468a)

pytorch-bot bot added labels oncall: distributed and release notes: distributed (c10d) on Oct 8, 2024
pytorch-bot bot commented Oct 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/137523

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 162468a with merge base 76ab1ab:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot commented:
This pull request was exported from Phabricator. Differential Revision: D61929401

@zhiyongww force-pushed the export-D61929401 branch 2 times, most recently from 2bc47e0 to 162468a on October 8, 2024 at 21:38
@facebook-github-bot commented:
This pull request was exported from Phabricator. Differential Revision: D61929401

1 similar comment from @facebook-github-bot followed.

@facebook-github-bot commented:
@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorchmergebot commented:
Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@zhiyongww deleted the export-D61929401 branch on October 11, 2024 at 05:15
pytorchmergebot pushed a commit that referenced this pull request Nov 8, 2024
…ler plugin (#139847)

Reverts PR #137523

Reasons for the reversion:
1. The NCCL profiler plugin is meant to be opened by NCCL, and the profiler's implementation is meant to be provided by a profiler. There is no evidence that `torch.distributed` is better positioned to be either the opener or the provider. (The reverted PR made `torch.distributed` an opener.)

2. The main purpose of the reverted PR was to dlopen a dump function, with the environment variable `NCCL_PROFILER_PLUGIN_FUN` providing the symbol name, as in the code below:
https://github.com/pytorch/pytorch/blob/c19c38469030cdf399714eb2051887f4583006a8/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L415-L427
After some investigation, NCCL does not support the env var `NCCL_PROFILER_PLUGIN_FUN`, and NCCL's profiler contract `nccl_profiler.h` does not define a function named "ncclProfilerPluginDump", so this looks like a private add-on. (A quick symbol-presence check is sketched after this comment.)

Pull Request resolved: #139847
Approved by: https://github.com/c-p-i-o
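One way to verify the revert's claim against a concrete plugin binary is to check whether it exports the private dump symbol. A minimal sketch, assuming a hypothetical plugin path; per the comment above, a plugin built only against NCCL's `nccl_profiler.h` contract would not export it:

```
import ctypes

# "ncclProfilerPluginDump" is the add-on symbol the reverted PR relied on; it
# is not part of NCCL's profiler contract, so its absence is expected for
# standard plugins. The .so path below is a placeholder.
lib = ctypes.CDLL("/path/to/libnccl_profiler_plugin.so")
if hasattr(lib, "ncclProfilerPluginDump"):
    print("plugin exports the private dump symbol")
else:
    print("symbol not found: plugin follows only the standard profiler contract")
```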
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
Labels: fb-exported, Merged, oncall: distributed, release notes: distributed (c10d)