Add guards for USE_C10D_FOO in relevant c10d files by lw · Pull Request #59697 · pytorch/pytorch · GitHub

Conversation

@lw
Contributor

@lw lw commented Jun 9, 2021

Stack from ghstack:

The c10d build process selectively adds files based on the `USE_C10D_FOO` flags (where `FOO` is one of `GLOO`, `NCCL`, or `MPI`). Replicating this logic inside libtorch would be harder, since libtorch uses a simpler approach: it lists the files in `build_variables.bzl`. Instead, we always include all files and "disable" each one as needed using `#ifdef`s. This is not a new approach: we already do the same for all the files of the TensorPipe agent, based on the `USE_TENSORPIPE` flag.

Differential Revision: [D28987577](https://our.internmc.facebook.com/intern/diff/D28987577/)

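As an illustration of the pattern (not part of the PR diff itself), here is a minimal sketch of what a guarded backend source file looks like; the Gloo flag/file pairing and the commented-out include path are assumptions made for the example.

```cpp
// Minimal sketch of the guard pattern described above (not the actual diff):
// the translation unit is always handed to the compiler, but its contents are
// only emitted when the matching backend flag is defined by the build system.
#ifdef USE_C10D_GLOO

// #include <c10d/ProcessGroupGloo.hpp>  // hypothetical include for this example

namespace c10d {

// Gloo-specific implementation would live here. With the guard in place the
// file can be listed unconditionally, and it simply compiles to an empty
// translation unit when USE_C10D_GLOO is not set.

} // namespace c10d

#endif // USE_C10D_GLOO
```

With every backend file guarded this way, `build_variables.bzl` can list all of them unconditionally, and the `USE_C10D_FOO` flags alone decide what actually gets built, mirroring what `USE_TENSORPIPE` already does for the TensorPipe agent sources.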
@facebook-github-bot
Contributor

facebook-github-bot commented Jun 9, 2021

💊 CI failures summary and remediations

As of commit 88de4d8 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-scanned failure(s)

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


lw added 4 commits June 9, 2021 01:45
Contributor

@agolynski agolynski left a comment


LG

@facebook-github-bot
Contributor

This pull request has been merged in c9e4d13.

@facebook-github-bot facebook-github-bot deleted the gh/lw/214/head branch June 14, 2021 14:17

Labels

cla signed, Merged, oncall: distributed
