Remove CUDA 9.2 references, conditionals, and workarounds by janeyx99 · Pull Request #65070 · pytorch/pytorch

Conversation

@janeyx99
Contributor

Title says it all
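
For readers skimming the diff, the change removes version gates of roughly the following shape. This is a minimal, hypothetical sketch (the flag name `TEST_WITH_CUDA92` and the test are illustrative, not taken from this PR); PyTorch's actual CUDA 9.2 conditionals were spread across test skips, build files, and CI scripts.

```python
# Hypothetical example of the kind of CUDA 9.2 gate this PR deletes.
# The flag name and the test body are illustrative only.
import unittest

import torch

# torch.version.cuda is a string such as "9.2" or "11.3" (None on CPU-only builds).
TEST_WITH_CUDA92 = torch.version.cuda is not None and torch.version.cuda.startswith("9.2")


class TestGpuOp(unittest.TestCase):
    # Once CUDA 9.2 is no longer supported, skips like this are dead code,
    # so the conditional is dropped and the test runs unconditionally.
    @unittest.skipIf(TEST_WITH_CUDA92, "workaround for CUDA 9.2")
    def test_something_on_gpu(self):
        ...
```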

@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/janeyx99/pytorch/blob/c864c991b552688e9a8c88e9970ab3b0441bd577/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

| Workflows | Labels (bold enabled) | Status |
| --- | --- | --- |
| **Triggered Workflows** | | |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, **ciflow/default**, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-bionic-py3.8-gcc9-coverage | ciflow/all, ciflow/coverage, ciflow/cpu, **ciflow/default**, ciflow/linux | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, **ciflow/default**, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, **ciflow/default**, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, **ciflow/default**, ciflow/linux | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, **ciflow/default**, ciflow/win | ✅ triggered |
| win-vs2019-cuda10.2-py3 | ciflow/all, ciflow/cuda, **ciflow/default**, ciflow/win | ✅ triggered |
| **Skipped Workflows** | | |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| paralleltbb-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| puretorch-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/win | 🚫 skipped |

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Sep 15, 2021


💊 CI failures summary and remediations

As of commit c864c99 (more details on the Dr. CI page):


  • 1/2 failures introduced in this PR
  • 1/2 broken upstream at merge base 3fb33b3 on Sep 15 from 7:18am to 9:41am

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (distributed, 1, 1, linux.8xlarge.nvidia.gpu) (1/1)

Step: "Test PyTorch" (full log | diagnosis details | 🔁 rerun)

2021-09-15T18:01:31.7722766Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 111, in wrapper
2021-09-15T18:01:31.7723600Z     return func(*args, **kwargs)
2021-09-15T18:01:31.7724575Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 2848, in wrapper
2021-09-15T18:01:31.7725347Z     return func(*args, **kwargs)
2021-09-15T18:01:31.7726537Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 4653, in test_post_localSGD_optimizer_parity
2021-09-15T18:01:31.7727581Z     self.assertEqual(p1.data, p2.data)
2021-09-15T18:01:31.7728659Z   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1875, in assertEqual
2021-09-15T18:01:31.7729775Z     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-09-15T18:01:31.7730615Z   File "/opt/conda/lib/python3.6/unittest/case.py", line 682, in assertTrue
2021-09-15T18:01:31.7731372Z     raise self.failureException(msg)
2021-09-15T18:01:31.7732541Z AssertionError: False is not true : Tensors failed to compare as equal!Attempted to compare equality of tensors with different dtypes. Got dtypes torch.float32 and torch.int64.
2021-09-15T18:01:31.7733436Z 
2021-09-15T18:01:31.7733663Z 
2021-09-15T18:01:31.7733979Z 		
2021-09-15T18:01:31.7734472Z ✅ 534 Passed
2021-09-15T18:01:31.7735012Z 💨 197 Skipped
2021-09-15T18:01:31.7735500Z 🚨 1 Failed
2021-09-15T18:01:31.7923699Z ##[group]Run # Remove any previous test reports if they exist
2021-09-15T18:01:31.7924581Z # Remove any previous test reports if they exist
2021-09-15T18:01:31.7925202Z rm -f test-reports-*.zip
2021-09-15T18:01:31.7925826Z zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml'
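
For context on the failure above: PyTorch's tensor comparison helpers treat a dtype mismatch as a failure before any values are compared, which is why `assertEqual(p1.data, p2.data)` reports float32 vs. int64. The sketch below reproduces that behavior with the public `torch.testing.assert_close` API, used here only as a stand-in for the internal `assertEqual` helper in the traceback.

```python
import torch

a = torch.zeros(3, dtype=torch.float32)
b = torch.zeros(3, dtype=torch.int64)

# With the default check_dtype=True, the comparison fails on float32 vs int64
# even though every element is numerically equal, the same class of error
# reported by the distributed test above.
try:
    torch.testing.assert_close(a, b)
except AssertionError as err:
    print(err)

# Relaxing the dtype check compares the values in a common dtype and passes.
torch.testing.assert_close(a, b, check_dtype=False)
```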

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

Click here to manually regenerate this comment.

janeyx99 requested a review from a team September 15, 2021 16:23
@facebook-github-bot
Contributor

@janeyx99 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov

codecov bot commented Sep 15, 2021

Codecov Report

Merging #65070 (c864c99) into master (26e43fe) will decrease coverage by 0.00%.
The diff coverage is 50.00%.

@@            Coverage Diff             @@
##           master   #65070      +/-   ##
==========================================
- Coverage   66.39%   66.39%   -0.01%     
==========================================
  Files         725      725              
  Lines       93462    93461       -1     
==========================================
- Hits        62055    62053       -2     
- Misses      31407    31408       +1     

albanD removed their request for review September 16, 2021 17:51
@facebook-github-bot
Contributor

@janeyx99 merged this pull request in 1ee66a5.


Labels

cla signed · Merged · oncall: jit (Add this issue/PR to JIT oncall triage queue)
