deprecating nvfuser c++ API by jjsjann123 · Pull Request #110318 · pytorch/pytorch · GitHub

Conversation

@jjsjann123
Collaborator

@jjsjann123 jjsjann123 commented Sep 29, 2023

@pytorch-bot

pytorch-bot bot commented Sep 29, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110318

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit a7b22b6 with merge base f767a6c:

UNSTABLE - The following jobs failed, but they were likely due to flakiness present on trunk and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: jit release notes category label Sep 29, 2023
@jjsjann123 jjsjann123 added the ciflow/trunk Trigger trunk jobs on your pull request label Sep 29, 2023
@jjsjann123 jjsjann123 marked this pull request as ready for review October 3, 2023 06:18
@colesbury colesbury added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Oct 3, 2023
@jjsjann123
Collaborator Author

Bumping the base branch; hopefully the noisy ROCm failure will go away.

@jjsjann123
Collaborator Author

The ROCm failure looks persistent. Now I'm wondering if I did something wrong. I'll take another look.

@jeffdaily
Collaborator

We're working on the ROCm CI failures. Our CI was/is in an unstable state, so it was not a blocking CI signal, and a number of PRs have been merged that further broke ROCm CI. We have a fix for

TestQuantizePT2EQAT.test_qat_conv_bn_relu_fusion_cuda

but we're still working on the inductor test hang.

I don't think your PR is to blame here. Thanks for actually paying attention to the signal, though. I really appreciate it. Helps us a lot.

@jjsjann123
Collaborator Author

Awesome. Thanks for confirming this one.
In that sense this PR is good to go. Pinging @davidberard98 for a review.

@jjsjann123
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@NicolasHug
Member

NicolasHug commented Oct 9, 2023

Hi @jjsjann123 @davidberard98 , this PR seems to be causing warnings whenever one calls a scripted module (e.g. nn.Conv2d). I opened #110857, could you please take a look? (This is causing a ton of warnings in the torchvision test suite as well)

Thanks!

@davidberard98
Contributor

@pytorchbot revert

@pytorch-bot

pytorch-bot bot commented Oct 9, 2023

❌ 🤖 pytorchbot command failed:

@pytorchbot revert: error: the following arguments are required: -m/--message, -c/--classification

usage: @pytorchbot revert -m MESSAGE -c
                          {nosignal,ignoredsignal,landrace,weird,ghfirst}

Try @pytorchbot --help for more info.

@davidberard98
Contributor

@pytorchbot revert -m "too many warnings being thrown in torchvision #110857" -c nosignal

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot
Collaborator

@jjsjann123 your PR has been successfully reverted.

pytorchmergebot added a commit that referenced this pull request Oct 9, 2023
This reverts commit bf0866f.

Reverted #110318 on behalf of https://github.com/davidberard98 due to too many warnings being thrown in torchvision #110857
@jjsjann123
Collaborator Author

Ha, sorry about the excessive warnings. I'll switch to TORCH_WARN_ONCE instead.
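For context, the difference between PyTorch's TORCH_WARN (fires on every call) and TORCH_WARN_ONCE (fires only on the first call) is what caused the torchvision noise: the deprecation warning was emitted on every invocation of a scripted module. As a rough Python analogy (not PyTorch's actual C++ macros; the function names here are hypothetical), the two behaviors can be sketched with the standard warnings module:

```python
import warnings

def fuser_enabled_noisy():
    # Analogous to TORCH_WARN: the warning fires on every call.
    warnings.warn("nvfuser C++ API is deprecated", DeprecationWarning)
    return False

def fuser_enabled_quiet():
    # Analogous to TORCH_WARN_ONCE: a flag ensures the warning
    # fires only on the first call.
    if not getattr(fuser_enabled_quiet, "_warned", False):
        fuser_enabled_quiet._warned = True
        warnings.warn("nvfuser C++ API is deprecated", DeprecationWarning)
    return False

# Count how many warnings each variant emits over three calls.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record every emission
    for _ in range(3):
        fuser_enabled_noisy()
    noisy_count = len(caught)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    for _ in range(3):
        fuser_enabled_quiet()
    quiet_count = len(caught)

print(noisy_count, quiet_count)  # 3 1
```

In a test suite that calls a scripted module thousands of times, the per-call variant floods the logs, which is why the follow-up switched to warn-once semantics.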

jjsjann123 added a commit to jjsjann123/pytorch that referenced this pull request Oct 9, 2023
pytorchmergebot pushed a commit that referenced this pull request Oct 10, 2023
attempting to re-try #110318 deprecating nvfuser c++ API

The warning has been updated to TORCH_WARN_ONCE.
The warning thrown inside torch::jit::fuser::cuda::isEnabled() is turned off and will be deprecated when we pull out the TorchScript integration in a follow-up PR.
Pull Request resolved: #110881
Approved by: https://github.com/davidberard98, https://github.com/NicolasHug
@jjsjann123 jjsjann123 deleted the nvfuser_cpp_api_deprecation branch October 11, 2023 16:26