Add option for TorchDispatchMode to ignore torch.compile internals by zou3519 · Pull Request #161648 · pytorch/pytorch · GitHub

Conversation

@zou3519
Contributor

@zou3519 zou3519 commented Aug 27, 2025

Stack from ghstack (oldest at bottom):

If TorchDispatchMode.ignore_compile_internals() returns True, then we turn
off the TorchDispatchMode during the compilation process and turn it back
on during the runtime of the compiled artifact.

Test Plan:

  • new test
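The mechanism described above can be sketched in pure Python. This is a simplified stand-in, not the real torch internals: `_mode_stack`, `ModeSketch`, `QuietMode`, and `compile_fn` are all hypothetical names, and the "compile" step is a placeholder for Dynamo tracing.

```python
# Pure-Python sketch of the behavior described above. The mode stack and the
# "compile" step are simplified stand-ins for the real torch internals
# (TorchDispatchMode, Dynamo); none of these names are the actual API.

_mode_stack = []

class ModeSketch:
    @classmethod
    def ignore_compile_internals(cls):
        return False  # default: the mode observes compilation internals

class QuietMode(ModeSketch):
    @classmethod
    def ignore_compile_internals(cls):
        return True   # opt in: hide compilation from this mode

def compile_fn(fn):
    """'Compile' fn: hide opted-in modes while tracing, restore them after."""
    hidden = [m for m in _mode_stack if m.ignore_compile_internals()]
    for m in hidden:
        _mode_stack.remove(m)            # mode is off during compilation...
    modes_during_compile = list(_mode_stack)
    _mode_stack.extend(hidden)           # ...and back on for runtime
    def compiled(*args):
        # At runtime the restored modes see the compiled artifact's ops.
        return fn(*args), list(_mode_stack)
    compiled.modes_during_compile = modes_during_compile
    return compiled

_mode_stack.append(QuietMode())
g = compile_fn(lambda x: x + 1)
result, modes_at_runtime = g(1)
assert g.modes_during_compile == []                             # hidden while compiling
assert any(isinstance(m, QuietMode) for m in modes_at_runtime)  # restored at runtime
assert result == 2
```

The key property: the opted-in mode never sees the ops traced during compilation, but is present again when the compiled artifact actually runs.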

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @Lucaskabela

@pytorch-bot

pytorch-bot bot commented Aug 27, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/161648

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 46106a1 with merge base cd87f30:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

zou3519 added a commit that referenced this pull request Aug 27, 2025

ghstack-source-id: 845b5af
Pull Request resolved: #161648
@zou3519 zou3519 requested review from a team and anijain2305 August 27, 2025 19:15
@zou3519 zou3519 added the release notes: composability release notes category label Aug 27, 2025
@zou3519 zou3519 requested a review from bdhirsh August 27, 2025 19:18
f(x)
The above example will not log anything if
``LoggingMode.ignore_compile_internals`` is True.
Contributor

In this docstring, ignore_compile_internals looks like an attribute, not a class method. Do you want to keep it as a class method? (My first thought would be that an attribute is simpler and easier for a mode to override if they want to, although I'm not sure if ability-to-override is your goal.)

Contributor Author

is_infra_mode() is a method, so I made this a method. I'm open to making it a class attribute though.
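For reference, the two spellings being compared here are both easy for a user-defined mode to override. An illustrative sketch (class names are hypothetical, not the torch source):

```python
# Illustrative sketch of the two options discussed in this thread;
# class names are hypothetical, not the torch source.

class MethodStyleMode:
    @classmethod
    def ignore_compile_internals(cls):
        return False  # mirrors the is_infra_mode() classmethod convention

class AttrStyleMode:
    ignore_compile_internals = False  # a plain class attribute instead

# Either is straightforward to override in a subclass:
class MyMethodMode(MethodStyleMode):
    @classmethod
    def ignore_compile_internals(cls):
        return True

class MyAttrMode(AttrStyleMode):
    ignore_compile_internals = True

assert MyMethodMode.ignore_compile_internals() is True
assert MyAttrMode.ignore_compile_internals is True
```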

Contributor

nvm, I see you override the class method in your test; either seems fine.

zou3519 added a commit that referenced this pull request Aug 27, 2025

ghstack-source-id: 6b1f7cc
Pull Request resolved: #161648
foo(x)

self.assertEqual(len(_checksums), 2)
# If you are getting "3" here, then that means the .abs().sum()
Contributor

Hmm, I'm a bit confused by the test: even without your change, wouldn't the .abs().sum() never get compiled? We only call that in the case where our TorchDispatchMode intercepts the custom op foo, but it doesn't look like the test ever calls mylib.foo in a compiled region.

Contributor Author

I'm just going to delete the comment because I don't know exactly what is going on here. The .abs().sum() isn't being compiled. When Dynamo falls back to eager, it turns out that Dynamo decides to compile each of the ops individually (the mul, sin, cos). This means that it had an opportunity to turn itself back on, but I don't know why or how.

Contributor Author

This was a bug in my implementation (great catch!)

        continue
    if mode.is_infra_mode():
        continue
    return True
Contributor

hmm do you mind explaining what the behavior is when you have multiple modes active, and one has ignore_compile_internals set but the other does not?

Contributor Author

Dynamo will fall back to eager, and all modes will see exactly the same ops that they saw in eager mode.
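The check in the diff above can be sketched as a standalone predicate. This is a sketch under assumptions: `should_fall_back_to_eager` and the `Mode` stand-in are hypothetical names, and the real stack handling is more involved.

```python
def should_fall_back_to_eager(mode_stack):
    """Return True if some active mode must observe every op; in that case
    Dynamo falls back to eager so all modes see the same ops as eager mode."""
    for mode in mode_stack:
        if mode.ignore_compile_internals():
            continue   # this mode is happy to be hidden during compile
        if mode.is_infra_mode():
            continue   # infra modes are handled by the compiler itself
        return True    # a mode wants to see everything: don't compile
    return False

# Tiny stand-in mode for demonstration:
class Mode:
    def __init__(self, ignore=False, infra=False):
        self._ignore, self._infra = ignore, infra
    def ignore_compile_internals(self):
        return self._ignore
    def is_infra_mode(self):
        return self._infra

# One opted-in mode plus one ordinary mode: fall back to eager.
assert should_fall_back_to_eager([Mode(ignore=True), Mode()]) is True
# Only opted-in and infra modes: compilation can proceed.
assert should_fall_back_to_eager([Mode(ignore=True), Mode(infra=True)]) is False
```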

Contributor

@bdhirsh bdhirsh left a comment

left some questions, stamp to unblock

zou3519 added a commit that referenced this pull request Aug 27, 2025

ghstack-source-id: 0a21476
Pull Request resolved: #161648
@zou3519 zou3519 added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 27, 2025
zou3519 added a commit that referenced this pull request Aug 27, 2025

ghstack-source-id: 5752937
Pull Request resolved: #161648
@zou3519 zou3519 added the ci-no-td Do not run TD on this PR label Aug 27, 2025
@zou3519
Contributor Author

zou3519 commented Aug 28, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025

Pull Request resolved: pytorch#161648
Approved by: https://github.com/bdhirsh
@github-actions github-actions bot deleted the gh/zou3519/1194/head branch September 28, 2025 02:15

Labels

ci-no-td (Do not run TD on this PR), ciflow/inductor, ciflow/trunk (Trigger trunk jobs on your pull request), Merged, module: dynamo, release notes: composability (release notes category)
