Unify cache disable and cache bypass paths by ezyang · Pull Request #141685 · pytorch/pytorch

Conversation

@ezyang (Contributor) commented Nov 27, 2024

Stack from ghstack (oldest at bottom):

I was constantly annoyed that we had a separate else branch for when the cache was disabled, distinct from when the cache was bypassed. This diff gets rid of the disabled-cache branch, so we use the same logic for bypass and disable. This change probably didn't matter much for the POC, but I think it's cleaner. (A rough sketch of the before/after control flow is included below.)

Signed-off-by: Edward Z. Yang ezyang@meta.com

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov

[ghstack-poisoned]
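
For readers skimming the thread, here is a minimal sketch of the shape of the change; the function and parameter names are hypothetical stand-ins, not the actual Inductor caching code:

    # Before (sketch): the disabled-cache case had its own else branch with
    # duplicated compile/post-compile logic.
    def compile_before(gm, inputs, cache_enabled, cache_lookup, compile_fn, post_compile):
        if cache_enabled:
            compiled = cache_lookup(gm, inputs)  # returns None on miss/bypass
            if compiled is None:
                compiled = compile_fn(gm, inputs)
            post_compile(compiled, inputs)
        else:
            # Separate branch just for "cache disabled".
            compiled = compile_fn(gm, inputs)
            post_compile(compiled, inputs)
        return compiled

    # After (sketch): "disabled" is treated like a bypass, so there is one path.
    def compile_after(gm, inputs, cache_enabled, cache_lookup, compile_fn, post_compile):
        compiled = cache_lookup(gm, inputs) if cache_enabled else None
        if compiled is None:
            compiled = compile_fn(gm, inputs)
        post_compile(compiled, inputs)
        return compiled

In the "after" shape, a disabled cache simply behaves like a bypassed lookup, so miss, bypass, and disable all fall through to the same compile and post-compile steps.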
@pytorch-bot bot commented Nov 27, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/141685

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 5dcfc5a with merge base 0f261e8:

UNSTABLE - The following jobs failed, but they were likely due to flakiness present on trunk and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Nov 27, 2024
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: b40bf12
Pull Request resolved: #141685
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Nov 27, 2024
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: b3dbf6e
Pull Request resolved: #141685
@ezyang added the topic: not user facing label Nov 27, 2024
@jamesjwu (Contributor) left a comment:

I'm having some trouble following the intention of this one. What does it really mean here to "unify" bypass and miss, if you still have two separate branches for them? I.e., what's the end goal?

# TODO: This is a hack purely to get some info to extract_tensor_metadata_for_cache_key,
# figure out how to not have to modify example inputs
for i, input in enumerate(example_inputs):
    input._is_inductor_static = True  # type: ignore[attr-defined]
Contributor:
How come we can do this unconditionally now, when we couldn't before?

@ezyang (Contributor Author) replied:

It is a behavior change, but I guessed it would be harmless: I audited the use sites and there are no "negative" usages (e.g., nothing explicitly tests that this field is not defined), so it should be harmless to always have this populated. CI seems to agree.
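
To illustrate what a "negative" usage means here (hypothetical code, not the actual Inductor use sites):

    # Hypothetical positive usage: an absent attribute simply means "not static",
    # so always populating the field is benign.
    def is_static_input(t):
        return getattr(t, "_is_inductor_static", False)

    # A hypothetical negative usage, which would break if the field were always set;
    # the audit found none of these:
    #   assert not hasattr(t, "_is_inductor_static")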


local = config.fx_graph_cache
remote = fx_graph_remote_cache
# TODO: Remove this short circuit once types are unified here
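
For context, a hypothetical sketch of the kind of short circuit the TODO refers to, assuming it means "skip the caching machinery entirely when neither a local nor a remote FX graph cache is enabled"; the helper names are invented and this is not the actual code:

    def maybe_load_with_cache(gm, example_inputs, local, remote, compile_fn, cache_lookup_fn):
        # Hypothetical short circuit: with both caches off, compile directly and
        # skip all cache bookkeeping.
        if not (local or remote):
            return compile_fn(gm, example_inputs)
        return cache_lookup_fn(gm, example_inputs)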
Contributor:
Ok wait, isn't the entire point of this refactor to unify this type? Or is there another PR incoming to do that?

@ezyang (Contributor Author) replied:

This gets eliminated in #141695

    # In that case, we don't need to run all post compilation steps, we just need
    # to return the string directly.
    return compiled_graph
compiled_graph.post_compile2(example_inputs, cudagraphs, gm)
@ezyang (Contributor Author):

Specifically, this else branch got deleted

@ezyang requested a review from jamesjwu November 27, 2024 21:02
ezyang added a commit that referenced this pull request Nov 27, 2024
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 870ad95
Pull Request resolved: #141685
[ghstack-poisoned]
@ezyang (Contributor Author) commented Nov 28, 2024

@pytorchbot merge -f "unrelated failures"

@ezyang requested a review from jansel November 28, 2024 04:44
@ezyang (Contributor Author) commented Nov 28, 2024

still needs review

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge while ignoring current failures. This allows currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot (Collaborator) commented:

Merge failed

Reason: PR #141685 has not been reviewed yet

Details for Dev Infra team: raised by workflow job.

Failing merge rule: Core Maintainers

pytorchmergebot pushed a commit that referenced this pull request Nov 29, 2024
Stacked on top of #141685

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: #141688
Approved by: https://github.com/Skylion007, https://github.com/jansel
ghstack dependencies: #141681, #141683, #141685
pytorchmergebot pushed a commit that referenced this pull request Nov 29, 2024
Stacked on #141688

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: #141689
Approved by: https://github.com/jansel
ghstack dependencies: #141681, #141683, #141685, #141688
pytorchmergebot pushed a commit that referenced this pull request Nov 29, 2024
Stacked on #141689

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: #141691
Approved by: https://github.com/jansel
ghstack dependencies: #141681, #141683, #141685, #141688, #141689
pytorchmergebot pushed a commit that referenced this pull request Nov 30, 2024
Stacked on #141691

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: #141695
Approved by: https://github.com/aorenste
ghstack dependencies: #141681, #141683, #141685, #141688, #141689, #141691
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
I was constantly annoyed at the fact that we had a separate else branch for when cache was disabled which was distinct from when cache was bypassed. This diff gets rid of the disabled cache branch, so we use the same logic for bypass/disable. I actually think this change probably didn't actually matter much for the POC but I think it's cleaner.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: pytorch#141685
Approved by: https://github.com/aorenste
ghstack dependencies: pytorch#141681, pytorch#141683
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
Stacked on top of pytorch#141685

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: pytorch#141688
Approved by: https://github.com/Skylion007, https://github.com/jansel
ghstack dependencies: pytorch#141681, pytorch#141683, pytorch#141685
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
Stacked on pytorch#141688

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: pytorch#141689
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#141681, pytorch#141683, pytorch#141685, pytorch#141688
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
Stacked on pytorch#141689

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: pytorch#141691
Approved by: https://github.com/jansel
ghstack dependencies: pytorch#141681, pytorch#141683, pytorch#141685, pytorch#141688, pytorch#141689
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024
Stacked on pytorch#141691

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: pytorch#141695
Approved by: https://github.com/aorenste
ghstack dependencies: pytorch#141681, pytorch#141683, pytorch#141685, pytorch#141688, pytorch#141689, pytorch#141691
@github-actions bot deleted the gh/ezyang/3025/head branch December 30, 2024 02:08
desai0007 pushed a commit to desai0007/test-repo-pytorch that referenced this pull request Feb 26, 2025
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 2b93a01
Pull Request resolved: pytorch/pytorch#141685