[Quantization] Add "quantization_tag" as metadata to fx proxy #108764
Conversation
Summary: In order to ensure that "quantization_tag" is preserved through second-stage export, this PR adds it as special metadata that should be preserved. Since quantization in the export path works on top of the pre-dispatch graph, subsequent post-dispatch op decomposition will decompose the ops that the quant workflow tagged. To keep the patterns identified by the quantizer identifiable even after decompositions are applied, we must preserve "quantization_tag". This enables backend delegates that quantized a model for a specific backend to identify the "quantized" patterns. Test Plan: metadata porting tests.
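The porting mechanism described above can be illustrated with a minimal sketch. This is plain Python standing in for FX nodes, and the decomposition (`aten.adaptive_avg_pool2d` into `aten.mean` + `aten.view`) is invented for illustration; the point is only that every replacement node inherits `quantization_tag` from the node it came from, so the tag survives post-dispatch decomposition:

```python
# Hypothetical sketch: FX-like nodes carry a meta dict; a decomposition
# pass copies "quantization_tag" from the original node to its replacements.
from dataclasses import dataclass, field


@dataclass
class Node:
    op: str
    target: str
    meta: dict = field(default_factory=dict)


def decompose(node):
    """Toy decomposition: split one tagged op into two primitive ops,
    porting the quantization_tag so the pattern stays identifiable."""
    if node.target != "aten.adaptive_avg_pool2d":
        return [node]
    replacements = [Node("call_function", "aten.mean"),
                    Node("call_function", "aten.view")]
    tag = node.meta.get("quantization_tag")
    if tag is not None:
        for r in replacements:
            r.meta["quantization_tag"] = tag  # preserve the tag
    return replacements


pool = Node("call_function", "aten.adaptive_avg_pool2d",
            meta={"quantization_tag": "BackendA_adaptive_avg_pool2d_0"})
decomposed = decompose(pool)
tags = [n.meta["quantization_tag"] for n in decomposed]
```

Without the copy step, the decomposed `mean`/`view` nodes would carry no trace of the quantized pattern they came from, which is exactly the failure mode this PR guards against.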
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/108764
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit a2c8f01 with merge base 53acdb6. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
ghstack-source-id: 8aaae92. Pull Request resolved: #108764
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
I have a potentially dumb question: how is this different from using source_fn to do pattern matching?
source_fn-based pattern matching doesn't allow for arbitrary patterns: it is tied to nn.Module or nn.functional (functional actually doesn't work), so if you have fusion patterns it doesn't work. Plus, this specific tag enables the quant workflow to port metadata for nodes that quantization itself adds.
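In other words, once nodes are tagged, recovering a pattern no longer depends on which module or functional the nodes originated from. A minimal sketch of what a backend delegate could do (plain dicts standing in for FX nodes; names invented for illustration):

```python
# Hypothetical sketch: a backend delegate recovers "quantized" partitions by
# grouping nodes on their quantization_tag, regardless of which nn.Module or
# functional (or fusion) the nodes originally came from.
from collections import defaultdict


def partition_by_tag(nodes):
    """Group tagged nodes into partitions keyed by quantization_tag."""
    partitions = defaultdict(list)
    for node in nodes:
        tag = node.get("quantization_tag")
        if tag is not None:
            partitions[tag].append(node["name"])
    return dict(partitions)


nodes = [
    {"name": "mean", "quantization_tag": "BackendA_adaptive_avg_pool2d_0"},
    {"name": "view", "quantization_tag": "BackendA_adaptive_avg_pool2d_0"},
    {"name": "relu"},  # untagged: not part of a quantized pattern
]
parts = partition_by_tag(nodes)
```

Each partition can then be handed to the backend for lowering as one "quantized" unit, even after decomposition has split the original op into several primitive nodes.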
The review comment below refers to this expected-tags mapping in the test diff:

```python
from_node_to_tags = {
    torch.ops.aten.adaptive_avg_pool2d.default: "BackendA_adaptive_avg_pool2d_0",
```
should we test metadata on get_attr nodes?
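A check along the lines of that question could look like the following sketch (plain Python stand-ins for FX nodes; the pass and node shapes are invented for illustration, not the PR's actual test code): a tagging pass should cover get_attr nodes, such as quantization scale constants, and not only call_function nodes.

```python
# Hypothetical sketch: verify a tagging pass also tags get_attr nodes
# (e.g. quantization scale/zero-point attributes), not just call_function.
def tag_pattern(nodes, tag):
    """Attach quantization_tag to every call_function and get_attr node."""
    for n in nodes:
        if n["op"] in ("call_function", "get_attr"):
            n.setdefault("meta", {})["quantization_tag"] = tag
    return nodes


pattern = [
    {"op": "get_attr", "target": "_scale_0"},
    {"op": "call_function", "target": "quantize_per_tensor"},
    {"op": "placeholder", "target": "x"},  # inputs stay untagged
]
tag_pattern(pattern, "BackendA_quant_0")
tagged = [n["target"] for n in pattern if "meta" in n]
```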
ghstack-source-id: 90ee800. Pull Request resolved: #108764
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#108764. Approved by: https://github.com/tugsbayasgalan, https://github.com/jerryzh168. Differential Revision: [D49056259](https://our.internmc.facebook.com/intern/diff/D49056259)
Stack from ghstack (oldest at bottom):

Summary: as above, this PR preserves "quantization_tag" as special node metadata through second-stage export and post-dispatch op decomposition, so that quantizer-identified patterns remain identifiable and backend delegates can find the "quantized" patterns.

Test Plan: metadata porting tests

Differential Revision: D49056259