[quant][pt2][be] Remove add/relu from conv-bn QAT pattern #113006
Conversation
🔗 Helpful Links: 🧪 see artifacts and rendered test results at hud.pytorch.org/pr/113006. Note: links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure.) As of commit b5e3cbe with merge base e1c872e: FLAKY, the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks for the simplification. This also fixes one of the issues reported in #112833.
LG, thanks!
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: Command. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
Summary: This commit significantly simplifies the QAT fusion code for the `conv-bn` pattern by removing the add and relu nodes from the match and replacement patterns. This does not reduce functionality: patterns like `conv-bn-relu`, `conv-bn-add`, and `conv-bn-add-relu` are still supported. We simply no longer match these extra nodes, since there is no need to replace them.
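This works because the subgraph rewriter reconnects external users of the matched subgraph to the replacement's output, so a trailing add or relu simply keeps consuming the new output. Here is a minimal sketch using `torch.fx`'s subgraph rewriter to show the mechanism; the trivial `w * 1.0` replacement is a stand-in for the real fake-quantized conv-bn replacement, not the actual QAT fusion code:

```python
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.subgraph_rewriter import replace_pattern

class M(torch.nn.Module):
    def forward(self, x, w):
        x = F.conv2d(x, w)
        return F.relu(x)  # trailing op, deliberately NOT in the pattern

def pattern(x, w):
    return F.conv2d(x, w)

def replacement(x, w):
    # Stand-in for the real replacement (a fake-quantized conv).
    return F.conv2d(x, w * 1.0)

gm = symbolic_trace(M())
replace_pattern(gm, pattern, replacement)
# The relu that consumed the old conv output now consumes the
# replacement's output; it survived without ever being matched.
print(gm.code)
```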
This has the additional benefit of reducing the number of patterns being matched by 16x, since each add and relu variant of the `conv-bn` pattern also has an in-place variant. This also enables more flexible `conv-bn` pattern matching in the future and keeps the pattern count scalable.
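For intuition on the 16x figure, here is a hypothetical back-of-the-envelope sketch (the flag names are made up for illustration, not taken from the PyTorch source): pattern generation enumerates every combination of four independent boolean choices, all of which collapse to the single base `conv-bn` pattern after this change.

```python
from itertools import product

# Four independent choices, each doubling the variant count: whether an
# add is present, whether it is in-place (add_ vs add), whether a relu
# is present, and whether it is in-place (relu_ vs relu).
FLAGS = {
    "has_add": (False, True),
    "add_in_place": (False, True),
    "has_relu": (False, True),
    "relu_in_place": (False, True),
}
variants = list(product(*FLAGS.values()))
print(len(variants))  # 2**4 = 16 patterns before; 1 pattern after
```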
One important change needed in this commit was to remove the match filter that required the input and output activations to be quantized. This was necessary because otherwise we would always expect q-dq (quantize-dequantize) nodes immediately after the getitem node, instead of after the add or relu nodes, for example. A side benefit is that QAT fusion stays flexible enough to support weight-only quantization.
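For context, filters of this kind are the callables accepted by `torch.fx.subgraph_rewriter.replace_pattern_with_filters`. The sketch below is hypothetical (names like `_is_quantize` and the exact op check are illustrative, not the deleted code); it shows why such a filter breaks once add/relu are no longer matched: in `conv-bn-relu`, the quantize node follows the relu, not the matched conv-bn output, so the bare match would always be rejected.

```python
from torch.fx import Graph
from torch.fx.passes.utils.matcher_utils import InternalMatch

def _is_quantize(node) -> bool:
    # Illustrative check for a quantize_per_tensor call.
    return node.op == "call_function" and "quantize_per_tensor" in str(node.target)

def _output_is_quantized(match: InternalMatch, original_graph: Graph,
                         pattern_graph: Graph) -> bool:
    # Reject the match unless every matched output feeds a quantize op.
    # With conv-bn-relu, the q-dq pair sits after the relu instead, so a
    # bare conv-bn match would be (wrongly) rejected by this filter.
    return all(
        any(_is_quantize(user) for user in node.users)
        for node in match.returning_nodes
    )
```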
Test Plan:
python test/test_quantization.py TestQuantizePT2EQAT
Reviewers: jerryzh168, kimishpatel
Subscribers: jerryzh168, kimishpatel