Fix broken Llama4 accuracy in MoE part #40609
Conversation
Referenced in a downstream commit:
… failures The test_modeling_llama_min_latency.py::test_llama_allclose_to_hf tests are failing with the latest HF transformers due to a bug in their code. A PR has been submitted to fix it in the upstream repo: huggingface/transformers#40609. Until we upgrade to a new HF transformers version containing the fix, we will monkey patch HF transformers to make these tests pass again. Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
cc @ArthurZucker!
Confirmed with the given repro that this issue also affects Llama4 CPU execution on transformers v4.54, v4.55, and v4.56. The output is accurate on v4.53.
@ArthurZucker Could you review this? Thanks!
Llama4 accuracy is broken by a bug in huggingface#39501, which forgot to transpose the router_scores before applying them to routed_in, causing Llama4 to generate garbage output. This PR fixes that issue by adding back the transpose() and adding comments explaining why the transpose() is needed. Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
Force-pushed from 004dd11 to 4bebe5a.
Indeed, this got lost! Thanks a lot for the fix!!
[For maintainers] Suggested jobs to run (before merge): run-slow: llama4
* Fix broken Llama4 accuracy in MoE part

Llama4 accuracy is broken by a bug in #39501, which forgot to transpose the router_scores before applying them to routed_in, causing Llama4 to generate garbage output. This PR fixes that issue by adding back the transpose() and adding comments explaining why the transpose() is needed.

Signed-off-by: Po-Han Huang <pohanh@nvidia.com>

* remove comment

---------

Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
Llama4 accuracy is broken by a bug in #39501, which forgot to transpose the router_scores before applying them to routed_in, causing Llama4 to generate garbage output. This PR fixes that issue by adding back the transpose() and adding comments explaining why the transpose() is needed.
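For illustration only, here is a minimal, self-contained PyTorch sketch of the shape mismatch. It is not the actual Llama4 MoE code in transformers; the tensor names and sizes are assumptions chosen to show why flattening router_scores without the transpose pairs each token copy with the wrong expert's score:

```python
import torch

# Hypothetical sizes, just for illustration.
num_tokens, num_experts, hidden = 3, 4, 2
hidden_states = torch.randn(num_tokens, hidden)
router_scores = torch.rand(num_tokens, num_experts)  # (tokens, experts)

# routed_in tiles the tokens once per expert, giving an expert-major layout
# of shape (num_experts * num_tokens, hidden): block e holds all tokens for expert e.
routed_in = hidden_states.repeat(num_experts, 1)

# Buggy scaling: flattening (tokens, experts) row-major walks experts fastest,
# so each token copy is multiplied by the wrong expert's score.
buggy = routed_in * router_scores.reshape(-1, 1)

# Fixed scaling: transpose to (experts, tokens) first so the flattened scores
# follow the same expert-major order as routed_in.
fixed = routed_in * router_scores.transpose(0, 1).reshape(-1, 1)

# Reference built directly per (expert, token) pair.
expected = torch.cat(
    [hidden_states * router_scores[:, e : e + 1] for e in range(num_experts)]
)
assert torch.allclose(fixed, expected)
assert not torch.allclose(buggy, expected)
```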
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker please review since #39501 was made by you. Thanks!
Accuracy tests
Test script:
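The actual test script is not included in this capture. As a rough, hypothetical sketch of the kind of greedy-decoding spot check it could correspond to (the model ID, prompt, and generation settings below are assumptions, not the script used for the numbers in this PR):

```python
import torch
from transformers import AutoTokenizer, Llama4ForConditionalGeneration

# Hypothetical checkpoint and prompt; the real script and settings are not shown here.
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```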
Before the fix on H200:
After the fix on H200: