[None][fix] Fix Qwen3 FP8 per-tensor when requesting TRTLLM-GEN MoE backend by achartier · Pull Request #8075 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@achartier
Collaborator

@achartier achartier commented Sep 29, 2025

Summary by CodeRabbit

  • Refactor
    • Mixture-of-Experts backend selection is now class-based instead of string-based, preserving default behavior.
    • Constructor for the gating module now accepts a backend class; update any custom usages accordingly.
    • Backend resolution helper simplified: no longer requires routing method or dtype inputs.
    • Model wiring updated to use the new class-based backend resolution.

Description

FP8 per-tensor is not supported by the TRTLLM backend, so get_moe_cls selects the CUTLASS backend instead. The Qwen3Gate routing method therefore needs to check the actual type of the MoE backend rather than the requested one.

This PR also removes a couple of unused arguments from get_moe_cls to avoid a circular dependency between creating Qwen3Gate and the MoE backend.
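
To illustrate the failure mode, here is a minimal, self-contained sketch (hypothetical simplified signatures; the real get_moe_cls takes a model_config):

```python
# Hypothetical, simplified sketch of the fallback described above.
class CutlassFusedMoE: ...
class TRTLLMGenFusedMoE: ...

def get_moe_cls(requested_backend: str, fp8_per_tensor: bool) -> type:
    # TRTLLM-GEN does not support FP8 per-tensor, so fall back to CUTLASS.
    if requested_backend == "TRTLLM" and not fp8_per_tensor:
        return TRTLLMGenFusedMoE
    return CutlassFusedMoE

moe_cls = get_moe_cls("TRTLLM", fp8_per_tensor=True)
# Branching on the requested string would wrongly take the TRTLLM path here;
# branching on the resolved class takes the correct CUTLASS path.
assert moe_cls is CutlassFusedMoE
```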

Test Coverage

Existing Qwen3 tests.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
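
For example, a hypothetical invocation that runs only a single test stage with fail-fast disabled (the stage name is illustrative) looks like:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast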

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

@achartier
Collaborator Author

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Sep 29, 2025

📝 Walkthrough

Walkthrough

Replaces string-based MoE backend selection with class-based injection across Qwen3 MoE components. Updates Qwen3Gate constructor to accept moe_backend_cls and stores it. Qwen3MoE now obtains the backend via get_moe_cls(model_config). The get_moe_cls API is simplified by removing routing_method and dtype parameters.

Changes

Cohort / File(s) Summary
Qwen3 MoE gating API refactor
tensorrt_llm/_torch/models/modeling_qwen3_moe.py
Qwen3Gate init signature changed to take moe_backend_cls: Type[MoE] (default CutlassFusedMoE) instead of a string. Internal references updated to use the class. Qwen3MoE now constructs Qwen3Gate with moe_backend_cls=get_moe_cls(model_config) and aligns routing/output dtype logic to class-based backend.
MoE backend factory
tensorrt_llm/_torch/modules/fused_moe/create_moe.py
get_moe_cls signature simplified to (model_config, override_quant_config=None) -> Type[MoE]; removed routing_method and dtype parameters. Call sites updated accordingly.
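
A sketch of the simplified factory signature described in the summary above (ModelConfig, QuantConfig, and MoE are the library's types as I understand them; bodies elided):

```python
from typing import Optional, Type

def get_moe_cls(
    model_config: "ModelConfig",
    override_quant_config: Optional["QuantConfig"] = None,
) -> Type["MoE"]:
    """Resolve the fused-MoE backend class from the config alone;
    the unused routing_method and dtype parameters have been removed.
    """
    ...
```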

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant User
  participant Qwen3MoE
  participant Factory as get_moe_cls
  participant Gate as Qwen3Gate
  participant Backend as MoE Backend (Class)

  User->>Qwen3MoE: initialize(model_config, ...)
  Qwen3MoE->>Factory: get_moe_cls(model_config)
  Factory-->>Qwen3MoE: Backend class (e.g., CutlassFusedMoE)
  Qwen3MoE->>Gate: new Qwen3Gate(..., moe_backend_cls=Backend)
  Note right of Gate: Stores moe_backend_cls and routes using class-based backend

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Title Check ✅ Passed The PR title "[None][fix] Fix Qwen3 FP8 per-tensor when requesting TRTLLM-GEN MoE backend" is clearly related to the main change in the changeset. The changeset modifies how Qwen3Gate determines which MoE backend to use, switching from a string-based approach to a class-based approach to properly handle cases where FP8 per-tensor quantization causes the backend to fall back from TRTLLM to CUTLASS. The title accurately describes the specific issue being fixed (Qwen3 FP8 per-tensor with TRTLLM-GEN MoE backend) and follows the required format with a valid ticket identifier ([None]) and type ([fix]).
Description Check ✅ Passed The PR description follows the template structure and includes all required sections. The Description section clearly explains both the problem (FP8 per-tensor not supported in TRTLLM backend causing fallback to CUTLASS, and the need for Qwen3Gate to check actual backend type) and the solution (switching to class-based backend selection and removing unused arguments). The Test Coverage section references existing Qwen3 tests, and the PR Checklist is present and marked as reviewed. While the description could be slightly more detailed about the technical implementation, it adequately covers the what and why of the changes.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 334e2ca and d4ae9a4.

📒 Files selected for processing (2)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py (5 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
🧬 Code graph analysis (1)
tensorrt_llm/_torch/models/modeling_qwen3_moe.py (5)
tensorrt_llm/_torch/modules/fused_moe/routing.py (10)
  • BaseMoeRoutingMethod (158-181)
  • RenormalizeMoeRoutingMethod (230-262)
  • RenormalizeNaiveMoeRoutingMethod (405-420)
  • RoutingMethodType (143-155)
  • routing_method_type (180-181)
  • routing_method_type (213-214)
  • routing_method_type (226-227)
  • routing_method_type (261-262)
  • routing_method_type (281-282)
  • routing_method_type (419-420)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • CutlassFusedMoE (25-603)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • TRTLLMGenFusedMoE (22-513)
tensorrt_llm/_torch/modules/fused_moe/create_moe.py (2)
  • create_moe (59-208)
  • get_moe_cls (22-56)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • MoE (101-347)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tensorrt_llm/_torch/modules/fused_moe/create_moe.py (2)

22-24: LGTM - Signature simplification aligns with PR objectives.

The removal of routing_method and dtype parameters is correct. These parameters were never used in the function body, which only relies on model_config.moe_backend and quant_config for backend selection. This simplification avoids the circular dependency mentioned in the PR description between creating Qwen3Gate and the MoE backend.


77-77: LGTM - Call site correctly updated.

The call to get_moe_cls correctly passes only model_config and override_quant_config, matching the updated signature.

tensorrt_llm/_torch/models/modeling_qwen3_moe.py (5)

2-2: LGTM - Imports correctly updated for class-based backend selection.

The added imports (Type, get_moe_cls, MoE, TRTLLMGenFusedMoE, CutlassFusedMoE) are all necessary for the class-based backend selection approach implemented in this PR.

Also applies to: 18-23


43-47: LGTM - Class-based backend selection implemented correctly.

The change from moe_backend: str to moe_backend_cls: Type[MoE] with default CutlassFusedMoE aligns with the PR objective. This allows Qwen3Gate to check the actual backend type selected by get_moe_cls rather than the originally requested string, which is critical for handling FP8 per-tensor cases where CUTLASS is selected instead of TRTLLM.

Note: This is a breaking API change for any external callers of Qwen3Gate.
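
A minimal sketch of the new constructor shape (simplified; the real signature carries additional routing parameters, and the stub classes stand in for the library's types):

```python
import torch
from typing import Type

class MoE: ...                   # stand-in for tensorrt_llm's MoE interface
class CutlassFusedMoE(MoE): ...  # stand-in for the CUTLASS backend class

class Qwen3Gate(torch.nn.Module):
    def __init__(self, hidden_size: int, num_experts: int,
                 moe_backend_cls: Type[MoE] = CutlassFusedMoE):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.empty(num_experts, hidden_size))
        # Store the resolved backend class rather than a requested-backend string.
        self.moe_backend_cls = moe_backend_cls
```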


68-79: Critical fix - Correctly checks actual backend type instead of requested backend.

This change fixes the core issue described in the PR title. Previously, when FP8 per-tensor was requested with TRTLLM backend, get_moe_cls would return CutlassFusedMoE, but the routing method would still check the string moe_backend == "TRTLLM" and incorrectly use bfloat16. Now it correctly checks self.moe_backend_cls == TRTLLMGenFusedMoE to determine the output dtype based on the actual backend class selected.
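
Paraphrasing the corrected check as a sketch (the dtype mapping is taken from this comment, not verified against the source):

```python
import torch

class TRTLLMGenFusedMoE: ...  # stand-in for the library's TRTLLM-GEN backend

def routing_output_dtype(moe_backend_cls: type) -> torch.dtype:
    # Per the comment above: the TRTLLM-GEN path uses bfloat16 routing outputs,
    # while a fallback backend such as CutlassFusedMoE uses float32.
    return torch.bfloat16 if moe_backend_cls is TRTLLMGenFusedMoE else torch.float32
```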


53-54: Technical debt: FIXME comment indicates unresolved out_dtype issue.

The FIXME comment on line 53 indicates that conditional out_dtype selection based on moe_backend_cls doesn't work as expected. The commented-out logic suggests the intent was to use torch.float32 for TRTLLMGenFusedMoE, but this is currently disabled. Consider investigating and resolving this issue in a future PR.


109-109: LGTM - Correctly wires class-based backend selection.

The instantiation correctly passes moe_backend_cls=get_moe_cls(model_config) to Qwen3Gate, enabling it to check the actual backend class selected (e.g., CutlassFusedMoE when FP8 per-tensor is not supported by TRTLLM) rather than the originally requested string.
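
As a simplified usage sketch, reusing the hypothetical names from the sketches above (the real Qwen3MoE call passes the full model configuration):

```python
# Hypothetical wiring, mirroring the summary above; stubs keep this runnable.
class ModelConfig: moe_backend = "TRTLLM"
model_config = ModelConfig()
moe_cls = get_moe_cls(model_config)  # may resolve to CutlassFusedMoE
gate = Qwen3Gate(hidden_size=4096, num_experts=128, moe_backend_cls=moe_cls)
```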



@tensorrt-cicd
Collaborator

PR_Github #20295 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20295 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15306 completed with status: 'FAILURE'

Collaborator

@byshiue byshiue left a comment


LGTM

@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20327 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20327 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15331 completed with status: 'FAILURE'

@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20383 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20383 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15378 completed with status: 'FAILURE'

@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20406 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20406 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15397 completed with status: 'FAILURE'

@achartier achartier force-pushed the qwen3_fp8_per_tensor branch from fbd91ba to 12974b3 on October 1, 2025 01:28
@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20433 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20433 [ run ] completed with state DISABLED
L0 testing is limited to prioritized users. User achartier is not in the prioritized list. L0 testing cannot be triggered.

@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20467 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20467 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15433 completed with status: 'FAILURE'

@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20491 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20491 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15456 completed with status: 'FAILURE'

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
@achartier achartier force-pushed the qwen3_fp8_per_tensor branch from 12974b3 to 02d30db on October 2, 2025 18:37
@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20550 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20550 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15507 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@achartier achartier merged commit 9db4366 into NVIDIA:main Oct 3, 2025
5 checks passed
evezhier pushed a commit to evezhier/TensorRT-LLM that referenced this pull request Oct 3, 2025
…ackend (NVIDIA#8075)

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
@achartier achartier deleted the qwen3_fp8_per_tensor branch October 23, 2025 22:23