[None][fix] Cherry-pick 6850: Complete the last missing allreduce op in Llama3/4. by hyukn · Pull Request #7420 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@hyukn (Collaborator) commented Sep 1, 2025

Cherry-pick commits in #6850.

Summary by CodeRabbit

  • Bug Fixes

    • Improved decoder layer behavior when no subsequent layer norm is present, ensuring correct post-fusion handling.
    • Harmonized all-reduce across quantization modes (including NVFP4/FP8) for more stable and consistent outputs.
  • Documentation

    • Updated README release badge to 1.1.0rc2.post1.
  • Chores

    • Bumped package version to 1.1.0rc2.post1.
    • Updated example dependency constraints to the new post-release version.

@hyukn hyukn requested a review from litaotju September 1, 2025 05:33
@hyukn hyukn requested review from a team as code owners September 1, 2025 05:33
@hyukn hyukn changed the base branch from main to release/1.1.0rc2 September 1, 2025 05:36
@hyukn hyukn removed request for a team, QiJune, mikeiovine and nv-guomingz September 1, 2025 05:36
@hyukn hyukn force-pushed the fix/chery_pick_6850 branch from e5f1825 to 570930f on September 1, 2025 05:38
@hyukn (Collaborator, Author) commented Sep 1, 2025

/bot run --disable-fail-fast

@coderabbitai bot (Contributor) commented Sep 1, 2025

Caution

Review failed

Failed to post review comments.

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between efaefca and e5f1825.

📒 Files selected for processing (4)
  • README.md (1 hunks)
  • examples/constraints.txt (1 hunks)
  • tensorrt_llm/_torch/models/modeling_llama.py (3 hunks)
  • tensorrt_llm/version.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}: Use spaces only; no tabs; indent with 4 spaces
Prepend NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tensorrt_llm/version.py
  • tensorrt_llm/_torch/models/modeling_llama.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Indent Python with 4 spaces; no tabs
Preserve module namespaces when importing: from package.subpackage import foo; then call foo.SomeClass() instead of importing the class directly
Python naming: files snake_case; classes PascalCase; functions/methods snake_case; locals snake_case (prefix k_ when starting with a number); globals UPPER_SNAKE_CASE with G_ prefix; constants UPPER_SNAKE_CASE
Avoid shadowing outer-scope variables; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; limit comments to function-internal or file-local interfaces
Use Google-style docstrings for classes and functions; document attributes/variables inline so Sphinx can render them
Avoid reflection when simpler alternatives exist; prefer explicit parameters and return dicts over locals()/dynamic tricks
In try/except, catch the narrowest exceptions possible; keep try bodies minimal and use else for the main logic when doing duck-typing checks

Files:

  • tensorrt_llm/version.py
  • tensorrt_llm/_torch/models/modeling_llama.py
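As a purely hypothetical illustration of two of the Python guidelines above (namespace-preserving imports, and narrow try/except with the main logic in the else branch), using only the standard library; none of this is TensorRT-LLM code:

```python
from os import path  # preserve the module namespace, per the guideline


def resolve_config_path(config: dict) -> str:
    """Return an absolute config path (Google-style docstring).

    Args:
        config: Settings dict that may contain a "config_dir" entry.

    Returns:
        Absolute path to config.yaml under the configured directory.
    """
    try:
        config_dir = config["config_dir"]  # keep the try body minimal
    except KeyError:  # catch the narrowest exception possible
        config_dir = "."
    else:
        config_dir = str(config_dir)  # main logic in else, per the guideline
    # Call through the module namespace instead of importing join directly.
    return path.abspath(path.join(config_dir, "config.yaml"))
```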
🧠 Learnings (7)
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
PR: NVIDIA/TensorRT-LLM#7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.

Applied to files:

  • README.md
  • tensorrt_llm/version.py
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
PR: NVIDIA/TensorRT-LLM#7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.

Applied to files:

  • README.md
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • README.md
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • README.md
  • examples/constraints.txt
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
PR: NVIDIA/TensorRT-LLM#6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.

Applied to files:

  • README.md
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • README.md
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.

Applied to files:

  • README.md
🧬 Code graph analysis (1)
tensorrt_llm/_torch/models/modeling_llama.py (4)
tensorrt_llm/functional.py (2)
  • AllReduceParams (3900-3939)
  • AllReduceFusionOp (3888-3897)
cpp/tensorrt_llm/kernels/customAllReduceKernels.h (1)
  • AllReduceFusionOp (69-171)
cpp/tensorrt_llm/thop/allreduceOp.cpp (2)
  • moe_allreduce (1039-1084)
  • moe_allreduce (1039-1042)
tensorrt_llm/_torch/utils.py (1)
  • Fp4QuantizedTensor (97-104)
📝 Walkthrough

Walkthrough

The version is bumped to 1.1.0rc2.post1 across version.py, the README badge, and the examples constraints. In Llama4DecoderLayer.forward, post-MoE/post-MLP fusion handling is modified: a pure all_reduce path is added when there is no next_layer_layernorm, and scale and fusion_op selection is adjusted when next_layer_layernorm exists, preserving cutlass_min_latency_mode behavior.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Version bump artifacts: README.md, examples/constraints.txt, tensorrt_llm/version.py | Updated release badge and dependency constraint to 1.1.0rc2.post1; set version to "1.1.0rc2.post1". |
| Llama decoder fusion/all-reduce logic: tensorrt_llm/_torch/models/modeling_llama.py | In Llama4DecoderLayer.forward: added pure all_reduce (fusion_op=None) when no next_layer_layernorm for POST_MOE_FUSION/POST_MLP_FUSION; when next_layer_layernorm exists, adjusted scale derivation (NVFP4/FP8 qkv input scale) and fusion_op selection; preserved moe_allreduce vs all_reduce under cutlass_min_latency_mode; minor formatting/comments. |
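A minimal sketch of this branch logic, written as a free-standing function. Every name here (next_layer_layernorm, qkv_proj.input_scale, the all_reduce callable, and so on) is a placeholder inferred from the summary; the real logic lives inside Llama4DecoderLayer.forward in modeling_llama.py and may differ in detail:

```python
from typing import Any, Callable, Optional


def post_ffn_allreduce(
    hidden_states: Any,
    residual: Any,
    next_layer_layernorm: Optional[Any],
    next_attn: Optional[Any],
    is_nvfp4_or_fp8: bool,
    all_reduce: Callable[..., Any],
    residual_rms_norm_op: Any,
):
    """Sketch of the POST_MOE_FUSION / POST_MLP_FUSION all-reduce handling."""
    if next_layer_layernorm is None:
        # Last decoder layer: there is no norm to fuse into, so issue a
        # plain tensor-parallel all-reduce (the previously missing op).
        return all_reduce(hidden_states, fusion_op=None), residual
    # A following layernorm exists: fuse residual-add + RMSNorm into the
    # all-reduce kernel; for NVFP4/FP8, also quantize the output using the
    # next attention block's QKV input scale.
    scale = None
    if is_nvfp4_or_fp8 and next_attn is not None:
        scale = next_attn.qkv_proj.input_scale
    # Assumed to return the fused (normed hidden, updated residual) pair.
    return all_reduce(
        hidden_states,
        fusion_op=residual_rms_norm_op,
        residual=residual,
        norm_weight=next_layer_layernorm.weight,
        eps=next_layer_layernorm.variance_epsilon,
        scale=scale,
    )
```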

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Input as Hidden State
  participant Layer as Llama4DecoderLayer
  participant AR as TensorParallel AllReduce
  participant LN as Next LayerNorm (optional)

  Note over Layer: POST_MOE_FUSION / POST_MLP_FUSION branch

  Input->>Layer: forward(...)
  alt next_layer_layernorm is None
    Layer->>AR: all_reduce(fusion_op=None, norm_weight=None, scale=None)
    AR-->>Layer: reduced hidden
    Layer-->>Input: return reduced hidden
  else next_layer_layernorm exists
    Note over Layer: Determine scale\n- if NVFP4/FP8 and next_attn: use qkv_proj.input_scale\n- else: scale=None
    Layer->>AR: all_reduce(fusion_op=post_*_fusion_op, norm_weight=LN.weight, eps, scale)
    AR-->>Layer: reduced+fused output
    Layer->>LN: (implicit in fusion_op behavior)
    LN-->>Layer: output (if applicable)
    Layer-->>Input: return output
  end

  opt cutlass_min_latency_mode and MOE
    Note over Layer: Use moe_allreduce instead of generic all_reduce
  end
```
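The min-latency branch in the opt block above amounts to a simple dispatch. A hedged sketch, with moe_allreduce and all_reduce passed in as callables since the real ops are custom kernels bound in cpp/tensorrt_llm/thop/allreduceOp.cpp; the parameter names are placeholders:

```python
from typing import Any, Callable


def reduce_ffn_output(
    hidden_states: Any,
    residual: Any,
    cutlass_min_latency_mode: bool,
    moe_allreduce: Callable[..., Any],
    all_reduce: Callable[..., Any],
) -> Any:
    # Hypothetical dispatch: in min-latency mode the MoE-specialized kernel
    # fuses expert-output combination with the cross-GPU reduction; otherwise
    # the generic (optionally norm-fused) all-reduce path is taken.
    if cutlass_min_latency_mode:
        return moe_allreduce(hidden_states, residual=residual)
    return all_reduce(hidden_states, residual=residual)
```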

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • litaotju
  • mikeiovine
  • nv-yilinf


@tensorrt-cicd (Collaborator)

PR_Github #17160 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17160 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #3 completed with status: 'FAILURE'

@hyukn hyukn force-pushed the fix/chery_pick_6850 branch from 570930f to ae8e7eb on September 2, 2025 05:52
@hyukn (Collaborator, Author) commented Sep 2, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #17308 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17308 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #14 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 3, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #17468 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17468 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #38 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 4, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #17584 [ run ] triggered by Bot

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
@hyukn hyukn force-pushed the fix/chery_pick_6850 branch from ae8e7eb to 04279b0 on September 4, 2025 02:27
@hyukn (Collaborator, Author) commented Sep 4, 2025

/bot run

@hyukn (Collaborator, Author) commented Sep 4, 2025

/bot kill

@tensorrt-cicd (Collaborator)

PR_Github #17607 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17584 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #17607 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit 04279b0

@hyukn (Collaborator, Author) commented Sep 4, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17610 [ run ] triggered by Bot

@litaotju litaotju added the Release Blocker label (PRs blocking the final release build or branching out of the release branch) Sep 4, 2025
@litaotju litaotju enabled auto-merge (squash) September 4, 2025 15:47
@tensorrt-cicd (Collaborator)

PR_Github #17610 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #58 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@litaotju litaotju merged commit 49b457c into NVIDIA:release/1.1.0rc2 Sep 4, 2025
5 checks passed