[TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case by Njuapp · Pull Request #7128 · NVIDIA/TensorRT-LLM

Conversation

@Njuapp Njuapp (Collaborator) commented Aug 21, 2025

Summary by CodeRabbit

  • New Features
    • Added an optional beta setting to enable LM tensor parallelism when using Attention Data Parallelism. Configurable via args/CLI and off by default.
  • Performance
    • Improved throughput and scalability for ADP + TP setups, including optimized cross-rank sampling in speculative decoding (MTP/Eagle).
  • Stability
    • More robust speculative decoding via automatic padding/cropping of variable-length batches to prevent shape mismatches.
  • Compatibility
    • No breaking changes; existing behaviors remain unchanged unless the new option is enabled.
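
For illustration, enabling the new knob through the LLM API could look like the sketch below. The flag names come from this PR; the model path and TP size are placeholders, not a recommended configuration.

from tensorrt_llm import LLM

# Hypothetical usage sketch: enable_lm_tp_in_adp only takes effect
# when enable_attention_dp is also True; both default to False.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3",  # placeholder model
    tensor_parallel_size=8,
    enable_attention_dp=True,
    enable_lm_tp_in_adp=True,
)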

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
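
For example, a typical invocation combining the flags documented above:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"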

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@Njuapp Njuapp requested review from a team as code owners August 21, 2025 11:32
@coderabbitai coderabbitai bot (Contributor) commented Aug 21, 2025

Caution

Review failed

Failed to post review comments.

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between eee58cb and e4e8fa6.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_utils.py (1 hunks)
  • tensorrt_llm/_torch/modules/embedding.py (3 hunks)
  • tensorrt_llm/_torch/speculative/mtp.py (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/models/modeling_utils.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/speculative/mtp.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/speculative/mtp.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/speculative/mtp.py
🧠 Learnings (6)
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tensorrt_llm/_torch/modules/embedding.py
📚 Learning: 2025-09-04T17:00:29.500Z
Learnt from: CR
PR: NVIDIA/TensorRT-LLM#0
File: CODING_GUIDELINES.md:0-0
Timestamp: 2025-09-04T17:00:29.500Z
Learning: Applies to **/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py} : Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Applied to files:

  • tensorrt_llm/_torch/modules/embedding.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
📚 Learning: 2025-08-08T04:10:19.038Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6728
File: cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp:966-966
Timestamp: 2025-08-08T04:10:19.038Z
Learning: TensorRT plugins currently don't support padding functionality, and TensorRT is not getting new features (in maintenance mode). This means that duplicating parameters like mExpertHiddenSize in function calls, even with TODO comments, can be acceptable as pragmatic solutions within these constraints.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/modules/embedding.py (2)
tensorrt_llm/_torch/distributed/communicator.py (5)
  • tp_size (46-47)
  • pp_size (42-43)
  • world_size (26-27)
  • rank (22-23)
  • tp_rank (54-55)
tensorrt_llm/mapping.py (4)
  • Mapping (32-517)
  • rank (326-327)
  • rank (330-337)
  • tp_rank (340-341)
tensorrt_llm/_torch/speculative/mtp.py (2)
tensorrt_llm/_torch/distributed/ops.py (1)
  • allgather (141-225)
tensorrt_llm/mapping.py (3)
  • Mapping (32-517)
  • rank (326-327)
  • rank (330-337)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
📝 Walkthrough

Introduces an ADP + LM TP mode with a new enable_lm_tp_in_adp flag, adds pre-LM hidden_state all-gather, updates LMHead to support multi-TP slicing via a new is_mtp_head arg, adapts speculative MTP workers for padding/gather and cross-rank draft sampling, and plumbs the flag through Mapping and LLM args.
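
As a rough illustration of the new path, the sketch below gathers each ADP rank's tokens and applies only this rank's vocab shard of the LM head weight. It assumes torch.distributed collectives and equal-shaped per-rank inputs; all names are illustrative, not the actual TensorRT-LLM code.

import torch
import torch.nn.functional as F
import torch.distributed as dist

def lm_head_forward_adp_lm_tp(hidden_states, lm_head_weight, tp_rank, tp_size, group=None):
    """Gather per-rank tokens, then apply this rank's vocab shard of the LM head."""
    # Each ADP rank holds only its own tokens; gather them so every rank
    # sees the full batch before the LM head (shapes must match across ranks).
    gathered = [torch.empty_like(hidden_states) for _ in range(tp_size)]
    dist.all_gather(gathered, hidden_states, group=group)
    full_hidden = torch.cat(gathered, dim=0)  # [total_tokens, hidden]
    # Slice the LM head weight along the vocab dim for this rank, mirroring
    # the is_mtp_head path that computes F.linear on the slice.
    vocab_per_rank = lm_head_weight.shape[0] // tp_size
    w = lm_head_weight[tp_rank * vocab_per_rank:(tp_rank + 1) * vocab_per_rank]
    return F.linear(full_hidden, w)  # [total_tokens, vocab_per_rank]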

Changes

Cohort / File(s) Summary of modifications
Config & Mapping plumbing
tensorrt_llm/llmapi/llm_args.py, tensorrt_llm/mapping.py
Adds enable_lm_tp_in_adp flag to BaseLlmArgs, _ParallelConfig, Mapping constructor, and to_dict; propagates flag through parallel config to mapping.
Model invocation updates
tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/models/modeling_utils.py
Adds pre-LM all-gather of hidden_states in ADP+LM TP; toggles lm_head.gather_output around logits; LM head called with is_mtp_head=True; selects lightweight LMHead path only when ADP enabled and LM TP in ADP disabled.
LMHead changes
tensorrt_llm/_torch/modules/embedding.py
Reads LM_TP_SIZE env; rebuilds mapping for LM TP in ADP; extends LMHead.forward with is_mtp_head; when set, slices weight per TP rank and computes F.linear on slice; preserves existing post-processing.
Speculative MTP updates
tensorrt_llm/_torch/speculative/mtp.py
Adds padding via F.pad; aligns logits computation with gathered/padded hidden_states; updates draft_sampler to support ADP+LM TP cross-rank argmax via constructed mapping; extends get_local_max_and_combined to accept optional mapping_lm_tp.
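
The padding/cropping noted for mtp.py could look like this minimal sketch; the 2-D [tokens, hidden] layout and the max_num_tokens argument are assumptions for illustration, not the actual implementation.

import torch
import torch.nn.functional as F

def pad_to_max(hidden_states: torch.Tensor, max_num_tokens: int) -> torch.Tensor:
    """Pad the token dim up to max_num_tokens so all ranks contribute equal shapes."""
    num_tokens = hidden_states.shape[0]
    if num_tokens < max_num_tokens:
        # F.pad pads trailing dims first: (0, 0) leaves the hidden dim untouched,
        # the second pair appends zero rows along the token dim.
        hidden_states = F.pad(hidden_states, (0, 0, 0, max_num_tokens - num_tokens))
    return hidden_states

# After the collective, each rank crops back to its true length, e.g.:
# logits = logits[:token_count]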

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor R as Rank (ADP+LM TP)
  participant M as Mapping (LM TP)
  participant Dist as Distributed
  participant LM as LMHead

  R->>R: Forward decoder to hidden_states
  alt enable_attention_dp && enable_lm_tp_in_adp
    R->>M: Build LM TP mapping (lm_tp_size, lm_pp_size)
    R->>Dist: allgather(hidden_states, dim=0, mapping=M)
    Dist-->>R: gathered_hidden_states
    R->>LM: forward(gathered_hidden_states, is_mtp_head=True)
    LM-->>R: logits
  else
    R->>LM: forward(hidden_states)
    LM-->>R: logits
  end
sequenceDiagram
  autonumber
  participant W as MTPWorker
  participant Map as Mapping (LM TP)
  participant Coll as Cross-rank Collectives

  W->>W: logits per rank
  alt ADP+LM TP enabled
    W->>Map: Build LM TP mapping
    W->>W: get_local_max_and_combined(logits, mapping_lm_tp=Map)
    W->>Coll: all-gather interleaved [idx,val] across ranks
    Coll-->>W: gathered [idx,val] for all ranks
    W->>W: slice to local rank, derive draft_tokens
  else
    W->>W: existing TP or single-rank path
  end
  W-->>W: draft_tokens (cropped by token_count)
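The cross-rank draft sampling above can be sketched as a two-stage argmax: each rank reduces over its vocab shard, packs [idx, val], all-gathers, and reduces globally. torch.distributed is assumed; every name is illustrative rather than the actual MTPWorker API (a later review nitpick flags the float cast of indices).

import torch
import torch.distributed as dist

def cross_rank_argmax(local_logits, tp_rank, vocab_per_rank, group=None):
    """Global argmax over vocab shards without gathering full logits.

    local_logits: [num_tokens, vocab_per_rank] shard owned by this rank.
    Returns global token ids of shape [num_tokens].
    """
    # Stage 1: local max value and its global vocab index.
    local_val, local_idx = local_logits.max(dim=-1)           # [num_tokens] each
    local_idx = local_idx + tp_rank * vocab_per_rank          # shift to global ids
    # Pack [idx, val] so a single all-gather moves both.
    combined = torch.stack([local_idx.float(), local_val.float()], dim=-1)
    gathered = [torch.empty_like(combined) for _ in range(dist.get_world_size(group))]
    dist.all_gather(gathered, combined, group=group)
    stacked = torch.stack(gathered)                           # [tp, num_tokens, 2]
    # Stage 2: the rank whose local max wins supplies the global index.
    best_rank = stacked[..., 1].argmax(dim=0)                 # [num_tokens]
    idx = stacked[..., 0].gather(0, best_rank.unsqueeze(0)).squeeze(0)
    return idx.long()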

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

AutoDeploy

Suggested reviewers

  • schetlur-nv
  • syuoni
  • QiJune
  • lfr-0531
  • Superjomn
  • yweng0828
  • nv-guomingz

@Njuapp Njuapp changed the title enable LM tp for MTP, under attention dp case draft: enable LM tp for MTP, under attention dp case Aug 21, 2025
@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/distributed/ops.py (1)

19-25: Avoid shadowing the project logger.

You import logger from tensorrt_llm.logger (line 19) and then overwrite it with logging.getLogger(__name__) (line 24). This can drop project-specific logging configuration.

-logger = logging.getLogger(__name__)
+# Use the project logger imported above; if a module-local logger is needed:
+# py_logger = logging.getLogger(__name__)
🧹 Nitpick comments (10)
examples/llm-api/extra-llm-api-config.yml (1)

4-6: Normalize YAML indentation under speculative_config.

Indentation under speculative_config is inconsistent with the rest of the file. Use 2 spaces to avoid surprises with strict YAML parsers and to match the repo style.

 speculative_config:
-    decoding_type: MTP
-    num_nextn_predict_layers: 3
+  decoding_type: MTP
+  num_nextn_predict_layers: 3
tensorrt_llm/_torch/modules/embedding.py (1)

1-1: Add NVIDIA copyright/SPDX header.

Per coding guidelines, prepend the NVIDIA copyright header to all Python sources.

+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 import math
tensorrt_llm/_torch/models/modeling_utils.py (1)

1-1: Add NVIDIA copyright/SPDX header.

Please add the required header at the top of this file.

+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 import contextlib
tensorrt_llm/_torch/distributed/ops.py (1)

1-1: Add NVIDIA copyright/SPDX header.

Please add the required header to this Python source.

+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 import logging
tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)

164-170: Pre‑LM allgather: add tp_size guard and drop redundant local import

  • The local import of allgather (Line 168) is redundant; this module already imports it at the top.
  • Guarding on mapping.tp_size > 1 avoids unnecessary collective when TP is 1.

Apply this diff:

-        # Add pre-lm gather logic
-        if (self.model_config.mapping.enable_attention_dp and 
-            getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            # ADP + LM TP mode: perform All-Gather before LM_head
-            from ..distributed import allgather
-            hidden_states = allgather(hidden_states, self.model_config.mapping, dim=-1)
+        # Add pre-LM gather logic
+        mapping = self.model_config.mapping
+        if (mapping.enable_attention_dp
+                and getattr(mapping, 'enable_lm_tp_in_adp', False)
+                and mapping.tp_size > 1):
+            # ADP + LM TP mode: gather hidden across the last dim (TP shards)
+            hidden_states = allgather(hidden_states, mapping, dim=-1)

1-20: Header/license consistency with repo guideline

Repo guideline requires an NVIDIA copyright header for Python sources. This file only carries the upstream MIT header. If the repo policy is to prepend NVIDIA’s header while preserving upstream attribution, please add it at the top.

tensorrt_llm/_torch/speculative/mtp.py (2)

1042-1057: Avoid casting token indices to float32 when packing

Casting indices to float can risk precision if the vocab grows large. While 32-bit floats represent today's vocab sizes exactly, consider keeping indices as int32 and casting only the values, or use two allgathers (values and indices) as in the fix above to avoid packing altogether.


1060-1071: Minor: clarify assumptions in gathered layout

The even/odd slicing depends on the [idx,val] packing; add a brief comment documenting the layout and why float types are used to avoid surprises during maintenance. No functional issues otherwise.
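
A minimal sketch of the two-allgather alternative mentioned above, assuming torch.distributed; indices stay in their integer dtype at the cost of a second collective:

import torch
import torch.distributed as dist

def gather_max_separately(local_val, local_idx, group=None):
    """All-gather values and indices in their native dtypes (no float packing)."""
    world = dist.get_world_size(group)
    vals = [torch.empty_like(local_val) for _ in range(world)]
    idxs = [torch.empty_like(local_idx) for _ in range(world)]
    dist.all_gather(vals, local_val, group=group)   # float values
    dist.all_gather(idxs, local_idx, group=group)   # integer indices, lossless
    return torch.stack(vals), torch.stack(idxs)     # [tp, num_tokens] each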

tensorrt_llm/llmapi/llm_args.py (1)

226-227: Flag plumbing looks good; add validation to prevent silent misconfigurations

Propagation of enable_lm_tp_in_adp through _ParallelConfig and Mapping is correct. To prevent confusion, add a validator or a warning when:

  • enable_lm_tp_in_adp is True but enable_attention_dp is False (flag has no effect),
  • or tp_size == 1 (no LM TP to enable).

This keeps configs self‑consistent and avoids enabling a knob that does nothing.

I can add a small model_validator to emit a warning in these cases if you’d like.

Also applies to: 290-291, 1198-1202, 1450-1451

cpp/tensorrt_llm/thop/reducescatterOp.cpp (1)

43-45: reduceOp plumbed correctly; consider input validation and docs updates

  • Passing ncclRedOp_t through the constructor and using it in ncclReduceScatter/ncclReduce looks correct. Nice.
  • Please guard reduceOp against invalid enum values (e.g., out‑of‑range ints) and document the accepted values at the Python wrapper layer. A quick check with TLLM_CHECK can prevent undefined behavior.
  • Minor: for the ragged path (sizes.has_value()), consider using nullptr for recvbuff on non‑root ranks in ncclReduce to make intent explicit.

Example guard near the cast site:

int64_t ro = reduceOp;
TLLM_CHECK_WITH_INFO(ro == ncclSum || ro == ncclProd || ro == ncclMax || ro == ncclMin,
    "Unsupported reduceOp %ld", static_cast<long>(ro));
ncclRedOp_t ncclReduceOp = static_cast<ncclRedOp_t>(ro);

Also applies to: 96-96, 107-107, 126-126, 134-144, 153-163, 175-176

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 90bfc8c and 0cd3ede.

📒 Files selected for processing (9)
  • cpp/tensorrt_llm/thop/reducescatterOp.cpp (6 hunks)
  • examples/llm-api/extra-llm-api-config.yml (1 hunks)
  • tensorrt_llm/_torch/distributed/ops.py (3 hunks)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_utils.py (1 hunks)
  • tensorrt_llm/_torch/modules/embedding.py (1 hunks)
  • tensorrt_llm/_torch/speculative/mtp.py (1 hunks)
  • tensorrt_llm/llmapi/llm_args.py (4 hunks)
  • tensorrt_llm/mapping.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/models/modeling_utils.py
  • tensorrt_llm/mapping.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/_torch/speculative/mtp.py
  • tensorrt_llm/_torch/distributed/ops.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/models/modeling_utils.py
  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
  • tensorrt_llm/mapping.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/_torch/speculative/mtp.py
  • tensorrt_llm/_torch/distributed/ops.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
**/*.{cpp,cxx,cc,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu}: Avoid literal values except for 0, nullptr, true, false; use named constexpr for other literals
Place semicolon of empty for/while loop on a new line
Always use brace-delimited bodies for switch/while/do-for/if/else
Use inline C comments in argument lists when parameter meaning is unclear (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) ... is forbidden)
Switch on enums should enumerate all values and omit default to catch new values at compile time
Structure switch statements; prohibit fallthrough except between empty cases; each case ends with break or throw; return at end of case not allowed; put break inside braces for compound case
Prefer anonymous namespaces over static for internal linkage of functions
Every defined function must be called at least once (no unused methods)

Files:

  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
🧠 Learnings (2)
📓 Common learnings
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (6)
examples/llm-api/extra-llm-api-config.yml (1)

5-6: All good – num_nextn_predict_layers is the correct key

Search across the codebase confirms that the configuration field is consistently defined and referenced as num_nextn_predict_layers, with no occurrences of variants like num_next_n_predict_layers or num_next_predict_layers. This key is used in:

  • Parsing and argument definitions (tensorrt_llm/llmapi/llm_args.py)
  • MTP decoding implementations and utilities
  • Test suites (unit and integration tests)
  • Examples and documentation

No action is required.

tensorrt_llm/_torch/modules/embedding.py (1)

40-43: Gating in LMHead init is correct — it disables any embedding parallelism when attention‐based DP is on (unless enable_lm_tp_in_adp=True), so by default both Embedding and LMHead fall back to tensor_parallel_mode=None and remain in sync. I’ve checked the major call sites—none explicitly pass a non-None tensor_parallel_mode—so tied embeddings will not trigger the TP mode/size assertions in DecoderModelForCausalLM.

• Double-check the two constructors in
tensorrt_llm/_torch/models/modeling_utils.py (lines 354–358)
to ensure that, if you do set enable_lm_tp_in_adp=True, you propagate the same tensor_parallel_mode into both the vocab embedding and the LM head.

With that confirmed, no code changes are needed here.

tensorrt_llm/_torch/models/modeling_utils.py (1)

355-383: Verified ADP+LM TP gating and gather_output behavior

  • The enable_attention_dp and enable_lm_tp_in_adp flags in tensorrt_llm/mapping.py default to False, preserving the legacy ADP-only code path.
  • In torch/models/modeling_utils.py, LMHead is instantiated without tensor parallelism when ADP is enabled but LM TP in ADP is disabled, and with gather_output=True only in the TP-aware branch.
  • In torch/models/modeling_deepseekv3.py, the code performs an all-gather before the LM head when both ADP and LM TP in ADP are on, and dynamically toggles lm_head.gather_output around the forward call to ensure consistency.

A regex search confirmed that all gather_output assignments throughout the repo align with these gating conditions. Approving these changes.

tensorrt_llm/mapping.py (1)

294-305: Add enable_attention_dp and enable_lm_tp_in_adp to Mapping’s equality, hash, and serialization

To prevent cache collisions and stale workspaces (since Mapping instances are used as dict-keys in allreduce workspaces, KV-cache pools, etc.), please extend all three methods consistently:

• In Mapping.__eq__ (currently around lines 294–305, and similarly at 308–324): append

    and self.enable_attention_dp == other.enable_attention_dp
    and self.enable_lm_tp_in_adp == other.enable_lm_tp_in_adp

to the final return.

• In Mapping.__hash__ (around lines 300–315): include

    self.enable_attention_dp,
    self.enable_lm_tp_in_adp,

in the tuple passed to hash((…)).

• In Mapping.to_dict (around lines 500–515): add

    'enable_attention_dp': self.enable_attention_dp,
    'enable_lm_tp_in_adp': self.enable_lm_tp_in_adp,

to the returned dict.

Also, verify that Mapping.from_dict (if implemented) reads these two keys back—otherwise serialized configs won’t round-trip.

After applying these changes, please re-run your cache-collision check to ensure no unintended merges remain, e.g.:

rg -nP --type=py -C2 '\bMapping\b' tensorrt_llm | sed -n '1,200p'

…and confirm that every codepath which uses a Mapping as a dict key now distinguishes different DP settings.
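
A self-contained sketch of the suggested shape; the field list is abbreviated to the two new flags plus tp_size for illustration (the real Mapping carries many more fields):

class Mapping:
    def __init__(self, tp_size=1, enable_attention_dp=False, enable_lm_tp_in_adp=False):
        self.tp_size = tp_size
        self.enable_attention_dp = enable_attention_dp
        self.enable_lm_tp_in_adp = enable_lm_tp_in_adp

    def __eq__(self, other):
        return (isinstance(other, Mapping)
                and self.tp_size == other.tp_size
                and self.enable_attention_dp == other.enable_attention_dp
                and self.enable_lm_tp_in_adp == other.enable_lm_tp_in_adp)

    def __hash__(self):
        # Keep in sync with __eq__ so dict-keyed caches distinguish DP settings.
        return hash((self.tp_size, self.enable_attention_dp, self.enable_lm_tp_in_adp))

    def to_dict(self):
        return {
            'tp_size': self.tp_size,
            'enable_attention_dp': self.enable_attention_dp,
            'enable_lm_tp_in_adp': self.enable_lm_tp_in_adp,
        }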

tensorrt_llm/_torch/distributed/ops.py (2)

12-14: Re-exporting ReduceOp is fine.

This provides a convenient import path for call sites without pulling torch.distributed directly.


232-239: No existing call sites pass ints or ReduceOp
Ran rg -nP --type=py -C2 'reducescatter\([^)]*$' and inspected all callers of reducescatter (in tensorrt_llm/_torch/speculative/mtp.py, .../modules/fused_moe/interface.py, .../fused_moe_vanilla.py, and .../fused_moe_wide_ep.py). None of them supply a non-string reduce_op argument, so they’ll continue to use the default "sum". No call-site updates are required.

Comment on lines 171 to 197
        # Temporarily disable gather_output when not in ADP mode or (in ADP mode and LM TP is enabled)
        if (not self.model_config.mapping.enable_attention_dp) or (self.model_config.mapping.enable_attention_dp and
                getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            lm_head.gather_output = False
        logits = lm_head(hidden_states)
-       if not (self.model_config.mapping.enable_attention_dp):
+       if (not self.model_config.mapping.enable_attention_dp) or (self.model_config.mapping.enable_attention_dp and
+               getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            lm_head.gather_output = True
Contributor

⚠️ Potential issue

Make gather_output toggle exception‑safe and restore the previous value

Mutating lm_head.gather_output globally without saving/restoring the prior value is fragile: exceptions or nested calls can leave the module in the wrong state. Save the prior value and restore it in a finally block. Also reuse the same condition via a local boolean for readability.

Apply this diff:

-        # Temporarily disable gather_output when not in ADP mode or (in ADP mode and LM TP is enabled)
-        if (not self.model_config.mapping.enable_attention_dp) or (self.model_config.mapping.enable_attention_dp and
-                getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            lm_head.gather_output = False
-        logits = lm_head(hidden_states)
-        if (not self.model_config.mapping.enable_attention_dp) or (self.model_config.mapping.enable_attention_dp and
-                getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            lm_head.gather_output = True
+        # Temporarily disable gather_output when not in ADP, or when ADP + LM TP is enabled
+        mapping = self.model_config.mapping
+        disable_gather = (not mapping.enable_attention_dp) or getattr(mapping, 'enable_lm_tp_in_adp', False)
+        prev_gather = lm_head.gather_output
+        try:
+            if disable_gather:
+                lm_head.gather_output = False
+            logits = lm_head(hidden_states)
+        finally:
+            if disable_gather:
+                lm_head.gather_output = prev_gather

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/models/modeling_deepseekv3.py around lines 171 to 178,
the code toggles lm_head.gather_output without saving/restoring its previous
value which can leave the module in an inconsistent state on exceptions; replace
the inline repeated condition with a local boolean (e.g. should_toggle = (not
self.model_config.mapping.enable_attention_dp) or
(self.model_config.mapping.enable_attention_dp and
getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False))), save the
previous_value = lm_head.gather_output, then if should_toggle set
lm_head.gather_output = False, compute logits = lm_head(hidden_states) inside a
try/finally and in finally always restore lm_head.gather_output =
previous_value; keep logic and readability but ensure exception-safety and avoid
repeating the condition.

Member

Is this added by accident?

        if (self.model_config.mapping.enable_attention_dp and
                getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            # ADP + LM TP mode: perform All-Gather before LM_head
            from ..distributed import allgather
Member

Please move this to the header of the file.

@kaiyux kaiyux requested review from lfr-0531 and yweng0828 August 21, 2025 14:24
        description="Enable attention data parallel.",
        status="beta")

    enable_lm_tp_in_adp: bool = Field(
Member

@Superjomn FYI - is it ok to add another argument here? Any other suggestions?

Collaborator

It is OK for the prototype stage. The mechanism, I think, is like this:

  1. If there is no existing XxConfig to hold the new knob, it is fine to add a dangling knob, but mark it as a prototype.
  2. We can wait until there are two or three knobs in the same category, then consider grouping them into an XxConfig; there is no rush to introduce a hierarchical config before we are sure the knobs need it.
  3. When the feature is somewhat stable, we can mark the xx_config beta.

Member

Thanks, in that case, for this knob it should be status="prototype"?

@Superjomn Superjomn (Collaborator) commented Sep 4, 2025

Yes, I think so. A dangling knob should start from "prototype", as we may refactor it into a hierarchical Config later.

@kaiyux kaiyux requested a review from Superjomn August 22, 2025 01:28
@Njuapp Njuapp requested a review from a team as a code owner August 26, 2025 06:52
@Njuapp Njuapp requested review from chuangz0 and schetlur-nv August 26, 2025 06:52
@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/tensorrt_llm/thop/reducescatterOp.cpp (1)

22-27: Missing include (std::accumulate usage)

run_list uses std::accumulate, but <numeric> is not included. This may fail to compile depending on transitively included headers.

Apply this diff:

 #include <torch/extension.h>
 #if ENABLE_MULTI_DEVICE
 #include <nccl.h>
 #endif // ENABLE_MULTI_DEVICE
 
 #include <cassert>
+#include <numeric>
 #include <set>
 #include <vector>
♻️ Duplicate comments (3)
tensorrt_llm/llmapi/llm_args.py (1)

1251-1255: Mark the new public knob as prototype, per prior guidance

Previous discussion recommended marking brand‑new, dangling knobs as prototype until stabilized. Please set status="prototype" here to match that guidance.

Apply this diff:

-    enable_lm_tp_in_adp: bool = Field(
-        default=False,
-        description="Enable lm tp in attention dp.",
-        status="beta")
+    enable_lm_tp_in_adp: bool = Field(
+        default=False,
+        description="Enable LM tensor-parallel in Attention DP mode.",
+        status="prototype")
tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)

163-169: Remove redundant local import; it's already imported at the file header

allgather is already imported at the top; keep imports centralized.

-            from ..distributed import allgather
             hidden_states = allgather(hidden_states, self.model_config.mapping, dim=-1)

170-177: Make gather_output toggle exception-safe and restore prior value

Current code may leave lm_head.gather_output in the wrong state on exceptions or nested calls. Compute the condition once, save the prior value, and restore it in finally.

Apply this diff:

-        # Temporarily disable gather_output when not in ADP mode or (in ADP mode and LM TP is enabled)
-        if (not self.model_config.mapping.enable_attention_dp) or (self.model_config.mapping.enable_attention_dp and
-                getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            lm_head.gather_output = False
-        logits = lm_head(hidden_states)
-        if (not self.model_config.mapping.enable_attention_dp) or (self.model_config.mapping.enable_attention_dp and
-                getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            lm_head.gather_output = True
+        # Temporarily disable gather_output when not in ADP, or when ADP+LM TP is enabled
+        mapping = self.model_config.mapping
+        disable_gather = (not mapping.enable_attention_dp) or getattr(mapping, 'enable_lm_tp_in_adp', False)
+        prev_gather = lm_head.gather_output
+        try:
+            if disable_gather:
+                lm_head.gather_output = False
+            logits = lm_head(hidden_states)
+        finally:
+            if disable_gather:
+                lm_head.gather_output = prev_gather
🧹 Nitpick comments (3)
tensorrt_llm/llmapi/llm_args.py (1)

1503-1504: Good: flag is propagated into _ParallelConfig

Consider warning if enable_lm_tp_in_adp is set while enable_attention_dp is False (no effect). Optional.

Example validator to add (outside the selected lines):

@model_validator(mode="after")
def warn_lm_tp_in_adp_without_adp(self):
    if self.enable_lm_tp_in_adp and not self.enable_attention_dp:
        logger.warning("enable_lm_tp_in_adp has no effect unless enable_attention_dp is True.")
    return self
examples/wide_ep/slurm_scripts/submit_e2e.sh (1)

14-15: mtp_size default change

Bumping mtp_size to 3 aligns with the new MTP path. Consider exposing this as an env override similar to other parameters.

-mtp_size=3
+mtp_size="${MTP_SIZE:-3}"
cpp/tensorrt_llm/thop/reducescatterOp.cpp (1)

2-2: Nit: update copyright year

Header still says 1993-2024; repository guideline asks for current year.

- * SPDX-FileCopyrightText: Copyright (c) 1993-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 1993-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 0cd3ede and 28ab958.

📒 Files selected for processing (11)
  • cpp/tensorrt_llm/thop/reducescatterOp.cpp (6 hunks)
  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (1 hunks)
  • examples/llm-api/extra-llm-api-config.yml (1 hunks)
  • examples/wide_ep/slurm_scripts/submit_e2e.sh (1 hunks)
  • tensorrt_llm/_torch/distributed/ops.py (3 hunks)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_utils.py (1 hunks)
  • tensorrt_llm/_torch/modules/embedding.py (1 hunks)
  • tensorrt_llm/_torch/speculative/mtp.py (1 hunks)
  • tensorrt_llm/llmapi/llm_args.py (4 hunks)
  • tensorrt_llm/mapping.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • tensorrt_llm/mapping.py
  • examples/llm-api/extra-llm-api-config.yml
  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/distributed/ops.py
  • tensorrt_llm/_torch/speculative/mtp.py
  • tensorrt_llm/_torch/models/modeling_utils.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in init
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures

Files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/llmapi/llm_args.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/llmapi/llm_args.py
  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
**/*.{c,cc,cpp,cxx,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{c,cc,cpp,cxx,cu}: Closing braces of C++ namespaces must include a trailing comment naming the namespace (e.g., } // namespace foo)
Use Allman brace style; empty for/while loop semicolon on its own line; always use braces for control statements
C++ filenames must be lowerCamelCase (e.g., thisIsAFilename.cpp) and be case-insensitively unique within a compilation target
Use smart pointers; prefer unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases; do not use deprecated smart pointers
In implementation, prefer C++ comments (//); use inline C comments only for annotating parameters in calls (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) or chained x = y = z)

Files:

  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
**/*.{c,cc,cpp,cxx,cu,h,hh,hpp,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{c,cc,cpp,cxx,cu,h,hh,hpp,hxx,cuh}: Prefer const or constexpr variables over #define for constants; variables not modified after initialization must be declared const
Avoid using literals (except 0, nullptr, true, false) outside of initialization; prefer named constexpr constants
Type names (classes, structs, enums, typedefs) must be UpperCamelCase
Local variables, methods, and namespaces must be lowerCamelCase
Non-magic-number global variables that are non-static/not in anonymous namespace must be prefixed with g (e.g., gDontUseGlobalFoos)
Non-magic-number globals that are static or in an anonymous namespace must be prefixed with s (e.g., sMutableStaticGlobal)
Locally visible static variables should be lowerCamelCase prefixed with s (e.g., static std::once_flag sFlag)
Member variables should be lowerCamelCase prefixed with m (e.g., mNbFooValues); public members may omit but prefix is encouraged for clarity
Constants (enums, globals, static constants, and function-scope magic numbers) should be UPPER_SNAKE_CASE with k prefix (e.g., kDIGIT_NUM)
Avoid Hungarian notation except limited 'apps Hungarian' like nb for counts; literal suffixes should be uppercase (e.g., 1234L)
Use spaces only; indent with 4 spaces (no tabs)
Format C++ code with clang-format (LLVM style) and limit lines to 120 characters; exceptions must be bracketed with // clang-format off/on
Disable code with #if/#endif (prefer mnemonic conditions) or macros that noop in release; do not comment out code; avoid dead code
Use the least forceful cast necessary; avoid removing const/volatile; avoid C-style and functional casts (except explicit constructors); cast void* to T* with static_cast; use reinterpret_cast only as last resort; avoid dynamic_cast
Switch on enum should cover all values and omit default when possible; switch statements must be well-structured with no fall-through except between adjacent empty cases; each case must end with break or throw; returns at end of case are not allowed; if ...

Files:

  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
🧠 Learnings (2)
📓 Common learnings
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • cpp/tensorrt_llm/thop/reducescatterOp.cpp
🧬 Code graph analysis (2)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (4)
tensorrt_llm/_torch/distributed/ops.py (1)
  • allgather (145-229)
tensorrt_llm/functional.py (1)
  • allgather (4142-4226)
cpp/tensorrt_llm/thop/allgatherOp.cpp (2)
  • allgather (122-137)
  • allgather (122-122)
tensorrt_llm/_torch/distributed/communicator.py (2)
  • allgather (94-95)
  • allgather (107-108)
cpp/tensorrt_llm/thop/reducescatterOp.cpp (3)
cpp/tensorrt_llm/common/opUtils.cpp (2)
  • getDtypeMap (33-47)
  • getDtypeMap (33-33)
cpp/tensorrt_llm/thop/allreduceOp.cpp (14)
  • input (159-191)
  • input (159-162)
  • input (207-285)
  • input (207-209)
  • input (287-306)
  • input (287-289)
  • input (308-339)
  • input (308-310)
  • input (341-406)
  • input (341-343)
  • input (408-590)
  • input (408-412)
  • input (592-658)
  • input (592-595)
cpp/tensorrt_llm/thop/allgatherOp.cpp (4)
  • input (108-111)
  • input (108-108)
  • input_list (58-106)
  • input_list (58-58)
🪛 Shellcheck (0.10.0)
examples/wide_ep/slurm_scripts/submit_e2e.sh

[warning] 5-5: Use var=$(command) to assign output (or quote to assign string).

(SC2209)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/llmapi/llm_args.py (1)

228-230: Plumbing the new flag through _ParallelConfig and Mapping looks correct

to_mapping now forwards enable_lm_tp_in_adp; no behavioral regressions with default False.

Also applies to: 292-293

cpp/tensorrt_llm/thop/reducescatterOp.cpp (1)

175-177: API extension looks good

Torch bindings and default reduceOp ensure backward compatibility.

Comment on lines 133 to 144
 extern torch::Tensor reducescatter(
-    torch::Tensor input, torch::optional<torch::List<int64_t>> sizes, torch::List<int64_t> group_)
+    torch::Tensor input, torch::optional<torch::List<int64_t>> sizes, torch::List<int64_t> group_, int64_t reduceOp = 0)
 {
 #if ENABLE_MULTI_DEVICE
     std::set<int> group;
     for (int64_t rank : group_)
     {
         group.insert(static_cast<int>(rank));
     }
-    ReducescatterOp op(group);
+    ncclRedOp_t ncclReduceOp = static_cast<ncclRedOp_t>(reduceOp);
+    ReducescatterOp op(group, ncclReduceOp);
     op.initialize();
     auto output = op.run(input, sizes);
     return output;
Contributor

🛠️ Refactor suggestion

Validate reduceOp to supported NCCL ops

Guard against invalid reduceOp to fail fast with a clear error instead of undefined NCCL behavior.

Apply this diff:

-    ncclRedOp_t ncclReduceOp = static_cast<ncclRedOp_t>(reduceOp);
+    ncclRedOp_t ncclReduceOp = static_cast<ncclRedOp_t>(reduceOp);
+    switch (ncclReduceOp)
+    {
+    case ncclSum:
+    case ncclProd:
+    case ncclMax:
+    case ncclMin:
+        break;
+    default:
+        TLLM_CHECK_WITH_INFO(false, "Unsupported reduceOp: %ld", static_cast<long>(reduceOp));
+    }

Repeat the same check in reducescatter_list().

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (before):

extern torch::Tensor reducescatter(
    torch::Tensor input, torch::optional<torch::List<int64_t>> sizes, torch::List<int64_t> group_, int64_t reduceOp = 0)
{
#if ENABLE_MULTI_DEVICE
    std::set<int> group;
    for (int64_t rank : group_)
    {
        group.insert(static_cast<int>(rank));
    }
    ncclRedOp_t ncclReduceOp = static_cast<ncclRedOp_t>(reduceOp);
    ReducescatterOp op(group, ncclReduceOp);
    op.initialize();
    auto output = op.run(input, sizes);
    return output;

Suggested change (after):

extern torch::Tensor reducescatter(torch::Tensor input, torch::optional<torch::List<int64_t>> sizes,
    torch::List<int64_t> group_, int64_t reduceOp = 0)
{
#if ENABLE_MULTI_DEVICE
    std::set<int> group;
    for (int64_t rank : group_)
    {
        group.insert(static_cast<int>(rank));
    }
    ncclRedOp_t ncclReduceOp = static_cast<ncclRedOp_t>(reduceOp);
    switch (ncclReduceOp)
    {
    case ncclSum:
    case ncclProd:
    case ncclMax:
    case ncclMin: break;
    default: TLLM_CHECK_WITH_INFO(false, "Unsupported reduceOp: %ld", static_cast<long>(reduceOp));
    }
    ReducescatterOp op(group, ncclReduceOp);
    op.initialize();
    auto output = op.run(input, sizes);
    return output;
#else
    TORCH_CHECK(false, "Multi-device support is disabled");
#endif
}
🤖 Prompt for AI Agents
In cpp/tensorrt_llm/thop/reducescatterOp.cpp around lines 133 to 146, the code
casts reduceOp to ncclRedOp_t without validating it; add an explicit check that
reduceOp is one of the supported NCCL ops (e.g., NCCL_SUM, NCCL_PROD, NCCL_MIN,
NCCL_MAX) and throw/raise a clear error (with message indicating invalid
reduceOp value) if it is outside that set before constructing ReducescatterOp;
apply the same validation logic to the reducescatter_list() implementation so
both entry points fail fast on invalid reduceOp values.
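
For readers following along from the Python side, here is a minimal sketch of the same fail-fast idea; the numeric codes mirroring ncclRedOp_t (sum=0, prod=1, max=2, min=3) and the helper name are assumptions for illustration, not code from the bindings:

# Hedged sketch: whitelist an integer reduce-op code before handing it to
# the backend; the numeric values mirroring ncclRedOp_t are an assumption.
SUPPORTED_REDUCE_OPS = {0: "sum", 1: "prod", 2: "max", 3: "min"}

def validate_reduce_op(reduce_op: int) -> str:
    """Return the op name, or raise a clear error for unsupported codes."""
    if reduce_op not in SUPPORTED_REDUCE_OPS:
        raise ValueError(
            f"Unsupported reduceOp: {reduce_op}; expected one of "
            f"{sorted(SUPPORTED_REDUCE_OPS)}")
    return SUPPORTED_REDUCE_OPS[reduce_op]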


container_name=disaggr
-logdir=${workdir}/benchmark-${isl}-${osl}
+logdir=${workdir}/mtp-benchmark-${isl}-${osl}

💡 Verification agent

🧩 Analysis chain

Renaming log path prefix to mtp-benchmark: verify downstream consumers and consider a backward-compatible toggle.

Changing the base logdir to mtp-benchmark- can break any log collectors, Jenkins parsers, or notebooks that rely on the old benchmark- prefix. If the intent is to use mtp-benchmark only when MTP is active, consider gating the prefix on mtp_size so non-MTP runs keep the old paths.

Apply this minimal change to make the prefix configurable while keeping current behavior when mtp_size>0:

- logdir=${workdir}/mtp-benchmark-${isl}-${osl}
+ logdir=${workdir}/${log_prefix}-${isl}-${osl}

Then define log_prefix right above line 80 (new lines shown below), so mtp runs use mtp-benchmark and others keep benchmark:

# place just above the logdir assignment
if [ "${mtp_size}" -gt 0 ]; then
  log_prefix="mtp-benchmark"
else
  log_prefix="benchmark"
fi

To catch potential breakages in scripts/docs that still assume benchmark-, run this repo-wide scan:


🏁 Script executed:

#!/bin/bash
# Find places that assume the old "benchmark-" prefix
rg -n --no-ignore -S 'benchmark-' examples scripts docs ci .github || true
# Spot-check new usage of "mtp-benchmark-"
rg -n --no-ignore -S 'mtp-benchmark-' examples scripts docs ci .github || true

Length of output: 1053


Enable configurable logdir prefix and update downstream references

Found occurrences of the hard-coded “benchmark-” prefix in both code and documentation. To avoid breaking existing log-collector scripts, notebooks, or documentation, make the prefix configurable—and update any downstream references if you decide to change the default.

• Code change (in examples/disaggregated/slurm/benchmark/disaggr_torch.slurm around line 80):

- logdir=${workdir}/mtp-benchmark-${isl}-${osl}
+ # configure log-prefix based on mtp_size for backward compatibility
+ if [ "${mtp_size}" -gt 0 ]; then
+   log_prefix="mtp-benchmark"
+ else
+   log_prefix="benchmark"
+ fi
+ logdir=${workdir}/${log_prefix}-${isl}-${osl}

• Documentation anchors and examples to update if you change the default “benchmark-” prefix:
– docs/source/commands/trtllm-serve/index.rst:9 (run-benchmark-with-trtllm-serve)
– docs/source/blogs/Best_perf_practice_on_DeepSeek-R1_in_TensorRT-LLM.md:25 ([Benchmark](#benchmark-1))
– examples/models/core/deepseek_v3/README.md:38 (#example-multi-node-benchmark-on-gb200-slurm-cluster)
– examples/models/core/enc_dec/README.md:20 (#benchmark-c-runtime)

• Catch any remaining hard-coded references repo-wide:

# find old “benchmark-” usages
rg -n --no-ignore -S 'benchmark-' . || true
# spot-check new “mtp-benchmark-” usages
rg -n --no-ignore -S 'mtp-benchmark-' . || true

Please review and update these locations if you adopt the new default prefix.

🤖 Prompt for AI Agents
In examples/disaggregated/slurm/benchmark/disaggr_torch.slurm around line 80,
the logdir uses a hard-coded "benchmark-" prefix; make this prefix configurable
by introducing a variable (e.g., LOG_PREFIX or PREFIX) at the top of the script
with a default value of "benchmark-" and change the logdir assignment to use
that variable (logdir=${workdir}/${LOG_PREFIX}mtp-${isl}-${osl} or equivalent).
After deciding whether to keep the default or switch to a new default (e.g.,
"mtp-benchmark-"), update the downstream documentation and README anchors listed
in the review (docs/source/commands/trtllm-serve/index.rst,
docs/source/blogs/Best_perf_practice_on_DeepSeek-R1_in_TensorRT-LLM.md,
examples/models/core/deepseek_v3/README.md,
examples/models/core/enc_dec/README.md) to reflect the chosen prefix, and run
the provided ripgrep checks to find and fix any remaining hard-coded occurrences
repo-wide.

-logdir=${workdir}/benchmark-${isl}-${osl}
+logdir=${workdir}/mtp-benchmark-${isl}-${osl}
mkdir -p ${logdir}
full_logdir=${logdir}/ctx${num_ctx_servers}_gen${num_gen_servers}_dep${gen_tp_size}_batch${gen_batch_size}_eplb${eplb_num_slots}_mtp${mtp_size}

🛠️ Refactor suggestion

Inconsistent token in full_logdir: dep/tep vs expected tp; unify to avoid split log naming.

full_logdir currently encodes gen_tp_size with dep in one branch and tep in another. This will scatter results across two different folder patterns depending on gen_enable_attention_dp, making automation brittle.

Apply this diff to use a single, consistent key (tp) in both places:

- full_logdir=${logdir}/ctx${num_ctx_servers}_gen${num_gen_servers}_dep${gen_tp_size}_batch${gen_batch_size}_eplb${eplb_num_slots}_mtp${mtp_size}
+ full_logdir=${logdir}/ctx${num_ctx_servers}_gen${num_gen_servers}_tp${gen_tp_size}_batch${gen_batch_size}_eplb${eplb_num_slots}_mtp${mtp_size}
- full_logdir=${logdir}/ctx${num_ctx_servers}_gen${num_gen_servers}_tep${gen_tp_size}_batch${gen_batch_size}_eplb${eplb_num_slots}_mtp${mtp_size}
+ full_logdir=${logdir}/ctx${num_ctx_servers}_gen${num_gen_servers}_tp${gen_tp_size}_batch${gen_batch_size}_eplb${eplb_num_slots}_mtp${mtp_size}

Also applies to: 95-95

🤖 Prompt for AI Agents
In examples/disaggregated/slurm/benchmark/disaggr_torch.slurm around lines 82
and 95, the log directory naming uses inconsistent tokens ("dep" or "tep") for
gen_tp_size depending on gen_enable_attention_dp; update both full_logdir
assignments to use a single consistent token "tp" for the gen_tp_size component
so both branches produce the same folder pattern (replace the incorrect "dep" or
"tep" segment with "tp").

Comment on lines 5 to 14
partition=batch
account=coreai_comparch_trtllm
job_name=deepseekr1
container_image=/lustre/fs1/portfolios/coreai/users/kevxie/workspace/containers/sw-tensorrt-docker+tensorrt-llm+pytorch-25.06-py3-aarch64-ubuntu24.04-trt10.11.0.33-skip-tritondevel-202508201630-pre-test.sqsh
mounts=/lustre/fs1/portfolios/coreai # e.g. /mnt/data:/mnt/data
workdir=/lustre/fs1/portfolios/coreai/projects/coreai_comparch_trtllm/users/chang/TensorRT-LLM/examples/disaggregated/slurm/benchmark # Path to disaggr_torch.slurm
model_dir=/lustre/fs1/portfolios/coreai/users/xqiao/DeepSeek-R1-FP4-v2 # Path to the model checkpoint
repo_dir=/lustre/fs1/portfolios/coreai/projects/coreai_comparch_trtllm/users/chang/TensorRT-LLM # Path to the repo to install TensorRT-LLM, if this is empty, the pre-installed version will be used

-mtp_size=0
+mtp_size=3

💡 Verification agent

🧩 Analysis chain

Avoid hard-coded cluster-specific values; quote assignments

Hard-coded partition/account/paths make this script non-portable and leak internal details. Quote assignments to satisfy shellcheck and avoid word-splitting. Prefer env-driven defaults.

Apply this diff:

-partition=batch
-account=coreai_comparch_trtllm
-job_name=deepseekr1
-container_image=/lustre/fs1/portfolios/coreai/users/kevxie/workspace/containers/sw-tensorrt-docker+tensorrt-llm+pytorch-25.06-py3-aarch64-ubuntu24.04-trt10.11.0.33-skip-tritondevel-202508201630-pre-test.sqsh
-mounts=/lustre/fs1/portfolios/coreai  # e.g. /mnt/data:/mnt/data
-workdir=/lustre/fs1/portfolios/coreai/projects/coreai_comparch_trtllm/users/chang/TensorRT-LLM/examples/disaggregated/slurm/benchmark  # Path to disaggr_torch.slurm
-model_dir=/lustre/fs1/portfolios/coreai/users/xqiao/DeepSeek-R1-FP4-v2  # Path to the model checkpoint
-repo_dir=/lustre/fs1/portfolios/coreai/projects/coreai_comparch_trtllm/users/chang/TensorRT-LLM # Path to the repo to install TensorRT-LLM, if this is empty, the pre-installed version will be used
+partition="${PARTITION:-batch}"
+account="${ACCOUNT:-coreai_comparch_trtllm}"
+job_name="${JOB_NAME:-deepseekr1}"
+container_image="${CONTAINER_IMAGE:-/path/to/container.sqsh}"
+mounts="${MOUNTS:-/mnt/data:/mnt/data}"  # adjust as needed
+workdir="${WORKDIR:-/path/to/examples/disaggregated/slurm/benchmark}"  # Path to disaggr_torch.slurm
+model_dir="${MODEL_DIR:-/path/to/model}"  # Path to the model checkpoint
+repo_dir="${REPO_DIR:-}" # Optional: repo to install TensorRT-LLM; empty uses pre-installed

Also, please verify that the non-standard sbatch flag --segment is valid for your environment; it is not a standard Slurm option.


🏁 Script executed:

#!/bin/bash
# Display the first 20 lines of the script to inspect assignments
sed -n '1,20p' examples/wide_ep/slurm_scripts/submit_e2e.sh

# Search for where sbatch is called, to check for --segment usage
grep -n "sbatch" examples/wide_ep/slurm_scripts/submit_e2e.sh

# Search for any mention of the --segment flag
grep -n "\--segment" examples/wide_ep/slurm_scripts/submit_e2e.sh

Length of output: 1423


Generalize and Quote Configuration Parameters
Hard-coded Slurm settings and file paths make this script non-portable and expose internal details. Quote assignments to prevent word-splitting and drive them via environment variables with sensible defaults. Also, confirm whether your Slurm installation supports the non-standard --segment flag (used on lines 56 and 94); if it’s unsupported, remove or replace it.

Pinpoint locations:

  • Lines 5–14: unquoted, hard-coded partition, account, job_name, container_image, mounts, workdir, model_dir, repo_dir
  • Lines 56 & 94: use of non-standard --segment=${total_node_num}

Proposed diff:

-partition=batch
-account=coreai_comparch_trtllm
-job_name=deepseekr1
-container_image=/lustre/fs1/portfolios/coreai/users/kevxie/.../container.sqsh
-mounts=/lustre/fs1/portfolios/coreai  # e.g. /mnt/data:/mnt/data
-workdir=/lustre/fs1/portfolios/coreai/.../examples/disaggregated/slurm/benchmark
-model_dir=/lustre/fs1/portfolios/coreai/users/xqiao/DeepSeek-R1-FP4-v2
-repo_dir=/lustre/fs1/portfolios/coreai/.../TensorRT-LLM
+partition="${PARTITION:-batch}"
+account="${ACCOUNT:-coreai_comparch_trtllm}"
+job_name="${JOB_NAME:-deepseekr1}"
+container_image="${CONTAINER_IMAGE:-/path/to/container.sqsh}"
+mounts="${MOUNTS:-/mnt/data:/mnt/data}"            # adjust as needed
+workdir="${WORKDIR:-/path/to/examples/disaggregated/slurm/benchmark}"
+model_dir="${MODEL_DIR:-/path/to/model}"
+repo_dir="${REPO_DIR:-}"                            # empty → use pre-installed

Also, at each sbatch invocation (lines 56 & 94), review:

--segment=${total_node_num}

Ensure --segment is a valid option in your environment or replace it with the correct flag/plugin.

🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 5-5: Use var=$(command) to assign output (or quote to assign string).

(SC2209)

🤖 Prompt for AI Agents
examples/wide_ep/slurm_scripts/submit_e2e.sh lines 5-14: the script uses
unquoted, hard-coded Slurm settings and absolute paths; update each assignment
to use quoted values and environment-driven defaults (e.g.
partition="${PARTITION:-batch}", account="${ACCOUNT:-coreai_comparch_trtllm}",
job_name="${JOB_NAME:-deepseekr1}",
container_image="${CONTAINER_IMAGE:-/path/to/default.sqsh}",
mounts="${MOUNTS:-/lustre/fs1/portfolios/coreai}",
workdir="${WORKDIR:-/path/to/workdir}",
model_dir="${MODEL_DIR:-/path/to/model}",
repo_dir="${REPO_DIR:-/path/to/repo}"), ensuring each variable is quoted when
used to prevent word-splitting; additionally inspect sbatch calls at lines 56
and 94 and remove or replace the non-standard --segment=${total_node_num} with a
supported option (or guard it behind a conditional that checks sbatch --help for
--segment support) so the script is portable.

@Njuapp Njuapp changed the base branch from main to release/1.1.0rc2 September 1, 2025 05:50
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/models/modeling_utils.py (1)

386-396: Avoid hard-coding .cuda() during __init__; use the LM head’s device

Hard-coding .cuda() can break CPU initialization, non-default CUDA devices, and deferred device placement.

-                    x = weight.to(self.lm_head.dtype).cuda()
+                    x = weight.to(
+                        dtype=self.lm_head.weight.dtype,
+                        device=self.lm_head.weight.device,
+                    )
                     self.lm_head.weight.data.copy_(x)
🧹 Nitpick comments (5)
tensorrt_llm/_torch/modules/logits_processor.py (2)

70-76: Avoid shadowed imports; import allgather once at module scope

The second allgather relies on a prior in-block import. Import from ..distributed.ops at module scope for clarity and safety.

-            logits = allgather(logits, self.model_config.mapping, dim=-1)
+            # logits are local-shard vocab; gather to full vocab
+            from ..distributed.ops import allgather  # consider hoisting to module imports
+            logits = allgather(logits, self.model_config.mapping, dim=-1)
             batch_size = logits.shape[0]
             local_batch_size = batch_size // self.model_config.mapping.tp_size
-            logits = logits.view(self.model_config.mapping.tp_size,
+            logits = logits.reshape(self.model_config.mapping.tp_size,
                                 local_batch_size, -1)
             logits = logits[self.model_config.mapping.tp_rank][:token_count]

33-34: Use reshape for robustness

Switch view(...) to reshape(...) to be resilient to non-contiguous hidden_states.

-        token_count = hidden_states.view(-1, hidden_states.shape[-1]).shape[0]
+        token_count = hidden_states.reshape(-1, hidden_states.shape[-1]).shape[0]
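
As a quick self-contained illustration of why reshape is the safer call here (this is standard PyTorch behavior, not code from this PR):

import torch

x = torch.arange(12).reshape(3, 4).t()  # transpose -> non-contiguous
# x.view(-1)          # raises RuntimeError: view is incompatible with the
#                     # tensor's size and stride on non-contiguous input
flat = x.reshape(-1)  # reshape silently falls back to a copy and succeeds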
tensorrt_llm/_torch/speculative/mtp.py (1)

861-867: Remove commented debug prints or guard behind a verbose flag

Keep the hot path clean; these prints clutter diffs and can hurt traceability if re-enabled accidentally.
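
If the prints are worth keeping for debugging, a hedged sketch of the usual alternative (the logger name and rank gating are assumptions, not taken from the codebase):

import logging

logger = logging.getLogger("tensorrt_llm")

def debug_log(mapping, msg: str) -> None:
    # Rank-gated debug logging instead of commented-out prints; tp_rank is
    # assumed from the surrounding Mapping interface.
    if logger.isEnabledFor(logging.DEBUG) and mapping.tp_rank == 0:
        logger.debug(msg)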

tensorrt_llm/_torch/modules/embedding.py (1)

40-42: Gating looks good; confirm downstream expectations

Disabling LM TP under Attention DP unless enable_lm_tp_in_adp is set aligns with the new flag. Please confirm that callers relying on LM TP under ADP explicitly set the flag in Mapping/args, and add a brief docstring note where the flag is defined.
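
For reference, a minimal sketch of the gating rule described above (attribute names taken from the review; the helper itself is hypothetical):

def lm_head_uses_tp(mapping) -> bool:
    """LM TP applies unless Attention DP is on and the new flag is unset."""
    if mapping.enable_attention_dp:
        return getattr(mapping, "enable_lm_tp_in_adp", False)
    return True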

tensorrt_llm/_torch/models/modeling_utils.py (1)

355-385: Replace prints with logger and simplify LMHead gating to a single code path

  • Avoid print in multi-rank runs; use logger.debug (rank-gated if noisy).
  • You can rely on LMHead’s internal ADP gating and always construct it with mapping and TP settings to remove duplicated branches.

Apply:

-        if config.mapping.enable_attention_dp and not getattr(config.mapping, 'enable_lm_tp_in_adp', False):
-            print(f"In DecoderModelForCausalLM, creating LMHead without TP")
-            self.lm_head = LMHead(
-                vocab_size,
-                hidden_size,
-                dtype=config.pretrained_config.torch_dtype,
-            )
-        else:
-            print(f"In DecoderModelForCausalLM, creating LMHead with TP")
-            # TODO(zhenhuanc): Currently lm_head Linear will not accept QuantConfig
-            # will considering per layer QuantConfig in the future.
-            if (hasattr(config, 'lora_config')
-                    and config.lora_config is not None
-                    and len(config.lora_config.lora_dir) == 1):
-                # Only check for custom lm_head in HF LoRA, not NeMo
-                if config.lora_config.lora_ckpt_source == "hf":
-                    lora_loader = HfLoraLoader(config.lora_config.lora_dir)
-                    if lora_loader.lm_head is not None and lora_loader.vocab_size != 0:
-                        weight = lora_loader.lm_head
-                        self.has_custom_lm_head = True
-                        vocab_size = lora_loader.vocab_size
-
-            self.lm_head = LMHead(
-                vocab_size,
-                hidden_size,
-                dtype=config.pretrained_config.torch_dtype,
-                mapping=config.mapping,
-                tensor_parallel_mode=TensorParallelMode.COLUMN,
-                gather_output=True,
-            )
+        # Detect custom LoRA LM head (HF), may override vocab_size.
+        if (hasattr(config, 'lora_config')
+                and config.lora_config is not None
+                and len(config.lora_config.lora_dir) == 1
+                and config.lora_config.lora_ckpt_source == "hf"):
+            lora_loader = HfLoraLoader(config.lora_config.lora_dir)
+            if lora_loader.lm_head is not None and lora_loader.vocab_size != 0:
+                weight = lora_loader.lm_head
+                self.has_custom_lm_head = True
+                vocab_size = lora_loader.vocab_size
+
+        # Always construct with mapping; LMHead internally disables TP for ADP when enable_lm_tp_in_adp=False.
+        self.lm_head = LMHead(
+            vocab_size,
+            hidden_size,
+            dtype=config.pretrained_config.torch_dtype,
+            mapping=config.mapping,
+            tensor_parallel_mode=TensorParallelMode.COLUMN,
+            gather_output=True,
+        )
+        logger.debug(
+            "DecoderModelForCausalLM: LMHead constructed (tp_mode=%s, tp_size=%s, gather_output=%s)",
+            getattr(self.lm_head, "tp_mode", None),
+            getattr(self.lm_head, "tp_size", 1),
+            getattr(self.lm_head, "gather_output", False),
+        )
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 28ab958 and fbc750a.

📒 Files selected for processing (8)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_speculative.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_utils.py (2 hunks)
  • tensorrt_llm/_torch/modules/embedding.py (2 hunks)
  • tensorrt_llm/_torch/modules/logits_processor.py (2 hunks)
  • tensorrt_llm/_torch/speculative/mtp.py (6 hunks)
  • tensorrt_llm/llmapi/llm_args.py (4 hunks)
  • tensorrt_llm/mapping.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tensorrt_llm/_torch/models/modeling_speculative.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • tensorrt_llm/mapping.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Filenames compiled into a target must be case-insensitively unique

Files:

  • tensorrt_llm/_torch/modules/logits_processor.py
  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/models/modeling_utils.py
  • tensorrt_llm/_torch/speculative/mtp.py
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use spaces, not tabs; indent 4 spaces

Files:

  • tensorrt_llm/_torch/modules/logits_processor.py
  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/models/modeling_utils.py
  • tensorrt_llm/_torch/speculative/mtp.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent with 4 spaces; do not use tabs (Python)
Maintain module namespace on import: prefer from package.subpackage import foo; use foo.Symbol()
Python filenames use snake_case
Python class names use PascalCase
Python functions and methods use snake_case
Python local variables use snake_case; if starting with a number concept, prefix with k (e.g., k_99th_percentile)
Python global variables use G_ prefix with UPPER_SNAKE_CASE
Python constants use UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes
Initialize all externally visible class members in __init__
For public interfaces, prefer docstrings over comments; comments should be for in-function or file-local interfaces
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes and variables inline with docstrings immediately after assignment
Avoid reflection when a non-reflective approach suffices
Limit except clauses to specific exceptions where possible
When using try/except for duck-typing, keep try body minimal and move logic to else

Files:

  • tensorrt_llm/_torch/modules/logits_processor.py
  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/models/modeling_utils.py
  • tensorrt_llm/_torch/speculative/mtp.py
**/*.{cpp,cc,cxx,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/modules/logits_processor.py
  • tensorrt_llm/_torch/modules/embedding.py
  • tensorrt_llm/_torch/models/modeling_utils.py
  • tensorrt_llm/_torch/speculative/mtp.py
🧠 Learnings (3)
📚 Learning: 2025-08-08T04:10:19.038Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6728
File: cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp:966-966
Timestamp: 2025-08-08T04:10:19.038Z
Learning: TensorRT plugins currently don't support padding functionality, and TensorRT is not getting new features (in maintenance mode). This means that duplicating parameters like mExpertHiddenSize in function calls, even with TODO comments, can be acceptable as pragmatic solutions within these constraints.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/speculative/mtp.py
🧬 Code graph analysis (3)
tensorrt_llm/_torch/modules/logits_processor.py (3)
tensorrt_llm/_torch/attention_backend/interface.py (1)
  • AttentionMetadata (39-328)
tensorrt_llm/_torch/modules/linear.py (1)
  • Linear (1495-1704)
tensorrt_llm/mapping.py (1)
  • tp_rank (340-341)
tensorrt_llm/_torch/models/modeling_utils.py (2)
tensorrt_llm/_torch/modules/embedding.py (1)
  • LMHead (15-118)
tensorrt_llm/_torch/modules/logits_processor.py (1)
  • LogitsProcessor (10-78)
tensorrt_llm/_torch/speculative/mtp.py (3)
tensorrt_llm/_torch/distributed/ops.py (1)
  • allgather (141-225)
cpp/tensorrt_llm/thop/allgatherOp.cpp (2)
  • allgather (122-137)
  • allgather (122-122)
tensorrt_llm/mapping.py (1)
  • tp_rank (340-341)

self.model.keep_embed_tokens = True

-self.logits_processor = LogitsProcessor()
+self.logits_processor = LogitsProcessor(config)

💡 Verification agent

🧩 Analysis chain

Confirm all LogitsProcessor call sites use the new constructor signature

Constructor now expects config. Ensure no remaining LogitsProcessor() usages elsewhere.

Run:


🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Likely old usage (no args)
rg -nP --type=py 'LogitsProcessor\s*\(\s*\)' -C2 || true
# All instantiations to eyeball
rg -nP --type=py 'LogitsProcessor\s*\(' -C2
# Forward call sites (should be unaffected)
rg -nP --type=py '\.logits_processor\.forward\s*\(' -C2

Length of output: 8162


Update the LogitsProcessor instantiation in modeling_mllama.py
tensorrt_llm/_torch/models/modeling_mllama.py:297: change

self.logits_processor = LogitsProcessor()

to

self.logits_processor = LogitsProcessor(config)
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/models/modeling_mllama.py around line 297, the
LogitsProcessor is instantiated without the model config; replace the no-arg
call with one that passes the config (i.e., use LogitsProcessor(config)) so the
processor is initialized with the model configuration.

Comment on lines 36 to 53
        if (self.model_config.mapping.enable_attention_dp and getattr(
                self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            # ADP + LM TP mode: perform All-Gather before LM_head
            from ..distributed import allgather
            all_rank_max_num_tokens = attn_metadata.all_rank_max_num_tokens
            pad_len = all_rank_max_num_tokens - token_count
            if pad_len > 0:
                padded_hidden_states = F.pad(hidden_states.view(
                    -1, hidden_states.shape[-1]), (0, 0, 0, pad_len),
                                             mode="constant",
                                             value=0)
            else:
                padded_hidden_states = hidden_states.view(
                    -1, hidden_states.shape[-1])
            hidden_states = allgather(padded_hidden_states,
                                      self.model_config.mapping,
                                      dim=0)


🛠️ Refactor suggestion

Pre‑LM gather: guard on metadata and prefer reshape for safety

  • Add a fast-fail to ensure attn_metadata is present when entering ADP+LM TP.
  • Prefer reshape over view to avoid surprises on non-contiguous inputs.
-        if (self.model_config.mapping.enable_attention_dp and getattr(
-                self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
+        if (self.model_config.mapping.enable_attention_dp and getattr(
+                self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
+            assert attn_metadata is not None, "attn_metadata is required for ADP + LM TP path"
             # ADP + LM TP mode: perform All-Gather before LM_head
-            from ..distributed import allgather
+            from ..distributed.ops import allgather
             all_rank_max_num_tokens = attn_metadata.all_rank_max_num_tokens
             pad_len = all_rank_max_num_tokens - token_count
             if pad_len > 0:
-                padded_hidden_states = F.pad(hidden_states.view(
-                    -1, hidden_states.shape[-1]), (0, 0, 0, pad_len),
+                padded_hidden_states = F.pad(hidden_states.reshape(
+                    -1, hidden_states.shape[-1]), (0, 0, 0, pad_len),
                                              mode="constant",
                                              value=0)
             else:
-                padded_hidden_states = hidden_states.view(
+                padded_hidden_states = hidden_states.reshape(
                     -1, hidden_states.shape[-1])
             hidden_states = allgather(padded_hidden_states,
                                       self.model_config.mapping,
                                       dim=0)
📝 Committable suggestion


Suggested change (the hunk above with the guard and reshape applied):

        if (self.model_config.mapping.enable_attention_dp and getattr(
                self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            assert attn_metadata is not None, "attn_metadata is required for ADP + LM TP path"
            # ADP + LM TP mode: perform All-Gather before LM_head
            from ..distributed.ops import allgather
            all_rank_max_num_tokens = attn_metadata.all_rank_max_num_tokens
            pad_len = all_rank_max_num_tokens - token_count
            if pad_len > 0:
                padded_hidden_states = F.pad(hidden_states.reshape(
                    -1, hidden_states.shape[-1]), (0, 0, 0, pad_len),
                                             mode="constant",
                                             value=0)
            else:
                padded_hidden_states = hidden_states.reshape(
                    -1, hidden_states.shape[-1])
            hidden_states = allgather(padded_hidden_states,
                                      self.model_config.mapping,
                                      dim=0)
🤖 Prompt for AI Agents
tensorrt_llm/_torch/modules/logits_processor.py around lines 36 to 53: when
entering the ADP + LM TP path, add a guard that ensures attn_metadata is not
None and raise a clear RuntimeError if missing, then replace uses of
hidden_states.view(...) with hidden_states.reshape(...) for safety on
non‑contiguous tensors; keep the existing padding logic and allgather call
unchanged.
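
To make the padding arithmetic concrete, here is a hedged single-process sketch of the pad, gather, crop pattern; torch.cat stands in for the real distributed allgather, and the function name is hypothetical:

import torch
import torch.nn.functional as F

def pad_gather_crop(per_rank_hidden, tp_rank):
    # per_rank_hidden: one [num_tokens_i, hidden] tensor per rank
    max_tokens = max(t.shape[0] for t in per_rank_hidden)
    padded = [F.pad(t, (0, 0, 0, max_tokens - t.shape[0]))
              for t in per_rank_hidden]
    gathered = torch.cat(padded, dim=0)  # shape allgather(..., dim=0) returns
    # Each rank later crops back to its own (unpadded) token count:
    own = gathered.view(len(per_rank_hidden), max_tokens, -1)[tp_rank]
    return own[:per_rank_hidden[tp_rank].shape[0]]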

Comment on lines 54 to 63
        # Temporarily disable gather_output when not in ADP mode or (in ADP mode and LM TP is enabled)
        if (not self.model_config.mapping.enable_attention_dp) or (
                self.model_config.mapping.enable_attention_dp and getattr(
                    self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            lm_head.gather_output = False
        logits = lm_head(hidden_states)
        if (not self.model_config.mapping.enable_attention_dp) or (
                self.model_config.mapping.enable_attention_dp and getattr(
                    self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
            lm_head.gather_output = True

⚠️ Potential issue

Incorrectly disabling lm_head.gather_output for non-ADP returns sharded logits

This turns off the LM head’s column-TP gather even when Attention DP is disabled, so callers receive per-shard vocab logits. Also, the code unconditionally sets gather_output back to True, potentially changing the original configuration. Preserve and restore the original value, and only disable gather_output in ADP+LM TP mode.

-        # Temporarily disable gather_output when not in ADP mode or (in ADP mode and LM TP is enabled)
-        if (not self.model_config.mapping.enable_attention_dp) or (
-                self.model_config.mapping.enable_attention_dp and getattr(
-                    self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            lm_head.gather_output = False
-        logits = lm_head(hidden_states)
-        if (not self.model_config.mapping.enable_attention_dp) or (
-                self.model_config.mapping.enable_attention_dp and getattr(
-                    self.model_config.mapping, 'enable_lm_tp_in_adp', False)):
-            lm_head.gather_output = True
+        # Only disable gather_output in ADP + LM TP; otherwise leave behavior unchanged
+        orig_gather_output = lm_head.gather_output
+        adp_lm_tp = (
+            self.model_config.mapping.enable_attention_dp and
+            getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)
+        )
+        if adp_lm_tp:
+            lm_head.gather_output = False
+        logits = lm_head(hidden_states)
+        lm_head.gather_output = orig_gather_output
📝 Committable suggestion


Suggested change (preserve and restore the original flag):

        # Only disable gather_output in ADP + LM TP; otherwise leave behavior unchanged
        orig_gather_output = lm_head.gather_output
        adp_lm_tp = (
            self.model_config.mapping.enable_attention_dp and
            getattr(self.model_config.mapping, 'enable_lm_tp_in_adp', False)
        )
        if adp_lm_tp:
            lm_head.gather_output = False
        logits = lm_head(hidden_states)
        lm_head.gather_output = orig_gather_output
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/modules/logits_processor.py around lines 54 to 63, the
code currently disables lm_head.gather_output for non-ADP cases and then
unconditionally sets it back to True, which yields sharded per-shard logits and
may clobber the original setting; change the logic to only disable gather_output
when both enable_attention_dp is True and enable_lm_tp_in_adp is truthy, store
the original lm_head.gather_output value before modifying it, call
lm_head(hidden_states), and then restore the original value (rather than always
setting True) so callers get correct full-vocab logits and the original
configuration is preserved.
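
An alternative shape for the same fix, sketched as a generic helper (the context manager is hypothetical, not part of the codebase): restoring the flag in a finally block keeps it correct even if the forward pass raises.

from contextlib import contextmanager

@contextmanager
def temporary_attr(obj, name, value):
    # Flip an attribute for the duration of the block, restoring it on exit
    # even when an exception propagates out of the body.
    original = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, original)

# usage sketch:
# with temporary_attr(lm_head, "gather_output", False):
#     logits = lm_head(hidden_states)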

Comment on lines 492 to 494
            new_draft_token = self.draft_sampler(logits)
            new_draft_token = new_draft_token[:token_count]
            next_draft_tokens.append(new_draft_token)

⚠️ Potential issue

Signature mismatch: draft_sampler now requires iter but call omits it

This will raise a TypeError at runtime. Pass the loop index and update enumerate accordingly.

-        for _, mtp_layer in enumerate(draft_model.mtp_layers):
+        for i, mtp_layer in enumerate(draft_model.mtp_layers):
...
-            new_draft_token = self.draft_sampler(logits)
+            new_draft_token = self.draft_sampler(logits, i)
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/speculative/mtp.py around lines 492-494, the call to
self.draft_sampler(logits) omits the required iter argument; update the
surrounding loop to use enumerate (e.g., for i, ... in enumerate(...)) and pass
the loop index into draft_sampler (self.draft_sampler(logits, i)), then keep the
rest of the logic (slicing to token_count and appending) unchanged.

@Njuapp Njuapp changed the base branch from release/1.1.0rc2 to main September 8, 2025 14:32
@Njuapp Njuapp requested review from a team as code owners September 8, 2025 14:32
@Njuapp Njuapp changed the base branch from main to release/1.1.0rc2 September 9, 2025 03:50
kaiyux added a commit that referenced this pull request Sep 17, 2025
…herry-pick #7128) (#7571)

Signed-off-by: Cheng Hang <chang@nvidia.com>
Co-authored-by: Cheng Hang <chang@nvidia.com>
@kaiyux
Copy link
Member

kaiyux commented Sep 17, 2025

Included in #7571

@kaiyux kaiyux closed this Sep 17, 2025
Wong4j pushed a commit to Wong4j/TensorRT-LLM that referenced this pull request Sep 20, 2025
…herry-pick NVIDIA#7128) (NVIDIA#7571)

Signed-off-by: Cheng Hang <chang@nvidia.com>
Co-authored-by: Cheng Hang <chang@nvidia.com>
MrGeva pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Sep 21, 2025
…herry-pick NVIDIA#7128) (NVIDIA#7571)

Signed-off-by: Cheng Hang <chang@nvidia.com>
Co-authored-by: Cheng Hang <chang@nvidia.com>