[None][chore] Update disagg benchmark configs by qiaoxj07 · Pull Request #8289 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@qiaoxj07
Collaborator

@qiaoxj07 qiaoxj07 commented Oct 12, 2025

Summary by CodeRabbit

  • New Features

    • Added configuration options: enable_lm_head_tp_in_adp (linked to attention DP) and use_low_precision_moe_combine. Simplified attention DP flag handling.
    • Introduced role-specific profiling: distinct nsys capture windows for GEN and CTX and clearer output messages. Profiling filenames now include the role and instance identifiers.
    • Added environment variables to disable GC for server and worker processes.
  • Refactor

    • Streamlined profiling command construction and path handling for more consistent behavior across roles.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
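
For example, an illustrative invocation that runs only specific test stages without fail-fast (the stage name is a placeholder taken from the examples above) would be:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast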

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without due care can break the top of tree.
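
For example (the reason string is a placeholder):

/bot skip --comment "Docs-only change; CI does not exercise these files"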

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since skipping validation without due care can break the top of tree.

Signed-off-by: Xianjie <5410381+qiaoxj07@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai bot commented Oct 12, 2025

📝 Walkthrough

Walkthrough

Updates worker config generation to simplify a flag, add an LM head TP toggle tied to attention DP, and enable low-precision MoE combine. Modifies the Slurm worker start script to set GC-disabling env vars and revise the nsys profiling setup, including role-specific start/stop windows and filename patterns.

Changes

Config generation updates — examples/disaggregated/slurm/benchmark/gen_worker_config.py
In gen_config_file: replaces the explicit boolean expression with direct gen_enable_attention_dp; adds enable_lm_head_tp_in_adp set to gen_enable_attention_dp. In moe_config, adds use_low_precision_moe_combine: True. No signature changes.

Worker start and profiling adjustments — examples/disaggregated/slurm/benchmark/start_worker.sh
Exports TRTLLM_SERVER_DISABLE_GC=1 and TRTLLM_WORKER_DISABLE_GC=1. Overhauls nsys handling when enable_nsys is true: includes the role in the output filename, builds nsys_prefix unconditionally, and sets role-specific TLLM_PROFILE_START_STOP (GEN: 200-250, CTX: 10-30) with corresponding log messages. Invocation flow retained with updated variables.
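
To make the config change concrete, the relevant fragment built in gen_config_file plausibly ends up like the sketch below (the key names and values come from the summary above; the surrounding dict structure is an assumption, not the file's actual layout):

# Hypothetical excerpt of the generation-server config assembled in gen_config_file.
gen_config = {
    # Simplified: gen_enable_attention_dp is already a boolean, so no ternary is needed.
    'enable_attention_dp': gen_enable_attention_dp,
    # New flag, tied to the attention DP setting.
    'enable_lm_head_tp_in_adp': gen_enable_attention_dp,
    'moe_config': {
        # New flag, unconditionally enabled.
        'use_low_precision_moe_combine': True,
    },
}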

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant SLURM as SLURM Job Env
  participant Script as start_worker.sh
  participant NSYS as nsys
  participant LLM as trtllm-llmapi-launch

  SLURM->>Script: Launch with env (role, instance_id, SLURM_PROCID, enable_nsys)
  Script->>Script: export TRTLLM_SERVER_DISABLE_GC=1, TRTLLM_WORKER_DISABLE_GC=1
  alt enable_nsys = true
    Script->>NSYS: Construct nsys file prefix with role and IDs
    alt role == GEN
      Script->>NSYS: Set TLLM_PROFILE_START_STOP=200-250
      Script->>Script: nsys_prefix = profiling command
    else role == CTX
      Script->>NSYS: Set TLLM_PROFILE_START_STOP=10-30
      Script->>Script: nsys_prefix = profiling command
    end
  else enable_nsys = false
    Script->>Script: nsys_prefix empty
  end
  Script->>LLM: Execute with nsys_prefix (if set)

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Description Check — ⚠️ Warning
Explanation: The PR description consists solely of the untouched template with placeholder comments and lacks any actual content in the Description, Test Coverage, or other required sections, so it does not meet the repository’s description template requirements.
Resolution: Please replace the template placeholders with concrete content by adding a summary of the implemented changes and their rationale under “Description,” specifying relevant tests under “Test Coverage,” and ensuring the PR Checklist is reviewed and updated as appropriate.
✅ Passed checks (2 passed)
Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
Title Check — ✅ Passed: The title concisely reflects the primary change of updating the disaggregated benchmark configuration files without unrelated noise and is clear enough for a teammate to understand the purpose of the changeset.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
examples/disaggregated/slurm/benchmark/gen_worker_config.py (2)

1-6: Add the required NVIDIA Apache-2.0 copyright header.

The file is missing the required copyright header at the top.

As per coding guidelines.

Add the copyright header at the top of the file:

+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import argparse
 import os

25-49: Update the docstring to match the function signature.

The docstring documents parameters that don't exist in the function signature (config_path, model_path, num_ctx_servers, num_gen_servers, worker_start_port, server_port) and is missing documentation for the actual first parameter work_dir.

Update the docstring to accurately reflect the function signature:

     """
     Generate configuration YAML file for disaggregated inference.
 
     Args:
-        config_path: Path to save the config file
-        model_path: Path to the model
-        num_ctx_servers: Number of context servers
+        work_dir: Directory to save the config files
         ctx_tp_size: Tensor parallel size for context servers
         ctx_pp_size: Pipeline parallel size for context servers
         ctx_batch_size: Batch size for context servers
         ctx_max_num_tokens: Max number of tokens for context servers
         ctx_max_seq_len: Max sequence length for context servers
         ctx_free_gpu_memory_fraction: Free GPU memory fraction for context servers
         ctx_enable_attention_dp: Enable attention DP for context servers
-        num_gen_servers: Number of generation servers
         gen_tp_size: Tensor parallel size for generation servers
         gen_pp_size: Pipeline parallel size for generation servers
         gen_batch_size: Batch size for generation servers
         gen_max_num_tokens: Max number of tokens for generation servers
+        gen_max_seq_len: Max sequence length for generation servers
         gen_enable_attention_dp: Enable attention DP for generation servers
         gen_gpu_memory_fraction: GPU memory fraction for generation servers
         eplb_num_slots: Number of slots for eplb
-        worker_start_port: Start port for workers
-        server_port: Server port
+        mtp_size: Number of nextn layers for MTP
+        cache_transceiver_max_num_tokens: Max number of tokens for cache transceiver
     """
🧹 Nitpick comments (2)
examples/disaggregated/slurm/benchmark/start_worker.sh (1)

67-77: Use an array for the nsys wrapper command

Building the whole nsys profile … invocation in a single quoted string forces you to escape inner quotes, triggers SC2089, and makes the command fragile if you later add arguments containing spaces. Defining the wrapper as an array keeps each flag/argument isolated and avoids the literal backslash issue ShellCheck warns about.

-    nsys_prefix=""
-    nsys_file=${work_dir}/nsys_worker_proc_${role}_${instance_id}_${SLURM_PROCID}
+    nsys_file=${work_dir}/nsys_worker_proc_${role}_${instance_id}_${SLURM_PROCID}
     export TLLM_PROFILE_RECORD_GC=1
     export TLLM_NVTX_DEBUG=1
-    nsys_prefix="nsys profile -e \"NSYS_MPI_STORE_TEAMS_PER_RANK=1\" -o ${nsys_file} -f true -t cuda,nvtx,python-gil -c cudaProfilerApi --cuda-graph-trace node --capture-range-end=stop --gpu-metrics-devices=none"
+    nsys_prefix=(
+        nsys profile
+        -e "NSYS_MPI_STORE_TEAMS_PER_RANK=1"
+        -o "${nsys_file}"
+        -f true
+        -t cuda,nvtx,python-gil
+        -c cudaProfilerApi
+        --cuda-graph-trace node
+        --capture-range-end=stop
+        --gpu-metrics-devices=none
+    )
     if [ "${role}" = "GEN" ]; then
         export TLLM_PROFILE_START_STOP=200-250
         echo "nsys is enabled on gen_gpus"
     elif [ "${role}" = "CTX" ]; then
         export TLLM_PROFILE_START_STOP=10-30
         echo "nsys is enabled on ctx_gpus"
     fi
-    ${nsys_prefix} trtllm-llmapi-launch ${numa_bind_cmd} \
+    "${nsys_prefix[@]}" trtllm-llmapi-launch ${numa_bind_cmd} \
examples/disaggregated/slurm/benchmark/gen_worker_config.py (1)

61-61: Unify boolean assignment pattern with line 95.

Line 61 uses the verbose True if ctx_enable_attention_dp else False pattern, while line 95 was simplified to just use gen_enable_attention_dp directly. Since both parameters are already booleans, apply the same simplification here for consistency.

Apply this diff to simplify the boolean assignment:

-        'enable_attention_dp': True if ctx_enable_attention_dp else False,
+        'enable_attention_dp': ctx_enable_attention_dp,
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5798a12 and 1a2accf.

📒 Files selected for processing (2)
  • examples/disaggregated/slurm/benchmark/gen_worker_config.py (2 hunks)
  • examples/disaggregated/slurm/benchmark/start_worker.sh (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • examples/disaggregated/slurm/benchmark/gen_worker_config.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • examples/disaggregated/slurm/benchmark/gen_worker_config.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • examples/disaggregated/slurm/benchmark/gen_worker_config.py
🪛 Shellcheck (0.11.0)
examples/disaggregated/slurm/benchmark/start_worker.sh

[warning] 70-70: Quotes/backslashes will be treated literally. Use an array.

(SC2089)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
examples/disaggregated/slurm/benchmark/gen_worker_config.py (3)

95-95: Good simplification!

Removing the redundant ternary operator improves readability since gen_enable_attention_dp is already a boolean.


96-96: Verify that LM head TP should always match attention DP setting.

The new flag enable_lm_head_tp_in_adp is unconditionally tied to gen_enable_attention_dp. Please confirm this coupling is correct for all use cases, or if this flag should be independently configurable.

Consider whether this should be a separate command-line argument:

parser.add_argument("--gen_enable_lm_head_tp_in_adp",
                    dest='gen_enable_lm_head_tp_in_adp',
                    action='store_true',
                    help="Enable LM head TP in attention DP for generation servers")

And update the function signature and config generation accordingly.
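
For example, the corresponding config line would then read (hypothetical, mirroring the existing pattern):

'enable_lm_head_tp_in_adp': gen_enable_lm_head_tp_in_adp,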


113-113: Verify that low-precision MoE combine should always be enabled.

The new flag use_low_precision_moe_combine is unconditionally set to True. Please confirm this is the desired behavior for all benchmark configurations, or if this should be made configurable via a command-line argument.
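
If configurability is desired here as well, a parallel flag (hypothetical, following the same argparse pattern as above) could be:

parser.add_argument("--use_low_precision_moe_combine",
                    dest='use_low_precision_moe_combine',
                    action='store_true',
                    help="Enable low-precision MoE combine")

with moe_config then reading the parsed value instead of a hard-coded True.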

@qiaoxj07 qiaoxj07 changed the title update disagg benchmark configs [None][chore] Update disagg benchmark configs Oct 13, 2025
@qiaoxj07 qiaoxj07 changed the title [None][chore] Update disagg benchmark configs [None][feat] Update disagg benchmark configs Oct 13, 2025
Signed-off-by: Xianjie Qiao <5410381+qiaoxj07@users.noreply.github.com>
@qiaoxj07
Collaborator Author

/bot skip --comment "slurm scripts are not tested in the pipeline"

@tensorrt-cicd
Collaborator

PR_Github #21195 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21195 [ skip ] completed with state SUCCESS
Skipping testing for commit 1c295e4

@qiaoxj07 qiaoxj07 changed the title [None][feat] Update disagg benchmark configs [None][feat]Update disagg benchmark configs Oct 13, 2025
@qiaoxj07 qiaoxj07 changed the title [None][feat]Update disagg benchmark configs [None][chore] Update disagg benchmark configs Oct 13, 2025
@qiaoxj07 qiaoxj07 merged commit d145e87 into NVIDIA:main Oct 13, 2025
7 of 11 checks passed
