[None][test] add deepseek r1/v3 model with chunked prefill cases by ruodil · Pull Request #7124 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@ruodil
Collaborator

@ruodil ruodil commented Aug 21, 2025

Summary by CodeRabbit

  • New Features

    • Enable automatic attention data-parallelism for DeepSeek R1 models and add presets to enable chunked prefill for DeepSeek R1 and V3 Lite variants.
  • Tests

    • Expanded performance test suites with chunked-prefill scenarios across 1–8 GPUs, adding FP8 and NVFP4/NVFP4-like variants with varied batch/sequence sizes and timeouts; test entries marked for chunked prefill.
  • Refactor

    • Removed a duplicate DeepSeek R1 pattern entry to streamline configuration.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
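
For example, to run only the PyTorch backend test stages on H100 GPUs with fail-fast disabled, a developer could comment:

/bot run --disable-fail-fast --gpu-type "H100_PCIe" --test-backend "pytorch"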

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break top of tree.

@ruodil ruodil self-assigned this Aug 21, 2025
@coderabbitai
Contributor

coderabbitai bot commented Aug 21, 2025

📝 Walkthrough

Adds pattern-based model config entries to enable attention data parallelism and chunked prefill for Deepseek variants, removes a duplicate pattern, and appends chunked-prefill performance tests to the cluster and full QA test lists. No public function signatures were changed.

Changes

  • Perf model config patterns (tests/integration/defs/perf/pytorch_model_config.py): Add a pattern_config entry for deepseek_r1 that sets enable_attention_dp = True; add a "Deepseek R1 model with chunked prefill" pattern group that sets enable_attention_dp = True and enable_chunked_prefill = True for multiple Deepseek FP8/FP4 patterns; remove a duplicate deepseek_r1 mapping.
  • QA perf cluster tests, chunked prefill (tests/integration/test_lists/qa/llm_perf_cluster.yml): Append chunked-prefill bench-pytorch float4/maxbs test entries for Deepseek variants (kv_frac:0.85, specified input_output_len/maxnt/reqs) across the 1-, 4-, and 8-GPU sections, with appropriate EP/TP/GPU settings and some TIMEOUT(120) markers.
  • QA perf full tests, chunked prefill FP8 (tests/integration/test_lists/qa/llm_perf_full.yml): Add two FP8 Deepseek R1 chunked-prefill perf test entries (512- and 256-maxbs) with TIMEOUT(120) in the FP8 sections; existing tests unchanged.
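
To make the pattern mechanism concrete, here is a minimal sketch of what a pattern-based override entry could look like. The list-of-dicts layout and the key names patterns/config are assumptions for illustration, not the exact structure in pytorch_model_config.py; only the flag names and the deepseek_v3_lite_nvfp4 label come from this PR.

# Sketch of pattern-based config overrides (assumed structure, not the
# actual code in pytorch_model_config.py).
pattern_config = [
    # Broad rule: any perf label containing 'deepseek_r1' enables attention DP.
    {'patterns': ['deepseek_r1'],
     'config': {'enable_attention_dp': True}},
    # Chunked-prefill group: matching labels get both flags.
    {'patterns': ['deepseek_v3_lite_nvfp4-bench-pytorch-float4'],
     'config': {'enable_attention_dp': True,
                'enable_chunked_prefill': True}},
]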

Sequence Diagram(s)

sequenceDiagram
  participant Runner as Perf Runner
  participant Config as get_model_yaml_config
  participant Patterns as pattern_config

  Runner->>Config: get_model_yaml_config(model_label)
  Config->>Patterns: iterate pattern_config entries
  alt model_label contains "deepseek_r1"
    Note right of Config #DDEBF7: set enable_attention_dp = True
  end
  alt model_label matches chunked prefill group
    Note right of Config #F6F8E9: set enable_attention_dp = True<br/>set enable_chunked_prefill = True
  end
  Config-->>Runner: return base_config with flags applied
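The diagrammed flow can be summarized in a few lines of Python. This is an illustrative sketch reusing the assumed pattern_config structure above; the real implementation lives in tests/integration/defs/perf/pytorch_model_config.py.

def get_model_yaml_config(model_label: str, base_config: dict = None) -> dict:
    """Apply substring-matched overrides to a base config (sketch only)."""
    config = dict(base_config or {})
    for entry in pattern_config:
        # The broad 'deepseek_r1' rule matches first, then the more specific
        # chunked-prefill group layers its flags on top of the config.
        if any(pattern in model_label for pattern in entry['patterns']):
            config.update(entry['config'])
    return config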

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • kaiyux
  • StanleySun639
  • LarryXFly
  • zbpatel
  • yilin-void


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/perf/pytorch_model_config.py (1)

32-34: Python 3.8 compatibility: avoid PEP 585 built-in generics (list[str])

Repo targets Python 3.8+, but list[str] requires Python 3.9+. Use typing.List/Optional for compatibility.

@@
-def get_model_yaml_config(model_label: str,
-                          lora_dirs: list[str] = None) -> dict:
+from typing import List, Optional
+
+def get_model_yaml_config(model_label: str,
+                          lora_dirs: Optional[List[str]] = None) -> dict:
🧹 Nitpick comments (1)
tests/integration/test_lists/qa/llm_perf_cluster.yml (1)

20-22: Optional: Align 1-GPU chunked prefill with existing timeouts

I checked and confirmed that this heavy 1-GPU chunked prefill test currently has no timeout, while similar multi-GPU cases do use TIMEOUT(120). Adding the same timeout here can help avoid flakiness on slower nodes without impacting fast runners.

• Location:

  • tests/integration/test_lists/qa/llm_perf_cluster.yml line 21

• Proposed change:

-  - perf/test_perf.py::test_perf[deepseek_v3_lite_nvfp4-bench-pytorch-float4-maxbs:512-maxnt:2048-kv_frac:0.85-input_output_len:5000,500-reqs:200]
+  - perf/test_perf.py::test_perf[deepseek_v3_lite_nvfp4-bench-pytorch-float4-maxbs:512-maxnt:2048-kv_frac:0.85-input_output_len:5000,500-reqs:200] TIMEOUT(120)

This mirrors the multi-GPU patterns and should be safe to apply.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 90bfc8c and 142aa93.

📒 Files selected for processing (3)
  • tests/integration/defs/perf/pytorch_model_config.py (2 hunks)
  • tests/integration/test_lists/qa/llm_perf_cluster.yml (3 hunks)
  • tests/integration/test_lists/qa/llm_perf_full.yml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/perf/pytorch_model_config.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/perf/pytorch_model_config.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tests/integration/defs/perf/pytorch_model_config.py (1)

56-62: LGTM: default enable_attention_dp for all DeepSeek R1 labels

The broad substring pattern 'deepseek_r1' is appropriate and ensures attention DP is enabled across R1 variants before more specific rules apply.
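
A quick illustration of that ordering, using assumed example labels in the perf-test naming scheme:

# Both R1 variant labels match the broad substring rule, so attention DP
# is enabled before any narrower chunked-prefill rule is considered.
labels = ['deepseek_r1_fp8-bench-pytorch-float8-maxbs:512',
          'deepseek_r1_nvfp4-bench-pytorch-float4-maxbs:256']
assert all('deepseek_r1' in label for label in labels)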

tests/integration/test_lists/qa/llm_perf_cluster.yml (2)

86-88: LGTM: chunked prefill nvfp4 patterns for 4-GPU block align with config and include TIMEOUT

Names include kv_frac:0.85 and should match the chunked prefill pattern rules after the config change. Timeout is consistent with similar heavy cases.


146-148: LGTM: chunked prefill nvfp4 patterns for 8-GPU block align with config and include TIMEOUT

These should correctly trigger enable_chunked_prefill and are scoped with ep/tp/gpus=8. Timeout looks appropriate.

@ruodil ruodil changed the title [None][test]add deepseek r1/v3 model with chunked prefill cases [None][test] add deepseek r1/v3 model with chunked prefill cases Aug 22, 2025
@ruodil ruodil force-pushed the user/ruodil/new_feature branch from 142aa93 to 8d20e84 Compare August 22, 2025 02:50
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/integration/test_lists/qa/llm_perf_cluster.yml (1)

20-21: Add missing 1-GPU 256-maxbs chunked prefill case for deepseek_v3_lite_nvfp4

We currently have only the 512-maxbs chunked-prefill entry in tests/integration/test_lists/qa/llm_perf_cluster.yml (around lines 20–21) but no matching 256-maxbs case. Chunked prefill is confirmed enabled for deepseek_v3_lite_nvfp4 in tests/integration/defs/perf/pytorch_model_config.py, so this test will be exercised as intended.

• tests/integration/test_lists/qa/llm_perf_cluster.yml (lines 20–21): insert the 256-maxbs entry directly below the existing 512-maxbs case
• tests/integration/defs/perf/pytorch_model_config.py: no changes needed—enable_chunked_prefill: True is already set for deepseek_v3_lite_nvfp4

Apply this diff:

   # for chunked prefill cases
   - perf/test_perf.py::test_perf[deepseek_v3_lite_nvfp4-bench-pytorch-float4-maxbs:512-maxnt:2048-kv_frac:0.85-input_output_len:5000,500-reqs:200]
+  - perf/test_perf.py::test_perf[deepseek_v3_lite_nvfp4-bench-pytorch-float4-maxbs:256-maxnt:1024-kv_frac:0.85-input_output_len:2000,2000-reqs:200]
📜 Review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 142aa93 and f3cf3fa.

📒 Files selected for processing (3)
  • tests/integration/defs/perf/pytorch_model_config.py (2 hunks)
  • tests/integration/test_lists/qa/llm_perf_cluster.yml (3 hunks)
  • tests/integration/test_lists/qa/llm_perf_full.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/integration/defs/perf/pytorch_model_config.py
  • tests/integration/test_lists/qa/llm_perf_full.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tests/integration/test_lists/qa/llm_perf_cluster.yml (1)

86-88: Confirmed: 4-GPU chunked-prefill tests wire up maxnt and enable_chunked_prefill correctly

I verified that tests/integration/defs/perf/pytorch_model_config.py includes

  'enable_chunked_prefill': True,

and that the performance harness propagates max_num_tokens (parsed from the maxnt: label) into the LLM invocation. In the runtime (tensorrt_llm/llmapi/llm.py), when enable_chunked_prefill=True the early-rejection checks are bypassed and enable_chunked_context is set, triggering chunked prefill rather than an error. No test-side changes are needed.
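
To make that wiring concrete, a minimal sketch of the resulting LLM invocation follows. The keyword names mirror the reviewer's description above; treat the exact signature as an assumption and consult tensorrt_llm/llmapi/llm.py for the authoritative arguments.

# Sketch: how the harness-side flags plausibly reach the LLM API.
from tensorrt_llm import LLM

llm = LLM(
    model='deepseek-ai/DeepSeek-V3',  # placeholder checkpoint for illustration
    enable_chunked_prefill=True,      # bypasses the early-rejection checks
    max_num_tokens=2048,              # parsed from the maxnt:2048 test label
)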

@kaiyux kaiyux requested a review from jmydurant September 10, 2025 06:56
@kaiyux
Member

kaiyux commented Sep 10, 2025

@jmydurant please kindly review as well

Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
@ruodil ruodil enabled auto-merge (squash) September 16, 2025 03:14
@ruodil
Collaborator Author

ruodil commented Sep 17, 2025

/bot reuse-pipeline

@ruodil
Collaborator Author

ruodil commented Sep 17, 2025

/bot skip --comment "skip test as just adding cases"

@tensorrt-cicd
Collaborator

PR_Github #18917 [ reuse-pipeline ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #18917 [ reuse-pipeline ] completed with state SUCCESS
Can't reuse PR_Github #0 with status: UNKNOWN

@tensorrt-cicd
Collaborator

PR_Github #18919 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #18919 [ skip ] completed with state FAILURE

@ruodil
Collaborator Author

ruodil commented Sep 19, 2025

/bot skip --comment "skip test as just adding cases"

@tensorrt-cicd
Collaborator

PR_Github #19248 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19248 [ skip ] completed with state SUCCESS
Skipping testing for commit 956a4f2

@ruodil ruodil merged commit c545310 into NVIDIA:main Sep 19, 2025
5 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 19, 2025
…DIA#7124)

Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
Wong4j pushed a commit to Wong4j/TensorRT-LLM that referenced this pull request Sep 20, 2025
…DIA#7124)

Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
MrGeva pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Sep 21, 2025
…DIA#7124)

Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
@ruodil ruodil requested a review from tijyojwad October 22, 2025 08:40