[blog] Scaling Expert Parallelism in TensorRT LLM (Part 3: Pushing the Performance Boundary) by kaiyux · Pull Request #8323 · NVIDIA/TensorRT-LLM

Conversation

@kaiyux
Member

@kaiyux kaiyux commented Oct 13, 2025

Scaling Expert Parallelism in TensorRT LLM (Part 3: Pushing the Performance Boundary)

Summary by CodeRabbit

  • Documentation
    • Added a new blog article: “Scaling Expert Parallelism in TensorRT-LLM (Part 3: Pushing the Performance Boundary).”
    • Covers lower-precision optimizations (FP4 GEMM, low-precision AlltoAll, FP8 FMHA/KV cache).
    • Details network structure updates (tensor-parallel LM head, Q/K/V concat optimization).
    • Describes kernel overlap and fusion techniques (PDL, fused AlltoAll, fused reductions, torch.compile guidance).
    • Includes end-to-end performance highlights, updated visuals, acknowledgements, and references.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
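
For example (an illustrative invocation assembled from the documented flags above, not part of the bot's own help text), running only a single test stage with fail-fast disabled would look like:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast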

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of the tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of the tree.

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

add alltoall optimization part

Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Minor updates

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

doc: add FP8 context FMHA support part.

Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>

Add lowprecision all2all and fuse shared expert into local reduction

Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

Add MTP LM head tensor parallelism

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Polish

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Add images

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

AI polishment

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Update

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
@kaiyux kaiyux requested a review from a team as a code owner October 13, 2025 13:04
@kaiyux kaiyux requested review from QiJune and nv-guomingz October 13, 2025 13:04
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
@kaiyux
Member Author

kaiyux commented Oct 13, 2025

/bot skip --comment "doc changes"

@coderabbitai
Contributor

coderabbitai bot commented Oct 13, 2025

📝 Walkthrough

Adds a new technical blog markdown file detailing Part 3 of Scaling Expert Parallelism in TensorRT-LLM, covering precision strategies, network structure adjustments, and kernel fusion/overlap techniques, with references, diagrams, and performance notes.

Changes

Cohort / File(s): Docs – Tech Blog — docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md
Summary: New blog post describing optimization techniques (low precision, structure changes, kernel fusion/overlap), implementation details, references to PRs, figures, and performance observations.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description Check — ⚠️ Warning: The PR description contains only template placeholders without any actual summary, detailed description, or test coverage information, so it does not meet the required structure or provide the necessary context. Resolution: Please replace all placeholder comments with actual content: provide a summary of the changes under the @coderabbitai summary section, fill in the Description with the issue and solution, list relevant tests in Test Coverage, and ensure the PR Checklist items are reviewed and marked appropriately.

✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: The title clearly and specifically describes the main change by highlighting the addition of Part 3 of the Scaling Expert Parallelism blog post focused on performance optimization in TensorRT LLM, making it concise and understandable at a glance.
  • Docstring Coverage — ✅ Passed: No functions found in the changes. Docstring coverage check skipped.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (9)
docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md (9)

176-184: Correct figure reference and unify AlltoAll casing.

Reference points to Figure 4 but the caption is Figure 6; also standardize “AlltoAll”.

-Taking the dispatch of four fields as an example, the data flow is shown in Figure 4.
+Taking the dispatch of four fields as an example, the data flow is shown in Figure 6.
@@
-<p align="center"><sub><em>Figure 6: The data flow of new Alltoall kernel</em></sub></p>
+<p align="center"><sub><em>Figure 6: The data flow of the new `AlltoAll` kernel</em></sub></p>

36-36: Add alt text to images (MD045).

All images should include meaningful alt text for accessibility and lint compliance.

-  <img src="../media/tech_blog14_overview_before_opt.png" width="600">
+  <img src="../media/tech_blog14_overview_before_opt.png" alt="Figure 1: Network structure before optimization" width="600">
@@
-  <img src="../media/tech_blog14_MTP_parallel_1.png" width="500">
+  <img src="../media/tech_blog14_MTP_parallel_1.png" alt="Figure 2: MTP LM head before optimization" width="500">
@@
-  <img src="../media/tech_blog14_MTP_parallel_2.png" width="500">
+  <img src="../media/tech_blog14_MTP_parallel_2.png" alt="Figure 3: MTP LM head after applying tensor parallelism" width="500">
@@
-  <img src="../media/tech_blog14_pdloff.png" width="1000">
+  <img src="../media/tech_blog14_pdloff.png" alt="Figure 4: Profiling results with PDL disabled" width="1000">
@@
-  <img src="../media/tech_blog14_pdlon.png" width="1000">
+  <img src="../media/tech_blog14_pdlon.png" alt="Figure 5: Profiling results with PDL enabled" width="1000">
@@
-  <img src="../media/tech_blog14_alltoall_dataflow.png" width="800">
+  <img src="../media/tech_blog14_alltoall_dataflow.png" alt="Figure 6: Data flow of the new AlltoAll kernel" width="800">
@@
-  <img src="../media/tech_blog14_overview_after_opt.png" width="600">
+  <img src="../media/tech_blog14_overview_after_opt.png" alt="Figure 7: Network structure after optimization" width="600">
@@
-  <img src="../media/tech_blog14_perf.png" width="600">
+  <img src="../media/tech_blog14_perf.png" alt="Figure 8: End-to-end performance comparison" width="600">

Also applies to: 88-88, 97-97, 154-154, 163-163, 180-180, 219-219, 227-227


50-51: Avoid bare URLs (MD034).

Use descriptive Markdown links.

-* https://huggingface.co/nvidia/DeepSeek-R1-FP4-v2
-* https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4-v2
+* [DeepSeek‑R1‑FP4‑v2 checkpoint](https://huggingface.co/nvidia/DeepSeek-R1-FP4-v2)
+* [DeepSeek‑R1‑0528‑FP4‑v2 checkpoint](https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4-v2)

204-212: Specify code fence language and clean up units/wording.

Add fence language (MD040), use µs, and tighten phrasing.

-On ISL/OSL 8k/1k, batch size 1 cases, on context phase, we observed that the `copy` operation takes 306us, which is clearly suboptimal. If we try to calculate a theoretical duration, considering 8 TB/sec HBM3e bandwidth, the formula would roughly be:
-```
-( ISL 8192 * k_nope_size 128 * num_heads 128 * 2 bytes * read/write 2 ) / ( 8 TB/sec * efficiency 0.8 ) = 80 us
-```
+In ISL/OSL 8k/1k, batch‑size‑1 context‑phase cases, we observed that the `copy` operation takes 306 µs, which is clearly suboptimal. A rough theoretical duration, assuming 8 TB/s HBM3e bandwidth, is:
+```text
+( ISL 8192 * k_nope_size 128 * num_heads 128 * 2 bytes * read/write 2 ) / ( 8 TB/s * efficiency 0.8 ) ≈ 80 µs
+```
@@
-To optimize the operator, we simply added `torch.compile` decorator to the operation, and the kernel duration directly drops to 107us, which is greatly reduced and already on a promising level. [PR 8044](https://github.com/NVIDIA/TensorRT-LLM/pull/8044) implemented the changes. This is an outstanding example demonstrating the power of `torch.compile`, and showing the process of analyzing and optimizing without heavily hand-crafting kernels.
+To optimize the operator, we added the `torch.compile` decorator to the operation; the kernel duration dropped to 107 µs. [PR 8044](https://github.com/NVIDIA/TensorRT-LLM/pull/8044) implemented the changes. This demonstrates the power of `torch.compile` and a data‑driven path to optimization without heavy hand‑crafted kernels.
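
(As a sanity check on that estimate, using our own arithmetic rather than anything stated in the blog: 8192 × 128 × 128 × 2 bytes × 2 ≈ 0.54 GB of traffic, and 0.54 GB / (8 TB/s × 0.8) ≈ 84 µs, consistent with the ~80 µs figure quoted above.)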

32-33: Polish opening sentence.

Simplify and fix phrasing.

-Let's firstly take a look at how the network structure looks like before we did the optimizations, to give an overall review on how the workloads look like:
+First, let's look at the network structure before the optimizations to provide an overview of the workloads:

47-49: Clarify “wo GEMM” terminology.

Use a standard symbol and crisper wording.

-The wo GEMM is the final linear layer within the multi-head attention block that produces the final outputs. While DeepSeek R1's MLA modifies the initial projections for keys and values, the wo GEMM operator remains a critical and standard component for finalizing the attention computation. In the term, "wo" is the abbreviation for the weight matrix for the output.
+The output‑projection GEMM (often denoted Wₒ) is the final linear layer within the multi‑head attention block. While DeepSeek R1's MLA modifies the initial projections for keys and values, the Wₒ GEMM remains a standard component for finalizing the attention computation. Here, “Wₒ” denotes the output‑projection weight matrix.

93-95: Minor phrasing for readability.

Prefer “first” and simplify.

-Collecting the local argmax logits firstly helps with minimizing communication and argmax computation overheads. Finally, we split logits to guarantee correctness.
+Collecting the local argmax logits first minimizes communication and argmax overhead. Finally, we split logits to guarantee correctness.

126-136: PDL wording: fix prepositions and clarify.

Minor grammar nits.

-We inserted the `cudaTriggerProgrammaticLaunchCompletion` API with all thread blocks in the primary kernel, which signals that it's ready for the secondary kernel to launch, and then call the `cudaGridDependencySynchronize` API in the secondary kernel, which blocks until all primary kernels the secondary kernel depends on have completed and flushed results to global memory.
+We insert `cudaTriggerProgrammaticLaunchCompletion` in all thread blocks of the primary kernel to signal readiness for launching the secondary kernel, and call `cudaGridDependencySynchronize` in the secondary kernel, which blocks until all dependent primary kernels have completed and flushed results to global memory.
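
For readers unfamiliar with PDL, a minimal CUDA sketch of the pattern described above (hypothetical kernel names, launch dimensions, and buffers; not the actual TensorRT-LLM kernels) looks roughly like this:

```cuda
#include <cuda_runtime.h>

// Primary kernel: signals completion so a dependent kernel may start launching early.
__global__ void primaryKernel(float* out) {
    // ... main work that writes `out` to global memory ...
    // Every thread block signals that the dependent (secondary) kernel can be launched.
    cudaTriggerProgrammaticLaunchCompletion();
}

// Secondary kernel: may overlap its preamble with the tail of the primary kernel.
__global__ void secondaryKernel(const float* in, float* out) {
    // ... preamble work that does not read `in` can run here, overlapped ...
    // Block until all dependent primary kernels have completed and flushed results.
    cudaGridDependencySynchronize();
    // ... work that consumes `in` produced by the primary kernel ...
}

// Host-side launch; requires a GPU and CUDA toolkit that support
// programmatic dependent launch (e.g. Hopper-class hardware).
void launchWithPDL(float* bufA, float* bufB, cudaStream_t stream) {
    primaryKernel<<<128, 256, 0, stream>>>(bufA);

    // Launch the secondary kernel with programmatic stream serialization enabled,
    // allowing it to begin launching while the primary kernel is still running.
    cudaLaunchConfig_t cfg = {};
    cfg.gridDim = 128;
    cfg.blockDim = 256;
    cfg.stream = stream;

    cudaLaunchAttribute attr{};
    attr.id = cudaLaunchAttributeProgrammaticStreamSerialization;
    attr.val.programmaticStreamSerializationAllowed = 1;
    cfg.attrs = &attr;
    cfg.numAttrs = 1;

    cudaLaunchKernelEx(&cfg, secondaryKernel, static_cast<const float*>(bufA), bufB);
}
```

The launch attribute is what permits the early launch; correctness is preserved by the explicit device-side synchronization in the secondary kernel, as the quoted text describes.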

9-9: Optional: avoid emphasis-as-heading (MD036).

Consider plain text without italics or a small “Authors” line; current style is acceptable if you ignore this lint rule.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bbae7a0 and cf60757.

⛔ Files ignored due to path filters (8)
  • docs/source/blogs/media/tech_blog14_MTP_parallel_1.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_MTP_parallel_2.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_alltoall_dataflow.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_overview_after_opt.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_overview_before_opt.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_pdloff.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_pdlon.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_perf.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md

9-9: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


36-36: Images should have alternate text (alt text)

(MD045, no-alt-text)


50-50: Bare URL used

(MD034, no-bare-urls)


51-51: Bare URL used

(MD034, no-bare-urls)


88-88: Images should have alternate text (alt text)

(MD045, no-alt-text)


97-97: Images should have alternate text (alt text)

(MD045, no-alt-text)


154-154: Images should have alternate text (alt text)

(MD045, no-alt-text)


163-163: Images should have alternate text (alt text)

(MD045, no-alt-text)


180-180: Images should have alternate text (alt text)

(MD045, no-alt-text)


207-207: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


219-219: Images should have alternate text (alt text)

(MD045, no-alt-text)


227-227: Images should have alternate text (alt text)

(MD045, no-alt-text)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md (1)

61-63: NVFP4 scale-factor description is accurate: NVFP4 encodes 4-bit floats in E2M1 layout, applies one E4M3 FP8 scale per 16-element micro-block, and uses a global FP32 scale for overflow safety.
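
In formula form (our notation, sketched from that description rather than taken from the blog), the dequantized value of each element is roughly x ≈ q_E2M1 × s_block(E4M3) × s_global(FP32), where one s_block is shared by each 16-element micro-block and s_global is the global FP32 scale.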

@coderabbitai
Contributor

coderabbitai bot commented Oct 13, 2025

📝 Walkthrough

Adds a new documentation file introducing a blog post on performance optimizations for expert parallelism in TensorRT-LLM, covering low-precision techniques, network structure adjustments, kernel overlap/fusion strategies, and end-to-end performance notes. No code or public APIs are changed.

Changes

Cohort / File(s): Docs — New blog article — docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md
Summary of changes: Added a standalone blog post detailing TensorRT-LLM expert parallelism optimizations (FP4/FP8 usages, AlltoAll adjustments, LM head TP, QKV handling, kernel fusion/overlap, and performance notes).

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description Check — ⚠️ Warning: The pull request description consists entirely of template placeholders without any actual summary of changes, explanation of the solution, or test coverage details, so it does not conform to the required template. Resolution: Please complete the PR description by providing a summary of changes, a clear description of the issue and solution, and relevant test coverage details as specified in the template.

✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: The title clearly and concisely summarizes the main change by identifying the new blog article and its focus on performance optimizations in expert parallelism, making it immediately understandable to reviewers.
  • Docstring Coverage — ✅ Passed: No functions found in the changes. Docstring coverage check skipped.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md (5)

36-36: Add alt text to images for accessibility (MD045).

All images use HTML without alt text. Add concise alt attributes.

-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_overview_before_opt.png" width="600">
+  <img alt="Network structure overview before optimization" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_overview_before_opt.png" width="600">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_MTP_parallel_1.png" width="500">
+  <img alt="MTP LM head computation before optimization" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_MTP_parallel_1.png" width="500">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_MTP_parallel_2.png" width="500">
+  <img alt="MTP LM head computation after applying tensor parallelism" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_MTP_parallel_2.png" width="500">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_pdloff.png" width="1000">
+  <img alt="Profiling timeline with PDL disabled" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_pdloff.png" width="1000">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_pdlon.png" width="1000">
+  <img alt="Profiling timeline with PDL enabled" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_pdlon.png" width="1000">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_alltoall_dataflow.png" width="800">
+  <img alt="Data flow of the new AlltoAll kernel" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_alltoall_dataflow.png" width="800">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_overview_after_opt.png" width="600">
+  <img alt="Network structure overview after optimization" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_overview_after_opt.png" width="600">
-  <img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_perf.png" width="600">
+  <img alt="End-to-end performance comparison (Aug 31)" src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/tech_blog14_perf.png" width="600">

Based on static analysis hints

Also applies to: 88-88, 97-97, 154-154, 163-163, 180-180, 219-219, 227-227


50-51: Replace bare URLs with labeled links (MD034).

Improves readability and lint compliance.

-* https://huggingface.co/nvidia/DeepSeek-R1-FP4-v2
-* https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4-v2
+* [nvidia/DeepSeek-R1-FP4-v2](https://huggingface.co/nvidia/DeepSeek-R1-FP4-v2)
+* [nvidia/DeepSeek-R1-0528-FP4-v2](https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4-v2)

Based on static analysis hints


207-209: Specify a language for the fenced code block (MD040).

This is a formula; use a neutral language label.

-```
+```text
 ( ISL 8192 * k_nope_size 128 * num_heads 128 * 2 bytes * read/write 2 ) / ( 8 TB/sec * efficiency 0.8 ) = 80 us

Based on static analysis hints


9-9: Optional: avoid emphasis-as-heading (MD036).

Use a small heading instead of italicized line.

-*By NVIDIA TensorRT LLM Team*
+#### By the NVIDIA TensorRT-LLM Team

Based on static analysis hints

32-32: Minor grammar/wording polish for clarity.

Tighten phrasing and fix small grammar nits.

-Let's firstly take a look at how the network structure looks like before we did the optimizations, to give an overall review on how the workloads look like:
+Let's first look at the network structure before the optimizations, to give an overall view of the workloads:
-Collecting the local argmax logits firstly helps with minimizing communication and argmax computation overheads.
+Collecting the local argmax logits first helps minimize communication and argmax computation overhead.
-As mentioned in previous section, Q and K are divided into two parts in DeepSeek MLA: with RoPE and without RoPE.
+As mentioned in the previous section, Q and K are divided into two parts in DeepSeek MLA: with RoPE and without RoPE.
-On ISL/OSL 8k/1k, batch size 1 cases, on context phase, we observed that the `copy` operation takes 306us, which is clearly suboptimal.
+In ISL/OSL 8k/1k, batch‑size‑1 context cases, we observed that the `copy` operation takes ~306 us, which is suboptimal.

Also applies to: 93-93, 204-204, 206-206

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bbae7a0 and e657dc4.

⛔ Files ignored due to path filters (8)
  • docs/source/blogs/media/tech_blog14_MTP_parallel_1.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_MTP_parallel_2.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_alltoall_dataflow.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_overview_after_opt.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_overview_before_opt.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_pdloff.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_pdlon.png is excluded by !**/*.png
  • docs/source/blogs/media/tech_blog14_perf.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/source/blogs/tech_blog/blog14_Scaling_Expert_Parallelism_in_TensorRT-LLM_part3.md

9-9: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


36-36: Images should have alternate text (alt text)

(MD045, no-alt-text)


50-50: Bare URL used

(MD034, no-bare-urls)


51-51: Bare URL used

(MD034, no-bare-urls)


88-88: Images should have alternate text (alt text)

(MD045, no-alt-text)


97-97: Images should have alternate text (alt text)

(MD045, no-alt-text)


154-154: Images should have alternate text (alt text)

(MD045, no-alt-text)


163-163: Images should have alternate text (alt text)

(MD045, no-alt-text)


180-180: Images should have alternate text (alt text)

(MD045, no-alt-text)


207-207: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


219-219: Images should have alternate text (alt text)

(MD045, no-alt-text)


227-227: Images should have alternate text (alt text)

(MD045, no-alt-text)

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
@tensorrt-cicd
Collaborator

PR_Github #21220 [ skip ] triggered by Bot

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
@kaiyux
Member Author

kaiyux commented Oct 13, 2025

/bot skip --comment "doc changes"

@tensorrt-cicd
Collaborator

PR_Github #21221 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21220 [ skip ] completed with state ABORTED

@kaiyux kaiyux enabled auto-merge (squash) October 13, 2025 13:35
@tensorrt-cicd
Collaborator

PR_Github #21221 [ skip ] completed with state SUCCESS
Skipping testing for commit 6dee863

@kaiyux kaiyux merged commit 040103a into NVIDIA:main Oct 13, 2025
5 checks passed
@kaiyux kaiyux deleted the user/kaiyu/wideep_blog3 branch October 13, 2025 13:37
