[TRTLLM-6748][feat] add PDL support for more kernels by dc3671 · Pull Request #7977 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@dc3671 (Collaborator) commented Sep 25, 2025

Summary by CodeRabbit

  • New Features

    • Added an optional environment-controlled kernel launch path with PDL support.
    • Introduced stream-aware MoE preparation APIs enabling non-blocking execution (API change: requires passing a stream).
  • Bug Fixes

    • Improved stability on newer GPUs by adding synchronization guards to MoE communication and preparation paths.
  • Refactor

    • Standardized kernel launch pattern across MoE components for consistent behavior across configurations.

Description

| name | pdl | pdl_new | out tput (tok/s) | per iter time (ms) | out tput/gpu | improv/noPDL | improv/PDL |
|---|---|---|---|---|---|---|---|
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 0 | 0 | 94033.48 | 21.78 | 2938.55 | | |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 0 | 0 | 93694.50 | 21.86 | 2927.95 | | |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 0 | 0 | 93853.12 | 21.82 | 2932.91 | | |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 1 | 0 | 95917.64 | 21.35 | 2997.43 | 2.19% | |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 1 | 0 | 96067.15 | 21.32 | 3002.10 | 2.35% | |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 1 | 1 | 96583.82 | 21.20 | 3018.24 | 2.90% | 0.62% |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 1 | 1 | 96578.99 | 21.21 | 3018.09 | 2.90% | 0.61% |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 1 | 1 | 96490.51 | 21.22 | 3015.33 | 2.80% | 0.52% |
| ctx4_gen1_dep32_batch64_con2048_eplb0_mtp0 | 1 | 1 | 96265.09 | 21.27 | 3008.28 | 2.56% | 0.28% |

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions).

  • Any new dependencies have been scanned for license and vulnerabilities.

  • CODEOWNERS updated if ownership changes.

  • Documentation updated as needed.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enables access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without careful validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without careful validation can break top of tree.

@dc3671 (Collaborator, Author) commented Sep 25, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #19895 [ run ] triggered by Bot

@coderabbitai (Contributor) commented Sep 25, 2025

📝 Walkthrough

Introduces a conditional CUDA kernel launch macro in envUtils.h and adopts it across several MoE-related kernels. Adds architecture-guarded cudaGridDependencySynchronize calls in multiple kernels. Updates several host-callable functions in moePrepareKernels.cu to accept a cudaStream_t and routes all launches through the new macro. Adds header includes for envUtils.
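
For orientation, here is a minimal sketch of what such a macro could look like. This is an illustration only, assuming the getEnvEnablePDL() helper shown in the sequence diagram below and the tensorrt_llm::common namespace; the actual definition in envUtils.h may differ in naming and error handling.

```cuda
#include <cuda_runtime.h>

// Sketch: launch via cudaLaunchKernelEx with programmatic stream serialization
// (PDL) when the environment toggle is set, else fall back to standard <<<>>>.
#define LAUNCH_WITH_PDL_WHEN_ENABLED(kernel, grid, block, dynShmBytes, stream, ...)                \
    do                                                                                             \
    {                                                                                              \
        if (tensorrt_llm::common::getEnvEnablePDL())                                               \
        {                                                                                          \
            cudaLaunchConfig_t config{};                                                           \
            config.gridDim = (grid);                                                               \
            config.blockDim = (block);                                                             \
            config.dynamicSmemBytes = (dynShmBytes);                                               \
            config.stream = (stream);                                                              \
            cudaLaunchAttribute attrs[1];                                                          \
            attrs[0].id = cudaLaunchAttributeProgrammaticStreamSerialization;                      \
            attrs[0].val.programmaticStreamSerializationAllowed = 1;                               \
            config.attrs = attrs;                                                                  \
            config.numAttrs = 1;                                                                   \
            cudaLaunchKernelEx(&config, kernel, __VA_ARGS__);                                      \
        }                                                                                          \
        else                                                                                       \
        {                                                                                          \
            kernel<<<(grid), (block), (dynShmBytes), (stream)>>>(__VA_ARGS__);                     \
        }                                                                                          \
    } while (0)
```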

Changes

  • Launch abstraction (env utils), cpp/tensorrt_llm/common/envUtils.h: Added macro LAUNCH_WITH_PDL_WHEN_ENABLED to conditionally launch kernels via cudaLaunchKernelEx with PDL when enabled, else use the standard <<<>>> syntax.
  • MoE comm kernels, cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu and fusedMoeCommKernels.h: Inserted arch-guarded cudaGridDependencySynchronize at loop starts; replaced the direct kernel launch with LAUNCH_WITH_PDL_WHEN_ENABLED; added an include of envUtils.h in the header.
  • MoE load balance kernels, cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu: Included envUtils.h; added arch-guarded cudaGridDependencySynchronize in multiple kernels; replaced several <<<>>> launches with LAUNCH_WITH_PDL_WHEN_ENABLED, selecting between no-redundant and standard route kernels.
  • MoE prepare kernels, cpp/tensorrt_llm/kernels/moePrepareKernels.cu and moePrepareKernels.h: Added the envUtils.h include (header); added arch-guarded cudaGridDependencySynchronize; replaced <<<>>> with LAUNCH_WITH_PDL_WHEN_ENABLED; updated function signatures to add cudaStream_t stream for computeCountAndIndice, moveIndice, and memsetExpertIds, with calls updated to pass the stream (see the sketch after this list).

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Host
  participant Macro as LAUNCH_WITH_PDL_WHEN_ENABLED
  participant Env as getEnvEnablePDL()
  participant CUDA as CUDA Runtime
  participant Kernel as Target Kernel

  Host->>Macro: Launch request (grid, block, dynShm, stream, args)
  Macro->>Env: Query PDL enabled?
  alt PDL enabled
    Macro->>CUDA: cudaLaunchConfig_t setup
    Macro->>CUDA: cudaLaunchKernelEx(config, Kernel, args)
  else PDL disabled
    Macro->>CUDA: Kernel<<<grid, block, dynShm, stream>>>(args)
  end
  note over Kernel: On SM_90+\nconditionally calls cudaGridDependencySynchronize()
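
To make the note in the diagram concrete, the device-side pattern could look like this hedged sketch (the kernel body is hypothetical; only the guarded cudaGridDependencySynchronize() call mirrors the change):

```cuda
__global__ void consumerKernel(float const* producerOutput, float* result, int n)
{
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 900)
    // PDL: wait until the preceding grid on this stream has flushed its writes
    // before reading its output. Compiled out on pre-SM90 architectures.
    cudaGridDependencySynchronize();
#endif
    int const idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
    {
        result[idx] = producerOutput[idx] * 2.0f;
    }
}
```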

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check ⚠️ Warning: The PR description includes the template headings but lacks a filled-in title following the required ticket/type format and does not include a narrative explanation of the issue or the solution beyond raw performance numbers. The Test Coverage section is empty with no tests listed to validate the new functionality, and the checklist remains in template form without indicating which items have been completed. Overall the description is incomplete and does not meet the repository’s PR template requirements. Resolution: add a concrete PR title using the “[JIRA/None][type] Summary” format, provide a clear Description that explains the problem and your solution, populate the Test Coverage section with relevant tests, and update the PR Checklist to reflect completed items.

✅ Passed checks (1 passed)

  • Title Check ✅ Passed: The title “[TRTLLM-6748][feat] add PDL support for more kernels” clearly includes a valid ticket ID and type and succinctly summarizes the primary change implemented in the PR. It is concise, specific to the main feature, and follows the repository’s title format guidelines.

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai (Contributor) left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
cpp/tensorrt_llm/common/envUtils.h (1)

18-22: Add required headers for cudaLaunchConfig_t and arg packing

Ensure definitions for cudaLaunchConfig_t and allow pack utilities.

 #pragma once
 #include <cstdint>
 
+#include <cuda_runtime_api.h>
+#include <array>
+#include <utility>
+
 #include <optional>
 #include <string>
cpp/tensorrt_llm/kernels/moePrepareKernels.cu (3)

113-173: Guarded grid sync still requires explicit block-level sync here.

cudaGridDependencySynchronize() only enforces the per-grid dependency order between CTAs; it does not synchronize threads within a CTA. Both computeCountAndSendStatics and recvCountAndStatics rely on every thread in the block having finished populating shared memory before the subsequent loops run. On SM90 the macro replaces the historical __syncthreads() that guaranteed that ordering, so these code paths now race and can read partially initialised shared state. Please restore the CTA barrier immediately after the guarded call (or keep the original __syncthreads() alongside it) to maintain correctness.


231-252: Missing CTA-wide barrier around grid dependency sync.

Same issue here: the guarded cudaGridDependencySynchronize() replaced the original block barrier, but the kernels still require a full CTA sync before entering the per-thread copy loops (shared state in localSendIndice, localBackwardIndice, and rankRecvCount). Without reintroducing __syncthreads(), SM90 builds can observe stale data. Please add the CTA barrier back right after the guarded call.


284-290: Restore the block-level synchronisation before tail fill.

The tail-fill loop reads totalRecvTokenCount that was computed cooperatively. After inserting cudaGridDependencySynchronize(), we still need the CTA-wide barrier that used to sit here; otherwise threads can run ahead with stale totals. Please add back the __syncthreads().
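
In code form, the requested fix amounts to keeping both barriers, since they order different things. A hedged sketch follows (kernel and buffer names are hypothetical, not the actual moePrepareKernels source):

```cuda
__global__ void prepareCountsKernel(int const* tokenExperts, int* globalCounts, int tokenCount)
{
    __shared__ int localCounts[32]; // illustrative per-CTA scratch (32 experts assumed)
    if (threadIdx.x < 32)
    {
        localCounts[threadIdx.x] = 0;
    }
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 900)
    cudaGridDependencySynchronize(); // inter-grid ordering only (PDL)
#endif
    __syncthreads(); // still required: CTA-wide visibility of the shared-memory init
    for (int i = threadIdx.x; i < tokenCount; i += blockDim.x)
    {
        atomicAdd(&localCounts[tokenExperts[i] % 32], 1);
    }
    __syncthreads(); // all threads finish counting before the reduction below
    if (threadIdx.x < 32)
    {
        atomicAdd(&globalCounts[threadIdx.x], localCounts[threadIdx.x]);
    }
}
```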

🧹 Nitpick comments (1)
cpp/tensorrt_llm/kernels/moePrepareKernels.h (1)

22-22: Avoid unnecessary header dependency

This header declares no symbols using envUtils; include it only in .cu where the launcher is used to reduce compile-time coupling.

-#include "tensorrt_llm/common/envUtils.h"
+// Include envUtils.h only in implementation files that launch kernels.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0945403 and 4c081ad.

📒 Files selected for processing (6)
  • cpp/tensorrt_llm/common/envUtils.h (1 hunks)
  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu (3 hunks)
  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h (1 hunks)
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu (8 hunks)
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu (8 hunks)
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh}: Namespace closing braces must include a trailing comment with the namespace name (e.g., '} // namespace foo').
Prefer const or constexpr variables over #define for constants.
Declare variables that are not modified after initialization as const.
Avoid magic literals in code; except for 0, nullptr, true, false. Use named constants for comparisons and logic.
Use Allman brace style for formatting.
Place the semicolon of an empty for/while loop on a new line.
Bodies of switch/while/do-while/for must be compound statements (brace-delimited), and if/else must always be followed by brace-delimited statements.
Type names (e.g., classes) must be CamelCase starting with an uppercase letter (e.g., FooBar).
Local variables, methods, and namespaces use lowerCamelCase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not in an anonymous namespace must be lowerCamelCase prefixed with 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number globals that are static or in an anonymous namespace use lowerCamelCase prefixed with 's' (e.g., sMutableStaticGlobal).
Locally visible static variables use lowerCamelCase with 's' prefix (e.g., static std::once_flag sFlag).
Private/protected member variables use 'm' prefix with CamelCase (e.g., mNbFooValues). Public members may omit, but 'm' is encouraged for clarity.
Constants (enums, global constants, static constants, and function-scope magic/literal constants) use uppercase SNAKE_CASE with 'k' prefix (e.g., kDIGIT_NUM).
Function-scope constants that are not magic numbers or literals are named like non-constant variables (e.g., bool const pass = a && b).
If macros are necessary, name them in UPPER_SNAKE_CASE (e.g., FOO_VERSION) and prefer constants over #define.
Use LLVM clang-format; wrap lines at a maximum of 120 columns; use '// clang-format off/on' sparingly with justification.
Use smart pointers for heap allocations; prefer unique_ptr for sole ownership, shared_ptr for shared...

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
  • cpp/tensorrt_llm/common/envUtils.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu
**/*.{cpp,cxx,cc,cu,h,hpp,hh,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

C++ filenames should be lowerCamelCase (first letter lowercase) and must be case-insensitive unique within a compilation target.

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
  • cpp/tensorrt_llm/common/envUtils.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
  • cpp/tensorrt_llm/common/envUtils.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu
**/*.{h,hpp,hh,hxx}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Document new class interfaces and function prototypes with Doxygen; use //! for single-line and //!< for members.

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/common/envUtils.h
**/*.{h,hpp,hh,hxx,cpp,cxx,cc}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{h,hpp,hh,hxx,cpp,cxx,cc}: Prefer anonymous namespaces over 'static' for internal linkage of functions.
All templates (class/function/member/static) must be instantiated at least once; non-POD classes should have private data members.

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/common/envUtils.h
**/*.{h,hpp,hh,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use include guards named 'TRTLLM_<FILE_NAME_IN_CAPS_WITH_UNDERSCORES>_H' (no leading or trailing underscore; directory names excluded).

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/common/envUtils.h
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
  • cpp/tensorrt_llm/common/envUtils.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu
🧠 Learnings (9)
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
PR: NVIDIA/TensorRT-LLM#7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels, the <sstream> header is not needed as an explicit include in config.cu because it's provided transitively through other headers. Local compilation testing confirms this works without the explicit include.

Applied to files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
PR: NVIDIA/TensorRT-LLM#7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/config.cu), std::ostringstream is used but <sstream> doesn't need to be explicitly included because it's provided transitively through other headers like tensorrt_llm/common/cudaUtils.h or config.h. Local compilation testing confirms this works without the explicit include.

Applied to files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
PR: NVIDIA/TensorRT-LLM#7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.

Applied to files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h
  • cpp/tensorrt_llm/kernels/moePrepareKernels.h
  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.

Applied to files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
PR: NVIDIA/TensorRT-LLM#7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu
📚 Learning: 2025-08-25T00:03:39.294Z
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1185-1189
Timestamp: 2025-08-25T00:03:39.294Z
Learning: TLLM_CHECK_WITH_INFO is a host-side utility function and cannot be called from CUDA device functions (those marked with __device__ or __global__). In device code, assert() is the primary mechanism for handling "should never happen" conditions, and like standard C++ assert, CUDA's assert only works in debug builds and is compiled out in release builds.

Applied to files:

  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
PR: NVIDIA/TensorRT-LLM#7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu
  • cpp/tensorrt_llm/kernels/moePrepareKernels.cu
🧬 Code graph analysis (3)
cpp/tensorrt_llm/common/envUtils.h (2)
cpp/tensorrt_llm/kernels/moePrepareKernels.cu (5)
  • grid (302-302)
  • grid (360-360)
  • block (301-301)
  • block (335-335)
  • block (359-359)
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionLaunch.h (1)
  • grid (259-345)
cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu (1)
cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu (4)
  • tokenCount (1066-1128)
  • tokenCount (1066-1066)
  • tokenCount (1140-1213)
  • tokenCount (1140-1140)
cpp/tensorrt_llm/kernels/moePrepareKernels.cu (1)
cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu (4)
  • tokenCount (1066-1128)
  • tokenCount (1066-1066)
  • tokenCount (1140-1213)
  • tokenCount (1140-1140)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (10)
cpp/tensorrt_llm/kernels/fusedMoeCommKernels.h (1)

23-23: LGTM: envUtils include aligns with new launcher usage

Header now exposes the macro/function to dependents.

cpp/tensorrt_llm/kernels/fusedMoeCommKernels.cu (3)

1072-1074: Verify need for cudaGridDependencySynchronize on SM90

Guarded sync may impose a full-grid dependency barrier; confirm it’s required for correctness and that kernels are launched in a mode that supports grid dependency semantics on targeted toolkits.


1147-1149: Same concern: grid-dependency sync in receive path

Confirm correctness/perf trade-offs and launch-mode requirements.


1469-1470: LGTM: switch to PDL-aware launcher

Once the launcher is fixed to pack args, this call site is correct (dim3 grid/block already).

Please validate end-to-end that launches succeed with and without TRTLLM_ENABLE_PDL set.
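
For reference, a call site in this pattern could look like the following line (kernel and parameter names are assumptions; grid and block are already dim3 here, so the macro can forward them directly):

```cuda
LAUNCH_WITH_PDL_WHEN_ENABLED(moeCommKernel, grid, block, /*dynShmBytes=*/0, stream, params);
```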

cpp/tensorrt_llm/kernels/moeLoadBalance/moeLoadBalanceKernels.cu (6)

22-22: LGTM: envUtils include

Needed for new launcher macro/function.


143-146: Verify SM90 grid-dependency sync is necessary here

zeroExpertTokenCountKernel does only per-block writes; ensure the barrier is actually required.


185-187: Verify grid-dependency sync placement

Confirm cudaGridDependencySynchronize before the counting loop is necessary and doesn’t regress perf.


336-339: Verify need for cudaGridDependencySynchronize in no-redundant route kernel

Confirm launch-mode/toolkit support and benchmark impact.


515-517: Verify grid-dependency sync in sort route kernel

As above; ensure correctness requirement and acceptable overhead.


294-296: Launcher parameters already dim3; no action needed. The gridDim and blockDim arguments are already dim3 and compile correctly—ignore this suggestion.

Likely an incorrect or invalid review comment.

@tensorrt-cicd (Collaborator): PR_Github #19895 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #14973 completed with status: 'FAILURE'

@dc3671 (Collaborator, Author) commented Sep 25, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #19906 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #19906 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #14983 completed with status: 'FAILURE'

@dc3671 dc3671 force-pushed the add-wideep-pdl branch 2 times, most recently from 8cb84ca to c7d56cd on September 26, 2025 05:24
@dc3671 (Collaborator, Author) commented Sep 26, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #20045 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20045 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15101 completed with status: 'FAILURE'

@dc3671 (Collaborator, Author) commented Sep 26, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20054 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20054 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15107 completed with status: 'FAILURE'

@qiaoxj07 (Collaborator) commented:

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20136 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20136 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15179 completed with status: 'FAILURE'

@dc3671 (Collaborator, Author) commented Sep 27, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20140 [ run ] triggered by Bot

@dc3671 (Collaborator, Author) commented Sep 27, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20142 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20140 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15182 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator): PR_Github #20142 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15184 completed with status: 'FAILURE'

@qiaoxj07 (Collaborator) commented Oct 8, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20760 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20760 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15689 completed with status: 'FAILURE'

@dc3671 dc3671 requested review from a team as code owners October 9, 2025 00:30
@dc3671 dc3671 requested review from poweiw and yuanjingx87 October 9, 2025 00:30
@dc3671 (Collaborator, Author) commented Oct 9, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20826 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20826 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15747 completed with status: 'FAILURE'

@dc3671 (Collaborator, Author) commented Oct 9, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20883 [ run ] triggered by Bot

@dc3671 (Collaborator, Author) commented Oct 9, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20884 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20883 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15796 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator): PR_Github #20884 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15797 completed with status: 'FAILURE'

dc3671 and others added 3 commits October 10, 2025 08:40
Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
Signed-off-by: Zhenhuan Chen <chenzhh3671@gmail.com>
Signed-off-by: Zhenhuan Chen <chenzhh3671@gmail.com>
@dc3671 (Collaborator, Author) commented Oct 10, 2025

/bot run --stage-list "A10-PackageSanityCheck-PY310-UB2204-CU12,RTX5090-PackageSanityCheck-PY312-UB2404-CU12,GH200-PackageSanityCheck-PY312-UB2404-CU12"

@tensorrt-cicd (Collaborator): PR_Github #20932 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20932 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15833 (Partly Tested) completed with status: 'SUCCESS'

@kaiyux (Member) commented Oct 10, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #21040 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #21040 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15906 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@dc3671 dc3671 merged commit 84d2f12 into NVIDIA:main Oct 11, 2025
5 checks passed
