[https://nvbugs/5412562][feat] Allocate MoE workspace only when necessary (release/1.0 retargeted) by nv-yilinf · Pull Request #6955 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@nv-yilinf
Collaborator

@nv-yilinf nv-yilinf commented Aug 15, 2025

Summary by CodeRabbit

  • Refactor

    • Reworked MoE workspace handling to be persistent and stream-aware, reducing per-call allocations and memory churn.
    • Allocation now adapts to required size and is safe during CUDA graph capture, improving stability.
    • Enhanced debug logging around workspace allocation and capture states for easier troubleshooting.
  • Bug Fixes

    • Addressed intermittent instability during CUDA graph capture by ensuring a dedicated, resizeable workspace is always available.

Description

In the current MoE runner implementation, the runner allocates a new workspace tensor every time the kernel is invoked. Even though these allocations are backed by torch's caching allocator, frequent cudaMalloc/cudaFree calls are generally considered bad practice and can occasionally cause a single cudaMalloc to take ~100 ms.
This PR fixes the issue by keeping the workspace tensor as a class member and reallocating only when the required size is larger than the current size or when a CUDA graph is being captured.
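
A minimal sketch of the pattern, with illustrative names and a simplified sub-buffer offset (the actual implementation in cpp/tensorrt_llm/thop/moeOp.cpp takes more parameters and uses common::nextWorkspacePtr for the aligned offset):

#include <torch/torch.h>
#include <cuda_runtime.h>
#include <cstdint>

// Illustrative stand-ins for the real types in moeOp.cpp.
struct WorkspaceInfo
{
    torch::Tensor workspace;        // one int8 CUDA buffer backing all sub-workspaces
    void* src_to_dest_map{nullptr};
};

class FusedMoeRunnerSketch
{
public:
    WorkspaceInfo const& getWorkspaceInfo(int64_t totalWorkspaceSize, int64_t moeWorkspaceSize, cudaStream_t stream)
    {
        cudaStreamCaptureStatus captureStatus = cudaStreamCaptureStatusNone;
        cudaStreamIsCapturing(stream, &captureStatus);
        bool const isCapturing = captureStatus != cudaStreamCaptureStatusNone;

        // Reallocate when capturing a CUDA graph (so the captured pointer belongs to this graph)
        // or when the cached buffer is smaller than what this call needs.
        if (isCapturing || !mWorkspaceInfo.workspace.defined()
            || mWorkspaceInfo.workspace.numel() < totalWorkspaceSize)
        {
            mWorkspaceInfo.workspace = torch::empty({totalWorkspaceSize},
                torch::dtype(torch::kInt8).device(torch::kCUDA).requires_grad(false));
        }
        // The src_to_dest_map region follows the MoE workspace region inside the same buffer.
        mWorkspaceInfo.src_to_dest_map
            = static_cast<int8_t*>(mWorkspaceInfo.workspace.data_ptr()) + moeWorkspaceSize;
        return mWorkspaceInfo;
    }

private:
    WorkspaceInfo mWorkspaceInfo; // persists across calls instead of being rebuilt per invocation
};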

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
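
For example, a hypothetical invocation combining several of the flags above (using the illustrative stage/GPU values from the examples) could look like:

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp"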

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
@nv-yilinf nv-yilinf requested a review from a team as a code owner August 15, 2025 16:28
@coderabbitai
Contributor

coderabbitai bot commented Aug 15, 2025

📝 Walkthrough

Refactors MoE workspace handling in cpp/tensorrt_llm/thop/moeOp.cpp to use a persistent, stream-aware WorkspaceInfo stored in FusedMoeRunner. getWorkspaceInfo now returns a const reference, accepts cudaStream_t, handles CUDA graph capture, resizes/allocates as needed, and supplies stable workspace and mapping pointers to kernels.

Changes

Cohort / File(s): MoE Workspace Refactor — cpp/tensorrt_llm/thop/moeOp.cpp
Summary of changes: getWorkspaceInfo signature changed to return a const reference and take cudaStream_t; introduced member WorkspaceInfo workspace_info; allocation logic made persistent, stream- and capture-aware (isCapturing); total_workspace_size uses int64_t; kernel args now use workspace_info.workspace and workspace_info.src_to_dest_map; internal call sites updated to bind const references and return persistent state.

Sequence Diagram(s)

sequenceDiagram
  participant Caller
  participant FusedMoeRunner
  participant CUDA as CUDA Allocator/Runtime
  participant Kernels

  Caller->>FusedMoeRunner: getWorkspaceInfo(..., stream)
  FusedMoeRunner->>CUDA: isCapturing(stream)
  alt Capturing or insufficient size
    FusedMoeRunner->>CUDA: Allocate/resize workspace
    CUDA-->>FusedMoeRunner: workspace ptr
    FusedMoeRunner->>FusedMoeRunner: Update workspace_info
  else Reuse
    FusedMoeRunner->>FusedMoeRunner: Use existing workspace_info
  end
  FusedMoeRunner-->>Caller: const& workspace_info
  Caller->>Kernels: Launch with workspace_info.workspace, workspace_info.src_to_dest_map

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15459 [ run ] triggered by Bot

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (2)
cpp/tensorrt_llm/thop/moeOp.cpp (2)

344-346: Non-OSS path passes a torch::Tensor where a raw pointer is expected (compile-time error).

In the non-OSS branch, workspace_info.workspace is a torch::Tensor, but it’s passed where a char* is expected. This will not compile since you’re casting a tensor object to a pointer.

Apply this diff to pass the underlying data pointer:

-            static_cast<char*>(workspace_info.workspace), output.data_ptr(),
+            static_cast<char*>(workspace_info.workspace.data_ptr()), output.data_ptr(),

474-476: Same raw pointer issue in min-latency non-OSS path.

Mirror the fix here as well: workspace_info.workspace is a torch::Tensor and needs .data_ptr().

-            static_cast<char*>(workspace_info.workspace), output.data_ptr(),
+            static_cast<char*>(workspace_info.workspace.data_ptr()), output.data_ptr(),
🧹 Nitpick comments (4)
cpp/tensorrt_llm/thop/moeOp.cpp (4)

580-581: Member naming: use m-prefix and avoid shadowing.

The new member workspace_info violates the class’ member naming convention (members use mPrefix). It also shadows the local workspace_info variable used at call sites, reducing clarity.

Apply this diff to rename the member:

-    WorkspaceInfo workspace_info;
+    WorkspaceInfo mWorkspaceInfo;

Notes:

  • The function below (getWorkspaceInfo) should be updated to refer to mWorkspaceInfo (see proposed diff in that function’s comment).
  • Call-site locals can keep the name workspace_info if you prefer; the member’s m-prefix prevents ambiguity.

311-313: Thread device index through to getWorkspaceInfo to ensure correct device placement for the workspace.

getWorkspaceInfo allocates a CUDA tensor internally. Without using input.options() (not available in that scope) or a device guard, the allocation may land on the wrong device if the current device doesn't match input.get_device(). Passing the device index fixes this deterministically.

Apply this diff to pass the input device index:

-        WorkspaceInfo const& workspace_info = getWorkspaceInfo(num_rows, hidden_size, inter_size, num_experts_total,
-            static_cast<int>(experts_per_token), activation_type, parallelism_config, min_latency_mode, stream);
+        WorkspaceInfo const& workspace_info = getWorkspaceInfo(num_rows, hidden_size, inter_size, num_experts_total,
+            static_cast<int>(experts_per_token), activation_type, parallelism_config, min_latency_mode,
+            input.get_device(), stream);

442-444: Ditto: pass device index to ensure workspace allocation lands on the correct GPU for min-latency path.

-        WorkspaceInfo const& workspace_info = getWorkspaceInfo(num_rows, hidden_size, inter_size, num_experts_total,
-            static_cast<int>(experts_per_token), activation_type, parallelism_config, min_latency_mode, stream);
+        WorkspaceInfo const& workspace_info = getWorkspaceInfo(num_rows, hidden_size, inter_size, num_experts_total,
+            static_cast<int>(experts_per_token), activation_type, parallelism_config, min_latency_mode,
+            input.get_device(), stream);

626-660: Make workspace allocation device-correct, avoid member/local shadowing, and fix 64-bit logging specifiers.

Good call to persist and grow the workspace and to reallocate during CUDA Graph capture. A few targeted improvements:

  • Ensure allocations are placed on the correct CUDA device by accepting a device_index parameter and using it in TensorOptions.
  • Use the mWorkspaceInfo member to avoid shadowing and match the class’ naming convention.
  • Fix %ld format specifiers for 64-bit values to avoid UB/incorrect output on platforms where long is 32-bit.

Apply this diff:

-    WorkspaceInfo const& getWorkspaceInfo(int64_t const num_rows, int64_t const hidden_size, int64_t const inter_size,
-        int num_experts, int experts_per_token, ActivationType activation_type,
-        kernels::MOEParallelismConfig const& parallelismConfig, bool min_latency_mode, cudaStream_t stream)
+    WorkspaceInfo const& getWorkspaceInfo(int64_t const num_rows, int64_t const hidden_size, int64_t const inter_size,
+        int num_experts, int experts_per_token, ActivationType activation_type,
+        kernels::MOEParallelismConfig const& parallelismConfig, bool min_latency_mode,
+        int device_index, cudaStream_t stream)
     {
         size_t moe_workspace_size = mKernelRunner->getWorkspaceSize(num_rows, hidden_size, inter_size, num_experts,
             experts_per_token, activation_type, parallelismConfig, /* use_lora */ false, mUseDeepSeekFP8BlockScaling,
             min_latency_mode, mUseW4A8GroupScaling);
         size_t src_to_dest_map_size = experts_per_token * num_rows * sizeof(int);

         std::vector<size_t> workspaces{moe_workspace_size, src_to_dest_map_size};

-        int64_t const total_workspace_size = common::calculateTotalWorkspaceSize(workspaces.data(), workspaces.size());
+        int64_t const total_workspace_size = common::calculateTotalWorkspaceSize(workspaces.data(), workspaces.size());

-        bool is_capturing = tensorrt_llm::common::isCapturing(stream);
+        bool is_capturing = tensorrt_llm::common::isCapturing(stream);
         // Always allocate workspace when capturing cuda graph to avoid illegal memory access during replay
-        if (is_capturing || workspace_info.workspace.numel() < total_workspace_size)
+        if (is_capturing || mWorkspaceInfo.workspace.numel() < total_workspace_size)
         {
             if (is_capturing)
             {
-                TLLM_LOG_DEBUG(
-                    "Allocating MoE workspace with %ld bytes size during cuda graph capture", total_workspace_size);
+                TLLM_LOG_DEBUG(
+                    "Allocating MoE workspace with %lld bytes size during cuda graph capture",
+                    static_cast<long long>(total_workspace_size));
             }
             else
             {
-                TLLM_LOG_DEBUG("MoE workspace size is not enough, increase the size from %ld bytes to %ld bytes",
-                    workspace_info.workspace.numel(), total_workspace_size);
+                TLLM_LOG_DEBUG("MoE workspace size is not enough, increase the size from %lld bytes to %lld bytes",
+                    static_cast<long long>(mWorkspaceInfo.workspace.numel()),
+                    static_cast<long long>(total_workspace_size));
             }
-            workspace_info.workspace = torch::empty({static_cast<long>(total_workspace_size)},
-                torch::dtype(torch::kInt8).device(torch::kCUDA).requires_grad(false));
+            mWorkspaceInfo.workspace = torch::empty({static_cast<int64_t>(total_workspace_size)},
+                torch::dtype(torch::kInt8).device(c10::Device(torch::kCUDA, device_index)).requires_grad(false));
         }
-        workspace_info.src_to_dest_map
-            = common::nextWorkspacePtr(static_cast<int8_t*>(workspace_info.workspace.data_ptr()), moe_workspace_size);
+        mWorkspaceInfo.src_to_dest_map
+            = common::nextWorkspacePtr(static_cast<int8_t*>(mWorkspaceInfo.workspace.data_ptr()), moe_workspace_size);

-        return workspace_info;
+        return mWorkspaceInfo;
     }

Optional consideration:

  • If you foresee capturing multiple CUDA graphs with the same runner instance (on the same or different streams) and replaying all of them concurrently, please double-check lifetime semantics with the caching allocator’s graph pools to ensure reallocating mWorkspaceInfo.workspace for a later capture doesn’t free memory held by a previously captured graph. If needed, we can store per-graph workspaces keyed by stream/graph.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 9e02f6b and 2fa5a9f.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/thop/moeOp.cpp (5 hunks)
🧠 Learnings (1)
📓 Common learnings
Learnt from: djns99
PR: NVIDIA/TensorRT-LLM#6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.420Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd
Collaborator

PR_Github #15459 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #146 completed with status: 'FAILURE'

@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15477 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15477 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #151 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

Collaborator

@jinyangyuan-nvidia jinyangyuan-nvidia left a comment

LGTM

@nv-yilinf nv-yilinf enabled auto-merge (squash) August 17, 2025 22:35
@nv-yilinf nv-yilinf merged commit 7f7a301 into NVIDIA:release/1.0 Aug 18, 2025
5 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Aug 22, 2025
…sary (release/1.0 retargeted) (NVIDIA#6955)

Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
joyang-nv pushed a commit that referenced this pull request Sep 1, 2025
…sary (release/1.0 retargeted) (#6955)

Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
@nv-yilinf nv-yilinf deleted the fix-moe-workspace-allocation-retarget-1.0 branch September 4, 2025 16:39
cpp/tensorrt_llm/thop/moeOp.cpp — inline review thread on the following lines:

= common::nextWorkspacePtr(static_cast<int8_t*>(info.workspace.data_ptr()), moe_workspace_size);
bool is_capturing = tensorrt_llm::common::isCapturing(stream);
// Always allocate workspace when capturing cuda graph to avoid illegal memory access during replay
if (is_capturing || workspace_info.workspace.numel() < total_workspace_size)
Collaborator

When workspace_info.workspace.numel() < total_workspace_size and MOE kernels are running asynchronously in different streams, is it possible that 2 kernels from different streams access the same workspace at the same time? @jinyangyuan-nvidia @nv-yilinf

Collaborator

I think historically we have assumed in a few places that we only ever have one stream running MOE. But looking at the chunked MOE logic, this is definitely a problematic assumption. It's quite possible there are a few bugs with this.
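
A minimal sketch of the per-stream workspace idea raised in the review above (purely illustrative; none of these names exist in moeOp.cpp, and the merged PR keeps a single shared member):

#include <torch/torch.h>
#include <cuda_runtime.h>
#include <cstdint>
#include <unordered_map>

// Hypothetical per-stream cache; avoids two streams sharing one MoE workspace buffer.
struct StreamWorkspace
{
    torch::Tensor workspace;
    void* src_to_dest_map{nullptr};
};

class PerStreamWorkspaceCache
{
public:
    // One buffer per stream, grown lazily to the largest size requested on that stream.
    StreamWorkspace& getOrGrow(cudaStream_t stream, int64_t requiredBytes)
    {
        StreamWorkspace& info = mWorkspaces[stream];
        if (!info.workspace.defined() || info.workspace.numel() < requiredBytes)
        {
            info.workspace = torch::empty({requiredBytes},
                torch::dtype(torch::kInt8).device(torch::kCUDA).requires_grad(false));
        }
        return info;
    }

private:
    std::unordered_map<cudaStream_t, StreamWorkspace> mWorkspaces;
};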

