[https://nvbugs/5472947][fix] wait on isend handles before reusing buffers by amukkara · Pull Request #7462 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@amukkara (Collaborator) commented Sep 2, 2025

Summary by CodeRabbit

  • New Features

    • None
  • Bug Fixes

    • Improved reliability of pipeline-parallel execution by properly awaiting and clearing outstanding send operations, reducing risks of hangs or duplicated sends during multi-microbatch runs.
    • More consistent logits communication on the final stage.
  • Refactor

    • Centralized send-wait logic into a single helper to standardize synchronization across stages while preserving existing rank-specific behavior.

Description

The new isend/wait pattern does not degrade performance.
req/sec for Llama-3.1-8B, PP=4, H100 80GB PCIe (350W), ISL=1000, OSL=1000:

| Concurrency | Before change | After change |
|-------------|---------------|--------------|
| 64          | 3.21          | 3.22         |
| 128         | 4.97          | 5.02         |
| 256         | 6.79          | 6.77         |
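
For context, the fix enforces the standard wait-before-reuse discipline for nonblocking sends. Below is a minimal sketch of that pattern, using torch.distributed.isend as a stand-in for the executor's internal send helpers; the slot count and function names are illustrative, not taken from the PR:

```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group(...) has already been called.
NUM_MICROBATCHES = 4  # illustrative
send_handles = [None] * NUM_MICROBATCHES  # one in-flight send slot per microbatch


def send_microbatch(buffer: torch.Tensor, microbatch_id: int, dst: int) -> None:
    # isend only enqueues the transfer; overwriting `buffer` while a prior
    # send on this slot is still in flight can corrupt or duplicate data on
    # the receiver. So wait on (and clear) the old handle before reusing it.
    if send_handles[microbatch_id] is not None:
        send_handles[microbatch_id].wait()
        send_handles[microbatch_id] = None
    send_handles[microbatch_id] = dist.isend(buffer, dst=dst)
```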

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
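
For example, a typical invocation that runs only one pre-merge test stage with fail-fast disabled (using the illustrative stage name from the option docs above) would be:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast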

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.

Signed-off-by: Anurag Mukkara <134339030+amukkara@users.noreply.github.com>
@amukkara requested a review from a team as a code owner September 2, 2025 05:31
@amukkara requested a review from Naveassaf September 2, 2025 05:31
@amukkara changed the title [https;//nvbugs/5472947][fix] wait on isend handles before reusing buffers [https://nvbugs/5472947][fix] wait on isend handles before reusing buffers Sep 2, 2025
@coderabbitai bot (Contributor) commented Sep 2, 2025

📝 Walkthrough

A helper method wait_on_pp_send_handles(microbatch_id) was added to PyExecutor and used to replace direct waits on pipeline-parallel send handles at three call sites: two in _executor_loop_pp (Stages 2 and 3) and one in _handle_logits_communication for the last PP rank. The helper waits and then clears the handle.
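
In essence, the helper is a guarded wait-and-clear; a sketch matching the diff quoted in the review comment below:

```python
def wait_on_pp_send_handles(self, microbatch_id):
    # Block until the outstanding isend for this microbatch slot completes,
    # then clear the slot so its buffer can be safely reused or freed.
    if self.send_handles[microbatch_id] is not None:
        self.send_handles[microbatch_id].wait()
        self.send_handles[microbatch_id] = None
```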

Changes

| Cohort / File(s) | Summary of Changes |
|------------------|--------------------|
| PP send-handle wait centralization<br>tensorrt_llm/_torch/pyexecutor/py_executor.py | Added wait_on_pp_send_handles(self, microbatch_id) that waits on and clears outstanding send handles. Replaced direct waits on self.send_handles[...] in _executor_loop_pp (Stage 2 and Stage 3) and in _handle_logits_communication (last PP rank). Minor formatting updates. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Exec as PyExecutor
  participant Next as Next PP Rank
  participant Last as (If last rank) Downstream Consumer

  rect rgb(245,245,255)
  note over Exec: Stage 2 / Stage 3 / Logits handling
  Exec->>Exec: wait_on_pp_send_handles(microbatch_id)
  alt Handle exists
    Exec->>Exec: Wait on handle<br/>(blocks until send completes)
    Exec->>Exec: Clear handle (set None)
  else No handle
    Exec->>Exec: No-op
  end
  end

  rect rgb(240,255,240)
  note over Exec,Next: After wait, initiate next nonblocking send
  Exec-->>Next: isend_object(payload)
  end

  opt Last PP rank (logits)
    Exec-->>Last: isend_object(logits)
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • pcastonguay
  • Shixiaowei02
  • chuangz0
  • schetlur-nv


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

909-913: Make the helper private and add minimal docstring/types.

For consistency with other internals and clarity, suggest a private name, type hints, and a short docstring.

```diff
-    def wait_on_pp_send_handles(self, microbatch_id):
-        if self.send_handles[microbatch_id] is not None:
-            self.send_handles[microbatch_id].wait()
-            self.send_handles[microbatch_id] = None
+    def _wait_on_pp_send_handle(self, microbatch_id: int) -> None:
+        """Wait for and clear the isend handle for the given microbatch slot."""
+        handle = self.send_handles[microbatch_id]
+        if handle is not None:
+            handle.wait()
+            self.send_handles[microbatch_id] = None
```

Update call sites:

```diff
-                        self.wait_on_pp_send_handles(prev_microbatch_id)
+                        self._wait_on_pp_send_handle(prev_microbatch_id)
-                    self.wait_on_pp_send_handles(prev_microbatch_id)
+                    self._wait_on_pp_send_handle(prev_microbatch_id)
-            self.wait_on_pp_send_handles(prev_microbatch_id)
+            self._wait_on_pp_send_handle(prev_microbatch_id)
```
📥 Commits

Reviewing files that changed from the base of the PR and between e81c50d and 7cada23.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (4 hunks)
🔇 Additional comments (4)
tensorrt_llm/_torch/pyexecutor/py_executor.py (4)

861-868: Correctness: wait-before-reuse is right.

Calling wait_on_pp_send_handles before reassigning the handle guarantees the prior isend has completed and its buffers aren’t reused prematurely. LGTM.


889-897: Finalize-after-wait prevents premature buffer release.

Waiting on the microbatch’s send handle before clearing the slot avoids freeing memory still in use by an in-flight send. Looks good.


1829-1834: Safe reuse guard for logits send—good call.

Waiting before posting the logits isend avoids clobbering a prior in-flight handle for the same microbatch ID.


909-913: Centralized send_handles waits: No direct send_handles[].wait() calls were found outside wait_on_pp_send_handles.

@amukkara requested a review from Tabrizian September 2, 2025 05:46
@amukkara (Collaborator, Author) commented Sep 2, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17315 [ run ] triggered by Bot

@pcastonguay (Collaborator)

@reasonsolo looks like there are additional changes compared to https://github.com/NVIDIA/TensorRT-LLM/pull/6976/files. Should this target release/1.0?

@tensorrt-cicd (Collaborator)

PR_Github #17315 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13013 completed with status: 'SUCCESS'

@reasonsolo (Collaborator)

> @reasonsolo looks like there are additional changes compared to https://github.com/NVIDIA/TensorRT-LLM/pull/6976/files. Should this target release/1.0?

The bug report says it is blocked on CUDA synchronization. I'm guessing this bug is also caused by the cache-blocks difference. The PP loop no longer hangs in my test with Raayan's PR, so I think it's OK to target the main branch.

@amukkara amukkara merged commit ae51368 into NVIDIA:main Sep 3, 2025
10 of 15 checks passed