[TRTLLM-7155][feat] Unify sampler handle logits implementation. by dcampora · Pull Request #6867 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@dcampora
Collaborator

@dcampora dcampora commented Aug 13, 2025

Summary by CodeRabbit

  • Refactor
    • Unified beam width handling for consistent sampling across requests.
    • Streamlined logits processing; computed only when requested to reduce overhead.
    • Generation logits are no longer returned; outputs focus on token log-probabilities.
    • Removed host-side logits storage to lower memory usage and improve efficiency.
    • More consistent behavior when overlap scheduling is enabled.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enables access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
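
For example, to run only a single test stage (using the example stage name above) while disabling fail-fast:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast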

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without careful validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without careful validation can break the top of tree.

@dcampora dcampora requested a review from a team as a code owner August 13, 2025 12:14
@dcampora dcampora requested a review from Naveassaf August 13, 2025 12:14
@coderabbitai
Contributor

coderabbitai bot commented Aug 13, 2025

Warning

Rate limit exceeded

@dcampora has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 7 minutes and 3 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between fba67b9 and dd573af.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/handle_logits.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/sampler.py (6 hunks)
📝 Walkthrough

This update integrates centralized logits handling into py_executor, adjusts the exclusion logic in the request queue, refactors the sampler to remove logits from host state and unify beam-width computation, and simplifies the HandleLogits API by computing context prefix sums internally. Public APIs are mostly unchanged, apart from the HandleLogits signature and the new Sampler utility.

Changes

  • Request queue exclude-last-logits logic (tensorrt_llm/_torch/pyexecutor/executor_request_queue.py): Removed the TorchSampler check; should_exclude_last_generation_logits now depends only on disable_overlap_scheduler. Imports simplified, comments updated. No public API change.
  • Executor logits hook (tensorrt_llm/_torch/pyexecutor/py_executor.py): Added the HandleLogits import and a new NVTX-scoped _handle_logits method. _sample_async calls _handle_logits before sampling when logits are requested. Existing signatures unchanged.
  • Sampler refactor: beam width and state surface (tensorrt_llm/_torch/pyexecutor/sampler.py): Added Sampler.beam_width(scheduled_requests). Removed logits from SampleStateTensors. Renamed TRTLLMSampler.handle_logits to handle_logprobs and removed TRTLLMSampler.beam_width. Eliminated host/generation logits storage and HandleLogits wiring; flows focus on new_tokens/log_probs.
  • HandleLogits API simplification (tensorrt_llm/_torch/pyexecutor/handle_logits.py): The __call__ signature is reduced to (context_requests, generation_requests, logits, beam_width). Context prefix sums are now computed internally and used to derive indices for context/generation logits. Generation logits handling is retained, with tiling for beam_width > 1 as needed. Docs updated. An illustrative sketch of how these pieces might fit together follows this list.
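
To make the new surface concrete, here is a minimal sketch of the unified flow. The helper and attribute names (a free-standing beam_width helper, sampling_config.beam_width and py_return_* on requests, and a "logits" entry in batch_outputs) are assumptions for illustration; this mirrors the walkthrough above, not the PR's exact implementation.

from typing import Iterable

def beam_width(scheduled_requests: Iterable) -> int:
    # Sketch of Sampler.beam_width: requests in a batch are assumed to share
    # a single beam width; default to 1 for an empty batch.
    for request in scheduled_requests:
        return request.sampling_config.beam_width
    return 1

def handle_logits_step(handle_logits, scheduled_batch, batch_outputs):
    # Sketch of the executor hook: marshal logits only when some request asked for them.
    if not any(r.py_return_context_logits or r.py_return_generation_logits
               for r in scheduled_batch.all_requests()):
        return
    handle_logits(scheduled_batch.context_requests,
                  scheduled_batch.generation_requests,
                  batch_outputs["logits"],
                  beam_width(scheduled_batch.all_requests()))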

Sequence Diagram(s)

sequenceDiagram
  participant Scheduler as py_executor._sample_async
  participant Handler as _handle_logits
  participant HL as HandleLogits
  participant Sampler as Sampler

  Scheduler->>Handler: _handle_logits(scheduled_batch, batch_outputs)
  alt logits requested
    Handler->>Sampler: beam_width(scheduled_batch.all_requests())
    Handler->>HL: __call__(context_requests, generation_requests, logits, beam_width)
    HL-->>Handler: processed logits attached to requests
  else no logits requested
    Handler-->>Scheduler: skip
  end
  Scheduler->>Sampler: sample_async(...)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

Community want to contribute

Suggested reviewers

  • syuoni
  • DomBrown
  • achartier
  • chuangz0
  • Superjomn
  • mikeiovine

@dcampora dcampora enabled auto-merge (squash) August 13, 2025 12:15
@dcampora
Collaborator Author

/bot run

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (4)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)

1-1: Prepend NVIDIA copyright header (current year).

Per coding guidelines, all source files must start with the NVIDIA copyright header.

Apply at the top of the file.


373-381: Bug: request.seq_slot should be request.py_seq_slot.

LlmRequest elsewhere uses py_seq_slot; request.seq_slot likely does not exist and will raise AttributeError during speculative acceptance/writeback.

Apply this diff:

-                new_tokens[i, request.seq_slot, self.BEAM] = new_token
+                new_tokens[i, request.py_seq_slot, self.BEAM] = new_token
@@
-                new_tokens[num_accepted, request.seq_slot,
+                new_tokens[num_accepted, request.py_seq_slot,
                            self.BEAM] = new_token

Also applies to: 391-399

tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)

1-1: Missing NVIDIA copyright header.

Per guidelines, prepend the NVIDIA copyright header.

tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1-1: Insert NVIDIA copyright header.

This file is missing the required header.

🧹 Nitpick comments (5)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)

311-325: handle_logprobs builds correct per-token logprobs; minor robustness tweaks.

Logic is sound for TorchSampler (beam_width=1). Consider guarding against count > available length and documenting semantics to match request.py_result expectations.

Apply this diff to tighten safety and clarity:

-    def handle_logprobs(self, request: LlmRequest, state: SampleState, *, beam: int, count: int):
+    def handle_logprobs(self, request: LlmRequest, state: SampleState, *, beam: int, count: int):
+        """Append per-step token logprobs for the latest `count` steps (TorchSampler assumes beam=0)."""
         current_slice = slice(0, count), request.py_seq_slot, beam
         if request.py_return_log_probs:
             assert state.host.log_probs is not None
-            log_probs = state.host.log_probs[request.py_seq_slot][beam][:count]
+            lp = state.host.log_probs[request.py_seq_slot][beam]
+            assert lp.numel() >= count, f"Requested {count} logprobs, only {lp.numel()} available"
+            log_probs = lp[:count]
             current_tokens = state.host.new_tokens[current_slice]
 
             token_log_probs = [{
                 int(token): Logprob(logprob=logprob, rank=1)
             } for token, logprob in zip(current_tokens, log_probs.tolist())]
             assert beam == 0, "The following call relies on beam_width to be 1 - hence the list with a single element"
             request.py_result.append_log_probs([token_log_probs])
tests/unittest/_torch/test_return_logits.py (1)

1-1: Add NVIDIA copyright header.

All test sources should also prepend the current-year NVIDIA copyright.

tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)

684-699: Parameter ‘sampler’ is now unused; simplify signature and document behavior.

You’ve removed TorchSampler-specific gating, so sampler is unused. Rename to “_sampler” to avoid linter warnings and clarify backward compatibility; keep the assignment solely based on overlap scheduler as you do now.

Apply this diff:

-    def set_exclude_last_generation_logits(self,
-                                           disable_overlap_scheduler: bool,
-                                           sampler: Sampler) -> None:
+    def set_exclude_last_generation_logits(self,
+                                           disable_overlap_scheduler: bool,
+                                           _sampler: Sampler) -> None:
         # When overlap scheduler is enabled then when starting to handle a new prompt,
@@
-        # getter is required.
-        self.should_exclude_last_generation_logits = not disable_overlap_scheduler
+        # getter is required.
+        self.should_exclude_last_generation_logits = not disable_overlap_scheduler
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

1485-1500: Document _handle_logits and avoid duplication of prefix-sum logic across components.

Function works, but:

  • Add a short docstring for maintainability.
  • Consider centralizing the “num_context_logits_prefix_sum” computation to avoid duplicated logic (also computed in TRTLLMSampler) and ensure future changes don’t diverge.

Apply this diff to add a docstring now:

 @nvtx_range("_handle_logits")
 def _handle_logits(self, scheduled_batch, batch_outputs):
+    """Handle context/generation logits outside Sampler.
+
+    Builds per-context prefix sums and delegates tensor marshalling to HandleLogits,
+    using the batch logits and beam width derived from the Sampler.
+    No-op if no request asks for logits.
+    """
     if any(r.py_return_context_logits or r.py_return_generation_logits
            for r in scheduled_batch.all_requests()):
         num_context_logits_prefix_sum = [0]
         prefix_sum = 0
         for request in scheduled_batch.context_requests:
             prefix_sum += request.context_chunk_size if request.py_return_context_logits else 1
             num_context_logits_prefix_sum.append(prefix_sum)

If you want, I can factor this prefix-sum logic into a small helper and update both call sites.
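
For illustration, such a helper might look like the sketch below. The function name and placement are assumptions; the per-request attributes mirror the inline loop shown above.

from typing import List

def context_logits_prefix_sum(context_requests) -> List[int]:
    # Cumulative count of context logits per request, starting at 0.
    # A request that does not return context logits contributes a single slot,
    # mirroring the inline loop in _handle_logits above.
    prefix_sum = [0]
    total = 0
    for request in context_requests:
        total += (request.context_chunk_size
                  if request.py_return_context_logits else 1)
        prefix_sum.append(total)
    return prefix_sum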


752-756: Avoid dynamic attributes on dataclasses for logits.

Assigning sample_state.host.logits at runtime works in Python but breaks type-checkers and contracts implied by SampleStateTensors. Consider introducing a dedicated optional field on the host tensor type used in PP, or a lightweight wrapper, to keep the interface explicit.

I can propose a minimal type-safe wrapper to carry transient logits for PP without reintroducing logits into all Sampler paths.
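
As a sketch only, such a wrapper could be a small dataclass with an explicit optional field; the name PPSampleStateTensors and the exact field set are assumptions, not the PR's code.

from dataclasses import dataclass
from typing import Optional

import torch

@dataclass
class PPSampleStateTensors:
    # Host-side tensors for the pipeline-parallel path, with logits declared
    # explicitly instead of being attached dynamically at runtime.
    new_tokens: torch.Tensor
    log_probs: Optional[torch.Tensor] = None
    logits: Optional[torch.Tensor] = None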

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8416d7f and bfd6ffd.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
  • tensorrt_llm/_torch/pyexecutor/sampler.py (6 hunks)
  • tests/unittest/_torch/test_return_logits.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/unittest/_torch/test_return_logits.py
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/unittest/_torch/test_return_logits.py
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
🔇 Additional comments (7)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)

82-83: EarlyStopSampler host state aligns with refactor.

Host tensors now carry only new_tokens/log_probs. This minimal host SampleStateTensors is appropriate for encoder-only usage.


415-416: Using handle_logprobs in both context and generation flows is correct.

Centralizing the logprob append keeps TorchSampler consistent with the unified logits-handling path elsewhere.

Also applies to: 427-428

tests/unittest/_torch/test_return_logits.py (1)

15-15: Removing @force_ampere: confirm the scheduling intent and update comment.

The comment says “Save H100 resource,” but removing the Ampere-forcing decorator may allow this test to land on H100. If the intent is to avoid H100 usage, keep the decorator; if the intent is to free H100 and allow Ampere, adjust the comment accordingly.

Would you like me to update the comment to reflect the actual intent or restore the decorator for this test?

tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)

18-18: Import update looks good.

Switching to import only Sampler is consistent with the refactor that removed TorchSampler-specific gating.

tensorrt_llm/_torch/pyexecutor/py_executor.py (3)

41-41: HandleLogits integration is aligned with the unification goal.

Importing HandleLogits here (instead of wiring in Sampler) keeps Sampler focused on sampling.


178-179: Store max_num_sequences on executor: LGTM.

Needed by the new logits handling path; no concerns.


1476-1479: Triggering _handle_logits before sampling is the right place.

This preserves the previous flow while decoupling logits handling from sampling. Good guard via batch_outputs is present.

@tensorrt-cicd
Collaborator

PR_Github #15138 [ run ] triggered by Bot

@dcampora dcampora force-pushed the user/dcampora/handle_logits_py_executor branch from fba67b9 to dd573af on August 13, 2025 16:26
@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15163 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15138 [ run ] completed with state ABORTED

@tensorrt-cicd
Collaborator

PR_Github #15163 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11451 completed with status: 'FAILURE'

@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15200 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15200 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11480 completed with status: 'FAILURE'

@dcampora dcampora force-pushed the user/dcampora/handle_logits_py_executor branch from dd573af to 70bf47d on August 18, 2025 05:39
@dcampora
Collaborator Author

/bot run

@dcampora dcampora force-pushed the user/dcampora/handle_logits_py_executor branch from 30a4daf to e41f200 on August 20, 2025 08:21
@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15896 [ run ] triggered by Bot

@dcampora dcampora force-pushed the user/dcampora/handle_logits_py_executor branch 3 times, most recently from e41f200 to 79235b0 on August 20, 2025 08:51
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15919 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15917 [ run ] completed with state ABORTED

Collaborator

@mikeiovine mikeiovine left a comment


All speculative decoding related code looks good to me - leaving rest of the review for others with more context

@tensorrt-cicd
Collaborator

PR_Github #15919 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11963 completed with status: 'FAILURE'

Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15953 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15953 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11989 completed with status: 'FAILURE'

Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #16001 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #16001 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12027 completed with status: 'FAILURE'

Collaborator

@yuxianq yuxianq left a comment


The pp+logits part LGTM.

dcampora and others added 5 commits August 21, 2025 10:40
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: Daniel Cámpora <961215+dcampora@users.noreply.github.com>
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: Daniel Cámpora <961215+dcampora@users.noreply.github.com>
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #16050 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #16050 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12070 completed with status: 'FAILURE'

@dcampora
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #16080 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #16080 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12092 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@dcampora dcampora merged commit 099f081 into NVIDIA:main Aug 22, 2025
4 checks passed