[TRTLLM-7155][feat] Unify sampler handle logits implementation. #6867
Conversation
📝 Walkthrough

Updates integrate centralized logits handling into py_executor, adjust exclusion logic in the request queue, refactor the sampler to remove logits from host state and unify beam-width computation, and simplify the HandleLogits API by computing context prefix sums internally. Public APIs are mostly unchanged except for the HandleLogits signature and a Sampler utility addition.
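The "Sampler utility addition" referred to above is a beam-width helper used by the new logits path. A minimal sketch of what such a helper could look like, assuming each request exposes `sampling_config.beam_width` (the actual implementation in this PR may differ):

```python
from typing import Iterable


def beam_width(requests: Iterable) -> int:
    """Derive the single beam width shared by a batch of requests.

    Assumes each request carries a sampling_config with a beam_width attribute
    (defaulting to 1 when absent); mixed widths within one batch are not expected.
    """
    widths = {getattr(r.sampling_config, "beam_width", 1) for r in requests}
    assert len(widths) <= 1, f"Mixed beam widths in one batch: {widths}"
    return widths.pop() if widths else 1
```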
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler as py_executor._sample_async
    participant Handler as _handle_logits
    participant HL as HandleLogits
    participant Sampler as Sampler
    Scheduler->>Handler: _handle_logits(scheduled_batch, batch_outputs)
    alt logits requested
        Handler->>Sampler: beam_width(scheduled_batch.all_requests())
        Handler->>HL: __call__(context_requests, generation_requests, logits, beam_width)
        HL-->>Handler: processed logits attached to requests
    else no logits requested
        Handler-->>Scheduler: skip
    end
    Scheduler->>Sampler: sample_async(...)
```
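For readers following the diagram, here is a rough Python sketch of the new wiring. Names such as `self.sampler`, `self.handle_logits`, and `batch_outputs["logits"]` are illustrative assumptions, not the PR's exact code:

```python
def _handle_logits(self, scheduled_batch, batch_outputs):
    # No-op unless at least one request asked for context or generation logits.
    if not any(r.py_return_context_logits or r.py_return_generation_logits
               for r in scheduled_batch.all_requests()):
        return
    # Derive the beam width once via the Sampler utility shown in the diagram.
    beam_width = self.sampler.beam_width(scheduled_batch.all_requests())
    # HandleLogits then attaches the relevant logits slices to each request.
    self.handle_logits(scheduled_batch.context_requests,
                       scheduled_batch.generation_requests,
                       batch_outputs["logits"], beam_width)
```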
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
/bot run
Actionable comments posted: 1
🔭 Outside diff range comments (4)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
1-1: Prepend NVIDIA copyright header (current year). Per coding guidelines, all source files must start with the NVIDIA copyright header.
Apply at the top of the file.
373-381: Bug: request.seq_slot should be request.py_seq_slot. LlmRequest elsewhere uses py_seq_slot; request.seq_slot likely does not exist and will raise AttributeError during speculative acceptance/writeback.
Apply this diff:
```diff
-                new_tokens[i, request.seq_slot, self.BEAM] = new_token
+                new_tokens[i, request.py_seq_slot, self.BEAM] = new_token
@@
-                new_tokens[num_accepted, request.seq_slot,
+                new_tokens[num_accepted, request.py_seq_slot,
                            self.BEAM] = new_token
```

Also applies to: 391-399
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)
1-1: Missing NVIDIA copyright header. Per guidelines, prepend the NVIDIA copyright header.
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
1-1: Insert NVIDIA copyright header. This file is missing the required header.
🧹 Nitpick comments (5)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
311-325: handle_logprobs builds correct per-token logprobs; minor robustness tweaks. Logic is sound for TorchSampler (beam_width=1). Consider guarding against count > available length and documenting semantics to match request.py_result expectations.
Apply this diff to tighten safety and clarity:
```diff
-    def handle_logprobs(self, request: LlmRequest, state: SampleState, *, beam: int, count: int):
+    def handle_logprobs(self, request: LlmRequest, state: SampleState, *, beam: int, count: int):
+        """Append per-step token logprobs for the latest `count` steps (TorchSampler assumes beam=0)."""
         current_slice = slice(0, count), request.py_seq_slot, beam
         if request.py_return_log_probs:
             assert state.host.log_probs is not None
-            log_probs = state.host.log_probs[request.py_seq_slot][beam][:count]
+            lp = state.host.log_probs[request.py_seq_slot][beam]
+            assert lp.numel() >= count, f"Requested {count} logprobs, only {lp.numel()} available"
+            log_probs = lp[:count]
             current_tokens = state.host.new_tokens[current_slice]
             token_log_probs = [{
                 int(token): Logprob(logprob=logprob, rank=1)
             } for token, logprob in zip(current_tokens, log_probs.tolist())]
             assert beam == 0, "The following call relies on beam_width to be 1 - hence the list with a single element"
             request.py_result.append_log_probs([token_log_probs])
```

tests/unittest/_torch/test_return_logits.py (1)
1-1: Add NVIDIA copyright header. All test sources should also prepend the current-year NVIDIA copyright.
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)
684-699: Parameter ‘sampler’ is now unused; simplify signature and document behavior. You’ve removed TorchSampler-specific gating, so sampler is unused. Rename it to “_sampler” to avoid linter warnings and clarify backward compatibility; keep the assignment based solely on the overlap scheduler, as you do now.
Apply this diff:
```diff
-    def set_exclude_last_generation_logits(self,
-                                           disable_overlap_scheduler: bool,
-                                           sampler: Sampler) -> None:
+    def set_exclude_last_generation_logits(self,
+                                           disable_overlap_scheduler: bool,
+                                           _sampler: Sampler) -> None:
         # When overlap scheduler is enabled then when starting to handle a new prompt,
@@
-        # getter is required.
-        self.should_exclude_last_generation_logits = not disable_overlap_scheduler
+        # getter is required.
+        self.should_exclude_last_generation_logits = not disable_overlap_scheduler
```

tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
1485-1500: Document _handle_logits and avoid duplication of prefix-sum logic across components. Function works, but:
- Add a short docstring for maintainability.
- Consider centralizing the “num_context_logits_prefix_sum” computation to avoid duplicated logic (also computed in TRTLLMSampler) and ensure future changes don’t diverge.
Apply this diff to add a docstring now:
```diff
     @nvtx_range("_handle_logits")
     def _handle_logits(self, scheduled_batch, batch_outputs):
+        """Handle context/generation logits outside Sampler.
+
+        Builds per-context prefix sums and delegates tensor marshalling to HandleLogits,
+        using the batch logits and beam width derived from the Sampler.
+        No-op if no request asks for logits.
+        """
         if any(r.py_return_context_logits or r.py_return_generation_logits
                for r in scheduled_batch.all_requests()):
             num_context_logits_prefix_sum = [0]
             prefix_sum = 0
             for request in scheduled_batch.context_requests:
                 prefix_sum += request.context_chunk_size if request.py_return_context_logits else 1
                 num_context_logits_prefix_sum.append(prefix_sum)
```

If you want, I can factor this prefix-sum logic into a small helper and update both call sites.
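A possible shape for that helper, as a sketch only; the name `compute_context_logits_prefix_sum` and its placement are assumptions, not part of this PR:

```python
from typing import List


def compute_context_logits_prefix_sum(context_requests) -> List[int]:
    """Prefix sums of per-request context logit counts.

    A request that asks for context logits contributes its full context chunk;
    otherwise only its single last-token logit is counted.
    """
    prefix_sum = [0]
    running = 0
    for request in context_requests:
        running += (request.context_chunk_size
                    if request.py_return_context_logits else 1)
        prefix_sum.append(running)
    return prefix_sum
```

Both _handle_logits and TRTLLMSampler could then share this single implementation.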
752-756: Avoid dynamic attributes on dataclasses for logits. Assigning sample_state.host.logits at runtime works in Python but breaks type-checkers and the contracts implied by SampleStateTensors. Consider introducing a dedicated optional field on the host tensor type used in PP, or a lightweight wrapper, to keep the interface explicit.
I can propose a minimal type-safe wrapper to carry transient logits for PP without reintroducing logits into all Sampler paths.
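A minimal sketch of the dedicated-field option, assuming the existing host tensors carry new_tokens/log_probs; the class name SampleStateTensorsHostPP is hypothetical, not the PR's actual type:

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class SampleStateTensorsHostPP:
    """Host-side sample tensors for the pipeline-parallel path.

    Adds an explicit, optional logits field so the attribute is part of the
    declared type instead of being attached dynamically at runtime.
    """
    new_tokens: torch.Tensor
    log_probs: Optional[torch.Tensor] = None
    logits: Optional[torch.Tensor] = None
```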
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2 hunks)
tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
tensorrt_llm/_torch/pyexecutor/sampler.py (6 hunks)
tests/unittest/_torch/test_return_logits.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else
Files:
tests/unittest/_torch/test_return_logits.py
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/pyexecutor/sampler.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
tests/unittest/_torch/test_return_logits.py
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/pyexecutor/sampler.py
🔇 Additional comments (7)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
82-83: EarlyStopSampler host state aligns with refactor. Host tensors now carry only new_tokens/log_probs. This minimal host SampleStateTensors is appropriate for encoder-only usage.
415-416: Using handle_logprobs in both context and generation flows is correct. Centralizing the logprob append keeps TorchSampler consistent with the unified logits-handling path elsewhere.
Also applies to: 427-428
tests/unittest/_torch/test_return_logits.py (1)
15-15: Removing @force_ampere: confirm the scheduling intent and update the comment. The comment says “Save H100 resource,” but removing the Ampere-forcing decorator may allow this test to land on H100. If the intent is to avoid H100 usage, keep the decorator; if the intent is to free H100 and allow Ampere, adjust the comment accordingly.
Would you like me to update the comment to reflect the actual intent or restore the decorator for this test?
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)
18-18: Import update looks good. Switching to importing only Sampler is consistent with the refactor that removed TorchSampler-specific gating.
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
41-41: HandleLogits integration is aligned with the unification goal. Importing HandleLogits here (instead of wiring it in Sampler) keeps Sampler focused on sampling.
178-179: Store max_num_sequences on executor: LGTM. Needed by the new logits handling path; no concerns.
1476-1479: Triggering _handle_logits before sampling is the right place. This preserves the previous flow while decoupling logits handling from sampling. A good guard via batch_outputs is present.
PR_Github #15138 [ run ] triggered by Bot
fba67b9 to dd573af (Compare)
/bot run
PR_Github #15163 [ run ] triggered by Bot
PR_Github #15138 [ run ] completed with state
PR_Github #15163 [ run ] completed with state
/bot run
PR_Github #15200 [ run ] triggered by Bot
PR_Github #15200 [ run ] completed with state
dd573af to 70bf47d (Compare)
/bot run
30a4daf to e41f200 (Compare)
/bot run
PR_Github #15896 [ run ] triggered by Bot
e41f200 to 79235b0 (Compare)
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
/bot run
PR_Github #15919 [ run ] triggered by Bot
PR_Github #15917 [ run ] completed with state
All speculative-decoding-related code looks good to me; leaving the rest of the review for others with more context.
PR_Github #15919 [ run ] completed with state
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
/bot run
PR_Github #15953 [ run ] triggered by Bot
PR_Github #15953 [ run ] completed with state
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
/bot run
PR_Github #16001 [ run ] triggered by Bot
PR_Github #16001 [ run ] completed with state
The pp+logits part LGTM.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> Signed-off-by: Daniel Cámpora <961215+dcampora@users.noreply.github.com>
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com> Signed-off-by: Daniel Cámpora <961215+dcampora@users.noreply.github.com>
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
/bot run
PR_Github #16050 [ run ] triggered by Bot
PR_Github #16050 [ run ] completed with state
/bot run
PR_Github #16080 [ run ] triggered by Bot
PR_Github #16080 [ run ] completed with state
Summary by CodeRabbit
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.