[TRTLLM-4517] [feat] Additional model outputs by Funatiq · Pull Request #7206 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@Funatiq (Collaborator) commented Aug 25, 2025

Summary by CodeRabbit

  • New Features

    • Support requesting and returning additional model outputs for both context and generation; exposed on result objects.
    • Advanced quickstart adds --additional_model_outputs to print selected extra outputs.
    • Sampling parameters and LLM now accept additional_model_outputs as a list of strings (see the usage sketch after this summary).
  • Improvements

    • More flexible logits handling across models, including encoder-only multimodal scenarios.
  • Tests

    • Added unit and integration tests covering additional model outputs.
    • Updated API stability references to include new result fields and parameter types.
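
The end-to-end flow looks roughly like the sketch below. The model path and the output name "hidden_states" are illustrative assumptions; which names are available depends on what the model actually returns:

from tensorrt_llm import LLM, SamplingParams

# Hedged sketch: per this PR, additional_model_outputs is a plain list of
# output names; "hidden_states" is an example name, not a guaranteed output.
llm = LLM(model="/path/to/model",
          additional_model_outputs=["hidden_states"])
sampling_params = SamplingParams(max_tokens=8,
                                 additional_model_outputs=["hidden_states"])

for request_output in llm.generate(["Hello world"], sampling_params):
    completion = request_output.outputs[0]
    # New optional CompletionOutput fields: Optional[Dict[str, torch.Tensor]]
    print(completion.additional_context_outputs)
    print(completion.additional_generation_outputs)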

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
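
For example, a typical invocation combining the flags documented above (the stage name is illustrative):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"

This runs only the named stage with fail-fast disabled; as noted above, stage-list runs do not update the GitHub check status.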

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without due care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without due care and validation can break the top of tree.

@coderabbitai bot (Contributor) commented Aug 25, 2025

📝 Walkthrough

Walkthrough

Reworks the model forward path to normalize logits-containing outputs (wrapping bare tensors as {'logits': ...}, extracting logits from dicts, and applying gather_ids post-forward) and defers the without_logits decision until after the forward pass. Adds a multimodal encoder-only path with an updated typed signature that returns {'mm_embeddings': ..., 'logits': None}. Introduces plumbing and runtime handling for arbitrary additional model outputs across requests, sampling, result types, and the example CLI.
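
For illustration, a minimal sketch of this normalization, assuming the function and argument names below (they are not the actual _forward_step code):

from typing import Any, Dict, Optional

import torch


def normalize_outputs(outputs: Any,
                      gather_ids: Optional[torch.Tensor] = None) -> Dict[str, Any]:
    """Wrap bare tensors as {'logits': ...} and gather the requested rows."""
    if not isinstance(outputs, dict):
        # Models returning a bare tensor are treated as returning logits.
        outputs = {'logits': outputs}
    logits = outputs.get('logits')
    if logits is not None and gather_ids is not None:
        # Post-forward gather (e.g., for speculative decoding); indices are
        # cast to int64, which tensor indexing always accepts.
        outputs['logits'] = logits.index_select(0, gather_ids.long())
    return outputs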

Changes

Cohort / File(s) — Summary

  • Model forward normalization & multimodal encoder-only — tensorrt_llm/_torch/pyexecutor/model_engine.py: Reworked the forward path to call the model first, normalize outputs into a dict with logits when appropriate, apply gather_ids to outputs['logits'], and handle without_logits after forward. Added a typed _forward_step_mm_encoder_only(self, inputs: Dict[str, Any], scheduled_requests: ScheduledRequests) -> Dict[str, Any] path that builds multimodal_params, calls self.model.forward, computes mm_embeddings, splits/aligns embeddings per scheduled request, and returns {'mm_embeddings': ..., 'logits': None}.

  • Additional outputs handler — tensorrt_llm/_torch/pyexecutor/handle_additional_outputs.py: New HandleAdditionalOutputs class with __call__(context_requests, generation_requests, outputs, beam_width, num_context_tokens) that aggregates requested extra output names, determines context vs. generation slicing, and appends per-request additional context/generation tensors via request append methods; includes KV-cache warnings and assertions. A hedged sketch of this flow follows the list.

  • Request/result plumbing for extra outputs — tensorrt_llm/_torch/pyexecutor/llm_request.py: Added additional_outputs: Optional[List[str]] through the PyResult and LlmRequest constructors, internal storage fields _additional_context_outputs / _additional_generation_outputs, append methods, and properties exposing the aggregated additional_context_outputs / additional_generation_outputs. executor_request_to_llm_request now forwards configured additional model outputs.

  • Executor sampling integration — tensorrt_llm/_torch/pyexecutor/py_executor.py: Imported HandleAdditionalOutputs; computes num_context_tokens and beam_width once; after logits handling, calls HandleAdditionalOutputs(...) to populate extra outputs for context and generation requests; minor exception-message tweak.

  • API surface: SamplingParams — tensorrt_llm/sampling_params.py, tests/unittest/api_stability/references/sampling_params.yaml: Removed the AdditionalModelOutput dataclass; changed SamplingParams.additional_model_outputs from Optional[List[AdditionalModelOutput]] to Optional[List[str]]; the internal conversion to tllme.AdditionalModelOutput is retained. Updated the API-stability reference to reflect the string-based parameter.

  • Executor result schema — tensorrt_llm/executor/result.py, tests/unittest/api_stability/references_committed/completion_output.yaml: Added optional fields additional_context_outputs and additional_generation_outputs: Optional[Dict[str, torch.Tensor]] to CompletionOutput and populated them where available in _handle_sequence.

  • Examples / CLI — examples/llm-api/quickstart_advanced.py: Added a --additional_model_outputs CLI option and threaded additional_model_outputs through LLM and SamplingParams construction and through to printing of the returned additional context/generation outputs.

  • Tests: integration & unit — tests/unittest/llmapi/test_additional_model_outputs.py, tests/integration/test_lists/test-db/l0_a10.yml: New comprehensive unit tests adding a dummy model/config/loader, exercising SamplingParams.additional_model_outputs end to end and asserting shapes/values of context and generation additional outputs. Added a new integration test entry.
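
As a companion to the handler summary above, here is a minimal, hypothetical sketch of the batch-slicing flow. The attribute and method names (additional_outputs, context_len, append_additional_*_output) mirror the change description but are assumptions, and the context-detection heuristic is illustrative rather than the repository implementation:

from typing import Dict, Sequence

import torch


class HandleAdditionalOutputs:
    """Illustrative sketch: hand each request its slice of the extra outputs."""

    def __call__(self, context_requests: Sequence, generation_requests: Sequence,
                 outputs: Dict[str, torch.Tensor], beam_width: int,
                 num_context_tokens: int) -> None:
        # Aggregate the output names requested by any request in the batch.
        requested = set()
        for req in list(context_requests) + list(generation_requests):
            requested.update(req.additional_outputs or [])

        for name in requested:
            tensor = outputs[name]
            # Assumed heuristic: if the leading dimension covers every context
            # token, the output was gathered for context tokens as well.
            has_context = tensor.shape[0] >= num_context_tokens

            offset = 0
            for req in context_requests:
                step = req.context_len if has_context else 1
                if name in (req.additional_outputs or []):
                    req.append_additional_context_output(
                        name, tensor[offset:offset + step])
                offset += step

            for req in generation_requests:
                if name in (req.additional_outputs or []):
                    req.append_additional_generation_output(
                        name, tensor[offset:offset + beam_width])
                offset += beam_width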

Sequence Diagram(s)

sequenceDiagram
  actor Caller
  participant Engine as _forward_step
  participant Model as model.forward

  Caller->>Engine: _forward_step(inputs, without_logits, gather_ids)
  Engine->>Model: model.forward(inputs, return_context_logits=...)
  Model-->>Engine: outputs (dict or tensor)

  alt outputs is dict
    Engine->>Engine: logits = outputs.get('logits', None)
    alt logits is None
      Engine-->>Caller: outputs
    else
      alt gather_ids provided
        Engine->>Engine: outputs['logits'] = logits[gather_ids]
      end
      alt without_logits True
        Engine-->>Caller: outputs
      else
        Engine-->>Caller: outputs
      end
    end
  else outputs is tensor
    Engine->>Engine: outputs = {'logits': outputs}
    alt gather_ids provided
      Engine->>Engine: outputs['logits'] = outputs['logits'][gather_ids]
    end
    Engine-->>Caller: outputs
  end
sequenceDiagram
  actor Caller
  participant Engine as _forward_step_mm_encoder_only
  participant Model as model.forward (multimodal)

  Caller->>Engine: _forward_step_mm_encoder_only(multimodal_inputs, scheduled_requests)
  Engine->>Engine: validate/construct multimodal_params
  Engine->>Model: model.forward(multimodal_params)
  Model-->>Engine: mm_embeddings (tensor or empty)
  Engine->>Engine: split/chunk embeddings per scheduled requests
  Engine-->>Caller: {'mm_embeddings': [...], 'logits': None}
sequenceDiagram
  actor Sampler
  participant Executor as _sample_async
  participant LogitsHandler as HandleLogits
  participant AddHandler as HandleAdditionalOutputs
  participant Requests as ScheduledRequests

  Sampler->>Executor: trigger sampling loop
  Executor->>LogitsHandler: process logits -> generation context
  LogitsHandler-->>Executor: batch_outputs (logits + other)
  Executor->>AddHandler: HandleAdditionalOutputs(context_reqs, gen_reqs, batch_outputs, beam_width, num_context_tokens)
  AddHandler-->>Requests: append_additional_context_outputs / append_additional_generation_outputs per request
  AddHandler-->>Executor: confirmation
  Executor-->>Sampler: continue sampling with augmented results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • chzblych

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9298f1b and bbc1b31.

📒 Files selected for processing (11)
  • examples/llm-api/quickstart_advanced.py (3 hunks)
  • tensorrt_llm/_torch/pyexecutor/handle_additional_outputs.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/llm_request.py (8 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (2 hunks)
  • tensorrt_llm/executor/result.py (3 hunks)
  • tensorrt_llm/sampling_params.py (3 hunks)
  • tests/integration/test_lists/test-db/l0_a10.yml (1 hunks)
  • tests/unittest/api_stability/references/sampling_params.yaml (1 hunks)
  • tests/unittest/api_stability/references_committed/completion_output.yaml (1 hunks)
  • tests/unittest/llmapi/test_additional_model_outputs.py (1 hunks)


@Funatiq (Collaborator, Author) commented Aug 25, 2025

/bot run

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

2318-2325: Update stale comment to match new behavior and fix typo.

The code no longer “returns all the logits” early; it computes, then optionally gathers below. Also, there’s a duplicated “the”.

-        # For simplicity, just return all the the logits if we have special gather_ids
-        # from speculative decoding.
+        # Run the forward pass. If gather_ids (speculative decoding) or gather_context_logits
+        # is set, ask the model to return context logits; we will gather below as needed.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a1e03af and fc7cda4.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
tensorrt_llm/runtime/generation.py (1)
  • gather_context_logits (1213-1214)
tensorrt_llm/_torch/speculative/interface.py (1)
  • without_logits (53-54)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd (Collaborator)

PR_Github #16417 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16417 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12341 completed with status: 'FAILURE'

@Funatiq force-pushed the dev/feat/additional_outputs branch 2 times, most recently from 49e64b3 to 1c3ad6e on August 27, 2025 09:35
@Funatiq (Collaborator, Author) commented Aug 27, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16671 [ run ] triggered by Bot

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

2252-2254: Always return a dict (with 'logits') in without_logits mode to preserve the contract; current return type can break callers.

This reintroduces the same contract issue previously flagged: _forward_step is annotated to return Dict[str, Any] but may return a Tensor here; many sites index outputs['logits']. Fix by normalizing to a dict.

Apply this diff:

-        if self.without_logits:
-            return outputs
+        if self.without_logits:
+            # Preserve downstream contract: always return a dict with 'logits'
+            if isinstance(outputs, dict):
+                normalized = dict(outputs)
+                normalized['logits'] = None
+                return normalized
+            return {'logits': None}
🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)

2246-2251: Comment is misleading vs. implementation; clarify behavior.

You request context logits from the model, then gather rows. The comment claims “return all the logits”. Update the comment to reflect gathering behavior.


2255-2261: Broaden dict handling to Mapping for HF-style ModelOutput.

Some model outputs are Mapping-like (e.g., transformers’ ModelOutput). Checking Mapping is safer than dict.

Apply this minimal change:

-        if isinstance(outputs, dict):
+        from collections.abc import Mapping
+        if isinstance(outputs, Mapping):
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between fc7cda4 and 1c3ad6e.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (3)
tensorrt_llm/runtime/generation.py (1)
  • gather_context_logits (1213-1214)
tensorrt_llm/_torch/speculative/interface.py (1)
  • without_logits (53-54)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
  • get (100-109)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

2236-2271: Verify all downstream outputs['logits'] usages after normalization changes

The recent contract update in _forward_step may alter the shape or values returned in outputs['logits']. Please review each call site to ensure it correctly handles any reshaped or re‐normalized logits:

• Unit tests (must still pass):

  • tests/unittest/trt/model/test_phi.py
  • tests/unittest/trt/model/test_nemotron_nas.py
  • tests/unittest/trt/model/test_mistral.py
  • tests/unittest/trt/model/test_mamba.py
  • tests/unittest/trt/model/test_gpt.py
  • tests/unittest/trt/model/test_llama.py

• Generation pipeline:

  • tensorrt_llm/runtime/generation.py (around lines 950, 1852, 1860, 1882, 2243, 2536, 3594, 3611, 3653, 3659, 3793)

• PyExecutor integration:

  • tensorrt_llm/_torch/pyexecutor/py_executor.py (lines 780, 980, 1099, 1523)

• Sampler logic:

  • tensorrt_llm/_torch/pyexecutor/sampler.py (around lines 680–683, 952)

• Speculative drafter:

  • tensorrt_llm/_torch/speculative/model_drafter.py (lines 277, 371, 393)

• Model engine itself:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py (lines 2268, 2349)

• Triton backend adapter:

  • triton_backend/all_models/inflight_batcher_llm/tensorrt_llm/1/model.py (line 266)

• Example scripts:

  • examples/summarize.py (line 491)

At each site, confirm that the code correctly indexes, reshapes, and interprets the logits (dtype, device, dimensions, normalization).

@tensorrt-cicd (Collaborator)

PR_Github #16671 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12513 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Aug 27, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16699 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16699 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12534 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Aug 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #16837 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16837 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12640 completed with status: 'FAILURE'

@Funatiq force-pushed the dev/feat/additional_outputs branch from 1c3ad6e to 9efe9d2 on August 28, 2025 18:02
@Funatiq (Collaborator, Author) commented Aug 28, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16895 [ run ] triggered by Bot

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
tensorrt_llm/_torch/pyexecutor/model_engine.py (5)

395-401: Allocate gather_ids_cuda as torch.long (required for indexing).

Prevents CUDA advanced-indexing errors and avoids per-step casts.

-            self.gather_ids_cuda = torch.empty((self.max_num_tokens, ),
-                                               dtype=torch.int,
-                                               device='cuda')
+            self.gather_ids_cuda = torch.empty((self.max_num_tokens, ),
+                                               dtype=torch.long,
+                                               device='cuda')
-            self.previous_pos_indices_cuda = torch.empty(
-                (self.max_num_tokens, ), dtype=torch.int, device='cuda')
+            self.previous_pos_indices_cuda = torch.empty(
+                (self.max_num_tokens, ), dtype=torch.long, device='cuda')

1517-1519: Copy gather_ids as torch.long to match device buffer.

-            self.gather_ids_cuda[:len(gather_ids)].copy_(torch.tensor(
-                gather_ids, dtype=torch.int, pin_memory=True),
-                                                         non_blocking=True)
+            self.gather_ids_cuda[:len(gather_ids)].copy_(
+                torch.tensor(gather_ids, dtype=torch.long, pin_memory=True),
+                non_blocking=True)

1406-1411: previous_batch_indices used for GPU indexing must be long.

previous_slots (copied from previous_batch_indices_host) indexes CUDA tensors below; keep both host tensor and device buffer as torch.long.

-            previous_batch_indices_host = torch.tensor(previous_batch_indices,
-                                                       dtype=torch.int,
-                                                       pin_memory=True)
+            previous_batch_indices_host = torch.tensor(
+                previous_batch_indices, dtype=torch.long, pin_memory=True)

Additionally, update the device buffer allocation (Line 437–439):

-        self.previous_batch_indices_cuda = torch.empty((self.max_num_tokens, ),
-                                                       dtype=torch.int,
-                                                       device='cuda')
+        self.previous_batch_indices_cuda = torch.empty((self.max_num_tokens, ),
+                                                       dtype=torch.long,
+                                                       device='cuda')

1455-1460: previous_pos_indices used for GPU indexing must be long.

This indexes new_tokens_lens_device; use torch.long on host copy.

-                previous_pos_indices_host = torch.tensor(previous_pos_indices,
-                                                         dtype=torch.int,
-                                                         pin_memory=True)
+                previous_pos_indices_host = torch.tensor(
+                    previous_pos_indices, dtype=torch.long, pin_memory=True)

1690-1692: Use torch.long for gather_ids in no-cache path too.

-            self.gather_ids_cuda[:len(gather_ids)].copy_(torch.tensor(
-                gather_ids, dtype=torch.int, pin_memory=True),
-                                                         non_blocking=True)
+            self.gather_ids_cuda[:len(gather_ids)].copy_(
+                torch.tensor(gather_ids, dtype=torch.long, pin_memory=True),
+                non_blocking=True)
♻️ Duplicate comments (2)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)

2252-2254: Always return a dict when without_logits is True (preserve contract).

Callers commonly index outputs['logits']; returning a raw Tensor/non-dict here breaks the type contract. Return a dict with 'logits': None (and keep other fields if present).

-        if self.without_logits:
-            return outputs
+        if self.without_logits:
+            # Preserve downstream contract: always return a dict with 'logits'
+            if isinstance(outputs, dict):
+                outputs['logits'] = None
+                return outputs
+            else:
+                return {'spec_outputs': outputs, 'logits': None}

2267-2269: Fix unsafe int32 advanced indexing; cast to long and use index_select.

On CUDA, index tensors must be torch.long. Using advanced indexing with int32 can error or crash.

-        if gather_ids is not None:
-            outputs['logits'] = logits[gather_ids]
+        if gather_ids is not None:
+            idx = gather_ids.long()
+            outputs['logits'] = logits.index_select(0, idx)
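
As a small, self-contained demonstration of the pattern recommended above (the tensor values are made up):

import torch

logits = torch.randn(6, 32)                            # rows to gather from
gather_ids = torch.tensor([0, 2, 5], dtype=torch.int)  # int32, as allocated

# Cast indices to int64 before device-side gathering; CUDA advanced indexing
# expects torch.long index tensors.
selected = logits.index_select(0, gather_ids.long())
assert selected.shape == (3, 32)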
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

2244-2251: Fix misleading comment and minor grammar.

Comment says “return all the the logits,” but the code gathers a subset. Update for accuracy.

-        # For simplicity, just return all the the logits if we have special gather_ids
-        # from speculative decoding.
+        # If gather_ids are provided (e.g., from speculative decoding), request context logits
+        # and gather the needed rows below.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 1c3ad6e and 9efe9d2.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{cpp,cc,cxx,cu,py,h,hpp,hh,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use spaces only; no tabs; indent by 4 spaces

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent with 4 spaces; no tabs
Keep module namespace on import; import the module, not individual names; use module.symbol
Python filenames use snake_case (e.g., some_file.py)
Class names use PascalCase
Function and method names use snake_case
Local variables use snake_case; if starting with a number, prefix with k_ (e.g., k_99th_percentile)
Global variables use UPPER_SNAKE with G_ prefix (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE
Avoid shadowing variables from outer scopes
Initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with trailing docstrings
Avoid reflection when simple alternatives exist (e.g., avoid dict(**locals()) patterns)
Limit except clauses to specific exceptions; avoid bare except
When duck-typing with try/except, keep try body minimal and use else for logic

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/_torch/pyexecutor/model_engine.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd (Collaborator)

PR_Github #16895 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12692 completed with status: 'FAILURE'

@Funatiq force-pushed the dev/feat/additional_outputs branch from 9efe9d2 to 3c3cd3b on August 29, 2025 09:48
@Funatiq (Collaborator, Author) commented Aug 29, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #16976 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #16976 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #12744 completed with status: 'FAILURE'

@Funatiq force-pushed the dev/feat/additional_outputs branch from 3c3cd3b to 2c4501d on September 3, 2025 11:25
@Funatiq (Collaborator, Author) commented Sep 3, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17530 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17530 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13179 completed with status: 'FAILURE'

- Deleted the `DummyModel` class and its configuration from modeling.
- Moved the `DummyModel` class to tests.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
- Removed the thread leak marker from the integration test.
- Wrapped the output generation in a context manager to ensure proper resource handling.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
- Add output check for generation output of model output that includes context.
- Make model outputs depend on position ids.
- Verify outputs based on token position.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
@Funatiq force-pushed the dev/feat/additional_outputs branch from 6f6af2b to f14e921 on October 7, 2025 10:39
@Funatiq (Collaborator, Author) commented Oct 7, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20726 [ run ] triggered by Bot

- Remove AdditionalModelOutput class, use str instead (Remove gather_context flag).
- Automatically detect whether to gather the additional outputs for context tokens or not.
- Update tests to reflect the changes.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
@tensorrt-cicd (Collaborator)

PR_Github #20726 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15661 completed with status: 'FAILURE'

- Updated the logic to collect additional outputs from context and generation requests.
- Only iterate over the requested additional outputs instead of all tensors.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
@Funatiq requested a review from a team as a code owner on October 8, 2025 09:45
@Funatiq (Collaborator, Author) commented Oct 8, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20783 [ run ] triggered by Bot

@Funatiq (Collaborator, Author) commented Oct 8, 2025

@coderabbitai review

@coderabbitai bot (Contributor) commented Oct 8, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@tensorrt-cicd (Collaborator)

PR_Github #20783 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15711 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@Funatiq requested a review from tijyojwad on October 8, 2025 15:28
@Funatiq (Collaborator, Author) commented Oct 9, 2025

@coderabbitai summary

@coderabbitai bot (Contributor) commented Oct 9, 2025

✅ Actions performed

Summary regeneration triggered.

@DomBrown (Collaborator) left a comment


LGTM regarding the API change.

- Move additional context and generation outputs to prototype in completion output schema.
- Update sampling params schema to set `additional_model_outputs` to prototype.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
@Funatiq force-pushed the dev/feat/additional_outputs branch from a245b32 to bc531ce on October 13, 2025 12:37
@Funatiq (Collaborator, Author) commented Oct 13, 2025

/bot skip --comment "Only changed API status flag"

@tensorrt-cicd (Collaborator)

PR_Github #21216 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21216 [ skip ] completed with state SUCCESS
Skipping testing for commit bc531ce

@Funatiq merged commit db8c63b into NVIDIA:main on Oct 13, 2025
5 checks passed
@Funatiq deleted the dev/feat/additional_outputs branch on October 13, 2025 13:33