[TRTLLM-7410][feat] Support hashing and KV cache reuse for videos #7360
Conversation
📝 Walkthrough

Introduces BaseMultimodalInputProcessor and integrates it into the LlavaNext and Qwen2VL input processors. Generalizes multimodal hashing and token-length computation to support both images and videos. Adjusts registry initialization and the hashing flow, updates exports, and extends unit tests to include video token counting and Mistral model coverage.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor U as User code
participant W as input_processor_wrapper
participant IP as BaseMultimodalInputProcessor
participant P as Processor/_processor
U->>W: preprocess(inputs with modalities)
W->>IP: try multimodal hashing & token lengths
IP->>P: _get_num_multimodal_tokens(modality, dims)
alt Success (single modality)
P-->>IP: {modality: [lengths]}
IP-->>W: normalized lengths (single modality)
W-->>U: proceed with cached mm-hash path
else Failure/Not supported
IP-->>W: raise / NotImplemented
W-->>U: fallback to basic processor path
end
```

```mermaid
sequenceDiagram
autonumber
actor C as Caller
participant Q as Qwen2VLInputProcessorBase
participant Cfg as model_config.vision_config
C->>Q: get_mrope_config(inputs)
Q->>Cfg: read temporal_patch_size / tokens_per_second
Q->>Q: get_rope_index(image/video grid, temporal dims)
Q-->>C: mrope_position_ids, mrope_position_deltas
```
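The diagrams compress the key new surface: a shared base class whose per-image and per-video token counters delegate to the wrapped processor. Below is a minimal sketch of that flow, assuming the names shown in the diagram (BaseMultimodalInputProcessor, _get_num_multimodal_tokens) and simplified signatures; the actual implementation lives in tensorrt_llm/inputs/registry.py.

```python
class TokenCounterSketch:
    """Illustrative stand-in for BaseMultimodalInputProcessor's defaults."""

    def __init__(self, processor):
        self._processor = processor  # e.g. an AutoProcessor-like object

    def get_num_tokens_per_image(self, *, image_width: int,
                                 image_height: int) -> int:
        # Ask the wrapped processor for per-modality lengths, then take the
        # single image's entry ({modality: [lengths]} per the diagram above).
        reply = self._processor._get_num_multimodal_tokens(
            "image", [(image_height, image_width)])
        return reply["image"][0]

    def get_num_tokens_per_video(self, *, video_width: int,
                                 video_height: int, num_frames: int) -> int:
        reply = self._processor._get_num_multimodal_tokens(
            "video", [(num_frames, video_height, video_width)])
        return reply["video"][0]
```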
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
tensorrt_llm/inputs/multimodal.py (3)
482-484: Return type/docstring mismatch, long line (E501), and avoid unnecessary PIL conversions.
- find_mm_token_lengths now returns a dict but the annotation and comment still say List[int].
- Break the long f-string to satisfy E501.
- When items are tensors, compute H/W from shape instead of converting to PIL; validate non-empty video list.
```diff
-def find_mm_token_lengths(mm_data: Dict[str, Any],
-                          input_processor: Any) -> List[int]:
-    """Get multimodal token lengths from multimodal data items. """
+def find_mm_token_lengths(mm_data: Dict[str, Any],
+                          input_processor: Any) -> Dict[str, List[int]]:
+    """Get per-modality multimodal token lengths from multimodal data items."""
@@
-        if not hasattr(input_processor, f"get_num_tokens_per_{modality}"):
-            raise AttributeError(
-                f"Input processor {type(input_processor).__name__} does not have 'get_num_tokens_per_{modality}' method required for multimodal hashing."
-            )
+        if not hasattr(input_processor, f"get_num_tokens_per_{modality}"):
+            raise AttributeError(
+                f"Input processor {type(input_processor).__name__} does not have "
+                f"'get_num_tokens_per_{modality}' required for multimodal hashing.")
@@
-            if modality == "image":
-                if isinstance(item, torch.Tensor):
-                    item = ToPILImage()(item)
-                num_tokens = input_processor.get_num_tokens_per_image(
-                    image_width=item.width,
-                    image_height=item.height,
-                )
+            if modality == "image":
+                if isinstance(item, torch.Tensor):
+                    h, w = int(item.shape[-2]), int(item.shape[-1])
+                else:
+                    w, h = item.width, item.height
+                num_tokens = input_processor.get_num_tokens_per_image(
+                    image_width=w,
+                    image_height=h,
+                )
                 modality_token_lengths.append(num_tokens)
             elif modality == "video":
-                assert isinstance(item, list), "Video must be a list of frames"
-                if isinstance(item[0], torch.Tensor):
-                    item = [ToPILImage()(frame) for frame in item]
-                num_tokens = input_processor.get_num_tokens_per_video(
-                    video_width=item[0].width,
-                    video_height=item[0].height,
-                    num_frames=len(item),
-                )
+                assert isinstance(item, list), "Video must be a list of frames"
+                if not item:
+                    raise ValueError("Video frame list is empty")
+                if isinstance(item[0], torch.Tensor):
+                    h, w = int(item[0].shape[-2]), int(item[0].shape[-1])
+                else:
+                    w, h = item[0].width, item[0].height
+                num_tokens = input_processor.get_num_tokens_per_video(
+                    video_width=w,
+                    video_height=h,
+                    num_frames=len(item),
+                )
                 modality_token_lengths.append(num_tokens)
@@
-    return num_mm_tokens  # flatten all mm instances to a single list
+    return num_mm_tokens  # mapping: modality -> list of lengths
```

Also applies to: 493-496, 501-518, 521-521
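For context, a hedged sketch of how the new mapping is consumed at the lone call site (per the note below about registry.py; variable names are illustrative):

```python
from tensorrt_llm.inputs.multimodal import find_mm_token_lengths

# mm_data and input_processor come from the wrapper's preprocess path.
num_mm_tokens = find_mm_token_lengths(mm_data, input_processor)
# e.g. {"video": [1024, 2048]} -- one length per video instance.
modality, lengths = next(iter(num_mm_tokens.items()))  # single-modality unpack
```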
423-430: Harden tensor serialization in serialize_item. Calling .numpy() on non-CPU tensors raises; also ensure contiguity. Update the tensor branch accordingly.
```python
def serialize_item(obj: object) -> bytes:
    ...
    if isinstance(obj, torch.Tensor):
        t = obj.detach()
        if t.is_sparse:
            t = t.coalesce().to_dense()
        if t.device.type != "cpu":
            t = t.to("cpu")
        return t.contiguous().numpy().tobytes()
```
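Because these bytes feed the multimodal hash, a quick round trip shows why normalizing device and layout matters. This is illustrative only; hashlib.sha256 stands in for whatever digest the KV-cache reuse path actually uses.

```python
import hashlib

import torch

from tensorrt_llm.inputs.multimodal import serialize_item

# With the hardened branch, the same content serializes to identical bytes
# whether the tensor lived on CPU or GPU, so the digest (and thus KV cache
# reuse) is device-independent.
frame = torch.randn(3, 224, 224)
digest = hashlib.sha256(serialize_item(frame)).hexdigest()
```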
482-484: Update find_mm_token_lengths signature and docstring.
- Change the return annotation in tensorrt_llm/inputs/multimodal.py from -> List[int] to -> Dict[str, List[int]] and revise its docstring to describe the mapping of modality names to token-length lists.
- Only one call site remains (in tensorrt_llm/inputs/registry.py at lines 424-426), which already unpacks the dict via next(iter(...)).

tensorrt_llm/inputs/registry.py (1)
311-331: Fix return types: methods promise Tuple[str, ...] but return generators. Wrap the generator expressions with tuple().
```diff
 def get_registered_image_model_types(self) -> Tuple[str, ...]:
-    return (
+    return tuple(
         model_type
         for model_type in self._multimodal_placeholder_by_model_type
         if "image" in
         self._multimodal_placeholder_by_model_type[model_type].placeholder_map)
@@
 def get_registered_video_model_types(self) -> Tuple[str, ...]:
-    return (
+    return tuple(
         model_type
         for model_type in self._multimodal_placeholder_by_model_type
         if "video" in
         self._multimodal_placeholder_by_model_type[model_type].placeholder_map)
@@
 def get_registered_audio_model_types(self) -> Tuple[str, ...]:
-    return (
+    return tuple(
         model_type
         for model_type in self._multimodal_placeholder_by_model_type
         if "audio" in
         self._multimodal_placeholder_by_model_type[model_type].placeholder_map)
```
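The underlying gotcha: parentheses around a generator expression do not create a tuple, so the annotated Tuple[str, ...] is never what callers receive. A two-liner makes it concrete:

```python
gen = (m for m in ["llava_next", "qwen2_vl"])       # generator object, not a tuple
tup = tuple(m for m in ["llava_next", "qwen2_vl"])  # ('llava_next', 'qwen2_vl')
assert not isinstance(gen, tuple) and isinstance(tup, tuple)
```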
tensorrt_llm/_torch/models/modeling_qwen2vl.py (6)

1-1: Add the required NVIDIA 2025 Apache-2.0 header block. Per repository guidelines, prepend the license header to this Python source file.
Apply this diff:
```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
```
224-226: Fix Python 3.8 typing: use typing.Dict/Any rather than PEP 585 builtins; also correct any to Any. dict[str, any] is invalid on 3.8, and any is a builtin function, not a type. Apply:
```diff
-    def _preprocess(self, text: dict[str, any], mm_data: dict[str, any],
+    def _preprocess(self, text: Dict[str, Any], mm_data: Dict[str, Any],
                     mm_processor_kwargs: Dict[str, Any]):
```
252-258: Fix Python 3.8 typing in return annotation. Replace PEP 585 dict[...] with Dict[...]. Apply:
```diff
     def get_mrope_config(
             self,
             input_ids: torch.IntTensor,
             image_grid_thw: torch.LongTensor,
             video_grid_thw: torch.LongTensor,
             attention_mask: torch.Tensor,
-            second_per_grid_ts: torch.Tensor = None) -> dict[str, torch.Tensor]:
+            second_per_grid_ts: torch.Tensor = None) -> Dict[str, torch.Tensor]:
```
319-321: Fix Python 3.8 typing: use Type[...] instead of type[...]. Also import Type from typing. Apply:
```diff
-from typing import Any, Dict, List, Optional, Tuple, Union
+from typing import Any, Dict, List, Optional, Tuple, Union, Type

-    def __init__(self, model_config: ModelConfig[PretrainedConfig],
-                 model_class: type[PreTrainedModel]):
+    def __init__(self, model_config: ModelConfig[PretrainedConfig],
+                 model_class: Type[PreTrainedModel]):
```
123-131: Avoid shadowing the input_ids parameter; it leaks into later usage. The inner loop reuses input_ids, and line 221 then reads .device on the (now 1D) shadowed tensor. Rename the local variables and take the device from the 2D tensor. Apply:
```diff
-        for i, input_ids in enumerate(total_input_ids):
-            input_ids = input_ids[attention_mask[i] == 1]
+        for i, seq_input_ids in enumerate(total_input_ids):
+            seq_input_ids = seq_input_ids[attention_mask[i] == 1]
             image_nums, video_nums = 0, 0
             vision_start_indices = torch.argwhere(
-                input_ids == vision_start_token_id).squeeze(1)
-            vision_tokens = input_ids[vision_start_indices + 1]
+                seq_input_ids == vision_start_token_id).squeeze(1)
+            vision_tokens = seq_input_ids[vision_start_indices + 1]
             image_nums = (vision_tokens == image_token_id).sum()
             video_nums = (vision_tokens == video_token_id).sum()
-            input_tokens = input_ids.tolist()
+            input_tokens = seq_input_ids.tolist()
@@
-        mrope_position_deltas = torch.tensor(
-            mrope_position_deltas, device=input_ids.device).unsqueeze(1)
+        mrope_position_deltas = torch.tensor(
+            mrope_position_deltas, device=total_input_ids.device).unsqueeze(1)
```

Also applies to: 220-221
246-250: Guard against a missing vision_token_id; avoid an AttributeError. Use getattr and build the mask robustly.
Apply:
```diff
-        masks = (input_ids == self.model_config.image_token_id) | (
-            input_ids == self.model_config.vision_token_id) | (
-                input_ids == self.model_config.video_token_id)
+        masks = (input_ids == self.model_config.image_token_id) | (
+            input_ids == self.model_config.video_token_id)
+        vision_token_id = getattr(self.model_config, "vision_token_id", None)
+        if vision_token_id is not None:
+            masks = masks | (input_ids == vision_token_id)
```
🧹 Nitpick comments (12)
tensorrt_llm/inputs/__init__.py (1)
1-1: Add the required NVIDIA 2025 Apache-2.0 header. Per repo guidelines, prepend the copyright header.
```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from .data import PromptInputs, TextPrompt, TokensPrompt, prompt_inputs
```

tensorrt_llm/inputs/multimodal.py (1)
1-1: Add the required NVIDIA 2025 Apache-2.0 header.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 """Multimodal utilities for handling images and other media types in TensorRT-LLM."""
```

tensorrt_llm/_torch/models/modeling_llava_next.py (3)
41-53: Tokenizer/processor initialization: guard against missing/slow fast tokenizers. Some Llava variants ship only Python tokenizers; consider falling back to use_fast=False if AutoTokenizer raises due to an unavailable fast backend.
```diff
     self.use_fast = True
@@
-    self.tokenizer = AutoTokenizer.from_pretrained(
-        model_path,
-        trust_remote_code=trust_remote_code,
-        use_fast=self.use_fast)
+    try:
+        self.tokenizer = AutoTokenizer.from_pretrained(
+            model_path, trust_remote_code=trust_remote_code,
+            use_fast=self.use_fast)
+    except Exception:
+        self.tokenizer = AutoTokenizer.from_pretrained(
+            model_path, trust_remote_code=trust_remote_code, use_fast=False)
@@
-    self.processor = AutoProcessor.from_pretrained(
-        model_path,
-        trust_remote_code=trust_remote_code,
-        use_fast=self.use_fast)
+    try:
+        self.processor = AutoProcessor.from_pretrained(
+            model_path, trust_remote_code=trust_remote_code,
+            use_fast=self.use_fast)
+    except Exception:
+        self.processor = AutoProcessor.from_pretrained(
+            model_path, trust_remote_code=trust_remote_code, use_fast=False)
```
55-58: Defensive: check attribute existence for image_token_index and vision_config. Not all configs guarantee image_token_index/vision_config; add a clear error to aid debugging.
```diff
-    self.image_token_index = model_config.image_token_index
-    self.vocab_size = model_config.vocab_size
-    self.config = model_config.vision_config
+    if not hasattr(model_config, "image_token_index"):
+        raise AttributeError("model_config must define image_token_index")
+    self.image_token_index = model_config.image_token_index
+    self.vocab_size = model_config.vocab_size
+    if not hasattr(model_config, "vision_config"):
+        raise AttributeError("model_config must define vision_config")
+    self.config = model_config.vision_config
```
1-1: Add the required NVIDIA 2025 Apache-2.0 header (using Python # comments, not //, since this is a .py file).

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import copy
```

tests/unittest/_torch/multimodal/test_find_num_image_tokens.py (3)
59-62: Optionally cover Mistral images in the parametrization; skip if the model is not available. You already guard with a skip when the key is absent; adding the key here exercises the new path.
```diff
 @pytest.mark.parametrize("model_key", [
     "llava-v1.6-mistral-7b-hf",
     "qwen2.5-vl",
+    "mistral-small-3.1",
 ])
```
24-27: Mark networked tests and harden against offline CI. These tests fetch remote assets. Mark them as network/slow or add an environment-gated skip to reduce CI flakiness.
```python
pytestmark = pytest.mark.network  # at file top

# Or inside each test:
if not int(os.getenv("ENABLE_NETWORK_TESTS", "0")):
    pytest.skip("Network tests disabled")
```

Also applies to: 173-176
1-1: Add the required NVIDIA 2025 Apache-2.0 header.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import io
```

tensorrt_llm/inputs/registry.py (3)
44-51: PEP 257: add a blank line after the summary (Ruff D205). Refactor the BaseMultimodalInputProcessor docstring to have a single-line summary followed by a blank line.
```diff
 class BaseMultimodalInputProcessor:
-    """
-    Base class for multimodal input processors with default implementations
-    of get_num_tokens_per_image and get_num_tokens_per_video methods.
-
-    This class provides default implementations that work with most AutoProcessor-based
-    models. Specific processors can override these methods if they need custom logic.
-    """
+    """Multimodal input base providing default image/video token counters.
+
+    Works with most AutoProcessor-based models via _get_num_multimodal_tokens.
+    Override methods if custom logic is required.
+    """
```
467-476: Update comment: the hashing attempt is no longer image-specific. Logic now keys off “exactly one modality”; update the stale comment.
```diff
-    # TODO: support multiple modalities for multimodal hashing (for kv cache reuse, chunked prefill, etc.)
-    if len(modalities) == 1:
-        # only try multimodal hashing if the inputs only contain image data
+    # TODO: support multiple modalities for multimodal hashing (kv reuse, chunked prefill, etc.)
+    if len(modalities) == 1:
+        # Try multimodal hashing only when there is exactly one modality
```
1-1: Add the required NVIDIA 2025 Apache-2.0 header.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import enum
```

tensorrt_llm/_torch/models/modeling_qwen2vl.py (1)
74-77: Docstring shape mismatch for mrope_position_deltas. The returned tensor is shaped (batch_size, 1), not (batch_size,). Update the docstring to avoid confusion.
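A hedged sketch of the corrected Returns entry, in the Google docstring style the repo guidelines require (wording is illustrative):

```python
    """Compute M-RoPE indices (excerpt; wording illustrative).

    Returns:
        mrope_position_ids (torch.Tensor): M-RoPE position ids.
        mrope_position_deltas (torch.Tensor): Shape (batch_size, 1);
            one position delta per sequence, not (batch_size,).
    """
```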
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- tensorrt_llm/_torch/models/modeling_llava_next.py (2 hunks)
- tensorrt_llm/_torch/models/modeling_qwen2vl.py (3 hunks)
- tensorrt_llm/inputs/__init__.py (2 hunks)
- tensorrt_llm/inputs/multimodal.py (2 hunks)
- tensorrt_llm/inputs/registry.py (3 hunks)
- tests/unittest/_torch/multimodal/test_find_num_image_tokens.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cc,cxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{h,hpp,hh,hxx,cpp,cc,cxx,cu,cuh,py}: If a constructor parameter name conflicts with a public member, add a trailing underscore to the parameter (e.g., foo_).
Use uppercase literal suffixes (e.g., 1234L not 1234l).
Use spaces, not tabs; indent by 4 spaces.
Files:
- tensorrt_llm/inputs/multimodal.py
- tensorrt_llm/inputs/registry.py
- tensorrt_llm/inputs/__init__.py
- tests/unittest/_torch/multimodal/test_find_num_image_tokens.py
- tensorrt_llm/_torch/models/modeling_qwen2vl.py
- tensorrt_llm/_torch/models/modeling_llava_next.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Target Python 3.8+ for all Python code.
Indent with 4 spaces; do not use tabs.
Maintain module namespace on imports; import the module/submodule, not individual classes/functions (e.g., from package.subpackage import foo; foo.SomeClass()).
Python filenames use snake_case (e.g., some_file.py).
Class names use PascalCase.
Function and method names use snake_case.
Local variable names use snake_case; if starting with a number, prefix with k_ (e.g., k_99th_percentile).
Global variables use UPPER_SNAKE_CASE and prefix G_ (e.g., G_MY_GLOBAL).
Constants use UPPER_SNAKE_CASE.
Avoid shadowing outer-scope variables.
Initialize all externally visible members of a class in __init__.
Prefer docstrings for interfaces used outside a file; use comments for local code within functions or local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline with docstrings placed under the definition.
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns).
Limit except clauses to specific exceptions; avoid bare except.
For duck-typing try/except, keep try blocks minimal and use else for the main logic.
Files:
- tensorrt_llm/inputs/multimodal.py
- tensorrt_llm/inputs/registry.py
- tensorrt_llm/inputs/__init__.py
- tests/unittest/_torch/multimodal/test_find_num_image_tokens.py
- tensorrt_llm/_torch/models/modeling_qwen2vl.py
- tensorrt_llm/_torch/models/modeling_llava_next.py
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA 2025 Apache-2.0 copyright header block at the top of all source files (.cpp, .h, .cu, .py).
Files:
- tensorrt_llm/inputs/multimodal.py
- tensorrt_llm/inputs/registry.py
- tensorrt_llm/inputs/__init__.py
- tests/unittest/_torch/multimodal/test_find_num_image_tokens.py
- tensorrt_llm/_torch/models/modeling_qwen2vl.py
- tensorrt_llm/_torch/models/modeling_llava_next.py
🧬 Code graph analysis (6)
tensorrt_llm/inputs/multimodal.py (1)
tensorrt_llm/inputs/registry.py (2)
- get_num_tokens_per_image (53-86)
- get_num_tokens_per_video (88-145)
tensorrt_llm/inputs/registry.py (1)
tensorrt_llm/runtime/multimodal_model_runner.py (1)
- processor (680-683)
tensorrt_llm/inputs/__init__.py (1)
tensorrt_llm/inputs/registry.py (3)
- BaseMultimodalInputProcessor (44-145)
- InputProcessor (21-41)
- MultimodalPlaceholderMetadata (230-243)
tests/unittest/_torch/multimodal/test_find_num_image_tokens.py (4)
tensorrt_llm/_torch/models/modeling_mistral.py (1)
- Mistral3InputProcessor (214-270)

tensorrt_llm/_torch/shared_tensor/shared_tensor.py (2)
- SharedTensorContainer (95-404)
- get_local_view (362-373)

tensorrt_llm/inputs/utils.py (1)
- default_multimodal_input_loader (449-612)

tensorrt_llm/inputs/registry.py (2)
- get_num_tokens_per_image (53-86)
- get_num_tokens_per_video (88-145)
tensorrt_llm/_torch/models/modeling_qwen2vl.py (1)
tensorrt_llm/inputs/registry.py (2)
- BaseMultimodalInputProcessor (44-145)
- InputProcessor (21-41)
tensorrt_llm/_torch/models/modeling_llava_next.py (1)
tensorrt_llm/inputs/registry.py (2)
- BaseMultimodalInputProcessor (44-145)
- InputProcessor (21-41)
🪛 Ruff (0.12.2)
tensorrt_llm/inputs/multimodal.py
494-494: Line too long (153 > 120)
(E501)
tensorrt_llm/inputs/registry.py
44-49: 1 blank line required between summary line and description
(D205)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tensorrt_llm/inputs/__init__.py (1)
3-4: Re-export looks good; enables a stable import surface. Adding BaseMultimodalInputProcessor to the package namespace is appropriate and matches downstream usage.
Also applies to: 30-30
tensorrt_llm/_torch/models/modeling_llava_next.py (1)
35-35: Good change: inherit BaseMultimodalInputProcessor for unified MM token APIs. This aligns LlavaNext with the shared multimodal interface and removes redundant per-model image-token logic.
tensorrt_llm/_torch/models/modeling_qwen2vl.py (2)
31-31: MRO change looks fine; verify no base __init__ is required. Since the class now mixes in BaseMultimodalInputProcessor, confirm neither it nor InputProcessor requires an explicit super().__init__() call.
15-16: Exports verified; no action needed. BaseMultimodalInputProcessor and ExtraProcessedInputs are already exported in tensorrt_llm/inputs/__init__.py. Optional: switch to absolute imports (e.g., from tensorrt_llm.inputs import …) for namespace consistency.
Force-pushed from 5a1a0ea to b859c4d
Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
Force-pushed from eec02c6 to da7a7b2
get_num_tokens_per_image method

Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
Thanks for extending mm hash to the video modality.
LGTM, left a couple of small nits.
Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
/bot run

PR_Github #17587 [ run ] triggered by Bot

PR_Github #17587 [ run ] completed with state
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
/bot reuse-pipeline

PR_Github #17707 [ reuse-pipeline ] triggered by Bot

PR_Github #17707 [ reuse-pipeline ] completed with state
…IDIA#7360)

Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Summary by CodeRabbit
New Features
Refactor
Tests
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
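For example, a typical invocation combining several of the documented flags might look like this (illustrative; the stage and GPU names are placeholders taken from the examples above):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "A30, H100_PCIe"
```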
kill

kill

Kill all running builds associated with the pull request.
skip
skip --comment COMMENT

Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.