[None][feat] AutoDeploy: VLMs with subgraphs + cudagraph/compile #8203
Conversation
/bot run
PR_Github #20822 [ run ] triggered by Bot
📝 Walkthrough
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (1)
52-58: Do not pass `compile_backend` back into the backend ctor.
`self.config.model_dump()` still includes `compile_backend` even though we already used it to choose `compiler_cls`. Every backend constructor I checked (`CompileBackendTorchSimple`, `CompileBackendTorchCompile`, etc.) does not accept a `compile_backend` keyword, so this call will raise a `TypeError: __init__() got an unexpected keyword argument 'compile_backend'`. Please exclude that field (e.g., `self.config.model_dump(exclude={"compile_backend"})`) before splatting into the ctor.
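A minimal sketch of the suggested fix, assuming the remaining config fields map one-to-one onto the backend constructor's keyword arguments:

```python
# Sketch only, inside the transform's apply step: pick the backend class from the
# selector field, then drop that field before forwarding the rest of the config.
compiler_cls = CompileBackendRegistry.get(self.config.compile_backend)
backend_kwargs = self.config.model_dump(exclude={"compile_backend"})
compiler = compiler_cls(**backend_kwargs)  # no unexpected 'compile_backend' kwarg
```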
tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py (1)
112-121: Respect caller-provided `return_dict`.
Line 117 currently forces `return_dict=True`, so callers requesting `return_dict=False` (or relying on the config default when `None`) now get a dict regardless, diverging from the Hugging Face contract. Please plumb the argument through instead of overriding it.

```diff
@@
-        out = self.transformer(
+        if return_dict is None:
+            return_dict = getattr(self.config, "use_return_dict", True)
+
+        out = self.transformer(
@@
-            return_dict=True,
+            return_dict=return_dict,
```
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (1)
1-1: Add NVIDIA Apache-2.0 header (2025) at file top. Required by repo guidelines; add the standard header before the docstring. As per coding guidelines:

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
```

tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (1)
1-1: Add the same NVIDIA Apache-2.0 header (2025) at file top. Required by repo guidelines. As per coding guidelines.

tensorrt_llm/_torch/auto_deploy/models/hf.py (1)
1-1: Add the same NVIDIA Apache-2.0 header (2025) at file top. Required by repo guidelines. As per coding guidelines.

tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)
1-1: Add the same NVIDIA Apache-2.0 header (2025) at file top. Required by repo guidelines. As per coding guidelines.

tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)
1-1: Add the same NVIDIA Apache-2.0 header (2025) at file top. Required by repo guidelines. As per coding guidelines.

tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (1)
1-1: Add NVIDIA Apache-2.0 header (compliance). File is missing the required NVIDIA Apache-2.0 header with current year. As per coding guidelines, prepend the same header block.
🧹 Nitpick comments (13)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.py (1)
186-186: Remove obsolete `input_ids` comment. Leftover commented code is noise now that the metadata path no longer consumes `input_ids`. Please drop it to keep the fixture tight.

```diff
-    # input_ids = torch.randint(0, 1000, (b, s), device=device)
```

tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py (1)
472-472: Trim the stale `input_ids` comment. Same as the mamba test, this commented line can go now that the signature no longer takes `input_ids`.

```diff
-    # input_ids = torch.randint(0, 1000, (batch_size, seq_len_val), device=device)
```

tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (1)
40-41: Silence the unused parameter warning.
`model` isn’t used; rename it to `_model` (or similar) so Ruff stops flagging ARG002. Apply this diff:

```diff
-    def get_export_infos(self, model: nn.Module) -> List[SubModuleExportInfo]:
+    def get_export_infos(self, _model: nn.Module) -> List[SubModuleExportInfo]:
```

tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (2)
39-45: Silence unused-arg warnings for cm/shared_config. Prefix with underscores to satisfy linters without behavioral change.

```diff
-    def _apply_to_full_model(
-        self,
-        mod: nn.Module,
-        cm: CachedSequenceInterface,
-        factory: ModelFactory,
-        shared_config: SharedConfig,
-    ) -> Tuple[nn.Module, TransformInfo]:
+    def _apply_to_full_model(
+        self,
+        mod: nn.Module,
+        _cm: CachedSequenceInterface,
+        factory: ModelFactory,
+        _shared_config: SharedConfig,
+    ) -> Tuple[nn.Module, TransformInfo]:
```
46-51: Minor: avoid a redundant move when devices match. If the configured checkpoint device already equals the target device, the extra `move_to_device` call is redundant. Optional micro-optimization.
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (1)
193-207: Add a safety guard for missing `_gm` when using the patched forward. Prevents an obscure AttributeError if called before the profiling step attaches `_gm`.

```diff
 def forward_with_prepare_metadata(mod: nn.Module, **cm_kwargs):
     """Run prepare_metadata as pre-processing step, add to kwargs, and then run regular forward."""
-    gm = mod._gm
+    assert hasattr(mod, "_gm"), "Expected `mod._gm` set by detection pass before cached forward."
+    gm = mod._gm
```

tensorrt_llm/_torch/auto_deploy/models/hf.py (1)
453-455: Silence unused-arg warning in `get_export_infos`. Rename the param to `_model` to match usage.

```diff
-    def get_export_infos(self, model: nn.Module) -> List[SubModuleExportInfo]:
+    def get_export_infos(self, _model: nn.Module) -> List[SubModuleExportInfo]:
```

tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)
270-275: Use tuple unpacking instead of concatenation. Slightly cleaner and matches the Ruff suggestion.

```diff
-        return ("position_ids",) + self._cached_arg_names
+        return ("position_ids", *self._cached_arg_names)
```

tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)
342-351: Improve exception logging; keep skip-on-error behavior. Capture the stack with `ad_logger.exception` for easier debugging.

```diff
-        except Exception as e:
-            error_msg = f"Transform {t_name} failed"
-            ad_logger.warning(f"{error_msg}: {e}")
+        except Exception:
+            ad_logger.exception("Transform %s failed", t_name)
             info_apply = TransformInfo(skipped=True, num_matches=0)
```
51-56: Capture hook: support positional args and silence unused param.
- Current assert breaks if a submodule is invoked positionally.
- Inner hook param mod is unused (ARG001).
Refactor to normalize args→kwargs and avoid the assert.
```diff
-    def _capture_kwargs(mod: nn.Module, args, kwargs) -> None:
-        assert not args, "positional arguments are not supported for capture"
-        captured_kwargs.clear()
-        captured_kwargs.update(kwargs)
+    def _capture_kwargs(_m: nn.Module, args, kwargs) -> None:
+        # Normalize positional + keyword args to kwargs using the callee's signature.
+        try:
+            sig = inspect.signature(_m.forward)
+            bound = sig.bind_partial(*args, **(kwargs or {}))
+            normalized = bound.arguments
+        except Exception:
+            # Fallback to raw kwargs if signature binding fails.
+            normalized = kwargs or {}
+        captured_kwargs.clear()
+        captured_kwargs.update(normalized)
         return None
```
161-164: Guard the dynamic_shapes lookup to avoid KeyError. If a captured Tensor arg lacks an entry in `dynamic_shape_lookup`, this will raise a KeyError.

```diff
-        dynamic_shapes = {
-            k: e_info.dynamic_shape_lookup[k] if isinstance(v, torch.Tensor) else None
-            for k, v in captured_kwargs.items()
-        }
+        dynamic_shapes = {
+            k: (e_info.dynamic_shape_lookup.get(k) if isinstance(v, torch.Tensor) else None)
+            for k, v in captured_kwargs.items()
+        }
```

Optionally log missing keys for visibility.
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
39-46: Silence unused parameters in the signature. `mod`, `cm`, and `shared_config` are unused (ARG002). Rename them with an underscore prefix to appease linters while keeping the API.

```diff
-    def _apply_to_full_model(
-        self,
-        mod: nn.Module,
-        cm: CachedSequenceInterface,
-        factory: ModelFactory,
-        shared_config: SharedConfig,
-    ) -> Tuple[nn.Module, TransformInfo]:
+    def _apply_to_full_model(
+        self,
+        _mod: nn.Module,
+        _cm: CachedSequenceInterface,
+        factory: ModelFactory,
+        _shared_config: SharedConfig,
+    ) -> Tuple[nn.Module, TransformInfo]:
```
68-75: Silence unused parameters in the signature. Same here for `mod` and `shared_config`.

```diff
-    def _apply_to_full_model(
-        self,
-        mod: nn.Module,
-        cm: CachedSequenceInterface,
-        factory: ModelFactory,
-        shared_config: SharedConfig,
-    ) -> Tuple[nn.Module, TransformInfo]:
+    def _apply_to_full_model(
+        self,
+        _mod: nn.Module,
+        cm: CachedSequenceInterface,
+        factory: ModelFactory,
+        _shared_config: SharedConfig,
+    ) -> Tuple[nn.Module, TransformInfo]:
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
- tensorrt_llm/_torch/auto_deploy/config/default.yaml (4 hunks)
- tensorrt_llm/_torch/auto_deploy/config/transformers.yaml (1 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (6 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/export/export.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/export/interface.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/models/__init__.py (1 hunks)
- tensorrt_llm/_torch/auto_deploy/models/factory.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/models/hf.py (8 hunks)
- tensorrt_llm/_torch/auto_deploy/models/mistral3.py (0 hunks)
- tensorrt_llm/_torch/auto_deploy/models/patches/llama4.py (4 hunks)
- tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py (5 hunks)
- tensorrt_llm/_torch/auto_deploy/models/patches/starcoder.py (1 hunks)
- tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (0 hunks)
- tensorrt_llm/_torch/auto_deploy/shim/interface.py (1 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/interface.py (9 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (5 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (6 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (3 hunks)
- tensorrt_llm/_torch/auto_deploy/transform/optimizer.py (1 hunks)
- tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (6 hunks)
- tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (2 hunks)
- tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py (1 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_cuda_causal_conv_cached_op.py (1 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py (2 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.py (1 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.py (1 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py (2 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3.py (0 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.py (3 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.py (2 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (3 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (3 hunks)
💤 Files with no reviewable changes (3)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3.py
- tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
- tensorrt_llm/_torch/auto_deploy/models/mistral3.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.pytensorrt_llm/_torch/auto_deploy/export/interface.pytests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.pytensorrt_llm/_torch/auto_deploy/models/patches/starcoder.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_cuda_causal_conv_cached_op.pytensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.pytensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.pytests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.pytests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.pytensorrt_llm/_torch/auto_deploy/transform/optimizer.pytests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.pytensorrt_llm/_torch/auto_deploy/transform/library/compile_model.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.pytensorrt_llm/_torch/auto_deploy/shim/interface.pytests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.pytensorrt_llm/_torch/auto_deploy/transformations/_graph.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.pytensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.pytests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.pytests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.pytensorrt_llm/_torch/auto_deploy/custom_ops/mla.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.pytensorrt_llm/_torch/auto_deploy/models/patches/mistral3.pytensorrt_llm/_torch/auto_deploy/export/export.pytensorrt_llm/_torch/auto_deploy/models/patches/pixtral.pytensorrt_llm/_torch/auto_deploy/models/__init__.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.pytensorrt_llm/_torch/auto_deploy/transform/library/build_model.pytensorrt_llm/_torch/auto_deploy/transform/library/load_weights.pytensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.pytensorrt_llm/_torch/auto_deploy/transform/library/kvcache.pytensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.pytensorrt_llm/_torch/auto_deploy/models/hf.pytensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.pytensorrt_llm/_torch/auto_deploy/models/patches/llama4.pytensorrt_llm/_torch/auto_deploy/transform/interface.pytensorrt_llm/_torch/auto_deploy/models/factory.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.pytensorrt_llm/_torch/auto_deploy/export/interface.pytests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.pytensorrt_llm/_torch/auto_deploy/models/patches/starcoder.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_cuda_causal_conv_cached_op.pytensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.pytensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.pytests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.pytests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.pytensorrt_llm/_torch/auto_deploy/transform/optimizer.pytests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.pytensorrt_llm/_torch/auto_deploy/transform/library/compile_model.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.pytensorrt_llm/_torch/auto_deploy/shim/interface.pytests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.pytensorrt_llm/_torch/auto_deploy/transformations/_graph.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.pytensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.pytests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.pytests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.pytensorrt_llm/_torch/auto_deploy/custom_ops/mla.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.pytensorrt_llm/_torch/auto_deploy/models/patches/mistral3.pytensorrt_llm/_torch/auto_deploy/export/export.pytensorrt_llm/_torch/auto_deploy/models/patches/pixtral.pytensorrt_llm/_torch/auto_deploy/models/__init__.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.pytensorrt_llm/_torch/auto_deploy/transform/library/build_model.pytensorrt_llm/_torch/auto_deploy/transform/library/load_weights.pytensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.pytensorrt_llm/_torch/auto_deploy/transform/library/kvcache.pytensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.pytensorrt_llm/_torch/auto_deploy/models/hf.pytensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.pytensorrt_llm/_torch/auto_deploy/models/patches/llama4.pytensorrt_llm/_torch/auto_deploy/transform/interface.pytensorrt_llm/_torch/auto_deploy/models/factory.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.pytensorrt_llm/_torch/auto_deploy/export/interface.pytests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.pytensorrt_llm/_torch/auto_deploy/models/patches/starcoder.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_cuda_causal_conv_cached_op.pytensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.pytensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.pytests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.pytests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.pytensorrt_llm/_torch/auto_deploy/transform/optimizer.pytests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.pytensorrt_llm/_torch/auto_deploy/transform/library/compile_model.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.pytensorrt_llm/_torch/auto_deploy/shim/interface.pytests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.pytensorrt_llm/_torch/auto_deploy/transformations/_graph.pytensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.pytensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.pytests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.pytests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.pytensorrt_llm/_torch/auto_deploy/custom_ops/mla.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.pytensorrt_llm/_torch/auto_deploy/models/patches/mistral3.pytensorrt_llm/_torch/auto_deploy/export/export.pytensorrt_llm/_torch/auto_deploy/models/patches/pixtral.pytensorrt_llm/_torch/auto_deploy/models/__init__.pytests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.pytensorrt_llm/_torch/auto_deploy/transform/library/build_model.pytensorrt_llm/_torch/auto_deploy/transform/library/load_weights.pytensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.pytensorrt_llm/_torch/auto_deploy/transform/library/kvcache.pytensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.pytensorrt_llm/_torch/auto_deploy/models/hf.pytensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.pytensorrt_llm/_torch/auto_deploy/models/patches/llama4.pytensorrt_llm/_torch/auto_deploy/transform/interface.pytensorrt_llm/_torch/auto_deploy/models/factory.py
🧠 Learnings (1)
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
PR: NVIDIA/TensorRT-LLM#6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid model name from Mistral AI, distinct from the regular Mistral models. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".
Applied to files:
tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py
🧬 Code graph analysis (30)
tensorrt_llm/_torch/auto_deploy/export/interface.py (4)
tensorrt_llm/llmapi/llm_args.py (1)
Field(70-97)tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (1)
get_config_class(36-37)tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
get_config_class(122-123)tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (2)
get_config_class(81-82)get_config_class(239-240)
tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.py (1)
tensorrt_llm/_torch/auto_deploy/models/hf.py (1)
get_example_inputs_with_images(609-659)
tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
_get_sanitized_seq_len(385-425)seq_len(293-294)
tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py (2)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
seq_len(293-294)_get_sanitized_seq_len(385-425)tensorrt_llm/_torch/attention_backend/flashinfer.py (1)
page_size(185-189)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (4)
tensorrt_llm/_torch/auto_deploy/models/factory.py (5)
FullModelExportInfo(72-91)ModelFactory(94-334)SubModuleExportInfo(27-69)get_export_infos(323-334)model(125-127)tensorrt_llm/_torch/auto_deploy/models/hf.py (2)
get_export_infos(453-454)get_export_infos(668-669)tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)
get_export_infos(44-45)tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (1)
get_export_infos(40-41)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (6)
SequenceInfo(34-689)_get_sanitized_seq_len(385-425)seq_len(293-294)input_pos(297-298)cache_loc(301-302)pages_per_seq(305-306)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)
tensorrt_llm/_torch/auto_deploy/models/factory.py (4)
FullModelExportInfo(72-91)SubModuleExportInfo(27-69)get_export_infos(323-334)model(125-127)
tensorrt_llm/_torch/auto_deploy/transform/optimizer.py (2)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
CachedSequenceInterface(11-76)tensorrt_llm/_torch/auto_deploy/transform/interface.py (2)
TransformRegistry(503-531)get(519-521)
tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (3)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (4)
_apply_to_full_model(490-500)SharedConfig(60-66)TransformInfo(121-174)get(519-521)tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
CachedSequenceInterface(11-76)tensorrt_llm/_torch/auto_deploy/compile/compiler.py (2)
CompileBackendRegistry(12-31)get(25-27)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (3)
SequenceInfo(34-689)_get_sanitized_seq_len(385-425)seq_len(293-294)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
GetCacheCallable(712-713)SequenceInfo(34-689)
tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py (1)
tensorrt_llm/_torch/auto_deploy/export/interface.py (1)
apply_export_patches(237-280)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (1)
tensorrt_llm/module.py (1)
Module(33-226)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
_get_sanitized_num_sequences(428-443)seq_len(293-294)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (3)
tensorrt_llm/_torch/auto_deploy/export/export.py (2)
run_forward_for_capture(198-250)torch_export_to_gm(253-321)tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
args(23-25)named_args(28-30)tensorrt_llm/_torch/auto_deploy/models/factory.py (5)
get_example_inputs(310-320)get_export_infos(323-334)dynamic_shape_lookup(36-51)post_process(59-69)post_process(90-91)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (4)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (2)
SequenceEmbeddingInfo(48-86)get_export_infos(44-45)tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)
CacheConfig(28-31)tensorrt_llm/_torch/auto_deploy/export/export.py (1)
torch_export_to_gm(253-321)tensorrt_llm/_torch/auto_deploy/models/factory.py (5)
FullModelExportInfo(72-91)ModelFactory(94-334)SubModuleExportInfo(27-69)get_export_infos(323-334)model(125-127)
tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
_get_sanitized_num_sequences(428-443)seq_len(293-294)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (4)
seq_len(293-294)input_pos(297-298)cache_loc(301-302)pages_per_seq(305-306)
tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py (1)
tensorrt_llm/_torch/auto_deploy/export/interface.py (3)
DisabledBaseExportPatch(142-150)ExportPatchRegistry(186-233)register(192-201)
tensorrt_llm/_torch/auto_deploy/export/export.py (2)
tensorrt_llm/_torch/auto_deploy/export/interface.py (1)
apply_export_patches(237-280)tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (3)
lift_to_meta(79-92)tree_to(71-75)load_buffers_and_params(32-68)
tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py (2)
tensorrt_llm/_torch/auto_deploy/export/interface.py (8)
DisabledBaseExportPatch(142-150)ExportPatchRegistry(186-233)register(192-201)_apply_patch(132-134)_apply_patch(174-177)_revert_patch(137-139)_revert_patch(179-183)create_patch(221-228)tensorrt_llm/_torch/models/modeling_pixtral.py (1)
PixtralVisionModel(170-256)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (6)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
_apply_to_full_model(125-197)tensorrt_llm/_torch/auto_deploy/transform/interface.py (3)
_apply_to_full_model(490-500)SharedConfig(60-66)TransformInfo(121-174)tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (1)
_apply_to_full_model(42-65)tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (2)
_apply_to_full_model(39-54)_apply_to_full_model(67-78)tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
CachedSequenceInterface(11-76)tensorrt_llm/_torch/auto_deploy/models/factory.py (3)
ModelFactory(94-334)model(125-127)build_model(134-173)
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (5)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
_apply_to_full_model(39-52)_apply_to_full_model(68-88)tensorrt_llm/_torch/auto_deploy/transform/interface.py (3)
_apply_to_full_model(490-500)SharedConfig(60-66)TransformInfo(121-174)tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
CachedSequenceInterface(11-76)to(37-41)tensorrt_llm/_torch/auto_deploy/models/factory.py (2)
ModelFactory(94-334)load_or_random_init(239-280)tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (1)
move_to_device(135-142)
tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
_get_sanitized_num_sequences(428-443)seq_len(293-294)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (2)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (4)
_apply_to_full_model(490-500)SharedConfig(60-66)TransformInfo(121-174)BaseTransform(213-500)tensorrt_llm/_torch/auto_deploy/shim/interface.py (3)
CachedSequenceInterface(11-76)named_args(28-30)initialize_caches(47-54)
tensorrt_llm/_torch/auto_deploy/models/hf.py (1)
tensorrt_llm/_torch/auto_deploy/models/factory.py (10)
FullModelExportInfo(72-91)ModelFactory(94-334)SubModuleExportInfo(27-69)get_export_infos(323-334)model(125-127)post_process(59-69)post_process(90-91)_init_dynamic_shape_lookup(54-56)_init_dynamic_shape_lookup(82-88)init_processor(205-212)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (4)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
_apply_to_full_model(39-52)_apply_to_full_model(68-88)tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
_apply_to_full_model(125-197)tensorrt_llm/_torch/auto_deploy/transform/interface.py (4)
_apply_to_full_model(490-500)SharedConfig(60-66)TransformInfo(121-174)_apply(475-488)tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
CachedSequenceInterface(11-76)named_args(28-30)
tensorrt_llm/_torch/auto_deploy/models/patches/llama4.py (1)
tensorrt_llm/_torch/auto_deploy/export/interface.py (4)
BaseExportPatch(47-139)DisabledBaseExportPatch(142-150)ExportPatchRegistry(186-233)register(192-201)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (5)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
CachedSequenceInterface(11-76)args(23-25)tensorrt_llm/_torch/auto_deploy/models/factory.py (3)
ModelFactory(94-334)get(349-351)model(125-127)tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (5)
run_shape_prop(218-243)named_graphmodules(95-99)canonicalize_graph(174-187)lift_to_meta(79-92)placeholders_on_meta(312-341)tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
_apply_to_full_model(39-52)_apply_to_full_model(68-88)tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
_apply_to_full_model(125-197)
tensorrt_llm/_torch/auto_deploy/models/factory.py (6)
tests/unittest/_torch/thop/parallel/test_custom_ops.py (1)
custom_ops(37-42)tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)
CacheConfig(28-31)tensorrt_llm/_torch/auto_deploy/models/hf.py (4)
_init_dynamic_shape_lookup(527-534)post_process(499-525)get_export_infos(453-454)get_export_infos(668-669)tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)
get_export_infos(44-45)tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (1)
get_export_infos(40-41)tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (1)
get_export_infos(40-41)
🪛 Ruff (0.13.3)
tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py
83-83: Unused function argument: input_pos
(ARG001)
83-83: Unused function argument: cache_loc
(ARG001)
83-83: Unused function argument: pages_per_seq
(ARG001)
83-83: Unused function argument: page_size
(ARG001)
tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py
216-216: Unused function argument: input_pos
(ARG001)
216-216: Unused function argument: pages_per_seq
(ARG001)
216-216: Unused function argument: slot_idx
(ARG001)
216-216: Unused function argument: page_size
(ARG001)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py
40-40: Unused method argument: model
(ARG002)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py
169-169: Unused function argument: input_pos
(ARG001)
169-169: Unused function argument: cache_loc
(ARG001)
169-169: Unused function argument: pages_per_seq
(ARG001)
169-169: Unused function argument: page_size
(ARG001)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py
44-44: Unused method argument: model
(ARG002)
tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py
46-46: Unused method argument: factory
(ARG002)
47-47: Unused method argument: shared_config
(ARG002)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py
144-144: Unused function argument: input_pos
(ARG001)
144-144: Unused function argument: cache_loc
(ARG001)
144-144: Unused function argument: pages_per_seq
(ARG001)
144-144: Unused function argument: page_size
(ARG001)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py
381-381: Unused function argument: pages_per_seq
(ARG001)
381-381: Unused function argument: slot_idx
(ARG001)
381-381: Unused function argument: page_size
(ARG001)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py
51-51: Unused function argument: mod
(ARG001)
85-85: Avoid specifying long messages outside the exception class
(TRY003)
130-130: Unused method argument: shared_config
(ARG002)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py
40-40: Unused method argument: model
(ARG002)
tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py
199-199: Unused function argument: position_ids
(ARG001)
199-199: Unused function argument: pages_per_seq
(ARG001)
199-199: Unused function argument: slot_idx
(ARG001)
199-199: Unused function argument: page_size
(ARG001)
tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py
59-59: Unused function argument: max_width
(ARG001)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py
41-41: Unused method argument: mod
(ARG002)
42-42: Unused method argument: cm
(ARG002)
44-44: Unused method argument: shared_config
(ARG002)
70-70: Unused method argument: mod
(ARG002)
73-73: Unused method argument: shared_config
(ARG002)
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py
42-42: Unused method argument: cm
(ARG002)
44-44: Unused method argument: shared_config
(ARG002)
71-71: Unused method argument: factory
(ARG002)
72-72: Unused method argument: shared_config
(ARG002)
tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py
311-311: Unused function argument: pages_per_seq
(ARG001)
311-311: Unused function argument: slot_idx
(ARG001)
311-311: Unused function argument: page_size
(ARG001)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py
246-246: Unused method argument: factory
(ARG002)
247-247: Unused method argument: shared_config
(ARG002)
323-323: Unused method argument: factory
(ARG002)
324-324: Unused method argument: shared_config
(ARG002)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py
274-274: Consider ("position_ids", *self._cached_arg_names) instead of concatenation
Replace with ("position_ids", *self._cached_arg_names)
(RUF005)
tensorrt_llm/_torch/auto_deploy/models/hf.py
453-453: Unused method argument: model
(ARG002)
508-508: Loop control variable embed_name not used within loop body
(B007)
512-515: Avoid specifying long messages outside the exception class
(TRY003)
560-563: Avoid specifying long messages outside the exception class
(TRY003)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py
114-114: Unused method argument: factory
(ARG002)
115-115: Unused method argument: shared_config
(ARG002)
tensorrt_llm/_torch/auto_deploy/transform/interface.py
346-346: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (23)
tensorrt_llm/_torch/auto_deploy/models/patches/starcoder.py (1)
3-3: Comment update looks good. The added TODO note makes the removal condition explicit and keeps maintenance intent clear.
tensorrt_llm/_torch/auto_deploy/config/transformers.yaml (1)
9-31: LGTM! Consistent per-GM execution disablement. The addition of `run_per_gm: false` across these transforms aligns with the broader migration toward full-model processing rather than per-graph-module subgraphs, as described in the PR objectives.
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.py (2)
75-78: LGTM! Enables torch-opt backend for Llama-4. This change aligns with the PR objective to add `torch-opt` support for llama4 models and removes the previous skip, expanding test coverage.
89-93: LGTM! Enables torch-cudagraph backend for Mistral. This change aligns with the PR objective to add `torch-cudagraph` support for mistral3 models.

tensorrt_llm/_torch/auto_deploy/export/export.py (3)
6-6: LGTM! Required import for the new function signature. The `Callable` type is needed for the `capture_fn` parameter in the new `run_forward_for_capture` function.
198-250: LGTM! Well-designed abstraction for capture operations. The new `run_forward_for_capture` function effectively extracts common logic for running capture operations with patches and meta device handling. The implementation is clean, well-documented, and provides good flexibility through the `capture_fn` parameter.
288-297: LGTM! Clean refactoring with improved separation of concerns. The refactored `torch_export_to_gm` now delegates capture orchestration to `run_forward_for_capture` while keeping export-specific logic in the internal `_capture_fn` helper. This improves code maintainability and reusability.
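As a generic illustration of this delegation pattern (names and signatures below are illustrative, not the actual API in `export.py`): a shared runner applies patches and then invokes an injected `capture_fn`, while the export entry point supplies its export-specific callback.

```python
import contextlib

import torch
from torch import nn


def run_with_capture(model: nn.Module, example_kwargs: dict, capture_fn, patches=()):
    """Illustrative pattern only: apply patches, then delegate capture to a callback."""
    with contextlib.ExitStack() as stack:
        for patch in patches:  # e.g. export patches exposed as context managers
            stack.enter_context(patch)
        return capture_fn(model, example_kwargs)


def export_capture_fn(model: nn.Module, example_kwargs: dict):
    """Export-specific capture, analogous to the internal `_capture_fn` helper."""
    return torch.export.export(model, args=(), kwargs=example_kwargs)
```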
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (3)
7-7: LGTM! Required import for nn.Module annotations. The import is necessary for the updated method signatures that use `nn.Module` instead of `GraphModule`.
242-248: LGTM! Consistent interface migration to full-model processing. The method signature update from `_apply` with `GraphModule` to `_apply_to_full_model` with `nn.Module` aligns with the broader migration toward full-model transformations described in the PR objectives.
Note: The static analysis warnings about unused `factory` and `shared_config` parameters are expected, as these are required by the `BaseTransform` interface signature (see the relevant code snippets from `interface.py`).
319-325: LGTM! Consistent interface migration. The `InitializeCache` transform follows the same interface pattern as `ResizeKVCache`, consistently updating to full-model processing.

tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.py (1)
171-188: LGTM! Test updated to reflect prepare_metadata API changes. The removal of `input_ids` from the `torch_causal_conv_prepare_metadata` call aligns with the broader API refactor that shifts to using `position_ids` for sequence-length sanitization, as indicated in the AI summary.

tensorrt_llm/_torch/auto_deploy/models/__init__.py (1)
1-2: LGTM! Aligns with mistral3 deprecation. Removing `mistral3` from the public exports is consistent with the broader de-emphasis of mistral3 support in this PR, including marking the mistral3 patch as disabled by default.

tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py (2)
1-15: LGTM! Clear documentation of disabled patch status. The updated docstring and import clearly communicate that this patch is disabled by default and retained for potential future use. This aligns with the broader patch system changes introducing disabled patches.

167-169: LGTM! Consistent with disabled patch pattern. The change to inherit from `DisabledBaseExportPatch` aligns with the broader patch system modernization, allowing the patch to remain registered while being disabled by default (consistent with similar changes in `llama4.py` and `pixtral.py`).

tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py (2)
60-60: LGTM! Updated to new patch configuration API. The change from `patch_list` to `patch_configs` with an explicit `enabled` flag provides more flexible control over patch application and aligns with the modernized patch system introduced in this PR.

89-97: LGTM! Consistent patch configuration format. The migration to the `patch_configs` dictionary format with explicit configurations for each patch improves clarity and aligns with the updated `apply_export_patches` API that accepts patch configurations (see the relevant code snippet from `interface.py`).
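For context, a rough sketch of the old vs. new configuration shape (the patch names below are placeholders, not guaranteed registry keys):

```python
# Old style: a flat list of patch names.
patch_list = ["llama4_vision"]

# New style: a dict of per-patch configs with an explicit `enabled` flag,
# which is the shape the updated apply_export_patches API accepts.
patch_configs = {
    "llama4_vision": {"enabled": True},
    "mistral3": {"enabled": False},  # disabled-by-default patches stay registered
}
```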
13-17: Export-info override looks good. Thanks for wiring `FakeFactory` into the new export-info contract; this keeps the test helper compliant with ModelFactory’s abstract interface.
Also applies to: 44-46
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (2)
1-16: DummyFactory export-info hook LGTM. Covering the abstract `get_export_infos` with `FullModelExportInfo` keeps these tests aligned with the factory API refresh. 👍
Also applies to: 40-42
175-183: run_per_gm flags acknowledged. Setting `run_per_gm=False` for the factory and export stages mirrors the new single-pass export flow, so no concerns here.

tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py (1)
176-177: Position-id based sanitization looks correct. Switching both real and fake paths to sanitize via `position_ids` keeps flashinfer in sync with the rest of the metadata APIs. Looks good.
Also applies to: 216-219
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (1)
124-128: Verify `profiling_metadata` plumbs through model.forward for all target models. Some HF model forwards may not expect/forward this kwarg; ensure it is tested on llama4, mistral3, and Qwen2.5-VL.
tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)
83-86: Confirm the default `run_per_gm=True` matches the intended full-model flow. The current default runs per-GraphModule; many transforms now implement `_apply_to_full_model`. Verify the pipeline config overrides it as expected.
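For orientation, a minimal sketch of a transform that opts into the full-model hook discussed above. The class name and no-op body are illustrative, and registration/config wiring is omitted; the signature and types mirror the diffs shown earlier in this review.

```python
from typing import Tuple

from torch import nn

from tensorrt_llm._torch.auto_deploy.models.factory import ModelFactory
from tensorrt_llm._torch.auto_deploy.shim.interface import CachedSequenceInterface
from tensorrt_llm._torch.auto_deploy.transform.interface import (
    BaseTransform,
    SharedConfig,
    TransformInfo,
)


class NoopFullModelTransform(BaseTransform):
    """Illustrative only: runs once over the whole model instead of per-GraphModule."""

    def _apply_to_full_model(
        self,
        mod: nn.Module,
        cm: CachedSequenceInterface,
        factory: ModelFactory,
        shared_config: SharedConfig,
    ) -> Tuple[nn.Module, TransformInfo]:
        # Leave the model untouched and report that nothing matched.
        return mod, TransformInfo(skipped=False, num_matches=0)
```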
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
186-191: Remove version-compat fallback for set_submodule
torch.nn.Module.set_submodule is supported from PyTorch 2.6 onward; if your project requires ≥2.6, you can drop the getattr/setattr fallback.
PR_Github #20822 [ run ] completed with state
LGTM
/bot run
PR_Github #20925 [ run ] triggered by Bot
PR_Github #20925 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #21039 [ run ] triggered by Bot
PR_Github #21039 [ run ] completed with state
ea6bee7 to f9c7629
/bot run
PR_Github #21227 [ run ] triggered by Bot
PR_Github #21227 [ run ] completed with state
/bot run
PR_Github #21239 [ run ] triggered by Bot
PR_Github #21239 [ run ] completed with state
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
f9c7629 to 15ac2d3
/bot run --disable-fail-fast
PR_Github #21255 [ run ] triggered by Bot
PR_Github #21255 [ run ] completed with state
…DIA#8203) Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Summary by CodeRabbit
New Features
Breaking Changes
Deprecations/Removals
Description
Note: contains changes from #8157. Please only review the final commit.
- `torch-cudagraph` and `torch-opt` for VLMs
- `torch-opt`/`torch-cudagraph` for llama4 + mistral3
- `Qwen/Qwen2.5-VL-7B-Instruct` including opt/cudagraph. Was previously blocked on complex dynamism, see "Support Qwen 2.5 VL" nv-auto-deploy/TensorRT-LLM#127
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`
Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`
Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see
`docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.
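For example, `/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"` launches a pipeline that does not fail fast and runs only the listed test stage (flag semantics as documented above).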
kill
`kill`
Kill all running builds associated with pull request.
skip
`skip --comment COMMENT`
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
`reuse-pipeline`
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.