[TRTLLM-8480][chore] clean create_py_executor API by QiJune · Pull Request #8412 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@QiJune
Collaborator

@QiJune QiJune commented Oct 16, 2025

Summary by CodeRabbit

  • Refactor

    • Streamlined executor and worker initialization by removing direct LoRA and KV cache connector parameters; these settings are now derived from model arguments.
    • Applied consistently across single/multi-process, Ray, RPC, and PyTorch execution paths.
    • Simplifies configuration with no expected change to runtime behavior.
  • Documentation

    • Updated constructor and proxy docstrings to reflect removed parameters.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-reuse-test --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
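
For example, a typical invocation that disables fail-fast and runs a single stage might look like the following (an illustrative command assembled from the flags documented above; the stage name comes from this help text's own examples):

    /bot run --disable-fail-fast --stage-list "A10-PyTorch-1"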

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
@QiJune QiJune requested review from a team as code owners October 16, 2025 04:07
@QiJune QiJune requested review from HuiGao-NV and hchings October 16, 2025 04:07
@coderabbitai
Contributor

coderabbitai bot commented Oct 16, 2025

📝 Walkthrough

This change removes LoraConfig and KvCacheConnectorConfig as publicly passed parameters across executor creation and worker layers. These configs are now derived from llm_args where needed. Signatures and imports are updated accordingly across executor factories, proxies, workers, RPC/Ray components, and Torch py executor creator.
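
Concretely, the shape of the change is roughly the following (a hedged sketch with heavily abbreviated parameter lists; the real signature in py_executor_creator.py carries additional arguments):

    # Before: both configs were threaded explicitly through every layer.
    def create_py_executor(llm_args, lora_config=None, kv_connector_config=None):
        ...

    # After: only llm_args is passed; the configs are derived inside the creator.
    def create_py_executor(llm_args):
        lora_config = llm_args.lora_config                  # optional, may be None
        kv_connector_config = llm_args.kv_connector_config  # optional, may be None
        ...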

Changes

Cohort / File(s) Summary
Executor factory API
tensorrt_llm/executor/executor.py
Dropped lora_config and kv_connector_config from public/create paths and internal helpers (_create_ray_executor/_create_rpc_executor/_create_ipc_executor). Removed related imports and argument propagation.
Base worker + Workers (IPC)
tensorrt_llm/executor/base_worker.py, tensorrt_llm/executor/worker.py
Removed lora_config and kv_connector_config parameters and imports. BaseWorker now derives lora_config via llm_args (PyTorch only). Updated worker_main and super() calls accordingly.
Ray components
tensorrt_llm/executor/ray_executor.py, tensorrt_llm/executor/ray_gpu_worker.py
Eliminated kv_connector_config and lora_config parameters, imports, and propagation in Ray executor/worker init and worker_kwargs construction.
RPC components
tensorrt_llm/executor/rpc_proxy.py, tensorrt_llm/executor/rpc_worker.py
Removed lora_config and kv_connector_config from constructor and main_task signatures, imports, and super() calls. Cleaned docstrings and parameter passing.
Proxy
tensorrt_llm/executor/proxy.py
Removed kv_connector_config from GenerationExecutorProxy.__init__; no longer included in worker_kwargs.
LLM API build paths
tensorrt_llm/llmapi/llm.py
Executor creation calls in _TrtLLM._build_model and _TorchLLM._build_model no longer pass lora_config or kv_connector_config.
Torch py executor creator
tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
Removed public imports of KvCacheConnectorConfig and LoraConfig. create_py_executor now derives lora_config and kv_connector_config from llm_args instead of parameters.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Caller
  participant GenExec as GenerationExecutor
  participant Proxy as Proxy/Executor Impl
  participant Worker as Worker
  participant TorchPy as Torch PyExecutor Creator

  Note over Caller,Worker: Before: lora_config & kv_connector_config passed explicitly through layers
  Caller->>GenExec: create(engine, ..., lora_config, kv_connector_config, llm_args)
  GenExec->>Proxy: init(worker_kwargs + lora/kv)
  Proxy->>Worker: init(..., lora_config, kv_connector_config)
  Worker->>TorchPy: create_py_executor(..., lora_config, kv_connector_config)
  TorchPy-->>Worker: executor
  Worker-->>Proxy: ready
  Proxy-->>GenExec: executor
  GenExec-->>Caller: handle

  rect rgba(240,248,255,0.6)
  Note over Caller,Worker: After: configs sourced from llm_args only
  Caller->>GenExec: create(engine, ..., llm_args)
  GenExec->>Proxy: init(worker_kwargs without lora/kv)
  Proxy->>Worker: init(..., llm_args)
  Worker->>TorchPy: create_py_executor(..., llm_args)
  TorchPy->>TorchPy: derive lora/kv from llm_args
  TorchPy-->>Worker: executor
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Description Check ⚠️ Warning: The pull request description contains only the unfilled PR template sections, with no actual summary of changes, explanation of the issue or solution, or test coverage information. Resolution: replace the placeholder text with a clear summary of what changed and why, fill in the Description section with a concise explanation of the implementation, and list relevant test cases under Test Coverage.
Docstring Coverage ⚠️ Warning: Docstring coverage is 21.05%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
Title Check ✅ Passed: The title "[TRTLLM-8480][chore] clean create_py_executor API" clearly identifies the JIRA ticket and the chore type, and concisely summarizes the main change of cleaning the create_py_executor interface, making the primary focus easy to grasp at a glance.

📜 Recent review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 0a0159f and dfd44ee.

📒 Files selected for processing (10)
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py (2 hunks)
  • tensorrt_llm/executor/base_worker.py (2 hunks)
  • tensorrt_llm/executor/executor.py (8 hunks)
  • tensorrt_llm/executor/proxy.py (1 hunks)
  • tensorrt_llm/executor/ray_executor.py (2 hunks)
  • tensorrt_llm/executor/ray_gpu_worker.py (1 hunks)
  • tensorrt_llm/executor/rpc_proxy.py (0 hunks)
  • tensorrt_llm/executor/rpc_worker.py (1 hunks)
  • tensorrt_llm/executor/worker.py (1 hunks)
  • tensorrt_llm/llmapi/llm.py (1 hunks)
💤 Files with no reviewable changes (1)
  • tensorrt_llm/executor/rpc_proxy.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/executor/rpc_worker.py
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/executor/proxy.py
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/executor/executor.py
  • tensorrt_llm/executor/ray_gpu_worker.py
  • tensorrt_llm/executor/ray_executor.py
  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/executor/rpc_worker.py
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/executor/proxy.py
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/executor/executor.py
  • tensorrt_llm/executor/ray_gpu_worker.py
  • tensorrt_llm/executor/ray_executor.py
  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/executor/rpc_worker.py
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/executor/proxy.py
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/executor/executor.py
  • tensorrt_llm/executor/ray_gpu_worker.py
  • tensorrt_llm/executor/ray_executor.py
  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
🧠 Learnings (2)
📚 Learning: 2025-08-26T06:07:02.166Z
Learnt from: shaharmor98
PR: NVIDIA/TensorRT-LLM#7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.166Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.

Applied to files:

  • tensorrt_llm/llmapi/llm.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
PR: NVIDIA/TensorRT-LLM#7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/executor/base_worker.py
🧬 Code graph analysis (4)
tensorrt_llm/executor/rpc_worker.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • BaseLlmArgs (1428-2173)
tensorrt_llm/_torch/pyexecutor/py_executor_creator.py (2)
tensorrt_llm/llmapi/llm_args.py (1)
  • LoadFormat (2317-2322)
tensorrt_llm/_torch/models/modeling_phi4mm.py (1)
  • lora_config (1082-1100)
tensorrt_llm/executor/executor.py (6)
tensorrt_llm/llmapi/llm_args.py (1)
  • BaseLlmArgs (1428-2173)
tensorrt_llm/executor/postproc_worker.py (1)
  • PostprocWorkerConfig (42-49)
tensorrt_llm/executor/ray_executor.py (1)
  • RayExecutor (31-319)
tensorrt_llm/executor/rpc_proxy.py (1)
  • GenerationExecutorRpcProxy (23-372)
tensorrt_llm/executor/worker.py (1)
  • GenerationExecutorWorker (40-224)
tensorrt_llm/executor/proxy.py (1)
  • GenerationExecutorProxy (36-451)
tensorrt_llm/executor/base_worker.py (2)
tensorrt_llm/llmapi/llm_args.py (2)
  • BaseLlmArgs (1428-2173)
  • PybindMirror (827-971)
tensorrt_llm/_torch/models/modeling_phi4mm.py (1)
  • lora_config (1082-1100)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (11)
tensorrt_llm/_torch/pyexecutor/py_executor_creator.py (1)

209-210: LGTM: Configs properly derived from llm_args.

The derivation of lora_config and kv_connector_config from llm_args is clean and consistent with the PR's goal of centralizing configuration. Both are optional fields in llm_args (as shown in the relevant code snippets), and downstream usage at lines 334, 506-550, 614, 671, and 674 already handles None values appropriately.
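
A minimal sketch of the derivation and the None-tolerant downstream pattern described here (illustrative only; the line numbers cited above hold the real guard sites):

    # Derive the optional configs once from llm_args.
    lora_config = llm_args.lora_config
    kv_connector_config = llm_args.kv_connector_config

    # Downstream code guards for None, so disabled features change nothing.
    if lora_config is not None:
        ...  # set up LoRA adapters
    if kv_connector_config is not None:
        ...  # wire up the KV cache connector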

tensorrt_llm/llmapi/llm.py (2)

957-957: LGTM: Executor creation updated to remove direct config parameters.

The removal of lora_config and kv_connector_config from the create() call is consistent with the PR's goal of centralizing configuration handling in llm_args. For the TensorRT backend, these configurations are not needed in the same way they are for the PyTorch backend.


1061-1061: LGTM: PyTorch executor creation properly passes llm_args.

The call to self._executor_cls.create() correctly passes llm_args (which contains lora_config and kv_connector_config) instead of passing these configurations directly. This aligns with the updated create_py_executor signature in py_executor_creator.py.

tensorrt_llm/executor/ray_executor.py (1)

38-38: LGTM: Parameter removal is consistent with broader API changes.

The removal of kv_connector_config from RayExecutor.__init__ aligns with the PR's goal of moving configuration handling behind llm_args. The worker_kwargs construction at lines 96-98 correctly excludes this parameter.

tensorrt_llm/executor/proxy.py (1)

96-96: LGTM: Worker kwargs updated consistently.

The removal of kv_connector_config from worker_kwargs is consistent with the broader PR changes. The GenerationExecutorWorker (or its subclasses) will no longer receive this parameter, and will instead derive it from llm_args as needed.

tensorrt_llm/executor/ray_gpu_worker.py (1)

110-134: LGTM: Clean parameter removal with proper delegation.

The removal of lora_config and kv_connector_config parameters is clean and consistent. The RayGPUWorker now relies on llm_args (passed at line 120 and delegated to BaseWorker at line 133), where BaseWorker derives lora_config internally as needed (see base_worker.py line 90).

tensorrt_llm/executor/rpc_worker.py (2)

39-59: LGTM: RpcWorker initialization updated consistently.

The removal of lora_config and kv_connector_config parameters is clean. The RpcWorker now relies on llm_args (line 48, passed to BaseWorker at line 54) for configuration, consistent with the PR's centralization approach.


188-214: LGTM: main_task signature and worker instantiation updated correctly.

The main_task function signature and the RpcWorker instantiation both correctly remove lora_config and kv_connector_config parameters, passing only llm_args which contains these configurations.

tensorrt_llm/executor/worker.py (2)

42-62: LGTM: GenerationExecutorWorker initialization cleaned up.

The removal of lora_config and kv_connector_config parameters is clean and consistent with the PR's centralization approach. The llm_args parameter (line 51) is properly passed to BaseWorker (line 61), which handles internal derivation of these configs.


228-244: LGTM: worker_main signature and instantiation updated correctly.

Both the worker_main function signature (lines 228-244) and the worker_cls instantiation (lines 364-372) correctly remove the lora_config and kv_connector_config parameters, relying on llm_args (lines 243 and 372) for configuration.

Also applies to: 364-372

tensorrt_llm/executor/base_worker.py (1)

87-90: ****

The code at lines 87-90 is logically sound. Lines 87-88 use short-circuit AND evaluation: llm_args is not None and llm_args.backend in [...]. This ensures self._is_pytorch_backend is True only when llm_args is non-None. Line 90 then safely accesses llm_args.lora_config only when self._is_pytorch_backend is True, which guarantees llm_args is not None. This is a correct defensive pattern in Python and requires no changes.

Likely an incorrect or invalid review comment.
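
For readers unfamiliar with the idiom, here is a self-contained illustration of the short-circuit guard described above (hypothetical stand-in objects, and the backend list is abbreviated; this is not the actual base_worker.py code):

    from types import SimpleNamespace
    from typing import Optional

    # Stand-in for llm_args; it may also be None in some code paths.
    llm_args: Optional[SimpleNamespace] = SimpleNamespace(
        backend="pytorch", lora_config=None)

    # The right operand is evaluated only when llm_args is not None,
    # so no AttributeError can occur even if llm_args is None.
    is_pytorch_backend = llm_args is not None and llm_args.backend in ("pytorch",)

    # is_pytorch_backend being True guarantees llm_args is not None,
    # which makes this attribute access safe.
    lora_config = llm_args.lora_config if is_pytorch_backend else None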



@QiJune
Collaborator Author

QiJune commented Oct 16, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21521 [ run ] triggered by Bot

Collaborator

@Superjomn Superjomn left a comment


LGTM

@QiJune
Collaborator Author

QiJune commented Oct 16, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21521 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #16248 completed with status: 'FAILURE'

@tensorrt-cicd
Collaborator

PR_Github #21544 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21544 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16264 completed with status: 'SUCCESS'

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
@QiJune
Collaborator Author

QiJune commented Oct 17, 2025

/bot run

@QiJune QiJune enabled auto-merge (squash) October 17, 2025 00:54
@tensorrt-cicd
Collaborator

PR_Github #21634 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21634 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16303 completed with status: 'FAILURE'

@QiJune
Collaborator Author

QiJune commented Oct 17, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21652 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21652 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16314 completed with status: 'FAILURE'

@QiJune
Collaborator Author

QiJune commented Oct 17, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21698 [ run ] triggered by Bot. Commit: f13ca58

@tensorrt-cicd
Collaborator

PR_Github #21698 [ run ] completed with state SUCCESS. Commit: f13ca58
/LLM/main/L0_MergeRequest_PR pipeline #16349 completed with status: 'FAILURE'

@QiJune
Collaborator Author

QiJune commented Oct 18, 2025

/bot skip --comment "only one unrelated AsymmetricalCacheTest failed"

@tensorrt-cicd
Collaborator

PR_Github #21744 [ skip ] triggered by Bot. Commit: f13ca58

@tensorrt-cicd
Collaborator

PR_Github #21744 [ skip ] completed with state SUCCESS. Commit: f13ca58
Skipping testing for commit f13ca58

@QiJune QiJune merged commit 4a8ac8d into NVIDIA:main Oct 18, 2025
5 checks passed
nvchenghaoz pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 20, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
govind-ramnarayan pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 21, 2025
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
