[TRTLLM-7846][feat] implement etcd storage for disagg cluster by reasonsolo · Pull Request #8210 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@reasonsolo
Collaborator

@reasonsolo reasonsolo commented Oct 9, 2025

Summary by CodeRabbit

  • New Features

    • Added etcd-backed cluster storage option with key-prefix get/watch, TTL/lease support, and event-driven watch queues.
    • Factory methods now accept etcd URIs to create etcd-based storage.
  • Bug Fixes

    • Improved shutdown flow in auto-scaling to ensure clean unwatch and stop.
    • Safeguarded watch unsubscription to avoid redundant calls and standardized unwatch error behavior.
  • Tests

    • Enabled and expanded tests to cover etcd-based storage.
    • Updated expectations for unwatch behavior and accounted for the etcd watch thread in leak checks.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
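
For example, a few illustrative invocations built from the options documented above (stage and GPU names are placeholders taken from the examples):

  /bot run
  /bot run --disable-fail-fast --skip-test
  /bot run --stage-list "A10-PyTorch-1" --detailed-log
  /bot run --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp"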

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without proper care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without proper care and validation can break the top of tree.

@reasonsolo reasonsolo force-pushed the tllm7845_sdetcd_pr branch 2 times, most recently from 3582edc to 6240ca7 on October 9, 2025 04:43
@reasonsolo reasonsolo changed the title [TRTLLM-7846] implement etcd storage for disagg cluster [TRTLLM-7846][feat] implement etcd storage for disagg cluster Oct 9, 2025
@reasonsolo reasonsolo marked this pull request as ready for review October 9, 2025 04:45
@coderabbitai
Contributor

coderabbitai bot commented Oct 9, 2025

📝 Walkthrough


Adds etcd-backed cluster storage with Etcd3ClusterStorage and Etcd3WatchEventQueue, updates factory creation to handle etcd URIs, adjusts unwatch error type, refines autoscaling shutdown/unwatch flow, and extends tests to cover etcd storage and updated exception behavior.

Changes

  • Etcd3 cluster storage implementation (tensorrt_llm/serve/cluster_storage.py): Introduces the etcd3 client dependency; adds Etcd3ClusterStorage with set/get/delete/expire/prefix/watch/unwatch and a client property; implements Etcd3WatchEventQueue to translate etcd watch events; extends create_cluster_storage(_client) to support etcd URIs and kwargs (a rough factory sketch follows below); changes unwatch to raise KeyError; updates the logger import.
  • Autoscaling lifecycle adjustments (tensorrt_llm/serve/disagg_auto_scaling.py): Adds a return annotation to get_worker_key_prefix; __del__ now schedules stop(); stop() awaits unwatch before stopping storage; unwatch guards against a missing _watch_handle.
  • Tests: cluster storage and disagg manager worker (tests/unittest/disaggregated/test_cluster_storage.py, tests/unittest/disaggregated/test_disagg_cluster_manager_worker.py): Updates the expected exception to KeyError in the unwatch test; enables etcd tests by setting test=True for the etcd cases; expands storage_types to include "etcd"; adjusts the thread-leak marker with a note regarding the python-etcd3 watch thread.
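
As a rough illustration of the factory change (a minimal sketch under assumptions, not the code in this PR; the constructor arguments and default port are guesses), create_cluster_storage might dispatch on the URI scheme like this:

  from urllib.parse import urlparse

  def create_cluster_storage(uri: str, cluster_name: str, **kwargs):
      # Hypothetical dispatch: etcd:// URIs map to the new etcd-backed storage;
      # other schemes fall through to the pre-existing backends.
      parsed = urlparse(uri)
      if parsed.scheme == "etcd":
          host = parsed.hostname or "localhost"
          port = parsed.port or 2379  # assumed default etcd client port
          return Etcd3ClusterStorage(host=host, port=port,
                                     cluster_name=cluster_name, **kwargs)
      # ... existing, non-etcd storage creation paths ...
      raise ValueError(f"unsupported cluster storage URI: {uri}")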

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor App
  participant Factory as create_cluster_storage(_client)
  participant Storage as Etcd3ClusterStorage
  participant Etcd as etcd3.Client
  note over App,Etcd: New etcd-backed storage flow
  App->>Factory: create_cluster_storage("etcd://...", cluster_name, **kwargs)
  Factory-->>App: Etcd3ClusterStorage
  App->>Storage: set/get/delete/expire/get_prefix
  Storage->>Etcd: KV ops (lease, TTL, prefix)
  Etcd-->>Storage: Responses
  Storage-->>App: Results
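
From the application side, the key-value flow above might look roughly like this (a sketch only; the async API shape, the ttl parameter, and the keys are assumptions based on the summary, not the PR's exact interface):

  # Hypothetical usage of the etcd-backed storage (async API assumed).
  storage = create_cluster_storage("etcd://127.0.0.1:2379", "disagg_cluster")
  worker_info = '{"host": "10.0.0.1", "port": 8000}'          # placeholder payload
  await storage.set("workers/worker_0", worker_info, ttl=30)  # TTL backed by an etcd lease
  value = await storage.get("workers/worker_0")
  workers = await storage.get_prefix("workers/")
  await storage.expire("workers/worker_0", 30)                # refresh the lease
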
sequenceDiagram
  autonumber
  actor App
  participant Storage as Etcd3ClusterStorage
  participant Etcd as etcd3.Watch
  participant Q as Etcd3WatchEventQueue
  participant AioQ as asyncio.Queue
  note over App,AioQ: Watch flow with event translation
  App->>Storage: watch(key_prefix)
  Storage->>Etcd: watch(prefix)
  Etcd-->>Q: watch events
  Q->>AioQ: enqueue WatchEvent
  App-->>Storage: unwatch(key_prefix)
  Storage->>Etcd: cancel watch
  Q-->>Q: cancel hook / cleanup
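
The watch translation above can be pictured with a sketch like the following, assuming a python-etcd3-style callback API (add_watch_prefix_callback / cancel_watch); the real Etcd3WatchEventQueue and its WatchEvent type may differ:

  import asyncio
  import etcd3  # python-etcd3

  class WatchQueueSketch:
      """Bridges etcd watch callbacks, which arrive on the client's background
      watch thread, into an asyncio.Queue consumed on the event loop."""

      def __init__(self, client, loop: asyncio.AbstractEventLoop):
          # e.g. client = etcd3.client(host="127.0.0.1", port=2379)
          self._client = client
          self._loop = loop
          self._queue: asyncio.Queue = asyncio.Queue()
          self._watch_id = None

      def watch_prefix(self, prefix: str) -> None:
          self._watch_id = self._client.add_watch_prefix_callback(prefix, self._on_response)

      def _on_response(self, response) -> None:
          # Called from the watch thread; hand events to the loop thread-safely.
          for event in getattr(response, "events", []):
              self._loop.call_soon_threadsafe(self._queue.put_nowait, event)

      def unwatch(self) -> None:
          if self._watch_id is not None:
              self._client.cancel_watch(self._watch_id)
              self._watch_id = None

      async def get(self):
          return await self._queue.get()
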
sequenceDiagram
  autonumber
  participant GC as Object Finalizer
  participant Auto as DisaggAutoScaling
  participant Store as ClusterStorage
  note over GC,Store: Updated shutdown path
  GC->>Auto: __del__()
  Auto->>Auto: schedule stop()
  Auto->>Auto: stop()
  Auto->>Auto: await unwatch_workers() (guarded)
  Auto->>Store: stop()
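
The shutdown path above might be realized along these lines (a sketch based on the change summary; attribute and method names other than __del__, stop, and unwatch are assumed):

  import asyncio

  class AutoScalingShutdownSketch:
      def __init__(self, cluster_storage):
          self._cluster_storage = cluster_storage
          self._watch_handle = None

      def __del__(self):
          # A finalizer cannot await, so schedule stop() on the running loop, if any.
          try:
              loop = asyncio.get_running_loop()
          except RuntimeError:
              return  # no running loop (e.g. interpreter shutdown); nothing to schedule
          loop.create_task(self.stop())

      async def stop(self):
          await self.unwatch_workers()          # unwatch first, then stop storage
          await self._cluster_storage.stop()

      async def unwatch_workers(self):
          if self._watch_handle is None:        # guard against redundant unwatch calls
              return
          # What exactly is passed to storage.unwatch is an assumption here.
          await self._cluster_storage.unwatch(self._watch_handle)
          self._watch_handle = None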

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description Check (⚠️ Warning): The pull request description is still the unfilled template with only a placeholder and guidance comments, lacking any actual explanation of the issue, solution details, test coverage, and checklist confirmations. Resolution: Please replace the template and placeholder text with a real summary in the Description section that explains what was changed and why, add a Test Coverage section listing the relevant tests, and ensure the PR Checklist items are addressed.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title Check (✅ Passed): The title “[TRTLLM-7846][feat] implement etcd storage for disagg cluster” clearly follows the required ticket/type format and succinctly summarizes the primary feature added, namely etcd-based storage support for the disaggregated cluster.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 27677a3 and 6240ca7.

📒 Files selected for processing (4)
  • tensorrt_llm/serve/cluster_storage.py (4 hunks)
  • tensorrt_llm/serve/disagg_auto_scaling.py (3 hunks)
  • tests/unittest/disaggregated/test_cluster_storage.py (2 hunks)
  • tests/unittest/disaggregated/test_disagg_cluster_manager_worker.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/unittest/disaggregated/test_disagg_cluster_manager_worker.py
  • tests/unittest/disaggregated/test_cluster_storage.py
  • tensorrt_llm/serve/disagg_auto_scaling.py
  • tensorrt_llm/serve/cluster_storage.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
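
As a small, hypothetical illustration (not code from this PR) of the docstring and exception-handling guidelines above:

  def count_workers(storage, cluster_name: str) -> int:
      """Return the number of workers registered for a cluster.

      Args:
          storage: Cluster storage backend to query.
          cluster_name: Name of the cluster whose workers are counted.

      Returns:
          The number of registered workers.
      """
      try:
          workers = storage.get_prefix(f"{cluster_name}/workers/")  # minimal try body
      except KeyError:  # catch the most specific exception possible
          return 0
      else:  # main logic in the else branch
          return len(workers)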

Files:

  • tests/unittest/disaggregated/test_disagg_cluster_manager_worker.py
  • tests/unittest/disaggregated/test_cluster_storage.py
  • tensorrt_llm/serve/disagg_auto_scaling.py
  • tensorrt_llm/serve/cluster_storage.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/unittest/disaggregated/test_disagg_cluster_manager_worker.py
  • tests/unittest/disaggregated/test_cluster_storage.py
  • tensorrt_llm/serve/disagg_auto_scaling.py
  • tensorrt_llm/serve/cluster_storage.py
🧬 Code graph analysis (2)
tensorrt_llm/serve/disagg_auto_scaling.py (1)
tensorrt_llm/serve/cluster_storage.py (8)
  • stop (67-68)
  • stop (167-170)
  • start (63-64)
  • start (162-165)
  • unwatch (95-96)
  • unwatch (243-250)
  • unwatch (381-383)
  • unwatch (528-531)
tensorrt_llm/serve/cluster_storage.py (1)
tensorrt_llm/logger.py (1)
  • error (126-127)
🪛 Ruff (0.13.3)
tensorrt_llm/serve/cluster_storage.py

111-111: Avoid specifying long messages outside the exception class

(TRY003)


248-250: Avoid specifying long messages outside the exception class

(TRY003)


390-390: PEP 484 prohibits implicit Optional

Convert to Optional[T]

(RUF013)


419-419: Do not catch blind exception: Exception

(BLE001)


428-428: Unused method argument: cluster_name

(ARG002)


473-473: Unpacked variable meta is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


489-489: Avoid specifying long messages outside the exception class

(TRY003)


523-523: Consider moving this statement to an else block

(TRY300)
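
For reference, the RUF013 and TRY300 findings are typically resolved along these lines (a generic illustration, not the flagged code from cluster_storage.py):

  from typing import Optional

  # RUF013: annotate implicitly-optional parameters explicitly instead of writing timeout: float = None.
  def lookup(client, key: str, timeout: Optional[float] = None) -> Optional[str]:
      # TRY300: keep the try body minimal and return from the else block.
      try:
          value = client.get(key, timeout=timeout)
      except KeyError:
          return None
      else:
          return value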

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@reasonsolo reasonsolo requested a review from pcastonguay October 9, 2025 06:17
@reasonsolo
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20896 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20896 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15808 completed with status: 'FAILURE'

@reasonsolo
Collaborator Author

/bot run

@reasonsolo reasonsolo closed this Oct 10, 2025
@tensorrt-cicd
Collaborator

PR_Github #20985 [ ] completed with state FAILURE
Not allowed on merged PR

@reasonsolo reasonsolo reopened this Oct 10, 2025
@reasonsolo
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20986 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20986 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15870 completed with status: 'FAILURE'

@reasonsolo
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21014 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21014 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15889 completed with status: 'FAILURE'

@reasonsolo
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21184 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21184 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15993 completed with status: 'FAILURE'

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
@reasonsolo
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21290 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21290 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16072 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@pcastonguay pcastonguay merged commit 22471ec into NVIDIA:main Oct 14, 2025
5 checks passed
govind-ramnarayan pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 21, 2025
…#8210)

Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>


3 participants