[None][doc] Add deployment guide section to the official doc website by nv-guomingz · Pull Request #6669 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@nv-guomingz
Collaborator

@nv-guomingz nv-guomingz commented Aug 6, 2025

Preview:

[preview screenshot]

Summary by CodeRabbit

  • Documentation
    • Added a new "Deployment Guide" section to the documentation, featuring quick start guides for deploying Llama 3.3-70B, Llama4 Scout 17B, and DeepSeek R1 models on TensorRT-LLM with NVIDIA GPUs.
    • Updated guide titles and improved links for clarity and relevance.
    • Included step-by-step instructions, configuration examples, troubleshooting tips, evaluation procedures, and benchmarking scripts for supported models.

@nv-guomingz nv-guomingz requested a review from a team as a code owner August 6, 2025 15:37
@coderabbitai
Contributor

coderabbitai bot commented Aug 6, 2025

📝 Walkthrough

A new "Deployment Guide" section was added to the documentation index, introducing three quick-start markdown guides for deploying Llama4 Scout, DeepSeek R1, and Llama3.3-70B models on TensorRT-LLM. The Llama3.3-70B guide is newly authored, and minor updates were made to the Llama4 Scout guide title and reference links.

Changes

Cohort / File(s) Change Summary
Documentation Index Update
docs/source/index.rst
Added a "Deployment Guide" toctree section referencing three new quick-start markdown files for Llama4 Scout, DeepSeek R1, and Llama3.3-70B under the deployment-guide directory. No changes to existing toctrees or content.
New Deployment Guide for Llama3.3-70B
docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md
Introduced a comprehensive deployment guide for Llama3.3-70B on TensorRT-LLM, covering prerequisites, model access, Docker usage, server configuration, command-line flags, testing, troubleshooting, accuracy evaluation, and benchmarking for NVIDIA Blackwell and Hopper GPUs.
Minor Update to Llama4 Scout Guide
docs/source/deployment-guide/quick-start-recipe-for-llama4-scout-on-trtllm.md
Updated the document title to reflect broader hardware/software context and changed an internal class reference hyperlink to a formal API documentation link. No content changes to deployment instructions.
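From the file list above, the added toctree in docs/source/index.rst presumably looks something like the following. The caption text, the maxdepth value, and the exact DeepSeek filename (its casing is questioned later in this review) are assumptions, not taken from the diff:

```rst
.. toctree::
   :maxdepth: 1
   :caption: Deployment Guide

   deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md
   deployment-guide/quick-start-recipe-for-llama4-scout-on-trtllm.md
   deployment-guide/quick-start-recipe-for-deepseek-r1-on-trt-llm.md
```

Whether the `.md` extension is kept in the entries depends on how the MyST parser is configured for this Sphinx project.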

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Documentation Index
    participant Deployment Guide (Llama3.3-70B)
    participant Deployment Guide (Llama4 Scout)
    participant Deployment Guide (DeepSeek R1)

    User->>Documentation Index: Accesses index.rst
    Documentation Index->>User: Displays new "Deployment Guide" section
    User->>Deployment Guide (Llama3.3-70B): Reads Llama3.3-70B quick-start guide
    User->>Deployment Guide (Llama4 Scout): Reads Llama4 Scout quick-start guide
    User->>Deployment Guide (DeepSeek R1): Reads DeepSeek R1 quick-start guide

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes


Suggested labels

Community want to contribute


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
docs/source/deployment-guide/quick-start-recipe-for-trttllm-llama4-scout.md (4)

1-1: Inconsistent model naming – “Llama4” vs “Llama 4”

Throughout TensorRT-LLM docs we use the space-separated form “Llama 4”. Changing only the title to “Llama4” introduces a mixed style that hurts searchability and indexing.
Prefer “Llama 4 Scout 17B …” for consistency with Meta’s official model name and the license reference on Line 11.

-# Quick Start Recipe for Llama4 Scout 17B FP8 and NVFP4 on TensorRT-LLM - Blackwell & Hopper Hardware
+# Quick Start Recipe for Llama 4 Scout 17B FP8 and NVFP4 on TensorRT-LLM – Blackwell & Hopper Hardware

23-26: Model label / link mismatch (NVFP4 ≠ FP4)

The bullet states NVFP4 model but links to a file that ends in -FP4.
If the checkpoint is indeed “NVFP4”, the URL or filename should reflect that; otherwise drop the “NV” prefix to avoid user confusion when searching on Hugging Face.

-* NVFP4 model: [Llama-4-Scout-17B-16E-Instruct-FP4](https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP4)
+* FP4 model (NVFP4): [Llama-4-Scout-17B-16E-Instruct-NVFP4](https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-NVFP4)

(or update the link if the checkpoint is still “-FP4”)


95-117: Avoid HTML entities for indentation – breaks in some Markdown renderers

The &emsp; entities used to indent option descriptions are rendered literally in several Sphinx/Markdown pipelines, leaving stray markup or hard spaces in front of “Description:” in the final docs.
Use standard Markdown indent (two spaces) or definition lists instead.

Example:

#### `--tp_size`

  **Description:** Sets the **tensor-parallel size**

58-69: YAML snippet – missing root key comment & quoting $EXTRA_LLM_API_FILE

  1. The snippet writes top-level keys directly; calling out that this is the entire file avoids users appending to an existing config.
  2. Wrap the path in quotes to handle spaces.
-EXTRA_LLM_API_FILE=/tmp/config.yml
+EXTRA_LLM_API_FILE="/tmp/config.yml"

Consider adding a comment:

# /tmp/config.yml – overwrite if it exists
enable_attention_dp: false
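Putting both suggestions together, the config-creation step might look like this. The path and the overwrite comment are illustrative; only `enable_attention_dp` is taken from the snippet above:

```shell
# Write the complete extra-LLM-API config file (overwrites any existing file).
EXTRA_LLM_API_FILE="/tmp/config.yml"   # quoted so paths with spaces survive

cat > "$EXTRA_LLM_API_FILE" <<'EOF'
# /tmp/config.yml - this is the entire file; do not append to an existing config
enable_attention_dp: false
EOF

cat "$EXTRA_LLM_API_FILE"
```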
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a16ba64 and 7ae3c48.

📒 Files selected for processing (2)
  • docs/source/deployment-guide/quick-start-recipe-for-trttllm-llama4-scout.md (1 hunks)
  • docs/source/index.rst (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
  • From yibinl-nvidia (PR #6506, examples/models/core/mixtral/requirements.txt): the examples directory can pin different dependency versions than the root requirements.txt; examples are standalone and self-contained, so version conflicts between root and examples dependencies are acceptable.
  • From galagam (PR #6487, tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py): test files under tests/ directories do not require NVIDIA copyright headers, unlike production source files.
  • From moraxu (PR #6303, tests/integration/test_lists/qa/examples_test_list.txt): CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) legitimately coexist for the same model, validating the traditional CLI workflow and the newer LLM API backend respectively.

Applied to files:

  • docs/source/deployment-guide/quick-start-recipe-for-trttllm-llama4-scout.md
  • docs/source/index.rst
🔇 Additional comments (1)
docs/source/index.rst (1)

38-39: Path casing / hyphen check

Verify that the companion file actually lives at
docs/source/deployment-guide/quick-start-recipe-for-deepseek-r1-on-trt-llm.md
(the repo previously used “DeepSeek-R1” with capital “R”). A mismatched path will break the link and fail the docs build.
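The failure mode can be reproduced in a scratch directory; the paths below are illustrative, not taken from the repo:

```shell
# File on disk uses a capital "R1" while the toctree entry is all lowercase.
DOCS=$(mktemp -d)
mkdir -p "$DOCS/deployment-guide"
touch "$DOCS/deployment-guide/quick-start-recipe-for-DeepSeek-R1-on-trt-llm.md"

LINK="deployment-guide/quick-start-recipe-for-deepseek-r1-on-trt-llm.md"

if [ -f "$DOCS/$LINK" ]; then
    # On case-insensitive filesystems (e.g. default macOS) this branch hides the bug.
    echo "exact match found"
else
    echo "no exact match; case-insensitive candidate:"
    # find -iname does a case-insensitive lookup, exposing the casing mismatch
    find "$DOCS" -iname "$(basename "$LINK")"
fi
```

On a case-sensitive filesystem (where the docs CI runs) the second branch fires, which is exactly when the Sphinx link would break.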

@nv-guomingz
Collaborator Author

depends on #6543

@nv-guomingz nv-guomingz force-pushed the user/guomingz/vdr_doc branch from 7ae3c48 to a6312db Compare August 7, 2025 06:17
@nv-guomingz nv-guomingz changed the title from "[None][doc] Add deploymen guide section for VDR task" to "[None][doc] Add deployment guide section for VDR task" Aug 7, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md (5)

1-5: Model name looks non-standard—verify official naming before publishing

Meta’s public releases are branded “Llama 3 70B”. The repeated “3.3” variant in the title and throughout the guide may confuse users and break Hugging Face links if the repo is actually nvidia/Llama-3-70B-….

-# Quick Start Recipe for Llama3.3 70B on TensorRT-LLM - Blackwell & Hopper Hardware
+# Quick-Start Recipe for Llama 3 70B on TensorRT-LLM – Blackwell & Hopper GPUs

Please double-check the HF model IDs and rename consistently (title, links, commands, YAML, benchmark script).


60-70: YAML heredoc contains hard-tab indentation—copy-paste risk

Using tab characters (or inconsistent spaces) inside a heredoc can silently break YAML parsing. Convert to two-space indentation to match the rest of the guide and avoid surprises.

-kv_cache_config:     
-    dtype: fp8
+kv_cache_config:
+  dtype: fp8
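A quick pre-flight check can catch tab indentation before the YAML reaches the server. This is a sketch; the file name is illustrative and the sample content deliberately reproduces the bad indentation:

```shell
# Illustrative check: detect tab characters in a YAML file before serving.
CONFIG=/tmp/check_tabs.yml
printf 'kv_cache_config:\n\tdtype: fp8\n' > "$CONFIG"   # deliberately bad: tab indent

if grep -qn "$(printf '\t')" "$CONFIG"; then
    echo "tab indentation found: YAML parsers will reject or misread this file"
fi
```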

238-243: Add a language identifier to fenced code block (markdownlint MD040)

The linter flagged this block; specifying shell keeps syntax highlighting consistent with earlier snippets.

-```
+```shell
 MODEL_PATH=nvidia/Llama-3.3-70B-Instruct-FP8
 ...

246-251: Missing language identifier on result block

For fixed-width output, mark the fence as text to silence MD040 and improve readability.

-```
+```text
 |Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
 ...

324-333: Same lint issue on benchmark sample output

Add an identifier (text) so editors don’t treat it as generic code.

-```
+```text
 ============ Serving Benchmark Result ============
 Successful requests:                      16
 ...
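Before publishing, a rough self-check can flag bare fences the same way markdownlint's MD040 does. This sketch builds a small sample file (the MODEL_PATH line is copied from the snippet above) and reports opening fences that carry no language; it assumes fences start in column 1 and ignores indented or tilde fences:

```shell
# Build a sample markdown file containing a bare fence (backticks kept mid-line
# here so this snippet itself stays copy-pasteable).
FENCE='```'
{
  printf '%s\n' "$FENCE"
  printf '%s\n' 'MODEL_PATH=nvidia/Llama-3.3-70B-Instruct-FP8'
  printf '%s\n' "$FENCE"
} > /tmp/sample.md

# MD040-style check: only an OPENING fence with no language is a violation;
# closing fences are legitimately bare, so we track fence state with a toggle.
awk -v f="$FENCE" 'index($0, f) == 1 {
    if (!in_fence) { if ($0 == f) print NR ": fence missing language"; in_fence = 1 }
    else in_fence = 0
}' /tmp/sample.md
```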
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ae3c48 and a6312db.

📒 Files selected for processing (3)
  • docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md (1 hunks)
  • docs/source/deployment-guide/quick-start-recipe-for-llama4-scout-on-trtllm.md (2 hunks)
  • docs/source/index.rst (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/source/deployment-guide/quick-start-recipe-for-llama4-scout-on-trtllm.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/source/index.rst
🧰 Additional context used
🧠 Learnings (3)
  • From yibinl-nvidia (PR #6506): the examples directory can pin different dependency versions than the root requirements.txt; examples are standalone and self-contained, so version conflicts are acceptable.
  • From galagam (PR #6487): test files under tests/ directories do not require NVIDIA copyright headers, unlike production source files.
  • From moraxu (PR #6303): CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) legitimately coexist for the same model.

Applied to files:

  • docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md
🪛 LanguageTool
docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md

[style] ENGLISH_WORD_REPEAT_BEGINNING_RULE, flagged 14 times (lines ~115, ~119, ~123, ~127, ~142, ~146, ~148, ~156, ~160, ~162, ~168, ~172, ~178, ~303): three successive sentences begin with the same word, caused by the repeated "Description:" / "Default:" / "Options:" lead-ins in the server-flag reference. Consider rewording or using a definition list.
🪛 markdownlint-cli2 (0.17.2)
docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md

MD040 (fenced-code-language) at lines 238, 246, and 324: fenced code blocks should have a language specified.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
docs/source/deployment-guide/quick-start-recipe-for-llama3.3-70b-on-trtllm.md (1)

126-129: Flag the security implications of --trust_remote_code

--trust_remote_code executes arbitrary Python from the model repository at runtime. This is fine for demos but may violate internal security policies in production environments. Please add a one-liner warning and, if possible, a pointer to offline/“no-remote-code” workflows.
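A possible wording for that warning (suggested text, not taken from the guide; the GitHub-style alert syntax shown here would need a MyST admonition equivalent for the Sphinx build):

```md
> [!WARNING]
> `--trust_remote_code` executes Python code shipped inside the model
> repository at load time. Enable it only for checkpoints from publishers you
> trust, and consider downloading and auditing the model ahead of time before
> serving it in production.
```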

@nv-guomingz
Collaborator Author

/bot run --stage-list "A10-Build-docs"

@tensorrt-cicd
Collaborator

PR_Github #14412 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14412 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10894 (Partly Tested) completed with status: 'FAILURE'

@nv-guomingz nv-guomingz force-pushed the user/guomingz/vdr_doc branch from a6312db to 2ca5ad5 Compare August 7, 2025 08:05
@nv-guomingz
Collaborator Author

/bot run --stage-list "A10-Build_Docs"

@tensorrt-cicd
Collaborator

PR_Github #14425 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14425 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10904 (Partly Tested) completed with status: 'SUCCESS'

@nv-guomingz nv-guomingz force-pushed the user/guomingz/vdr_doc branch from 2ca5ad5 to 1cf7ef1 Compare August 7, 2025 13:01
@nv-guomingz
Collaborator Author

/bot skip --comment "docs build phase already pass"

@nv-guomingz nv-guomingz enabled auto-merge (squash) August 7, 2025 13:04
@tensorrt-cicd
Collaborator

PR_Github #14473 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14473 [ skip ] completed with state SUCCESS
Skipping testing for commit 1cf7ef1

@nv-guomingz nv-guomingz requested a review from kaiyux August 7, 2025 13:25
@nv-guomingz nv-guomingz force-pushed the user/guomingz/vdr_doc branch from 1cf7ef1 to 09e4542 Compare August 7, 2025 14:13
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
@nv-guomingz nv-guomingz force-pushed the user/guomingz/vdr_doc branch from 09e4542 to 428a4ab Compare August 7, 2025 14:14
@nv-guomingz
Collaborator Author

/bot skip --comment "docs build pass"

@tensorrt-cicd
Collaborator

PR_Github #14482 [ skip ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14482 [ skip ] completed with state SUCCESS
Skipping testing for commit 428a4ab

@nv-guomingz nv-guomingz merged commit 0223de0 into NVIDIA:main Aug 7, 2025
4 checks passed
@litaotju litaotju changed the title from "[None][doc] Add deployment guide section for VDR task" to "[None][doc] Add deployment guide section to the official doc website" Aug 7, 2025
Shunkangz pushed a commit to hcyezhang/TensorRT-LLM that referenced this pull request Aug 8, 2025
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
@nv-guomingz nv-guomingz deleted the user/guomingz/vdr_doc branch September 17, 2025 06:23