[TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve #7515
Conversation
Force-pushed from 2a7c69f to 7f940c8
/bot run
PR_Github #17634 [ run ] triggered by Bot
📝 Walkthrough
Introduces a preprocessing stage for prompts. Adds a PreprocessedInputs TypedDict. BaseLLM gains preprocess_inputs, and generate_async accepts an optional preprocessed_inputs. OpenAIServer adds an async wrapper that preprocesses inputs in a background thread and then calls generate_async with the preprocessed data. Multimodal handling is centralized.
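For orientation, a minimal sketch of what a preprocessing result of this shape could look like. The field names below are assumptions inferred from the walkthrough and diagrams, not the actual definition in tensorrt_llm/inputs/data.py:

```python
from typing import Any, List, Optional, TypedDict


class PreprocessedInputsSketch(TypedDict, total=False):
    """Hypothetical shape of the preprocessing result (field names are assumptions)."""
    prompt: Optional[str]                  # original text prompt, if one was given
    prompt_token_ids: List[int]            # tokenized prompt
    query_token_ids: Optional[List[int]]   # optional query tokens, if the input carries them
    multimodal_params: Optional[Any]       # packed multimodal data, if present
```

Whatever the exact fields are, the idea is that everything generate_async needs downstream is computed up front, so the tokenization step can run off the event loop.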
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant C as Client
participant S as OpenAIServer
participant BG as To-Thread (CPU)
participant L as BaseLLM
participant EX as Executor
C->>S: Request (PromptInputs, SamplingParams)
S->>BG: llm.preprocess_inputs(inputs, sampling_params)
Note right of BG: Tokenization, multimodal packing<br/>returns PreprocessedInputs
BG-->>S: PreprocessedInputs
S->>L: generate_async(preprocessed_inputs=..., other kwargs)
L->>L: derive ctx/gen-only flags<br/>adjust sampling params if needed
L->>EX: generate_async(prompt_token_ids, query_token_ids, multimodal_params, ...)
EX-->>L: GenerationResult
L-->>S: RequestOutput
S-->>C: Response
```

```mermaid
sequenceDiagram
autonumber
participant U as User Code
participant L as BaseLLM
U->>L: generate_async(inputs=PromptInputs, no preprocessed_inputs)
L->>L: preprocess_inputs(inputs, sampling_params)
L->>L: compute is_ctx_only / is_gen_only
L->>L: set sampling_params.max_tokens (non-TRT path)
L->>L: route unified multimodal_params
L-->>U: RequestOutput (via executor)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🧪 Generate unit tests: ✅ Unit test PR creation complete.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tensorrt_llm/llmapi/llm.py (1)
426-433: Add consistency validation for `preprocessed_inputs`
No existing validation logic was found in `PreprocessedInputs`. To prevent subtle mismatches when `preprocessed_inputs` is passed to `generate_async`, include a hash (or signature) of the original `inputs` in `PreprocessedInputs` and verify it matches before proceeding.
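One way the suggested check could look — a hedged sketch only; the helper name and the inputs_hash field are hypothetical and not part of this PR:

```python
import hashlib
import json


def _inputs_fingerprint(inputs) -> str:
    """Hypothetical helper: stable digest of the user-visible inputs."""
    payload = json.dumps(inputs, sort_keys=True, default=str).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


# In preprocess_inputs (sketch): stash the fingerprint next to the tokenized data.
#     preprocessed["inputs_hash"] = _inputs_fingerprint(inputs)
#
# In generate_async (sketch): reject mismatched pairs early.
#     if preprocessed_inputs.get("inputs_hash") != _inputs_fingerprint(inputs):
#         raise ValueError("preprocessed_inputs does not match the provided inputs")
```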
📜 Review details
📒 Files selected for processing (3)
- tensorrt_llm/inputs/data.py (2 hunks)
- tensorrt_llm/llmapi/llm.py (3 hunks)
- tensorrt_llm/serve/openai_server.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Filenames compiled into a target must be case-insensitively unique
Files:
tensorrt_llm/inputs/data.py, tensorrt_llm/serve/openai_server.py, tensorrt_llm/llmapi/llm.py
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use spaces, not tabs; indent 4 spaces
Files:
tensorrt_llm/inputs/data.py, tensorrt_llm/serve/openai_server.py, tensorrt_llm/llmapi/llm.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Code must target Python 3.8+
Indent with 4 spaces; do not use tabs (Python)
Maintain module namespace on import: prefer from package.subpackage import foo; use foo.Symbol()
Python filenames use snake_case
Python class names use PascalCase
Python functions and methods use snake_case
Python local variables use snake_case; if starting with a number concept, prefix with k (e.g., k_99th_percentile)
Python global variables use G_ prefix with UPPER_SNAKE_CASE
Python constants use UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes
Initialize all externally visible class members in `__init__`
For public interfaces, prefer docstrings over comments; comments should be for in-function or file-local interfaces
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes and variables inline with docstrings immediately after assignment
Avoid reflection when a non-reflective approach suffices
Limit except clauses to specific exceptions where possible
When using try/except for duck-typing, keep try body minimal and move logic to else
Files:
tensorrt_llm/inputs/data.py, tensorrt_llm/serve/openai_server.py, tensorrt_llm/llmapi/llm.py
**/*.{cpp,cc,cxx,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
tensorrt_llm/inputs/data.py, tensorrt_llm/serve/openai_server.py, tensorrt_llm/llmapi/llm.py
🧬 Code graph analysis (3)
tensorrt_llm/inputs/data.py (4)
- tensorrt_llm/sampling_params.py (1): SamplingParams (125-486)
- tensorrt_llm/inputs/multimodal.py (1): MultimodalParams (152-410)
- tensorrt_llm/executor/result.py (1): prompt_token_ids (525-526)
- tensorrt_llm/llmapi/llm.py (1): prompt (79-80)
tensorrt_llm/serve/openai_server.py (3)
- tensorrt_llm/inputs/data.py (1): prompt_inputs (67-81)
- tensorrt_llm/llmapi/llm.py (4): RequestOutput (46-86), preprocess_inputs (316-392), generate_async (395-471), prompt (79-80)
- tensorrt_llm/sampling_params.py (1): SamplingParams (125-486)
tensorrt_llm/llmapi/llm.py (5)
- tensorrt_llm/inputs/data.py (2): PreprocessedInputs (56-61), TextPrompt (12-31)
- tensorrt_llm/sampling_params.py (1): SamplingParams (125-486)
- tensorrt_llm/_utils.py (1): nvtx_range_debug (865-889)
- tensorrt_llm/executor/executor.py (2): generate_async (114-155), is_shutdown (277-278)
- tensorrt_llm/executor/request.py (2): LoRARequest (24-53), PromptAdapterRequest (57-81)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (10)
tensorrt_llm/inputs/data.py (1)
56-62: LGTM! Well-structured preprocessing data type. The `PreprocessedInputs` TypedDict provides a clean, strongly-typed interface for passing preprocessed data between the tokenization and generation stages. This design effectively enables multi-threaded tokenization while maintaining type safety.

tensorrt_llm/serve/openai_server.py (5)

169-183: Good implementation of multi-threaded tokenization! The async wrapper correctly implements multi-threaded tokenization using `asyncio.to_thread`, which properly releases the GIL as mentioned in the PR objectives. This enables concurrent tokenization while the generation pipeline still runs in the event loop context.
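Roughly, the pattern being reviewed looks like the sketch below; the argument names are illustrative stand-ins rather than the exact server code:

```python
import asyncio


async def generate_async_wrapper(llm, inputs, sampling_params, **kwargs):
    """Hedged sketch of the server-side wrapper; names are illustrative."""
    # CPU-bound tokenization is pushed to a worker thread. Because the fast
    # tokenizer backend releases the GIL, several requests can tokenize at once.
    preprocessed = await asyncio.to_thread(
        llm.preprocess_inputs, inputs, sampling_params)
    # Generation stays on the event loop, where GenerationResult expects a
    # running loop to be available.
    return llm.generate_async(
        inputs,
        sampling_params=sampling_params,
        preprocessed_inputs=preprocessed,
        **kwargs)
```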
444-451: Consistent use of the wrapper across chat completions. The migration to use `generate_async_wrapper` is correctly implemented, maintaining the same parameters and behavior while enabling multi-threaded tokenization.

532-533: Wrapper correctly applied to multimodal encoder path. The multimodal encoder generation path correctly uses the wrapper, though it only passes the `prompt` parameter. This is consistent with the original implementation.

651-658: Completion endpoint successfully migrated to use the wrapper. The per-prompt generation in the completion endpoint correctly uses the wrapper with all necessary parameters.

730-735: Harmony chat completion correctly uses the wrapper. The harmony adapter path successfully migrates to use the wrapper while maintaining the specialized detokenization handling (`sampling_params.detokenize = False`).

tensorrt_llm/llmapi/llm.py (4)

316-393: Well-designed preprocessing method that centralizes tokenization logic. The `preprocess_inputs` method effectively consolidates all input preprocessing logic, including:
- Tokenization for text prompts
- Handling of pre-tokenized inputs
- Multimodal data processing with proper hashing
- VLM-specific prompt re-encoding when needed
This centralization enables clean separation of concerns and makes the multi-threaded tokenization possible.

394-424: Clean integration of preprocessed inputs in `generate_async`. The updated signature and implementation properly support both the traditional path (preprocessing inline) and the new optimized path (using pre-computed preprocessed inputs). The docstring is also well-updated.

434-439: Good extraction of preprocessed data fields. The code correctly extracts all necessary fields from the `preprocessed_inputs` dictionary, maintaining the same data flow as the original implementation.

440-446: Context-only optimization preserved correctly. The logic for optimizing KV cache allocation in context-only requests is properly maintained after the refactoring.
PR_Github #17634 [ run ] completed with state
Note: Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it. Generating unit tests... This may take up to 20 minutes.
Caution: An unexpected error occurred while opening a pull request: Reference update failed - https://docs.github.com/rest/git/refs#create-a-reference
/bot run --disable-fail-fast
PR_Github #17701 [ run ] triggered by Bot
PR_Github #17701 [ run ] completed with state
Force-pushed from c243fe8 to 0d5bfac
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
…-threading acceleration Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
This reverts commit fa7f077. Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Force-pushed from 0d5bfac to c16d826
/bot run
PR_Github #17715 [ run ] triggered by Bot
PR_Github #17715 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #17716 [ run ] triggered by Bot
LGTM for the performance. As we discussed, the generate_async API change isn’t necessary, but feel free to address it in a subsequent PR if timing allows.
PR_Github #17716 [ run ] completed with state
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
/bot run --disable-fail-fast
PR_Github #17794 [ run ] triggered by Bot
PR_Github #17794 [ run ] completed with state
[TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve (NVIDIA#7515) Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Description
For some models (e.g., gpt_oss) that are very small and well optimized, the tokenizer can become a bottleneck because a single CPU thread is responsible for tokenizing every request.
This PR leverages the fact that the tokenizer itself is typically written in Rust, so the GIL is released during tokenization and request tokenization can therefore be accelerated with multiple threads. Note that we cannot directly apply `asyncio.to_thread()` to `BaseLLM.generate_async()`, because part of it (`GenerationResult`) assumes an event loop is present, and there is no event loop when running inside `to_thread()`. Below is an nsys profile that shows multi-threaded tokenization in effect.
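To illustrate the underlying effect (this snippet is not part of the PR and assumes the transformers package with any Rust-backed fast tokenizer), driving a fast tokenizer from several Python threads can scale because encoding releases the GIL:

```python
import time
from concurrent.futures import ThreadPoolExecutor

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2", use_fast=True)  # any fast (Rust) tokenizer
prompts = ["some moderately long prompt " * 200] * 64


def run(n_threads: int) -> float:
    """Tokenize all prompts with the given number of threads; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(tokenizer.encode, prompts))
    return time.perf_counter() - start


print("1 thread :", run(1))
print("8 threads:", run(8))  # expect a noticeable speedup when the GIL is released
```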

Test Coverage
Existing unit tests. If you have a good idea of how to test multi-threaded tokenization, please comment; one possible direction is sketched below.
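A hedged test sketch: it only exercises the thread-offloading pattern against a plain Hugging Face fast tokenizer (assuming transformers and pytest-asyncio are available), not the trtllm-serve wrapper itself:

```python
import asyncio

import pytest
from transformers import AutoTokenizer


@pytest.mark.asyncio
async def test_concurrent_tokenization_matches_sequential():
    # Model name is arbitrary; any fast (Rust-backed) tokenizer works here.
    tok = AutoTokenizer.from_pretrained("gpt2", use_fast=True)
    prompts = [f"prompt number {i} " * 50 for i in range(32)]

    # Sequential reference result.
    expected = [tok.encode(p) for p in prompts]

    # Concurrent tokenization via asyncio.to_thread, mirroring the server-side pattern.
    actual = await asyncio.gather(
        *(asyncio.to_thread(tok.encode, p) for p in prompts))

    assert list(actual) == expected
```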
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
kill
Kill all running builds associated with the pull request.
skip --comment COMMENT
Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.