[TRTLLM-4517] [feat] Additional model outputs #7206
Conversation
📝 Walkthrough
Reworks the model forward path to normalize logits-containing outputs (raw tensor outputs are wrapped as {'logits': tensor} dicts).
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
    actor Caller
    participant Engine as _forward_step
    participant Model as model.forward
    Caller->>Engine: _forward_step(inputs, without_logits, gather_ids)
    Engine->>Model: model.forward(inputs, return_context_logits=...)
    Model-->>Engine: outputs (dict or tensor)
    alt outputs is dict
        Engine->>Engine: logits = outputs.get('logits', None)
        alt logits is None
            Engine-->>Caller: outputs
        else
            alt gather_ids provided
                Engine->>Engine: outputs['logits'] = logits[gather_ids]
            end
            alt without_logits True
                Engine-->>Caller: outputs
            else
                Engine-->>Caller: outputs
            end
        end
    else outputs is tensor
        Engine->>Engine: outputs = {'logits': outputs}
        alt gather_ids provided
            Engine->>Engine: outputs['logits'] = outputs['logits'][gather_ids]
        end
        Engine-->>Caller: outputs
    end
```
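In Python terms, the normalization this diagram describes could be sketched as follows. This is an illustrative reading of the diagram, not the PR's exact implementation; `normalize_forward_outputs` is a hypothetical name.

```python
import torch
from typing import Any, Dict, Optional


def normalize_forward_outputs(
        outputs: Any,
        gather_ids: Optional[torch.Tensor] = None) -> Dict[str, Any]:
    """Normalize forward results to a dict keyed by 'logits'.

    Raw tensors are wrapped as {'logits': tensor}; dict outputs pass
    through, with their logits optionally gathered at the given row
    indices (e.g., for speculative decoding).
    """
    if not isinstance(outputs, dict):
        outputs = {'logits': outputs}
    logits = outputs.get('logits', None)
    if logits is not None and gather_ids is not None:
        # Keep only the rows requested by the caller.
        outputs['logits'] = logits[gather_ids.long()]
    return outputs
```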
```mermaid
sequenceDiagram
    actor Caller
    participant Engine as _forward_step_mm_encoder_only
    participant Model as model.forward (multimodal)
    Caller->>Engine: _forward_step_mm_encoder_only(multimodal_inputs, scheduled_requests)
    Engine->>Engine: validate/construct multimodal_params
    Engine->>Model: model.forward(multimodal_params)
    Model-->>Engine: mm_embeddings (tensor or empty)
    Engine->>Engine: split/chunk embeddings per scheduled requests
    Engine-->>Caller: {'mm_embeddings': [...], 'logits': None}
```
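The encoder-only path above could look roughly like this in Python; `tokens_per_request` is a hypothetical stand-in for the per-request lengths the engine derives from scheduled_requests.

```python
import torch
from typing import Any, Dict, List


def mm_encoder_step(mm_embeddings: torch.Tensor,
                    tokens_per_request: List[int]) -> Dict[str, Any]:
    """Split encoder embeddings per scheduled request."""
    if mm_embeddings.numel() == 0:
        chunks: List[torch.Tensor] = []
    else:
        # torch.split accepts a list of chunk sizes along dim 0.
        chunks = list(torch.split(mm_embeddings, tokens_per_request, dim=0))
    # The encoder-only step produces no logits; keep the dict contract.
    return {'mm_embeddings': chunks, 'logits': None}
```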
```mermaid
sequenceDiagram
    actor Sampler
    participant Executor as _sample_async
    participant LogitsHandler as HandleLogits
    participant AddHandler as HandleAdditionalOutputs
    participant Requests as ScheduledRequests
    Sampler->>Executor: trigger sampling loop
    Executor->>LogitsHandler: process logits -> generation context
    LogitsHandler-->>Executor: batch_outputs (logits + other)
    Executor->>AddHandler: HandleAdditionalOutputs(context_reqs, gen_reqs, batch_outputs, beam_width, num_context_tokens)
    AddHandler-->>Requests: append_additional_context_outputs / append_additional_generation_outputs per request
    AddHandler-->>Executor: confirmation
    Executor-->>Sampler: continue sampling with augmented results
```
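The additional-outputs hand-off could be sketched like this. The request objects are assumed to expose the append_* hooks named in the diagram (their signatures here are guesses), and `output_names` is a hypothetical list of the user-requested extra outputs.

```python
from typing import Any, Dict, List


def handle_additional_outputs(context_reqs: List[Any],
                              gen_reqs: List[Any],
                              batch_outputs: Dict[str, Any],
                              output_names: List[str]) -> None:
    """Route requested extra tensors from batch_outputs to each request."""
    # Iterate only over the explicitly requested outputs, not all tensors.
    for name in output_names:
        tensor = batch_outputs.get(name)
        if tensor is None:
            continue
        for req in context_reqs:
            req.append_additional_context_outputs(name, tensor)
        for req in gen_reqs:
            req.append_additional_generation_outputs(name, tensor)
```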
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Suggested reviewers
📜 Recent review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
/bot run
Actionable comments posted: 2
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
2318-2325: Update stale comment to match new behavior and fix typo.
The code no longer “returns all the logits” early; it computes, then optionally gathers below. Also, there’s a duplicated “the”.
```diff
-        # For simplicity, just return all the the logits if we have special gather_ids
-        # from speculative decoding.
+        # Run the forward pass. If gather_ids (speculative decoding) or gather_context_logits
+        # is set, ask the model to return context logits; we will gather below as needed.
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else
Files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
tensorrt_llm/runtime/generation.py (1)
gather_context_logits (1213-1214)
tensorrt_llm/_torch/speculative/interface.py (1)
without_logits (53-54)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
PR_Github #16417 [ run ] triggered by Bot
PR_Github #16417 [ run ] completed with state
Force-pushed 49e64b3 to 1c3ad6e (Compare)
/bot run
PR_Github #16671 [ run ] triggered by Bot
Actionable comments posted: 1
♻️ Duplicate comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
2252-2254: Always return a dict (with 'logits') in without_logits mode to preserve the contract; the current return type can break callers.
This reintroduces the same contract issue previously flagged: _forward_step is annotated to return Dict[str, Any] but may return a Tensor here; many sites index outputs['logits']. Fix by normalizing to a dict. Apply this diff:
```diff
-        if self.without_logits:
-            return outputs
+        if self.without_logits:
+            # Preserve downstream contract: always return a dict with 'logits'
+            if isinstance(outputs, dict):
+                normalized = dict(outputs)
+                normalized['logits'] = None
+                return normalized
+            return {'logits': None}
```
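To see why the dict contract matters, consider a toy caller that indexes by key, as most downstream sites do; this is an illustrative snippet, not code from the PR.

```python
import torch


def consume(outputs):
    # Downstream call sites index by key and assume a dict is returned.
    return outputs['logits']


logits = torch.randn(4, 32000)
consume({'logits': logits})  # works: dict contract preserved
consume({'logits': None})    # works: logits suppressed, key still present
# consume(logits)            # breaks: a raw tensor is not str-indexable
```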
🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
2246-2251: Comment is misleading vs. implementation; clarify behavior.
You request context logits from the model, then gather rows. The comment claims “return all the logits”. Update the comment to reflect the gathering behavior.
2255-2261: Broaden dict handling to Mapping for HF-style ModelOutput.
Some model outputs are Mapping-like (e.g., transformers’ ModelOutput). Checking Mapping is safer than dict. Apply this minimal change:
```diff
-        if isinstance(outputs, dict):
+        from collections.abc import Mapping
+        if isinstance(outputs, Mapping):
```
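A toy Mapping subclass illustrates the difference; whether any given framework's output type also subclasses dict varies, so the broader check is the safer default.

```python
from collections.abc import Mapping


class OutputLike(Mapping):
    """Illustrative dict-like output container that is not a dict."""

    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


out = OutputLike(logits=None)
print(isinstance(out, dict))     # False: a plain dict check misses it
print(isinstance(out, Mapping))  # True: the Mapping check accepts it
```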
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (3)
tensorrt_llm/runtime/generation.py (1)
gather_context_logits (1213-1214)
tensorrt_llm/_torch/speculative/interface.py (1)
without_logits (53-54)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
get (100-109)
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
2236-2271: Verify all downstream outputs['logits'] usages after normalization changes.
The recent contract update in _forward_step may alter the shape or values returned in outputs['logits']. Please review each call site to ensure it correctly handles any reshaped or re-normalized logits:
• Unit tests (must still pass):
- tests/unittest/trt/model/test_phi.py
- tests/unittest/trt/model/test_nemotron_nas.py
- tests/unittest/trt/model/test_mistral.py
- tests/unittest/trt/model/test_mamba.py
- tests/unittest/trt/model/test_gpt.py
- tests/unittest/trt/model/test_llama.py
• Generation pipeline:
- tensorrt_llm/runtime/generation.py (around lines 950, 1852, 1860, 1882, 2243, 2536, 3594, 3611, 3653, 3659, 3793)
• PyExecutor integration:
- tensorrt_llm/_torch/pyexecutor/py_executor.py (lines 780, 980, 1099, 1523)
• Sampler logic:
- tensorrt_llm/_torch/pyexecutor/sampler.py (around lines 680–683, 952)
• Speculative drafter:
- tensorrt_llm/_torch/speculative/model_drafter.py (lines 277, 371, 393)
• Model engine itself:
- tensorrt_llm/_torch/pyexecutor/model_engine.py (lines 2268, 2349)
• Triton backend adapter:
- triton_backend/all_models/inflight_batcher_llm/tensorrt_llm/1/model.py (line 266)
• Example scripts:
- examples/summarize.py (line 491)
At each site, confirm that the code correctly indexes, reshapes, and interprets the logits (dtype, device, dimensions, normalization).
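A hypothetical helper for such an audit might assert the normalized contract at each site; the shape and dtype expectations below are assumptions for illustration, not the repository's documented invariants.

```python
import torch
from typing import Any, Dict


def check_logits_contract(outputs: Dict[str, Any]) -> None:
    """Sanity-check the normalized forward-step output contract."""
    assert isinstance(outputs, dict), 'forward step must return a dict'
    logits = outputs.get('logits')
    if logits is not None:
        assert isinstance(logits, torch.Tensor)
        # Assumed layouts: [tokens, vocab] or [tokens, beams, vocab].
        assert logits.dim() in (2, 3)
        assert logits.dtype in (torch.float16, torch.bfloat16,
                                torch.float32)
```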
PR_Github #16671 [ run ] completed with state
/bot run
PR_Github #16699 [ run ] triggered by Bot
PR_Github #16699 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #16837 [ run ] triggered by Bot
PR_Github #16837 [ run ] completed with state
Force-pushed 1c3ad6e to 9efe9d2 (Compare)
/bot run
PR_Github #16895 [ run ] triggered by Bot
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
tensorrt_llm/_torch/pyexecutor/model_engine.py (5)
395-401: Allocate gather_ids_cuda as torch.long (required for indexing).
Prevents CUDA advanced-indexing errors and avoids per-step casts.
```diff
-        self.gather_ids_cuda = torch.empty((self.max_num_tokens, ),
-                                           dtype=torch.int,
-                                           device='cuda')
+        self.gather_ids_cuda = torch.empty((self.max_num_tokens, ),
+                                           dtype=torch.long,
+                                           device='cuda')
-        self.previous_pos_indices_cuda = torch.empty(
-            (self.max_num_tokens, ), dtype=torch.int, device='cuda')
+        self.previous_pos_indices_cuda = torch.empty(
+            (self.max_num_tokens, ), dtype=torch.long, device='cuda')
```
1517-1519: Copy gather_ids as torch.long to match the device buffer.
```diff
-        self.gather_ids_cuda[:len(gather_ids)].copy_(torch.tensor(
-            gather_ids, dtype=torch.int, pin_memory=True),
-                                                     non_blocking=True)
+        self.gather_ids_cuda[:len(gather_ids)].copy_(
+            torch.tensor(gather_ids, dtype=torch.long, pin_memory=True),
+            non_blocking=True)
```
1406-1411: previous_batch_indices used for GPU indexing must be long.
previous_slots (copied from previous_batch_indices_host) indexes CUDA tensors below; keep both the host tensor and the device buffer as torch.long.
```diff
-        previous_batch_indices_host = torch.tensor(previous_batch_indices,
-                                                   dtype=torch.int,
-                                                   pin_memory=True)
+        previous_batch_indices_host = torch.tensor(
+            previous_batch_indices, dtype=torch.long, pin_memory=True)
```
Additionally, update the device buffer allocation (Lines 437–439):
```diff
-        self.previous_batch_indices_cuda = torch.empty((self.max_num_tokens, ),
-                                                       dtype=torch.int,
-                                                       device='cuda')
+        self.previous_batch_indices_cuda = torch.empty((self.max_num_tokens, ),
+                                                       dtype=torch.long,
+                                                       device='cuda')
```
1455-1460: previous_pos_indices used for GPU indexing must be long.
This indexes new_tokens_lens_device; use torch.long for the host copy.
```diff
-        previous_pos_indices_host = torch.tensor(previous_pos_indices,
-                                                 dtype=torch.int,
-                                                 pin_memory=True)
+        previous_pos_indices_host = torch.tensor(
+            previous_pos_indices, dtype=torch.long, pin_memory=True)
```
1690-1692: Use torch.long for gather_ids in the no-cache path too.
```diff
-        self.gather_ids_cuda[:len(gather_ids)].copy_(torch.tensor(
-            gather_ids, dtype=torch.int, pin_memory=True),
-                                                     non_blocking=True)
+        self.gather_ids_cuda[:len(gather_ids)].copy_(
+            torch.tensor(gather_ids, dtype=torch.long, pin_memory=True),
+            non_blocking=True)
```
♻️ Duplicate comments (2)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
2252-2254: Always return a dict when without_logits is True (preserve the contract).
Callers commonly index outputs['logits']; returning a raw Tensor/non-dict here breaks the type contract. Return a dict with 'logits': None (and keep other fields if present).
```diff
-        if self.without_logits:
-            return outputs
+        if self.without_logits:
+            # Preserve downstream contract: always return a dict with 'logits'
+            if isinstance(outputs, dict):
+                outputs['logits'] = None
+                return outputs
+            else:
+                return {'spec_outputs': outputs, 'logits': None}
```
2267-2269: Fix unsafe int32 advanced indexing; cast to long and use index_select.
On CUDA, index tensors must be torch.long. Advanced indexing with int32 can error or crash.
```diff
-        if gather_ids is not None:
-            outputs['logits'] = logits[gather_ids]
+        if gather_ids is not None:
+            idx = gather_ids.long()
+            outputs['logits'] = logits.index_select(0, idx)
```
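A minimal sketch of the recommended pattern, run on CPU for portability; on CUDA, advanced indexing has historically been strict about torch.long indices, which is the rationale above.

```python
import torch

logits = torch.randn(8, 32000)
gather_ids = torch.tensor([0, 3, 7], dtype=torch.int32)

# Cast once to torch.long, then use index_select to gather rows.
idx = gather_ids.long()
gathered = logits.index_select(0, idx)
assert gathered.shape == (3, 32000)
```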
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
2244-2251: Fix misleading comment and minor grammar.
The comment says “return all the the logits,” but the code gathers a subset. Update it for accuracy.
```diff
-        # For simplicity, just return all the the logits if we have special gather_ids
-        # from speculative decoding.
+        # If gather_ids are provided (e.g., from speculative decoding), request context logits
+        # and gather the needed rows below.
```
PR_Github #16895 [ run ] completed with state
Force-pushed 9efe9d2 to 3c3cd3b (Compare)
/bot run
PR_Github #16976 [ run ] triggered by Bot
PR_Github #16976 [ run ] completed with state
Force-pushed 3c3cd3b to 2c4501d (Compare)
/bot run
PR_Github #17530 [ run ] triggered by Bot
PR_Github #17530 [ run ] completed with state
- Deleted the `DummyModel` class and its configuration from modeling.
- Moved the `DummyModel` class to tests.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
- Removed the thread leak marker from the integration test.
- Wrapped the output generation in a context manager to ensure proper resource handling.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
- Add an output check for generation output of model outputs that include context.
- Make model outputs depend on position ids.
- Verify outputs based on token position.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Force-pushed 6f6af2b to f14e921 (Compare)
/bot run
PR_Github #20726 [ run ] triggered by Bot
- Remove the AdditionalModelOutput class, use str instead (removes the gather_context flag).
- Automatically detect whether to gather the additional outputs for context tokens or not.
- Update tests to reflect the changes.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
PR_Github #20726 [ run ] completed with state
- Updated the logic to collect additional outputs from context and generation requests.
- Only iterate over the requested additional outputs instead of all tensors.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
/bot run
PR_Github #20783 [ run ] triggered by Bot
@coderabbitai review
✅ Actions performed: Review triggered.
PR_Github #20783 [ run ] completed with state
@coderabbitai summary
✅ Actions performed: Summary regeneration triggered.
LGTM regarding the API change.
- Move additional context and generation outputs to prototype in the completion output schema.
- Update the sampling params schema to set `additional_model_outputs` to prototype.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Force-pushed a245b32 to bc531ce (Compare)
/bot skip --comment "Only changed API status flag"
PR_Github #21216 [ skip ] triggered by Bot
PR_Github #21216 [ skip ] completed with state
Summary by CodeRabbit
New Features
Improvements
Tests
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
kill
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
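For example, a typical invocation combining several of the flags documented above might look like the following; the flag values are illustrative.

```
/bot run --disable-fail-fast --gpu-type "H100_PCIe"
```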