[https://nvbugs/5541545][fix] Remove test_llama4 by mikeiovine · Pull Request #8031 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@mikeiovine (Collaborator) commented Sep 26, 2025:

Description

This test is too hard to maintain: it is overly sensitive to small changes in kernels, and we now have more robust gsm8k accuracy tests.

Test Coverage

N/A

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
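
For example, a single invocation can combine several of these flags (the GPU type and backend values below are illustrative):

/bot run --disable-fail-fast --gpu-type "H100_PCIe" --test-backend "pytorch"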

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.
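
For example (the comment text is illustrative):

/bot skip --comment "Documentation-only change; no test stages affected"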

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

Summary by CodeRabbit

  • Tests
    • Re-enabled a previously skipped multi-GPU model test, allowing it to run in CI again.
    • Updated the test to explicitly set CUDA graph batch sizes for more deterministic execution.
    • No changes to runtime behavior or public APIs; production functionality remains unaffected.
    • Improves test coverage and reliability without altering user workflows.

@mikeiovine (Collaborator, Author):

/bot run --post-merge

@coderabbitai (Contributor, bot) commented Sep 26, 2025:

📝 Walkthrough

Removed a skip entry from the waives list to allow a specific multi-GPU LLaMA4 test to run. Updated the corresponding unit test to pass an explicit batch_sizes list to CudaGraphConfig when CUDA graphs are enabled.

Changes

  • Waives list update (tests/integration/test_lists/waives.txt): Removed the SKIP entry for unittest/_torch/multi_gpu_modeling/test_llama4.py::test_llama4[pp1-ep4-enable_adp-enable_graph-tp8-trtllm-scout].

  • LLaMA4 unittest CUDA graph config (tests/unittest/_torch/multi_gpu_modeling/test_llama4.py): When use_cuda_graph is True, construct CudaGraphConfig with batch_sizes=[1, 2, 3, 4] instead of the default constructor.
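
A minimal sketch of the updated test configuration follows. It assumes CudaGraphConfig is importable from tensorrt_llm.llmapi and is passed to the LLM constructor as in the public LLM API; the checkpoint path, parallelism settings, and variable names are illustrative rather than the exact test code.

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import CudaGraphConfig

use_cuda_graph = True  # test parameter that toggles CUDA graph execution

# Pin the batch sizes captured as CUDA graphs instead of relying on the
# default constructor, so graph capture (and the test) is deterministic.
cuda_graph_config = (
    CudaGraphConfig(batch_sizes=[1, 2, 3, 4]) if use_cuda_graph else None
)

llm = LLM(
    model="/path/to/llama4-scout-checkpoint",  # illustrative checkpoint path
    tensor_parallel_size=8,                    # matches the tp8 test variant
    cuda_graph_config=cuda_graph_config,
)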

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)

Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

Title Check (⚠️ Warning): The title "[https://nvbugs/5541545][fix] Remove test_llama4" suggests the test is being deleted, whereas the actual change removes a skip entry and adjusts the test's configuration so that it runs again. Resolution: rename the pull request to reflect the change, for example "[fix] Unwaive test_llama4 by removing skip entry and specifying CUDA graph batch sizes".

Description Check (⚠️ Warning): The PR description is missing the mandatory summary section at the top, which must contain a ticket or issue reference and a type (e.g., [fix]) or use the @coderabbitai summary directive; without it, the body does not conform to the repository's template. Resolution: add the required summary line at the top of the description (for example "[JIRA ticket/NVBugs ID/GitHub issue/None][type] Summary") or include the @coderabbitai summary directive to generate it automatically.

@tensorrt-cicd (Collaborator):

PR_Github #20120 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #20120 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15163 completed with status: 'FAILURE'

@mikeiovine (Collaborator, Author):

/bot run --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #20144 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #20144 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15186 completed with status: 'FAILURE'

@mikeiovine (Collaborator, Author):

/bot run --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #20277 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #20277 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15289 completed with status: 'FAILURE'

@mikeiovine (Collaborator, Author):

/bot run --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #20387 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #20387 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15381 completed with status: 'FAILURE'

@mikeiovine (Collaborator, Author):

/bot run --post-merge

@tensorrt-cicd (Collaborator):

PR_Github #20461 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #20461 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15427 completed with status: 'FAILURE'

@mikeiovine (Collaborator, Author):

/bot run --stage-list "DGX_H200-8_GPUs-PyTorch-Post-Merge-1"

@tensorrt-cicd (Collaborator):

PR_Github #20535 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #20535 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15492 (Partly Tested) completed with status: 'FAILURE'

Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
@mikeiovine force-pushed the unwaive-llama4-test branch from 2c36f34 to fa93ede on October 8, 2025 16:20
@mikeiovine changed the title from "[https://nvbugs/5541545][fix] Unwaive test_llama4" to "[https://nvbugs/5541545][fix] Remove test_llama4" on Oct 8, 2025
@mikeiovine (Collaborator, Author):

/bot run

@mikeiovine requested review from a team and byshiue, and removed the request for a team, on October 8, 2025 16:23
@tensorrt-cicd (Collaborator):

PR_Github #20809 [ run ] triggered by Bot

@mikeiovine requested a review from a team on October 8, 2025 16:27
@mikeiovine requested review from Wanli-Jiang and removed the request for a team on October 8, 2025 16:27
@brb-nv (Collaborator) left a review comment:

LGTM.

@mikeiovine enabled auto-merge (squash) on October 8, 2025 16:34
@tensorrt-cicd (Collaborator):

PR_Github #20809 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15733 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@mikeiovine merged commit c88913d into NVIDIA:main on Oct 8, 2025
9 checks passed
kris1025 pushed a commit to kris1025/TensorRT-LLM that referenced this pull request on Oct 14, 2025
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>