[None][test] Add post merge test for Seed-OSS-36B-Instruct by zhhuang-nv · Pull Request #8321 · NVIDIA/TensorRT-LLM · GitHub

Conversation

@zhhuang-nv
Collaborator

@zhhuang-nv zhhuang-nv commented Oct 13, 2025

Summary by CodeRabbit

  • New Features

    • Added configurable chat template options across evaluators via a new --chat_template_kwargs (JSON) CLI flag, including multimodal paths. Applies to GSM8K, GPQA (Main/Diamond/Extended), MMMU, and MMLU.
  • Tests

    • Added an integration test for Seed-OSS-36B-Instruct on GSM8K and included it in the pre-merge test matrix.
  • Chores

    • Updated accuracy references to include Seed-OSS-36B-Instruct results.
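The new flag described above carries a JSON object on the command line. A minimal sketch of how such a value round-trips from flag string to the dict that ends up splatted into apply_chat_template; the flag name comes from the summary, the key "thinking_budget" is taken from the examples later in this review, and everything else is illustrative:

```python
import json

# Value as it would appear on the command line after --chat_template_kwargs
raw_flag = '{"thinking_budget": -1}'

# Parse into the dict that the evaluator stores and later forwards
chat_template_kwargs = json.loads(raw_flag) if raw_flag else None

# The evaluator would then expand it as keyword arguments, roughly:
#   tokenizer.apply_chat_template(messages, tokenize=False,
#                                 add_generation_prompt=True,
#                                 **(chat_template_kwargs or {}))
print(chat_template_kwargs)  # {'thinking_budget': -1}
```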

Description

GSM8K average accuracy is 92.077 on B200 with BF16 dtype.
This score is not reachable if the chat template is not applied, or if the thinking budget is limited to a small value. Due to the long evaluation time and the model's conventional structure, I added this test to the post-merge stage.

Test Coverage

accuracy/test_llm_api_pytorch.py::TestSeedOss_36B::test_auto_dtype

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@zhhuang-nv zhhuang-nv changed the title [None] Add post merge test for Seed-OSS-36B-Instruct [None][test] Add post merge test for Seed-OSS-36B-Instruct Oct 13, 2025
@coderabbitai
Contributor

coderabbitai bot commented Oct 13, 2025

📝 Walkthrough

Walkthrough

Adds an optional chat_template_kwargs parameter across evaluators, LM wrappers, and CLI commands, and threads it through to tokenizer.apply_chat_template calls. Updates MMLU and LM Eval paths, introduces corresponding CLI options, and adds integration tests and accuracy references for the Seed-OSS-36B-Instruct model.
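The threading pattern this walkthrough describes — an optional kwargs dict stored at construction and splatted into apply_chat_template — can be sketched as follows. Class and method names here are simplified stand-ins, not the actual TensorRT-LLM classes:

```python
from typing import Any, Dict, Optional


class FakeTokenizer:
    """Stand-in tokenizer that records the kwargs it receives."""

    def apply_chat_template(self, messages, tokenize=False,
                            add_generation_prompt=True, **kwargs):
        return {"messages": messages, "kwargs": kwargs}


class Evaluator:
    def __init__(self, chat_template_kwargs: Optional[Dict[str, Any]] = None):
        # Stored once at construction; forwarded on every templating call.
        self.chat_template_kwargs = chat_template_kwargs

    def do_apply_chat_template(self, tokenizer, messages):
        # `or {}` keeps the call valid when no kwargs were given.
        return tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True,
            **(self.chat_template_kwargs or {}))


ev = Evaluator(chat_template_kwargs={"thinking_budget": 0})
out = ev.do_apply_chat_template(FakeTokenizer(),
                                [{"role": "user", "content": "hi"}])
print(out["kwargs"])  # {'thinking_budget': 0}
```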

Changes

Cohort / File(s) Summary
Evaluator base update
tensorrt_llm/evaluate/interface.py
Adds Evaluator.__init__ param chat_template_kwargs and stores it; do_apply_chat_template forwards the kwargs to llm.tokenizer.apply_chat_template.
LM Eval wrappers and evaluator
tensorrt_llm/evaluate/lm_eval.py
LmEvalWrapper, MultimodalLmEvalWrapper, and LmEvalEvaluator accept/store chat_template_kwargs; wrappers pass kwargs through in apply_chat_template; evaluator forwards kwargs when constructing wrappers; CLI commands (GSM8K, GPQA*, MMMU) accept --chat_template_kwargs (JSON) and propagate.
MMLU evaluator and CLI
tensorrt_llm/evaluate/mmlu.py
Adds chat_template_kwargs to MMLU.__init__; CLI gains --chat_template_kwargs (JSON) and passes it through to the constructor and evaluation path.
Accuracy references
tests/integration/defs/accuracy/references/gsm8k.yaml
Adds ByteDance-Seed/Seed-OSS-36B-Instruct accuracy entry (90.8).
Integration test for Seed-OSS-36B
tests/integration/defs/accuracy/test_llm_api_pytorch.py
Adds TestSeedOss_36B with GSM8K sampling params; new test_auto_dtype uses chat_template_kwargs in evaluation.
Test list updates
tests/integration/test_lists/test-db/l0_b200.yml
Adds TestSeedOss_36B::test_auto_dtype to pre_merge; removes three TestDeepSeekV3Lite::test_nvfp4 variants.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant CLI as CLI Command
  participant Eval as Evaluator (LmEval/MMLU)
  participant LM as LM Wrapper (Text/MM)
  participant Tok as Tokenizer

  User->>CLI: Run ... --chat_template_kwargs '{"k": "v"}'
  CLI->>Eval: instantiate(..., chat_template_kwargs)
  Eval->>LM: instantiate(..., chat_template_kwargs)
  Note over LM: Stores self.chat_template_kwargs

  alt apply_chat_template enabled
    LM->>Tok: apply_chat_template(messages, add_generation_prompt, **chat_template_kwargs)
    Tok-->>LM: templated_text
  else disabled
    LM-->>Eval: skip templating
  end

  LM-->>Eval: proceed with generation
  Eval-->>User: results

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 18.75% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description Check ⚠️ Warning The pull request description includes the required Description, Test Coverage, PR Checklist, and GitHub Bot Help sections but omits the mandatory PR title line at the very top following the [JIRA ticket/...][type] Summary format. The template requires a summary or use of the @coderabbitai summary directive to generate a compliant title. Without this title, the description does not adhere to the repository’s standard template. Please add a top-level title that follows the [JIRA ticket/NVBugs ID/GitHub issue/None][type] Summary format or include the @coderabbitai summary directive to generate one; for example, [None][feat] Add post merge test for Seed-OSS-36B-Instruct. Ensuring this title is present will satisfy the repository’s PR template requirements.
✅ Passed checks (1 passed)
Check name Status Explanation
Title Check ✅ Passed The pull request title correctly follows the repository’s convention by including a ticket placeholder, a lowercase type tag, and a concise summary of the primary change, clearly indicating that a post-merge test for the Seed-OSS-36B-Instruct model is being added.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
tensorrt_llm/evaluate/interface.py (1)

37-39: Use typing.Dict for 3.8 compatibility; pass kwargs safely.

Project targets Python 3.8+. Replace PEP 585 generics with typing.Dict to avoid 3.9+ requirement. Logic forwarding chat_template_kwargs looks good.

Apply this diff:

-from typing import Any, Iterable, List, Optional, Union
+from typing import Any, Iterable, List, Optional, Union, Dict
@@
-                 system_prompt: Optional[str] = None,
-                 chat_template_kwargs: Optional[dict[str, Any]] = None):
+                 system_prompt: Optional[str] = None,
+                 chat_template_kwargs: Optional[Dict[str, Any]] = None):
@@
-        return llm.tokenizer.apply_chat_template(messages,
-                                                 tokenize=False,
-                                                 add_generation_prompt=True,
-                                                 **(self.chat_template_kwargs
-                                                    or {}))
+        return llm.tokenizer.apply_chat_template(
+            messages,
+            tokenize=False,
+            add_generation_prompt=True,
+            **(self.chat_template_kwargs or {}),
+        )

As per coding guidelines

Also applies to: 45-46, 69-71

tensorrt_llm/evaluate/mmlu.py (1)

24-24: Adopt Dict for Python 3.8; improve JSON option parsing (lint-safe, better errors).

  • Replace dict[...] with typing.Dict for 3.8 compatibility.
  • The lambda callback triggers ARG005 and gives poor error messages on bad JSON. Use a small parser function and raise click.BadParameter.

Apply this diff:

+import json
@@
-from typing import Any, Iterable, List, Optional, Union
+from typing import Any, Iterable, List, Optional, Union, Dict
@@
-                 system_prompt: Optional[str] = None,
-                 chat_template_kwargs: Optional[dict[str, Any]] = None):
+                 system_prompt: Optional[str] = None,
+                 chat_template_kwargs: Optional[Dict[str, Any]] = None):
@@
-    @click.option(
-        "--chat_template_kwargs",
-        type=str,
-        default=None,
-        callback=lambda ctx, param, value: json.loads(value) if value else None,
-        help=
-        'Chat template kwargs as JSON string, e.g., \'{"thinking_budget": 0}\'')
+    def _parse_json_option(ctx, param, value):
+        if not value:
+            return None
+        try:
+            return json.loads(value)
+        except json.JSONDecodeError as e:
+            raise click.BadParameter(f"Invalid JSON: {e.msg}") from e
+    @click.option(
+        "--chat_template_kwargs",
+        type=str,
+        default=None,
+        callback=_parse_json_option,
+        help='Chat template kwargs as JSON string, e.g., \'{"thinking_budget": 0}\'')
@@
-                         system_prompt=system_prompt,
-                         chat_template_kwargs=chat_template_kwargs)
+                         system_prompt=system_prompt,
+                         chat_template_kwargs=chat_template_kwargs)

As per coding guidelines

Also applies to: 38-38, 141-147, 303-309, 335-342

tensorrt_llm/evaluate/lm_eval.py (2)

52-61: Use typing.Dict for Python 3.8 compatibility across constructors.

Replace PEP 585 dict[...] with typing.Dict to keep 3.8 support.

Apply this diff:

-                 streaming: bool = False,
-                 chat_template_kwargs: Optional[dict[str, Any]] = None):
+                 streaming: bool = False,
+                 chat_template_kwargs: Optional[Dict[str, Any]] = None):
@@
-                 max_images: int = 999,
-                 chat_template_kwargs: Optional[dict[str, Any]] = None):
+                 max_images: int = 999,
+                 chat_template_kwargs: Optional[Dict[str, Any]] = None):
@@
-                 is_multimodal: bool = False,
-                 chat_template_kwargs: Optional[dict[str, Any]] = None):
+                 is_multimodal: bool = False,
+                 chat_template_kwargs: Optional[Dict[str, Any]] = None):

As per coding guidelines

Also applies to: 148-169, 303-332


480-485: Prefer a shared JSON parser over inline lambdas (fixes lint, better UX).

Define a single _parse_json_option (returns None or parsed dict; raises click.BadParameter on invalid JSON) and reuse it for all --chat_template_kwargs options to avoid ARG005 and improve errors.

I can provide a consolidated diff to add the helper and update these options if desired.

Also applies to: 538-544, 588-594, 638-644, 684-689

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d145e87 and 7cfff08.

📒 Files selected for processing (6)
  • tensorrt_llm/evaluate/interface.py (2 hunks)
  • tensorrt_llm/evaluate/lm_eval.py (15 hunks)
  • tensorrt_llm/evaluate/mmlu.py (6 hunks)
  • tests/integration/defs/accuracy/references/gsm8k.yaml (1 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/test-db/l0_b200.yml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/evaluate/interface.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/evaluate/mmlu.py
  • tensorrt_llm/evaluate/lm_eval.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/evaluate/interface.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/evaluate/mmlu.py
  • tensorrt_llm/evaluate/lm_eval.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/evaluate/interface.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/evaluate/mmlu.py
  • tensorrt_llm/evaluate/lm_eval.py
🧬 Code graph analysis (3)
tensorrt_llm/evaluate/interface.py (1)
tensorrt_llm/evaluate/lm_eval.py (2)
  • apply_chat_template (66-78)
  • apply_chat_template (197-249)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (5)
tests/integration/defs/accuracy/accuracy_core.py (3)
  • GSM8K (332-347)
  • evaluate (184-245)
  • evaluate (763-773)
tests/integration/defs/conftest.py (1)
  • llm_models_root (79-93)
tensorrt_llm/sampling_params.py (1)
  • SamplingParams (126-545)
tensorrt_llm/llmapi/llm_args.py (1)
  • KvCacheConfig (1108-1242)
tensorrt_llm/llmapi/llm.py (1)
  • LLM (1084-1100)
tensorrt_llm/evaluate/mmlu.py (1)
tensorrt_llm/evaluate/lm_eval.py (2)
  • apply_chat_template (66-78)
  • apply_chat_template (197-249)
🪛 Ruff (0.13.3)
tensorrt_llm/evaluate/mmlu.py

306-306: Unused lambda argument: ctx

(ARG005)


306-306: Unused lambda argument: param

(ARG005)

tensorrt_llm/evaluate/lm_eval.py

483-483: Unused lambda argument: ctx

(ARG005)


483-483: Unused lambda argument: param

(ARG005)


541-541: Unused lambda argument: ctx

(ARG005)


541-541: Unused lambda argument: param

(ARG005)


591-591: Unused lambda argument: ctx

(ARG005)


591-591: Unused lambda argument: param

(ARG005)


641-641: Unused lambda argument: ctx

(ARG005)


641-641: Unused lambda argument: param

(ARG005)


687-687: Unused lambda argument: ctx

(ARG005)


687-687: Unused lambda argument: param

(ARG005)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tests/integration/defs/accuracy/references/gsm8k.yaml (1)

224-225: Confirm reference vs test setup (dtype/chat template/budget).

PR text cites 92.077 (BF16, B200) with chat template or small thinking budget; YAML adds 90.8. Please verify the reference matches the configuration used in the new test (thinking_budget, sampling params, device) and adjust either the YAML or test accordingly.

tests/integration/test_lists/test-db/l0_b200.yml (1)

147-147: LGTM: post-merge placement is appropriate.

Long-running GSM8K eval for Seed-OSS is gated under post_merge PyTorch as intended.

tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

3583-3601: Reassess thinking_budget and max_tokens settings

  • Replace thinking_budget=-1 (unlimited) with a small non-negative value (e.g. 0/32/64) to bound “thinking” time.
  • Reduce max_tokens=16384 to a more realistic limit (256–512) for GSM8K to cut runtime.

Run the accuracy suite to confirm you still hit the 90.8 target; if not, tune these values accordingly.

@zhhuang-nv
Collaborator Author

/bot run --post-merge

@tensorrt-cicd
Collaborator

PR_Github #21213 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21213 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16014 completed with status: 'FAILURE'

@zhhuang-nv
Collaborator Author

/bot run --post-merge

@tensorrt-cicd
Collaborator

PR_Github #21279 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21279 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16066 completed with status: 'FAILURE'

@zhhuang-nv zhhuang-nv requested review from a team as code owners October 14, 2025 09:26
@zhhuang-nv zhhuang-nv requested review from kxdc and venkywonka October 14, 2025 09:26
Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
@zhhuang-nv
Collaborator Author

/bot run --post-merge

@tensorrt-cicd
Collaborator

PR_Github #21346 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21346 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16113 completed with status: 'SUCCESS'

@zhhuang-nv zhhuang-nv requested review from nv-guomingz and syuoni and removed request for kxdc and venkywonka October 15, 2025 02:42
Collaborator

@syuoni syuoni left a comment


LGTM

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
@syuoni
Collaborator

syuoni commented Oct 17, 2025

/bot reuse-pipeline

@syuoni syuoni enabled auto-merge (squash) October 17, 2025 08:54
@tensorrt-cicd
Collaborator

PR_Github #21685 [ reuse-pipeline ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21685 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #21346 for commit 9b3e05d

@syuoni syuoni merged commit 7a2bab9 into NVIDIA:main Oct 17, 2025
5 checks passed
@zhhuang-nv zhhuang-nv deleted the seed-oss-ci-tests branch October 20, 2025 09:14
govind-ramnarayan pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 21, 2025
Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
