add tunning files for QWEN-3-NEXT by yiakwy-xpu-ml-framework-team · Pull Request #10794 · sgl-project/sglang · GitHub

Conversation

@yiakwy-xpu-ml-framework-team
Contributor

@yiakwy-xpu-ml-framework-team yiakwy-xpu-ml-framework-team commented Sep 23, 2025

Motivation

Tuning files are missing for the H800 ...

See the log:
serv_qwen3-next.log

After applying the patch, you should see the following when serving QWEN-3-NEXT:

[2025-09-23 02:24:02 TP0] Using MoE kernel config from /home/yiakwy/workspace/Github/sglang-gold/python/sglang/srt/layers/moe/fused_moe_triton/configs/triton_3_4_0/E=512,N=128,device_name=NVIDIA_H800,dtype=fp8_w8a8,block_shape=[128, 128].json.

and a performance boost from 130 tok/s to 160 tok/s.
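The config filename in that log line encodes the tuning key (expert count E, intermediate size N, device, dtype, and quantization block shape). A minimal sketch of how such a filename is assembled, where the helper name and the hard-coded device string are assumptions for illustration, not sglang's actual API:

```python
# Illustrative sketch: building a fused-MoE tuning config filename like the
# one in the serving log. The helper name and the hard-coded device string
# are assumptions for illustration, not sglang internals.

def moe_config_file_name(E: int, N: int, dtype: str, block_shape=None) -> str:
    device_name = "NVIDIA_H800"  # in practice derived from the CUDA device name
    block = f",block_shape={block_shape}" if block_shape else ""
    return f"E={E},N={N},device_name={device_name},dtype={dtype}{block}.json"

# Reproduces the filename seen in the log above:
print(moe_config_file_name(512, 128, "fp8_w8a8", [128, 128]))
```

If no file matching this key exists for the current GPU, the kernel falls back to default launch parameters, which is why serving worked before this PR but ran slower.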

Modifications

  • tuning scripts

Accuracy Tests

Benchmarking and Profiling

python3 -m sglang.test.send_one

without tuning files

[Screenshot 2025-09-23 17:26:33]

with tuning files

[Screenshot 2025-09-23 17:27:02]

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @yiakwy-xpu-ml-framework-team, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses the absence of tuning files for the Qwen-3-NEXT model, particularly for the H800 GPU. It integrates the Qwen-3-NEXT architecture into the existing MoE tuning framework and introduces specific, pre-tuned configurations to optimize the performance of fused MoE Triton kernels on NVIDIA H800 hardware. This ensures that the Qwen-3-NEXT model can leverage efficient kernel execution on the specified GPU, improving overall inference speed and efficiency.

Highlights

  • Qwen-3-NEXT Model Support: The tuning script for fused MoE Triton kernels has been updated to explicitly include support for the 'Qwen3NextForCausalLM' architecture, ensuring proper configuration detection for this model.
  • New H800 Tuning Configurations: Two new JSON configuration files have been added, providing optimized Triton kernel tuning parameters specifically for NVIDIA H800 GPUs. These configurations target different Mixture-of-Experts (MoE) setups (E=256, N=256 and E=512, N=128) using fp8_w8a8 data types.
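The tuned JSON files follow the fused MoE Triton config layout: each key is a batch size (M) and each value is a set of Triton kernel launch parameters selected for that size. A hypothetical fragment showing the shape of such a file (the numeric values here are illustrative, not the actual tuned H800 parameters):

```json
{
  "1": {
    "BLOCK_SIZE_M": 16,
    "BLOCK_SIZE_N": 128,
    "BLOCK_SIZE_K": 128,
    "GROUP_SIZE_M": 1,
    "num_warps": 4,
    "num_stages": 3
  },
  "64": {
    "BLOCK_SIZE_M": 64,
    "BLOCK_SIZE_N": 128,
    "BLOCK_SIZE_K": 128,
    "GROUP_SIZE_M": 16,
    "num_warps": 8,
    "num_stages": 4
  }
}
```

At runtime the kernel picks the entry whose batch size is closest to the actual M, so a tuned file mainly needs good coverage of the batch sizes seen in serving.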

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds tuning files for the QWEN-3-NEXT model on H800 GPUs. The changes include updating the tuning script to recognize the Qwen3NextForCausalLM architecture and adding two new JSON configuration files with tuned kernel parameters for different expert and intermediate sizes. The changes are straightforward and appear correct, enabling optimized performance for the specified model and hardware.

@zhyncs zhyncs merged commit 984730b into sgl-project:main Sep 23, 2025
37 of 41 checks passed
HanHan009527 pushed a commit to HanHan009527/sglang that referenced this pull request Oct 9, 2025