Speculative Decoding#

SGLang provides EAGLE-based (EAGLE-2/EAGLE-3) speculative decoding. Our implementation aims to maximize speed and efficiency and is considered among the fastest in open-source LLM engines.

Performance Highlights#

The table below shows the throughput improvements that EAGLE-3 decoding achieves for Llama 3.1 8B Instruct on MT-Bench. For further details, please see the EAGLE-3 paper.

| Method | Throughput (tokens/s) |
|---|---|
| SGLang (w/o speculative, 1x H100) | 158.34 |
| SGLang + EAGLE-2 (1x H100) | 244.10 |
| SGLang + EAGLE-3 (1x H100) | 373.25 |

EAGLE Decoding#

To enable EAGLE speculative decoding, the following parameters are relevant:

  • speculative_draft_model_path: Specifies the draft model. This parameter is required.

  • speculative_num_steps: Depth of autoregressive drafting. Increases speculation range but risks rejection cascades. Default is 5.

  • speculative_eagle_topk: Branching factor per step. Larger values improve candidate diversity and tend to raise the acceptance rate, but also increase memory and compute consumption. Default is 4.

  • speculative_num_draft_tokens: Maximum parallel verification capacity. Larger values allow deeper tree evaluation but increase GPU memory usage. Default is 8.

These parameters are the same for EAGLE-2 and EAGLE-3.

You can find the best combinations of these parameters with bench_speculative.py.

In the documentation below, we set --cuda-graph-max-bs to a small value for faster engine startup. For your own workloads, please tune the above parameters together with --cuda-graph-max-bs, --max-running-requests, and --mem-fraction-static for the best performance.
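
If you prefer a quick manual sweep over a couple of settings instead of running bench_speculative.py, a rough sketch is shown below. It reuses the same helpers as the examples on this page; the candidate grid, the prompt, and the simple tokens/s measurement are illustrative assumptions, not a rigorous benchmark.

import time

import openai
from sglang.test.doc_patch import launch_server_cmd
from sglang.utils import terminate_process, wait_for_server

# Candidate (speculative_num_steps, speculative_eagle_topk, speculative_num_draft_tokens) settings.
candidates = [(3, 4, 16), (5, 8, 32)]

for steps, topk, draft_tokens in candidates:
    server_process, port = launch_server_cmd(
        f"""
python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --speculative-algorithm EAGLE \
    --speculative-draft-model-path lmsys/sglang-EAGLE-llama2-chat-7B --speculative-num-steps {steps} \
    --speculative-eagle-topk {topk} --speculative-num-draft-tokens {draft_tokens} \
    --cuda-graph-max-bs 8 --log-level warning
"""
    )
    wait_for_server(f"http://localhost:{port}")

    client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")
    start = time.time()
    response = client.chat.completions.create(
        model="meta-llama/Llama-2-7b-chat-hf",
        messages=[{"role": "user", "content": "Explain speculative decoding in one paragraph."}],
        temperature=0,
        max_tokens=256,
    )
    elapsed = time.time() - start
    print(
        f"steps={steps} topk={topk} draft={draft_tokens}: "
        f"{response.usage.completion_tokens / elapsed:.1f} tokens/s"
    )

    terminate_process(server_process)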

EAGLE-2 decoding#

You can enable EAGLE-2 decoding by setting --speculative-algorithm EAGLE and choosing an appropriate model.

[1]:
from sglang.test.doc_patch import launch_server_cmd
from sglang.utils import wait_for_server, print_highlight, terminate_process

import openai
[2025-10-23 18:42:40] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:42:40] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:42:40] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2]:
server_process, port = launch_server_cmd(
    """
python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf  --speculative-algorithm EAGLE \
    --speculative-draft-model-path lmsys/sglang-EAGLE-llama2-chat-7B --speculative-num-steps 3 \
    --speculative-eagle-topk 4 --speculative-num-draft-tokens 16 --cuda-graph-max-bs 8 --log-level warning
"""
)

wait_for_server(f"http://localhost:{port}")
[2025-10-23 18:42:46] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:42:46] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:42:46] INFO utils.py:164: NumExpr defaulting to 16 threads.
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:42:48] WARNING logging.py:328: `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:42:48] WARNING server_args.py:1304: Overlap scheduler is disabled because of using eagle3 and standalone speculative decoding.
[2025-10-23 18:42:48] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
[2025-10-23 18:42:56] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:42:56] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:42:56] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:42:56] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:42:56] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:42:56] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:42:58] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
[2025-10-23 18:42:58] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:42:59] `torch_dtype` is deprecated! Use `dtype` instead!
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'repr' attribute with value False was provided to the `Field()` function, which has no effect in the context it was used. 'repr' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'frozen' attribute with value True was provided to the `Field()` function, which has no effect in the context it was used. 'frozen' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  50% Completed | 1/2 [00:01<00:01,  1.54s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:02<00:00,  1.09s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:02<00:00,  1.16s/it]

Capturing batches (bs=4 avail_mem=40.23 GB):   0%|          | 0/4 [00:00<?, ?it/s][2025-10-23 18:43:08] MOE_A2A_BACKEND is not initialized, using default backend
Capturing batches (bs=1 avail_mem=39.93 GB): 100%|██████████| 4/4 [00:00<00:00, 11.26it/s]
Loading pt checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:01<00:00,  1.19s/it]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:01<00:00,  1.19s/it]

Capturing batches (bs=1 avail_mem=37.30 GB): 100%|██████████| 4/4 [00:04<00:00,  1.18s/it]
Capturing batches (bs=1 avail_mem=37.17 GB): 100%|██████████| 4/4 [00:00<00:00, 117.06it/s]


NOTE: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue.
To reduce the log length, we set the log level to warning for the server, the default log level is info.
We are running those notebooks in a CI environment, so the throughput is not representative of the actual performance.
[3]:
client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",
    messages=[
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)

print_highlight(f"Response: {response}")
Response: ChatCompletion(id='1d4a8d731ac64c1397ecae4272d4975a', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=' Sure! Here are three countries and their capitals:\n\n1. Country: France\nCapital: Paris\n2. Country: Japan\nCapital: Tokyo\n3. Country: Brazil\nCapital: Brasília', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning_content=None), matched_stop=2)], created=1761245003, model='meta-llama/Llama-2-7b-chat-hf', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=48, prompt_tokens=17, total_tokens=65, completion_tokens_details=None, prompt_tokens_details=None, reasoning_tokens=0), metadata={'weight_version': 'default'})
[4]:
terminate_process(server_process)

EAGLE-2 Decoding with torch.compile#

You can also enable torch.compile for further optimizations and optionally set --torch-compile-max-bs:

[5]:
server_process, port = launch_server_cmd(
    """
python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf  --speculative-algorithm EAGLE \
    --speculative-draft-model-path lmsys/sglang-EAGLE-llama2-chat-7B --speculative-num-steps 5 \
        --speculative-eagle-topk 8 --speculative-num-draft-tokens 64 --mem-fraction 0.6 \
            --enable-torch-compile --torch-compile-max-bs 2 --log-level warning
"""
)

wait_for_server(f"http://localhost:{port}")
[2025-10-23 18:43:29] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:43:29] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:43:29] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:43:32] WARNING server_args.py:1304: Overlap scheduler is disabled because of using eagle3 and standalone speculative decoding.
[2025-10-23 18:43:32] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:43:32] `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:43:40] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:43:40] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:43:40] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:43:40] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:43:40] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:43:40] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:43:42] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
[2025-10-23 18:43:42] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:43:42] `torch_dtype` is deprecated! Use `dtype` instead!
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'repr' attribute with value False was provided to the `Field()` function, which has no effect in the context it was used. 'repr' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'frozen' attribute with value True was provided to the `Field()` function, which has no effect in the context it was used. 'frozen' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  50% Completed | 1/2 [00:01<00:01,  1.54s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:02<00:00,  1.11s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:02<00:00,  1.17s/it]

Capturing batches (bs=4 avail_mem=55.13 GB):   0%|          | 0/4 [00:00<?, ?it/s][2025-10-23 18:43:49] MOE_A2A_BACKEND is not initialized, using default backend
Capturing batches (bs=2 avail_mem=54.89 GB):  25%|██▌       | 1/4 [00:00<00:00,  3.72it/s]/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:1575: UserWarning: Dynamo detected a call to a `functools.lru_cache`-wrapped function. Dynamo ignores the cache wrapper and directly traces the wrapped function. Silent incorrectness is only a *potential* risk, not something we have observed. Enable TORCH_LOGS="+dynamo" for a DEBUG stack trace.
  torch._dynamo.utils.warn_once(msg)
Capturing batches (bs=1 avail_mem=54.80 GB): 100%|██████████| 4/4 [00:16<00:00,  4.19s/it]
Loading pt checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:01<00:00,  1.22s/it]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:01<00:00,  1.22s/it]

Capturing batches (bs=1 avail_mem=37.33 GB): 100%|██████████| 4/4 [00:06<00:00,  1.66s/it]
Capturing batches (bs=1 avail_mem=36.97 GB): 100%|██████████| 4/4 [00:00<00:00, 16.44it/s]


NOTE: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue.
To reduce the log length, we set the log level to warning for the server, the default log level is info.
We are running those notebooks in a CI environment, so the throughput is not representative of the actual performance.
[6]:
client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",
    messages=[
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)

print_highlight(f"Response: {response}")
Response: ChatCompletion(id='ea6d9dfbeb7e46898b57383032474d58', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=' Sure! Here are three countries and their capitals:\n\n1. Country: France\nCapital: Paris\n2. Country: Japan\nCapital: Tokyo\n3. Country: Brazil\nCapital: Brasília', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning_content=None), matched_stop=2)], created=1761245064, model='meta-llama/Llama-2-7b-chat-hf', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=48, prompt_tokens=17, total_tokens=65, completion_tokens_details=None, prompt_tokens_details=None, reasoning_tokens=0), metadata={'weight_version': 'default'})
[7]:
terminate_process(server_process)

EAGLE-2 Decoding via Frequency-Ranked Speculative Sampling#

By employing a truncated high-frequency token vocabulary in the draft model, EAGLE speculative decoding reduces the lm_head computational overhead and accelerates the pipeline without quality degradation. For more details, check out the paper.

In our implementation, set --speculative-token-map to enable this optimization. You can get the high-frequency token map used by FR-Spec from this model, or obtain it by directly downloading the token files from this repo.

Thanks to Weilin Zhao and Zhousx for the contribution.
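
To make the idea concrete, the snippet below is a small, self-contained sketch of the truncation trick, not SGLang's implementation: the tensor shapes, the random weights, and the way the token ids are produced are illustrative assumptions only.

import torch

# Illustrative sizes: a Llama-3-style vocabulary and a 32768-token high-frequency subset.
vocab_size, hidden_size, subset_size = 128256, 4096, 32768

full_lm_head = torch.randn(vocab_size, hidden_size)        # full LM head weight matrix
freq_token_ids = torch.randperm(vocab_size)[:subset_size]  # stands in for freq_32768.pt

# The draft model only scores the high-frequency subset of the vocabulary ...
truncated_lm_head = full_lm_head[freq_token_ids]           # (subset_size, hidden_size)

feature = torch.randn(hidden_size)                         # a drafted feature vector
draft_logits = truncated_lm_head @ feature                 # ~4x fewer rows than the full head
draft_token = freq_token_ids[draft_logits.argmax()]        # map back to a full-vocab token id

# ... while verification by the target model still uses the full LM head,
# so the output distribution (and quality) is unchanged.
print(draft_token.item())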

[8]:
server_process, port = launch_server_cmd(
    """
python3 -m sglang.launch_server --model meta-llama/Meta-Llama-3-8B-Instruct --speculative-algorithm EAGLE \
    --speculative-draft-model-path lmsys/sglang-EAGLE-LLaMA3-Instruct-8B --speculative-num-steps 5 \
    --speculative-eagle-topk 8 --speculative-num-draft-tokens 64 --speculative-token-map thunlp/LLaMA3-Instruct-8B-FR-Spec/freq_32768.pt \
    --mem-fraction 0.7 --cuda-graph-max-bs 2 --dtype float16  --log-level warning
"""
)

wait_for_server(f"http://localhost:{port}")
[2025-10-23 18:44:29] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:44:29] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:44:29] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:44:31] WARNING server_args.py:1304: Overlap scheduler is disabled because of using eagle3 and standalone speculative decoding.
[2025-10-23 18:44:31] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:44:32] `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:44:32] Casting torch.bfloat16 to torch.float16.
[2025-10-23 18:44:40] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:44:40] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:44:40] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:44:40] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:44:40] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:44:40] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:44:42] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
[2025-10-23 18:44:42] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:44:42] `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:44:42] Casting torch.bfloat16 to torch.float16.
[2025-10-23 18:44:43] Casting torch.bfloat16 to torch.float16.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'repr' attribute with value False was provided to the `Field()` function, which has no effect in the context it was used. 'repr' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'frozen' attribute with value True was provided to the `Field()` function, which has no effect in the context it was used. 'frozen' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:04<00:12,  4.23s/it]
Loading safetensors checkpoint shards:  50% Completed | 2/4 [00:08<00:08,  4.37s/it]
Loading safetensors checkpoint shards:  75% Completed | 3/4 [00:13<00:04,  4.44s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:14<00:00,  3.19s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:14<00:00,  3.63s/it]

Capturing batches (bs=4 avail_mem=60.13 GB):   0%|          | 0/4 [00:00<?, ?it/s][2025-10-23 18:45:02] MOE_A2A_BACKEND is not initialized, using default backend
Capturing batches (bs=1 avail_mem=59.74 GB): 100%|██████████| 4/4 [00:00<00:00,  5.18it/s]
[2025-10-23 18:45:03] Warning: Target model's context_length (8192) is greater than the derived context_length (2048). This may lead to incorrect model outputs or CUDA errors. Note that the derived context_length may differ from max_position_embeddings in the model's config.
[2025-10-23 18:45:03] Overriding the draft model's max_position_embeddings to 8192.
Loading pt checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:01<00:00,  1.16s/it]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:01<00:00,  1.16s/it]

Capturing batches (bs=1 avail_mem=58.40 GB): 100%|██████████| 4/4 [00:05<00:00,  1.28s/it]
Capturing batches (bs=1 avail_mem=58.26 GB): 100%|██████████| 4/4 [00:00<00:00, 27.01it/s]


NOTE: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue.
To reduce the log length, we set the log level to warning for the server, the default log level is info.
We are running those notebooks in a CI environment, so the throughput is not representative of the actual performance.
[9]:
client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)

print_highlight(f"Response: {response}")
Response: ChatCompletion(id='2b2a61a7034a4418a21cb756b9095196', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Here are 3 countries and their capitals:\n\n1. **France** - **Paris**\n2. **Japan** - **Tokyo**\n3. **Australia** - **Canberra**', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning_content=None), matched_stop=128009)], created=1761245119, model='meta-llama/Meta-Llama-3-8B-Instruct', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=39, prompt_tokens=18, total_tokens=57, completion_tokens_details=None, prompt_tokens_details=None, reasoning_tokens=0), metadata={'weight_version': 'default'})
[10]:
terminate_process(server_process)

EAGLE-3 Decoding#

You can enable EAGLE-3 decoding by setting --speculative-algorithm EAGLE3 and choosing an appropriate model.

[11]:
server_process, port = launch_server_cmd(
    """
python3 -m sglang.launch_server --model meta-llama/Llama-3.1-8B-Instruct  --speculative-algorithm EAGLE3 \
    --speculative-draft-model-path jamesliu1/sglang-EAGLE3-Llama-3.1-Instruct-8B --speculative-num-steps 5 \
        --speculative-eagle-topk 8 --speculative-num-draft-tokens 32 --mem-fraction 0.6 \
        --cuda-graph-max-bs 2 --dtype float16 --log-level warning
"""
)

wait_for_server(f"http://localhost:{port}")
[2025-10-23 18:45:25] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:45:25] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:45:25] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:45:27] WARNING server_args.py:1304: Overlap scheduler is disabled because of using eagle3 and standalone speculative decoding.
[2025-10-23 18:45:27] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:45:28] `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:45:28] Casting torch.bfloat16 to torch.float16.
[2025-10-23 18:45:35] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:45:35] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:45:35] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:45:35] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:45:35] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:45:35] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:45:37] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
[2025-10-23 18:45:37] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:45:37] `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:45:37] Casting torch.bfloat16 to torch.float16.
[2025-10-23 18:45:38] Casting torch.bfloat16 to torch.float16.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'repr' attribute with value False was provided to the `Field()` function, which has no effect in the context it was used. 'repr' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'frozen' attribute with value True was provided to the `Field()` function, which has no effect in the context it was used. 'frozen' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:04<00:13,  4.41s/it]
Loading safetensors checkpoint shards:  50% Completed | 2/4 [00:08<00:08,  4.39s/it]
Loading safetensors checkpoint shards:  75% Completed | 3/4 [00:13<00:04,  4.35s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:14<00:00,  3.09s/it]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:14<00:00,  3.56s/it]

Capturing batches (bs=4 avail_mem=60.01 GB):   0%|          | 0/4 [00:00<?, ?it/s][2025-10-23 18:45:56] MOE_A2A_BACKEND is not initialized, using default backend
Capturing batches (bs=1 avail_mem=59.67 GB): 100%|██████████| 4/4 [00:00<00:00, 13.63it/s]
[2025-10-23 18:45:57] Warning: Target model's context_length (131072) is greater than the derived context_length (2048). This may lead to incorrect model outputs or CUDA errors. Note that the derived context_length may differ from max_position_embeddings in the model's config.
[2025-10-23 18:45:57] Overriding the draft model's max_position_embeddings to 131072.
Loading pt checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  1.56it/s]
Loading pt checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  1.56it/s]

Capturing batches (bs=1 avail_mem=58.19 GB): 100%|██████████| 4/4 [00:05<00:00,  1.40s/it]
Capturing batches (bs=1 avail_mem=58.04 GB): 100%|██████████| 4/4 [00:00<00:00, 95.72it/s]


NOTE: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue.
To reduce the log length, we set the log level to warning for the server, the default log level is info.
We are running those notebooks in a CI environment, so the throughput is not representative of the actual performance.
[12]:
client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)

print_highlight(f"Response: {response}")
Response: ChatCompletion(id='a1b0ff7513cd472bb5da21d2e39e3137', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Here are 3 countries and their capitals:\n\n1. Country: Japan\n Capital: Tokyo\n\n2. Country: Australia\n Capital: Canberra\n\n3. Country: Brazil\n Capital: Brasília', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning_content=None), matched_stop=128009)], created=1761245172, model='meta-llama/Meta-Llama-3.1-8B-Instruct', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=43, prompt_tokens=43, total_tokens=86, completion_tokens_details=None, prompt_tokens_details=None, reasoning_tokens=0), metadata={'weight_version': 'default'})
[13]:
terminate_process(server_process)

Multi Token Prediction#

We support MTP (Multi-Token Prediction) in SGLang through speculative decoding. We use the XiaomiMiMo/MiMo-7B-RL model as an example here (for DeepSeek MTP usage, refer to the DeepSeek doc).

[14]:
server_process, port = launch_server_cmd(
    """
    python3 -m sglang.launch_server --model-path XiaomiMiMo/MiMo-7B-RL --host 0.0.0.0 --trust-remote-code \
    --speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 \
    --mem-fraction 0.5 --log-level warning
"""
)

wait_for_server(f"http://localhost:{port}")
[2025-10-23 18:46:18] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:46:18] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:46:18] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:46:21] WARNING server_args.py:1304: Overlap scheduler is disabled because of using eagle3 and standalone speculative decoding.
[2025-10-23 18:46:21] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:46:22] `torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:46:28] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:46:28] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:46:28] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:46:28] INFO utils.py:148: Note: detected 112 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2025-10-23 18:46:28] INFO utils.py:151: Note: NumExpr detected 112 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2025-10-23 18:46:28] INFO utils.py:164: NumExpr defaulting to 16 threads.
[2025-10-23 18:46:29] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
[2025-10-23 18:46:30] INFO trace.py:48: opentelemetry package is not installed, tracing disabled
`torch_dtype` is deprecated! Use `dtype` instead!
[2025-10-23 18:46:30] `torch_dtype` is deprecated! Use `dtype` instead!
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'repr' attribute with value False was provided to the `Field()` function, which has no effect in the context it was used. 'repr' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'frozen' attribute with value True was provided to the `Field()` function, which has no effect in the context it was used. 'frozen' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
  warnings.warn(
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:00<00:02,  1.26it/s]
Loading safetensors checkpoint shards:  50% Completed | 2/4 [00:01<00:01,  1.20it/s]
Loading safetensors checkpoint shards:  75% Completed | 3/4 [00:02<00:00,  1.21it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00,  1.31it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00,  1.28it/s]

Capturing batches (bs=4 avail_mem=60.52 GB):   0%|          | 0/4 [00:00<?, ?it/s][2025-10-23 18:46:38] MOE_A2A_BACKEND is not initialized, using default backend
Capturing batches (bs=1 avail_mem=60.22 GB): 100%|██████████| 4/4 [00:00<00:00,  7.04it/s]
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:00<00:00,  4.32it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:00<00:00,  7.51it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:00<00:00,  7.12it/s]

Capturing batches (bs=1 avail_mem=59.38 GB): 100%|██████████| 4/4 [00:00<00:00, 23.29it/s]


NOTE: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue.
To reduce the log length, we set the log level to warning for the server, the default log level is info.
We are running those notebooks in a CI environment, so the throughput is not representative of the actual performance.
[15]:
import requests

url = f"http://localhost:{port}/v1/chat/completions"

data = {
    "model": "XiaomiMiMo/MiMo-7B-RL",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}

response = requests.post(url, json=data)
print_highlight(response.json())
{'id': '2666b8fd66ff4ff3806f8b4e7faac738', 'object': 'chat.completion', 'created': 1761245211, 'model': 'XiaomiMiMo/MiMo-7B-RL', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': '\nOkay, so the user is asking, "What is the capital of France?" Let me start by recalling the basic geography of France. I know that France is a country in Western Europe. The question is about its capital city. From what I remember, the capital of France is Paris. But wait, let me make sure I\'m not mixing it up with any other countries. For example, Germany\'s capital is Berlin, Italy\'s is Rome, so France\'s should be Paris. \n\nWait, sometimes people might confuse Paris with other major cities in France, like Lyon, which is known for its cuisine, or Marseille, which is a significant port. But no, Paris is definitely the capital. The Eiffel Tower, the Louvre Museum, and the Arc de Triomphe are all in Paris, which solidifies it as the capital. \n\nI should also check if there\'s any political change I might be unaware of. For instance, sometimes countries change their capitals for various reasons. But as far as I know, Paris has been the capital of France since the country was established. The Napoleonic Code came into effect in 1804, and Paris was the capital then, and it remains so to this day. \n\nAnother way to verify this is by thinking about government buildings. The French government\'s seat is in Paris, with many important institutions located there, such as the National Assembly (Assemblée Nationale), the Senate (Sénat), and the Élysée Palace where the President resides. Also, the foreign office (France\'s Ministry of Foreign Affairs) is based in Paris. \n\nAdditionally, the official time zone for France is Central European Time (CET), which is observed in Paris as well. Major events, like the Olympic Games, were held in Paris recently (2024), further confirming its status as the capital. \n\nI can\'t think of any reason why this would be incorrect. Maybe a common mistake someone might make is answering with a region or another city, but Paris is the clear answer here. There\'s no contest. \n\nWait, just to be thorough, let me think about the French language. The word for capital in French is "capitale," and when asked about the capital of France, the correct回答 would be "Paris." So if I were a French speaker, I would know that without a doubt. \n\nYes, all signs point to Paris being the capital of France. No need to second-guess this one. It\'s a fundamental fact that any basic geography quiz would test. I\'m confident in this answer.\n\n\nThe capital of France is **Paris**. This iconic city is home to renowned landmarks like the Eiffel Tower, the Louvre Museum, and the historic Arc de Triomphe. It serves as the political, cultural, and economic heart of France, housing the country\'s government institutions, such as the National Assembly (Assemblée Nationale) and the Élysée Palace. No recent or historical changes have altered this status, solidifying Paris as France\'s capital for centuries. 🇫🇷', 'reasoning_content': None, 'tool_calls': None}, 'logprobs': None, 'finish_reason': 'stop', 'matched_stop': 151645}], 'usage': {'prompt_tokens': 26, 'total_tokens': 661, 'completion_tokens': 635, 'prompt_tokens_details': None, 'reasoning_tokens': 0}, 'metadata': {'weight_version': 'default'}}
[16]:
terminate_process(server_process)

References#

The EAGLE process is as follows:

  • Within EAGLE, the draft model predicts the next feature vector, i.e., the last hidden state of the original LLM, using the feature sequence \((f_1, ..., f_k)\) and the token sequence \((t_2, ..., t_{k+1})\).

  • The next token is then sampled from \(p_{k+2}=\text{LMHead}(f_{k+1})\). Afterwards, the two sequences are extended in a tree style, branching out into multiple potential continuations (the branching factor per step is controlled by the speculative_eagle_topk parameter) to ensure a more coherent connection of context, and are fed back as input.

  • EAGLE-2 additionally uses the draft model to evaluate how probable certain branches in the draft tree are, dynamically stopping the expansion of unlikely branches. After the expansion phase, reranking is employed to select only the top speculative_num_draft_tokens final nodes as draft tokens.

  • EAGLE-3 removes the feature prediction objective, incorporates low and mid-layer features, and is trained in an on-policy manner.

This enhances drafting accuracy by operating on features, which are more regular inputs than tokens, and by additionally passing the tokens from the next timestep to minimize the randomness effects of sampling. Furthermore, the dynamic adjustment of the draft tree and the selection of reranked final nodes further increase the acceptance rate of draft tokens. For more details, see the EAGLE-2 and EAGLE-3 papers.
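
As a minimal, schematic illustration of the drafting step described above (not SGLang's actual implementation), the function below assumes hypothetical draft_model, lm_head, and embed callables and mirrors the notation used in the first two bullet points.

import torch

def eagle_draft_step(features, tokens, draft_model, lm_head, embed, topk):
    """One schematic EAGLE draft step (illustrative only).

    features: (k, hidden) feature sequence (f_1, ..., f_k) from the target model
    tokens:   (k,)        token sequence   (t_2, ..., t_{k+1})
    topk:     branching factor, i.e. speculative_eagle_topk
    """
    # Predict the next feature vector f_{k+1} from the features and the shifted tokens.
    draft_input = torch.cat([features, embed(tokens)], dim=-1)
    next_feature = draft_model(draft_input)[-1]

    # Candidate tokens for t_{k+2} come from p_{k+2} = LMHead(f_{k+1}).
    probs = torch.softmax(lm_head(next_feature), dim=-1)
    candidate_probs, candidate_tokens = probs.topk(topk)
    return next_feature, candidate_tokens, candidate_probs

Each candidate token spawns a branch of the draft tree; EAGLE-2 then scores the branches and keeps only the top speculative_num_draft_tokens nodes for verification by the target model.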

For guidance on how to train your own EAGLE model, please see the EAGLE repo.