Support OpenAI o3 and o4-mini by ACTOR-ALCHEMIST · Pull Request #6457 · ChatGPTNextWeb/NextChat · GitHub

Conversation


@ACTOR-ALCHEMIST ACTOR-ALCHEMIST commented Apr 18, 2025

💻 变更类型 | Change Type

  • feat
  • fix
  • refactor
  • perf
  • style
  • test
  • docs
  • ci
  • chore
  • build

🔀 变更说明 | Description of Change

Adds support for OpenAI's latest models, o3 and o4-mini.
Updates the regular expression used to detect vision models, so that the multimodal capabilities of o3 and o4-mini are supported.
For new models such as o3 and o4-mini, the API request parameter automatically switches to max_completion_tokens, keeping full compatibility with OpenAI's new interface and avoiding errors caused by mismatched parameters.
Reference: https://openai.com/index/introducing-o3-and-o4-mini/
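The regex-based vision detection described above can be sketched as follows. This is an illustrative sketch, not the actual NextChat source; the real list lives in VISION_MODEL_REGEXES in app/constant.ts, and the entries shown here are an assumption rather than the full list.

```typescript
// Illustrative sketch of regex-based vision-model detection.
// The entries below are assumed examples, not the complete real list.
const VISION_MODEL_REGEXES: RegExp[] = [
  /gpt-4o/,
  /o3/, // added by this PR
  /o4-mini/, // added by this PR
];

function isVisionModel(model: string): boolean {
  return VISION_MODEL_REGEXES.some((regex) => regex.test(model));
}
```

A model name matching any pattern in the array is treated as vision-capable, which gates the multimodal request path.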

📝 补充信息 | Additional Information

Resolves Issue: #6456

Summary by CodeRabbit

  • New Features
    • Added support for new vision models "o3" and "o4-mini".
  • Enhancements
    • Improved model recognition to include "o3" and "o4-mini" in vision-capable model lists.


vercel bot commented Apr 18, 2025

Someone is attempting to deploy a commit to the NextChat Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Apr 18, 2025

Walkthrough

The changes update model recognition and request handling logic to include support for "o3" and "o4-mini" models. Specifically, the model grouping logic in the OpenAI platform client is extended so that models with names starting with "o4-mini" are treated the same as "o1" and "o3" models, affecting the inclusion of the max_tokens property in vision model requests. Additionally, new regex patterns and model names for "o3" and "o4-mini" are added to the constants used for vision model detection and model lists.

Changes

File(s): Change Summary
  • app/client/platforms/openai.ts: Extended logic to group "o4-mini" models with "o1" and "o3" for request handling, affecting the addition of max_tokens.
  • app/constant.ts: Added /o3/ and /o4-mini/ regex patterns to VISION_MODEL_REGEXES and added "o3" and "o4-mini" to openaiModels.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant OpenAIPlatform
    participant Constants

    Client->>OpenAIPlatform: Send request with selected model (e.g., "o1", "o3", "o4-mini")
    OpenAIPlatform->>Constants: Check if model matches vision regexes
    Constants-->>OpenAIPlatform: Return match result
    OpenAIPlatform->>OpenAIPlatform: Determine if model is "o1", "o3", or "o4-mini"
    alt If model is "o1", "o3", or "o4-mini"
        OpenAIPlatform->>OpenAIPlatform: Exclude max_tokens from vision request
    else Other models
        OpenAIPlatform->>OpenAIPlatform: Include max_tokens in vision request
    end
    OpenAIPlatform->>Client: Return response

Possibly related issues

  • Support o3 and o4-mini #6456: This PR directly addresses the addition of "o3" and "o4-mini" model support as described in the issue.

Suggested labels

planned

Suggested reviewers

  • lloydzhou
  • Dogtiti

Poem

In fields of code where models grow,
"o3" and "o4-mini" now join the show!
Regexes stretch, arrays expand,
Vision support grows across the land.
No more tokens where they don't belong—
With every hop, the models grow strong!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 11b37c1 and 2d7229d.

📒 Files selected for processing (2)
  • app/client/platforms/openai.ts (2 hunks)
  • app/constant.ts (2 hunks)
🔇 Additional comments (4)
app/constant.ts (2)

481-482: Added support for o3 and o4-mini vision models!

The addition of the regex patterns for o3 and o4-mini in the VISION_MODEL_REGEXES array ensures that these new models are properly detected as vision-capable models, which is essential for correctly processing multimodal inputs.


521-522: Added new o3 and o4-mini models to the OpenAI models list

These additions ensure the new models are included in the list of available models for selection in the UI and for model filtering in the backend. This properly integrates them into the application's model selection system.

app/client/platforms/openai.ts (2)

200-202: Updated isO1OrO3 condition to include o4-mini models

Good implementation. The updated condition ensures that o4-mini models receive the same special handling as o1 and o3 models. This is important for supporting the new API parameter requirements.
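The extended grouping condition can be sketched roughly like this. The function name mirrors the isO1OrO3 check mentioned in the review, but the body shown here is an illustrative reconstruction, not the actual code in app/client/platforms/openai.ts.

```typescript
// Illustrative reconstruction of the extended model-family check:
// models whose names start with "o1", "o3", or "o4-mini" get the
// same special request handling.
function isO1OrO3(model: string): boolean {
  return (
    model.startsWith("o1") ||
    model.startsWith("o3") ||
    model.startsWith("o4-mini")
  );
}
```

Prefix matching keeps dated variants such as "o1-preview" or "o3-mini" in the same group without listing each snapshot name.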


247-249: Correctly excluded o4-mini from vision model max_tokens parameter

This ensures compatibility with OpenAI's API by preventing the addition of the max_tokens parameter for o4-mini models when handling vision requests, similar to the handling for o1 and o3 models. Instead, the code will use max_completion_tokens as defined in the earlier logic.
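A minimal sketch of the resulting parameter selection, assuming a simplified payload shape (the interface and helper name here are hypothetical, not NextChat's actual types):

```typescript
// Simplified sketch of choosing the token-limit parameter per model family.
// The payload shape is an assumption for illustration only.
interface TokenParams {
  max_tokens?: number;
  max_completion_tokens?: number;
}

function tokenParamsFor(model: string, limit: number): TokenParams {
  const isReasoningModel =
    model.startsWith("o1") ||
    model.startsWith("o3") ||
    model.startsWith("o4-mini");
  // o1/o3/o4-mini reject max_tokens and expect max_completion_tokens instead.
  return isReasoningModel
    ? { max_completion_tokens: limit }
    : { max_tokens: limit };
}
```

Keeping the two parameters mutually exclusive avoids the API error the PR description mentions for requests that send max_tokens to the new models.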


@ACTOR-ALCHEMIST changed the title from "Support OpenAI's new models o3 and o4-mini, and adapt the API max_completion_tokens parameter #6456" to "Support OpenAI's new models o3 and o4-mini, and adapt the API max_completion_tokens parameter" Apr 18, 2025
@ACTOR-ALCHEMIST changed the title to "Support for OpenAI's new models o3 and o4-mini, and adaptation to the API max_completion_tokens parameter." Apr 18, 2025
@ACTOR-ALCHEMIST changed the title to "Support OpenAI o3 and o4-mini" Apr 18, 2025
@Leizhenpeng Leizhenpeng merged commit 3809375 into ChatGPTNextWeb:main Apr 19, 2025
1 check failed
@coderabbitai coderabbitai bot mentioned this pull request Aug 9, 2025
10 tasks
