Keep special sampling params by blahblahasdf · Pull Request #1186 · vllm-project/vllm · GitHub

Conversation

@blahblahasdf
Contributor

@blahblahasdf blahblahasdf commented Sep 26, 2023

See the discussion on #970

I have a model that generates special tokens that carry important meaning for downstream generation tasks. At present there is no way to get these special tokens back in the generated text, because of the hardcoded argument passed to detokenize_incrementally in llm_engine.

This PR adds an optional parameter to SamplingParams that controls whether special tokens are kept in the generated text.
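
For illustration, a minimal sketch of how the option is used once merged (the parameter was renamed to skip_special_tokens during review, as discussed below; the model name and prompt here are placeholders):

```python
from vllm import LLM, SamplingParams

# Placeholder model; any causal LM supported by vLLM works.
llm = LLM(model="facebook/opt-125m")

# skip_special_tokens defaults to True (strip special tokens, the previous
# hardcoded behavior); set it to False to keep them in the output text.
params = SamplingParams(max_tokens=64, skip_special_tokens=False)

outputs = llm.generate(["Hello, my name is"], params)
for output in outputs:
    # Special tokens such as </s> now appear in the decoded text.
    print(output.outputs[0].text)
```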

Collaborator

@WoosukKwon WoosukKwon left a comment


@blahblahasdf Thanks for the PR! Can you change it to skip_special_tokens? I think this will improve the compatibility with HF.

@blahblahasdf
Contributor Author

> @blahblahasdf Thanks for the PR! Can you change it to skip_special_tokens? I think this will improve the compatibility with HF.

Yup will do.
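
For reference, skip_special_tokens is the name Hugging Face tokenizers use for the same switch when decoding, which is what makes the rename a compatibility win; a small sketch (the tokenizer choice is arbitrary):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok("Hello world")["input_ids"] + [tok.eos_token_id]

print(tok.decode(ids, skip_special_tokens=True))   # special tokens stripped
print(tok.decode(ids, skip_special_tokens=False))  # trailing <|endoftext|> kept
```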

@WoosukKwon WoosukKwon self-requested a review September 28, 2023 02:13
Collaborator

@WoosukKwon WoosukKwon left a comment


LGTM! Thanks for the contribution!

@WoosukKwon WoosukKwon merged commit 20f7cc4 into vllm-project:main Sep 28, 2023
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
jikunshang added a commit to jikunshang/vllm that referenced this pull request May 12, 2025
please note this will lead to the P-sampled first token not being equal to the D-sampled first token.


