docs: fix typo in 'quantization-aware training' by luckyvickyricky · Pull Request #39904 · huggingface/transformers · GitHub

Conversation

@luckyvickyricky (Contributor) commented Aug 5, 2025

What does this PR do?

This PR fixes a minor typo in the documentation:

  • "quantization-aware trainin" → "quantization-aware training"

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@jungnerd, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S

Once the translation crew members listed above have checked and approved this PR, I will mention a maintainer for the final review and merge.

Thank you!

FP-Quant currently performs best for very large batch size processing.

See [QuTLASS README](https://github.com/IST-DASLab/qutlass/blob/main/README.md) for speedups.
@luckyvickyricky (Contributor, Author) commented:

FYI: The last line might appear as changed due to GitHub editor auto-formatting (e.g., newline at EOF). As far as I can tell, the content itself has not been modified.

For reference, here is the permalink to the original unchanged lines:

## Speedups
FP-Quant currently performs best for very large batch size processing.
See [QuTLASS README](https://github.com/IST-DASLab/qutlass/blob/main/README.md) for speedups.
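
For context, the snippet above describes enabling FP-Quant for large-batch inference. A minimal sketch of what that looks like in practice is below; the `FPQuantConfig` class name, its default arguments, and the model checkpoint are assumptions for illustration and are not taken from this PR, so check the quantization docs for the exact API.

```python
# Minimal sketch: loading a model with FP-Quant quantization in transformers.
# NOTE: FPQuantConfig (and its defaults) is an assumed name used for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer, FPQuantConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example checkpoint

quant_config = FPQuantConfig()  # default settings; QuTLASS kernels back the quantized matmuls

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for batched generation

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    torch_dtype="auto",
    device_map="cuda",
)

# FP-Quant performs best at very large batch sizes, so batch many prompts together.
prompts = ["Explain quantization-aware training in one sentence."] * 64
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```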

@Rocketknight1 (Member) left a comment

LGTM! (And don't worry about the final newline)

@Rocketknight1 Rocketknight1 marked this pull request as ready for review August 6, 2025 14:39
@Rocketknight1 Rocketknight1 force-pushed the fix/typo-quantization-aware-training branch from 608c003 to fa410f1 on August 6, 2025 14:39
@Rocketknight1 Rocketknight1 enabled auto-merge (squash) August 6, 2025 14:39
@Rocketknight1 Rocketknight1 merged commit dff6185 into huggingface:main Aug 6, 2025
14 checks passed
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

