[None][fix] Use safeInitRowMax instead of fp32_lowest to avoid NaN by lowsfer · Pull Request #7087 · NVIDIA/TensorRT-LLM · GitHub

Conversation


@lowsfer lowsfer commented Aug 20, 2025

In some special cases, when all BMM1 results are masked out, we may get NaN due to arithmetic with the fp32 lowest value. Use a larger sentinel value to avoid that.
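
As a rough illustration of the failure mode (simplified host-side code, not the kernel; the scaling constant and values are assumptions), the fp32 lowest sentinel can overflow to -inf once it is scaled, and the softmax max-subtraction then computes -inf - (-inf):

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

int main()
{
    float const maskedElem = -FLT_MAX;                      // fp32 lowest used as the masked-out value
    float const scaled = maskedElem * 1.4426950408889634f;  // any further scaling overflows to -inf
    float const rowMax = scaled;                            // fully-masked row: the max is the sentinel itself
    float const shifted = scaled - rowMax;                  // -inf - (-inf) == NaN
    std::printf("%f\n", std::exp2(shifted));                // NaN then propagates through the softmax
    return 0;
}
```

A finite sentinel such as safeInitRowMax keeps the subtraction well defined.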

Summary by CodeRabbit

  • Bug Fixes
    • Fixed a column-based mask bit test to correctly detect masked columns, preventing incorrect attention outputs in edge cases.
  • Performance and Stability
    • Improved numerical stability by using safer initialization values for max-tracking, reducing risk of incorrect extrema, overflow, or NaNs.
  • Chores
    • Aligned sentinel/initialization handling across related code paths for consistency.
  • Notes
    • No changes to public APIs or user-facing configuration.

coderabbitai bot commented Aug 20, 2025

📝 Walkthrough

Replaced multiple accumulator/max-tracking sentinel initializations, switching from mha::numeric_limits<float>::lowest() to safeInitRowMax across shared-memory setup and masking paths. Updated a column bit-mask check from the shift-and-test form to (mask & (1ULL << col)). No interface or structural changes.

Changes

Cohort / File(s) | Summary
  • Accumulator init updates — cpp/kernels/xqa/mha_sm90.cu: Replaced various acc(m,n)(i,j) and gemm1AccColMax initializations from mha::numeric_limits<float>::lowest() to safeInitRowMax across SPEC_DEC and non-SPEC_DEC masking/initialization paths.
  • Bit-mask test adjustment — cpp/kernels/xqa/mha_sm90.cu: Changed the column masking condition from ((mask >> col) & 1) == 0 to (mask & (1ULL << col)) == 0; aligned the associated accumulator assignment to use safeInitRowMax. (A host-side sketch of both changes follows.)
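
A hedged host-side mock of the change described above (acc, mask, and the sentinel value are illustrative stand-ins, not the kernel's actual types or constant):

```cpp
#include <array>
#include <cstdint>

int main()
{
    float const safeInitRowMax = -1e30f; // assumed: a finite value far above fp32 lowest; the real constant is defined in the xqa sources
    uint64_t const mask = 0b1010ULL;     // bit i set => column i is kept
    std::array<float, 4> acc{};
    for (uint32_t col = 0; col < acc.size(); ++col)
    {
        if ((mask & (1ULL << col)) == 0)  // new bit-mask test form
        {
            acc[col] = safeInitRowMax;    // previously mha::numeric_limits<float>::lowest()
        }
    }
    return 0;
}
```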

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


@lowsfer lowsfer requested a review from jhaotingc August 20, 2025 08:36
@lowsfer lowsfer enabled auto-merge (squash) August 20, 2025 08:36
@lowsfer lowsfer changed the title [fix]: use safeInitRowMax instead of fp32_lowest to avoid NaN [None][fix]: use safeInitRowMax instead of fp32_lowest to avoid NaN Aug 20, 2025
@lowsfer lowsfer changed the title [None][fix]: use safeInitRowMax instead of fp32_lowest to avoid NaN [None][fix]: Use safeInitRowMax instead of fp32_lowest to avoid NaN Aug 20, 2025
@lowsfer lowsfer changed the title [None][fix]: Use safeInitRowMax instead of fp32_lowest to avoid NaN [None][fix] Use safeInitRowMax instead of fp32_lowest to avoid NaN Aug 20, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
cpp/kernels/xqa/mha_sm90.cu (1)

2381-2387: Bit-mask check rewritten to (mask & (1ULL << col)) == 0: clearer and type-safe

The new form is equivalent to the old shift-and-test but easier to read and avoids precedence pitfalls. Using 1ULL ensures 64-bit shifting; col < 64 here, so no UB.

Optional: For consistency, consider using the same style wherever bit checks appear in this file (some places still use shift forms).
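
A quick self-contained check of the equivalence claim (the mask value is arbitrary; only col < 64 is assumed):

```cpp
#include <cassert>
#include <cstdint>

int main()
{
    uint64_t const mask = 0xF0F0F0F0F0F0F0F0ULL;
    for (uint32_t col = 0; col < 64; ++col)
    {
        bool const oldForm = ((mask >> col) & 1) == 0;
        bool const newForm = (mask & (1ULL << col)) == 0;
        assert(oldForm == newForm);
    }
    return 0;
}
```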

🔇 Additional comments (4)
cpp/kernels/xqa/mha_sm90.cu (4)

1948-1963: Masking to safeInitRowMax under SPEC_DEC prevents NaNs on fully-masked rows

Updating masked-out elements to safeInitRowMax here avoids -inf - (-inf) during the max-subtraction softmax step. This is in line with the PR description.

One potential behavioral nuance: if an entire column (or the covered subset) is masked, max equals the sentinel and exp2f(elem - max) yields 1, not 0. Upstream consumers may then see non-zero sums. If downstream expects zero contribution for fully-masked columns, consider a follow-up guard (e.g., treat columns with colMax == safeInitRowMax as zero-sum) or confirm that attention sinks/normalization logic accounts for this edge case in your workloads. Do you want me to draft an optional guard pattern for this degenerate case?
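
A minimal sketch of that nuance (the sentinel value below is an assumption; the real safeInitRowMax is defined in the kernel sources):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    float const safeInitRowMax = -1e30f;                      // assumed finite sentinel
    float const rowMax = safeInitRowMax;                      // every element was masked to the sentinel
    float const weight = std::exp2(safeInitRowMax - rowMax);  // exp2f(0) == 1, not 0
    std::printf("per-element weight in the fully-masked case: %f\n", weight);
    return 0;
}
```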


2069-2095: Row-range masking to safeInitRowMax (SWAP_AB path) is consistent and safe

Using safeInitRowMax for rows outside the valid range avoids NaNs in softmax and aligns with the change elsewhere.


2383-2386: Masking to safeInitRowMax under SPEC_DEC non-row path maintains NaN-avoidance

Setting masked elements to safeInitRowMax complements the bit-mask check and avoids -inf arithmetic during softmax.


2393-2418: Non-SPEC_DEC col-range masking switched to safeInitRowMax: consistent with sentinel strategy

The change mirrors other paths and should prevent NaNs if an entire column is masked.

Same note as above: in fully-masked cases, softmax of uniform sentinels produces uniform weights after max subtraction. If zero contribution is desired for fully-masked columns, consider a follow-up guard. I can propose a minimal patch if that’s a requirement.

… some cases

Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>

lowsfer commented Aug 20, 2025

/bot run

@tensorrt-cicd

PR_Github #15903 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15903 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11954 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@lowsfer lowsfer merged commit cbcea33 into NVIDIA:main Aug 21, 2025
5 checks passed
zhou-yuxin pushed a commit to zhou-yuxin/TensorRT-LLM that referenced this pull request Aug 21, 2025
…#7087)

Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>
Signed-off-by: Yuxin <yuxinz@nvidia.com>
jhaotingc pushed a commit to jhaotingc/TensorRT-LLM that referenced this pull request Aug 25, 2025
…#7087)

Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>
symphonylyh pushed a commit to symphonylyh/TensorRT-LLM that referenced this pull request Aug 26, 2025
…VIDIA#6282 NVIDIA#6279

* [None][infra] Pin the version for triton to 3.3.1 (NVIDIA#6508)

Signed-off-by: qqiao <qqiao@nvidia.com>

* [None][infra] Pin the version for triton to 3.3.1 (NVIDIA#6508) (NVIDIA#6519) (NVIDIA#6549)

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

* [fix]: use safeInitRowMax instead of fp32_lowest to avoid NaN (NVIDIA#7087)

Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>

* [None][fix] Fix a numerical stability issue for XQA with spec dec

Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>

* fix typo

Signed-off-by: Jhao-Ting Chen <jhaotingc@nvidia.com>

* fix precompiled multi_query_token kernel not having is_fp8_out hash key (NVIDIA#6279)

Signed-off-by: Jhao-Ting Chen <jhaotingc@nvidia.com>

* [fix] Fix missing fields in xqa kernel cache key (NVIDIA#6282)

Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>

---------

Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>
Signed-off-by: Jhao-Ting Chen <jhaotingc@nvidia.com>
Co-authored-by: Emma Qiao <qqiao@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yao Yao <lowsfer@users.noreply.github.com>