[CPU] Support GQA for flash attention by Valentine233 · Pull Request #157893 · pytorch/pytorch

Conversation

Collaborator

@Valentine233 Valentine233 commented Jul 9, 2025

As many models require GQA, we support it in flash attention for the CPU path.
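For context, a minimal sketch (not from this PR) of the kind of call this enables on the CPU path: the query carries more heads than the key/value tensors, and `enable_gqa=True` lets SDPA share each KV head across a group of query heads.

```python
# Minimal sketch, not from the PR: a GQA-shaped SDPA call on CPU.
# Query has 16 heads, key/value have 4, so each KV head serves 4 query heads.
import torch
import torch.nn.functional as F

batch, q_heads, kv_heads, seq_len, head_dim = 2, 16, 4, 128, 64
q = torch.randn(batch, q_heads, seq_len, head_dim, dtype=torch.bfloat16)
k = torch.randn(batch, kv_heads, seq_len, head_dim, dtype=torch.bfloat16)
v = torch.randn(batch, kv_heads, seq_len, head_dim, dtype=torch.bfloat16)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True, enable_gqa=True)
print(out.shape)  # torch.Size([2, 16, 128, 64])
```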

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168

@pytorch-bot pytorch-bot bot added the module: cpu CPU specific problem (e.g., perf, algorithm) label Jul 9, 2025

pytorch-bot bot commented Jul 9, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/157893

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 289b47c with merge base b146ca7:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@Valentine233 Valentine233 added the topic: not user facing topic category label Jul 9, 2025
@Valentine233 Valentine233 marked this pull request as draft July 9, 2025 05:31
@Valentine233 Valentine233 requested a review from mingfeima July 9, 2025 06:59
@Valentine233 Valentine233 marked this pull request as ready for review July 9, 2025 08:24
Collaborator

@mingfeima mingfeima left a comment


Generally OK, just simplify the test cases a little bit to remove the duplicated code.

@parametrize("dtype", [torch.float64, torch.float32, torch.bfloat16, torch.float16])
@parametrize("n_heads", [[65, 5], [16, 4], [27, 1], [5, 1]])
@parametrize("train", [False, True])
def test_scaled_dot_product_fused_attention_gqa_vs_math_cpu(
Collaborator


Combine this one with test_scaled_dot_product_fused_attention_mask_vs_math_cpu to remove the duplicated code:

### impls
def test_sdpa_vs_math_cpu_helper(...):
    ...

def test_scaled_dot_product_fused_attention_mask_vs_math_cpu():
    test_sdpa_vs_math_cpu_helper(...)

def test_scaled_dot_product_fused_attention_gqa_vs_math_cpu():
    test_sdpa_vs_math_cpu_helper(...)
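A hedged, more concrete version of such a helper could look like the following (signature, shapes, backend selection, and tolerances are illustrative, not the test code that actually landed):

```python
# Illustrative sketch only; not the actual helper in the PR.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

def test_sdpa_vs_math_cpu_helper(dtype, n_heads_q, n_heads_kv, train, set_attn_mask=False):
    def make(n_heads):
        return torch.randn(2, n_heads, 128, 64, dtype=dtype, requires_grad=train)

    q, k, v = make(n_heads_q), make(n_heads_kv), make(n_heads_kv)
    mask = torch.randn(2, 1, 128, 128, dtype=dtype) if set_attn_mask else None
    enable_gqa = n_heads_q != n_heads_kv  # GQA when query has more heads than KV

    # Run the fused CPU flash path and the math reference, then compare.
    with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
        actual = F.scaled_dot_product_attention(q, k, v, attn_mask=mask, enable_gqa=enable_gqa)
    with sdpa_kernel(SDPBackend.MATH):
        expected = F.scaled_dot_product_attention(q, k, v, attn_mask=mask, enable_gqa=enable_gqa)

    torch.testing.assert_close(actual, expected, atol=1e-2, rtol=1e-2)
```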

Collaborator Author


Thanks, UT updated.

@Valentine233 Valentine233 added the ciflow/trunk Trigger trunk jobs on your pull request label Jul 10, 2025
@Valentine233
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

fduwjj added a commit that referenced this pull request Jul 14, 2025
As many models require GQA, we support it in flash attention for the CPU path.
Approved by: https://github.com/mingfeima, https://github.com/jansel

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Jul 18, 2025
Summary:

For `scaled_dot_product_attention(..., enable_gqa=True)`:
- the Math backend passes the flag through, performing the extra [KV broadcast](https://github.com/pytorch/pytorch/blob/6e07d6a0ff386d99d8c2f1d25978b0683988a4cb/aten/src/ATen/native/transformers/attention.cpp#L902) if set to True
- the Flash backend has no flag, and relies on correct indexing in the C++ kernel
- Export used to default to Math for `enable_gqa=True`, but #157893 landed and enabled Flash. At the same time, there is an export-only [decomp](https://github.com/pytorch/pytorch/blob/6e07d6a0ff386d99d8c2f1d25978b0683988a4cb/torch/_decomp/decompositions.py#L4968) that redirects flash -> math and calls it with `enable_gqa` unset, because that information isn't available there. This led to the crash reported at https://fb.workplace.com/groups/1028545332188949/posts/1264609398582540: the Math non-GQA variant was called with GQA inputs.

This assumes GQA for seqlen mismatches in the export decomp, setting `enable_gqa = <q seqlen> != <kv seqlen>`, relying on prior backend checks to raise on invalid input shapes.
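For reference, a minimal sketch (not the PyTorch source) of the extra KV broadcast the Math backend performs when `enable_gqa=True`; the Flash path avoids materializing this by indexing the shared KV heads in the C++ kernel:

```python
# Illustrative only: repeat each KV head so every group of query heads
# sees its shared key/value head, which is what the Math path's broadcast does.
import torch

def broadcast_kv_for_gqa(key: torch.Tensor, value: torch.Tensor, q_heads: int):
    kv_heads = key.size(1)
    assert q_heads % kv_heads == 0, "query heads must be a multiple of KV heads"
    rep = q_heads // kv_heads
    return key.repeat_interleave(rep, dim=1), value.repeat_interleave(rep, dim=1)
```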

Test Plan:
test_export

Rollback Plan:

Differential Revision: D78524147
facebook-github-bot pushed a commit that referenced this pull request Jul 23, 2025
Reviewed By: angelayi

Differential Revision: D78524147
pytorchmergebot pushed a commit that referenced this pull request Jul 24, 2025
Differential Revision: D78524147

Pull Request resolved: #158604
Approved by: https://github.com/angelayi, https://github.com/drisspg
yangw-dev pushed a commit that referenced this pull request Aug 1, 2025
@github-actions github-actions bot deleted the support_gqa_cpu branch August 13, 2025 02:18

Labels

ciflow/trunk, Merged, module: cpu, open source, topic: not user facing
