Labels
module: sdpa — All things related to torch.nn.functional.scaled_dot_product_attention
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
Summary
PR #108174 will update the FlashAttention kernel within PyTorch core to FlashAttention V2. This kernel does not currently support Windows. This issue tracks Windows support.
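Until the kernel builds on Windows, a minimal workaround sketch (assuming PyTorch >= 2.0 with CUDA; the `torch.backends.cuda.sdp_kernel` context manager is the backend-selection API of that era and was later superseded by `torch.nn.attention.sdpa_kernel`) is to disable the Flash backend explicitly so `scaled_dot_product_attention` dispatches to the memory-efficient or math backends instead:

```python
import torch
import torch.nn.functional as F

# Example inputs: (batch, heads, seq_len, head_dim), half precision on CUDA.
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

with torch.backends.cuda.sdp_kernel(
    enable_flash=False,         # FlashAttention kernel: unavailable on Windows per this issue
    enable_mem_efficient=True,  # memory-efficient attention kernel
    enable_math=True,           # always-available reference implementation
):
    out = F.scaled_dot_product_attention(q, k, v)
```

This only restricts which backends the dispatcher may pick; it does not change the semantics of the attention computation.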