
Enable FlashAttentionV2 on Windows #108175

@drisspg

Description

Summary

This PR: #108174 will update the FlashAttention kernel within PyTorch core to V2. Currently this kernel does not support Windows. This issue tracks adding Windows support.

See: Dao-AILab/flash-attention#345
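
A minimal sketch of what "does not support Windows" looks like from the user side, assuming a CUDA build and illustrative tensor shapes: forcing the flash backend via the `torch.backends.cuda.sdp_kernel` context manager and calling `scaled_dot_product_attention` will fall through or raise on builds where the FlashAttention kernel is not compiled in.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, heads, seq_len, head_dim), fp16 on CUDA.
q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
           for _ in range(3))

# Restrict SDPA dispatch to the flash backend only, so the call fails loudly
# instead of silently falling back to the math or mem-efficient kernels.
with torch.backends.cuda.sdp_kernel(enable_flash=True,
                                    enable_math=False,
                                    enable_mem_efficient=False):
    try:
        out = F.scaled_dot_product_attention(q, k, v)
        print("flash attention kernel ran:", out.shape)
    except RuntimeError as e:
        # Expected on builds without FlashAttention support (e.g. Windows wheels
        # at the time of this issue).
        print("flash attention backend unavailable:", e)
```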

Metadata

Assignees

No one assigned

    Labels

    module: sdpa — All things related to torch.nn.functional.scaled_dot_product_attention
    triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
