Closed
Labels
module: bfloat16, module: mps (Related to Apple Metal Performance Shaders framework), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Describe the bug
torch.arange is not implemented for BFloat16 on MPS. I'm assuming this is a holdover from before BFloat16 support was added to the MPS backend.
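A minimal reproduction, independent of the code below (a sketch; assumes a machine where torch.backends.mps.is_available() returns True):

import torch

# float32 works on MPS, but bfloat16 raises the error reported below
torch.arange(4, device="mps", dtype=torch.float32)   # OK
torch.arange(4, device="mps", dtype=torch.bfloat16)  # RuntimeError: "arange_mps" not implemented for 'BFloat16'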
img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2, device=device, dtype=dtype)[:, None]
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "arange_mps" not implemented for 'BFloat16'
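A possible workaround until an MPS BFloat16 kernel for arange lands (an untested sketch, not an official fix) is to build the range in a dtype MPS already supports and cast afterwards:

import torch

device = torch.device("mps")

# Hypothetical helper (not part of PyTorch): construct the range in
# float32, which has an arange kernel on MPS, then cast to the
# requested dtype, e.g. torch.bfloat16.
def arange_mps_safe(end, device, dtype):
    return torch.arange(end, device=device, dtype=torch.float32).to(dtype)

# Mirrors the failing line above, with h = 8 as a stand-in value
col = arange_mps_safe(8 // 2, device, torch.bfloat16)[:, None]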
Versions

PyTorch version: 2.6.0.dev20240924
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.29.5
Libc version: N/A
Python version: 3.11.10 (main, Sep 7 2024, 08:05:54) [Clang 16.0.0 (clang-1600.0.26.3)] (64-bit runtime)
Python platform: macOS-15.0-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] numpy==1.26.4
[pip3] onnx==1.15.0
[pip3] onnxruntime==1.16.3
[pip3] pytorch-lightning==2.1.3
[pip3] torch==2.6.0.dev20240924
[pip3] torchmetrics==0.11.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0.dev20240924
[conda] Could not collect