Description
🐛 Describe the bug
I encountered some weird NaNs after switching from CPU to MPS, and after a bit of digging noticed that `clamp` behaves differently on the two backends. The issue is reproducible from the Python console:
```python
import torch
torch.tensor(0).clamp(min=1e-1)                   # tensor(0.1000)
torch.tensor(0).to("mps").clamp(min=1e-1).item()  # 0.0
```
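As a hedged workaround sketch (not a fix for the underlying MPS kernel), casting the integer tensor to a floating dtype before clamping avoids the mixed int/float clamp path where the divergence appears. The device fallback below is an assumption so the snippet also runs on machines without MPS:

```python
import torch

# Fall back to CPU when MPS is unavailable (assumption for portability).
device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.tensor(0, device=device)  # int64 tensor, as in the repro

# Direct clamp of an int64 tensor with a float bound is where the
# reported CPU/MPS divergence appears.
direct = x.clamp(min=1e-1)

# Workaround sketch: cast to float32 first, then clamp.
safe = x.float().clamp(min=1e-1)  # expected to be ~0.1 on both backends
```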
Versions
```
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.28.3
Libc version: N/A

Python version: 3.10.4 (main, Sep 11 2022, 20:39:17) [Clang 13.1.6 (clang-1316.0.21.2.5)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M1 Pro

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpy-quaternion==2023.0.2
[pip3] pytorch-lightning==2.2.0.post0
[pip3] torch==2.3.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.3.1
[pip3] torchvision==0.18.1
[conda] No relevant packages
```