Add warning about removed sm50 and sm60 arches by pytorchbot · Pull Request #158478 · pytorch/pytorch

Conversation

@pytorchbot
Collaborator

Related to #157517

This change detects when users are running a PyTorch build compiled for CUDA 12.8/12.9 on a Maxwell or Pascal GPU. The warning references issue #157517 and asks users to install the CUDA 12.6 builds if they need sm50 or sm60 support.
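For context, a minimal sketch of the kind of check this adds, written against public `torch.cuda` APIs (an illustration only, not the actual code in `torch/cuda/__init__.py`; the helper name and warning wording are placeholders):

```python
import warnings

import torch


def warn_if_unsupported_gpu() -> None:
    # Hypothetical helper: warn when the current GPU's compute capability is
    # older than the oldest architecture this binary was compiled for.
    if not torch.cuda.is_available():
        return

    # e.g. ['sm_70', 'sm_75', ..., 'sm_120', 'compute_120'] for CUDA 12.8/12.9 builds
    supported = [
        int(arch.split("_")[1])
        for arch in torch.cuda.get_arch_list()
        if arch.startswith("sm_")
    ]
    if not supported:
        return

    major, minor = torch.cuda.get_device_capability(0)  # e.g. (5, 0) on Maxwell
    if major * 10 + minor < min(supported):
        warnings.warn(
            f"Found {torch.cuda.get_device_name(0)} with cuda capability {major}.{minor}. "
            "Support for Maxwell and Pascal architectures is removed for CUDA 12.8+ builds. "
            "Please see https://github.com/pytorch/pytorch/issues/157517 and install "
            "CUDA 12.6 builds if you require Maxwell or Pascal support."
        )
```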

Test:
```
>>> torch.cuda.get_arch_list()
['sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90', 'sm_100', 'sm_120', 'compute_120']
>>> torch.cuda.init()
/home/atalman/.conda/envs/py312/lib/python3.12/site-packages/torch/cuda/__init__.py:263: UserWarning:
    Found <GPU Name> which is of cuda capability 5.0.
    PyTorch no longer supports this GPU because it is too old.
    The minimum cuda capability supported by this library is 7.0.

  warnings.warn(
/home/atalman/.conda/envs/py312/lib/python3.12/site-packages/torch/cuda/__init__.py:268: UserWarning:
    Support for Maxwell and Pascal architectures is removed for CUDA 12.8+ builds.
    Please see https://github.com/pytorch/pytorch/issues/157517
    Please install CUDA 12.6 builds if you require Maxwell or Pascal support.
```
cc @ptrblck @msaroufim @eqy @jerryzh168 @albanD @malfet


Pull Request resolved: #158301
Approved by: https://github.com/nWEIdia, https://github.com/albanD

(cherry picked from commit fb731fe)
@pytorch-bot

pytorch-bot bot commented Jul 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/158478

Note: Links to docs will display an error until the docs builds have been completed.

⏳ 5 Pending, 1 Unrelated Failure

As of commit b578cdb with merge base 3a7ff82:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

atalman merged commit 40e7433 into release/2.8 Jul 16, 2025
103 of 105 checks passed
@atalman
Contributor

atalman commented Aug 1, 2025

Validated with Release 2.8:

```
Python 3.13.5 | packaged by Anaconda, Inc. | (main, Jun 12 2025, 16:09:02) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.init()
Device capability 8.6
Min Max Arch 86 90 90
/home/ubuntu/miniconda3/lib/python3.13/site-packages/torch/cuda/__init__.py:287: UserWarning:
    Found GPU0 NVIDIA A10G which is of cuda capability 8.6.
    Minimum and Maximum cuda capability supported by this version of PyTorch is
    (9.0) - (9.0)

  warnings.warn(
/home/ubuntu/miniconda3/lib/python3.13/site-packages/torch/cuda/__init__.py:308: UserWarning:
    Please install PyTorch with a following CUDA
    configurations:  12.8 12.9 following instructions at
    https://pytorch.org/get-started/locally/

  warnings.warn(matched_cuda_warn.format(matched_arches))
>>> torch.cuda.is_available()
```
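To run the same sanity check on another machine, the snippet below (standard public APIs; device index 0 assumed) prints the installed build's CUDA version and compiled architectures next to the local GPU's capability:

```python
import torch

# Build information: PyTorch version, the CUDA toolkit it was built against,
# and the SM architectures compiled into this wheel.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_arch_list())

# Device information: name and compute capability, e.g. NVIDIA A10G -> (8, 6).
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))
```

If the reported capability is below the oldest `sm_*` entry in the arch list (for example, sm50/sm60 GPUs with a CUDA 12.8/12.9 build), the warning above is expected, and the CUDA 12.6 builds are the suggested alternative per #157517.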

tvukovic-amd pushed a commit to ROCm/pytorch that referenced this pull request Aug 20, 2025
Add warning about removed sm50 and sm60 arches (pytorch#158301)

Co-authored-by: atalman <atalman@fb.com>
github-actions bot deleted the cherry-pick-158301-by-pytorch_bot_bot_ branch September 1, 2025 02:19