Which GPU should I buy for ComfyUI · comfyanonymous/ComfyUI Wiki · GitHub

Which GPU should I buy for ComfyUI

comfyanonymous edited this page Oct 17, 2025 · 8 revisions

Which GPU should I buy?

This is a tier list of which consumer GPUs we would recommend for using with ComfyUI.

In AI the software stack matters as much as the hardware, which is why the list is ranked this way.

S Tier

Nvidia

All Nvidia GPUs from the last 10 years (since Maxwell/GTX 900) are supported in pytorch and they work very well.

3000 series and above are recommended for best performance. More VRAM is always preferable.

Why you should avoid older generations if you can.

Older generations of cards will still work, but performance may be worse than expected because they lack native support for certain operations.

Here is a quick summary of what is supported on each generation:

  • 50 series (blackwell): fp16, bf16, fp8, fp4
  • 40 series (ada): fp16, bf16, fp8
  • 30 series (ampere): fp16, bf16
  • 20 series (turing): fp16
  • 10 series (pascal) and below: only slow full precision fp32.

Models are inferenced in fp16 or bf16 (depending on the model) for best quality, with the option of fp8 on some models for lower memory use and more speed at some cost in quality.

Note that this table doesn't mean fp16 is completely unsupported on the 10 series, for example; it just means it will be slower because the GPU can't handle it natively.
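The generation table above can be sketched as a small capability check. This is a minimal illustration, not ComfyUI's actual logic; the compute-capability cutoffs used here (sm_75 for fp16, sm_80 for bf16, sm_89 for fp8, sm_100 for fp4) are assumptions mapped from the generation list, and `native_dtypes` is a hypothetical helper name.

```python
# Hedged sketch: map an Nvidia compute capability (major, minor) to the
# reduced-precision dtypes the hardware accelerates natively, following the
# generation table above. Cutoffs are assumptions, not ComfyUI's own code.

def native_dtypes(major: int, minor: int) -> list[str]:
    cc = (major, minor)
    dtypes = ["fp32"]          # every CUDA GPU runs full precision (slowly on old cards)
    if cc >= (7, 5):           # 20 series (turing) and newer
        dtypes.append("fp16")
    if cc >= (8, 0):           # 30 series (ampere) and newer
        dtypes.append("bf16")
    if cc >= (8, 9):           # 40 series (ada) and newer
        dtypes.append("fp8")
    if cc >= (10, 0):          # 50 series (blackwell)
        dtypes.append("fp4")
    return dtypes

# On a real system the capability would come from pytorch, e.g.:
#   import torch
#   major, minor = torch.cuda.get_device_capability(0)
print(native_dtypes(8, 6))    # a 30 series card -> ['fp32', 'fp16', 'bf16']
```

So a 10 series card (compute capability 6.x) falls through every check and is left with only slow fp32, which is why the cheap pascal cards are a bad deal despite their VRAM.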

Don't be tempted by the cheap pascal workstation cards with lots of vram; your performance will be bad.

Anything older than the 2000 series, like Volta or Pascal, should be avoided because those architectures are being deprecated in CUDA 13.

B Tier

AMD (Linux)

Officially supported in pytorch.

Works well if the card is officially supported by ROCm, but can be a bit slow compared to price-equivalent Nvidia GPUs, depending on the model. The newer the GPU generation, the better things work.

RDNA 4, MI300X: Confirmed "A tier" experience on latest ComfyUI and latest pytorch nightly.

Unsupported cards might be a real pain to get running.

AMD (Windows)

There is an official pytorch version that works, but it can be a bit slow compared to the Linux builds. The oldest officially supported generation is the 7000 series.

Intel (Linux + Windows)

Officially supported in pytorch. People seem to get it working fine.

D Tier

Mac with Apple silicon

Officially supported in pytorch. It works, but Apple loves randomly breaking things with OS updates.

Very slow. A lot of ops are not properly supported. No fp8 support at all.

F Tier

Qualcomm AI PC

Pytorch doesn't work at all.

They are "working on it". Until they actually get it working, I recommend avoiding them completely: it might take them so long that the current hardware will be completely obsolete by then.
