H100        $6.39/hr   1.2% (7d)
A100 80GB   $2.45/hr   0.5% (7d)
H200        $10.29/hr  0.8% (7d)
L40S        $1.28/hr   0.3% (7d)
T4          $0.24/hr   0.6% (7d)
L4          $0.45/hr   1.1% (7d)

GPU & Accelerator Comparisons

Side-by-side comparisons of 23 GPU and AI accelerator pairs. Live blended pricing across spot, on-demand, and reserved instances. Updated daily.

Training GPUs

Head-to-head comparisons of GPUs designed for AI model training and HPC workloads

Inference GPUs

Comparisons of cost-efficient GPUs optimised for deploying AI models in production

Cross-Vendor

NVIDIA vs AMD and other cross-vendor comparisons

Custom Silicon

Cloud-native AI chips — TPUs, Trainium, Inferentia — compared with GPUs

How to Choose the Right GPU

Selecting the right cloud GPU depends on your workload type, budget, and scale. Training workloads — where models learn from data — require high compute throughput and large memory capacity. GPUs like the H100, B200, and MI300X are designed for this. Inference workloads — where trained models serve predictions — prioritise cost efficiency and latency. GPUs like the T4, L4, and L40S excel here.
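The workload-based split above can be sketched as a small lookup helper. This is an illustrative snippet, not the site's actual recommendation logic; the GPU groupings mirror the paragraph, and the function name is a placeholder.

```python
# Hypothetical helper mirroring the guidance above: map a workload type
# to candidate GPUs. Groupings come from the text; the API is illustrative.

def suggest_gpus(workload: str) -> list[str]:
    """Return candidate GPUs for a workload type ('training' or 'inference')."""
    catalog = {
        # High compute throughput and large memory for training / HPC
        "training": ["H100", "B200", "MI300X"],
        # Cost-efficient, low-latency parts for serving predictions
        "inference": ["T4", "L4", "L40S"],
    }
    if workload not in catalog:
        raise ValueError(f"unknown workload: {workload!r}")
    return catalog[workload]

print(suggest_gpus("inference"))  # ['T4', 'L4', 'L40S']
```

In practice the choice also depends on budget and scale, so a real selector would weigh price per hour and memory capacity rather than return a fixed list.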

Cross-vendor comparisons are increasingly relevant as AMD's MI300X and cloud-native accelerators like Google TPUs and AWS Trainium challenge NVIDIA's dominance. While NVIDIA's CUDA ecosystem remains the most mature, alternatives can offer significant cost savings for supported workloads.

The blended pricing shown in our comparisons averages across spot, on-demand, and reserved pricing tiers from major cloud providers. This gives a realistic view of actual costs rather than best-case spot pricing that may not always be available.
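A blended rate of this kind is a weighted average over pricing tiers. The sketch below shows the arithmetic under assumed, illustrative weights and rates; the actual tier weights and figures used in the comparisons are not stated here.

```python
# Minimal sketch of blended pricing as a weighted average across tiers.
# Rates and weights below are illustrative, not the site's real data.

def blended_price(tier_rates: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted average of per-tier hourly rates.

    tier_rates: hourly $ rate per tier, e.g. {"spot": ..., "on_demand": ...}
    weights:    relative weight per tier; normalised internally to sum to 1.
    """
    total = sum(weights[t] for t in tier_rates)
    return sum(tier_rates[t] * (weights[t] / total) for t in tier_rates)

# Illustrative (not actual) H100 tier rates, equally weighted:
rates = {"spot": 4.10, "on_demand": 8.20, "reserved": 6.00}
w = {"spot": 1.0, "on_demand": 1.0, "reserved": 1.0}
print(round(blended_price(rates, w), 2))  # → 6.1
```

Weighting on-demand and reserved tiers alongside spot is what keeps the blended figure above the best-case spot rate, reflecting that spot capacity is not always available.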

Browse All Accelerators

View individual profiles with live pricing, specs, and regional breakdowns for all 39 GPU and AI accelerator types.

Need a custom comparison?

Get tailored analysis, historical pricing data, and deployment recommendations.