H100 $6.39/hr · 1.2% 7d
A100 80GB $2.45/hr · 0.5% 7d
H200 $10.29/hr · 0.8% 7d
L40S $1.28/hr · 0.3% 7d
T4 $0.24/hr · 0.6% 7d
L4 $0.45/hr · 1.1% 7d

Cloud GPU & Accelerator Pricing

Live blended pricing across spot, on-demand, and reserved instances for 37 GPU and AI accelerator types. Updated daily from major cloud providers.

Training

High-performance accelerators optimised for AI model training, fine-tuning, and HPC workloads

| Accelerator | Price/hr | Specs | 1d | 7d | Regions |
|---|---|---|---|---|---|
| NVIDIA GB200 | $15.12 | Blackwell · 192GB · 1800 TFLOPS | -0.0% | -0.0% | 41 |
| NVIDIA GB300 | $12.32 | Blackwell · 288GB · 2250 TFLOPS | +8.2% | +7.3% | 7 |
| NVIDIA B300 | $10.31 | Blackwell · 288GB · 2250 TFLOPS | -1.2% | -8.3% | 2 |
| NVIDIA H200 | $10.06 | Hopper · 141GB · 990 TFLOPS | -1.1% | -0.0% | 59 |
| NVIDIA B200 | $7.23 | Blackwell · 192GB · 1800 TFLOPS | | +1.2% | 12 |
| NVIDIA H100 | $6.43 | Hopper · 80GB · 990 TFLOPS | +1.8% | -0.1% | 58 |
| Google TPU v3 | $5.16 | TPU v3 · 16GB · 123 TFLOPS | -0.0% | -0.0% | 9 |
| AMD MI300X | $3.44 | CDNA 3 · 192GB · 1300 TFLOPS | -0.0% | -0.1% | 34 |
| AWS Trainium2 | $2.80 | Trainium2 | -0.3% | -2.1% | 6 |
| Google TPU v2 | $2.77 | TPU v2 · 8GB · 45 TFLOPS | -0.0% | -0.0% | 10 |
| NVIDIA A100 80GB | $2.44 | Ampere · 80GB · 312 TFLOPS | -0.6% | -0.1% | 57 |
| AWS Trainium | $1.95 | Trainium · 32GB | | +0.2% | 14 |
| NVIDIA A100 40GB | $1.76 | Ampere · 40GB · 312 TFLOPS | +2.4% | -0.3% | 51 |
| NVIDIA V100 | $1.64 | Volta · 16GB · 125 TFLOPS | -0.0% | -0.1% | 53 |
| NVIDIA V100 32GB | $1.50 | Volta · 32GB · 125 TFLOPS | +4.9% | -0.3% | 23 |
| Google TPU v5p | $1.06 | TPU v5p · 95GB | -0.1% | -0.1% | 11 |
| NVIDIA P100 | $0.98 | Pascal · 16GB · 19 TFLOPS | | | 12 |
| Intel Gaudi | $0.87 | Gaudi · 32GB | | -0.1% | 6 |
| Google TPU v6e (Trillium) | $0.63 | TPU v6e | -0.1% | -2.4% | 26 |
| AMD MI25 | $0.21 | Vega · 16GB · 12 TFLOPS | +0.3% | +2.5% | 50 |
Inference

Cost-efficient accelerators optimised for deploying and serving AI models in production

Other Accelerators

FPGAs, specialised chips, and accelerators for custom workloads

GPU Comparisons

Side-by-side comparisons of 25 GPU and AI accelerator pairs — H100 vs A100, H100 vs MI300X, T4 vs L4, and more.

Understanding Cloud GPU Pricing

Cloud GPU pricing varies significantly based on the accelerator type, cloud region, and pricing model. Signwl tracks pricing across three tiers — spot (preemptible), on-demand, and reserved (committed use) — and presents a blended average that reflects the true cost landscape for each accelerator.
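The blending idea can be sketched as a weighted average across the three pricing tiers. The weights below are illustrative assumptions, not Signwl's actual methodology, and the rates are hypothetical:

```python
# Minimal sketch: blend spot, on-demand, and reserved hourly rates into
# one figure. Weights are assumed (30% spot, 50% on-demand, 20% reserved).
def blended_price(spot, on_demand, reserved, weights=(0.3, 0.5, 0.2)):
    """Weighted average of the three hourly rates."""
    w_spot, w_od, w_res = weights
    return spot * w_spot + on_demand * w_od + reserved * w_res

# Hypothetical H100 rates per hour across the three tiers.
print(round(blended_price(spot=3.90, on_demand=8.10, reserved=5.50), 2))  # → 6.32
```

A real blend would also weight by observed capacity in each tier, but the arithmetic is the same.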

Training-class GPUs like the NVIDIA H100, H200, and AMD MI300X command premium pricing due to their high compute throughput and memory bandwidth, which are essential for training large AI models. Inference-class GPUs like the L4, T4, and L40S offer lower price points optimised for serving models in production, where cost-per-query matters more than raw TFLOPS.
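The cost-per-query point can be made concrete with a rough calculation. Both throughput figures below are hypothetical, chosen only to illustrate why a cheaper inference GPU can win despite lower raw TFLOPS:

```python
# Serving cost per 1,000 queries: hourly rate divided by hourly throughput.
def cost_per_1k_queries(hourly_rate, queries_per_second):
    queries_per_hour = queries_per_second * 3600
    return hourly_rate / queries_per_hour * 1000

# Hypothetical throughputs for one model; rates match the table above.
l4   = cost_per_1k_queries(0.45, queries_per_second=40)    # ≈ $0.0031
h100 = cost_per_1k_queries(6.43, queries_per_second=250)   # ≈ $0.0071
```

If the L4 sustains acceptable latency at its lower throughput, it serves each query at less than half the H100's cost in this example.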

Regional pricing differences can be substantial — the same GPU may cost 30–50% more in certain regions due to supply constraints, power costs, and local demand. Signwl aggregates pricing data from major cloud providers across all available regions to give a complete picture of the global GPU cost landscape.
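A regional premium of this kind is simply the spread between the cheapest and most expensive region for a given GPU. The region names and rates below are assumed for illustration:

```python
# Hypothetical per-region hourly rates for one GPU type.
regional_rates = {"us-east": 6.10, "eu-west": 7.30, "ap-south": 8.70}

cheapest = min(regional_rates.values())
priciest = max(regional_rates.values())
premium_pct = (priciest - cheapest) / cheapest * 100
print(f"{premium_pct:.0f}%")  # → 43%
```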

Need detailed GPU pricing data?

Access historical trends, provider-level analysis, and custom data feeds.