GPU         Price       %
H100        $4.83/hr    3.4%
A100 40GB   $0.74/hr    0.0%
L40S        $0.56/hr    0.3%
A10G        $0.32/hr    1.7%
L4          $0.17/hr    3.6%
T4          $0.16/hr    0.8%

AI Compute Availability Map

Availability legend: Low · Medium · High

How We Measure GPU Compute Availability

This map visualises real-time GPU compute availability across 15 global cloud regions by analysing spot pricing data from AWS, Microsoft Azure, and Google Cloud Platform. Spot price discounts — the percentage difference between on-demand and spot pricing — serve as a proxy for spare compute capacity. Higher discounts typically indicate greater availability of GPU instances in that region.
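The discount calculation described above can be sketched as follows. This is a minimal illustration, not the pipeline's actual code; the function name and example prices are assumptions.

```python
def spot_discount(on_demand_price: float, spot_price: float) -> float:
    """Spot discount: the percentage difference between on-demand
    and spot pricing, used as a proxy for spare capacity.

    Illustrative helper; field names and prices are not from the
    actual pipeline.
    """
    if on_demand_price <= 0:
        raise ValueError("on-demand price must be positive")
    return (on_demand_price - spot_price) / on_demand_price * 100

# Hypothetical example: $4.10/hr on-demand vs $1.23/hr spot
# yields a discount of about 70%, near the High-availability boundary.
print(round(spot_discount(4.10, 1.23), 1))
```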

Data Collection

Pricing data is collected daily from AWS, Microsoft Azure, and Google Cloud Platform — approximately 30,000 records across 30 fields, covering 17 GPU types from NVIDIA and AMD. Only instances with both spot and on-demand pricing are included. GPUs available on a single pricing tier (e.g. spot-only models like the NVIDIA B200) are excluded.
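The inclusion rule above — keep only instances with both spot and on-demand pricing — might look like the sketch below. The record schema and the on-demand price shown are illustrative assumptions, not the real data.

```python
# Illustrative records; field names and the on-demand price are assumed,
# not taken from the actual collection schema.
records = [
    {"gpu": "H100", "region": "us-east-1", "on_demand": 12.29, "spot": 4.83},
    {"gpu": "B200", "region": "us-east-1", "on_demand": None, "spot": 9.50},
]

def has_both_tiers(record: dict) -> bool:
    """True when the instance is priced on both tiers.

    Spot-only models (e.g. the B200 row above) are filtered out,
    since no discount can be computed for them.
    """
    return record.get("on_demand") is not None and record.get("spot") is not None

usable = [r for r in records if has_both_tiers(r)]
```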

Availability Classification

Each region and GPU combination is classified into one of three availability tiers based on the computed spot discount:

  • High availability — spot discount greater than 70%, indicating significant spare capacity
  • Medium availability — spot discount between 30% and 70%
  • Low availability — spot discount below 30%, suggesting constrained supply or high demand
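The three tiers above translate directly into a threshold function. This is a sketch: the thresholds are from the text, but the handling of exact boundary values (e.g. a discount of exactly 70%) is an assumption.

```python
def classify_availability(spot_discount_pct: float) -> str:
    """Map a spot discount (in percent) to an availability tier.

    Thresholds follow the tiers described above; treating exactly
    70% and exactly 30% as Medium is an assumption.
    """
    if spot_discount_pct > 70:
        return "High"    # significant spare capacity
    if spot_discount_pct >= 30:
        return "Medium"
    return "Low"         # constrained supply or high demand
```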

GPU Groups

GPUs are categorised into functional groups to help users quickly identify availability for their workload type:

  • Training (Current) — H100, H200, A100 (40GB & 80GB), L40S
  • Training (Legacy) — V100, P100, MI25
  • Inference (Current) — L4, A10G, A10, T4G, RTX PRO 6000, V520, V710
  • Inference (Legacy) — T4, P4, M60

Data sourced from public cloud provider pricing APIs. Last updated: March 2026.