AI Compute Availability Map
How We Measure GPU Compute Availability
This map visualises real-time GPU compute availability across 15 global cloud regions by analysing spot pricing data from AWS, Microsoft Azure, and Google Cloud Platform. Spot price discounts — the percentage difference between on-demand and spot pricing — serve as a proxy for spare compute capacity. Higher discounts typically indicate greater availability of GPU instances in that region.
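The discount calculation itself is simple. As a minimal sketch (the function and field names here are illustrative, not the map's actual implementation):

```python
def spot_discount(on_demand_price: float, spot_price: float) -> float:
    """Percentage by which the spot price undercuts the on-demand price.

    Both arguments are hourly prices in the same currency.
    """
    if on_demand_price <= 0:
        raise ValueError("on-demand price must be positive")
    return (on_demand_price - spot_price) / on_demand_price * 100
```

For example, an instance priced at $10.00/hr on-demand and $2.50/hr spot has a 75% spot discount, which under this model signals ample spare capacity in that region.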
Data Collection
Pricing data is collected daily from AWS, Microsoft Azure, and Google Cloud Platform — approximately 30,000 records across 30 fields, covering 17 GPU types from NVIDIA and AMD. Only instances with both spot and on-demand pricing are included. GPUs available on a single pricing tier (e.g. spot-only models like the NVIDIA B200) are excluded.
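The inclusion rule can be sketched as a simple filter, assuming each pricing record carries separate spot and on-demand fields (the record shape and field names below are hypothetical):

```python
def has_both_tiers(record: dict) -> bool:
    """Keep only records priced on both tiers; spot-only or
    on-demand-only GPUs cannot yield a discount and are dropped."""
    return (record.get("spot_price") is not None
            and record.get("on_demand_price") is not None)

records = [
    {"gpu": "H100", "spot_price": 2.5, "on_demand_price": 10.0},
    {"gpu": "B200", "spot_price": 3.0, "on_demand_price": None},  # spot-only: excluded
]
usable = [r for r in records if has_both_tiers(r)]
```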
Availability Classification
Each combination of region and GPU is classified into one of three availability tiers based on its computed spot discount:
- High availability — spot discount greater than 70%, indicating significant spare capacity
- Medium availability — spot discount between 30% and 70%
- Low availability — spot discount below 30%, suggesting constrained supply or high demand
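The thresholds above map directly to a classification function. In this sketch, boundary values (exactly 30% or 70%) are assumed to fall in the medium tier, consistent with the strict "greater than 70%" and "below 30%" bounds:

```python
def availability_tier(discount_pct: float) -> str:
    """Map a spot discount (in percent) to an availability tier."""
    if discount_pct > 70:
        return "high"    # significant spare capacity
    if discount_pct >= 30:
        return "medium"
    return "low"         # constrained supply or high demand
```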
GPU Groups
GPUs are categorised into functional groups to help users quickly identify availability for their workload type:
- Training (Current) — H100, H200, A100 (40GB & 80GB), L40S
- Training (Legacy) — V100, P100, MI25
- Inference (Current) — L4, A10G, A10, T4G, RTX PRO 6000, V520, V710
- Inference (Legacy) — T4, P4, M60
Data sourced from public cloud provider pricing APIs. Last updated: March 2026.