SGPI: Signwl Global GPU Price Index
The observation-weighted average cost of cloud GPU compute, updated daily. Blended across spot, on-demand, and reserved pricing from major cloud providers.
Methodology
Scope: The SGPI tracks GPU-only pricing across major cloud providers. It excludes TPUs, FPGAs, custom silicon (Trainium, Inferentia), and other non-GPU accelerators. Currently 25 GPU types are tracked.
Pricing tiers: Four tiers are blended: spot (preemptible), on-demand, 1-year reserved, and 3-year reserved. This gives a comprehensive view of actual market pricing rather than relying on a single tier.
Weighting: Each pricing observation is weighted by the number of individual pricing records contributing to that observation. GPUs with more pricing data points (indicating deeper market liquidity) receive more weight in the index. This prevents niche GPUs with limited availability from disproportionately influencing the index.
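The weighting scheme above can be sketched as a simple weighted mean. This is an illustrative reconstruction, not the actual SGPI pipeline; the data shapes and names (`price`, `n_records`) are assumptions.

```python
# Illustrative sketch of observation-weighted averaging, assuming each
# observation is (average price in $/hr, number of pricing records behind it).
# These names and shapes are hypothetical, not the real SGPI schema.

def observation_weighted_index(observations):
    """Return the index value: prices weighted by record count."""
    total_records = sum(n_records for _, n_records in observations)
    if total_records == 0:
        raise ValueError("no pricing records to weight")
    weighted_sum = sum(price * n_records for price, n_records in observations)
    return weighted_sum / total_records

# A GPU backed by 400 records dominates one backed by only 5,
# so the thinly traded GPU barely moves the index:
obs = [(6.40, 400), (2.10, 5)]
index = observation_weighted_index(obs)  # close to 6.40
```

Because weight comes from record counts rather than equal per-GPU weighting, a niche accelerator with a handful of listings cannot drag the headline number.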
Sub-indices: SGPI-T tracks training-class GPUs (H100, H200, A100, MI300X, B200, etc.) — hardware optimised for AI model training and HPC workloads. SGPI-I tracks inference-class GPUs (T4, L4, L40S, A10G, etc.) — hardware optimised for serving models in production. The spread between SGPI-T and SGPI-I indicates the relative cost premium for training compute.
Update frequency: Daily. Data is collected from major cloud providers and the index is recalculated each morning at 06:00 UTC.
Frequently Asked Questions
What is the SGPI?
The Signwl Global GPU Price Index (SGPI) is a daily observation-weighted average of cloud GPU compute pricing across all major providers. It provides a single headline number for the cost of GPU compute, similar to how commodity indices track the price of oil or gold.
How is it different from individual GPU pricing?
Individual GPU prices (e.g., an H100 at $6.43/hr) show the cost of a specific accelerator. The SGPI aggregates all tracked GPUs into a single market-wide indicator, weighted by market activity. This makes it useful for tracking the overall direction of GPU compute costs rather than individual GPU movements.
What does the training/inference spread tell us?
The SGPI-T to SGPI-I ratio shows the cost premium for training compute over inference compute. A widening spread suggests training demand is outpacing inference demand, while a narrowing spread suggests training costs are falling (often because new hardware is entering the market) or inference demand is growing.
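The spread logic described above can be expressed in a few lines. This is a hedged sketch of the interpretation, not an official SGPI calculation; the function and variable names are illustrative.

```python
# Hypothetical sketch of the training/inference spread described above.
# sgpi_t and sgpi_i are the two sub-index values; names are illustrative.

def training_premium(sgpi_t: float, sgpi_i: float) -> float:
    """Ratio > 1.0 means training-class compute trades at a premium."""
    return sgpi_t / sgpi_i

def spread_direction(prev_ratio: float, curr_ratio: float) -> str:
    """Classify the day-over-day movement of the spread."""
    if curr_ratio > prev_ratio:
        return "widening"   # training demand outpacing inference demand
    if curr_ratio < prev_ratio:
        return "narrowing"  # training costs falling or inference demand rising
    return "flat"

# e.g. training index at 4.80 vs inference index at 1.20 is a 4x premium:
ratio = training_premium(4.80, 1.20)
```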
Explore the GPU Compute Market
Live pricing for 39 accelerators, GPU comparisons, regional analysis, and investment tools.