Blended GPU compute costs were flat this week (0.0% WoW) across major cloud providers. The largest individual move was ALVEO_U30's 22.2% decline.
## Top Movers
| GPU | Blended Price | WoW Change | Class |
|---|---|---|---|
| ALVEO_U30 | $0.03/hr | ▼ 22.2% | General |
| INFERENTIA2 | $0.10/hr | ▲ 10.9% | Inference |
| H200 | $10.17/hr | ▼ 2.9% | Training |
| TRAINIUM | $1.17/hr | ▲ 2.6% | Training |
| INFERENTIA | $0.20/hr | ▲ 2.4% | Inference |
| V100 32GB | $1.59/hr | ▲ 1.9% | Training |
| V520 | $0.13/hr | ▲ 1.8% | General |
Blended pricing = average of spot, on-demand, and 1-year reserved rates across major cloud providers.
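The blend described above can be sketched as a simple function. This is a minimal illustration of the stated definition; the equal weighting across the three rate types is an assumption, since the report does not specify weights.

```python
def blended_price(spot: float, on_demand: float, reserved_1yr: float) -> float:
    """Blended $/hr rate: simple average of spot, on-demand, and
    1-year reserved rates (equal weights are an assumption)."""
    return (spot + on_demand + reserved_1yr) / 3

# Hypothetical rates for one GPU type, not figures from the report:
print(blended_price(2.00, 4.00, 3.00))  # -> 3.0
```

In practice a provider-weighted or availability-weighted average could shift these figures; the equal-weight version is just the most direct reading of the definition.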
## Training vs Inference
Training-class GPU pricing held steady this week (avg $4.24/hr, +0.2% WoW), while inference-class pricing rose (avg $0.51/hr, +1.6% WoW).
The training-to-inference price ratio stands at 8.3x, narrowing slightly from last week. The still-elevated spread suggests strong demand for training compute relative to inference, consistent with ongoing large-model training activity.
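The 8.3x figure follows directly from the two class averages quoted above:

```python
training_avg = 4.24   # $/hr, training-class blended average (from the report)
inference_avg = 0.51  # $/hr, inference-class blended average (from the report)

ratio = training_avg / inference_avg
print(f"{ratio:.1f}x")  # -> 8.3x
```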
## Regional Spotlight: North America
North America trades at a 9% discount to global averages this week, with 38 GPU types available across the region. The most expensive GPUs in the region are the GB200 ($12.47/hr), GB300 ($11.50/hr), and B300 ($10.31/hr). That discount makes North America one of the more cost-effective regions for GPU deployment this week.
For detailed pricing data across all North America sub-regions, see the full regional profile.
## Implications
For cloud buyers: Europe continues to offer the lowest average GPU pricing ($1.98/hr blended average). For workloads with regional flexibility, the gap between Europe and the Middle East is $2.07/hr, a 104% premium. Compare regional pricing →
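The premium arithmetic works out from the two figures quoted for buyers, a sketch using only the report's numbers:

```python
europe_avg = 1.98  # $/hr, Europe blended average (from the report)
gap = 2.07         # $/hr, gap between Europe and the Middle East (from the report)

middle_east_avg = europe_avg + gap        # implied Middle East average, ~$4.05/hr
premium_pct = gap / europe_avg * 100      # ~104.5%, quoted as 104% in the report
print(f"${middle_east_avg:.2f}/hr, {premium_pct:.0f}% premium")
```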
For semiconductor analysts: GPU pricing trends remain broadly stable this week. H100 (-0.2% WoW) and MI300X (-0.1% WoW) are tracking within normal ranges. Blackwell (B200) blended pricing at $7.21/hr (+0.8% WoW) provides an early read on next-generation adoption curves. View all GPU profiles →
For GPU investors: Stable pricing supports predictable returns for existing deployments. Model scenarios with the GPU ROI Calculator →