H100 $4.83/hr 3.4%
A100 40GB $0.74/hr 0.0%
L40S $0.56/hr 0.3%
A10G $0.32/hr 1.7%
L4 $0.17/hr 3.6%
T4 $0.16/hr 0.8%

Pricing & Indices

We track on-demand and spot pricing daily across GPU, CPU, RAM, and storage — so you can see exactly how AI infrastructure economics are shifting.

ACCI

GPU Pricing

Weighted average cost of AI/ML compute across 20+ GPU types (H100, H200, A100, L4) and pricing models. Tracks the fundamental cost of AI infrastructure.

  • 20+ GPU types tracked
  • On-demand & spot pricing
  • Performance-weighted indices
  • Geographic normalization
Subscriber access · Request Access
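A performance-weighted average price of the kind described above can be sketched as follows. The prices, performance weights, and function name here are illustrative assumptions, not the published ACCI methodology.

```python
# Rough sketch of a performance-weighted GPU price index.
# All figures and the weighting scheme are illustrative assumptions,
# not the actual ACCI methodology.

def weighted_index(prices, perf_weights):
    """Average $/hr where each GPU counts in proportion to its relative performance."""
    total = sum(perf_weights[g] for g in prices)
    return sum(prices[g] * perf_weights[g] for g in prices) / total

# Illustrative on-demand prices ($/hr) and assumed relative-performance weights
prices = {"H100": 4.83, "A100 40GB": 0.74, "L4": 0.17}
weights = {"H100": 1.00, "A100 40GB": 0.45, "L4": 0.12}

print(round(weighted_index(prices, weights), 2))
```

Weighting by performance keeps the index from drifting upward just because the tracked fleet shifts toward faster, pricier GPUs.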
CPCI

CPU Pricing

Comprehensive CPU instance pricing across cloud providers and regions. Tracks compute costs for AI preprocessing, data pipelines, and supporting infrastructure.

  • All major instance families
  • Cross-provider comparison
  • Regional price variations
  • Historical trend data
Subscriber access · Request Access
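Cross-provider comparison usually means normalizing differently sized instances to a common unit such as $/vCPU-hr. A minimal sketch, with made-up provider names, shapes, and prices:

```python
# Hypothetical sketch: normalize CPU instance prices to $/vCPU-hr so
# instances of different sizes and providers can be compared directly.
# Provider names, instance shapes, and prices are invented for illustration.

instances = [
    {"provider": "cloud-a", "vcpus": 8,  "price_hr": 0.34},
    {"provider": "cloud-b", "vcpus": 16, "price_hr": 0.61},
    {"provider": "cloud-c", "vcpus": 4,  "price_hr": 0.19},
]

def per_vcpu_rate(inst):
    """$/vCPU-hr for one instance shape."""
    return inst["price_hr"] / inst["vcpus"]

cheapest = min(instances, key=per_vcpu_rate)
print(cheapest["provider"], round(per_vcpu_rate(cheapest), 4))
```

The same normalization applies per region, which is what makes regional price variations comparable at all.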
RMCI

RAM Pricing

Per-instance memory costs across GPU-enabled and standard instances. Tracks the memory component of total AI infrastructure spend.

  • Per-GB memory pricing
  • GPU instance memory premiums
  • High-memory instance tracking
  • Regional cost analysis
Subscriber access · Request Access
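One common way to back out a per-GB memory price is to compare two instance shapes that differ only in RAM and attribute the price gap to memory. The shapes and prices below are assumptions, not RMCI data or methodology:

```python
# Illustrative sketch of an implied per-GB memory price: compare two
# hypothetical instance shapes that differ only in RAM and attribute
# the price gap to memory. Shapes and prices are assumptions.

standard = {"ram_gb": 32,  "price_hr": 0.40}
high_mem = {"ram_gb": 128, "price_hr": 0.88}

def implied_gb_rate(a, b):
    """$/GB-hr implied by the price gap between two otherwise-identical shapes."""
    return (b["price_hr"] - a["price_hr"]) / (b["ram_gb"] - a["ram_gb"])

print(implied_gb_rate(standard, high_mem))  # 0.48 / 96 GB = 0.005 $/GB-hr
```

Running the same comparison on GPU-enabled families versus standard ones is one way to surface the memory premium those instances carry.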
ASCI

Storage Pricing

Cost trends for persistent storage required for AI/ML workloads. Tracks SSD, HDD, and premium storage tiers across providers.

  • 4 storage tiers tracked
  • Cost per TB analysis
  • Performance tier comparison
  • Regional storage variations
Subscriber access · Request Access
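Cost-per-TB analysis across tiers reduces to normalizing each tier's $/GB-month quote to a common TB unit. A minimal sketch, with tier names and prices assumed for illustration (not tracked ASCI values):

```python
# Hypothetical sketch of a cost-per-TB comparison across storage tiers.
# Tier names and $/GB-month prices are illustrative assumptions.

tiers = {
    "premium-ssd": 0.17,   # $/GB-month, assumed
    "standard-ssd": 0.08,
    "hdd": 0.025,
    "archive": 0.002,
}

def per_tb_month(gb_price):
    """Convert $/GB-month to $/TB-month (decimal TB: 1 TB = 1000 GB)."""
    return gb_price * 1000

for name, gb_price in sorted(tiers.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${per_tb_month(gb_price):.0f}/TB-month")
```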

Need access to real-time index data? Contact us