GPU cloud pricing (7-day change):

H100       $6.39/hr    1.2%
A100 80GB  $2.45/hr    0.5%
H200       $10.29/hr   0.8%
L40S       $1.28/hr    0.3%
T4         $0.24/hr    0.6%
L4         $0.45/hr    1.1%

AMD MI300X

CDNA 3 architecture · 192GB memory · 1300 FP16 TFLOPS · 750W TDP

About the AMD MI300X

The AMD Instinct MI300X is AMD's most competitive entry in the data centre AI accelerator market. Based on the CDNA 3 architecture, it offers 1,300 FP16 TFLOPS and 192GB of HBM3 memory — significantly more memory than the NVIDIA H100's 80GB.

The MI300X's memory advantage is its strongest differentiator. With 192GB of HBM3 delivering 5.3 TB/s of bandwidth, it can serve larger language models on fewer GPUs than the H100, potentially reducing total cost of ownership for inference workloads. This capacity matches NVIDIA's newer B200 (192GB of HBM3e), though the dual-GPU GB200 superchip carries more.
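The memory claim can be sketched with quick arithmetic: FP16 weights take roughly 2 bytes per parameter, so a 70B-parameter model needs about 140GB before KV cache and activations. A minimal estimate (the 20% serving overhead is an assumption for illustration, not a measured figure):

```python
import math

def gpus_needed(params_billion: float, vram_gb: int,
                bytes_per_param: int = 2, overhead: float = 0.2) -> int:
    """Rough count of GPUs needed to hold a model's weights in VRAM.

    bytes_per_param=2 corresponds to FP16/BF16 weights; `overhead` is an
    assumed fraction of VRAM reserved for KV cache, activations, and runtime.
    """
    weights_gb = params_billion * bytes_per_param
    usable_gb = vram_gb * (1 - overhead)
    return math.ceil(weights_gb / usable_gb)

# A 70B-parameter model in FP16 (~140 GB of weights):
print(gpus_needed(70, 192))  # MI300X (192 GB) -> 1
print(gpus_needed(70, 80))   # H100 (80 GB)    -> 3
```

By this estimate, a model that fits on a single MI300X would need a multi-GPU H100 node, which is the basis of the total-cost-of-ownership argument above.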

The MI300X uses AMD's ROCm software stack, which has made significant progress in supporting major AI frameworks including PyTorch and JAX. However, the CUDA ecosystem remains more mature, and some workloads may require additional optimisation effort to run efficiently on AMD hardware.

In the cloud market, the MI300X is increasingly available as providers diversify their GPU offerings. It is particularly attractive for organisations looking to reduce dependency on a single vendor or seeking better price/performance for memory-intensive workloads.

Memory (VRAM): 192 GB
FP16 Performance: 1,300 TFLOPS
Power (TDP): 750W
Architecture: CDNA 3

Common Use Cases

AI training
Large language model inference
High-memory workloads
HPC

Key Facts

Manufacturer: AMD
Architecture: CDNA 3
Accelerator Type: GPU
Primary Use: Training
Memory (VRAM): 192 GB
FP16 Performance: 1,300 TFLOPS
Thermal Design Power: 750W

Frequently Asked Questions

How much does an MI300X cost per hour?

Blended cloud pricing for the AMD MI300X typically ranges from $2.50 to $5.00 per hour, often competitive with or lower than NVIDIA H100 rates for comparable performance.
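Hourly rates translate to monthly figures as follows. A minimal sketch, assuming a $4.00/hr midpoint of the range above and approximating a month as 730 hours:

```python
HOURS_PER_MONTH = 730  # average hours in a month (8760 / 12)

def monthly_cost(hourly_rate: float, gpu_count: int = 1) -> float:
    """Cost in USD of running `gpu_count` GPUs continuously for a month."""
    return hourly_rate * gpu_count * HOURS_PER_MONTH

# Assumed $4.00/hr midpoint of the $2.50-$5.00 range quoted above:
print(round(monthly_cost(4.00)))     # one MI300X -> 2920
print(round(monthly_cost(6.39, 2)))  # two H100s at $6.39/hr -> 9329
```

The two-H100 comparison reflects the memory point made earlier: a model that fits on one MI300X may need more than one 80GB H100.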

Can I run PyTorch on the MI300X?

Yes. The MI300X supports PyTorch through AMD's ROCm software stack. Most PyTorch models can run on MI300X, though some CUDA-specific operations may require adaptation. Major frameworks have increasingly strong ROCm support.

MI300X vs H100 — which is better?

The MI300X offers 31% more FP16 TFLOPS (1,300 vs 990) and 140% more memory (192GB vs 80GB). The H100 has a more mature CUDA ecosystem and wider availability. For memory-bound workloads, the MI300X can offer better total cost of ownership.
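The percentage figures above follow directly from the raw specs; a quick check:

```python
def pct_more(a: float, b: float) -> float:
    """Percentage by which `a` exceeds `b`."""
    return (a - b) / b * 100

print(round(pct_more(1300, 990)))  # FP16 TFLOPS, MI300X vs H100 -> 31
print(round(pct_more(192, 80)))    # memory in GB -> 140
```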
