Cloud GPU pricing (7-day change):

H100       $6.39/hr    1.2% (7d)
A100 80GB  $2.45/hr    0.5% (7d)
H200       $10.29/hr   0.8% (7d)
L40S       $1.28/hr    0.3% (7d)
T4         $0.24/hr    0.6% (7d)
L4         $0.45/hr    1.1% (7d)

NVIDIA B200

Blackwell architecture · 192GB memory · 1800 FP16 TFLOPS · 1000W TDP

Cloud Pricing Today

About the NVIDIA B200

The NVIDIA B200 is a Blackwell-generation GPU designed for AI training and inference at scale. With 192GB of HBM3e memory, it offers a significant performance uplift over the previous-generation H100 while maintaining compatibility with existing CUDA workflows.

Memory (VRAM)
192 GB
FP16 Performance
1800 TFLOPS
Power (TDP)
1000W
Architecture
Blackwell
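The listed figures make it easy to derive a couple of common efficiency metrics. A minimal sketch using only the specs above (FP16 throughput, TDP, and memory capacity):

```python
# Efficiency figures derived from the B200 specs listed above.
fp16_tflops = 1800   # FP16 throughput (TFLOPS)
tdp_watts = 1000     # thermal design power (W)
memory_gb = 192      # HBM3e capacity (GB)

tflops_per_watt = fp16_tflops / tdp_watts
tflops_per_gb = fp16_tflops / memory_gb

print(f"FP16 efficiency: {tflops_per_watt:.1f} TFLOPS/W")
print(f"Compute per GB of memory: {tflops_per_gb:.2f} TFLOPS/GB")
```

At 1.8 TFLOPS/W, this is a simple way to compare the B200 against other accelerators on power efficiency, independent of rental price.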

Common Use Cases

AI model training
Large-scale inference
Generative AI
Enterprise AI deployments

Key Facts

Manufacturer
NVIDIA
Architecture
Blackwell
Accelerator Type
GPU
Primary Use
Training
Memory (VRAM)
192 GB
FP16 Performance
1800 TFLOPS
Thermal Design Power
1000W

Related Accelerators

Investment Tool

Calculate NVIDIA B200 ROI

Estimate payback period, annual returns, and 3-year ROI with live Signwl pricing data.
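The calculation behind such a tool is straightforward. A minimal sketch, where the purchase price, rental rate, utilization, and operating cost are illustrative assumptions (not Signwl data or official B200 figures):

```python
# Simple GPU ROI sketch. All inputs below are assumed values for
# illustration only; substitute live pricing data where available.
purchase_price = 35_000.0  # assumed B200 acquisition cost (USD)
hourly_rate = 5.50         # assumed rental price (USD/hr)
utilization = 0.70         # fraction of hours actually rented out
opex_per_hour = 0.50       # assumed power + hosting cost (USD/hr)

hours_per_year = 24 * 365
annual_revenue = hourly_rate * utilization * hours_per_year
annual_costs = opex_per_hour * hours_per_year
annual_net = annual_revenue - annual_costs

payback_years = purchase_price / annual_net
roi_3yr = (annual_net * 3 - purchase_price) / purchase_price

print(f"Annual net return: ${annual_net:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
print(f"3-year ROI: {roi_3yr:.0%}")
```

The payback period is simply acquisition cost divided by annual net return; 3-year ROI is the cumulative net return over three years relative to the purchase price.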

Track NVIDIA B200 pricing over time

Get access to historical pricing data, regional analysis, and custom alerts.