NVIDIA B200
Blackwell architecture · 192GB memory · 1800 FP16 TFLOPS · 1000W TDP
About the NVIDIA B200
The NVIDIA B200 is a Blackwell-generation GPU designed for AI training and inference at scale. With 192GB of HBM3e memory and 1800 FP16 TFLOPS, it delivers roughly 1.8× the FP16 throughput of the previous-generation H100 (990 TFLOPS) while maintaining compatibility with existing CUDA workflows.
Memory (VRAM): 192 GB
FP16 Performance: 1800 TFLOPS
Power (TDP): 1000W
Architecture: Blackwell
Common Use Cases
- AI model training
- Large-scale inference
- Generative AI
- Enterprise AI deployments
Key Facts
- Manufacturer: NVIDIA
- Architecture: Blackwell
- Accelerator Type: GPU
- Primary Use: Training
- Memory (VRAM): 192 GB
- FP16 Performance: 1800 TFLOPS
- Thermal Design Power: 1000W
Related Accelerators
- NVIDIA GB300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA GB200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA H200: Hopper · 141GB · 990 TFLOPS · training
- NVIDIA H100: Hopper · 80GB · 990 TFLOPS · training
- NVIDIA A100 80GB: Ampere · 80GB · 312 TFLOPS · training
Investment Tool
Calculate NVIDIA B200 ROI
Estimate payback period, annual returns, and 3-year ROI with live Signwl pricing data.
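The three metrics named above (payback period, annual returns, 3-year ROI) can be sketched with simple arithmetic. This is a minimal illustration of the standard formulas, not the calculator's actual method; the purchase price, hourly rental rate, utilization, and operating cost below are all assumed example numbers, not Signwl pricing data.

```python
def roi_metrics(purchase_price, hourly_rate, utilization=0.7,
                hourly_opex=0.0, years=3):
    """Estimate simple ROI metrics for a GPU bought outright and
    rented out by the hour.

    Returns (payback_years, annual_net_revenue, roi_over_period).
    """
    hours_per_year = 24 * 365
    # Net revenue per year: (rate - operating cost) scaled by how
    # often the card is actually rented.
    annual_net = (hourly_rate - hourly_opex) * hours_per_year * utilization
    # Years until cumulative net revenue covers the purchase price.
    payback_years = purchase_price / annual_net
    # Fractional return over the whole period (0.5 == +50%).
    roi = (annual_net * years - purchase_price) / purchase_price
    return payback_years, annual_net, roi

# Illustrative assumptions only: $35,000 purchase, $3.00/hr rental,
# 70% utilization, $0.40/hr power and hosting cost.
payback, annual, roi3 = roi_metrics(35_000, 3.00, 0.7, 0.40)
```

With these assumed inputs the card nets about $15,900/year and pays for itself in a little over two years; the real calculator would substitute live rental rates for the fixed `hourly_rate` used here.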