NVIDIA · GPU
NVIDIA V100
Volta architecture · 16GB memory · 125 FP16 TFLOPS · 300W TDP
About the NVIDIA V100
The NVIDIA V100 is a Volta-generation GPU and the first mainstream GPU to ship with Tensor Cores, NVIDIA's dedicated matrix-multiply units. With 16GB of HBM2 memory and 125 FP16 TFLOPS, it powered much of the first wave of large-scale deep learning training and remains available today at budget price points.
Memory (VRAM)
16 GB
FP16 Performance
125 TFLOPS
Power (TDP)
300W
Architecture
Volta
Common Use Cases
Legacy AI training · Research workloads · Budget compute
Key Facts
- Manufacturer: NVIDIA
- Architecture: Volta
- Accelerator Type: GPU
- Primary Use: Training
- Memory (VRAM): 16 GB
- FP16 Performance: 125 TFLOPS
- Thermal Design Power: 300W
Related Accelerators
- NVIDIA GB300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA GB200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA H200: Hopper · 141GB · 990 TFLOPS · training
- NVIDIA H100: Hopper · 80GB · 990 TFLOPS · training
Investment Tool
Calculate NVIDIA V100 ROI
Estimate payback period, annual returns, and 3-year ROI with live Signwl pricing data.
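The payback and ROI figures the tool reports can be sketched with simple arithmetic: annual profit is rental revenue minus electricity cost, payback is the purchase price divided by that profit, and the 3-year ROI compares cumulative profit against the purchase price. The sketch below uses the V100's 300W TDP from the spec table above; every price in it (purchase price, hourly rental rate, utilization, electricity cost) is a placeholder assumption, not Signwl pricing data.

```python
def gpu_roi(purchase_price, hourly_rate, utilization, power_w,
            electricity_per_kwh, years=3):
    """Return (annual_profit, payback_years, roi_over_period).

    A simplified model: ignores hosting, networking, and resale value.
    """
    hours_per_year = 8760 * utilization               # hours actually rented out
    revenue = hourly_rate * hours_per_year            # annual rental income
    power_cost = (power_w / 1000) * hours_per_year * electricity_per_kwh
    annual_profit = revenue - power_cost
    payback_years = (purchase_price / annual_profit
                     if annual_profit > 0 else float("inf"))
    roi = (annual_profit * years - purchase_price) / purchase_price
    return annual_profit, payback_years, roi

profit, payback, roi3 = gpu_roi(
    purchase_price=1500.0,    # assumed used V100 16GB price (USD)
    hourly_rate=0.25,         # assumed rental rate (USD/hr)
    utilization=0.6,          # assumed 60% occupancy
    power_w=300,              # V100 TDP from the spec table
    electricity_per_kwh=0.12, # assumed energy cost (USD/kWh)
)
print(f"annual profit ~${profit:,.0f}, payback ~{payback:.1f} yr, "
      f"3-yr ROI ~{roi3:.0%}")
```

Under these placeholder numbers the card pays for itself in roughly 16 months; the live tool presumably substitutes current market rates for the assumed constants.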