NVIDIA P100
Pascal architecture · 16GB memory · 19 FP16 TFLOPS · 250W TDP
About the NVIDIA P100
The NVIDIA P100 is a Pascal-generation GPU and one of the earliest data centre GPUs widely used for deep learning. While it has been largely superseded by newer architectures, it remains available on some cloud platforms at very low cost.
Memory (VRAM): 16 GB
FP16 Performance: 19 TFLOPS
Power (TDP): 250 W
Architecture: Pascal
Common Use Cases
- Legacy workloads
- Educational use
- Basic ML experimentation
Key Facts
- Manufacturer: NVIDIA
- Architecture: Pascal
- Accelerator Type: GPU
- Primary Use: Training
- Memory (VRAM): 16 GB
- FP16 Performance: 19 TFLOPS
- Thermal Design Power: 250 W
Related Accelerators
- NVIDIA GB300: Blackwell · 288 GB · 2250 TFLOPS · training
- NVIDIA GB200: Blackwell · 192 GB · 1800 TFLOPS · training
- NVIDIA B200: Blackwell · 192 GB · 1800 TFLOPS · training
- NVIDIA B300: Blackwell · 288 GB · 2250 TFLOPS · training
- NVIDIA H200: Hopper · 141 GB · 990 TFLOPS · training
- NVIDIA H100: Hopper · 80 GB · 990 TFLOPS · training
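To put the list above in perspective, the listed FP16 figures can be compared against the P100's 19 TFLOPS. A small sketch using only the numbers shown on this page:

```python
# FP16 TFLOPS and VRAM taken from this page (P100 specs and the
# Related Accelerators list above).
accelerators = {
    "P100":  {"tflops": 19,   "vram_gb": 16},
    "H100":  {"tflops": 990,  "vram_gb": 80},
    "H200":  {"tflops": 990,  "vram_gb": 141},
    "B200":  {"tflops": 1800, "vram_gb": 192},
    "GB300": {"tflops": 2250, "vram_gb": 288},
}

baseline = accelerators["P100"]["tflops"]
for name, spec in accelerators.items():
    ratio = spec["tflops"] / baseline
    print(f"{name}: {ratio:.0f}x the P100's peak FP16 throughput, "
          f"{spec['vram_gb']} GB VRAM")
```

By this measure an H100 offers roughly 52x the P100's peak FP16 throughput, which is why the P100 is positioned here for legacy and educational workloads rather than production training.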
Investment Tool
Calculate NVIDIA P100 ROI
Estimate payback period, annual returns, and 3-year ROI with live Signwl pricing data.
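The payback and ROI figures described above reduce to straightforward arithmetic. A minimal sketch of that calculation; every input here (purchase price, rental rate, utilisation, electricity price) is a hypothetical placeholder, not a real quote, and live pricing data would replace them:

```python
# Hypothetical inputs -- placeholders only, real figures come from
# live pricing data.
purchase_price = 2000.0  # assumed used-market price for a P100, USD
hourly_rate = 0.20       # assumed rental revenue per GPU-hour, USD
utilization = 0.70       # assumed fraction of hours the GPU is rented
power_cost_kwh = 0.12    # assumed electricity price, USD/kWh
tdp_kw = 0.250           # P100 TDP from the spec table above, in kW

hours_per_year = 24 * 365
revenue = hourly_rate * utilization * hours_per_year
# Worst case: assume the card draws its full TDP all year.
power_cost = tdp_kw * power_cost_kwh * hours_per_year
annual_net = revenue - power_cost

payback_years = purchase_price / annual_net
roi_3yr = (annual_net * 3 - purchase_price) / purchase_price

print(f"payback: {payback_years:.1f} years, 3-year ROI: {roi_3yr:.0%}")
```

With these placeholder inputs the payback period works out to about two years; the point of the sketch is only the shape of the calculation, since the result is entirely driven by the assumed prices.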