NVIDIA A100 40GB
Ampere architecture · 40GB memory · 312 FP16 TFLOPS · 400W TDP
About the NVIDIA A100 40GB
The NVIDIA A100 40GB is the standard Ampere-generation data centre GPU with 40GB of HBM2e memory. It was the workhorse GPU for AI training before the H100 and remains cost-effective for many training and inference workloads.
Memory (VRAM)
40 GB
FP16 Performance
312 TFLOPS
Power (TDP)
400W
Architecture
Ampere
Common Use Cases
- AI training
- Model fine-tuning
- Batch inference
- Scientific simulation
Key Facts
- Manufacturer: NVIDIA
- Architecture: Ampere
- Accelerator Type: GPU
- Primary Use: Training
- Memory (VRAM): 40 GB
- FP16 Performance: 312 TFLOPS
- Thermal Design Power (TDP): 400W
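The headline specs above can be combined into simple derived metrics, such as peak FP16 throughput per watt and per gigabyte of memory. A minimal sketch (the metric names are illustrative, not official NVIDIA benchmarks):

```python
# Derived efficiency figures from the Key Facts above.
# Inputs are the published A100 40GB specs; the ratios below are
# rough comparison metrics, not measured benchmark results.

fp16_tflops = 312      # peak FP16 Tensor Core throughput, TFLOPS
tdp_watts = 400        # thermal design power, W
vram_gb = 40           # HBM2e capacity, GB

tflops_per_watt = fp16_tflops / tdp_watts   # 0.78 TFLOPS/W
tflops_per_gb = fp16_tflops / vram_gb       # 7.8 TFLOPS/GB

print(f"{tflops_per_watt:.2f} TFLOPS/W")
print(f"{tflops_per_gb:.1f} TFLOPS/GB")
```

Ratios like these are useful when comparing the A100 against the related accelerators listed below, since newer parts trade higher absolute throughput for higher power draw.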
Related Accelerators
- NVIDIA GB300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA GB200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA H200: Hopper · 141GB · 990 TFLOPS · training
- NVIDIA H100: Hopper · 80GB · 990 TFLOPS · training
Investment Tool
Calculate NVIDIA A100 40GB ROI
Estimate payback period, annual returns, and 3-year ROI with live Signwl pricing data.
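The payback calculation behind a tool like this reduces to a few lines of arithmetic. A back-of-the-envelope sketch, where every price is a hypothetical placeholder rather than live Signwl data:

```python
# Hypothetical single-card ROI sketch. All dollar figures below are
# assumed placeholders for illustration, NOT live Signwl pricing.

hardware_cost = 10_000.0   # assumed upfront cost per A100 40GB, USD
hourly_rental = 1.20       # assumed achievable rental rate, USD/hr
utilization = 0.70         # assumed fraction of hours actually rented
opex_per_hour = 0.10       # assumed power + hosting cost, USD/hr

hours_per_year = 24 * 365
net_per_hour = hourly_rental * utilization - opex_per_hour
annual_return = net_per_hour * hours_per_year

payback_years = hardware_cost / annual_return
roi_3yr = (annual_return * 3 - hardware_cost) / hardware_cost

print(f"Annual net return: ${annual_return:,.0f}")
print(f"Payback period:    {payback_years:.1f} years")
print(f"3-year ROI:        {roi_3yr:.0%}")
```

Under these assumed inputs the card pays for itself in well under two years; the real answer depends entirely on the rental rate and utilization you can sustain, which is what the live pricing data is for.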
Track NVIDIA A100 40GB pricing over time
Get access to historical pricing data, regional analysis, and custom alerts.