Google TPU v2
TPU v2 architecture · 8GB memory · 45 FP16 TFLOPS
About the Google TPU v2
The Google Cloud TPU v2 is a second-generation Tensor Processing Unit designed to accelerate machine learning training and inference. TPUs use a fundamentally different architecture from GPUs, optimised specifically for the matrix operations common in neural networks.
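To make the 45 TFLOPS figure concrete, a back-of-envelope sketch: a dense matrix multiply costs roughly 2·m·k·n floating-point operations, so dividing by the peak rate gives a lower bound on runtime. This is a rough estimate, not a benchmark; real utilization is always below peak.

```python
# Back-of-envelope lower bound on matmul time at the TPU v2's
# quoted 45 TFLOPS FP16 peak. Real workloads achieve a fraction
# of peak, so treat the result as a best case.

PEAK_FLOPS = 45e12  # 45 TFLOPS FP16, from the spec above

def matmul_flops(m: int, k: int, n: int) -> int:
    """FLOPs for an (m, k) x (k, n) matrix multiply: one multiply
    and one add per inner-dimension step per output element."""
    return 2 * m * k * n

def min_seconds(m: int, k: int, n: int, utilization: float = 1.0) -> float:
    """Time lower bound at the given fraction of peak throughput."""
    return matmul_flops(m, k, n) / (PEAK_FLOPS * utilization)

# A 4096 x 4096 x 4096 matmul is ~137 GFLOPs: ~3 ms at peak.
flops = matmul_flops(4096, 4096, 4096)
print(f"{flops / 1e9:.1f} GFLOPs, >= {min_seconds(4096, 4096, 4096) * 1e3:.2f} ms at peak")
```

At a more realistic 50% utilization the same multiply takes at least twice as long, which is why measured throughput, not the peak number, should drive capacity planning.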
Memory (VRAM): 8 GB
FP16 Performance: 45 TFLOPS
Architecture: TPU v2
Common Use Cases
- ML training on Google Cloud
- TensorFlow workloads
- Research
Key Facts
- Manufacturer: Google
- Architecture: TPU v2
- Accelerator Type: TPU
- Primary Use: training
- Memory (VRAM): 8 GB
- FP16 Performance: 45 TFLOPS
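The 8 GB memory figure bounds how large a model fits on the device. A quick sketch, assuming FP16 weights (2 bytes per parameter) and ignoring activations, optimizer state, and framework overhead, so the result is an optimistic upper bound:

```python
# Rough check of whether a model's weights alone fit in the TPU v2's
# 8 GB of on-device memory, assuming FP16 storage (2 bytes/param).
# Activations and optimizer state are ignored, so this overestimates
# the largest model that actually trains on one device.

BYTES_PER_PARAM_FP16 = 2
MEMORY_BYTES = 8 * 1024**3  # 8 GiB

def fits_in_memory(n_params: int) -> bool:
    """True if the FP16 weights alone fit in device memory."""
    return n_params * BYTES_PER_PARAM_FP16 <= MEMORY_BYTES

max_params = MEMORY_BYTES // BYTES_PER_PARAM_FP16
print(f"Max FP16 parameters (weights only): {max_params / 1e9:.1f}B")
print(fits_in_memory(int(1.3e9)))  # a ~1.3B-parameter model fits
print(fits_in_memory(int(7e9)))    # a 7B-parameter model does not
```

In practice, training also needs gradients and optimizer state, which typically multiply the per-parameter footprint several times over, so the realistic ceiling is well below this weights-only figure.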
Related Accelerators
- NVIDIA GB300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA GB200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B200: Blackwell · 192GB · 1800 TFLOPS · training
- NVIDIA B300: Blackwell · 288GB · 2250 TFLOPS · training
- NVIDIA H200: Hopper · 141GB · 990 TFLOPS · training
- NVIDIA H100: Hopper · 80GB · 990 TFLOPS · training