GPU 101
Everything you need to know about GPUs, AI accelerators, and cloud compute pricing. From fundamentals to advanced topics, with live market data from Signwl.
Essential concepts — what GPUs are, how they work, and why they matter for AI
A GPU (Graphics Processing Unit) is a specialised processor designed for parallel computation. Originally built for rendering graphics, GPUs are now t...
CPUs (Central Processing Units) excel at sequential, complex tasks with a few powerful cores. GPUs (Graphics Processing Units) excel at parallel tasks...
VRAM (Video RAM) is the dedicated memory on a GPU, used to store model weights, activations, and data during AI training and inference. VRAM capacity ...
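Because VRAM capacity gates which models fit on a card, a back-of-envelope estimate is useful. The sketch below is a rough rule of thumb, not a spec: the 1.2x overhead factor for activations and KV cache is an assumption, and real usage depends on batch size, sequence length, and framework.

```python
def estimate_vram_gb(params_billion, bytes_per_param=2, overhead=1.2):
    """Rough VRAM needed to serve a model's weights for inference.

    bytes_per_param: 2 for FP16/BF16, 4 for FP32, 1 for INT8.
    overhead: assumed multiplier for activations and KV cache.
    """
    return params_billion * bytes_per_param * overhead

# A 7B-parameter model in FP16 needs roughly:
print(round(estimate_vram_gb(7), 1))  # ~16.8 GB
```

By this estimate a 7B model in FP16 fits on a 24 GB card, while a 70B model does not fit on any single consumer GPU without quantisation.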
TFLOPS (Tera Floating-Point Operations Per Second) measure a GPU's computational throughput — how many trillions of mathematical operations it can per...
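Peak TFLOPS figures come from a simple product of core count, clock speed, and operations per cycle. A minimal sketch, assuming the common case of one fused multiply-add (2 ops) per core per cycle; the A100 numbers in the example (6912 CUDA cores, ~1.41 GHz boost) are published specs, and the result lands near NVIDIA's quoted ~19.5 FP32 TFLOPS:

```python
def peak_tflops(cores, boost_clock_ghz, ops_per_core_per_cycle=2):
    """Theoretical peak throughput in TFLOPS.

    cores x GHz gives billions of cycles' worth of core-operations per
    second; x2 counts a fused multiply-add as two ops; /1000 converts
    GFLOPS to TFLOPS. Real workloads achieve only a fraction of peak.
    """
    return cores * boost_clock_ghz * ops_per_core_per_cycle / 1000

print(round(peak_tflops(6912, 1.41), 1))  # ~19.5 FP32 TFLOPS (A100)
```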
A TPU (Tensor Processing Unit) is a custom AI accelerator designed by Google, purpose-built for machine learning workloads. Unlike GPUs, which are gen...
Practical guides to cloud GPU pricing, workload types, and choosing the right accelerator
Cloud GPUs are available in three pricing tiers: on-demand (pay full price, guaranteed availability), spot/preemptible (60-90% cheaper but can be inte...
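The trade-off between the tiers comes down to arithmetic: spot's discount versus the extra hours lost to interruptions. A rough sketch, where the 10% interruption overhead for checkpointing and restarts is purely an assumed figure:

```python
def spot_cost(on_demand_rate, hours, discount, interruption_overhead=1.1):
    """Estimated spot bill for a job.

    discount: spot discount vs on-demand (0.6-0.9 is typical).
    interruption_overhead: assumed extra runtime from checkpoint/restart
    after preemptions; fault-intolerant jobs lose far more.
    """
    return on_demand_rate * (1 - discount) * hours * interruption_overhead

# Hypothetical $2.00/hr GPU for a 100-hour job at a 70% spot discount:
print(spot_cost(2.00, 100, 0.70))  # $66.00 vs $200.00 on-demand
```

Even with the restart penalty, spot wins decisively for checkpointable training jobs; latency-sensitive inference usually stays on-demand or reserved.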
Training is the process of teaching an AI model by exposing it to data — it requires maximum GPU compute and memory. Inference is running a trained mo...
GPU utilisation measures how much of a GPU's capacity is actively being used. In cloud markets, Signwl uses spot pricing discounts as a proxy for regi...
Choosing the right cloud GPU depends on three factors: workload type (training vs inference), model size (which determines minimum VRAM), and budget. ...
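The first two factors, workload type and model size, can be turned into a minimum-VRAM rule of thumb. The multipliers below are illustrative assumptions, not vendor figures: inference needs the weights plus modest overhead, while full training with an Adam-style optimiser also stores gradients and optimiser states, commonly approximated as 4x the weight memory.

```python
def min_vram_gb(params_billion, workload="inference", bytes_per_param=2):
    """Rule-of-thumb minimum VRAM by workload (assumed multipliers).

    inference: weights x ~1.2 (activations, KV cache).
    training:  weights x ~4  (gradients + two optimiser states).
    """
    weights_gb = params_billion * bytes_per_param
    factor = 4.0 if workload == "training" else 1.2
    return weights_gb * factor

# 7B model in FP16: ~16.8 GB to serve, ~56 GB to train
print(min_vram_gb(7), min_vram_gb(7, workload="training"))
```

In practice this means a model that serves comfortably on one 24 GB card may need an 80 GB card, or several GPUs, to fine-tune.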
Deep dives into GPU generations, CUDA, multi-GPU training, and power efficiency
NVIDIA has released five major data centre GPU generations: Volta (2017), Turing (2018), Ampere (2020), Hopper (2022), and Blackwell (2024). Each gene...
CUDA (Compute Unified Device Architecture) is NVIDIA's parallel computing platform and programming model that enables software to run on NVIDIA GPUs. ...
Multi-GPU training distributes AI model training across multiple GPUs to handle models too large for a single GPU or to reduce training time. The two ...
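Data parallelism, the most common multi-GPU strategy, can be shown in miniature: each replica computes gradients on its own shard of the batch, then the gradients are averaged (an all-reduce) so every replica applies the identical update. A toy sketch with plain lists standing in for GPU tensors:

```python
def allreduce_mean(per_gpu_grads):
    """Average gradients element-wise across replicas.

    Mimics the all-reduce step of data-parallel training: after this,
    every 'GPU' holds the same averaged gradient and stays in sync.
    """
    n = len(per_gpu_grads)
    width = len(per_gpu_grads[0])
    return [sum(g[i] for g in per_gpu_grads) / n for i in range(width)]

# Two replicas, each with a 3-element gradient:
print(allreduce_mean([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]))
# [2.0, 3.0, 4.0]
```

Real frameworks perform this averaging over high-bandwidth interconnects, which is why inter-GPU bandwidth matters as much as raw compute for large training runs.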
TDP (Thermal Design Power) measures the maximum power a GPU draws under load, expressed in watts. GPU power consumption directly impacts data centre o...
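TDP translates directly into operating cost. A minimal sketch, assuming a flat electricity rate (the $0.12/kWh figure is an assumption; actual data-centre rates vary widely, and cooling roughly doubles the true energy bill):

```python
def gpu_energy_cost(tdp_watts, hours, price_per_kwh=0.12):
    """Electricity cost of running a GPU at full TDP.

    watts/1000 -> kW, x hours -> kWh, x rate -> cost. Ignores cooling
    and facility overhead (PUE), which can add 20-100% on top.
    """
    return tdp_watts / 1000 * hours * price_per_kwh

# A 700 W GPU (the H100 SXM's TDP) running flat-out for a 30-day month:
print(round(gpu_energy_cost(700, 720), 2))  # $60.48
```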
Browse All Accelerators
Live pricing for 39 GPU and AI accelerator types.
GPU Comparisons
Side-by-side comparisons of 25 accelerator pairs.
Need expert guidance?
Get tailored recommendations for your AI infrastructure needs.
Contact Us