Why is Redis so fast?
There are many design decisions that make Redis blisteringly quick. We’ll focus on two that carry most of the weight.
- First is Redis's in-memory design plus single-threaded event loop. Keeping data in RAM and processing commands in one core loop avoids locks and random disk I/O.
- Second is its purpose-built data structures and lean wire protocol. Most commands hit O(1) or O(log N) paths with tiny per-operation overhead.
What happens when you read from Redis👇
Step 1: A client sends a command using RESP (a simple, compact text/binary protocol). Minimal parsing and small payloads reduce CPU and network overhead. No big JSON/XML blobs.
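To make RESP concrete, here is a minimal sketch of how a client might encode a command as a RESP array of bulk strings (illustrative only, not the actual redis-py implementation):

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]  # array header: element count
    for p in parts:
        data = p.encode()
        # each element is length-prefixed: $<len>\r\n<bytes>\r\n
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

print(encode_resp("GET", "user:42"))
```

Because every element is length-prefixed, the server never has to scan for delimiters or run a heavyweight parser; it reads exactly the bytes it was told to expect.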
Step 2: Redis’s event loop accepts the socket and queues work. Single-threaded command execution means no lock contention or context-switch thrash. To use more CPUs, you run multiple Redis instances (e.g., in a cluster); since Redis 6, optional I/O threads can offload socket reads and writes, but command execution itself remains single-threaded.
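A toy model of that single-threaded loop: commands from all clients land in one queue and run to completion one at a time, so the keyspace needs no locks. (A sketch of the idea only; real Redis multiplexes sockets with epoll/kqueue rather than using an in-process queue.)

```python
from collections import deque

keyspace = {}  # the single shared keyspace, touched by exactly one thread
queue = deque([("SET", "a", "1"), ("SET", "a", "2"), ("GET", "a")])

replies = []
while queue:
    cmd, key, *args = queue.popleft()  # one command at a time, to completion
    if cmd == "SET":
        keyspace[key] = args[0]
        replies.append("OK")
    elif cmd == "GET":
        replies.append(keyspace.get(key))

print(replies)
```

Each command sees a fully consistent keyspace with zero locking, because nothing else can run mid-command.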
Step 3: The command is parsed and routed via a hash table lookup to the target keyspace entry. Lookups are O(1) on average thanks to efficient dictionaries and cache-friendly memory layouts.
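The dictionary lookup at the heart of step 3 can be sketched as: hash the key, jump to a bucket, walk a short chain. This toy hash table (names and sizes are illustrative, not Redis internals) shows why the average lookup is O(1):

```python
class TinyDict:
    def __init__(self, nbuckets: int = 8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # hash once, jump straight to the right bucket
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        b = self._bucket(key)
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)  # overwrite existing entry
                return
        b.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):  # chains stay short on average
            if k == key:
                return v
        return None

d = TinyDict()
d.set("user:42", "Ada")
print(d.get("user:42"))  # → Ada
```

Redis's real dict adds incremental rehashing so resizes never stall the event loop, but the core lookup path is this simple.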
Step 4: The operation runs entirely in memory using specialized structures: strings, lists (quicklist), hashes (compact encodings), sets/intsets, sorted sets (skiplist + hash), streams, etc. They're engineered for predictable, fast operations that stay friendly to CPU caches.
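The sorted set pairing is worth a sketch: a hash gives O(1) score lookup by member, while an ordered structure gives rank and range queries. Redis pairs the hash with a skiplist; in this illustrative sketch a bisect-maintained sorted list stands in for the skiplist:

```python
import bisect

class TinyZSet:
    def __init__(self):
        self.scores = {}   # member -> score (the hash side)
        self.ordered = []  # sorted list of (score, member) (skiplist stand-in)

    def zadd(self, score, member):
        if member in self.scores:
            # remove the old (score, member) entry before re-inserting
            old = (self.scores[member], member)
            self.ordered.pop(bisect.bisect_left(self.ordered, old))
        self.scores[member] = score
        bisect.insort(self.ordered, (score, member))

    def zscore(self, member):
        return self.scores.get(member)  # O(1) via the hash

    def zrange(self, start, stop):
        # rank-based range query over the ordered side
        return [m for _, m in self.ordered[start:stop + 1]]

z = TinyZSet()
z.zadd(10, "alice"); z.zadd(5, "bob"); z.zadd(7, "carol")
print(z.zrange(0, 2))  # members ordered by score
```

Keeping both views in sync is exactly the trick that lets ZSCORE and ZRANGE each hit their fast path.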
Step 5: The response is written back through the same event loop. Pipelining and batching can amortize syscalls and round trips, pushing throughput even higher.
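Pipelining is easy to see in code: instead of one write and one read per command, the client concatenates several commands into a single buffer, so N commands cost one network round trip instead of N. (The encoder below repeats the RESP sketch; it is illustrative, not a real client library.)

```python
def encode(*parts: str) -> bytes:
    """Encode one command as a RESP array of bulk strings."""
    buf = f"*{len(parts)}\r\n".encode()
    for p in parts:
        b = p.encode()
        buf += b"$%d\r\n%s\r\n" % (len(b), b)
    return buf

commands = [("SET", "a", "1"), ("SET", "b", "2"), ("GET", "a")]
# one concatenated buffer = one write syscall = one round trip
pipeline = b"".join(encode(*c) for c in commands)
print(len(commands), "commands in", len(pipeline), "bytes, one round trip")
```

On a network with even 1 ms of latency, collapsing 100 round trips into one is a far bigger win than any server-side optimization.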
Step 6: Persistence and replication are off the critical path. AOF uses append-only, sequential writes with configurable fsync; RDB snapshots happen in a child process. The tradeoff: depending on the fsync policy, a crash can lose the most recent writes. Redis accepts this in exchange for speed.
Replication is async by default. The slow stuff is handled in the background so the hot path stays hot.
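The AOF idea fits in a few lines: every write command is appended sequentially to a log, and replaying the log rebuilds the dataset. This sketch (file name and format are illustrative; the real AOF stores RESP-encoded commands) shows the shape of it:

```python
import os
import tempfile

aof_path = os.path.join(tempfile.mkdtemp(), "appendonly.aof")

def append_command(cmd: str) -> None:
    # sequential append: the cheapest kind of disk write
    with open(aof_path, "a") as f:
        f.write(cmd + "\n")
        # f.flush(); os.fsync(f.fileno())  # "appendfsync always" would do this

for c in ["SET a 1", "SET b 2", "SET a 3"]:
    append_command(c)

# Replay on restart: sequential read, last write wins.
state = {}
with open(aof_path) as f:
    for line in f:
        _, key, val = line.split()
        state[key] = val
print(state)
```

The fsync frequency (always / everysec / no) is exactly the knob that trades durability against latency: how many of those appended lines can vanish in a crash.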
Because Redis keeps data in RAM, executes commands in a single, lock-free event loop, and uses highly optimized data structures and a lean protocol, it avoids the latency traps of disk I/O, heavy parsing, and lock contention. That’s why it’s fast.