
Redis & Memcached

In-depth comparison of Redis and Memcached: data structures, persistence, clustering, pub/sub, and choosing between them.

15 min read · High interview weight

The Two Dominant In-Memory Caches

When engineers say 'add a cache layer,' they almost always mean Redis or Memcached. Both store data in RAM for sub-millisecond access. Both are battle-tested at massive scale — Redis is used by GitHub, Twitter, Stack Overflow, and Airbnb; Memcached powers Facebook's cache infrastructure at extraordinary scale. Yet they represent very different design philosophies, and choosing the wrong one adds unnecessary operational complexity.

Memcached: Simple, Fast, Multi-Threaded

Memcached is intentionally minimal. It is a distributed hash table in RAM — nothing more. Its data model is a flat key-value store where keys are strings (up to 250 bytes) and values are opaque blobs (up to 1 MB by default). Its operation set is tiny: `get` and `set` at the core, plus `delete`, `add`, `replace`, and atomic `incr`/`decr`.

Memcached is multi-threaded, making it efficient at utilizing all cores on a single node. It uses a slab allocator to manage memory without fragmentation — memory is pre-divided into slab classes of fixed chunk sizes. There is no persistence, no replication, no pub/sub, and no server-side scripting. If a Memcached node restarts, the cache is cold. This simplicity is intentional: the Memcached team argues that replication and persistence should be handled at the application or infrastructure layer.
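The slab allocator can be sketched in a few lines of Python. This is an illustration only: the 1.25 growth factor matches memcached's default (`-f`), but the 96-byte base chunk size and 8-byte alignment are simplifying assumptions, not memcached's exact internals.

```python
# Sketch of memcached-style slab classes: memory is carved into fixed-size
# chunks, and an item is stored in the smallest class whose chunk fits it.
# Growth factor 1.25 matches memcached's default; base size is illustrative.

def slab_classes(base=96, factor=1.25, max_chunk=1024 * 1024):
    sizes = []
    size = base
    while size <= max_chunk:
        sizes.append(size)
        size = int(size * factor)
        size += (8 - size % 8) % 8  # round up to an 8-byte boundary
    return sizes

def class_for(item_size, classes):
    # Smallest chunk that fits; items larger than the max chunk are rejected
    for chunk in classes:
        if item_size <= chunk:
            return chunk
    return None

classes = slab_classes()        # [96, 120, 152, 192, ...]
print(class_for(100, classes))  # → 120 (20 bytes of this chunk go unused)
```

The trade-off is visible in the last line: fixed chunk sizes eliminate fragmentation from variable-size allocations, at the cost of some internal waste per item.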

ℹ️

Memcached clustering

Memcached itself has no clustering — it is a single-node daemon. Sharding across multiple Memcached nodes is entirely client-side: the application library hashes the key and routes to the appropriate server. Facebook's mcrouter proxy handles this transparently at scale.
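The client-side routing described above can be sketched as a modulo hash over a stable checksum. The server names are hypothetical, and real client libraries typically use consistent hashing (ketama) instead, so that adding or removing a node remaps only ~1/N of the keys rather than almost all of them.

```python
import zlib

# Minimal client-side sharding, as a memcached client library might do it:
# hash the key with a stable function and pick a server by modulo.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]  # hypothetical nodes

def server_for(key: str, servers=SERVERS) -> str:
    # zlib.crc32 is stable across processes, unlike Python's built-in hash()
    return servers[zlib.crc32(key.encode()) % len(servers)]

# Every client that shares the same server list and hash agrees on routing:
assert server_for("user:42:profile") == server_for("user:42:profile")
```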

Redis: Feature-Rich, Persistent, Single-Threaded Core

Redis (Remote Dictionary Server) is a data structure server that happens to be fast enough to serve as a cache. Its rich data model is the killer feature: strings, lists, sets, sorted sets (`ZSET`), hashes, bitmaps, HyperLogLogs, streams, and geospatial indexes — all with O(1) or O(log n) operations. This makes Redis not just a cache but a Swiss Army knife for distributed systems.

  • String — simple key-value, `SET user:42:name 'Alice'`, `INCR page_views`
  • Hash — nested key-value within a key, `HSET user:42 name Alice age 30`; great for object storage
  • List — ordered linked list, `LPUSH`/`RPUSH`/`LRANGE`; use for queues and activity feeds
  • Set — unordered unique members, `SADD`/`SMEMBERS`/`SINTER`; use for tags, friend lists
  • Sorted Set (ZSET) — set with float scores, `ZADD`/`ZRANGE`; use for leaderboards, priority queues
  • Stream — append-only log, `XADD`/`XREAD`; use for event sourcing and message passing
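To make the sorted-set semantics concrete, here is a toy pure-Python leaderboard mirroring `ZADD`/`ZINCRBY`/`ZREVRANGE` behavior. Redis keeps a skip list plus a hash internally for O(log n) updates; this sketch simply re-sorts and is for illustration only.

```python
# Toy leaderboard mirroring ZADD / ZINCRBY / ZREVRANGE semantics.

class MiniZSet:
    def __init__(self):
        self.scores = {}  # member -> float score

    def zadd(self, member: str, score: float) -> None:
        self.scores[member] = score

    def zincrby(self, member: str, delta: float) -> float:
        self.scores[member] = self.scores.get(member, 0.0) + delta
        return self.scores[member]

    def zrevrange(self, start: int, stop: int):
        # Highest score first, inclusive stop index, like ZREVRANGE key start stop
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        return ranked[start:stop + 1]

board = MiniZSet()
board.zadd("alice", 3100)
board.zadd("bob", 2800)
board.zincrby("bob", 500)       # bob -> 3300
print(board.zrevrange(0, 1))    # → ['bob', 'alice']
```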

Redis Persistence

Unlike Memcached, Redis can survive restarts. Two persistence mechanisms are available:

| Mechanism | How It Works | Recovery Time | Data Loss Risk | Use When |
|---|---|---|---|---|
| RDB (Snapshot) | Forks the process and dumps a point-in-time snapshot to disk on a schedule (e.g., every 5 min) | Fast (load one file) | Up to the snapshot interval | Faster restarts matter; some data loss acceptable |
| AOF (Append-Only File) | Logs every write command; replays on restart. Configurable fsync: always / every second / never | Slower (replay log) | 0 (`fsync=always`) to ~1 s (`fsync=everysec`) | Minimal data loss required |
| RDB + AOF | Both enabled; AOF used for recovery, RDB for fast backups | Medium | ~1 second | Production recommended for write-back caches |
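The RDB + AOF setup from the table might look like this in `redis.conf` (the directives are real; the thresholds are illustrative values, not recommendations):

```conf
# Snapshot if at least 10 keys changed in the last 300 seconds (RDB)
save 300 10

# Enable the append-only file and fsync it once per second (AOF)
appendonly yes
appendfsync everysec

# Rewrite the AOF with an RDB preamble for faster loading (Redis 4+)
aof-use-rdb-preamble yes
```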

Redis High Availability: Sentinel & Cluster

Redis Sentinel provides automatic failover for a single primary plus replicas. Sentinel processes monitor the primary, elect a new primary on failure, and reconfigure clients. There is no data sharding — the entire dataset lives on one node — so Sentinel suits moderate data sizes.

Redis Cluster provides automatic sharding across 16,384 hash slots distributed among multiple primary nodes, each with optional replicas. Clients use the CLUSTER SLOTS map to route commands. This enables horizontal scaling but adds complexity: multi-key commands require all keys to be in the same slot (use hash tags `{user:42}:profile` to co-locate keys).
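The slot computation is specified in the Redis Cluster spec: take the CRC16 (XMODEM variant) of the key modulo 16384, hashing only the substring inside the first non-empty `{...}` hash tag if one exists. A sketch:

```python
# Key-to-slot mapping per the Redis Cluster spec: slot = CRC16(key) mod 16384,
# where CRC16 is the XMODEM variant (poly 0x1021, init 0x0000).

def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # If the key contains a non-empty {...} hash tag, hash only its contents
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Hash tags force related keys into the same slot, enabling multi-key commands:
assert key_slot("{user:42}:profile") == key_slot("{user:42}:sessions")
```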

[Diagram] Redis Cluster: 16,384 hash slots distributed across primary nodes, each with a replica

Redis Pub/Sub and Other Features

Redis's `PUBLISH`/`SUBSCRIBE` enables lightweight real-time messaging — useful for invalidation notifications, live score updates, and chat. However, pub/sub in Redis is fire-and-forget: there is no persistence, so subscribers that are offline miss messages. For reliable messaging, use Redis Streams instead.
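The fire-and-forget semantics can be modeled with a toy in-process bus: `publish` delivers only to subscribers connected at that moment and returns the receiver count, as Redis `PUBLISH` does. This is a conceptual sketch, not the redis-py API.

```python
# Toy model of fire-and-forget pub/sub: a message published with no
# subscribers is simply lost, never queued for later delivery.

class ToyPubSub:
    def __init__(self):
        self.subscribers = {}  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message) -> int:
        listeners = self.subscribers.get(channel, [])
        for cb in listeners:
            cb(message)
        return len(listeners)  # receiver count, like Redis PUBLISH

bus = ToyPubSub()
assert bus.publish("scores", "3-1") == 0   # nobody listening: message is lost

seen = []
bus.subscribe("scores", seen.append)
assert bus.publish("scores", "3-2") == 1
assert seen == ["3-2"]                     # the earlier message never arrives
```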

Lua scripting with `EVAL` allows atomic multi-command operations. Transactions via `MULTI`/`EXEC` batch commands atomically. Pipelining sends multiple commands without waiting for each reply, dramatically improving throughput over high-latency connections.
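A back-of-envelope calculation shows why pipelining matters on high-latency links: without it, each command pays a full round trip; pipelined, a batch shares roughly one. The model below ignores server processing time, which is usually microseconds and dwarfed by network RTT.

```python
# n sequential commands cost ~n round trips; one pipelined batch costs
# roughly a single round trip (server processing time ignored).

def total_ms(n_commands: int, rtt_ms: float, pipelined: bool) -> float:
    return rtt_ms * (1 if pipelined else n_commands)

# 1,000 commands over a 1 ms RTT link:
assert total_ms(1000, 1.0, pipelined=False) == 1000.0  # a full second of waiting
assert total_ms(1000, 1.0, pipelined=True) == 1.0      # about one round trip
```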

Redis vs Memcached: Decision Guide

| Dimension | Redis | Memcached |
|---|---|---|
| Data Model | Rich (strings, lists, sets, hashes, sorted sets, streams) | Flat key-value blobs only |
| Persistence | Yes (RDB, AOF, or both) | No — memory only |
| Replication | Built-in (Sentinel, Cluster) | Client-side sharding only |
| Threading | Single-threaded command processing (I/O threads in v6+) | Multi-threaded, uses all cores |
| Pub/Sub | Yes (`PUBLISH`/`SUBSCRIBE`, Streams) | No |
| Lua Scripting | Yes (`EVAL`) | No |
| Memory Efficiency | Higher overhead per key | Lower overhead, slab allocator |
| Simplicity | More complex | Simpler operationally |
| Best For | Leaderboards, sessions, queues, rate limiting, pub/sub | Simple object caching at extreme scale |
💡

Default choice

In 2024+, choose Redis unless you have a specific reason not to. Its feature richness rarely adds meaningful operational overhead, and it unifies your caching, queuing, rate-limiting, and pub/sub infrastructure. Memcached's main advantage — multi-threading — matters primarily at very high QPS (>1 million/s) on machines with many cores.

💡

Interview Tip

When an interviewer asks 'What cache would you use?' don't just say Redis. Say: 'I'd use Redis because we also need sorted sets for the leaderboard and pub/sub for cache invalidation notifications — consolidating to one system reduces operational overhead. If we were doing pure key-value caching at Facebook-scale QPS and needed maximum throughput from a single node's CPU cores, Memcached's multi-threading would be compelling.' Demonstrating that you know both and can justify your choice is the goal.
