
Database Replication

Master-slave, master-master, and quorum-based replication. Replication lag, conflict resolution, and trade-offs between consistency and availability.

15 min read · High interview weight

What Replication Solves

Replication copies data from one node (the primary) to one or more other nodes (replicas). It serves two distinct goals: high availability (if the primary fails, a replica takes over) and read scaling (read traffic is distributed across multiple replicas). Understanding *which goal you're optimizing for* determines which replication topology to choose.

Single-Leader (Master-Replica) Replication

The most common topology. All writes go to the leader (primary). The leader streams changes to followers (replicas), which apply them asynchronously. Reads can be served by any replica. This is the default configuration for PostgreSQL streaming replication, MySQL replication, and MongoDB replica sets.

[Diagram: single-leader replication. Writes go to the primary; reads are distributed across replicas.]
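The routing rule above ("writes to the leader, reads to any replica") can be sketched as a tiny router. The class and node names here are hypothetical placeholders, not a real driver API:

```python
import itertools

class ReplicatedRouter:
    """Route writes to the primary and spread reads across replicas round-robin."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._reads = itertools.cycle(replicas)  # endless rotation over replicas

    def node_for_write(self):
        return self.primary           # all writes go to the leader

    def node_for_read(self):
        return next(self._reads)      # reads rotate across the replica pool

router = ReplicatedRouter("db-primary", ["db-replica-1", "db-replica-2"])
router.node_for_write()   # always "db-primary"
router.node_for_read()    # alternates between the two replicas
```

Real drivers and proxies (e.g. connection poolers) implement the same split with health checks and lag awareness layered on top.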

Replication Lag

In asynchronous replication, replicas apply changes after the primary has already acknowledged the write to the client. This creates replication lag — a window where a replica serves stale data. Typical lag in well-configured systems is milliseconds to seconds, but under heavy write load it can grow to minutes.

⚠️

Read-After-Write Inconsistency

A user writes a profile update, and their next request (a read) hits a lagging replica — they see the old data. This is the 'read-after-write' anomaly. Solutions: (1) Always route reads for the current user to the primary. (2) Track the write's replication position and only route reads to replicas that have caught up. (3) Use sticky sessions to always hit the same replica.
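Solution (2) above, routing by replication position, can be sketched as follows. The LSN values and node names are made up for illustration; in PostgreSQL the equivalent data would come from functions like `pg_last_wal_replay_lsn()`:

```python
def choose_read_node(user_last_write_lsn, primary, replica_lsns):
    """
    Route a read to a replica only if it has replayed past the user's last
    write position (LSN); otherwise fall back to the primary.
    `replica_lsns` maps node name -> last applied LSN (hypothetical monitoring data).
    """
    for node, applied_lsn in replica_lsns.items():
        if applied_lsn >= user_last_write_lsn:
            return node               # replica has caught up past the user's write
    return primary                    # no replica is fresh enough; read the primary

replicas = {"replica-1": 95, "replica-2": 120}
choose_read_node(100, "primary", replicas)  # -> "replica-2"
choose_read_node(130, "primary", replicas)  # -> "primary"
```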

Synchronous vs Asynchronous Replication

| Mode | When Primary Acks | Data Loss Risk | Write Latency |
| --- | --- | --- | --- |
| Asynchronous | Before replica confirms | Yes — replica may lag | Low (primary only) |
| Synchronous | After at least one replica confirms | No — one durable copy guaranteed | Higher (waits for replica) |
| Semi-synchronous (MySQL) | After one replica; others are async | Minimal — one replica guaranteed | Medium |

Failover and Leader Election

When the primary fails, a replica must be promoted. This process is called failover. Two key challenges: (1) Determining the primary is actually dead (not just slow) — the split-brain problem. (2) Choosing which replica to promote — typically the one with the least replication lag. Tools like Patroni (PostgreSQL), Orchestrator (MySQL), and ZooKeeper automate this.
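Challenge (2), choosing the promotion candidate, usually reduces to "pick the replica with the least lag." A minimal sketch, assuming lag measurements (in ms) have already been collected by monitoring:

```python
def pick_promotion_candidate(replica_lag_ms):
    """Return the replica with the least replication lag (hypothetical lag data)."""
    return min(replica_lag_ms, key=replica_lag_ms.get)

lags = {"replica-1": 850, "replica-2": 120, "replica-3": 400}
pick_promotion_candidate(lags)  # -> "replica-2", the most caught-up replica
```

Tools like Patroni do essentially this comparison, but against WAL positions rather than wall-clock lag, and only after fencing the old primary.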

⚠️

Split Brain

If a primary becomes temporarily unreachable (network partition) and a new primary is elected, you now have two nodes accepting writes — a split brain. Writes to the old primary are lost when the partition heals. Preventing split brain requires a quorum mechanism: the primary can only accept writes if it can reach a majority of nodes.
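The quorum rule in the callout is a one-line check: a node may act as primary only while it can reach a strict majority of the cluster (counting itself). A sketch:

```python
def can_accept_writes(reachable_nodes, cluster_size):
    """A primary may accept writes only while it reaches a strict majority
    of the cluster (including itself); otherwise it must step down."""
    return reachable_nodes > cluster_size // 2

can_accept_writes(3, 5)  # True: 3 of 5 is a majority
can_accept_writes(2, 5)  # False: the minority side of a partition steps down
```

Since at most one side of a partition can hold a majority, two nodes can never accept writes at the same time.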

Multi-Leader (Master-Master) Replication

Multiple nodes accept writes simultaneously. Useful for geo-distributed systems where you want low-latency writes in multiple regions. The fundamental challenge is write conflicts: if two leaders accept different values for the same row concurrently, which value wins?

  • Last-Write-Wins (LWW): The write with the later timestamp wins. Simple but risks losing concurrent writes that arrive out of order.
  • Conflict-free Replicated Data Types (CRDTs): Data structures designed to merge automatically (counters, sets). Used in DynamoDB, Riak.
  • Application-level resolution: Expose conflicts to the application (CouchDB approach). Application decides the merge logic.
  • Avoid conflicts: Route all writes for a record to the same leader using consistent hashing on the record key.
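Last-Write-Wins, the first strategy above, can be sketched in a few lines. Representing a version as `(timestamp, node_id, value)` is an illustrative choice, not a specific database's format; the node ID breaks timestamp ties so every replica converges on the same winner:

```python
def last_write_wins(version_a, version_b):
    """
    Resolve a conflict between two versions of the same row using LWW.
    Tuple comparison orders by timestamp first, then node_id as a
    deterministic tie-breaker, so all replicas pick the same winner.
    """
    return max(version_a, version_b)

v1 = (1700000000.5, "leader-eu", {"name": "Ada"})
v2 = (1700000003.1, "leader-us", {"name": "Ada L."})
last_write_wins(v1, v2)  # -> v2, the later write; v1 is silently discarded
```

The last comment is the whole trade-off: LWW is simple precisely because it throws away one of the concurrent writes.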

Quorum-Based Replication (Leaderless)

In leaderless systems (Cassandra, DynamoDB, Riak), any node can accept writes. Consistency is achieved through quorum reads and writes. With `N` total replicas, a write is successful when `W` replicas acknowledge it, and a read is successful when `R` replicas respond. Consistency is guaranteed when `W + R > N`.

```text
Cassandra quorum example (N=3 replicas):
  Write quorum (W=2): write must succeed on 2 of 3 nodes
  Read quorum  (R=2): read from 2 of 3 nodes, take latest

  W + R = 4 > N = 3 → consistent (reads always see the latest write)

Strong consistency: W=3, R=1  (or W=1, R=3)
High availability:  W=1, R=1  (eventual consistency)
Balanced:           W=2, R=2  (tolerates 1 node failure)
```

When `W + R <= N`, you get eventual consistency — faster writes and reads, but a read may return stale data because the read quorum is not guaranteed to overlap the write quorum. Cassandra's default consistency level is `ONE`, but it can be raised to `QUORUM` (or others) per query.
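The overlap condition is simple enough to encode directly, which is also a handy sanity check when tuning N/W/R in an interview answer:

```python
def quorum_is_consistent(n, w, r):
    """W + R > N guarantees every read quorum overlaps every write quorum,
    so at least one node in any read set holds the latest acknowledged write."""
    return w + r > n

quorum_is_consistent(3, 2, 2)  # True:  balanced quorum, reads see latest write
quorum_is_consistent(3, 1, 1)  # False: quorums may miss each other (eventual)
quorum_is_consistent(3, 3, 1)  # True:  write-all / read-one
```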

💡

Interview Tip

Replication questions in interviews often follow this pattern: 'How do you handle the case where the primary goes down?' Walk through: detect failure (heartbeats + timeout), elect a new primary (Raft/Paxos or sentinel), redirect writes, and handle in-flight transactions. Mention the risk of split brain and how quorum prevents it. This shows you understand production realities, not just textbook topology diagrams.

