
Caching Strategies: Write-Through, Write-Back, Write-Around

The three fundamental caching write strategies, their consistency guarantees, performance characteristics, and ideal use cases.

12 min read · High interview weight

Why Write Strategy Matters

A cache is only as useful as its ability to stay in sync with the source of truth. Reading from a cache is straightforward — on a miss, fetch from the database and populate the cache. Writing is harder. Every time data changes, you must decide: does the cache update first, does the database update first, or does the cache skip the write entirely? The answer shapes your system's consistency, write latency, and risk of data loss.

There are three fundamental patterns for handling writes: write-through, write-back (write-behind), and write-around. Each makes a different trade-off between latency, consistency, and durability. Choosing the wrong one can mean stale data reaching users, write amplification, or cache fills poisoned with data nobody ever reads.

Write-Through

In write-through caching, every write goes to the cache and the database simultaneously (or sequentially, cache first) before the application receives a success response. The cache always mirrors the database for recently written keys.

[Diagram] Write-through: cache and DB are updated on every write before acknowledging the client

Pros: Reads always hit a warm, consistent cache. No risk of the cache serving stale data for keys that have been written. Data is persisted to the database synchronously, so a cache crash loses nothing.

Cons: Write latency is high — the application blocks on both the cache write and the database write. For write-heavy workloads this is painful. Every write also populates the cache with data that may never be read again, wasting cache space on write-once keys.

📌

Best fit

Write-through is ideal when reads vastly outnumber writes and strong read consistency is important — for example, user profile data, configuration settings, or pricing information that is read on nearly every request but changed rarely.

Write-Back (Write-Behind)

In write-back (also called write-behind) caching, the application writes only to the cache and acknowledges success immediately. The cache asynchronously flushes dirty data to the database in the background — either on a timer, on eviction, or when a batch threshold is reached.

[Diagram] Write-back: the cache acknowledges writes immediately; DB is updated asynchronously

Pros: Write latency is extremely low — the database is not in the critical path. Write-heavy workloads can coalesce many writes into a single database round-trip (e.g., a counter incremented 1,000 times before flush becomes a single `UPDATE`). Excellent throughput.

Cons: If the cache crashes before flushing, data is lost. This makes write-back inappropriate for anything requiring durability. There is also a window where the cache and database are inconsistent — if another service reads directly from the database, it may see stale data.

⚠️

Durability risk

Write-back caches must be treated as stateful infrastructure. Use Redis with `appendonly yes` (AOF persistence) or maintain a write-behind queue in a durable message broker so in-flight writes can be replayed after a restart.

📌

Best fit

Write-back suits write-intensive, low-durability-risk workloads: view counters, real-time analytics, game leaderboard scores, or IoT sensor telemetry where losing a few seconds of data is acceptable.
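The coalescing behavior can be sketched as follows. The flush is triggered manually here; in practice a timer, eviction, or batch threshold would drive it. As before, the dicts are illustrative stand-ins:

```python
# Write-back sketch: writes are acknowledged as soon as they land in the
# cache; a flush step later coalesces dirty keys into DB writes.

cache: dict[str, int] = {}
db: dict[str, int] = {}
dirty: set[str] = set()

def write_back(key: str, value: int) -> None:
    cache[key] = value  # acknowledged immediately; DB not in critical path
    dirty.add(key)      # remember which keys need flushing

def flush() -> int:
    """Push dirty keys to the DB; returns the number of DB writes issued."""
    writes = 0
    for key in dirty:
        db[key] = cache[key]
        writes += 1
    dirty.clear()
    return writes

# 1,000 increments to one counter coalesce into a single DB write on flush
for _ in range(1000):
    write_back("views:42", cache.get("views:42", 0) + 1)
assert flush() == 1
assert db["views:42"] == 1000
```

Everything written between flushes lives only in the cache, which is exactly the durability window the warning above describes.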

Write-Around

In write-around caching, writes bypass the cache entirely and go straight to the database. The cache is only populated on cache misses (lazy loading / cache-aside pattern). Previously cached values become stale after a write and are only refreshed when next requested.

[Diagram] Write-around: writes go directly to DB; cache is populated only on subsequent reads

Pros: The cache is not polluted with write-once data that nobody reads again. This is especially valuable for bulk import operations or infrequently accessed records. Reads of hot data are still served from cache.

Cons: After a write, the next read for that key will be a cache miss. If reads frequently follow writes (common in user-facing apps: user edits their profile and immediately views it), this creates a burst of cold reads hitting the database.

Strategy Comparison

| Strategy      | Write Latency        | Read Consistency            | Data Loss Risk           | Best For                           |
|---------------|----------------------|-----------------------------|--------------------------|------------------------------------|
| Write-Through | High (sync DB write) | Strong                      | None (DB always current) | Read-heavy, config data, profiles  |
| Write-Back    | Low (async DB write) | Eventual                    | Possible on cache crash  | Write-heavy, counters, analytics   |
| Write-Around  | Medium (DB only)     | Eventual (miss after write) | None                     | Write-once, bulk loads, rare reads |

Cache-Aside (Lazy Loading)

Cache-aside (also called lazy loading) is the most common read strategy and is often combined with write-around. The application checks the cache first; on a miss, it fetches from the database and populates the cache. The application code owns the cache logic — no proxy or cache intermediary is involved.

```python
def get_user(user_id: str) -> User:
    # 1. Check cache
    cached = cache.get(f"user:{user_id}")
    if cached:
        return deserialize(cached)

    # 2. Cache miss — fetch from DB
    user = db.query("SELECT * FROM users WHERE id = ?", user_id)
    if not user:
        raise NotFoundError()

    # 3. Populate cache with TTL
    cache.set(f"user:{user_id}", serialize(user), ttl=300)
    return user
```

Cache-aside is resilient: if the cache goes down, the application simply fetches from the database every time — degraded performance but no outage. The downside is cache stampede: on cold start or after a cache flush, all requests for a popular key hit the database simultaneously.
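One common stampede mitigation (a sketch under assumed names, not part of the lesson's code) is per-key locking, sometimes called "single flight": the first caller rebuilds the missing entry while concurrent callers wait, then everyone reads the fresh value:

```python
import threading

cache: dict[str, str] = {}
_locks: dict[str, threading.Lock] = {}
_locks_guard = threading.Lock()

def _lock_for(key: str) -> threading.Lock:
    # One lock per key; the guard makes lock creation itself thread-safe
    with _locks_guard:
        return _locks.setdefault(key, threading.Lock())

def get_single_flight(key: str, load_from_db) -> str:
    """Cache-aside read where at most one caller per key rebuilds on a miss."""
    value = cache.get(key)
    if value is not None:
        return value
    with _lock_for(key):
        value = cache.get(key)  # re-check: another thread may have filled it
        if value is None:
            value = load_from_db(key)  # only one caller hits the DB
            cache[key] = value
        return value
```

TTL jitter (randomizing expiry times) is a complementary trick that keeps many popular keys from expiring, and stampeding, at the same moment.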

💡

Interview Tip

Interviewers almost always ask about caching strategies. Lead with cache-aside as your default read pattern, then layer in write-through for write consistency or write-back for write throughput. Always mention the trade-off you're making: 'I'll use write-through here because this is user profile data and stale reads after an update would be a bad experience.' That framing — explicitly naming trade-offs — is what separates a strong answer from a mediocre one.

