
Write-Through & Write-Behind Caching

Synchronous vs asynchronous cache writes: write-through for consistency, write-behind (write-back) for performance, and their failure modes.

12 min read · High interview weight

The Write Strategy Spectrum

While Cache-Aside handles reads lazily, write strategies determine how changes flow between the cache and the primary data store. There are three main strategies: write-through (synchronous, cache-first, strong consistency), write-behind (asynchronous, cache-first, high performance), and write-around (skip the cache entirely on writes). Each strikes a different consistency-versus-performance trade-off.

Write-Through Caching

In write-through caching, every write goes to both the cache and the data store synchronously, as part of the same operation. The write is not acknowledged to the caller until both the cache and the database confirm it. This keeps the cache and the database in sync at all times.

[Diagram] Write-Through: both cache and DB must confirm before returning success
  • Consistency: Cache and DB are always in sync. Read-after-write is always fresh.
  • Higher write latency: Every write waits for two round trips (cache + DB).
  • Cache pollution: Infrequently-read data gets cached on every write, wasting memory.
  • Ideal for: Read-heavy workloads where data is also written and must be immediately readable — e.g., session stores, leaderboards.

Write-Behind (Write-Back) Caching

Write-behind (also called write-back) writes to the cache immediately and returns success to the caller, then asynchronously flushes the data to the primary store in the background. The application sees very low write latency because the persistence step is decoupled from the request path. The same technique is used inside SSDs, CPU caches, and database buffer pools.

[Diagram] Write-Behind: write returns immediately; persistence is asynchronous
  • Ultra-low write latency: Caller is not blocked on database I/O.
  • Write coalescing: Multiple writes to the same key can be batched into a single DB write.
  • Data loss risk: If the cache node fails before flushing, in-flight writes are lost.
  • Consistency gap: Reads from the DB (or a different cache node) will see stale data.
  • Ideal for: Write-heavy workloads where some data loss is tolerable — e.g., analytics counters, IoT sensor ingestion, gaming leaderboards.
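The flush side of write-behind is where coalescing happens. Below is a minimal single-shot sketch of a flush worker, assuming the application accumulates dirty counters in a Redis hash (`views:dirty`, via `HINCRBY`) and a `pages` table holds the totals; the key, table, and column names are illustrative, not a prescribed API:

```python
def flush_page_views(redis, db) -> int:
    """Drain the dirty-counter hash and coalesce each key's increments
    into a single DB write. Returns the number of pages flushed.
    A scheduler would call this every N seconds."""
    # Grab-and-clear atomically (redis-py pipelines are MULTI/EXEC by
    # default), so increments arriving during the flush land in a fresh
    # hash instead of being lost.
    pipe = redis.pipeline()
    pipe.hgetall("views:dirty")
    pipe.delete("views:dirty")
    counts, _ = pipe.execute()
    for page_id, delta in counts.items():
        # One DB write per key, however many increments were coalesced
        db.execute(
            "UPDATE pages SET view_count = view_count + ? WHERE id = ?",
            int(delta), page_id,
        )
    return len(counts)
```

Note the remaining gap: if the worker crashes after `DELETE` but before the DB writes complete, that batch is lost, which is exactly the durability risk discussed next.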
⚠️

Write-Behind Durability Risk

Write-behind is not safe for financial transactions, inventory counts, or any data where loss is unacceptable. If the cache crashes between the write ack and the flush to the database, those writes are gone forever. Always pair write-behind with a durable queue or WAL for crash recovery if you need durability.
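One way to shrink the loss window is to buffer pending writes in a durable log rather than a bare in-memory structure. The sketch below uses a Redis Stream as that buffer, assuming AOF persistence is enabled so entries survive a process restart; the stream key, table, and field names are assumptions for illustration:

```python
def enqueue_write(redis, order_id: str, amount: float) -> str:
    # Ack the caller only after the pending write is appended to the
    # stream. With AOF persistence, the entry survives a cache restart
    # and can be replayed by the background flusher.
    return redis.xadd("writes:pending", {
        "order_id": order_id,
        "amount": str(amount),
    })

def drain(redis, db, batch: int = 100) -> int:
    # Background flusher: read a batch, persist each entry to the DB,
    # then delete it from the stream only after the DB write succeeds.
    entries = redis.xrange("writes:pending", count=batch)
    for entry_id, fields in entries:
        db.execute(
            "INSERT INTO orders (id, amount) VALUES (?, ?)",
            fields["order_id"], float(fields["amount"]),
        )
        redis.xdel("writes:pending", entry_id)
    return len(entries)
```

Because entries are deleted only after the DB confirms, a crash mid-drain can cause duplicate replays rather than loss, so the DB write should be idempotent (e.g., an upsert keyed on the entry ID).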

Write-Around Caching

Write-around skips the cache entirely on writes — data goes straight to the database. The cache is only populated on reads (cache miss path, like Cache-Aside). This prevents polluting the cache with write-heavy data that will rarely be read back, at the cost of always having a cache miss on the first read after a write.
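The write path is just a DB write plus an invalidation. A minimal sketch, assuming a generic `db`/`cache` pair and an illustrative `reports` table:

```python
def save_report(db, cache, report_id: str, body: str) -> None:
    # Write-around: persist to the DB only -- the cache is never
    # populated on the write path.
    db.execute(
        "INSERT INTO reports (id, body) VALUES (?, ?)",
        report_id, body,
    )
    # Invalidate any stale copy; if the report is ever read, the
    # Cache-Aside miss path loads it fresh from the DB.
    cache.delete(f"report:{report_id}")
```

Deleting (rather than updating) the cache entry is what keeps write-heavy, rarely-read data from crowding out hot keys.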

Side-by-Side Comparison

Strategy      | Write Latency       | Read Consistency               | Data Loss Risk           | Best For
------------- | ------------------- | ------------------------------ | ------------------------ | ---------------------------------
Write-Through | High (sync to DB)   | Strong                         | None                     | Read-heavy, consistency-sensitive
Write-Behind  | Low (async flush)   | Eventually consistent          | Yes (crash before flush) | Write-heavy, loss-tolerant
Write-Around  | Low (only DB write) | Miss on first read after write | None                     | Data rarely re-read after write
Cache-Aside   | Low (only DB write) | Eventually consistent          | None                     | General purpose, read-heavy

Combining Write-Through with Cache-Aside

In practice, many systems combine patterns. A common hybrid is write-through for writes + Cache-Aside for reads. Writes always populate the cache synchronously; reads check the cache first. This gives strong post-write consistency while still benefiting from the lazy-loading semantics of Cache-Aside for keys that are never written through.

python
# Write-Through: write DB and cache together
def update_product_price(product_id: str, new_price: float) -> None:
    with db.transaction():
        db.execute(
            "UPDATE products SET price = ? WHERE id = ?",
            new_price, product_id,
        )
        # Cache updated synchronously within the same logical operation.
        # Note: the Redis write is not covered by the DB transaction, so a
        # commit failure after this point leaves the cache ahead of the DB;
        # the TTL bounds how long that inconsistency can last.
        cache_key = f"product:{product_id}"
        redis.setex(cache_key, 3600, serialize({
            "id": product_id,
            "price": new_price,
        }))
    # If either step raises before commit, the DB transaction rolls back

# Write-Behind: write cache immediately, flush async
def record_page_view(page_id: str) -> None:
    # Increment an in-memory counter (write-behind)
    redis.incr(f"views:{page_id}")
    # A background job aggregates and flushes every 60 seconds
    # → some view counts may be lost on Redis crash
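The read half of this hybrid is plain Cache-Aside. A sketch under the same assumptions as the block above, with dependencies passed explicitly and a hypothetical `db.query_one` single-row helper:

```python
import json

def get_product(redis, db, product_id: str) -> dict:
    # Cache-Aside read path: check the cache first; on a miss, load
    # from the DB and populate the cache with a TTL.
    cache_key = f"product:{product_id}"
    cached = redis.get(cache_key)
    if cached is not None:
        return json.loads(cached)  # hit: fresh, because writes go through
    # `db.query_one` is a hypothetical helper returning one row
    row = db.query_one(
        "SELECT id, price FROM products WHERE id = ?", product_id
    )
    product = {"id": row[0], "price": row[1]}
    redis.setex(cache_key, 3600, json.dumps(product))  # lazy-load on miss
    return product
```

Keys written through `update_product_price` are always cache hits here; keys that were never written through simply fall back to lazy loading.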
💡

Interview Tip

When an interviewer asks 'how do you keep the cache consistent?', structure your answer around write strategy choice. Say: 'For our user profile service which is read-heavy but write-infrequent, I'd use write-through so that a profile update is immediately visible in the cache. For our view counters, write-behind with periodic flush is fine because losing a few counts is acceptable.' Show you pick the strategy based on the data's consistency requirements, not as a one-size-fits-all choice.

