Database per Service
Each microservice owns its data: schema isolation, data duplication trade-offs, cross-service queries, and migration strategies.
The Principle
Database per Service is a foundational microservices pattern: each service owns and exclusively manages its own data store. No other service may directly query or write to that database; access goes only through the owning service's API. This is the data-level counterpart to the code-level isolation that microservices enforce between services.
Without this pattern, a shared database becomes a tight coupling point. Schema changes by one team break others, database bottlenecks affect all services, and you cannot independently choose the right database technology for each workload. The whole promise of microservices — independent deployability and scaling — collapses.
Isolation Levels
Database isolation in microservices exists on a spectrum. Choose the level that matches your organizational maturity and deployment model:
| Isolation Level | How | Pros | Cons |
|---|---|---|---|
| Separate DB instance | Each service runs its own DB server | Full isolation, different DB technologies | Highest operational overhead |
| Separate database/schema | Same DB cluster, separate schemas per service | Logical isolation, simpler ops | Resource contention still possible |
| Separate table prefix | Same schema, tables named `order_*`, `user_*` | Easiest operationally | Weakest isolation, easy to accidentally share |
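Whatever level you choose, it helps to make the isolation visible in configuration. A minimal sketch (all names are illustrative, not from any real deployment): each service is provisioned credentials only for its own database, so a cross-service read fails at configuration time instead of slipping through code review.

```python
# Hypothetical per-service connection registry. In a real system each
# service's deployment would receive only its own DSN (e.g. via a secret
# store); this sketch models the same rule in one process.
SERVICE_DSNS = {
    "order-service": "postgresql://order_svc@db-host/orders",
    "user-service": "postgresql://user_svc@db-host/users",
}

def dsn_for(service: str) -> str:
    """Return the only DSN this service is allowed to use."""
    try:
        return SERVICE_DSNS[service]
    except KeyError:
        # A service with no provisioned database gets nothing -- it must
        # call other services' APIs instead of their tables.
        raise PermissionError(f"{service} has no provisioned database")
```

The same rule can be enforced at the database itself with per-service users and schema grants, which is the stronger option when multiple services share a cluster.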
Cross-Service Queries
The most common objection to Database per Service is: "How do I join data across services?" For example, an order page needs the user's name (from the User Service) and inventory status (from the Inventory Service). In a shared database this is a three-table JOIN. With separate databases, you cannot JOIN.

The standard solutions are:
- API Composition — The API Gateway or a dedicated aggregation service calls multiple services and joins the results in application code. Simple and synchronous, but adds latency.
- CQRS Read Model — A projector subscribes to events from multiple services and builds a denormalized read model that already has all the data needed for a query. Best for frequently accessed aggregated views.
- Data replication via events — Services publish events when their data changes. Other services maintain a local copy of the data they need. Eventual consistency, but no cross-service calls at query time.
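API Composition, the simplest of the three, can be sketched in a few lines. The service clients below are stubs with illustrative names and payloads; in production they would be HTTP or gRPC calls, ideally issued concurrently to limit the added latency.

```python
# API Composition sketch: an aggregator calls each owning service and
# joins the results in application code instead of in SQL.

def get_order(order_id):        # Order Service API (stub)
    return {"id": order_id, "user_id": 7, "sku": "A-1"}

def get_user(user_id):          # User Service API (stub)
    return {"id": user_id, "name": "Ada"}

def get_stock(sku):             # Inventory Service API (stub)
    return {"sku": sku, "in_stock": True}

def order_page(order_id):
    order = get_order(order_id)
    user = get_user(order["user_id"])    # extra hop: this is the latency cost
    stock = get_stock(order["sku"])      # independent of the user call,
                                         # so it could run concurrently
    return {
        "order_id": order["id"],
        "customer_name": user["name"],
        "in_stock": stock["in_stock"],
    }
```

If this aggregated view is read frequently, a CQRS read model that precomputes the join is usually the better fit.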
Data Duplication Trade-offs
Data Duplication Is a Feature, Not a Bug
When services replicate data they need from other services (e.g., Order Service stores a snapshot of the customer name at order time), this is intentional. It makes the Order Service autonomous — it does not fail if the User Service is down. The trade-off is that updates to the User Service must propagate via events, and old orders retain the historical name, which is often the correct business behavior anyway.
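A minimal sketch of that behavior, with illustrative names: the Order Service keeps a local replica of customer names fed by user-update events, and each order snapshots the name at creation time, so later renames do not rewrite order history.

```python
# Event-driven replication sketch (names are hypothetical).
customer_names = {}   # local replica inside the Order Service
orders = []

def on_user_updated(event):
    # Consumes events published by the User Service; eventual consistency.
    customer_names[event["user_id"]] = event["name"]

def place_order(user_id, sku):
    orders.append({
        "sku": sku,
        # Snapshot from the local replica -- no call to the User Service,
        # so order placement works even if that service is down.
        "customer_name": customer_names[user_id],
    })
```

Replaying the scenario from the text: after `on_user_updated` records "Ada" and an order is placed, a later rename to "Ada L." updates the replica but leaves the existing order's `customer_name` untouched.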
Migration Strategy
Migrating from a shared monolithic database to per-service databases is one of the hardest parts of a microservices transition. The Strangler Fig approach works well: identify a service boundary, extract that service's tables into a new database, and use the Transactional Outbox (or CDC) to keep data synchronized during the transition. Cut over queries to the new service API, then remove the old shared tables.
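The Transactional Outbox half of that strategy fits in a short sketch. Using SQLite as a stand-in for the service's database (table and event names are illustrative), the business row and its event are written in one local transaction; a separate relay process, or CDC on the outbox table, later publishes the events that keep the old and new stores in sync.

```python
import sqlite3

# Transactional Outbox sketch: the order and its event commit atomically,
# so the relay can never see an event without its row (or vice versa).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         event_type TEXT, payload TEXT);
""")

def create_order(order_id, sku):
    with conn:  # one transaction: both inserts commit or neither does
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, sku))
        conn.execute(
            "INSERT INTO outbox (event_type, payload) VALUES (?, ?)",
            ("OrderCreated", f'{{"id": {order_id}, "sku": "{sku}"}}'),
        )
```

The relay polls (or tails) the outbox, publishes each row to the message broker, and marks it sent; that component is deliberately omitted here.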
Interview Tip
When discussing Database per Service in an interview, always address the cross-service query problem upfront — interviewers will ask. Present API Composition for simple cases and CQRS read models for complex aggregation. Also mention that the pattern enables polyglot persistence: each service can use the DB technology best suited to its access pattern (e.g., a recommendation service using a graph DB while the order service uses Postgres).