Adapter Pattern in Distributed Systems
Transform interfaces between incompatible services: protocol adapters, data format converters, and integration with third-party APIs.
Adapter Pattern: From OOP to Distributed Systems
The Adapter pattern is one of the Gang of Four structural patterns. In object-oriented design, it wraps an incompatible interface so it looks like the interface a client expects. In distributed systems, the same concept applies at a larger scale: an adapter service or component translates between incompatible protocols, data formats, or API contracts so that two systems can communicate without either one changing.
Unlike the Anti-Corruption Layer (which is about protecting your domain model) or the Ambassador (which is about outbound connectivity resilience), the Adapter is narrowly focused on interface compatibility — making two things fit together that otherwise wouldn't.
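Before scaling the idea up to services, it helps to see the classic in-process form. The sketch below is illustrative (all names are invented, not from a real library): client code depends on a `PaymentGateway` interface, and an adapter makes an incompatible legacy client satisfy it.

```typescript
// Interface the client code was written against
interface PaymentGateway {
  charge(customerId: string, amountCents: number): string; // returns a confirmation id
}

// Incompatible legacy interface we cannot change
class LegacyBillingClient {
  submitInvoice(custRef: string, amountDollars: number): { ref: string } {
    return { ref: `INV-${custRef}-${amountDollars}` };
  }
}

// The adapter wraps the legacy client and exposes the expected interface,
// converting cents to dollars along the way
class BillingAdapter implements PaymentGateway {
  constructor(private legacy: LegacyBillingClient) {}

  charge(customerId: string, amountCents: number): string {
    const result = this.legacy.submitInvoice(customerId, amountCents / 100);
    return result.ref;
  }
}

const gateway: PaymentGateway = new BillingAdapter(new LegacyBillingClient());
console.log(gateway.charge("42", 1999));
```

The distributed-systems variants below apply exactly this shape, except the "wrapper" may be a sidecar, a gateway, or a whole service rather than a class.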
Types of Adapters in Distributed Systems
| Adapter Type | Problem It Solves | Example |
|---|---|---|
| Protocol Adapter | Client speaks REST, service speaks gRPC or SOAP | gRPC-gateway transcoding REST → gRPC |
| Data Format Adapter | Systems use different serialization formats | JSON → Avro converter in a Kafka pipeline |
| Schema Adapter | Same concept, different field names or structure | Translating `customer_id` → `user_guid` |
| Semantic Adapter | Same field names, different meanings | Converting UTC timestamps to a partner's epoch format |
| Authentication Adapter | Incompatible auth schemes | OAuth 2.0 token → legacy Basic Auth header |
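As a concrete instance of the last row, here is a minimal authentication-adapter sketch. The token validation and credential lookup are stubbed with an in-memory map (in reality this would be a token introspection call plus a secrets store); all names are illustrative.

```typescript
// Callers present an OAuth 2.0 bearer token; the legacy backend only
// understands Basic Auth. The adapter bridges the two schemes.
class OAuthToBasicAdapter {
  // Stub mapping: token -> service-account credentials
  constructor(private tokenToAccount: Map<string, { user: string; pass: string }>) {}

  toLegacyHeader(bearerToken: string): string {
    const account = this.tokenToAccount.get(bearerToken);
    if (!account) throw new Error("unknown or expired token");
    // Basic Auth is base64("user:pass") per RFC 7617
    const encoded = Buffer.from(`${account.user}:${account.pass}`).toString("base64");
    return `Basic ${encoded}`;
  }
}

const authAdapter = new OAuthToBasicAdapter(
  new Map([["tok-123", { user: "svc-orders", pass: "s3cret" }]])
);
console.log(authAdapter.toLegacyHeader("tok-123")); // "Basic c3ZjLW9yZGVyczpzM2NyZXQ="
```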
Protocol Adapter Example: gRPC-Gateway
A common scenario: your internal microservices use gRPC for high-performance service-to-service communication, but your external clients (browsers, mobile apps) use REST. Rather than maintaining two APIs, you deploy a gRPC-Gateway — an adapter that transcodes HTTP/JSON REST requests into gRPC calls and translates the protobuf responses back to JSON.
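The transcoding step can be sketched as follows. In a real deployment, grpc-gateway generates this mapping from `google.api.http` annotations in your proto files; here the generated gRPC client is replaced by a stub, and the types and the `/v1/users/{id}` route are assumptions for illustration.

```typescript
// Shapes that would normally be generated from the .proto definitions
interface GetUserRequest { userId: string; }
interface GetUserResponse { userId: string; displayName: string; }

// Stub standing in for a generated gRPC client
const userServiceClient = {
  getUser(req: GetUserRequest): GetUserResponse {
    return { userId: req.userId, displayName: `user-${req.userId}` };
  },
};

// The adapter: REST path -> typed RPC request; RPC response -> JSON body
function handleRestGetUser(path: string): string {
  const userId = path.replace("/v1/users/", ""); // e.g. GET /v1/users/42
  const response = userServiceClient.getUser({ userId });
  return JSON.stringify(response); // what the browser receives
}

console.log(handleRestGetUser("/v1/users/42"));
```

Neither side changes: the gRPC service keeps its protobuf contract, and REST clients see ordinary HTTP/JSON.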
Data Format Adapter in Event-Driven Pipelines
In event-driven architectures using Kafka, different teams may produce events in different formats. A Kafka Streams adapter (or a Kafka Connect transformer) can sit in the pipeline, consuming events in one format and re-publishing them in another. For example, a legacy system publishes XML events; the adapter converts them to Avro before they reach the modern consumers.
```typescript
// Data format adapter: converts a legacy XML-derived order event
// into the modern event shape
interface LegacyOrderEvent {
  // XML-derived structure from the legacy system
  OrderId: string;
  CustRef: string;
  ItemQty: number;
  OrderTimestamp: string; // e.g. "2024-01-15T10:30:00Z"
}

interface ModernOrderEvent {
  orderId: string;
  customerId: string;
  quantity: number;
  createdAt: number; // Unix timestamp in ms
}

class OrderEventAdapter {
  transform(legacy: LegacyOrderEvent): ModernOrderEvent {
    return {
      orderId: legacy.OrderId,
      customerId: legacy.CustRef,
      quantity: legacy.ItemQty,
      createdAt: new Date(legacy.OrderTimestamp).getTime(),
    };
  }
}
```
Adapter vs Anti-Corruption Layer vs Facade
| Pattern | Primary Focus | Scope |
|---|---|---|
| Adapter | Interface compatibility (structural/format translation) | Single class or microservice component |
| Anti-Corruption Layer | Domain model protection + translation | Entire boundary between bounded contexts |
| Facade | Simplifying a complex subsystem interface | Single class hiding complexity |
| Translator (in ACL) | Semantic translation between domain concepts | Part of the ACL |
Adapter Is Often Embedded in ACLs
In practice, an Adapter is frequently one of the components inside an Anti-Corruption Layer. The ACL uses adapters (for protocol/format conversion), translators (for semantic mapping), and facades (for interface simplification) together. When someone asks you to design an ACL, you can describe the adapter as one of its internal building blocks.
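The composition can be shown in a few lines. This is a deliberately small sketch with invented types: the adapter handles only structural conversion (field names, casing), the translator handles semantic mapping (the partner's status vocabulary), and an ACL-level function composes them so domain code never sees the external shape.

```typescript
interface ExternalOrder { OrderId: string; Status: string; } // partner's shape
interface DomainOrder { id: string; state: "open" | "shipped"; } // our domain model

// Adapter: structural conversion only
const orderAdapter = (ext: ExternalOrder) => ({ id: ext.OrderId, rawStatus: ext.Status });

// Translator: semantic mapping of the partner's status vocabulary to ours
const statusTranslator = (raw: string): "open" | "shipped" =>
  raw === "DISPATCHED" ? "shipped" : "open";

// The ACL composes both; domain code only ever sees DomainOrder
function aclToDomain(ext: ExternalOrder): DomainOrder {
  const { id, rawStatus } = orderAdapter(ext);
  return { id, state: statusTranslator(rawStatus) };
}

console.log(aclToDomain({ OrderId: "A-1", Status: "DISPATCHED" }));
```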
When to Introduce a Dedicated Adapter Service
Sometimes the translation logic is substantial enough to warrant its own microservice, often called an integration service or adapter service. This is appropriate when:
- The translation is stateful or requires lookups (e.g., mapping external customer IDs to internal UUIDs via a mapping table).
- Multiple upstream producers need to be adapted before reaching downstream consumers.
- The adapter needs independent scaling (e.g., high-volume event transformation in a Kafka pipeline).
- The external API requires authentication, retry logic, and its own circuit breaker.
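The first bullet is the most common trigger. Here is a sketch of that stateful case, with an in-memory `Map` standing in for the real mapping store (a database or cache); all names and the example UUID are illustrative.

```typescript
// Stateful adapter: resolves external customer IDs to internal UUIDs
// via a mapping table before events reach downstream consumers.
class CustomerIdAdapter {
  constructor(private mapping: Map<string, string>) {}

  adapt(event: { extCustomerId: string; amount: number }) {
    const internalId = this.mapping.get(event.extCustomerId);
    if (!internalId) {
      // A dedicated adapter service might park the event for retry or
      // provision a new mapping here, rather than failing outright.
      throw new Error(`no mapping for ${event.extCustomerId}`);
    }
    // Also normalizes the amount: partner sends dollars, we store cents
    return { customerId: internalId, amountCents: Math.round(event.amount * 100) };
  }
}

const idAdapter = new CustomerIdAdapter(
  new Map([["EXT-001", "8a6e0804-2bd0-4672-b79d-d97027f9071a"]])
);
console.log(idAdapter.adapt({ extCustomerId: "EXT-001", amount: 12.5 }));
```

Because the lookup table grows with the customer base and the event volume can spike independently of the core services, this logic often earns its own deployment unit.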
Interview Tip
The Adapter pattern is unlikely to be the star of an interview, but it is important as a building block within larger answers. When discussing integrating with a third-party API, mention: 'I would wrap the third-party client in an adapter that normalizes their interface to our domain. This is also the first layer of our Anti-Corruption Layer.' This shows you understand how patterns compose and relate.