RabbitMQ & Traditional Message Brokers
RabbitMQ architecture: exchanges, queues, bindings, routing keys. How it differs from Kafka and when to choose it.
RabbitMQ Architecture
RabbitMQ is an AMQP-based message broker built around a powerful routing model. Unlike Kafka, where producers write directly to topic partitions, RabbitMQ adds a layer of indirection: producers publish to exchanges, exchanges route messages to queues via bindings, and consumers subscribe to queues. This routing flexibility is RabbitMQ's defining strength.
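That publish-to-exchange flow can be sketched as a minimal in-memory model (illustrative only, not the pika API; all names here are made up): the producer names only an exchange and a routing key, the exchange consults its bindings, and the matching queues receive the message.

```python
from collections import defaultdict

class DirectExchange:
    """Toy model of a direct exchange: routes by exact routing-key match."""
    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> bound queue names
        self.queues = defaultdict(list)    # queue name -> buffered messages

    def bind(self, queue: str, routing_key: str) -> None:
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key: str, body: str) -> None:
        # The producer never names a queue; the exchange decides via bindings.
        for queue in self.bindings[routing_key]:
            self.queues[queue].append(body)

ex = DirectExchange()
ex.bind('email_queue', 'email')
ex.bind('sms_queue', 'sms')
ex.publish('email', 'welcome message')  # routed only to email_queue
ex.publish('billing', 'invoice')        # no binding -> message is dropped
```

A fanout exchange would ignore the routing key and copy the message to every bound queue; in real RabbitMQ you pick the behavior with `channel.exchange_declare(exchange_type=...)`.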
Exchange Types
| Exchange Type | Routing Logic | Use Case |
|---|---|---|
| Direct | Route by exact routing key match | Simple queue distribution, point-to-point |
| Fanout | Broadcast to all bound queues (ignores key) | Notifications, cache invalidation |
| Topic | Route by pattern matching (*, #) | Category-based routing, multi-tenant systems |
| Headers | Route by message header attributes | Complex filtering without key structure |
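Topic exchanges match dot-separated routing keys against patterns in which `*` matches exactly one word and `#` matches zero or more words. A sketch of that matching logic (a hypothetical helper for illustration, not part of pika):

```python
def topic_match(pattern: str, routing_key: str) -> bool:
    """True if a topic-exchange pattern matches a routing key.
    '*' matches exactly one word; '#' matches zero or more words."""
    def match(pat, key):
        if not pat:
            return not key                 # both exhausted -> match
        if pat[0] == '#':
            # '#' consumes zero words, or one word and then retries
            return match(pat[1:], key) or (bool(key) and match(pat, key[1:]))
        if not key:
            return False
        return (pat[0] == '*' or pat[0] == key[0]) and match(pat[1:], key[1:])
    return match(pattern.split('.'), routing_key.split('.'))

print(topic_match('order.*', 'order.created'))     # True
print(topic_match('order.*', 'order.eu.created'))  # False: '*' is one word
print(topic_match('order.#', 'order.eu.created'))  # True: '#' spans words
```

This is why topic exchanges suit multi-tenant systems: a consumer can bind `tenant_a.#` and receive every message for that tenant without the producer knowing who is listening.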
Message Acknowledgment and Durability
RabbitMQ supports manual acknowledgment (the consumer explicitly acks after processing) and automatic acknowledgment (the message is acked as soon as it is delivered, before processing). For durability you need both: durable queues (the queue survives a broker restart) and persistent messages (message bodies written to disk). Without both, messages are lost on restart.
```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare durable queue (survives broker restart)
channel.queue_declare(queue='tasks', durable=True)

def callback(ch, method, properties, body):
    print(f"Processing: {body}")
    # ... do work ...
    # Manually ack AFTER successful processing
    ch.basic_ack(delivery_tag=method.delivery_tag)

# prefetch_count=1: don't dispatch a new message until this one is acked
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='tasks', on_message_callback=callback)
channel.start_consuming()
```

Use basic_qos(prefetch_count=1) for fair dispatch
By default, RabbitMQ dispatches messages round-robin without knowing whether a consumer is busy. Setting prefetch_count=1 tells RabbitMQ not to give a consumer more than one unacked message. This ensures busy consumers don't accumulate a backlog while idle consumers wait.
RabbitMQ vs Kafka: Head-to-Head
| Dimension | RabbitMQ | Kafka |
|---|---|---|
| Paradigm | Smart broker, dumb consumer | Dumb broker, smart consumer |
| Message retention | Deleted after ack | Retained for configured period |
| Ordering | Per-queue FIFO | Per-partition ordering |
| Throughput | ~50K msgs/sec per node | Millions of msgs/sec |
| Routing | Rich (exchanges, bindings) | Simple (topic + partition key) |
| Replay | Not supported | Full replay within retention window |
| Consumer tracking | Broker tracks acks | Consumer tracks offsets |
| Best for | Task queues, complex routing, RPC | Event streaming, high throughput, replay |
Amazon SQS and Azure Service Bus
For teams that want managed traditional queuing without running their own broker, the cloud-native options are strong choices. Amazon SQS is a fully managed queue service with Standard (at-least-once delivery, best-effort ordering) and FIFO (exactly-once processing, strict ordering) variants. Azure Service Bus offers queues and topics with rich features such as sessions, dead-lettering, message deferral, and scheduled delivery.
SQS Standard vs FIFO
SQS Standard queues offer virtually unlimited throughput but may deliver messages out of order and occasionally deliver duplicates. SQS FIFO queues guarantee exactly-once processing and strict ordering within a message group, but are capped at 300 messages per second by default (3,000 with batching; high-throughput mode raises these limits further). Choose Standard for high-throughput tasks where ordering doesn't matter; choose FIFO for financial transactions or order sequencing.
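Part of how FIFO deduplication works: with content-based deduplication enabled, SQS derives the deduplication ID as a SHA-256 hash of the message body, so identical bodies sent within the five-minute deduplication window collapse to a single delivery. The equivalent computation (a sketch of the ID derivation only; real sends go through an SDK call with a message group ID):

```python
import hashlib

def content_dedup_id(body: str) -> str:
    # SQS FIFO content-based deduplication: SHA-256 hash of the message body
    return hashlib.sha256(body.encode('utf-8')).hexdigest()

# Two sends with the same body inside the window look like duplicates...
print(content_dedup_id('order-42:charge') == content_dedup_id('order-42:charge'))  # True
# ...while any difference in the body yields a distinct ID.
print(content_dedup_id('order-42:charge') == content_dedup_id('order-43:charge'))  # False
```

The practical consequence: if two logically different messages can share an identical body, supply an explicit deduplication ID rather than relying on content hashing.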
When to Choose RabbitMQ or SQS
- Task queues — Background jobs (image processing, email sending, PDF generation) where each task should be processed once
- Complex routing — Route messages to different consumers based on content or type without custom code
- RPC over messaging — Request/reply patterns where you need a response from the consumer
- Moderate throughput — Up to tens of thousands of messages per second
- Operational simplicity — SQS requires zero broker management; RabbitMQ is simpler than Kafka
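The RPC-over-messaging pattern above is usually built from two queues and a correlation ID: the client publishes a request carrying `reply_to` (a private reply queue) and `correlation_id`, and matches the eventual response back to the pending call. A broker-free sketch of that correlation logic (illustrative only; queue names and helpers here are made up, not the pika API):

```python
import uuid
from collections import deque

queues = {'rpc.requests': deque(), 'amq.reply.client1': deque()}

def client_call(payload: str) -> str:
    """Publish a request tagged with reply_to and a fresh correlation_id."""
    corr_id = str(uuid.uuid4())
    queues['rpc.requests'].append({
        'correlation_id': corr_id,          # as in AMQP BasicProperties
        'reply_to': 'amq.reply.client1',    # the caller's private reply queue
        'body': payload,
    })
    return corr_id

def server_step() -> None:
    """Consume one request and publish the reply to the caller's queue."""
    req = queues['rpc.requests'].popleft()
    reply = {'correlation_id': req['correlation_id'],
             'body': req['body'].upper()}   # stand-in for real work
    queues[req['reply_to']].append(reply)

corr_id = client_call('ping')
server_step()
response = queues['amq.reply.client1'].popleft()
assert response['correlation_id'] == corr_id  # match reply to pending request
print(response['body'])  # PING
```

The correlation ID is what makes this safe under concurrency: a client with several requests in flight uses it to pair each reply with the right caller instead of assuming replies arrive in order.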
Interview Tip
When comparing RabbitMQ and Kafka in an interview, the key insight is: RabbitMQ is a 'smart broker' that routes and tracks message delivery; Kafka is a 'dumb broker' that just appends to a log and lets consumers track their own position. This makes Kafka scale better but gives RabbitMQ more routing flexibility. If you hear 'task queue' or 'background jobs,' lean toward RabbitMQ/SQS. If you hear 'event streaming,' 'high throughput,' or 'replay,' lean toward Kafka.