
How to Implement Event Sourcing in 2026


Introduction

Event sourcing revolutionizes how we model data in complex applications by storing not the final state of an object, but the sequence of events that led to it. Imagine a bank account: instead of just keeping the current balance (700€), you archive every transaction (a deposit of 1000€, a withdrawal of 300€, and so on) and can recalculate the balance at any time.

Why adopt this approach in 2026? Distributed systems are exploding in complexity with cloud-native tech and AI. Event sourcing provides infinite traceability, resilience to failures (via event replay), and horizontal scalability through asynchronous projections. It integrates perfectly with CQRS (Command Query Responsibility Segregation) and Domain-Driven Design (DDD), essential patterns for microservices.

This intermediate tutorial equips you to design robust event-driven architectures. We break down the theory with concrete analogies, real-world case studies (Netflix, Uber), and practical frameworks. By the end, you'll know when and how to implement event sourcing for tangible benefits: reduced downtime and instant, complete audits. Ready to turn your data into a living history?

Prerequisites

  • Solid knowledge of Domain-Driven Design (DDD): aggregates, entities, bounded contexts.
  • Familiarity with CQRS patterns and NoSQL databases (for projections).
  • Experience modeling complex business domains (e-commerce, finance).
  • Basics of event-driven architecture (pub/sub, Kafka-like streams).

Foundations: What is Event Sourcing?

Event sourcing is built on a simple principle: every state change is an immutable event. Unlike traditional CRUD, where you overwrite state (UPDATE table SET balance=500), it's append-only: you add an event to the log.

Analogy: a VHS tape. The current state is the frame on screen; to go back, you rewind and replay the frames (events). Case study: in an event store such as EventStoreDB, an 'Order' aggregate stores [OrderCreated], [ItemAdded], [PaymentFailed]. The current state is computed via fold/replay: state = fold(events, initial_state).

Immediate benefits:

  • Perfect audit: Who did what, when? No 'magic' in the data.
  • Temporality: States at any date via partial replay.
  • Debugging: Reproduce a bug by replaying up to the failure point.

In e-commerce, a 'Cart' goes from {} to {product1: 2} via [CartInitialized], [ProductAdded]. No UPDATE, just append. This enforces domain-centric modeling, aligned with DDD.
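The fold/replay idea above can be sketched in a few lines of Python. The event shapes are hypothetical, chosen to mirror the cart example:

```python
from functools import reduce

# Hypothetical event shapes mirroring the cart example above.
events = [
    {"type": "CartInitialized", "data": {}},
    {"type": "ProductAdded", "data": {"product": "product1", "qty": 2}},
]

def apply_event(state, event):
    """Pure function: current state + one event -> next state."""
    if event["type"] == "CartInitialized":
        return {}
    if event["type"] == "ProductAdded":
        d = event["data"]
        new_state = dict(state)
        new_state[d["product"]] = new_state.get(d["product"], 0) + d["qty"]
        return new_state
    return state  # unknown event types leave the state unchanged

# state = fold(events, initial_state)
state = reduce(apply_event, events, {})
# state == {"product1": 2}
```

The key property: apply_event never mutates history, it only derives state from it. Replaying the same log always yields the same state.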

Differences from Traditional CRUD Approaches

CRUD vs Event Sourcing: Comparative table for clarity.

| Aspect      | CRUD                          | Event Sourcing                   |
|-------------|-------------------------------|----------------------------------|
| Storage     | Final state (mutable)         | Sequence of events (immutable)   |
| Query       | Direct on DB                  | Materialized projections         |
| History     | Separate logs (possible loss) | Native, replayable               |
| Scalability | ACID locks                    | Partitioned events               |
| Complexity  | Simple for basic OLTP         | High, but powerful for OLAP      |

Concrete example: Inventory management. CRUD: UPDATE stock SET qty=10 WHERE id=1. Problem: 'Why 10?' Event Sourcing: [StockInitialized:100], [Sale: -20], [Restock: +30] → qty=110. For 'current stock' queries, project to a SQL view.
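A minimal sketch of the inventory example, assuming events are simple (type, delta) pairs:

```python
# Hypothetical stock events mirroring the example above.
events = [
    ("StockInitialized", 100),
    ("Sale", -20),
    ("Restock", 30),
]

def current_stock(events):
    """Derive 'current stock' by replaying the log instead of reading a mutable qty column."""
    qty = 0
    for event_type, delta in events:
        if event_type == "StockInitialized":
            qty = delta
        else:
            qty += delta
    return qty

assert current_stock(events) == 110  # 'Why 110?' is answered by the log itself
```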

Hybrid migration: start with dual writes (CRUD + events), then shift to pure event sourcing. Amazon reportedly cut inconsistencies by 60% in distributed inventories this way. Event sourcing shines when business rules evolve: need a new balance calculation? Just add a new projector, without touching the source log.

Key Components of the Architecture

An event sourcing system revolves around 4 pillars:

  1. Event Store: Append-only log (EventStoreDB, Kafka). Events: {id, aggregateId, type, data, timestamp, metadata}.
  2. Command Side: Receives commands (CreateOrder), validates via aggregate, emits events.
  3. Projections: Subscribers to the stream materialize read models (MongoDB for fast queries).
  4. Snapshots: Periodic states to speed up replay (every 1000 events).

Uber case study: trips modeled as [TripStarted], [PositionUpdated], [TripEnded]. Projections: a real-time map (GPS polyline), billing (distance sum). Scalability: 1M events/s via Kafka partitions keyed by driverId.

Flow: Command → Aggregate (load snapshot + replay) → Events → Event Store → Async Projections. This flow yields eventual consistency, a good fit for microservices.
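The Event Store pillar can be illustrated with an in-memory sketch. The API below is invented for illustration and does not match EventStoreDB's or Kafka's actual client APIs:

```python
import time
import uuid
from collections import defaultdict

class InMemoryEventStore:
    """Minimal append-only log keyed by aggregateId (illustrative only)."""

    def __init__(self):
        self._streams = defaultdict(list)

    def append(self, aggregate_id, event_type, data):
        """Append one event; events are never updated or deleted."""
        event = {
            "id": str(uuid.uuid4()),
            "aggregateId": aggregate_id,
            "type": event_type,
            "data": data,
            "timestamp": time.time(),
            "version": len(self._streams[aggregate_id]) + 1,
        }
        self._streams[aggregate_id].append(event)
        return event

    def read_stream(self, aggregate_id, from_version=1):
        """Read events in order, optionally starting after a snapshot's version."""
        return [e for e in self._streams[aggregate_id] if e["version"] >= from_version]
```

from_version is what makes snapshots cheap: after loading a snapshot taken at version N, you only read events from N + 1 onward.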

Lifecycle of an Event-Sourced Aggregate

Step by step:

  1. Loading: Fetch snapshot + events since lastVersion.
  2. Applying command: Aggregate.apply(command) → generates new events.
  3. Validation: Check invariants (e.g., balance >=0 before Withdrawal).
  4. Persistence: Append event batch with incremental version (optimistic concurrency).
  5. Publication: Notify projectors via pub/sub.

Analogy: a theater actor. State = the current role; events = lines learned. Replay = re-rehearsing the script.

Complex case: 'Hotel Booking'. Events: [RoomBooked], [Cancelled], [ReBooked]. Invariant: at most one active booking. The 'Cancel' command: if a booking is active, emit [Cancelled]. Replay ensures consistency. Netflix reportedly handles billing this way with zero revenue loss during rollbacks.
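The lifecycle above can be sketched for the hotel booking case. This is a simplified model, assuming events are plain strings:

```python
class BookingAggregate:
    """Sketch of the lifecycle: load (replay) -> apply command -> validate -> emit events."""

    def __init__(self):
        self.active = False  # invariant target: at most one active booking
        self.version = 0

    def replay(self, events):
        """Step 1 (loading): rebuild state from past events."""
        for event in events:
            self._apply(event)

    def _apply(self, event):
        """State transition; shared by replay and by newly emitted events."""
        if event in ("RoomBooked", "ReBooked"):
            self.active = True
        elif event == "Cancelled":
            self.active = False
        self.version += 1

    def book(self):
        """Steps 2-3: validate the invariant, then emit the event."""
        if self.active:
            raise ValueError("room already booked")
        self._apply("RoomBooked")
        return ["RoomBooked"]

    def cancel(self):
        if not self.active:
            return []  # nothing to cancel; no event emitted
        self._apply("Cancelled")
        return ["Cancelled"]
```

Note that commands validate and emit events, while _apply only transitions state: the same transition code runs during replay and during live command handling, which is what keeps the two paths consistent.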

Projections and CQRS Separation

Projections transform the event log into optimized read models. CQRS separates writes (events) from reads (views).

Types:

  • Live projections: Real-time (dashboards).
  • Ad-hoc: One-off queries (replay to a date).
  • Materialized: Denormalized DBs (Elasticsearch for search).

Practical framework:
  • Define projectors: For 'OrderSummary', on [OrderCreated] → insert {id, total}.
  • Fault tolerance: If projector crashes, resume from last checkpoint.
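A minimal projector sketch showing both ideas, the read model and the checkpoint (event shapes and field names are assumptions for illustration):

```python
class OrderSummaryProjector:
    """Materializes an 'OrderSummary' read model and tracks a checkpoint for crash recovery."""

    def __init__(self):
        self.summaries = {}   # read model: orderId -> {"id", "total"}
        self.checkpoint = 0   # position of the last processed event in the stream

    def handle(self, position, event):
        if position <= self.checkpoint:
            return  # already processed: safe resume after a crash or redelivery
        if event["type"] == "OrderCreated":
            data = event["data"]
            self.summaries[data["id"]] = {"id": data["id"], "total": data["total"]}
        # Events this projector doesn't care about still advance the checkpoint.
        self.checkpoint = position
```

On restart, the projector asks the stream for events after its persisted checkpoint; in a real system the checkpoint would be stored atomically with the read model.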

Booking.com study: projections for analytics (conversion rates) reportedly scale to 10k events/sec without impacting writes. Downside: potential lag (seconds), offset in the UX (loading skeletons). In 2026, with native event streaming (Kafka Streams), serverless projections are becoming standard.

Essential Best Practices

  • Small, atomic events: One event = one unique business fact (not generic [OrderUpdated]).
  • Event versioning: Add v2 in parallel (schema evolution without downtime).
  • Smart snapshots: Every 500-1000 events, or by size (1MB).
  • Idempotence: Projectors handle duplicates via unique eventId.
  • DDD Bounded Contexts: One store per context (avoids monolithic event stream).
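The snapshot practice can be sketched as follows (state and event shapes are hypothetical):

```python
def load_aggregate(snapshot, tail_events, apply_event):
    """Rebuild state from a snapshot plus the events recorded after it,
    instead of replaying the full log from the beginning."""
    state, version = snapshot  # state dict and stream version at snapshot time
    for event in tail_events:
        state = apply_event(state, event)
        version += 1
    return state, version

# Hypothetical usage: an account snapshotted at version 1000 with balance 700,
# followed by two later amount-delta events.
apply_delta = lambda state, delta: {"balance": state["balance"] + delta}
state, version = load_aggregate(({"balance": 700}, 1000), [100, -50], apply_delta)
# state == {"balance": 750}, version == 1002
```

Snapshots are an optimization, never the source of truth: if a snapshot is lost or its format changes, the full log can always rebuild it.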

Common Mistakes to Avoid

  • Over-modeling: Don't event-source everything (only rich aggregates). Pitfall: perf hit for simple lookups → use hybrid CRUD.
  • Anemic events: Minimal data only (e.g., {qty:2}, not UI state).
  • Ignoring concurrency: Without version checks, race conditions → use expectedVersion.
  • Synchronous projections: Blocks writes → always async with dead-letter queues.
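The expectedVersion guard can be sketched with an in-memory stream (illustrative only):

```python
class ConcurrencyError(Exception):
    pass

def append_events(stream, new_events, expected_version):
    """Optimistic concurrency: refuse the append if another writer
    advanced the stream since we loaded the aggregate."""
    if len(stream) != expected_version:
        raise ConcurrencyError(
            f"expected version {expected_version}, stream is at {len(stream)}"
        )
    stream.extend(new_events)
    return len(stream)  # the stream's new version
```

On a ConcurrencyError, the caller reloads the aggregate (replaying the newly appended events), re-validates the command, and retries.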

Further Reading

Dive into tools: EventStoreDB, Axon Framework, Kafka for streams.

Expert training: Discover our Learni courses on event-driven architecture for hands-on DDD + Event Sourcing.