Software Architecture

How to Implement a Microservices Architecture in 2026


Introduction

In a world where apps need to scale horizontally and adapt quickly to business changes, microservices architecture is the standard in 2026. Unlike monoliths where everything is tightly coupled, microservices break the app into autonomous services that communicate via lightweight APIs (HTTP/REST or gRPC). This enables independent deployments, better resilience, and one team per service.

This intermediate tutorial guides you step-by-step through implementing a simple yet realistic microservices setup: a Users service for managing profiles and an Orders service that creates orders by calling Users to validate the user. We'll use Node.js with TypeScript and Express for the services, Docker for containerization, and Docker Compose for orchestration. By the end, you'll have a working system you can test locally and deploy to Kubernetes in production.

Why does it matter? Independent services let teams ship on their own cadence, isolate failures to a single component, and scale only the parts under load. Ready to get started?

Prerequisites

  • Node.js 20+ and npm/yarn
  • Docker and Docker Compose installed
  • Basic knowledge of TypeScript and Express
  • Editor like VS Code with Docker and TypeScript extensions
  • Ports 3000-3002 free

Create the Users Service

services/users/src/server.ts
import express, { Request, Response } from 'express';
import cors from 'cors';

const app = express();
const PORT = 3001;

app.use(cors());
app.use(express.json());

interface User {
  id: number;
  name: string;
  email: string;
}

const users: User[] = [
  { id: 1, name: 'Alice', email: 'alice@example.com' },
  { id: 2, name: 'Bob', email: 'bob@example.com' }
];

app.get('/users/:id', (req: Request, res: Response) => {
  const id = parseInt(req.params.id, 10);
  const user = users.find(u => u.id === id);
  if (user) {
    res.json(user);
  } else {
    res.status(404).json({ error: 'User not found' });
  }
});

app.listen(PORT, () => {
  console.log(`Users service running on port ${PORT}`);
});

This code implements a basic Users service with Express and TypeScript. It exposes a GET /users/:id endpoint that fetches a user by ID from an in-memory array (swap in a real database for production). CORS is enabled for browser clients; note that server-to-server calls are not subject to CORS. TypeScript's static types catch many errors at compile time rather than at runtime.
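The handler's lookup logic can be exercised without spinning up Express by extracting it into a pure function. A minimal sketch (the name findUser is illustrative, not part of the service above):

```typescript
// Mirrors the GET /users/:id handler as a pure, testable function.
interface User {
  id: number;
  name: string;
  email: string;
}

const users: User[] = [
  { id: 1, name: 'Alice', email: 'alice@example.com' },
  { id: 2, name: 'Bob', email: 'bob@example.com' },
];

// Returns the user or null; the route maps null to a 404 response.
function findUser(rawId: string): User | null {
  const id = parseInt(rawId, 10);
  if (Number.isNaN(id)) return null; // non-numeric ids also yield 404
  return users.find((u) => u.id === id) ?? null;
}
```

Keeping business logic out of the Express handler makes it trivial to unit test each service in isolation, which matters more as the number of services grows.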

Containerize the Users Service

Now, let's package this service in a Docker container to isolate it and make it portable. This simulates independent deployments, a key microservices principle.

Dockerfile for Users

services/users/Dockerfile
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY src/ ./src/

EXPOSE 3001

CMD ["npm", "start"]

This Dockerfile uses the Node 20 Alpine image to keep the image small. Copying package*.json before the source lets Docker cache the npm install layer, so dependencies are only reinstalled when they change. EXPOSE documents the port, and CMD starts the server via the start script defined in the package.json below.

Package.json for Users

services/users/package.json
{
  "name": "users-service",
  "version": "1.0.0",
  "scripts": {
    "start": "ts-node src/server.ts"
  },
  "dependencies": {
    "express": "^4.19.2",
    "cors": "^2.8.5"
  },
  "devDependencies": {
    "@types/express": "^4.17.21",
    "@types/cors": "^2.8.17",
    "@types/node": "^20.11.30",
    "ts-node": "^10.9.2",
    "typescript": "^5.4.3"
  }
}

This package.json defines a start script that runs the server directly with ts-node, which is convenient for development. In production, compile with tsc and run node dist/server.js instead.
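For that production build, a minimal tsconfig.json could look like the sketch below (the values are reasonable defaults, not requirements of this tutorial):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist",
    "rootDir": "src"
  },
  "include": ["src"]
}
```

With this in place, running tsc emits plain JavaScript into dist/, and the container's CMD can become node dist/server.js, removing ts-node from the runtime image.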

Create the Orders Service

services/orders/src/server.ts
import express, { Request, Response } from 'express';
import cors from 'cors';
import axios from 'axios';

const app = express();
const PORT = 3002;
const USERS_SERVICE_URL = process.env.USERS_SERVICE_URL ?? 'http://localhost:3001';

app.use(cors());
app.use(express.json());

interface Order {
  id: number;
  userId: number;
  product: string;
  quantity: number;
}

let orderId = 1;
const orders: Order[] = [];

app.post('/orders', async (req: Request, res: Response) => {
  const { userId, product, quantity } = req.body;

  try {
    // Synchronous call to the Users service to validate the user.
    // axios rejects on non-2xx responses, so a 404 lands in the catch block.
    await axios.get(`${USERS_SERVICE_URL}/users/${userId}`);

    const order: Order = { id: orderId++, userId, product, quantity };
    orders.push(order);
    res.status(201).json(order);
  } catch (error) {
    if (axios.isAxiosError(error) && error.response?.status === 404) {
      return res.status(404).json({ error: 'User not found' });
    }
    res.status(503).json({ error: 'Users service unavailable' });
  }
});

app.get('/orders', (req: Request, res: Response) => {
  res.json(orders);
});

app.listen(PORT, () => {
  console.log(`Orders service running on port ${PORT}`);
});

The Orders service handles POST /orders, validating the user with an HTTP call to Users before creating the order, and GET /orders. Because axios rejects on non-2xx responses, a 404 from Users surfaces in the catch block rather than in the success path. In production, prefer asynchronous messaging or add a Circuit Breaker; 'opossum' is a popular option for Node.js (Resilience4j is its Java counterpart).
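To make the Circuit Breaker idea concrete, here is a dependency-free sketch of the pattern that a library like 'opossum' automates (the class and its parameters are illustrative, not the library's API):

```typescript
// After `threshold` consecutive failures the breaker "opens" and rejects
// calls immediately, giving the downstream service time to recover.
// After `cooldownMs` it lets one trial call through (the "half-open" state).
class CircuitBreaker<T> {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private action: () => Promise<T>,
    private threshold = 3,
    private cooldownMs = 5000,
  ) {}

  async fire(): Promise<T> {
    if (this.failures >= this.threshold) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('Circuit open: failing fast');
      }
      // Half-open: fall through and allow one trial call.
    }
    try {
      const result = await this.action();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

In the Orders service you would wrap the axios call, e.g. `new CircuitBreaker(() => axios.get(url))`, so that a dead Users service fails fast instead of tying up every incoming request.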

Containerize and Orchestrate with Docker Compose

Repeat for Orders (same Dockerfile/package.json with adaptations). Use Docker Compose to launch services on a shared network, simulating a cluster.

Dockerfile for Orders

services/orders/Dockerfile
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY src/ ./src/

EXPOSE 3002

CMD ["npm", "start"]

Identical to Users but with port 3002. Ensures consistent builds.

Package.json for Orders

services/orders/package.json
{
  "name": "orders-service",
  "version": "1.0.0",
  "scripts": {
    "start": "ts-node src/server.ts"
  },
  "dependencies": {
    "express": "^4.19.2",
    "cors": "^2.8.5",
    "axios": "^1.6.7"
  },
  "devDependencies": {
    "@types/express": "^4.17.21",
    "@types/cors": "^2.8.17",
    "@types/node": "^20.11.30",
    "ts-node": "^10.9.2",
    "typescript": "^5.4.3"
  }
}

Adds axios for HTTP calls; axios ships with its own TypeScript types, so no separate @types package is needed. The scripts stay identical across services for consistency.

Docker Compose for the Full Setup

docker-compose.yml
version: '3.8'
services:
  users:
    build: ./services/users
    ports:
      - "3001:3001"
    networks:
      - microservices-net

  orders:
    build: ./services/orders
    ports:
      - "3002:3002"
    depends_on:
      - users
    networks:
      - microservices-net
    environment:
      - USERS_SERVICE_URL=http://users:3001

networks:
  microservices-net:
    driver: bridge

This docker-compose file puts both services on a shared bridge network, where Docker's built-in DNS resolves service names: Orders reaches Users at http://users:3001 through the USERS_SERVICE_URL environment variable. Note that 'depends_on' only controls start order; it does not wait for Users to actually be ready to accept connections. Run everything with 'docker-compose up --build'.
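Since depends_on does not guarantee readiness, a small retry-with-backoff wrapper bridges the gap during startup. A generic sketch (withRetry and its defaults are illustrative):

```typescript
// Retries `fn` up to `attempts` times, doubling the delay each round.
// Useful when Orders starts before Users is accepting connections.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  delayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

In the Orders handler this would wrap the validation call, e.g. `await withRetry(() => axios.get(url))`, smoothing over brief windows where Users is restarting or still booting.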

Test the Architecture

Run docker-compose up --build. Test with:

  • curl http://localhost:3002/orders — returns an empty array ([]) at first
  • curl -X POST http://localhost:3002/orders -H 'Content-Type: application/json' -d '{"userId":1,"product":"Laptop","quantity":1}' — Orders validates userId 1 against Users, stores the order, and returns it with status 201
Each service now builds, runs, and can be scaled independently.

Best Practices

  • Asynchronous communication: Prefer Kafka/RabbitMQ for decoupling over synchronous HTTP.
  • Circuit Breaker: Implement with 'opossum' to avoid failure cascades.
  • Service Discovery: Use Consul or Kubernetes for dynamic URLs.
  • Observability: Prometheus + Grafana for metrics and logs.
  • Database per service: Dedicated DB (Postgres for Users, Mongo for Orders).
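The "asynchronous communication" bullet amounts to replacing the synchronous Users call with events. A sketch of what that contract could look like (the event shape and in-memory bus are illustrative; in production a broker such as Kafka or RabbitMQ sits where the bus is):

```typescript
// Event contract: Orders publishes this instead of calling Users directly,
// and any interested service consumes it at its own pace.
interface OrderCreatedEvent {
  type: 'OrderCreated';
  orderId: number;
  userId: number;
  product: string;
}

type Handler = (event: OrderCreatedEvent) => void;

// In-memory stand-in for a message broker, to show the publish/subscribe shape.
class InMemoryBus {
  private handlers: Handler[] = [];

  subscribe(handler: Handler): void {
    this.handlers.push(handler);
  }

  publish(event: OrderCreatedEvent): void {
    this.handlers.forEach((h) => h(event));
  }
}
```

The key property is that the publisher never learns who consumes the event, so services can be added or removed without touching Orders.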

Common Pitfalls to Avoid

  • Tight coupling: Avoid shared DBs or libs; use API contracts (OpenAPI).
  • Failure handling: Without retry/timeout, one down service crashes everything.
  • Internal security: Add mTLS or JWT even on private networks.
  • Uneven scaling: Monitor and auto-scale per service (Docker Swarm/K8s).
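On the failure-handling pitfall: without a timeout, one hung Users instance makes every Orders request hang with it. Node's built-in fetch accepts an AbortSignal, so a per-call deadline is a one-liner (axios offers the equivalent through its `timeout` option):

```typescript
// Aborts the request if it has not completed within `ms` milliseconds.
// AbortSignal.timeout is available in Node.js 17.3+.
async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
  return fetch(url, { signal: AbortSignal.timeout(ms) });
}
```

Combined with a retry or circuit-breaker policy, a bounded timeout turns a stalled dependency into a fast, handleable error instead of a cascading outage.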

Next Steps

Master Kubernetes for production with our Learni trainings. Resources: Microservices.io, Docker Compose docs, 'Building Microservices' by Sam Newman.