How to Orchestrate Containers with Docker Compose in 2026

Introduction

Docker Compose is an essential tool for orchestrating multiple Docker containers with a single command, perfect for complex development and testing environments. Unlike manual docker run commands, it handles dependencies, persistent volumes, isolated networks, and scaling through a declarative YAML file.

Why use it in 2026? Modern apps are multi-container: a Node.js backend API, PostgreSQL database, Redis cache. Docker Compose faithfully replicates these stacks locally, speeding up developer onboarding and eliminating 'it works on my machine' problems. This intermediate tutorial guides you through building a complete stack: an Express server connected to Postgres and Redis, with data persistence and secure exposure.

By the end, you'll master volumes, networks, depends_on, and overrides for production-like environments. Estimated time: 20 minutes for a functional setup.

Prerequisites

  • Docker Desktop (or Docker Engine 27+) with Compose v2 built in; verify with the commands below
  • Basic Docker knowledge (images, containers)
  • Node.js 20+ installed locally to test the app
  • Code editor (VS Code recommended with Docker extension)
  • Unix-like terminal (WSL2 on Windows)
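
A quick way to confirm the toolchain is ready (a minimal check; exact version strings will vary):

docker --version          # Docker Engine, e.g. 27.x
docker compose version    # Compose v2, e.g. v2.x
node --version            # Node.js, e.g. v20.x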

Create the Node.js Application Dockerfile

Dockerfile
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000

CMD ["node", "server.js"]
USER node

This single-stage Dockerfile keeps the final image small thanks to the Alpine base. Copying package*.json before the source lets Docker's layer cache skip reinstalling dependencies when only application code changes; npm ci then installs production dependencies only. Switching to the non-root 'node' user hardens the container. Save this as Dockerfile in a project folder.
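
Since COPY . . pulls in the whole build context, a .dockerignore next to the Dockerfile keeps local artifacts out of the image (the entries below are a common baseline; adjust to your project):

.dockerignore
node_modules
npm-debug.log
.git
.env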

Prepare the Node.js Application

Create a project folder (mkdir my-stack && cd my-stack). Add a simple Express server that interacts with Postgres and Redis. This code demonstrates the connections: DB queries and Redis caching. Environment variables will be injected by Compose.
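
The setup boils down to a few commands (assuming npm is available and using the my-stack folder name from above):

mkdir my-stack && cd my-stack
npm init -y
npm install express pg redis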

Implement the Express Server

server.js
const express = require('express');
const { Pool } = require('pg');
const redis = require('redis');

const app = express();
app.use(express.json());

// Connections via env vars injected by Compose
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const client = redis.createClient({ url: process.env.REDIS_URL });
client.connect().catch(console.error);

app.get('/users', async (req, res) => {
  const cached = await client.get('users');
  if (cached) return res.json(JSON.parse(cached));

  const { rows } = await pool.query('SELECT * FROM users');
  await client.set('users', JSON.stringify(rows), { EX: 60 });
  res.json(rows);
});

app.listen(3000, () => console.log('Server on 3000'));

module.exports = app;

This server exposes /users: it checks the Redis cache first; on a miss it queries Postgres and caches the result for 60 seconds. The DB and Redis URLs come from process.env, which Compose will provide.
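
To smoke-test the server outside Compose, you can point it at locally running Postgres and Redis instances (hypothetical URLs; adjust host and credentials to your setup):

DATABASE_URL=postgresql://postgres:password@localhost:5432/mydb \
REDIS_URL=redis://localhost:6379 \
node server.js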

Generate package.json

package.json
{
  "name": "docker-compose-app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.19.2",
    "pg": "^8.13.0",
    "redis": "^4.7.0"
  }
}

Minimal production package.json: runtime dependencies only, no devDependencies. The start script matches the Dockerfile's CMD. Run npm install once locally to generate package-lock.json, which npm ci requires inside the image; Docker rebuilds node_modules during the build.

Define the Base Stack with docker-compose.yml

Now orchestrate everything: web (Node), db (Postgres), redis. Expose ports, map volumes for DB persistence, and define an internal network. depends_on controls startup order (though not readiness; see Common Errors below).

Base docker-compose.yml File

docker-compose.yml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - app-network

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge

Declarative YAML: build: . uses the local Dockerfile, and on the shared network service names double as DNS hostnames, so the web container reaches Postgres at db:5432 and Redis at redis:6379. The named volume persists the DB across restarts, and app-network isolates traffic. Launch with docker compose up -d.
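
A typical bring-up sequence looks like this (output formats vary slightly between Compose versions):

docker compose up -d --build   # build the web image and start all services
docker compose ps              # all three services should be listed as running
docker compose logs -f web     # follow the Node.js logs (Ctrl+C to stop)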

Initialize the Database

Note: Postgres starts empty. Add an init script to create the users table, bind-mounted into the container as an SQL file.

Database Initialization SQL Script

init.sql
CREATE TABLE IF NOT EXISTS users (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100),
  email VARCHAR(100) UNIQUE
);

INSERT INTO users (name, email) VALUES
  ('Alice', 'alice@example.com'),
  ('Bob', 'bob@example.com')
ON CONFLICT (email) DO NOTHING;

This script creates the table and inserts test data. Mount it in Postgres's /docker-entrypoint-initdb.d/ for automatic execution on first startup.

Updated docker-compose.yml with DB Init

docker-compose.yml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - app-network

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge

The added ./init.sql bind mount is executed by Postgres automatically, which is ideal for seeding data. Scripts in /docker-entrypoint-initdb.d/ only run when the data directory is empty, so run docker compose down -v first if the stack has already been started. Test with docker compose up -d && curl http://localhost:3000/users.
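
With the seed data from init.sql, the endpoint should return something like the following (formatting condensed for readability):

curl http://localhost:3000/users
# [{"id":1,"name":"Alice","email":"alice@example.com"},
#  {"id":2,"name":"Bob","email":"bob@example.com"}]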

Essential Commands to Manage the Stack

Use docker compose (v2): up -d for detached mode, logs -f to follow logs, down -v to stop everything and remove volumes (note: this deletes DB data).

Management Bash Script

manage.sh
#!/bin/bash
case "$1" in
  up)
    docker compose up -d
    ;;
  down)
    docker compose down -v
    ;;
  logs)
    docker compose logs -f web
    ;;
  test)
    curl http://localhost:3000/users || echo "Error"
    ;;
  *)
    echo "Usage: ./manage.sh {up|down|logs|test}"
    ;;
esac

Wrapper script for common workflows: chmod +x manage.sh && ./manage.sh up. Simplifies daily operations. Add it to your project.

Best Practices

  • Use .env files: cp .env.example .env for secrets (DB_PASS=xxx), reference with ${DB_PASS} in YAML.
  • Conditional depends_on: add condition: service_healthy together with a healthcheck on the dependency (see the sketch after this list).
  • Named volumes: Always for persistence; bind mounts for dev (hot-reload).
  • Compose overrides: docker-compose.override.yml for dev vs. prod differences.
  • Scaling: docker compose up --scale web=3 to test high availability (remove the fixed host port mapping first, or the replicas will collide on port 3000).
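
A minimal sketch of a healthcheck with conditional depends_on, reusing the service names from above (pg_isready ships with the Postgres image):

docker-compose.yml (excerpt)
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5

  web:
    build: .
    depends_on:
      db:
        condition: service_healthy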

Common Errors to Avoid

  • depends_on without healthcheck: a started container ≠ a database ready to accept connections. Add a healthcheck to the db service (see the Best Practices sketch above).
  • Publicly exposed ports: Use ports: - "127.0.0.1:3000:3000" for localhost only.
  • Missing volumes: without a named volume, DB data lives inside the container and is lost when it is removed. Always declare named volumes for stateful services.
  • Poor images: Prefer -alpine for size/security, pinned tags (postgres:16-alpine).

Next Steps

Move to Kubernetes with Kompose (kompose convert), or explore Docker Swarm for production. Read the official Docker Compose docs. Check out our Learni trainings on DevOps and Containerization to master Helm, Terraform, and CI/CD.