
How to Implement Advanced PostgreSQL in 2026


Introduction

PostgreSQL remains the leading open-source relational DBMS in 2026, offering capabilities MySQL lacks or only partially matches: transactional DDL, JSONB documents, built-in full-text search, and a deep extension ecosystem. This intermediate tutorial guides you through implementing a complete database for an e-commerce app: normalized schemas, relationships, partial indexes, audit triggers, materialized views, and PL/pgSQL procedures.

Why does it matter? A large share of production bottlenecks trace back to poorly optimized databases. Every step here is hands-on: ready-to-run Docker configuration and SQL scripts executable via psql. By the end, you'll have a scalable database capable of handling 10k+ orders per day with built-in auditing. Ideal for fullstack devs or data engineers looking for a reference they can return to.

Prerequisites

  • Docker and Docker Compose installed
  • psql client (via brew install libpq on macOS or equivalent)
  • Basic SQL knowledge (SELECT, INSERT, JOIN)
  • Linux/macOS/Windows with WSL environment
  • 4GB free RAM for testing

Docker Compose Installation

docker-compose.yml
version: '3.8'
services:
  postgres:
    image: postgres:17-alpine
    container_name: postgres_app
    restart: always
    environment:
      POSTGRES_DB: ecommerce
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: securepass123
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
volumes:
  postgres_data:

This file launches PostgreSQL 17 in an isolated container with the 'ecommerce' database pre-created. The named volume persists data across restarts; init.sql runs automatically on first startup only. In production, avoid publishing the port on all interfaces; bind it to localhost or a private network instead.

Database Initialization Script

init.sql
CREATE SCHEMA IF NOT EXISTS shop;

CREATE TABLE shop.users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(100) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE shop.products (
  id SERIAL PRIMARY KEY,
  name VARCHAR(200) NOT NULL,
  price DECIMAL(10,2) NOT NULL,
  stock INTEGER DEFAULT 0
);

CREATE TABLE shop.orders (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES shop.users(id) ON DELETE CASCADE,
  total DECIMAL(10,2) NOT NULL,
  status VARCHAR(20) DEFAULT 'pending',
  order_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

This script creates a 'shop' schema and three related tables with foreign keys and timestamp defaults. ON DELETE CASCADE removes a user's orders automatically when the user is deleted, preventing orphaned rows. Test it with docker-compose up -d && psql -h localhost -U admin -d ecommerce.
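To confirm the schema landed, you can list the new tables from information_schema (a quick sanity check, not part of the tutorial's scripts):

```sql
-- List the tables created in the shop schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'shop'
ORDER BY table_name;
-- Expected: orders, products, users
```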

Test Data Insertion

seed.sql
INSERT INTO shop.users (email, name) VALUES
  ('alice@example.com', 'Alice Dupont'),
  ('bob@example.com', 'Bob Martin');

INSERT INTO shop.products (name, price, stock) VALUES
  ('iPhone 17', 999.99, 50),
  ('MacBook Pro', 2499.99, 20);

INSERT INTO shop.orders (user_id, total, status) VALUES
  (1, 999.99, 'completed'),
  (2, 4999.98, 'pending');

These inserts populate the tables with 2 users, 2 products, and 2 orders. Verify with SELECT * FROM shop.orders;. For massive volumes (>1M rows), prefer COPY, which bypasses per-row INSERT overhead.
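A bulk load with COPY might look like this (the CSV path is illustrative; adjust it to your setup):

```sql
-- Server-side bulk load; the file must be readable by the postgres process
COPY shop.products (name, price, stock)
FROM '/tmp/products.csv'
WITH (FORMAT csv, HEADER true);
```

If the file lives on your client machine rather than in the container, use psql's \copy variant instead, which streams the file over the connection.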

Advanced Queries with JOINs and Aggregates

queries.sql
SELECT 
  u.name,
  COUNT(o.id) as order_count,
  SUM(o.total) as total_spent,
  AVG(o.total) as avg_order
FROM shop.users u
LEFT JOIN shop.orders o ON u.id = o.user_id
GROUP BY u.id, u.name
HAVING COUNT(o.id) > 0
ORDER BY total_spent DESC;

-- Window function for ranking
SELECT 
  p.name,
  p.price,
  RANK() OVER (ORDER BY p.price DESC) as price_rank
FROM shop.products p;

This query joins users and orders for per-customer stats. Note that HAVING COUNT(o.id) > 0 filters out users with no orders, so the LEFT JOIN effectively behaves like an INNER JOIN here. The RANK() window function ranks products by price without a subquery, which keeps the plan simple for analytics dashboards.
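If you instead want customers with zero orders to appear in the report, drop the HAVING clause and default the aggregates; one possible variant:

```sql
SELECT
  u.name,
  COUNT(o.id) AS order_count,              -- COUNT(o.id) ignores the NULLs a LEFT JOIN produces
  COALESCE(SUM(o.total), 0) AS total_spent -- show 0 instead of NULL for order-less users
FROM shop.users u
LEFT JOIN shop.orders o ON u.id = o.user_id
GROUP BY u.id, u.name
ORDER BY total_spent DESC;
```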

Indexes and Optimization

indexes.sql
CREATE INDEX CONCURRENTLY idx_orders_user_status ON shop.orders (user_id, status) WHERE status = 'pending';
CREATE INDEX idx_products_stock_price ON shop.products USING btree (stock DESC, price);

-- Analyze performance
EXPLAIN ANALYZE SELECT * FROM shop.orders WHERE user_id = 1 AND status = 'pending';

-- Vacuum for maintenance
VACUUM ANALYZE shop.orders;

Partial indexes speed up filtered queries (here, pending orders) while staying small. CONCURRENTLY builds the index without blocking writes, at the cost of a slower build; note it cannot run inside a transaction block. EXPLAIN ANALYZE reveals sequential scans worth fixing; VACUUM ANALYZE reclaims dead tuples and refreshes planner statistics.
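To spot indexes that never earn their keep, you can query the statistics collector; a minimal sketch:

```sql
-- Indexes ordered by how rarely they have been scanned
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE schemaname = 'shop'
ORDER BY idx_scan ASC;
```

An idx_scan of zero over a long uptime is a strong hint the index only costs you write overhead.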

Triggers and Audit Trail

triggers.sql
CREATE TABLE shop.audit_log (
  id SERIAL PRIMARY KEY,
  table_name VARCHAR(50),
  record_id INTEGER,
  action VARCHAR(10),
  old_data JSONB,
  new_data JSONB,
  changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE OR REPLACE FUNCTION shop.audit_func()
RETURNS TRIGGER AS $$
BEGIN
  IF TG_OP = 'UPDATE' THEN
    INSERT INTO shop.audit_log (table_name, record_id, action, old_data, new_data)
    VALUES (TG_TABLE_NAME, OLD.id, 'UPDATE', to_jsonb(OLD), to_jsonb(NEW));
  ELSIF TG_OP = 'DELETE' THEN
    INSERT INTO shop.audit_log (table_name, record_id, action, old_data)
    VALUES (TG_TABLE_NAME, OLD.id, 'DELETE', to_jsonb(OLD));
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_orders_trigger
  AFTER UPDATE OR DELETE ON shop.orders
  FOR EACH ROW EXECUTE FUNCTION shop.audit_func();

This PL/pgSQL trigger serializes each changed row to JSONB, giving a GDPR-friendly audit trail. It handles UPDATE and DELETE, and returns NULL because it fires AFTER the row change. Test with UPDATE shop.orders SET status='shipped' WHERE id=1;.
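A quick end-to-end check of the audit trail, assuming the seed data from earlier (order 1 started as 'completed'):

```sql
UPDATE shop.orders SET status = 'shipped' WHERE id = 1;

-- The UPDATE above should now appear as the latest audit entry
SELECT action,
       old_data->>'status' AS old_status,
       new_data->>'status' AS new_status
FROM shop.audit_log
ORDER BY changed_at DESC
LIMIT 1;
```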

Materialized Views and Procedures

views.sql
CREATE MATERIALIZED VIEW shop.sales_summary AS
SELECT 
  date_trunc('month', order_date) as month,
  SUM(total) as monthly_revenue,
  COUNT(*) as order_count
FROM shop.orders
GROUP BY month
WITH DATA;

CREATE OR REPLACE PROCEDURE shop.refresh_summary()
LANGUAGE plpgsql
AS $$
BEGIN
  REFRESH MATERIALIZED VIEW shop.sales_summary;
  RAISE NOTICE 'Summary view refreshed';
END;
$$;

-- Usage
CALL shop.refresh_summary();

The materialized view precomputes monthly sales, so dashboard queries avoid re-aggregating the orders table. The PL/pgSQL procedure wraps the refresh so it can be scheduled (e.g., via cron or pg_cron). Note that a plain REFRESH locks the view against reads while it runs, so schedule it during off-peak hours.
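To keep the view readable during a refresh, REFRESH ... CONCURRENTLY works, but only once the materialized view has a unique index:

```sql
-- CONCURRENTLY requires a unique index on the materialized view
CREATE UNIQUE INDEX idx_sales_summary_month ON shop.sales_summary (month);

REFRESH MATERIALIZED VIEW CONCURRENTLY shop.sales_summary;
```

Like CREATE INDEX CONCURRENTLY, this variant cannot run inside a transaction block, so invoke it directly from psql or a cron job rather than from inside the procedure above.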

Best Practices

  • Normalize to 3NF but denormalize for read-heavy workloads (e.g., computed columns).
  • Use connection pooling (PgBouncer) to scale to 1000+ connections.
  • Monitor with pg_stat_statements and EXPLAIN for slow queries.
  • Daily pg_dump backups + WAL archiving for PITR.
  • Secure with roles: REVOKE ALL on public, GRANT minimally.
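The least-privilege advice above can be sketched as follows (role name and password are illustrative):

```sql
-- Application role with only the access it needs
CREATE ROLE app_user LOGIN PASSWORD 'change_me';

REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA shop TO app_user;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA shop TO app_user;
-- SERIAL columns need sequence access for INSERTs to work
GRANT USAGE ON ALL SEQUENCES IN SCHEMA shop TO app_user;
```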

Common Errors to Avoid

  • Forgetting VACUUM: table bloat kills performance; keep autovacuum enabled and tune it for hot tables.
  • Useless indexes: check idx_scan in pg_stat_user_indexes; drop indexes that are never used, since each one slows writes.
  • Long transactions: keep them short to avoid lock contention and blocked autovacuum; use SAVEPOINT for partial rollback.
  • JSONB without a GIN index: document queries fall back to sequential scans; CREATE INDEX ON table USING GIN (jsonb_col).
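As a concrete example of that last point, suppose products gain a JSONB attributes column (hypothetical here, not part of the earlier schema):

```sql
ALTER TABLE shop.products ADD COLUMN attributes JSONB DEFAULT '{}';

CREATE INDEX idx_products_attributes ON shop.products USING GIN (attributes);

-- The containment operator @> can now use the GIN index
SELECT name FROM shop.products WHERE attributes @> '{"color": "silver"}';
```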