
How to Deploy Appwrite in Production with Docker in 2026


Introduction

Appwrite is an open-source Backend-as-a-Service (BaaS) platform that rivals Firebase, providing authentication, document databases, file storage, serverless functions, and realtime capabilities, all self-hosted. As of 2026 (v1.6+), it is mature enough for production: it scales horizontally with Docker Swarm, keeps all data on your own infrastructure (which greatly simplifies GDPR compliance), and can also be deployed on Kubernetes.

Why choose Appwrite for expert projects? Unlike locked cloud services, you retain full control over your data on your infrastructure (VPS, bare-metal, or cloud), cut costs with no vendor lock-in, and customize via SDKs (JS, Dart, etc.). This tutorial walks through a production-ready setup: MariaDB/PostgreSQL persistence, HTTPS via Caddy, Prometheus monitoring, and a Next.js client app example. By the end, you'll have a scalable backend managing 10k+ users per day. Estimated time: 30 min for setup + testing.

Prerequisites

  • VPS/cloud server (Ubuntu 24.04+, 4 vCPU, 8GB RAM, 100GB SSD for initial production)
  • Docker 27+ and Docker Compose 2.29+ installed (curl -fsSL https://get.docker.com | sudo sh)
  • DNS domain pointed (A record to server IP)
  • Advanced knowledge of Docker, TypeScript, and security (UFW firewall)
  • Ports 80/443 open, SSH key-only
  • Tools: htop, jq for monitoring
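Before starting, the prerequisites above can be verified with a small pre-flight script. This is a sketch, not part of the official Appwrite tooling; the version thresholds are the ones this tutorial recommends.

```shell
#!/bin/bash
# Sketch: pre-flight check for the prerequisites listed above.

# Returns 0 if dotted version $1 >= $2
ver_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check_prereqs() {
  local ok=0 dv cv
  if command -v docker >/dev/null 2>&1; then
    dv=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)
    ver_ge "$dv" "27.0.0" || { echo "Docker >= 27 required (found $dv)"; ok=1; }
    cv=$(docker compose version --short 2>/dev/null || echo 0)
    ver_ge "$cv" "2.29.0" || { echo "Compose >= 2.29 required (found $cv)"; ok=1; }
  else
    echo "Docker is not installed"; ok=1
  fi
  command -v jq >/dev/null 2>&1 || { echo "jq missing (sudo apt install jq)"; ok=1; }
  return $ok
}

# Usage: check_prereqs && echo "✅ prerequisites OK"
```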

Prepare the environment and variables

setup-env.sh
#!/bin/bash
set -euo pipefail

# Create dedicated directory
mkdir -p ~/appwrite-prod && cd ~/appwrite-prod

# Generate strong secrets
APPWRITE_DB_PASS=$(openssl rand -base64 32)
APPWRITE_DB_ROOT_PASS=$(openssl rand -base64 32)
APPWRITE_OPENSSL_KEY_V1=$(openssl rand -hex 32)

APPWRITE_HOSTNAME=your-domain.com

# Write .env file (literal keys on the left, expanded secrets on the right)
cat > .env << EOF
APPWRITE_DB_HOST=mariadb
APPWRITE_DB_PORT=3306
APPWRITE_DB_SCHEMA=appwrite
APPWRITE_DB_USER=appwrite
APPWRITE_DB_PASS=${APPWRITE_DB_PASS}
APPWRITE_DB_ROOT_PASS=${APPWRITE_DB_ROOT_PASS}
APPWRITE_REDIS_HOST=redis
APPWRITE_REDIS_PORT=6379
APPWRITE_HOSTNAME=${APPWRITE_HOSTNAME}
APPWRITE_OPENSSL_KEY_V1=${APPWRITE_OPENSSL_KEY_V1}
EOF

chmod 600 .env
sudo ufw allow 80,443/tcp

# Persistent volumes
sudo mkdir -p /data/appwrite/{mariadb,redis,uploads,functions,config,certificates}
sudo chown -R 1000:1000 /data/appwrite

echo '✅ Environment ready. Secrets in .env'

This script sets up the directory, generates strong random passwords and the Appwrite encryption key (critical in production to avoid breaches; the encryption key must never change once data exists), writes a locked-down .env file, and prepares persistent host volumes. Pitfall: never commit .env to Git; in production, prefer a secrets manager such as Doppler, or Docker secrets.

Docker Compose Configuration

Now, deploy Appwrite with a full stack: MariaDB for persistence, Redis for caching, queues, and sessions, and Caddy for automatic HTTPS (Let's Encrypt). The same compose file can later be deployed to Docker Swarm (docker swarm init) for horizontal scaling.

Complete docker-compose.yml file

docker-compose.yml
version: '3.8'

x-appwrite-common: &appwrite-common
  image: appwrite/appwrite:1.6.0
  restart: always
  environment:
    - _APP_ENV=production
    - _APP_DOMAIN=${APPWRITE_HOSTNAME}
    - _APP_DOMAIN_TARGET=${APPWRITE_HOSTNAME}
    - _APP_OPENSSL_KEY_V1=${APPWRITE_OPENSSL_KEY_V1}
    - _APP_OPTIONS_FORCE_HTTPS=enabled
    - _APP_DB_HOST=${APPWRITE_DB_HOST}
    - _APP_DB_PORT=${APPWRITE_DB_PORT}
    - _APP_DB_SCHEMA=${APPWRITE_DB_SCHEMA}
    - _APP_DB_USER=${APPWRITE_DB_USER}
    - _APP_DB_PASS=${APPWRITE_DB_PASS}
    - _APP_REDIS_HOST=${APPWRITE_REDIS_HOST}
    - _APP_REDIS_PORT=${APPWRITE_REDIS_PORT}
  volumes:
    - /data/appwrite/uploads:/storage/uploads:rw
    - /data/appwrite/config:/storage/config:rw
    - /data/appwrite/certificates:/storage/certificates:rw
    - /data/appwrite/functions:/storage/functions:rw
  networks:
    - appwrite
  depends_on:
    - mariadb
    - redis

services:
  mariadb:
    image: mariadb:11
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${APPWRITE_DB_ROOT_PASS}
      MYSQL_DATABASE: ${APPWRITE_DB_SCHEMA}
      MYSQL_USER: ${APPWRITE_DB_USER}
      MYSQL_PASSWORD: ${APPWRITE_DB_PASS}
    volumes:
      - /data/appwrite/mariadb:/var/lib/mysql
    networks:
      - appwrite
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --max-connections=500

  redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - /data/appwrite/redis:/data
    networks:
      - appwrite
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru

  appwrite:
    <<: *appwrite-common

  caddy:
    image: caddy:2-alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - appwrite

volumes:
  caddy_data:
  caddy_config:

networks:
  appwrite:
    driver: bridge

This docker-compose.yml deploys Appwrite 1.6 with persistent MariaDB (max-connections=500), Redis with LRU eviction, and Caddy as the sole service publishing ports 80/443 (publishing the same ports on two services would conflict). The YAML anchor (&appwrite-common) prevents duplication and passes Appwrite its database/Redis connection settings. Pitfall: use absolute host paths (/data) for post-reboot persistence; test with docker compose up -d, then docker compose logs -f appwrite.

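After bringing the stack up, a quick check that every service is actually running saves debugging time. Below is a small sketch around `docker compose ps`; the four service names match the compose file in this tutorial.

```shell
# Sketch: post-`up` check that every service in the stack is running.
services_missing() {
  # $1: output of `docker compose ps --services --status running`
  local running="$1" missing=""
  for s in mariadb redis appwrite caddy; do
    echo "$running" | grep -qx "$s" || missing="$missing $s"
  done
  echo "$missing" | xargs
}

# Usage:
# missing=$(services_missing "$(docker compose ps --services --status running)")
# [ -z "$missing" ] && echo "✅ all services running" || echo "❌ down: $missing"
```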
Caddyfile for automatic HTTPS

Caddyfile
your-domain.com {
	reverse_proxy appwrite:80
	header {
		Strict-Transport-Security "max-age=31536000;"
		X-Frame-Options DENY
		X-Content-Type-Options nosniff
		Referrer-Policy strict-origin-when-cross-origin
	}
	# Optional: DNS-01 challenge via Cloudflare. Requires a Caddy build that
	# includes the cloudflare DNS module (e.g. via xcaddy); the stock
	# caddy:2-alpine image does not ship it.
	# tls {
	# 	dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	# }
}

Caddy automatically obtains Let's Encrypt certs (HTTP-01 challenge by default) and adds security headers (HSTS, etc.). Replace your-domain.com with your domain. The Cloudflare DNS-01 variant needs a custom Caddy image built with xcaddy plus CLOUDFLARE_API_TOKEN in the container environment. Pitfall: without the reverse_proxy in front, Appwrite won't see the forwarded HTTPS headers; verify with curl -I https://your-domain.com.
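The curl -I verification can be scripted. This sketch parses captured response headers for the ones the Caddyfile sets (the domain in the usage note is a placeholder):

```shell
# Sketch: confirm the security headers actually reach clients.
has_header() {
  # $1: raw response headers, $2: header name (case-insensitive)
  printf '%s\n' "$1" | grep -qi "^$2:"
}

# Usage:
# headers=$(curl -sI https://your-domain.com)
# has_header "$headers" "Strict-Transport-Security" && echo "✅ HSTS active"
```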

Start the stack and verify

start.sh
#!/bin/bash
cd ~/appwrite-prod

# Pull images and start
docker compose pull
docker compose up -d

# Wait for first-boot init and migrations (can take several minutes)
until curl -fsS -o /dev/null "https://your-domain.com/"; do
  echo 'Waiting for Appwrite...'
  sleep 10
done

docker compose logs appwrite | tail -20

# Console access: https://your-domain.com/console
# First visit: sign up to create the root admin account (no default credentials)

echo '✅ Appwrite ready. Configure at https://your-domain.com/console'

This script launches the stack, then polls until the HTTP endpoint answers; Appwrite runs its database migrations automatically on first boot. Pitfall: the first startup can take several minutes; monitor with docker stats (plan for >4GB of free RAM).
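The polling idea above can be factored into a reusable helper with a hard timeout, so scripts fail loudly instead of hanging. A minimal sketch (URL and timeout are examples):

```shell
# Sketch: a bounded poll instead of a blind sleep.
wait_for_http() {
  # $1: URL, $2: timeout in seconds (default 600)
  local url="$1" timeout="${2:-600}" elapsed=0
  while ! curl -fsS -o /dev/null "$url" 2>/dev/null; do
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 0
}

# Usage: wait_for_http "https://your-domain.com/" 600 && echo "✅ up"
```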

Initial Setup and Project Creation

Access https://your-domain.com/console and sign up: on a self-hosted instance, the first account created becomes the admin (there are no default credentials). Enable 2FA, create a project 'MyApp', and generate an API key (Settings > API Keys) with the minimal scopes you need (the script below needs databases.write). Enable services: Database, Auth, Storage, Functions, Realtime.

Install SDK and create a DB collection

create-collection.ts
import { Client, Databases, ID, IndexType } from 'node-appwrite';

async function main() {
  const client = new Client()
    .setEndpoint('https://your-domain.com/v1')
    .setProject('YOUR_PROJECT_ID') // Replace with project ID from console
    .setKey('YOUR_API_KEY'); // Key with databases.write scope

  const databases = new Databases(client);

  // Create DB
  const dbId = ID.unique();
  await databases.create(dbId, 'MyDatabase');

  // Create Users collection (attributes are added separately, not inline)
  const collId = ID.unique();
  await databases.createCollection(dbId, collId, 'Users');

  await databases.createStringAttribute(dbId, collId, 'userId', 36, true);
  await databases.createStringAttribute(dbId, collId, 'email', 255, true);
  await databases.createStringAttribute(dbId, collId, 'role', 50, true);
  await databases.createDatetimeAttribute(dbId, collId, 'createdAt', true);

  // Attributes are provisioned asynchronously; give them a moment before indexing
  await new Promise((resolve) => setTimeout(resolve, 2000));

  // Indexes for query performance
  await databases.createIndex(dbId, collId, 'userId_unique', IndexType.Unique, ['userId']);
  await databases.createIndex(dbId, collId, 'email_index', IndexType.Key, ['email']);

  console.log(`✅ DB ${dbId}, Collection ${collId} created.`);
}

main().catch(console.error);

This Node.js script uses the server-side SDK: attributes are created one by one (createStringAttribute, createDatetimeAttribute) because they are provisioned asynchronously, and indexes are then added with createIndex for query performance. Run npm i node-appwrite && npx ts-node create-collection.ts. Pitfall: requires an API key with the databases.write scope; verify the result with databases.listCollections(dbId).
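The same result can be cross-checked over Appwrite's REST API with curl and jq (already in the prerequisites). The endpoint, project ID, and key below are placeholders; the key needs the databases.read scope.

```shell
# Sketch: extract database names from a /v1/databases JSON response.
db_names() {
  jq -r '.databases[].name'
}

# Usage:
# curl -fsS "https://your-domain.com/v1/databases" \
#   -H "X-Appwrite-Project: YOUR_PROJECT_ID" \
#   -H "X-Appwrite-Key: YOUR_API_KEY" | db_names
```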

Client-Side Integration with Next.js

For a Next.js frontend, install the web SDK (npm i appwrite) and set the NEXT_PUBLIC_APPWRITE_URL and NEXT_PUBLIC_APPWRITE_PROJECT_ID env vars. The hook below handles session restore, auth, DB writes, and realtime subscriptions.

Auth and realtime hook in React/Next.js

useAppwriteAuth.tsx
import { useEffect, useState } from 'react';
import { Client, Account, Databases, ID } from 'appwrite';
import { useRouter } from 'next/navigation';

type User = { $id: string; email: string; role: string };

// Module-level client: avoids re-instantiating on every render
const client = new Client()
  .setEndpoint(process.env.NEXT_PUBLIC_APPWRITE_URL!)
  .setProject(process.env.NEXT_PUBLIC_APPWRITE_PROJECT_ID!);

const account = new Account(client);
const databases = new Databases(client);

export function useAppwriteAuth() {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState(true);
  const router = useRouter();

  useEffect(() => {
    let unsubscribe: (() => void) | undefined;

    // Restore session
    account.get().then(
      (u) => {
        setUser({ $id: u.$id, email: u.email, role: 'user' });
        // In the web SDK, realtime lives on the Client; channel segments are dot-separated
        unsubscribe = client.subscribe(
          'databases.YOUR_DATABASE_ID.collections.YOUR_COLLECTION_ID.documents',
          (res) => console.log('Realtime update:', res)
        );
      },
      () => setUser(null)
    ).finally(() => setLoading(false));

    return () => unsubscribe?.();
  }, []);

  const login = async (email: string, password: string) => {
    await account.createEmailPasswordSession(email, password);
    router.refresh();
  };

  const createUserDoc = async (userData: Omit<User, '$id'>) => {
    return databases.createDocument('YOUR_DATABASE_ID', 'YOUR_COLLECTION_ID', ID.unique(), userData);
  };

  const logout = () => account.deleteSession('current').then(() => router.refresh());

  return { user, loading, login, logout, createUserDoc };
}

This custom hook restores the session, wires up login/logout and a document-creation helper, and subscribes to realtime document events (client.subscribe returns an unsubscribe function, called on unmount). Integrate it into your app components. Pitfall: NEXT_PUBLIC_ env vars are exposed to the browser; that's fine for the endpoint and project ID, but never ship API keys this way. Test: npm run dev and log in with an account created in the console.
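For completeness, the two public env vars the hook reads can be written to .env.local like this (both values are placeholders for your own domain and project ID):

```shell
# Sketch: point the Next.js app at the self-hosted instance.
cat > .env.local << 'EOF'
NEXT_PUBLIC_APPWRITE_URL=https://your-domain.com/v1
NEXT_PUBLIC_APPWRITE_PROJECT_ID=YOUR_PROJECT_ID
EOF
echo "✅ .env.local written"
```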

Scaling with Docker Swarm

scale-swarm.sh
#!/bin/bash

# Init Swarm (manager node)
docker swarm init --advertise-addr SERVER_IP

# Deploy stack (replicas are orchestrated by Swarm)
docker stack deploy -c docker-compose.yml appwrite-stack --with-registry-auth

# Scale services
docker service scale appwrite-stack_appwrite=3
docker service scale appwrite-stack_mariadb=1

# On each worker node, run the join command printed by:
#   docker swarm join-token worker

# Monitoring
docker service logs appwrite-stack_appwrite
docker stats

# Backup: add a daily dump to the manager's crontab (crontab -e); % must be escaped in cron:
# 0 2 * * * docker exec $(docker ps -qf name=appwrite-stack_mariadb) mysqldump -u root -p"${APPWRITE_DB_ROOT_PASS}" appwrite > /backups/db-$(date +\%Y\%m\%d).sql

Switch to Swarm for high availability and scaling (3 Appwrite replicas behind Swarm's routing mesh). docker stack deploy handles replicas and orchestration, but note that Swarm ignores restart: and depends_on; use deploy.replicas and deploy.restart_policy in the compose file for finer control. Pitfall: MariaDB stays single-instance here for simplicity; use a Galera cluster for true HA. Schedule daily mysqldump backups via cron.
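Around the cron dump above, two small helpers keep backup naming consistent and prevent the /backups disk from filling up. The path and 14-day retention are assumptions; restore tests remain a manual, monthly exercise.

```shell
# Sketch: helpers around the daily mysqldump cron job.
backup_path() {
  # $1: date string YYYYMMDD
  echo "/backups/db-$1.sql"
}

prune_backups() {
  # Keep roughly two weeks of daily dumps
  find /backups -name 'db-*.sql' -mtime +14 -delete
}

# Usage:
# docker exec <mariadb-container> mysqldump -u root -p"$APPWRITE_DB_ROOT_PASS" appwrite \
#   > "$(backup_path "$(date +%Y%m%d)")" && prune_backups
```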

Best Practices

  • Security: Use minimal-scoped API keys (read-only for client), WAF (Cloudflare), rate limiting via Appwrite Redis.
  • Performance: DB indexes on queried fields (e.g., email), offload storage to S3 via Appwrite adapter.
  • Monitoring: Integrate Prometheus/Grafana (Appwrite exposes /metrics), alerts for CPU >80%.
  • Backups: RAID1 volumes, daily DB/Redis snapshots to S3, monthly restore tests.
  • Updates: docker compose pull && docker compose up -d, tested in staging first; pin image versions in the YAML.
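The update bullet above can be sketched as a small pinned-bump routine; the new tag is an example, and the sed pattern assumes the `appwrite/appwrite:<version>` form used in this tutorial's compose file.

```shell
# Sketch: bump the pinned Appwrite image tag in a compose file (keeps a .bak copy).
bump_appwrite_tag() {
  # $1: compose file, $2: new image tag
  sed -i.bak "s|appwrite/appwrite:[0-9][0-9.]*|appwrite/appwrite:$2|" "$1"
}

# Usage (only after the same bump passed in staging):
# bump_appwrite_tag docker-compose.yml 1.6.1
# docker compose pull appwrite && docker compose up -d appwrite
```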

Common Errors to Avoid

  • Non-persistent volumes: Data loss on reboot → Always use host mounts (/data).
  • HTTPS bypass: Broken Appwrite sessions without proxy headers → Caddy/Traefik required.
  • DB overload: No indexes → Slow queries >1s; profile via console.
  • Plaintext secrets: World-readable .env → chmod 600, use Docker secrets or Vault.

Next Steps

Dive deeper with Appwrite docs, Kubernetes Helm chart, or Deno Functions. Migrate to vector DB (Milvus integration). Check our Learni DevOps training for expert Docker Swarm/K8s, or advanced Backend BaaS.