Introduction
Grafana Mimir is an open-source time series database designed to store and query long-term metrics in a highly scalable way, while remaining 100% compatible with Prometheus. Unlike Prometheus, which has limited retention (typically 2-15 days), Mimir handles petabytes of data across distributed clusters with horizontal sharding and replication.
Why use it in 2026? With the explosion of cloud-native metrics (Kubernetes, microservices), Mimir solves the storage bottleneck: think of Prometheus as a bike (fast but limited capacity), and Mimir as a truck (handling massive volumes). This beginner tutorial deploys a complete local stack: Mimir in monolithic mode (for simplicity), Prometheus for ingestion, Node Exporter for real data, and Grafana for visualization. By the end, you'll query metrics with 7 days of retention and a clear path to scale out later. Estimated time: 15 min.
Prerequisites
- Docker 27+ and Docker Compose v2.29+ installed (check with `docker --version` and `docker compose version`).
- 4 GB of free RAM and 10 GB of disk space (for test data).
- Basic knowledge of Prometheus (scrape/remote_write concepts) and YAML.
- Ports 3000 (Grafana), 8080 (Mimir), 9090 (Prometheus), 9100 (Node Exporter) available.
- Unix-like terminal (WSL on Windows recommended).
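The checks above can be scripted. A minimal sketch (the tool names are the standard CLIs used later in this tutorial; nothing here is Mimir-specific):

```shell
# Report which prerequisite CLIs are present on this machine.
check() { command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "missing: $1"; }

check docker
check curl
check jq   # used later to pretty-print API responses
```

A "missing" line tells you what to install before continuing; the script itself never fails.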
Step 1: Create the folder structure
Create a project folder to organize your files. Open a terminal and run:
```bash
mkdir mimir-local && cd mimir-local
mkdir -p mimir grafana/provisioning/datasources prometheus
```
This structure separates configs: mimir/mimir.yaml for Mimir, prometheus/prometheus.yml for ingestion, grafana/provisioning/datasources/mimir.yml for Grafana auto-config. It avoids Docker volume conflicts and makes backups easy.
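To confirm the layout matches, list the directories you just created. Run this from the directory where you started (one level above the commands shown, which `cd` into the project); `mkdir -p` is idempotent, so re-running is safe:

```shell
# Recreate the layout if needed, then list every directory under the project.
mkdir -p mimir-local/mimir \
         mimir-local/prometheus \
         mimir-local/grafana/provisioning/datasources
find mimir-local -type d | sort
```

You should see five directories: the project root, mimir, prometheus, and the nested grafana/provisioning/datasources path.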
Configure Mimir (monolithic mode)
```yaml
server:
  http_listen_port: 8080
  log_level: info

multitenancy_enabled: false

blocks_storage:
  backend: filesystem
  filesystem:
    dir: /data/blocks
  tsdb:
    dir: /data/tsdb

limits:
  ingestion_rate: 25000        # samples per second (Mimir rate limits count samples, not MB)
  ingestion_burst_size: 50000
  compactor_blocks_retention_period: 168h  # 7 days for testing

ruler_storage:
  backend: local
  local:
    directory: /data/ruler

alertmanager_storage:
  backend: filesystem
  filesystem:
    dir: /data/alertmanager
```
This config runs Mimir in monolithic mode (all-in-one) with filesystem storage for beginner simplicity. Blocks are stored in /data/blocks in Prometheus TSDB block format and retained for 7 days via `compactor_blocks_retention_period`; the /ready and /metrics endpoints are served on the same HTTP port. Pitfall: without `multitenancy_enabled: false`, every request must carry an X-Scope-OrgID tenant header, which blocks local testing.
Step 2: Verify the Mimir config
Save this file as mimir/mimir.yaml. Mimir uses the 'filesystem' backend for blocks (standard Prometheus TSDB blocks on local disk), ideal for development. Analogy: blocks are like compressed ZIP archives of metrics, cut and uploaded periodically by Mimir's ingesters. This allows future scaling to S3/MinIO without code changes.
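As an illustration of that scaling path, switching to object storage only swaps the `blocks_storage` backend. The endpoint, bucket, and credentials below are placeholders for a hypothetical local MinIO instance, not values from this stack:

```yaml
# Hypothetical S3/MinIO variant of blocks_storage (placeholder credentials).
blocks_storage:
  backend: s3
  s3:
    endpoint: minio:9000        # placeholder MinIO service name
    bucket_name: mimir-blocks   # placeholder bucket
    access_key_id: minioadmin
    secret_access_key: minioadmin
    insecure: true              # plain HTTP, local testing only
  tsdb:
    dir: /data/tsdb             # the WAL/head blocks still live on local disk
```

Only uploaded blocks move to the bucket; the in-memory head and write-ahead log remain local, which is why the tsdb dir stays.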
Configure Prometheus for remote_write
```yaml
global:
  scrape_interval: 15s

remote_write:
  - url: http://mimir:8080/api/v1/push

scrape_configs:
  - job_name: mimir
    honor_labels: true
    static_configs:
      - targets: ['mimir:8080']
  - job_name: node
    honor_labels: true
    static_configs:
      - targets: ['node_exporter:9100']

rule_files: []
```
Prometheus scrapes Mimir (its own metrics) and Node Exporter (host CPU/RAM), then pushes everything to Mimir via /api/v1/push. `honor_labels: true` keeps the labels exposed by the scraped targets when they conflict with Prometheus's own. Pitfall: without remote_write, data stays local to Prometheus (and is lost with its container). Verify with `curl http://localhost:9090/api/v1/status/config` after startup.
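If ingestion ever lags behind, the remote_write queue can be tuned in the same file. The numbers below are illustrative starting points for a small local stack, not recommendations from the Mimir docs:

```yaml
remote_write:
  - url: http://mimir:8080/api/v1/push
    queue_config:
      capacity: 10000           # samples buffered per shard before blocking
      max_shards: 10            # upper bound on parallel senders
      max_samples_per_send: 2000
```

Watch `prometheus_remote_storage_samples_pending` to see whether the queue is draining.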
Step 3: Provision Grafana automatically
Grafana auto-configures on boot via provisioning, avoiding manual clicks: the 'Mimir' datasource points to http://mimir:8080/prometheus (Prometheus-compatible endpoint). Login: admin/admin.
Grafana datasource for Mimir
```yaml
apiVersion: 1
datasources:
  - name: Mimir
    type: prometheus
    access: proxy
    orgId: 1
    url: http://mimir:8080/prometheus
    basicAuth: false
    isDefault: true
    version: 1
    editable: true
```
Save this file as grafana/provisioning/datasources/mimir.yml. It provisions a Prometheus-type datasource pointing at Mimir's Prometheus-compatible API. 'access: proxy' routes queries through the Grafana backend, avoiding CORS issues. Pitfall: a datasource file takes only `apiVersion` and `datasources` keys; a `providers` block belongs to dashboard provisioning, not here. The URL uses the internal Docker network name (mimir:8080); port 8080 must be published for access from the host. Test the `up` query in Grafana Explore.
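For later, once multitenancy is enabled, the same provisioning mechanism can inject the tenant header. This is a sketch; 'demo' is a placeholder tenant ID, not one configured anywhere in this tutorial:

```yaml
# Datasource variant for a multi-tenant Mimir ('demo' is a placeholder tenant).
apiVersion: 1
datasources:
  - name: Mimir-tenant
    type: prometheus
    access: proxy
    url: http://mimir:8080/prometheus
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      httpHeaderValue1: demo
```

The `secureJsonData` value is encrypted at rest by Grafana, which is why the header name and value are split across the two blocks.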
Docker Compose: Complete stack
```yaml
services:
  mimir:
    image: grafana/mimir:2.14.1
    command:
      - '-config.file=/etc/mimir/mimir.yaml'
    ports:
      - '8080:8080'
    volumes:
      - ./mimir/mimir.yaml:/etc/mimir/mimir.yaml
      - mimir-data:/data
    networks:
      - mimir
    healthcheck:
      test: ['CMD', 'wget', '--no-verbose', '--tries=1', '--spider', 'http://localhost:8080/ready']
      interval: 10s
      timeout: 5s
      retries: 3

  prometheus:
    image: prom/prometheus:v2.53.1
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - '9090:9090'
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.enable-lifecycle'
    networks:
      - mimir
    depends_on:
      mimir:
        condition: service_healthy

  grafana:
    image: grafana/grafana:11.3.0
    ports:
      - '3000:3000'
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_USERS_ALLOW_SIGN_UP=false
    networks:
      - mimir
    depends_on:
      - prometheus

  node_exporter:
    image: prom/node-exporter:v1.8.2
    ports:
      - '9100:9100'
    networks:
      - mimir

volumes:
  mimir-data:
  grafana-data:

networks:
  mimir:
    driver: bridge
```
Save this file as docker-compose.yml at the project root. It orchestrates the stack: a Mimir healthcheck on /ready, Prometheus starting only once Mimir is healthy (`depends_on` with a health condition), and Grafana provisioned automatically. Named volumes persist the data so you don't lose blocks between restarts. Pitfall: versions are pinned for stability; update the image tags together. (The legacy top-level `version` key is obsolete with Compose v2 and has been omitted.)
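A couple of optional hardening tweaks fit naturally in this file. Restart policy and log rotation are standard Compose options; the fragment below shows them applied to the mimir service as an example (the other keys for that service stay as shown above):

```yaml
  mimir:
    # ...image, command, ports, volumes, networks, healthcheck unchanged...
    restart: unless-stopped     # come back up after a daemon restart or crash
    logging:
      driver: json-file
      options:
        max-size: '10m'         # rotate container logs at 10 MB
        max-file: '3'           # keep at most 3 rotated files
```

The same two blocks can be copied onto the prometheus, grafana, and node_exporter services.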
Step 4: Launch and verify the stack
Run `docker compose up -d` (detached). Check the logs with `docker compose logs mimir`. Then verify each service:
- Grafana: http://localhost:3000 → Explore → Mimir → query `rate(node_cpu_seconds_total[5m])`.
- Mimir readiness: `curl http://localhost:8080/ready` (returns 'ready').
- Prometheus: http://localhost:9090/targets (all targets UP). Data arrives within 1-2 minutes.
Verify with curl (Mimir metrics)
```bash
#!/bin/bash
# Check Mimir readiness (plain-text response, no JSON to parse)
echo "Mimir ready:"
curl -s http://localhost:8080/ready

# Example instant query: which jobs are up? (omitting 'time' defaults to now)
curl -G -s http://localhost:8080/prometheus/api/v1/query \
  --data-urlencode 'query=up' | jq '.data.result'

# Count stored series with a PromQL matcher
curl -G -s http://localhost:8080/prometheus/api/v1/query \
  --data-urlencode 'query=count({__name__=~".+"})' | jq '.data.result'
```
This bash script tests the Mimir API: /ready confirms boot, and /prometheus/api/v1/query simulates what Grafana Explore sends; the final count query tallies stored series (expect 100+ after 5 minutes). Install jq for pretty-printing. Pitfall: use `-G` together with `--data-urlencode` so the PromQL (braces, quotes, regex) is URL-encoded into a GET request; a plain `-d` would mangle the special characters.
Best practices
- Persistence: always mount a volume on /data for blocks; `docker compose down` without `--volumes` preserves the data.
- Security: in production, enable multitenancy plus authentication (OAuth/JWT) and limit the exposed ports.
- Monitoring: add Loki/Promtail for logs; scale via Helm on Kubernetes.
- Retention: test retention at 90d+ (`compactor_blocks_retention_period` under `limits`); watch the compactor failure metrics (e.g. `cortex_compactor_runs_failed_total`).
- Backup: rsync /data/blocks to S3; `mimirtool` can re-upload blocks with its backfill command.
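For the 90-day retention test mentioned above, the relevant knob sits under `limits` in mimir.yaml. A sketch (the duration is expressed in hours; 0s would mean keep forever):

```yaml
limits:
  compactor_blocks_retention_period: 2160h  # 90 days (90 × 24 h)
```

The compactor enforces this by deleting expired blocks on its next run, so disk usage shrinks gradually rather than instantly.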
Common errors to avoid
- Mimir not ready: logs show 'tsdb dir locked' → remove orphan volumes (`docker volume prune`).
- No data in Grafana: check the remote_write queue (`prometheus_remote_storage_samples_pending`); allow ~2 minutes after boot.
- CORS/proxy errors: in Grafana, use 'access: proxy' and expose port 8080; test with curl from inside a container (`docker exec`).
- Missing data after restart or scale-out: the filesystem backend isn't distributed; migrate to S3/MinIO for production.
Next steps
- Official docs: Grafana Mimir Docs.
- Production Helm chart: github.com/grafana/helm-charts.
- Integrate Tempo for traces: Add a Tempo service to Compose.
- Expert training: check out our observability courses covering Kubernetes and distributed Mimir.