How to Configure Alertmanager in 2026

Introduction

Alertmanager is a key component of the Prometheus ecosystem, designed to handle alerts intelligently. Unlike a simple forwarder, it groups similar alerts, suppresses duplicates, and routes notifications to channels like email, Slack, or PagerDuty. Why use it in 2026? As cloud infrastructure keeps scaling out, alert storms can overwhelm SRE teams. Alertmanager acts like medical triage: it prioritizes, inhibits redundant alerts, and keeps the focus on critical incidents.

This beginner tutorial takes you from the basics (a Docker install) to a production-ready configuration. By the end, you'll have Alertmanager integrated with Prometheus, receivers configured, and a live test under your belt. Perfect for ops teams that want robust monitoring without unnecessary complexity.

Prerequisites

  • Docker installed (version 24+)
  • Prometheus v2.50+ running (or installed via Docker)
  • Basic YAML and command-line knowledge
  • Port 9093 free for the Alertmanager UI
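
A quick sanity check before starting; this is a minimal sketch assuming a Linux host (on macOS, replace ss with lsof -i :9093):

check-prereqs.sh
#!/bin/bash
# Confirm the Docker version (expect 24.x or newer)
docker --version
# Confirm nothing is already listening on port 9093
ss -ltn | grep ':9093' && echo "Port 9093 is busy" || echo "Port 9093 is free"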

Docker Installation

docker-run-alertmanager.sh
docker run -d \
  --name alertmanager \
  -p 9093:9093 \
  -v $(pwd)/alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  prom/alertmanager:v0.27.0

This command launches Alertmanager in a Docker container with your YAML config mounted into /etc/alertmanager/. Port 9093 exposes the web UI and the HTTP API. Pin an explicit version (v0.27.0 at the time of writing) and avoid the latest tag in production to prevent surprise breaking changes.
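
To confirm the container came up cleanly, you can hit Alertmanager's standard health and readiness endpoints:

check-health.sh
#!/bin/bash
# Liveness: returns OK once the process is up
curl -s http://localhost:9093/-/healthy
# Readiness: returns OK once Alertmanager is ready to serve traffic
curl -s http://localhost:9093/-/ready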

Accessing the Web UI

Once it's running, open http://localhost:9093. You'll see the minimal UI with its Alerts, Silences, and Status tabs. If the config is broken or missing, alerts pushed to the API are rejected (typically with a 422 validation error). Next: a basic config.
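
Before writing a config of our own, you can check what the server is actually running: the v2 status endpoint (a standard Alertmanager API route) returns the version and the currently loaded configuration as JSON:

check-status.sh
#!/bin/bash
# Dump Alertmanager's version info and currently loaded config
curl -s http://localhost:9093/api/v2/status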

Basic YAML Configuration

alertmanager.yml
global:
  smtp_smarthost: 'localhost:1025'
  smtp_from: 'alertmanager@example.com'
  smtp_require_tls: false  # local dev SMTP (e.g. MailHog) speaks no TLS

route:
  group_by: ['alertname', 'cluster']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: 'email'

receivers:
- name: 'email'
  email_configs:
  - to: 'team@example.com'

inhibit_rules:
- source_matchers:
    - severity="critical"
  target_matchers:
    - severity="warning"
  equal: ['alertname', 'cluster']

This basic config sets up an email receiver and a root route that groups alerts by alert name and cluster: Alertmanager waits 30s before sending the first notification for a new group, batches further alerts into that group every 5m, and re-sends still-firing alerts every hour. The inhibit rule suppresses warnings when a critical alert is already firing with the same alertname and cluster, avoiding redundant spam.
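
Before restarting anything, it's worth validating the file with amtool, which ships in the same image; the mount path below assumes the docker run command from the installation step:

validate-config.sh
#!/bin/bash
# Validate alertmanager.yml with the amtool bundled in the image
docker run --rm \
  -v $(pwd)/alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  --entrypoint amtool \
  prom/alertmanager:v0.27.0 \
  check-config /etc/alertmanager/alertmanager.yml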

Restart and Verify

Save alertmanager.yml, then run docker restart alertmanager. Check the logs with docker logs alertmanager. In the UI under Status > Config, confirm the new config is loaded. Finally, test with a Prometheus alert pointed at http://localhost:9093.
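
A full restart isn't strictly required: Alertmanager reloads its configuration on SIGHUP or on a POST to its /-/reload endpoint, both standard behaviors:

reload-config.sh
#!/bin/bash
# Option 1: HTTP reload endpoint
curl -X POST http://localhost:9093/-/reload
# Option 2: send SIGHUP to the container's main process
docker kill --signal=HUP alertmanager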

Prometheus Integration

prometheus.yml
global:
  scrape_interval: 15s

rule_files:
  - "alert.rules.yml"

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9093

scrape_configs:
- job_name: 'prometheus'
  static_configs:
  - targets: ['localhost:9090']

Add this alerting section to your prometheus.yml to route alerts to Alertmanager, then restart Prometheus. Create a simple alert.rules.yml for testing (next step). Pitfall: if you need to drop or rewrite alert labels before they reach Alertmanager, don't forget alert_relabel_configs, as in the sketch below.
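
As an illustration of that pitfall's fix, here is what dropping a label before alerts reach Alertmanager could look like; the replica label is a hypothetical example (commonly dropped when running HA Prometheus pairs):

prometheus-relabel.yml
alerting:
  alert_relabel_configs:
  # Drop the (hypothetical) replica label so duplicate alerts from
  # an HA Prometheus pair deduplicate cleanly in Alertmanager
  - regex: replica
    action: labeldrop
  alertmanagers:
  - static_configs:
    - targets:
      - localhost:9093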

Prometheus Alert Rules for Testing

alert.rules.yml
groups:
- name: example
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }} down"
      description: "{{ $labels.instance }} has been down for more than 5 minutes."

This rules file fires an InstanceDown alert when up == 0 holds for 5 minutes. Labels and annotations enrich the alert for Alertmanager's routing and notification templates. Add it to your Prometheus config and reload (curl -X POST http://localhost:9090/-/reload, which requires Prometheus to run with --web.enable-lifecycle).
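
Prometheus bundles promtool for validating rule files before a reload; run from the official image it looks like this (the image tag and paths are assumptions matching this tutorial's setup):

validate-rules.sh
#!/bin/bash
# Check the rule file's syntax and expressions with promtool
docker run --rm \
  -v $(pwd)/alert.rules.yml:/alert.rules.yml \
  --entrypoint promtool \
  prom/prometheus:v2.50.0 \
  check rules /alert.rules.yml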

Advanced Config: Multiple Receivers

Analogy: receivers are like sorted mailboxes, each one delivering to a different inbox. Add Slack or webhook receivers to diversify your notification channels.

Slack and Webhook Receivers

alertmanager-advanced.yml
global:
  smtp_smarthost: 'localhost:1025'
  smtp_from: 'alertmanager@example.com'  # required by email receivers
  smtp_require_tls: false

route:
  group_by: ['alertname']
  receiver: 'default'
  routes:
  - matchers:
      - severity="critical"
    receiver: 'slack-critical'

receivers:
- name: 'default'
  email_configs:
  - to: 'dev@team.com'

- name: 'slack-critical'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/T000/B000/C000'
    channel: '#alerts'
    text: '🚨 {{ .CommonAnnotations.summary }}'

- name: 'webhook'
  webhook_configs:
  - url: 'http://localhost:5001/'

inhibit_rules: []

The sub-route matches on severity to send critical alerts to Slack; the webhook receiver is there for custom integrations (e.g., a bridge to PagerDuty) and only fires if a route references it. Replace api_url with your real Slack webhook URL. Restart (or reload) to apply, then verify with a test alert, as in the next section.
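
To see which receiver a given label set would hit without sending anything, amtool can walk the routing tree; the file name matches the advanced config above:

test-routes.sh
#!/bin/bash
# Prints the receiver a critical alert would be routed to
# (expected output here: slack-critical)
docker run --rm \
  -v $(pwd)/alertmanager-advanced.yml:/etc/alertmanager/alertmanager.yml \
  --entrypoint amtool \
  prom/alertmanager:v0.27.0 \
  config routes test \
  --config.file=/etc/alertmanager/alertmanager.yml \
  severity=critical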

Test Alert via curl

test-alert.sh
#!/bin/bash
curl -X POST 'http://localhost:9093/api/v2/alerts' \
  -H 'Content-Type: application/json' \
  -d '[
    {
      "labels": {
        "alertname": "TestAlert",
        "severity": "warning",
        "instance": "test:8080"
      },
      "annotations": {
        "summary": "Ceci est un test"
      }
    }
  ]'

This script simulates an alert via the Alertmanager v2 API. Check the Alerts tab in the UI to see it arrive and get grouped. Great for debugging routing without Prometheus. Add -k if your endpoint serves a self-signed HTTPS certificate.
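
To inspect or clean up the test alert from the command line, the same v2 API works; resolving an alert by re-posting it with an endsAt in the past is standard Alertmanager behavior:

query-and-resolve.sh
#!/bin/bash
# List currently firing alerts
curl -s 'http://localhost:9093/api/v2/alerts?active=true'
# Resolve the test alert by re-posting it with a past endsAt
curl -X POST 'http://localhost:9093/api/v2/alerts' \
  -H 'Content-Type: application/json' \
  -d '[{"labels": {"alertname": "TestAlert", "severity": "warning",
        "instance": "test:8080"}, "endsAt": "2020-01-01T00:00:00Z"}]'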

Best Practices

  • Smart group_by: Always include alertname, job, and instance for fine-grained grouping without noise.
  • Progressive repeat_interval: 4h for info, 30min for warning, 5min for critical, ramping up urgency with severity.
  • Secure the API: Enable basic_auth or TLS in production via --web.config.file (see the sketch after this list).
  • Backup config: Version alertmanager.yml in Git; use Kubernetes ConfigMaps.
  • Monitor Alertmanager: Scrape /metrics with Prometheus to alert on its downtimes.
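
A minimal sketch of that web config, in the exporter-toolkit format Alertmanager accepts; the bcrypt hash is a placeholder, not a real credential:

web-config.yml
# Pass to Alertmanager with: --web.config.file=/etc/alertmanager/web-config.yml
basic_auth_users:
  # Value must be a bcrypt hash of the password, e.g. generated with:
  #   htpasswd -nBC 10 "" | tr -d ':\n'
  admin: "$2y$10$REPLACE_WITH_A_REAL_BCRYPT_HASH"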

Common Errors to Avoid

  • Bad YAML indentation: Alertmanager fails to start and the logs can be cryptic; validate with yamllint, amtool check-config, or the UI's Status page.
  • group_wait set too low (or to 0): a flood of instant notifications (alert storm) instead of one grouped message.
  • Missing inhibit_rules: dependent alerts pollute the channel (e.g., a disk-full alert alongside the OOMKills it triggers).
  • Unexposed ports: check docker ps; put Prometheus and Alertmanager on a shared Docker network (--network) so they can reach each other (see the sketch below).
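
A sketch of that networking fix, with the container and network names chosen for illustration:

docker-network.sh
#!/bin/bash
# Put both containers on a shared user-defined network
docker network create monitoring
docker network connect monitoring alertmanager
docker network connect monitoring prometheus
# Prometheus can now target alertmanager:9093 instead of localhost:9093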

Next Steps

  • Official docs: Alertmanager GitHub
  • Kubernetes tutorial: Alertmanager Helm chart
  • Advanced: Custom Go webhooks
Check out our Learni monitoring training to master Prometheus, Grafana, and Loki.