Observability

How to Install Grafana Tempo in 2026


Introduction

Grafana Tempo is an open-source distributed tracing backend from Grafana Labs, designed to analyze performance in microservices applications. Unlike Jaeger or Zipkin, Tempo stands out for its simplicity: it indexes only 128-bit trace IDs, stores raw traces immutably, and retrieves them by ID. This keeps it cheap to run and highly scalable, with no separate index database to operate.
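The lookup model is easy to picture in code. A minimal sketch of the idea, purely illustrative (the helper names and the dict-backed store are ours, not Tempo's):

```python
import secrets

def new_trace_id() -> str:
    """Random 128-bit trace ID rendered as 32 hex characters,
    the only thing Tempo indexes."""
    return secrets.token_hex(16)  # 16 bytes = 128 bits

# Toy backend: immutable blobs keyed solely by trace ID, no other index.
store = {}

def write_trace(trace_id, spans):
    store[trace_id] = spans        # written once, never rewritten

def query_by_id(trace_id):
    return store.get(trace_id)     # the one lookup the core model needs

tid = new_trace_id()
write_trace(tid, b"encoded spans")
print(len(tid))                    # 32
print(query_by_id(tid))            # b'encoded spans'
```

Because there is no index beyond the ID, write throughput scales with raw storage, which is the property the rest of this tutorial relies on.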

Why use it in 2026? With the rise of cloud-native architectures and Kubernetes, tracing latencies across services is essential for debugging. Tempo integrates natively with Grafana for intuitive visualizations, Prometheus for metrics, and OpenTelemetry for standard instrumentation. This beginner tutorial walks you through a complete local setup with Docker Compose, including a working demo. By the end, you'll trace real requests and explore them visually. Estimated time: 20 minutes.

Prerequisites

  • Docker and Docker Compose installed (version 20+).
  • Ports 3000 (Grafana), 3200 (Tempo HTTP), 4317/4318 (OTLP) free.
  • Basic knowledge of YAML and command line.
  • Web browser for the Grafana UI.

Create the Docker Compose file

docker-compose.yml
services:
  tempo:
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
      - tempo-data:/tmp/tempo
    ports:
      - "3200:3200"       # HTTP
      - "4317:4317"       # OTLP gRPC
      - "4318:4318"       # OTLP HTTP
      - "9411:9411"       # Zipkin
      - "14268:14268"     # Jaeger ingest
    networks:
      - tempo

  grafana:
    image: grafana/grafana:latest
    volumes:
      - ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
      - grafana-data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    ports:
      - "3000:3000"
    networks:
      - tempo

  otelcol:
    image: otel/opentelemetry-collector-contrib:latest
    command: [ "--config=/etc/otelcol-config.yaml" ]
    volumes:
      - ./otelcol-config.yaml:/etc/otelcol-config.yaml
    ports:
      - "8889:8889"       # Prometheus metrics
    depends_on:
      - tempo
    networks:
      - tempo

  hotrod:
    image: jaegertracing/example-hotrod:latest
    ports:
      - "8080:8080"
    environment:
      - OTEL_SERVICE_NAME=hotrod
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4318
      - OTEL_RESOURCE_ATTRIBUTES=service.name=hotrod
    depends_on:
      - otelcol
    networks:
      - tempo

volumes:
  tempo-data: {}
  grafana-data: {}

networks:
  tempo:
    driver: bridge

This Docker Compose file deploys a complete stack: Tempo for trace storage, Grafana for visualization, OpenTelemetry Collector as an OTLP gateway, and HotROD as an instrumented demo app. Ports are exposed for external access. Create a project folder, copy this code into it, and run docker compose up -d to launch everything. Check for port conflicts with netstat beforehand.
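If you'd rather not parse netstat output, a quick Python sketch can check the same ports (the port list mirrors the compose file above; the `port_in_use` helper is ours):

```python
import socket

# Host ports published by the compose file above.
REQUIRED_PORTS = [3000, 3200, 4317, 4318, 8080, 9411, 14268]

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

busy = [p for p in REQUIRED_PORTS if port_in_use(p)]
print("ports already taken:", busy or "none")
```

Any port reported as taken must be freed (or remapped in docker-compose.yml) before launching the stack.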

Launch the Tempo stack

Create a project folder: mkdir tempo-demo && cd tempo-demo. Paste the docker-compose.yml from above. Create the missing config files (next steps). Launch with docker compose up -d. Check logs: docker compose logs -f. The HotROD app will be available at http://localhost:8080. Grafana at http://localhost:3000 (admin/admin).
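Tempo can take a few seconds to come up, so scripts that query it immediately after docker compose up may fail. A small polling sketch against Tempo's /ready endpoint (the `wait_for` helper is ours; the URL assumes the port mapping above):

```python
import time
import urllib.error
import urllib.request

def wait_for(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll url until it answers 200 OK, or give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass                      # not up yet, retry after a pause
        time.sleep(interval)
    return False

if __name__ == "__main__":
    ok = wait_for("http://localhost:3200/ready")
    print("Tempo ready" if ok else "Tempo not ready - check: docker compose logs tempo")
```

The same helper works for Grafana (http://localhost:3000/api/health) if you script the whole bring-up.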

Tempo Configuration (Local)

tempo.yaml
server:
  http_listen_port: 3200

distributor:
  receivers:                           # Configured receivers
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
    jaeger:
      protocols:
        grpc:
          endpoint: 0.0.0.0:14250
        thrift_http:
          endpoint: 0.0.0.0:14268
    zipkin:
      endpoint: 0.0.0.0:9411

storage:
  trace:
    backend: local                     # Local storage (file)
    local:
      path: /tmp/tempo/blocks
    wal:
      path: /tmp/tempo/wal

compactor:
  compaction:
    block_retention: 1h
    compaction_window: 1h
    max_compaction_objects: 1000000

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: local
  storage:
    path: /tmp/tempo/generator/wal   # required once generator processors are enabled

overrides:
  defaults:
    metrics_generator_processors: [service-graphs, span-metrics]

This config enables OTLP, Jaeger, and Zipkin receivers and runs Tempo in local mode (file-based WAL plus blocks storage under /tmp/tempo). With block_retention: 1h, the compactor deletes traces older than one hour. For production, switch the backend to S3/MinIO. Pitfall: without a WAL, in-flight traces are lost on restart; the WAL path above ensures durability.

Generate Traces with HotROD

HotROD automatically generates traces and sends them via OpenTelemetry to the OTEL Collector, which forwards them to Tempo. Open http://localhost:8080 and click the buttons (e.g., a customer name); each click produces a trace in near real time.
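Clicking buttons can also be scripted to generate steady load. A sketch that drives HotROD's /dispatch endpoint (the customer IDs mirror the demo UI; adjust them if your build differs):

```python
import urllib.error
import urllib.request

HOTROD = "http://localhost:8080"
CUSTOMERS = [123, 392, 731, 567]   # customer IDs shown in the HotROD demo UI

def request_ride(customer: int) -> bool:
    """Fire one /dispatch request; each successful call produces a full trace."""
    url = f"{HOTROD}/dispatch?customer={customer}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False               # HotROD unreachable or request failed

if __name__ == "__main__":
    results = [request_ride(c) for c in CUSTOMERS]
    print(f"{sum(results)}/{len(results)} requests traced")
```

Run it in a loop to get enough traces for the service graph and span metrics to look interesting.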

OTEL Collector Configuration

otelcol-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true
  debug:                # console exporter (replaces the deprecated 'logging' exporter)

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, debug]
  telemetry:
    logs:
      level: "debug"

The OTEL Collector receives OTLP traces from HotROD, batches them, and forwards them to Tempo. Its console output helps with debugging. In production, consider the spanmetrics connector to derive metrics from spans. If no traces show up, check the logs: docker compose logs otelcol.
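To demystify what the collector accepts on port 4318, here is a hand-built OTLP/HTTP JSON payload. Field names follow the OTLP specification; the actual POST is left commented out so the sketch runs offline:

```python
import json
import secrets
import time

now = time.time_ns()
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "manual-test"}}
        ]},
        "scopeSpans": [{
            "scope": {"name": "manual"},
            "spans": [{
                "traceId": secrets.token_hex(16),  # 128-bit -> 32 hex chars
                "spanId": secrets.token_hex(8),    # 64-bit  -> 16 hex chars
                "name": "hand-rolled-span",
                "kind": 1,                         # SPAN_KIND_INTERNAL
                "startTimeUnixNano": str(now - 5_000_000),
                "endTimeUnixNano": str(now),
            }],
        }],
    }]
}

body = json.dumps(payload).encode()
print(len(body), "bytes; POST to http://localhost:4318/v1/traces")
# To actually send it:
# req = urllib.request.Request("http://localhost:4318/v1/traces", data=body,
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

Sending a span like this by hand is a useful smoke test when instrumented apps show nothing: if this trace appears in Grafana, the collector-to-Tempo path is fine and the problem is in the app's SDK configuration.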

Integrate Grafana and Explore Traces

Log in to Grafana (admin/admin). The Tempo datasource is auto-provisioned (next config). Go to Explore > select Tempo. Search by service 'hotrod', or paste a Trace ID from HotROD logs. Visualize span waterfalls: latencies, errors, links to logs/metrics.

Grafana Datasource Provisioning

grafana-datasources.yaml
apiVersion: 1

datasources:
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo:3200
    access: proxy
    basicAuth: false
    isDefault: true
    jsonData:
      nodeGraph:
        enabled: true
    editable: true

This file provisions the Tempo datasource in Grafana automatically. The URL points to the internal Docker service name, and access: proxy routes queries through the Grafana backend, so it works inside the Docker network. Node Graph enables the service graph view. Restart Grafana after changing this file.

Simple Python Instrumentation Example

trace_app.py
import requests
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource

# Configuration: create the provider once, then register it globally
provider = TracerProvider(resource=Resource.create({"service.name": "python-app"}))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
otlp_exporter = OTLPSpanExporter(endpoint="localhost:4317", insecure=True)  # OTLP gRPC
provider.add_span_processor(BatchSpanProcessor(otlp_exporter))

# Example trace
with tracer.start_as_current_span("http-request"):
    response = requests.get("http://localhost:8080/dispatch?customer=123")
    print(f"Status: {response.status_code}")

print("Trace sent to Tempo. Check in Grafana.")

This Python script instruments a request to HotROD with OpenTelemetry and exports the span to Tempo via OTLP gRPC (port 4317). Install the dependencies with pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc requests. Run it to generate cross-service traces. Pitfall: use insecure=True only locally.

Query Traces via Tempo API

Test the search API with TraceQL: curl -G 'http://localhost:3200/api/search' --data-urlencode 'q={resource.service.name="hotrod"}'. In multi-tenant setups, pass the tenant in an X-Scope-OrgID header rather than a URL parameter. Copy a Trace ID from the results or from Grafana Explore for details.
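Hand-encoding TraceQL inside a URL is error-prone, so it is worth building it programmatically. A small sketch using only the standard library (the `search_url` helper is ours):

```python
import urllib.parse

def search_url(service: str, base: str = "http://localhost:3200") -> str:
    """Build a Tempo /api/search URL for a TraceQL service-name query."""
    traceql = f'{{resource.service.name="{service}"}}'
    params = urllib.parse.urlencode({"q": traceql, "limit": 20})
    return f"{base}/api/search?{params}"

print(search_url("hotrod"))
```

Fetch the resulting URL (curl, requests, or urllib) and the response lists matching trace IDs you can feed to /api/traces/<id>.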

Tempo API Query (Bash)

query-traces.sh
#!/bin/bash
# Usage:   ./query-traces.sh <trace-id>
# Example: ./query-traces.sh 1234567890abcdef1234567890abcdef
[ -z "$1" ] && { echo "Usage: $0 <trace-id>" >&2; exit 1; }
curl -s "http://localhost:3200/api/traces/$1" | jq '.'

This script queries a specific trace by hex ID. Great for automation scripts. jq formats the JSON. Get the ID from HotROD logs or Grafana.

Best Practices

  • Scalable storage: Switch the backend to MinIO/S3 beyond roughly 1 TB of traces per day; the local filesystem backend is for dev only.
  • Instrumentation: Adopt OpenTelemetry everywhere to avoid lock-in to vendor-specific SDKs (e.g., the deprecated native Jaeger clients).
  • Security: Enable Grafana auth, mTLS for OTLP in prod; limit exposed ports.
  • Retention: Configure compactor per tenant/orgID for multi-tenancy.
  • Metrics: Enable 'span-metrics' to correlate traces with Prometheus metrics.

Common Errors to Avoid

  • No traces visible: Check OTEL Collector logs; wrong OTLP endpoint (4317 gRPC vs 4318 HTTP).
  • Empty Grafana: Datasource not provisioned or wrong URL (use 'tempo:3200' in Docker).
  • Traces lost on restart: Without WAL, traces are volatile; always enable it.
  • Performance issues: The local backend struggles at scale; migrate to object storage beyond ~100 GB of stored traces.

Next Steps

From here, experiment with TraceQL queries in Grafana Explore, point the metrics generator at a Prometheus remote_write endpoint to get service graphs and span metrics, and move storage to S3/MinIO for a production-grade setup.