Introduction
Valkey, the community fork of Redis launched in 2024 after Redis's license change to RSALv2, is a high-performance in-memory key-value store compatible with the open-source Redis protocol. Ideal for caching, sessions, task queues, or real-time data, Valkey delivers sub-millisecond latency and horizontal scalability via clustering. With its recent asynchronous I/O optimizations, it has become a staple of cloud-native architectures.
This expert tutorial guides you step by step: from bare-metal installation to high-availability cluster configuration, including AOF/RDB persistence, ACL security, and memory optimization. Each step includes complete, working, production-oriented code. By the end, you'll be able to deploy a resilient Valkey cluster capable of handling on the order of a million operations per second across nodes. Aimed at DevOps architects and senior backend engineers looking for a reference guide to bookmark.
Prerequisites
- Linux system (Ubuntu 24.04+ or Rocky Linux 9+)
- 8 GB RAM minimum per node (16 GB recommended for clusters)
- Docker 27+ and Docker Compose 2.29+ for local testing
- Node.js 22+ with TypeScript 5.6+ for clients
- Advanced knowledge of TCP networking, sharding, and CAP theorem
- Tools: git, make, gcc 12+ for building from source
Installing Valkey from Source
#!/bin/bash
set -e
# Update system
sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential tcl git
# Clone official Valkey repo
cd /opt
git clone https://github.com/valkey-io/valkey.git
cd valkey
git checkout 8.0 # Pin a stable release branch; avoid 'unstable' for production builds
# Build with optimizations (jemalloc, the default allocator, is required for activedefrag)
make -j"$(nproc)" BUILD_TLS=yes CFLAGS="-O3 -march=native"
# System installation
sudo make install
# Create user and systemd service
sudo useradd -r -s /bin/false valkey
sudo mkdir -p /etc/valkey /var/lib/valkey /var/log/valkey
sudo cp valkey.conf /etc/valkey/valkey.conf
sudo chown -R valkey:valkey /etc/valkey /var/lib/valkey /var/log/valkey
# Systemd service
sudo tee /etc/systemd/system/valkey.service > /dev/null <<EOF
[Unit]
Description=Valkey in-memory data store
After=network.target
[Service]
ExecStart=/usr/local/bin/valkey-server /etc/valkey/valkey.conf --supervised systemd
ExecStop=/usr/local/bin/valkey-cli shutdown
Restart=always
User=valkey
Group=valkey
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable valkey
sudo systemctl start valkey
# Verification
valkey-cli ping

This script installs Valkey from source with native optimizations (-O3, TLS). It sets up a robust systemd service running under a dedicated non-root user. For production builds, prefer a tagged release over the `unstable` branch, which carries in-development features. Test with `valkey-cli ping`, which returns PONG on success.
Basic Configuration and Startup
Valkey configuration uses a .conf file in key-value format, parsed at runtime. We start with a secure standalone config, bound to localhost with persistence enabled. Copy the following code to /etc/valkey/valkey.conf, then restart: sudo systemctl restart valkey.
Basic Configuration File
port 6379
bind 127.0.0.1 -::1
tcp-keepalive 300
# Basic security
dir /var/lib/valkey
timeout 0
tcp-backlog 511
# Logging
loglevel notice
logfile /var/log/valkey/valkey.log
# RDB persistence (snapshots)
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
dbfilename dump.rdb
rdbcompression yes
rdbchecksum yes
# AOF for durability (append-only)
appendonly yes
appendfilename "appendonly.aof"
auto-aof-rewrite-min-size 64mb
auto-aof-rewrite-percentage 100
no-appendfsync-on-rewrite no
This base config enables RDB for fast snapshots and AOF for WAL-like durability. Binding to localhost prevents network exposure. The `notice` log level balances performance and traceability. Automatic AOF rewriting compacts the log file, preventing disk bloat. Note that editing the file requires a restart; for runtime changes without downtime, use `valkey-cli CONFIG SET`, then `CONFIG REWRITE` to persist them back to the file.
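To make the snapshot semantics concrete — a `save S N` rule triggers a background save once at least N keys have changed within S seconds — here is a small standalone TypeScript checker (illustrative only; the parser and `shouldSnapshot` helper are not part of Valkey):

```typescript
interface SaveRule { seconds: number; changes: number }

// Parse `save <seconds> <changes>` directives from a config fragment
function parseSaveRules(conf: string): SaveRule[] {
  return conf
    .split('\n')
    .map((l) => l.trim())
    .filter((l) => l.startsWith('save '))
    .map((l) => {
      const [, seconds, changes] = l.split(/\s+/);
      return { seconds: Number(seconds), changes: Number(changes) };
    });
}

// A BGSAVE fires when ANY rule is satisfied
function shouldSnapshot(rules: SaveRule[], secondsSinceSave: number, dirtyKeys: number): boolean {
  return rules.some((r) => secondsSinceSave >= r.seconds && dirtyKeys >= r.changes);
}

const rules = parseSaveRules('save 900 1\nsave 300 10\nsave 60 10000');
console.log(shouldSnapshot(rules, 70, 15000)); // true: 60s elapsed with >=10000 changes
console.log(shouldSnapshot(rules, 30, 5));     // false: no rule satisfied yet
```

This also shows why the rules are listed from loose to strict: a single write eventually triggers a snapshot, while heavy write bursts snapshot much sooner.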
Basic TypeScript Client with Connection
import { createClient } from 'redis';
const client = createClient({
url: 'redis://localhost:6379',
});
(async () => {
await client.connect();
// SET with TTL and NX (no overwrite)
await client.set('user:123', JSON.stringify({ name: 'Alice', age: 30 }), {
EX: 3600,
NX: true,
});
// GET and parse
const userData = await client.get('user:123');
console.log('User:', JSON.parse(userData || '{}'));
// MULTI/EXEC batch: commands are queued and sent in a single roundtrip
const pipeline = client.multi();
pipeline.set('counter:visits', '42', { NX: true });
pipeline.incr('counter:visits');
pipeline.expire('counter:visits', 86400);
const results = await pipeline.exec();
console.log('Pipeline results:', results);
await client.quit();
})().catch(console.error);
Using the 'redis' client (Valkey-compatible), this code demonstrates SET/GET with TTL and NX, plus MULTI/EXEC batching for atomicity. Batching commands cuts network roundtrips substantially. Run with `npx tsx client-basic.ts` after `npm i redis tsx`. Wrap calls in try/catch for error handling in production.
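Production error handling usually includes a reconnect policy. node-redis accepts a `socket.reconnectStrategy` callback that maps the retry count to a delay in milliseconds (or an Error to stop retrying); a capped exponential backoff is a common sketch:

```typescript
// Capped exponential backoff: 50ms, 100ms, 200ms, ... up to 2s
function backoffDelay(retries: number, baseMs = 50, capMs = 2000): number {
  return Math.min(baseMs * 2 ** retries, capMs);
}

// Shape expected by node-redis: return ms to wait, or an Error to give up
const reconnectStrategy = (retries: number): number | Error =>
  retries > 20 ? new Error('Valkey unreachable, giving up') : backoffDelay(retries);

console.log(backoffDelay(0), backoffDelay(3), backoffDelay(10)); // 50 400 2000
```

Pass it as `createClient({ socket: { reconnectStrategy } })` (node-redis v4+); the cap keeps a flapping server from being hammered while still recovering quickly from brief blips.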
Enabling Advanced Persistence
For high availability, combine RDB (fast recovery) and AOF (durability). RDB takes background snapshots, while AOF logs every write. In production, use RDB for quick bootstrapping and AOF for WAL replay on crashes.
Optimized Persistence Configuration
# Includes the base config; later directives override earlier ones
include /etc/valkey/valkey.conf
# Advanced RDB
save 60 1000
save 10 10000
save 1 50000
rdb-save-incremental-fsync yes
# High-perf AOF (fsync everysec)
appendfsync everysec
auto-aof-rewrite-percentage 25
aof-rewrite-incremental-fsync yes
aof-use-rdb-preamble yes
# Diskless replication: stream the RDB to replicas without touching disk
repl-diskless-sync yes
repl-diskless-sync-delay 5

Includes the base config and tunes it: `everysec` fsync balances durability and performance (at most about one second of writes lost on a crash). The RDB preamble inside AOF files speeds up rewrites and restarts. Incremental fsync smooths disk IOPS during saves. Trigger a snapshot manually with `valkey-cli BGSAVE` to test.
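To monitor persistence health from a client, parse the response of `INFO persistence` (CRLF-separated `key:value` lines); `rdb_last_bgsave_status` and `aof_last_write_status` are the fields to watch. A minimal parser, exercised here on a hard-coded sample rather than a live server:

```typescript
// Parse an INFO response into a key/value map, skipping section headers
function parseInfo(raw: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of raw.split('\r\n')) {
    if (!line || line.startsWith('#')) continue;
    const sep = line.indexOf(':');
    if (sep > 0) out[line.slice(0, sep)] = line.slice(sep + 1);
  }
  return out;
}

// Sample payload; in real code this comes from client.info('persistence')
const sample =
  '# Persistence\r\naof_enabled:1\r\nrdb_last_bgsave_status:ok\r\naof_last_write_status:ok';
const info = parseInfo(sample);
console.log(
  info.aof_enabled === '1' && info.rdb_last_bgsave_status === 'ok'
    ? 'persistence healthy'
    : 'check persistence config',
); // prints "persistence healthy"
```

Wiring this into a periodic health check (and alerting when either status is not `ok`) catches failed background saves long before a crash turns them into data loss.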
TypeScript Client with WATCH Transactions
import { createClient, WatchError } from 'redis';

const client = createClient({ url: 'redis://localhost:6379' });

(async () => {
  await client.connect();
  try {
    // Optimistic locking: WATCH/MULTI must run on a dedicated connection
    await client.executeIsolated(async (isolated) => {
      await isolated.watch('stock:widget');
      const stock = parseInt((await isolated.get('stock:widget')) ?? '0', 10);
      if (stock <= 0) {
        await isolated.unwatch();
        return;
      }
      const result = await isolated
        .multi()
        .decr('stock:widget')
        .hIncrBy('orders:123', 'widgets', 1)
        .exec();
      console.log('Order executed:', result);
    });
  } catch (err) {
    if (err instanceof WatchError) {
      console.log('Concurrent conflict detected, retry...');
    } else {
      throw err;
    }
  }
  await client.quit();
})().catch(console.error);
WATCH/MULTI implements optimistic locking for transactions without Lua (simpler). In node-redis v4, a concurrent modification of a watched key makes exec() throw a WatchError, which you catch and retry; the watch must run on an isolated connection so other commands on the shared client don't interfere. Ideal for e-commerce inventory. For more complex atomic logic, move to Lua scripts.
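The retry implied by "retry on conflict" can be factored into a generic helper that re-runs the transaction body when a retryable error is caught. The conflict below is simulated so the sketch runs standalone; with node-redis, `isRetryable` would test `err instanceof WatchError`:

```typescript
// Retry an async operation when a retryable error (e.g. a WATCH conflict) occurs
async function withRetries<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 5,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (!isRetryable(err) || attempt >= maxAttempts) throw err;
    }
  }
}

// Demo: fail twice with a simulated conflict, then succeed
let calls = 0;
const result = await withRetries(
  async () => {
    calls++;
    if (calls < 3) throw new Error('WATCH conflict');
    return 'order executed';
  },
  (err) => err instanceof Error && err.message.includes('conflict'),
);
console.log(result, 'after', calls, 'attempts'); // order executed after 3 attempts
```

Capping attempts matters: under sustained contention, unbounded retries turn a hot key into a CPU spin.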
Setting Up a Valkey Cluster
Valkey supports native hash-slot-based clustering (16384 slots). Minimum 3 master nodes with replicas. Use Docker Compose to simulate production.
Docker Compose for 6-Node Cluster
version: '3.8'
services:
valkey-1:
    image: valkey/valkey:8.0-alpine # pin a specific version
command: valkey-server /usr/local/etc/valkey/valkey.conf
ports:
- "7001:6379"
volumes:
- ./valkey-cluster.conf:/usr/local/etc/valkey/valkey.conf
networks:
- valkey-net
valkey-2:
image: valkey/valkey:8.0-alpine
command: valkey-server /usr/local/etc/valkey/valkey.conf
ports:
- "7002:6379"
volumes:
- ./valkey-cluster.conf:/usr/local/etc/valkey/valkey.conf
networks:
- valkey-net
valkey-3:
image: valkey/valkey:8.0-alpine
command: valkey-server /usr/local/etc/valkey/valkey.conf
ports:
- "7003:6379"
volumes:
- ./valkey-cluster.conf:/usr/local/etc/valkey/valkey.conf
networks:
- valkey-net
  valkey-replica-1:
    image: valkey/valkey:8.0-alpine
    command: valkey-server /usr/local/etc/valkey/valkey.conf --cluster-node-timeout 5000
    volumes:
      - ./valkey-cluster.conf:/usr/local/etc/valkey/valkey.conf
    networks:
      - valkey-net
  valkey-replica-2:
    extends:
      service: valkey-replica-1
  valkey-replica-3:
    extends:
      service: valkey-replica-1
networks:
valkey-net:
    driver: bridge

This Compose file launches 3 masters + 3 replicas. Replica roles are assigned by valkey-cli --cluster create --cluster-replicas 1, not by a server flag. Create valkey-cluster.conf with at least cluster-enabled yes and cluster-config-file nodes.conf. Run docker compose up -d, then initialize the cluster. Note that nodes announce their container IPs on the bridge network, so run the initialization (and simple test clients) inside the Compose network, or configure cluster-announce-ip for access from the host.
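Once the cluster is up, you can verify the topology by parsing `CLUSTER NODES` output (one space-separated line per node: id, address, flags, master id, and more). A minimal parser, exercised on a shortened sample of that format:

```typescript
interface ClusterNode {
  id: string;
  addr: string;
  role: 'master' | 'replica';
}

// Parse CLUSTER NODES lines: <id> <ip:port@bus-port> <flags> <master-id> ...
function parseClusterNodes(raw: string): ClusterNode[] {
  return raw
    .trim()
    .split('\n')
    .map((line) => {
      const fields = line.split(' ');
      const flags = fields[2].split(',');
      return {
        id: fields[0],
        addr: fields[1].split('@')[0],
        role: flags.includes('master') ? 'master' : 'replica',
      };
    });
}

const sample =
  'a1b2 10.0.0.1:6379@16379 myself,master - 0 0 1 connected 0-5460\n' +
  'c3d4 10.0.0.4:6379@16379 slave a1b2 0 0 1 connected';
console.log(parseClusterNodes(sample).map((n) => `${n.addr} ${n.role}`));
```

A quick script like this (fed from `valkey-cli CLUSTER NODES`) makes a good smoke test that all six nodes joined and each master has a replica.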
Cluster Initialization and TypeScript Client
#!/bin/bash
# Initialize the cluster (run after docker compose up -d); exec inside the
# network so nodes announce container IPs that are reachable from each other
docker compose exec valkey-1 valkey-cli --cluster create \
  valkey-1:6379 valkey-2:6379 valkey-3:6379 \
  valkey-replica-1:6379 valkey-replica-2:6379 valkey-replica-3:6379 \
  --cluster-replicas 1 --cluster-yes
# Test cluster client
npm init -y && npm i redis tsx
cat > client-cluster.ts <<'EOF'
import { createCluster } from 'redis';
const cluster = createCluster({
rootNodes: [
{ url: 'redis://127.0.0.1:7001' },
{ url: 'redis://127.0.0.1:7002' },
{ url: 'redis://127.0.0.1:7003' },
],
});
(async () => {
await cluster.connect();
await cluster.set('cluster:key', 'value');
console.log(await cluster.get('cluster:key'));
await cluster.quit();
})();
EOF
npx tsx client-cluster.ts

This script initializes the cluster with automatic slot and replica assignment. The createCluster client discovers the remaining nodes from those listed in rootNodes. Sharding is transparent, based on a CRC16 hash of the key mapped onto 16384 slots. With the bridge network above, nodes advertise container IPs, so run this client from inside the Compose network (or set cluster-announce-ip on each node) for redirects to resolve. Cluster mode is designed to scale to roughly 1000 nodes.
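The CRC16-based sharding works as follows: slot = CRC16(key) mod 16384, where CRC16 is the XMODEM variant, and a `{hash tag}` in the key restricts hashing to the tagged substring so related keys land on the same node (enabling multi-key operations on them). A standalone sketch, assuming ASCII keys:

```typescript
// CRC16/XMODEM, the polynomial (0x1021) Redis/Valkey cluster uses
function crc16(s: string): number {
  let crc = 0;
  for (let i = 0; i < s.length; i++) {
    crc ^= s.charCodeAt(i) << 8;
    for (let bit = 0; bit < 8; bit++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Map a key to one of the 16384 cluster slots, honoring {hash tags}
function keySlot(key: string): number {
  const open = key.indexOf('{');
  if (open !== -1) {
    const close = key.indexOf('}', open + 1);
    if (close > open + 1) key = key.slice(open + 1, close); // only non-empty tags count
  }
  return crc16(key) % 16384;
}

console.log(keySlot('{user:123}.orders') === keySlot('{user:123}.cart')); // true: same node
```

Compare against the server with `valkey-cli CLUSTER KEYSLOT <key>`; tagging a user's keys with `{user:123}` is the standard trick for keeping MULTI and Lua operations single-node in a cluster.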
ACL Security and Monitoring
ACLs (Access Control Lists), inherited from Redis 6, restrict which commands and keys each user may access. Add to valkey.conf: aclfile /etc/valkey/users.acl. Monitor via INFO or a Prometheus exporter.
Secure ACL Configuration
user default off
user reader on >password123 ~cache:* &* +get +keys +scan
user writer on >strongpass456 ~* &* +@write +@list +@hash +set +del
user admin on #<sha256-hex-of-password> ~* &* +@all

This creates role-based users: reader (read-only on cache:* keys), writer (limited write commands), and admin (full access), and disables the passwordless default user so unauthenticated connections are rejected. ~pattern scopes keys, &pattern scopes Pub/Sub channels, and +@category grants command categories. The # form expects a SHA-256 hex digest of the password (e.g. echo -n 'secret' | sha256sum); ACL GENPASS generates strong passwords to hash. Reload the file with valkey-cli ACL LOAD.
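ACL key patterns are glob-style. To sanity-check which keys a `~pattern` rule covers before deploying it, a simplified matcher (handling only `*` and `?`, which covers most ACL patterns; this helper is illustrative, not part of any client library) can help:

```typescript
// Translate a glob pattern (only * and ? supported here) into an anchored RegExp
function aclKeyMatches(pattern: string, key: string): boolean {
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&');
  const re = new RegExp('^' + escaped.replace(/\*/g, '.*').replace(/\?/g, '.') + '$');
  return re.test(key);
}

console.log(aclKeyMatches('cache:*', 'cache:user:1')); // true: reader may access this key
console.log(aclKeyMatches('cache:*', 'session:42'));   // false: outside reader's scope
```

Running your production key names through the patterns you plan to grant is a cheap way to catch overly broad (or overly narrow) ACL rules.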
Best Practices
- Memory: Enable `activedefrag yes` and `maxmemory-policy allkeys-lru` for automatic eviction; set `maxmemory` to roughly 80% of RAM. (Active defrag requires the default jemalloc allocator.)
- Network: Use TLS (`tls-port 6380`) and `protected-mode yes` in production.
- Backup: Cron a `BGSAVE` script and ship snapshots offsite to S3 via rclone.
- Monitoring: Run a Prometheus exporter for Valkey metrics; alert when `used_memory` exceeds 90% of `maxmemory` or on `rejected_connections`.
- Scaling: Add nodes via `CLUSTER MEET`, then rebalance slots with `valkey-cli --cluster rebalance`.
Common Errors to Avoid
- Uneven slot distribution: Check `CLUSTER NODES`; run `valkey-cli --cluster rebalance` if one node holds a disproportionate share of the 16384 slots.
- Bloated AOF: Monitor `aof_current_size`; force a rewrite with `BGREWRITEAOF` if it exceeds roughly 2x the base size.
- No persistence: RDB alone loses writes since the last snapshot; always enable AOF in production for WAL-style replay.
- Clients without retry logic: Implement exponential backoff on `CLUSTERDOWN` errors; partitions typically heal once failover completes, on the order of the cluster node timeout.
Next Steps
Dive deeper with the official Valkey docs, the JSON module for document workloads, or vector search for AI use cases. Test in Kubernetes with a Helm chart. Check out our advanced Learni DevOps training to master Valkey in hybrid cloud.