
How to Scale Azure Cosmos DB in 2026

Introduction

Azure Cosmos DB is Microsoft Azure's multi-model NoSQL database, offering unlimited horizontal scalability, global distribution, and 99.999% availability SLAs. In 2026, with the rise of AI and serverless apps, mastering Cosmos DB is essential for handling massive workloads without downtime. This advanced tutorial walks you through creating an account, partitioning containers, optimizing RU/s (Request Units), implementing complex SQL queries, and configuring advanced indexing policies using the TypeScript SDK.

Why it matters: poor partitioning leads to hotspots and skyrocketing costs, while over-indexing wastes RU. Each concept is illustrated with functional, copy-paste code and analogies such as a 'smart load balancer' for partitioning. By the end, you'll be able to scale to millions of operations per second. Ready to turn your backend into a scaling machine?

Prerequisites

  • Active Azure account with a paid subscription (free tier limited for scaling).
  • Node.js 20+ and npm/yarn installed.
  • Azure CLI installed and authenticated (az login).
  • Advanced knowledge of TypeScript, async/await, and NoSQL basics.
  • Cosmos DB API key (Endpoint + Primary Key) from your account.
  • VS Code with Azure Cosmos DB extension for debugging.

Install the Cosmos DB SDK

terminal
mkdir cosmos-scaling-demo
cd cosmos-scaling-demo

npm init -y
npm install @azure/cosmos @azure/identity dotenv
npm install -D @types/node typescript ts-node

npx tsc --init --target es2022 --module commonjs --outDir ./dist --rootDir ./src

echo 'COSMOS_ENDPOINT=https://your-account.documents.azure.com:443/
COSMOS_KEY=your-primary-key==
DATABASE_ID=ScalingDB' > .env

These commands set up a Node.js project with the official @azure/cosmos SDK for programmatic interaction with Cosmos DB. The @azure/identity package handles AAD authentication for production; dotenv loads secrets from .env. Avoid hardcoding keys: use Azure Key Vault in production for automatic rotation.

Understanding Connections and Basics

Connecting to Cosmos DB uses a secure HTTPS endpoint and a primary key (or AAD). Think of it as a 'global entry point': all operations flow through it, but scaling happens behind the scenes. Before scaling, provision an account with the API for NoSQL (formerly SQL API). In the Azure Portal you can enable the Analytical Store for Synapse Link, or pick serverless capacity (chosen at account creation, not combinable with provisioned throughput) for bursty traffic. Check RU/s: provisioned throughput starts at 400 RU/s per container and scales to millions.

Create the Database and Partitioned Container

src/create-db-container.ts
import { CosmosClient } from '@azure/cosmos';
import dotenv from 'dotenv';
dotenv.config();

const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT!, key: process.env.COSMOS_KEY! });
const databaseId = process.env.DATABASE_ID!;

async function main() {
  const { database } = await client.databases.createIfNotExists({ id: databaseId });
  console.log(`Database created: ${databaseId}`);

  const containerId = 'Users';
  const partitionKey = { paths: ['/userId'], version: 2 };

  await database.containers.createIfNotExists({
    id: containerId,
    partitionKey,
    indexingPolicy: {
      indexingMode: 'consistent',
      includedPaths: [{ path: '/*' }],
      excludedPaths: [{ path: '/_etag/?' }]
    },
    defaultTtl: 3600
  });
  console.log(`Partitioned container created: ${containerId}`);
}

main().catch(console.error);

This code creates a database and a container with partition key /userId (version 2 enables large hashed keys), consistent indexing, and a 1-hour TTL. Partitioning avoids hotspots: imagine 'silos' per userId enabling horizontal scaling. Pitfall: a single logical partition is capped at 20 GB, so a key that funnels data to one value limits both storage and throughput; always test with synthetic loads.

Mastering Advanced Partitioning

Logical partitioning: choose a high-cardinality key (e.g., userId, timestamp) for even distribution. Cosmos DB automatically splits physical partitions behind the scenes (each limited to roughly 50 GB and 10,000 RU/s). Avoid cross-partition queries where possible by passing the partition key in the request options so the query stays within one logical partition. Analogy: like a restaurant with one table per group; too many customers at one table = bottleneck.
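The restaurant analogy can be made concrete with a synthetic partition key: when the natural key has low cardinality, append a deterministic bucket suffix so writes spread across partitions. A minimal sketch; the hash function and bucket count are illustrative choices, not an SDK feature:

```typescript
// Spread a low-cardinality value (e.g. country) across a fixed number of
// buckets by hashing a high-cardinality field (userId) into a suffix.
function syntheticKey(country: string, userId: string, buckets = 16): string {
  // Cheap deterministic 32-bit hash of userId.
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return `${country}-${h % buckets}`;
}

// The same user always maps to the same logical partition,
// so point reads remain possible if you can recompute the key.
console.log(syntheticKey("FR", "user42"));
```

The trade-off: reads that only know `country` must now fan out across all 16 buckets, so size `buckets` to your write hotspot, not higher.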

Bulk Insert Data and Point Queries

src/insert-query.ts
import { CosmosClient, BulkOperationType, OperationInput } from '@azure/cosmos';
import dotenv from 'dotenv';
dotenv.config();

const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT!, key: process.env.COSMOS_KEY! });
const database = client.database(process.env.DATABASE_ID!);
const container = database.container('Users');

async function main() {
  // Bulk insert (100 items across 10 logical partitions)
  const operations: OperationInput[] = [];
  for (let i = 0; i < 100; i++) {
    operations.push({
      operationType: BulkOperationType.Create,
      resourceBody: {
        id: `user${i}`,
        userId: `userPartition${Math.floor(i / 10)}`,
        name: `User ${i}`,
        score: Math.random() * 100,
        tags: ['premium', 'active'],
        timestamp: new Date().toISOString()
      }
    });
  }
  const responses = await container.items.bulk(operations);
  console.log(`${responses.length} operations completed`);

  // Point read (single logical partition, ~1 RU)
  const { resource: user } = await container.item('user0', 'userPartition0').read();
  console.log('User found:', user);
}

main().catch(console.error);

Use items.bulk for bulk operations (the SDK groups them into batches of up to 100 operations per partition key range) to minimize round trips and RU/s. Point reads with id + partition key are the most efficient access pattern (~1 RU). Avoid scans: always specify the partition key; an intra-partition lookup costs a few RU versus potentially hundreds for a cross-partition fan-out.

Advanced SQL Query with Aggregates

query.sql
SELECT c.userId, AVG(c.score) AS avgScore, COUNT(1) AS userCount
FROM c
WHERE ARRAY_CONTAINS(c.tags, 'premium') AND c.timestamp > '2026-01-01T00:00:00Z'
GROUP BY c.userId

This query uses ARRAY_CONTAINS for array membership, a temporal filter on the ISO-8601 timestamp string, and GROUP BY. Note that Cosmos DB's SQL dialect does not support ORDER BY or OFFSET/LIMIT combined with GROUP BY, so sort and paginate the aggregated results client-side. It is efficient with indexes on /tags and /timestamp. Cost: on the order of tens of RU for 1k items; profile in Data Explorer.

Queries and RU/s Optimization

Run the query via the SDK with container.items.query(querySpec). Profile RU in the portal (Metrics blade) or via requestCharge on each response. Tips: keep filters and ORDER BY on indexed paths, and avoid SELECT * (project only the fields you need, or use VALUE). For heavy analytics, offload to Synapse Link rather than burning transactional RU.
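As a sketch of those tips, the helper below builds a parameterized, projected query spec plus request options scoped to one logical partition. The object shapes mirror what `container.items.query(querySpec, options)` expects; the function name and values are illustrative:

```typescript
// Build a projected, parameterized query pinned to a single partition.
function buildTopScoresQuery(partitionKeyValue: string, minScore: number) {
  const querySpec = {
    // Projection instead of SELECT * keeps payloads and RU down.
    query: "SELECT c.id, c.score FROM c WHERE c.score >= @minScore",
    parameters: [{ name: "@minScore", value: minScore }],
  };
  const options = {
    partitionKey: partitionKeyValue, // keeps the query intra-partition
    maxItemCount: 50,                // page size per fetch
  };
  return { querySpec, options };
}

const { querySpec, options } = buildTopScoresQuery("userPartition0", 75);
console.log(querySpec.query, "->", options.partitionKey);
```

Parameterized specs also protect against injection when filter values come from user input.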

Execute SQL Query in TypeScript

src/advanced-query.ts
import { CosmosClient, SqlQuerySpec } from '@azure/cosmos';
import dotenv from 'dotenv';
dotenv.config();

const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT!, key: process.env.COSMOS_KEY! });
const container = client.database(process.env.DATABASE_ID!).container('Users');

const querySpec: SqlQuerySpec = {
  query: "SELECT c.userId, AVG(c.score) AS avgScore FROM c WHERE ARRAY_CONTAINS(c.tags, 'premium') GROUP BY c.userId"
};

async function main() {
  const { resources, requestCharge } = await container.items.query(querySpec).fetchAll();
  console.log('Results:', resources);
  console.log(`RU consumed: ${requestCharge}`);
}
}

main().catch(console.error);

Use fetchAll() for small result sets; switch to the paged async iterator for large ones. Log requestCharge for tuning. Pitfall: a cross-partition GROUP BY fans out to every physical partition and can explode RU; partition by userId so the aggregation stays intra-partition where possible.
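The iterator pattern can be sketched without a live account. The helper below drains any async iterable of SDK-shaped pages ({ resources, requestCharge }), as produced by `query(...).getAsyncIterator()`, while accumulating the total charge; the mock feed is a stand-in for a real query:

```typescript
interface Page<T> {
  resources: T[];
  requestCharge: number;
}

// Consume pages one at a time (bounded memory) and sum the RU cost.
async function drainPaged<T>(
  pages: AsyncIterable<Page<T>>
): Promise<{ items: T[]; totalRU: number }> {
  const items: T[] = [];
  let totalRU = 0;
  for await (const page of pages) {
    items.push(...page.resources);
    totalRU += page.requestCharge;
  }
  return { items, totalRU };
}

// Mock standing in for container.items.query(spec).getAsyncIterator().
async function* mockFeed() {
  yield { resources: [1, 2], requestCharge: 2.5 };
  yield { resources: [3], requestCharge: 1.2 };
}

drainPaged(mockFeed()).then(({ items, totalRU }) =>
  console.log(`${items.length} items, ${totalRU} RU`)
);
```

For very large result sets, process each page inside the loop instead of accumulating, so memory stays flat.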

Custom Indexing Policy (JSON)

indexing-policy.json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    { "path": "/userId/?" },
    { "path": "/score/?" },
    { "path": "/tags/[]/?" }
  ],
  "excludedPaths": [
    { "path": "/*" }
  ],
  "compositeIndexes": [
    [
      { "path": "/userId", "order": "ascending" },
      { "path": "/score", "order": "descending" }
    ]
  ]
}

This policy indexes selectively: only the queried paths are included, and the catch-all /* exclusion skips everything else (including /timestamp), typically saving 10-20% of write RU. If you run the timestamp-filtered aggregate above regularly, add /timestamp/? back to includedPaths. The legacy per-path kind/dataType/precision settings are deprecated and ignored by current API versions. The composite index serves ORDER BY userId ASC, score DESC. Apply the policy during createIfNotExists; changing it later triggers a background re-index that is costly on large containers.

Change Feed for Event-Driven Scaling

src/change-feed.ts
import { CosmosClient } from '@azure/cosmos';
import dotenv from 'dotenv';
dotenv.config();

const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT!, key: process.env.COSMOS_KEY! });
const container = client.database(process.env.DATABASE_ID!).container('Users');

async function main() {
  const iterator = container.items.changeFeed({
    startFromBeginning: false
  });

  let count = 0;
  while (iterator.hasMoreResults) {
    const response = await iterator.fetchNext();
    if (!response.result || response.result.length === 0) break; // no new changes
    count += response.result.length;
    console.log(`Changes read: ${count}`);
    // Forward to Azure Functions / Event Hubs here
  }
}

main().catch(console.error);

The Change Feed captures inserts and updates in order within each partition key (deletes only appear in the all-versions-and-deletes mode), making it ideal for CDC (Change Data Capture) to Kafka or Azure Functions. Scale by processing partitions in parallel; multi-instance consumers require a lease container, which the Azure Functions Cosmos DB trigger manages for you.

Best Practices

  • Choose autoscale throughput (scales between 10% and 100% of the configured max) for variable workloads; monitor via the Metrics API.
  • Implement fan-out: parallel intra-partition queries with Promise.all.
  • Use serverless accounts for spiky, low-utilization workloads; migrate to provisioned or autoscale throughput for predictable loads.
  • Enable multi-region writes to serve writes locally (<10 ms p99 latency within a region).
  • Always profile: target single-digit RU per query with a minimal indexing policy.
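The fan-out bullet can be sketched as follows; `queryPartition` is a hypothetical stand-in for an intra-partition `container.items.query(spec, { partitionKey }).fetchAll()` call:

```typescript
interface PartitionResult {
  resources: string[];
  requestCharge: number;
}

// Stand-in for one intra-partition query; real code would hit Cosmos DB.
async function queryPartition(pk: string): Promise<PartitionResult> {
  return { resources: [`${pk}:hit`], requestCharge: 3 };
}

// Fan out across known partition keys in parallel, then merge.
async function fanOut(partitionKeys: string[]) {
  const responses = await Promise.all(partitionKeys.map(queryPartition));
  return {
    resources: responses.flatMap((r) => r.resources),
    totalRU: responses.reduce((sum, r) => sum + r.requestCharge, 0),
  };
}

fanOut(["userPartition0", "userPartition1"]).then((r) =>
  console.log(`${r.resources.length} results, ${r.totalRU} RU`)
);
```

Each sub-query stays cheap and intra-partition, and total latency approaches the slowest single partition rather than the sum.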

Common Errors to Avoid

  • Hot partitions: a low-cardinality key (e.g., /country) concentrates traffic → use a synthetic key (e.g., userId + region, or a hashed bucket suffix) for even spread.
  • Excessive cross-partition queries: up to 10x the cost; refactor toward targeted fan-out with app-level aggregation.
  • Full indexing on write-heavy containers: inflates insert RU; exclude paths you never query.
  • Aggressive TTL churn: mass expirations consume background RU; validate the impact under autoscale before relying on it.

Next Steps