How to Implement Advanced A/B Tests with Next.js in 2026

Introduction

A/B tests are essential for optimizing digital products by measuring the real impact of changes on key metrics like conversion rate or time spent. In 2026, with Next.js 15 and its Server Components, server-side implementations avoid client-side biases (ad blockers, bots) and ensure deterministic bucketing based on user identity.

This expert tutorial guides you step by step through a complete system: hashed bucketing, conditional UI variants, event tracking via an API Route Handler, database storage (simulated here; swap in Vercel Postgres), and a dashboard with Bayesian significance analysis. Unlike hosted tools like Optimizely, this custom approach is scalable, free, and integrates seamlessly with your stack.

Why it matters: many A/B tests fail to produce trustworthy results for lack of statistical rigor. Here we calculate uplift, credible intervals, and a power analysis so your conclusions are defensible. Duration: about 20 minutes of setup, with live results. Ready to lift your KPIs?

Prerequisites

  • Node.js 20+ and npm/yarn/pnpm
  • Next.js 15+ with App Router and TypeScript
  • Advanced knowledge: React Server Components, Server Actions, custom hooks, probabilities (beta distributions)
  • Vercel for deployment (optional, local works fine)
  • Tools: VS Code, Git

Initialize the Next.js Project

terminal
npx create-next-app@15 ab-testing-app --typescript --tailwind --eslint --app --src-dir --import-alias "@/*"
cd ab-testing-app
npm install recharts uuid
npm install -D @types/uuid
npm run dev

This script creates a minimal Next.js 15 project with TypeScript, Tailwind for quick UI, and Recharts for the dashboard. The uuid package provides the v5 hash used for deterministic bucketing. Run npm run dev to test at http://localhost:3000. Pitfall: confirm you are on Next.js 15+, where Server Actions are stable and cookies() is async.

Principles of Server-Side Bucketing

Bucketing assigns a user to a variant (A or B) deterministically: hash(userId) % 2. Server-side prevents inconsistencies (like page refreshes). We use an 'ab-variant' cookie for persistence. For experts: 50/50 split, but extensible to multi-variants or MVT.
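As a minimal standalone sketch of that principle (using Node's built-in crypto rather than the uuid package the utility below relies on), the hash-and-modulo step looks like this:

```typescript
import { createHash } from "crypto";

// Minimal sketch of deterministic bucketing: hash userId + experimentId,
// take the first 32 bits of the digest, reduce modulo the variant count.
function bucket(userId: string, experimentId: string, variants = 2): number {
  const digest = createHash("sha256").update(userId + experimentId).digest("hex");
  return parseInt(digest.slice(0, 8), 16) % variants;
}

// The same user always lands in the same bucket for a given experiment,
// while a different experimentId re-randomizes the split.
console.log(bucket("user-42", "hero-cta-test") === bucket("user-42", "hero-cta-test")); // true
```

Because assignment is a pure function of (userId, experimentId), no per-user state is strictly required; the cookie in the next section is an optimization, not the source of truth.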

Deterministic Bucketing Utility

src/lib/bucketing.ts
import { cookies } from 'next/headers';
import { v5 as uuidv5 } from 'uuid';
import { randomUUID } from 'crypto';

export const VARIANTS = ['control', 'variant'] as const;
export type Variant = typeof VARIANTS[number];

// Note: in Next.js 15, cookies() is async, and cookies can only be *written*
// from a Server Action, Route Handler, or middleware — not a Server Component.
export async function getUserId(): Promise<string> {
  const cookieStore = await cookies();
  let userId = cookieStore.get('userId')?.value;
  if (!userId) {
    userId = randomUUID();
    cookieStore.set('userId', userId, { maxAge: 365 * 24 * 60 * 60 });
  }
  return userId;
}

export async function assignVariant(experimentId: string): Promise<Variant> {
  const userId = await getUserId();
  // UUID v5 hashes userId + experimentId into a stable, well-distributed value.
  const hash = uuidv5(userId + experimentId, uuidv5.DNS);
  const index = parseInt(hash.slice(0, 8), 16) % VARIANTS.length;
  return VARIANTS[index];
}

export async function getVariant(experimentId: string): Promise<Variant> {
  const cookieStore = await cookies();
  let variant = cookieStore.get('ab-variant')?.value as Variant | undefined;
  if (!variant) {
    variant = await assignVariant(experimentId);
    cookieStore.set('ab-variant', variant, { maxAge: 365 * 24 * 60 * 60 });
  }
  return variant;
}

This utility generates a persistent userId (UUID v4 via randomUUID), then derives the variant from a UUID v5 hash of userId + experimentId — deterministic and collision-resistant. It is server-only via cookies(). Benefit: assignment is sticky per browser (not across devices, unless you key on a logged-in account id). Pitfalls: in Next.js 15 cookies() is async, and cookie writes are only permitted in Server Actions, Route Handlers, or middleware, so trigger the cookie-setting paths from one of those contexts. Always call getVariant in Server Components, never assignVariant alone, so the stored assignment wins.
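Since Server Components cannot write cookies, one common pattern is to seed the userId cookie in middleware, where Next.js allows writes on the response. A minimal sketch (assuming the default middleware.ts location at the project root; adjust the matcher to cover your experiment routes):

```typescript
// middleware.ts — hypothetical sketch: assign the userId cookie before the
// page renders, so Server Components only ever need to *read* it.
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const response = NextResponse.next();
  if (!request.cookies.get('userId')) {
    // crypto.randomUUID() is available in the Edge runtime.
    response.cookies.set('userId', crypto.randomUUID(), {
      maxAge: 365 * 24 * 60 * 60,
    });
  }
  return response;
}

export const config = { matcher: '/' };
```

With this in place, getUserId's cookie-writing fallback rarely triggers, and variant assignment stays a pure read in Server Components.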

Implementing the A/B Test Page

Analogy: like a forked bridge where half the traffic takes each path and each path is measured independently. We render the variant server-side for SEO and performance, with no variant flicker from client-side swapping.

Homepage with A/B Variants

src/app/page.tsx
import { getVariant } from '@/lib/bucketing';
import { HeroCta } from './HeroCta';

export default async function Home() {
  const variant = await getVariant('hero-cta-test');

  return (
    <main className="flex min-h-screen flex-col items-center justify-center p-24">
      <h1 className="text-4xl font-bold mb-8">Hero CTA A/B Test</h1>
      <div className="max-w-md mx-auto">
        <HeroCta variant={variant} />
        <p className="text-sm mt-4 text-gray-500">Your variant: <strong>{variant}</strong></p>
      </div>
    </main>
  );
}

src/app/HeroCta.tsx
'use client';

import { useABTracking } from '@/hooks/useABTracking';
import type { Variant } from '@/lib/bucketing';

export function HeroCta({ variant }: { variant: Variant }) {
  // Registers window.trackConversion and logs one impression (next section).
  useABTracking('hero-cta-test', variant);

  const track = () => (window as any).trackConversion?.('conversion');

  if (variant === 'control') {
    return (
      <div className="bg-blue-500 text-white p-8 rounded-lg shadow-lg">
        <p className="text-xl mb-4">Version A: standard button</p>
        <button className="bg-white text-blue-500 px-6 py-3 rounded font-semibold w-full" onClick={track}>
          Sign up for free
        </button>
      </div>
    );
  }
  return (
    <div className="bg-green-500 text-white p-8 rounded-lg shadow-lg">
      <p className="text-xl mb-4">Version B: urgent button</p>
      <button
        className="bg-orange-400 text-white px-6 py-3 rounded font-bold text-lg w-full uppercase tracking-wide"
        onClick={track}
      >
        Limited offer: sign up now!
      </button>
    </div>
  );
}

The Server Component resolves the variant via getVariant and renders the matching HTML per bucket, so the variant logic itself ships zero JavaScript. Pitfall: event handlers cannot be passed from Server Components, so the clickable buttons must live in a 'use client' child component; their clicks call the global window.trackConversion, which the next section wires up.

Tracking and Storing Metrics

Track impressions and conversions with a client hook plus an API route for server-side logging. Storage is simulated in memory here (swap in Vercel Postgres or Supabase). Expert note: use atomic increments (UPDATE ... SET n = n + 1) for concurrency.

Client Hook for Tracking

src/hooks/useABTracking.ts
'use client';

import { useEffect } from 'react';

interface TrackEvent {
  experiment: string;
  variant: string;
  event: 'impression' | 'conversion';
}

let isTrackingLoaded = false;

export function useABTracking(experiment: string, variant: string) {
  useEffect(() => {
    if (!isTrackingLoaded) {
      // Expose a global so any component can report events for this experiment.
      (window as any).trackConversion = async (event: TrackEvent['event']) => {
        await fetch('/api/track', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ experiment, variant, event }),
        });
      };
      isTrackingLoaded = true;
    }
  }, [experiment, variant]);

  useEffect(() => {
    // Log one impression per mount.
    (window as any).trackConversion?.('impression');
  }, []);
}

The hook registers the global tracker once ('use client' is required since it touches window and React hooks), auto-logs an impression on mount, and POSTs conversions to the API route so the counting happens server-side (harder to tamper with). Pitfall: add a debounce for high-volume bursts so rapid clicks don't flood the endpoint.
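The debounce hinted at above can be a plain helper, independent of any tracking library (a generic sketch):

```typescript
// Trailing-edge debounce: collapse a burst of calls into one, fired after
// `ms` of silence. Useful to avoid flooding /api/track on rapid interactions.
function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Example: three rapid calls produce a single tracked event, ~200ms later.
const trackOnce = debounce((event: string) => console.log("tracked:", event), 200);
trackOnce("conversion");
trackOnce("conversion");
trackOnce("conversion");
```

Wrap window.trackConversion with it where bursts are likely; leave impressions undebounced, since they fire once per mount anyway.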

API Route for Logging Metrics

src/app/api/track/route.ts
import { NextRequest, NextResponse } from 'next/server';

// In-memory store (use a real database in production)
const metrics: Record<string, { impressions: number; conversions: number }> = {
  'hero-cta-test_control': { impressions: 0, conversions: 0 },
  'hero-cta-test_variant': { impressions: 0, conversions: 0 },
};

export async function POST(request: NextRequest) {
  const { experiment, variant, event } = await request.json();
  const key = `${experiment}_${variant}`;

  if (!metrics[key]) {
    metrics[key] = { impressions: 0, conversions: 0 };
  }

  if (event === 'impression') {
    metrics[key].impressions++;
  } else if (event === 'conversion') {
    metrics[key].conversions++;
  }

  return NextResponse.json({ success: true });
}

The POST handler increments the in-memory metrics (reset on every restart; use Redis or Postgres in production), keyed per experiment and variant. Route Handlers that read the request body are dynamic by default, so no force-dynamic flag is needed. Pitfall: the payload is unvalidated; add schema validation in production.
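Pending a schema library, a hand-rolled validator can guard the handler. A sketch (the TrackPayload shape mirrors the JSON body the hook sends; swap in Zod or similar in production):

```typescript
// Hypothetical sketch: validate the /api/track payload by hand.
type TrackPayload = {
  experiment: string;
  variant: string;
  event: 'impression' | 'conversion';
};

function parseTrackPayload(body: unknown): TrackPayload | null {
  if (typeof body !== 'object' || body === null) return null;
  const b = body as Record<string, unknown>;
  if (typeof b.experiment !== 'string' || typeof b.variant !== 'string') return null;
  if (b.event !== 'impression' && b.event !== 'conversion') return null;
  // The checks above guarantee the shape; the cast just makes that explicit.
  return { experiment: b.experiment, variant: b.variant, event: b.event as TrackPayload['event'] };
}

console.log(parseTrackPayload({ experiment: 'hero-cta-test', variant: 'control', event: 'impression' }));
console.log(parseTrackPayload({ event: 'bogus' })); // null
```

In the route, call parseTrackPayload on the parsed JSON and return a 400 response when it yields null.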

Dashboard and Bayesian Analysis

Bayesian vs frequentist: credible intervals answer the question you actually care about ("with 95% probability the uplift lies in this range"), unlike p-values. We place a uniform Beta(1, 1) prior on each conversion rate and derive the posterior uplift.
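Concretely, the conjugate update is one line of arithmetic. A sketch (the numbers mirror the mock metrics used in the dashboard below):

```typescript
// Beta(1, 1) prior + binomial data => posterior Beta(1 + conversions, 1 + failures).
// The posterior mean is a lightly smoothed version of the raw conversion rate.
function posteriorMean(conversions: number, impressions: number): number {
  const alpha = 1 + conversions;
  const beta = 1 + (impressions - conversions);
  return alpha / (alpha + beta);
}

// 50 conversions out of 1000 impressions: raw CR = 5.0%
console.log(posteriorMean(50, 1000)); // 51 / 1002 ≈ 0.0509
```

With large samples the prior washes out and the posterior mean converges to the raw rate; with small samples it shrinks estimates toward 50%, which guards against overreacting to early noise.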

Dashboard with Bayesian Stats

src/app/dashboard/page.tsx
'use client';

import { BarChart, Bar, XAxis, YAxis, Tooltip, ResponsiveContainer, Legend } from 'recharts';

// Mock metrics (fetch from API/DB)
const metrics = {
  control: { impressions: 1000, conversions: 50 },
  variant: { impressions: 1000, conversions: 70 },
};

function sampleGamma(shape: number): number {
  // Marsaglia–Tsang method; valid for shape >= 1, which always holds here
  // since alpha = 1 + conversions and beta = 1 + failures.
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  while (true) {
    let x: number;
    let v: number;
    do {
      // Standard normal via Box–Muller (1 - Math.random() avoids log(0)).
      x = Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) {
      return d * v;
    }
  }
}

function sampleBeta(alpha: number, beta: number, samples = 10000): number[] {
  // Beta(a, b) = X / (X + Y) with X ~ Gamma(a), Y ~ Gamma(b).
  const out: number[] = [];
  for (let i = 0; i < samples; i++) {
    const x = sampleGamma(alpha);
    const y = sampleGamma(beta);
    out.push(x / (x + y));
  }
  return out;
}

export default function Dashboard() {
  const controlCR = metrics.control.conversions / metrics.control.impressions;
  const variantCR = metrics.variant.conversions / metrics.variant.impressions;
  const uplift = ((variantCR - controlCR) / controlCR) * 100;

  const controlAlpha = 1 + metrics.control.conversions;
  const controlBeta = 1 + (metrics.control.impressions - metrics.control.conversions);
  const variantAlpha = 1 + metrics.variant.conversions;
  const variantBeta = 1 + (metrics.variant.impressions - metrics.variant.conversions);

  const controlSamples = sampleBeta(controlAlpha, controlBeta);
  const variantSamples = sampleBeta(variantAlpha, variantBeta);
  const upliftSamples = variantSamples.map((vs, i) => ((vs - controlSamples[i]) / controlSamples[i]) * 100);
  upliftSamples.sort((a, b) => a - b); // Sort once, then read both percentiles
  const ciLower = upliftSamples[Math.floor(0.025 * upliftSamples.length)];
  const ciUpper = upliftSamples[Math.floor(0.975 * upliftSamples.length)];

  const data = [
    { name: 'Control', impressions: metrics.control.impressions, conversions: metrics.control.conversions },
    { name: 'Variant', impressions: metrics.variant.impressions, conversions: metrics.variant.conversions },
  ];

  return (
    <div className="p-8 max-w-4xl mx-auto">
      <h1 className="text-3xl font-bold mb-8">A/B Test Dashboard</h1>
      <div className="grid grid-cols-1 md:grid-cols-2 gap-8 mb-8">
        <div>
          <h2>Conversion Rates</h2>
          <ResponsiveContainer width="100%" height={300}>
            <BarChart data={data}>
              <XAxis dataKey="name" />
              <YAxis />
              <Tooltip />
              <Legend />
              <Bar dataKey="conversions" fill="#8884d8" name="Conversions" />
            </BarChart>
          </ResponsiveContainer>
        </div>
      </div>
      <div className="bg-gray-100 p-6 rounded-lg">
        <h2 className="text-2xl mb-4">Bayesian Analysis</h2>
        <p>Mean uplift: <strong>{uplift.toFixed(1)}%</strong></p>
        <p>95% credible interval: [{ciLower.toFixed(1)}%, {ciUpper.toFixed(1)}%]</p>
        <p className={ciLower > 0 ? 'text-green-600' : 'text-red-600'}>
          {ciLower > 0 ? '✅ Significant: variant wins' : '❌ No significant difference'}
        </p>
      </div>
    </div>
  );
}

The dashboard computes conversion rates and uplift from the metrics (hardcoded here; fetch them from your API in practice), then draws from the Beta posteriors via Gamma sampling to build a Monte Carlo credible interval. Recharts renders in the browser, so the page needs the 'use client' directive. Expert note: Beta(1, 1) is the uniform prior. Pitfall: JavaScript has no built-in Math.lgamma, so closed-form Beta densities would need a polyfill — sampling sidesteps that. Increase the sample count for tighter intervals.

Best Practices

  • Power analysis upfront: Calculate required sample size (e.g., 1000+ per arm for 10% MDE at 80% power).
  • Sequential testing: Monitor with alpha-spending (avoid fixed peeking).
  • Segmentation: Add user props (geo, device) to bucketing.
  • Multi-arm: Extend VARIANTS to N; control the family-wise error rate with Bonferroni (or the FDR with Benjamini–Hochberg).
  • Production: Migrate to Postgres + Redis; integrate Amplitude/PostHog.
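The power-analysis bullet can be made concrete with the standard two-proportion approximation. A sketch (the z-values assume a two-sided alpha of 0.05 and 80% power; the function name is illustrative):

```typescript
// Approximate per-arm sample size to detect an absolute lift `delta`
// over a baseline rate `p` (two-sided alpha = 0.05, power = 0.8).
function sampleSizePerArm(p: number, delta: number, zAlpha = 1.96, zBeta = 0.84): number {
  const variance = 2 * p * (1 - p); // pooled-variance approximation
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / delta ** 2);
}

// Detecting a 1-point absolute lift on a 5% baseline needs roughly 7,500 users per arm.
console.log(sampleSizePerArm(0.05, 0.01));
```

Note how the requirement scales with 1/delta²: halving the minimum detectable effect quadruples the traffic you need, which is why underpowered tests are the norm rather than the exception.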

Common Mistakes to Avoid

  • Selection bias: Randomize assignment properly; don't compare pre/post periods on existing traffic.
  • Sample pollution: Novelty effects inflate early results; run tests long enough and consider holdout groups.
  • P-hacking: Fix decision thresholds before the test starts; ignore post-hoc p-value fishing.
  • JS-only tracking: A significant share of users (often cited at 20-30%) block client-side trackers → server-side tracking is essential.

Further Reading

Deepen your skills with our expert product optimization courses. Resources: 'Trustworthy Online Controlled Experiments' (Kohavi), GrowthBook OSS, Vercel Analytics SDK. Deploy on Vercel for edge-side A/B.