
How to Calculate Value at Risk (VaR) in 2026


Introduction

Value at Risk (VaR) is a key measure in financial risk management. It estimates the maximum potential loss of a portfolio over a given period (e.g., 1 day) at a specific confidence level (e.g., 95% or 99%). For example, a 95% one-day VaR of $10,000 means there's a 5% chance of losing more than $10,000 in a day.

Why is it crucial in 2026? With volatility in crypto, stocks, and bonds amplified by AI-driven trading and geopolitics, regulators demand precise VaR calculations under frameworks like Basel III. This beginner tutorial guides you from A to Z in Python: parametric (normal), historical, and Monte Carlo methods. You'll get complete, runnable code, visualizations, and simple analogies (like a probabilistic 'safety net'). By the end, you'll be able to calculate VaR for any asset. Ideal for junior analysts and personal investors.

Prerequisites

  • Python 3.10+ installed
  • Libraries: numpy, pandas, scipy, matplotlib (install via pip)
  • Basic knowledge of probabilities (mean, standard deviation) and finance (returns)
  • An editor like VS Code or Jupyter Notebook

Install dependencies and generate data

installation.sh
pip install numpy pandas scipy matplotlib

Install the essential libraries. numpy for vectorized calculations, pandas for tabular data, scipy for advanced stats, and matplotlib for charts.

Prepare return data

Before any calculations, generate or load daily returns (returns = (price_t - price_{t-1}) / price_{t-1}). Here, we simulate 1000 days of returns for a stock portfolio with a 0.1% mean and 2% volatility, realistic for an index like the S&P 500. Analogy: like observing 1000 draws from a probabilistic urn.
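In practice you would compute these returns from a price series rather than simulate them. A minimal sketch with a hypothetical five-day price series (the prices below are made up for illustration):

```python
import pandas as pd

# Hypothetical closing prices over five days
prices = pd.Series([100.0, 101.0, 99.5, 100.5, 102.0])

# Simple daily returns: (price_t - price_{t-1}) / price_{t-1}
simple_returns = prices.pct_change().dropna()
print(simple_returns.round(4).tolist())  # → [0.01, -0.0149, 0.0101, 0.0149]
```

pct_change does exactly the formula above; dropna removes the first row, which has no previous price.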

Generate simulated historical data

generate_data.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Parameters: 1000 days, mu=0.001 (mean return), sigma=0.02 (volatility)
np.random.seed(42)
returns = np.random.normal(0.001, 0.02, 1000)

# Initial portfolio of €100,000
initial_value = 100000
values = initial_value * np.cumprod(1 + returns)

df = pd.DataFrame({'returns': returns, 'portfolio_value': values})
print(df.head())

# Save for reuse
df.to_csv('portfolio_returns.csv', index=False)
print('Data generated and saved.')

This script generates Gaussian returns and computes the cumulative portfolio value. np.cumprod accumulates compounded returns. Use seed(42) for reproducibility. Run it once to create portfolio_returns.csv.

Parametric method: VaR under normal distribution

The parametric method assumes normally distributed returns. Formula: VaR = Initial Value × (z × σ - μ), where z is the normal quantile (1.645 for 95%, 2.326 for 99%); the result is a positive amount, the expected maximum loss. Advantage: fast. Limitation: it ignores the 'fat tails' of real markets (crashes). Analogy: like predicting rain from average weather, ignoring extreme storms.

Calculate parametric VaR

var_parametrique.py
import numpy as np
import pandas as pd
from scipy.stats import norm

# Load the data
df = pd.read_csv('portfolio_returns.csv')
returns = df['returns'].values
initial_value = 100000

# Statistics
mu = np.mean(returns)
sigma = np.std(returns)
print(f"Mean return mu: {mu:.4f}, Volatility sigma: {sigma:.4f}")

# Confidence levels
confidence_levels = [0.95, 0.99]
for conf in confidence_levels:
    z = norm.ppf(conf)  # Normal quantile
    var = initial_value * (z * sigma - mu)  # Positive amount = expected maximum loss
    print(f"VaR {int(conf*100)}%: {var:.2f} € (maximum expected loss)")

Computes empirical mu and sigma, then applies the VaR formula; norm.ppf gives the z-score. Typical results with this simulated data: 95% VaR ≈ €3,200, 99% ≈ €4,500. Fast even for large datasets.

Historical method: Based on past data

Historical VaR sorts past losses and picks the tail quantile. Non-parametric: it captures real extremes. Example: for 95% with 1000 observations, sort the losses from worst to best and take the one at position 50, the boundary of the 5% worst days. Advantage: simple and robust to non-normality. Limitation: it needs a lot of historical data.

Calculate historical VaR

var_historique.py
import numpy as np
import pandas as pd

# Load the data
df = pd.read_csv('portfolio_returns.csv')
returns = df['returns'].values
initial_value = 100000

# Losses in € (positive = money lost)
losses = -returns * initial_value
losses_sorted = np.sort(losses)[::-1]  # descending: worst losses first

confidence_levels = [0.95, 0.99]
for conf in confidence_levels:
    index = int((1 - conf) * len(losses))
    var = losses_sorted[index]
    print(f"Historical VaR {int(conf*100)}%: {var:.2f} €")

# Quick inspection
print("Top 5 worst losses:", losses_sorted[:5])

Converts returns to losses, sorts them from worst to best, and selects the tail quantile. For 1000 observations at 95%, that is index 50. Captures real shocks and is often more conservative than the parametric method.
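As a cross-check, the same tail quantile can be taken in one line with np.percentile: the 95% VaR is simply the 95th percentile of the loss distribution. A minimal sketch on synthetic losses (the numbers below are made up for illustration):

```python
import numpy as np

# Synthetic losses in €, hypothetical: mean 0, std 2000
rng = np.random.default_rng(0)
losses = rng.normal(0, 2000, 1000)

# Sort losses from worst to best and take the tail index
losses_desc = np.sort(losses)[::-1]
var_sorted = losses_desc[int((1 - 0.95) * len(losses))]

# One-liner equivalent: the 95th percentile of losses
var_pct = np.percentile(losses, 95)

print(f"sorted-index: {var_sorted:.1f} €, percentile: {var_pct:.1f} €")
```

The two estimates differ only by np.percentile's interpolation between adjacent order statistics.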

Monte Carlo method: Prospective simulations

Monte Carlo simulates thousands of future scenarios based on mu/sigma. Ideal for complex portfolios. Steps: generate paths, compute losses, take quantile. Analogy: rolling 10,000 weighted dice to predict future wealth.

Calculate Monte Carlo VaR

var_monte_carlo.py
import numpy as np
import pandas as pd
from scipy.stats import norm

# Empirical statistics
df = pd.read_csv('portfolio_returns.csv')
returns = df['returns'].values
mu, sigma = np.mean(returns), np.std(returns)
initial_value = 100000

# Simulations: 10,000 paths, 1-day horizon
n_simulations = 10000
sim_returns = np.random.normal(mu, sigma, n_simulations)
losses_mc = -sim_returns * initial_value
losses_mc_sorted = np.sort(losses_mc)[::-1]  # worst losses first

confidence_levels = [0.95, 0.99]
for conf in confidence_levels:
    index = int((1 - conf) * n_simulations)
    var_mc = losses_mc_sorted[index]
    print(f"Monte Carlo VaR {int(conf*100)}%: {var_mc:.2f} €")

Simulates 10,000 future returns and sorts the resulting losses. Similar to the historical method but forward-looking. Increase n_simulations for more precision (law of large numbers).
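The effect of the law of large numbers can be seen directly. A minimal sketch, reusing the tutorial's parameters (mu=0.001, sigma=0.02) as fixed assumptions so it runs standalone:

```python
import numpy as np

# Parameters assumed from the tutorial's simulated data
mu, sigma, initial_value = 0.001, 0.02, 100_000
rng = np.random.default_rng(42)

# 95% VaR estimate for increasing numbers of simulations
for n in [1_000, 10_000, 100_000]:
    sim_returns = rng.normal(mu, sigma, n)
    losses = -sim_returns * initial_value
    var_95 = np.percentile(losses, 95)
    print(f"n={n:>7}: VaR 95% ≈ {var_95:.0f} €")
```

As n grows, the estimates settle near the analytic value (1.645 × 0.02 - 0.001) × 100,000 ≈ €3,190.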

Comparative visualization of methods

Visualizing helps compare the methods: a loss histogram with VaR threshold lines makes it easy to check assumptions (e.g., are the tails really normal?).

VaR and loss distribution chart

visualisation_var.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Data
df = pd.read_csv('portfolio_returns.csv')
returns = df['returns'].values
initial_value = 100000
losses = -returns * initial_value

plt.figure(figsize=(10, 6))
plt.hist(losses, bins=50, alpha=0.7, density=True, label='Historical distribution')

# VaR thresholds: upper tail of the loss distribution
var_95 = np.percentile(losses, 95)
var_99 = np.percentile(losses, 99)
plt.axvline(var_95, color='red', linestyle='--', label=f'VaR 95%: {var_95:.0f}€')
plt.axvline(var_99, color='orange', linestyle='--', label=f'VaR 99%: {var_99:.0f}€')

plt.xlabel('Losses (€)')
plt.ylabel('Density')
plt.title('Loss distribution and VaR thresholds')
plt.legend()
plt.grid(True, alpha=0.3)
plt.show()
print(f'Visual VaR 95%: {var_95:.2f}€')

Histogram plus np.percentile for the VaR (equivalent to the historical method: the 95% VaR is the 95th percentile of losses). axvline draws the thresholds. Save with plt.savefig('var_plot.png') for reports.

Best practices

  • Use real data: replace the simulations with yfinance for stocks (pip install yfinance).
  • Backtest: compare predicted VaR against realized losses over a year.
  • Multi-asset: move to covariance matrices for portfolios.
  • Appropriate horizons: 1-day for trading, 10-day for funds.
  • Regulatory: align with the 99% / 10-day Basel standard.
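The multi-asset point can be sketched with the variance-covariance approach: portfolio volatility comes from the weights and the covariance matrix, and the parametric formula applies unchanged. The two-asset weights, means, and covariances below are hypothetical, for illustration only:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-asset portfolio
weights = np.array([0.6, 0.4])
mu = np.array([0.0008, 0.0012])          # daily mean returns
cov = np.array([[0.0004, 0.0001],        # daily return covariance matrix
                [0.0001, 0.0009]])
initial_value = 100_000

# Portfolio mean and volatility from the covariance matrix
port_mu = weights @ mu
port_sigma = np.sqrt(weights @ cov @ weights)

# Parametric VaR at 95%
z = norm.ppf(0.95)
var_95 = initial_value * (z * port_sigma - port_mu)
print(f"Portfolio VaR 95%: {var_95:.2f} €")
```

The off-diagonal covariance terms are what diversification acts on: lower correlation shrinks port_sigma and hence the VaR.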

Common errors to avoid

  • Assuming normality by default: markets have fat tails → VaR is underestimated (use historical/MC).
  • Forgetting compounding: use log returns for long horizons, not simple returns.
  • Too little data: fewer than ~500 observations → unstable estimates; bootstrap if needed.
  • Ignoring correlations: a covariance matrix is required for multi-asset portfolios.
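Two of these points can be illustrated briefly: log returns add up across days (which is why they suit long horizons), and under an i.i.d. assumption a 1-day VaR scales to h days by √h. The prices and the €3,190 figure below are illustrative assumptions:

```python
import numpy as np

# Hypothetical price path
prices = np.array([100.0, 101.0, 99.5, 100.5, 102.0])

# Log returns compound additively: their sum equals the full-period log return
log_returns = np.diff(np.log(prices))
total = np.log(prices[-1] / prices[0])
print(np.isclose(log_returns.sum(), total))  # → True

# Under i.i.d. returns, a 1-day VaR scales to h days by sqrt(h)
var_1d = 3190.0                  # example 1-day VaR in €
var_10d = var_1d * np.sqrt(10)
print(f"10-day VaR ≈ {var_10d:.0f} €")       # ≈ €10,088
```

The √h scaling is also the usual shortcut for the 99% / 10-day regulatory horizon, though it breaks down when returns are autocorrelated.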

Next steps

Dive deeper with Expected Shortfall (also called CVaR), or GARCH models for dynamic volatility. Resources: 'Quantitative Risk Management' by McNeil; NumPy docs, SciPy stats. Pro training: Learni Group Quant Finance Courses. Test on Bitcoin data via yfinance!