Introduction
The Mistral API, developed by Mistral AI, gives you access to powerful language models such as Mistral Large and Mistral Nemo directly from your applications. It's a strong choice for building chatbots, virtual assistants, and AI automation tools without heavy infrastructure.
Why use it? It offers good value for money, low latency, and an OpenAI-compatible API surface that eases migration. This beginner tutorial guides you step by step: getting your API key, making your first calls, streaming responses, and function calling. By the end, you'll have a working chatbot ready to scale.
Real value: save hours of debugging by copying the complete code below. Think of Mistral as a turbo engine for your apps: it powers AI features without you having to run model servers.
Prerequisites
- Node.js 20+ installed
- Free account on console.mistral.ai with an API key generated
- Code editor (VS Code recommended)
- Basic JavaScript/TypeScript knowledge
Initialize the project and install the SDK
```bash
mkdir mistral-chatbot
cd mistral-chatbot
npm init -y
npm install @mistralai/mistralai dotenv
npm install -D typescript @types/node ts-node
npx tsc --init
```

These commands create a Node.js project, install the official Mistral SDK and dotenv for managing secrets, add the TypeScript tooling as dev dependencies, and generate a tsconfig.json. Never hardcode your API key: always use environment variables for security.
Set up environment variables
Create a .env file in the root directory with your API key from console.mistral.ai. Add .env to .gitignore to avoid committing secrets.
.env configuration file
```
MISTRAL_API_KEY=your-api-key-here
```

Store only your API key here, replacing `your-api-key-here` with the real key from console.mistral.ai. Since this file isn't versioned, your credentials stay out of the repository.
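A missing key otherwise surfaces later as a confusing authentication error, so it's worth failing fast with a small guard. A minimal sketch (the `requireEnv` helper is ours, not part of the SDK):

```typescript
// Returns the value of an environment variable, or throws a clear
// error instead of letting the SDK fail later with a 401.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('MISTRAL_API_KEY');
```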
First chat completion call
```typescript
import 'dotenv/config';
import { Mistral } from '@mistralai/mistralai';

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });

async function simpleChat() {
  const chatResponse = await client.chat.complete({
    model: 'mistral-tiny',
    messages: [
      {
        role: 'user',
        content: 'Explain AI to me in 3 simple sentences.',
      },
    ],
  });
  console.log(chatResponse.choices[0].message.content);
}

simpleChat().catch(console.error);
```

This code initializes the Mistral client and sends a user message to the 'mistral-tiny' model (small and fast). It prints the response. Pitfall: always await the promise; a forgotten await fails silently.
Run the first script
Save the code as chat-simple.ts, then run it with npx ts-node chat-simple.ts. You should see a short AI-generated explanation. Try other prompts to verify your setup.
Chat with multiple messages and system prompt
```typescript
import 'dotenv/config';
import { Mistral } from '@mistralai/mistralai';

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });

async function conversation() {
  const chatResponse = await client.chat.complete({
    model: 'open-mistral-nemo',
    messages: [
      {
        role: 'system',
        content: 'You are an expert programming assistant.',
      },
      {
        role: 'user',
        content: 'How do I create a for loop in JS?',
      },
      {
        role: 'assistant',
        content: 'A basic for loop is: for (let i = 0; i < 5; i++) { console.log(i); }',
      },
      {
        role: 'user',
        content: 'Explain the role of i++.',
      },
    ],
  });
  console.log(chatResponse.choices[0].message.content);
}

conversation().catch(console.error);
```

This adds a system prompt for context and a multi-turn history; the 'open-mistral-nemo' model is more capable. Like a discussion thread, the messages array carries the full context, so responses stay coherent.
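The multi-turn pattern above can be wrapped in a small history manager so each request carries the full conversation. A sketch using our own minimal `ChatMessage` type (the SDK ships its own, richer message types):

```typescript
// Minimal message shape matching what the chat endpoint expects.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Keeps the running conversation so each request has full context.
class Conversation {
  private messages: ChatMessage[] = [];

  constructor(systemPrompt: string) {
    this.messages.push({ role: 'system', content: systemPrompt });
  }

  // Record a user turn and return the full history to send to the API.
  addUser(content: string): ChatMessage[] {
    this.messages.push({ role: 'user', content });
    return this.messages;
  }

  // Record the model's reply so the next turn stays coherent.
  addAssistant(content: string): void {
    this.messages.push({ role: 'assistant', content });
  }

  get history(): ChatMessage[] {
    return [...this.messages];
  }
}
```

Pass `conv.addUser(input)` as the `messages` argument, then store the model's reply with `conv.addAssistant(...)` before the next turn.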
Implement streaming for real-time responses
```typescript
import 'dotenv/config';
import { Mistral } from '@mistralai/mistralai';

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });

async function streamingChat() {
  const stream = await client.chat.stream({
    model: 'mistral-small',
    messages: [{ role: 'user', content: 'Tell me a joke about developers.' }],
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.data.choices[0]?.delta?.content || '');
  }
  console.log('\n');
}

streamingChat().catch(console.error);
```

Use client.chat.stream to receive the response token by token, perfect for a smooth ChatGPT-style UX. Iterate with for await. Pitfall: guard against empty deltas to avoid messy output.
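If you also need the complete text once streaming finishes (to store in history or a cache), accumulate the deltas as they arrive. A sketch against a simplified chunk shape of our own, not the SDK's exact type:

```typescript
// Simplified shape of the useful part of a streamed chunk.
type StreamDelta = { choices: { delta: { content?: string } }[] };

// Collects delta fragments into the final message while letting the
// caller render each fragment immediately.
function collectDeltas(
  chunks: StreamDelta[],
  onToken: (token: string) => void,
): string {
  let full = '';
  for (const chunk of chunks) {
    const token = chunk.choices[0]?.delta?.content ?? '';
    if (token) {
      onToken(token); // render in real time
      full += token;  // keep for later (logging, history, caching)
    }
  }
  return full;
}
```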
Handle tools (function calling)
A more advanced feature for beginners: like OpenAI, Mistral supports tool (function) calling.
Call with tools for data extraction
```typescript
import 'dotenv/config';
import { Mistral } from '@mistralai/mistralai';

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });

const tools = [{
  type: 'function' as const,
  function: {
    name: 'get_weather',
    description: 'Gets the weather for a city',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string' },
      },
      required: ['city'],
    },
  },
}];

async function toolsChat() {
  const chatResponse = await client.chat.complete({
    model: 'mistral-large-latest',
    messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
    tools,
    toolChoice: 'auto',
  });
  console.log('Tool calls:', chatResponse.choices[0].message.toolCalls);
}

toolsChat().catch(console.error);
```

This defines a 'get_weather' tool; the model decides when to call it and returns the chosen tool with JSON arguments instead of plain text. In production, implement the real function and feed its result back. Great for AI agents. Note: 'mistral-large-latest' requires paid credits.
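The model only requests the tool; your code has to execute it. A sketch of a dispatcher, where the `get_weather` implementation returns hardcoded data purely for illustration:

```typescript
// Shape of a tool call as returned in the assistant message
// (arguments arrive as a JSON string to parse).
type ToolCall = { function: { name: string; arguments: string } };

// Map tool names to local implementations. The weather data here is
// hardcoded for illustration; a real app would call a weather API.
const toolImplementations: Record<string, (args: any) => string> = {
  get_weather: (args: { city: string }) => `Sunny, 21°C in ${args.city}`,
};

// Execute one tool call requested by the model.
function runToolCall(call: ToolCall): string {
  const impl = toolImplementations[call.function.name];
  if (!impl) throw new Error(`Unknown tool: ${call.function.name}`);
  return impl(JSON.parse(call.function.arguments));
}
```

In a real loop you would append the result as a tool message and call the model again so it can phrase the final answer.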
Express server for web chatbot
```typescript
import 'dotenv/config';
import express from 'express';
import cors from 'cors';
import { Mistral } from '@mistralai/mistralai';

const app = express();
app.use(cors());
app.use(express.json());

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY! });

app.post('/chat', async (req, res) => {
  try {
    const { message } = req.body;
    const response = await client.chat.complete({
      model: 'mistral-tiny',
      messages: [{ role: 'user', content: message }],
    });
    res.json({ reply: response.choices[0].message.content });
  } catch (error) {
    res.status(500).json({ error: 'AI error' });
  }
});

app.listen(3000, () => console.log('Server listening on port 3000'));
```

Install the dependencies first with npm i express cors (and npm i -D @types/express @types/cors for TypeScript). This creates a POST /chat endpoint for web clients, with try/catch for robustness. Test it with curl or Postman.
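Before forwarding `req.body.message` to the API, it's worth validating it. A dependency-free sketch (a schema library like Zod gives you a fuller solution); the function name and the 4000-character limit are our own choices:

```typescript
// Validates the chat request body without extra dependencies.
// Returns an error string, or null when the input is acceptable.
function validateChatBody(body: unknown): string | null {
  if (typeof body !== 'object' || body === null) return 'Body must be a JSON object';
  const message = (body as Record<string, unknown>).message;
  if (typeof message !== 'string') return 'Field "message" must be a string';
  if (message.trim().length === 0) return 'Field "message" must not be empty';
  if (message.length > 4000) return 'Field "message" is too long';
  return null;
}
```

In the route: `const err = validateChatBody(req.body); if (err) return res.status(400).json({ error: err });`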
Best practices
- Rate limiting: respect Mistral quotas (e.g., 100 req/min) with a library like p-limit.
- Security: validate inputs with Zod to prevent injections.
- Caching: store repeated responses in Redis.
- Monitoring: log tokens used (the `usage` field in the response) to optimize costs.
- Models: use tiny models for testing, large for production.
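Logging usage can be taken a step further with a rough cost estimate. A sketch where the per-million-token prices are placeholder arguments, not Mistral's real pricing (check the official pricing page):

```typescript
// Token counts as reported in the response's usage field.
type Usage = { promptTokens: number; completionTokens: number };

// Estimate request cost in USD from token usage. The prices are
// passed in per million tokens; use real values from the pricing page.
function estimateCostUSD(
  usage: Usage,
  inputPricePerM: number,
  outputPricePerM: number,
): number {
  return (usage.promptTokens / 1_000_000) * inputPricePerM
       + (usage.completionTokens / 1_000_000) * outputPricePerM;
}
```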
Common errors to avoid
- Forgetting `await`: causes `undefined` responses.
- Invalid API key: verify the key you copied from console.mistral.ai and that your account has credits.
- Not handling streaming properly: blocks real-time UI.
- Ignoring the max tokens setting: surprise bills (set it, e.g., to 1024).
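For transient failures such as 429 rate-limit responses, a generic retry wrapper helps. A sketch with exponential backoff (the attempt count and delays are arbitrary defaults, not Mistral recommendations):

```typescript
// Retries an async operation with exponential backoff, useful for
// transient failures such as 429 rate-limit responses.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 500ms, 1000ms, 2000ms, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: const reply = await withRetry(() => client.chat.complete({...}));
```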
Next steps
- Official docs: Mistral API Docs
- Advanced SDK: Fine-tuning and embeddings
- Integrate with Next.js or LangChain