
How to Deploy a Scalable API on App Engine in 2026


Introduction

Google App Engine is a serverless PaaS that handles infrastructure for you, perfect for scalable APIs without ops overhead. In 2026, the standard environment remains the best fit for lightweight, fast-scaling workloads (custom runtimes and containers belong to the flexible environment), excelling in auto-scaling, zero-downtime deployments, and native GCP integrations.

This tutorial walks you through deploying a full Node.js API step by step: secure routes, Firestore integration, task queues, and monitoring. Why it matters: App Engine scales from zero to very high traffic without manual configuration, keeps costs down when you tune cold starts, and pairs well with Cloud Run for hybrid setups.

We'll go from an empty project to a production setup: advanced app.yaml, versioning, and traffic splitting. By the end, your API will be resilient, observable, and optimized for Learni Dev. Estimated time: 30 minutes to a live deploy.

Prerequisites

  • Active Google Cloud Platform account (billing required, App Engine standard quota)
  • gcloud CLI installed and authenticated (gcloud auth login)
  • Node.js 20+ and npm/yarn
  • Existing GCP project (gcloud config set project YOUR_PROJECT_ID)
  • Advanced knowledge of Node.js/Express and YAML

Initialize the Node.js Project

terminal
mkdir api-app-engine
cd api-app-engine
npm init -y
npm install express @google-cloud/firestore cors helmet morgan
npm install --save-dev typescript @types/node @types/express @types/cors @types/morgan ts-node nodemon
npx tsc --init

These commands set up a Node.js project with Express for the API, Firestore as a serverless database, and security middleware (CORS, helmet). The dev dependencies enable TypeScript for type-safe code; note that helmet ships its own type definitions, so no @types/helmet package is needed. Pitfall: don't forget to run gcloud app create once per project before the first deploy.

Configure package.json for Production

Update package.json for optimized prod startups. Add scripts for local dev (npm run dev) and prod (npm start). Specify engines.node to match App Engine (Node 20+). This avoids slow cold starts.

Complete package.json

package.json
{
  "name": "api-app-engine",
  "version": "1.0.0",
  "description": "Scalable API on App Engine",
  "main": "dist/server.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/server.js",
    "dev": "nodemon --exec ts-node server.ts"
  },
  "engines": {
    "node": "20.x"
  },
  "dependencies": {
    "express": "^4.19.2",
    "@google-cloud/firestore": "^7.7.0",
    "cors": "^2.8.5",
    "helmet": "^7.1.0",
    "morgan": "^1.10.0"
  },
  "devDependencies": {
    "@types/node": "^22.5.5",
    "@types/express": "^4.17.21",
    "@types/cors": "^2.8.17",
    "@types/morgan": "^1.9.9",
    "typescript": "^5.6.2",
    "ts-node": "^10.9.2",
    "nodemon": "^3.1.7"
  }
}

This package.json is prod-ready: builds TS to JS, locks Node version via engines for App Engine. Scripts separate dev/prod. Pitfall: Without engines, App Engine might downgrade Node and break deps.
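The build script above compiles server.ts into dist/, but npx tsc --init alone won't produce that layout. A minimal tsconfig.json consistent with these scripts might look like the following sketch (assumption: server.ts sits at the project root):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "outDir": "dist",
    "rootDir": ".",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["server.ts"]
}
```

With outDir set to dist, npm run build emits dist/server.js, which is exactly what the main field and start script expect.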

Express Server with Firestore

server.ts
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import morgan from 'morgan';
import { Firestore, FieldValue } from '@google-cloud/firestore';

const app = express();
const PORT = process.env.PORT || 8080;
const firestore = new Firestore();

app.use(helmet());
app.use(cors({ origin: '*' }));
app.use(morgan('combined'));
app.use(express.json());

app.get('/health', (req, res) => res.status(200).json({ status: 'OK' }));

app.post('/items', async (req, res) => {
  try {
    const { name } = req.body;
    if (!name) return res.status(400).json({ error: 'Name required' });
    const docRef = firestore.collection('items').doc();
    await docRef.set({ name, created: FieldValue.serverTimestamp() });
    res.status(201).json({ id: docRef.id });
  } catch (error) {
    console.error('POST /items failed:', error);
    res.status(500).json({ error: 'Internal error' });
  }
});

app.get('/items/:id', async (req, res) => {
  try {
    const doc = await firestore.collection('items').doc(req.params.id).get();
    if (!doc.exists) return res.status(404).json({ error: 'Not found' });
    res.json(doc.data());
  } catch (error) {
    console.error('GET /items/:id failed:', error);
    res.status(500).json({ error: 'Internal error' });
  }
});

app.listen(PORT, () => {
  console.log(`Server on port ${PORT}`);
});

Full TypeScript server with Firestore CRUD routes, helmet/CORS security, and morgan logging. Uses process.env.PORT required by App Engine. Pitfall: Without async try/catch, errors crash instances; always log in prod.
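The POST /items handler above only checks that name exists. A minimal, framework-agnostic validation sketch you could call before writing to Firestore (the field rules and the 200-character limit are illustrative assumptions, not part of the original API):

```typescript
// Validate the request body for POST /items before touching Firestore.
// Returns an error message, or null when the payload is acceptable.
function validateItemBody(body: unknown): string | null {
  if (typeof body !== 'object' || body === null) {
    return 'Body must be a JSON object';
  }
  const name = (body as Record<string, unknown>).name;
  if (typeof name !== 'string') return 'Name must be a string';
  const trimmed = name.trim();
  if (trimmed.length === 0) return 'Name required';
  if (trimmed.length > 200) return 'Name too long (max 200 chars)'; // illustrative limit
  return null;
}
```

In the handler: `const err = validateItemBody(req.body); if (err) return res.status(400).json({ error: err });` Rejecting bad payloads before the Firestore write saves a round trip and keeps garbage out of the collection.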

Build and Test Locally

Run npm run build then npm start to test locally. Check /health, POST /items, and GET /items/:id with curl or Postman. Analogy: like pre-flight checks on a plane; simulate a cold start by killing the process and starting it again.

app.yaml for Standard Environment

app.yaml
runtime: nodejs20

env_variables:
  NODE_ENV: "production"

automatic_scaling:
  min_instances: 1
  max_instances: 100
  target_cpu_utilization: 0.65
  max_concurrent_requests: 100

handlers:
- url: /.*
  script: auto
  secure: always
  redirect_http_response_code: 301

app.yaml sets the Node 20 runtime, auto-scaling (1-100 instances, 65% CPU target), and enforces HTTPS only. Note: liveness_check and readiness_check blocks apply only to the flexible environment; in the standard environment, instance health is managed automatically, and the /health route remains useful for your own monitoring. Pitfall: without min_instances: 1, cold starts add over a second of latency; tune it for your traffic.
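To soften cold starts further, the standard environment supports warmup requests. A sketch of the extra app.yaml lines (assumption: you also add a matching route, e.g. app.get('/_ah/warmup', (req, res) => res.sendStatus(200)), in server.ts):

```yaml
# Ask App Engine to send a GET /_ah/warmup request to each new
# instance before routing live traffic to it (standard environment).
inbound_services:
- warmup
```

The warmup handler is a good place to open Firestore connections or prime caches so the first real request doesn't pay that cost.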

First Deployment

Create the app if needed: gcloud app create --region=us-central. Then gcloud app deploy. Access via gcloud app browse. Check logs: gcloud app logs tail -s default.

Automated Deployment Script

deploy.sh
#!/bin/bash
PROJECT="your-project-id"
VERSION="v1-$(date +%Y%m%d-%H%M%S)"

gcloud config set project "$PROJECT"
npm run build
git add .
git commit -m "Deploy $VERSION"

# Deploy the new version without routing any traffic to it yet
gcloud app deploy --version="$VERSION" --project="$PROJECT" --no-promote --quiet

gcloud app versions list --service=default

# Route 100% of traffic to the new version
gcloud app services set-traffic default --splits="$VERSION=1" --quiet

Bash script that builds, deploys a timestamped version with --no-promote (so existing traffic is untouched), then routes 100% of traffic to it. Integrate with Git for CI/CD. Pitfall: a plain gcloud app deploy promotes the new version immediately; deploying with --no-promote and shifting traffic explicitly keeps an instant rollback path.

Multi-Version Traffic Splitting

terminal
gcloud app versions list --service=default

# 80/20 split between two versions
gcloud app services set-traffic default \
  --splits=v1=0.8,v2=0.2

# Rollback: send all traffic back to v1
gcloud app services set-traffic default --splits=v1=1

An 80/20 split enables A/B testing or canary releases without downtime; when moving all traffic to a single version, add --migrate to shift it gradually instead of instantly. Pitfall: splitting across many versions fragments your metrics; stick to 2-3 versions in prod.

Integrate Queues for Async Tasks

For heavy workloads (e.g., sending emails), offload work to a queue. Queue and routing configuration live in dedicated files, not in app.yaml:

dispatch.yaml
dispatch:
- url: "*/tasks/*"
  service: default

queue.yaml
queue:
- name: default
  rate: 100/m
  bucket_size: 10

Deploy them with gcloud app deploy dispatch.yaml queue.yaml. Note: the legacy taskqueue.add() API is Python-only; in the Node.js runtime, enqueue tasks with Cloud Tasks.
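In the Node.js runtime, tasks are enqueued with the @google-cloud/tasks client (npm install @google-cloud/tasks). A sketch of building the task payload that targets an App Engine handler; the queue name, region, and /tasks/process route are illustrative assumptions, and the final createTask call requires GCP credentials:

```typescript
// Build a Cloud Tasks payload targeting an App Engine handler.
// Cloud Tasks delivers it as an HTTP POST to `relativeUri` on this
// service; the body must be base64-encoded.
function buildAppEngineTask(relativeUri: string, data: object) {
  return {
    appEngineHttpRequest: {
      httpMethod: 'POST' as const,
      relativeUri,
      headers: { 'Content-Type': 'application/json' },
      body: Buffer.from(JSON.stringify(data)).toString('base64'),
    },
  };
}

// Enqueueing (needs credentials and the @google-cloud/tasks package):
//   const client = new CloudTasksClient();
//   const parent = client.queuePath('your-project-id', 'us-central1', 'default');
//   await client.createTask({ parent, task: buildAppEngineTask('/tasks/process', { email: 'hi' }) });
```

Your /tasks/process route then decodes req.body and does the heavy work; failed tasks are retried automatically according to the queue's retry settings.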

.gcloudignore to Optimize Deployment

.gcloudignore
node_modules
npm-debug.log
.DS_Store
*.log
.nyc_output
coverage
.env
.cache
*.tsbuildinfo

Ignores node_modules (dependencies are reinstalled during deployment), logs, coverage, and caches, which dramatically cuts upload size and speeds up deploys. Note: dist is deliberately NOT ignored, because npm start runs node dist/server.js; build locally before deploying, or add a "gcp-build" script to package.json to build during deployment. Pitfall: forgetting this file leads to slow deploys and timeouts.

Best Practices

  • Scaling fine-tuning: Monitor CPU/RAM in Cloud Monitoring, set target_cpu_utilization to 60-70% for cost/perf balance.
  • Secrets management: Store API keys in Secret Manager and read them at runtime with the client library (or gcloud secrets versions access in scripts) instead of committing them to app.yaml.
  • Observability: Enable Cloud Trace/Profiler; structure logs as JSON for BigQuery export.
  • Switch to the flexible environment for custom system dependencies: set env: flex and supply a Dockerfile (push images to Artifact Registry).
  • Cost control: Keep max_instances low, use F1 instances for dev.
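The observability bullet above can be sketched as a tiny helper: Cloud Logging parses JSON lines written to stdout into structured entries when they carry a severity field (the extra fields here are illustrative):

```typescript
// Emit one structured log line per call; App Engine forwards stdout
// to Cloud Logging, which parses JSON lines into structured entries.
type Severity = 'DEBUG' | 'INFO' | 'WARNING' | 'ERROR';

function logEntry(
  severity: Severity,
  message: string,
  fields: Record<string, unknown> = {}
): string {
  const line = JSON.stringify({
    severity,
    message,
    ...fields,
    time: new Date().toISOString(),
  });
  console.log(line);
  return line;
}
```

For example, logEntry('INFO', 'item created', { itemId: docRef.id }) in the POST /items handler makes item IDs filterable in Cloud Logging and exportable to BigQuery.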

Common Errors to Avoid

  • Fixed port: Always listen on process.env.PORT; App Engine injects the port (8080 by default) through that variable.
  • Ignoring cold starts: Add min_instances:1 for prod; test with gcloud app instances delete.
  • Promoting blindly: A plain deploy shifts all traffic to the new version at once; deploy with --no-promote and move traffic explicitly (gcloud app services set-traffic) so rollback stays instant.
  • Firestore without indexes: Composite queries fail; create via console or firestore.indexes.json.

Next Steps

Check out our Learni Google Cloud training courses to master serverless and DevOps.