
How to Orchestrate a Multi-Cloud Infrastructure with Terraform in 2026


Introduction

Terraform, HashiCorp's Infrastructure as Code (IaC) tool, remains the gold standard in 2026 for orchestrating complex multi-cloud environments. Unlike manual scripts, Terraform takes a declarative approach: you describe the desired state in HCL files, and it manages changes via plan and apply.

Why this advanced tutorial? Beginners stick to a basic main.tf, but in production you'll need reusable modules, remote state for team collaboration, workspaces for dev/staging/prod, and multiple providers (AWS + GCP here). We cover these patterns with working code for a VPC/EC2 (AWS) + VPC/VM (GCP) setup, including data sources, for_each, and an S3 remote backend.

By the end, you'll have a guide worth bookmarking for scalable deployments: zero downtime, traceable audits, and cloud-native compliance. Estimated time: 45 minutes for a full setup.

Prerequisites

  • Terraform CLI v1.9+ installed (check with terraform version)
  • Active AWS and GCP accounts with an IAM user/role and a GCP service account (credentials available)
  • AWS CLI configured + an S3 bucket and DynamoDB table for remote state (bootstrap sketch after this list)
  • gcloud CLI authenticated (gcloud auth login)
  • VS Code with the HashiCorp Terraform extension
  • Intermediate IaC knowledge (providers, resources)
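
If you don't already have the state bucket and lock table, here is a minimal one-time bootstrap sketch, applied separately with local state; the bucket and table names are placeholders and must match the backend block used later:

bootstrap/main.tf
provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "tf_state" {
  bucket = "your-terraform-state-bucket" # must be globally unique
}

# Versioning lets you recover a previous state after a bad write.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# The S3 backend expects a lock table with a LockID string hash key.
resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}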

Project Initialization and Providers

main.tf
terraform {
  required_version = ">= 1.9.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
  backend "s3" {
    bucket         = "your-terraform-state-bucket"
    key            = "multi-cloud/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = var.aws_region
}

provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
}

This file pins Terraform and the AWS/GCP provider versions and configures an S3 backend for remote state, with a DynamoDB table for locking. Replace your-terraform-state-bucket with your own bucket. Pitfall: forgetting encrypt = true leaves state objects unencrypted at rest. Run terraform init (with -migrate-state if a local state already exists) to move from local to remote. Note: if you intend to override key per environment from the CLI, as the deploy script at the end does, omit it here and rely on partial backend configuration; and since Terraform 1.10, use_lockfile = true offers S3-native locking as an alternative to DynamoDB.

Defining Input Variables

Before the resources, let's define reusable variables for the different environments. This lets you pass dev or prod via the CLI or TF_VAR_* environment variables instead of hardcoding values; a per-environment .tfvars example follows the definitions below.

Variables and Outputs

variables.tf
variable "aws_region" {
  description = "Région AWS"
  type        = string
  default     = "eu-west-1"
}

variable "gcp_project_id" {
  description = "ID projet GCP"
  type        = string
}

variable "gcp_region" {
  description = "Région GCP"
  type        = string
  default     = "europe-west1"
}

variable "environment" {
  description = "Environnement (dev/prod)"
  type        = string
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environnement doit être dev, staging ou prod."
  }
}

output "aws_vpc_id" {
  description = "ID VPC AWS créée"
  value       = module.aws_vpc.vpc_id
}

output "gcp_network_name" {
  description = "Nom réseau GCP"
  value       = module.gcp_vpc.network_name
}

Variables with validation reject bad inputs early (terraform plan -var='environment=foo' fails before any API call). Outputs expose IDs for chaining into other stacks and tooling. Advanced: keep defaults for dev, but override with -var or a .tfvars file in prod.
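
For example, a hypothetical per-environment dev.tfvars, selected with terraform plan -var-file=dev.tfvars (single values can also come from environment variables such as TF_VAR_environment):

dev.tfvars
# Hypothetical per-environment values; the filename is only a convention.
environment    = "dev"
aws_region     = "eu-west-1"
gcp_project_id = "your-gcp-project"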

Creating a Reusable AWS VPC Module

Now for modules: encapsulate the VPC logic in a subfolder so it can be reused across stacks. This AWS module creates a VPC plus public subnets, using for_each for multi-AZ high availability.

AWS VPC Module

modules/aws_vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name        = "vpc-${var.environment}
    Environment = var.environment
  }
}

resource "aws_subnet" "public" {
  for_each = toset(var.public_subnet_cidrs)

  vpc_id                  = aws_vpc.main.id
  cidr_block              = each.value
  availability_zone       = each.key
  map_public_ip_on_launch = true

  tags = {
    Name        = "public-${each.key}-${var.environment}
    Environment = var.environment
  }
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "igw-${var.environment}
  }
}

for_each iterates over the CIDR map (key = AZ suffix, value = CIDR); unlike count, removing one entry doesn't reindex and recreate the others. Note that the map keys expand to full zone names via var.aws_region. The module declares its own inputs in modules/aws_vpc/variables.tf, sketched below, and is called from the root with module "aws_vpc" { source = "./modules/aws_vpc" ... }. Pitfall: overlapping CIDRs cause errors at apply time.
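
A minimal modules/aws_vpc/variables.tf matching the references above; the aws_region input is how this setup turns AZ suffixes like "a" into real zone names:

modules/aws_vpc/variables.tf
variable "environment" {
  description = "Deployment environment (dev/staging/prod)"
  type        = string
}

variable "aws_region" {
  description = "Region used to expand AZ suffixes into full zone names"
  type        = string
}

variable "public_subnet_cidrs" {
  description = "Map of AZ suffix (a/b/c) => subnet CIDR"
  type        = map(string)
}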

AWS Module Call and EC2 Instance

aws.tf
module "aws_vpc" {
  source = "./modules/aws_vpc"

  environment           = var.environment
  public_subnet_cidrs = {
    "a" = "10.0.1.0/24"
    "b" = "10.0.2.0/24"
    "c" = "10.0.3.0/24"
  }
}

resource "aws_instance" "web" {
  for_each = module.aws_vpc.public_subnet_ids

  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
  subnet_id     = each.value
  vpc_security_group_ids = [aws_security_group.web.id]

  tags = {
    Name        = "web-${each.key}-${var.environment}
    Environment = var.environment
  }
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_security_group" "web" {
  vpc_id = module.aws_vpc.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "web-sg-${var.environment}
  }
}

The data source fetches the latest Amazon Linux 2 AMI dynamically, avoiding a hardcoded AMI ID that differs per region. for_each creates one instance per subnet exposed by the module's public_subnet_ids output, sketched below. The security group allows basic HTTP. Pitfall: a wrong or over-restrictive AMI filter makes the lookup fail with no matching image.
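
The module outputs consumed above, as modules/aws_vpc/outputs.tf:

modules/aws_vpc/outputs.tf
output "vpc_id" {
  description = "ID of the VPC created by this module"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "Map of AZ suffix => subnet ID, consumed by for_each in aws.tf"
  value       = { for k, v in aws_subnet.public : k => v.id }
}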

Equivalent GCP Module and Data Sources

The GCP side is symmetric: a VPC module plus VMs. Data sources fetch the available zones, keeping the module portable across regions.

GCP VPC Module and Call

gcp.tf
module "gcp_vpc" {
  source = "./modules/gcp_vpc"

  environment = var.environment
  project_id  = var.gcp_project_id
}

resource "google_compute_instance" "web" {
  for_each = data.google_compute_zones.available.names

  name         = "web-${each.key}-${var.environment}
  machine_type = "e2-micro"
  zone         = each.value

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = module.gcp_vpc.network_name
    access_config {}
  }

  tags = ["http-server", "${var.environment}"]
}

The google_compute_zones data source (defined in the next section) enables multi-zone HA; note the toset(), since for_each does not accept a plain list. The instances run Debian and carry network tags for the firewall. A sketch of modules/gcp_vpc/main.tf follows; its network_name output appears in the next section. Pitfall: without the http-server tag, the firewall rule never matches the instances.
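
A minimal sketch of the module; here the network is auto-mode (auto_create_subnetworks = true) so that referencing it by name in the instance's network_interface is enough. If you prefer an explicit custom-mode subnet (e.g. 10.0.0.0/16), you must also expose it as an output and set subnetwork on each instance:

modules/gcp_vpc/main.tf
variable "environment" {
  type = string
}

variable "project_id" {
  type = string
}

resource "google_compute_network" "main" {
  name    = "vpc-${var.environment}"
  project = var.project_id

  # Auto mode creates one subnet per region; instances can then join
  # the network by name alone. Set to false for custom subnets.
  auto_create_subnetworks = true
}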

GCP Data Sources and Firewall

data.tf
data "google_compute_zones" "available" {
  region = var.gcp_region
  status = "UP"
}

resource "google_compute_firewall" "web" {
  name    = "allow-http-${var.environment}
  network = module.gcp_vpc.network_name

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["http-server"]
}

The zones data source filters to UP zones for resilience, and the firewall is tag-based. Add to modules/gcp_vpc/outputs.tf: output "network_name" { value = google_compute_network.main.name }. After terraform apply, HTTP is open on every instance.

Workspaces and Bash Deploy Script

deploy.sh
#!/bin/bash
set -euo pipefail

ENV=${1:-dev}

terraform init \
  -backend-config="bucket=your-terraform-state-bucket" \
  -backend-config="key=multi-cloud/${ENV}/terraform.tfstate" \
  -backend-config="region=eu-west-1"

terraform workspace select -or-create "${ENV}"

terraform plan -out=tfplan \
  -var="environment=${ENV}" \
  -var="gcp_project_id=your-gcp-project"

terraform apply tfplan

terraform output -json > "outputs-${ENV}.json"

echo "Infra ${ENV} deployed! Outputs in outputs-${ENV}.json"

The script initializes the backend first (workspace commands require an initialized backend), then selects or creates a per-environment workspace (-or-create needs Terraform 1.4+) to isolate state. The -backend-config flags set a per-environment key, which assumes partial configuration (key omitted from the backend block). Run ./deploy.sh prod. Pitfall: applying the saved tfplan skips the interactive prompt, so review the plan output first; in CI, put a manual gate between plan and apply.
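
As an alternative to threading -var="environment=..." through every command, you can derive the environment from the active workspace. A minimal sketch, not part of the files above (it replaces reads of var.environment):

locals {
  # terraform.workspace is the name of the currently selected workspace.
  environment = terraform.workspace
}

This guarantees the selected state and the environment name can never drift apart.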

Best Practices

  • Modularize everything: Keep main.tf under 300 lines; use Git sources for private modules.
  • Remote state mandatory: S3 + DynamoDB for teams; terraform state mv for migrations.
  • Validated variables + secrets: use sensitive = true (sketch after this list), Terraform Cloud/Vault, or AWS SSM.
  • CI/CD plans: GitHub Actions with terraform plan -out=tfplan + apply on merge.
  • Safe destroys: terraform destroy -target=module.xxx for granular rollbacks.
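
For the secrets bullet, a minimal sketch; the db_password variable is hypothetical and not part of this tutorial's setup:

variable "db_password" {
  description = "Supply via TF_VAR_db_password or a secrets manager, never a default"
  type        = string
  sensitive   = true # redacted from plan/apply output
}

# Any output that propagates the value must be marked sensitive too.
output "db_password" {
  value     = var.db_password
  sensitive = true
}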

Common Errors to Avoid

  • State drift: never edit resources manually in cloud consoles; run terraform plan -refresh-only (the successor to terraform refresh) to detect drift before planning changes.
  • Provider version drift: pin with ~> 5.0 and commit the .terraform.lock.hcl that terraform init generates; terraform providers lock adds hashes for additional platforms.
  • Dependency cycles: use explicit depends_on when the implicit graph misses an ordering (sketch below).
  • Forgotten workspaces: list them with terraform workspace list and remove unused ones with terraform workspace delete; deleting .terraform only resets local initialization, not remote state.
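
For the depends_on bullet, a minimal sketch against this tutorial's AWS setup; the log bucket is hypothetical:

# The instance writes to the bucket at boot, but no attribute of the
# bucket is referenced, so the ordering must be declared explicitly.
resource "aws_s3_bucket" "app_logs" {
  bucket = "your-app-logs-bucket"
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
  subnet_id     = module.aws_vpc.public_subnet_ids["a"]

  depends_on = [aws_s3_bucket.app_logs]
}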

Next Steps