
How to Implement Infrastructure as Code with Terraform in 2026

18 min · Intermediate

Introduction

Infrastructure as Code (IaC) revolutionizes infrastructure management by treating cloud resources like source code: versionable, testable, and reproducible. In 2026, amid growing hybrid cloud complexity, Terraform stands out as the leading open-source tool, with its declarative HCL language and a registry of thousands of providers (AWS, Azure, GCP, and more).

This intermediate tutorial guides you through deploying a secure S3 bucket with versioning and an IAM policy on AWS. Why it matters: manual infrastructure leads to drift, unexpected costs, and downtime. With IaC, you gain idempotency: terraform apply converges your infrastructure to the exact desired state, no matter how many times you run it.

Think of it like Git for infrastructure: Terraform records your resources in a state file, which you can store in an S3 backend with encryption enabled. The result: smooth CI/CD and GDPR/SOC2-ready audit trails. Ready to turn deployments into code?

Prerequisites

  • Terraform CLI ≥ 1.9.0 installed
  • AWS account with IAM permissions (AdministratorAccess for testing)
  • AWS CLI configured (aws configure with access key/secret)
  • Basic knowledge of HCL and AWS S3
  • Git for project versioning
  • Editor like VS Code with HashiCorp Terraform extension

Initialize the Terraform Project

terminal-init.sh
mkdir terraform-iac-s3 && cd terraform-iac-s3
git init
echo '# IaC S3 Bucket Terraform' > README.md
terraform init

These commands create a project directory, initialize it as a Git repository for versioning, and run terraform init to prepare the working directory. Note that with no .tf files yet, init has nothing to download; re-run it after writing the provider block so Terraform fetches the AWS provider. Pitfall: without Git you have no history, and always start in an empty folder to avoid clashing with an existing state file.

Configure the AWS Provider

The provider defines how Terraform interacts with AWS. It uses your CLI credentials by default but supports environment variables or IAM roles for security. Create main.tf with version pinning for reproducibility.

Define the Provider and Variables

main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.40"
    }
  }
  required_version = ">= 1.9.0"
}

provider "aws" {
  region = var.aws_region
}

variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "eu-west-1"
}

variable "bucket_name" {
  description = "Unique S3 bucket name"
  type        = string
}

This file pins the Terraform and provider versions to avoid breaking changes, configures the provider with a region variable, and declares bucket_name. The ~> 5.40 constraint accepts patch and minor releases within the 5.x line while blocking a jump to 6.x. Pitfall: bucket names must be globally unique across all of AWS; prefix them with something like my-app-2026-.
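If you prefer not to hand-pick a unique name, a common pattern appends a random suffix. A minimal sketch, assuming you add the hashicorp/random provider to required_providers (the resource and local names here are illustrative):

```hcl
# Requires the hashicorp/random provider in required_providers.
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

locals {
  # random_id.hex yields lowercase hex, which is valid in S3 bucket names.
  unique_bucket_name = "my-app-2026-${random_id.bucket_suffix.hex}"
}
```

You could then pass local.unique_bucket_name to the bucket resource instead of var.bucket_name.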

Create the Basic S3 Bucket

Define the S3 resource with versioning enabled for data recovery. Terraform handles idempotency: repeated applies won't create duplicates.

S3 Bucket Resource with Versioning

s3.tf
resource "aws_s3_bucket" "secure_bucket" {
  bucket = var.bucket_name

  tags = {
    Name        = "Secure IaC Bucket"
    Environment = "prod"
    ManagedBy   = "Terraform"
  }
}

resource "aws_s3_bucket_versioning" "secure_bucket_versioning" {
  bucket = aws_s3_bucket.secure_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "secure_bucket_encryption" {
  bucket = aws_s3_bucket.secure_bucket.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Creates a tagged bucket, enables versioning (S3 keeps every noncurrent version until a lifecycle rule expires it), and applies AES256 (SSE-S3) encryption at rest. References like aws_s3_bucket.secure_bucket.id create implicit dependencies, so Terraform orders creation correctly. Pitfall: untagged resources are hard to trace in billing; always enable encryption for compliance.
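Because versioning retains noncurrent versions indefinitely, storage costs can grow over time. A hedged sketch of a lifecycle rule that expires old versions after 90 days (the resource name and retention period are illustrative):

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "secure_bucket_lifecycle" {
  bucket = aws_s3_bucket.secure_bucket.id

  rule {
    id     = "expire-old-versions"
    status = "Enabled"

    # Empty filter applies the rule to all objects in the bucket.
    filter {}

    # Delete noncurrent object versions 90 days after they are superseded.
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}
```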

Add a Security Policy

Deny unencrypted uploads at the bucket level and create an IAM role for authorized uploads; the policy JSON is generated via the aws_iam_policy_document data source rather than hand-written strings. (To fully block public access, also add an aws_s3_bucket_public_access_block resource.)

Bucket Policy and IAM Role

policy.tf
data "aws_iam_policy_document" "secure_bucket_policy" {
  statement {
    sid    = "DenyUnEncryptedObjectUploads"
    effect = "Deny"
    principals {
      type        = "*"
      identifiers = ["*"]
    }
    actions = ["s3:PutObject*"]
    resources = ["${aws_s3_bucket.secure_bucket.arn}/*"]
    condition {
      test     = "StringNotEquals"
      variable = "s3:x-amz-server-side-encryption"
      values   = ["AES256"]
    }
  }
}

resource "aws_s3_bucket_policy" "secure_bucket_policy" {
  bucket = aws_s3_bucket.secure_bucket.id
  policy = data.aws_iam_policy_document.secure_bucket_policy.json
}

resource "aws_iam_role" "s3_uploader" {
  name = "s3-uploader-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

The data source renders a JSON policy that denies unencrypted uploads, and the aws_s3_bucket_policy resource attaches it to the bucket. The IAM role lets EC2 instances assume it for uploads, though it still needs an attached policy granting s3:PutObject. Pitfall: malformed policies cause 403 errors; validate with terraform plan and the AWS Policy Simulator.
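Two gaps are worth closing here: the role above carries no S3 permissions yet, and truly blocking public access needs its own resource. A sketch of both (resource and policy names are illustrative):

```hcl
# Block all forms of public access to the bucket.
resource "aws_s3_bucket_public_access_block" "secure_bucket" {
  bucket                  = aws_s3_bucket.secure_bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Grant the uploader role permission to put objects into the bucket.
resource "aws_iam_role_policy" "s3_uploader_put" {
  name = "s3-uploader-put"
  role = aws_iam_role.s3_uploader.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:PutObject"]
      Resource = ["${aws_s3_bucket.secure_bucket.arn}/*"]
    }]
  })
}
```

Note that block_public_policy rejects public Allow policies but does not interfere with the Deny statement attached earlier.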

Define Outputs and Remote State

Outputs expose values (e.g., bucket ARN for apps). For production, store state in S3 + DynamoDB locking.

Outputs and S3 Backend

outputs.tf
output "bucket_arn" {
  description = "S3 bucket ARN"
  value       = aws_s3_bucket.secure_bucket.arn
}

output "bucket_name_output" {
  description = "Bucket name"
  value       = aws_s3_bucket.secure_bucket.bucket
}

terraform {
  backend "s3" {
    bucket         = "mon-terraform-state-2026"
    key            = "iac-s3/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
  }
}

Outputs feed CI/CD integration; the S3 backend enables team collaboration, with the DynamoDB table providing state locking (the bucket and table must exist before terraform init; Terraform 1.10+ can also lock natively in S3 via use_lockfile). Pitfall: local state is lost if the folder is deleted; migrate to the remote backend with terraform init -migrate-state.
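Since the backend bucket and lock table cannot be created by the configuration that uses them, one option is a small, separate bootstrap configuration applied once with local state. A sketch, reusing the names from the backend block above (the bootstrap/ path is illustrative):

```hcl
# bootstrap/main.tf: applied once with local state, before enabling the backend.
resource "aws_s3_bucket" "tf_state" {
  bucket = "mon-terraform-state-2026"
}

# Versioning lets you recover earlier state files after a bad write.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB lock table; the S3 backend requires a string hash key named LockID.
resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```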

Plan and Apply the Infrastructure

terminal-apply.sh
tfenv install 1.9.2
tfenv use 1.9.2
terraform fmt -recursive
terraform validate
export TF_VAR_bucket_name="mon-iac-s3-bucket-$(date +%s)"
terraform plan
terraform apply -auto-approve
terraform output -json

fmt/validate check formatting and syntax; plan previews changes and detects drift; apply deploys. Exporting the timestamped name once as TF_VAR_bucket_name guarantees that plan and apply use the same value (calling $(date +%s) separately in each command would generate two different names, so you would apply something other than what you reviewed). Pitfall: bucket_name has no default, so Terraform prompts interactively if no value is supplied; use output -json to feed pipelines.

Clean Up with Destroy

terminal-destroy.sh
terraform plan -destroy
terraform destroy -auto-approve
terraform state rm aws_s3_bucket.secure_bucket # Optional: selective state cleanup

destroy tears down everything tracked in the state; state rm only removes a resource from the state without deleting it in AWS, which is useful when a resource moves to another configuration. Pitfall: forgetting destroy keeps billing AWS resources; always preview with plan -destroy.

Best Practices

  • Modularize: Break into reusable modules (module "vpc" { source = "./modules/vpc" }).
  • Secure state: Always use S3 backend + OIDC for GitHub Actions.
  • Variables & secrets: Ignore terraform.tfvars in Git; use Vault/Terraform Cloud.
  • Testing: Integrate terratest or checkov in CI for IaC scans.
  • Version pinning: Use ~> for providers, tfenv to lock Terraform version.
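The modularization bullet above can be sketched as a root-module call to a local, reusable module (the ./modules/s3-bucket path and input names are hypothetical):

```hcl
# Root module delegates bucket creation to a reusable local module.
module "app_bucket" {
  source      = "./modules/s3-bucket"
  bucket_name = var.bucket_name
  environment = "prod"
}
```

Each module would carry its own variables.tf, main.tf, and outputs.tf, making the bucket pattern reusable across environments.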

Common Errors to Avoid

  • Non-unique bucket name: Add UUID/timestamp; triggers BucketAlreadyExists error.
  • State drift: After manual changes, reconcile with terraform plan -refresh-only (the standalone terraform refresh command is deprecated).
  • Provider version drift: Without pinning, init upgrades break things; use terraform providers lock.
  • Dependency cycles: Avoid circular refs with explicit depends_on.
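For the last bullet, an explicit depends_on looks like this (the aws_s3_object resource and its arguments are illustrative):

```hcl
# Force ordering: upload the object only after the bucket policy is in place.
resource "aws_s3_object" "seed_file" {
  bucket  = aws_s3_bucket.secure_bucket.id
  key     = "seed.txt"
  content = "hello"

  depends_on = [aws_s3_bucket_policy.secure_bucket_policy]
}
```

Prefer implicit dependencies through attribute references where possible; reach for depends_on only when Terraform cannot infer the ordering.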

Next Steps