
How to Use Amazon S3 to Store Files in 2026


Introduction

Amazon S3 (Simple Storage Service) is AWS's flagship object storage service, offering virtually unlimited scalability and 99.999999999% (11 nines) durability. Perfect for hosting static sites, backups, ML datasets, or multimedia assets, S3 handles petabytes without any infrastructure to manage. In 2026, with the rise of AI and edge computing, S3 integrates seamlessly with Lambda, Athena, and S3 Glacier for cost-optimized workloads.

This beginner tutorial guides you from A to Z: creating buckets via the console and CLI, secure uploads, and access through the Node.js SDK. Every step is actionable with copy-paste code. By the end, you'll have stored your first production files. Why S3? Low cost ($0.023/GB/month for S3 Standard), durability, and support for GDPR/HIPAA-compliant architectures. Ready to scale?

Prerequisites

  • Free AWS account (create one at aws.amazon.com)
  • AWS CLI v2 installed (docs.aws.amazon.com/cli)
  • Node.js 20+ and npm for the SDK
  • IAM Access Key (generated later, see step 1)
  • Code editor (VS Code recommended)

Install and Configure AWS CLI

terminal.sh
#!/bin/bash

# Download AWS CLI v2 (Linux/Mac)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Verification
aws --version

# Configure credentials (replace with your values)
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set default.region eu-west-1
aws configure set default.output json

# Test
aws sts get-caller-identity

This script installs the AWS CLI, configures your IAM credentials, and tests the connection. The example uses eu-west-1 (Ireland) for low EU latency; for Paris, use eu-west-3. Pitfall: never commit your keys to Git; use IAM roles in production. Replace the placeholders with your real keys.

Step 1: Create Your First IAM Credentials

Before using CLI, generate an Access Key via the AWS Console:

  1. Go to IAM > Users > Create user (name: s3-user).
  2. Attach the AmazonS3FullAccess policy (fine for beginners) or a custom least-privilege policy.
  3. Under Security credentials, create an access key and copy the Access Key ID and Secret Access Key.

Analogy: IAM keys are like passports; limit them to S3 for security.
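
If you prefer the terminal, the three console steps above map directly to CLI calls (this requires already-configured credentials with IAM permissions, e.g. an admin profile). A minimal sketch, assuming the user name s3-user:

create-iam-user.sh
#!/bin/bash

# Create the IAM user (same name as in the console steps)
aws iam create-user --user-name s3-user

# Attach the managed S3 policy (broad; prefer a least-privilege policy in production)
aws iam attach-user-policy --user-name s3-user --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate the key pair; the SecretAccessKey is shown only once, so save it now
aws iam create-access-key --user-name s3-user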

Create an S3 Bucket

create-bucket.sh
#!/bin/bash

BUCKET_NAME="my-first-bucket-$(date +%s)-unique"
REGION="eu-west-1"

# Create the bucket (name must be globally unique!)
aws s3 mb s3://${BUCKET_NAME} --region ${REGION}

# Enable versioning (optional, protects against deletions)
aws s3api put-bucket-versioning --bucket ${BUCKET_NAME} --versioning-configuration Status=Enabled

# List buckets to verify
aws s3 ls

# Make public read (for static sites). New buckets enable Block Public Access by
# default, so lift it first (skip both commands for a private bucket!)
aws s3api delete-public-access-block --bucket ${BUCKET_NAME}
aws s3api put-bucket-policy --bucket ${BUCKET_NAME} --policy '{"Version":"2012-10-17","Statement":[{"Sid":"PublicRead","Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::'${BUCKET_NAME}'/*"}]}'

echo "Bucket created: s3://${BUCKET_NAME}"

This creates a uniquely named bucket (timestamp suffix), enables versioning, and attaches a public-read policy. Bucket names must be lowercase, without underscores, and globally unique. Pitfall: avoid public buckets in production; serve content through CloudFront with Origin Access Control (OAC, the successor to OAI) instead. Copy the bucket name for the next steps.

Step 2: Upload and Manage Objects

A bucket stores objects (files + metadata). Think of it as an infinite hard drive with no native folder hierarchy (use / in object keys to simulate folders).

Upload, List, and Download a File

manage-objects.sh
#!/bin/bash

BUCKET_NAME="my-first-bucket-$(date +%s)-unique"  # Replace with your bucket
FILE_LOCAL="example.txt"

# Create test file
cat > ${FILE_LOCAL} << EOF
Test content for S3.
Upload successful in 2026!
EOF

# Upload
aws s3 cp ${FILE_LOCAL} s3://${BUCKET_NAME}/docs/example.txt

# List objects
aws s3 ls s3://${BUCKET_NAME}/ --recursive

# Download
aws s3 cp s3://${BUCKET_NAME}/docs/example.txt ${FILE_LOCAL}.download

# Delete
aws s3 rm s3://${BUCKET_NAME}/docs/example.txt

# Empty bucket then delete (careful! with versioning enabled, remaining object
# versions must be purged separately or rb will fail with BucketNotEmpty)
aws s3 rm s3://${BUCKET_NAME}/ --recursive
aws s3 rb s3://${BUCKET_NAME} --force

cat ${FILE_LOCAL}.download

This covers the full object lifecycle: upload (cp), list (ls), download, and delete (rm). Use --recursive for folders. Pitfall: with versioning enabled, rm only adds delete markers; list versions with aws s3api list-object-versions. For large deployments, prefer aws s3 sync, as sketched below.
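
For large deployments (a static site build, for example), sync uploads only changed files. A minimal sketch, assuming a local ./dist folder and your bucket name:

sync-deploy.sh
#!/bin/bash

BUCKET_NAME="your-bucket-name"

# Mirror ./dist to the bucket; --delete removes remote files that no longer exist locally
aws s3 sync ./dist s3://${BUCKET_NAME}/ --delete

# With versioning enabled, inspect all object versions and delete markers
aws s3api list-object-versions --bucket ${BUCKET_NAME} --prefix docs/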

Step 3: Integrate S3 in a Node.js App

For apps, use the AWS SDK. Install with: npm init -y && npm i @aws-sdk/client-s3 dotenv. Create .env for keys.

Install Node.js Dependencies

setup-node.sh
#!/bin/bash

mkdir s3-node-app && cd s3-node-app
npm init -y
npm install @aws-sdk/client-s3 dotenv
npm install -D typescript @types/node ts-node

cat > .env << EOF
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
AWS_REGION=eu-west-1
S3_BUCKET=my-first-bucket-unique
EOF

cat > .gitignore << EOF
.env
node_modules/
EOF

This sets up a Node project with SDK v3 (modern, tree-shakeable). The .env file holds your secrets, and the generated .gitignore keeps it out of version control. Run the script with npx ts-node index.ts.

Complete Node.js Script for S3

index.ts
import { S3Client, PutObjectCommand, GetObjectCommand, ListObjectsV2Command, DeleteObjectCommand } from '@aws-sdk/client-s3';
import * as dotenv from 'dotenv';
import * as fs from 'fs';

dotenv.config();

const client = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

const bucketName = process.env.S3_BUCKET!;

async function uploadFile(filePath: string, key: string) {
  const fileContent = fs.readFileSync(filePath);
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    Body: fileContent,
    ContentType: 'text/plain',
  });
  await client.send(command);
  console.log(`Uploaded: ${key}`);
}

async function listFiles() {
  const command = new ListObjectsV2Command({ Bucket: bucketName });
  const response = await client.send(command);
  console.log('Files:', response.Contents?.map(obj => obj.Key));
}

async function downloadFile(key: string, localPath: string) {
  const command = new GetObjectCommand({ Bucket: bucketName, Key: key });
  const response = await client.send(command);
  fs.writeFileSync(localPath, await response.Body!.transformToString());
  console.log(`Downloaded: ${key}`);
}

async function deleteFile(key: string) {
  const command = new DeleteObjectCommand({ Bucket: bucketName, Key: key });
  await client.send(command);
  console.log(`Deleted: ${key}`);
}

// Run the demo end to end, with basic error handling
(async () => {
  try {
    await uploadFile('test.txt', 'app/test.txt');
    await listFiles();
    await downloadFile('app/test.txt', 'downloaded.txt');
    await deleteFile('app/test.txt');
  } catch (err) {
    console.error('S3 operation failed:', err);
    process.exit(1);
  }
})();

The full script covers upload, list, download, and delete with async/await. SDK v3 uses Command objects for type safety. The demo block wraps its calls in try/catch; in production, prefer IAM roles (instance or task credentials) over static keys. Create test.txt locally before running.

Step 4: Secure with Policies and Lifecycle

Analogy: Bucket = safe; policies = locks. Set via Console > Permissions.

JSON Policy for Private Bucket + Lifecycle

bucket-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublic",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-private-bucket",
        "arn:aws:s3:::my-private-bucket/*"
      ],
      "Condition": {
        "BoolIfExists": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

This policy denies any request made without TLS (aws:SecureTransport is false); on its own it does not block public reads, so keep Block Public Access enabled as well. Apply it with: aws s3api put-bucket-policy --bucket my-private-bucket --policy file://bucket-policy.json. For lifecycle rules that auto-archive to Glacier after 30 days, use the Console or the CLI call sketched below.
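
The same lifecycle rule can be set from the CLI instead of the Console. A minimal sketch, assuming the bucket my-private-bucket and a 30-day transition to Glacier:

lifecycle.sh
#!/bin/bash

# Move every object to the GLACIER storage class 30 days after creation
aws s3api put-bucket-lifecycle-configuration --bucket my-private-bucket --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "archive-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}'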

Best Practices

  • Least privilege: scope IAM policies to specific actions (e.g. s3:PutObject only).
  • Encryption: enable SSE-S3 or SSE-KMS by default (see the sketch after this list).
  • Lifecycle rules: archive to IA/Glacier after 30 days for up to ~80% lower storage costs.
  • CloudFront CDN: cache in front of S3 for global speed.
  • Monitoring: enable S3 request metrics and CloudWatch alarms on 4xx errors.
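
Default encryption can be enforced per bucket from the CLI. A minimal sketch using SSE-S3 (AES256); note that new buckets already get SSE-S3 by default, so this mainly matters for older buckets or for switching to KMS:

enable-encryption.sh
#!/bin/bash

# Encrypt all new objects with SSE-S3; for SSE-KMS, use "aws:kms" plus a KMSMasterKeyID
aws s3api put-bucket-encryption --bucket your-bucket-name --server-side-encryption-configuration '{
  "Rules": [ { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" } } ]
}'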

Common Errors to Avoid

  • Non-unique bucket name: add a timestamp or UUID suffix.
  • Exposed keys: always use .env or IAM roles; scan your Git history.
  • Accidentally public bucket: verify Block Public Access (Console > Permissions, or the CLI sketch below).
  • No versioning: enable it so deletions can be undone.
  • Forgotten region: specify --region everywhere.
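
To rule out accidental exposure entirely, enforce Block Public Access from the CLI. A minimal sketch, assuming your bucket name:

block-public.sh
#!/bin/bash

# Enable all four public-access guards on the bucket
aws s3api put-public-access-block --bucket your-bucket-name --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Verify the current settings
aws s3api get-public-access-block --bucket your-bucket-name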

Next Steps