Introduction
Azure Kubernetes Service (AKS) is Microsoft's managed Kubernetes offering on Azure. AKS simplifies cluster management by automating upgrades, scaling, and security patching, letting you focus on your applications. This beginner tutorial walks you through creating an AKS cluster, deploying a simple web app (Nginx), and exposing it via a LoadBalancer Service. Why does it matter? Kubernetes is the dominant container orchestrator, and a managed service like AKS removes most of the setup and operational burden of running a cluster yourself. By the end, you'll have a horizontally scalable deployment reachable from the internet. We use the Azure CLI for infrastructure and kubectl for workloads, with complete YAML manifests. Estimated time: 30 minutes. Ready to containerize?
Prerequisites
- A free Azure account (sign up at azure.microsoft.com/free)
- Azure CLI installed (version 2.65+)
- kubectl installed (version 1.31+)
- Docker Desktop for local testing (optional)
- Basic terminal and YAML knowledge
Install and Connect to Azure CLI
```shell
# Install Azure CLI (on Ubuntu/Debian; adapt for Windows/macOS)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Check the installation
az --version
# Log in to Azure
az login
# Set the default subscription (replace with your ID)
az account set --subscription "your-subscription-id"
# List available regions
az account list-locations --output table
```

These commands install the Azure CLI, authenticate your session, and set up your environment. `az login` opens a browser for authentication. Avoid surprises by confirming the active subscription with `az account show`. On Windows, use the official MSI installer instead of the curl script.
Create a Resource Group
A resource group is a logical container for your Azure resources. It streamlines management, billing, and cleanup: deleting the group deletes everything inside it.
Create the Resource Group
```shell
# Create a resource group (West Europe region; adjust as needed)
az group create --name rg-aks-demo --location "westeurope"
# Verify creation
az group show --name rg-aks-demo --query location -o tsv
```

This creates `rg-aks-demo` in West Europe. Pick a region close to your users to minimize latency. Pitfall: omitting `--location` makes the command fail. Use `az group delete` for cleanup.
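Optionally, tag the group so demo costs are easy to attribute later. The tag names below are illustrative, not required by AKS:

```shell
# Add tags for cost tracking and ownership (this replaces any existing tags)
az group update --name rg-aks-demo --tags env=demo owner=me
```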
Create the AKS Cluster
```shell
# Create an AKS cluster with 2 nodes (system node pool)
az aks create \
  --resource-group rg-aks-demo \
  --name aks-demo-cluster \
  --node-count 2 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --node-vm-size Standard_D2_v2
# Wait 5-10 min, then check
az aks show --resource-group rg-aks-demo --name aks-demo-cluster --output table
```

This creates a managed cluster with monitoring enabled and auto-generated SSH keys. Standard_D2_v2 is cost-effective for demos (roughly $0.10/hour per node; check current pricing for your region). Creation typically takes 5-10 minutes. Avoid undersized VMs for real workloads.
Configure kubectl for AKS
kubectl is the Kubernetes CLI tool. Configure it to point to your AKS cluster.
Connect kubectl to the Cluster
```shell
# Get kubectl credentials
az aks get-credentials --resource-group rg-aks-demo --name aks-demo-cluster
# Verify connection
kubectl get nodes
# List namespaces
kubectl get ns
```

`get-credentials` merges the cluster into `~/.kube/config` and makes it the current context. `kubectl get nodes` should show 2 nodes in the Ready state. Pitfall: working with multiple clusters? Check which one is active with `kubectl config current-context`.
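If you juggle more than one cluster, kubectl contexts are the usual source of confusion. A short sketch of context management (by default, `az aks get-credentials` names the context after the cluster):

```shell
# List all contexts kubectl knows about; the active one is marked with *
kubectl config get-contexts
# Show only the active context
kubectl config current-context
# Switch back to the AKS cluster if another context is active
kubectl config use-context aks-demo-cluster
```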
Deploy the Nginx Application
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```

This manifest deploys 3 Nginx pods with resource requests/limits and a LoadBalancer Service for external access. The Alpine image keeps it lightweight. Copy it into a file and apply it.
Apply the Deployment
```shell
# Save the previous YAML to nginx-deployment.yaml, then apply it
kubectl apply -f nginx-deployment.yaml
# Check pods
kubectl get pods -l app=nginx
# Check the service and get the external IP
kubectl get svc nginx-service
# Test (wait for the IP to be assigned)
curl http://<EXTERNAL_IP>
```

`kubectl apply -f` creates the Deployment and the Service in one command. `kubectl get svc` shows the EXTERNAL-IP once Azure provisions the load balancer (usually a few minutes). Your app is then publicly reachable.
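Rather than re-running `kubectl get svc` by hand, you can poll for the external IP in a small loop. This is a convenience sketch, not part of the official workflow:

```shell
# Wait until the LoadBalancer Service has a public IP, then test it
while true; do
  EXTERNAL_IP=$(kubectl get svc nginx-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  if [ -n "$EXTERNAL_IP" ]; then break; fi
  echo "Waiting for external IP..."
  sleep 10
done
echo "Service is reachable at http://$EXTERNAL_IP"
curl -s "http://$EXTERNAL_IP" | head -n 5
```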
Scale and Monitor
```shell
# Scale horizontally
kubectl scale deployment nginx-deployment --replicas=5
# Autoscale (HPA)
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=3 --max=10
# Pod logs (picks one pod of the Deployment)
kubectl logs deployment/nginx-deployment
# Describe a pod for debugging
kubectl describe pod <POD_NAME>
```

Scale manually or let the Horizontal Pod Autoscaler (HPA) act on CPU usage. `logs` and `describe` help with debugging. Pitfall: the HPA needs CPU requests on the pods; without them it cannot compute utilization. Check status with `kubectl get hpa`.
Best Practices
- Always set resource requests/limits: prevents noisy neighbors and keeps costs predictable.
- Use namespaces (`kubectl create ns prod`) to isolate environments.
- Enable Azure Monitor: built into AKS, with automatic dashboards in the Azure portal.
- Use Secrets for credentials: `kubectl create secret generic db-pass --from-literal=password=secret`.
- Use Helm for complex apps: move beyond raw YAML once you outgrow this beginner setup.
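The namespace and Secret practices above can be combined. A minimal sketch, reusing the manifest file from earlier (note that Secret values are base64-encoded, not encrypted):

```shell
# Create an isolated namespace and deploy the app into it
kubectl create namespace prod
kubectl apply -f nginx-deployment.yaml --namespace prod
# Store a credential as a Secret in that namespace
kubectl create secret generic db-pass \
  --from-literal=password=secret --namespace prod
# Read it back (base64-decoded)
kubectl get secret db-pass -n prod \
  -o jsonpath='{.data.password}' | base64 --decode
```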
Common Errors to Avoid
- Cluster not ready: wait until `kubectl get nodes` shows all nodes Ready before deploying.
- LoadBalancer IP stuck in pending: check quotas and subnet configuration in the Azure Portal.
- Lost kubectl context: refresh with `az aks get-credentials`.
- Hidden costs: roughly $0.10/hour per node plus the load balancer; clean up with `az group delete --name rg-aks-demo --yes`.
Next Steps
Master advanced AKS with Learni's DevOps & Kubernetes Training. Resources: the official AKS docs and the Kubernetes.io basics tutorial. Next, try the NGINX Ingress Controller or CI/CD with GitHub Actions.