Deployment on AWS EKS Cluster – Step-by-Step Guide
Author: Vikramsinh Shinde
Date: December 1, 2025
In this tutorial, we’ll deploy a sample NGINX application on AWS EKS, push a Docker image to ECR, and expose it externally via AWS ALB Ingress.
Objective: Deploy and manage a sample NGINX application on AWS EKS with scalability, ALB ingress, and Docker/ECR integration.
Tools & Technologies Used:
- AWS: EKS, ECR, EC2, IAM
- Kubernetes: Deployment, Service, Ingress, HPA
- Helm: AWS Load Balancer Controller
- Docker: Build and push images
- CI/CD: GitHub Actions (optional, for automated manifest application)
- Linux CLI: kubectl, awscli, curl
- Terraform: Infrastructure as Code
1. Setting Up the EKS Cluster Using Terraform
We first created an EKS cluster using Terraform (a minimal sketch follows the cluster details below).
Cluster Details:
- Cluster Name: vikram-eks
- Cluster Endpoint: https://42C02119786CCBA6FFE4712A9AF9D004.gr7.ap-south-1.eks.amazonaws.com
- Node Group: vikram-eks:node-group-1
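The Terraform configuration itself isn't reproduced in this post; the following is a minimal sketch of what such a setup can look like, assuming the community terraform-aws-modules/eks module and an existing VPC. The variable names, versions, and instance sizes are placeholders, not the exact code used.

```hcl
provider "aws" {
  region = "ap-south-1"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"           # pin to whichever module version you actually use

  cluster_name    = "vikram-eks"
  cluster_version = "1.29"      # assumed; use the version from your own plan

  vpc_id     = var.vpc_id       # hypothetical variables pointing at an existing VPC
  subnet_ids = var.subnet_ids

  eks_managed_node_groups = {
    node-group-1 = {
      instance_types = ["t3.micro"]   # small instances, in line with the free-tier note later
      min_size       = 1
      max_size       = 2
      desired_size   = 2
    }
  }
}
```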
Configure kubectl to use the cluster:
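Assuming the default AWS profile has access to the cluster, the standard commands are:

```bash
# Write/refresh the kubeconfig entry for the cluster, then confirm the nodes are Ready
aws eks update-kubeconfig --region ap-south-1 --name vikram-eks
kubectl get nodes
```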
Screenshot Placeholder: Cluster nodes in Ready state.
2. Building and Pushing Docker Image to ECR
Directory structure for the portfolio app: see the repository layout near the end of this post.
Dockerfile:
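The original Dockerfile isn't shown in the post; a minimal sketch for a static site served by NGINX (assuming an index.html in the build context) could look like this:

```dockerfile
# Serve the static portfolio with the official NGINX image
FROM nginx:alpine
# Copy the site into NGINX's default web root (index.html is assumed to exist)
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
```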
Build and push to ECR:
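A typical build-and-push sequence, with the account ID and repository name as placeholders:

```bash
AWS_ACCOUNT_ID=<account-id>   # placeholder
REGION=ap-south-1
REPO=portfolio                # assumed repository name

# Authenticate Docker to ECR (the token is valid for 12 hours)
aws ecr get-login-password --region $REGION \
  | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com

# Build, tag, and push
docker build -t $REPO:latest .
docker tag $REPO:latest $AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest
docker push $AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest
```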
Screenshot Placeholder: Docker image pushed to AWS ECR.
3. Deploying the Application in Kubernetes
3.1 Deployment
portfolio-deployment.yml:
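The manifest isn't reproduced in the post; here is a sketch consistent with the rest of the guide (2 replicas, resource limits, the ECR image as a placeholder). The names and resource values are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio
spec:
  replicas: 2
  selector:
    matchLabels:
      app: portfolio
  template:
    metadata:
      labels:
        app: portfolio
    spec:
      containers:
        - name: portfolio
          image: <account-id>.dkr.ecr.ap-south-1.amazonaws.com/portfolio:latest  # placeholder
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```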
Apply the deployment:
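Assuming the manifest above:

```bash
kubectl apply -f portfolio-deployment.yml
kubectl get pods -l app=portfolio   # label taken from the sketch above
```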
Screenshot Placeholder: Pods running.
3.2 Service
portfolio-service.yml:
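A sketch of a LoadBalancer Service matching the screenshot description below (Section 7 later switches this to ClusterIP once the ALB Ingress is in place); the service name and selector are assumptions carried over from the deployment sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
spec:
  type: LoadBalancer       # first iteration; changed to ClusterIP in Section 7
  selector:
    app: portfolio
  ports:
    - port: 80
      targetPort: 80
```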
Apply the service:
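```bash
kubectl apply -f portfolio-service.yml
kubectl get svc portfolio-service   # EXTERNAL-IP is populated once the load balancer is provisioned
```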
Screenshot Placeholder: LoadBalancer service with external IP/ALB URL.
4. Installing AWS Load Balancer Controller
Add the Helm repository hosting the AWS Load Balancer Controller chart:
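```bash
helm repo add eks https://aws.github.io/eks-charts
helm repo update
```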
Create IAM Policy for ALB Controller:
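Download the controller's IAM policy document and create the policy. The URL below points at the project's install docs and is shown as an example; use the policy file that matches the controller version you install.

```bash
curl -o iam_policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
```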
Associate IAM OIDC provider and create service account:
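With eksctl, associate the cluster's OIDC provider and create a service account bound to that policy (the account ID is a placeholder):

```bash
eksctl utils associate-iam-oidc-provider \
  --cluster vikram-eks --region ap-south-1 --approve

eksctl create iamserviceaccount \
  --cluster vikram-eks --region ap-south-1 \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
```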
Install ALB Controller using Helm:
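```bash
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=vikram-eks \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

kubectl get deployment -n kube-system aws-load-balancer-controller
```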
Screenshot Placeholder: ALB controller pods running.
5. Exposing Application via Ingress
portfolio-ingress.yml:
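The manifest isn't reproduced in the post; a sketch using the controller's alb ingress class and an internet-facing scheme (service name matching the earlier sketch) is:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portfolio-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # route straight to pod IPs (works with ClusterIP services)
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portfolio-service
                port:
                  number: 80
```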
Apply Ingress:
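```bash
kubectl apply -f portfolio-ingress.yml
kubectl get ingress portfolio-ingress   # the ADDRESS column shows the ALB DNS name once provisioned
```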
Visit the ALB URL:
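With the DNS name from the previous command:

```bash
curl http://<alb-dns-name>   # placeholder; use the ADDRESS shown by kubectl get ingress
```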
Screenshot Placeholder: Application accessible via browser/curl.
6. Verification
Verify that everything is healthy (commands below):
- Pods are running
- Service endpoints are populated
- Ingress ALB URL responds
Screenshot Placeholder: All resources healthy.
7. Service Must Be ClusterIP, Not LoadBalancer
With the ALB Ingress routing traffic to the pods, the Service does not need to be of type LoadBalancer; the steps below repeat the deployment with that correction, this time in the test-app namespace.
Step 1: Create Namespace
Create the test-app namespace and confirm it is listed:
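```bash
kubectl create namespace test-app
kubectl get namespace test-app   # STATUS should be Active
```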
Step 2: Create Docker Image for NGINX
The Dockerfile and the ECR build-and-push commands follow the same pattern shown in Section 2.
Step 3: Deploy NGINX Deployment
portfolio-deployment.yml is the same manifest sketched in Section 3.1 (2 replicas with resource limits), with the namespace set to test-app; apply it with kubectl apply -f portfolio-deployment.yml.
Step 4: Expose Deployment via Service
portfolio-service.yml:
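A sketch of the corrected ClusterIP Service (names and selector are assumptions carried over from the earlier sketches):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
  namespace: test-app
spec:
  type: ClusterIP          # the ALB Ingress (target-type: ip) routes to the pods directly
  selector:
    app: portfolio
  ports:
    - port: 80
      targetPort: 80
```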
Apply Service:
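Apply the Service and, optionally, expose a NodePort as well (matching the "ClusterIP & NodePort Service" checklist item below; the deployment name is assumed to be portfolio):

```bash
kubectl apply -f portfolio-service.yml

# Optional NodePort exposure
kubectl expose deployment portfolio -n test-app \
  --name portfolio-nodeport --type=NodePort --port=80 --target-port=80
```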
Step 5: Configure Horizontal Pod Autoscaler (HPA)
portfolio-hpa.yml:
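A sketch of the HPA with the 60% CPU target from the task list (the deployment name and the upper replica bound are assumptions; CPU metrics require metrics-server to be installed in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: portfolio-hpa
  namespace: test-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: portfolio           # assumed deployment name
  minReplicas: 2
  maxReplicas: 5              # assumed upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # CPU target from the task list
```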
Apply HPA:
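```bash
kubectl apply -f portfolio-hpa.yml
kubectl get hpa -n test-app   # TARGETS shows <current>/60% once metrics-server reports CPU usage
```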
Step 6: Install AWS Load Balancer Controller (as in Section 4).
Step 7: Create Ingress for ALB
portfolio-ingress.yml is the same manifest sketched in Section 5, created in the test-app namespace; apply it with kubectl apply -f portfolio-ingress.yml.
Step 8: Access the Application
- ALB URL: from kubectl get ingress portfolio-ingress -n test-app (the ADDRESS column)
- NodePort (optional): reach a node's public IP on the NodePort, as sketched below
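A sketch of reaching the app through the NodePort, assuming the optional portfolio-nodeport service from Step 4 and a node security group that allows the port:

```bash
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
NODE_PORT=$(kubectl get svc portfolio-nodeport -n test-app -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT
```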
✅ Section Completed
- Namespace created ✅
- NGINX Deployment with 2 replicas ✅
- ClusterIP & NodePort Service ✅
- Horizontal Pod Autoscaler configured ✅
- AWS Load Balancer Controller installed ✅
- ALB Ingress deployed ✅
- Docker image pushed to ECR ✅
- Kubernetes manifests applied ✅
8. GitHub Actions
Step 1: Build the Image and Update the Deployment
We built the Docker image and pushed it to ECR (same commands as in Section 2), updated the deployment manifest portfolio-deployment.yml to reference the ECR image, and applied it with kubectl apply.
✅ Result: Deployment updated with the ECR image; pods are running successfully.
Step 2: Configure HPA, Service, and Ingress
Horizontal Pod Autoscaler (HPA): portfolio-hpa.yml, applied as in Section 7, Step 5.
Service: portfolio-service.yml, applied as in Section 7, Step 4.
Ingress: portfolio-ingress.yml, applied as in Section 7, Step 7.
✅ Result: HPA, Service, and Ingress are configured; ALB URL is generated.
Step 3: CI/CD with GitHub Actions
- Push your code to GitHub (commands below)
- GitHub Actions workflow (a sketch follows)
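Typical commands for pushing the manifests and workflow (the branch name is assumed to be main):

```bash
git add .
git commit -m "Add EKS manifests and deploy workflow"
git push origin main
```

The workflow file itself isn't reproduced in the post; the following .github/workflows/deploy.yaml is a sketch of the kind of pipeline described. The secret names, region, repository name, build context, and deployment/container names are assumptions, and the official aws-actions steps are used.

```yaml
name: Deploy to EKS
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1

      - name: Log in to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push image
        run: |
          # Build context and repository name are assumptions
          IMAGE=${{ steps.ecr.outputs.registry }}/portfolio:${{ github.sha }}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"

      - name: Update kubeconfig
        run: aws eks update-kubeconfig --region ap-south-1 --name vikram-eks

      - name: Apply manifests and roll out the new image
        run: |
          kubectl apply -f eks/
          # Deployment and container names assumed to be "portfolio"
          kubectl set image deployment/portfolio \
            portfolio=${{ steps.ecr.outputs.registry }}/portfolio:${{ github.sha }} -n test-app
```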
Conclusion
- Deployment updated with ECR image ✅
- HPA, Service, and Ingress configured ✅
- CI/CD pipeline via GitHub Actions ✅
Repository structure (devops-task):
devops-practical-assignment/
├── eks/
│   ├── namespace.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── hpa.yaml
│   └── README.md
├── docker-onprem/
│   ├── Dockerfile
│   ├── index.html
│   ├── docker-compose.yml
│   ├── nginx-compose.service
│   ├── logrotate.conf
│   └── README.md
├── .github/
│   └── workflows/
│       └── deploy.yaml
├── .gitignore
└── README.md
| Task | Status |
|---|---|
| 1. Create namespace test-app | ✅ Completed (kubectl get namespace) |
| 2. Deploy NGINX Deployment with 2 replicas & resource limits | ✅ Completed (portfolio-deployment.yml) |
| 3. Expose Deployment with ClusterIP / NodePort service | ✅ Completed (portfolio-service.yml & kubectl expose) |
| 4. Configure HPA with CPU target 60% | ✅ Completed (portfolio-hpa.yml) |
| 5. Install AWS Load Balancer Controller | ✅ Completed (Helm, pods running) |
| 6. Create Ingress for ALB | ✅ Completed (portfolio-ingress.yml, ALB URL accessible) |
| 7. Build Docker image & push to ECR via CI/CD | ✅ Completed (Dockerfile, Git push, workflow ready) |
| 8. Apply Kubernetes manifests automatically through pipeline | ✅ Completed (GitHub Actions workflow to update deployment) |
🚀 Lessons Learned from My AWS EKS DevOps Project
This project was a complete hands-on journey where I built and deployed a containerized application using Terraform, AWS EKS, AWS ECR, GitHub Actions, and Kubernetes. Along the way, I learned several valuable lessons—both technical and practical. Here are the key takeaways:
1️⃣ Terraform State Management Is Critical
- Terraform must maintain a stable backend (local or S3); a minimal S3 backend block is sketched below.
- Destroying resources without checking the state file may leave orphaned items.
- Running terraform destroy from the same folder where the resources were created ensures a clean teardown.
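As referenced above, a minimal S3 backend block (bucket name and key are placeholders) looks like this:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder bucket
    key    = "eks/terraform.tfstate"
    region = "ap-south-1"
  }
}
```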
2️⃣ EKS Requires Multiple Permissions (IAM Roles)
- EKS cluster creation failed initially because the IAM roles were incomplete.
- The cluster needs:
  - an EKS cluster role
  - a node group role
  - CNI / load balancer / autoscaler policies
- Missing even one policy caused cluster creation to fail.
3️⃣ kubectl Uses the kubeconfig Generated by the AWS CLI
- The error connection refused at localhost:8080 happens when:
  - kubeconfig wasn't generated
  - the EKS cluster was deleted
  - or you didn't run aws eks update-kubeconfig (see Section 1)
- Good reminder: kubectl must always talk to an active control plane.
4️⃣ GitHub Actions Fails Without kubectl Authentication
- My CI/CD pipeline failed because kubectl couldn't connect to EKS.
- Lesson: before the deployment steps, always run aws eks update-kubeconfig and configure AWS credentials in Actions (see the workflow sketch in Section 8, Step 3).
5️⃣ ECR Authentication is Temporary
- Pushing images requires an authorization token that expires every 12 hours.
- Best practice: run the aws ecr get-login-password login (shown in Section 2) immediately before pushing.
- CI/CD pipelines need this step before pushing the image.
6️⃣ EKS Cluster Creation Costs Time, But Not Always Money
- Since the control plane was not fully deployed, AWS didn't bill anything.
- Micro EC2 instances fit into free-tier usage.
- Short-lived resources = zero cost.
👉 Lesson: AWS free tier is very useful for learning, but always check Cost Explorer.
7️⃣ Clean AWS Resource Cleanup is Mandatory
After finishing the project, I checked for leftover resources and destroyed everything cleanly using only the command line (a few sample checks are sketched after the list):
- EC2
- Load Balancers
- ECR Repositories
- RDS
- EFS
- Snapshots
- IAM Policies
- CloudWatch Logs
- NAT Gateways
- Elastic IPs
Everything cleaned → Prevented any hidden or surprise billing.
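A few example spot checks (not exhaustive) that should all come back empty after teardown:

```bash
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId"
aws elbv2 describe-load-balancers --query "LoadBalancers[].LoadBalancerName"
aws ecr describe-repositories --query "repositories[].repositoryName"
aws ec2 describe-nat-gateways --filter "Name=state,Values=available" --query "NatGateways[].NatGatewayId"
```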
8️⃣ Git Conflicts Happen If You Don’t Pull Before Push
I got a rejected-push error because the remote branch had commits my local branch didn't.
Fix was simple:
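Assuming the remote branch is main:

```bash
git pull --rebase origin main   # bring in the remote commits first
git push origin main
```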
Lesson: Always pull before pushing to avoid conflicts.
9️⃣ Subnets Must Match Availability Zones
Node groups require correct subnet mapping.
If the subnet IDs do not match AZs, EKS node groups fail.
Lesson: always verify the VPC + subnet + AZ combination when deploying EKS; a quick check is sketched below.
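A quick way to see which AZ each subnet lives in (the VPC ID is a placeholder):

```bash
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<vpc-id>" \
  --query "Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}" \
  --output table
```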
🔟 Keep Architecture Diagrams Ready
Having a simple architecture diagram helped me track:
- VPC
- Subnets
- EKS
- Node groups
- GitHub Actions workflow
- ECR
- Load Balancer
It makes debugging much easier.
🏁 Final Thoughts
This project helped me understand the real journey of building production-like infra:
✔ Command-Line (AWS-CLI)
✔ Infrastructure as Code
✔ CI/CD pipeline
✔ Kubernetes deployments
✔ AWS cloud orchestration
✔ Handling IAM, networking, Kubernetes objects, and cost management
It was a perfect end-to-end DevOps experience: I built, deployed, tested, troubleshot, found root causes, and destroyed everything cleanly using only the command line.
Thank You