Project Overview
This project automates the deployment of a 3-tier MERN (MongoDB, Express.js, React, Node.js) stack application on AWS EKS (Elastic Kubernetes Service) using Terraform, Jenkins, Argo CD, and DevSecOps best practices. The infrastructure is provisioned via Terraform, CI/CD is managed by Jenkins and Argo CD, and security scanning is implemented using SonarQube and Trivy.
Infrastructure Setup
1. Master EC2 (Jenkins Server)
- Instance Type: t3a.2xlarge (for heavy workloads)
- Security Group: open ports 8080 (Jenkins) and 9000 (SonarQube); you can use the default security group or create a new one
- IAM Role: attached with AdministratorAccess for Terraform execution
- Storage: 30 GB gp3 recommended
- Connection: AWS Session Manager (no SSH key pair)
- Launch the EC2 instance in the default VPC, or use your own.
- Access Jenkins at http://<ip>:8080 using the instance IP, then complete the initial Jenkins configuration.
- Launch the server with the required tools pre-installed via user data.
- Tools:
  - Java 21 JDK – Latest Long-Term Support (LTS) version of Java.
  - Jenkins – Automation server for CI/CD pipelines.
  - Docker – Containerization platform for application packaging and deployment.
  - SonarQube – Code quality and security analysis tool.
  - AWS CLI v2 – Command-line interface for interacting with AWS services.
  - kubectl – Command-line tool for managing Kubernetes clusters.
  - eksctl – CLI for creating and managing AWS EKS Kubernetes clusters.
  - Terraform – Infrastructure-as-code tool for cloud provisioning.
  - Trivy – Security scanner for detecting vulnerabilities in containers and infrastructure.
  - Helm – Kubernetes package manager for managing applications.
#!/bin/bash
# User data to install the required tools
# Script for Ubuntu 22.04 LTS - Latest DevOps Tools Installation (Updated)

# Update and upgrade system
sudo apt update -y && sudo apt upgrade -y

# Install Java 21 JDK (latest LTS)
sudo apt install openjdk-21-jdk -y
java -version

# Install Jenkins (Latest LTS)
curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y

# Install Docker (Latest Stable)
sudo apt install docker.io -y
sudo usermod -aG docker jenkins
sudo usermod -aG docker ubuntu
sudo systemctl enable --now docker
sudo chmod 777 /var/run/docker.sock

# Run SonarQube LTS Community Docker container
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community

# Install AWS CLI v2 (Latest)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install
aws --version

# Install kubectl (Latest Stable)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

# Install eksctl (Latest Release)
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

# Install Terraform (Latest from HashiCorp)
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform -y
terraform -version

# Install Trivy (Latest)
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo gpg --dearmor -o /usr/share/keyrings/trivy-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/trivy-archive-keyring.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt update && sudo apt install trivy -y
trivy --version

# Install Helm (Latest Stable)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version

echo -e "\n✅ All tools installed and updated to the latest LTS versions."
Jenkins Credentials:
- Access Jenkins from the server; further configuration is required before you can use it.
- Username: admin
- Password: 982dce9676a6473dbcd716ce7c535543 (sample initial admin password)
2. Terraform Setup (via Jenkins)
- Plugins Installed:
  - AWS Credentials
  - Terraform Plugin
  - Pipeline: AWS Steps
- AWS Credentials Configured: Access Key & Secret Key stored in Jenkins
- Terraform Path: /usr/bin/terraform (verified via whereis terraform)
- Create a pipeline for the infrastructure; build with parameters: first plan, then apply.
- The pipeline fetches all relevant dependencies from GitHub.
- GitHub URL: https://github.com/iabhishekpratap/infra-eks-action.git
// Pipeline for the infra
properties([
    parameters([
        string(defaultValue: 'dev', name: 'Environment'),
        choice(choices: ['plan', 'apply', 'destroy'], name: 'Terraform_Action')
    ])
])
pipeline {
    agent any
    stages {
        stage('Preparing') {
            steps {
                sh 'echo Preparing'
            }
        }
        stage('Git Pulling') {
            steps {
                git branch: 'main', url: 'https://github.com/iabhishekpratap/infra-eks-action.git'
            }
        }
        stage('Init') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
                    script {
                        // Check if the eks directory exists
                        if (fileExists('eks')) {
                            echo 'Directory eks exists.'
                        } else {
                            error 'Directory eks does not exist.'
                        }
                        sh 'terraform -chdir=eks/ init'
                    }
                }
            }
        }
        stage('Validate') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
                    sh 'terraform -chdir=eks/ validate'
                }
            }
        }
        stage('Action') {
            steps {
                withAWS(credentials: 'aws-creds', region: 'ap-south-1') {
                    script {
                        if (params.Terraform_Action == 'plan') {
                            sh "terraform -chdir=eks/ plan -var-file=${params.Environment}.tfvars"
                        } else if (params.Terraform_Action == 'apply') {
                            sh "terraform -chdir=eks/ apply -var-file=${params.Environment}.tfvars -auto-approve"
                        } else if (params.Terraform_Action == 'destroy') {
                            sh "terraform -chdir=eks/ destroy -var-file=${params.Environment}.tfvars -auto-approve"
                        } else {
                            error "Invalid value for Terraform_Action: ${params.Terraform_Action}"
                        }
                    }
                }
            }
        }
    }
}
3. EKS Cluster (Private VPC)
- Provisioned via Jenkins pipeline
- VPC with public/private subnets
- EKS control plane + 2 worker nodes
- Jump server (bastion host) in a public subnet so we can access the private cluster
4. Jump Server Configuration
- Used to access the private EKS cluster.
- Choose the dev-medium-vpc created by Terraform using Jenkins.
- Choose any public subnet for it.
- Create a new security group.
- Launch with user data.
- Tools required:
  - AWS CLI – Command-line interface for interacting with AWS services
  - kubectl – Command-line tool for managing Kubernetes clusters
  - eksctl – CLI tool for creating and managing EKS clusters on AWS
  - Helm – Package manager for Kubernetes
#!/bin/bash
# User data for the jump server

# Install AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install

# Install kubectl
sudo apt update
sudo apt install curl -y
sudo curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

# Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

# Install Helm
sudo snap install helm --classic
- Configure the AWS CLI on the jump server.
- After the AWS CLI is configured, update your kubeconfig:

aws eks update-kubeconfig --name dev-medium-eks-cluster --region ap-south-1

This command configures kubectl to talk to your EKS cluster by:
- 🔐 Fetching EKS cluster credentials – pulls the endpoint and auth config for your cluster
- 🛠 Updating or creating your ~/.kube/config:
  - Adds a new context for your EKS cluster
  - Enables kubectl to authenticate using AWS IAM
- 📡 Connecting kubectl to the EKS control plane

Verify that you can list nodes and pods from the jump server:

kubectl get nodes
kubectl get pods -A
Now let's configure a load balancer on EKS with the ALB Ingress Controller from the jump server.
To expose our application running inside an EKS (Elastic Kubernetes Service) cluster, we will configure a Load Balancer using an Ingress Controller. The specific controller we will use is the ALB Ingress Controller, now known as the AWS Load Balancer Controller.
How Can a Pod in EKS Create AWS Resources Like a Load Balancer?
A pod running inside a Kubernetes (EKS) cluster can create AWS resources (such as a Load Balancer) using a concept called IAM Roles for Service Accounts (IRSA).
Understanding IRSA:
- This mechanism connects a Kubernetes service account with an AWS IAM role.
- It allows a pod to assume the IAM role and use the permissions associated with it.
- As a result, the pod can interact with AWS services, for example, to create an Application Load Balancer (ALB).

Practical Explanation:
In our EKS cluster, we run an Ingress Controller as a pod.
- This pod needs permission to create AWS resources such as a Load Balancer.
- To enable this, we:
  - Create an IAM role with the required AWS permissions.
  - Create a Kubernetes service account.
  - Attach the IAM role to the service account using IRSA.
  - Configure the Ingress Controller to use that service account.
In simple words, we tie the concept of a Kubernetes service account with an AWS IAM role, allowing a pod in the cluster to act like an AWS IAM user and create or manage AWS resources.
Here are the instructions for setting up the AWS Load Balancer Controller on EKS using IRSA:
🚀 Deploying AWS Load Balancer Controller on EKS
To deploy the AWS Load Balancer Controller (formerly ALB Ingress Controller) on an EKS cluster, follow these steps:
🧾 Step 1: Fetch the IAM Policy for the Controller
Download the IAM policy required for the controller:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
Create the IAM policy in AWS:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
✅ This policy is essential for the Load Balancer Controller pod to interact with AWS services like creating ALBs.
🔐 Step 2: Associate OIDC Provider with the EKS Cluster
To allow your EKS cluster to use IAM roles for service accounts (IRSA), associate the OIDC provider:
eksctl utils associate-iam-oidc-provider \
--region=ap-south-1 \
--cluster=dev-medium-eks-cluster \
--approve
✅ This is a prerequisite for enabling IRSA in your cluster.
📛 Step 3: Create the Kubernetes Service Account with IAM Role
Create a service account and link it to the IAM role using the following command:
eksctl create iamserviceaccount \
--cluster=dev-medium-eks-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn arn:aws:iam::182399696202:policy/AWSLoadBalancerControllerIAMPolicy \
--approve \
--region=ap-south-1 \
--override-existing-serviceaccounts
This command:
- 🔧 Creates a Kubernetes service account named aws-load-balancer-controller in the kube-system namespace.
- 🔐 Links it with the IAM role AmazonEKSLoadBalancerControllerRole.
- 📎 Attaches the necessary IAM policy to allow ALB operations.
- 🔗 Enables the pod to use IRSA to manage AWS Load Balancers.
Check if the service account has been created:
kubectl get sa -n kube-system
🧰 Step 4: Add Helm Repository for EKS Charts
Install Helm if not already done:
sudo snap install helm --classic
Add and update the EKS Helm chart repository:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
📦 Step 5: Install AWS Load Balancer Controller via Helm
Install the controller using Helm with the existing service account:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=dev-medium-eks-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
Check the pod status:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
If the pod is in a crash loop, upgrade the deployment with required parameters:
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=dev-medium-eks-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=ap-south-1 \
--set vpcId=vpc-0b9f421697e6f974d \
-n kube-system
✅ Step 6: Verify Deployment
Check if the AWS Load Balancer Controller deployment is running properly:
kubectl get deployment -n kube-system aws-load-balancer-controller
You should see the READY status as 2/2, confirming that both replicas are running.
Now your EKS cluster is ready with the AWS Load Balancer Controller, and it can manage ALBs securely using IAM Roles for Service Accounts (IRSA).
🚀Now, let's configure Argo CD.
🎯 Why Do We Use Argo CD?
Argo CD is a GitOps tool for Kubernetes that lets you automatically deploy and manage applications in your cluster from a Git repository.
🧠 What is GitOps?
GitOps = Git + DevOps
You store your Kubernetes YAML files in Git, and Argo CD ensures your cluster is always synchronized with Git.
🔥 Benefits of Using Argo CD
| Feature | Description |
|---|---|
| ✅ Declarative Deployments | All configurations (apps, secrets, services) are stored in Git |
| 🔁 Auto-Sync | Automatically reverts any cluster drift back to the Git-defined state |
| 👁️ Visual UI Dashboard | Easily view which apps are deployed and their status |
| 🕵️ Auditable | Every change is tracked in Git (version-controlled history) |
| 🔒 Secure | CI/CD pipelines don't need direct access to the Kubernetes cluster |
| ⏱️ Real-time Monitoring | Immediate feedback if an application fails or drifts |
| 🔄 Rollback Support | Easily revert to any previous Git commit/version |
🛠 Step-by-Step Setup
1. Create the argocd namespace
kubectl create namespace argocd
2. Install Argo CD in that namespace
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
This command installs all required Argo CD components.
3. Check the pods and services
kubectl get pods -n argocd
kubectl get all -n argocd
🌐 Expose Argo CD UI Using Load Balancer
Patch the argocd-server service to use a LoadBalancer:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
What does type: LoadBalancer do?
- Provisions an external cloud load balancer (like AWS ELB/ALB)
- Makes Argo CD accessible from the internet
- Provides a public IP or DNS name for external access
Note: This LoadBalancer is created by the EKS cluster, not the AWS Load Balancer Controller.
Access Argo CD
- Go to the AWS EC2 Console → Load Balancers
- Find the newly created load balancer for Argo CD
- Copy the DNS name
- Paste the DNS name into your browser to access the Argo CD web UI
🔐 Argo CD Admin Credentials
Default Username: admin
Get and Decode the Initial Password
kubectl get secrets -n argocd
kubectl edit secret argocd-initial-admin-secret -n argocd
Look for the base64 encoded password:
RkhQMVh1ZHdaUmQxeXJFMg==
Decode it using:
echo RkhQMVh1ZHdaUmQxeXJFMg== | base64 --decode
You’ll get:
FHP1XudwZRd1yrE2
Use these credentials on the Argo CD login page.
🛡️ Now SonarQube Configuration and Integration with CI/CD
📘 What is SonarQube?
SonarQube is an open-source code quality and security analysis tool. It scans your codebase and provides feedback on:
- 🔴 Bugs
- ⚠️ Code Smells
- 🛡️ Security Vulnerabilities
- 🧪 Unit Test Coverage
- ✅ Coding Standards Compliance
It supports many programming languages including Java, JavaScript, TypeScript, Python, Go, and more.
🚀 Why SonarQube in DevOps?
| Feature | Benefit |
|---|---|
| ✅ Static Code Analysis | Catches issues before code reaches production |
| 📊 Quality Gates | Enforces thresholds like "no critical bugs" or "minimum 80% test coverage" |
| 🧠 Developer Feedback | Provides suggestions directly during build time |
| 🔐 Security Scanning | Identifies vulnerabilities including OWASP Top 10 |
| 🔁 CI/CD Integration | Seamlessly integrates with Jenkins, GitHub Actions, GitLab, etc. |
| 📝 Dashboards | Allows teams to track project code health visually |
⚙️ Setup SonarQube via Docker
1. Check if SonarQube is Running
docker ps
2. Run SonarQube Container
docker run -d --name sonarqube \
-p 9000:9000 \
sonarqube:community
Make sure to expose port 9000 in your AWS EC2 Security Group Inbound Rules.
3. Default Credentials
- Username: admin
- Password: admin
🔑 Generate Token for Jenkins Integration in Sonar
Token Example:
squ_21383141065fbb7df9aa7665bc968871d0332b91
🔔 Create Webhook for Jenkins
Go to SonarQube UI > Administration > Webhooks, then:
- Name: Jenkins
- URL: http://<jenkins-ip>:8080/sonarqube-webhook/
- Secret: leave empty
- Click Create
This allows Jenkins to receive scan results after analysis completes.
📦 Create Projects in SonarQube
Frontend
- Project Name: frontend
- Project Key: frontend

Backend
- Project Name: backend
- Project Key: backend
📍 Run Sonar Scanner
Frontend Analysis
# use the following inside your pipeline
sonar-scanner \
-Dsonar.projectKey=frontend \
-Dsonar.sources=. \
-Dsonar.host.url=http://<sonarqube-public-ip>:9000 \
-Dsonar.token=squ_21383141065fbb7df9aa7665bc968871d0332b91
Backend Analysis
# use the following inside your pipeline
sonar-scanner \
-Dsonar.projectKey=backend \
-Dsonar.sources=. \
-Dsonar.host.url=http://<sonarqube-public-ip>:9000 \
-Dsonar.token=squ_21383141065fbb7df9aa7665bc968871d0332b91
🔐 Setup Jenkins Credentials
GitHub Credentials
- Username: iabhishekpratap
- Password: GitHub Personal Access Token
- Example Token: ghp_Qb2Ia08dRl9ER9buwoMuvW37sG3PXDJu

SonarQube Token
- Kind: Secret Text
- ID: sonar-token
- Value: SonarQube token

AWS Account ID
- Kind: Secret Text
- ID: ACCOUNT_ID
- Value: Your AWS account ID
🐳 ECR Configuration
If not already created using Terraform, manually create ECR repositories:
- frontend
- backend

Jenkins ECR Credentials

Frontend ECR
- Secret Name: frontend
- ID: ECR_REPO1

Backend ECR
- Secret Name: backend
- ID: ECR_REPO2
Required so Jenkins can push Docker images to AWS ECR.
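For reference, here is a minimal sketch of how a pipeline stage might consume these credentials to log in to ECR and push an image. The credential IDs and the ap-south-1 region follow the conventions used in this document, but the stage wiring, variable names, and image tagging are assumptions, not the project's exact Jenkinsfile.

// Hypothetical sketch of an ECR login-and-push stage (adapt names to your pipeline).
stage('Docker Image Push to ECR') {
    steps {
        withCredentials([
            string(credentialsId: 'ACCOUNT_ID', variable: 'AWS_ACCOUNT_ID'),   // Jenkins secret text: AWS account ID
            string(credentialsId: 'ECR_REPO1', variable: 'AWS_ECR_REPO_NAME')  // Jenkins secret text: "frontend"
        ]) {
            sh '''
                REPOSITORY_URI=${AWS_ACCOUNT_ID}.dkr.ecr.ap-south-1.amazonaws.com/${AWS_ECR_REPO_NAME}
                # Authenticate Docker against the private ECR registry
                aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.ap-south-1.amazonaws.com
                # Tag the locally built image with the build number and push it
                docker tag ${AWS_ECR_REPO_NAME}:latest ${REPOSITORY_URI}:${BUILD_NUMBER}
                docker push ${REPOSITORY_URI}:${BUILD_NUMBER}
            '''
        }
    }
}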
🔧 Jenkins Tool Configuration
Install Plugins:
- Docker
- Docker Commons
- Docker Pipeline
- Docker API
- Docker Build Step
- Eclipse Temurin Installer
- NodeJS
- OWASP Dependency-Check
- SonarQube Scanner

Configure Tools in Jenkins

NodeJS
- Name: nodejs

SonarQube Scanner
- Name: sonar-scanner
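These tool names are referenced from the pipeline by name. A minimal sketch of how that typically looks in a declarative Jenkinsfile, assuming the tool names configured above; the surrounding pipeline and the check stage are illustrative only:

// Hypothetical sketch: consuming the configured Jenkins tools by name.
pipeline {
    agent any
    tools {
        nodejs 'nodejs'                       // NodeJS installation name configured above
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'   // SonarQube Scanner installation name configured above
    }
    stages {
        stage('Check tools') {
            steps {
                sh 'node --version'
                sh '${SCANNER_HOME}/bin/sonar-scanner --version'
            }
        }
    }
}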
🛠 Configure SonarQube Server in Jenkins
Go to: Manage Jenkins > Configure System
Add SonarQube server:
- Name: sonarqube
- Server URL: http://<sonarqube-ip>:9000
- Authentication Token: sonar-token
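With the server named sonarqube and the sonar-token credential in place, the analysis and quality-gate stages are typically wired as below. This is a hedged sketch that reuses the frontend project key and the SCANNER_HOME tool path from the sketches above; it is not the project's exact Jenkinsfile.

// Hypothetical sketch: SonarQube analysis followed by a quality-gate wait.
stage('Frontend SonarQube Analysis') {
    steps {
        withSonarQubeEnv('sonarqube') {       // must match the server name configured above
            sh '''
                ${SCANNER_HOME}/bin/sonar-scanner \
                  -Dsonar.projectKey=frontend \
                  -Dsonar.sources=.
            '''
        }
    }
}
stage('Quality Gate') {
    steps {
        script {
            // Relies on the SonarQube webhook configured earlier to report back to Jenkins
            waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
        }
    }
}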
🧪 Jenkins Pipeline for Frontend and Backend
The pipeline will:
- Code Checkout – Jenkins clones the repository from GitHub using the provided credentials.
- SonarQube Static Analysis – the frontend code is scanned by SonarQube for:
  - Bugs
  - Code smells
  - Security vulnerabilities
  - Coding standard adherence
- Quality Gate Validation – Jenkins waits for SonarQube to approve the code based on predefined quality gates.
- Trivy Filesystem Scan – the working directory (frontend app folder) is scanned for vulnerabilities in dependencies and the file system.
- Docker Image Build – Jenkins builds the Docker image for the frontend using the Dockerfile in the project.
- Push Docker Image to AWS ECR – the image is tagged with the build number and pushed to the Amazon ECR registry securely.
- Trivy Image Scan – after pushing, the Docker image is scanned again using Trivy to ensure no vulnerabilities exist at the container image level.
- Update Kubernetes Deployment YAML – Jenkins pulls the repo again, updates the deployment manifest with the new image tag, commits the change, and pushes it back to GitHub (see the sketch after this list).
- Ready for GitOps via Argo CD – the updated manifest in the GitHub repository is picked up by Argo CD, which deploys the new version to the Kubernetes cluster.
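A hedged sketch of the Trivy scan stages and the manifest-update stage described above. It assumes a REPOSITORY_URI environment variable, a github credential ID, and a <manifests-repo> placeholder, and reuses the Kubernetes-Manifests-file/Frontend path mentioned later; adapt these names to your repositories and pipeline.

// Hypothetical sketch of the Trivy scans and the GitOps manifest update.
stage('Trivy File Scan') {
    steps {
        sh 'trivy fs . > trivyfs.txt'   // scan dependencies and files in the workspace
    }
}
stage('Trivy Image Scan') {
    steps {
        sh 'trivy image ${REPOSITORY_URI}:${BUILD_NUMBER} > trivyimage.txt'
    }
}
stage('Update Deployment Manifest') {
    steps {
        dir('Kubernetes-Manifests-file/Frontend') {
            withCredentials([usernamePassword(credentialsId: 'github', usernameVariable: 'GIT_USER', passwordVariable: 'GIT_TOKEN')]) {
                sh '''
                    # Point the Deployment at the freshly pushed image tag
                    sed -i "s#image:.*#image: ${REPOSITORY_URI}:${BUILD_NUMBER}#g" deployment.yaml
                    git config user.email "jenkins@example.com"
                    git config user.name "jenkins"
                    git add deployment.yaml
                    git commit -m "Update frontend image to build ${BUILD_NUMBER}"
                    git push https://${GIT_USER}:${GIT_TOKEN}@github.com/${GIT_USER}/<manifests-repo>.git HEAD:main
                '''
            }
        }
    }
}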
🔄 Argo CD Deployment
Step 1: Connect Git Repo to Argo CD
- Go to Argo CD UI → Settings → Repositories
- Add the repository via SSH
- Provide SSH credentials
- Click Connect
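Applications can also be defined declaratively instead of through the UI. Here is a minimal sketch of an Argo CD Application manifest; the application name, repository URL, and manifests path are placeholders/assumptions to adapt to the repo connected above.

# Hypothetical Argo CD Application manifest (name, repo URL, and path are assumptions).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: three-tier-database
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:<your-user>/<app-manifests-repo>.git   # the repo connected above via SSH
    targetRevision: main
    path: Kubernetes-Manifests-file/Database
  destination:
    server: https://kubernetes.default.svc
    namespace: three-tier
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true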
💽 EBS and CSI Driver
While creating your EKS cluster, you configured the AWS EBS CSI Driver for persistent volume support.
- The CSI driver talks to AWS to provision EBS volumes
- The EBS volume is created in the same Availability Zone as your pod
- Automatically attached to the EC2 node running the pod
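For reference, a minimal sketch of how a StorageClass backed by the EBS CSI driver and a PersistentVolumeClaim might look; the names and volume size are assumptions, not the project's exact manifests.

# Hypothetical StorageClass and PVC for the EBS CSI driver (names and size are assumptions).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com            # the AWS EBS CSI driver configured with the cluster
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-volume-claim
  namespace: three-tier
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 5Gi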
Application Deployment (3-Tier)
- MongoDB Deployment
  - Used public MongoDB Docker image.
  - Persistent Volume (EBS) via CSI driver.
- Backend (Node.js + Express)
  - Docker image stored in ECR.
  - Kubernetes Deployment + Service.
- Frontend (React)
  - Docker image stored in ECR.
  - Kubernetes Deployment + Service.
- Ingress Controller (ALB)
  - Exposed via AWS Load Balancer.
Deploy Database Before App
Create Namespace
kubectl create ns three-tier
Deploy the database according to the image.
- Push the database manifests to Git
- Argo CD syncs and deploys them
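As an illustration of what such database manifests typically contain, here is a hedged sketch of a MongoDB Deployment and Service. The image tag, labels, service name, and the PVC name (reusing the claim sketched in the EBS section) are assumptions, not the project's actual manifests.

# Hypothetical MongoDB Deployment and Service (names, image tag, and PVC are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  namespace: three-tier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4                  # public MongoDB image; version is an assumption
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db           # MongoDB data directory backed by EBS
      volumes:
        - name: mongo-data
          persistentVolumeClaim:
            claimName: mongo-volume-claim   # PVC from the EBS CSI sketch above
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-svc
  namespace: three-tier
spec:
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017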
Now Create App for Backend
According to this image
Update the image in Kubernetes-Manifests-file/Frontend/deployment.yaml with the one pushed to ECR.
Now Create App for Frontend
Currently, we cannot access the application because it runs inside the EKS cluster and direct access is not possible. To solve this, we create a new Argo CD app for an ingress so the application can be reached. The application is exposed through a ClusterIP service; we either change it to a LoadBalancer service or use an ingress resource. Now, create an app for the ingress according to the image (a hedged Ingress sketch follows below).
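Here is a minimal sketch of such an ingress resource handled by the AWS Load Balancer Controller; the ingress name, service names, ports, and paths are assumptions to adapt to the actual manifests.

# Hypothetical ALB Ingress for the three-tier app (names, ports, and paths are assumptions).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainlb
  namespace: three-tier
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # provision a public ALB
    alb.ingress.kubernetes.io/target-type: ip           # route directly to pod IPs
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-svc        # assumed backend Service name
                port:
                  number: 3500           # assumed backend port
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc       # assumed frontend Service name
                port:
                  number: 3000           # assumed frontend port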
Final Application Access
- URL: http://<ALB-DNS> or https://custom-domain.com
- Components Running:
  - Frontend (React)
  - Backend (Express/Node.js)
  - MongoDB (Persistent Storage)
  - Custom Domain (Route 53)
- DNS Record (Alias):
  - Name: @
  - Target: ALB DNS (k8s-threetie-mainlb-XXXX.elb.amazonaws.com)
  - TTL: 300
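If you prefer the CLI over the Route 53 console, an alias record can be created roughly as follows. The hosted zone IDs and the domain are placeholders/assumptions; note that the AliasTarget hosted zone ID is the ALB's canonical zone ID, not your domain's zone ID.

# Hypothetical sketch: create an alias A record for the ALB with the AWS CLI.
aws route53 change-resource-record-sets \
  --hosted-zone-id <HOSTED-ZONE-ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "custom-domain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<ALB-HOSTED-ZONE-ID>",
          "DNSName": "k8s-threetie-mainlb-XXXX.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'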
🔍 Monitoring Setup with Prometheus and Grafana
Set up a real-time monitoring solution using Prometheus and Grafana to collect, store, and visualize Kubernetes cluster metrics.
📦 Step 1: Add Helm Repositories
Add the Prometheus and Grafana Helm chart repositories:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
🚀 Step 2: Install Prometheus Stack
Install the kube-prometheus-stack, which includes both Prometheus and Grafana:
helm install prometheus prometheus-community/kube-prometheus-stack --namespace default
📌 Note:
If you face issues like pods not running or PV-related problems, it may be due to resource limits. You can resolve this by increasing pod limits or trying alternative installation methods.
🌐 Step 3: Expose Prometheus and Grafana
By default, both Prometheus and Grafana services are of type ClusterIP, making them inaccessible externally.
Change the service type to LoadBalancer:
kubectl edit svc prometheus-grafana
# Change 'type: ClusterIP' to 'type: LoadBalancer'
Check the external IP:
kubectl get svc prometheus-grafana
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
prometheus-grafana LoadBalancer 172.20.111.19 a63322...elb.amazonaws.com 80:32451/TCP
📎 Access Grafana in your browser using the External IP or DNS:
http://<EXTERNAL-IP>
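Instead of kubectl edit, the same change can be applied with a one-line patch, mirroring the patch used for argocd-server earlier; the default namespace matches the Helm install above.

# Switch the Grafana service to a LoadBalancer without opening an editor
kubectl patch svc prometheus-grafana -n default -p '{"spec": {"type": "LoadBalancer"}}'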
📈 Step 4: Expose Prometheus Console
Repeat the process for the Prometheus service:
kubectl edit svc prometheus-kube-prometheus-prometheus
# Change 'type: ClusterIP' to 'type: LoadBalancer'
Once updated, access Prometheus using:
http://<EXTERNAL-IP>:9090
🔐 Step 5: Access Grafana Dashboard
Grafana admin credentials (Base64-encoded):
- Username: YWRtaW4=
- Password: cHJvbS1vcGVyYXRvcg==
To decode:
echo "YWRtaW4=" | base64 --decode # admin
echo "cHJvbS1vcGVyYXRvcg==" | base64 --decode # prom-operator
📊 Step 6: Import Dashboards in Grafana
- Open Grafana in your browser.
- Go to Dashboard > Import.
- Paste a dashboard ID from the Grafana Dashboard Repository.
- Select Prometheus as the data source.
- Click Import.
✅ Now Prometheus collects metrics, Grafana visualizes them, and you can monitor your cluster in real-time through custom dashboards.
Steps Summary
Set up a Jenkins server in your default VPC with a user-created security group, not the default, with access to Jenkins and SonarQube ports.
Access the Jenkins server and install important plugins.
Set up AWS credentials on the Jenkins server so it can create infrastructure on AWS. Set up GitHub credentials so it can access the infrastructure repository.
After creating the infrastructure, create a jump server in the VPC created by the Jenkins server or by Terraform. Choose a public subnet to access it and launch it with user data.
Configure the AWS CLI on the jump server and configure kubectl to communicate with your EKS cluster. Check the status of the nodes and pods in the cluster that the Jenkins pipeline created with Terraform.
Now, configure the AWS Load Balancer Controller on our EKS cluster because our application will be exposed through an ingress.
Now configure Argo CD: create the argocd namespace and expose Argo CD through a Load Balancer so we can access it.
Now configure SonarQube: create a webhook to notify Jenkins when project analysis is done, so Jenkins knows everything is good, and create projects for the frontend and backend.
Create secret credentials in Jenkins because Jenkins needs to push images to the ECR repositories: frontend (ID: ECR_REPO1) and backend (ID: ECR_REPO2).
Install remaining plugins in Jenkins and configure Docker, Docker Commons, Docker Pipeline, Docker API, docker-build-step, Eclipse Temurin installer, NodeJS, OWASP Dependency-Check, and SonarQube Scanner.
Now set up the Sonar server.
Now access Argo CD and connect your GitHub application repo using SSH.
First, we create a database on the Kubernetes cluster using the database manifest on the Git repo, and then we deploy the frontend and backend. This is done using Argo CD.
Create and deploy applications for the frontend and backend using Argo CD.
We cannot access the application directly because it runs inside the EKS cluster, so we create a new ingress app, which provisions a load balancer that exposes the application externally.
Now we can access the application using the DNS found on the load balancer in AWS.
Now install Prometheus and Grafana for monitoring and configure them.
Conclusion
This project successfully implements a fully automated, secure, and scalable MERN stack deployment on AWS EKS using Infrastructure as Code (Terraform), CI/CD (Jenkins + Argo CD), and DevSecOps best practices. Monitoring via Prometheus & Grafana ensures observability, while custom domain integration enhances accessibility.
Key Achievements
✅ Infrastructure Automation (Terraform)
✅ Secure CI/CD Pipeline (SonarQube, Trivy)
✅ GitOps Deployment (Argo CD)
✅ Kubernetes Monitoring (Prometheus + Grafana)
✅ Custom Domain & Load Balancing (Route 53 + ALB)
This setup ensures high availability, security, and scalability for modern cloud-native applications. 🚀