My K8s creation for the Kubernetes Resume Challenge

A brief summary of the 2024 Kubernetes Resume Challenge follows.

Although resume is in the name, this challenge asks participants to deploy a basic e-commerce website rather than a personal website.

The objective of this challenge was to simulate the deployment of an e-commerce website. For this project it was crucial to consider the challenges of modern web application deployment and how containerization and Kubernetes (K8s) offer compelling solutions:

  • Scalability: How can the application automatically adjust to fluctuating traffic?

  • Consistency: How can we ensure the application runs the same across all environments?

  • Availability: How can the application be configured for zero downtime?

Containerization, using Docker, encapsulates your application and its environment, ensuring it runs consistently everywhere. Kubernetes, a container orchestration platform, automates deployment, scaling, and management, offering:

  • Dynamic Scaling: Adjusts application resources based on demand.

  • Self-healing: Restarts failed containers and reschedules them on healthy nodes.

  • Seamless Updates & Rollbacks: Enables zero-downtime updates and easy rollbacks.

Leveraging Kubernetes and containerization yields a scalable, consistent, and resilient deployment strategy. This not only demonstrates technical acumen but also aligns with modern DevOps practices.

For more details on this challenge, see these two links:

Take on the Kubernetes challenge overview

The Kubernetes Resume Challenge Implementation Steps

In this article I describe what I created to meet the objectives of this Kubernetes challenge. The challenge consists of 13 steps plus two extra-credit tasks, and below I describe my response to each.

Goal

This project highlights proficiency in Kubernetes and containerization, demonstrating the ability to deploy, scale, and manage web applications efficiently in a K8s environment, underscoring cloud-native deployment skills.

Step 1: Certification

In November 2023 I wrote and passed the CKA exam primarily using the KodeKloud CKA course material.

Step 2: Containerize Your E-Commerce Website and Database

Website

After installing Docker on an AWS EC2 instance, I created the following Dockerfile for the website part of the application:

FROM php:7.4-apache

RUN docker-php-ext-install mysqli pdo pdo_mysql && docker-php-ext-enable pdo_mysql

COPY ./learning-app-ecommerce/ /var/www/html/

EXPOSE 80

You will notice the COPY of the learning-app-ecommerce directory. This is the KodeKloud-provided mock e-commerce application code, which I cloned with this command:

$ git clone https://github.com/kodekloudhub/learning-app-ecommerce.git
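For reference, the image referenced later in the manifests (briangaber/ecom-web:v1) would be built and pushed roughly like this, assuming the commands are run from the directory containing the Dockerfile and the cloned learning-app-ecommerce directory:

docker build -t briangaber/ecom-web:v1 .   # build the website image from the Dockerfile above
docker login                               # authenticate to Docker Hub
docker push briangaber/ecom-web:v1         # push so the K8s cluster can pull it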

Database

Initially I did not realize I could fully deploy the MariaDB database without a custom Docker image, so I created the Dockerfile below for the database part of the application. I later deprecated this Dockerfile because, while doing some searching for this project, I came across this Stack Overflow article that introduced me to multiline YAML and ConfigMaps, and while learning about multiline YAML I found this excellent KodeKloud article.

FROM mariadb:latest

COPY ./db-load-script.sql /docker-entrypoint-initdb.d/

RUN apt-get update

ENV MYSQL_ROOT_PASSWORD mypass
ENV MYSQL_DATABASE ecomdb

EXPOSE 3306

As I said earlier, this Dockerfile is not used in the final application and you can see the replacement in the mysql-pod.yaml file.

Step 3: Set Up Kubernetes on a Public Cloud Provider

I created my K8s cluster on AWS EKS with eksctl, using this manifest:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: k8s-challenge
  region: us-east-1
  version: "1.29"
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 2
  iam:
    withAddonPolicies:
      ebs: true
      awsLoadBalancerController: true
      cloudWatch: true
  instanceType: t3a.xlarge
  labels:
    alpha.eksctl.io/cluster-name: k8s-challenge
    alpha.eksctl.io/nodegroup-name: standard-workers
  maxSize: 3
  minSize: 1
  name: k8s-challenge-std-workers
  privateNetworking: false
  tags:
    alpha.eksctl.io/nodegroup-name: standard-workers
    alpha.eksctl.io/nodegroup-type: managed
vpc:
  id: vpc-f069ee94
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Disable
  subnets:
    public:
      us-east-1a:
        id: subnet-817978aa
      us-east-1b:
        id: subnet-3cc8344a
      us-east-1c:
        id: subnet-a8abbcf1
      us-east-1d:
        id: subnet-f4aa3191
      us-east-1f:
        id: subnet-80f87c8c

The EKS cluster was created from this manifest using this command:

eksctl create cluster -f eksctl-manifest.yaml
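A quick sanity check that the cluster and its worker nodes came up (not a challenge requirement, just how I would verify it):

eksctl get cluster --region us-east-1   # the k8s-challenge cluster should be listed
kubectl get nodes -o wide               # both managed nodes should report Ready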

Then I installed the CSI, CNI and LoadBalancer Controller Add-ons using the AWS documentation found at these locations:

Install the AWS CSI Storage Add-on

Install the AWS CNI Networking Add-on

Create the Amazon VPC CNI plugin for Kubernetes IAM role

Optional installation of the AWS LoadBalancer Controller Add-on

The installation of the AWS LoadBalancer Controller Add-on is optional because it depends on what kind of load balancer you want. If you just want the Classic Load Balancer, without the features of the ALB or NLB, then you do not need to install the AWS LoadBalancer Controller Add-on. For more information on this topic look at this link. If you decide to install the AWS LoadBalancer Controller Add-on, then you must add annotations similar to this to the website-service.yaml manifest file so that the resulting load balancer is correctly provisioned:

apiVersion: v1
kind: Service
metadata:
  name: website-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ...

There are many other load balancer annotations that can be applied to a Service or an Ingress.

If you install the AWS LoadBalancer Controller Add-on and then attempt to create a LoadBalancer Service without annotations you will probably get a service error like this:

Events:
  Type     Reason            Age   From     Message
  ----     ------            ----  ----     -------
  Warning  FailedBuildModel  49s   service  Failed build model due to unable to resolve at least one subnet (0 match VPC and tags: [kubernetes.io/role/internal-elb])

The AWS Load Balancer Controller requires subnets with specific tags. By tagging your subnets correctly, you enable the AWS Load Balancer Controller to use auto-discovery to create the load balancer successfully. Read this document for information on the subnet tags.

Public subnets are used for internet-facing load balancers. These subnets must have the following tags:

kubernetes.io/role/elb, with a value of 1 or an empty string

Private subnets are used for internal load balancers. These subnets must have the following tags:

kubernetes.io/role/internal-elb, with a value of 1 or an empty string
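If your subnets are missing these tags, they can be added with the AWS CLI; here is a sketch for one of the public subnets from the eksctl manifest above (repeat for each subnet):

aws ec2 create-tags \
  --resources subnet-817978aa \
  --tags Key=kubernetes.io/role/elb,Value=1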

In order to use an ALB, which Kubernetes provisions through an Ingress resource, you must first install the AWS Load Balancer Controller add-on following these AWS instructions.

Ensure you create the AmazonEKSLoadBalancerControllerRole IAM Role described in Steps 1 and 2.

In step 5 of these instructions I recommend using the Helm 3 or later instructions, rather than the Kubernetes manifest instructions.

Here is the helm install command I used:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=k8s-challenge \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

As shown in step 6 of the AWS instructions, I confirmed the deployment with this command:

kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           3m23s

If the READY column reports 0/2 for a long time, then the AmazonEKSLoadBalancerControllerRole IAM Role is probably not set up correctly.
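If that happens, the controller's own logs usually point at the IAM misconfiguration; a troubleshooting sketch:

kubectl logs -n kube-system deployment/aws-load-balancer-controller
kubectl describe deployment -n kube-system aws-load-balancer-controller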

To undo the helm install command use the following command:

helm uninstall aws-load-balancer-controller -n kube-system

Step 4: Deploy Your Website to Kubernetes

I deployed the website with these initial versions of my website-deployment.yaml and mysql-pod.yaml manifest files.

website-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        app: ecom-web
        tier: frontend
    spec:
      containers:
      - image: briangaber/ecom-web:v1
        name: ecom-web
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST
          value: "mysql-service"
        - name: DB_USER
          value: "ecomuser"
        - name: DB_PASSWORD
          value: "ecompassword"
        - name: DB_NAME
          value: "ecomdb"
        - name: FEATURE_DARK_MODE
          valueFrom:
            configMapKeyRef:
              name: feature-toggle
              key: FEATURE_DARK_MODE

mysql-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: ecom-web
    tier: backend
  name: ecom-db
spec:
  containers:
  - image: briangaber/ecom-db:v1
    name: ecom-db
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_USER
      value: "ecomuser"
    - name: MYSQL_PASSWORD
      value: "ecompassword"

Notice that this initial version of mysql-pod.yaml uses the deprecated version of my custom mariadb Docker image. As stated earlier, this manifest file was changed to what is shown below after learning about using multiline YAML with ConfigMaps:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: ecom-web
    tier: backend
  name: ecom-db
spec:
  containers:
  - image: mariadb
    name: ecom-db
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mypass
    - name: MYSQL_DATABASE
      value: ecomdb
    - name: MYSQL_USER
      value: ecomuser
    - name: MYSQL_PASSWORD
      value: ecompassword
    volumeMounts:
      - name: mariadb-initdb
        mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: mariadb-initdb
    configMap:
      name: initdb

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb
data:
  initdb.sql: |-
    USE ecomdb;
    CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment,Name varchar(255) default NULL,Price varchar(255) default NULL, ImageUrl varchar(255) default NULL,PRIMARY KEY (id)) AUTO_INCREMENT=1;
    INSERT INTO products (Name,Price,ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");
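For reference, both manifests are deployed with kubectl apply and the resulting pods can then be checked; a minimal sketch:

kubectl apply -f website-deployment.yaml
kubectl apply -f mysql-pod.yaml
kubectl get pods -o wide   # two ecom-web pods and the ecom-db pod should reach Running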

However, these manifest files were modified several times over the course of this project for things like Secrets, Persistent Volumes, probes, etc., and the various modifications are shown later.

Step 5: Expose Your Website

I exposed the deployment using this service:

apiVersion: v1
kind: Service
metadata:
  name: website-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    tier: frontend
  type: LoadBalancer
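After the Service is created, the external hostname assigned by the load balancer can be retrieved with kubectl. The DB_HOST value of mysql-service used by the website deployment also implies an internal Service in front of the ecom-db pod; a sketch of creating it imperatively with kubectl expose (a manifest works equally well):

kubectl get service website-service   # the EXTERNAL-IP column shows the load balancer DNS name
kubectl expose pod ecom-db --name=mysql-service --port=3306 --target-port=3306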

Step 6: Implement Configuration Management

The task of this step was to add a feature toggle to the web application to enable a “dark mode” for the website. I do not claim to be an expert PHP and CSS developer, but I was able to accomplish this by adding a few lines of PHP to the index.php file and creating a new style-dark.css file from the provided style.css file.

Inline PHP code

<?php
    $darkModeTrue = getenv('FEATURE_DARK_MODE') === 'true';

    if ($darkModeTrue) {
        echo "<link rel=\"stylesheet\" href=\"css/style-dark.css\">\n";
    } else {
        echo "<link rel=\"stylesheet\" href=\"css/style-light.css\">\n";
    }
?>

style-dark.css code

I added one line of CSS to the body section at line 76 of style.css:

body {
  font-family: "Fira Sans", sans-serif;
  background-color: SlateGray;
  /*Section Fix*/
}

feature-style-cm.yaml

This is the ConfigMap manifest that creates the FEATURE_DARK_MODE environment variable that is read by the inline PHP shown above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-toggle
data:
  FEATURE_DARK_MODE: "true"

Changing the value of FEATURE_DARK_MODE (with the k edit cm feature-toggle command or by recreating the ConfigMap) and then deleting the website-deployment pods (which causes new pods to be created) results in the webpage displaying with a light or dark background.
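Spelled out as commands, the toggle workflow looks roughly like this (deleting the frontend pods makes the Deployment recreate them, and the new pods re-read the environment variable):

kubectl apply -f feature-style-cm.yaml   # create or update the feature-toggle ConfigMap
kubectl edit configmap feature-toggle    # flip FEATURE_DARK_MODE between "true" and "false"
kubectl delete pod -l tier=frontend      # replacement pods pick up the new value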

Step 7: Scale Your Application

The task of this step was to prepare for a marketing campaign expected to triple traffic. This was done using this command:

$ kubectl scale deployment/website-deployment --replicas=6

New pods were added so quickly that, by the time I ran the k get pod command, all six pods were running.

Step 8: Perform a Rolling Update

The task of this step was to update the website to include a new promotional banner for the marketing campaign. I accomplished this task with these actions:

  • Modified the web application’s code to include the promotional banner.

  • Built and pushed a new Docker image (the commands are sketched after this list).

  • Updated the website-deployment.yaml manifest file with the new image version and applied the change.

  • Monitored the update with kubectl rollout status deployment/website-deployment to watch the rolling update process. The update completed too quickly for me to observe the individual pod replacements, but I could see the new banner immediately in my browser.
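A rough sketch of those build, push, and update commands, assuming the updated image was tagged v2 (the tag is my assumption; the actual tag is not stated above):

docker build -t briangaber/ecom-web:v2 .   # image rebuilt with the promotional banner (v2 tag assumed)
docker push briangaber/ecom-web:v2
kubectl apply -f website-deployment.yaml   # manifest updated to reference the new image tag
kubectl rollout status deployment/website-deployment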

This step proved that the website updates with zero downtime, demonstrating the effectiveness of rolling updates in maintaining service availability.

Step 9: Roll Back a Deployment

The task of this step was to simulate that the new banner introduced a bug requiring a roll back to the previous version. This was achieved using this command:

$ kubectl rollout undo deployment/website-deployment

The success of this command was confirmed by refreshing the webpage in the browser.
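The rollback can also be confirmed from the command line; a small sketch:

kubectl rollout history deployment/website-deployment   # a new revision is recorded for the rollback
kubectl rollout status deployment/website-deployment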

Step 10: Autoscale Your Application

The task of this step was to automate scaling based on CPU usage to handle unpredictable traffic spikes.

First, a Horizontal Pod Autoscaler was created targeting 50% CPU utilization, with a minimum of 2 and a maximum of 10 pods using this command:

$ kubectl autoscale deployment website-deployment --cpu-percent=50 --min=2 --max=10
horizontalpodautoscaler.autoscaling/website-deployment autoscaled

Second, load was simulated using the Apache Bench tool to generate traffic and increase CPU usage. This tool was installed with the yum install httpd-tools command and run with a command like ab -t 10 -n 20000 -c 20000 http://my-value.us-east-1.elb.amazonaws.com/

Third, I observed the HPA in action by monitoring it with the kubectl get hpa command.
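Note that for the CPU percentage target to report values, metrics-server must be running in the cluster and the containers need CPU resource requests set. A sketch of the monitoring itself, where --watch keeps refreshing while the load test runs:

kubectl get hpa website-deployment --watch   # TARGETS shows current vs. desired CPU utilization
kubectl get pods -l tier=frontend            # the replica count rises as CPU utilization climbs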

Step 11: Implement Liveness and Readiness Probes

The task of this step was to ensure the web application is restarted if it becomes unresponsive and doesn’t receive traffic until ready.

This task was accomplished by adding liveness and readiness probes to the website-deployment.yaml and mysql-pod.yaml manifest files as shown below:

website-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        app: ecom-web
        tier: frontend
    spec:
      containers:
      - image: briangaber/ecom-web:v1
        name: ecom-web
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.php
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /index.php
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        env:
        - name: DB_HOST
          value: "mysql-service"
        - name: DB_USER
          value: "ecomuser"
        - name: DB_PASSWORD
          value: "ecompassword"
        - name: DB_NAME
          value: "ecomdb"
        - name: FEATURE_DARK_MODE
          valueFrom:
            configMapKeyRef:
              name: feature-toggle
              key: FEATURE_DARK_MODE

mysql-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: ecom-web
    tier: backend
  name: ecom-db
spec:
  containers:
  - image: mariadb
    name: ecom-db
    ports:
    - containerPort: 3306
    readinessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 10
      periodSeconds: 5
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mypass
    - name: MYSQL_DATABASE
      value: ecomdb
    - name: MYSQL_USER
      value: ecomuser
    - name: MYSQL_PASSWORD
      value: ecompassword
    volumeMounts:
      - name: mariadb-initdb
        mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: mariadb-initdb
    configMap:
      name: initdb

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb
data:
  initdb.sql: |-
    USE ecomdb;
    CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment,Name varchar(255) default NULL,Price varchar(255) default NULL, ImageUrl varchar(255) default NULL,PRIMARY KEY (id)) AUTO_INCREMENT=1;
    INSERT INTO products (Name,Price,ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");

I tested the probe functionality by deleting index.php in the running website-deployment pods both before and after implementing the probes. Before the probes were in place, the pods would stay running when index.php was deleted, but the website was inaccessible because index.php no longer existed. After the probes were implemented, deleting index.php caused the pods to restart because the liveness probe detected the website as down.
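A sketch of that test, using the document root from the Dockerfile (/var/www/html):

kubectl exec deployment/website-deployment -- rm /var/www/html/index.php   # breaks one frontend pod
kubectl get pods -l tier=frontend --watch    # its RESTARTS count increments once the liveness probe fails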

Step 12: Utilize ConfigMaps and Secrets

The task of this step was to securely manage the database connection string and feature toggles without hardcoding them in the application.

This task was completed by creating the mysql-cm.yaml and mysql-secrets.yaml manifest files and modifying the website-deployment.yaml and mysql-pod.yaml manifest files as shown below:

mysql-cm.yaml

The ConfigMap object was created with this command:

$ kubectl create configmap mariadb-config --from-literal=mysql_db=ecomdb --from-literal=mysql_user=ecomuser

Then the manifest below was displayed using the k get cm mariadb-config -o yaml command:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
data:
  mysql_db: ecomdb
  mysql_user: ecomuser

mysql-secrets.yaml

The Secret object was created with this command:

$ kubectl create secret generic mariadb-secret --from-literal=mysql_root_password=mypass --from-literal=mysql_password=ecompassword

Then the manifest below was displayed using the k get secret mariadb-secret -o yaml command:

apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  mysql_password: ZWNvbXBhc3N3b3Jk
  mysql_root_password: bXlwYXNz
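Because Secret data is only base64-encoded, the stored values can be verified by decoding them; a quick sketch:

kubectl get secret mariadb-secret -o jsonpath='{.data.mysql_password}' | base64 --decode   # prints ecompassword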

website-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        app: ecom-web
        tier: frontend
    spec:
      containers:
      - image: briangaber/ecom-web:v1
        name: ecom-web
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.php
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /index.php
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        env:
        - name: DB_HOST
          value: "mysql-service"
        - name: DB_USER
          valueFrom:
            configMapKeyRef:
              name: mariadb-config
              key: mysql_user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-secret
              key: mysql_password
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: mariadb-config
              key: mysql_db
        - name: FEATURE_DARK_MODE
          valueFrom:
            configMapKeyRef:
              name: feature-toggle
              key: FEATURE_DARK_MODE

mysql-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: ecom-web
    tier: backend
  name: ecom-db
spec:
  containers:
  - image: mariadb
    name: ecom-db
    ports:
    - containerPort: 3306
    readinessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 10
      periodSeconds: 5
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mariadb-secret
          key: mysql_root_password
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: mariadb-config
          key: mysql_db
    - name: MYSQL_USER
      valueFrom:
        configMapKeyRef:
          name: mariadb-config
          key: mysql_user
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mariadb-secret
          key: mysql_password
    volumeMounts:
      - name: mariadb-initdb
        mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: mariadb-initdb
    configMap:
      name: initdb
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb
data:
  initdb.sql: |-
    USE ecomdb;
    CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment,Name varchar(255) default NULL,Price varchar(255) default NULL, ImageUrl varchar(255) default NULL,PRIMARY KEY (id)) AUTO_INCREMENT=1;
    INSERT INTO products (Name,Price,ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");

Extra Credit 1: Package everything in Helm

The task of this extra credit was to utilize Helm to package the application, making deployment and management on Kubernetes clusters more efficient and scalable.

The Helm Chart structure for this project was created by running the following command:

helm create k8s-challenge

Then the Chart.yaml, values.yaml and all manifest files in the /templates directory were modified for this project.

This is the Chart.yaml I created:

annotations:
  category: Training
  images: |
    - name: ecom-web
      image: https://hub.docker.com/r/briangaber/ecom-web
apiVersion: v2
name: k8s-challenge

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.0.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.0.0"
description: Custom image based upon php:7.4-apache with mysqli and KodeKloud learning-app-ecommerce application added
keywords:
- apache
- http
- https
- www
- web
- php
- php-apache
- mariadb
maintainers:
- name: Brian Gaber
  url: https://github.com/bgaber/k8s-challenge

This is the values.yaml file I created:

# Default values for k8s-challenge.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

namespace: k8s-challenge

websiteDeployment:
  deploymentName: website-deployment
  containerName: ecom-web
  replicas: 2
  image: briangaber/ecom-web:v1
  port: 80
  appLabel: ecom-web
  tierLabel: frontend
  serviceName: website-service

databasePod: 
  podName: ecom-db
  image: mariadb
  port: 3306
  appLabel: ecom-db
  tierLabel: backend
  serviceName: mysql-service

storage:
  scName: aws-ssd-class
  pvcName: mariadb-pv-claim

The K8s manifest files in the /templates directory were validated using this command:

helm template k8s-challenge --debug

The application was installed with this Helm Chart command:

helm install k8s-challenge k8s-challenge

The application was uninstalled with this Helm Chart command:

helm uninstall k8s-challenge
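Subsequent chart changes can be rolled out without uninstalling; a sketch of the iterate-and-verify loop:

helm lint k8s-challenge                    # catch chart errors before deploying
helm upgrade k8s-challenge k8s-challenge   # roll the existing release forward
helm list                                  # the release revision should increment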

Extra Credit 2: Implement Persistent Storage

The task of this extra credit was to ensure data persistence for the MariaDB database across pod restarts and redeployments.

This task was accomplished with an AWS EBS Storage Class and a Persistent Volume Claim.

ebs-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ssd-storage
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: gp3
  encrypted: "true"

mysql-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pv-claim
  labels:
    app: ecom-web
    tier: backend
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: aws-ssd-storage
  resources:
    requests:
      storage: 20Gi
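The StorageClass and PVC are applied with kubectl; note that, because of volumeBindingMode: WaitForFirstConsumer, the claim stays Pending until the database pod that uses it is scheduled. A sketch:

kubectl apply -f ebs-sc.yaml -f mysql-pvc.yaml
kubectl get pvc mariadb-pv-claim   # remains Pending until the ecom-db pod consumes it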

Then the mysql-pod.yaml manifest file was modified with volumeMounts and volumes as shown below:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: ecom-web
    tier: backend
  name: ecom-db
spec:
  containers:
  - image: mariadb
    name: ecom-db
    ports:
    - containerPort: 3306
    readinessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 10
      periodSeconds: 5
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mariadb-secret
          key: mysql_root_password
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: mariadb-config
          key: mysql_db
    - name: MYSQL_USER
      valueFrom:
        configMapKeyRef:
          name: mariadb-config
          key: mysql_user
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mariadb-secret
          key: mysql_password
    volumeMounts:
      - name: mariadb-initdb
        mountPath: /docker-entrypoint-initdb.d
      - name: mariadb-pv
        mountPath: /var/lib/mysql
  volumes:
  - name: mariadb-initdb
    configMap:
      name: initdb
  - name: mariadb-pv
    persistentVolumeClaim:
      claimName: mariadb-pv-claim

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb
data:
  initdb.sql: |-
    USE ecomdb;
    CREATE TABLE products (id mediumint(8) unsigned NOT NULL auto_increment,Name varchar(255) default NULL,Price varchar(255) default NULL, ImageUrl varchar(255) default NULL,PRIMARY KEY (id)) AUTO_INCREMENT=1;
    INSERT INTO products (Name,Price,ImageUrl) VALUES ("Laptop","100","c-1.png"),("Drone","200","c-2.png"),("VR","300","c-3.png"),("Tablet","50","c-5.png"),("Watch","90","c-6.png"),("Phone Covers","20","c-7.png"),("Phone","80","c-8.png"),("Laptop","150","c-4.png");
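A sketch of a persistence check, assuming the mariadb client inside the container; the products table should survive deleting and recreating the pod because /var/lib/mysql now lives on the EBS-backed volume:

kubectl delete pod ecom-db
kubectl apply -f mysql-pod.yaml
# once the pod is Ready again, the previously inserted rows are still there
kubectl exec ecom-db -- mariadb -uecomuser -pecompassword ecomdb -e "SELECT COUNT(*) FROM products;"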

Artifacts

The various artifacts (Dockerfiles, K8s manifests, etc.) created for this challenge can be found in my GitHub k8s-challenge repository.

Conclusion

After reading through this challenge and its steps, my initial thought was that it was going to be difficult, even though I had recently passed my CKA exam. However, as I pondered and began working through the steps, I realized it was not as difficult as I first imagined. I learned a lot from the practicality of the steps in this challenge, especially building the Docker images, using probes, Storage Classes, PVCs and deployment rollback. I feel much more confident in applying the K8s knowledge I gained while studying for the CKA. This was a very worthwhile effort because the practical, hands-on implementation of this Kubernetes application solidified my K8s skills. I showed proficiency in Kubernetes and containerization, demonstrating the ability to deploy, scale, and manage web applications efficiently in a K8s environment, underscoring cloud-native deployment skills.