Deployments in Kubernetes | Deploy a Django application on Kubernetes


In my previous blog, I discussed how to install k8s, what pods are, and also created an nginx pod. In this blog, I will show you how to connect a worker node to the master node and walk you through Deployments in Kubernetes. There is also a surprise project ahead: deploying a Django application on Kubernetes!

Deployments in K8s

According to kubernetes.io, "A Deployment provides declarative updates for Pods and ReplicaSets."

In simpler terms, a Deployment is an object that manages the deployment and scaling of a set of pods. It provides a declarative way to define and manage the desired state of a replicated application.

Deployments Use Cases

A Deployment is used:

  • To roll out a ReplicaSet.

  • To manage the lifecycle of pods and ensure the desired number of replicas is running.

  • To define the desired state of your application by specifying the number of replicas, the container image, and other configuration details.

  • To perform rolling updates, letting you update the application without downtime (see the example after this list).

  • To scale your application horizontally.
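
For example, a rolling update and a rollback can be driven with kubectl rollout. The commands below are only illustrative: they reuse the deployment and container names created later in this post and assume a hypothetical v2 image tag.

kubectl set image deployment/my-django-application todo-app=snehaks/django-todo:v2
kubectl rollout status deployment/my-django-application
kubectl rollout undo deployment/my-django-application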

Kubernetes uses the Horizontal Pod Autoscaler (HPA) to scale applications up and down based on observed load.

How to connect the worker to the Master node?

Create two EC2 instances. Install docker, kubeadm, kubectl, and kubelet on both instances.
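
If you need a refresher on that installation, here is a minimal sketch assuming Ubuntu instances and the official Kubernetes apt repository; the repository URL is pinned to a Kubernetes minor version (v1.29 is used here only as an example), so check the official docs for the release you want.

# Container runtime and prerequisites
sudo apt-get update
sudo apt-get install -y docker.io apt-transport-https ca-certificates curl gpg

# Kubernetes apt repository (version-specific; adjust v1.29 as needed)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# kubelet, kubeadm and kubectl, held so apt upgrades don't break the cluster
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl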

A quick recap: kubeadm is a command-line utility used to bootstrap and manage the control plane components of a Kubernetes cluster.

kubectl is the primary command-line tool for interacting with a Kubernetes cluster.

kubelet is an agent that runs on each node in the Kubernetes cluster. It is responsible for managing the containers and running the pods.

Once the installation is complete, follow the steps below.

Let's connect the worker node to the Master.

On the master node, become the root user and initialize the cluster with kubeadm:

sudo su
kubeadm init

Then set up the kubeconfig so kubectl can talk to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, we need to configure the pod network by installing a CNI (Container Network Interface) plugin; without one, the nodes stay in the NotReady state. Here we are using Weave Net.

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

On the worker node, become the root user and reset any previous kubeadm state so the node is in a clean state before joining:

sudo su
kubeadm reset

The kubeadm join command is used to join a worker node to an existing Kubernetes cluster. Paste the join command that kubeadm init printed on the master node:

kubeadm join 172.31.41.186:6443 --token qfy9l6.ldpqs1npq76c4i1q \
    --discovery-token-ca-cert-hash sha256:cd80dea66f46a9dbdb7e46e4bc68d62485332287be8d4a1ea2d4fb389dcb0728 --v=5
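
If the join command has scrolled away or the token has expired, you can regenerate it on the master at any time (your token and hash will differ from the example above):

kubeadm token create --print-join-command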

You may find that the worker is unable to connect to the master because port 6443 (the Kubernetes API server port) is not open.

So open port 6443 on the master by editing the inbound rules of the instance's security group. After updating the inbound rules, you can see the worker connecting to the master.

To check whether the worker has joined the cluster, run the following command on the master:

kubectl get nodes

Now let's get started with the tasks.

Cherry on the cake: You can include this as a project on your resume too.

Task: Create one Deployment file to deploy a sample todo-app on K8s using the "Auto-healing" and "Auto-Scaling" features.

I am using the following repository for this project: https://github.com/thesnehasuresh/django-todo-cicd.git

Clone the repository to your local.

git clone https://github.com/thesnehasuresh/django-todo-cicd.git

The Dockerfile in the cloned repository contains the following:

# Python 3 base image
FROM python:3
# Working directory inside the container
WORKDIR /data
# Install the Django version the app targets
RUN pip install django==3.2
# Copy the application code into the image
COPY . .
# Apply database migrations at build time
RUN python manage.py migrate
# The Django development server listens on port 8000
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Go into the cloned directory and build the image.

cd django-todo-cicd
docker build . -t snehaks/django-todo:latest

Let's verify if the image is created.
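
For example, listing the local images should now show the tag we just built:

docker images | grep django-todo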

Yes, it is created. Now push this image to the docker hub.

Log in to Docker Hub.

docker login

Now push the image to the registry.

docker push snehaks/django-todo:latest

Let's verify in Docker Hub if the image is pushed.

Yes, the image is pushed from our local repo.

Now let us start with creating the deployment. Create the deployment.yml file.

vim deployment.yml
cat deployment.yml

The deployment.yml will have the following details:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django-application
spec:
  # Desired number of pod replicas
  replicas: 3
  # The selector must match the labels in the pod template below
  selector:
    matchLabels:
      app: django-todo-app
  template:
    metadata:
      labels:
        app: django-todo-app
    spec:
      containers:
        - name: todo-app
          # Image we built and pushed to Docker Hub
          image: snehaks/django-todo:latest
          ports:
            # Port the Django server listens on inside the container
            - containerPort: 8000

Let's create this deployment. Run this command as a root user:

kubectl apply -f deployment.yml

Let us verify if any pods are created.

kubectl get pods

Let us test whether the containers we created are actually working. This check is done on the worker node, where the pods are running.

Let's connect to any one of the containers locally.

sudo docker exec -it <container_id> bash

Let's connect to the application using the container's IP.

curl -L http://<container_ip>:8000

So the deployment I created is working successfully.
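
If you would rather test from the master node without exec-ing into a container, a quick alternative is to port-forward to the deployment (assuming kubectl is configured there):

kubectl port-forward deployment/my-django-application 8080:8000
# in a second terminal, while the port-forward is running:
curl -L http://localhost:8080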

Let's check the auto-healing and autoscaling features.

What are auto-healing and auto-scaling features in k8s?

Auto-healing, also known as self-healing, is a feature that automatically detects and recovers from failures within the cluster.
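
Out of the box, this healing comes from the ReplicaSet behind the Deployment recreating pods that disappear or fail. For finer-grained detection of an unhealthy container, you could also add a liveness probe to the container spec. The snippet below is only an illustrative addition to the deployment we created, not something the original manifest requires:

# goes under spec.template.spec.containers[0] in deployment.yml
livenessProbe:
  httpGet:
    path: /        # assumes the todo app returns a 200 at the root path
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 15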

Auto-scaling is a feature that dynamically adjusts the number of running instances (pods) based on the current demand or resource utilization.
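
For this deployment, auto-scaling could be wired up with a HorizontalPodAutoscaler, for example using the imperative command below. Note that this is just a sketch: it assumes the metrics-server add-on is installed in the cluster and that CPU requests are set on the container.

kubectl autoscale deployment my-django-application --cpu-percent=70 --min=3 --max=10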

So I will delete two of the pods.

Use the following commands on the Master node to delete pods:

kubectl get pods
kubectl delete pod <podname> <podname>

If we check the pods again, we can observe that the desired number of replicas we specified (in this case 3) is restored.

You can observe that two of the pods have a much younger age (created about 77 seconds earlier), showing that they came up automatically after the deletion. This is Kubernetes auto-healing in action: the ReplicaSet behind the Deployment notices that fewer pods than desired are running and recreates them.
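
If you also want to see scaling in action without an autoscaler, you can bump the replica count manually and watch new pods appear:

kubectl scale deployment my-django-application --replicas=5
kubectl get pods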

To delete the deployment, we use:

kubectl delete -f deployment.yml

We can also observe that, along with the deployment, the pods it created are deleted too.

Yay! You created a cluster and deployed a Django application on the worker node!


In this blog, I have discussed deployments in k8s and also deployed a Django application using K8s. If you have any questions or would like to share your experiences, feel free to leave a comment below. Don't forget to read my blogs and connect with me on LinkedIn and let's have a conversation.

To help me improve my blog and correct my mistakes, I am available on LinkedIn as Sneha K S. Do reach me and I am open to suggestions and corrections.

#Day32 #90daysofdevops
