Table of contents
- Namespaces
- Tasks
- Task 1: Create a Namespace for your Deployment
- Task 2: Explain what you know about Services, Load Balancing, and Networking in Kubernetes.
In the previous blog, we deployed a Django application on Kubernetes using the deployment.yml file. In this blog, let's work with Namespaces and learn about Services in Kubernetes.
Namespaces
In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster.
Namespaces are used to create a scope for objects and resources, allowing multiple teams or applications to share a cluster without interference.
Namespaces apply only to namespaced objects (Deployments, Services, etc.) and not to cluster-wide objects (StorageClasses, Nodes, and PersistentVolumes).
Namespaces that are already present in a K8s cluster: default, kube-node-lease, kube-public, and kube-system.
When to use Namespaces?
Some common use cases of Namespaces are:
When you have a shared Kubernetes cluster where multiple applications coexist, namespaces can be used to provide isolation between different tenants.
Namespaces can be used to separate different environments, such as development, staging, and production.
Namespaces can be used to enforce resource quotas and limits on different teams or applications (a sample ResourceQuota is sketched after this list).
RBAC (Role-Based Access Control) in Kubernetes can be applied at the namespace level. This allows you to define different roles and permissions for different teams or users within specific namespaces.
Namespaces can be used to separate testing and CI/CD pipelines.
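As a minimal sketch of the resource-quota use case above, a ResourceQuota scoped to a namespace might look like this (the namespace name dev, the quota name, and the limit values are placeholders, not values from this blog's cluster):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota          # placeholder name
  namespace: dev           # placeholder namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
The quota would cap how many pods and how much CPU and memory the dev namespace can request in total.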
Note: I have explained Services in Kubernetes in the Task 2 section, as part of the #90daysofdevops challenge.
Tasks
Task 1: Create a Namespace for your Deployment. (Hint: Use the command kubectl create namespace <namespace-name> to create a Namespace). Update the deployment.yml file to include the Namespace. (Hint: Apply the updated deployment using the command: kubectl apply -f deployment.yml -n <namespace-name>).
Verify that the Namespace has been created by checking the status of the Namespaces in your cluster.
kubectl create namespace <name_of_namespace>
To check the namespaces present:
kubectl get namespaces
The namespace we created, deploy1, is listed here.
To apply:
Update the deployment file to include the namespace under metadata.
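As a minimal sketch (assuming the Django Deployment from the previous blog; the Deployment name, labels, and image are placeholders), adding the namespace could look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app-deployment   # placeholder name
  namespace: deploy1            # the namespace created above
spec:
  replicas: 2
  selector:
    matchLabels:
      app: django-app
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
        - name: django-app
          image: <your-django-image>   # placeholder image
          ports:
            - containerPort: 8000
Then apply it: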
kubectl apply -f deployment.yml -n <name_of_namespace>
We can check the pods created under the namespace.
kubectl get pods -n <name_of_namespace>
Task 2: Explain what you know about Services, Load Balancing, and Networking in Kubernetes.
Services
In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster.
An example from the official Kubernetes documentation illustrates what a Service is:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
This is a YAML manifest that describes a Kubernetes Service. The Service named "my-service" routes incoming TCP traffic on port 80 to the Pods labeled app.kubernetes.io/name: MyApp and forwards it to their port 9376. This allows other components within the cluster to access the Pods of the "MyApp" application through the Service's stable DNS name and port 80, without needing to know the specific IP addresses or port numbers of the Pods.
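As a quick usage note (assuming the Service is in the default namespace and the cluster uses the default cluster.local DNS domain), other Pods could reach it at my-service.default.svc.cluster.local on port 80, and you can inspect it with:
kubectl get service my-service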
Why do you need services?
Services distribute incoming network traffic across multiple Pods that are part of the Service.
Services have a stable DNS name and IP address that can be used by other components within the cluster to access the Pods.
When scaling the number of Pods or performing rolling updates, the Service automatically adjusts the routing of traffic to the available Pods.
Services can be used for internal communication between different components within the cluster.
Services can be configured to expose applications externally, allowing access from outside the cluster.
Types of services in Kubernetes:
ClusterIP Service
NodePort Service
LoadBalancer Service
ExternalName Service (This type of service is used to provide DNS aliases to external services)
Headless Service (This service type is used with StatefulSets, where each pod has a unique identity)
Ingress (This is not a service type per se, but a way to expose HTTP and HTTPS services to the outside world)
We will discuss a few of the above-mentioned services.
ClusterIP
This is the default service type in Kubernetes. It creates a virtual IP address within the cluster to access the pods that are part of the service.
ClusterIP services expose a set of pods in a K8s cluster to the other pods in the same cluster using a virtual IP address.
A common practical use case for ClusterIP service would be when a web application consists of a backend API service and a frontend service. The frontend service can access the backend API service which can be exposed as a ClusterIP service. The backend pods are not directly accessible from outside the Kubernetes cluster, but they can be accessed by the frontend pods that are running in the same cluster.
So, a ClusterIP Service provides access to an application only from within the Kubernetes cluster: it is not reachable from the outside world, uses an IP from the cluster's service IP pool, and is accessible via a DNS name within the cluster's scope.
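A minimal sketch of such a backend ClusterIP Service (the name backend-api, the label, and the ports are placeholders for this example):
apiVersion: v1
kind: Service
metadata:
  name: backend-api        # placeholder Service name
spec:
  type: ClusterIP          # the default type, shown explicitly here
  selector:
    app: backend-api       # placeholder label on the backend pods
  ports:
    - protocol: TCP
      port: 80             # port the frontend pods connect to
      targetPort: 8000     # port the backend containers listen on
Frontend pods in the same cluster could then reach the backend at http://backend-api:80, while nothing outside the cluster can.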
NodePort
A NodePort service is a type of Kubernetes service that allows access to a set of pods running on a cluster from outside the cluster. It exposes the pods to the outside world by opening a specific port on all the nodes of the cluster.
This type of service exposes the container to the outside world by mapping a static port on the worker node(s) to the pod. This is useful for testing or accessing a service from outside the cluster.
When a NodePort is created, K8s assigns a random port number (between 30000 and 32767 by default) on each node in the cluster. The service can then be accessed using the IP address of any node in the cluster and the assigned port number.
The NodePort type is tied to specific hosts, such as EC2 instances: if the host is not reachable from the outside world, the Service cannot provide external access to the pods. The node's address comes from the provider's pool (for example, the AWS VPC CIDR), and clients reach the Service through a node's IP and the assigned port.
A practical example is, NodePort Service can be used to expose a web application to the internet.
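A minimal NodePort sketch for that example (the label web-app and the nodePort value 30080 are placeholders; if nodePort is omitted, Kubernetes picks one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport   # placeholder Service name
spec:
  type: NodePort
  selector:
    app: web-app           # placeholder label on the web application pods
  ports:
    - protocol: TCP
      port: 80             # Service port inside the cluster
      targetPort: 8000     # container port
      nodePort: 30080      # placeholder port opened on every node
The application would then be reachable at http://<node-ip>:30080, provided the node is reachable from outside and the port is allowed by the firewall or security group.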
LoadBalancer
A LoadBalancer service is a type of Kubernetes service that provides external network access to a set of pods running on a cluster. It creates a load balancer that distributes incoming traffic across multiple backend pods, providing scalability and high availability for the application.
When you create a LoadBalancer service, Kubernetes creates a cloud load balancer (if you are running on a cloud provider) or a software load balancer (if you are running on-premises) that distributes traffic across the pods in the service. The load balancer provides a stable IP address that external clients can use to connect to the service.
So, the LoadBalancer Service type provides external access to pods, offers basic load balancing across pods running on different EC2 instances, can terminate SSL/TLS sessions, but does not support Layer 7 routing.
It can be used to distribute traffic across a set of pods running a web application or a backend API service, for example.
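A minimal LoadBalancer sketch (the selector and ports are placeholders; on a cloud provider this would provision an external load balancer):
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb         # placeholder Service name
spec:
  type: LoadBalancer
  selector:
    app: web-app           # placeholder label on the application pods
  ports:
    - protocol: TCP
      port: 80             # port the load balancer listens on
      targetPort: 8000     # container port
Once the provider provisions the load balancer, kubectl get service web-app-lb shows its address under the EXTERNAL-IP column.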
Kubernetes Networking
Kubernetes networking is the process of enabling communication between the various components of a Kubernetes cluster. It involves configuring networking resources such as pods, services, and ingress to allow traffic to flow between them.
Kubernetes has fundamental requirements for networking implementations:
Pods should be able to communicate with all other pods, including those on other nodes, without NAT.
Agents on the node should be able to communicate with all the pods on that node.
In Kubernetes, each pod gets its own IP address, and services act as a stable endpoint for accessing a set of pods. This allows for load balancing and automatic scaling of application traffic.
In addition, Kubernetes provides network policies that allow for fine-grained control over traffic flow between pods.
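As a sketch of such a policy (the backend-api and frontend labels are placeholders, and NetworkPolicies only take effect if the cluster's network plugin supports them), the following would allow only the frontend pods to reach the backend pods on port 8000:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend-api      # placeholder label on the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # placeholder label on the allowed client pods
      ports:
        - protocol: TCP
          port: 8000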
In this blog, I have discussed Services and Namespaces in Kubernetes. If you have any questions or would like to share your experiences, feel free to leave a comment below. Don't forget to read my blogs, connect with me on LinkedIn, and let's have a conversation.
To help me improve my blog and correct my mistakes, reach out to me on LinkedIn as Sneha K S. I am open to suggestions and corrections.
#Day33 #90daysofdevops