The world has largely moved on from virtual machines to something far more efficient: containers. This transition has led big players to seek professionals with Kubernetes certification for their expertise in managing containerized applications. For newcomers, Kubernetes is an open-source platform known for automating the deployment, scaling, and management of containers.

What exactly is Kubernetes?

Often referred to as K8s, this container management tool is instrumental in grouping application containers into logical units for quicker operations. Supported by major container platforms such as Docker EE, AWS EKS, OpenShift, and IBM Cloud, Kubernetes is often used by developers to run multiple applications or modules together as one piece of software, and to roll out newer versions quickly.

At its core, Kubernetes is built from a few basic concepts that, when combined, yield powerful functionality. There are a few technicalities you should know before getting deeper into the Kubernetes networking infrastructure. They are:

API server: A gateway to the etcd datastore, it is primarily responsible for keeping your application containers in your desired state. Since the Kubernetes architecture is governed by the kube-apiserver, each time you need to update the status of your containers, you call the API server and declare the desired state.
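To make this declare-then-read pattern concrete, here is a minimal Python sketch of an API-server-like store. `ToyAPIServer`, its methods, and the object names are invented for illustration; this is not the real Kubernetes API.

```python
# Toy illustration of the declarative pattern: clients record a desired
# state with the API server (backed by etcd), and read it back later.

class ToyAPIServer:
    def __init__(self):
        self._store = {}  # stands in for the etcd datastore

    def apply(self, name, desired_state):
        """Record the desired state of an object (e.g. a deployment)."""
        self._store[name] = dict(desired_state)

    def get(self, name):
        """Return the recorded desired state, or None if unknown."""
        return self._store.get(name)

api = ToyAPIServer()
api.apply("web", {"replicas": 3, "image": "nginx:1.25"})
print(api.get("web"))  # the recorded desired state
```

Controllers (described next) read this recorded desired state and work to make reality match it.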

Controllers: As the name suggests, these abstractions come into play after you declare the desired state of a container in the API server. Each one works entirely on a simple loop:

    while True:
        current = current_state()
        desired = desired_state()
        if current == desired:
            continue  # do nothing
        else:
            act_to_reach(desired)

The pseudo-code shows how a controller continuously compares the cluster’s actual state with the desired state recorded in the API server, and reacts only when the two diverge.
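Assuming faked state for illustration, the control loop above can be made runnable; `reconcile` and the replica counts below are hypothetical stand-ins for the real watch-and-act machinery.

```python
# A runnable sketch of the controller reconciliation loop. Cluster state
# is faked with plain integers (current vs. desired replica counts); a
# real controller would watch the API server instead.

def reconcile(current, desired):
    """Drive the actual state toward the desired state, one step at a time."""
    while current != desired:
        if current < desired:
            current += 1   # "start" a missing replica
        else:
            current -= 1   # "stop" an extra replica
    return current

print(reconcile(current=1, desired=3))  # -> 3
```

The key design point survives the simplification: the controller never stores its own copy of the goal; it repeatedly observes, compares, and acts.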

Master node: The node that runs the control-plane components governing the cluster, such as the kube-apiserver, the scheduler, and the controller manager.

Worker node: Previously known as a “minion”, this represents a worker machine in the Kubernetes architecture. It can be either a virtual or a physical machine. Worker nodes house the resources required to run pods, such as the container runtime and kube-proxy, and are managed by the master components.

Service: An abstraction that acts as a proxy in front of a set of pods, forwarding requests to them; it can also be used to load balance between pods.

Pod: The smallest deployable object in Kubernetes. Each pod has its own dedicated IP address and can contain one or more containers.

What is Kubernetes networking?

The fundamental design principle is that each pod gets its own IP address, which doesn’t change even when a container inside it is replaced by a new one. All containers within a pod share that pod IP because a special “sandbox” (pause) container holds the pod’s network namespace, which also reduces the chance of port collisions with host applications. Since pod IPs are routable, pods can communicate extensively both within a node and across nodes.

The Kubernetes networking model is based on the following constraints:

  • Pods can communicate with other pods without any Network Address Translation (NAT).
  • Nodes can communicate with pods without NAT.
  • The IP a pod sees for itself is the same IP that other pods and nodes see.
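These constraints can be sketched with Python’s standard `ipaddress` module; the node names and pod CIDRs below are made-up examples, not values Kubernetes mandates.

```python
# Sketch of the flat pod network: every pod IP is drawn from some node's
# pod CIDR, and any pod can be addressed directly by that IP, with no NAT
# rewriting the source or destination address.
import ipaddress

node_pod_cidrs = {
    "node1": ipaddress.ip_network("10.244.1.0/24"),
    "node2": ipaddress.ip_network("10.244.2.0/24"),
}

def owning_node(pod_ip):
    """Find which node's pod CIDR contains this pod IP."""
    ip = ipaddress.ip_address(pod_ip)
    for node, cidr in node_pod_cidrs.items():
        if ip in cidr:
            return node
    return None

# A pod on node1 reaches a pod on node2 using its real IP, untranslated.
print(owning_node("10.244.2.17"))  # -> node2
```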

In Kubernetes, every effective deployment is termed a cluster. A cluster is visualized in two parts: the control plane and the compute machines. The control plane runs on the master nodes, while the compute machines are the worker nodes.

The worker nodes are responsible for the seamless running of pods, which in turn house the containers. Since worker nodes run a Linux environment, each can be a physical or a virtual machine. The master nodes, which also run on top of an operating system, take instructions from the admin and schedule work onto the nodes best suited to each task.

Kubernetes pod network (diagram source: Medium):

Pod and Container Communication:

Since pods (which hold the containers) are the building blocks of Kubernetes networking, a pod can reach other pods through two types of communication:

Intra-node pod network: Here, the packet leaves pod 1’s network namespace and reaches the Linux bridge, which forwards the packet straight to the destination pod on the same node.

Inter-node communication: Here, the packet leaves pod 1’s network namespace and reaches the Linux bridge, but since the bridge has no entry for the destination address, the packet is handed to the node’s primary network interface. The packet then leaves node 1 and is matched against the routing table, which points to node 2 because node 2’s CIDR block contains pod 4’s address. The packet therefore travels to node 2, whose bridge carries it on to pod 4.
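The two paths boil down to a simple forwarding decision, which can be sketched in Python; the CIDRs, route table, and return strings below are illustrative assumptions, not real kernel behavior.

```python
# Sketch of the forwarding decision: the bridge delivers packets whose
# destination is on the local pod subnet; anything else leaves via the
# node's primary interface and is matched against per-node routes.
import ipaddress

LOCAL_POD_CIDR = ipaddress.ip_network("10.244.1.0/24")   # this node's bridge subnet
ROUTES = {  # cluster routing table: pod CIDR -> next-hop node
    ipaddress.ip_network("10.244.1.0/24"): "node1",
    ipaddress.ip_network("10.244.2.0/24"): "node2",
}

def forward(dst_ip):
    ip = ipaddress.ip_address(dst_ip)
    if ip in LOCAL_POD_CIDR:
        return "delivered by local bridge"   # intra-node path
    for cidr, node in ROUTES.items():
        if ip in cidr:
            return f"routed to {node}"       # inter-node path
    return "dropped"

print(forward("10.244.1.5"))   # -> delivered by local bridge
print(forward("10.244.2.8"))   # -> routed to node2
```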

Services

Services are a type of resource that configures a proxy to forward requests to a set of pods, with the pods that receive the traffic determined by the service’s selector. Services can be broken down into four types:

  • ClusterIP: The default service type, which exposes the service on a cluster-internal IP.
  • NodePort: Exposes the service on a static port on each node’s IP, so it can be contacted from outside the cluster.
  • LoadBalancer: Exposes the service externally using the cloud provider’s load balancer.
  • ExternalName: Maps the service to the contents of its externalName field by returning a CNAME record with that value.
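How a service’s selector picks backends and spreads traffic across them can be sketched in plain Python; the pod records, labels, and round-robin choice below are simplifying assumptions, not kube-proxy’s actual algorithm.

```python
# Sketch of service behavior: select pods by label, then balance across them.
import itertools

pods = [
    {"name": "web-1", "ip": "10.244.1.4", "labels": {"app": "web"}},
    {"name": "web-2", "ip": "10.244.2.7", "labels": {"app": "web"}},
    {"name": "db-1",  "ip": "10.244.1.9", "labels": {"app": "db"}},
]

def service_backends(selector):
    """Pods whose labels contain every key/value pair in the selector."""
    return [p for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

backends = service_backends({"app": "web"})
rr = itertools.cycle(backends)  # naive round-robin load balancing
print([next(rr)["name"] for _ in range(4)])  # -> ['web-1', 'web-2', 'web-1', 'web-2']
```

Note how the service never names pods directly; as pods come and go, the selector keeps the backend set current.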

How does networking work in Kubernetes?

Kubernetes networking builds on this system of running clusters and creating networks. If your cluster is already operational but has no network deployed yet, you may need to:

Credits: neuvector

Step 1: Run the networking config file against the cluster. It begins by creating two pods.


Step 2: A quick ping between the two pods can show whether the network is running properly.


Compared to Docker networking, Kubernetes manages networking itself rather than attaching devices to Docker’s defaults. Docker integrates with Docker Swarm and offers drivers for different network types such as overlay and macvlan; Kubernetes, by contrast, does not rely on the docker0 bridge but creates its own bridge, cbr0, which makes it easy to tell the two apart.

With all this information, we hope you are now well on your way to a good foundational understanding of how Kubernetes networking works. Good luck!