The Elastisys Tech Blog

Kubernetes “what is”

Kubernetes has in recent years grown in popularity and adoption among companies developing cutting-edge applications. But what is Kubernetes, and what are the various parts that make it work?

Kubernetes is an open-source container orchestration system that automates the management of containerized applications. It enables developers to easily deploy, scale, and manage their applications in a consistent and efficient manner. Kubernetes was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). It was inspired by Google's internal container orchestration system, Borg.  

Kubernetes is built around several core concepts that we will cover in this "what is" blog. Understanding these concepts is crucial to understanding how Kubernetes works. One core concept in Kubernetes is the “Cluster”, but what is a Cluster?

What is a Kubernetes Cluster?

A Kubernetes Cluster is a set of one or more Nodes. It is an abstraction over the actual machines running the workload, and it manages everything around scheduling, networking, and storage.

This abstraction simplifies the deployment and management of your workload immensely. You simply tell the “Cluster” to deploy an application, and it decides which machine will run the actual workload. However, a Cluster is nothing more than the underlying machines, and these are typically divided into two categories: Control plane Nodes and Worker Nodes.

What are Kubernetes Nodes?

A Kubernetes Node is a machine, either virtual or physical, that runs your containerized workload. There are generally two types of Nodes in Kubernetes: Control plane Nodes and Worker Nodes.

Control plane Nodes manage all of the Kubernetes components, the network, and the workload that runs in a Cluster. Worker Nodes run the actual workload – your application(s). 

The Control plane runs the Kubernetes API server, which you talk to when deploying an application. The other Control plane components are then responsible for scheduling the workload on one of the Worker Nodes.

Generally, control plane Nodes are separated from worker Nodes, but in some special cases one Node can be both a control plane and a worker Node. In production environments, it is advised to run multiple control plane Nodes, separated from worker Nodes, for high availability and fault-tolerance. 

The number and size of worker Nodes depend on the resource demands of the hosted applications. As mentioned earlier, worker Nodes run your application, and in Kubernetes all application components need to run inside containers.

What is a Container?

A container is a way of packaging your application together with everything that is needed to run it. This includes the code, a runtime, libraries and tools, and in some cases configuration (although it is usually wise to separate configuration from the application). 

The purpose of this is to make applications independent of the underlying hardware and to have them always run in a consistent manner. Containers are also meant to be isolated units, which increases security. Containers are often compared to an older virtualization technique, the virtual machine (VM). Compared to VMs, containers are much more lightweight, portable, and efficient.

Containers are the foundation of Kubernetes: all workloads that run on Kubernetes are packaged as containers. Despite this, you cannot schedule individual containers directly. Instead, you work through Kubernetes’ own smallest deployable unit: the Pod.

What is a Kubernetes Pod?

A Pod is the smallest computational unit that can be deployed in Kubernetes. A Pod can contain one or more containers, which share a networking namespace, an internal IP address, and storage.

Pods are the atomic unit of scheduling in Kubernetes, meaning that the scheduler only ever places whole Pods in the Cluster, never individual containers. In addition to its container(s), a Pod contains the configuration that determines how the container(s) should run, such as command-line arguments and environment variables. Pods are ephemeral by design, meaning that there is no guarantee that an individual Pod will keep running.
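
As a minimal sketch, a Pod manifest could look like this (the name and image are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
    - name: hello
      image: nginx:1.25    # illustrative container image
      ports:
        - containerPort: 80
      env:
        - name: GREETING   # example of Pod-level configuration
          value: "hello"
```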

When deploying applications in production environments it is rare to use Pods directly.

Some control plane components run as plain Pods, and Pods are sometimes useful to deploy for debugging purposes. In most cases, however, one would use one of the higher-level abstractions available in Kubernetes, such as a Deployment, DaemonSet, or ReplicaSet, to deploy the application in Pods.

What is a Kubernetes Deployment?

A Deployment is a higher-level abstraction in Kubernetes that manages the state of a group of identical Pods. It is a resource that wraps around a Pod description with additional information about replication, updates, and rollbacks.

A Deployment contains information on how many replicas of the Pod should be deployed, configuration for how to handle updates to the Pods, and the ability to roll back to a previous version of the Pod if necessary. A Deployment ensures that the desired number of replicas of a Pod is upheld: if a Pod is deleted or crashes, the Deployment will automatically schedule a new Pod in its place. Deployments are a very common resource in Kubernetes and useful for deploying most applications. Due to the nature of Kubernetes, these applications are most commonly designed with a microservice architecture. In some cases, however, a DaemonSet or a StatefulSet is preferred as an alternative deployment mechanism.
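
To make this concrete, here is a minimal sketch of a Deployment manifest; the names, image, and replica count are all illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired number of identical Pods
  selector:
    matchLabels:
      app: web
  template:                  # the Pod description being replicated
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If one of the three Pods is deleted or crashes, the Deployment controller notices the difference between desired and actual state and creates a replacement Pod.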

What is a Kubernetes DaemonSet?

A DaemonSet is another higher-level abstraction in Kubernetes, very similar to a Deployment. In the simplest case, a DaemonSet ensures that every Node in a Cluster runs a replica of a specific Pod.

As Nodes are added, new Pods will automatically be scheduled, and as Nodes are removed, their accompanying Pods will be deleted. DaemonSets can also be further specialized to only run Pods on certain Nodes if needed.

DaemonSets can be very useful for some specific types of applications. One such example is the native Kubernetes component kube-proxy, which runs on every Node in the Cluster to facilitate Kubernetes’ networking functionality. Other examples include log collection frameworks such as Fluentd.
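
A minimal DaemonSet sketch, here running a log-collecting agent on every Node (image and paths are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluentd:v1.16    # illustrative log collector image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:               # expose the Node's own log directory to the Pod
            path: /var/log
```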

What is a Kubernetes StatefulSet?

A StatefulSet is yet another high-level abstraction in Kubernetes that manages the state of a set of Pods. Unlike Deployments and DaemonSets, a StatefulSet will guarantee the ordering and uniqueness of the Pods that it manages. 

This is useful for stateful applications with dedicated persistent storage. When a Pod in a StatefulSet dies, Kubernetes will automatically schedule a Pod with the same unique identifier to ensure that it retains the exact configuration and persistent data. Databases are typical applications that will utilize a StatefulSet instead of a regular Deployment in order to manage storage in a consistent manner. 
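
A sketch of a StatefulSet for a hypothetical database could look like this; note the volumeClaimTemplates section, which gives each Pod its own persistent storage:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service that gives each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # illustrative database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PersistentVolumeClaim per Pod (data-db-0, data-db-1, ...)
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

The Pods get stable, ordered names (db-0, db-1, db-2), and a rescheduled Pod is reattached to the same claim, so its data survives.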

We have now described the most common resources in Kubernetes for deploying the actual workload. Beyond those, there are two more resources that are very common and are used to separate configuration data from the application code: ConfigMaps and Secrets. These are definitely something you need to know about before deploying your application on Kubernetes.

What are Kubernetes ConfigMaps?

Kubernetes ConfigMaps are objects that store configuration data in key-value pairs. ConfigMaps are used for any configuration that is not considered confidential information, such as command-line flags, environment variables, or other configuration files. 

ConfigMaps are useful for separating configuration data from the application itself, making it much easier to update and modify the behavior of the application without rebuilding the underlying container. To configure a Pod to use a ConfigMap, you can pass its values in as command-line arguments or environment variables, or mount it as a file within the Pod.
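
A small sketch of a ConfigMap (the keys and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"       # well suited to be consumed as an environment variable
  app.properties: |       # well suited to be mounted as a file
    feature.x.enabled=true
```

A Pod can then consume app-config either through environment variables (via env or envFrom in the container spec) or by mounting it as a volume, which turns each key into a file.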

For confidential information, make sure to use Kubernetes Secrets instead. 

What are Kubernetes Secrets?

Kubernetes Secrets are objects that are intended to be used to store sensitive configuration data such as passwords, access tokens, or keys. 

Storing this information in a Secret means that it does not have to be put directly in a Pod specification or inside the application code, which reduces the risk of the information being exposed.

Something very important to know about Secrets is that the data is not encrypted by default, but simply base64 encoded. This means that anyone with API access to either Secrets or the etcd database can see, and even modify, the confidential data. Even users who are merely permitted to create Pods can mount a Secret into a Pod and access it from within. Special care must therefore be taken to ensure safe use of Secrets, such as configuring RBAC rules and enabling encryption at rest.
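
As an illustrative sketch (these are of course not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:               # convenience field; the API server stores the values base64 encoded
  username: admin
  password: s3cr3t        # remember: base64 encoding is NOT encryption
```

Reading the Secret back from the API (for example with kubectl get secret db-credentials -o yaml) shows the base64-encoded values, which anyone can trivially decode.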

In addition to configuration through ConfigMaps and Secrets, there are more resources in Kubernetes that are useful to run in conjunction with Pods. Since the workload runs in containers, which are typically isolated from the host for security reasons, some of these resources are needed to expose an application to the internet. These resources are Kubernetes Services and Ingresses.

What is a Kubernetes Service?

A Service is an object in Kubernetes that exposes a logical set of Pods as a network resource. Services allow for communication between Pods inside a Cluster and communication to Pods from outside of the Cluster.

Services are an abstraction, meaning that they do not “run” as a process on any machine in the way that Pods (or the underlying containers) do. Most objects or resources work this way in Kubernetes. They are simply a description of an abstract resource that is accessible through the Kubernetes API. The Kubernetes control-plane components can then use this description to actually implement the specific functionality. Since Pods are ephemeral by nature, Services provide a common DNS name and IP to a set of Pods for a consistent way of communicating with the Pods. A Service will also do some simple load balancing if it targets more than one Pod to distribute the load. 
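
A minimal Service sketch that targets the Pods of the earlier Deployment example (label and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:          # targets all Pods labeled app: web
    app: web
  ports:
    - port: 80       # port the Service listens on
      targetPort: 80 # port on the Pods receiving the traffic
```

Other Pods in the Cluster can now reach the web Pods through a stable DNS name (web, or web.<namespace>.svc.cluster.local in full), regardless of which individual Pods happen to be running.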

Kubernetes Services are great for exposing Pods to the internal network, but in order to expose Kubernetes Services externally, an Ingress is needed. 

What is a Kubernetes Ingress?

An Ingress is an object in Kubernetes used to expose Kubernetes Services externally. An Ingress lets you specify HTTP and HTTPS routes, as well as rules for which routes target which internal Service.

For Ingresses to work, an Ingress Controller needs to be installed in the Cluster, as these do not come natively with Kubernetes. The most common is the NGINX Ingress Controller, but there are several alternatives, such as Traefik or Istio.
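
A sketch of an Ingress that routes external HTTP traffic for a hostname to the Service from the previous section (the hostname is illustrative, and the manifest assumes the NGINX Ingress Controller is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx      # matches the installed Ingress Controller
  rules:
    - host: example.com        # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # the Service defined earlier
                port:
                  number: 80
```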

We have now covered configuration and networking. The last piece of the puzzle that can be useful when deploying an application is storage. Since Pods are ephemeral and isolated, there is a problem with storing and sharing data. The solution in Kubernetes to this problem is Volumes.

What are Kubernetes Volumes?

A Volume in Kubernetes is essentially a directory inside a Pod, possibly containing some data. The particularities of how a Volume is created, where its data is stored, and what it contains all depend on the type of Volume used.

Volumes are an abstraction that solves the issue with the ephemeral nature of Pods and their storage. When a Pod or the container within crashes, any data stored in the internal file system of the container is lost. There is also an issue with trying to share data between multiple containers running within the same Pod. Volumes solve these issues.

There are several different types of Volumes. The most basic is the ephemeral Volume, which is used solely as temporary disk storage and to facilitate communication between containers in the same Pod. These Volumes have the same lifespan as the Pod, meaning that they are deleted when the Pod is deleted. To have data persist for longer, Persistent Volumes are needed.
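
A sketch of a Pod using an ephemeral emptyDir Volume to share data between two containers (images and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /scratch/out.log; sleep 5; done"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 10; tail -f /scratch/out.log"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}      # ephemeral Volume, deleted together with the Pod
```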

What are Kubernetes Persistent Volumes and Persistent Volume Claims?

Persistent Volumes in Kubernetes are Volumes that persist for longer than the lifespan of a Pod, meaning that the data in those Volumes is not affected by the Pod lifecycle.

A Persistent Volume (PV) is a resource in Kubernetes that must either be provisioned by an administrator of a Cluster or dynamically provisioned using a StorageClass. To make use of a Persistent Volume, there is a separate Kubernetes object called a Persistent Volume Claim (PVC). These are requests for a specific amount of PV resources. These PVCs can then be consumed by a Pod in order for it to gain access to the PV data. Dynamic provisioning allows for a PV to be dynamically created to match a newly created PVC, thus not requiring Cluster administrators to manually allocate PV resources any time they are needed.
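
A sketch of a Persistent Volume Claim requesting dynamically provisioned storage (the StorageClass name is illustrative and depends on the Cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce            # mountable read-write by a single Node at a time
  storageClassName: standard   # illustrative; triggers dynamic provisioning if supported
  resources:
    requests:
      storage: 5Gi
```

A Pod then references the claim in its spec.volumes section (persistentVolumeClaim.claimName: data) to mount the underlying PV.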

Configuration, networking, and storage together cover everything that is needed to deploy a complex application in Kubernetes. There are, however, more resources in Kubernetes that are useful to understand. One of these that is especially important for larger environments that require certain isolation between Pods in the Cluster is Namespaces.

What are Kubernetes Namespaces?

Namespaces in Kubernetes are a way of isolating a number of resources in a single Cluster. Namespaces are useful when working in larger environments to organize resources and increase security.

With Namespaces you can limit the reach of objects in Kubernetes, such as only allowing Pods to access Secrets in their own Namespace. Namespaces can also be used to limit the access of users of the Cluster, such as giving application developers access only to those Namespaces in which their application is deployed, avoiding the risk of them modifying Kubernetes’ own Cluster components or other crucial applications. Not all resources in Kubernetes are namespaced (Persistent Volumes, for example, are not), but many are.
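
Creating a Namespace is a one-manifest affair (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a      # illustrative team-specific Namespace
```

Namespaced resources then declare metadata.namespace: team-a in their manifests, or are created with kubectl --namespace team-a.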

Another resource that is not as common, but useful for some specific purposes, is the Job.

What are Kubernetes Jobs and CronJobs?

A Kubernetes Job is a resource that can be used to perform a set task once. A Job can schedule one or more Pods to perform some task until completion. If a specified number of Pods complete successfully, the Job itself completes successfully. 

If the Pods fail for some reason, the Job will retry by rescheduling new Pods a specified number of times until the Job is deemed failed. Jobs can be useful for performing certain specific tasks, such as taking a backup of a database. 

A Kubernetes CronJob is simply a resource that creates Job resources on a regular schedule. CronJobs use the same cron syntax as Unix crontabs to describe the schedule on which Jobs are created.
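
A sketch of a CronJob for a nightly database backup (the image and arguments are hypothetical):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 3 * * *"                 # standard cron syntax: every day at 03:00
  jobTemplate:                          # the Job to create on each run
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: backup-tool:1.0    # hypothetical backup image
              args: ["--target", "db"]  # hypothetical arguments
```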

Jobs are very useful for specific purposes, such as setting up initial configuration when deploying an application, or other maintenance tasks. It is quite common to see Jobs in complex applications or tools that might be useful in conjunction with your own application.

So far, we’ve talked about many Kubernetes concepts in the context of creating and deploying an application of your own. However, in Kubernetes, it is very common to also deploy other tools and applications, for example databases, message queues, or tools for monitoring. One common concept that recurs in such cases is the Kubernetes Operator.

What is a Kubernetes Operator?

A Kubernetes Operator is a Kubernetes extension aimed at using custom resources to automatically deploy and manage an application and its components. This allows for modification of the behavior of Kubernetes without changing its source code.

An Operator is not a native Kubernetes component but rather a design pattern used by applications developed to run on a Kubernetes Cluster. An Operator is supposed to mimic a human user who is managing an application. Operators make use of Custom Resource Definitions (CRDs) to determine what the state of an application is supposed to be. If the configuration differs from the actual state in the Cluster, the Operator automatically modifies the resources running in the Cluster so that the actual state matches the desired configuration. A CRD is an extension of the Kubernetes API that allows new resource types to be added to a Cluster, extending the capabilities of Kubernetes.
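
As an illustrative sketch, a CRD for a hypothetical Database resource managed by an equally hypothetical database Operator could look like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com    # must be <plural>.<group>
spec:
  group: example.com             # hypothetical API group
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:        # desired state the Operator reconciles towards
                  type: integer
```

A user then creates Database objects declaring, for example, a desired replica count, and the Operator watches those objects and creates or adjusts the underlying StatefulSets, Services, and other resources to match.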

Operators are very common for large and complex applications, such as databases or message brokers. These make it a lot simpler for users of a Kubernetes Cluster to deploy these complex applications. Operators are one solution for the deployment of a specific type of application. Another more general tool that is very common to aid in the deployment of applications is Helm, the package manager for Kubernetes.

What is Helm?

Helm is a package manager for Kubernetes, which makes it much easier to configure and install applications running on top of Kubernetes. No matter the complexity of the application, “helm install” is always just a single command for the developer.

In Helm you have the concept of a Helm Chart. This is a package of an application, containing all of the YAML manifests needed to install the application in Kubernetes, as well as all of the configuration options that the application supports. Helm Charts often make extensive use of templating in order to provide the option to modify values in the YAML manifests. Templating allows the user installing an application to define their own configuration for the install, be it resource requests, labels, or even the image that a specific container might use. Helm keeps a versioned history of chart installations, allowing one to roll back to previous versions if needed.
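
As an illustrative sketch, an excerpt of a templated Deployment manifest inside a chart (the file layout and value names are just conventions assumed here) might look like this:

```yaml
# templates/deployment.yaml (excerpt) -- placeholders are filled in from values.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installing the chart with a custom replica count is then as simple as helm install my-release ./chart --set replicaCount=3.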

Conclusion

We have now given a brief introduction to some of the core concepts of Kubernetes. This should be a good starting point for those wanting to get started with their cloud native transformation. Kubernetes is a very large and complex ecosystem of not only native components but also an immense variety of external open-source projects. Check out the full cloud native landscape here. This is what makes Kubernetes such a powerful framework for deploying applications of any size and complexity. We hope this blog was valuable for you and gives you a head start on your Kubernetes journey!

If you want to learn more, Elastisys is a Linux Foundation Authorized Training Partner. We offer the official Linux Foundation courses Kubernetes Administration (LFS458) and Kubernetes for App Developers (LFD459), as well as other shorter or tailor-made courses. Check out our full training offering here: https://elastisys.com/training/.