Master's Thesis: Evaluation of Kubernetes alternatives for fog computing


Fog computing is an emerging technology which aims to overcome the limitations of cloud computing. Kubernetes (K8s) is currently the king of container orchestration systems within cloud computing. Its popularity has also led to the creation of many alternative platforms for creating Kubernetes clusters. These alternatives include K3s, MicroK8s and KubeEdge, which claim to be suitable for fog and edge computing applications. I have compared these platforms to regular K8s to determine how suitable they are for such applications. The comparison was performed on the default configuration of each platform, and the criteria were how lightweight the platforms are and how well they perform in scenarios with geographically distributed nodes and unreliable networks.

This post was written by our thesis worker Elias Åström.

Deeper dive into business value

We expect to see an explosion in the number of devices connected to the internet with the adoption of the Internet of Things (IoT) technology paradigm. This is expected to put more strain on networks and cloud data centers. Many IoT applications also require low latency, which cloud computing fails to provide.

This has motivated the creation of fog computing. Fog computing extends cloud computing to solve its limitations. It does this by providing computation, storage and networking services between end devices and cloud data centers. These computation resources are usually, but not always, located at the edge of networks to enable low latency. The primary advantages of fog computing over cloud computing are that it provides much lower latency, is better suited for applications requiring mobility support, puts less strain on the network and is more energy efficient.

K8s has become one of cloud computing's most popular technologies for a reason. It eliminates much of the manual work involved in deploying and scaling containerized applications. Fog computing has different characteristics from cloud computing, which could mean Kubernetes requires some changes to be suitable for it. A few alternative Kubernetes platforms have been created which claim to be better suited for fog and edge computing. This evaluation attempts to determine which Kubernetes platform is currently best suited for fog computing in its default configuration.

Deeper dive into experimental design and setup

Fog computing is characterised by a geographical distribution of nodes and by wireless networks like 5G playing a predominant role in network communication. Another characteristic is that the hardware in edge and fog nodes has limited computing resources. Because of this, the evaluation focuses on how lightweight the platforms are and how well they perform in scenarios with a geographical distribution of nodes and unreliable networks. In the experiments, an Nginx web server hosting a static website with 64 KiB of data is deployed. To load test it, the server is accessed by simulated users from the load-testing tool Locust.
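As a rough illustration, a headless Locust run against the server might look like the following; the locustfile name, user count, spawn rate and target address are my own illustrative values, not the exact parameters from the thesis:

    # Run Locust without the web UI against the Nginx NodePort.
    # locustfile.py is assumed to define a user class issuing GET
    # requests for the static site; all values here are illustrative.
    locust -f locustfile.py --headless \
        --users 100 --spawn-rate 10 --run-time 10m \
        --host http://192.168.1.10:30080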

In the first experiment a cluster is deployed with three nodes: one master node and two worker nodes. Nginx is deployed on one of the worker nodes and Prometheus is deployed on the other. The web server is accessed through the node port of the service on the worker node from a client machine running Locust. A node port in Kubernetes is a port which is opened on all nodes and forwards traffic to the pods backing the service. A shell script which monitors Linux processes, together with Pushgateway and Prometheus, is used to gather metrics on how much CPU and memory the different platforms use.
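A minimal sketch of this setup with kubectl and curl follows; the resource names, sample value and Pushgateway address are assumptions for illustration rather than the exact commands used in the thesis:

    # Deploy Nginx and expose it through a NodePort service.
    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --type=NodePort --port=80

    # Show which node port Kubernetes allocated for the service.
    kubectl get service nginx

    # Push one resource usage sample to Pushgateway, where Prometheus
    # scrapes it (hypothetical metric name and Pushgateway address).
    echo "worker_cpu_usage 0.02" | \
        curl --data-binary @- http://pushgateway.example:9091/metrics/job/node-monitor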

In the second experiment a cluster is deployed with four nodes: one master node and three worker nodes with simulated geographical locations. Nginx pods are deployed on all the worker nodes. The web server is accessed through the node port of the service on the node running in Stockholm from a client machine also located in Stockholm. The response time of user requests is measured by Locust, and the experiment is run with 0 % and with 5 % simulated packet loss between all nodes. The following figure illustrates the setup for the simulated fog computing scenario along with the simulated locations of the machines.
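One way the per-node placement could be arranged is sketched below; labeling nodes with their simulated locations and scaling the deployment is my own illustrative assumption, not necessarily how the thesis pinned the pods:

    # Label each worker node with its simulated location (names assumed).
    kubectl label node worker-1 location=stockholm
    kubectl label node worker-2 location=london
    kubectl label node worker-3 location=paris

    # Run three Nginx replicas, one per worker node; guaranteeing exactly
    # one pod per node would additionally need pod anti-affinity or a
    # DaemonSet.
    kubectl scale deployment nginx --replicas=3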

To simulate a fog computing scenario the Linux command tc is used to insert artificial latency and packet loss between the nodes. The latency values between the nodes are configured according to the following table.

               Amsterdam   Stockholm   London   Paris
    Amsterdam  n/a         22 ms       7 ms     11 ms
    Stockholm  22 ms       n/a         26 ms    30 ms
    London     7 ms        26 ms       n/a      9 ms
    Paris      11 ms       30 ms       9 ms     n/a
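A hedged sketch of this kind of shaping: netem adds a fixed delay (and optionally loss) to outgoing packets, and per-destination delays can be selected with a prio qdisc and u32 filters. The interface name and peer addresses below are illustrative assumptions:

    # On the Stockholm node: one netem band per peer so each destination
    # gets its own delay. Unclassified traffic falls through to band 1:4.
    tc qdisc add dev eth0 root handle 1: prio bands 4 \
        priomap 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
    tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 22ms   # Amsterdam
    tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 26ms   # London
    tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 30ms   # Paris
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.0.1/32 flowid 1:1
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.0.3/32 flowid 1:2
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.0.4/32 flowid 1:3

    # For the 5 % packet loss runs, loss is added to each netem band, e.g.:
    tc qdisc change dev eth0 parent 1:1 handle 10: netem delay 22ms loss 5%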

Results

The following plot shows the CPU usage of the Kubernetes components for the different platforms on the worker node. The values show that KubeEdge, K8s and K3s are all relatively lightweight, with average CPU usages of 0.0177, 0.0259 and 0.0346 respectively throughout the experiment. MicroK8s is significantly more heavyweight than the other platforms, with an average CPU usage of 0.1244.

The following plot shows the memory usage of the Kubernetes components for the different platforms on the worker node. The values once again show that KubeEdge, K8s and K3s are the most lightweight, with average memory usages of 133.066 MiB, 188.923 MiB and 146.998 MiB respectively throughout the experiment. MicroK8s is significantly more heavyweight than the other platforms, with an average memory usage of 781.396 MiB.

The following plot shows the average and 95th percentile response time over the course of the experiment simulating a fog computing scenario. The only platform with a good response time is KubeEdge, with an average response time of 6 ms. K3s, K8s and MicroK8s all fail to provide a short response time when accessing the web server from an edge node; they had average response times of 60 ms, 122 ms and 122 ms respectively in the experiment.

The following plots show the average and 95th percentile response time over the course of the same experiment, except with 5 % packet loss between all nodes. With 5 % packet loss, the response times of all platforms except KubeEdge increase significantly. The average response times of K3s, K8s and MicroK8s rose to 199 ms, 409 ms and 552 ms respectively, while the average response time of KubeEdge remained at 6 ms. Interestingly, MicroK8s is especially badly affected by the unreliable network connections.

Interpretation of results

When measuring resource usage on the worker nodes, MicroK8s uses significantly more resources than the other platforms evaluated. The reason is that MicroK8s does not differentiate between master nodes and worker nodes in a cluster: every MicroK8s node runs a Kubernetes API server and the other control plane components. As a result, MicroK8s has more responsibilities in the experiment, which understandably means it uses more computing resources. The other platforms (K8s, K3s and KubeEdge) were all quite similar in their resource usage, with KubeEdge being the most lightweight. K3s is also a more lightweight alternative to K8s when looking only at memory usage.

In the simulated fog computing scenario where worker nodes are spread out geographically, all platforms except KubeEdge fail miserably at providing adequate response times for the web service. The reason has to do with how Kubernetes routes requests by default: Kubernetes will attempt to load balance requests across all replicated pods, and this behavior is also present in K3s and MicroK8s. This means that requests are routed to nodes running Nginx pods all over Europe, which results in some requests taking a very long time to get a reply. KubeEdge runs specialized components to provide better capabilities for edge computing, and it appears that this routing behavior is one of the things it has changed.
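As an aside, vanilla Kubernetes does have a non-default setting that keeps NodePort traffic on the node that received it; this falls outside the default-configuration scope of the evaluation, but a sketch of applying it (with an assumed service name) looks like:

    # Stop load balancing NodePort traffic across all replicas and serve
    # it from pods on the receiving node only (service name is assumed).
    # Nodes without a local Nginx pod will then not answer NodePort traffic.
    kubectl patch service nginx \
        -p '{"spec":{"externalTrafficPolicy":"Local"}}'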

Conclusion

Out of all the platforms in the evaluation, KubeEdge is the most suitable alternative for deploying a Kubernetes cluster in a fog computing environment. This is because it has minimal resource requirements for worker nodes to be deployed on devices at the edge, which are often resource constrained. It was also the only platform to perform well in the simulated fog computing scenario.

For worker nodes in a cluster, K3s is a lightweight alternative to K8s when looking at memory usage. MicroK8s is very resource-heavy for worker nodes as a result of the design decision to have every node be a Kubernetes API server.

K8s, K3s and MicroK8s would all benefit greatly from proximity-aware routing in the simulated fog computing scenario. Currently they are very unsuitable for replicating pods over large geographical distances because they naively load balance requests between nodes with no consideration of the latency between them. This increased the response times by several hundred milliseconds in my experiments.

KubeEdge managed to deliver very low response times for the service in the simulated fog computing scenario unlike the other platforms. This together with its low resource usage for fog nodes makes KubeEdge the best Kubernetes platform for fog computing in its default configuration by the criteria used in this evaluation.
