The Elastisys Tech Blog


Elastisys Engineering: How to set up Kubernetes on Exoscale

In this Elastisys Engineering blog post, we show how to set up a production-ready Kubernetes cluster on Exoscale using kubespray and Terraform. The cluster consists of a control plane and a set of worker nodes, and leverages Rook and Ceph for Persistent Volume support.

Why install Kubernetes using kubespray?

The reasons why one might want to install Kubernetes “manually” are many:

  • Allows you to run Kubernetes on cloud providers even if they do not offer a managed Kubernetes service such as AKS, EKS, or GKE.
  • Keeps you independent of the cloud provider’s IAM and other proprietary services, which makes a multi-cloud strategy easier to implement in practice. This way, your tools and processes will not be cloud-vendor specific.
  • Allows full control over the control plane: if you want to install, e.g., an OpenID Connect provider such as Dex to integrate with your Identity Provider (IdP), you can. With a managed service, what you can or cannot do is at the mercy of your cloud provider.

So let’s get to it!

Requirements

Our cluster setup is as follows:

  • One control plane node: 2GB RAM, 2 vCPU, 50GB local storage.
  • Three worker nodes: 8GB RAM, 4 vCPU, 100GB local storage each.

All nodes run Ubuntu 20.04 LTS. The cluster runs Kubernetes v1.19.7 and is installed using kubespray 2.15.0. You will also need Terraform and Ansible installed locally to follow along with this guide.

Infrastructure

The first step is to set up the infrastructure that the cluster will run on.

Terraform

The easiest way to deploy a production-ready Kubernetes cluster is to use the Terraform script that ships with kubespray.

Clone kubespray to get the script. At the time of writing, Exoscale support is only available on the master branch.

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

Create a new folder in the inventory folder for your cluster.

CLUSTER=my-exoscale-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/exoscale/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER

Edit default.tfvars and make sure that ceph_partition_size is set to 50 for all the workers, to match the reference setup above.
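For reference, a worker entry in default.tfvars looks roughly like this (an illustrative excerpt; check your copy of the file for the exact field names, as they may differ between kubespray versions):

machines = {
  ...
  "worker-0" : {
    "node_type" : "worker",
    "size" : "Large",
    "boot_disk" : {
      "image_name" : "Linux Ubuntu 20.04 LTS 64-bit",
      "root_partition_size" : 50,
      "node_local_partition_size" : 0,
      "ceph_partition_size" : 50 # 50 + 50 = 100GB local storage, matching the reference setup
    }
  },
  ...
}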

For authentication, you can use an encrypted credentials file at ~/.cloudstack.ini or ./cloudstack.ini. This file can be created by running:

cat << EOF > cloudstack.ini
[cloudstack]
key = 
secret = 
EOF
# Encrypt the file with your PGP key (replace the placeholder with your key fingerprint)
sops --encrypt --in-place --pgp <your-pgp-fingerprint> cloudstack.ini
# Open the encrypted file for editing
sops cloudstack.ini

Insert your API key as key and your API secret as secret. Follow the Exoscale IAM Quick-start to learn how to generate API keys.

To create the cluster, run the following and follow the instructions on the screen.

terraform init ../../contrib/terraform/exoscale
sops exec-file -no-fifo cloudstack.ini 'CLOUDSTACK_CONFIG={} terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale'

You should now have an inventory file inventory.ini that you can use with kubespray. To test it, and to make sure that all the nodes are properly up and running, run the following:

ansible -i inventory.ini -m ping all
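For reference, the generated inventory.ini follows kubespray’s standard group layout, roughly like this (an illustrative sketch with one worker shown; your host names and IP addresses will differ):

[all]
my-exoscale-cluster-master-0 ansible_host=203.0.113.10
my-exoscale-cluster-worker-0 ansible_host=203.0.113.11

[kube-master]
my-exoscale-cluster-master-0

[etcd]
my-exoscale-cluster-master-0

[kube-node]
my-exoscale-cluster-worker-0

[k8s-cluster:children]
kube-master
kube-node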

Other setup

If you are setting up the nodes yourself, keep in mind that Exoscale, at the time of writing, does not support attaching additional disks to an instance. You therefore need to split the root disk into multiple partitions.

This can be achieved by making sure that your instance has more than 50GB of disk and that the following lines are added to your user-data before booting:

#cloud-config
bootcmd:
- [ cloud-init-per, once, move-second-header, sgdisk, --move-second-header, /dev/vda ]
- [ cloud-init-per, once, create-ceph-part, parted, --script, /dev/vda, 'mkpart 2 50GB -1' ] # Create ceph partition spanning from 50GB from start to end
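Once a node has booted, you can check that the extra partition is there (an illustrative check; device names and sizes depend on your instance):

lsblk /dev/vda
# NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# vda    252:0    0  100G  0 disk
# ├─vda1 252:1    0   50G  0 part /
# └─vda2 252:2    0   50G  0 part            <- raw partition left for Ceph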

More information about this can be found in our blog post Rook Ceph on Kubernetes.

kubespray

When the infrastructure is up and running, it’s time to add Kubernetes on top of all this. If you have followed the suggested way of spinning up the infrastructure, you should be able to run:

ansible-playbook -i inventory.ini ../../cluster.yml -b -v

NOTE: You might want to set kubeconfig_localhost to true in group_vars/k8s-cluster/k8s-cluster.yml to have kubespray copy the kubeconfig file to your machine. Just remember that it will point at the private IP of the server, so update the server address to the IP of the control plane load balancer.
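For example, something along these lines (a sketch assuming kubespray’s default artifact location, inventory/$CLUSTER/artifacts/admin.conf, and a placeholder load balancer address; take the real IP from the Terraform output):

LB_IP=203.0.113.10  # placeholder: your control plane load balancer IP
sed -i "s|server: https://.*:6443|server: https://${LB_IP}:6443|" artifacts/admin.conf
export KUBECONFIG=$PWD/artifacts/admin.conf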

When Ansible is finished, verify that you have access to the Kubernetes cluster by running:

kubectl get nodes
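If everything went well, all four nodes report Ready, along these lines (node names are illustrative and depend on your cluster name):

# NAME                           STATUS   ROLES    AGE   VERSION
# my-exoscale-cluster-master-0   Ready    master   10m   v1.19.7
# my-exoscale-cluster-worker-0   Ready    <none>   9m    v1.19.7
# my-exoscale-cluster-worker-1   Ready    <none>   9m    v1.19.7
# my-exoscale-cluster-worker-2   Ready    <none>   9m    v1.19.7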

Rook

Install Rook by deploying the Rook operator Helm chart:

helm repo add rook-release https://charts.rook.io/release
kubectl create namespace rook-ceph
helm upgrade --install --namespace rook-ceph rook-ceph rook-release/rook-ceph --version v1.5.5 --wait

# Need to use ceph v15.2.7 to be able to use partitions
# See https://github.com/rook/rook/issues/6849
curl -s https://raw.githubusercontent.com/rook/rook/d381196/cluster/examples/kubernetes/ceph/cluster.yaml | kubectl apply -n rook-ceph -f -
curl -s https://raw.githubusercontent.com/rook/rook/d381196/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml | kubectl apply -n rook-ceph -f -
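Once the Ceph cluster is up (kubectl -n rook-ceph get cephcluster should eventually report HEALTH_OK), you can verify Persistent Volume support with a small test claim against the rook-ceph-block storage class created above (a minimal sketch):

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
EOF

# The claim should become Bound once the CSI provisioner has created the volume
kubectl get pvc rook-test-pvc
kubectl delete pvc rook-test-pvc  # clean up the test claim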

Summary

In this article, we have shown how to set up a production-ready Kubernetes cluster on Exoscale using kubespray and Terraform, complete with Persistent Volume support via Rook and Ceph. These steps help you set up Kubernetes clusters in cloud environments where no managed service is available, or where your use case is such that you prefer not to use one.

Read more of our engineering blog posts

This blog post is part of our engineering blog post series. Experience and expertise, straight from our engineering team. Always with a focus on technical, hands-on HOWTO content with copy-pasteable code or CLI commands.
Would you like to read more content like this? Check out the other blog posts in this series!

