Cross Data Center Redundancy Now Available for Enterprise Compliant Kubernetes Customers

Elastisys is proud to announce general availability of our cross data center redundancy feature for Enterprise customers of our Managed Compliant Kubernetes Service. Using secure inter-cluster networking, a replicated networked file system, and Kubernetes’ fault tolerance features, customers can enjoy seamless automatic data replication and workload migration. This ensures business continuity and minimizes downtime, because data is already present where it is needed, with little to no impact on end-users.

When disaster strikes: data center or cloud region failures

Kubernetes clusters are typically deployed to a single cloud region. If that region fails, a disaster recovery plan has to be triggered: administrators scramble to stand up equivalent infrastructure in a different cloud region and restore data from backups. Some early adopters use “federation” (e.g. via Kubefed) to deploy applications to multiple clusters across regions. But federated clusters do not collaborate; they are merely centrally managed and coordinated. Thus, they do not offer automatic data migration and failover in case a region fails.

With the solution Elastisys is announcing today, data is seamlessly distributed and replicated across data centers (cloud regions). Thus, if a disaster strikes, Pods and their persistent data are available to serve end-user requests practically immediately.
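To illustrate the storage side, here is a minimal sketch of what a replicated storage class and a claim against it could look like in plain Kubernetes terms. The provisioner name and its parameters are hypothetical placeholders standing in for whichever replicated storage provider backs the cluster; the actual configuration in the managed service may differ.

```yaml
# Hypothetical StorageClass backed by a storage provider that replicates
# volume data across data centers. The provisioner name and parameters are
# illustrative placeholders, not the actual configuration of the service.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated-cross-dc
provisioner: replicated.csi.example.com   # placeholder CSI driver
parameters:
  replicaCount: "2"                       # one replica per data center (assumed parameter)
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
# Workloads request storage from the replicated class as usual; the
# cross data center replication is transparent to the application.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: replicated-cross-dc
  resources:
    requests:
      storage: 10Gi
```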

Seamless automatic data replication and workload migration

Enterprise customers of the Elastisys Managed Compliant Kubernetes service can now opt in to our cross data center redundancy solution. With this option, a multi-region cluster is provisioned, and data is automatically replicated across regions by the underlying storage provider. In case of a regional failure, the data is therefore already replicated and ready to use, and workloads are automatically started up in the still-functioning part of the cluster.
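To illustrate the workload side, the minimal sketch below shows a stateful workload mounting a volume from the replicated storage class sketched earlier. Assuming the volume data is replicated to both data centers, Kubernetes can reschedule the Pod onto a node in the surviving data center after a failure, where its data is already present. The workload name, image, and paths are illustrative.

```yaml
# Minimal sketch: a stateful workload whose data lives on the replicated
# volume from the previous sketch. If the data center hosting the Pod fails,
# Kubernetes reschedules it onto a node in the surviving data center, where
# the replicated volume data is already available. Names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/orders
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data   # bound to the replicated StorageClass above
```

For stateless services, the same goal is typically reached by running several replicas spread across regions, for example with standard Kubernetes topology spread constraints.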

The picture below shows how the setup is designed, and we also describe it in detail in this technical post.