Is it possible to join two separate kubernetes clusters? - kubernetes

I have deployments on one Kubernetes cluster that I might want to move to another Kubernetes cluster in the future. Is it possible to combine these two clusters or must I redeploy everything? If the answer is yes, what if there are StatefulSets?

The short answer is no.
You can connect clusters with something like Kubernetes Federation, or, if you have Calico, you can use something like BGP peering, but that doesn't merge them into a single cluster.
You'll have to redeploy everything, and in the case of StatefulSets it really depends on where you are storing your state. For example:
Is it MySQL? Back up your database and restore it in the new place.
Is it Cassandra? Can you reattach the same persistent volumes in the cloud provider? If not, you'll have to transfer your data.
Is it etcd, Consul, or ZooKeeper? Can you back it up or attach the same persistent volumes?
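For the MySQL case, a minimal sketch of the dump-and-restore flow; the context names, pod name, and credentials here are all hypothetical placeholders, not anything from your setup:

```shell
# Dump all databases from the pod in the old cluster
# (context names, pod name, and credentials are placeholders).
kubectl --context old-cluster exec mysql-0 -- \
  sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' > dump.sql

# Load the dump into the MySQL pod running in the new cluster.
kubectl --context new-cluster exec -i mysql-0 -- \
  sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD"' < dump.sql
```

The `sh -c` wrapper makes sure the password variable is expanded inside the pod rather than on your workstation.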

Related

Kubernetes Snapshots

I have an on-premise Rancher server with two clusters in it. Let's call them cluster A and cluster B. In cluster A, I am creating a DB snapshot, and I need to copy that snapshot into cluster B. I am not a Kubernetes expert, so could someone help me with good ideas to achieve this, or some reference materials that I could refer to for this task?
There could be multiple ways to synchronize data between clusters. First, see if your hypervisor supports volume replication. That way, you can copy data across to another volume and mount that volume into the applications in the secondary cluster.
Another approach is to use Velero with Restic to back up the volumes to an object store (MinIO/S3) and then restore them in the second cluster, as shown in this example.
OpenEBS sounds like another viable option, but I haven't had a chance to work with it yet. Linstor is another solution I have heard of.
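The Velero-with-Restic flow above can be sketched as follows; the backup and namespace names are made up, and this assumes Velero was installed with the Restic integration enabled and that both clusters point at the same object-store bucket:

```shell
# In cluster A: back up the namespace, including pod volume data via Restic.
velero backup create db-snapshot \
  --include-namespaces my-db \
  --default-volumes-to-restic

# In cluster B (with Velero configured against the same bucket): restore it.
velero restore create --from-backup db-snapshot
```

Newer Velero releases rename the Restic feature to "file system backup", so check which flag your version expects.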

Can I sync Persistent volume between two k8s clusters?

I have deployed two k8s clusters, and I want that if someone creates a PV in the first cluster, it automatically gets created in the second cluster. How can I achieve this?
Simply speaking, you can't: these are separate clusters, and each of them has a separate configuration. There is no built-in mechanism for triggering between separate clusters. You would need to build your own program that watches both API servers and applies the changes.
I'm guessing, however, that you probably want to share filesystem data between clusters: if so, have a look at volume types backed by network/distributed file systems such as NFS or Ceph.
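For the shared-filesystem case, the same network-backed PersistentVolume manifest can simply be applied in both clusters. A sketch, where the NFS server address and export path are placeholders:

```shell
# Apply the same manifest in each cluster so both can mount the share.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany       # NFS supports many writers across nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5      # hypothetical NFS server reachable from both clusters
    path: /exports/shared
EOF
```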

Can two kubernetes clusters share the same external etcd and work like master slave

We have a requirement to set up a geo-redundant cluster. I am looking at sharing an external etcd cluster between two Kubernetes clusters. It may sound absurd at first, but the requirements have come down to it. I am seeking some direction on whether it is possible, and if not, what the challenges are.
Yes, it is possible: you can have a single etcd cluster with multiple k8s clusters attached to it. The key to achieving this is the --etcd-prefix string flag of the Kubernetes API server. This way each cluster uses a different root path for storing its resources and avoids conflicts with the second cluster in etcd. In addition, you should also set up the appropriate RBAC rules and certificates for each k8s cluster. You can find more detailed information in the following article: Multi-tenant external etcd for Kubernetes clusters.
EDIT: Ooh wait, I just noticed that you want those two clusters to behave as master-slave. In that case you could achieve it by assigning the slave cluster a read-only role in etcd and changing it to read-write when it has to become master. Theoretically it should work, but I have never tried it, and I think the best option is to use built-in k8s mechanisms for high availability, like leader election.
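A sketch of the relevant API server flags; the etcd endpoints and prefixes below are examples (--etcd-prefix defaults to /registry), and the many other required kube-apiserver flags are omitted:

```shell
# Cluster A's kube-apiserver, storing its state under its own prefix:
kube-apiserver \
  --etcd-servers=https://etcd-0:2379,https://etcd-1:2379,https://etcd-2:2379 \
  --etcd-prefix=/cluster-a/registry

# Cluster B's kube-apiserver points at the same etcd, different prefix:
kube-apiserver \
  --etcd-servers=https://etcd-0:2379,https://etcd-1:2379,https://etcd-2:2379 \
  --etcd-prefix=/cluster-b/registry
```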

Recover a Kubernetes Cluster

At the moment I have a Kubernetes cluster deployed on AWS via kops. I have a doubt: is it possible to make a sort of snapshot of the Kubernetes cluster and recreate the same environment (master and worker nodes), for example to be resilient or to migrate the cluster in an easy way? I know that Heptio Ark exists, and it is very nice. But I'm curious to know if there is an easier way to do it. For example, is it enough to back up etcd (or in my case, to snapshot the EBS volumes)?
Thanks a lot. All suggestions are welcome.
kops stores its state in an S3 bucket identified by the KOPS_STATE_STORE environment variable. So yes, if your cluster has been removed, you can restore it by running kops create cluster.
Keep in mind that this doesn't restore your etcd state, so for that you'll need to set up etcd backups separately. You could also make use of Heptio Ark.
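A sketch of the recovery flow, with a hypothetical bucket and cluster name; if the cluster spec is still present in the state store, re-applying it rebuilds the cloud resources:

```shell
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # placeholder bucket

# The full cluster spec lives in the state store, so applying it again
# recreates the AWS resources (instances, ASGs, etc.):
kops update cluster --name my-cluster.example.com --yes
```

This restores the infrastructure, not the workloads: etcd state and persistent data still come from your separate backups.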
Similar answers to this topic:
Recover kops Kubernetes cluster
How to restore kubernetes cluster using kops?
As mentioned by Rico in the earlier post, you can use Velero to back up your etcd using its CLI client. Another option to consider for the scenario you described is CAPE: CAPE provides an easy-to-use control plane for Kubernetes multi-cluster app and data management via a friendly user interface.
See below for resources:
How to create an on-demand K8s Backup:
https://www.youtube.com/watch?v=MOPtRTeG8sw&list=PLByzHLEsOQEB01EIybmgfcrBMO6WNFYZL&index=7
How to Restore/Migrate K8s Backup to Another Cluster:
https://www.youtube.com/watch?v=dhBnUgfTsh4&list=PLByzHLEsOQEB01EIybmgfcrBMO6WNFYZL&index=10

usecase for etcd inside Kubernetes

I was just wondering why it is useful to run an etcd cluster inside Kubernetes, when Kubernetes itself depends on etcd.
It just does not make sense to me: if I have HA Kubernetes, I am also forced to have HA etcd outside it. Hence there seems to be no reason to install it again inside...
I have an external etcd that manages my k8s HA cluster, and I'm not letting any developer apps near it. I would be too concerned about something going wrong and breaking the k8s cluster. It is also a fixed size of 3, which works well for the cluster size and its requirements. If the developers need a key/value store for their app and want etcd, this would be a great way to provide one in the cluster for the applications. Being StatefulSets, it's scalable.
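One way to give applications their own etcd as a StatefulSet is a Helm chart, for example the Bitnami one; the release name and replica count below are just illustrative choices:

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami

# Deploys an application-facing etcd as a 3-replica StatefulSet,
# entirely separate from the control-plane etcd.
helm install app-etcd bitnami/etcd --set replicaCount=3
```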
If you're using Kubernetes via GKE, the underlying etcd cluster is not exposed in any way.