Persistent Storage (Gluster/Other storage) for Kubernetes

I am trying to set up Kubernetes with Gluster using Heketi. After several tries, I am still not able to get it working. I am using RancherOS with the Rancher application running on it, which manages Kubernetes. I am now looking for alternative approaches.
1) Set up GlusterFS and Heketi on a standalone RHEL host:
In this case, my question is whether such a standalone setup can even serve a Kubernetes cluster running on another set of hosts, or do I need to have the GlusterFS servers on the same hosts that Kubernetes manages? If it's the former, will I have to mount the Gluster partition on the Kubernetes nodes?
2) Kubernetes with any other persistent storage provider:
We cannot use cloud providers for sure, and paid options are not possible either, so I was considering Ceph. Has anyone implemented this successfully, and is there something similar to Heketi for a Ceph setup?

#Technext, we can help you configure Heketi + GlusterFS in a Kubernetes setup. Many users have already configured this. If you haven't tried it yet, please check the gluster-kubernetes repo: https://github.com/gluster/gluster-kubernetes.
Can you please let us know the error you run into when you deploy Heketi with Gluster? We can try to help you reach this goal.
Waiting for your reply.
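For reference, the usual deployment flow with that repo looks roughly like the sketch below, assuming the deploy scripts shipped in the repo; the node names and raw device paths in the topology file are placeholders you will need to fill in for your environment:

git clone https://github.com/gluster/gluster-kubernetes.git
cd gluster-kubernetes/deploy
# Describe your storage nodes and their raw block devices in a topology file
# (the repo ships a topology.json.sample as a starting point).
cp topology.json.sample topology.json
# Edit topology.json: one entry per node, with the node's hostname and the
# raw devices (e.g. /dev/sdb) that Heketi should manage.
# Deploy GlusterFS (as a DaemonSet) and Heketi into the cluster:
./gk-deploy -g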

Related

How to claim & configure Bitnami Docker Discourse data in Kubernetes Persistent volume?

Currently I have Bitnami Discourse in Docker with the data stored in the pod, and we scale the pods from 1 to many. Now I am facing an error with media uploads: the pods' data are not in sync, so I have to mount a single volume and share it between the pods. I want to do this with a persistent volume claim in Kubernetes, backed by an Azure storage class, rather than a Docker volume.
I presume you are using your own Kubernetes manifests, as Bitnami isn't currently maintaining a Discourse chart. What I understand you need is a volume that can be accessed by many pods.
I think you need a ReadWriteMany volume; this is not usually supported by cloud providers, but let's give it a try on Azure.
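Something along these lines should work as a starting point (names and sizes are placeholders, and it assumes the in-tree azure-file provisioner is available in your cluster):

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: discourse-shared
spec:
  accessModes:
    - ReadWriteMany        # the same volume can be mounted by many pods
  storageClassName: azurefile
  resources:
    requests:
      storage: 10Gi
EOF

Each Discourse pod can then mount the same claim at the uploads path so media stays in sync across replicas.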
I hope it helps.

Is there a way to enable nested virtualization in GKE cluster node?

I am trying to use KubeVirt with a GKE cluster.
I found that I am able to create a nested-virtualization-enabled GCP VM, but I haven't found a way to achieve the same thing for a GKE cluster node.
If I cannot enable nested virtualization for the GKE cluster nodes, I can only use KubeVirt with debug.useEmulation, which is not what I want.
Thanks
Yes you can -- it isn't even hard to do, it just isn't very intuitive.
Start a GKE cluster with ubuntu/containerd, n1-standard nodes, and a minimum CPU platform of Haswell. I think you also need to enable "Basic Authorization" to get virtctl working (sorry).
Find the instance template used for your new cluster, then determine the proper source image:
gcloud compute instance-templates describe $TEMPLATE_NAME --format=json | jq ".properties.disks[0].initializeParams.sourceImage"
Create a copy of the source image with nested virtualization enabled:
gcloud compute images create $NEW_IMAGE_NAME --project $PROJECT --source-image $SOURCE_IMAGE --source-image-project=$SOURCE_PROJECT --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
Use "Create Similar" on the template for your GKE cluster. Change the boot disk to $NEW_IMAGE_NAME. You will also need to drill down to networking/alias and change the default subnet to your pod network.
Trigger a rolling update on the group for your GKE nodes to move them to the new template.
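If you prefer the CLI for that step, something like this should start the rolling update (the instance group, template name, and zone are placeholders):

gcloud compute instance-groups managed rolling-action start-update $NODE_GROUP \
    --version template=$NEW_TEMPLATE --zone $ZONE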
You can now install KubeVirt (I had to use 0.38.1 instead of the current release).
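For reference, installing that version is roughly the standard KubeVirt install (adjust the version if needed):

export KUBEVIRT_VERSION=v0.38.1
# Deploy the KubeVirt operator, then the KubeVirt custom resource it watches for.
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
# Wait until all KubeVirt components report ready.
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m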
Caveats: I don't know how to use Google disk images for KubeVirt, which would be an obvious match. I haven't even figured out how to get private GCR working with CDI. Oh, and the console doesn't work due to WebSocket problems. But... you can shell into a GKE node and see /dev/kvm, and you can launch a VM with KubeVirt and then SSH into it, so yes, it does work.
Anyone know how to make any of this better?
Currently, nested virtualization is available only on GCE, as per these docs.
There is already a question about supporting nested virtualization on GKE, and it can be found here. I'd say it's not introduced yet; that's why you cannot find proper documentation about GKE and nested virtualization.
Also, please consider that GCE and GKE are quite different.
A Google Compute Engine VM instance is unmanaged by Google. So, apart from the ready-made base image, you can do whatever you need, as if it were a normal VM.
However, Google Kubernetes Engine was created especially for containers. Those VMs are managed by Google. GKE already creates the cluster for you, and all VMs are automatically part of the cluster. In GKE you are unable to run Minikube or kubeadm.
Here are some characteristics of GKE.

Deploy Kubernetes on OpenStack

I am trying to understand the relationship between Kubernetes and OpenStack. I am confused about deploying Kubernetes on OpenStack, and doing my research I found there are too many tutorials. My understanding of the sequence is:
1. Start several Nova instances on OpenStack.
2. Install the Kubernetes master on one instance and Kubernetes nodes on the other instances.
3. Submit a YAML file using kubectl and Kubernetes will create and deploy my application.
As for Kubernetes's self-healing capability, can Kubernetes restart failed Nova instances? Which component in Kubernetes is responsible for restarting/rebooting/deleting/re-provisioning Nova instances? Is it the Kubernetes master? If so, what happens if the Kubernetes master is down and cannot be recovered?
1, 2 and 3 are correct.
Self-healing
You can deploy the masters in an HA configuration. The recommended way is either 3 or 5 masters, with a quorum of (n + 1)/2.
Can Kubernetes re-provision/restart some of the failed Nova instances?
Not really. That is up to Nova, which manages all the server services. Kubernetes has an OpenStack cloud provider module that allows it to interact with OpenStack components, for example to create external load balancers and to create volumes that can be used by your workloads/pods/containers.
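For example, once the OpenStack cloud provider is configured, a plain LoadBalancer Service is enough for Kubernetes to request an external load balancer from OpenStack (the names below are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # the OpenStack cloud provider creates the load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
EOF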
You can either use kubeadm or kubespray to bootstrap a cluster.
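A minimal kubeadm bootstrap on those Nova instances could look roughly like this (the pod CIDR and the Flannel CNI are just example choices; kubespray automates the same steps with Ansible):

# On the instance chosen as the master/control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install a pod network add-on (Flannel here, matching the CIDR above):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# On each worker instance, run the join command printed by kubeadm init:
# sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>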
Hope it helps.
If you want to deploy Kubernetes on top of OpenStack, I would recommend that you look into OpenStack Magnum. This is the most common use case for OpenStack and Kubernetes.
There is also the possibility of running the OpenStack control plane under Kubernetes, which would allow you to better scale and auto-heal OpenStack services. This is primarily for the control plane (e.g. nova-api); as far as I know, there is no way of running nova-compute under Kubernetes.
I found a good blog post here that describes some of the benefits of such an approach.
Yes, you're spot on with your observations in the case of running Kubernetes on top of OpenStack, and the other answers here already give you further pointers. I just wanted to point out, in addition, that the other way around is also an option, that is, running OpenStack on top of Kubernetes, for example using OpenStack-Helm.

Recover a Kubernetes Cluster

At the moment I have a Kubernetes cluster deployed on AWS via kops. I have a question: is it possible to take a sort of snapshot of the Kubernetes cluster and recreate the same environment (master and worker nodes), for example for resilience or to migrate the cluster easily? I know that Heptio Ark exists, and it is very nice, but I'm curious to know whether there is an easier way to do it. For example, is it enough to back up etcd (or, in my case, to snapshot the EBS volumes)?
Thanks a lot. All suggestions are welcome.
kops stores its state in an S3 bucket identified by KOPS_STATE_STORE. So yes, if your cluster has been removed, you can restore it by running kops create cluster against the same state store.
Keep in mind that this doesn't restore your etcd state, so for that you will need to set up etcd backups. You could also make use of Heptio Ark.
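A rough sketch of what that can look like (bucket, cluster, and backup names are placeholders; Heptio Ark is now called Velero, and this assumes the Velero server and CLI are already installed):

# Recreate the cluster infrastructure from the kops state store:
export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops update cluster --name my-cluster.example.com --yes

# Back up and restore workloads with Velero (formerly Heptio Ark):
velero backup create full-backup
velero restore create --from-backup full-backup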
Similar answers to this topic:
Recover kops Kubernetes cluster
How to restore kubernetes cluster using kops?
As mentioned by Rico in the earlier post, you can use Velero to back up your etcd data using its CLI client. Another option to consider for the scenario you described is CAPE, which provides an easy-to-use control plane for Kubernetes multi-cluster app and data management via a friendly user interface.
See below for resources:
How to create an on-demand K8s Backup:
https://www.youtube.com/watch?v=MOPtRTeG8sw&list=PLByzHLEsOQEB01EIybmgfcrBMO6WNFYZL&index=7
How to Restore/Migrate K8s Backup to Another Cluster:
https://www.youtube.com/watch?v=dhBnUgfTsh4&list=PLByzHLEsOQEB01EIybmgfcrBMO6WNFYZL&index=10

Rancher connect to kubernetes instead of start kubernetes

Rancher is designed (as best as I can tell) to own and run a Kubernetes cluster. Rancher does provide a configuration so that kubectl can interact with the Kubernetes cluster. Rancher seems like a nice tool, but as far as I can tell there is no way to connect it to an existing Kubernetes cluster. Is there any way to do this?
If you are looking for a service that can connect to existing k8s clusters, then try Containership. You can use kubectl and/or the Containership UI to manage your workloads, config maps, etc. on multiple clusters.
Hope this helps!
I got this answer on the Rancher forums:
There is not; most of the value we can add at the moment is around configuring, managing, and controlling access to the installation we set up.
https://forums.rancher.com/t/rancher-connect-to-kubernetes-instead-of-start-kubernetes/3209