I am working with Kubernetes and I am trying to dynamically provision volumes on top of a Gluster cluster with Heketi through Kubernetes PVCs.
My data could become corrupted or even lost, so I need to know the best way to back up and restore logical volumes (LVs) for a running Gluster cluster on top of Kubernetes.
The best approach for backing up GlusterFS is to use geo-replication.
https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/
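For reference, a minimal sketch of setting up such a session with the Gluster CLI, assuming a primary volume named myvol and a secondary (backup) cluster reachable at backup-host with a target volume backupvol; all names are placeholders, and the prerequisites (passwordless SSH between the clusters, the secondary volume already created) are covered in the linked guide:

```bash
# Sketch only: myvol, backup-host and backupvol are placeholders.
# One-time setup: generate and distribute the geo-replication SSH keys
# from a node in the primary cluster.
gluster system:: execute gsec_create

# Create, start and monitor a geo-replication session from the primary
# volume to the volume on the secondary cluster.
gluster volume geo-replication myvol backup-host::backupvol create push-pem
gluster volume geo-replication myvol backup-host::backupvol start
gluster volume geo-replication myvol backup-host::backupvol status
```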
Can we use Kubernetes volumes for Deployments? If yes, does that mean multiple pods share the same volume?
If that is possible, what happens when the pods of the Deployment end up on different host machines?
This matters especially with Amazon EBS, where an EBS volume cannot be shared across multiple hosts.
Yes, you can use a persistent volume with a Deployment.
Such a volume will be mounted at your desired location in all of the pods.
If you use EBS block storage, all of your pods will need to be scheduled on the same node where the volume is attached. This may not work if you have many replicas.
You will have to use network file storage, such as EFS, GlusterFS, or Portworx, with the ReadWriteMany access mode if you want your pods to be spun up on different nodes.
EBS will give you the best performance, with the aforementioned single-node limitation.
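A minimal sketch of that setup, assuming a ReadWriteMany-capable StorageClass named glusterfs already exists in the cluster (the class name, resource names and image are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # lets pods on different nodes mount the same volume
  storageClassName: glusterfs  # placeholder: any RWX-capable class (EFS, GlusterFS, Portworx, ...)
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # replicas may be scheduled on different nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: shared
              mountPath: /data
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data
EOF
```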
I have a cluster with two nodes that use local storage. I want to move a pod and its volume from node 1 to node 2 because the disk on node 1 is getting full. Thanks.
Either use a volume snapshot:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support
or use Velero, an open-source tool to safely back up and restore Kubernetes resources and PVs:
https://velero.io/
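For example, with Velero already installed and a backup location configured, the move could look roughly like this (names are placeholders):

```bash
# Back up the namespace that contains the pod and its PVC/PV.
# (Local volumes typically need Velero's file-system backup feature enabled.)
velero backup create app-backup --include-namespaces my-app

# Restore it (in the same cluster after freeing node 1, or in another cluster);
# the volume data is recreated wherever the pod can now be scheduled.
velero restore create --from-backup app-backup
```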
I have a requirement where I would like to mount an EFS filesystem that has been created in AWS directly onto a pod in an EKS cluster, without mounting it on the actual EKS node.
My understanding was that if the EFS filesystem can be treated as an NFS server, then a PV/PVC can be created from it and mounted directly onto an EKS pod.
I have done the above with EBS, but on vanilla Kubernetes rather than EKS. I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentation I have read says that the path is mounted on the node and then into the Kubernetes pods, but I would like to bypass the mount on the node and mount the filesystem directly into the EKS pods.
Is there any documentation I can refer to?
That is not possible: pods run on nodes, so the filesystem has to be mounted on the nodes that host the pods.
Even when you did it with EBS, under the bonnet it was still attached to the node first.
However, you can restrict access to AWS resources with IAM using kube2iam, or you can use the EKS-native solution of assigning IAM roles to Kubernetes service accounts. The benefit of kube2iam is that it will also work with kOps, should you migrate to it from EKS.
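For completeness, the pattern the question describes (treating EFS as an NFS server behind a PV/PVC) looks roughly like the sketch below; the kubelet still performs the NFS mount on whichever node ends up hosting the pod. The filesystem DNS name and sizes are placeholders:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi               # EFS is elastic; this value is only used for matching
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder EFS mount target
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the pre-created PV instead of a dynamic class
  resources:
    requests:
      storage: 5Gi
EOF
```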
Currently, I have an Artifactory instance deployed in a cluster, but now the cluster is down and I can't find the reason, so I have started another cluster.
The data from the old cluster is on a cloud disk, which was exposed by creating a PV and a PVC. Now I want to mount that disk in the new cluster and use that data. Is that possible, and how can I implement it?
Thanks.
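This should be possible with a statically provisioned PV, as long as the new cluster's nodes are allowed to attach the disk. A rough sketch, assuming purely for illustration that the cloud disk is an AWS EBS volume (the volume ID, filesystem type and sizes are placeholders; substitute your cloud's volume source):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: old-artifactory-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # never delete the underlying disk
  awsElasticBlockStore:                   # assumption: replace with your cloud's volume source
    volumeID: vol-0123456789abcdef0       # the existing disk that holds the data
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: artifactory-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: old-artifactory-data        # bind explicitly to the pre-created PV
  resources:
    requests:
      storage: 100Gi
EOF
```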
In GlusterFS, I create a volume and set auth.allow for all of the Kubernetes nodes.
Then I can use a Kubernetes Endpoints object to consume the GlusterFS volume.
But if I create many RCs or pods using the GlusterFS endpoints, all of their data ends up under the same Gluster volume path /.
I know I can create more GlusterFS volumes and Kubernetes endpoints for each RC or pod to use, but I'm not sure that is the best practice.
If you want different data sets, you need to create different volumes. I'm not sure what other answer you're looking for?
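A sketch of that layout, with one shared Endpoints object and one PV per Gluster volume so that each workload gets its own data set (IPs and volume names are placeholders; note this uses the legacy in-tree glusterfs volume plugin, which has been removed from recent Kubernetes releases):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.0.0.1           # placeholder Gluster node IPs
      - ip: 10.0.0.2
    ports:
      - port: 1                # dummy port; required by the Endpoints schema
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-app1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: app1-volume          # a dedicated Gluster volume per data set
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-app2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: app2-volume
EOF
```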