Kubernetes: How to re-use a PVC and PV in another cluster

Currently I have Artifactory deployed in a cluster, but that cluster is now down and I can't find the reason, so I have started another cluster.
The data from the old cluster is on a cloud disk, from which I had created a PV and a PVC. Now I want to mount that disk in the new cluster and use that data. Is that possible, and how can I implement it?
Thanks.
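In general, yes: as long as the disk itself still exists at the cloud provider, you can statically re-create a PV in the new cluster that points at that disk and bind a PVC to it by name. A minimal sketch; the names, size, CSI driver and volume handle below are placeholders to replace with the values for your own cloud disk:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: artifactory-pv
spec:
  capacity:
    storage: 50Gi                         # size of the existing disk
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # never delete the underlying disk
  storageClassName: ""                    # keep dynamic provisioning out of the way
  # Replace this volume source with the one matching your cloud disk,
  # e.g. gcePersistentDisk, awsElasticBlockStore, azureDisk, or another csi block.
  csi:
    driver: pd.csi.storage.gke.io         # example driver name (assumption)
    volumeHandle: projects/my-project/zones/my-zone/disks/my-disk   # placeholder
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: artifactory-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: artifactory-pv              # bind explicitly to the PV above
  resources:
    requests:
      storage: 50Gi

Once the PVC is Bound, reference it from the Artifactory pod spec as usual; with reclaim policy Retain the disk survives even if the claim is deleted again.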

Related

Persistent Volume and Kubernetes upgrade

What happens to the persistent volume after a cluster upgrade?
The Kubernetes cluster is for a stateful application. It has one PV and a corresponding PVC for storing input data. I would like to understand whether there is a way to preserve the input data during a K3s upgrade.
Kubernetes PVs are not necessarily created on the node's local disk: when you kill your StatefulSet pod, it may be rescheduled on a different node and still get the same PV.
Most cloud providers use their block storage services as the default backend for K8s PVs (e.g. AWS EBS), and they provide additional CSI (Container Storage Interface) drivers for other storage services (e.g. an NFS service).
So when you upgrade your cluster, you can re-use your data as long as it is stored outside the cluster; you just need to check which CSI driver you are using and read its documentation to understand where the volume is actually created.
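For example, a minimal StorageClass sketch that keeps the backing volume (and therefore the data) when a claim is deleted; the provisioner shown is the AWS EBS CSI driver and is only an example, substitute whichever CSI driver your cluster actually uses:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-block-storage
provisioner: ebs.csi.aws.com        # example CSI driver; check yours with `kubectl get storageclass`
reclaimPolicy: Retain               # keep the backing volume when the PVC is deleted
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3                         # EBS-specific parameter; other drivers take different keys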

Active MQ in HA Shared Database (Master/Slave) on Kubernetes with StatefulSet

I am in the process of deploying ActiveMQ 5.15 in HA on Kubernetes. Previously I was using a Deployment and a ClusterIP Service, and it was working fine: the master would boot up and the slave would wait for the lock to be acquired. If I deleted the pod that was the master, the slave picked up the lock and became the master.
Now I want to try with a StatefulSet, basing myself on this thread.
The deployment was done successfully and two pods were created, with id 0 and id 1. But what I noticed is that both pods were masters: they were both started. I also noticed that two PVCs were created (id 0 and id 1) in the case of the StatefulSet, compared to the Deployment which had only one PVC. Could that be the issue, since it is no longer shared storage? Can we still achieve a master/slave setup with a StatefulSet?
You are right. When using k8s StatefulSets, each Pod gets its own persistent storage (a dedicated PVC and PV), and this persistent storage is not shared.
When a Pod gets terminated and is rescheduled on a different Node, the Kubernetes controller will associate the Pod with the same PVC, which guarantees that its state is intact.
In your case, to achieve a master/slave setup, consider using a shared network location / filesystem for persistent storage (see the sketch after this list), for example:
NFS storage for an on-premises k8s cluster.
AWS EFS for EKS.
Azure Files for AKS.
Check the complete list of PersistentVolume types currently supported by Kubernetes (implemented as plugins).
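A minimal sketch of that approach, assuming an RWX-capable StorageClass named nfs-client and a matching headless Service exist in the cluster (the class name, image and paths are assumptions): a single ReadWriteMany PVC is referenced from the StatefulSet's pod template instead of volumeClaimTemplates, so both brokers see the same lock file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: activemq-shared-data
spec:
  accessModes:
    - ReadWriteMany              # shared storage, so both pods can mount it
  storageClassName: nfs-client   # assumed NFS-backed class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq
spec:
  serviceName: activemq          # assumes a headless Service named activemq
  replicas: 2
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: activemq
          image: rmohr/activemq:5.15.9   # example image, adjust to your own
          volumeMounts:
            - name: data
              mountPath: /opt/activemq/data   # assumed KahaDB data dir
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: activemq-shared-data   # the same claim for every replica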

Create a kubernetes cluster using Disks is not allowed in GCP

I want my Kubernetes data to be backed up, so I have a persistent disk attached to the Kubernetes cluster and have set reclaimPolicy: Retain in the storage.yaml file to make sure the disk is not deleted.
After the deletion of the Kubernetes cluster the disk remains under Compute Engine -> Disks, but with that disk I am unable to create a Kubernetes cluster; I only get the option to create a VM in GCP.
Is there a way to create a new Kubernetes cluster with the existing disk on GCP?
To use your existing persistent disk in a new GKE cluster, you'll need to first create the new cluster, then:
create and apply new PersistentVolume and PersistentVolumeClaim objects that reference the name of your existing PD.
Once those exist in your cluster, you'll be able to give a Pod's container access to that volume by specifying values for the container's volumeMounts.mountPath and volumeMounts.name in the Pod definition file, as in the sketch below.
You'll find more details about how to achieve this in the GKE documentation on using preexisting persistent disks.
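A sketch of what those objects could look like, assuming the legacy in-tree GCE PD volume source and a disk named my-existing-disk (the disk name, size, image and mount path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-disk-pv
spec:
  capacity:
    storage: 100Gi              # must match the size of the existing disk
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the disk if the PVC is deleted
  storageClassName: ""          # empty class so no dynamic provisioning interferes
  gcePersistentDisk:
    pdName: my-existing-disk    # name of the disk under Compute Engine -> Disks
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: existing-disk-pv  # bind explicitly to the PV above
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx              # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data      # the volumeMounts.mountPath mentioned above
  volumes:
    - name: data                # volumeMounts.name must match this volume name
      persistentVolumeClaim:
        claimName: existing-disk-pvc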

How to attach an EFS volume directly to an EKS Pod

I have a requirement where I would like to mount an EFS filesystem that has been created in AWS directly to a Pod in an EKS cluster, without mounting it on the actual EKS node.
My understanding was that if EFS can be treated as an NFS server, then a PV/PVC can be created from it and mounted directly into an EKS Pod.
I have done the above using EBS, but with vanilla Kubernetes rather than EKS. I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentation I have read says that the mount path is mounted on the node and then into the k8s pods, but I would like to bypass the mount on the node and mount the filesystem directly into the EKS pods.
Is there any documentation I can refer to?
That is not possible, because pods run on nodes, so the filesystem has to be mounted on the node that hosts the pod.
Even when you did it with EBS, under the bonnet the volume was still attached to the node first.
However, you can restrict access to AWS resources with IAM using kube2iam, or you can use the EKS-native mechanism of assigning IAM roles to Kubernetes service accounts. The benefit of using kube2iam is that it will also work with kOps, should you migrate to it from EKS.
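For reference, this is roughly how EFS is consumed from EKS with the AWS EFS CSI driver (statically provisioned; the kubelet still mounts it on the node and bind-mounts it into the pod). The filesystem ID and names are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi               # EFS is elastic, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: efs-pv           # bind explicitly to the PV above
  resources:
    requests:
      storage: 5Gi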

backup and restore Logical volumes for glusterfs and kubernetes

I am working with Kubernetes and I tried to dynamically provision volumes on top of a Gluster cluster and Heketi through k8s PVCs.
It can happen that my data gets corrupted or lost, so I need to know the best way to back up and restore logical volumes (LVs) for a running Gluster cluster on top of Kubernetes.
The best approach to backing up GlusterFS is to use geo-replication:
https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/