Creating a Kubernetes cluster using existing disks is not allowed in GCP

I want my Kubernetes data to be backed up, so I have a persistent disk attached to my Kubernetes cluster and have set reclaimPolicy: Retain in the storage.yaml file to make sure the disk is not deleted.
After deleting the Kubernetes cluster, the disk is retained under Compute Engine -> Disks. But I am unable to create a Kubernetes cluster from that disk; GCP only gives me the option to create a VM from it.
Is there a way to create a new Kubernetes cluster that uses the existing disk on GCP?

To use your existing Persistent Disk in a new GKE cluster, you'll need to first create the new cluster, then:
create and apply new PersistentVolume and PersistentVolumeClaim objects based on the name of your existing PD.
Once those exist in your cluster, you'll be able to give a Pod's container access to that volume by specifying values for the container's volumeMounts.mountPath and volumeMounts.name in the pod definition file.
You'll find more details about how to achieve this in the GKE documentation on using preexisting persistent disks as PersistentVolumes.
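As an illustration, here is a minimal sketch of those two objects, assuming the in-tree gcePersistentDisk volume type and placeholder names (my-retained-disk stands in for your disk's name under Compute Engine -> Disks, and 10Gi for its size):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-pd
spec:
  capacity:
    storage: 10Gi              # must match the size of the retained disk
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""         # empty string disables dynamic provisioning
  gcePersistentDisk:
    pdName: my-retained-disk   # the disk name shown in Compute Engine -> Disks
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-pd-claim
spec:
  storageClassName: ""
  volumeName: existing-pd      # bind directly to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

A container then mounts the claim through volumes.persistentVolumeClaim.claimName together with volumeMounts.name and volumeMounts.mountPath in the pod spec.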

Related

Persistent Volume and Kubernetes upgrade

What happens to the persistent volume after a cluster upgrade?
The Kubernetes cluster is for a stateful application. It has one PV and a corresponding PVC for storing input data. I would like to understand whether there is a way to preserve the input data during a K3s upgrade.
Kubernetes PVs are not created on the node's disk storage: when you kill your StatefulSet pod, it may be redeployed on a different node, with the same PV attached.
Most cloud providers use their block storage services as the default backend for K8s PVs (e.g., AWS EBS), and they provide other CSI (Container Storage Interface) drivers to use other storage services (e.g., an NFS service).
So when you upgrade your cluster, you can reuse your data as long as it is stored outside the cluster; you just need to check which CSI driver you are using and read its documentation to understand where the storage is created.
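For example, to check which provisioner and CSI driver back your volumes (the output depends entirely on your cluster):

kubectl get storageclass   # the PROVISIONER column shows the driver, e.g. ebs.csi.aws.com
kubectl get pv -o wide     # lists each PV with its storage class and bound claim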

PersistentVolumeClaim used by multiple pods: one for writing and another for backup

In a Kubernetes cluster on Oracle cloud, I have a pod with an Apache server.
This pod needs a persistent volume so I used a persistentVolumeClaim and the cloud provider is able to automatically create an associated volume (Oracle Block Volume).
The access mode used by the PVC is ReadWriteOnce, and therefore the volume created has the same access mode.
Everything works great.
Now I want to back up this volume using Borg Backup and borgmatic by starting a new pod regularly with a CronJob.
This backup pod needs to mount the volume in read only.
Question:
Can I use the previously defined PVC?
Do I need to create a new PVC with a read-only access mode?
As per documentation:
ReadWriteOnce:
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.
That means that if you enforce a strict rule that both pods are scheduled on the same node, you can use the same PVC; see the Kubernetes documentation on assigning Pods to nodes, and the sketch below.
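As a rough sketch, the backup CronJob could mount the existing claim read-only and use pod affinity to land on the same node as the Apache pod. The label app: apache, the claim name apache-data, and the image name are assumptions to adapt to your setup:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: borg-backup
spec:
  schedule: "0 2 * * *"            # run daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            podAffinity:           # co-locate with the Apache pod (RWO constraint)
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: apache          # assumed label on the Apache pod
                  topologyKey: kubernetes.io/hostname
          containers:
            - name: borgmatic
              image: my-borgmatic-image    # placeholder for your borgmatic image
              volumeMounts:
                - name: data
                  mountPath: /mnt/data
                  readOnly: true
          restartPolicy: OnFailure
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: apache-data     # the PVC already used by Apache
                readOnly: true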

Access Kubernetes Persistent Volume data

Is there any way to access Google Cloud Kubernetes persistent volume data without using a pod? I cannot start the pod due to data corruption in the persistent volume. Is there a command-line tool or any other way?
If you have concerns about running the pod with your specific application, you can instead run a plain Ubuntu pod, attach it to the PVC, and access the data from there.
Another option is to clone the PV and PVC, perform your testing against the newly created PV and PVC, and keep the old ones as a backup.
For cloning the PV and PVC you can use a tool such as Velero: https://velero.io/
You can also attach the PVC to the pod in read-only mode and try accessing the data.
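A minimal sketch of such an inspection pod, assuming your claim is named your-pvc-name:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - name: inspector
      image: ubuntu:22.04
      command: ["sleep", "infinity"]   # keep the container alive for inspection
      volumeMounts:
        - name: data
          mountPath: /mnt/data
          readOnly: true               # avoid writing to the corrupted volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: your-pvc-name       # replace with your actual claim name
        readOnly: true

You can then open a shell with kubectl exec -it pvc-inspector -- bash and inspect the files under /mnt/data.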
PersistentVolume resources are cluster resources that exist independently of Pods. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated.
It is possible to save data from your PersistentVolume even with Status: Terminating and the reclaim policy set to the default (Delete). Your PersistentVolume will not actually be removed while there is a pod, a deployment, or, to be more specific, a PersistentVolumeClaim still using it.
The steps we took to remedy our broken state are as follows:
The first thing you want to do is to create a snapshot of your PersistentVolumes.
In the GCP console, go to Compute Engine -> Disks, find the disk backing your volume, and create a snapshot of it. To find the name of the underlying disk, use
kubectl get pv | grep pvc-name
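If you prefer the command line, the same snapshot can be created with gcloud (the disk, snapshot, and zone names below are placeholders):

gcloud compute disks snapshot name-of-disk --snapshot-names=name-of-snapshot --zone=your-zone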
Use the snapshot to create a disk:
gcloud compute disks create name-of-disk --size=10GB --source-snapshot=name-of-snapshot --type=pd-standard --zone=your-zone
At this point, stop the services using the volume and delete the old volume and volume claim.
Re-create the volume manually, backed by the new disk, and update your volume claim to target that specific volume by name.
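The updated claim pins the re-created volume by name. A minimal sketch of that claim (restored-pv is a placeholder; the matching PV is built just like the one sketched in the first answer, pointing at name-of-disk):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: ""        # empty string avoids dynamic provisioning
  volumeName: restored-pv     # bind to the manually re-created PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # must not exceed the PV's capacity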
For more information refer to the links below.
Accessing file shares from Google Kubernetes Engine clusters.
Configure a Pod to Use a PersistentVolume for Storage

Kubernetes: How to re-use pvc and pv in another cluster

Currently, I have Artifactory deployed and running in a cluster, but now the cluster is down and I can't find the reason, so I have started another cluster.
The data from the old cluster is on a cloud disk, for which I had created a PV and a PVC. Now I want to mount that disk in the new cluster and use its data. Is that possible, and how do I implement it?
Thanks.

Who can create a persistent volume in Kubernetes?

It's mentioned on the Kubernetes official website, as below, for PV and PVC.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Who is the administrator here, when they mention it from the persistent volume perspective?
An administrator in this context is the admin of the cluster: whoever is deploying the PV/PVC (an operations engineer, a systems engineer, a sysadmin).
For example, an engineer can configure AWS Elastic File System (EFS) to have space available in the Kubernetes cluster, then use a PV/PVC to make that available to a specific pod container in the cluster, as sketched below. This means that if the pod is destroyed for whatever reason, the data in the PVC persists and is available to other resources.
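As an illustration only, such an admin-provisioned file-share volume could look like the sketch below, here using the in-tree NFS volume type against an EFS mount target (the server address, names, and size are made-up placeholders; in practice you might use the EFS CSI driver instead):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi               # EFS is elastic; this value mainly serves claim matching
  accessModes:
    - ReadWriteMany            # a file share can be mounted by many nodes at once
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # hypothetical EFS DNS name
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Because the pod mounts the claim rather than the disk itself, deleting the pod leaves the data on EFS untouched.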