Change the NFS server behind a persistent volume in Kubernetes

I have a Kubernetes cluster with some applications, including Minio and Vault. Those applications use some PVs with the NFS plugin. I want to replace the NFS server with another one (with better performance). I don't want to lose any data. How can I do that?
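For context, this is a minimal sketch of the kind of NFS-backed PV being discussed; the server address, export path, and sizes are placeholders, not values from the question:

```yaml
# Hypothetical NFS-backed PV of the kind being migrated.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keeps the backing data if the PVC is deleted
  nfs:
    server: old-nfs.example.com           # placeholder: the server to be replaced
    path: /exports/minio
```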

Related

Persistent Volume and Kubernetes upgrade

What happens to the persistent volume post cluster upgrade?
The Kubernetes cluster is for a stateful application. It has one PV and a corresponding PVC for storing input data. I would like to understand whether there is a way to preserve the input data during a K3s upgrade.
Kubernetes PVs are not created on the node's disk storage: when you kill your StatefulSet pod, it may be redeployed on a different node with the same PV.
Most cloud providers use their block storage services as the default backend for K8s PVs (e.g. AWS EBS), and they provide other CSI (Container Storage Interface) drivers to use other storage services (e.g. an NFS service).
So when you upgrade your cluster, you can reuse your data if it is stored outside the cluster; you just need to check which CSI driver you are using and read its documentation to understand where the data is created.
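As a rough illustration of data living outside the cluster, a statically defined CSI-backed PV might look like the sketch below; the driver name is the real AWS EBS CSI driver, but the resource names and volume ID are placeholders:

```yaml
# Sketch of a PV whose data lives outside the cluster (names are placeholders).
# With reclaimPolicy Retain, the backing volume survives PVC deletion and
# cluster upgrades; only the in-cluster API objects would need recreating.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: input-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ebs.csi.aws.com               # check which CSI driver your cluster uses
    volumeHandle: vol-0123456789abcdef0   # placeholder volume ID
```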

Can Kubernetes volumes be used for deployments? If so, what happens if each pod is on a different host?

Can we use Kubernetes volumes for deployments? If yes, does that mean multiple pods share the same volume?
If that is possible, what happens when the pods of the deployment are on different host machines?
Especially when using Amazon EBS, where an EBS volume cannot be shared across multiple hosts.
Yes, you can use a persistent volume for deployments.
Such a volume will be mounted at your desired location in all the pods.
If you use EBS block storage, all your pods will need to be scheduled on the same node where the volume is attached. This may not work if you have many replicas.
You will have to use network file storage, such as EFS, GlusterFS, or Portworx, with ReadWriteMany access mode if you want your pods to be spun up on different nodes (see the sketch below).
EBS will give you the best performance, with the aforementioned single-node limitation.
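A minimal sketch of the ReadWriteMany setup described above; the storage class name "efs-sc" is a placeholder for whatever EFS/NFS-style class your cluster provides:

```yaml
# A ReadWriteMany PVC shared by all replicas of a Deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany          # required for pods spread across nodes
  storageClassName: efs-sc   # placeholder storage class
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # replicas can land on different nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data
```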

Google Cloud Storage as volume for Kubernetes Stateful Set

Is it possible to mount a GCP Storage bucket into a Kubernetes Pod? I am aware of other volume options, such as Persistent Volume Claims backed by GCP's Standard Persistent Disks, which are the disks used by GCP's Compute Engine instances. However, when a Standard Persistent Disk is attached to a Compute Engine instance, it cannot be shared with a Pod.
My use case requires that external applications be able to write to some kind of storage while that same storage is mounted into a Kubernetes Pod.
You need a PV with the ReadWriteMany access mode; according to this table, there is no such option in GCP out of the box. A solution would be to provision your own NFS server for that, e.g. as described here.
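A minimal sketch of the resulting NFS-backed ReadWriteMany PV, assuming a self-hosted NFS server; the server address and export path are placeholders:

```yaml
# NFS-backed PV with ReadWriteMany, assuming you run your own NFS server.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com   # placeholder: your NFS server address
    path: /exports
```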

Who can create a persistent volume in Kubernetes?

The official Kubernetes website describes PVs and PVCs as follows:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Who is the administrator here, when they mention it from the persistent volume perspective?
An administrator in this context is the admin of the cluster: whoever is deploying the PV/PVC (an operations engineer, a systems engineer, a sysadmin).
For example, an engineer can configure AWS Elastic File System to make space available to the Kubernetes cluster, then use a PV/PVC to make that storage available to a specific pod container in the cluster. This means that if the pod is destroyed for whatever reason, the data in the PVC persists and is available to other resources.
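A sketch of that admin/user split; all names are hypothetical, and EFS is mounted here via its NFS interface:

```yaml
# The administrator provisions the PV; a user requests it with a PVC.
apiVersion: v1
kind: PersistentVolume          # created by the cluster administrator
metadata:
  name: admin-provisioned-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder EFS DNS name
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim     # created by the user/application team
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```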

On GCE Kubernetes, how can I create volumes where multiple consumers can write?

The K8s Volume documentation mentions that only a single consumer can write to a GCE PD. What can be used on GCE for volumes where multiple consumers can write simultaneously, for example when hosting a private Docker registry?
I see a sample for creating an NFS volume on GCE. Is there a more straightforward solution that I am missing?
I followed this solution to:
create a GCE PD,
host an NFS server with the GCE volume mounted at "/exports",
use this NFS server as a volume.
This was easy to do. One change I made was to set storageClassName: "" in the GCE PD PV and PVC manifests, as I did not have a default storage class (sketched below).
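For reference, that change amounts to something like the following; the resource names, sizes, and disk name are placeholders, not values from the linked sample:

```yaml
# Setting storageClassName to "" binds the PVC to statically created PVs
# instead of waiting for a (missing) default StorageClass to provision one.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pd-pv
spec:
  storageClassName: ""
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: nfs-disk     # placeholder GCE PD name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pd-pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```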