Persistent Volume and Kubernetes upgrade

What happens to the persistent volume after a cluster upgrade?
The Kubernetes cluster is for a stateful application. It has one PV and a corresponding PVC for storing input data. I would like to understand whether there is a way to preserve the input data during a K3s upgrade.

Kubernetes PVs are not necessarily created on the node's local disk: when your StatefulSet pod is killed, it may be rescheduled on a different node and still be attached to the same PV.
Most cloud providers use their block storage service as the default backend for Kubernetes PVs (e.g. AWS EBS), and they provide additional CSI (Container Storage Interface) drivers for other storage services (e.g. NFS).
So when you upgrade your cluster, you can re-use your data as long as it is stored outside the cluster. Just check which CSI driver you are using and read its documentation to understand where the data is actually created.
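As a minimal sketch of that idea (the driver name and volume ID below are placeholders, not values from the question), you can point a freshly upgraded or re-created cluster back at a volume that already exists in the storage backend by creating the PV and PVC by hand:

```yaml
# Sketch: re-attach pre-existing data by declaring a PV that references a
# volume that already exists in the backend. Check your CSI driver's docs
# for the real driver name and volume handle.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: input-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the backend volume when the PV is released
  storageClassName: ""                    # disable dynamic provisioning for this PV
  csi:
    driver: example.csi.vendor.com        # assumption: your CSI driver's name
    volumeHandle: vol-0123456789abcdef    # assumption: ID of the existing volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: input-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: input-data-pv               # bind explicitly to the PV above
  resources:
    requests:
      storage: 10Gi
```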

Related

Deploying Openstack Magnum on bare metal

When speaking about an OpenStack Magnum deployment of a Kubernetes cluster (on bare metal nodes), is it somehow possible to leverage local disks on those nodes to act as persistent storage for containers?
Thanks a lot in advance.
OpenStack Magnum uses Cinder to provision storage for the Kubernetes cluster. As you can read here:
In some use cases, data read/written by a container needs to persist so that it can be accessed later. To persist the data, a Cinder volume with a filesystem on it can be mounted on a host and be made available to the container, then be unmounted when the container exits.
...
Kubernetes allows a previously created Cinder block to be mounted to a pod and this is done by specifying the block ID in the pod YAML file. When the pod is scheduled on a node, Kubernetes will interface with Cinder to request the volume to be mounted on this node, then Kubernetes will launch the Docker container with the proper options to make the filesystem on the Cinder volume accessible to the container in the pod. When the pod exits, Kubernetes will again send a request to Cinder to unmount the volume's filesystem, making it available to be mounted on other nodes.
Its usage is described in this section of the documentation.
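A rough sketch of what that looks like in a pod manifest is below; the volume ID is a placeholder, and note that the in-tree cinder volume type shown here has since been superseded by the Cinder CSI driver on newer clusters:

```yaml
# Sketch: mount a pre-created Cinder volume into a pod by its block ID.
apiVersion: v1
kind: Pod
metadata:
  name: cinder-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      cinder:
        volumeID: 90d6900d-808f-4ddb-a30e-5ef821f58b4e   # assumption: ID of an existing Cinder volume
        fsType: ext4
```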
If setting up Cinder seems like too much overhead, you can use the local volume type, which allows you to use a local storage device such as a disk, partition, or directory already mounted on a worker node's filesystem.
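For illustration only (the node name and mount path are made up), a local PV must also carry node affinity so the scheduler knows which node owns the disk:

```yaml
# Sketch of a "local" PersistentVolume, assuming /mnt/disks/ssd1 is already
# mounted on a worker node named worker-1 (both values are placeholders).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:                    # local volumes must be pinned to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
```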

Google Cloud Storage as volume for Kubernetes Stateful Set

Is it possible to mount a GCP Storage bucket into a Kubernetes Pod? I am aware of other volume options such as Persistent Volume Claims backed by GCP's Standard Persistent Disks, which are the disks used by GCP Compute Engine instances. However, when a Standard Persistent Disk is attached to a Compute Engine instance, it cannot be shared with a Pod.
My use case requires external applications to be able to write to some kind of storage while that same storage is mounted into a Kubernetes Pod.
You need a PV with the ReadWriteMany access mode; according to this table, there are no such options in GCP out of the box. A solution would be to provision your own NFS server for that, e.g. as described here.
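For example, a hand-provisioned NFS-backed PV/PVC pair supports ReadWriteMany, so several pods (and the NFS export itself, from outside the cluster) can share the same data. The server address and export path below are placeholders for whatever NFS server you set up:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10        # assumption: IP of your NFS server
    path: /exports/shared    # assumption: exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: shared-nfs-pv
  resources:
    requests:
      storage: 50Gi
```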

Create a kubernetes cluster using Disks is not allowed in GCP

I want the Kubernetes data to be backed up, so I have a persistent disk attached to the Kubernetes cluster and have set reclaimPolicy: Retain in the storage.yaml file to make sure the disk is not deleted.
After deleting the Kubernetes cluster, the disk remains under Compute Engine -> Disks, but I am unable to create a Kubernetes cluster from that disk; GCP only offers the option to create a VM from it.
Is there a way to create a new Kubernetes cluster with the existing disk on GCP?
To use your existing Persistent Disk in a new GKE cluster, you'll need to first create the new cluster, then:
create and apply new PersistentVolume and PersistentVolumeClaim objects based on the name of your existing PD.
Once those exist in your cluster, you'll be able to give a Pod's container access to that volume by specifying values for the container's volumeMounts.mountPath and volumeMounts.name in the pod definition file.
You'll find more details about how to achieve this in the doc here.
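Putting those steps together, a hedged sketch might look like the following; the disk name is a placeholder for your retained Compute Engine disk, assumed to be in the same zone as the new cluster:

```yaml
# Sketch: reuse a retained Compute Engine disk in a new GKE cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-pd-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: my-retained-disk    # assumption: name of the retained Compute Engine disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-pd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: existing-pd-pv
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data               # matches volumes.name below
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: existing-pd-pvc
```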

Kubernetes: How to re-use pvc and pv in another cluster

Currently, I have Artifactory deployed in a cluster, but now the cluster is down and I can't find the reason, so I have started another cluster.
The data from the old cluster is on a cloud disk, which was exposed through a PV and a PVC. Now I want to mount that disk in the new cluster and use that data. Is that possible, and how can I implement it?
Thanks.

who can create a persistent volume in kubernetes?

It's mentioned on the Kubernetes official website as below for PV and PVC.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
Who is the administrator here, when they mention it from the persistent volume perspective?
An administrator in this context is the admin of the cluster, whoever is deploying the PV/PVC (an operations engineer, system engineer, SysAdmin).
For example, an engineer can configure AWS Elastic File System (EFS) to have space available in the Kubernetes cluster, then use a PV/PVC to make that space available to a specific pod container in the cluster. This means that if the pod is destroyed for whatever reason, the data in the PVC persists and is available to other resources.
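As an illustration of that split, the administrator might publish something like the PV below (the EFS filesystem ID is a placeholder), while the application team only creates the claim:

```yaml
# Sketch: admin-provisioned, EFS-backed PV plus a user-created PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv                       # created by the administrator
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # assumption: your EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                      # created by the user / application team
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```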