Kubernetes: Force value of "$pvName" with nfs-client-provisioner - kubernetes

I use nfs-client-provisioner inside my kubernetes cluster.
But, the name of the PersistentVolume is random.
cf. doc:
nfs-client-provisioner
--> Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}
But where can I change the value of pvName?
Currently it's random, for example: pvName = pvc-2v82c574-5bvb-491a-bdfe-061230aedd5f

${namespace}-${pvcName}-${pvName} is the naming convention for the directories created on the NFS server's share; those directories correspond to the PV names.
The PV name itself, when provisioned dynamically by nfs-client-provisioner, follows this naming convention:
pvc- + claim.UID
Background information:
According to the design proposal of external storage provisioners (NFS-client belongs to this category), you must not declare volumeName explicitly in PVC spec.
# volumeName: must be empty!
pv.Name MUST be unique. Internal provisioners use a name based on claim.UID so that a conflict is produced when two provisioners accidentally provision a PV for the same claim; external provisioners, however, can use any mechanism to generate a unique PV name.
In the case of the nfs-client provisioner, the pv.Name generation is handled by the controller library, and it gets the following format:
pvc- + claim.UID
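For illustration, here is a minimal PVC that such a dynamic provisioner would act on. The storageClassName managed-nfs-storage is the class name used in the nfs-client-provisioner examples and is an assumption here; note that volumeName is left out, as required:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim          # hypothetical name
  namespace: default
spec:
  storageClassName: managed-nfs-storage   # must match your provisioner's class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  # volumeName: must stay empty - the provisioner names the PV "pvc-<claim.UID>"
The resulting PV will be named pvc-<claim.UID>, and the backing directory on the NFS share will be named ${namespace}-${pvcName}-${pvName}.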
Source
I hope it helps.

Related

What if several PVs meet the PVC spec?

I'm studying k8s and got a question about PV and PVC binding.
A PVC defines the specs it wants (capacity, access mode, etc.) in the YAML file
and finds an appropriate PV in the cluster to bind to.
Here, let's say our PVC wants at least 5GB capacity and RWO (ReadWriteOnce) mode.
And there are two PVs
PV1: 5GB, RWO
PV2: 10GB, RWO
Which one would bind to the PVC? Both of them meet the spec of the PVC.
Plus, what if the pod fails and is recreated?
If the PV works as we want (in Retain mode), I think the same PV should be bound to the PVC (pod) again to preserve the data. Does k8s guarantee this?
If there's something ambiguous in my question, please let me know.
Thank you.
Which one would bind to the PVC? Both of them meet the spec of the PVC.
The number provided in the PVC specification is always a concrete request, and the PV that best fits the requirement (the smallest one that satisfies it) is the one that gets bound. In this case, it will be PV1: 5 GiB, RWO.
If the PV works as we want (in Retain mode), I think the same PV should be bound to the PVC (pod) again to preserve the data. Does k8s guarantee this?
Yes, it is guaranteed. However, you will first need to ensure that you manually 'bind' the PVC to the PV using reservation.
Also, understand that a pod dying/restarting has no effect on a PVC->PV mapping. That is the entire point of having PersistentVolumes in the first place, they should be isolated from crashes in the pods that mount them. As soon as the pod comes back up, the PVC will be mounted as a volume again, and everything will be restored.
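As a sketch of that reservation, you pre-bind by pointing the PVC at the PV by name (and the PV back at the claim via claimRef). The names pv1 and my-claim, and the NFS backend, are made up for the example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1                 # hypothetical PV
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:                 # reserves this PV for the claim below
    namespace: default
    name: my-claim
  nfs:                      # example backend; any volume source works
    server: 10.0.0.1
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim            # hypothetical PVC
  namespace: default
spec:
  volumeName: pv1           # binds the claim to this exact PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi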
You can always learn more from the official documentation.

Kubernetes how different mountPath share data in single pod

I read an article (from here) in which data is shared in the same Pod between 2 different containers. Both containers have a volumeMount on the volume named shared-data, but each uses a different mountPath.
My question is: if these mountPaths are not the same, how are they sharing data? And what is the path for the volume shared-data? My thought is that both should have the same path in order to share data, so it seems like I have mistaken some concept, but I'm not sure which.
Kubernetes maintains the storage internally. It doesn't have a fixed path that you can see, and it doesn't matter if it gets mounted in the same place in different containers.
By way of analogy, imagine you have an external USB drive. If you've unplugged the drive, it doesn't make sense to ask "what is its path"; and if you plug it in and mount it on /mnt/usb on one machine, that doesn't stop you from mounting it on /home/me/app/data when you plug it into a different machine.
The volume does have a name within its pod (in your example, shared-data). If the volume is backed by a PersistentVolumeClaim that will also have a name. Potentially the matching PersistentVolume is something like an AWS EBS volume, and that will have a name. But none of these names are fixed filesystem paths, and for the most part you can't directly use these to access the file content.
There is only one volume being created, "shared-data", which is declared in the pod and is initially empty:
volumes:
  - name: shared-data
    emptyDir: {}
It is shared between the two containers. That volume exists at the pod level, and its existence depends only on the pod, not on the two containers. However, it is bind-mounted by both of them, meaning whatever you add or edit in one container or the other will affect the volume (in your case, adding index.html from the debian container). And yes, you can find the path of the volume on the node: /var/lib/kubelet/pods/PODUID/volumes/kubernetes.io~empty-dir/VOLUMENAME. There is a similar question answered here.
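A minimal sketch of that kind of pod (image names and paths are illustrative), where both containers see the same files even though their mountPath values differ:
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-demo            # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: writer
      image: debian
      command: ["/bin/sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data                  # writer's view of the volume
    - name: reader
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html      # reader's view of the same volume
A file written to /pod-data/index.html by the writer shows up at /usr/share/nginx/html/index.html in the reader, because both paths are mounts of the same underlying emptyDir.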

Is it necessary to create a persistent volume object and then claim it using a persistent volume claim, or can we directly use a storage class?

As the Kubernetes documentation says, if we use a StorageClass then it creates a dynamic PV (PersistentVolume) object according to our needs, and using a PVC (PersistentVolumeClaim) we can claim it. Now my question is: if I create a StorageClass object, do we still need to create a PV object, or can we use a PVC to claim storage directly from the StorageClass object?
I mean, what is the point of creating PV objects then?
By "object" I mean creating a YAML manifest for it.
Creating a PV by hand is needed only if you don't have a StorageClass. If there is a StorageClass capable of dynamic provisioning, a manually created PV is not needed: the provisioner creates one for you when the PVC is submitted. For a StorageClass to be able to perform dynamic provisioning, a driver which implements the CSI spec (or another external provisioner) needs to be installed in the Kubernetes cluster, which is not always available, possible, or supported.
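For example, assuming a StorageClass named standard exists in the cluster (a common default, but an assumption here), this PVC on its own is enough; no PV manifest is written by hand:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                  # hypothetical name
spec:
  storageClassName: standard        # assumes this StorageClass exists
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
The StorageClass's provisioner creates a matching PV and binds it to this claim automatically.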

Kubernetes volume: one shared volume and one dedicated volume between replicated pods

I'm new to Kubernetes and learning it.
I have pods managed by a Deployment with replicas=3.
Is there any way I can mount a separate volume for each pod and one shared volume for all pods?
Requirements:
Case 1: My application generates a temp file named tempfile.txt. There are three replica pods, and each one will generate tempfile.txt, but the content might differ. So if I use a shared volume, they will overwrite each other.
Case 2: I have a common file that is not part of the image and will be used by all pods when starting the application, i.e. copy files from the host into every pod's container.
Thanks in Advance.
There are multiple ways to achieve the first part. Here is mine:
Instead of a Deployment, use a StatefulSet to create the replicas. StatefulSets allow you to include a volume claim template, from which a volume is created for each pod, so each new pod will have a new PV created specifically for it.
This does require your cluster to allow for dynamically provisioned volumes.
Depending on the size of your tempfile.txt, your use case, and your cluster/node configuration, you might also want to consider using a hostPath volume which will use the local storage of your node.
For the second part of your question, using any readWriteMany volume will work (such as any NFS option).
On the note of subPath, this should also work, as long as you define a different subPath for each pod. The example in the link provided by DT does this by creating a subPath based off the pod name.
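A rough sketch combining both cases: a per-pod volume via volumeClaimTemplates (case 1) and one ReadWriteMany volume shared by all replicas (case 2). The names, the image, and the pre-existing shared-files PVC are assumptions for illustration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp                        # hypothetical name
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:latest        # placeholder image
          volumeMounts:
            - name: scratch          # per-pod volume for tempfile.txt
              mountPath: /tmp/work
            - name: shared           # common files visible to all pods
              mountPath: /config
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-files  # assumes an existing ReadWriteMany PVC
  volumeClaimTemplates:              # one PVC (and PV) is created per pod
    - metadata:
        name: scratch
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi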

How do you reuse a volume in Kubernetes?

Let's say that you wanted to create a Jenkins Deployment. As Jenkins uses a local XML file for configuration and state, you would want to create a PersistentVolume so that your data could be saved across Pod evictions and Deployment deletions. I know that the Retain reclaimPolicy will result in the data persisting on the detached PersistentVolume, but the documentation says this is just so that you can manually reclaim the data on it later on, and seems to say nothing about the volume being automatically reused if its mounting Pods are ever brought back up.
It is difficult to articulate what I am even trying to ask, so forgive me if this seems like a nebulous question, but:
If you delete the Jenkins deployment, then later decide to recreate it where you left off, how do you get it to re-mount that exact PersistentVolume on which that specific XML configuration is still stored?
Is this a case where you would want to use a StatefulSet? It seems like, in this case, Jenkins would be considered "stateful."
Is the PersistentVolumeClaim the basis of a volume's "identity"? In other words, is the expectation for the PersistentVolumeClaim to be the stable identifier by which an application can bind to a specific volume with specific data on it?
You can use StatefulSets. Scaling down deletes the pods but leaves the claims alone; PersistentVolumeClaims can be deleted only manually, in order to release the underlying PersistentVolume.
A scale-up can then reattach the same claim, along with the bound PersistentVolume and its contents, to the newly created pod instance.
If you have accidentally scaled down a StatefulSet, you can scale up again and the new pod will have the same persisted state again.
If you delete the Jenkins deployment, then later decide to recreate it where you left off, how do you get it to re-mount that exact PersistentVolume on which that specific XML configuration is still stored?
By using the PersistentVolumeClaim that was bound to that PersistentVolume, assuming the PersistentVolumeClaim and its PersistentVolume haven't been deleted. You should be able to try it :-)
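For instance, assuming the surviving claim is named jenkins-home (a made-up name for this sketch), a recreated Deployment just mounts it by name and picks up the same data:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          volumeMounts:
            - name: home
              mountPath: /var/jenkins_home      # Jenkins state, including config XML
      volumes:
        - name: home
          persistentVolumeClaim:
            claimName: jenkins-home             # the surviving PVC, still bound to its PV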
Is this a case where you would want to use a StatefulSet? It seems like, in this case, Jenkins would be considered "stateful."
Yes, you could use StatefulSet for its stable storage. With no need for persistent identities and stable hostnames, though, I'm not sure of the benefits compared to a master and dynamic slaves Deployment. Unless the idea is to partition the work (e.g. "areas" of the source control repo) across several Jenkins masters and their slaves...
Is the PersistentVolumeClaim the basis of a volume's "identity"? In other words, is the expectation for the PersistentVolumeClaim to be the stable identifier by which an application can bind to a specific volume with specific data on it?
Yes (see my answer to the first question) - the PersistentVolumeClaim is like a stable identifier by which an application can mount the specific volume the claim is bound to.