PVC behavior with Dynamic Provisioning and a ReplicaSet - Kubernetes

I have a question, please: for a PVC that is bound to one PV through a dynamic StorageClass, on a pod created by a ReplicaSet, if that pod gets terminated and restarted on another host, will it get the same PV?
What I saw is that the Pod could not be rescheduled until the same PV was available again, but I am not able to understand what the standard behavior should be, and how a PVC should behave differently between a ReplicaSet and a StatefulSet.

Does "another host" mean another Kubernetes node?
If the Pod gets restarted or terminated and is scheduled again on another node, then as long as the PVC and PV still exist, the disk will be mounted to that node and the Pod will start running again. So yes, the PVC and PV will be the same, but this still depends on the reclaim policy.
You can read more about it here: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets
PersistentVolumes can have various reclaim policies, including "Retain", "Recycle", and "Delete". For dynamically provisioned PersistentVolumes, the default reclaim policy is "Delete". This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim.
Read more at: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
If your pod only gets terminated or restarted, you are not deleting the PVC; in that case the PV will still be there, the Pod will attach to the same PVC again and start running on the respective node.
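As a minimal sketch of this setup (all resource names and the provisioner are illustrative, not taken from the question), a dynamically provisioned PVC can use a StorageClass whose reclaimPolicy is Retain so the PV outlives even a deleted claim, and every pod created by the ReplicaSet/Deployment references the same fixed claim name:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard-retain              # illustrative name
    provisioner: kubernetes.io/gce-pd    # assumption: GCE PD; use your cluster's provisioner
    reclaimPolicy: Retain                # dynamic provisioning defaults to Delete
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data                     # illustrative name
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard-retain
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 1
      selector:
        matchLabels: { app: app }
      template:
        metadata:
          labels: { app: app }
        spec:
          containers:
          - name: app
            image: nginx                 # illustrative image
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: app-data        # same claim regardless of which node the pod lands on

Because the claim name in the pod template never changes, a replacement pod, even on another node, binds to the same PVC and therefore the same PV; a StatefulSet differs in that volumeClaimTemplates gives each replica its own claim.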

Related

Can I reuse same Persistent Volume for another PersistentVolumeClaim with an access mode of `ReadWriteOnce`?

I have a use case where I want a PV to be used by a single pod at any given time, and then once that pod is deleted, the same PV should get used by another pod. Is there a way to achieve this in k8s?
Note: my pods are short-lived (they may stay around for 60-90 minutes).
What really serves your requirement is using a RWO PersistentVolumeClaim with different pods (one pod at a time); yes, it is possible.
When the pod that was accessing the PVC gets deleted, the same PVC can be attached to a different pod.
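A rough sketch of that pattern, with made-up names: a single ReadWriteOnce PVC that consecutive short-lived pods mount one at a time (the next pod is only created after the previous one has been deleted):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-work                # illustrative name
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: worker-a                   # run this pod first, delete it when done
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox                 # illustrative image
        command: ["sh", "-c", "echo step-one > /work/out.txt"]
        volumeMounts:
        - name: work
          mountPath: /work
      volumes:
      - name: work
        persistentVolumeClaim:
          claimName: shared-work

A second pod (say worker-b) with the same volumes stanza can be created after worker-a is gone; ReadWriteOnce only restricts the volume to a single node at a time, so the handover works as long as the pods never mount it concurrently from different nodes.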

Does in Kubernetes a PV/PVC guarantees sticky mounting of pods?

I would like to understand whether, through PVC/PV, a pod that is using a volume will always be re-attached to the same volume after a failure or not. I know that this is the case for a StatefulSet, but I am trying to understand if this can also be achieved with a plain PVC and PV. Essentially, assume that Pod_A is attached to Volume_X, then Pod_A fails, but in the meantime a Volume_Y was added to the cluster that could potentially fulfil the PVC requirements. So what happens when Pod_A is re-created: does it always get mounted to Volume_X, or is there any chance that it gets mounted to the new Volume_Y?
a pod that is using a volume will always be re-attached to the same volume after a failure or not
Yes, the Pod will be re-attached to the same volume, because it still has the same PVC declared in its manifest.
Essentially assuming that a Pod_A is attached to Volume_X, then Pod_A fails but in the meantime a Volume_Y was added to the cluster that can potentially fulfil the PVC requirements.
The Pod still has the same PVC in its manifest, so it will use the same volume. But if you create a new PVC, it might be bound to the new volume.
So what happens when Pod_A is re-created: does it always get mounted to Volume_X, or is there any chance that it gets mounted to the new Volume_Y?
The Pod still has the same PVC in its manifest, so it will use the volume that is bound by that PVC. Only when you create a new PVC can that claim be bound to the new volume.
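One way to convince yourself that the binding is sticky (the claim name data-claim below is hypothetical): once a PVC is bound, the chosen PV is recorded in the claim's spec.volumeName, and that binding is exclusive until the PVC is deleted, so a recreated pod referencing the claim always comes back to the same volume:

    # Show which PV this claim is bound to; the value does not change when pods restart
    kubectl get pvc data-claim -o jsonpath='{.spec.volumeName}'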

Kubernetes: hostPath Static Storage with PV vs hard coded hostPath in Pod Volume

I'm learning Kubernetes and there is something I don't get well.
There are 3 ways of setting up static storage:
Pods with volumes to which you attach the storage directly
Pods with a PVC attached to their volume
StatefulSets, which also use a PVC inside
I can understand the power of a PVC when working together with a StorageClass, but not when working with static storage and local storage like hostPath.
To me, it sounds very similar:
In the first case I have a volume directly attached to a pod.
In the second case I have a volume statically attached to a PVC, which is also manually attached to a Pod. In the end, the volume will be statically attached to the Pod.
In both cases, the data will remain when the Pod terminates and will be adopted by the next Pod with the corresponding definition, right?
The only benefit I see of using PVCs over a plain Pod volume is that you can define the access mode. Apart from that, is there a difference when working with hostPath?
On the other hand, the advantage of using a StatefulSet instead of a plain PVC is (if I understood it properly) that it gets a headless service, and that the rollout and rollback mechanism works differently. Is that the point?
Thank you in advance!
Extracted from this blog:
The biggest difference is that the Kubernetes scheduler understands which node a Local Persistent Volume belongs to. With HostPath volumes, a pod referencing a HostPath volume may be moved by the scheduler to a different node resulting in data loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node.
Using hostPath does not guarantee that a pod will restart on the same node. So your pod can attach /tmp/storage on k8s-node-1, then if you delete and re-create the pod, it may attach /tmp/storage on k8s-node-[2-n].
On the contrary, if you use a PVC/PV with the local persistent storage class, then if you delete and re-create the pod, it will stick to the node that hosts the local persistent storage.
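For illustration (the names, node and path are made up), a local PersistentVolume carries a nodeAffinity that tells the scheduler which node the data lives on, which is exactly what a bare hostPath volume lacks:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner   # local PVs are statically provisioned
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-1                          # illustrative name
    spec:
      capacity:
        storage: 10Gi
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-storage
      local:
        path: /mnt/disks/ssd1                   # illustrative path on the node
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["k8s-node-1"]            # the node that owns the data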
A StatefulSet creates pods and has a volumeClaimTemplates field, which creates a dedicated PVC for each pod. So each pod created by the StatefulSet will have its own dedicated storage, linked as Pod -> PVC -> PV -> storage. So StatefulSets also use the PVC/PV mechanism.
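A minimal sketch of such a volumeClaimTemplates stanza (the StatefulSet name web, the image and the sizes are illustrative); each replica gets its own claim derived from the template:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: web              # a matching headless Service is assumed to exist
      replicas: 3
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          containers:
          - name: web
            image: nginx            # illustrative image
            volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi

This yields one PVC per replica (data-web-0, data-web-1, data-web-2), whereas a Deployment makes all replicas share whichever single PVC is referenced in its pod template.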
More details are available here.

Kubernetes Persistent Volume Claim FileSystemResizePending

I have a PersistentVolumeClaim for a Kubernetes pod which shows the message "Waiting for user to (re-)start a pod to finish file system resize of volume on node." when I check it with 'kubectl describe pvc ...'.
The resizing itself worked (it was done with Terraform in our deployments), but this message still shows up and I'm not really sure how to get it fixed. The pod was already restarted several times - I tried kubectl delete pod and scaling it down with kubectl scale deployment.
Does anyone have an idea how to get rid of this message?
There are a few things to consider:
Instead of using Terraform, try resizing the PVC by editing it manually. After that, wait for the underlying volume to be expanded by the storage provider and verify that the FileSystemResizePending condition is present by executing kubectl get pvc <pvc_name> -o yaml. Then make sure that all the associated pods are restarted so the whole process can be completed. Once the file system resizing is done, the PVC will automatically be updated to reflect the new size (a sketch of this workflow follows this list).
Make sure that your volume type is supported for expansion. You can expand the following types of volumes:
gcePersistentDisk
awsElasticBlockStore
Cinder
glusterfs
rbd
Azure File
Azure Disk
Portworx
FlexVolumes
CSI
Check that the allowVolumeExpansion field in your StorageClass is set to true.
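As a hedged sketch of that manual workflow (the PVC name my-pvc, the Deployment name my-deployment and the target size are placeholders for your own values, and the PVC's StorageClass is assumed to have allowVolumeExpansion: true):

    # Resize the claim manually instead of via Terraform
    kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

    # Verify the FileSystemResizePending condition under status.conditions
    kubectl get pvc my-pvc -o yaml

    # Restart the workload that mounts the claim so the filesystem resize can finish
    kubectl rollout restart deployment my-deployment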

StatefulSet behavior when a node dies/gets restarted and has a PersistentVolume

Suppose I have a resource foo which is a statefulset with 3 replicas. Each makes a persistent volume claim.
One of the foo pods (foo-1) dies, and a new one starts in its place. Will foo-1 be bound to the same persistent volume that the previous foo-1 had before it died? Will the number of persistent volume claims stay the same or grow?
This edge case doesn't seem to be in the documentation on StatefulSets.
Yes, it will. A PVC is going to create a disk on GCP and add it as a secondary disk to the node the pod is running on.
Upon deletion of an individual pod, K8s is going to re-create the pod on the same node it was running on. If that is not possible (say the node no longer exists), the pod will be created on another node, and the secondary disk will be moved to that node.
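Concretely (assuming the StatefulSet is named foo and its volumeClaimTemplates entry is named data, both hypothetical), the claims a StatefulSet creates are named deterministically per ordinal, which is why the replacement foo-1 picks up exactly the claim, and thus the PV, of its predecessor, and the number of claims stays at 3:

    # Claims follow the pattern <template-name>-<statefulset-name>-<ordinal>
    kubectl get pvc data-foo-0 data-foo-1 data-foo-2
    # When foo-1 is recreated it mounts data-foo-1 again; no new PVC is created.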