Kubernetes - Persistent Volumes - PostgreSQL

I am trying to deploy PostgreSQL through Helm on MicroK8s, but the pod stays Pending with a "pod has unbound immediate PersistentVolumeClaims" error.
I tried creating a PVC and a StorageClass for it, and editing them, but everything stays Pending.
Does anyone know what is preventing the PVC from binding to a PV?

On the PVC it shows the error "no persistent volumes available for this claim and no storage class is set".
This means that you have to prepare PersistentVolumes on your platform that can be used by your PersistentVolumeClaims (e.g. with the correct StorageClass or other matching requirements).
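A minimal sketch of the two usual fixes on MicroK8s, assuming the Bitnami PostgreSQL chart; the addon name and the global.storageClass value are assumptions to adapt to your setup:

# Enable MicroK8s' built-in hostpath provisioner; this creates a StorageClass
# (microk8s-hostpath) that dynamically provisions PVs.
# On older MicroK8s releases the addon is simply called "storage".
microk8s enable hostpath-storage

# Point the chart at that StorageClass when installing (Bitnami charts
# usually honour global.storageClass; check your chart's values)
helm install my-postgres bitnami/postgresql \
  --set global.storageClass=microk8s-hostpath

Once a default StorageClass with a working provisioner exists, the chart's PVC is provisioned automatically and the pod can leave Pending.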

Kubernetes - All PVCs Bound, yet "pod has unbound immediate PersistentVolumeClaims"

Unfortunately I am unable to paste configs or kubectl output, but please bear with me.
Using Helm to deploy a series of containers to Kubernetes 1.14.6, all containers deploy successfully except those that have initContainer sections defined within them.
In these failing deployments, their templates define container and initContainer stanzas that reference the same persistent-volume (and associated persistent-volume-claim, both defined elsewhere).
The purpose of the initContainer is to copy persisted files from a mounted drive location into the appropriate place before the main container is established.
Other containers (without initContainer stanzas) mount properly and run as expected.
These pods with initContainer stanzas, however, report "failed to initialize" or "CrashLoopBackOff" as they continually try to start up. The kubectl describe pod output for these pods gives only a Warning in the Events section: "pod has unbound immediate PersistentVolumeClaims." The initContainer section of the pod description says it has failed with "Error" and no further elaboration.
When looking at the associated pv and pvc entries from kubectl, however, none are left pending, and all report "Bound" with no Events to speak of in the description.
I have been able to find plenty of articles suggesting fixes when your pvc list shows Pending claims, yet none so far that address this particular set of circumstance when all pvcs are bound.
When a PVC is "Bound", it means that you do have a PersistentVolume object in your cluster whose claimRef refers to that PVC (and usually that your storage provisioner has finished creating the corresponding volume in your storage backend).
When a volume is "not bound" in one of your Pods, it means the node where the Pod was scheduled is unable to attach the persistent volume. If you're sure there's no mistake in your Pod's volumes, check the logs of your CSI attacher pod when using CSI, or the node logs directly when using an in-tree driver.
The CrashLoopBackOff is something else. Check the logs of your initContainer: kubectl logs <pod-name> -c <init-container-name> -p. From your explanation, I would suspect a permission issue when copying the files over.
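If the copy step is indeed failing on permissions, adding an fsGroup to the pod's securityContext is a common fix, so that the mounted volume is writable by the init container and readable by the main container. A minimal sketch, with all names, images, paths and the group ID as placeholder assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init                  # placeholder name
spec:
  securityContext:
    fsGroup: 1001                      # placeholder GID; match what the main container runs as
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data            # the already-Bound PVC
  initContainers:
    - name: seed-files
      image: busybox:1.36
      # copy files baked into the image (placeholder path) onto the volume
      command: ["sh", "-c", "cp -r /seed/. /data/"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: nginx:1.25                # placeholder for the real application image
      volumeMounts:
        - name: data
          mountPath: /data

To see why the init container itself failed, kubectl logs <pod-name> -c seed-files -p shows the output of the previous, crashed attempt, and kubectl describe pod <pod-name> shows attach/mount events.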

Does a PV/PVC in Kubernetes guarantee sticky mounting of pods?

I would like to understand whether, through a PVC/PV, a pod that is using a volume will always be re-attached to the same volume after a failure. I know this is the case for StatefulSets, but I am trying to understand whether it can also be achieved with a plain PVC and PV. Assume Pod_A is attached to Volume_X, then Pod_A fails, but in the meantime a Volume_Y was added to the cluster that could also fulfil the PVC's requirements. What happens when Pod_A is re-created: does it always get mounted to Volume_X, or is there any chance that it gets mounted to the new Volume_Y?
a pod that is using a volume after a failure will be always re-attached to the same volume or not
Yes, the Pod will be re-attached to the same volume, because it still has the same PVC declared in its manifest.
Essentially assuming that a Pod_A is attached to Volume_X, then Pod_A fails but in the meantime a Volume_Y was added to the cluster that can potentially fulfil the PVC requirements.
The Pod still has the same PVC in its manifest, so it will use the same volume. But if you create a new PVC, it might be bound to the new volume.
So what does it happen when Pod_A is re-created, does it get always mounted to Volume_X or is there any chance that it gets mounted to the new Volume_Y?
The Pod still has the same PVC in its manifest, so it will use the volume that is bound by that PVC. Only when you create a new PVC can that claim be bound to the new volume.
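To make the stickiness concrete, here is a minimal sketch with hypothetical names: the PVC data-claim binds to exactly one PV, and Pod_A refers to the claim by name, so a re-created Pod_A lands on whatever PV data-claim is bound to (Volume_X), not on the newly added Volume_Y.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                 # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
    - name: app
      image: nginx:1.25            # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim      # same claim name -> same bound PV after re-creation

The PVC-to-PV binding is one-to-one and recorded in the PV's claimRef; a new volume like Volume_Y can only be claimed by a new PVC.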

PVC behavior with Dynamic Provisioning and a ReplicaSet

I have a question about a PVC that is bound to one PV through a dynamic StorageClass and used by a pod created by a ReplicaSet: if that pod gets terminated and restarted on another host, will it get the same PV?
What I saw is that the pod could not be rescheduled until the same PV was available again, but I am not able to understand what the standard behavior should be, and how a PVC should behave differently between a ReplicaSet and a StatefulSet.
Does "another host" mean another Kubernetes node?
If the pod gets restarted or terminated and is scheduled again on another node, then, as long as the PVC and PV still exist, the disk will be attached to that node and the pod will start running again. Yes, the PVC and PV will be the same, but this still depends on the reclaim policy.
You can read more about it here: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets
PersistentVolumes can have various reclaim policies, including "Retain", "Recycle", and "Delete". For dynamically provisioned PersistentVolumes, the default reclaim policy is "Delete". This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim.
Read more at: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
If your pod gets terminated or restarted, you are not deleting the PVC, so the PV will still be there; the pod will attach to the same PVC again and start running on the respective node.
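Two ways to make the volume survive independently of the claim, sketched with placeholder names; the patch form follows the reclaim-policy doc linked above:

# Switch an already-provisioned PV to Retain so deleting its PVC
# no longer deletes the underlying disk
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Or bake the policy into the StorageClass used for dynamic provisioning (name, provisioner and parameters below are placeholders to adapt to your cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd                   # placeholder name
provisioner: kubernetes.io/gce-pd      # placeholder; use your cluster's provisioner
reclaimPolicy: Retain
parameters:
  type: pd-ssd                         # placeholder, provider-specific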

0/2 nodes are available....2 pod has unbound immediate PersistentVolumeClaims

I installed Thanos using Bitnami's Helm chart, after installing Prometheus with its Helm chart. MinIO was deployed as part of the Thanos chart installation.
The deployed MinIO pod stays in the Pending state with:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
kubectl get pvc also shows the claim as Pending, with:
no persistent volumes available for this claim and no storage class is set
How do I create a PV in this situation?
Basically, the pod uses a PVC that is not bound to any PV. You can either rely on dynamic provisioning (a PV is then created for the PVC according to its requirements), or you can manually create a PV.
For dynamic provisioning, see the Kubernetes dynamic provisioning documentation; for manual creation, see the PersistentVolumes documentation.
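A minimal sketch of the manual route, using a hostPath volume and placeholder names; for the claim to bind, the PV's capacity, access mode and storageClassName must satisfy the PVC (many Bitnami charts let you pick that class via global.storageClass or persistence values, which is an assumption to check against your chart):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv                     # placeholder name
spec:
  capacity:
    storage: 10Gi                    # at least as large as the PVC requests
  accessModes:
    - ReadWriteOnce
  storageClassName: manual           # must match the PVC's storageClassName
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/minio            # placeholder path that must exist on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc                    # placeholder; in practice the chart creates its own PVC
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

With two nodes, a hostPath PV pins the data to one node, so a dynamic provisioner or a networked storage backend is usually the better long-term answer.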

Kubernetes Persistent Volume Claim FileSystemResizePending

I have a PersistentVolumeClaim for a Kubernetes pod which shows the message "Waiting for user to (re-)start a pod to finish file system resize of volume on node." when I check it with kubectl describe pvc ...
The resizing itself, which was done with Terraform in our deployments, worked, but this message still shows up and I'm not really sure how to get it fixed. The pod was already restarted several times; I tried kubectl delete pod and scaling the deployment down with kubectl scale deployment.
Does anyone have an idea how to get rid of this message?
There are a few things to consider:
Instead of using Terraform, try resizing the PVC by editing it manually. After that, wait for the underlying volume to be expanded by the storage provider and verify whether the FileSystemResizePending condition is present by executing kubectl get pvc <pvc_name> -o yaml. Then make sure that all associated pods are restarted so the whole process can complete. Once the file system resizing is done, the PVC will automatically be updated to reflect the new size.
Make sure that your volume type is supported for expansion. You can expand the following types of volumes:
gcePersistentDisk
awsElasticBlockStore
Cinder
glusterfs
rbd
Azure File
Azure Disk
Portworx
FlexVolumes
CSI
Check that the allowVolumeExpansion field in your StorageClass is set to true (see the command sketch below).
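A sketch of those checks as commands, with <...> placeholders standing in for your own resource names:

# 1. Confirm the StorageClass allows expansion
kubectl get storageclass <storage-class-name> -o jsonpath='{.allowVolumeExpansion}'

# 2. Resize by editing the claim directly (increase spec.resources.requests.storage)
kubectl edit pvc <pvc-name>

# 3. Watch the PVC's conditions; FileSystemResizePending stays until a pod
#    restart lets the kubelet finish the filesystem resize on the node
kubectl get pvc <pvc-name> -o yaml

# 4. Recreate the pods that use the claim (for a Deployment)
kubectl rollout restart deployment <deployment-name>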