I installed Thanos using Bitnami's Helm chart, after installing Prometheus with its Helm chart.
MinIO was deployed alongside Thanos as part of that chart.
The deployed MinIO pod is stuck in the Pending state with the following error:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
Checking with kubectl get pvc shows the claim is also Pending, with the message:
no persistent volumes available for this claim and no storage class is set
How do I create a PV in this situation?
Basically, you used a PVC in the pod, but that PVC is not bound to any PV. You can either use dynamic provisioning (in which case the PVC will get a PV created for it according to its requirements), or you can manually create a PV.
For dynamic provisioning, see this doc.
For PVs, see this doc.
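As an illustration of the second option (manually creating a PV), here is a minimal sketch of a PV that a claim with no storage class could bind to; the name, size, and hostPath are assumptions, not taken from the question:

# Hypothetical PV for a claim that has no storage class set.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv                  # assumed name
spec:
  capacity:
    storage: 8Gi                  # must be at least the size the PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""            # empty class matches claims that set no storage class
  hostPath:
    path: /mnt/data/minio         # assumed path; only suitable for single-node or test clusters

After applying it with kubectl apply -f pv.yaml, the pending claim should move to Bound, provided the capacity and access modes match.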
Related
I have a question, please: for a PVC that is bound to one PV through a dynamic StorageClass, on a pod created by a ReplicaSet, if that pod gets terminated and restarted on another host, will it get the same PV?
What I saw is that the pod could not be rescheduled until the same PV was active, but I am not able to understand what the standard behavior should be and how the PVC should behave differently between a ReplicaSet and a StatefulSet.
Does "another host" mean another Kubernetes node?
If the pod gets restarted, or terminated and scheduled again on another node, then as long as the PVC and PV still exist, the disk will be mounted to that specific node and the pod will start running again. So yes, the PVC and PV will be the same, but it still depends on the reclaim policy.
You can read more about it here: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets
PersistentVolumes can have various reclaim policies, including
"Retain", "Recycle", and "Delete". For dynamically provisioned
PersistentVolumes, the default reclaim policy is "Delete". This means
that a dynamically provisioned volume is automatically deleted when a
user deletes the corresponding PersistentVolumeClaim.
Read more at: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
If your pod only gets terminated or restarted, it means you are not deleting the PVC; in that case the PV will still be there, the pod will attach to the PVC again, and it will start running on the respective node.
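As a hedged sketch of the reclaim-policy point (the class name and provisioner are assumptions for a GKE-style cluster): a StorageClass can set reclaimPolicy: Retain so that dynamically provisioned PVs keep their backing disks even after the PVC is deleted.

# Hypothetical StorageClass whose dynamically provisioned PVs use Retain instead of the default Delete.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-retain                      # assumed name
provisioner: pd.csi.storage.gke.io     # GKE PD CSI driver; adjust for your platform
parameters:
  type: pd-standard
reclaimPolicy: Retain                  # disks survive deletion of the PVC
volumeBindingMode: WaitForFirstConsumer

PVs that already exist can also have .spec.persistentVolumeReclaimPolicy edited to Retain, as described in the linked page.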
I have PVC template creation enabled in my StatefulSet. The pod is created before the PVC binds with the PV, and the pod is stuck in the Pending state for a long time, showing that the PVC is not bound. Can we alter the time before this pod restarts?
I am using an EKS Fargate cluster.
I am trying to deploy PostgreSQL through Helm on microk8s, but the pod keeps pending, showing the pod has unbound immediate PersistentVolumeClaims error.
I tried creating a PVC and a StorageClass for it, and editing them, but everything keeps pending.
Does anyone know what is preventing the PVC from claiming a PV?
On the PVC it shows the 'no persistent volumes available for this claim and no storage class is set' error.
This means that you have to prepare PersistentVolumes for your platform that can be used by your PersistentVolumeClaims (e.g. with the correct StorageClass or other requirements).
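For microk8s specifically, one hedged sketch of a way to satisfy such claims is to enable the built-in hostpath storage addon (microk8s enable hostpath-storage, or microk8s enable storage on older releases) and make sure its StorageClass is the default, so claims that set no class get dynamically provisioned. The class and provisioner names below are my assumptions about a stock microk8s install:

# Hypothetical default StorageClass backed by the microk8s hostpath provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: microk8s-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # claims with no class fall back to this one
provisioner: microk8s.io/hostpath                         # provisioner installed by the hostpath storage addon
reclaimPolicy: Delete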
I have set up an AKS cluster with 14 Linux nodes. I am deploying code using Helm charts. The pods with the manual storageClass get created successfully, but the ones that use the default storageClass fail to create a persistent volume claim, with the following error:
Warning ProvisioningFailed (x894 over 33h)
persistentvolume-controller Failed to provision volume with
StorageClass "default": azureDisk - claim.Spec.Selector is not
supported for dynamic provisioning on Azure disk
I tried creating NFS storage and adding it to the Kubernetes cluster using kubectl, but the pods are not using that NFS mount for volume creation.
kubectl describe pvc dev-aaf-aaf-sms -n onap
Name: dev-aaf-aaf-sms
Namespace: onap
StorageClass: default
Status: Pending
Volume:
Labels: app=aaf-sms
chart=aaf-sms-4.0.0
heritage=Tiller
release=dev-aaf
Annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: dev-aaf-aaf-sms-6bbffff5db-qxm7j
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed <invalid> (x894 over 33h) persistentvolume-controller Failed to provision volume with StorageClass "default": azureDisk - claim.Spec.Selector is not supported for dynamic provisioning on Azure disk
Can someone with Azure AKS or Kubernetes understanding provide some guidance here?
Q: Is it possible to set up a default NFS volume mount for all nodes on an AKS cluster using kubectl?
It appears to be a compatibility constraint between Azure and Kubernetes for the “default” storageClass.
For PVs with the “manual” storageClass, the PVC gets created successfully, so we need to define the default storageClass for the nodes on the AKS cluster; in my case I need to define it as an NFS mount.
I know how to do this on an individual VM after installing Kubernetes on it, but I am struggling to set it for all nodes of an AKS cluster. The Azure documentation only talks about doing it at the pod level, not the node level.
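For reference, a minimal sketch of what a manually defined NFS-backed PV could look like, assuming an NFS server is already reachable from the nodes; the server address, path, and size are placeholders:

# Hypothetical NFS-backed PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example              # assumed name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                 # NFS allows multiple writers
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs             # claims must request this class name to bind
  nfs:
    server: 10.0.0.4                # placeholder NFS server address
    path: /exports/aaf              # placeholder export path

A static PV like this only satisfies claims whose storageClassName, size, and access mode match; making NFS the cluster-wide default, as asked above, would additionally require an NFS dynamic provisioner and the is-default-class annotation on its StorageClass.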
You are clearly "hitting" this piece of code, which implies that you cannot have a selector in your PVC.spec.
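To illustrate the fix implied by that answer, a hedged sketch of a PVC that relies purely on dynamic provisioning through the default class, with no spec.selector (the requested size is an assumption):

# Hypothetical PVC without a selector, so Azure Disk dynamic provisioning can handle it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-aaf-aaf-sms
  namespace: onap
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default         # the AKS class shown in the describe output above
  resources:
    requests:
      storage: 1Gi                  # assumed size
  # note: no "selector:" block - the provisioner rejects claims that set one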
Trying to create a pod, but getting the following error:
0/3 nodes are available: 1 node(s) had no available volume zone.
I tried attaching more volumes, but the error is still the same.
Warning FailedScheduling 2s (x14 over 42s) default-scheduler 0/3 nodes are available: 1 node(s) had no available volume zone, 2 node(s) didn't have free ports for the requested pod ports.
My problem was that the AWS EC2 Volume and Kubernetes PersistentVolume (PV) state got somehow out of sync / corrupted. Kubernetes believed there was a bound PV while the EC2 Volume showed as "available", not mounted to a worker node.
Update: The volume was in a different availability zone than either of the 2 EC2 nodes and thus could not be attached to them.
The solution was to delete all relevant resources - StatefulSet, PVC (crucial!), PV. Then I was able to apply them again and Kubernetes succeeded in creating a new EC2 Volume and attaching it to the instance.
As you can see in my configuration, I have a StatefulSet with a "volumeClaimTemplate" (=> PersistentVolumeClaim, PVC) (and a matching StorageClass definition) so Kubernetes should dynamically provision an EC2 Volume, attach it to a worker and expose it as a PersistentVolume.
See kubectl get pvc, kubectl get pv and in the AWS Console - EC2 - Volumes.
NOTE: "Bound" = the PV is bound to the PVC.
Here is a description of a laborious way to restore a StatefulSet on AWS if you have a snapshot of the EBS volume (5/2018): https://medium.com/@joatmon08/kubernetes-statefulset-recovery-from-aws-snapshots-8a6159cda6f1