I have a Postgres pod using a PersistentVolumeClaim for the database storage, with access mode ReadWriteOnce.
Upgrading the pod with Helm is tricky because the new pod is blocked until the old pod releases the claim, and Helm won't remove the old pod until the new pod is ready.
How does one normally handle this problem? I can't find documentation on it anywhere, and I would think it is a common problem.
This is my pvc:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      app: postgres
If you are using ReadWriteOnce mode, my proposal is to use a StatefulSet with volumeClaimTemplates; I tested this successfully (although without Helm).
As an example, please take a look at this:
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
Please share your results and findings.
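A minimal sketch of what that could look like for the Postgres claim above (the image tag and mount path are assumptions on my part, not taken from the question):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12   # assumed image tag
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data   # assumed data directory
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard
        resources:
          requests:
            storage: 20Gi

With a StatefulSet, a rolling update deletes the old pod before creating its replacement, so the ReadWriteOnce claim is released first and the new pod never competes with the old one for the same volume.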
I created a PVC which uses vSphere as the backend storage provider:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hello-world-logs
  namespace: mbe
  labels:
    app: hello-world
spec:
  storageClassName: vsphere
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
But when I try to resize the PVC in question to 2Gi, I get the following message:
Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC
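(Resizing here means bumping spec.resources.requests.storage to 2Gi, for example with a patch like the one below; the exact command is only illustrative, editing the claim with kubectl edit behaves the same way:)

kubectl patch pvc hello-world-logs -n mbe -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'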
PS:
I added allowVolumeExpansion: true to my StorageClass object (see the sketch below).
My k8s version: 1.18
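The StorageClass looks roughly like this (a sketch; the name and the in-tree vSphere provisioner are my assumptions):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere
provisioner: kubernetes.io/vsphere-volume   # assumed in-tree vSphere provisioner
allowVolumeExpansion: true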
What should I do to make it work?
Any help would be really appreciated!
Thank you.
When I push my deployments, for some reason, I'm getting this error on my pods:
pod has unbound PersistentVolumeClaims
Here are my YAML files below.
This is running locally, not on any cloud solution.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: ckan
  name: ckan
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: ckan
    spec:
      containers:
        - image: slckan/docker_ckan
          name: ckan
          ports:
            - containerPort: 5000
          resources: {}
          volumeMounts:
            - name: ckan-home
              mountPath: /usr/lib/ckan/
              subPath: ckan
      volumes:
        - name: ckan-home
          persistentVolumeClaim:
            claimName: ckan-pv-home-claim
      restartPolicy: Always
status: {}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ckan-home-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
You have to define a PersistentVolume providing disk space to be consumed by the PersistentVolumeClaim.
When you use a storageClass, Kubernetes enables "Dynamic Volume Provisioning", which does not work with the local file system.
To solve your issue:
Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi)
Remove the storageClass from the PersistentVolumeClaim or provide it with an empty value ("")
Remove the StorageClass from your cluster
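For example, the claim could then look roughly like this (a sketch of the claim above with the class removed; an empty storageClassName explicitly disables dynamic provisioning so the claim binds to a pre-created PersistentVolume):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ""   # empty value: bind statically to an existing PersistentVolume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem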
How do these pieces play together?
When the deployment's state description is created, it is usually known what kind of storage (amount, speed, ...) the application will need.
To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume-abstraction allows you to provide and consume storage in a standardized way.
The PersistentVolumeClaim is used to provide a storage-constraint alongside the deployment of an application.
The PersistentVolume offers cluster-wide volume instances ready to be consumed ("bound"). One PersistentVolume will be bound to one claim. But since multiple pods may consume that claim on multiple nodes, the volume may be accessed by multiple nodes.
A PersistentVolume without StorageClass is considered to be static.
"Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
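As an illustration, a StorageClass backed by a provisioner that supports dynamic provisioning could look like this (a GCE example, not something that would work on a purely local setup):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # a provisioner that can create volumes on demand
parameters:
  type: pd-ssd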
Example PersistentVolume
In order to find out how to specify things, you're best advised to take a look at the API for your Kubernetes version, so the following example is built from the API reference of K8s 1.17:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ckan-pv-home
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce   # must include the mode requested by the claim
  hostPath:
    path: "/mnt/data/ckan"
The PersistentVolumeSpec allows us to define multiple attributes.
I chose a hostPath volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.
Additional Resources:
Configure PersistentVolume Guide
If you're using the Rancher k3s Kubernetes distribution, set storageClassName to local-path as described in the docs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
To use it on other distributions, use https://github.com/rancher/local-path-provisioner
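(Installing that provisioner is typically a single kubectl apply of the manifest shipped in the repo; the exact path below is from memory and may differ between releases:)

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml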
I ran into this issue, but I realized that I was creating my PVs with the "manual" StorageClass type.
Check which storage class your pod expects, and make sure the names line up:
Your PVC definition: volumeClaimTemplates --> storageClassName: "standard"
Your PV: spec --> storageClassName: "standard"
In my case the problem was a wrong PersistentVolume name specified in the PersistentVolumeClaim declaration.
But there might be more reasons for it. Make sure that:
The volumeName specified in the PVC matches the PV name
The storageClassName specified in the PVC matches the storageClassName of the PV
Sufficient capacity is allocated to your resource
The access modes of your PV and PVC are consistent
The number of PVs matches the number of PVCs
For a detailed explanation, read this article.
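As a compact illustration (all names and sizes here are made up), a statically bound pair has to line up like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv                 # referenced by the claim's volumeName
spec:
  storageClassName: manual      # must match the claim's storageClassName
  capacity:
    storage: 5Gi                # must cover the claim's request
  accessModes:
    - ReadWriteOnce             # must include the mode the claim requests
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  volumeName: data-pv
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi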
We faced a very similar issue today. For us the problem was that there was no CSI driver installed on the nodes. To check the drivers installed, you can use this command:
kubectl get csidriver
Our managed Kubernetes clusters (v1.25) run in Google Cloud, so for us the solution was to simply enable the "Compute Engine persistent disk CSI Driver" feature.
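(On GKE this can be enabled with gcloud roughly as follows; the cluster name and zone are placeholders:)

gcloud container clusters update CLUSTER_NAME \
  --zone ZONE \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED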
What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server on the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending, and it remains Pending for as long as we keep checking back.
Note this question is different from this other question, because the problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the results.
Then, based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's storageClassName did not match the class specified in the PV, which is required for binding.
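A sketch of the corrected claim (only the storageClassName changes):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow   # must match the class declared on pv01
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi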
I created an EBS volume with a size of 30 GiB and made two manifest files:
pv-ebs.yml
pvc-ebs.yml
In pv-ebs.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  awsElasticBlockStore:
    fsType: ext4
    # The EBS volume ID
    volumeID: vol-111222333aaabbbccc
In pvc-ebs.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-prometheus-alertmanager
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      release: "stable"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-prometheus-server
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      release: "stable"
Then I installed it with Helm: helm install --name prometheus stable/prometheus.
But on the k8s dashboard, I got these messages:
prometheus-prometheus-alertmanager-3740839786-np7kb
No nodes are available that match all of the following predicates:: NoVolumeZoneConflict (2).
prometheus-prometheus-server-3176041168-m3w2g
PersistentVolumeClaim is not bound: "prometheus-prometheus-server" (repeated 2 times)
Is there anything wrong about my method?
(Dashboard screenshots: Pods, Persistent Volumes)
When an EBS volume is created, it is provisioned in a particular AZ and cannot be mounted across zones. If you do not have nodes available in the same zone for scheduling the pod, it will not start.
Another thing: with a properly configured kube cluster you should not need to create PVs on your own at all; just create PVCs and let dynamic provisioning do its thing.
If you installed your cluster with kops, the PVs will be created for you automatically. Just wait a few minutes and refresh your screen; the errors will go away.
If you have set up your cluster in another way, you will want to create your volumes with aws ec2 create-volume and then create PVs; when Helm runs, it will claim those PVs.
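A rough sketch of that manual path (zone, size and volume type are placeholders; the volume must live in the same AZ as the nodes that will mount it):

aws ec2 create-volume \
  --availability-zone eu-west-1a \
  --size 30 \
  --volume-type gp2

The VolumeId returned by that command is what goes into the PV's awsElasticBlockStore.volumeID field.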
I have a GKE-based Kubernetes setup and a pod that requires a storage volume. I attempt to use the config below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: standard
This PVC is not provisioned. I get the below error:
Failed to provision volume with StorageClass "standard": googleapi: Error 503: The zone 'projects/p01/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Looking at the GKE quotas page, I don't see any issues. Deleting other PVCs also does not solve the issue. Can anyone help? Thanks.
There is no configuration problem on your side - there are simply not enough resources in the europe-west2-b zone to create a 2T persistent disk. Either try a smaller volume or use a different zone.
There is an example for GCE in the docs. Create a new StorageClass specifying, say, the europe-west1-b zone (which is actually cheaper than europe-west2-b), like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-europe-west1-b
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west1-b
And modify your PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: gce-pd-europe-west1-b