Pod in Pending status because PV and PVC are unable to bind - kubernetes

I am having a problem related to PV and PVC, and I really do not know how to fix it. Could someone give some insight into it?
GitHub -> https://github.com/Sarony11/wordpress-k8s/tree/main/k8s
Files:
wordpress-pv.yaml (added after comments)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp1-pv
  labels:
    app: wordpress
spec:
  capacity:
    storage: 5Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/data
wordpress-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp1-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
  volumeName: wp1-html
wordpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp1-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
      component: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
        component: app
    spec:
      containers:
        - image: wordpress:5.6.0-apache
          name: wp1
          env:
            - name: WORDPRESS_DB_HOST
              value: "35.204.214.81"
            - name: WORDPRESS_DB_USER
              value: wordpressdb
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wp1-port
          volumeMounts:
            - name: wp1-pv
              mountPath: /var/www/html
      volumes:
        - name: wp1-pv
          persistentVolumeClaim:
            claimName: wp1-pv-claim
COMMANDS
administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl apply -k k8s
secret/mysql-pass-7m4b7ft482 created
service/wp1-clusterip-service created
ingress.networking.k8s.io/ingress-service created
persistentvolume/wp1-pv created
persistentvolumeclaim/wp1-pv-claim created
administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl get pv,pvc -n default
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/wp1-pv   5Gi        RWO            Retain           Available           standard                74s

NAME                                 STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/wp1-pv-claim   Pending   wp1-html   0                         standard       74s
administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl describe pv wp1-pv
Name:            wp1-pv
Labels:          app=wordpress
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data
    HostPathType:
Events:          <none>
administrator@master-ubuntu:~/learning/wordpress-k8s$ microk8s.kubectl describe pvc wp1-pv-claim
Name:          wp1-pv-claim
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:        wp1-html
Labels:        app=wordpress
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      0
Access Modes:
VolumeMode:    Filesystem
Mounted By:    wp1-app-54ddf5fb78-7f8j6
               wp1-app-54ddf5fb78-hgqmj
               wp1-app-54ddf5fb78-sxn9j
Events:        <none>
The result is still Pending for both PV and PVC and, of course, the pods are in Pending.

Please check/share the status of the PVC and PV.
Check the status:
kubectl get pvc,pv -n <namespace>
Describe the PVC and PV (PVs are cluster-scoped, so describing a PV needs no namespace):
kubectl describe pvc <PVC-name> -n <namespace>
kubectl describe pv <PV-name>

So the problem here is that you're not creating any PersistentVolume.
That's why your PVC remains Pending, and so does your pod.
To make it work, you need to provide a PV that matches the PVC spec:
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: "wp-html"
Below is an example of a PV that should match your PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp-html
  labels:
    app: wordpress
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv/wp-html
This PV will store the data under the /data/pv/wp-html folder on your local node.
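Once a PV whose metadata.name matches the claim's volumeName exists, the bind should go through. A quick sanity check (a sketch; the microk8s.kubectl invocation follows the usage from the question):

# After applying the PV, both objects should report Bound.
microk8s.kubectl get pv,pvc
# persistentvolume/wp-html             ...  Bound  default/wp1-pv-claim  ...
# persistentvolumeclaim/wp1-pv-claim   Bound  wp-html  5Gi  RWO  ...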

I have tested it on my environment and can give you some advice.
Before starting, read how to properly configure dynamic provisioning in Kubernetes - dynamics-provisioning-kubernetes.
Your PV YAML is OK.
Delete the volumeName field from your PVC YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp1-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
Thanks to this, your PVC will be bound successfully to the existing PV.
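One caveat (not in the original answer, but worth knowing): most PVC spec fields are immutable once the object exists, so re-applying the edited file over the Pending claim will be rejected. Delete and recreate it instead, for example:

# PVC specs are largely immutable; recreate the claim rather than patching it.
microk8s.kubectl delete pvc wp1-pv-claim
microk8s.kubectl apply -f wordpress-pv-claim.yaml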
Change the volume name used under volumeMounts and volumes in your Deployment YAML file, for example to wp1-pv-storage:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp1-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
      component: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
        component: app
    spec:
      containers:
        - image: wordpress:5.6.0-apache
          name: wp1
          env:
            - name: WORDPRESS_DB_HOST
              value: "35.204.214.81"
            - name: WORDPRESS_DB_USER
              value: wordpressdb
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wp1-port
          volumeMounts:
            - name: wp1-pv-storage
              mountPath: /var/www/html
      volumes:
        - name: wp1-pv-storage
          persistentVolumeClaim:
            claimName: wp1-pv-claim
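After applying the updated manifests, the claim should bind and the pods should leave Pending; a minimal verification sketch (the -k k8s path comes from the question):

microk8s.kubectl apply -k k8s
microk8s.kubectl get pvc wp1-pv-claim        # expect STATUS Bound
microk8s.kubectl get pods -l app=wordpress   # expect the replicas Running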

Related

Kubernetes Security Policy fsgroup not working

I am trying to mount a Kubernetes volume into a pod (running as non-root) with the fsGroup SecurityContext option, but the volume is still mounted as root, and the pod gets permission denied when trying to do write operations on the filesystem.
I created the PersistentVolume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-demo
spec:
  storageClassName: nfs
  capacity:
    storage: 10Gi
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /nfs/data/demo
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-demo
StatefulSet for the application deployment. The container image starts as user 1001 (belonging to group 0):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  replicas: 1
  serviceName: demo-app
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      securityContext:
        fsGroup: 0
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: demo-app-container
          image: <theImage>
          volumeMounts:
            - mountPath: /store
              name: demo-vol
      volumes:
        - name: demo-vol
          persistentVolumeClaim:
            claimName: demo-vol
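A quick way to see what the container actually gets (a debugging sketch, assuming the StatefulSet above is running as pod demo-app-0):

# Compare the container's uid/gids with the ownership of the mount point.
kubectl exec demo-app-0 -- id
kubectl exec demo-app-0 -- ls -ld /store

Note that fsGroup only takes effect for volume plugins that support ownership management; NFS volumes are a well-known case where the kubelet does not change ownership, so the permissions on the NFS export itself decide what the pod can write.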

WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding. Auto provisioning does not work

Hi, I know this might be a possible duplicate, but I could not get the answer from that question.
I have a prometheus deployment and would like to give it a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.retention.time=60d"
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Now neither the PVC nor the deployment can be scheduled, because the PVC waits for the deployment and the other way around. As far as I know, we have a cluster with automatic provisioning, so I cannot just create a PV. How can I solve this problem? Other deployments use PVCs in the same style and it works.
It's because of the namespace. A PVC is a namespaced object (you can look here). Your PVC is in the default namespace; moving it to the monitoring namespace should work.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
  namespace: monitoring
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
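A quick verification sketch once the claim is recreated in the right namespace:

kubectl get pvc prometheus-pv-claim -n monitoring
# expect STATUS Bound once the prometheus pod has been scheduled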
Is your PVC deployed in the same namespace as the deployment?
Also make sure the StorageClass has volumeBindingMode: WaitForFirstConsumer:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
...
volumeBindingMode: WaitForFirstConsumer
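To check which binding mode the existing class actually uses (a sketch; the class name default is taken from the manifests above):

kubectl get storageclass default -o jsonpath='{.volumeBindingMode}'
# Immediate            -> the PVC binds as soon as it can be satisfied
# WaitForFirstConsumer -> the PVC stays Pending until a pod using it is scheduled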

Persistent volume Kubernetes on Google Cloud

I have a Redis pod on my Kubernetes cluster on Google Cloud. I have built the PV and the claim.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: my-size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: redis-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: my size
I also mounted it in my deployment.yaml
volumeMounts:
  - mountPath: /data
    name: redis-pv-claim
volumes:
  - name: redis-pv-claim
    persistentVolumeClaim:
      claimName: redis-pv-claim
I can't see any error while running describe pod
Volumes:
  redis-pv-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  redis-pv-claim
    ReadOnly:   false
But it just can't save any key. After every deployment, the "/data" folder is just empty.
My NFS is active now, but I still can't keep data.
Describe pvc:
Namespace:     my namespace
StorageClass:  nfs-client
Status:        Bound
Volume:        pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: nfs-client
               volume.beta.kubernetes.io/storage-provisioner: cluster.local/ext1-nfs-client-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    my grafana pod
Events:        <none>
Describe pod gives me an error though.
Warning  FailedMount  18m  kubelet, gke-devcluster-pool-1-36e6a393-rg7d  MountVolume.SetUp failed for volume "pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Output: Running scope as unit: run-ra5925a8488ef436897bd44d526c57841.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
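mount ... exit status 32 is the generic NFS mount failure, so before digging further into Kubernetes it is worth testing the export directly from a node (a debugging sketch; the server IP and export path come from the log above, and it assumes NFS client tools such as showmount are installed on the node):

# List what the NFS server exports and whether this node may mount them.
showmount -e 192.168.1.21
# Try the same mount by hand; errors here point at the server/export, not k8s.
sudo mkdir -p /mnt/nfstest
sudo mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /mnt/nfstest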
A working Redis setup with PV and PVC on GKE:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: LoadBalancer
  ports:
    - port: 6379
      name: redis
  selector:
    app: redis
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redislabs/rejson
          args: ["--requirepass", "password", "--appendonly", "no", "--loadmodule", "/usr/lib/redis/modules/rejson.so"]
          ports:
            - containerPort: 6379
              name: redis
          resources:
            limits:
              cpu: .50
              memory: 1500Mi
            requests:
              cpu: .25
              memory: 1000Mi
          volumeMounts:
            - name: redis-volume
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: redis-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
You can update the image in this StatefulSet as needed.
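With volumeClaimTemplates, the StatefulSet controller creates one PVC per replica, named <template-name>-<statefulset-name>-<ordinal>; a quick check (sketch):

kubectl get pvc
# NAME                   STATUS   VOLUME    CAPACITY   ACCESS MODES
# redis-volume-redis-0   Bound    pvc-...   5Gi        RWO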

Local persistent Volume 1 node(s) didn't find available persistent volumes to bind

I'm getting started with persistent volumes and k8s. I'm trying to use a local folder on my RHEL 7 box with minikube installed. I'm getting the error:
0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
StorageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /usr/local/docker/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
PersistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
Nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      name: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx
        name: nginx-deployment
    spec:
      containers:
        - name: nginx-deployment
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: storage
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: example-local-claim
[root@localhost docker]# kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
example-local-pv   1Gi        RWO            Retain           Available           local-storage            76s
[root@localhost docker]# kubectl get pvc
NAME                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
example-local-claim   Pending                                      local-storage   75s
The issue is in your persistent volume: it has the following node selector:
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - my-node
Your PV is trying to bind to a node whose name is my-node, which it can't find because there is no node with that name. Check your node name using:
kubectl get nodes
Then put the actual node name in place of my-node, and it will work. Hope this helps.
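For example, to find the value to put there (a sketch; on minikube the single node is usually called minikube):

kubectl get nodes
# NAME       STATUS   ROLES    AGE   VERSION
# minikube   Ready    master   ...
# Use the NAME value in the PV's nodeAffinity values list instead of my-node.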
If you work with docker-desktop on Windows, try this.
Create the directory in WSL:
docker run --rm -it -v /:/k8s alpine mkdir /k8s/mnt/k8s
Create the StorageClass, volume, volume claim, and pod:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage
spec:
  capacity:
    storage: 50000Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/k8s
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgredb
  namespace: xxx
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: xxx
spec:
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14-alpine
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgredb
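Because the class uses volumeBindingMode: WaitForFirstConsumer, the PVC will sit in Pending until the postgres pod is scheduled; only then should it flip to Bound. A verification sketch (namespace xxx as in the manifests above):

kubectl get pvc postgredb -n xxx
kubectl get pv local-storage
# Both should report Bound once the postgres pod has been scheduled.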

Unable to mount local PersistentVolume in k8s 1.13

I'm trying to deploy a stateful set with a persistent volume claim on a bare metal kubernetes cluster (v1.13) but the pod times out when trying to mount the volume.
I have a local-storage storage class defined:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
I have a PV defined:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandradev1
  labels:
    app: cassandra
    environment: dev
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  local:
    path: "/data1/cassandradev1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node1
And I have a stateful set that issues a claim (truncated):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra-set
spec:
  ...
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        selector:
          matchLabels:
            app: "cassandra"
            environment: "dev"
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-storage"
        resources:
          requests:
            storage: 1Ti
When I try to apply the stateful set, the Pod gets scheduled but times out:
Normal Scheduled 2m13s default-scheduler Successfully assigned default/cassandra-set-0 to my-node1
Warning FailedMount 13s kubelet, my-node1 Unable to mount volumes for pod "cassandra-set-0 (dd252f77-fda3-11e8-96d3-1866dab905dc)": timeout expired waiting for volumes to attach or mount for pod "default"/"cassandra-set-0". list of unmounted volumes=[cassandra-data]. list of unattached volumes=[cassandra-data default-token-t2dg8]
If I look at the logs for the controller I see an error message for no volume plugin matched:
kubectl logs pod/kube-controller-manager -n kube-system
W1212 00:51:24.218114 1 plugins.go:845] FindExpandablePluginBySpec(cassandradev1) -> err:no volume plugin matched
Any ideas on where to look next?
First of all, the PV definition is incorrect: there is no hostPath in the local-storage class. This is how you should define your local-storage PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandradev1
  labels:
    app: cassandra
    environment: dev
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  local:
    path: "/data1/cassandradev1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node1
Also keep in mind that, unlike hostPath, /data1/cassandradev1 must already exist on my-node1; local-storage doesn't automatically create the path, and when you deploy the StatefulSet while the path is not there, it will give an error related to mounting.
This should resolve your issue. Hope this helps.
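Creating the backing directory up front, for example (a sketch; assumes shell access to my-node1):

# Run this on my-node1 before the pod is scheduled; the local volume
# plugin will not create the path for you.
sudo mkdir -p /data1/cassandradev1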
EDIT: So, I set up the Cassandra StatefulSet with local-storage using the following YAML files. I have omitted some config maps, so it will not work as-is. Could you please check what is different there:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  generation: 1
  labels:
    state: cassandra
  name: cassandra
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  serviceName: cassandra
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      creationTimestamp: null
      labels:
        app: cassandra
    spec:
      containers:
        - args:
            - chmod -R 777 /logs/; /on_start.sh
          command:
            - /bin/sh
            - -c
          image: <image>
          imagePullPolicy: Always
          name: cassandra
          ports:
            - containerPort: 9042
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /data
              name: data
      imagePullSecrets:
        - name: gcr-imagepull-json-key
  volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local-storage
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: cassandra-data-vol-0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-cassandra-0
    namespace: default
  local:
    path: /data/cassandra-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ip-10-0-1-91.ec2.internal
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
Make sure /data/cassandra-0 exists before you create the PV. Let me know if you face any issues.
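For reference, the claimRef above pre-binds the PV to the PVC the StatefulSet will generate (volume claim template data on StatefulSet cassandra, ordinal 0, hence data-cassandra-0). A verification sketch:

kubectl get pv cassandra-data-vol-0
kubectl get pvc data-cassandra-0 -n default
# Both should show STATUS Bound once cassandra-0 is scheduled.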