How to identify the storage space left in a PVC? - kubernetes

I have a pod with a PVC request of 10Gi that is successfully bound to a PV (both definitions below).
I came across the accepted answer of a similar question, which suggests running kubectl -n <namespace> exec <pod-name> -- df
Upon doing the same, I got the following:
pavan@p1: kubectl exec mysql-deployment-95f7dd544-mmjv9 -- df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 51572172 5797112 43640736 12% /
tmpfs 65536 0 65536 0% /dev
tmpfs 1021680 0 1021680 0% /sys/fs/cgroup
/dev/vda1 51572172 5797112 43640736 12% /etc/hosts
shm 65536 0 65536 0% /dev/shm
tmpfs 1021680 12 1021668 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1021680 0 1021680 0% /proc/acpi
tmpfs 1021680 0 1021680 0% /sys/firmware
pavan@p1: kubectl exec mysql-deployment-95f7dd544-mmjv9 -- df -h
Filesystem Size Used Avail Use% Mounted on
overlay 50G 5.6G 42G 12% /
tmpfs 64M 0 64M 0% /dev
tmpfs 998M 0 998M 0% /sys/fs/cgroup
/dev/vda1 50G 5.6G 42G 12% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 998M 12K 998M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 998M 0 998M 0% /proc/acpi
tmpfs 998M 0 998M 0% /sys/firmware
I couldn't quite understand the output: I requested 10Gi, yet I don't see any mount with a 10Gi total capacity.
PV definition :
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql"
PVC definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mysql-pod
  template:
    metadata:
      labels:
        app: mysql-pod
    spec:
      containers:
        - name: mysql-container
          image: mysql:5.7
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
P.S.: The node has a capacity of 50G.
EDIT 1:
PV describe:
pavan@p1: kubectl describe pv/mysql-pv-volume
Name: mysql-pv-volume
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-pv-volume"},"spec":{"acc...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/mysql
HostPathType:
Events: <none>
PVC describe:
pavan@p1: kubectl describe pvc/mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"acc...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-deployment-95f7dd544-mmjv9
Events: <none>
Pod describe:
pavan@p1: kubectl describe pod/mysql-deployment-95f7dd544-mmjv9
Name: mysql-deployment-95f7dd544-mmjv9
Namespace: default
Priority: 0
Node: pay0k-k8-dev-bytq/10.130.219.196
Start Time: Mon, 09 Sep 2019 18:14:17 +0800
Labels: app=mysql-pod
pod-template-hash=95f7dd544
Annotations: <none>
Status: Running
IP: 10.244.0.123
Controlled By: ReplicaSet/mysql-deployment-95f7dd544
Containers:
mysql-container:
Container ID: docker://83f4730892fd6908ef3dfae3b9125d25cb7467d24df89323c43d3ab136376147
Image: mysql:5.7
Image ID: docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 09 Sep 2019 18:14:19 +0800
Ready: True
Restart Count: 0
Environment:
MYSQL_DATABASE: <set to the key 'mysql-database' of config map 'mysqldb'> Optional: false
MYSQL_ROOT_PASSWORD: <set to the key 'mysql-root-password' in secret 'db-credentials'> Optional: false
MYSQL_USER: <set to the key 'mysql-user' in secret 'db-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'mysql-password' in secret 'db-credentials'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gbpxc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-gbpxc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gbpxc
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 41m default-scheduler Successfully assigned default/mysql-deployment-95f7dd544-mmjv9 to pay0k-k8-dev-bytq
Normal Pulled 41m kubelet, pay0k-k8-dev-bytq Container image "mysql:5.7" already present on machine
Normal Created 41m kubelet, pay0k-k8-dev-bytq Created container mysql-container
Normal Started 41m kubelet, pay0k-k8-dev-bytq Started container mysql-container

In your PV definition you specify hostPath, so your data is stored directly on the worker node; that is why you see 50GB. You are skipping the additional layer a cloud provider normally adds by creating a PV to back the PVC.
Keeping data directly on a node isn't a good approach, because the node could be removed or replaced at any time.
You should use DigitalOcean PVs instead of your manually defined one; if you spin up a new pod on a DigitalOcean PV, the output from df should show 10GB.
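For example, on a DigitalOcean Kubernetes cluster the CSI driver usually exposes a do-block-storage class, so a dynamically provisioned claim could look like the sketch below. The class name is an assumption about your cluster, not something from your manifests; verify it with kubectl get storageclass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  # assumed DigitalOcean CSI class name; check `kubectl get storageclass`
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
With a block-storage volume the claim gets its own filesystem, so df scoped to the mount path should report roughly the provisioned 10Gi:
kubectl exec mysql-deployment-95f7dd544-mmjv9 -- df -h /var/lib/mysql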

Related

Failed to bind to volume when installing RabbitMQ on K8S

I'm trying to install RabbitMQ using Helm, but the installation fails because of volume issues.
This is my storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
This is my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /media/2TB-DATA/k8s-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-dev
This is the output listing my storage class and PV:
# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage (default) kubernetes.io/no-provisioner Delete Immediate false 14m
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
main-pv 100Gi RWX Delete Available local-storage 40m
After I install RabbitMQ:
helm install rabbitmq bitnami/rabbitmq
The pod is in Pending state, and I see this error:
# kubectl describe pvc
Name: data-rabbitmq-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=rabbitmq
app.kubernetes.io/name=rabbitmq
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: rabbitmq-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 3m20s (x4363 over 18h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
What am I doing wrong?
This may be platform related. Where did you try to do that? I'm asking because I just can't reproduce it on GKE; it works fine there.
Cluster version, labels, nodes
kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
gke-cluster-1-default-pool-82008fd9-8x81 Ready <none> 96d v1.14.10-gke.36 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-8x81,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-qkp7 Ready <none> 96d v1.14.10-gke.36 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-qkp7,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-tlc7 Ready <none> 96d v1.14.10-gke.36 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-tlc7,kubernetes.io/os=linux,test=node
PV, Storageclass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: test
              operator: In
              values:
                - node-test
Installing chart:
helm install rabbitmq bitnami/rabbitmq
...
kubectl get pods
NAME READY STATUS RESTARTS AGE
...
pod/rabbitmq-0 1/1 Running 0 3m40s
...
kubectl describe pod rabbitmq-0
Name: rabbitmq-0
Namespace: default
Priority: 0
Node: gke-cluster-1-default-pool-82008fd9-tlc7/10.164.0.29
Start Time: Thu, 03 Sep 2020 07:34:10 +0000
Labels: app.kubernetes.io/instance=rabbitmq
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rabbitmq
controller-revision-hash=rabbitmq-8687f4cb9f
helm.sh/chart=rabbitmq-7.6.4
statefulset.kubernetes.io/pod-name=rabbitmq-0
Annotations: checksum/secret: 433e8ea7590e8d9f1bb94ed2f55e6d9b95f8abef722a917b97a9e916921d7ac5
kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container rabbitmq
Status: Running
IP: 10.16.2.13
IPs: <none>
Controlled By: StatefulSet/rabbitmq
Containers:
rabbitmq:
Container ID: docker://b1a567522f50ac4c0663db2d9eca5fd8721d9a3d900ac38bb58f0cae038162f2
Image: docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0
Image ID: docker-pullable://bitnami/rabbitmq@sha256:9abd53aeef6d222fec318c97a75dd50ce19c16b11cb83a3e4fb91c4047ea0d4d
Ports: 5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Thu, 03 Sep 2020 07:34:34 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=120s timeout=20s period=30s #success=1 #failure=6
Readiness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=10s timeout=20s period=30s #success=1 #failure=3
Environment:
BITNAMI_DEBUG: false
MY_POD_IP: (v1:status.podIP)
MY_POD_NAME: rabbitmq-0 (v1:metadata.name)
MY_POD_NAMESPACE: default (v1:metadata.namespace)
K8S_SERVICE_NAME: rabbitmq-headless
K8S_ADDRESS_TYPE: hostname
RABBITMQ_FORCE_BOOT: no
RABBITMQ_NODE_NAME: rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
K8S_HOSTNAME_SUFFIX: .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
RABBITMQ_MNESIA_DIR: /bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)
RABBITMQ_LDAP_ENABLE: no
RABBITMQ_LOGS: -
RABBITMQ_ULIMIT_NOFILES: 65536
RABBITMQ_USE_LONGNAME: true
RABBITMQ_ERL_COOKIE: <set to the key 'rabbitmq-erlang-cookie' in secret 'rabbitmq'> Optional: false
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <set to the key 'rabbitmq-password' in secret 'rabbitmq'> Optional: false
RABBITMQ_PLUGINS: rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_auth_backend_ldap
Mounts:
/bitnami/rabbitmq/conf from configuration (rw)
/bitnami/rabbitmq/mnesia from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from rabbitmq-token-mclhw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-rabbitmq-0
ReadOnly: false
configuration:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rabbitmq-config
Optional: false
rabbitmq-token-mclhw:
Type: Secret (a volume populated by a Secret)
SecretName: rabbitmq-token-mclhw
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m42s default-scheduler Successfully assigned default/rabbitmq-0 to gke-cluster-1-default-pool-82008fd9-tlc7
Normal SuccessfulAttachVolume 6m36s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-8145821b-ed09-11ea-b464-42010aa400e3"
Normal Pulling 6m32s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Pulling image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
Normal Pulled 6m22s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Successfully pulled image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
Normal Created 6m18s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Created container rabbitmq
Normal Started 6m18s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Started container rabbitmq
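One more thing worth checking, since the kubectl describe pvc in the question shows an empty StorageClass: the chart's claim never requested the local-storage class, which is exactly what the "no storage class is set" event says. A hedged fix is to either mark local-storage as the default class, or pass the class to the chart explicitly; Bitnami charts generally accept a global.storageClass value:
helm install rabbitmq bitnami/rabbitmq --set global.storageClass=local-storage
Also note that a PV can only bind if its access modes include everything the claim requests, so a PV offering only ReadWriteMany will not satisfy a claim asking for ReadWriteOnce.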

Pod access PVC subdirectory that already existed

I have a pod, created by a deployment, that uses the git-sync image and mounts its volume via a PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      demo: config
  template:
    metadata:
      labels:
        demo: config
    spec:
      containers:
        - args:
            - '-ssh'
            - '-repo=git@domain.com:org/repo.git'
            - '-dest=conf'
            - '-branch=master'
            - '-depth=1'
          image: 'k8s.gcr.io/git-sync:v3.1.1'
          name: git-sync
          securityContext:
            runAsUser: 65533
          volumeMounts:
            - mountPath: /etc/git-secret
              name: git-secret
              readOnly: true
            - mountPath: /config
              name: cus-config
      securityContext:
        fsGroup: 65533
      volumes:
        - name: git-secret
          secret:
            defaultMode: 256
            secretName: git-creds
        - name: cus-config
          persistentVolumeClaim:
            claimName: cus-config
After the deployment, I checked the pod and found a file path like this:
/tmp/git/conf/subdirA/some.Files
Then I created a second pod from another deployment, and I want to mount tmp/git/conf/subdirA in the second pod. This is an example of my second deployment script:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-mount-config
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: 'nginx:1.7.9'
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /root/conf
              name: config
              subPath: tmp/git/conf/subdirA
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: cus-config
This is my PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: conf
  name: config
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: conf
  namespace: test
provisioner: spdbyz
reclaimPolicy: Retain
I have already read about subPath on PVCs, but every time I check the folder /root/conf on the second pod, there is nothing inside it.
Any idea how to mount a specific PVC subdirectory in another pod?
A very basic example of how to share file content between pods using a PV/PVC.
First, create a persistent volume; refer to the yaml example below with a hostPath configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-1
  labels:
    pv: my-pv-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/log/mypath
$ kubectl create -f pv.yaml
persistentvolume/my-pv-1 created
Second, create a persistent volume claim using the yaml example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-claim-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv: my-pv-1
$ kubectl create -f pvc.yaml
persistentvolumeclaim/my-pvc-claim-1 created
Verify that the PV and PVC STATUS is set to Bound:
$ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv-1 1Gi RWX Retain Bound default/my-pvc-claim-1 62s
$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc-claim-1 Bound my-pv-1 1Gi RWX 58
Third, consume the PVC in the required pods; refer to the example yaml below, where the volume is mounted on two pods, nginx-1 and nginx-2:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-1
spec:
  containers:
    - image: nginx
      name: nginx-1
      volumeMounts:
        - mountPath: /var/log/mypath
          name: test-vol
          subPath: TestSubPath
  volumes:
    - name: test-vol
      persistentVolumeClaim:
        claimName: my-pvc-claim-1
$ kubectl create -f nginx-1.yaml
pod/nginx-1 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-1 1/1 Running 0 35s 10.244.3.53 k8s-node-3 <none> <none>
Create a second pod consuming the same PVC:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-2
spec:
  containers:
    - image: nginx
      name: nginx-2
      volumeMounts:
        - mountPath: /var/log/mypath
          name: test-vol
          subPath: TestSubPath
  volumes:
    - name: test-vol
      persistentVolumeClaim:
        claimName: my-pvc-claim-1
$ kubectl create -f nginx-2.yaml
pod/nginx-2 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-1 1/1 Running 0 55s 10.244.3.53 k8s-node-3 <none> <none>
nginx-2 1/1 Running 0 35s 10.244.3.54 k8s-node-3 <none> <none>
Test by connecting to container 1 and writing a file under the mount path:
root@nginx-1:/# df -kh
Filesystem Size Used Avail Use% Mounted on
overlay 12G 7.3G 4.4G 63% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 12G 7.3G 4.4G 63% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 3.9G 12K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
root@nginx-1:/# cd /var/log/mypath/
root@nginx-1:/var/log/mypath# date >> date.txt
root@nginx-1:/var/log/mypath# date >> date.txt
root@nginx-1:/var/log/mypath# cat date.txt
Thu Jan 30 10:44:42 UTC 2020
Thu Jan 30 10:44:43 UTC 2020
Now connect to the second pod/container; it should see the file from the first, as shown below:
$ kubectl exec -it nginx-2 -- /bin/bash
root@nginx-2:/# cat /var/log/mypath/date.txt
Thu Jan 30 10:44:42 UTC 2020
Thu Jan 30 10:44:43 UTC 2020
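A closing note relevant to the original question (this is general Kubernetes behavior, not specific to the demo above): subPath is resolved relative to the volume's root, not the node's filesystem. So subPath: tmp/git/conf/subdirA only finds data if the repo was actually synced into the PVC under that relative path; if git-sync is left writing to its default /tmp/git root instead of the mounted /config directory, the PVC stays empty. A hedged sketch of the second pod's mount, assuming the sync lands at the volume root:
volumeMounts:
  - mountPath: /root/conf
    name: config
    # relative to the PVC root: expects <volume-root>/conf/subdirA inside the volume
    subPath: conf/subdirA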

0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate

I set up Kubernetes across multiple nodes, but the PersistentVolume cannot be bound successfully when I try to create a simple PostgreSQL instance.
The StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet is:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - env:
            ...
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: my-local-pv
              subPath: pgdata
      terminationGracePeriodSeconds: 60
  volumeClaimTemplates:
    - metadata:
        name: my-local-pv
      spec:
        storageClassName: local-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
The pod describe output:
Name: postgres-0
Namespace: kong
Priority: 0
Node: <none>
Labels: app=postgres
controller-revision-hash=postgres-9c9cf868d
statefulset.kubernetes.io/pod-name=postgres-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/postgres
Containers:
postgres:
Image: postgres:9.5
Port: 5432/TCP
Host Port: 0/TCP
Environment:
POSTGRES_USER: kong
POSTGRES_PASSWORD: kong
POSTGRES_DB: kong
PGDATA: /var/lib/postgresql/data/pgdata
Mounts:
/var/lib/postgresql/data from postgres-data (rw,path="pgdata")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b5mkt (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
postgres-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-data-postgres-0
ReadOnly: false
default-token-b5mkt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b5mkt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
kubectl get pv:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-local-pv 2Gi RWO Retain Available local-storage 3m59s
Please help me. Thanks in advance
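No fix was posted for this one, but the event message names two separate problems: one node carries a taint the pod does not tolerate (typically the control-plane node), and the other has no local PV the claim can bind. With volumeBindingMode: WaitForFirstConsumer, binding only succeeds if the PV's nodeAffinity points at a node the pod can actually be scheduled on and the PV satisfies the claim's size and access mode. Some hedged diagnostics, using the names from the manifests above:
# Where is the PV allowed to bind, and does it satisfy the claim?
kubectl describe pv my-local-pv
kubectl -n kong describe pvc
# Which nodes carry taints the pod does not tolerate?
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'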

JupyterHub Hub Pod stuck in pending - pod has unbound immediate PersistentVolumeClaims

I am having issues deploying JupyterHub on a Kubernetes cluster. The issue I am getting is that the hub pod is stuck in Pending.
Stack:
kubeadm
flannel
weave
helm
jupyterhub
Runbook:
$kubeadm init --pod-network-cidr="10.244.0.0/16"
$sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf && export KUBECONFIG=$HOME/admin.conf
$kubectl create -f pvc.yml
$kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-aliyun.yml
$kubectl apply --filename https://git.io/weave-kube-1.6
$kubectl taint nodes --all node-role.kubernetes.io/master-
Helm installations as per https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html
Jupyter installations as per https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html
config.yml
proxy:
  secretToken: "asdf"
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /dev/vdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: standard
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The warning is:
$kubectl --namespace=jhub get pod
NAME READY STATUS RESTARTS AGE
hub-fb48dfc4f-mqf4c 0/1 Pending 0 3m33s
proxy-86977cf9f7-fqf8d 1/1 Running 0 3m33s
$kubectl --namespace=jhub describe pod hub
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35s (x3 over 35s) default-scheduler pod has unbound immediate PersistentVolumeClaims
$kubectl --namespace=jhub describe pv
Name: standard
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/standard
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /dev/vdb
HostPathType:
Events: <none>
$kubectl --namespace=kube-system describe pvc
Name: hub-db-dir
Namespace: jhub
StorageClass:
Status: Pending
Volume:
Labels: app=jupyterhub
chart=jupyterhub-0.8.0-beta.1
component=hub
heritage=Tiller
release=jhub
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 13s (x7 over 85s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: hub-fb48dfc4f-mqf4c
I tried my best to follow the local storage volume configuration on the official Kubernetes website, but with no luck.
-G
Managed to fix it using the following configuration.
Key points:
- I forgot to add the node in nodeAffinity
- it works without setting volumeBindingMode
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /temp
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - INSERT_NODE_NAME_HERE
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: local-storage
provisioner: kubernetes.io/no-provisioner
config.yaml
proxy:
  secretToken: "token"
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
Make sure your storage class/PV looks like this:
root@asdf:~# kubectl --namespace=kube-system describe pv
Name: standard
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"standard"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: jhub/hub-db-dir
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [asdf]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /temp
Events: <none>
root@asdf:~# kubectl --namespace=kube-system describe storageclass
Name: local-storage
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
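If the class already exists without that annotation, it can also be flagged as the default after the fact; this is the standard patch from the Kubernetes documentation:
kubectl patch storageclass local-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'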
Now the hub pod looks something like this:
root@asdf:~# kubectl --namespace=jhub describe pod hub
Name: hub-5d4fcd8fd9-p6crs
Namespace: jhub
Priority: 0
PriorityClassName: <none>
Node: asdf/192.168.0.87
Start Time: Sat, 23 Feb 2019 14:29:51 +0800
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=5d4fcd8fd9
release=jhub
Annotations: checksum/config-map: --omitted
checksum/secret: --omitted--
Status: Running
IP: 10.244.0.55
Controlled By: ReplicaSet/hub-5d4fcd8fd9
Containers:
hub:
Container ID: docker://d2d4dec8cc16fe21589e67f1c0c6c6114b59b01c67a9f06391830a1ea711879d
Image: jupyterhub/k8s-hub:0.8.0
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:e40cfda4f305af1a2fdf759cd0dcda834944bef0095c8b5ecb7734d19f58b512
Port: 8081/TCP
Host Port: 0/TCP
Command:
jupyterhub
--config
/srv/jupyterhub_config.py
--upgrade-db
State: Running
Started: Sat, 23 Feb 2019 14:30:28 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 200m
memory: 512Mi
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: jhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/etc/jupyterhub/secret/ from secret (rw)
/srv/jupyterhub from hub-db-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-bxzl7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub-secret
Optional: false
hub-db-dir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-bxzl7:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-bxzl7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>

Kubernetes pod pending when a new volume is attached (EKS)

Let me describe my scenario:
TL;DR
When I create a deployment on Kubernetes with 1 attached volume, everything works perfectly. When I create the same deployment, but with a second volume attached (total: 2 volumes), the pod gets stuck on "Pending" with errors:
pod has unbound PersistentVolumeClaims (repeated 2 times)
0/2 nodes are available: 2 node(s) had no available volume zone.
Already checked that the volumes are created in the correct availability zones.
Detailed description
I have a cluster set up using Amazon EKS, with 2 nodes. I have the following default storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
And I have a MongoDB deployment which needs two volumes: one mounted on the /data/db folder, and the other mounted in some other directory I need. Here is a minimal yaml used to create the three components (I commented out some lines on purpose):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: my-project
  creationTimestamp: null
  labels:
    io.kompose.service: my-project-db-claim0
  name: my-project-db-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: my-project
  creationTimestamp: null
  labels:
    io.kompose.service: my-project-db-claim1
  name: my-project-db-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: my-project
  name: my-project-db
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: my-db
    spec:
      containers:
        - name: my-project-db-container
          image: mongo
          imagePullPolicy: Always
          resources: {}
          volumeMounts:
            - mountPath: /my_dir
              name: my-project-db-claim0
            # - mountPath: /data/db
            #   name: my-project-db-claim1
          ports:
            - containerPort: 27017
      restartPolicy: Always
      volumes:
        - name: my-project-db-claim0
          persistentVolumeClaim:
            claimName: my-project-db-claim0
        # - name: my-project-db-claim1
        #   persistentVolumeClaim:
        #     claimName: my-project-db-claim1
That yaml works perfectly. The output for the volumes is:
$ kubectl describe pv
Name: pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6
Labels: failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: my-project/my-project-db-claim0
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://us-east-1c/vol-xxxxx
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
Name: pvc-308d8979-039e-11e9-b78d-0a68bcb24bc6
Labels: failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1b
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: my-project/my-project-db-claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://us-east-1b/vol-xxxxx
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
And the pod output:
$ kubectl describe pods
Name: my-project-db-7d48567b48-slncd
Namespace: my-project
Priority: 0
PriorityClassName: <none>
Node: ip-192-168-212-194.ec2.internal/192.168.212.194
Start Time: Wed, 19 Dec 2018 15:55:58 +0100
Labels: name=my-db
pod-template-hash=3804123604
Annotations: <none>
Status: Running
IP: 192.168.216.33
Controlled By: ReplicaSet/my-project-db-7d48567b48
Containers:
my-project-db-container:
Container ID: docker://cf8222f15e395b02805c628b6addde2d77de2245aed9406a48c7c6f4dccefd4e
Image: mongo
Image ID: docker-pullable://mongo@sha256:0823cc2000223420f88b20d5e19e6bc252fa328c30d8261070e4645b02183c6a
Port: 27017/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 19 Dec 2018 15:56:42 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/my_dir from my-project-db-claim0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pf9ks (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
my-project-db-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-project-db-claim0
ReadOnly: false
default-token-pf9ks:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pf9ks
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m22s (x5 over 7m23s) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
Normal Scheduled 7m21s default-scheduler Successfully assigned my-project/my-project-db-7d48567b48-slncd to ip-192-168-212-194.ec2.internal
Normal SuccessfulMountVolume 7m21s kubelet, ip-192-168-212-194.ec2.internal MountVolume.SetUp succeeded for volume "default-token-pf9ks"
Warning FailedAttachVolume 7m13s (x5 over 7m21s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6" : "Error attaching EBS volume \"vol-01a863d0aa7c7e342\"" to instance "i-0a7dafbbdfeabc50b" since volume is in "creating" state
Normal SuccessfulAttachVolume 7m1s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6"
Normal SuccessfulMountVolume 6m48s kubelet, ip-192-168-212-194.ec2.internal MountVolume.SetUp succeeded for volume "pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6"
Normal Pulling 6m48s kubelet, ip-192-168-212-194.ec2.internal pulling image "mongo"
Normal Pulled 6m39s kubelet, ip-192-168-212-194.ec2.internal Successfully pulled image "mongo"
Normal Created 6m38s kubelet, ip-192-168-212-194.ec2.internal Created container
Normal Started 6m37s kubelet, ip-192-168-212-194.ec2.internal Started container
Everything is created without any problems. But if I uncomment the lines in the yaml so that two volumes are attached to the db deployment, the PV output is the same as before, yet the pod gets stuck on Pending with the following output:
$ kubectl describe pods
Name: my-project-db-b8b8d8bcb-l64d7
Namespace: my-project
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: name=my-db
pod-template-hash=646484676
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/my-project-db-b8b8d8bcb
Containers:
my-project-db-container:
Image: mongo
Port: 27017/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/data/db from my-project-db-claim1 (rw)
/my_dir from my-project-db-claim0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pf9ks (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
my-project-db-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-project-db-claim0
ReadOnly: false
my-project-db-claim1:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-project-db-claim1
ReadOnly: false
default-token-pf9ks:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pf9ks
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 60s (x5 over 60s) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
Warning FailedScheduling 2s (x16 over 59s) default-scheduler 0/2 nodes are available: 2 node(s) had no available volume zone.
I've already read these two issues:
Dynamic volume provisioning creates EBS volume in the wrong availability zone
PersistentVolume on EBS can be created in availability zones with no nodes (Closed)
But I already checked that the volumes are created in the same zones as the cluster's node instances. In fact, EKS creates two EBS volumes by default in the us-east-1b and us-east-1c zones, and those volumes work. The volumes created by the posted yaml are in those zones too.
See this article: https://kubernetes.io/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/
The gist is that you want to update your storageclass to include:
volumeBindingMode: WaitForFirstConsumer
This causes the PV to not be created until the pod is scheduled. It fixed a similar problem for me.
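A sketch of the question's storage class with that change applied (same fields as posted above, only the binding mode added). Note that volumeBindingMode is immutable on an existing class, so the class has to be deleted and recreated:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - debug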
Sounds like it's trying to create a volume in an availability zone where you don't have any nodes. You can try restricting your StorageClass to the availability zones where you have nodes.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-east-1b
          - us-east-1c
This is very similar to this question and this answer except that the issue described is on GCP and in this case it's AWS.
In this case, you should check the availability zone of your worker nodes (EC2 instances).
As an example:
worker node 1 = eu-central-1b
worker node 2 = eu-central-1c
Then create the volume in one of the availability zones mentioned above (do not create the volume in eu-central-1a).
After you create the volume, create your PersistentVolume and PersistentVolumeClaim, attaching the newly created volume to your cluster as below.
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    failure-domain.beta.kubernetes.io/region: eu-central-1
    failure-domain.beta.kubernetes.io/zone: eu-central-1b
  name: mongo-pv
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Gi
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://eu-central-1b/vol-063342ab9be5d2929
  storageClassName: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gp2
  volumeName: mongo-pv
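A quick way to confirm the pair bound as intended (names taken from the manifests above); both should report a Bound status, with the claim pointing at mongo-pv:
kubectl get pv mongo-pv
kubectl get pvc mongo-pvc -n default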