Host Directory Not Showing Files From Kubernetes Persistent Volume Claim - kubernetes

I am migrating an application that consists of one or more Tomcat servers and one or more WildFly servers to k8s. I have it up and working with deployments for Tomcat and WildFly and a ClusterIP service for each. I am now trying to combine log files from the multiple servers into a single log directory on the host (a single-host deployment for now). I have created the storage class, persistent volume, and PVC with the following yaml files:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /C/k8s/local-pv/logs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
I have the tomcat deployment mapped to the volumes by:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: tomcat-deployment
  template:
    metadata:
      labels:
        component: tomcat
    spec:
      initContainers:
        - name: volume-mount-hack
          image: busybox
          command: ["sh", "-c", "chmod -R 777 /usr/local/tomcat/logs"]
          volumeMounts:
            - name: local-persistent-storage
              mountPath: /usr/local/tomcat/logs
      containers:
        - name: tomcat
          image: tomcat-full:latest
          volumeMounts:
            - name: local-persistent-storage
              mountPath: /usr/local/tomcat/logs
              subPath: tomcat
      volumes:
        - name: local-persistent-storage
          persistentVolumeClaim:
            claimName: local-claim
I start up all the components and the system functions as expected. However, when I look in the host directory C/k8s/local-pv/logs/tomcat, no files are showing. When I connect to the tomcat pod with docker exec, I see the log files from both tomcat servers. The files shown in /usr/local/tomcat/logs survive a deployment restart, so I know they are written somewhere, but I searched my entire hard drive and the files are nowhere to be found.
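For reference, those checks can be reproduced with commands along these lines (a hedged sketch; the pod name is taken from the kubectl describe output below, and the last command assumes a shell where the host path declared in the PV manifest is directly visible):
$ kubectl exec -it tomcat-deployment-644698fdc6-jmxz9 -- ls -l /usr/local/tomcat/logs   # logs as seen through the mounted volume
$ kubectl describe pv local-pv | grep -A3 Source                                        # the path the PV actually points at
$ ls /C/k8s/local-pv/logs/tomcat                                                        # what is visible on the host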
I checked the pvc, pv, and storage class:
kubectl describe pvc local-claim
Name: local-claim
Namespace: default
StorageClass: local-storage
Status: Bound
Volume: local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By: tomcat-deployment-644698fdc6-jmxz9
tomcat-deployment-644698fdc6-qvx9s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 12m (x3 over 14m) persistentvolume-controller waiting for first consumer to be created before binding
kubectl describe pv local-pv
Name: local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: default/local-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /C/k8s/pv/logs
HostPathType:
Events: <none>
kubectl get storageclass local-storage
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 19m
What piece am I missing? It appears that the local storage class is not actually bound to the volume claim and the tomcat application.

Related

1 node(s) didn't find available persistent volumes to bind error on stateful set pod creation

I'm starting out with K8s and I'm stuck at setting up MongoDB in replica set mode with a local persistent volume. I'm using a StorageClass, PersistentVolume and PersistentVolumeClaim.
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mongo-pv 1Gi RWO Retain Available mongo-storage 24m
but when I inspect the pod I get
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
This post's answer https://stackoverflow.com/a/70069138/2704032 confirmed my suspicion that I might be using the wrong label.
So I had a look at the PV and I see that, since I've set nodeAffinity as
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernets.io/hostname
            operator: In
            values:
              - docker-desktop
it's looking for
Node Affinity:
Required Terms:
Term 0: kubernets.io/hostname in [docker-desktop]
I checked nodes with kubectl get nodes --show-labels
and it does have that label as the output shows
NAME STATUS ROLES AGE VERSION LABELS
docker-desktop Ready control-plane 7d9h v1.24.1 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=docker-desktop,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
I tried using volumeClaimTemplate in the StatefulSet as
volumeClaimTemplates:
  - metadata:
      name: mongo-vctemplate
    spec:
      storageClassName: mongo-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
but it didn't make a difference. I also tried to specify the PVC in the PV with the claimRef parameter, but the insidious error still comes up at pod creation.
What else can I check or do I need to setup?
Many thanks as usual
Here are my yaml files
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-storage
provisioner: kubernetes.io/no-provisioner
# volumeBindingMode: Immediate
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  # persistentVolumeReclaimPolicy: Retain # prod
  persistentVolumeReclaimPolicy: Delete # local tests
  storageClassName: mongo-storage
  # claimRef:
  #   name: mongo-pvc
  accessModes:
    - ReadWriteOnce
  # volumeMode: Filesystem # default if omitted
  # hostPath:
  #   path: /mnt/data
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernets.io/hostname
              operator: In
              values:
                - docker-desktop
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo-storage
  # volumeName: mongo-pv # this will make it unbindable???
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-statefulset
spec:
  selector:
    matchLabels:
      app: mongo-pod # has to match .spec.template.metadata.labels
  serviceName: mongo-clusterip-service
  replicas: 1 # 3
  template:
    metadata:
      labels:
        app: mongo-pod # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo-container
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-pv-cont
              mountPath: /data/db # /mnt/data
      volumes:
        - name: mongo-pv-cont
          persistentVolumeClaim:
            claimName: mongo-pvc
It is a typo: kubernets.io/hostname should be kubernetes.io/hostname in the PV definition.
similar to this one:
Error while using local persistent volumes in statefulset pod
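For completeness, the corrected nodeAffinity block (same values as in the question, only the label key fixed) would be:
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - docker-desktop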

How to Deploy an existing EBS volume to EKS PVC?

I have an existing EBS volume in AWS with data on it. I need to create a PVC in order to use it in my pods.
Following this guide: https://medium.com/pablo-perez/launching-a-pod-with-an-existing-ebs-volume-mounted-in-k8s-7b5506fa7fa3
persistentvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jenkins-volume
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 60Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-011111111x
    fsType: ext4
[$$]>kubectl describe pv
Name: jenkins-volume
Labels: type=amazonEBS
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 60Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-011111111x
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-pvc-shared4
  namespace: jenkins
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 60Gi
[$$]>kubectl describe pvc jenkins-pvc-shared4 -n jenkins
Name: jenkins-pvc-shared4
Namespace: jenkins
StorageClass: gp2
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 12s (x2 over 21s) persistentvolume-controller waiting for first consumer to be created before binding
[$$]>kubectl get pvc jenkins-pvc-shared4 -n jenkins
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-pvc-shared4 Pending gp2 36s
Status is Pending (waiting for the consumer to be attached), but it should already be provisioned.
The "waiting for consumer" message suggests that your StorageClass has its volumeBindingMode set to waitForFirstConsumer.
The default value for this setting is Immediate: as soon as you register a PVC, your volume provisioner would provision a new volume.
The waitForFirstConsumer on the other hand would wait for a Pod to request usage for said PVC, before the provisioning a volume.
The messages and behavior you're seeing here sound normal. You may create a Deployment mounting that volume, to confirm provisioning works as expected.
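As a rough illustration (a hedged sketch of such a consumer, not the actual workload; only the claim name and namespace are taken from the question), a minimal throwaway Pod instead of a full Deployment would also trigger binding:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer-test # hypothetical name, for testing only
  namespace: jenkins
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: jenkins-pvc-shared4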
Try fsType "xfs" instead of ext4.
StorageClass is empty for your PV.
According to your guide, you created the StorageClass "standard", so add this to your PersistentVolume spec:
storageClassName: standard
and also set it in the PersistentVolumeClaim instead of gp2.
The right config should be:
[$$]>cat persistentvolume2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://eu-west-2a/vol-123456-ID
  capacity:
    storage: 60Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
[$$]>cat persistentVolumeClaim2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: new-namespace
  labels:
    app.kubernetes.io/name: <app-name>
  name: pvc-name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 60Gi
  storageClassName: gp2
  volumeName: pv-name
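To check whether the claim then binds to the pre-created volume, something along these lines should do (hedged; file names as shown above):
$ kubectl apply -f persistentvolume2.yaml
$ kubectl apply -f persistentVolumeClaim2.yaml
$ kubectl get pv pv-name
$ kubectl get pvc pvc-name -n new-namespace
# the PVC STATUS should move from Pending to Bound, either immediately or once a Pod
# mounts it, depending on the StorageClass volumeBindingMode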

Cannot get Pod to bind local-storage in minikube. "node(s) didn't find available persistent volumes", "waiting for first consumer to be created"

I'm having some trouble configuring my Kubernetes deployment on minikube to use local-storage. I'm trying to set up a rethinkdb instance that will mount a directory from the minikube VM into the rethinkdb Pod. My setup is the following:
Storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rethinkdb-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rethinkdb-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
So I define a storageClass of the local-storage type as described in the online tutorials. I then make a PersistentVolume that provides 10GB of storage at the /mnt/data path on the underlying host. I have made this directory on the minikube VM:
$ minikube ssh
$ ls /mnt
data sda1
This PersistentVolume has the storage class local-storage and, via the nodeAffinity section, is restricted to nodes whose hostname is 'minikube'.
I then make a PersistentVolumeClaim that asks for the type local-storage and requests 5GB.
Everything is good here, right? Here is the output of kubectl
$ kubectl get pv,pvc,storageClass
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/rethinkdb-pv 10Gi RWO Delete Available local-storage 9m33s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/rethinkdb-pv-claim Pending local-storage 7m51s
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 9m33s
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 24h
RethinkDB Deployment
I now attempt to make a Deployment with a single replica of the standard RethinkDB container.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: database
  name: rethinkdb
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  selector:
    matchLabels:
      service: rethinkdb
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: rethinkdb
    spec:
      containers:
        - name: rethinkdb
          image: rethinkdb:latest
          volumeMounts:
            - mountPath: /data
              name: rethinkdb-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: rethinkdb-data
          persistentVolumeClaim:
            claimName: rethinkdb-pv-claim
This asks for a single replica of rethinkdb, references the rethinkdb-pv-claim PersistentVolumeClaim as the volume rethinkdb-data, and attempts to mount it at /data in the container.
This is what shows up, though:
Name: rethinkdb-6dbf4ccdb-64gk5
Namespace: development
Priority: 0
Node: <none>
Labels: pod-template-hash=6dbf4ccdb
service=rethinkdb
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/rethinkdb-6dbf4ccdb
Containers:
rethinkdb:
Image: rethinkdb:latest
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/data from rethinkdb-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d5ncp (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
rethinkdb-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: rethinkdb-pv-claim
ReadOnly: false
default-token-d5ncp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-d5ncp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 73s (x7 over 8m38s) default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
"1 node(s) didn't find available persistent volumes to bind". I'm not sure how that is because the PVC is available.
$ kubectl describe pvc
Name: rethinkdb-pv-claim
Namespace: development
StorageClass: local-storage
Status: Pending
Volume:
Labels: <none>
Annotations:
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: rethinkdb-6dbf4ccdb-64gk5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 11s (x42 over 10m) persistentvolume-controller waiting for first consumer to be created before binding
I think one hint is that the Node field is <none> for the Pod - does that mean it isn't assigned to a node?
I think the issue is that one of mine was ReadWriteOnce and the other one was ReadWriteMany. I then had trouble getting permissions right when running minikube mount /tmp/data:/mnt/data, so I just got rid of mounting it to the underlying filesystem, and now it works.
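In other words (a hedged restatement of that fix, reusing the names from the question), the PV and the PVC have to agree on the access mode, e.g. both ReadWriteOnce:
# in rethinkdb-pv (PersistentVolume)
accessModes:
  - ReadWriteOnce
# in rethinkdb-pv-claim (PersistentVolumeClaim) -- was ReadWriteMany above
accessModes:
  - ReadWriteOnce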

Kubernetes NFS storage using PV and PVC

I have a 3-node cluster running in VirtualBox and I'm trying to create NFS storage using a PV and PVC, but it seems that I'm doing something wrong.
I have the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /redis/data
    server: 192.168.56.2 # ip of my master-node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Mi
  storageClassName: slow
  selector:
    matchLabels:
      type: nfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: data
              mountPath: "/redis/data"
          ports:
            - containerPort: 6379
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: redis-pvc
I've already installed nfs-common on all my nodes.
Whenever I create the PV, PVC and Pod, the pod does not start and I get the following:
Warning FailedMount 30s kubelet, kubenode02 MountVolume.SetUp failed for volume "redis-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv --scope -- mount -t nfs -o hard,nfsvers=4.1 192.168.56.2:/redis/data /var/lib/kubelet/pods/9326d818-b78a-42cc-bcff-c487fc8840a4/volumes/kubernetes.io~nfs/redis-pv
Output: Running scope as unit run-rc316990c37b14a3ba24d5aedf66a3f6a.scope.
mount.nfs: Connection timed out
Here is the status of kubectl get pv, pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/redis-pv 100Mi RWO Retain Bound default/redis-pvc slow 8s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/redis-pvc Bound redis-pv 100Mi RWO slow 8s
Any ideas of what I am missing?
1 - You need to install your NFS server. Follow the instructions in this link (a minimal export sketch also follows the PV example below):
https://vitux.com/install-nfs-server-and-client-on-ubuntu/
2 - Create the shared folder where you want to persist your data, and mount it on the clients:
mount 192.168.56.2:/mnt/sharedfolder /mnt/shared/folder_client
3 - Change the following in PV.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: nfs
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/sharedfolder
    server: 192.168.56.2 # ip of my master-node
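For steps 1 and 2, the server-side export might look roughly like this (a hedged sketch assuming an Ubuntu NFS server at 192.168.56.2 and the VirtualBox host-only subnet from the question; adjust the path and options to your environment):
# /etc/exports on the NFS server
/mnt/sharedfolder 192.168.56.0/24(rw,sync,no_subtree_check)
# reload the export table
$ sudo exportfs -ra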

One of two PersistentVolumeClaims' status is "Pending"

I have a file that contains a PV, a Service, and a 2-pod StatefulSet with a dynamic PVC (volumeClaimTemplates).
When I deployed the file, a problem happened with the PVC status.
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pv-test 10Gi RWO my-storage-class 7m19s
www-web-1 Pending my-storage-class 7m17s
One PVC's status is "Pending" and the reason is "storage class name not found", but the other PVC was created normally.
Below is the content of the file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: "my-storage-class"
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/data
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
If someone knows the cause, let me know.
Thanks in advance.
Here is the describe information for the PV, the PVC (www-web-1), and the Pod (web-1):
kubectl describe pv pv-test
Name: pv-test
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"pv-test"},"spec":{"accessModes...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: my-storage-class
Status: Bound
Claim: default/www-web-0
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /tmp/data
HostPathType: DirectoryOrCreate
Events: <none>
#kubectl describe pvc www-web-1
Name: www-web-1
Namespace: default
StorageClass: my-storage-class
Status: Pending
Volume:
Labels: app=nginx
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 20s (x7 over 2m) persistentvolume-controller storageclass.storage.k8s.io "my-storage-class" not found
Mounted By: web-1
#kubectl describe po web-1
Name: web-1
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=nginx
controller-revision-hash=web-6596ffb49b
statefulset.kubernetes.io/pod-name=web-1
Annotations: <none>
Status: Pending
IP:
Controlled By: StatefulSet/web
Containers:
nginx:
Image: k8s.gcr.io/nginx-slim:0.8
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lnfvq (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: www-web-1
ReadOnly: false
default-token-lnfvq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lnfvq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m43s (x183 over 8m46s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Your volume pv-test has accessModes: ReadWriteOnce, so I think you need to create one more volume for the second pod.
If www-web-1 is also trying to bind to pv-test, it won't be able to mount it.
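A hedged sketch of what that second volume could look like (same spec as pv-test; the name pv-test-2 and the path /tmp/data2 are only illustrative, not taken from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-2 # hypothetical second PV for www-web-1
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: "my-storage-class"
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/data2 # illustrative; any directory available on the node
    type: DirectoryOrCreate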
You are using a hostPath volume to store the data, at /tmp/data on the host. Ensure that the /tmp/data directory exists on all the nodes in the cluster.