Kubernetes volume mount timeout

I am using a PVC to attach a volume to a Grafana deployment, but it times out and the container stays stuck in ContainerCreating.
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: grafana-storagetest
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  iopsPerGB: "10"
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-claimtest
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: grafana-storagetest
  resources:
    requests:
      storage: 10G
Service
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: grafana-proxytest_mapping
      prefix: /v1/anonymous/grafana-proxytest
      rewrite: /
      service: grafana-proxytest.grafana-proxytest:8080
      timeout_ms: 20000
      connect_timeout_ms: 20000
  labels:
    app: grafana-proxytest
  name: grafana-proxytest
  namespace: grafana-proxytest
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: grafana-proxytest
  type: ClusterIP
status:
  loadBalancer: {}
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "20"
  labels:
    version: v1
  name: grafana-proxytest-v1
  namespace: grafana-proxytest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana-proxytest
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana-proxytest
        version: v1
    spec:
      containers:
        - image: <aws_ecr>
          imagePullPolicy: Always
          name: grafana-proxytest
          ports:
            - containerPort: 3000
              protocol: TCP
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafanapdtest
        - image: grafana/grafana:latest
          imagePullPolicy: Always
          name: grafanatest
          ports:
            - containerPort: 3000
              protocol: TCP
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      volumes:
        - name: grafanapdtest
          persistentVolumeClaim:
            claimName: grafana-claimtest
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Pod status
grafana-proxytest-v1-7cb5b6b6cf-z5zml 0/2 ContainerCreating 0 4m18s
Describe Pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m14s (x2 over 4m14s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 7 times)
Normal Scheduled 4m12s default-scheduler Successfully assigned grafana-proxytest/grafana-proxytest-v1-7cb5b6b6cf-z5zml to ip-10-10-107-59.ap-southeast-1.compute.internal
Normal SuccessfulAttachVolume 4m10s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-f8ad51be-74c5-11ea-8623-068411799338"
Warning FailedMount 2m9s kubelet, ip-10-10-107-59.ap-southeast-1.compute.internal Unable to mount volumes for pod "grafana-proxytest-v1-7cb5b6b6cf-z5zml_grafana-proxytest(fcf38cab-74c5-11ea-8623-068411799338)": timeout expired waiting for volumes to attach or mount for pod "grafana-proxytest"/"grafana-proxytest-v1-7cb5b6b6cf-z5zml". list of unmounted volumes=[grafanapdtest]. list of unattached volumes=[grafanapdtest default-token-pzxdk]
Expected result: the volume should be attached and mounted properly, and the pod should reach Running status.
I have checked that the StorageClass, PVC, and PV are all created.
Edit 1
I added the namespace to the PVC as shown below, but it still fails:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-claimtest
  namespace: grafana-proxytest
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: grafana-storagetest
  resources:
    requests:
      storage: 10G
I am still getting the error below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m18s default-scheduler Successfully assigned grafana-proxytest/grafana-proxytest-v1-7cb5b6b6cf-gpk4l to ip-10-10-107-59.ap-southeast-1.compute.internal
Normal SuccessfulAttachVolume 2m16s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-5bbc63a3-755f-11ea-920e-0a280935f44c"
Warning FailedMount 15s kubelet, ip-10-10-107-59.ap-southeast-1.compute.internal Unable to mount volumes for pod "grafana-proxytest-v1-7cb5b6b6cf-gpk4l_grafana-proxytest(62c50717-755f-11ea-8623-068411799338)": timeout expired waiting for volumes to attach or mount for pod "grafana-proxytest"/"grafana-proxytest-v1-7cb5b6b6cf-gpk4l". list of unmounted volumes=[grafanapdtest]. list of unattached volumes=[grafanapdtest default-token-pzxdk]
One thing I do not understand from the events above: they say the attach succeeded for the volume, but then mounting grafanapdtest fails.
PVC details:
get pvc -n grafana-proxytest
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
grafana-claimtest Bound pvc-5bbc63a3-755f-11ea-920e-0a280935f44c 10Gi RWO grafana-storagetest 15m
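One detail worth double-checking, based only on the manifests above (a hypothesis, not something confirmed in this thread): the claim requests volumeMode: Block, which hands the pod a raw block device, while the Deployment consumes the volume through volumeMounts, which expects a filesystem volume (raw block devices are consumed through volumeDevices instead). That mismatch can produce exactly this attach-succeeds-but-mount-times-out pattern. A minimal sketch of the claim with the default Filesystem mode, assuming Grafana simply needs a normal directory at /var/lib/grafana:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-claimtest
  namespace: grafana-proxytest
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem   # assumption: a mounted filesystem, not a raw block device
  storageClassName: grafana-storagetest
  resources:
    requests:
      storage: 10G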

Related

Error: 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims

I am deploying a CouchDB cluster on Kubernetes. It worked until I tried to scale it.
When I scale my StatefulSet and then describe couchdb-3, I get this error:
0/3 nodes are available: 3 pod has unbound immediate
PersistentVolumeClaims.
And this error when I describe hpa:
invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
I run "kubectl get pod -o wide" and receive this result:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
couchdb-0 1/1 Running 0 101m 10.244.2.13 node2 <none> <none>
couchdb-1 1/1 Running 0 101m 10.244.2.14 node2 <none> <none>
couchdb-2 1/1 Running 0 100m 10.244.2.15 node2 <none> <none>
couchdb-3 0/1 Pending 0 15m <none> <none> <none> <none>
How can I fix it?
Kubernetes Version: 1.22.4
Docker Version 20.10.11, build dea9396
Ubuntu 20.04
My hpa file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-couchdb
spec:
  maxReplicas: 16
  minReplicas: 6
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: couchdb
  targetCPUUtilizationPercentage: 50
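As a side note on the HPA error quoted above ("missing request for cpu"), separate from the scheduling problem: the HorizontalPodAutoscaler can only compute CPU utilization when the target pods declare a CPU request, and the couchdb container in the StatefulSet below has none. A minimal sketch of the addition, assuming a CPU request is acceptable; the 250m value is only an example:

# added under the couchdb container in statefulset.yaml
resources:
  requests:
    cpu: 250m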
pv.yaml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-0
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-1
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-2
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-2"
I set up NFS in /etc/exports: /var/couchnfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchdb
  labels:
    app: couch
spec:
  replicas: 3
  serviceName: "couch-service"
  selector:
    matchLabels:
      app: couch
  template:
    metadata:
      labels:
        app: couch # pod label
    spec:
      containers:
        - name: couchdb
          image: couchdb:2.3.1
          imagePullPolicy: "Always"
          env:
            - name: NODE_NETBIOS_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NODENAME
              value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: admin
            - name: COUCHDB_SECRET
              value: b1709267
            - name: ERL_FLAGS
              value: "-name couchdb#$(NODENAME)"
            - name: ERL_FLAGS
              value: "-setcookie b1709267" # the “password” used when nodes connect to each other
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
            - containerPort: 9100
          volumeMounts:
            - name: couch-pvc
              mountPath: /opt/couchdb/data
  volumeClaimTemplates:
    - metadata:
        name: couch-pvc
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
        selector:
          matchLabels:
            volume: couch-volume
You have three PersistentVolumes, and the claims of the three existing pods have already bound them. A PV cannot be bound by more than one claim, so the claim for couchdb-3 has nothing left to bind to.
Since you are using NFS as the backend, you can use dynamic provisioning of PersistentVolumes instead of creating one PV per replica by hand:
https://github.com/openebs/dynamic-nfs-provisioner
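Alternatively, if you stay with static provisioning, the quick fix is one more PV following the existing pattern. A sketch, assuming the export directory /var/couchnfs/couchdb-3 is created on the NFS server first:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-3
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-3"

Note that you would have to keep adding PVs this way every time the HPA scales the StatefulSet further, which is why the dynamic provisioner above is the more practical route.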

Home assistant config for k3s

I've created the following set of YAML files for Home Assistant. I like using plain YAML over Helm because I find it gives me more control. The issue I'm having is that the pod is stuck in Pending. For context, I've set up a node affinity so that it lands on the node with the Z-Wave stick plugged into it. The nodes are Raspberry Pi 4 (8 GB). Here are the config files:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: homeassistant
  name: homeassistant
  namespace: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: homeassistant
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (955b78124)
      labels:
        io.kompose.service: homeassistant
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - masternode
      containers:
        - env:
            - name: DISABLE_JEMALLOC
              value: "1"
          image: homeassistant/raspberrypi4-homeassistant:stable
          name: homeassistant
          ports:
            - containerPort: 8123
          resources: {}
          volumeMounts:
            - mountPath: /config
              name: homeassistant-pv-config
            - mountPath: /etc/localtime
              name: homeassistant-pv-time
              readOnly: true
      restartPolicy: Always
      volumes:
        - name: homeassistant-pv-config
          persistentVolumeClaim:
            claimName: homeassistant-pv-config
        - name: homeassistant-pv-time
          persistentVolumeClaim:
            claimName: homeassistant-pv-time
            readOnly: true
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: homeassistant
  name: homeassistant
  namespace: homeassistant
spec:
  ports:
    - name: "8123"
      port: 8123
      targetPort: 8123
  selector:
    io.kompose.service: homeassistant
status:
  loadBalancer: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: homeassistant-pv-config
  name: homeassistant-pv-config
  namespace: homeassistant
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: homeassistant-pv-time
  name: homeassistant-pv-time
  namespace: homeassistant
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 100Mi
status: {}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: homeassistant-pv-config
  namespace: homeassistant
  labels:
    type: local
spec:
  storageClassName: local-path
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: homeassistant
    name: homeassistant-pv-config
  hostPath:
    path: "/home/pi/Software/homeassistant/config"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: homeassistant-pv-time
  namespace: homeassistant
  labels:
    type: local
spec:
  storageClassName: local-path
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: homeassistant
    name: homeassistant-pv-time
  hostPath:
    path: "/home/pi/Software/homeassistant/localtime"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homeassistant-ingress
  namespace: homeassistant
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
    - host: homeassistant.local
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: homeassistant
                port:
                  number: 8123
Troubleshooting done so far:
The obvious first steps: stopping and restarting the cluster and the nodes.
SS-MacBook:homeassistant ss$ kubectl get pods -n homeassistant
NAME READY STATUS RESTARTS AGE
homeassistant-78b4dd6c7d-sjw5n 0/1 Pending 0 8m9s
Here is the output of the logs:
SS-MacBook:homeassistant ss$ kubectl logs homeassistant-78b4dd6c7d-sjw5n -n homeassistant
No output shows up
Events:
SS-MacBook:homeassistant ss$ kubectl get events -n homeassistant
LAST SEEN TYPE REASON OBJECT MESSAGE
58m Warning FailedScheduling pod/homeassistant-7bc457756f-5zg4w error while running "VolumeBinding" prebind plugin for pod "homeassistant-7bc457756f-5zg4w": Failed to bind volumes: timed out waiting for the condition
46m Warning FailedScheduling pod/homeassistant-7bc457756f-5zg4w error while running "VolumeBinding" prebind plugin for pod "homeassistant-7bc457756f-5zg4w": Failed to bind volumes: timed out waiting for the condition
36m Warning FailedScheduling pod/homeassistant-7bc457756f-5zg4w error while running "VolumeBinding" prebind plugin for pod "homeassistant-7bc457756f-5zg4w": Failed to bind volumes: timed out waiting for the condition
25m Warning FailedScheduling pod/homeassistant-7bc457756f-5zg4w error while running "VolumeBinding" prebind plugin for pod "homeassistant-7bc457756f-5zg4w": Failed to bind volumes: timed out waiting for the condition
12m Normal WaitForPodScheduled persistentvolumeclaim/homeassistant-pv-time waiting for pod homeassistant-7bc457756f-5zg4w to be scheduled
10m Normal ScalingReplicaSet deployment/homeassistant Scaled down replica set homeassistant-7bc457756f to 0
10m Warning FailedScheduling pod/homeassistant-7bc457756f-5zg4w skip schedule deleting pod: homeassistant/homeassistant-7bc457756f-5zg4w
10m Normal SuccessfulDelete replicaset/homeassistant-7bc457756f Deleted pod: homeassistant-7bc457756f-5zg4w
10m Normal ScalingReplicaSet deployment/homeassistant Scaled up replica set homeassistant-78b4dd6c7d to 1
10m Warning FailedScheduling pod/homeassistant-7bc457756f-5zg4w error while running "VolumeBinding" prebind plugin for pod "homeassistant-7bc457756f-5zg4w": Failed to bind volumes: timed out waiting for the condition
10m Normal SuccessfulCreate replicaset/homeassistant-78b4dd6c7d Created pod: homeassistant-78b4dd6c7d-sjw5n
2m55s Normal WaitForPodScheduled persistentvolumeclaim/homeassistant-pv-time waiting for pod homeassistant-78b4dd6c7d-sjw5n to be scheduled
38s Warning FailedScheduling pod/homeassistant-78b4dd6c7d-sjw5n error while running "VolumeBinding" prebind plugin for pod "homeassistant-78b4dd6c7d-sjw5n": Failed to bind volumes: timed out waiting for the condition
So after the events there is clearly an issue with a volume; however, I have triple-checked, and the volumes exist and are not already bound to anything else. Any ideas?
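One observation based purely on the manifests above (an untested hypothesis): the homeassistant-pv-time claim requests the ReadOnlyMany access mode, but the homeassistant-pv-time PersistentVolume only offers ReadWriteOnce, and a claim can only bind to a PV that offers every access mode the claim requests. That alone would keep the claim unbound and make the VolumeBinding prebind step time out. A minimal sketch of one way to line them up, assuming the read-only behaviour can come from the readOnly flags already set on the volumeMount and the pod volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: homeassistant-pv-time
  namespace: homeassistant
spec:
  accessModes:
    - ReadWriteOnce   # assumption: match the access mode the PV actually offers
  resources:
    requests:
      storage: 100Mi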

How to set up a PVC with a StatefulSet in Kubernetes?

On GKE, I set up a StatefulSet resource as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          resources:
            limits:
              memory: 2Gi
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /usr/share/redis
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-data-pvc
I want to use a PVC, so I created this one (this step was done before the StatefulSet deployment):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
When I check the resource in Kubernetes:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-data-pvc Bound pvc-6163d1f8-fb3d-44ac-a91f-edef1452b3b9 10Gi RWO standard 132m
The default Storage Class is standard.
kubectl get storageclass
NAME PROVISIONER
standard (default) kubernetes.io/gce-pd
But when I check the StatefulSet's deployment status, it is always wrong.
# Describe its pod details
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler persistentvolumeclaim "redis-data-pvc" not found
Warning FailedScheduling 17s (x2 over 20s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Normal Created 2s (x2 over 3s) kubelet Created container redis
Normal Started 2s (x2 over 3s) kubelet Started container redis
Warning BackOff 0s (x2 over 1s) kubelet Back-off restarting failed container
Why can't it find the redis-data-pvc name?
What you have done should work. Make sure that the PersistentVolumeClaim and the StatefulSet are located in the same namespace.
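A quick way to verify both namespaces, for instance:

kubectl get statefulset --all-namespaces | grep redis
kubectl get pvc --all-namespaces | grep redis-data-pvc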
That said, here is an easier solution that also lets you scale to more replicas:
When using a StatefulSet with PersistentVolumeClaims, use the volumeClaimTemplates: field in the StatefulSet instead.
The volumeClaimTemplates: field creates a unique PVC for each replica, named after the template and the pod, so the names end with the replica's ordinal, e.g. -0.
So instead, use a StatefulSet manifest like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          resources:
            limits:
              memory: 2Gi
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /usr/share/redis
  volumeClaimTemplates:  # this will be used to create the PVCs
    - metadata:
        name: redis-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
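With this manifest each replica gets its own claim, named <template>-<pod>, so after a rollout you would expect something like this (illustrative output, not taken from the cluster above):

kubectl get pvc
NAME                 STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-redis-0   Bound    pvc-...   10Gi       RWO            standard       1m
redis-data-redis-1   Bound    pvc-...   10Gi       RWO            standard       1m
redis-data-redis-2   Bound    pvc-...   10Gi       RWO            standard       1m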

One of two PersistentVolumeClaims' status is "Pending"

I have a file containing a PV, a Service, and a 2-replica StatefulSet that uses dynamic PVCs.
When I deployed the file, a problem appeared in the PVC status.
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pv-test 10Gi RWO my-storage-class 7m19s
www-web-1 Pending my-storage-class 7m17s
One PVC's status is "Pending", and the reason is that the storage class name is not found.
But the other PVC was created normally.
Below is the content of the file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: "my-storage-class"
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/data
    type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 2 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
If someone knows the cause, let me know.
Thanks in advance.
Describe output for the PV, the PVC (www-web-1), and the Pod (web-1):
kubectl describe pv pv-test
Name: pv-test
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"pv-test"},"spec":{"accessModes...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: my-storage-class
Status: Bound
Claim: default/www-web-0
Reclaim Policy: Delete
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /tmp/data
HostPathType: DirectoryOrCreate
Events: <none>
#kubectl describe pvc www-web-1
Name: www-web-1
Namespace: default
StorageClass: my-storage-class
Status: Pending
Volume:
Labels: app=nginx
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 20s (x7 over 2m) persistentvolume-controller storageclass.storage.k8s.io "my-storage-class" not found
Mounted By: web-1
#kubectl describe po web-1
Name: web-1
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=nginx
controller-revision-hash=web-6596ffb49b
statefulset.kubernetes.io/pod-name=web-1
Annotations: <none>
Status: Pending
IP:
Controlled By: StatefulSet/web
Containers:
nginx:
Image: k8s.gcr.io/nginx-slim:0.8
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lnfvq (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: www-web-1
ReadOnly: false
default-token-lnfvq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lnfvq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m43s (x183 over 8m46s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Your volume pv-test has accessModes: ReadWriteOnce and is already bound to www-web-0, so if www-web-1 is also trying to use pv-test it will not be able to bind to it.
I think you need to create one more volume for the second pod.
You are using a hostPath volume (/tmp/data on the host) to store the data. Ensure that the /tmp/data directory exists on all the nodes in the cluster.
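A sketch of such a second volume, mirroring pv-test (the name and host path here are only illustrative; also note that the storage class named in the claim template, my-storage-class, does not exist as an object in the cluster per the events above, so every PV has to be pre-created this way):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-1
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: "my-storage-class"
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/data-1   # a separate directory for the second replica (assumption)
    type: DirectoryOrCreate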

k8s - Cinder "0/x nodes are available: x node(s) had volume node affinity conflict"

I have my own Kubernetes cluster, and I am trying to link it to OpenStack / Cinder.
When I create a PVC, I can see a PV in Kubernetes and the volume in OpenStack.
But when I link a pod to the PVC, I get the message "0/x nodes are available: x node(s) had volume node affinity conflict".
My test YAML:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: kubernetes.io/cinder
parameters:
  type: classic
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-infra-consuldata4
  namespace: infra
spec:
  storageClassName: classic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
  namespace: infra
  labels:
    app: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: consul
          image: consul:1.4.3
          volumeMounts:
            - name: data
              mountPath: /consul
          resources:
            requests:
              cpu: 100m
            limits:
              cpu: 500m
          command: ["consul", "agent", "-server", "-bootstrap", "-ui", "-bind", "0.0.0.0", "-client", "0.0.0.0", "-data-dir", "/consul"]
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-infra-consuldata4
The result:
kpro describe pvc -n infra
Name: pvc-infra-consuldata4
Namespace: infra
StorageClass: classic
Status: Bound
Volume: pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc-infra-consuldata4","namespace":"infra"},"spec":...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ProvisioningSucceeded 61s persistentvolume-controller Successfully provisioned volume pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c using kubernetes.io/cinder
Mounted By: consul-85684dd7fc-j84v7
kpro describe po -n infra consul-85684dd7fc-j84v7
Name: consul-85684dd7fc-j84v7
Namespace: infra
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=consul
pod-template-hash=85684dd7fc
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/consul-85684dd7fc
Containers:
consul:
Image: consul:1.4.3
Port: <none>
Host Port: <none>
Command:
consul
agent
-server
-bootstrap
-ui
-bind
0.0.0.0
-client
0.0.0.0
-data-dir
/consul
Limits:
cpu: 2
Requests:
cpu: 500m
Environment: <none>
Mounts:
/consul from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nxchv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-infra-consuldata4
ReadOnly: false
default-token-nxchv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nxchv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 36s (x6 over 2m40s) default-scheduler 0/6 nodes are available: 6 node(s) had volume node affinity conflict.
Why does Kubernetes successfully create the Cinder volume but fail to schedule the pod?
Try finding out the nodeAffinity of your persistent volume:
$ kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [xxx]
Then try to figure out if xxx matches the node label yyy that your pod is supposed to run on:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
yyy Ready worker 8d v1.15.3
If they don't match, you will have the "x node(s) had volume node affinity conflict" error, and you need to re-create the persistent volume with the correct nodeAffinity configuration.
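For example, to look at the two sides directly (kubernetes.io/hostname is the label key used in the PV's node affinity above):

$ kubectl get pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c -o jsonpath='{.spec.nodeAffinity}'
$ kubectl get nodes -L kubernetes.io/hostname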
I also ran into this error when I forgot to deploy the EBS CSI driver before trying to get my pod to use the volume.
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
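Once the driver is applied, you can confirm its pods are running before retrying the workload, for instance:

kubectl get pods -n kube-system | grep ebs-csi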