I have a 3-node Kubernetes cluster; the host names are host_1, host_2, and host_3.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
host_1 Ready master 134d v1.10.1
host_2 Ready <none> 134d v1.10.1
host_3 Ready <none> 134d v1.10.1
I have defined 3 local persistent volumes of size 100M, each mapped to a local directory on its node. I used the following descriptor 3 times, where <hostname> is one of host_1, host_2, host_3:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume-<hostname>
spec:
  capacity:
    storage: 100M
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /opt/jnetx/volumes/test-volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <hostname>
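One way to stamp out the three per-host PVs from this template is a small shell loop (a sketch; pv-template.yaml is a hypothetical file holding the descriptor above):

$ for host in host_1 host_2 host_3; do sed "s/<hostname>/$host/g" pv-template.yaml | kubectl apply -f -; done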
After applying the three YAMLs, I have the following:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
test-volume-host_1 100M RWO Delete Available local-storage 58m
test-volume-host_2 100M RWO Delete Available local-storage 58m
test-volume-host_3 100M RWO Delete Available local-storage 58m
Now, I have a very simple container that writes to a file, which should be located on the local persistent volume. I deploy it as a StatefulSet with 1 replica and map the volume via the StatefulSet's volumeClaimTemplates:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: filewriter
spec:
  serviceName: filewriter
  ...
  replicas: 1
  template:
    spec:
      containers:
        - name: filewriter
          ...
          volumeMounts:
            - mountPath: /test/data
              name: fw-pv-claim
  volumeClaimTemplates:
    - metadata:
        name: fw-pv-claim
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-storage
        resources:
          requests:
            storage: 100M
The volume claim seems to have been created OK and bound to the PV on the first host:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
test-volume-host_1 100M RWO Delete Bound default/fw-pv-claim-filewriter-0 local-storage 1m
test-volume-host_2 100M RWO Delete Available local-storage 1h
test-volume-host_3 100M RWO Delete Available local-storage 1h
But the pod hangs in the Pending state:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
filewriter-0 0/1 Pending 0 4s
If we describe it, we can see the following errors:
$ kubectl describe pod filewriter-0
Name: filewriter-0
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2s (x8 over 1m) default-scheduler 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
Can you help me figure out what is wrong? Why can't it just create the pod?
It seems that the one node where the PV is available has a taint that your StatefulSet does not have a toleration for.
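A minimal sketch of such a toleration, assuming the taint is the default master taint (host_1 carries the master role in your node list); it goes into the StatefulSet's pod template:

spec:
  template:
    spec:
      tolerations:
        # Assumption: the taint is node-role.kubernetes.io/master:NoSchedule,
        # the kubeadm default for master nodes. Adjust key/effect if yours differs.
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule

You can check the actual taint with kubectl describe node host_1 | grep -i taints.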
I had a very similar case to the above and observed the same symptom (node affinity conflict). In my case the issue was that I had 2 volumes attached to 2 different nodes but was trying to use them within 1 pod.
I detected this by using kubectl describe pvc name-of-pvc and noting the selected-node annotation. Once I set the pod to use volumes that were both on one node, I no longer had issues.
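For example, with the claim name from the output above (the jsonpath form is just a shortcut for pulling the volume.kubernetes.io/selected-node annotation, which is set when the claim was bound with delayed, WaitForFirstConsumer binding):

$ kubectl describe pvc fw-pv-claim-filewriter-0
$ kubectl get pvc fw-pv-claim-filewriter-0 \
    -o jsonpath="{.metadata.annotations['volume\.kubernetes\.io/selected-node']}"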
Related
I am trying to deploy an application on an EKS cluster, version 1.23. When I applied the files, my deployment got stuck in the Pending state. I described the pod and found the error below.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 2m55s cluster-autoscaler pod didn't trigger scale-up: 2 node(s) had volume node affinity conflict
Warning FailedScheduling 57s (x3 over 3m3s) default-scheduler 0/15 nodes are available: 1 node(s) were unschedulable, 14 node(s) had volume node affinity conflict.
I also followed Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict, but still no luck.
The PVC file is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ankit-eks-discovery-pvc
  namespace: ankit-eks
spec:
  storageClassName: ankit-eks-discovery
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024M
And the storage class used for that is:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ankit-eks-discovery
  namespace: ankit-eks
#volumeBindingMode: WaitForFirstConsumer
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  fsType: ext4
  type: gp2
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-east-1a
          - us-east-1b
          - us-east-1c
          - us-east-1d
The PV description is:
Name:              pvc-c9f6d0d3-0348-4ff4-8d9f-e01af1996e60
Labels:            topology.kubernetes.io/region=us-east-1
                   topology.kubernetes.io/zone=us-east-1a
Annotations:       pv.kubernetes.io/migrated-to: ebs.csi.aws.com
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
                   volume.kubernetes.io/provisioner-deletion-secret-name:
                   volume.kubernetes.io/provisioner-deletion-secret-namespace:
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      ankit-eks-discovery
Status:            Bound
Claim:             ankit-eks/ankit-eks-discovery-pvc
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.kubernetes.io/zone in [us-east-1a]
                   topology.kubernetes.io/region in [us-east-1]
Message:
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   vol-0eb1d80b2882356b2
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:            <none>
I tried deleting and recreating the deployment, PVC, and storage class, but no luck.
I also checked the labels on my nodes. They are correct.
[ankit@ankit]$ kubectl describe no ip-10-211-26-94.ec2.internal
Name:               ip-10-211-26-94.ec2.internal
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=m5d.large
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/capacityType=ON_DEMAND
                    eks.amazonaws.com/nodegroup=ankit-nodes
                    eks.amazonaws.com/nodegroup-image=ami-0eb3216fe26784e21
                    failure-domain.beta.kubernetes.io/region=us-east-1
                    failure-domain.beta.kubernetes.io/zone=us-east-1b
                    k8s.io/cloud-provider-aws=b69ac44d98ef071c695017c202bde456
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-10-211-26-94.ec2.internal
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=m5d.large
                    topology.ebs.csi.aws.com/zone=us-east-1b
                    topology.kubernetes.io/region=us-east-1
                    topology.kubernetes.io/zone=us-east-1b
Annotations:        alpha.kubernetes.io/provided-node-ip: 10.211.26.94
                    csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-068a872a874b02642"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
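For reference, the mismatch can be spotted by comparing the PV's required zone (us-east-1a in the PV description above) with each node's zone label (us-east-1b for the node shown here):

$ kubectl get nodes -L topology.kubernetes.io/zone
$ kubectl get pv pvc-c9f6d0d3-0348-4ff4-8d9f-e01af1996e60 -o jsonpath='{.spec.nodeAffinity}'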
I don't know what I am doing wrong. Can anyone help here?
I have been trying to find a solution for my issue for about a week.
I am trying to add a volume to my Pod in Azure K8s, and when I do this, I get the error:
Unable to attach or mount volumes: unmounted volumes=[azure], unattached volumes=[azure kube-api-access-11111]: timed out waiting for the condition
My PersistentVolume
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: managed-csi
My pod
And all information about my Pod
kubectl -n test describe pods mypod
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-azuredisk 5Gi RWO Retain Bound test/pvc-azuredisk managed-csi 73m
kubectl get pvc -n test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-azuredisk Bound pv-azuredisk 5Gi RWO managed-csi 74m
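For reference, a few checks that often narrow down attach/mount timeouts like this one (names taken from the outputs above):

$ kubectl describe pvc pvc-azuredisk -n test
$ kubectl get volumeattachment
$ kubectl get events -n test --sort-by=.lastTimestamp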
Could you give me some advice on where my error is?
I'm starting out in K8s and I'm not quite wrapping my head around deploying a StatefulSet with multiple replicas bound to a local disk: the PV+PVC+SC scenario vs. the volumeClaimTemplates + hostPath scenario.
My goal is to deploy a MongoDB StatefulSet with 3 replicas set in ReplicaSet mode (Mongo's replica set) and bind each one to a local SSD.
I did a few tests, and there are a few concepts I need to get straight.
Scenario a) using PV+PVC+SC:
If, in my StatefulSet's container (set with replicas: 1), I declare a volumeMount and a volume, I can point it to a PVC which uses an SC used by a PV that points to a physical local SSD folder.
The concept is straightforward; it all maps beautifully.
If I increase the replicas to more than one, then from the second pod onward they won't find a volume to bind to, and I get the 1 node(s) didn't find available persistent volumes to bind error.
This makes me realise that the storage capacity reserved by the PVC on that PV is not replicated along with the pods in the StatefulSet and mapped to each created pod.
Scenario b) volumeClaimTemplates + HostPath:
I commented out the volume and instead used volumeClaimTemplates, which indeed works as I was expecting in scenario a: for each created pod an associated claim gets created and some storage capacity gets reserved for that pod. This is also a pretty straightforward concept, but it only works as long as I use storageClassName: hostpath in the volumeClaimTemplates. I tried using my SC and the result is the same 1 node(s) didn't find available persistent volumes to bind error.
Also, when created via volumeClaimTemplates, the PV names are useless and confusing, as they start with pvc-...
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mg-pv 3Gi RWO Delete Available mg-sc 64s
pvc-32589cce-f472-40c9-b6e4-dc5e26c2177a 50Mi RWO Delete Bound default/mg-pv-cont-mongo-3 hostpath 36m
pvc-3e2f4e50-30f8-4ce8-8a62-0b923fd6aa79 50Mi RWO Delete Bound default/mg-pv-cont-mongo-1 hostpath 37m
pvc-8f4ff966-c30a-469f-a68d-ed579ef2a96f 50Mi RWO Delete Bound default/mg-pv-cont-mongo-4 hostpath 36m
pvc-9f8c933b-85d6-4024-8bd0-6668feee8757 50Mi RWO Delete Bound default/mg-pv-cont-mongo-2 hostpath 37m
pvc-d6c212f3-2391-4137-97c3-07836c90b8f3 50Mi RWO Delete Bound default/mg-pv-cont-mongo-0 hostpath 37m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mg-pv-cont-mongo-0 Bound pvc-d6c212f3-2391-4137-97c3-07836c90b8f3 50Mi RWO hostpath 37m
mg-pv-cont-mongo-1 Bound pvc-3e2f4e50-30f8-4ce8-8a62-0b923fd6aa79 50Mi RWO hostpath 37m
mg-pv-cont-mongo-2 Bound pvc-9f8c933b-85d6-4024-8bd0-6668feee8757 50Mi RWO hostpath 37m
mg-pv-cont-mongo-3 Bound pvc-32589cce-f472-40c9-b6e4-dc5e26c2177a 50Mi RWO hostpath 37m
mg-pv-cont-mongo-4 Bound pvc-8f4ff966-c30a-469f-a68d-ed579ef2a96f 50Mi RWO hostpath 37m
mg-pvc Pending mg-sc 74s
Is there any way to set the names of the PVs created by volumeClaimTemplates to something more useful, as when declaring a PV?
How do I point volumeClaimTemplates' PVs to an SSD, as I'm doing in my scenario a?
Many thanks.
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mg-pv
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: mg-sc
  local:
    path: /Volumes/ProjectsSSD/k8s_local_volumes/mongo/mnt/data/unt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
SC
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mg-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mg-pvc
spec:
  storageClassName: mg-sc
  # volumeName: mg-pv
  resources:
    requests:
      # storage: 1Gi
      storage: 50Mi
  accessModes:
    - ReadWriteOnce
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
      environment: test
  serviceName: 'mongo'
  replicas: 5
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - '--bind_ip'
            - all
            - '--replSet'
            - rs0
            # - "--smallfiles"
            # - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mg-pv-cont
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: 'role=mongo,environment=test'
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: 'mongo'
      ### Using volumes you have to have one persistent volume for each created pod; useful only for a static set of pods.
      # volumes:
      #   - name: mg-pv-cont
      #     persistentVolumeClaim:
      #       claimName: mg-pvc
  ## Volume claim templates create a claim for each created pod, so when scaling the number of pods up or down they'll claim their own space in the persistent volume.
  volumeClaimTemplates:
    - metadata:
        name: mg-pv-cont # this binds
        # name: mg-pv-pvc-template # same name as volumeMounts or it won't bind.
        ### Waiting for deployments to stabilize...
        ### - statefulset/mongo: Waiting for statefulset spec update to be observed...
      spec:
        # storageClassName: mg-sc
        storageClassName: hostpath
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 50Mi
OK, after fiddling with it a bit more and testing a couple more configurations, I found out that PVC-to-PV binding happens in a 1:1 manner, so once a PV has bound to a claim (either a PVC or one from volumeClaimTemplates), no other claim can bind to it. So the solution is simply to create as many PVs as the pods you expect to create, plus some extra for scaling the replicas of your StatefulSet up and down. In volumeClaimTemplates: spec: storageClassName: you can then use the SC you defined, so those PVs get used. There is no use for a standalone PVC when using volumeClaimTemplates; it would just create a claim that nobody uses.
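For example, a minimal sketch of stamping out one local PV per expected replica (the -$i suffixes on the name and path are illustrative assumptions; each local PV needs its own directory on the node):

for i in 0 1 2 3 4; do
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mg-pv-$i
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  volumeMode: Filesystem
  storageClassName: mg-sc
  local:
    # Assumption: one pre-created directory per volume.
    path: /Volumes/ProjectsSSD/k8s_local_volumes/mongo/mnt/data/unt-$i
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
EOF
done

With five such PVs available, each of the five claims from volumeClaimTemplates can bind to its own volume.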
Hope this will help others starting out in the Kubernetes world.
Cheers.
I am trying to mount a persistent volume on pods (via a deployment).
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - image: ...
          volumeMounts:
            - mountPath: /app/folder
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
However, the pod stays in "ContainerCreating" status and the events show the following error message.
Unable to mount volumes for pod "podname": timeout expired waiting for volumes to attach or mount for pod "namespace"/"podname". list of unmounted volumes=[volume]. list of unattached volumes=[volume]
I verified that the persistent volume claim is ok and bound to a persistent volume.
What am I missing here?
When you create a PVC without specifying a PV or a StorageClass type in a GKE cluster, it will fall back to the default option:
StorageClass: standard
Provisioner: kubernetes.io/gce-pd
Type: pd-standard
Please take a look at the official documentation: Cloud.google.com: Kubernetes engine persistent volumes
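You can confirm what this default looks like on your own cluster with the following (standard is the default class name on GKE, as noted above):

$ kubectl get storageclass
$ kubectl describe storageclass standard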
There are many circumstances that can produce the error message you encountered.
As it's unknown how many replicas are in your deployment, as well as the number of nodes and how the pods were scheduled on those nodes, I've tried to reproduce your issue and encountered the same error with the following steps (the GKE cluster was freshly created to prevent any other dependencies that might affect the behavior).
Steps:
Create a PVC
Create a Deployment with replicas > 1
Check the state of pods
Additional links
Create a PVC
Below is an example YAML definition of a PVC, the same as yours:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
After applying the above definition, please check that it was created successfully. You can do so with the commands below:
$ kubectl get pvc volume-claim
$ kubectl get pv
$ kubectl describe pvc volume-claim
$ kubectl get pvc volume-claim -o yaml
Create a Deployment with replicas > 1
Below is an example YAML definition of a Deployment with volumeMounts and replicas > 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 10 # amount of pods must be > 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          volumeMounts:
            - mountPath: /app/folder
              name: volume
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume-claim
Apply it and wait for a while.
Check the state of pods
You can check the state of the pods with the command below:
$ kubectl get pods -o wide
Output of the above command:
NAME READY STATUS RESTARTS AGE IP NODE
ubuntu-deployment-2q64z 0/1 ContainerCreating 0 4m27s <none> gke-node-1
ubuntu-deployment-4tjp2 1/1 Running 0 4m27s 10.56.1.14 gke-node-2
ubuntu-deployment-5tn8x 0/1 ContainerCreating 0 4m27s <none> gke-node-1
ubuntu-deployment-5tn9m 0/1 ContainerCreating 0 4m27s <none> gke-node-3
ubuntu-deployment-6vkwf 0/1 ContainerCreating 0 4m27s <none> gke-node-1
ubuntu-deployment-9p45q 1/1 Running 0 4m27s 10.56.1.12 gke-node-2
ubuntu-deployment-lfh7g 0/1 ContainerCreating 0 4m27s <none> gke-node-3
ubuntu-deployment-qxwmq 1/1 Running 0 4m27s 10.56.1.13 gke-node-2
ubuntu-deployment-r7k2k 0/1 ContainerCreating 0 4m27s <none> gke-node-3
ubuntu-deployment-rnr72 0/1 ContainerCreating 0 4m27s <none> gke-node-3
Take a look at the above output:
3 pods are in the Running state
7 pods are in the ContainerCreating state
All of the Running pods are located on the same node, gke-node-2
You can get more detailed information about why the pods are in the ContainerCreating state with:
$ kubectl describe pod NAME_OF_POD_WITH_CC_STATE
The Events section in the output of the above command shows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned default/ubuntu-deployment-2q64z to gke-node-1
Warning FailedAttachVolume 14m attachdetach-controller Multi-Attach error for volume "pvc-7d756147-6434-11ea-a666-42010a9c0058" Volume is already used by pod(s) ubuntu-deployment-qxwmq, ubuntu-deployment-9p45q, ubuntu-deployment-4tjp2
Warning FailedMount 92s (x6 over 12m) kubelet, gke-node-1 Unable to mount volumes for pod "ubuntu-deployment-2q64z_default(9dc28e95-6434-11ea-a666-42010a9c0058)": timeout expired waiting for volumes to attach or mount for pod "default"/"ubuntu-deployment-2q64z". list of unmounted volumes=[volume]. list of unattached volumes=[volume default-token-dnvnj]
The pod cannot get past the ContainerCreating state because mounting a volume failed: the volume is already being used by pods on a different node.
ReadWriteOnce: the volume can be mounted as read-write by a single node.
Additional links
Please take a look at: Cloud.google.com: Access modes of persistent volumes.
There is a detailed answer on the topic of access modes: Stackoverflow.com: Why can you set multiple accessmodes on a persistent volume
As it's unknown what you are trying to achieve, please take a look at the comparison between Deployments and StatefulSets: Cloud.google.com: Persistent Volume: Deployments vs statefulsets.
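If the goal is for each replica to get its own disk, the StatefulSet route from that comparison would look roughly like the sketch below (illustrative names, mirroring the Deployment above; a headless Service named ubuntu is assumed for serviceName):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu-statefulset
spec:
  selector:
    matchLabels:
      app: ubuntu
  serviceName: ubuntu # assumes a headless Service with this name exists
  replicas: 10
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          volumeMounts:
            - mountPath: /app/folder
              name: volume
  volumeClaimTemplates:
    - metadata:
        name: volume
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi

Each pod then gets its own PVC (volume-ubuntu-statefulset-0, -1, and so on), so the Multi-Attach error above does not occur.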
If you are doing this with a cloud provider, the StorageClass object will create the respective volume for your persistent volume claim.
If you are trying to do this locally on minikube or in a self-managed Kubernetes cluster, you need to manually create the StorageClass that will provide the volumes for you, or create the volume manually, as in this example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The hostPath field will mount this path from the node the pod is currently running on.
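A claim that would bind to this PV could look like the following sketch (the claim name is illustrative; what matters is the matching storageClassName: manual and a request no larger than the PV's capacity):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim # hypothetical name
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi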
I have been trying to run Kafka/ZooKeeper on Kubernetes. Using Helm charts I was able to install ZooKeeper on the cluster. However, the ZK pods are stuck in the Pending state. When I issued describe on one of the pods, "didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate" was the reason for the scheduling failure. But when I issue describe on the PVC, I get "waiting for first consumer to be created before binding". I tried to re-spawn the whole cluster, but the result is the same. I am trying to use https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/ as a guide.
Can someone please guide me here ?
kubectl get pods -n zoo-keeper
NAME READY STATUS RESTARTS AGE
zoo-keeper-zk-0 0/1 Pending 0 20m
zoo-keeper-zk-1 0/1 Pending 0 20m
zoo-keeper-zk-2 0/1 Pending 0 20m
kubectl get sc
NAME PROVISIONER AGE
local-storage kubernetes.io/no-provisioner 25m
kubectl describe sc
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
kubectl describe pod foob-zookeeper-0 -n zoo-keeper
ubuntu@kmaster:~$ kubectl describe pod foob-zookeeper-0 -n zoo-keeper
Name:               foob-zookeeper-0
Namespace:          zoo-keeper
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=foob-zookeeper
                    app.kubernetes.io/instance=data-coord
                    app.kubernetes.io/managed-by=Tiller
                    app.kubernetes.io/name=foob-zookeeper
                    app.kubernetes.io/version=foob-zookeeper-9.1.0-15
                    controller-revision-hash=foob-zookeeper-5321f8ff5
                    release=data-coord
                    statefulset.kubernetes.io/pod-name=foob-zookeeper-0
Annotations:        foobar.com/product-name: zoo-keeper ZK
                    foobar.com/product-revision: ABC
Status:             Pending
IP:
Controlled By:      StatefulSet/foob-zookeeper
Containers:
  foob-zookeeper:
    Image:       repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15
    Ports:       2181/TCP, 2888/TCP, 3888/TCP, 10007/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Liveness:   exec [zkOk.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  tcp-socket :2181 delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZK_REPLICAS:           3
      ZK_HEAP_SIZE:          1G
      ZK_TICK_TIME:          2000
      ZK_INIT_LIMIT:         10
      ZK_SYNC_LIMIT:         5
      ZK_MAX_CLIENT_CNXNS:   60
      ZK_SNAP_RETAIN_COUNT:  3
      ZK_PURGE_INTERVAL:     1
      ZK_LOG_LEVEL:          INFO
      ZK_CLIENT_PORT:        2181
      ZK_SERVER_PORT:        2888
      ZK_ELECTION_PORT:      3888
      JMXPORT:               10007
    Mounts:
      /var/lib/zookeeper from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nfcfx (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-foob-zookeeper-0
    ReadOnly:   false
  default-token-nfcfx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nfcfx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  69s (x4 over 3m50s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
kubectl get pv
ubuntu@kmaster:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv 50Gi RWO Retain Available local-storage 10m
ubuntu@kmaster:~$
kubectl get pvc local-claim
ubuntu@kmaster:~$ kubectl get pvc local-claim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-claim Pending local-storage 8m9s
ubuntu@kmaster:~$
kubectl describe pvc local-claim
ubuntu@kmaster:~$ kubectl describe pvc local-claim
Name:          local-claim
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type    Reason                Age                    From                         Message
  ----    ------                ----                   ----                         -------
  Normal  WaitForFirstConsumer  2m3s (x26 over 7m51s)  persistentvolume-controller  waiting for first consumer to be created before binding
Mounted By:    <none>
My PV files:
cat create-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/kafka-mount
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kmaster
cat pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 50Gi
It looks like you created your PV on the master node. By default the master node is marked unschedulable for ordinary pods by a so-called taint. To be able to run a service on the master node you have two options:
1) Add a toleration to the service to allow it to run on the master node:
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
You may even specify that the service runs only on the master node:
nodeSelector:
  node-role.kubernetes.io/master: ""
2) You can remove the taint from the master node, so any pod will be able to run on it. You should know that this is dangerous, because it can make your cluster very unstable.
kubectl taint nodes --all node-role.kubernetes.io/master-
Read more about taints and tolerations here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
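To see which taints are actually set before choosing between these options, you can run (kmaster is the node name from the outputs above):

$ kubectl describe node kmaster | grep -i taints
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'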