Deploy MongoDB in k8s for development purposes

I'm a total newbie in k8s, but I've read some basics about k8s resources and Helm, and I'm trying to create a simple cluster in minikube:
Start minikube:
minikube start --cpus "4" --disk-size "40000mb"
Create namespace:
kubectl create namespace test
Use the Bitnami Helm chart for MongoDB with a custom values.yaml:
replicaCount: 1
architecture: standalone
persistence:
  enabled: true
  existingClaim: "test/mongodb-data"
nameOverride: test-mongodb
service:
  nameOverride: test-mongodb
  type: NodePort
  nodePorts:
    mongodb: 30000
namespaceOverride: test
auth:
  rootUser: admin
  rootPassword: root
  usernames: ["admin"]
  passwords: ["123"]
  databases: ["test"]
Create volume.yaml for mongodb:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-data
  namespace: test
  labels:
    type: local
  annotations:
    volume.alpha.kubernetes.io/storage-class: standard
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/mongodb-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
  namespace: test
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
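These manifests need to be applied before (or at least alongside) the Helm release so the claim exists while the pod is being scheduled. A quick sketch of applying and verifying them (the file path is illustrative):
kubectl apply -f ./k8s/backend/volume.yaml
kubectl get pv,pvc -n test   # the claim should report STATUS Bound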
Finally, apply everything to the cluster:
helm upgrade --install mongodb --namespace test --values ./k8s/backend/charts/mongodb/values.yaml bitnami/mongodb --wait --debug
As a result, I see this in the console:
history.go:56: [debug] getting history for release mongodb
Release "mongodb" does not exist. Installing it now.
install.go:192: [debug] Original chart version: ""
install.go:209: [debug] CHART PATH: C:\Temp\helm\repository\mongodb-13.6.4.tgz
client.go:128: [debug] creating 5 resource(s)
wait.go:66: [debug] beginning wait for 5 resources with timeout of 5m0s
ready.go:277: [debug] Deployment is not ready: test/mongodb-test-mongodb. 0 out of 1 expected pods are ready
...
Error: timed out waiting for the condition
helm.go:84: [debug] timed out waiting for the condition
When I run the following, I see:
kubectl describe pods -n test mongodb-test-mongodb-7585fc9c48-rk45r
Name: mongodb-test-mongodb-7585fc9c48-rk45r
Namespace: test
Priority: 0
Node: <none>
Labels: app.kubernetes.io/component=mongodb
app.kubernetes.io/instance=mongodb
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=qabat-mongodb
helm.sh/chart=mongodb-13.6.4
pod-template-hash=7585fc9c48
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mongodb-test-mongodb-7585fc9c48
Containers:
mongodb:
Image: docker.io/bitnami/mongodb:6.0.4-debian-11-r0
Port: 27017/TCP
Host Port: 0/TCP
Liveness: exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
Readiness: exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
Environment:
BITNAMI_DEBUG: false
MONGODB_EXTRA_USERNAMES: admin
MONGODB_EXTRA_DATABASES: test
MONGODB_EXTRA_PASSWORDS: <set to the key 'mongodb-passwords' in secret 'mongodb-qabat-mongodb'> Optional: false
MONGODB_ROOT_USER: admin
MONGODB_ROOT_PASSWORD: <set to the key 'mongodb-root-password' in secret 'mongodb-qabat-mongodb'> Optional: false
ALLOW_EMPTY_PASSWORD: no
MONGODB_SYSTEM_LOG_VERBOSITY: 0
MONGODB_DISABLE_SYSTEM_LOG: no
MONGODB_DISABLE_JAVASCRIPT: no
MONGODB_ENABLE_JOURNAL: yes
MONGODB_PORT_NUMBER: 27017
MONGODB_ENABLE_IPV6: no
MONGODB_ENABLE_DIRECTORY_PER_DB: no
Mounts:
/bitnami/mongodb from datadir (rw)
/bitnami/scripts from common-scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xgpl5 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
common-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mongodb-test-mongodb-common-scripts
Optional: false
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test/mongodb-data
ReadOnly: false
kube-api-access-xgpl5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m11s (x3 over 12m) default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "test/mongodb-data" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
What's wrong and how do I fix it? I tried the same flow with the RabbitMQ Helm chart, with the same result...
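A hint from the last event: the scheduler looked for a claim literally named "test/mongodb-data". The Bitnami chart's persistence.existingClaim expects a bare claim name (the namespace comes from the release), so a likely fix is to drop the namespace prefix in values.yaml:
persistence:
  enabled: true
  existingClaim: "mongodb-data"
Then uninstall the failed release and install again; kubectl get pvc -n test should show the claim Bound before the pod can schedule:
helm uninstall mongodb --namespace test
helm upgrade --install mongodb --namespace test --values ./k8s/backend/charts/mongodb/values.yaml bitnami/mongodb --wait --debug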

Related

Kubernetes Pod is unable to mount volumes to GCP Filestore

I am new to Kubernetes, and as part of a tutorial I have spun up a GKE cluster and a GCP Filestore instance.
Now I am trying to mount Grafana's volume on this Filestore instance. However, the mount keeps timing out, and I am unable to decipher where the mistake lies. I need your help in addressing the issue.
Below is the output.
C:\Users\ak>kubectl describe pod/grafana-7c666cff94-vkgh4
Name: grafana-7c666cff94-vkgh4
Namespace: bc
Priority: 0
Node: gke-bc-gke-cluster-bc-nodepool-9496e187-zsnw/10.51.0.5
Start Time: Fri, 02 Sep 2022 16:21:28 +0530
Labels: app=grafana
pod-template-hash=7c666cff94
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/grafana-7c666cff94
Containers:
grafana:
Container ID:
Image: grafana/grafana:8.4.4
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 250m
memory: 750Mi
Liveness: tcp-socket :3000 delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:3000/robots.txt delay=10s timeout=2s period=30s #success=1 #failure=3
Environment: <none>
Mounts:
/var/lib/grafana from fileserver (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v7qjd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
fileserver:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: fileserver-claim
ReadOnly: false
kube-api-access-v7qjd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned bluecopa/grafana-7c666cff94-vkgh4 to gke-bc-gke-cluster-bc-nodepool-9496e187-zsnw
Warning FailedMount 4m15s (x11 over 40m) kubelet MountVolume.SetUp failed for volume "fileserver" : mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: mount -t nfs 10.168.189.130:/bc_fs /var/lib/kubelet/pods/cf44b980-7461-4c0e-a32f-673588160692/volumes/kubernetes.io~nfs/fileserver
Output: Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs xx.xx.xx.xx:/bc_fs /var/lib/kubelet/pods/cf44b980-7461-4c0e-a32f-673588160692/volumes/kubernetes.io~nfs/fileserver]
Output: mount.nfs: Connection timed out
Warning FailedMount 3m16s (x12 over 37m) kubelet Unable to attach or mount volumes: unmounted volumes=[fileserver], unattached volumes=[fileserver kube-api-access-v7qjd]: timed out waiting for the condition
Warning FailedMount 59s (x7 over 41m) kubelet Unable to attach or mount volumes: unmounted volumes=[fileserver], unattached volumes=[kube-api-access-v7qjd fileserver]: timed out waiting for the condition
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
  namespace: bluecopa
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /bc_fs
    server: xx.xx.xx.xx
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
  namespace: bluecopa
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: fileserver
  resources:
    requests:
      storage: 100Gi
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: bluecopa
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:8.4.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: fileserver
      volumes:
        - name: fileserver
          persistentVolumeClaim:
            claimName: fileserver-claim
While using volume mounts in pods, we need to watch out for the security context.
Use the following securityContext in the deployment file; it should let the pod mount the volume without any issues:
securityContext:
  runAsUser: 0
For more information, check out these documents:
Doc1 &
Doc2
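In context, the suggestion amounts to extending the pod-level securityContext already present in the Deployment above; a sketch (runAsUser: 0 runs Grafana as root, which sidesteps permission problems on the NFS export):
    spec:
      securityContext:
        runAsUser: 0
        fsGroup: 472
        supplementalGroups:
          - 0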

Unable to attach or mount volumes: timed out waiting for the condition

One of the pods in my local cluster can't start because I get the error Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-nats-db-5f5f9fd6d5-wrcpk 0/1 ContainerCreating 0 19m
deployment-nats-server-57bbc76d44-tz5zj 1/1 Running 0 19m
$ kubectl describe pods deployment-nats-db-5f5f9fd6d5-wrcpk
Name: deployment-nats-db-5f5f9fd6d5-wrcpk
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Tue, 12 Oct 2021 21:42:23 +0600
Labels: app=nats-db
pod-template-hash=5f5f9fd6d5
skaffold.dev/run-id=1f5421ae-6e0a-44d6-aa09-706a1d1aa011
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/deployment-nats-db-5f5f9fd6d5
Containers:
nats-db:
Container ID:
Image: postgres:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 256Mi
Requests:
cpu: 250m
memory: 128Mi
Environment Variables from:
nats-db-secrets Secret Optional: false
Environment: <none>
Mounts:
/docker-entrypoint-initdb.d from nats-initdb-volume (rw)
/var/lib/postgresql/data from nats-data-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b5cz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nats-data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nats-pvc
ReadOnly: false
nats-initdb-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nats-pvc
ReadOnly: false
kube-api-access-5b5cz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/deployment-nats-db-5f5f9fd6d5-wrcpk to docker-desktop
Warning FailedMount 4m9s (x2 over 17m) kubelet Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-initdb-volume kube-api-access-5b5cz nats-data-volume]: timed out waiting for the condition
Warning FailedMount 112s (x6 over 15m) kubelet Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition
I don't know where the issue is. The PV and PVC both appear to have been applied successfully.
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nats-pv 50Mi RWO Retain Bound default/nats-pvc local-hostpath-storage 21m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nats-pvc Bound nats-pv 50Mi RWO local-hostpath-storage 21m
Following are the configs for the SC, PV, PVC, and Deployment:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nats-pv
spec:
  capacity:
    storage: 50Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-hostpath-storage
  hostPath:
    path: /mnt/wsl/nats-pv
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nats-pvc
spec:
  volumeName: nats-pv
  resources:
    requests:
      storage: 50Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-hostpath-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nats-db
spec:
  selector:
    matchLabels:
      app: nats-db
  template:
    metadata:
      labels:
        app: nats-db
    spec:
      containers:
        - name: nats-db
          image: postgres:latest
          envFrom:
            - secretRef:
                name: nats-db-secrets
          volumeMounts:
            - name: nats-data-volume
              mountPath: /var/lib/postgresql/data
            - name: nats-initdb-volume
              mountPath: /docker-entrypoint-initdb.d
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 256Mi
      volumes:
        - name: nats-data-volume
          persistentVolumeClaim:
            claimName: nats-pvc
        - name: nats-initdb-volume
          persistentVolumeClaim:
            claimName: nats-pvc
The pod starts successfully if I comment out the volumeMounts and volumes keys, and the problem is specific to the /var/lib/postgresql/data path: if I remove nats-data-volume and keep nats-initdb-volume, the pod starts fine.
Can anyone point out where exactly I'm going wrong? Thanks in advance and best regards.
...if I remove nats-data-volume and keep nats-initdb-volume, it's started successfully.
This PVC cannot be mounted twice; that's why the condition cannot be met.
Looking at your spec, it seems you don't mind which worker node runs your postgres pod. In that case you don't need a PV/PVC at all; you can mount a hostPath directly, like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nats-db
spec:
  selector:
    matchLabels:
      app: nats-db
  template:
    metadata:
      labels:
        app: nats-db
    spec:
      containers:
        - name: nats-db
          image: postgres:latest
          envFrom:
            - secretRef:
                name: nats-db-secrets
          volumeMounts:
            - name: nats-data-volume
              mountPath: /var/lib/postgresql/data
            - name: nats-data-volume
              mountPath: /docker-entrypoint-initdb.d
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 256Mi
      volumes:
        - name: nats-data-volume
          hostPath:
            path: /mnt/wsl/nats-pv
            type: DirectoryOrCreate
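If you would rather keep PersistentVolumeClaims, an alternative consistent with this diagnosis is to give each mount its own PV/PVC pair instead of referencing nats-pvc twice; a sketch of a second pair (names and path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nats-initdb-pv
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-hostpath-storage
  hostPath:
    path: /mnt/wsl/nats-initdb-pv
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nats-initdb-pvc
spec:
  volumeName: nats-initdb-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: local-hostpath-storage
The nats-initdb-volume entry in the Deployment would then point at claimName: nats-initdb-pvc.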

CephFS Unable to attach or mount volumes: unmounted volumes=[image-store]

I'm having trouble getting my kube-registry up and running on CephFS. I'm using Rook to set this cluster up. As you can see, I'm having trouble attaching the volume. Any idea what could be causing this issue? Any help is appreciated.
kube-registry.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-registry
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: registry
          image: registry:2
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            # Configuration reference: https://docs.docker.com/registry/configuration/
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: registry
          readinessProbe:
            httpGet:
              path: /
              port: registry
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: cephfs-pvc
            readOnly: false
StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph
  # CephFS filesystem name into which the volume shall be created
  fsName: myfs
  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0
  # Root path of an existing CephFS volume
  # Required for provisionVolume: "false"
  # rootPath: /absolute/path
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Deletea
kubectl describe pods --namespace=kube-system kube-registry-58659ff99b-j2b4d
Name: kube-registry-58659ff99b-j2b4d
Namespace: kube-system
Priority: 0
Node: minikube/192.168.99.212
Start Time: Wed, 25 Nov 2020 13:19:35 -0500
Labels: k8s-app=kube-registry
kubernetes.io/cluster-service=true
pod-template-hash=58659ff99b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/kube-registry-58659ff99b
Containers:
registry:
Container ID:
Image: registry:2
Image ID:
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Liveness: http-get http://:registry/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:registry/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
REGISTRY_HTTP_ADDR: :5000
REGISTRY_HTTP_SECRET: Ple4seCh4ngeThisN0tAVerySecretV4lue
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
Mounts:
/var/lib/registry from image-store (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nw4th (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
image-store:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cephfs-pvc
ReadOnly: false
default-token-nw4th:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nw4th
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 13m (x3 over 13m) default-scheduler running "VolumeBinding" filter plugin for pod "kube-registry-58659ff99b-j2b4d": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 13m default-scheduler Successfully assigned kube-system/kube-registry-58659ff99b-j2b4d to minikube
Warning FailedMount 2m6s (x5 over 11m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[image-store], unattached volumes=[image-store default-token-nw4th]: timed out waiting for the condition
Warning FailedAttachVolume 59s (x6 over 11m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-6eeff481-eb0a-4269-84c7-e744c9d639d9" : attachdetachment timeout for volume 0001-0009-rook-c
Ceph provisioner logs (I restarted my cluster, so the names will differ, but the output is the same):
I1127 18:27:19.370543 1 csi-provisioner.go:121] Version: v2.0.0
I1127 18:27:19.370948 1 csi-provisioner.go:135] Building kube configs for running in cluster...
I1127 18:27:19.429190 1 connection.go:153] Connecting to unix:///csi/csi-provisioner.sock
I1127 18:27:21.561133 1 common.go:111] Probing CSI driver for readiness
W1127 18:27:21.905396 1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
I1127 18:27:22.060963 1 leaderelection.go:243] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com...
I1127 18:27:22.122303 1 leaderelection.go:253] successfully acquired lease rook-ceph/rook-ceph-cephfs-csi-ceph-com
I1127 18:27:22.323990 1 controller.go:820] Starting provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-797b67c54b-42jwc_4e14295b-f73d-4b94-bae9-ff4f2639b487!
I1127 18:27:22.324061 1 clone_controller.go:66] Starting CloningProtection controller
I1127 18:27:22.324205 1 clone_controller.go:84] Started CloningProtection controller
I1127 18:27:22.325240 1 volume_store.go:97] Starting save volume queue
I1127 18:27:22.426790 1 controller.go:869] Started provisioner controller rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-797b67c54b-42jwc_4e14295b-f73d-4b94-bae9-ff4f2639b487!
I1127 19:08:39.850493 1 controller.go:1317] provision "kube-system/cephfs-pvc" class "rook-cephfs": started
I1127 19:08:39.851034 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"cephfs-pvc", UID:"7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06", APIVersion:"v1", ResourceVersion:"7744", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "kube-system/cephfs-pvc"
I1127 19:08:43.670226 1 controller.go:1420] provision "kube-system/cephfs-pvc" class "rook-cephfs": volume "pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06" provisioned
I1127 19:08:43.670262 1 controller.go:1437] provision "kube-system/cephfs-pvc" class "rook-cephfs": succeeded
E1127 19:08:43.692108 1 controller.go:1443] couldn't create key for object pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06: object has no meta: object does not implement the Object interfaces
I1127 19:08:43.692189 1 controller.go:1317] provision "kube-system/cephfs-pvc" class "rook-cephfs": started
I1127 19:08:43.692205 1 controller.go:1326] provision "kube-system/cephfs-pvc" class "rook-cephfs": persistentvolume "pvc-7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06" already exists, skipping
I1127 19:08:43.692220 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"cephfs-pvc", UID:"7c47bda7-0c7b-4ca0-b6d0-19d717ef2e06", APIVersion:"v1", ResourceVersion:"7744", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned
In the pasted YAML for your StorageClass, you have:
reclaimPolicy: Deletea
Was that a paste issue? Regardless, this is likely what is causing your problem.
I just had this exact problem with some of my Ceph RBD volumes, and the reason for it was that I was using a StorageClass that had
reclaimPolicy: Delete
However, the cephcsi driver was not configured to support it (and I don't think it actually supports it either).
Using a StorageClass with
reclaimPolicy: Retain
fixed the issue.
To check this on your cluster, run the following:
$ kubectl get sc rook-cephfs -o yaml
And look for the line that starts with reclaimPolicy:
Then, look at the csidriver your StorageClass is using. In your case it is rook-ceph.cephfs.csi.ceph.com
$ kubectl get csidriver rook-ceph.cephfs.csi.ceph.com -o yaml
And look for the entries under volumeLifecycleModes
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  creationTimestamp: "2020-11-16T22:18:55Z"
  name: rook-ceph.cephfs.csi.ceph.com
  resourceVersion: "29863971"
  selfLink: /apis/storage.k8s.io/v1beta1/csidrivers/rook-ceph.cephfs.csi.ceph.com
  uid: a9651d30-935d-4a7d-a7c9-53d5bc90c28c
spec:
  attachRequired: true
  podInfoOnMount: false
  volumeLifecycleModes:
    - Persistent
If the only entry under volumeLifecycleModes is Persistent, then your driver is not configured to support reclaimPolicy: Delete.
If instead you see
volumeLifecycleModes:
  - Persistent
  - Ephemeral
then your driver should support reclaimPolicy: Delete.
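StorageClass objects are largely immutable, so to change the policy you would typically delete and recreate the class (a sketch, which also fixes the Deletea typo):
kubectl delete sc rook-cephfs
Then re-apply the StorageClass manifest above with its last line corrected to:
reclaimPolicy: Retain
Note that existing PVs keep the reclaim policy they were provisioned with; the change only affects newly provisioned volumes.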

Failed to bind to volume when installing RabbitMQ on K8S

I'm trying to install RabbitMQ using Helm, but the installation fails because of volume issues.
This is my storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
This is my persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /media/2TB-DATA/k8s-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-dev
This is the output listing my storage class and PV:
# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage (default) kubernetes.io/no-provisioner Delete Immediate false 14m
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
main-pv 100Gi RWX Delete Available local-storage 40m
Then I install RabbitMQ:
helm install rabbitmq bitnami/rabbitmq
The pod is in Pending state, and I see this error:
# kubectl describe pvc
Name: data-rabbitmq-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=rabbitmq
app.kubernetes.io/name=rabbitmq
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: rabbitmq-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 3m20s (x4363 over 18h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
What am I doing wrong?
This may be platform-related. Where did you try to do that? I'm asking because I just can't reproduce it on GKE; there it works fine.
Cluster version, labels, nodes
kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
gke-cluster-1-default-pool-82008fd9-8x81 Ready <none> 96d v1.14.10-gke.36 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-8x81,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-qkp7 Ready <none> 96d v1.14.10-gke.36 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-qkp7,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-tlc7 Ready <none> 96d v1.14.10-gke.36 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-tlc7,kubernetes.io/os=linux,test=node
PV, Storageclass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: test
              operator: In
              values:
                - node-test
Installing chart:
helm install rabbitmq bitnami/rabbitmq
...
kubectl get pods
NAME READY STATUS RESTARTS AGE
...
pod/rabbitmq-0 1/1 Running 0 3m40s
...
kubectl describe pod rabbitmq-0
Name: rabbitmq-0
Namespace: default
Priority: 0
Node: gke-cluster-1-default-pool-82008fd9-tlc7/10.164.0.29
Start Time: Thu, 03 Sep 2020 07:34:10 +0000
Labels: app.kubernetes.io/instance=rabbitmq
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rabbitmq
controller-revision-hash=rabbitmq-8687f4cb9f
helm.sh/chart=rabbitmq-7.6.4
statefulset.kubernetes.io/pod-name=rabbitmq-0
Annotations: checksum/secret: 433e8ea7590e8d9f1bb94ed2f55e6d9b95f8abef722a917b97a9e916921d7ac5
kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container rabbitmq
Status: Running
IP: 10.16.2.13
IPs: <none>
Controlled By: StatefulSet/rabbitmq
Containers:
rabbitmq:
Container ID: docker://b1a567522f50ac4c0663db2d9eca5fd8721d9a3d900ac38bb58f0cae038162f2
Image: docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0
Image ID: docker-pullable://bitnami/rabbitmq@sha256:9abd53aeef6d222fec318c97a75dd50ce19c16b11cb83a3e4fb91c4047ea0d4d
Ports: 5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Thu, 03 Sep 2020 07:34:34 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=120s timeout=20s period=30s #success=1 #failure=6
Readiness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=10s timeout=20s period=30s #success=1 #failure=3
Environment:
BITNAMI_DEBUG: false
MY_POD_IP: (v1:status.podIP)
MY_POD_NAME: rabbitmq-0 (v1:metadata.name)
MY_POD_NAMESPACE: default (v1:metadata.namespace)
K8S_SERVICE_NAME: rabbitmq-headless
K8S_ADDRESS_TYPE: hostname
RABBITMQ_FORCE_BOOT: no
RABBITMQ_NODE_NAME: rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
K8S_HOSTNAME_SUFFIX: .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
RABBITMQ_MNESIA_DIR: /bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)
RABBITMQ_LDAP_ENABLE: no
RABBITMQ_LOGS: -
RABBITMQ_ULIMIT_NOFILES: 65536
RABBITMQ_USE_LONGNAME: true
RABBITMQ_ERL_COOKIE: <set to the key 'rabbitmq-erlang-cookie' in secret 'rabbitmq'> Optional: false
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <set to the key 'rabbitmq-password' in secret 'rabbitmq'> Optional: false
RABBITMQ_PLUGINS: rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_auth_backend_ldap
Mounts:
/bitnami/rabbitmq/conf from configuration (rw)
/bitnami/rabbitmq/mnesia from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from rabbitmq-token-mclhw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-rabbitmq-0
ReadOnly: false
configuration:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rabbitmq-config
Optional: false
rabbitmq-token-mclhw:
Type: Secret (a volume populated by a Secret)
SecretName: rabbitmq-token-mclhw
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m42s default-scheduler Successfully assigned default/rabbitmq-0 to gke-cluster-1-default-pool-82008fd9-tlc7
Normal SuccessfulAttachVolume 6m36s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-8145821b-ed09-11ea-b464-42010aa400e3"
Normal Pulling 6m32s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Pulling image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
Normal Pulled 6m22s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Successfully pulled image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
Normal Created 6m18s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Created container rabbitmq
Normal Started 6m18s kubelet, gke-cluster-1-default-pool-82008fd9-tlc7 Started container rabbitmq
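Back on the failing cluster, note that kubectl describe pvc shows an empty StorageClass and the event says no storage class is set, so the chart's claim may not be requesting local-storage at all. Assuming the chart exposes the usual Bitnami persistence values, you can pin the class explicitly (sketch):
helm install rabbitmq bitnami/rabbitmq --set persistence.storageClass=local-storage
Also worth checking: the PV only lists ReadWriteMany, while the chart's claim typically requests ReadWriteOnce; static binding requires the PV's access modes to include the requested mode.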

Pods not getting scheduled to node with matching labels

I'm getting this error when exec'ing into my pod: Error from server (BadRequest): pod es-master-5cb49c68cc-w6dxv does not have a host assigned
It seemed to be related to my nodeAffinity, but I don't see anything immediately wrong with it. I can't seem to get my deployment to attach its pods to any of my nodes. I don't have any taints or tolerations set up on the node or pod. I've tried switching to labels that are automatically generated and present on every node, but nothing seems to work. I've even tried removing my affinity section entirely, and also tried adding nodeSelector to the spec by itself.
Here is my deployment config and output from kubectl describe pod -n elasticsearch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: elasticsearch
    role: master
  name: es-master
  namespace: elasticsearch
spec:
  replicas: 3
  selector:
    matchLabels:
      component: elasticsearch
      role: master
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
      annotations:
        iam.amazonaws.com/role: {REDACTED}
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  component: elasticsearch
                  role: master
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/region
                    operator: In
                    values:
                      - us-east-2
Name: es-master-866f7fb558-298ht
Namespace: elasticsearch
Priority: 0
Node: <none>
Labels: component=elasticsearch
pod-template-hash=866f7fb558
role=master
Annotations: iam.amazonaws.com/role: {REDACTED}
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
Controlled By: ReplicaSet/es-master-866f7fb558
Init Containers:
init-sysctl:
Image: busybox:1.27.2
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xflv6 (ro)
Containers:
elasticsearch:
Image: amazon/opendistro-for-elasticsearch:0.9.0
Ports: 9300/TCP, 9200/TCP, 9600/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 2
memory: 12Gi
Requests:
cpu: 2
memory: 12Gi
Liveness: tcp-socket :transport delay=60s timeout=1s period=10s #success=1 #failure=3
Environment:
CLUSTER_NAME: logs
NUMBER_OF_MASTERS: 3
NODE_MASTER: true
NODE_INGEST: false
NODE_DATA: false
NETWORK_HOST: 0.0.0.0
TRANSPORT_TLS_PEM_PASS:
HTTP_TLS_PEM_PASS:
NODE_NAME: es-master-866f7fb558-298ht (v1:metadata.name)
DISCOVERY_SERVICE: elasticsearch-discovery
KUBERNETES_NAMESPACE: elasticsearch (v1:metadata.namespace)
PROCESSORS: 2 (limits.cpu)
ES_JAVA_OPTS: -Xms6g -Xmx6g
Mounts:
/usr/share/elasticsearch/config/admin-crt.pem from certs (ro,path="admin-crt.pem")
/usr/share/elasticsearch/config/admin-key.pem from certs (ro,path="admin-key.pem")
/usr/share/elasticsearch/config/admin-root-ca.pem from certs (ro,path="admin-root-ca.pem")
/usr/share/elasticsearch/config/elasticsearch.yml from config (rw,path="elasticsearch.yml")
/usr/share/elasticsearch/config/elk-crt.pem from certs (ro,path="elk-crt.pem")
/usr/share/elasticsearch/config/elk-key.pem from certs (ro,path="elk-key.pem")
/usr/share/elasticsearch/config/elk-root-ca.pem from certs (ro,path="elk-root-ca.pem")
/usr/share/elasticsearch/config/logging.yml from config (rw,path="logging.yml")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xflv6 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: elasticsearch
Optional: false
certs:
Type: Secret (a volume populated by a Secret)
SecretName: elasticsearch-tls-data
Optional: false
default-token-xflv6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xflv6
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 59s (x3 over 3m44s) default-scheduler 0/8 nodes are available: 8 Insufficient cpu.
All nodes are m5a.large EC2 instances.
The error is pretty clear: 0/8 nodes are available: 8 Insufficient cpu. It means no node has the 2 free CPU cores specified in requests. The solution is either to provision nodes with more CPU or to reduce the CPU requests in the pod spec.
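For scale: an m5a.large has 2 vCPUs and 8 GiB of memory, so requests of cpu: 2 and memory: 12Gi can never fit once system pods are accounted for. A sketch of container resources that would fit such a node (illustrative numbers; the heap in ES_JAVA_OPTS has to be lowered to match):
resources:
  requests:
    cpu: 1
    memory: 4Gi
  limits:
    cpu: 2
    memory: 6Gi
and correspondingly, e.g. ES_JAVA_OPTS: -Xms3g -Xmx3g.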