I am trying to run MongoDB as a StatefulSet in a minikube Kubernetes cluster. I have 3 replicas, but I have the following problem: one replica (mongo-0) is up and running without any issue, while the second replica (mongo-1) stays in the Pending state forever. I described the pod and got the following output:
kubectl describe pod mongo-1 -n ng-mongo
. . .
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 17m (x70 over 6h9m) default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
According to the above error, the scheduler cannot find an available persistent volume to bind, but one already exists.
Please find my YAML definitions for this:
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
namespace: ng-mongo
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv
namespace: ng-mongo
spec:
capacity:
storage: 10Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /tmp
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: local-claim
namespace: ng-mongo
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local-storage
---
apiVersion: v1
kind: Service
metadata:
name: mongo
namespace: ng-mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
namespace: ng-mongo
spec:
serviceName: "mongo"
replicas: 3
selector:
matchLabels:
role: mongo
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--bind_ip"
- "0.0.0.0"
- "--replSet"
- rs0
resources:
requests:
cpu: 0.2
memory: 200Mi
ports:
- containerPort: 27017
volumeMounts:
- name: localvolume
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo,environment=test"
# volumes:
# - name: localvolume
# persistentVolumeClaim:
# claimName: local-claim
volumeClaimTemplates:
- metadata:
name: localvolume
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 2Gi
Can someone help me find the issue here?
You are using node affinity when creating the PV, and it needs to be configured correctly:
Node affinity tells Kubernetes that this disk can only be attached to a specific kind of node. Because of the affinity rule, your PV is tied to one specific node.
When your workload is deployed, its pods are not necessarily scheduled onto that node, so the pod cannot use that PV/PVC.
If you add node affinity to the PV, add it to the workload as well, so that both the PVC and the pod land on the same node.
Resolution steps:
Make sure the workload and the PVC are scheduled onto the same node: add the node affinity rule to the workload so it gets placed on the respective node (see the sketch below),
or else
remove the node affinity rule from the PV, then create a new PV and PVC and use those.
The nodeAffinity block in your PersistentVolume definition is where you have set this rule.
Note: in your node affinity you have specified the hostname minikube; verify the node name with
kubectl get nodes and make changes if required.
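As a minimal sketch (assuming your node really is named minikube and the rest of the spec stays exactly as in your question), adding the same affinity rule to the StatefulSet pod template could look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: ng-mongo
spec:
  # serviceName, replicas, selector unchanged from your manifest
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - minikube   # must match a node name from `kubectl get nodes`
      # containers unchanged from your manifest
  # volumeClaimTemplates unchanged from your manifest
This way the pods can only be scheduled onto the node the PV is pinned to.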
Related
I have created a Kubernetes cluster using 2 droplets (DigitalOcean machines).
One machine is set up as the master and the other as a worker.
Now I am running a project which has 2 PVCs (their configs are the same as below):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
name: pvc1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
storageClassName: my-storageclass
status: {}
I set the storage class of these PVCs to the following:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: my-storageclass
labels:
doks.digitalocean.com/managed: "true"
provisioner: dobs.csi.digitalocean.com
allowVolumeExpansion: true
parameters:
type: pd-ssd
My goal is to dynamically create PVs using the DOBS (DigitalOcean Block Storage) CSI driver.
Currently, when I run my application on Kubernetes (I deploy it using Helm), my pod gives me the following error:
0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 pod has unbound immediate PersistentVolumeClaims
I understand that the master node has taints and is therefore of no use for running my pods. The second part of the error is "1 pod has unbound immediate PersistentVolumeClaims".
How do I fix that?
Thanks in advance!
Note: I have successfully run my project on DOKS & EKS; I am doing this exercise to understand the concepts of volume binding in depth.
-------- Deployment ------
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
strategy:
type: Recreate
template:
spec:
containers:
- args:
- /bin/sh
- -c
- go run server.go
image: ***.dkr.ecr.us-east-2.amazonaws.com/my-app
imagePullPolicy: Always
name: my-app
ports:
- containerPort: 9000
resources: {}
volumeMounts:
- mountPath: /app/test1
name: pvc1
- mountPath: /app/test2
name: pvc2
imagePullSecrets:
- name: my-registery-key
restartPolicy: Always
volumes:
- name: pv1
persistentVolumeClaim:
claimName: pvc1
- name: pv2
persistentVolumeClaim:
claimName: pvc2
I am deploying a CouchDB cluster on Kubernetes.
It worked, but I got an error when I tried to scale it.
When I try to scale my StatefulSet, I get this error when I describe couchdb-3:
0/3 nodes are available: 3 pod has unbound immediate
PersistentVolumeClaims.
And this error when I describe hpa:
invalid metrics (1 invalid out of 1), first error is: failed to get
cpu utilization: missing request for cpu
failed to get cpu utilization: missing request for cpu
I ran "kubectl get pod -o wide" and received this result:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
couchdb-0 1/1 Running 0 101m 10.244.2.13 node2 <none> <none>
couchdb-1 1/1 Running 0 101m 10.244.2.14 node2 <none> <none>
couchdb-2 1/1 Running 0 100m 10.244.2.15 node2 <none> <none>
couchdb-3 0/1 Pending 0 15m <none> <none> <none> <none>
How can I fix it !?
Kubernetes Version: 1.22.4
Docker Version 20.10.11, build dea9396
Ubuntu 20.04
My hpa file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: hpa-couchdb
spec:
maxReplicas: 16
minReplicas: 6
scaleTargetRef:
apiVersion: apps/v1
kind: StatefulSet
name: couchdb
targetCPUUtilizationPercentage: 50
pv.yaml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: couch-vol-0
labels:
volume: couch-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.1.100
path: "/var/couchnfs/couchdb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: couch-vol-1
labels:
volume: couch-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.1.100
path: "/var/couchnfs/couchdb-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: couch-vol-2
labels:
volume: couch-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.1.100
path: "/var/couchnfs/couchdb-2"
I set nfs in /etc/exports: /var/couchnfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: couchdb
labels:
app: couch
spec:
replicas: 3
serviceName: "couch-service"
selector:
matchLabels:
app: couch
template:
metadata:
labels:
app: couch # pod label
spec:
containers:
- name: couchdb
image: couchdb:2.3.1
imagePullPolicy: "Always"
env:
- name: NODE_NETBIOS_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NODENAME
value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
- name: COUCHDB_USER
value: admin
- name: COUCHDB_PASSWORD
value: admin
- name: COUCHDB_SECRET
value: b1709267
- name: ERL_FLAGS
value: "-name couchdb#$(NODENAME)"
- name: ERL_FLAGS
value: "-setcookie b1709267" # the “password” used when nodes connect to each other.
ports:
- name: couchdb
containerPort: 5984
- name: epmd
containerPort: 4369
- containerPort: 9100
volumeMounts:
- name: couch-pvc
mountPath: /opt/couchdb/data
volumeClaimTemplates:
- metadata:
name: couch-pvc
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
selector:
matchLabels:
volume: couch-volume
You have 3 persistent volumes and 3 pods, each claiming one. One PV can't be claimed by more than one pod, so there is no PV left for the fourth pod to bind.
Since you are using NFS as the backend, you can use dynamic provisioning of persistent volumes:
https://github.com/openebs/dynamic-nfs-provisioner
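If you prefer to stay with static provisioning for now, you can instead add one more PV per extra replica, following the same pattern as your existing ones. A minimal sketch (the path /var/couchnfs/couchdb-3 is an assumption based on your naming scheme, and the directory has to exist under the exported /var/couchnfs):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couch-vol-3
  labels:
    volume: couch-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.100
    path: "/var/couchnfs/couchdb-3"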
I am trying to mount a Linux directory as a shared directory for multiple containers in minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: myapp-share-storage
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
local:
path: "/data/myapp/share/"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myapp-share-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: myapp-server
name: myapp-server
spec:
selector:
matchLabels:
io.kompose.service: myapp-server
template:
metadata:
labels:
io.kompose.service: myapp-server
spec:
containers:
- name: myapp-server
image: myregistry.com:5000/server-myapp:alpine
ports:
- containerPort: 80
resources: {}
volumeMounts:
- mountPath: /data/myapp/share
name: myapp-share
env:
- name: storage__root_directory
value: /data/myapp/share
volumes:
- name: myapp-share
persistentVolumeClaim:
claimName: myapp-share-claim
status: {}
It works, but with pitfalls: StatefulSets are not supported; they lead to deadlock errors:
pending PVC: waiting for first consumer to be created before binding
pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a minikube PersistentVolumeClaim without a PersistentVolume (one will be created automatically). However:
the volume is created in /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim), and
minikube doesn't honor the mount request.
How can I make it just work?
Using your YAML file I managed to create the volumes and deploy without issue, but I had to run the command minikube mount /mydir/:/data/myapp/share/ after starting minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working for me.
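In other words, the workaround is roughly this two-step sequence (the source path /tmp/myapp/k8s/ is taken from the question; adjust it to your own directory):
# start the cluster; the --mount/--mount-string flags may be ignored here
minikube start --insecure-registry="myregistry.com:5000"
# in a separate terminal, keep this running so the host directory is visible inside the minikube node
minikube mount /tmp/myapp/k8s/:/data/myapp/share/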
On GKE, I set up a StatefulSet resource as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: "redis"
selector:
matchLabels:
app: redis
updateStrategy:
type: RollingUpdate
replicas: 3
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis
resources:
limits:
memory: 2Gi
ports:
- containerPort: 6379
volumeMounts:
- name: redis-data
mountPath: /usr/share/redis
volumes:
- name: redis-data
persistentVolumeClaim:
claimName: redis-data-pvc
I want to use a PVC, so I created this one (this step was done before the StatefulSet deployment):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
When I check the resource in Kubernetes:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-data-pvc Bound pvc-6163d1f8-fb3d-44ac-a91f-edef1452b3b9 10Gi RWO standard 132m
The default Storage Class is standard.
kubectl get storageclass
NAME PROVISIONER
standard (default) kubernetes.io/gce-pd
But when I check the StatefulSet's deployment status, it is always wrong.
# Describe its pod details
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler persistentvolumeclaim "redis-data-pvc" not found
Warning FailedScheduling 17s (x2 over 20s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Normal Created 2s (x2 over 3s) kubelet Created container redis
Normal Started 2s (x2 over 3s) kubelet Started container redis
Warning BackOff 0s (x2 over 1s) kubelet Back-off restarting failed container
Why can't it find the redis-data-pvc name?
What you have done should work. Make sure that the PersistentVolumeClaim and the StatefulSet are located in the same namespace.
That said, here is an easier solution that also lets you scale up to more replicas more easily:
When using a StatefulSet with PersistentVolumeClaims, use the volumeClaimTemplates: field in the StatefulSet instead.
The volumeClaimTemplates: field is used to create a unique PVC for each replica; the PVCs get unique names ending with e.g. -0, where the number is the ordinal of the replica in the StatefulSet.
So instead, use a StatefulSet manifest like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: "redis"
selector:
matchLabels:
app: redis
updateStrategy:
type: RollingUpdate
replicas: 3
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis
resources:
limits:
memory: 2Gi
ports:
- containerPort: 6379
volumeMounts:
- name: redis-data
mountPath: /usr/share/redis
  volumeClaimTemplates: # this will be used to create the PVCs
- metadata:
name: redis-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
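Once applied, the StatefulSet controller creates one PVC per replica, named <template-name>-<pod-name>, so here redis-data-redis-0, redis-data-redis-1 and redis-data-redis-2. A quick way to verify (assuming everything lives in the default namespace):
kubectl get statefulset redis
kubectl get pvc | grep redis-data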
We have a setup in GKE with two different clusters. One cluster runs an nfs-server, and on that cluster we have a persistent volume which points to the server. This PV is then mounted in a pod running on that cluster. The second cluster also has a PV and a pod that should mount the same NFS volume. This is where the problem occurs: pointing the PV at the nfs-server's ClusterIP address does not work from the other cluster. That is understandable, but I wonder how best to achieve this.
The setup is basically this:
Persistent Volume and Persistent Volume Claim used by NFS
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 20Gi
storageClassName: manual
accessModes:
- ReadWriteMany
gcePersistentDisk:
pdName: files
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
NFS server deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nfs-server
spec:
replicas: 1
selector:
matchLabels:
role: nfs-server
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: gcr.io/google_containers/volume-nfs:0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: nfs-pvc
NFS-server service
apiVersion: v1
kind: Service
metadata:
name: nfs-server
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
role: nfs-server
Persistent volume and Persistent Volume Claim used by the pods:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 20Gi
storageClassName: manual
accessModes:
- ReadWriteMany
nfs:
server: 10.4.0.20
path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nfs
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
Part of deployment file for pod mounting the nfs
volumes:
- name: files
persistentVolumeClaim:
claimName: nfs
Output of kubectl get pv and kubectl get pvc
user@HP-EliteBook:~/Downloads$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs 100Gi RWX Retain Bound default/nfs manual 286d
nfs-pv 100Gi RWO Retain Bound default/nfs-pvc manual 286d
user@HP-EliteBook:~/Downloads$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Bound nfs 100Gi RWX manual 286d
nfs-pvc Bound nfs-pv 100Gi RWO manual 286d
The IP in the PV used by the pods is the problem: the pod on the same cluster can connect to it, but the pod on the other cluster cannot. I can use the actual pod IP from the other cluster, but the pod IP changes with every deploy, so that is not a workable solution. What is the best way to get around this problem? I only want the second cluster to have access to the NFS server, without, for example, opening it up to the world.