pod has unbound immediate PersistentVolumeClaims ops manager - mongodb

EDIT: SEE BELOW
I am new to this and am trying to build a local cluster with 2 physical machines using kubeadm. I am following the steps at https://github.com/mongodb/mongodb-enterprise-kubernetes and everything is OK. First I install the Kubernetes operator, but when I try to install Ops Manager I get:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
The YAML I used to install Ops Manager is:
---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: opsmanager1
spec:
  replicas: 2
  version: 4.2.0
  adminCredentials: mongo-db-admin1 # Should match metadata.name
                                    # in the Kubernetes secret
                                    # for the admin user
  externalConnectivity:
    type: NodePort
  applicationDatabase:
    members: 3
    version: 4.4.0
    persistent: true
    podSpec:
      persistence:
        single:
          storage: 1Gi
I can't figure out what the problem is. I am at the testing phase, and my goal is to build a scalable MongoDB database. Thanks in advance.
Edit: I made a few changes. I created a StorageClass like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localstorage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: True
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-01
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/master/mongo01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-02
  labels:
    type: local
spec:
  storageClassName: localstorage
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/master/mongo02"
And now my YAML for Ops Manager is:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager-localmode
spec:
  replicas: 2
  version: 4.2.12
  adminCredentials: mongo-db-admin1
  externalConnectivity:
    type: NodePort
  statefulSet:
    spec:
      # the Persistent Volume Claim will be created for each Ops Manager Pod
      volumeClaimTemplates:
        - metadata:
            name: mongodb-versions
          spec:
            storageClassName: localstorage
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 2Gi
      template:
        spec:
          containers:
            - name: mongodb-ops-manager
              volumeMounts:
                - name: mongodb-versions
                  # this is the directory in each Pod where all MongoDB
                  # archives must be put
                  mountPath: /mongodb-ops-manager/mongodb-releases
  backup:
    enabled: false
  applicationDatabase:
    members: 3
    version: 4.4.0
    persistent: true
But I get a new error: Warning ProvisioningFailed 44s (x26 over 6m53s) persistentvolume-controller no volume plugin matched name: kubernetes.io/no-provisioner

At a quick glance, it looks like you don't have any provisioner that can create a volume for a PVC on your cluster; see https://v1-15.docs.kubernetes.io/docs/concepts/storage/volumes/
Your app needs a PersistentVolume, but your cluster doesn't know how to create one: kubernetes.io/no-provisioner explicitly disables dynamic provisioning, so every claim against that class must be matched by a manually created PV.
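A quick way to see what the claim can bind to, sketched with generic kubectl commands (the PVC name is whatever the operator generated for the Ops Manager pods):
# list classes, pre-created volumes, and claims; with a no-provisioner
# class, every PVC must be matched by an existing PV whose storageClassName,
# access modes, and capacity fit the request
kubectl get storageclass
kubectl get pv
kubectl get pvc --all-namespaces
# the events on a pending claim usually name the exact mismatch
kubectl describe pvc <claim-name>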

Related

Does GKE Autopilot sometimes kill Pods and is there a way to prevent it for Critical Services?

I've been debugging a 10-minute downtime of our service for some hours now, and I seem to have found the cause, but not the reason for it. Our Redis deployment in Kubernetes was down for quite a while, and neither Django nor anything else could reach it. This caused a bunch of jobs to be lost.
There are no events for the redis deployment, but here are the first logs before and after the reboot:
before:
after:
I'm also attaching the complete redis.yml at the bottom. We're using GKE Autopilot, so I guess something caused the pod to reboot? Resource usage is a lot lower than requested, at about 1% for both CPU and memory. Not sure what's going on here. I also couldn't find an annotation to tell Autopilot to leave a specific deployment alone.
redis.yml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gce-ssd
  resources:
    requests:
      storage: "2Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      name: redis
  clusterIP: None
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: redis-volume
          persistentVolumeClaim:
            claimName: redis-disk
            readOnly: false
      terminationGracePeriodSeconds: 5
      containers:
        - name: redis
          image: redis:6-alpine
          command: ["sh"]
          args: ["-c", 'exec redis-server --requirepass "$REDIS_PASSWORD"']
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
              ephemeral-storage: "1Gi"
          envFrom:
            - secretRef:
                name: env-secrets
          volumeMounts:
            - name: redis-volume
              mountPath: /data
              subPath: data
PersistentVolumeClaim is an object in Kubernetes that decouples storage resource requests from the actual resource provisioning done by the associated PersistentVolume.
Given:
- no declared PersistentVolume object, and
- Dynamic Provisioning enabled on your cluster,
Kubernetes will try to dynamically provision a persistent disk suited to the underlying infrastructure, in your case a Google Compute Engine Persistent Disk, based on the requested storage class (gce-ssd).
The claim then results in an SSD-like Persistent Disk being automatically provisioned for you, and once the claim is deleted (because the requesting pod was deleted during a downscale), the volume is destroyed.
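For reference, a dynamically provisioning class of this kind would look roughly like the following (a sketch; the actual gce-ssd class on your cluster may differ):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Delete # this default is exactly what deletes the disk along with the claim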
To overcome this issue and avoid precious data loss, you have two alternatives:
At the PersistentVolume level
To avoid data loss once the Pod and its PVC are deleted, you can set the persistentVolumeReclaimPolicy parameter to Retain. Note that this is a field of the PersistentVolume object, not of the PVC, so for a dynamically provisioned volume you patch the bound PV in place:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
This allows the persistent volume to go to the Released state instead of being deleted when the claim disappears, and the underlying data can be manually backed up.
At the StorageClass level
As a general recommendation, you should set the reclaimPolicy parameter to Retain (the default is Delete) on the StorageClass you use:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
Additional parameters are recommended:
- replication-type: should be set to regional-pd to replicate the disk across two zones
- volumeBindingMode: set to WaitForFirstConsumer so that the first consumer dictates the replication topology
You can read more on all above StorageClass parameters in the kubernetes documentation.
A PersistentVolume with the same storage class name is then declared:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-volume
spec:
  storageClassName: "ssd"
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: redis-disk
And the PersistentVolumeClaim would only declare the requested StorageClass name:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-volume-claim
spec:
  storageClassName: "ssd"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "2Gi"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: redis-volume
          persistentVolumeClaim:
            claimName: ssd-volume-claim
            readOnly: false
Declaring these objects prevents failures or scale-down operations from destroying the created PV, whether it was created manually by cluster administrators or dynamically through Dynamic Provisioning.
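To verify the result, kubectl shows the reclaim policy directly (names taken from the examples above):
kubectl get pv
# the RECLAIM POLICY column should read Retain for ssd-volume
kubectl get pv ssd-volume -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'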

Minikube: use persistent volume (shared disk) and mount it to host

I am trying to mount a Linux directory as a shared directory for multiple containers in Minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-share-storage
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  local:
    path: "/data/myapp/share/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-share-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: myapp-server
  name: myapp-server
spec:
  selector:
    matchLabels:
      io.kompose.service: myapp-server
  template:
    metadata:
      labels:
        io.kompose.service: myapp-server
    spec:
      containers:
        - name: myapp-server
          image: myregistry.com:5000/server-myapp:alpine
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /data/myapp/share
              name: myapp-share
          env:
            - name: storage__root_directory
              value: /data/myapp/share
      volumes:
        - name: myapp-share
          persistentVolumeClaim:
            claimName: myapp-share-claim
status: {}
It works, with pitfalls: StatefulSets are not supported; they produce deadlock errors:
- pending PVC: waiting for first consumer to be created before binding
- pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a Minikube PersistentVolumeClaim without a PersistentVolume (it will be created automatically). However:
- the volume is created in /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim)
- Minikube doesn't honor the mount request
How can I make it just work?
Using your YAML file I've managed to create the volumes and deploy it without issue, but I had to run minikube mount /mydir/:/data/myapp/share/ after starting Minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working.
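Note that minikube mount runs in the foreground and the mount only lives while the command is running, so a typical workflow (paths are the ones from the question; myapp.yaml is a placeholder for the manifests above) looks like:
# terminal 1: keep the host directory mounted into the minikube node
minikube mount /tmp/myapp/k8s/:/data/myapp/share/
# terminal 2: apply the manifests once the mount is up
kubectl apply -f myapp.yaml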

AttachVolume.NewAttacher failed for volume: Failed to get GCE GCECloudProvider with error <nil>

I have two VM instances on GCE with a self-installed Kubernetes (following https://medium.com/edureka/install-kubernetes-on-ubuntu-5cd1f770c9e4).
I'm trying to create a volume and use it in my pods.
I have created the following disk:
gcloud compute disks create --type=pd-ssd --size=10GB manual-disk-1
And created the following YAML files:
pv_manual.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manually-created-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: manual-disk-1
pvc_manual.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: sleppypod
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mypvc
  containers:
    - name: sleppycontainer
      image: gcr.io/google_containers/busybox
      command:
        - sleep
        - "5000"
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
When I try to create the pod, it gets stuck in ContainerCreating status, and in kubectl get events I see:
7s Warning FailedAttachVolume AttachVolume.NewAttacher failed for volume : Failed to get GCE GCECloudProvider with error
I run my two instances using a ServiceAccount with the compute instance admin role (per "Kubernetes: Failed to get GCE GCECloudProvider with error <nil>"), and my kubelet runs with --cloud-provider=gce.
How can I solve it?
You need to create a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none
For GCE details see here; you can also follow the GCE documentation here.
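Once the class is applied, claims that omit storageClassName (like mypvc above) should pick it up as the default; a quick check:
kubectl get storageclass
# the default class is flagged with (default) next to its name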

Mounting EKS EFS CSI Times Out before Pod Comes Up

I am using EKS with Kubernetes version 1.15, and when I create a StorageClass, PersistentVolume, PersistentVolumeClaim, and Deployment, the pod fails with:
Warning FailedAttachVolume 71s (x2 over 3m11s) attachdetach-controller AttachVolume.Attach failed for volume "efs-pv" : attachment timeout for volume fs-<volume>
Warning FailedMount 53s (x2 over 3m8s) kubelet, ip-<ip-address>.ec2.internal Unable to mount volumes for pod "influxdb-deployment-555f4c8b94-mldfs_default(2525d10b-e30b-4c4c-893e-10971e0c683e)": timeout expired waiting for volumes to attach or mount for pod "default"/"influxdb-deployment-555f4c8b94-mldfs". list of unmounted volumes=[persistent-storage]. list of unattached volumes=[persistent-storage]
However, when I try the same thing without building the PersistentVolume, it succeeds and creates its own volume, seemingly skipping CSI. This is what I am working with:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb-deployment
spec:
  selector:
    matchLabels:
      app: influxdb
  replicas: 1
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
        - name: influxdb
          image: influxdb:1.7.10-alpine
          ports:
            - containerPort: 8086
          volumeMounts:
            - name: persistent-storage
              mountPath: /var/lib/influx
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: efs-claim
storageclass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
persistent-volume.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-<volume-id>
persistent-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
Any idea on what is happening?
This is something left unclarified in the AWS EKS documents, but it is a must-do when setting up a StorageClass for the first time:
EKS has no default StorageClass set up out of the box.
I haven't worked with EFS on EKS before, but I did set up gp2 (EBS) successfully with the YAML file below.
Pasted here for your reference:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
Reference link
https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html
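One more thing worth ruling out in the EFS case (an assumption on my part, since an attach timeout can also mean the driver itself never registered): confirm the EFS CSI driver is actually running in the cluster:
kubectl get csidriver
kubectl get pods -n kube-system | grep efs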

unknown field "storage" in io.k8s.api.core.v1.PersistentVolumeClaim

My pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
Running kubectl apply -f pvc.yaml in microk8s got me the following error:
error validating data: ValidationError(PersistentVolumeClaim): unknown field "storage" in io.k8s.api.core.v1.PersistentVolumeClaim; if you choose to ignore these errors, turn validation off with --validate=false
Edit: the storage indentation was wrong when I copied the text onto my VM :( It's working fine now.
You forgot to specify the volumeMode. Add the volumeMode option and it should work.
Like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
If you're using a StorageClass, define one as the default, or specify its storageClassName in the claim.
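For example, a claim that pins the class explicitly would look like this (a minimal sketch reusing the slow class defined below):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi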
I have defined this in GCloud:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: slow
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate