kubernetes mongodb ops manager running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims

I am trying to configure MongoDB Ops Manager on Kubernetes. I have a PersistentVolumeClaim based on dynamic provisioning with Ceph and configured it successfully. What I am trying to do is define the volume mounts and volumes in the MongoDBOpsManager YAML file; I have tried different things but couldn't define them.
Here is my MongoDBOpsManager YAML file:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
# podSpec:
#   podTemplate:
#     spec:
#       containers:
#         - name: mongodb-enterprise-database
#           volumeMounts:
#             - name: mongo-persistent-storage
#               mountPath: /data/db
#       volumes:
#         - name: mongo-persistent-storage
#           persistentVolumeClaim:
#             claimName: mongo-pvc
spec:
  # the version of Ops Manager distro to use
  version: 4.2.4
  containers:
    - name: mongodb-ops-manager
      volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumes:
    - name: mongo-persistent-storage
      persistentVolumeClaim:
        claimName: mongo-pvc
  # the name of the secret containing admin user credentials.
  adminCredentials: ops-manager-admin-secret
  externalConnectivity:
    type: NodePort
  # the Replica Set backing Ops Manager.
  # appDB has the SCRAM-SHA authentication mode always enabled
  applicationDatabase:
    members: 3
    statefulSet:
      spec:
        # volumeClaimTemplates:letsChangeTheWorld1
        template:
          spec:
            containers:
              - name: mongodb-ops-manager
                volumeMounts:
                  - name: mongo-persistent-storage
                    mountPath: /data/db
            volumes:
              - name: mongo-persistent-storage
                persistentVolumeClaim:
                  claimName: mongo-pvc
I don't know where I should put the volume mounts and volume definitions.
The Ops Manager (om) resource is created successfully, but when I check the pod created for it I find this error:
running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims

spec:
  containers:
    - image:
      ....
      volumeMounts:
      .....
    - image:
      ....
      volumeMounts:
      ......
  volumes:
    - name:
The volumes tag should come parallel to containers, not nested inside them.
Volumes are defined globally for all containers, and mounts are specific to each container.
Example: https://kubernetes.io/docs/concepts/storage/volumes/
Check with this once
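For reference, a minimal sketch of that layout with a PVC-backed volume is below; it assumes the claim mongo-pvc already exists, and the pod name, image and mount path are only illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        # mounts are declared per container
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumes:
    # volumes are declared once, at the same level as containers
    - name: mongo-persistent-storage
      persistentVolumeClaim:
        claimName: mongo-pvc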

Related

Pod unable to mount same path in two volumes

I am a newbie here, but I have a use case where I need to mount the same path to two different PVs. Whenever I try to use the same path, my pod doesn't come up. Check the mount paths below:
- name: xxx
  mountPath: "/home/{username}"
  readOnly: false
static:
  pvcName:
  subPath: '{username}'
capacity: 10Gi
homeMountPath: '/home/{username}'
dynamic:
  storageClass: nfs-client
  pvcNameTemplate: claim-{username}{servername}
  volumeNameTemplate: volume-{username}{servername}
  storageAccessModes: [ReadWriteOnce]
But after changing the mount path, the pod comes up without any issue, for example:
mountPath: "/home/test/{username}"
Is there something I am missing?
If you want to mount the same volume into a single Pod via multiple PVCs, it won't be possible unless you use ReadWriteMany.
But if you want to mount multiple paths into a Pod, it's possible using a single PVC or multiple PVCs, depending on the use case.
Single PVC single POD
apiVersion: v1
kind: Pod
metadata:
  name: my-test-pod
spec:
  containers:
    - name: mysql
      image: mysql
      env:
        - name: MYSQL_HOST
          value: "IP"
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
          subPath: path1
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /var/www/html
          name: data
          subPath: path2
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-site-data
Multiple PVCs and volume paths in a single Pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
    # List of volumes
    - name: volume1
      < volume details, see below >
    - name: volume2
      < volume details, see below >
  containers:
    - name: mycontainer
      volumeMounts:
        # will mount 'volume1' into /var/www/html
        - name: volume1
          mountPath: /var/www/html
        # will mount 'volume2' into /var/log
        - name: volume2
          mountPath: /var/log/
Reference : https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_provisioning_storage_in_kubernetes#example
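A filled-in version of that template for the multiple-PVC case might look like the sketch below; the image and the claim names claim-www and claim-logs are placeholders and assume those PVCs already exist.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
    # each volume is backed by its own PVC
    - name: volume1
      persistentVolumeClaim:
        claimName: claim-www
    - name: volume2
      persistentVolumeClaim:
        claimName: claim-logs
  containers:
    - name: mycontainer
      image: nginx
      volumeMounts:
        - name: volume1
          mountPath: /var/www/html
        - name: volume2
          mountPath: /var/log/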

Simple explanation of docker volume in Kubernetes

I am transitioning from Docker to Kubernetes. I had been using volumes in Docker to expose files needed by a container, just like the example below, which provides a Grafana container with some files.
I am confused about how the same can be established in Kubernetes using volumeMounts and volumes, and how that is linked to a PersistentVolumeClaim.
version: '3'
volumes:
  grafana_app_data: {}
services:
  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana_app_data:/var/lib/grafana
      - ./directory-on-local-machine/:/etc/grafana/provisioning/
A Pod Spec that is equivalent ↔️ would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: grafana-pod
spec:
  containers:
    - image: grafana/grafana:latest
      name: grafana-container
      volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-app-data
        - mountPath: /etc/grafana/provisioning
          name: grafana-provisioning
  volumes:
    - name: grafana-app-data
      hostPath:
        path: /grafana-data
        type: Directory
    - name: grafana-provisioning
      hostPath:
        path: /directory-on-machine
        type: Directory
This is using basic hostPath; you can also use a local volume or any other type of supported volume, depending on what you need.
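To connect this to a PersistentVolumeClaim, the hostPath volumes can be replaced with a persistentVolumeClaim reference. A minimal sketch, assuming the cluster has a default StorageClass and using an illustrative claim name grafana-pvc:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: grafana-pod
spec:
  containers:
    - image: grafana/grafana:latest
      name: grafana-container
      volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-app-data
  volumes:
    # the volume points at the claim, and the claim is bound to a PV by the cluster
    - name: grafana-app-data
      persistentVolumeClaim:
        claimName: grafana-pvc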

Kubernetes Pod is not saving the data even though it has persistent storage

I am using jBPM Business Central, which is deployed on Kubernetes.
Below is my deployment.apps where the persistent volume is attached:
kubectl edit deployment.apps/jbpm-server-full
volumes:
  - name: jbpm-pv-storage
    persistentVolumeClaim:
      claimName: jbpm-pv-claim
But when I restart the pod I lose all the workspaces in Business Central, even though a persistent volume is attached to the pod.
You need to mount that volume inside the container and write the data into the mounted path:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
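Applied to the jbpm-server-full Deployment above, this means adding a volumeMounts entry to the container so the existing jbpm-pv-storage volume is mounted where Business Central writes its workspaces. A rough sketch of just the relevant fields of the pod template; the mount path /opt/jboss/wildfly/bin/.niogit is an assumption about where the image keeps workspace data, so check it against your image:
spec:
  template:
    spec:
      containers:
        - name: jbpm-server-full
          image: jboss/jbpm-server-full:latest
          volumeMounts:
            # assumed workspace location; adjust to the directory
            # your Business Central image actually writes to
            - name: jbpm-pv-storage
              mountPath: /opt/jboss/wildfly/bin/.niogit
      volumes:
        - name: jbpm-pv-storage
          persistentVolumeClaim:
            claimName: jbpm-pv-claim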

k8s initContainer mountPath does not exist after kubectl pod deployment

Below is my deployment YAML. After deployment I can access the pod, and I can see the mountPath /usr/share/nginx/html, but I could not find /work-dir, which should have been created by the initContainer.
Could someone explain the reason?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
The volume at "/work-dir" is mounted by the init container and the "/work-dir" location only exists in the init container. When the init container completes, its file system is gone so the "/work-dir" directory in that init container is "gone". The application (nginx) container mounts the same volume, too, (albeit at a different location) providing mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you might think that you could just mount the path /, which would allow you to share all folders underneath. However, a mountPath does not work for /.
So, how do you solve your problem? You could have another container mount /work-dir/ in case you actually need the folder. Here is an example (a PVC and a Deployment with both mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
        - name: nginx-c
          image: nginx:latest
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /var/www/static/
        - name: alpine-c
          image: alpine:latest
          command: ["/bin/sleep", "10000s"]
          lifecycle:
            postStart:
              exec:
                command: ["/bin/mkdir", "-p", "/work-dir"]
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /work-dir/
      volumes:
        - name: shared-fs-volume
          persistentVolumeClaim:
            claimName: shared-fs-pvc

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems getting an application running inside a pod launched by a Kubernetes Job to interact with a file on the host filesystem. This happens even with the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
        - name: kio-ingester
          image: busybox
          volumeMounts:
            - name: test-volume
              mountPath: /testing
          imagePullPolicy: IfNotPresent
          command: ["ls"]
          args: ["-l", "/testing/hello.txt"]
      volumes:
        - name: test-volume
          hostPath:
            # directory location on host
            path: /tmp
            # this field is optional
            # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data can't be accessed once the volume is mounted.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest
      volumeMounts:
        - name: config-data
          mountPath: /data
  initContainers:
    - name: config-data
      image: busybox
      # run through a shell so the redirection into /data/config actually happens
      command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
      volumeMounts:
        - name: config-data
          mountPath: /data
  volumes:
    - name: config-data
      hostPath:
        # hostPath requires a path; /tmp matches the directory used in the question
        path: /tmp
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
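Since the original manifest is a Job rather than a bare Pod, the same init-container pattern can be applied to the Job's pod template. A minimal sketch under the question's own setup (busybox image, hello.txt in /tmp on the node the pod is scheduled to); the names seed-data and host-tmp are only illustrative:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  template:
    spec:
      initContainers:
        - name: seed-data
          image: busybox
          # copy the host file into a shared volume before the main container runs;
          # this still requires /tmp/hello.txt to exist on the node the pod lands on
          command: ["sh", "-c", "cp /host-tmp/hello.txt /testing/hello.txt"]
          volumeMounts:
            - name: host-tmp
              mountPath: /host-tmp
            - name: test-volume
              mountPath: /testing
      containers:
        - name: kio-ingester
          image: busybox
          command: ["ls"]
          args: ["-l", "/testing/hello.txt"]
          volumeMounts:
            - name: test-volume
              mountPath: /testing
      volumes:
        - name: host-tmp
          hostPath:
            path: /tmp
        - name: test-volume
          emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4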