Kubernetes Pod is not saving the data even though it has persistent storage - kubernetes

I am using JBPM Business Central, which is deployed on K8s.
Below is my deployment.apps where the persistent volume is attached.
kubectl edit deployment.apps/jbpm-server-full
volumes:
- name: jbpm-pv-storage
  persistentVolumeClaim:
    claimName: jbpm-pv-claim
But when I restart the pod I lose all the workspaces in Business Central, even though the persistent volume is attached to the pod.

You need to mount that volume inside the container and write data to the mounted path:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
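Applied to the jbpm-server-full Deployment from the question, a minimal sketch could look like the following. The mountPath /opt/jboss/wildfly/bin/.niogit is an assumption about where Business Central keeps its workspace repositories in the jboss/jbpm-server-full image; verify the actual data directory inside your running container before relying on it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jbpm-server-full
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jbpm-server-full
  template:
    metadata:
      labels:
        app: jbpm-server-full
    spec:
      containers:
      - name: jbpm-server-full
        image: jboss/jbpm-server-full:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jbpm-pv-storage
          # Assumed location of Business Central workspace data;
          # check the real path inside your container.
          mountPath: /opt/jboss/wildfly/bin/.niogit
      volumes:
      - name: jbpm-pv-storage
        persistentVolumeClaim:
          claimName: jbpm-pv-claim
With the mount in place, anything Business Central writes under that path survives pod restarts because it lands on the PersistentVolume rather than the container's ephemeral filesystem.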

Related

Will the logs collected by k8s sidecar be lost?

I use the sidecar method in k8s to collect logs. If I use emptyDir for the mount, will the uncollected logs be lost when the pod is moved to another node?
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    ...
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-agent
    image: k8s.gcr.io/fluentd-gcp:1.30
    ...
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: fluentd-config
Yes, you will lose the data. emptyDir is erased when a Pod is removed from its node (e.g. when it is evicted and rescheduled on another node).
The logs that you'd like to preserve should be printed to stdout; then collected and persisted by your logging subsystem in the cluster.
From the docs:
An emptyDir volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node.
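If the uncollected log files themselves must survive the Pod being deleted or rescheduled, one option (not shown in the question) is to back the varlog volume with a PersistentVolumeClaim instead of emptyDir. A minimal sketch, assuming a claim named varlog-pvc and a default StorageClass that can provision it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: varlog-pvc    # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# In the counter Pod above, replace the emptyDir with the claim:
# volumes:
# - name: varlog
#   persistentVolumeClaim:
#     claimName: varlog-pvc
# - name: config-volume
#   configMap:
#     name: fluentd-config
Note that a ReadWriteOnce claim still only follows the Pod to one node at a time; for most setups, shipping logs to stdout and letting the cluster's logging pipeline persist them remains the simpler approach.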

kubernetes mongodb ops manager running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims

I am trying to configure MongoDB Ops Manager on Kubernetes. I have a PersistentVolumeClaim based on dynamic provisioning with Ceph and configured it successfully. What I am trying to do is define the volume mounts and volumes in the MongoDBOpsManager YAML file; I tried different things but couldn't define them.
here is my MongoDBOpsManager yaml file:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
# podSpec:
#   podTemplate:
#     spec:
#       containers:
#         - name: mongodb-enterprise-database
#           volumeMounts:
#           - name: mongo-persistent-storage
#             mountPath: /data/db
#       volumes:
#       - name: mongo-persistent-storage
#         persistentVolumeClaim:
#           claimName: mongo-pvc
spec:
  # the version of Ops Manager distro to use
  version: 4.2.4
  containers:
  - name: mongodb-ops-manager
    volumeMounts:
    - name: mongo-persistent-storage
      mountPath: /data/db
  volumes:
  - name: mongo-persistent-storage
    persistentVolumeClaim:
      claimName: mongo-pvc
  # the name of the secret containing admin user credentials.
  adminCredentials: ops-manager-admin-secret
  externalConnectivity:
    type: NodePort
  # the Replica Set backing Ops Manager.
  # appDB has the SCRAM-SHA authentication mode always enabled
  applicationDatabase:
    members: 3
    statefulSet:
      spec:
        # volumeClaimTemplates:letsChangeTheWorld1
        template:
          spec:
            containers:
            - name: mongodb-ops-manager
              volumeMounts:
              - name: mongo-persistent-storage
                mountPath: /data/db
            volumes:
            - name: mongo-persistent-storage
              persistentVolumeClaim:
                claimName: mongo-pvc
I don't know where I should put the volume mounts and volumes definitions.
The Ops Manager object is created successfully, but when I check the pod created for it I find this error:
running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims
spec:
  containers:
  - image:
    ....
    volumeMounts:
    .....
  - image:
    ....
    volumeMounts:
    ......
  volumes:
  - name:
The volumes field should sit at the same level as containers.
Volumes are defined once for the whole Pod, while volumeMounts are specific to each container.
Example: https://kubernetes.io/docs/concepts/storage/volumes/
Try it this way and check again.
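A minimal, self-contained sketch of that layout, with placeholder names and images; the structural point is that volumes sits at the Pod spec level, parallel to containers, while each container declares its own volumeMounts:
apiVersion: v1
kind: Pod
metadata:
  name: volume-layout-demo    # placeholder name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data/db
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /backup
  volumes:              # parallel to containers, not nested inside one of them
  - name: data
    persistentVolumeClaim:
      claimName: mongo-pvc    # the claim from the question
Both containers see the same PersistentVolumeClaim, each at its own mountPath.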

Simple explanation of docker volume in Kubernetes

I am transitioning from Docker to Kubernetes. I had been using volumes in Docker to expose files needed by a container, just like the example below, which provides the Grafana container some files.
I am confused about how the same can be achieved in Kubernetes using volumeMounts and volumes, and how it is linked to a PersistentVolumeClaim.
version: '3'
volumes:
  grafana_app_data: {}
services:
  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana_app_data:/var/lib/grafana
      - ./directory-on-local-machine/:/etc/grafana/provisioning/
A Pod Spec that is equivalent ↔️ would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: grafana-pod
spec:
  containers:
  - image: grafana/grafana:latest
    name: grafana-container
    volumeMounts:
    - mountPath: /var/lib/grafana
      name: grafana-app-data
    - mountPath: /etc/grafana/provisioning
      name: grafana-provisioning
  volumes:
  - name: grafana-app-data
    hostPath:
      path: /grafana-data
      type: Directory
  - name: grafana-provisioning
    hostPath:
      path: /directory-on-local-machine
      type: Directory
This uses a basic hostPath; you can also use a local volume 💾 or any other supported volume type, depending on what you need. 💾📀💿💽
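Since the question also asks how this links to a PersistentVolumeClaim, here is a hedged variant that replaces the grafana-app-data hostPath with a PVC; the claim name and size are assumptions for illustration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-app-data-pvc    # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: grafana-pod-pvc
spec:
  containers:
  - image: grafana/grafana:latest
    name: grafana-container
    volumeMounts:
    - mountPath: /var/lib/grafana
      name: grafana-app-data
  volumes:
  - name: grafana-app-data
    persistentVolumeClaim:
      claimName: grafana-app-data-pvc
The PVC plays roughly the role of the named volume grafana_app_data in the compose file: it is a request for storage that the cluster binds to an actual PersistentVolume, while the Pod only references the claim by name.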

k8s initContainer mountPath does not exist after kubectl pod deployment

Below is the deployment YAML. After deployment, I can access the pod
and I can see the mountPath "/usr/share/nginx/html", but I cannot find
"/work-dir", which should have been created by the initContainer.
Could someone explain the reason?
Thanks and Rgds
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume is mounted at "/work-dir" by the init container, and that "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is "gone" too. The application (nginx) container mounts the same volume, albeit at a different location, which gives the two containers a mechanism to share the volume's content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount (whether backed by an emptyDir or a PVC) lets you share content between /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you may think you could just mount the path /, which would share all folders underneath; however, a mountPath does not work for /.
So, how do you solve your problem? You could have another container (or pod) mount /work-dir/ in case you actually need the folder. Here is an example (PVC and Deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
        - name: nginx-c
          image: nginx:latest
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /var/www/static/
        - name: alpine-c
          image: alpine:latest
          command: ["/bin/sleep", "10000s"]
          lifecycle:
            postStart:
              exec:
                command: ["/bin/mkdir", "-p", "/work-dir"]
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /work-dir/
      volumes:
        - name: shared-fs-volume
          persistentVolumeClaim:
            claimName: shared-fs-pvc
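If all you need is to see /work-dir inside the original init-demo Pod, a simpler variant (a sketch, not the approach above) is to mount the same emptyDir volume a second time in the nginx container; Kubernetes allows one volume to be mounted at several paths within the same container:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
    - name: workdir
      mountPath: /work-dir      # same volume, second mount point
  initContainers:
  - name: install
    image: busybox
    command: ["wget", "-O", "/work-dir/index.html", "http://kubernetes.io"]
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  volumes:
  - name: workdir
    emptyDir: {}
With this, the file written by the init container is visible both at /work-dir/index.html and at /usr/share/nginx/html/index.html in the nginx container.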

Kubernetes: Choose volume dependent on namespace

In a simple Postgres Deployment, I wish to choose the volume dependent on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file; I basically want to make MAKE_THIS_DEPENDENT_ON_NAMESPACE depend on the environment (or namespace) this Deployment is used in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres:9.6
        name: postgres
        volumeMounts:
        - name: postgres-persistent-storage
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgres-persistent-storage
        gcePersistentDisk:
          pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
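To tie this back to the question: if a claim with the same name exists in both namespaces (each bound to its own storage), the Deployment manifest can stay identical and only the namespace you apply it to changes. A sketch, with the claim name and size as assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc        # hypothetical claim name, identical in both namespaces
  namespace: production
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: staging
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# In the Postgres Deployment, replace the gcePersistentDisk volume with:
# volumes:
# - name: postgres-persistent-storage
#   persistentVolumeClaim:
#     claimName: postgres-pvc
Each namespace's claim binds to whatever storage is provisioned there (dynamically via a StorageClass, or pre-created PersistentVolumes), so the namespace-dependent part moves out of the Deployment and into the claim binding.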