postgres data retention in Kubernetes minikube - postgresql

I am trying to deploy postgres with a persistent volume on my minikube instance. I have mounted the volume (hostPath) using a PVC, but I don't see the postgres tables being retained. I tried to touch a file in the shared directory in the pod and found it was retained, which means the volume is retained between deployments, so why not the postgres tables? Thanks for any insights.
Here is my postgres deployment yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dbapp-deployment
  labels:
    app: dbapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbapp
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: default
      labels:
        app: dbapp
        tier: backend
    spec:
      containers:
      - name: dbapp
        image: xxxxx/dbapp:latest
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: pvc001
          mountPath: /var/lib/postgres/data
      volumes:
      - name: pvc001
        persistentVolumeClaim:
          claimName: pvc001
$ kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001   Bound    pvc-29320475-37dc-11e9-8a82-080027218780   1Gi        RWX            standard       29m
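One thing worth checking (a hedged suggestion, assuming the image is derived from the official postgres image, which keeps its data under /var/lib/postgresql/data by default): compare the mount path with the data directory the server actually uses. The pod name below is a placeholder, and the user may need adjusting for your image:
kubectl exec -it <dbapp-pod> -- psql -U postgres -c 'SHOW data_directory;'
kubectl exec -it <dbapp-pod> -- df -h | grep postgres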

Related

Persistent Payara Server Admin UI in Kubernetes

I am using payara/server-full in Kubernetes. I want to add a persistent volume so that all configuration made to the Payara server via the Admin UI is persisted after the pod is recreated, including uploaded .war files.
Right now my deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name:
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: payara/server-full
        imagePullPolicy: "Always"
        ports:
        - name: myapp-default
          containerPort: 8080
        - name: myapp-admin
          containerPort: 4848
How to augment that yaml file to make use of a persistent volume?
Which path(s) within payara should be synced with the persistent volume so that Payara's configuration isn't lost after redeployment?
Which additional yaml files do I need?
So after a longer consideration of the problem I realised I need to persist everything under /opt/payara/appserver/glassfish/domains for all configuration made via the Admin UI to be persisted. However, if I simply start the pod with a volumeMount pointing to that path, i.e.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
      - name: myapp-vol
        persistentVolumeClaim:
          claimName: myapp-rwo-pvc
      containers:
      - name: myapp
        image: payara/server-full
        imagePullPolicy: "Always"
        ports:
        - name: myapp-default
          containerPort: 8080
        - name: myapp-admin
          containerPort: 4848
        volumeMounts:
        - mountPath: "/opt/payara/appserver/glassfish/domains"
          name: myapp-vol
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-rwo-pvc
  labels:
    app: dont-delete-autom
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
then the Payara server won't be able to start successfully, because Kubernetes will mount an empty persistent volume into that location. Payara, however, needs the config files that are originally located within /opt/payara/appserver/glassfish/domains.
What I needed to do is to provision the volume with the data by default located in that folder. But how to do that when the only way to access the PV is to mount it into a pod?
First I scaled the above deployment to 0 with:
kubectl scale --replicas=0 deployment/myapp
This deletes all pods accessing the persistent volume.
Then I created a "provisioning" pod which mounts the previously created persistent volume into /tmp.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: pv-provisioner
  namespace: default
spec:
  containers:
  - image: payara/server-full
    imagePullPolicy: Always
    name: pv-provisioner
    ports:
    - containerPort: 8080
      name: myapp-default
      protocol: TCP
    - containerPort: 4848
      name: myapp-admin
      protocol: TCP
    volumeMounts:
    - mountPath: "/tmp"
      name: myapp-vol
    resources:
      limits:
        cpu: "2"
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 128Mi
  volumes:
  - name: myapp-vol
    persistentVolumeClaim:
      claimName: myapp-rwo-pvc
Then I used the following commands to copy the necessary data first from the "provisioning" pod to a local folder tmp, and then back from tmp to the persistent volume (previously mounted into pv-provisioner:/tmp). There is no option to copy directly from pod:/a to pod:/b.
kubectl cp pv-provisioner:/opt/payara/appserver/glassfish/domains/. tmp
kubectl cp tmp/. pv-provisioner:/tmp
As a result everything stored under /opt/payara/appserver/glassfish/domains/ in the original payara container was now copied into the persistent volume identified by the persistence volume claim "myapp-rwo-pvc".
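As an alternative to the manual kubectl cp round trip, the same seeding can be done with an initContainer that copies the default domain into the volume when it is still empty. This is only a sketch, assuming the stock payara/server-full layout; the seed-domains name and the /target mount point are made up for illustration. The fragment would go into the Deployment's pod spec next to containers:, reusing the myapp-vol volume defined above.
      initContainers:
      - name: seed-domains
        image: payara/server-full
        # copy the default domain config into the volume, but only if the volume is still empty
        command: ["sh", "-c", "[ -n \"$(ls -A /target 2>/dev/null)\" ] || cp -a /opt/payara/appserver/glassfish/domains/. /target/"]
        volumeMounts:
        - name: myapp-vol
          mountPath: /target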
To finish it up I deleted the provisioning pod and scaled the deployment back up:
kubectl delete pod pv-provisioner
kubectl scale --replicas=3 deployment/myapp
The payara server is now starting successfully, and any configuration made via the Admin UI, including .war deployments, is persisted, so the payara pods can be killed at any time and after the restart everything is as before.
Thanks for reading.

CreateContainerError while creating postgresql in k8s

I'm trying to run a postgresql db in k8s. There are no errors while creating everything from the file, but the pod of the deployment can't create its container.
There is my yaml code:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: adminpassword
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10.18
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim
Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: postgres
After running:
kubectl create -f filename
I got:
configmap/postgres-config created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
deployment.apps/postgres created
service/postgres created
But when I type:
kubectl get pods
There is an error:
postgres-78496cc865-85kt7 0/1 CreateContainerError 0 13m
This is the PV and PVC; there is no more space in the question to add that as code :)
If you describe the pod, you'll see the warning message in there,
Warning FailedScheduling 45s (x2 over 45s) default-scheduler persistentvolumeclaim "postgres-pv-claim" not found
On a high level, a database instance can run inside a Kubernetes container. A database instance stores data in files, and those files live on storage requested through a PersistentVolumeClaim, which must be created and made available to the PostgreSQL instance. To create the database instance as a container, you use a Deployment. In order to provide an access interface that is independent of the particular container, you create a Service in front of the database; the Service remains unchanged even if a container (or pod) is moved to a different node.
In your case, create a PV resource and a PVC bound to it so that it can be used by the pod. Since the pod currently cannot find that claim, it went into a pending state. This can be achieved in multiple ways; for example, you can use hostPath as the local storage:
$ k get pods
NAME READY STATUS RESTARTS AGE
postgres-795cfcd67b-khfgn 1/1 Running 0 18s
Sample PV and PVC configs are below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nautilus
spec:
  storageClassName: manual
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/home/mohan"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
You can check the Persistent Volumes doc for more details. Also, read more about storage classes and StatefulSets for deploying database applications in a Kubernetes cluster.
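For reference, a minimal sketch of a "manual" StorageClass to go with statically provisioned volumes like the one above could look as follows (an assumption for illustration; with static PVs the storageClassName string only has to match, so the object itself is often not strictly required):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer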
Thanks to all who tried to help me! The problem was in PersistentVolume.spec.hostPath.path: there was an invalid character in the path, because I had tried to use "./path".

How to have data persist in GKE kubernetes StatefulSet with postgres?

So I'm just trying to get a web app running on GKE experimentally to familiarize myself with Kubernetes and GKE.
I have a StatefulSet (Postgres) with a persistent volume / persistent volume claim which is mounted to the Postgres pod as expected. The problem I'm having is making the Postgres data endure. If I mount the PV at var/lib/postgres the data gets overridden with each pod update. If I mount at var/lib/postgres/data I get the warning:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
Using Docker alone, having the volume mount point at var/lib/postgresql/data works as expected and the data endures, but I don't know what to do now in GKE. How does one set this up properly?
Setup file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sm-pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "postgis-db"
  namespace: "default"
  labels:
    app: "postgis-db"
spec:
  serviceName: "postgis-db"
  replicas: 1
  selector:
    matchLabels:
      app: "postgis-db"
  template:
    metadata:
      labels:
        app: "postgis-db"
    spec:
      terminationGracePeriodSeconds: 25
      containers:
      - name: "postgis"
        image: "mdillon/postgis"
        ports:
        - containerPort: 5432
          name: postgis-port
        volumeMounts:
        - name: sm-pd-volume
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: sm-pd-volume
        persistentVolumeClaim:
          claimName: sm-pd-volume-claim
You are getting this warning because the postgres container is using the volume's mount point directly as its data directory, which is not recommended. You have to use a subdirectory of the mount to resolve this issue, via subPath in the StatefulSet manifest yaml:
        volumeMounts:
        - name: sm-pd-volume
          mountPath: /var/lib/postgresql/data
          subPath: data
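Another option that is sometimes used with the official postgres/postgis images is to leave the mount path as it is and point PGDATA at a subdirectory of the mount instead. A hedged sketch of the container spec (the pgdata subdirectory name is arbitrary):
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: sm-pd-volume
          mountPath: /var/lib/postgresql/data
Either way, initdb ends up running in a subdirectory rather than directly on the mount point that contains lost+found.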

MongoDB not persistent in kubernetes

I'm trying to configure a mongodb deployment inside the k8s world.
My mongo deployment file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
      - name: panel-admin-mongo-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: panel-admin-mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: panel-admin-mongo-storage
          mountPath: /data/db
Mongo service file:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
  - port: 27017
    targetPort: 27017
And my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
When I'm entering the mongodb container using:
kubectl exec -it panel-admin-mongo-deployment-6dcfc5b8c7-mk8d5 sh
and I save some user emails and passwords inside a collection (e.g. users), everything works fine. But when I shut down the pod and the container inside of it and boot it up again, the data is gone. Shouldn't it be independent of the pod's life cycle? And if yes, what am I missing?
I am not a k8s expert, but your problem is that you are not using Kubernetes StatefulSets. Have a look here:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
AFAIK, for any persistent deployment you need to create your pods via a StatefulSet.
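A minimal sketch of what that could look like for the mongo setup above (the names are reused from the question; the storage size is an assumption, and serviceName would ideally point to a headless Service):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: panel-admin-mongo
spec:
  serviceName: panel-admin-mongo-cluster-ip-service
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      containers:
      - name: panel-admin-mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-storage
          mountPath: /data/db
  # each replica gets its own PVC created from this template
  volumeClaimTemplates:
  - metadata:
      name: mongo-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi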
First, make sure that your pod successfully uses the PVC:
kubectl describe po/${POD_NAME}
and check the Volumes section:
Volumes:
  prometheus-operator-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-operator-db-0
    ReadOnly:   false
If the PVC is used successfully, check the reclaim policy of your PV; the value of persistentVolumeReclaimPolicy should be "Retain".
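To check and, if necessary, change that (the PV name below is a placeholder):
kubectl get pv
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'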

How to define PVC for specific path in a single node in kubernetes

I am running a local k8s cluster and defining the PV as hostPath for mysql pods.
I'm sharing all the configuration details below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The problem I am getting: as the mysql pod runs in the k8s cluster, when it is deleted and recreated it can be scheduled onto any one of the nodes, while the mysql hostPath is always mounted on a specific node. Is it a good idea to pin mysql to a node, or are there other options? Please share any ideas.
You have the choices below:
Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount is created, OR
Use local persistent volumes; they are supported on Kubernetes 1.14 and above (see the sketch after this list).
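A hedged sketch of the second option, a local PersistentVolume pinned to one node via nodeAffinity (the node name is a placeholder, and a StorageClass named local-storage with volumeBindingMode: WaitForFirstConsumer is assumed to exist):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  # ties the volume (and therefore the pod using it) to one specific node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>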
Why are you using a PVC and a PV? Actually, for hostPath you don't even need to create the PV object; the pod can use a hostPath volume directly.
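For illustration, a hostPath volume declared directly in the pod spec, with no PV or PVC objects at all, is a fragment like this (the DirectoryOrCreate type is an assumption):
      volumes:
      - name: mysql-persistent-storage
        hostPath:
          path: /mnt/data
          type: DirectoryOrCreate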
You should use a StatefulSet if you want a pod that is re-created to get the storage it was using the previous one (state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      # storageClassName: "standard"
      resources:
        requests:
          storage: 2Gi
This StatefulSet fails for me, but that is a mysql-specific thing; as a reference it should serve.