How to mount PostgreSQL data directory in Kubernetes? - postgresql

I'm using minikube to run Kubernetes locally. My local cluster has two pods: one runs PostgreSQL and the other runs my own app. I've created a PersistentVolume and a PersistentVolumeClaim in order to make the PostgreSQL pod stateful:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/psql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Here is the PostgreSQL deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - name: postgres
          imagePullPolicy: Never
          image: postgres:9.6
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres-persistent-storage
      volumes:
        - name: postgres-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-pv-claim
The problem is that the PostgreSQL service doesn't start, and this error appears when its pod runs:
Error: /var/lib/postgresql/9.6/main is not accessible; please fix the directory permissions (/var/lib/postgresql/9.6/ should be world readable)
No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
I've checked inside the PostgreSQL pod and found that /var/lib/postgresql is empty, just like /data/psql on the minikube host.
Can anyone help?

Change:
volumeMounts:
  - mountPath: /var/lib/postgresql
to
volumeMounts:
  - mountPath: /var/lib/postgresql/data
With the wrong mountPath, the empty volume was mounted over /var/lib/postgresql and hid everything the image expects to find there, which is why the pod saw an empty directory and PostgreSQL could not locate its cluster.
I attach an image with the data I see from inside the pod (on the left) and from inside the minikube VM (on the right, the small VirtualBox shell).

Related

mount azure disk to azure Kubernetes for PostgreSQL pod

I want to mount an Azure Disk into an Azure Kubernetes (AKS) PostgreSQL pod. My yaml files:
postgres-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 80Gi
  storageClassName: manual
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  azureDisk:
    kind: Managed
    diskName: es-us-dev-core-test
    diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: dev-test
  POSTGRES_USER: admintpost
  POSTGRES_PASSWORD: ada3dassasa
StatefulSet.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          envFrom:
            - configMapRef:
                name: postgres-config
          ports:
            - containerPort: 5432
              name: postgresdb
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Instructions for creating the disk: https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
I get an error that it cannot connect to the disk. Could you please tell me how to add the Azure Disk to the pod? Thanks.
Create Azure Disk in the correct resource group
Looking at the file postgres-storage.yaml:
in spec.azureDisk.diskURI I see that you have created the disk in the resource group kubernetes_resources_group. However, you should create the disk inside a resource group whose name is something like this:
MC_kubernetes_resources_group_<your cluster name>_<region of the cluster>
Make sure that you create the disk in the same availability zone as your cluster.
Set caching mode to None
In the file postgres-storage.yaml:
set spec.azureDisk.cachingMode to None
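For illustration, a sketch of how those two changes could look in postgres-storage.yaml; the subscription ID, cluster name, and region in the diskURI are placeholders you have to fill in yourself:
azureDisk:
  kind: Managed
  cachingMode: None
  diskName: es-us-dev-core-test
  # the disk lives in the cluster's node resource group (MC_...), in the same region/zone as the nodes
  diskURI: /subscriptions/<subscription-id>/resourceGroups/MC_kubernetes_resources_group_<cluster-name>_<region>/providers/Microsoft.Compute/disks/es-us-dev-core-test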
Fix the definition of StatefulSet.yml
If you're using Azure Disks then in the file StatefulSet.yml:
in spec.template.spec you should replace the following:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
with this:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
    subPath: pgdata
EDIT: fixed some mistakes in the last part.

How to have data persist in GKE kubernetes StatefulSet with postgres?

So I'm just trying to get a web app running on GKE experimentally to familiarize myself with Kubernetes and GKE.
I have a StatefulSet (Postgres) with a persistent volume / persistent volume claim which is mounted to the Postgres pod as expected. The problem I'm having is getting the Postgres data to persist. If I mount the PV at /var/lib/postgresql the data gets overridden with each pod update. If I mount at /var/lib/postgresql/data I get the warning:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
Using Docker alone, having the volume mount point at /var/lib/postgresql/data works as expected and the data persists, but I don't know what to do now in GKE. How does one set this up properly?
Setup file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sm-pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "postgis-db"
  namespace: "default"
  labels:
    app: "postgis-db"
spec:
  serviceName: "postgis-db"
  replicas: 1
  selector:
    matchLabels:
      app: "postgis-db"
  template:
    metadata:
      labels:
        app: "postgis-db"
    spec:
      terminationGracePeriodSeconds: 25
      containers:
        - name: "postgis"
          image: "mdillon/postgis"
          ports:
            - containerPort: 5432
              name: postgis-port
          volumeMounts:
            - name: sm-pd-volume
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: sm-pd-volume
          persistentVolumeClaim:
            claimName: sm-pd-volume-claim
You are getting this error because the root of the mounted volume already contains a lost+found directory, and Postgres refuses to use a non-empty mount point directly as its data directory. It is not recommended to do so.
You have to point Postgres at a subdirectory of the mount to resolve this, which you can do with a subPath in the StatefulSet manifest:
volumeMounts:
  - name: sm-pd-volume
    mountPath: /var/lib/postgresql/data
    subPath: data
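As a side note (not part of the original answer, just a sketch of an equivalent approach that also appears later in this thread): instead of subPath you can keep the mount path and point PGDATA at a subdirectory below the mount point:
containers:
  - name: "postgis"
    image: "mdillon/postgis"
    env:
      - name: PGDATA
        value: /var/lib/postgresql/data/pgdata   # subdirectory under the mount point
    volumeMounts:
      - name: sm-pd-volume
        mountPath: /var/lib/postgresql/data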

Persistent volume to windows not working on kubernetes

I have mapped a Windows folder onto my Linux machine with
mount -t cifs //AUTOCHECK/OneStopShopWebAPI -o user=someuser,password=Aa1234 /xml_shared
and the following command
df -hk
gives me
//AUTOCHECK/OneStopShopWebAPI 83372028 58363852 25008176 71% /xml_shared
After that I created a yaml file with:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-jenkins-slave
spec:
  storageClassName: jenkins-slave-data
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-jenkins-slave
  labels:
    type: jenkins-slave-data2
spec:
  storageClassName: jenkins-slave-data
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.100.109
    path: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
This does not seem to work when I create a new pod:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
  labels:
    label: jenkins-slave
spec:
  containers:
    - name: node
      image: node
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/jenkins_slave_shared
          name: jenkins-slave-vol
  volumes:
    - name: jenkins-slave-vol
      persistentVolumeClaim:
        claimName: pvc-nfs-jenkins-slave
Do I need to change the nfs section? What is wrong with my logic?
Mounting the CIFS share on the Linux machine is correct, but you need a different approach to mount a CIFS volume under Kubernetes. Let me explain:
There are differences between NFS and CIFS: the nfs volume type expects an NFS export, so pointing it at a CIFS share path will not work.
This site explains the whole process step by step: Github CIFS Kubernetes.
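For orientation only, a rough sketch of what that guide's approach typically looks like once its fstab/cifs flexVolume driver is installed on every node; the secret name is a placeholder and the exact field names should be taken from the linked repository:
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret                 # placeholder name
  namespace: default
type: fstab/cifs
data:
  username: c29tZXVzZXI=            # base64 of "someuser"
  password: QWExMjM0                # base64 of "Aa1234"
---
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
    - name: node
      image: node
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/jenkins_slave_shared
          name: jenkins-slave-vol
  volumes:
    - name: jenkins-slave-vol
      flexVolume:
        driver: "fstab/cifs"
        fsType: "cifs"
        secretRef:
          name: cifs-secret
        options:
          networkPath: "//AUTOCHECK/OneStopShopWebAPI/jenkins_slave_shared"
          mountOptions: "dir_mode=0755,file_mode=0644,noperm"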

PV file not saved on host

Hi all, quick question on host paths for persistent volumes.
I created a PV and PVC here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
and I ran a sample pod
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
I exec'd into the pod and created a file:
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# ls
tst.txt
However, when I go back to my host and try to ls the file, it's not there. Any idea why? My PV and PVC look correct, as I can see the claim has been bound.
ubuntu@ip-172-31-24-21:/home$ cd /mnt/data
ubuntu@ip-172-31-24-21:/mnt/data$ ls -lrt
total 0
A PersistentVolume (PV) is a Kubernetes resource with its own lifecycle, independent of the pod (see the PV documentation). A PVC consuming a PV usually maps to storage that lives somewhere else, for example Azure Files, EBS, or an NFS server. My point here is that there is no reason the PV's data has to exist on the node.
If you do want the data to be stored on the node, use the hostPath option for PVs (check this link), though this is not good production practice.
First of all, you don't need to create a PV if you are creating a PVC: with the right storageClass, the PVC creates the PV for you (see the short PVC sketch at the end of this answer).
Second, hostPath is a delicate volume type in the Kubernetes world. It's the one volume that doesn't need a PV or PVC at all to be mounted in a Pod, so you could have created neither PV nor PVC and a plain hostPath volume would work just fine.
To make a test, delete your PV and PVC, and create your Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-volume
  labels:
    app: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      securityContext:
        privileged: true
      ports:
        - containerPort: 80
          name: nginx-http
      volumeMounts:
        - name: nginx
          mountPath: /root/nginx-volume # path in the pod
  volumes:
    - name: nginx
      hostPath:
        path: /var/test # path in the host machine
I know this is a confusing concept, but that's how it is.
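To illustrate the dynamic-provisioning point from the first paragraph, here is a minimal sketch, assuming the cluster has a provisioner-backed storageClass named standard; the claim name is made up:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-dynamic       # illustrative name
spec:
  storageClassName: standard        # the dynamic provisioner creates the PV for you
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi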

How to solve permission trouble when running Postgresql from minikube?

I am trying to run a Postgresql database using minikube with a persistent volume claim. These are the yaml specifications:
minikube-persistent-volume.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/Users/jonathan/data"
postgres-persistent-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-postgres
spec:
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 2Gi
postgres-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.5
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-disk
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_USER
              value: keycloak
            - name: POSTGRES_DATABASE
              value: keycloak
            - name: POSTGRES_PASSWORD
              value: key
            - name: POSTGRES_ROOT_PASSWORD
              value: masterkey
      volumes:
        - name: postgres-disk
          persistentVolumeClaim:
            claimName: pv-postgres
when I start this I get the following in the logs from the deployment:
[...]
fixing permissions on existing directory
/var/lib/postgresql/data/pgdata ... ok
initdb: could not create directory "/var/lib/postgresql/data/pgdata/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
Why do I get this Permission denied error and what can I do about it?
Maybe you're having a write-permission issue with VirtualBox mounting those host folders.
Instead, use /data/postgres as the hostPath and things will work, as sketched below.
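A minimal sketch of the adjusted PersistentVolume; only the hostPath changes, everything else mirrors your original spec:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/postgres"   # lives inside the minikube VM and is persisted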
Minikube automatically persists the following directories so they will be preserved even if your VM is rebooted/recreated:
/data
/var/lib/localkube
/var/lib/docker
Read these sections for more details:
https://github.com/kubernetes/minikube#persistent-volumes
https://github.com/kubernetes/minikube#mounted-host-folders