Kubernetes postgres PVC data - postgresql

I have a Postgres deployment YAML with a PV/PVC created. The database contains a lot of data. When I delete the postgres StatefulSet and create a new one, the data is lost. What do I have to do so that the data is still there after creating a new StatefulSet?
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      hostname: postgres
      containers:
      - name: postgres
        image: postgres
        imagePullPolicy: Always
        restartPolicy: Always
        env:
        - name: "PGDATA"
          value: "/var/lib/postgresql/data"
        - name: "POSTGRES_INITDB_WALDIR"
          value: "/var/lib/postgresql/dblogs/logs"
        ...
        ports:
        - name: port5432
          containerPort: 5432
        volumeMounts:
        - name: test
          mountPath: /var/lib/postgresql
        - name: postgressecret
          mountPath: /etc/postgressecret
          readonly: true
      volumes:
      - name: postgressecret
        secret:
          secretName: postgressecret
      - name: test
        persistentVolumeClaim:
          claimName: test
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: thin-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi

Kubernetes does not remove PVCs after their corresponding StatefulSet is deleted! From this ticket it can be deduced that auto-removal functionality is planned for version 1.23.
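Once that feature is available (Kubernetes 1.23+, behind the StatefulSetAutoDeletePVC feature gate), the behaviour becomes configurable on the StatefulSet itself. A minimal sketch, assuming a recent enough cluster; the default Retain keeps the PVC, and therefore the data, across StatefulSet deletion and re-creation:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  # Keep the PVCs (and the data on them) when the StatefulSet is deleted or scaled down.
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  # ... rest of the spec unchanged

As long as the new StatefulSet reuses the same name and volumeClaimTemplate name, its pod binds to the existing claim (test-postgres-0 here) and the old data shows up again.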

Related

Debezium server offset file storage claim or cleanup

So, for a non-Kafka deployment, we specify the offset storage file with debezium.source.offset.storage.file.filename as per the documentation. How is that storage reclaimed, or how does cleanup happen?
I am planning to have a k8s deployment with a mounted volume like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: debezium-gke
  labels:
    name: debezium
spec:
  serviceName: debezium
  selector:
    matchLabels:
      name: debezium
  template:
    spec:
      containers:
      - name: debezium
        image: debezium_server:1.10
        imagePullPolicy: Always
        volumeMounts:
        - name: debezium-config-volume
          mountPath: /debezium/conf
        - name: debezium-data-volume
          mountPath: /debezium/data
      volumes:
      - name: debezium-config-volume
        configMap:
          name: debezium
  volumeClaimTemplates:
  - metadata:
      name: debezium-data-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10G
But I am wondering whether the disk space fills up if that storage is never reclaimed!
Thanks
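Not from the original question, but for context: with file-based offset storage, Debezium Server (via Kafka Connect's FileOffsetBackingStore) rewrites the offset file with the current offset snapshot on each flush rather than appending to it, so the file normally stays small and no separate cleanup is needed. A sketch of the corresponding ConfigMap entry, assuming the /debezium/data mount from the manifest above and a hypothetical offsets.dat file name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: debezium
data:
  application.properties: |
    # Keep connector offsets on the persistent volume so they survive pod restarts.
    debezium.source.offset.storage.file.filename=/debezium/data/offsets.dat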

What command is needed to start restoring the mariadb database when declaring the kubernetes manifest

Dockerfile
FROM mariadb
COPY all-databases.sql /base/all-databases.sql
Manifest
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: pawwwel/mariadb01:v5
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: pvclocal
          mountPath: /var/lib/mysql
        command: ["mysql", "-uroot", "-ppassword", "<", "/base/all-databases.sql"]
  volumeClaimTemplates:
  - metadata:
      name: pvclocal
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 1Gi
The presence of this command leads to an error in the pod:
command: ["mysql", "-uroot", "-ppassword", "<", "/base/all-databases.sql"]
Without it, an empty database is created.
How can I automate restoring the database from a backup in Kubernetes?
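Not part of the original question, but one common pattern: an exec-form command replaces the image's entrypoint and passes "<" as a literal argument, so mariadbd never starts and no shell ever performs the redirection. The official mariadb image instead executes any *.sql file placed in /docker-entrypoint-initdb.d/ the first time the (empty) data directory is initialized. A sketch of the Dockerfile change, reusing the file from the question:

FROM mariadb
# Scripts in /docker-entrypoint-initdb.d/ are run by the image's entrypoint
# on first initialization of /var/lib/mysql.
COPY all-databases.sql /docker-entrypoint-initdb.d/all-databases.sql

With that in place, drop the command: line from the manifest; on a fresh PVC the entrypoint starts the server, loads the dump using MYSQL_ROOT_PASSWORD from the environment, and then keeps serving. The restore only runs while the data directory is empty.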

How to configure pv and pvc for single pod with multiple containers in kubernetes

I need to create a single pod with multiple containers for MySQL, MongoDB, and MSSQL. My question is: do I need to create a persistent volume and persistent volume claim for each container and specify each volume in the pod configuration, or is a single PV & PVC enough for all the containers in a single pod, like the configs below?
Could you verify whether the configuration below is enough?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: mypod-pvc
      containers:
      - name: mysql
        image: mysql/mysql-server:latest
        ports:
        - containerPort: 3306
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: task-pv-storage
      - name: mongodb
        image: openshift/mongodb-24-centos7
        ports:
        - containerPort: 27017
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/lib/mongodb"
          name: task-pv-storage
      - name: mssql
        image: mcr.microsoft.com/mssql/server
        ports:
        - containerPort: 1433
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/opt/mssql"
          name: task-pv-storage
      imagePullSecrets:
      - name: devplat
You should not be running multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well, each with its own storage (see the sketch below).
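Not from the original answer, but as a rough illustration of "one StatefulSet per database, each with its own storage", using the MongoDB image and mount path from the question (image-specific environment variables omitted; names and sizes are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: openshift/mongodb-24-centos7
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /var/lib/mongodb
  volumeClaimTemplates:        # each replica gets its own dedicated PVC
  - metadata:
      name: mongodb-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi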

Error: PostgreSQL Kubernetes data directory "/var/lib/postgresql/data" has wrong ownership windows

I am new to Kubernetes. While going through the tutorials, I encountered an error using the Postgres database with a persistent volume claim. I am pretty sure all the permissions are granted to the user, but the error still suggests that the folder has wrong ownership.
Here are my configuration files
This is the persistent volume claim file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Here is my postgres deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: postgres
        image: postgres
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
          subPath: postgres
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: "trust"
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: pgpassword
              key: PGPASSWORD
Here is the error message
2020-04-12 01:57:11.986 UTC [82] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2020-04-12 01:57:11.986 UTC [82] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
Any help is appreciated.
This works for me; check the subPath in the volume mount:
apiVersion: apps/v1
kind: StatefulSet
metadata:
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: postgres
    spec:
      containers:
      - env:
        - name: POSTGRES_USER
          value: root
        - name: POSTGRES_PASSWORD
          value: <Password>
        - name: POSTGRES_DB
          value: <DB name>
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:9.5
        imagePullPolicy: IfNotPresent
        name: postgres
        ports:
        - containerPort: 5432
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
          subPath: pgdata
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: postgres-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
      volumeMode: Filesystem
    status:
      phase: Pending
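Translated back to the Deployment from the question, the key pieces are a PGDATA that points below the mount root plus the subPath, so initdb gets a fresh, postgres-owned subdirectory instead of the volume root. A sketch combining the answer above with the names from the question:

      containers:
      - name: postgres
        image: postgres
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata   # initialize below the mount point
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
          subPath: postgres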

How to mount a secret to kubernetes StatefulSet

So, looking at the Kubernetes API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulsetspec-v1-apps), it appears that I can indeed have a volume, because a StatefulSet uses a PodSpec and the PodSpec does have a volumes field, so I should be able to list the secret and then mount it just like in a Deployment, or any other pod.
The problem is that Kubernetes seems to think that volumes are not actually in the PodSpec for a StatefulSet. Is this right? If so, how do I mount a secret into my StatefulSet?
error: error validating "mysql-stateful-set.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: mysql
          mountPath: /run/secrets/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /run/secrets/mysql/root-pass
        volumes:   # nested under the container, which is what triggers the validation error
        - name: mysql
          secret:
            secretName: mysql
            items:
            - key: root-pass
              path: root-pass
              mode: 511
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi
The volumes field should go inside the template spec, not inside the container (as is done in your template). Refer to this for the exact structure (https://godoc.org/k8s.io/api/apps/v1#StatefulSetSpec): go to PodTemplateSpec and you will find the volumes field.
The template below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: mysql
          mountPath: /run/secrets/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /run/secrets/mysql/root-pass
      volumes:   # now a sibling of containers, i.e. part of the pod spec
      - name: mysql
        secret:
          secretName: mysql
          items:
          - key: root-pass
            path: root-pass
            mode: 511
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi
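Not from the original answer, but for completeness: the manifest above expects a Secret named mysql with a root-pass key to exist already. A minimal sketch (the password value is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
stringData:
  root-pass: "change-me"   # placeholder; the container reads it via MYSQL_ROOT_PASSWORD_FILE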