Kubernetes StatefulSet - does not restore data on pod restart

Kubernetes version: 1.8
1. Created a StatefulSet for a Postgres database with a PVC
2. Added some tables to the database
3. Restarted the pod by scaling the StatefulSet to 0 and then back to 1 (commands sketched after these steps)
4. The tables created in step 2 are no longer available
Tried another scenario with the same steps on a docker-for-desktop cluster, k8s version 1.10:
1. Created a StatefulSet for a Postgres database with a PVC
2. Added some tables to the database
3. Restarted Docker for Desktop
4. The tables created in step 2 are no longer available
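For reference, the restart in step 3 of the first scenario was done roughly like this (assuming the StatefulSet is named postgres, as in the manifest below):

kubectl scale statefulset postgres --replicas=0
# wait for the pod to terminate, then scale back up
kubectl scale statefulset postgres --replicas=1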
k8s manifest
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: kong
  POSTGRES_USER: kong
  POSTGRES_PASSWORD: kong
  PGDATA: /var/lib/postgresql/data/pgdata
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/postgresql/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
    - name: pgql
      port: 5432
      targetPort: 5432
      protocol: TCP
  selector:
    app: postgres
---
apiVersion: apps/v1beta2 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pvc
---

If you have multiple nodes, the issue you see is entirely expected: a hostPath volume lives on whichever node the pod happens to be scheduled to, so a rescheduled pod can land on a node without the data. If you want to use hostPath as a PersistentVolume in a multi-node cluster, you must place your /mnt/postgresql/data folder on a shared filesystem such as GlusterFS or Ceph.
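For illustration, a PersistentVolume backed by GlusterFS instead of hostPath could look roughly like this (a sketch; the Endpoints name glusterfs-cluster and the Gluster volume name gv0 are assumed placeholders, not from the question):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: glusterfs-cluster   # Endpoints object listing the Gluster servers
    path: gv0                      # name of the GlusterFS volume
    readOnly: false

The PVC from the question would bind to it unchanged, since the claim matches on storageClassName, capacity, and access modes rather than on the volume type.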

Related

postgres on k8s with glusterfs as storage

I deployed a postgres database on k8s with glusterfs as the volume, but every time I restart my pod all of the data is lost. Why is that?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: gitlab
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres
          env:
            - name: POSTGRES_USERNAME
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_db
      volumes:
        - name: postgres
          glusterfs:
            endpoints: glusterfs-cluster
            path: gv
Define PV and PVC objects; see below for reference.
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Then bind the PVC to the pod as shown below
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          ...
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-pv-claim
      volumes:
        - name: postgres-pv-claim
          persistentVolumeClaim:
            claimName: postgres-pv-claim
As per the Kubernetes documentation (https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs): unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. So data loss on restart is not expected here; I suggest raising an issue at https://github.com/kubernetes/kubernetes/issues/new/choose
If you want to install GitLab with a PostgreSQL backend, it is easier to use the Helm charts below.
https://docs.gitlab.com/charts/
https://artifacthub.io/packages/helm/bitnami/postgresql
https://artifacthub.io/packages/helm/bitnami/postgresql-ha
You can do this:
1. For stateful services such as databases, use a StatefulSet controller rather than a Deployment (see the sketch after this list);
2. Use shared storage rather than local volumes, because when the Pod is recreated it may be scheduled onto a different node.
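A minimal sketch of that StatefulSet shape, assuming a StorageClass named glusterfs-storage exists for dynamic provisioning (the class name is a placeholder for whatever shared-storage class the cluster offers):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: gitlab
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each replica gets its own PVC, re-attached on restart
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: glusterfs-storage   # assumed placeholder
        resources:
          requests:
            storage: 10Gi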

Mount Azure disk to Azure Kubernetes for PostgreSQL pod

I want to mount an Azure disk into an Azure Kubernetes (AKS) PostgreSQL pod. My YAML files:
postgres-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 80Gi
  storageClassName: manual
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  azureDisk:
    kind: Managed
    diskName: es-us-dev-core-test
    diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: dev-test
  POSTGRES_USER: admintpost
  POSTGRES_PASSWORD: ada3dassasa
StatefulSet.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          envFrom:
            - configMapRef:
                name: postgres-config
          ports:
            - containerPort: 5432
              name: postgresdb
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Instructions for creating the disk: https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
I get an error that it cannot connect to the disk. Could you please tell me how to add an Azure Disk to the pod? Thanks.
Create Azure Disk in the correct resource group
Looking at the file postgres-storage.yaml: in spec.azureDisk.diskURI I see that you have created the disk in the resource group kubernetes_resources_group. However, you should create the disk inside the cluster's node resource group, whose name looks like this:
MC_kubernetes_resources_group_<your cluster name>_<region of the cluster>
Make sure that you create the disk in the same availability zone as your cluster.
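For illustration, the disk could be created in that resource group with the Azure CLI roughly like this (a sketch; the cluster name my-aks-cluster and the region in the group name are assumed placeholders):

# Look up the node resource group (the MC_... group) of the cluster
az aks show --resource-group kubernetes_resources_group \
  --name my-aks-cluster --query nodeResourceGroup -o tsv

# Create the managed disk inside that group; the printed id is
# what goes into spec.azureDisk.diskURI
az disk create --resource-group MC_kubernetes_resources_group_my-aks-cluster_eastus \
  --name es-us-dev-core-test --size-gb 80 --query id -o tsv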
Set caching mode to None
In the file postgres-storage.yaml:
set spec.azureDisk.cachingMode to None
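That is, the azureDisk block of the PersistentVolume would gain one field:

azureDisk:
  kind: Managed
  diskName: es-us-dev-core-test
  diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test  # update to the disk in the MC_... group
  cachingMode: None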
Fix the definition of StatefulSet.yml
If you're using Azure Disks, then in the file StatefulSet.yml, in spec.template.spec, replace the following:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
with this:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
    subPath: pgdata
The reason is that the root of a freshly formatted disk contains a lost+found directory, and initdb refuses to initialize a non-empty data directory; the subPath gives Postgres an empty subdirectory to work in.
EDIT: fixed some mistakes in the last part.

Kubernetes: Mongo container not creating

I am unable to create a mongo container using a deployment:
Persistent volume and persistent volume claims are working fine.
> kubectl logs -f mongo-depl-dc764fb6d-qqdxh
Error from server (BadRequest): container "mongo" in pod "mongo-depl-dc764fb6d-qqdxh" is waiting to start: CreateContainerError
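For a CreateContainerError like this, the pod's events usually carry the underlying reason; they can be inspected with:

kubectl describe pod mongo-depl-dc764fb6d-qqdxh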
Persistent Volume :
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "E:\\Linux\\mongo"
Persistent Volume Claim :
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0.5Gi
Mongodb Deployment :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-pvc
      containers:
        - name: mongo
          image: mongo:3.6.5-jessie
          ports:
            - name: mongo
              containerPort: 27017
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-srv
spec:
  type: NodePort
  selector:
    app: mongo
  ports:
    - name: mongo
      protocol: TCP
      port: 27017
      targetPort: 27017
I would really appreciate the help. I am running Kubernetes in a dev environment on a Windows machine.

Persistent Storage in Kubernetes does not persist data

In my Kubernetes cluster, my DB container's persistent storage (dynamically provisioned on DigitalOcean) does not persist the data if the pod is deleted.
I have changed the reclaim policy of the storage from Delete to Retain, but this makes no difference.
This is a copy of DB YAML file:
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: hm-namespace01
  labels:
    app: app1
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: app1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hm-pv-claim
  namespace: hm-namespace01
  labels:
    app: app1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: hm-namespace01
  labels:
    app: app1
spec:
  selector:
    matchLabels:
      app: app1
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app1
        tier: backend
    spec:
      containers:
        - name: app1
          image: postgres:11
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgredb
              mountPath: /var/lib/postgresql
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: hm-pv-claim
You must match your mountPath with the Postgres PGDATA environment variable.
The default value of PGDATA is /var/lib/postgresql/data (not /var/lib/postgresql).
You need to either adjust your mountPath or set the PGDATA env to match it.
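For example, one common pattern (a sketch, not the only option; the pgdata subdirectory name is an assumption) mounts the claim at the default data path and points PGDATA at a subdirectory, so that any lost+found directory at the mount root does not trip up initdb:

containers:
  - name: app1
    image: postgres:11
    env:
      - name: PGDATA
        value: /var/lib/postgresql/data/pgdata   # data lives below the mount
    volumeMounts:
      - name: postgredb
        mountPath: /var/lib/postgresql/data      # matches the PGDATA parent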

How to define PVC for specific path in a single node in kubernetes

I am running a local k8s cluster and defining a PV as hostPath for MySQL pods.
Sharing all the configuration details below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The problem: when the MySQL pod is deleted and recreated, the scheduler may deploy it to any node, while the hostPath data stays on the node it was originally written to. Is it a good idea to pin MySQL to a fixed node, or are there other options? Please share any ideas.
You have the choices below:
1. Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount exists, OR
2. Use local persistent volumes, supported on Kubernetes 1.14 and above (see the sketch after this list).
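A minimal sketch of a local PersistentVolume pinned to one node (the node name worker-1 and the class name local-storage are assumed placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # delay binding until the pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:          # required for local volumes: pins the PV to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1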
Why are you using a PVC and a PV? For hostPath you don't even need to create the PV object; the pod can use the path directly.
You should use a StatefulSet if you want a re-created pod to get back the storage (state) the previous one was using.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
This StatefulSet fails, but for a MySQL-specific reason; as a reference for the pattern it should serve.