Debezium server offset file storage reclaim or cleanup - debezium

So, for a non-Kafka deployment, we specify the offset storage file with debezium.source.offset.storage.file.filename as per the documentation. How is that storage reclaimed, or how does cleanup happen?
I am planning to have k8s deployment with mounted volume as,
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: debezium-gke
  labels:
    name: debezium
spec:
  serviceName: debezium
  selector:
    matchLabels:
      name: debezium
  template:
    metadata:
      labels:
        name: debezium
    spec:
      containers:
        - name: debezium
          image: debezium_server:1.10
          imagePullPolicy: Always
          volumeMounts:
            - name: debezium-config-volume
              mountPath: /debezium/conf
            - name: debezium-data-volume
              mountPath: /debezium/data
      volumes:
        - name: debezium-config-volume
          configMap:
            name: debezium
  volumeClaimTemplates:
    - metadata:
        name: debezium-data-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10G
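For context, the offset file would live on the mounted data volume, wired up through the server configuration delivered by the debezium ConfigMap above. A minimal sketch (the offsets.dat path, the flush interval, and the omitted connector/sink entries are illustrative assumptions, not a complete working config):
apiVersion: v1
kind: ConfigMap
metadata:
  name: debezium
data:
  application.properties: |
    # keep the offset file on the persistent data volume
    debezium.source.offset.storage.file.filename=/debezium/data/offsets.dat
    debezium.source.offset.flush.interval.ms=60000
    # source connector and sink settings omitted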
But I am wondering whether the disk space will fill up if the storage is never reclaimed!
Thanks

Related

What command is needed to start restoring the mariadb database when declaring the Kubernetes manifest

Dockerfile
FROM mariadb
COPY all-databases.sql /base/all-databases.sql
Manifest
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: pawwwel/mariadb01:v5
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: pvclocal
              mountPath: /var/lib/mysql
          command: ["mysql", "-uroot", "-ppassword", "<", "/base/all-databases.sql"]
  volumeClaimTemplates:
    - metadata:
        name: pvclocal
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: local-storage
        resources:
          requests:
            storage: 1Gi
The presence of this command leads to an error in the pod:
command: ["mysql", "-uroot", "-ppassword", "<", "/base/all-databases.sql"]
Without it, an empty database is created.
Tell me how to automate restoring the database from a backup in Kubernetes.
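For what it's worth, the exec-form command has no shell, so the "<" is passed to mysql as a literal argument instead of performing redirection, and overriding command also prevents the image's entrypoint from ever starting the server. A common pattern with the official mariadb image is to place the dump in /docker-entrypoint-initdb.d, which the entrypoint imports on the first initialization of the data directory. A rough sketch, assuming the Dockerfile from the question:
# Dockerfile: copy the dump where the mariadb entrypoint auto-imports it on first init
FROM mariadb
COPY all-databases.sql /docker-entrypoint-initdb.d/all-databases.sql
With that in place, the command: override can simply be removed from the StatefulSet; note the import only runs when /var/lib/mysql is empty, i.e. on the first start against a fresh volume.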

postgres on k8s with glusterfs as storage

I deploy a Postgres database on k8s with GlusterFS as the volume, but every time I restart my pod all of the data is lost. Why is that?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: gitlab
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres
          env:
            - name: POSTGRES_USERNAME
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_db
      volumes:
        - name: postgres
          glusterfs:
            endpoints: glusterfs-cluster
            path: gv
Define PV and PVC objects; see below for reference.
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Then bind the PVC to the pod as shown below
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          ...
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-pv-claim
      volumes:
        - name: postgres-pv-claim
          persistentVolumeClaim:
            claimName: postgres-pv-claim
As per the Kubernetes documentation https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs, unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. I suggest raising an issue at https://github.com/kubernetes/kubernetes/issues/new/choose
If you want to install GitLab with a PostgreSQL backend, it is easier to use the Helm charts below:
https://docs.gitlab.com/charts/
https://artifacthub.io/packages/helm/bitnami/postgresql
https://artifacthub.io/packages/helm/bitnami/postgresql-ha
You can do this:
1. For stateful services such as databases, use a StatefulSet controller for the deployment.
2. The storage should be a shared type rather than a local volume, because the Pod may be scheduled onto a different node when it is recreated (see the sketch below).
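A minimal sketch of what point 2 means in practice: a StatefulSet volumeClaimTemplates entry backed by a shared/network StorageClass rather than a local volume (the claim name and class name here are placeholders):
volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: <shared-storage-class>   # placeholder: a network/CSI-backed class
      resources:
        requests:
          storage: 10Gi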

mount azure disk to azure Kubernetes for PostgreSQL pod

I want to mount an Azure Disk into Azure Kubernetes for a PostgreSQL pod. My yml files:
postgres-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 80Gi
  storageClassName: manual
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  azureDisk:
    kind: Managed
    diskName: es-us-dev-core-test
    diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: dev-test
  POSTGRES_USER: admintpost
  POSTGRES_PASSWORD: ada3dassasa
StatefulSet.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          envFrom:
            - configMapRef:
                name: postgres-config
          ports:
            - containerPort: 5432
              name: postgresdb
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Instructions for creating the disk: https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
I get an error that it cannot connect to the disk. Could you please tell me how to add the Azure Disk to the pod? Thanks.
Create Azure Disk in the correct resource group
Looking at the file postgres-storage.yaml:
in spec.azureDisk.diskURI I see that you have created the disk in the resource group kubernetes_resources_group. However, you should create the disk inside a resource group whose name is something like this:
MC_kubernetes_resources_group_<your cluster name>_<region of the cluster>
Make sure that you create the disk in the same availability zone as your cluster.
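In other words, the diskURI in postgres-storage.yaml would end up looking roughly like this (everything in angle brackets is a placeholder):
azureDisk:
  kind: Managed
  diskName: <disk name>
  diskURI: /subscriptions/<subscription id>/resourceGroups/MC_kubernetes_resources_group_<cluster name>_<region>/providers/Microsoft.Compute/disks/<disk name>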
Set caching mode to None
In the file postgres-storage.yaml:
set spec.azureDisk.cachingMode to None
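That is, in the PV's azureDisk block (only cachingMode is new relative to the question's manifest):
azureDisk:
  kind: Managed
  cachingMode: None
  # diskName and diskURI as discussed above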
Fix the definition of StatefulSet.yml
If you're using Azure Disks then in the file StatefulSet.yml:
in spec.template.spec you should replace the following:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
with this:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
    subPath: pgdata
EDIT: fixed some mistakes in the last part.

How to define PVC for specific path in a single node in kubernetes

I am running a local k8s cluster and defining the PV as a hostPath for the mysql pods.
All the configuration details are shared below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The problem I am having is that when the mysql pod running in the k8s cluster is deleted and recreated, it can be scheduled onto any node, while the mysql hostPath is always mounted on one specific node. Is it a good idea to pin mysql to a node, or are there other options? Please share any ideas.
You have the choices below:
Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount exists (see the sketch below), OR
Use local persistent volumes; they are supported on Kubernetes 1.14 and above.
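A minimal sketch of the first option, added under the Deployment's pod template (the node name is a placeholder for the node that actually holds /mnt/data):
template:
  spec:
    nodeSelector:
      kubernetes.io/hostname: <node-name>   # placeholder: the node where the hostPath exists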
Why are you using a PVC and a PV? For hostPath you don't even need to create the PV object; the claim just gets one.
You should use a StatefulSet if you want a re-created pod to get the storage the previous one was using (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
This StatefulSet fails, but that is a MySQL-specific issue; it should still serve as a reference.

Kubernetes postgres PVC data

I have a postgres YAML deployment with a PV/PVC created. The DB holds a lot of data. When I delete the postgres StatefulSet and create a new one, the data is lost. What can I do so the data is restored after creating a new StatefulSet?
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      hostname: postgres
      containers:
        - name: postgres
          image: postgres
          imagePullPolicy: Always
          restartPolicy: Always
          env:
            - name: "PGDATA"
              value: "/var/lib/postgresql/data"
            - name: "POSTGRES_INITDB_WALDIR"
              value: "/var/lib/postgresql/dblogs/logs"
          ...
          ports:
            - name: port5432
              containerPort: 5432
          volumeMounts:
            - name: test
              mountPath: /var/lib/postgresql
            - name: postgressecret
              mountPath: /etc/postgressecret
              readOnly: true
      volumes:
        - name: postgressecret
          secret:
            secretName: postgressecret
        - name: test
          persistentVolumeClaim:
            claimName: test
  volumeClaimTemplates:
    - metadata:
        name: test
        annotations:
          volume.beta.kubernetes.io/storage-class: thin-disk
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi
Kubernetes does not remove PVCs after the deletion of their corresponding StatefulSet! From this ticket it can be deduced that auto-remove functionality is planned for version 1.23.
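For reference, the 1.23 feature referred to appears to be the StatefulSet persistentVolumeClaimRetentionPolicy field (alpha, behind the StatefulSetAutoDeletePVC feature gate); a sketch of how it is expressed, assuming that is the functionality the ticket covers:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # remove the PVCs when the StatefulSet is deleted
    whenScaled: Retain    # keep the PVCs when replicas are scaled down
  ...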