What command is needed to start restoring the MariaDB database when declaring the Kubernetes manifest - kubernetes

Dockerfile
FROM mariadb
COPY all-databases.sql /base/all-databases.sql
Manifest
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: pawwwel/mariadb01:v5
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: pvclocal
          mountPath: /var/lib/mysql
        command: ["mysql", "-uroot", "-ppassword", "<", "/base/all-databases.sql"]
  volumeClaimTemplates:
  - metadata:
      name: pvclocal
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 1Gi
Including this command causes an error in the pod:
command: ["mysql", "-uroot", "-ppassword", "<", "/base/all-databases.sql"]
Without it, an empty database is created.
How can I automate restoring the database from a backup in Kubernetes?
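A note on why the command fails and one possible way to automate the restore: the exec-form command: does not go through a shell, so "<" is passed to mysql as a literal argument, and the override also replaces the server process, so mariadbd never starts. A minimal sketch, assuming the stock entrypoint of the official mariadb base image (which executes *.sql files from /docker-entrypoint-initdb.d the first time an empty data directory is initialized):

FROM mariadb
# Executed automatically by the image's entrypoint on first initialization
# of an empty data directory; later restarts skip it.
COPY all-databases.sql /docker-entrypoint-initdb.d/all-databases.sql

With this, the command: line is dropped from the StatefulSet so the default entrypoint runs, and the dump is loaded only while the pvclocal volume is still empty.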

Related

Debezium server offset file storage claim or cleanup

So, for a non-Kafka deployment, we specify the offset storage file with debezium.source.offset.storage.file.filename as per the documentation. How is that storage reclaimed, or how does cleanup happen?
I am planning a k8s deployment with a mounted volume like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: debezium-gke
  labels:
    name: debezium
spec:
  serviceName: debezium
  selector:
    matchLabels:
      name: debezium
  template:
    spec:
      containers:
      - name: debezium
        image: debezium_server:1.10
        imagePullPolicy: Always
        volumeMounts:
        - name: debezium-config-volume
          mountPath: /debezium/conf
        - name: debezium-data-volume
          mountPath: /debezium/data
      volumes:
      - name: debezium-config-volume
        configMap:
          name: debezium
  volumeClaimTemplates:
  - metadata:
      name: debezium-data-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10G
But I am wondering whether the disk space will fill up if the storage is never reclaimed!
Thanks
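For context, a minimal sketch of how the offset file could be wired onto the mounted data volume through the ConfigMap the StatefulSet already references (the file name and property values here are illustrative, not taken from the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: debezium
data:
  application.properties: |
    # Illustrative values only; the file-based offset store keeps just the latest
    # offset per source partition, so the file is periodically rewritten rather
    # than appended to.
    debezium.source.offset.storage.file.filename=/debezium/data/offsets.dat
    debezium.source.offset.flush.interval.ms=60000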

Kubernetes stateful MongoDB: what is the right connection string, and how do I connect to MongoDB in a running StatefulSet instance that also has a Service attached?

Here is my YAML config (the PV, StatefulSet, and Service all get created fine, no issues there). I tried a bunch of solutions for the Kubernetes Mongo connection string, but none of them worked.
Kubernetes version (minikube):
1.20.1
Storage for config:
NFS (working fine, tested)
OS:
Linux Mint 20
YAML CONFIG:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 250Mi
I have found a few issues in your configuration file. In your Service manifest you use
selector:
  role: mongo
but in your StatefulSet pod template you are using
labels:
  app: mongo
One more thing: you should set clusterIP: None to use a headless Service, which is recommended for a StatefulSet if you want to access the db using a DNS name.
To all viewers, the working YAML is below.
The right connection string is: mongodb://mongo:27017/<dbname>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 250Mi
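A short note on what the headless Service buys you: besides the Service name itself, each replica gets a stable per-pod DNS name of the form <pod>.<service> (mongo-0 is the pod name Kubernetes derives from the StatefulSet name with one replica). A sketch of the strings to try from another pod in the same namespace, with <dbname> replaced by your database:

mongodb://mongo:27017/<dbname>          # via the headless Service, resolves to the pod IPs
mongodb://mongo-0.mongo:27017/<dbname>  # pinned to the first (and only) replica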

How to mount a secret to a Kubernetes StatefulSet

So, looking at the Kubernetes API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulsetspec-v1-apps), it appears that I can indeed have a volume, because a StatefulSet uses a PodSpec and the PodSpec does have a volumes field, so I should be able to list the secret and then mount it like in a Deployment or any other pod.
The problem is that Kubernetes seems to think that volumes are not actually in the PodSpec for a StatefulSet. Is this right? How do I mount a secret into my StatefulSet if this is true?
error: error validating "mysql-stateful-set.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: mysql
          mountPath: /run/secrets/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /run/secrets/mysql/root-pass
        volumes:
        - name: mysql
          secret:
            secretName: mysql
            items:
            - key: root-pass
              path: root-pass
              mode: 511
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi
The volumes field should come inside the template spec, not inside the container (as done in your template). Refer to this for the exact structure (https://godoc.org/k8s.io/api/apps/v1#StatefulSetSpec); go to PodTemplateSpec and you will find the volumes field.
The template below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: mysql
          mountPath: /run/secrets/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /run/secrets/mysql/root-pass
      volumes:
      - name: mysql
        secret:
          secretName: mysql
          items:
          - key: root-pass
            path: root-pass
            mode: 511
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi
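For completeness, this manifest assumes a Secret named mysql with a root-pass key already exists in the same namespace. A minimal sketch of such a Secret (the password value is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
stringData:
  # Placeholder value; MYSQL_ROOT_PASSWORD_FILE reads it from /run/secrets/mysql/root-pass
  root-pass: changeme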

How to define a PVC for a specific path on a single node in Kubernetes

I am running a local k8s cluster and defining a PV as hostPath for MySQL pods.
I am sharing all the configuration details below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The problem I am facing: since the MySQL pod runs in the k8s cluster, when it is deleted and recreated it can be scheduled onto any node, while the MySQL hostPath is always mounted on one specific node. Is it a good idea to pin MySQL to a fixed node, or are there other options? Please share any ideas.
You have the choices below:
Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount exists, OR
Use local persistent volumes; they are supported on Kubernetes 1.14 and above (see the sketch after this list).
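As a sketch of the local-PV option (the node name, path, and storage class here are illustrative), a local PersistentVolume is pinned to one node through required nodeAffinity, so the pod that claims it is always scheduled back onto that node:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-1   # illustrative node name

Pair it with a StorageClass that uses volumeBindingMode: WaitForFirstConsumer so the claim binds only once the pod is scheduled.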
Why are you using a PVC and a PV? Actually, for hostPath you don't even need to create the PV object; it just gets one.
You should use a StatefulSet if you want a re-created pod to get the storage the previous one was using (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      # storageClassName: "standard"
      resources:
        requests:
          storage: 2Gi
This StatefulSet fails for me, but that is a MySQL thing; as a reference it should serve.

Kubernetes postgres PVC data

I have a Postgres YAML deployment with a PV/PVC created. The DB holds a lot of data. When I delete the postgres StatefulSet and create a new one, the data is lost. What can I do so that the data is restored after creating the new StatefulSet?
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      hostname: postgres
      containers:
      - name: postgres
        image: postgres
        imagePullPolicy: Always
        restartPolicy: Always
        env:
        - name: "PGDATA"
          value: "/var/lib/postgresql/data"
        - name: "POSTGRES_INITDB_WALDIR"
          value: "/var/lib/postgresql/dblogs/logs"
        ...
        ports:
        - name: port5432
          containerPort: 5432
        volumeMounts:
        - name: test
          mountPath: /var/lib/postgresql
        - name: postgressecret
          mountPath: /etc/postgressecret
          readonly: true
      volumes:
      - name: postgressecret
        secret:
          secretName: postgressecret
      - name: test
        persistentVolumeClaim:
          claimName: test
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: thin-disk
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
Kubernetes does not remove PVCs after the deletion of their corresponding StatefulSets! From this ticket it can be deduced that auto-remove functionality is planned for version 1.23.
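For reference, the shape that feature later took (alpha in 1.23 behind the StatefulSetAutoDeletePVC feature gate, so treat this as a sketch and check the docs for your version) is a retention policy on the StatefulSet spec:

spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain   # keep the PVCs (and the data) when the StatefulSet is deleted
    whenScaled: Retain    # keep the PVCs when the StatefulSet is scaled down

In the meantime, because the PVCs created from volumeClaimTemplates survive deletion, recreating the StatefulSet with the same name and claim template should reattach the existing claims, so the data should still be there.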