Deleting Kubernetes cluster should not delete persistent disk - kubernetes

I have created a Kubernetes cluster using Terraform with a persistent disk (pd-ssd). I have also created a storage class and a persistent volume claim as well.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
  labels:
    app: elasticsearch
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30G
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    name: elasticsearch
spec:
  type: NodePort
  ports:
    - name: elasticsearch-port1
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: elasticsearch-port2
      port: 9300
      protocol: TCP
      targetPort: 9300
  selector:
    app: elasticsearch
    tier: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch-application
  labels:
    app: elasticsearch
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: elasticsearch
        tier: elasticsearch
    spec:
      hostname: elasticsearch
      containers:
        - image: gcr.io/xxxxxxxxxxxx/elasticsearch:7.3.1
          name: elasticsearch
          ports:
            - containerPort: 9200
              name: elasticport1
            - containerPort: 9300
              name: elasticport2
          env:
            - name: discovery.type
              value: single-node
          volumeMounts:
            - mountPath: /app/elasticsearch/gcp/
              name: elasticsearch-pv-volume
      volumes:
        - name: elasticsearch-pv-volume
          persistentVolumeClaim:
            claimName: pvc-claim
The pvc-claim and the storage class bind perfectly, and I have set the reclaim policy to Retain, so the persistent disk should not be deleted when the Kubernetes cluster is deleted. But the disk and its data are deleted along with the cluster.
My scenario is that I need a persistent disk whose data is not deleted when the cluster is deleted; the disk should remain as it is. Is there any feasible solution to my scenario?

I have created a Kubernetes cluster using kOps in AWS. When I deleted my cluster I faced the same issue as you: the EBS volume that I used for my database got deleted. Luckily, I had a snapshot to create a volume out of it.
Solution: Remove the tags of the volume from the AWS UI, and then delete your Kubernetes cluster. Then the volume will not get removed. I hope this is possible in GCP too.
For more details, have a look at this video and this post
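On GKE/GCE an equivalent precaution, as far as I know, is to make sure the bound PersistentVolume itself carries the Retain policy before you tear the cluster down, since a StorageClass reclaimPolicy only applies to volumes provisioned after it was set. A minimal sketch with kubectl; the PV name is a placeholder you would read from kubectl get pv:
# List PVs and check the RECLAIM POLICY column
kubectl get pv

# Patch the PV that is bound to pvc-claim so the underlying pd-ssd survives
# (the PV name below is a placeholder; use the one reported by kubectl get pv)
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'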

Related

kubernetes persistent volume overrides the previous volume

We are trying to run a blockchain node with AWS EKS.
We need to maintain the previous blockchain snapshot while updating the node version.
So we used a Kubernetes persistent volume, but the previous volume is just overwritten whenever we kill the previous pod and restart a new one.
We set up the arguments of the Docker image to store the data at "/data/headless", like this:
--store-path=/data/headless
Also, we set the volumeMounts to "/disk":
volumeMounts:
  - name: block-data
    mountPath: /disk
The result shows that this volume path seems to be a temporary volume, and we are wondering how to maintain these volumes even if we update the pods.
[We have seen millions of AWS documents, but still don't know where to get started :( ]
How can I keep the previous volume and connect it to the new pod without losing any data in the previous one?
Here's our source code.
Volume Claim Yaml File
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nine-claim-3
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
StatefulSet & Service Yaml File
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nine-cluster
  namespace: test-nine-3
  labels:
    app: nine
spec:
  serviceName: nine-cluster
  replicas: 2
  selector:
    matchLabels:
      app: nine
  template:
    metadata:
      labels:
        app: nine
    spec:
      containers:
        - name: nine-container
          image: planetariumhq/ninechronicles-headless:v100260
          ports:
            - name: rpc
              containerPort: 31238
            - name: iceserver
              containerPort: 3478
            - name: peer
              containerPort: 31234
            - name: graphql
              containerPort: 80
          args: [
            "--port=31234",
            "--no-miner",
            "--store-type=rocksdb",
            "--store-path=/data/headless",
            "--graphql-server",
            "--graphql-host=0.0.0.0",
            "--graphql-port=80",
            "--rpc-server",
            "--rpc-remote-server",
            "--rpc-listen-host=0.0.0.0",
            "--rpc-listen-port=31238",
            "--no-cors",
            "--chain-tip-stale-behavior-type=reboot"
          ]
          volumeMounts:
            - name: block-data
              mountPath: /data/headless
      volumes:
        - name: block-data
          persistentVolumeClaim:
            claimName: nine-claim-3
---
apiVersion: v1
kind: Service
metadata:
  name: nine-nlb-services
  namespace: test-nine-3
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - name: graphql
      port: 80
      targetPort: 80
      nodePort: 31003
      protocol: TCP
    - name: rpc
      port: 31238
      targetPort: 31238
      nodePort: 31004
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nine
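One way to read the symptom (not an authoritative fix): with a single ReadWriteOnce gp2 claim shared by replicas: 2, the pods cannot all keep the same EBS volume attached, and anything written outside the mounted path dies with the container. A sketch, assuming each replica should keep its own disk across restarts, is to let the StatefulSet create one PVC per pod via volumeClaimTemplates and mount it at the --store-path:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nine-cluster
  namespace: test-nine-3
spec:
  serviceName: nine-cluster
  replicas: 2
  selector:
    matchLabels:
      app: nine
  template:
    metadata:
      labels:
        app: nine
    spec:
      containers:
        - name: nine-container
          image: planetariumhq/ninechronicles-headless:v100260
          args: ["--store-type=rocksdb", "--store-path=/data/headless"]  # other flags omitted from this sketch
          volumeMounts:
            - name: block-data
              mountPath: /data/headless   # must match --store-path
  # One PVC per replica (named block-data-nine-cluster-0, block-data-nine-cluster-1, ...)
  volumeClaimTemplates:
    - metadata:
        name: block-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 5Gi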

Redis Pod with 3 replicas and Persistent Storage not providing data all the time

I have created a Persistent Volume and a Persistent Volume Claim and used them for a Redis Deployment with 3 replicas (YAML shared below).
It allows me to set and get data.
However, POD1 is not able to fetch data set by POD2 and vice versa.
I want to fix this issue so that POD1, POD2 and POD3 can exchange data from the same PV.
redis-pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/dev/redis_data"
redis-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Here is my deployment file
redis_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 3
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      volumes:
        - name: redis-data-store
          persistentVolumeClaim:
            claimName: redis-pv-claim
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
          volumeMounts:
            - mountPath: "/data"
              name: redis-data-store
And Services for ClusterIP & NodePort for external use:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
    - targetPort: 6379
      port: 6379
      nodePort: 30379
  selector:
    name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-service
spec:
  type: ClusterIP
  ports:
    - targetPort: 6379
      port: 6379
      protocol: TCP
  selector:
    name: redis
All the data is getting stored on my local storage, but pods are not able to fetch data with the GET KEY command for values SET by other pods.
First, you are missing the concept of Redis clustering and networking here.
Just starting the 3 replicas won't resolve your issue; they won't interact with each other.
What you are trying to do requires replication or clustering across the pods.
Second issue:
You have mentioned
accessModes:
  - ReadWriteOnce
in the PVC and PV; because of this, all of your pods cannot attach to a single PV or PVC.
You have to use ReadWriteMany, or services like NFS or EFS, to create the PV and PVC and attach that single PV/PVC behind all the pods.
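As a rough sketch of that idea (assuming you run on AWS and have the AWS EFS CSI driver installed; the storage class name and filesystem ID are placeholders):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com          # AWS EFS CSI driver
parameters:
  provisioningMode: efs-ap            # access-point based dynamic provisioning
  fileSystemId: fs-0123456789abcdef0  # placeholder EFS filesystem ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-shared-claim
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany                   # all replicas can mount the same volume
  resources:
    requests:
      storage: 5Gi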
Third issue:
Redis Pod with 3 replicas and Persistent Storage not providing data
all the time
You are just running the image without any kind of backup or snapshot configuration:
- name: redis
  image: redis
Start AOF & RDB:
image: redislabs/redis
args: ["--requirepass", "admin", "--appendonly", "yes", "--save", "900", "1", "--save", "30", "2"]
Redis supports two kinds of backup to persist the data, AOF & RDB; you can read more at https://redis.io/topics/persistence
Final answer:
Instead of trying to configure Redis replication yourself, I would suggest using the Helm chart, which creates the StatefulSets and PVCs on its own without much configuration, and lets you run an HA Redis cluster with persistent data.
Helm chart link: https://docs.bitnami.com/tutorials/deploy-redis-sentinel-production-cluster/
Chart YAML: https://github.com/bitnami/charts/tree/master/bitnami/redis
This is a Redis HA cluster with Sentinel configuration.
Read more about Sentinel at:
https://stackoverflow.com/a/70271427/5525824
https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel
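If you go the Helm route, the install is roughly as below; the chart values change between versions, so treat the flags as an assumption and check the chart's README:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Redis with Sentinel-based HA and persistence enabled
helm install my-redis bitnami/redis --set architecture=replication --set sentinel.enabled=true --set master.persistence.size=8Gi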

How to define a PVC for a specific path on a single node in kubernetes

I am running a local k8s cluster and defining a PV as hostPath for the mysql pods.
I am sharing all the configuration details below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The problem I am getting is that, as the mysql pod runs in the k8s cluster, when it is deleted and recreated it can be scheduled on any node, so the mysql hostPath is always mounted on one specific node. Is it a good idea to pin mysql to a fixed node, or are there other options? Please share if you have any ideas.
You have the choices below:
Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount is created, OR
Use local persistent volumes; they are supported on Kubernetes 1.14 and above.
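A minimal sketch of the first option, assuming the data lives on a node whose hostname label is worker-node-1 (a placeholder); the rest of the Deployment stays as in your manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      # Pin the pod to the node that actually holds /mnt/data
      nodeSelector:
        kubernetes.io/hostname: worker-node-1   # placeholder node name
      containers:
        - image: mysql:5.7
          name: mysql
          # env, ports, volumeMounts and volumes as in the original Deployment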
Why are you using a PVC and a PV? Actually, for hostPath, you don't even need to create the PV object; the pod can use the hostPath directly.
You should use a StatefulSet if you want a re-created pod to get the storage it was using previously (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
This StatefulSet fails, but that is a mysql-specific thing; as a reference it should serve.

ActiveMQ on Kubernetes with Shared storage

I have existing applications built with Apache Camel and ActiveMQ. As part of a migration to Kubernetes, we are moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ such that I do not lose the data in case one of the pods dies.
What I am doing now is running a Deployment with the replicas value set to 2. This starts 2 pods, and with a Service in front I can serve any request while at least 1 pod is up. However, if one pod dies, I do not want to lose the data. I want to implement something like a shared file system between the pods. My environment is in AWS, so I can use EBS. Can you suggest how to achieve that?
Below is my deployment and service YAML.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: smp-activemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
        - name: smp-activemq
          image: dasdebde/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
          resources:
            limits:
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
    - nodePort: 32191
      port: 61616
      targetPort: 61616
StatefulSets are valuable for applications that require stable, persistent storage. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet; this is done to ensure data safety. The volumeClaimTemplates part of the YAML will provide stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
In your case, the StatefulSet definition will look similar to this:
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
  labels:
    app: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
    - nodePort: 32191
      port: 61616
      name: smp-activemq
      targetPort: 61616
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smp-activemq
spec:
  selector:
    matchLabels:
      app: smp-activemq
  serviceName: smp-activemq
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
        - name: smp-activemq
          image: dasdebde/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
              name: smp-activemq
          volumeMounts:
            - name: www
              mountPath: <mount-path>
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "<storageclass-name>"
        resources:
          requests:
            storage: 1Gi
What you need to define is your StorageClass name and mountPath. I hope this helps you.
In high-level terms, what you want is a StatefulSet instead of a Deployment for your ActiveMQ. You are correct that you want "shared file system" -- in kubernetes this is expressed as a "Persistent Volume", which is made available to the pods in your StatefulSet using a "Volume Mount".
These are the things you need to look up.
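For the AWS/EBS case in the question, the <storageclass-name> placeholder above could point at a class like the sketch below, which uses the in-tree EBS provisioner (newer clusters would use the EBS CSI driver instead). Note that EBS volumes are ReadWriteOnce, so each StatefulSet pod gets its own volume rather than one truly shared filesystem:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: activemq-ebs
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain                # keep the volume when the claim is deleted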

How do I run minio object storage in a minikube cluster?

I want to integrate a minio object storage into my minikube cluster.
I use the Dockerfile from the minio git repo.
I also added the persistent volume with the claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/minio"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
For the minio deployment I have:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app: minio
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: minio
        role: master
        tier: backend
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: minio
          image: <secret Registry >
          env:
            - name: MINIO_ACCESS_KEY
              value: akey
            - name: MINIO_SECRET_KEY
              value: skey
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: data
              mountPath: /data/ob
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio-pv-claim
For the service I opened up the external IP just for debugging
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
    role: master
    tier: backend
spec:
  ports:
    - port: 9000
      targetPort: 9000
  externalIPs:
    - 192.168.99.101
  selector:
    app: minio
    role: master
    tier: backend
But when I start the deployment I get the error message ERROR Unable to initialize backend: The disk size is less than the minimum threshold.
I assumed that 3 GB should be enough. How can I solve this issue? Moreover, now that I try to delete my persistent volume, it stays in the Terminating status.
How can I run minio in a minikube cluster?
I don't think there is enough storage in /mnt/data inside minikube. Try /mnt/sda1 or /data. Better yet, go inside minikube and check the storage available. To get into minikube you can do minikube ssh.
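For example (the paths are only suggestions; the mounts available depend on the minikube driver):
# Open a shell inside the minikube VM and check free space on the candidate paths
minikube ssh
df -h /mnt/data /mnt/sda1 /data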