Kubernetes persistent volume overrides the previous volume

We are trying to run a blockchain node on AWS EKS.
We need to preserve the previous blockchain snapshot while updating the node version.
So we used a Kubernetes persistent volume, but the previous volume is simply overwritten whenever we kill the previous pod and start a new one.
We set up the arguments of the Docker image to store the data at "/data/headless", like this:
--store-path=/data/headless
We also set the volumeMounts to "/disk":
volumeMounts:
  - name: block-data
    mountPath: /disk
The result (screenshot omitted) suggests this volume path is only a temporary volume, and we are wondering how to keep these volumes even when we update the pods.
[We have gone through piles of AWS documentation, but still don't know where to start :( ]
How can I keep the previous volume and attach it to the new pod without losing any of the data stored in it?
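One quick sanity check (a sketch, using the pod, namespace and claim names from the manifests below, so adjust if yours differ) is to confirm that the path the node writes to is actually backed by the EBS volume and that the claim stays bound across restarts:

# Show which device backs the store path inside the running pod
kubectl exec -n test-nine-3 nine-cluster-0 -- df -h /data/headless

# Confirm the claim is Bound and which PersistentVolume it is bound to
kubectl get pvc nine-claim-3 -n test-nine-3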
Here's our source code.
Volume Claim Yaml File
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nine-claim-3
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
StatefulSet & Service Yaml File
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nine-cluster
  namespace: test-nine-3
  labels:
    app: nine
spec:
  serviceName: nine-cluster
  replicas: 2
  selector:
    matchLabels:
      app: nine
  template:
    metadata:
      labels:
        app: nine
    spec:
      containers:
        - name: nine-container
          image: planetariumhq/ninechronicles-headless:v100260
          ports:
            - name: rpc
              containerPort: 31238
            - name: iceserver
              containerPort: 3478
            - name: peer
              containerPort: 31234
            - name: graphql
              containerPort: 80
          args: [
            "--port=31234",
            "--no-miner",
            "--store-type=rocksdb",
            "--store-path=/data/headless",
            "--graphql-server",
            "--graphql-host=0.0.0.0",
            "--graphql-port=80",
            "--rpc-server",
            "--rpc-remote-server",
            "--rpc-listen-host=0.0.0.0",
            "--rpc-listen-port=31238",
            "--no-cors",
            "--chain-tip-stale-behavior-type=reboot"
          ]
          volumeMounts:
            - name: block-data
              mountPath: /data/headless
      volumes:
        - name: block-data
          persistentVolumeClaim:
            claimName: nine-claim-3
---
apiVersion: v1
kind: Service
metadata:
  name: nine-nlb-services
  namespace: test-nine-3
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - name: graphql
      port: 80
      targetPort: 80
      nodePort: 31003
      protocol: TCP
    - name: rpc
      port: 31238
      targetPort: 31238
      nodePort: 31004
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nine
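Since this StatefulSet runs two replicas against a single ReadWriteOnce claim, an alternative worth considering (a minimal sketch, not the configuration above) is to drop the shared volumes: entry and let the StatefulSet request one gp2-backed claim per replica via volumeClaimTemplates; each pod then keeps its own /data/headless across image updates, and the claims survive pod deletion:

  # goes under the StatefulSet spec, at the same level as template (sketch)
  volumeClaimTemplates:
    - metadata:
        name: block-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: gp2
        resources:
          requests:
            storage: 5Gi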

Related

Reclaim Policy set to retain, but volume data will be erased

I set up the persistent volume using AWS EBS.
I also set the reclaim policy to Retain and deleted the pod to see whether the persistent volume keeps the data as expected.
When I deleted the pod, the data in the volume was also deleted.
I was confused that the volume data disappeared even though I had set the reclaim policy to Retain.
If I delete the pod, will the volume data be deleted even if the reclaim policy is set to Retain?
I want to keep the volume data.
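For context, the Retain reclaim policy only governs what happens to the PersistentVolume after its claim is deleted; deleting a pod by itself does not delete an EBS-backed volume or the data on it, provided the application really writes inside the mounted path. A quick way to inspect this (using the names from the manifests below):

# RECLAIM POLICY and STATUS columns show whether the volume would survive claim deletion
kubectl get pv

# Confirm the claim is Bound and which pod is using it
kubectl describe pvc nine-claim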
PersistentVolumeClaim file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nine-claim
spec:
  storageClassName: ebs-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Storage Class File
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
Statefulset File
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nine-cluster
  namespace: default
  labels:
    app: nine
spec:
  serviceName: nine-cluster
  replicas: 1
  selector:
    matchLabels:
      app: nine
  template:
    metadata:
      labels:
        app: nine
    spec:
      containers:
        - name: nine-container
          image: planetariumhq/ninechronicles-headless:v100270
          ports:
            - name: rpc
              containerPort: 31238
            - name: iceserver
              containerPort: 3478
            - name: peer
              containerPort: 31234
            - name: graphql
              containerPort: 80
          args: [
            "--workers=1000",
            "--port=31234",
            "--no-miner",
            "--store-type=rocksdb",
            "--store-path=/data/headless",
            "--graphql-server",
            "--graphql-host=0.0.0.0",
            "--graphql-port=80",
            "--rpc-server",
            "--rpc-remote-server",
            "--rpc-listen-host=0.0.0.0",
            "--rpc-listen-port=31238",
            "--no-cors",
            "--chain-tip-stale-behavior-type=reboot"
          ]
          volumeMounts:
            - name: block-data
              mountPath: /data/headless
      volumes:
        - name: block-data
          persistentVolumeClaim:
            claimName: nine-claim
---
apiVersion: v1
kind: Service
metadata:
  name: nine-nlb-services
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - name: graphql
      port: 80
      targetPort: 80
      nodePort: 31016
      protocol: TCP
    - name: rpc
      port: 31238
      targetPort: 31238
      nodePort: 31017
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nine

mongodb microservice k8 persistent volume claim not persisting data

I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem; but if I restart skaffold, so that it deletes and recreates the containers, my data is gone and I have to sign up again.
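One thing that may help narrow it down (a hypothetical check, using the claim name from the manifests below): if the PVC is part of the manifests Skaffold deploys, a Skaffold teardown may delete the claim and its dynamically provisioned volume along with everything else. Comparing the claim's UID across restarts shows whether the same claim survived:

# If the UID changes between skaffold runs, the claim (and its volume) was recreated
kubectl get pvc auth-mongo-pvc -o jsonpath='{.metadata.uid}'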
Here are my files
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a post request to /api/users/auth with which I can successfully signup or signin as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it is not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look on this https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution, as pointed out by raghu_manne, was to use StatefulSets. But because the linked post is quite old, here is the full solution that worked for me.
Here is also a YouTube video I just found that explains StatefulSets and volumeClaimTemplates quite well:
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
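A small caveat with the manifest above: serviceName: auth-mongo refers to a Service that is not defined here; a StatefulSet normally expects a (typically headless) Service of that name to give its pods stable DNS identities. A minimal sketch of such a Service, if you need it:

apiVersion: v1
kind: Service
metadata:
  name: auth-mongo
spec:
  clusterIP: None   # headless: gives each StatefulSet pod a stable DNS name
  selector:
    app: auth-mongo
  ports:
    - port: 27017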

kubectl - Error response from daemon: error while creating mount source path

I'm trying to install the SAP HANA Express docker image on a Kubernetes node in Google Cloud Platform, following the guide https://developers.sap.com/tutorials/hxe-k8s-advanced-analytics.html#7f5c99da-d511-479b-8745-caebfe996164; however, during step 7, "Deploy your containers and connect to them", I'm not getting the expected result.
I'm executing command kubectl create -f hxe.yaml and here is the yaml file I'm using it:
kind: ConfigMap
apiVersion: v1
metadata:
  creationTimestamp: 2018-01-18T19:14:38Z
  name: hxe-pass
data:
  password.json: |+
    {"master_password" : "HXEHana1"}
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-vol-hxe
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/hxe_pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hxe-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hxe
  labels:
    name: hxe
spec:
  selector:
    matchLabels:
      run: hxe
      app: hxe
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        run: hxe
        app: hxe
        role: master
        tier: backend
    spec:
      initContainers:
        - name: install
          image: busybox
          command: [ 'sh', '-c', 'chown 12000:79 /hana/mounts' ]
          volumeMounts:
            - name: hxe-data
              mountPath: /hana/mounts
      volumes:
        - name: hxe-data
          persistentVolumeClaim:
            claimName: hxe-pvc
        - name: hxe-config
          configMap:
            name: hxe-pass
      imagePullSecrets:
        - name: docker-secret
      containers:
        - name: hxe-container
          image: "store/saplabs/hanaexpress:2.00.045.00.20200121.1"
          ports:
            - containerPort: 39013
              name: port1
            - containerPort: 39015
              name: port2
            - containerPort: 39017
              name: port3
            - containerPort: 8090
              name: port4
            - containerPort: 39041
              name: port5
            - containerPort: 59013
              name: port6
          args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
          volumeMounts:
            - name: hxe-data
              mountPath: /hana/mounts
            - name: hxe-config
              mountPath: /hana/hxeconfig
        - name: sqlpad-container
          image: "sqlpad/sqlpad"
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hxe-connect
  labels:
    app: hxe
spec:
  type: LoadBalancer
  ports:
    - port: 39013
      targetPort: 39013
      name: port1
    - port: 39015
      targetPort: 39015
      name: port2
    - port: 39017
      targetPort: 39017
      name: port3
    - port: 39041
      targetPort: 39041
      name: port5
  selector:
    app: hxe
---
apiVersion: v1
kind: Service
metadata:
  name: sqlpad
  labels:
    app: hxe
spec:
  type: LoadBalancer
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      name: sqlpad
  selector:
    app: hxe
I'm also using the last version of HANA Express Edition docker image: store/saplabs/hanaexpress:2.00.045.00.20200121.1 that you can see available here: https://hub.docker.com/_/sap-hana-express-edition/plans/f2dc436a-d851-4c22-a2ba-9de07db7a9ac?tab=instructions
The error I'm getting is the one quoted in the title (screenshot omitted): "Error response from daemon: error while creating mount source path".
Any thoughts on what could be wrong?
Best regards and happy new year for everybody.
Thanks to Mahboob's suggestion, I can now start the pods (partially), and the issue no longer pops up at the "busybox" init-container stage. The problem was that I was using a Container-Optimized OS image for the node pool, while the required one is Ubuntu. If you are facing a similar issue, double-check the image flavor you choose when creating the node pool.
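For reference, a sketch of creating a GKE node pool with the Ubuntu image type (the cluster, pool and zone names are placeholders, and the exact --image-type value can vary by GKE version):

gcloud container node-pools create hxe-ubuntu-pool \
  --cluster=<cluster-name> \
  --zone=<zone> \
  --image-type=UBUNTU_CONTAINERD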
However, I now have a different issue: the pods are starting (both the hana container and the one for sqlpad), but the sqlpad container crashes at some point after starting and the pod gets stuck in the CrashLoopBackOff state. In the screenshot (omitted here), the pod shows CrashLoopBackOff with only 1/2 containers started, and then suddenly both are running.
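For the CrashLoopBackOff, the container's own logs usually say why it keeps exiting; a sketch, assuming the pod name reported by kubectl get pods:

# Logs from the current and the previously crashed sqlpad container, plus pod events
kubectl logs <hxe-pod-name> -c sqlpad-container
kubectl logs <hxe-pod-name> -c sqlpad-container --previous
kubectl describe pod <hxe-pod-name>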
I haven't been able to pin this down since I'm a newcomer to the Kubernetes and Docker world. I hope some of you can shed some light on it.
Best regards.

K8s PersistentVolume shared among multiple PersistentVolumeClaims for local testing

Could someone please help me and point me to the configuration I should be using for my use case?
I'm building a development k8s cluster, and one of the steps is to generate security files (private keys) in a number of pods during deployment (say, for a simple setup, 6 pods that each build their own security keys). I need access to all these files, and they must persist after a pod goes down.
I'm trying to figure out how to set this up locally for internal testing. From what I understand, local PersistentVolumes only allow a 1:1 relationship with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all.
Could someone be so kind as to help me, or point me to the right setup that should be used?
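For context: a single volume written to by several pods generally needs an access mode such as ReadWriteMany, which most block-storage provisioners do not offer; the claim would look roughly like this sketch (the name and storage class are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-keys-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <rwx-capable-class>
  resources:
    requests:
      storage: 1Gi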
-- Update: 26/11/2020
So this is my setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hlf-nfs--server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hlf-nfs--server
  template:
    metadata:
      labels:
        app: hlf-nfs--server
    spec:
      containers:
        - name: hlf-nfs--server
          image: itsthenetwork/nfs-server-alpine:12
          ports:
            - containerPort: 2049
              name: tcp
            - containerPort: 111
              name: udp
          securityContext:
            privileged: true
          env:
            - name: SHARED_DIRECTORY
              value: "/opt/k8s-pods/data"
          volumeMounts:
            - name: pvc
              mountPath: /opt/k8s-pods/data
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: shared-nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: hlf-nfs--server
  labels:
    name: hlf-nfs--server
spec:
  type: ClusterIP
  selector:
    app: hlf-nfs--server
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
These three are created at once; after that, I read the IP of the service and add it to the last one:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s-pods/data
    server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
The problem I'm getting and trying to resolve is that the PVC does not get bound to the PV, and the deployment stays stuck waiting to become READY.
Did I miss anything?
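One detail that can prevent binding here (an observation, not a confirmed diagnosis): the claim asks for storageClassName: nfs, while the hand-made PersistentVolume above declares no storageClassName at all, and a PV and a PVC only bind when their storage class names match. A sketch of the PV with the class added:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  storageClassName: nfs          # must match the PVC's storageClassName
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s-pods/data
    server: <IP from `kubectl get svc -l name=hlf-nfs--server`>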
You can create an NFS server and have the pods use an NFS volume. Here is a manifest to create such an in-cluster NFS server (make sure you modify STORAGE_CLASS and the other variables below):
export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_IMAGE="itsthenetwork/nfs-server-alpine:12"
export STORAGE_CLASS="thin-disk"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
  selector:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
  name: ${NFS_NAME}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: $STORAGE_CLASS
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
      app.kubernetes.io/instance: ${NFS_NAME}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
        app.kubernetes.io/instance: ${NFS_NAME}
    spec:
      containers:
        - name: nfs-server
          image: ${NFS_IMAGE}
          ports:
            - containerPort: 2049
              name: tcp
            - containerPort: 111
              name: udp
          securityContext:
            privileged: true
          env:
            - name: SHARED_DIRECTORY
              value: /nfsshare
          volumeMounts:
            - name: pvc
              mountPath: /nfsshare
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: ${NFS_NAME}
EOF
Below is an example of how to point the other pods to this NFS. In particular, refer to the volumes section at the end of the YAML:
export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)
kubectl apply -f - <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: apache
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/
              name: nfs-vol
              subPath: html
      volumes:
        - name: nfs-vol
          nfs:
            server: $NFS_IP
            path: /
EOF
It is correct that there is a 1:1 relation between a PersistentVolumeClaim and a PersistentVolume.
However, Pods running on the same Node can concurrently mount the same volume, e.g. use the same PersistentVolumeClaim.
If you use Minikube for local development, you only have one node, so you can use the same PersistentVolumeClaim. Since you want to use different files for each app, you could use a unique directory for each app in that shared volume.
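One way to give each app its own directory inside that single shared claim (a sketch; the container name, image and claim name are illustrative) is a subPath on the mount:

# Pod spec fragment: each app mounts its own subdirectory of the shared claim
containers:
  - name: app-one                # illustrative name
    image: <app-image>
    volumeMounts:
      - name: shared-data
        mountPath: /keys
        subPath: app-one         # unique directory per app inside the shared volume
volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-nfs-pvc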
So finally, I did it by using a dynamic provisioner.
I installed stable/nfs-server-provisioner with Helm. With proper configuration, it managed to create a PV named nfs to which my PVCs are able to bind :)
helm install stable/nfs-server-provisioner --name nfs-provisioner -f nfs_provisioner.yaml
The nfs_provisioner.yaml is as follows:
persistence:
  enabled: true
  storageClass: "standard"
  size: 20Gi

storageClass:
  # Name of the storage class that will be managed by the provisioner
  name: nfs
  defaultClass: true
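With the provisioner in place, consuming the storage is then just a claim against the nfs class; a minimal sketch (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-shared-claim          # illustrative name
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi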

ActiveMQ on Kubernetes with shared storage

I have existing applications built with Apache Camel and ActiveMQ. As part of the migration to Kubernetes, we are moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ such that I do not lose the data in case one of the pods dies.
What I am doing now is running a Deployment with replicas set to 2. This starts 2 pods, and with a Service in front I can serve any request while at least 1 pod is up. However, if one pod dies, I do not want to lose the data. I want to implement something like a shared file system between the pods. My environment is in AWS, so I can use EBS. Can you suggest how to achieve that?
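A note on the storage side (general AWS behaviour, not specific to this setup): an EBS volume can only be mounted ReadWriteOnce, so it cannot serve as a shared file system for pods on different nodes; a truly shared volume usually means EFS (via the EFS CSI driver) or an NFS server, consumed through a claim roughly like this sketch (the names assume an EFS-backed StorageClass already exists):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: activemq-shared-data     # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc       # assumes an EFS-backed StorageClass
  resources:
    requests:
      storage: 5Gi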
Below is my deployment and service YAML.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: smp-activemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
        - name: smp-activemq
          image: dasdebde/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
          resources:
            limits:
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
    - nodePort: 32191
      port: 61616
      targetPort: 61616
StatefulSets are valuable for applications that require stable, persistent storage. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with it; this is done to ensure data safety. The volumeClaimTemplates section of the YAML provides stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
In your case, StatefulSet file definition will look similar to this:
apiVersion: v1
kind: Service
metadata:
  name: smp-activemq
  labels:
    app: smp-activemq
spec:
  type: NodePort
  selector:
    app: smp-activemq
  ports:
    - nodePort: 32191
      port: 61616
      name: smp-activemq
      targetPort: 61616
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smp-activemq
spec:
  selector:
    matchLabels:
      app: smp-activemq
  serviceName: smp-activemq
  replicas: 1
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
        - name: smp-activemq
          image: dasdebde/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
              name: smp-activemq
          volumeMounts:
            - name: www
              mountPath: <mount-path>
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "<storageclass-name>"
        resources:
          requests:
            storage: 1Gi
What you need to define is your StorageClass name and mountPath. I hope this helps.
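For reference, the storage classes available in a cluster (and which one is the default) can be listed with:

kubectl get storageclass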
In high-level terms, what you want is a StatefulSet instead of a Deployment for your ActiveMQ. You are correct that you want a "shared file system" -- in Kubernetes this is expressed as a "PersistentVolume", which is made available to the pods in your StatefulSet using a volume mount.
These are the things you need to look up.