Kubernetes: share storage between replicas

We have Kubernetes running on our own servers. For persistent storage we have an NFS server. This works great.
Now we want to deploy an application with multiple replicas that should have shared storage between them, but the storage should not be persistent. When the pods are deleted, the data should be gone as well.
I was hoping I could achieve this with the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - name: shared-data
          mountPath: /shared-data
        resources:
          limits:
            memory: 500Mi
All replicas have /shared-data, but when one replica stores data in that folder, the other replicas cannot see the file, so it is not shared.
What are my options?

You can use a PVC to share data between the pods. Then you can set up a preStop lifecycle hook
on the pods to clean up the data when a pod gets deleted.
Here is an example of adding a preStop hook to a pod: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
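For illustration, a minimal sketch of what that could look like, assuming an NFS-backed ReadWriteMany claim named shared-data-pvc (the claim name and the cleanup command are hypothetical, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: shared-data-pvc   # hypothetical NFS-backed RWX claim
      containers:
      - name: nginx
        image: nginx:latest
        volumeMounts:
        - name: shared-data
          mountPath: /shared-data
        lifecycle:
          preStop:
            exec:
              # clean up the shared folder when this pod is terminated
              command: ["/bin/sh", "-c", "rm -rf /shared-data/*"]
Keep in mind that a preStop hook runs on every pod termination, so with this approach a rolling update or the deletion of a single replica would also clear the shared directory for the remaining pods.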

Related

Persistent Payara Server Admin UI in Kubernetes

I am using payara/server-full in Kubernetes. I want to add a persistent volume so that all configuration made to the Payara server via the Admin UI is persisted after the pod is recreated, including uploaded .war files.
Right now my deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name:
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: payara/server-full
        imagePullPolicy: "Always"
        ports:
        - name: myapp-default
          containerPort: 8080
        - name: myapp-admin
          containerPort: 4848
How do I augment that YAML file to make use of a persistent volume?
Which path(s) within Payara should be synced with the persistent volume so that Payara's configuration isn't lost after redeployment?
Which additional YAML files do I need?
So after longer consideration of the problem I realised I need to persist everything under /opt/payara/appserver/glassfish/domains for all configuration made via the Admin UI to be persisted. However, if I simply start the pod with a volumeMount pointing to that path, i.e.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
      - name: myapp-vol
        persistentVolumeClaim:
          claimName: myapp-rwo-pvc
      containers:
      - name: myapp
        image: payara/server-full
        imagePullPolicy: "Always"
        ports:
        - name: myapp-default
          containerPort: 8080
        - name: myapp-admin
          containerPort: 4848
        volumeMounts:
        - name: myapp-vol
          mountPath: "/opt/payara/appserver/glassfish/domains"
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-rwo-pvc
  labels:
    app: dont-delete-autom
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
then the Payara server won't be able to start successfully, because Kubernetes will mount an empty persistent volume into that location. Payara, however, needs config files that are originally located within /opt/payara/appserver/glassfish/domains.
What I needed to do was to provision the volume with the data that is by default located in that folder. But how do you do that when the only way to access the PV is to mount it into a pod?
First I scaled the above deployment to 0 with:
kubectl scale --replicas=0 deployment/myapp
This deletes all pods accessing the persistent volume.
Then I created a "provisioning" pod which mounts the previously created persistent volume into /tmp.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: pv-provisioner
  namespace: default
spec:
  containers:
  - image: payara/server-full
    imagePullPolicy: Always
    name: pv-provisioner
    ports:
    - containerPort: 8080
      name: myapp-default
      protocol: TCP
    - containerPort: 4848
      name: myapp-admin
      protocol: TCP
    volumeMounts:
    - mountPath: "/tmp"
      name: myapp-vol
    resources:
      limits:
        cpu: "2"
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 128Mi
  volumes:
  - name: myapp-vol
    persistentVolumeClaim:
      claimName: myapp-rwo-pvc
Then I used the following commands to copy the necessary data first from the "provisioning" pod to a local folder tmp, and then back from tmp to the persistent volume (previously mounted into pv-provisioner:/tmp). There is no option to copy directly from pod:/a to pod:/b.
kubectl cp pv-provisioner:/opt/payara/appserver/glassfish/domains/. tmp
kubectl cp tmp/. pv-provisioner:/tmp
As a result, everything stored under /opt/payara/appserver/glassfish/domains/ in the original Payara container was now copied into the persistent volume identified by the persistent volume claim "myapp-rwo-pvc".
To finish it up I deleted the provisioning pod and scaled the deployment back up:
kubectl delete pod pv-provisioner
kubectl scale --replicas=3 deployment/myapp
The Payara server now starts successfully, and any configuration made via the Admin UI, including .war deployments, is persisted, so the Payara pods can be killed at any time and after a restart everything is as before.
Thanks for reading.

Ephemeral volume limit making pod error out

I'm working on a task where I want to limit ephemeral volume usage to a certain number of Gi.
This is my deployment configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vol
  namespace: namespace1
  labels:
    app: vol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vol
  template:
    metadata:
      labels:
        app: vol
    spec:
      containers:
      - name: vol
        image: <my-image>
        ports:
        - containerPort: 5000
        resources:
          limits:
            ephemeral-storage: "1Gi"
        volumeMounts:
        - name: ephemeral
          mountPath: "/volume"
      volumes:
      - name: ephemeral
        emptyDir: {}
The expected behaviour is that when the volume limit is reached the pod gets evicted, which is happening as expected.
The only problem I have is that after the default termination grace period the pod ends up in an error state with a warning ExceededGracePeriod. Now I have one pod running and one errored pod in my deployment.
I have tried solutions such as increasing terminationGracePeriodSeconds, using a preStop hook, and setting a limit on emptyDir: {} as well, but nothing worked for me.
You can increase the ephemeral storage limit to 2Gi. This might resolve your error. Refer to the Kubernetes documentation on quotas and limit ranges for ephemeral storage, and to the section on how ephemeral storage consumption management works, for more details.
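As a rough sketch of that suggestion, keeping the rest of the deployment as in the question (the 2Gi value follows the answer above; the emptyDir sizeLimit is an optional extra cap, not something the question used):
      containers:
      - name: vol
        image: <my-image>
        ports:
        - containerPort: 5000
        resources:
          limits:
            ephemeral-storage: "2Gi"   # raised from 1Gi as suggested
        volumeMounts:
        - name: ephemeral
          mountPath: "/volume"
      volumes:
      - name: ephemeral
        emptyDir:
          sizeLimit: 2Gi               # optional cap on the emptyDir volume itself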

Kubernetes reattach to same persistent volume after delete

I have an app where two pods need access to the same volume. I want to be able to delete the cluster and then, after applying again, be able to access the data that is on the volume.
So for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  #storageClassName: retaining
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-php
  labels:
    app: myapp-php
    k8s-app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-php
  template:
    metadata:
      labels:
        app: myapp-php
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-php
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 750m
            memory: 3Gi
          requests:
            cpu: 750m
            memory: 3Gi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
  labels:
    app: myapp-web
    k8s-app: myapp
spec:
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-web
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
If I do these:
k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply -f pv-issue.yaml
I want to connect to the same volume.
What I have tried:
If I keep the file as is, the volume will be deleted so the data will be lost.
I can remove the PVC declaration from the file; then it works. My issue is that in the real app I am using kustomize, and I don't see a way to exclude resources when doing kustomize build app | kubectl delete -f -
I tried using Retain in the PVC's storage class. It retains the volume on delete, but on the next apply a new volume is created.
A StatefulSet; however, I don't see a way for two different StatefulSets to share the same volume.
Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Cluster deletion will cause all your local volumes to be deleted. You can achieve this by storing the data outside the cluster. Kubernetes supports a wide variety of storage providers to help you keep data on many storage types.
You could also keep the data locally on nodes using hostPath, but that is not a good solution either, since it requires pinning the pod to a specific node to avoid data loss, and if you delete your cluster in a way that removes all of your VMs, that data is gone as well.
Having some network-attached storage is the right way to go here. A very good example is a persistent disk: a durable network storage device that your instances can access. Persistent disks are located independently of your virtual machines and are not deleted when you delete the cluster.
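If you stay on network-attached storage with a Retain reclaim policy, one way to reattach after recreating the cluster is static provisioning: define a PersistentVolume that points at the existing cloud volume and bind the PVC to it explicitly. This is only a rough sketch assuming the Hetzner CSI driver; the PV name and the volumeHandle (the ID of the existing cloud volume) are hypothetical:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: retaining
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "12345678"   # hypothetical ID of the existing Hetzner volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  storageClassName: retaining
  volumeName: media-pv         # bind explicitly to the pre-provisioned PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi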

postgres data retention in Kubernetes minikube

I am trying to deploy postgres with a persistent volume on my minikube instance. I have mounted the volume (hostPath) using a PVC, but I don't see the postgres tables being retained. I tried touching a file in a shared directory in the pod and found it was retained, which means the volume is preserved between deployments, so why aren't the postgres tables? Thanks for any insights.
Here is my postgres deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dbapp-deployment
  labels:
    app: dbapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbapp
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: default
      labels:
        app: dbapp
        tier: backend
    spec:
      containers:
      - name: dbapp
        image: xxxxx/dbapp:latest
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: pvc001
          mountPath: /var/lib/postgres/data
      volumes:
      - name: pvc001
        persistentVolumeClaim:
          claimName: pvc001
$ kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001   Bound    pvc-29320475-37dc-11e9-8a82-080027218780   1Gi        RWX            standard       29m
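One thing worth double-checking, purely as an assumption on my part since xxxxx/dbapp:latest is a custom image: the official postgres image keeps its data under /var/lib/postgresql/data (note "postgresql", not "postgres"). If the database writes its data outside the mounted path, that data lives only in the container layer and is lost when the pod is recreated. A mount matching the official image's default would look like:
        volumeMounts:
        - name: pvc001
          # default data directory of the official postgres image; a custom image
          # may use a different path, so verify where the server actually writes
          mountPath: /var/lib/postgresql/data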

How to propagate Kubernetes events from a GKE cluster to Google Cloud logging

Is there any way to propagate all Kubernetes events to Google Cloud logging? For instance, a pod creation/deletion or a liveness probe failure. I know I can use kubectl get events in a console; however, I would like to preserve those events in a log file in the cloud log together with other pod-level logs. It is quite helpful information.
It seems that the OP found the logs, but I wasn't able to find them on GKE (1.4.7) with Stackdriver. It was a little tricky to figure out, so I thought I'd share for others. I was able to get them by creating an eventer deployment with the gcl sink.
For example:
deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: eventer
  name: eventer
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: eventer
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: eventer
    spec:
      containers:
      - name: eventer
        command:
        - /eventer
        - --source=kubernetes:''
        - --sink=gcl
        image: gcr.io/google_containers/heapster:v1.2.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
Then, search for logs with an advanced filter (substitute your GCE project name):
resource.type="global"
logName="projects/project-name/logs/kubernetes.io%2Fevents"