Persistent Payara Server Admin UI in Kubernetes

I am using payara/server-full in Kubernetes. I want to add a persistent volume so that all configuration made to the Payara server via the Admin UI is persisted after the pod is recreated, including uploaded .war files.
Right now my deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: payara/server-full
        imagePullPolicy: "Always"
        ports:
        - name: myapp-default
          containerPort: 8080
        - name: myapp-admin
          containerPort: 4848
How do I augment that YAML file to make use of a persistent volume?
Which path(s) within Payara should be synced with the persistent volume so that Payara's configuration isn't lost after redeployment?
Which additional YAML files do I need?

So after longer consideration of the problem I realised I need to persist everything under /opt/payara/appserver/glassfish/domains for all configuration made via the Admin UI to be persisted. However, if I simply start the pod with a volumeMount pointing to that path, i.e.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
      - name: myapp-vol
        persistentVolumeClaim:
          claimName: myapp-rwo-pvc
      containers:
      - name: myapp
        image: payara/server-full
        imagePullPolicy: "Always"
        ports:
        - name: myapp-default
          containerPort: 8080
        - name: myapp-admin
          containerPort: 4848
        volumeMounts:
        - name: myapp-vol
          mountPath: "/opt/payara/appserver/glassfish/domains"
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-rwo-pvc
  labels:
    app: dont-delete-autom
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
then the Payara server won't be able to start successfully, because Kubernetes mounts an empty persistent volume into that location. Payara, however, needs the configuration files that are originally located in /opt/payara/appserver/glassfish/domains.
What I needed to do was provision the volume with the data that is located in that folder by default. But how do you do that when the only way to access the PV is to mount it into a pod?
First I scaled the above deployment down to 0 with:
kubectl scale --replicas=0 deployment/myapp
This deletes all pods accessing the persistent volume.
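To double-check that nothing is still using the claim before reprovisioning it, a quick check like this (not part of the original steps) should come back empty:
kubectl get pods -l app=myapp
# expected: "No resources found" once the scale-down has completed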
Then I created a "provisioning" pod which mounts the previously created persistent volume into /tmp.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: pv-provisioner
  namespace: default
spec:
  containers:
  - image: payara/server-full
    imagePullPolicy: Always
    name: pv-provisioner
    ports:
    - containerPort: 8080
      name: myapp-default
      protocol: TCP
    - containerPort: 4848
      name: myapp-admin
      protocol: TCP
    volumeMounts:
    - mountPath: "/tmp"
      name: myapp-vol
    resources:
      limits:
        cpu: "2"
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 128Mi
  volumes:
  - name: myapp-vol
    persistentVolumeClaim:
      claimName: myapp-rwo-pvc
Then I used the following commands to copy the necessary data, first from the "provisioning" pod to a local folder tmp, and then from that local folder back to the persistent volume (previously mounted into pv-provisioner:/tmp). There is no option to copy directly from pod:/a to pod:/b.
kubectl cp pv-provisioner:/opt/payara/appserver/glassfish/domains/. tmp
kubectl cp tmp/. pv-provisioner:/tmp
As a result, everything stored under /opt/payara/appserver/glassfish/domains/ in the original Payara container was now copied into the persistent volume identified by the persistent volume claim "myapp-rwo-pvc".
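An optional sanity check, assuming the image's default domain is called domain1, is to list the volume's contents through the provisioning pod:
kubectl exec pv-provisioner -- ls /tmp
# should now list domain1 (plus any other domains copied from the image)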
To finish it up I deleted the provisioning pod and scaled the deployment back up:
kubectl delete pod pv-provisioner
kubectl scale --replicas=3 deployment/myapp
The Payara server is now starting successfully and any configuration made via the Admin UI, including .war deployments, is persisted, so the Payara pods can be killed at any time and after a restart everything is as before.
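As a side note, the same seeding could probably be automated with an initContainer that copies the default domain data into the (still empty) volume on first start, instead of the manual kubectl cp round-trip. This is only a sketch I haven't run; the emptiness check is a plain shell test and the excerpt shows just the relevant part of the pod template:
# excerpt of the Deployment's pod template (template.spec)
spec:
  volumes:
  - name: myapp-vol
    persistentVolumeClaim:
      claimName: myapp-rwo-pvc
  initContainers:
  - name: seed-domains
    image: payara/server-full
    command: ["/bin/sh", "-c"]
    args:
    - |
      # copy the default domain data only if the volume is still empty
      if [ -z "$(ls -A /target)" ]; then
        cp -a /opt/payara/appserver/glassfish/domains/. /target/
      fi
    volumeMounts:
    - name: myapp-vol
      mountPath: /target
  containers:
  - name: myapp
    image: payara/server-full
    volumeMounts:
    - name: myapp-vol
      mountPath: /opt/payara/appserver/glassfish/domains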
Thanks for reading.

Related

Deployed Jenkins on an EKS cluster with EFS for persistence... but I still lose all configs if pod restarts

Hey, so I deployed Jenkins on an EKS cluster. The deployment is simple:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - name: jenkins-vol
          mountPath: /var
      restartPolicy: Always
      volumes:
      - name: jenkins-vol
        persistentVolumeClaim:
          claimName: jenkins-claim
the pv/pvc show up fine
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jenkins 5Gi RWX Retain Bound jenkins/jenkins-claim efs-sc 61m
but whenever the Jenkins pod restarts (to simulate an update) I lose all Jenkins configs

Kubernetes share storage between replicas

We have Kubernetes running on our own servers. For Persistent Storage we have a NFS server. This works great.
Now we want to deploy an application with multiple replicas that should have shared storage between them, but the storage should not be persistent. When the pods are deleted, the data should be gone as well.
I was hoping I could achieve it with the following
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - name: shared-data
          mountPath: /shared-data
        resources:
          limits:
            memory: 500Mi
All replicas have /shared-data, but when one replica stores data in that folder, the other replicas cannot see the file, so it is not shared.
What are my options?
You can use a PVC to share data between the pods. Then you can set up a preStop lifecycle hook for the pods to clean up the data when the pod gets deleted.
Here is an example of adding a preStop hook to a pod: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
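A minimal sketch of that combination, assuming an RWX-capable claim named shared-pvc (a placeholder, e.g. backed by the NFS storage already in place) and a best-effort cleanup of /shared-data in the preStop hook:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: shared-pvc        # hypothetical ReadWriteMany claim, e.g. backed by the NFS server
      containers:
      - name: nginx
        image: nginx:latest
        volumeMounts:
        - name: shared-data
          mountPath: /shared-data
        lifecycle:
          preStop:
            exec:
              # best-effort cleanup on pod termination; note this runs on every
              # shutdown (including rolling updates), not only on final deletion
              command: ["/bin/sh", "-c", "rm -rf /shared-data/*"]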

Kubernetes reattach to same persistent volume after delete

I have an app where two pods need access to the same volume. I want to be able to delete the cluster and then, after re-applying, still be able to access the data on the volume.
So for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  #storageClassName: retaining
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-php
  labels:
    app: myapp-php
    k8s-app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-php
  template:
    metadata:
      labels:
        app: myapp-php
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-php
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 750m
            memory: 3Gi
          requests:
            cpu: 750m
            memory: 3Gi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
  labels:
    app: myapp-web
    k8s-app: myapp
spec:
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-web
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
If I do these:
k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply -f pv-issue.yaml
I want to reconnect to the same volume.
What I have tried:
If I keep the file as is, the volume will be deleted so the data will be lost.
I can remove the PVC declaration from the file. Then it works. My issue is that in the real app I am using kustomize and I don't see a way to exclude resources when doing kustomize build app | kubectl delete -f -
Tried using retain in the pvc. It retains the volume on delete, but on the apply a new volume is created.
StatefulSet, however I don't see a way that two different StatefulSets can share the same volume.
Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Is there a way to achieve this? Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Deleting the cluster will delete all of its local volumes. You can achieve what you want by storing the data outside the cluster; Kubernetes supports a wide variety of storage providers and storage types.
You could also keep the data locally on the nodes with hostPath, but that is not a good solution either, since it requires pinning the pod to a specific node to avoid data loss. And if you delete your cluster in a way that removes all of your VMs, that data is gone as well.
Having some network-attached storage is the right way to go here. A good example is persistent disks: durable network storage devices that your instances can access. They exist independently of your virtual machines and are not deleted when you delete the cluster.
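To illustrate the idea of storage that outlives the cluster: after recreating the cluster you could statically re-attach a pre-existing volume with a hand-written PersistentVolume plus a PVC bound to it by name. This is only a sketch; the CSI driver and volumeHandle details are assumptions and would need to be checked against the storage provider's documentation.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-existing
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: retaining
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "12345678"        # hypothetical ID of the pre-existing cloud volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  storageClassName: retaining
  volumeName: media-existing        # bind explicitly to the pre-created PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi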

Minio data does not persist through reboot

I deployed Minio on Kubernetes on an Ubuntu desktop. It works fine, except that whenever I reboot the machine, everything that was stored in Minio mysteriously disappears (if I create several buckets with files in them, I come back to a completely blank slate after the reboot - the buckets, and all their files, are completely gone).
When I set up Minio, I created a persistent volume in Kubernetes which mounts to a folder (/mnt/minio/minio - I have a 4 TB HDD mounted at /mnt/minio with a folder named minio inside that). I noticed that this folder seems to be empty even when I store stuff in Minio, so perhaps Minio is ignoring the persistent volume and using the container storage? However, I don't know why this would be happening; I have both a PV and a PV claim, and kubectl shows that they are bound to each other.
Below are the yaml files I applied to deploy my minio installation:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/minio/minio"

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 99Gi

apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage # must match the volume name, above
          mountPath: "/mnt/minio/minio"

apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: minio
You need to point Minio's storage directory at the path where the volume is actually mounted inside the container (/mnt/minio/minio/):
args:
- server
- /mnt/minio/minio/storage
But consider deploying it using a StatefulSet, so that when your pod restarts it retains everything from the previous pod.
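For reference, a rough sketch of such a StatefulSet, using a volumeClaimTemplate instead of the hand-made PV/PVC (names and sizes are illustrative, not taken from the question):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio-service        # a (typically headless) Service governing the StatefulSet
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio:latest
        args:
        - server
        - /storage                  # serve data from the mounted volume
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /storage       # mount the claim where Minio actually writes
  volumeClaimTemplates:             # one PVC per replica, reattached across pod restarts
  - metadata:
      name: storage
    spec:
      storageClassName: manual      # assumes the manually provisioned hostPath PV from the question
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 99Gi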

postgres data retention in Kubernetes minikube

I am trying to deploy Postgres with a persistent volume on my minikube instance. I have mounted the volume (hostPath) using the PVC, but I don't see the Postgres tables being retained. I tried touching a file in a shared directory in the pod and found it was retained, which means the volume is retained between deployments, but why not the Postgres tables? Thanks for any insights.
Here is my postgres deployment yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dbapp-deployment
  labels:
    app: dbapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbapp
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: default
      labels:
        app: dbapp
        tier: backend
    spec:
      containers:
      - name: dbapp
        image: xxxxx/dbapp:latest
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: pvc001
          mountPath: /var/lib/postgres/data
      volumes:
      - name: pvc001
        persistentVolumeClaim:
          claimName: pvc001
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc001 Bound pvc-29320475-37dc-11e9-8a82-080027218780 1Gi RWX standard 29m