Mounting a Kubernetes volume in multiple containers within a pod with GitLab - kubernetes

I am setting up a CI/CD environment for the first time, consisting of a single-node Kubernetes cluster (minikube).
On this node I created a PV:
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
data-volume   1Gi        RWO            Retain           Bound    gitlab-managed-apps/data-volume-claim   manual                  20m
and PVC
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-volume-claim   Bound    data-volume   1Gi        RWO            manual         19m
Now I would like to create a pod with multiple containers accessing this volume.
Where and how would you advise setting this up using GitLab pipelines (gitlab-ci etc.)? Multiple repos may be the best fit for the project.
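For completeness, manifests along these lines would produce the PV/PVC listed above (a sketch; the hostPath backing and its path are assumptions, chosen because a host directory is the simplest option on a single-node minikube):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: "/data/data-volume"   # assumption: any directory on the minikube node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-claim
  namespace: gitlab-managed-apps
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi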

Here is a fully working example of a Deployment manifest whose Pod spec defines two containers (based on different nginx Docker images) that use the same PV, from which they serve custom static HTML content on ports 80 and 81 respectively:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: nginx
  name: nginx
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      volumes:
        - name: my-pv-storage
          persistentVolumeClaim:
            claimName: my-pv-claim-nginx
      containers:
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: my-pv-storage
              subPath: html_custom
        - image: custom-nginx
          imagePullPolicy: IfNotPresent
          name: custom-nginx
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: my-pv-storage
              subPath: html
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
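To sanity-check the shared volume once the deployment is running, something like the following should work (port 81 for the second container is an assumption based on the custom-nginx image described above):
# write distinct content through each container; both files land on the same PV
# under the html_custom and html subPaths respectively
kubectl exec deploy/nginx -c nginx -- sh -c 'echo "from nginx" > /usr/share/nginx/html/index.html'
kubectl exec deploy/nginx -c custom-nginx -- sh -c 'echo "from custom-nginx" > /usr/share/nginx/html/index.html'

# forward both container ports (this blocks; run it in a separate terminal) ...
kubectl port-forward deploy/nginx 8080:80 8081:81

# ... and fetch the two pages
curl http://localhost:8080/
curl http://localhost:8081/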

Yes, you can probably do that: run multiple containers in one pod sharing one PVC.
In CI/CD, if you have multiple repos, then a commit in one repo builds a new Docker image, pushes it to the registry, and deploys it to the k8s cluster.
If you plan to use the latest tag for image tagging, a multi-container pod is easy to manage: the deployment stays the same even when a commit only lands in one repository.
If you plan to tag images with the commit SHA, however, you have to think about how you will manage a deployment file that holds the configuration of two containers.
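One possible approach, if you do go with SHA tags and one repository per image, is to have each repository's pipeline update only its own container in the shared deployment with kubectl set image. A sketch (stage layout, image versions, and cluster credentials coming from the GitLab Kubernetes integration are all assumptions):
# .gitlab-ci.yml sketch for the repo that builds the custom-nginx image
stages:
  - build
  - deploy

variables:
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # update only this repo's container in the shared two-container deployment
    - kubectl set image deployment/nginx custom-nginx="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"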

Related

(Again) GKE Fails to mount volumes to deployment/pods: timeout waiting for the condition

Almost two years later, we are experiencing the same issue as described in this SO post.
Our workloads had been running without any disruption since 2018, but they suddenly stopped because we had to renew certificates, and we have not been able to start them again. The failure is caused by the fact that the pods try to mount a persistent disk via NFS, and the
nfs-server pod (based on gcr.io/google_containers/volume-nfs:0.8) can't mount the persistent disk.
We have upgraded from 1.23 to 1.25.5-gke.2000 (experimenting with a few intermediate versions along the way) and have therefore also switched to containerd.
We have recreated everything multiple times with slight variations, but no luck. Pods definitely cannot access any persistent disk.
We've checked the basic things: the persistent disks are in the same zone as the GKE cluster, the service account used by the pods has the necessary permissions to access the disks, etc.
No logs are visible for any pod, which is also strange since logging seems to be correctly configured.
Here is the nfs-server.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    role: nfs-server
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - image: gcr.io/google_containers/volume-nfs:0.8
          imagePullPolicy: IfNotPresent
          name: nfs-server
          ports:
            - containerPort: 2049
              name: nfs
              protocol: TCP
            - containerPort: 20048
              name: mountd
              protocol: TCP
            - containerPort: 111
              name: rpcbind
              protocol: TCP
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /exports
              name: webapp-disk
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: webapp-data-disk
          name: webapp-disk
status: {}
OK, fixed. I had to enable the CSI driver on our legacy cluster, as described here...
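For anyone hitting the same thing, enabling the Compute Engine persistent disk CSI driver on an existing cluster should look roughly like this (cluster name and zone are placeholders):
# enable the Compute Engine persistent disk CSI driver on an existing (legacy) cluster
gcloud container clusters update my-legacy-cluster \
  --zone europe-west1-b \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED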

Persistent Payara Server Admin UI in Kubernetes

I am using payara/server-full in Kubernetes. I want to add a persistent volume so that all configuration made to the Payara server via the Admin UI is persisted after the pod is recreated, including uploaded .war files.
Right now my deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name:
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: payara/server-full
          imagePullPolicy: "Always"
          ports:
            - name: myapp-default
              containerPort: 8080
            - name: myapp-admin
              containerPort: 4848
How do I augment that yaml file to make use of a persistent volume?
Which path(s) within Payara should be synced with the persistent volume so that Payara's configuration isn't lost after redeployment?
Which additional yaml files do I need?
So after longer consideration of the problem I realised I need to persist everything under /opt/payara/appserver/glassfish/domains for all configuration made via the Admin UI to be persisted. However, if I simply start the pod with a volumeMount pointing to that path, i.e.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
        - name: myapp-vol
          persistentVolumeClaim:
            claimName: myapp-rwo-pvc
      containers:
        - name: myapp
          image: payara/server-full
          imagePullPolicy: "Always"
          ports:
            - name: myapp-default
              containerPort: 8080
            - name: myapp-admin
              containerPort: 4848
          volumeMounts:
            - mountPath: "/opt/payara/appserver/glassfish/domains"
              name: myapp-vol
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-rwo-pvc
  labels:
    app: dont-delete-autom
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
then the Payara server won't be able to start successfully, because Kubernetes will mount an empty persistent volume into that location. Payara, however, needs config files that are originally located within /opt/payara/appserver/glassfish/domains.
What I needed to do is provision the volume with the data that is located in that folder by default. But how do you do that when the only way to access the PV is to mount it into a pod?
First I scaled the above deployment to 0 with:
kubectl scale --replicas=0 deployment/myapp
This deletes all pods accessing the persistent volume.
Then I created a "provisioning" pod which mounts the previously created persistent volume into /tmp.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: pv-provisioner
  namespace: default
spec:
  containers:
    - image: payara/server-full
      imagePullPolicy: Always
      name: pv-provisioner
      ports:
        - containerPort: 8080
          name: myapp-default
          protocol: TCP
        - containerPort: 4848
          name: myapp-admin
          protocol: TCP
      volumeMounts:
        - mountPath: "/tmp"
          name: myapp-vol
      resources:
        limits:
          cpu: "2"
          memory: 2Gi
        requests:
          cpu: 500m
          memory: 128Mi
  volumes:
    - name: myapp-vol
      persistentVolumeClaim:
        claimName: myapp-rwo-pvc
Then I used the following commands to copy the necessary data, first from the "provisioning" pod to a local folder tmp and then back from tmp to the persistent volume (previously mounted into pv-provisioner:/tmp). There is no option to copy directly from pod:/a to pod:/b:
kubectl cp pv-provisioner:/opt/payara/appserver/glassfish/domains/. tmp
kubectl cp tmp/. pv-provisioner:/tmp
As a result, everything stored under /opt/payara/appserver/glassfish/domains/ in the original payara container was now copied into the persistent volume identified by the persistent volume claim "myapp-rwo-pvc".
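Before tearing the provisioning pod down, a quick listing can confirm the copy landed on the volume (the exact file set depends on the Payara image):
# the persistent volume is mounted at /tmp in the provisioning pod,
# so the copied domain directories should now show up there
kubectl exec pv-provisioner -- ls -la /tmp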
To finish it up I deleted the provisioning pod and scaled the deployment back up:
kubectl delete pod pv-provisioner
kubectl scale --replicas=3 deployment/myapp
The payara server now starts successfully, and any configuration made via the Admin UI, including .war deployments, is persisted, so the payara pods can be killed at any time and after the restart everything is as before.
Thanks for reading.

Kubernetes share storage between replicas

We have Kubernetes running on our own servers. For Persistent Storage we have a NFS server. This works great.
Now we want to deploy an application with multiple replicas that should share storage between them, but the storage should not be persistent: when the pods are deleted, the data should be gone as well.
I was hoping I could achieve this with the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      containers:
        - image: nginx:latest
          imagePullPolicy: IfNotPresent
          name: nginx
          volumeMounts:
            - name: shared-data
              mountPath: /shared-data
          resources:
            limits:
              memory: 500Mi
All replicas have /shared-data, but when one replica stores data in that folder, the other replicas cannot see the file, so it is not shared.
What are my options?
You can use a PVC to share data between the pods; an emptyDir is created per pod, so each replica only sees its own copy. Then you can set up a preStop lifecycle hook
for the pods to clean up the data when a pod gets deleted.
Here is an example of adding a preStop hook on a pod: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
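A minimal sketch of that idea as a pod-template fragment, assuming an NFS-backed ReadWriteMany claim named shared-data-pvc (the claim name is hypothetical, and note the hook runs on every pod deletion, not only when the whole Deployment is removed):
    spec:
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data-pvc   # hypothetical RWX claim backed by the NFS server
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: shared-data
              mountPath: /shared-data
          lifecycle:
            preStop:
              exec:
                # best-effort cleanup of the shared directory when the pod is deleted
                command: ["/bin/sh", "-c", "rm -rf /shared-data/*"]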

Kubernetes reattach to same persistent volume after delete

I have an app where two pods need access to the same volume. I want to be able to delete the cluster and then, after applying the manifests again, still be able to access the data on the volume.
So for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  #storageClassName: retaining
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-php
  labels:
    app: myapp-php
    k8s-app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-php
  template:
    metadata:
      labels:
        app: myapp-php
        k8s-app: myapp
    spec:
      containers:
        - image: nginx:1.17
          imagePullPolicy: IfNotPresent
          name: myapp-php
          ports:
            - containerPort: 9000
              protocol: TCP
          resources:
            limits:
              cpu: 750m
              memory: 3Gi
            requests:
              cpu: 750m
              memory: 3Gi
          volumeMounts:
            - name: media
              mountPath: /var/www/html/media
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media
      nodeSelector:
        mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
  labels:
    app: myapp-web
    k8s-app: myapp
spec:
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        k8s-app: myapp
    spec:
      containers:
        - image: nginx:1.17
          imagePullPolicy: IfNotPresent
          name: myapp-web
          ports:
            - containerPort: 9000
              protocol: TCP
          resources:
            limits:
              cpu: 10m
              memory: 128Mi
            requests:
              cpu: 10m
              memory: 128Mi
          volumeMounts:
            - name: media
              mountPath: /var/www/html/media
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media
      nodeSelector:
        mytype: main
If I do this:
k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply -f pv-issue.yaml
I want it to reconnect to the same volume.
What I have tried:
If I keep the file as is, the volume will be deleted, so the data will be lost.
I can remove the PVC declaration from the file. Then it works. My issue is that in the real app I am using kustomize, and I don't see a way to exclude resources when doing kustomize build app | kubectl delete -f -
Tried using Retain in the PVC's storage class. It retains the volume on delete, but on the next apply a new volume is created.
Tried a StatefulSet, however I don't see a way for two different StatefulSets to share the same volume.
Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Is there a way to achieve this? Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Deleting the cluster will cause all of its local volumes to be deleted as well. You can avoid that by storing the data outside the cluster; Kubernetes supports a wide variety of storage providers, so you can keep the data on many different storage types.
You could also consider keeping the data locally on nodes with hostPath, but that is not a good solution either: it requires pinning the pod to a specific node to avoid data loss, and if you delete the cluster in a way that removes all of its VMs, that data is gone as well.
Having some network-attached storage is the right way to go here. A very good example is persistent disks, which are durable network storage devices that your instances can access. They exist independently of your virtual machines and are not deleted when you delete the cluster.
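If you want a recreated cluster to pick up such an existing disk again, the usual pattern is a statically defined PV that points at the disk, plus a PVC pinned to it via volumeName. A sketch using a GCE persistent disk as in the example above (disk name and size are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: existing-media-disk   # hypothetical disk that survives cluster deletion
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""            # disable dynamic provisioning
  volumeName: media-pv            # bind to the pre-created PV above
  resources:
    requests:
      storage: 10Gi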

K8s Create Deployment with EnvFrom

I am trying to fire up an influxdb instance on my cluster.
I am following a few different guides and am trying to get it to expose a secret as environment variables using envFrom. Unfortunately I always get Environment: <none> after doing my deployment, and echoing the environment variables I expect yields blank values as well.
I am running this command to deploy (the manifest below is in influxdb.yaml): kubectl create deployment influxdb --image=influxdb
Here is my deployment script:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  generation: 1
  labels:
    app: influxdb
    project: pihole
  name: influxdb
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
        - name: influxdb
          envFrom:
            - secretRef:
                name: influxdb-creds
          image: docker.io/influxdb:1.7.6
          imagePullPolicy: IfNotPresent
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/influxdb
              name: var-lib-influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: var-lib-influxdb
          persistentVolumeClaim:
            claimName: influxdb
status: {}
The output of kubectl describe secret influxdb-creds is this:
Name: influxdb-creds
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
INFLUXDB_USERNAME: 4 bytes
INFLUXDB_DATABASE: 6 bytes
INFLUXDB_HOST: 8 bytes
INFLUXDB_PASSWORD: 11 bytes
To test your deployment, please first create the secret and then create the deployment:
1. Secrets:
kubectl create secret generic influxdb-creds --from-literal=INFLUXDB_USERNAME='test_user' --from-literal=INFLUXDB_DATABASE='test_password'
2. Deployment:
kubectl apply -f <path_to_your_yaml_file>
In order to verify, please run
kubectl describe secret influxdb-creds
kubectl exec <your_new_deployed_pod> -- env
kubectl describe pod <your_new_deployed_pod>
Take a look at:
Environment Variables from:
  influxdb-creds  Secret  Optional: false
Hope this helps.
Please share your findings.
The answer to this is that I was creating the deployment incorrectly. I was using the command kubectl create deployment influxdb --image=influxdb, which created a blank deployment; instead I should have been creating it with kubectl create -f influxdb.yaml, where influxdb.yaml was the file containing the deployment definition from the original question.
I was making the false assumption that the create deployment command read the yaml file by the same name, but it does not.
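In other words, the working sequence looks like this (the grep at the end is just a quick way to confirm the secret-backed variables made it into the container):
# wrong: creates a fresh deployment from the image only, ignoring influxdb.yaml
# kubectl create deployment influxdb --image=influxdb

# right: create (or apply) the deployment defined in the manifest
kubectl create -f influxdb.yaml

# confirm the secret-backed environment variables are present
kubectl exec deploy/influxdb -- env | grep INFLUXDB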