I have an app where two pods need access to the same volume. I want to be able to delete the cluster and then, after applying again, still be able to access the data that is on the volume.
So for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  #storageClassName: retaining
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-php
  labels:
    app: myapp-php
    k8s-app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-php
  template:
    metadata:
      labels:
        app: myapp-php
        k8s-app: myapp
    spec:
      containers:
        - image: nginx:1.17
          imagePullPolicy: IfNotPresent
          name: myapp-php
          ports:
            - containerPort: 9000
              protocol: TCP
          resources:
            limits:
              cpu: 750m
              memory: 3Gi
            requests:
              cpu: 750m
              memory: 3Gi
          volumeMounts:
            - name: media
              mountPath: /var/www/html/media
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media
      nodeSelector:
        mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
  labels:
    app: myapp-web
    k8s-app: myapp
spec:
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        k8s-app: myapp
    spec:
      containers:
        - image: nginx:1.17
          imagePullPolicy: IfNotPresent
          name: myapp-web
          ports:
            - containerPort: 9000
              protocol: TCP
          resources:
            limits:
              cpu: 10m
              memory: 128Mi
            requests:
              cpu: 10m
              memory: 128Mi
          volumeMounts:
            - name: media
              mountPath: /var/www/html/media
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media
      nodeSelector:
        mytype: main
If I do these:
k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply -f pv-issue.yaml
I want to connect the same volume.
What I have tried:
If I keep the file as is, the volume will be deleted, so the data will be lost.
I can remove the PVC declaration from the file; then it works. My issue is that in the real app I am using kustomize, and I don't see a way to exclude resources when doing kustomize build app | kubectl delete -f -.
I tried using Retain in the PVC. It retains the volume on delete, but on apply a new volume is created.
A StatefulSet; however, I don't see a way that two different StatefulSets can share the same volume.
Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
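One way around the kustomize limitation mentioned above is to keep the storage objects in their own kustomization, so deleting the app never touches the PVC. A rough sketch, assuming hypothetical directories app/ and storage/ (names are illustrative, not from the original setup):

```shell
# storage/kustomization.yaml lists only the StorageClass and the PVC;
# app/kustomization.yaml lists the Deployments and everything else.
kustomize build app | kubectl delete -f -      # removes only the workloads
kustomize build app | kubectl apply -f -       # recreates them; the PVC survives
kustomize build storage | kubectl apply -f -   # storage is managed separately
```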
Deleting the cluster will delete all of its local volumes. You can avoid this by storing the data outside the cluster; Kubernetes supports a wide variety of storage providers to help you put data on many storage types.
You may also consider keeping the data locally on the nodes with hostPath, but that is not a good solution either: it requires you to pin the pod to a specific node to avoid data loss, and if you delete your cluster in a way that removes all of its VMs, that data is gone as well.
Network-attached storage is the right way to go here. A very good example is a persistent disk: a durable network storage device that your instances can access. Persistent disks are located independently of your virtual machines and are not deleted when you delete the cluster.
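To reattach a retained volume after recreating the cluster, static provisioning is the usual pattern: declare a PersistentVolume that points at the existing disk, so the media PVC binds to it instead of triggering dynamic provisioning. A minimal sketch, assuming the Hetzner CSI driver from the question and a hypothetical volume ID taken from the Hetzner console:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-retained
spec:
  storageClassName: retaining        # must match the PVC's storage class
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "12345678"         # hypothetical ID of the existing volume
    fsType: ext4
```

Applied before the PVC, this lets the claim bind to the pre-existing disk rather than provisioning a fresh one.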
Related
I am running node-red on kubernetes locally. This is my deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          resources:
            requests:
              memory: 256Mi
              cpu: "0.2"
            limits:
              memory: 512Mi
              cpu: "1"
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: Service
metadata:
  name: node-red-service
spec:
  type: LoadBalancer
  selector:
    app: nodered
  ports:
    - port: 1880
      targetPort: 1880
I am trying to feed a local text file in my directory into Node-RED, but I am getting the error "Error: ENOENT: no such file or directory, open 'D:\SHRDC\abss\test.txt'".
What could be a solution to this?
I have tried \data\test.txt, but that hasn't worked either. I am expecting Node-RED to read the file contents.
The first thing to know is that, in most situations, Kubernetes pods run Linux-based containers. This means that even if your Kubernetes node is running on a Windows machine, it runs inside a Linux virtual machine, which has two consequences:
All paths used need to be Unix style.
There is no direct access to files on the host Windows machine.
Next, while you have requested a volume to be mounted into the pod at /data/nodered, and defined both a PersistentVolume and a PersistentVolumeClaim (which I'm not sure will actually bind to each other in this case), they live in the Linux virtual machine, not the underlying Windows machine, so they do not have access to files on the host.
Even if you had managed to copy the file into the directory backing the volume mount in the pod, the correct path to give to Node-RED would be something like /data/nodered/test.txt, based on the volume mount.
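One way to get the file into the volume from the Windows side is kubectl cp, which copies through the API server and so works across the VM boundary. A sketch, assuming a hypothetical pod name nodered-xxxx (look it up with kubectl get pods):

```shell
# Copy the local file into the directory that backs the volume mount
kubectl cp ./test.txt nodered-xxxx:/data/nodered/test.txt

# Verify it arrived; Node-RED can then open /data/nodered/test.txt
kubectl exec nodered-xxxx -- ls -l /data/nodered
```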
I've been debugging a 10-minute downtime of our service for some hours now, and I seem to have found the cause, but not the reason for it. Our Redis deployment in Kubernetes was down for quite a while, so neither Django nor anything else could reach it. This caused a bunch of jobs to be lost.
There are no events for the redis deployment, but here are the first logs before and after the reboot:
before:
after:
I'm also attaching the complete redis yml at the bottom. We're using GKE Autopilot, so I guess something caused the pod to reboot? Resource usage is a lot lower than requested, at about 1% for both CPU and memory. Not sure what's going on here. I also couldn't find an annotation to tell Autopilot to leave a specific deployment alone
redis.yml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gce-ssd
  resources:
    requests:
      storage: "2Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      name: redis
  clusterIP: None
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: redis-volume
          persistentVolumeClaim:
            claimName: redis-disk
            readOnly: false
      terminationGracePeriodSeconds: 5
      containers:
        - name: redis
          image: redis:6-alpine
          command: ["sh"]
          args: ["-c", 'exec redis-server --requirepass "$REDIS_PASSWORD"']
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
              ephemeral-storage: "1Gi"
          envFrom:
            - secretRef:
                name: env-secrets
          volumeMounts:
            - name: redis-volume
              mountPath: /data
              subPath: data
A PersistentVolumeClaim is a Kubernetes object that decouples a storage resource request from the actual provisioning done by its associated PersistentVolume.
Given:
no declared PersistentVolume object
and dynamic provisioning being enabled on your cluster
Kubernetes will try to dynamically provision a suitable persistent disk for you; on your underlying infrastructure that is a Google Compute Engine Persistent Disk, based on the requested storage class (gce-ssd).
The claim then results in an SSD-like persistent disk being automatically provisioned for you, and once the claim is deleted (the requesting pod is deleted due to a downscale), the volume is destroyed.
To overcome this issue and avoid precious data loss, you have two alternatives:
At the PersistentVolume level
To avoid data loss once the Pod and its PVC are deleted, you can set the persistentVolumeReclaimPolicy parameter to Retain. Note that this is a field of the PersistentVolume, not of the PVC, so for a dynamically provisioned volume you patch the PV after it has been created:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
This allows the persistent volume to go to the Released state once the claim is deleted, and the underlying data can be manually backed up.
At the StorageClass level
As a general recommendation, you should set the reclaimPolicy parameter to Retain (the default is Delete) on the StorageClass you use:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
Additional parameters are recommended:
replication-type: set to regional-pd so the disk is replicated across two zones in the region
volumeBindingMode: set to WaitForFirstConsumer so that the first consumer dictates the zonal replication topology
You can read more about all of the above StorageClass parameters in the Kubernetes documentation.
A PersistentVolume with the same storage class name is then declared:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-volume
spec:
  storageClassName: "ssd"
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: redis-disk
And the PersistentVolumeClaim would only declare the requested StorageClass name:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-volume-claim
spec:
  storageClassName: "ssd"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "2Gi"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: redis-volume
          persistentVolumeClaim:
            claimName: ssd-volume-claim
            readOnly: false
These object declarations prevent any failures or scale-down operations from destroying the created PV, whether it was created manually by cluster administrators or dynamically using dynamic provisioning.
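One caveat worth knowing: if the claim is ever deleted, a Retain PV moves to Released and will not bind to a new claim until its stale claimRef is cleared. A sketch of that manual rebinding step, assuming the PV name ssd-volume from above:

```shell
# Remove the stale claim reference so the Released PV becomes Available again
kubectl patch pv ssd-volume --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'

# The PV should now show STATUS=Available and can be bound by a new PVC
kubectl get pv ssd-volume
```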
I am trying to create a PV and a PVC that are accessible by both a pod and a CronJob on the same node.
Currently I am using the following YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wwwroot
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "wwwroot"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: wwwroot
  name: wwwroot
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
status: {}
In the StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: web
  name: web
spec:
  selector:
    matchLabels:
      app: web
  serviceName: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: xy:latest
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - mountPath: /app/wwwroot
              name: wwwroot
              subPath: tempwwwroot
      imagePullSecrets:
        - name: xy
      restartPolicy: Always
      serviceAccountName: ""
  volumeClaimTemplates:
    - metadata:
        name: wwwroot
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
This works fine with the pods, but I have a cronjob that needs to access the same wwwroot PV.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app: import
  name: import
spec:
  schedule: "09 16 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: import
              image: xy:latest
              imagePullPolicy: Always
              volumeMounts:
                - mountPath: /app/wwwroot
                  name: wwwroot
                  subPath: tempwwwroot
          imagePullSecrets:
            - name: xy
          restartPolicy: OnFailure
          volumes:
            - name: wwwroot
              persistentVolumeClaim:
                claimName: wwwroot
To the best of my knowledge, you cannot bind these PVs to a folder on the host machine like in Docker (which would be nice, but I can live with that). When I run kubectl get pvc with the setup above, it seems to use the same PVC name wwwroot, but clearly it is not the same claim. I have exec'd /bin/bash into the pod created by the CronJob, and its wwwroot folder does not contain the files from the web pod's wwwroot folder.
Any idea what I am doing wrong? Also, what is the best option for creating PV/PVCs on a local one-node Kubernetes setup? I was also thinking about creating an NFS share on the host for easier access to the files.
Also, when I first created the PV/PVC and deployed the web StatefulSet, the web pod was supposed to populate the wwwroot PV with CSS and JS files, but it seems that when it binds to the PV, everything in that folder is deleted and I have to copy those files manually again. Any workaround for that?
You are mounting the PV on two pods, and the PV is created with accessModes: ReadWriteOnce, which allows the volume to be mounted read-write by a single node only.
You can check the access modes here:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
Also, instead of local storage you can use an NFS-backed volume, which gives you ReadWriteMany. I am not sure the local storage class provides it, as it is not in the official documents. Check below:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
P.S. Avoid node-local storage classes in real deployments, since nodes in Kubernetes come and go.
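For reference, a minimal sketch of an NFS-backed PV/PVC pair with ReadWriteMany, assuming a hypothetical NFS server at 192.168.1.10 exporting /srv/wwwroot (both values are placeholders for your own share):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wwwroot-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany          # many pods, even on different nodes, may mount it
  nfs:
    server: 192.168.1.10     # hypothetical NFS server address
    path: /srv/wwwroot       # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wwwroot-nfs
spec:
  storageClassName: ""       # bind to the statically declared PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```

Both the StatefulSet and the CronJob could then reference the same wwwroot-nfs claim.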
I am trying to run ActiveMQ in Kubernetes. I want to keep the queues even after the pod is terminated and recreated. So far I have got the queues to stay even after pod deletion and recreation. But there is a catch: it seems to be storing the list of queues one step behind.
For example: I create three queues a, b, and c. I delete the pod and it is recreated. The queue list is empty. I then go ahead and create queues x and y. When I delete the pod and it gets recreated, it loads queues a, b, and c. If I add a queue d to it and the pod is recreated, it shows x and y.
I have created a ConfigMap as below, and I'm using the ConfigMap in my YAML file as well:
kubectl create configmap amq-config-map --from-file=/opt/apache-activemq-5.15.6/data
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-deployment-local
  labels:
    app: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: activemq
          image: activemq:1.0
          ports:
            - containerPort: 8161
          volumeMounts:
            - name: activemq-data-local
              mountPath: /opt/apache-activemq-5.15.6/data
              readOnly: false
      volumes:
        - name: activemq-data-local
          persistentVolumeClaim:
            claimName: amq-pv-claim-local
        - name: config-vol
          configMap:
            name: amq-config-map
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-local
spec:
  selector:
    app: activemq
  ports:
    - port: 8161
      targetPort: 8161
  type: NodePort
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: amq-pv-claim-local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
When the pod is recreated, I want the queues to stay the same. I'm almost there, but I need some help.
You might be missing a setting in your PersistentVolume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
Also, there is still a good chance that this does not work due to the use of hostPath: hostPath means the data is stored on the node where the volume was first used. It does not migrate along with a restarted pod, which can lead to very odd behavior in a PV. Look at using NFS, Gluster, or any other cluster file system to store your data at a generically accessible path.
If you use a cloud provider, you can also have Kubernetes mount disks automatically, so you can use GCP, AWS, Azure, etc. to provide the storage, mounted by Kubernetes wherever Kubernetes wants it.
With this deployment plan, I'm able to have ActiveMQ working in a Kubernetes cluster running in AWS. However, I'm still trying to figure out why it does not work for MySQL in the same way.
Simply running
kubectl create -f activemq.yaml
does the trick. Queues are persistent, and even terminating the pod and restarting brings up the queues. They remain until the persistent volume and claim are removed. With this template, I don't even need to explicitly create a volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-deployment
  labels:
    app: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      securityContext:
        fsGroup: 2000
      containers:
        - name: activemq
          image: activemq:1.0
          ports:
            - containerPort: 8161
          volumeMounts:
            - name: activemq-data
              mountPath: /opt/apache-activemq-5.15.6/data
              readOnly: false
      volumes:
        - name: activemq-data
          persistentVolumeClaim:
            claimName: amq-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: amq-nodeport-service
spec:
  selector:
    app: activemq
  ports:
    - port: 8161
      targetPort: 8161
  type: NodePort
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: amq-pv-claim
spec:
  #storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
I am working on an application on Kubernetes in GCP, and I need really huge SSD storage for it.
So I created a StorageClass resource, a PersistentVolumeClaim that requests 500Gi of space, and then a Deployment resource.
StorageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
PVC.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-volume
spec:
  storageClassName: faster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - mountPath: /data/db
              name: mongo-volume
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-volume
When I applied the PVC, it was stuck in the Pending state for hours. I found out experimentally that it binds correctly with a maximum of 200Gi of requested storage space.
However, I can create several 200Gi PVCs. Is there a way to bind them to one path so they work as one big PVC in Deployment.yaml? Or can the 200Gi limit be expanded?
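Rather than stitching several PVCs together, an existing claim can usually be grown in place once the underlying limit is raised, provided the StorageClass permits expansion. A sketch, assuming the faster StorageClass from above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true   # allows raising spec.resources.requests.storage on bound PVCs
parameters:
  type: pd-ssd
```

With this set, editing the PVC's storage request upward triggers an online resize of the backing persistent disk.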
I have just tested it in my own environment and it works perfectly, so the problem is in quotas.
To check this, look at:
IAM & admin -> Quotas -> Compute Engine API, Local SSD (GB), "your region"
and the amount you have used.
I recreated the situation where I ran out of quota, and the claim got stuck in the Pending state, the same as yours.
It happens because you create a 500GB PVC for each pod.
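The same quota check can be done from the command line. A sketch, assuming a hypothetical region of us-central1 (substitute your own):

```shell
# Show the region's quotas; SSD_TOTAL_GB is the persistent SSD capacity metric
gcloud compute regions describe us-central1 | grep -B1 -A1 SSD
```

Comparing the reported usage against the limit shows whether new PVCs of the requested size can still be provisioned.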