How to persist latest queues after pod recreation - kubernetes

I am trying to run ActiveMQ in Kubernetes. I want to keep the queues even after the pod is terminated and recreated. So far I have gotten the queues to survive pod deletion and recreation. But there is a catch: it seems to store the list of queues from one generation back.
For example: I create 3 queues, a, b, and c. I delete the pod and it is recreated. The queue list is empty. I then go ahead and create queues x and y. When I delete the pod and it gets recreated, it loads queues a, b, and c. If I then add a queue d and the pod is recreated, it shows x and y.
I have created a configMap as shown below, and I'm using the config map in my YAML file as well.
kubectl create configmap amq-config-map --from-file=/opt/apache-activemq-5.15.6/data
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-deployment-local
  labels:
    app: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
      - name: activemq
        image: activemq:1.0
        ports:
        - containerPort: 8161
        volumeMounts:
        - name: activemq-data-local
          mountPath: /opt/apache-activemq-5.15.6/data
          readOnly: false
      volumes:
      - name: activemq-data-local
        persistentVolumeClaim:
          claimName: amq-pv-claim-local
      - name: config-vol
        configMap:
          name: amq-config-map
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-local
spec:
  selector:
    app: activemq
  ports:
  - port: 8161
    targetPort: 8161
  type: NodePort
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: amq-pv-claim-local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
When the pod is recreated, I want the queues to stay the same. I'm almost there, but I need some help.

You might be missing a setting in your PersistentVolume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp
Also, there is still a good chance that this does not work due to the use of hostPath: a hostPath volume is stored on the node where the pod first started. It does not migrate when the pod is rescheduled, which can lead to very odd behavior in a PV. Look at using NFS, Gluster, or another cluster file system to store your data under a generically accessible path.
If you use a cloud provider, you can also have Kubernetes mount disks automatically, so you can use gcloud, AWS, Azure, etc. to provide the storage and have Kubernetes mount it wherever it wants.
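For illustration, an NFS-backed PersistentVolume could replace the hostPath one. This is only a sketch; the server address and export path below are assumptions you would replace with your own:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-nfs
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10         # assumption: an existing NFS server reachable from the cluster
    path: /exports/activemq   # assumption: export configured on that server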

With this deployment plan, I'm able to have ActiveMQ working in a Kubernetes cluster running in AWS. However, I'm still trying to figure out why the same approach does not work for MySQL.
Simply running
kubectl create -f activemq.yaml
does the trick. Queues are persistent, and even terminating and restarting the pod brings the queues back. They remain until the PersistentVolume and claim are removed. With this template, I don't even need to explicitly create a volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-deployment
  labels:
    app: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      securityContext:
        fsGroup: 2000
      containers:
      - name: activemq
        image: activemq:1.0
        ports:
        - containerPort: 8161
        volumeMounts:
        - name: activemq-data
          mountPath: /opt/apache-activemq-5.15.6/data
          readOnly: false
      volumes:
      - name: activemq-data
        persistentVolumeClaim:
          claimName: amq-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: amq-nodeport-service
spec:
  selector:
    app: activemq
  ports:
  - port: 8161
    targetPort: 8161
  type: NodePort
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: amq-pv-claim
spec:
  #storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
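To double-check that dynamic provisioning kicked in, kubectl should show the claim bound to an automatically created volume, for example:
kubectl get pvc amq-pv-claim    # STATUS should be Bound
kubectl get pv                  # lists the auto-provisioned volume backing the claim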

Related

Kubernetes PersistentVolume issues in GCP (Deployment stuck in pending)

Hi, I'm trying to create a persistent volume for my MongoDB on Kubernetes on Google Cloud Platform, and the pod is stuck in Pending.
Here is my manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: account-mongo-depl
  labels:
    app: account-mongo
spec:
  replicas: 1
  serviceName: 'account-mongo'
  selector:
    matchLabels:
      app: account-mongo
  template:
    metadata:
      labels:
        app: account-mongo
    spec:
      volumes:
      - name: account-mongo-storage
        persistentVolumeClaim:
          claimName: account-db-bs-claim
      containers:
      - name: account-mongo
        image: mongo
        volumeMounts:
        - mountPath: '/data/db'
          name: account-mongo-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: account-mongo-srv
spec:
  type: ClusterIP
  selector:
    app: account-mongo
  ports:
  - name: account-db
    protocol: TCP
    port: 27017
    targetPort: 27017
Here is my list of pods:
My pod is stuck in Pending until it fails.
If you're on GKE, you should already have dynamic provisioning set up.
storageClassName: do-block-storage
accessModes:
  - ReadWriteOnce
"do-block-storage"? Were you running on DigitalOcean previously? You should be able to remove the storageClassName line and use the default provisioner Google provides.
For example, here's a snippet from one of my own statefulsets
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    ## Specify storage class to use nfs (requires nfs provisioner. Leave unset for dynamic provisioning)
    #storageClassName: "nfs"
    resources:
      requests:
        storage: 10G
I do not specify a storage class here, so it uses the default one, which provisions a persistent disk on GCP:
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  2y245d
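Applied to the claim in the question, that simply means dropping the storageClassName line so the default class provisions the disk. A sketch of the adjusted claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  # storageClassName omitted on purpose: the default StorageClass
  # (standard / kubernetes.io/gce-pd on GKE) will provision the volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi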

Can't mount local host path in local kind cluster

Below is my Kubernetes file, and I need to do two things:
1. mount a folder with a file
2. mount a file with a startup script
I have both files in the /tmp/zoo folder on my local machine, but my zoo folder files never appear in /bitnami/zookeeper inside the pod.
Below are the updated Service, Deployment, PVC and PV.
kubernetes.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
    name: zookeeper
  spec:
    ports:
    - name: "2181"
      port: 2181
      targetPort: 2181
    selector:
      io.kompose.service: zookeeper
    type: NodePort
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    name: zookeeper
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: zookeeper
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: zookeeper
      spec:
        containers:
        - image: bitnami/zookeeper:3
          name: zookeeper
          ports:
          - containerPort: 2181
          env:
          - name: ALLOW_ANONYMOUS_LOGIN
            value: "yes"
          resources: {}
          volumeMounts:
          - mountPath: /bitnami/zoo
            name: bitnamidockerzookeeper-zookeeper-data
        restartPolicy: Always
        volumes:
        - name: bitnamidockerzookeeper-zookeeper-data
          #hostPath:
          #  path: /tmp/tmp1
          persistentVolumeClaim:
            claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: bitnamidockerzookeeper-zookeeper-data
      type: local
    name: bitnamidockerzookeeper-zookeeper-data
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: foo
  spec:
    storageClassName: manual
    claimRef:
      name: bitnamidockerzookeeper-zookeeper-data
    capacity:
      storage: 100Mi
    accessModes:
      - ReadWriteMany
    hostPath:
      path: /tmp/tmp1
  status: {}
kind: List
metadata: {}
A Service cannot be assigned a volume. On line 4 of your YAML you specify "Service" where it should be "Pod", and every resource used in Kubernetes must have a name, which you can add under metadata. That should fix the simple problem.
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod #POD
  metadata:
    name: my-pod #A RESOURCE NEEDS A NAME
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
  spec:
    containers:
    - image: bitnami/zookeeper:3
      name: zookeeper
      ports:
      - containerPort: 2181
      env:
      - name: ALLOW_ANONYMOUS_LOGIN
        value: "yes"
      resources: {}
      volumeMounts:
      - mountPath: /bitnami/zookeeper
        name: bitnamidockerzookeeper-zookeeper-data
    restartPolicy: Always
    volumes:
    - name: bitnamidockerzookeeper-zookeeper-data
      persistentVolumeClaim:
        claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}
Now, I don't know what you're using, but hostPath works exclusively on a local cluster like Minikube. In production things change drastically. If everything is local, you need to have the directory /tmp/zoo on the node; NOTE: not on your local PC, but inside the node. For example, if you use minikube, you run minikube ssh to enter the node and copy /tmp/zoo there. An excellent guide to this is given in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
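For example, with minikube the directory from the question can be prepared inside the node like this (a sketch; adjust which files you copy to your case):
minikube ssh                 # enter the minikube node
sudo mkdir -p /tmp/zoo       # create the directory the hostPath volume expects
# place your config file and startup script under /tmp/zoo, then:
exit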
There are a few potential issues in your YAML.
First, the accessModes of the PersistentVolume doesn't match the one of the PersistentVolumeClaim. One way to fix that is to list both ReadWriteMany and ReadWriteOnce in the accessModes of the PersistentVolume.
Then, the PersistentVolume doesn't specify a storageClassName. As a result, if you have a StorageClass configured to be the default StorageClass on your cluster (you can see that with kubectl get sc), it will automatically provision a PersistentVolume dynamically instead of using the PersistentVolume that you declared. So you need to specify a storageClassName. The StorageClass doesn't have to exist for real (since we're using static provisioning instead of dynamic anyway).
Next, the claimRef in PersistentVolume needs to mention the Namespace of the PersistentVolumeClaim. As a reminder: PersistentVolumes are cluster resources, so they don't have a Namespace; but PersistentVolumeClaims belong to the same Namespace as the Pod that mounts them.
Another thing is that the path used by Zookeeper data in the bitnami image is /bitnami/zookeeper, not /bitnami/zoo.
You will also need to initialize permissions in that volume, because by default only root has write access, while Zookeeper runs as non-root here and would not be able to write to the data subdirectory.
Here is an updated YAML that addresses all these points. I also rewrote it to use the YAML multi-document syntax (resources separated by ---) instead of the kind: List syntax, and removed a lot of unused fields (like the empty status: fields and labels that weren't strictly necessary). It works on my KinD cluster; I hope it will also work in your situation.
If your cluster has only one node, this will work fine. If you have multiple nodes, you might need to tweak things a little to make sure that the volume is bound to a specific node (I added a commented-out nodeAffinity section in the YAML, but you might also have to change the bind mode; I only have a one-node cluster to test it on right now, but the Kubernetes documentation and blog have abundant details on this, and https://stackoverflow.com/a/69517576/580281 also covers this binding-mode question).
One last thing: in this scenario, I think it might make more sense to use a StatefulSet. It would not make a huge difference, but it would more clearly indicate intent (Zookeeper is a stateful service), and in the general case (beyond local hostPath volumes) it would avoid having two Zookeeper Pods accessing the volume simultaneously; a sketch of that variant follows the YAML below.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    io.kompose.service: zookeeper
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      initContainers:
      - image: alpine
        name: chmod
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: bitnamidockerzookeeper-zookeeper-data
        command: [ sh, -c, "chmod 777 /bitnami/zookeeper" ]
      containers:
      - image: bitnami/zookeeper:3
        name: zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: bitnamidockerzookeeper-zookeeper-data
      volumes:
      - name: bitnamidockerzookeeper-zookeeper-data
        persistentVolumeClaim:
          claimName: bitnamidockerzookeeper-zookeeper-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bitnamidockerzookeeper-zookeeper-data
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tmp-tmp1
spec:
  storageClassName: manual
  claimRef:
    name: bitnamidockerzookeeper-zookeeper-data
    namespace: default
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  hostPath:
    path: /tmp/tmp1
  #nodeAffinity:
  #  required:
  #    nodeSelectorTerms:
  #    - matchExpressions:
  #      - key: kubernetes.io/hostname
  #        operator: In
  #        values:
  #        - kind-control-plane
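Regarding the StatefulSet suggestion above, a minimal sketch of that variant could look like the following. Note that with a volumeClaimTemplate the generated claim is named data-zookeeper-0, so the PersistentVolume's claimRef would have to reference that name instead:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      containers:
      - image: bitnami/zookeeper:3
        name: zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: manual
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi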
A little confusing; if you want to use a file path on the node as a volume for the pod, you should do it like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
But you need to make sure your pod will be scheduled on the same node that has the file path.
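A nodeSelector on the pod spec is one way to pin it there; a sketch, where the hostname value is an assumption to be replaced with the output of kubectl get nodes:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # assumption: the node that actually has /data
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: Directory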

K8s PersistentVolume shared among multiple PersistentVolumeClaims for local testing

Could someone help me and point me to the configuration I should be using for my use case?
I'm building a development k8s cluster, and one of the steps is to generate security files (private keys) in a number of pods during deployment (say, for a simple setup, 6 pods that each build their own security keys). I need to have access to all these files, and they must persist after the pod goes down.
I'm trying to figure out how to set this up locally for internal testing. From what I understand, local PersistentVolumes only allow 1:1 binding with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all.
Could someone be so nice as to help me, or point me to the right setup to use?
-- Update: 26/11/2020
So this is my setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hlf-nfs--server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hlf-nfs--server
  template:
    metadata:
      labels:
        app: hlf-nfs--server
    spec:
      containers:
      - name: hlf-nfs--server
        image: itsthenetwork/nfs-server-alpine:12
        ports:
        - containerPort: 2049
          name: tcp
        - containerPort: 111
          name: udp
        securityContext:
          privileged: true
        env:
        - name: SHARED_DIRECTORY
          value: "/opt/k8s-pods/data"
        volumeMounts:
        - name: pvc
          mountPath: /opt/k8s-pods/data
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: shared-nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: hlf-nfs--server
  labels:
    name: hlf-nfs--server
spec:
  type: ClusterIP
  selector:
    app: hlf-nfs--server
  ports:
  - name: tcp-2049
    port: 2049
    protocol: TCP
  - name: udp-111
    port: 111
    protocol: UDP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
These three are created at once; after that, I read the IP of the service and add it to the last one:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s-pods/data
    server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
The problem I'm trying to resolve is that the PVC does not get bound to the PV, and the deployment never becomes ready.
Did I miss anything?
You can create an NFS server and have the pods use NFS volumes. Here is the manifest to create such an in-cluster NFS server (make sure you modify STORAGE_CLASS and the other variables below):
export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_IMAGE="itsthenetwork/nfs-server-alpine:12"
export STORAGE_CLASS="thin-disk"

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  ports:
  - name: tcp-2049
    port: 2049
    protocol: TCP
  - name: udp-111
    port: 111
    protocol: UDP
  selector:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
  name: ${NFS_NAME}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: $STORAGE_CLASS
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
      app.kubernetes.io/instance: ${NFS_NAME}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
        app.kubernetes.io/instance: ${NFS_NAME}
    spec:
      containers:
      - name: nfs-server
        image: ${NFS_IMAGE}
        ports:
        - containerPort: 2049
          name: tcp
        - containerPort: 111
          name: udp
        securityContext:
          privileged: true
        env:
        - name: SHARED_DIRECTORY
          value: /nfsshare
        volumeMounts:
        - name: pvc
          mountPath: /nfsshare
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: ${NFS_NAME}
EOF
Below is an example of how to point the other pods to this NFS. In particular, refer to the volumes section at the end of the YAML:
export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)

kubectl apply -f - <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: apache
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: nfs-vol
          subPath: html
      volumes:
      - name: nfs-vol
        nfs:
          server: $NFS_IP
          path: /
EOF
It is correct that there is a 1:1 relation between a PersistentVolumeClaim and a PersistentVolume.
However, Pods running on the same Node can concurrently mount the same volume, e.g. use the same PersistentVolumeClaim.
If you use Minikube for local development, you only have one node, so you can use the same PersistentVolumeClaim. Since you want to use different files for each app, you could use a unique directory for each app in that shared volume.
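For instance, each pod can mount the same claim under its own subdirectory via subPath; a sketch using the claim name from earlier in the thread (the pod name and key file are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  containers:
  - name: app-a
    image: busybox
    command: [ sh, -c, "touch /keys/app-a.key && sleep 3600" ]
    volumeMounts:
    - name: shared
      mountPath: /keys
      subPath: app-a          # each app writes into its own directory of the shared volume
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-nfs-pvc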
So finally, I did it by using a dynamic provisioner.
I installed stable/nfs-server-provisioner with Helm. With proper configuration, it managed to create a storage class named nfs through which my PVCs are able to bind :)
helm install stable/nfs-server-provisioner --name nfs-provisioner -f nfs_provisioner.yaml
the nfs_provisioner.yaml is as follows
persistence:
enabled: true
storageClass: "standard"
size: 20Gi
storageClass:
# Name of the storage class that will be managed by the provisioner
name: nfs
defaultClass: true
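With the provisioner in place, application claims only need to reference that class; a sketch (the claim name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-keys
spec:
  storageClassName: nfs       # the class created by nfs-server-provisioner above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi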

How to define PVC for specific path in a single node in kubernetes

I am running a local k8s cluster and defining a PV as hostPath for MySQL pods.
All the configuration details are shared below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The problem I am getting: since the MySQL pod runs in a k8s cluster, when it is deleted and recreated the scheduler may place it on any node, while the hostPath data stays mounted on one specific node. Is it a good idea to pin MySQL to a fixed node, or are there other options? Please share any ideas.
You have the below choices:
1. Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount is created.
2. Use local persistent volumes; they are supported on Kubernetes 1.14 and above (see the sketch below).
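For the second option, a local PersistentVolume pins the data to one node explicitly through required nodeAffinity; a sketch, where the storage class name, path, and node name are assumptions to adapt:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # assumption: a class you create for local volumes
  local:
    path: /mnt/data                 # assumption: directory pre-created on that node
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                  # assumption: replace with the actual node name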
Why are you using a PVC and a PV? For hostPath you don't even need to create the PV object; the pod just uses the path on the node directly.
You should use a StatefulSet if you want a re-created pod to get the storage the previous one was using (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      # storageClassName: "standard"
      resources:
        requests:
          storage: 2Gi
This StatefulSet fails for MySQL-specific reasons, but as a reference it should serve.

How do I run object storage minio in a minikube cluster?

I want to integrate a minio object storage into my minikube cluster.
I use the Dockerfile from the minio git repo.
I also added the persistent volume with the claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/minio"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
For the minio deployment I have:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app: minio
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: minio
        role: master
        tier: backend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: minio
        image: <secret Registry >
        env:
        - name: MINIO_ACCESS_KEY
          value: akey
        - name: MINIO_SECRET_KEY
          value: skey
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: data
          mountPath: /data/ob
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-pv-claim
For the service, I opened up the external IP just for debugging:
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
    role: master
    tier: backend
spec:
  ports:
  - port: 9000
    targetPort: 9000
  externalIPs:
  - 192.168.99.101
  selector:
    app: minio
    role: master
    tier: backend
But when I start the deployment I get the error message ERROR Unable to initialize backend: The disk size is less than the minimum threshold.
I assumed that 3Gi should be enough. How can I solve this issue? Moreover, now that I try to delete my persistent volume, it stays in the Terminating status.
How can I run minio in a minikube cluster?
I don't think there is enough storage in /mnt/data inside minikube. Try /mnt/sda1 or /data. Better yet, go inside minikube and check the storage available; to get into minikube you can do minikube ssh.
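Something like the following should reveal which mount point has room (the paths other than /mnt/data are the candidates suggested above):
minikube ssh
df -h /mnt/sda1 /data /mnt/data    # compare free space on the candidate mount points
exit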