Kubernetes Minio Volume Node Affinity Conflict

I have set up a test k3d cluster with 4 agents and a server.
I have a storage class defined thus:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
with a PV defined thus:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: basic-minio-storage
  labels:
    storage-type: object-store-path
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/basic_minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3d-test-agent-0
The PVC that I have defined looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: basic-minio
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 500Gi
  selector:
    matchLabels:
      storage-type: object-store-path
My deployment looks like this:
# Create a simple single node Minio linked to root drive
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: basic-minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: basic-minio
  serviceName: "basic-minio"
  template:
    metadata:
      labels:
        app: basic-minio
    spec:
      containers:
        - name: basic-minio
          image: minio/minio:RELEASE.2021-10-10T16-53-30Z
          imagePullPolicy: IfNotPresent
          args:
            - server
            - /data
          env:
            - name: MINIO_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: minio-root-user
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: minio-root-password
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: storage
              mountPath: "/data"
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: minio-pv-claim
In my Kubernetes dashboard, I can see that the PV is provisioned and ready.
The PVC has been set up and has bound to the PV.
But my pod shows the error: 0/5 nodes are available: 5 node(s) had volume node affinity conflict.
What is causing this issue and how can I debug it?

Your (local) volume is created on the worker node k3d-test-agent-0, but none of your pods are scheduled to run on that node. Pinning everything to one host is not a great approach, but if you must run this way, you can direct all pods to run on that host:
...
spec:
  nodeSelector:
    kubernetes.io/hostname: k3d-test-agent-0
  containers:
    - name: basic-minio
      ...
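To debug this kind of conflict, compare the nodeAffinity recorded on the PV with the labels actually present on the node; the hostname value must match exactly (in k3d the node name usually matches the kubernetes.io/hostname label, but it is worth confirming). A quick check, using the names from the manifests above, might look like this:
# What the PV requires
kubectl get pv basic-minio-storage -o jsonpath='{.spec.nodeAffinity}'
# What the node actually carries
kubectl get node k3d-test-agent-0 --show-labels | grep kubernetes.io/hostname
# Why the scheduler rejected each node
kubectl -n minio describe pod basic-minio-0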

Related

Kubernetes: Store files in local hard disk using persistent volumes with Minikube

I want to mount a folder that is located on my local hard disk (e.g. /home/logon/volumes/algovens/ids/app_data/) into a pod:
/home/logon/volumes/algovens/ids/app_data/ should be mounted at /app/app_data/ in the pod.
I create a persistent volume and a persistent volume claim, then reference the PVC in my pod .yaml file.
I apply all three YAML files (PV, PVC, pod) to the Kubernetes cluster.
When I get access to the container with an interactive bash session, I can see the folder I configured as the mount point (/app/app_data/ in the pod), but it is still empty.
However, there are files and folders in the corresponding folder on my local hard disk (/home/logon/volumes/algovens/ids/app_data/).
My configuration files:
identityserver-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: identityserver-pv
  labels:
    name: identityserver-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /home/logon/volumes/algovens/ids/app_data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
identityserver-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: identityserver-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
identityserver-deployment.yaml:
# Deployment for identityserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityserver-deployment
  labels:
    app: identityserver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identityserver
  template:
    metadata:
      labels:
        app: identityserver # pod label
    spec:
      volumes:
        - name: identityserver-pvc
          persistentVolumeClaim:
            claimName: identityserver-pvc
      containers:
        - name: identityserver-container
          image: localhost:5000/algovens-identityserver:v1.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: identityserver-pvc
              mountPath: "/app/app_data/"
          env:
            - name: ALGOVENS_CORS_ORIGINS
              valueFrom:
                configMapKeyRef:
                  name: identityserver-config
                  key: ALGOVENS_CORS_ORIGINS
---
# Service for identityserver
apiVersion: v1
kind: Service
metadata:
  name: identityserver-service
spec:
  type: NodePort # External service. default value is ClusterIP
  selector:
    app: identityserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30100

How can I mount the folder correctly in Kubernetes?

I'm trying to run nodered in my minikube Kubernetes cluster ("cluster", it's one node :D).
The docker command to do this would be, for example:
docker run -it -p 1880:1880 -v /home/user/node_red_data:/data --name mynodered nodered/node-red
But I'm not running it in Docker, I'm trying to run it in minikube. The minikube documentation states that /data on the host is persisted. So what I wanted was for /data/nodered on the host to be mounted as /data in the nodered container.
I started by adding a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Then added persistent storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
Then a persistent volume claim for the nodered:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And then the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  volumes:
    - name: nodered-claim
      persistentVolumeClaim:
        claimName: nodered-claim
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: "/data"
              subPath: "nodered"
I've checked the Kubernetes dashboard and everything is green and the volume is bound. I created a simple HTTP service in nodered and deployed it. It's running, but nothing is saved, so if the deployment goes down and gets redeployed it will be empty.
I've checked the /data and /data/nodered folders on the minikube instance running in Docker, but they are empty.
Your Deployment spec should be rejected with an error, because volumes is placed under the Deployment's spec instead of under the pod template's spec. Try the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodered
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
          volumeMounts:
            - name: nodered-claim
              mountPath: /data/nodered
              # subPath: nodered <-- not needed in your case
      volumes:
        - name: nodered-claim
          persistentVolumeClaim:
            claimName: nodered-claim
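If it helps, here is a quick way to confirm that the corrected Deployment actually persists data (a sketch, assuming the small-pv manifest above with its node path of /data):
# Write a file through the container; it lands on the PV mounted at /data/nodered
kubectl exec deploy/nodered -- touch /data/nodered/persistence-test
# The same file should be visible on the minikube node, whose /data backs the PV
minikube ssh "ls -la /data"
# Recreate the pod and check the file survived
kubectl rollout restart deployment/nodered
kubectl exec deploy/nodered -- ls /data/nodered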

PersistentVolume not utilized in Docker Desktop (MacOS) hosted Kubernetes cluster

I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume, so that the actual data is retained in a local macOS directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db as if nothing is really set up in terms of persistent volumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g a small Micronaut based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. Seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume:
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: Not setting the node affinity, and using hostPath instead of local.path, both in the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
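For completeness, a couple of checks (names taken from the manifests above) to confirm the claim binds to this PV and that mongod writes into the mapped directory:
# STATUS should be Bound and VOLUME should be fo-persons-mongodb-volume
kubectl -n fo get pvc persons-mongodb-pvc
# The pod's volume should reference the claim rather than an emptyDir
kubectl -n fo describe pod -l app=fo-persons-mongodb
# On the Mac, MongoDB's data files should start appearing here
ls /Users/mike/kubernetes/fo/persons/mongodb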

Unable to mount local PersistentVolume in k8s 1.13

I'm trying to deploy a stateful set with a persistent volume claim on a bare metal kubernetes cluster (v1.13) but the pod times out when trying to mount the volume.
I have a local-storage storage class defined:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
I have a PV defined:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandradev1
  labels:
    app: cassandra
    environment: dev
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  local:
    path: "/data1/cassandradev1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node1
And I have a stateful set that issues a claim (truncated):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra-set
spec:
  ...
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        selector:
          matchLabels:
            app: "cassandra"
            environment: "dev"
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-storage"
        resources:
          requests:
            storage: 1Ti
When I try to apply the stateful set, the Pod gets scheduled but times out:
Normal Scheduled 2m13s default-scheduler Successfully assigned default/cassandra-set-0 to my-node1
Warning FailedMount 13s kubelet, my-node1 Unable to mount volumes for pod "cassandra-set-0 (dd252f77-fda3-11e8-96d3-1866dab905dc)": timeout expired waiting for volumes to attach or mount for pod "default"/"cassandra-set-0". list of unmounted volumes=[cassandra-data]. list of unattached volumes=[cassandra-data default-token-t2dg8]
If I look at the controller manager logs, I see an error message about no volume plugin being matched:
kubectl logs pod/kube-controller-manager -n kube-system
W1212 00:51:24.218114 1 plugins.go:845] FindExpandablePluginBySpec(cassandradev1) -> err:no volume plugin matched
Any ideas on where to look next?
First of all, make sure the PV definition is correct: there is no hostPath with the local-storage class. This is how you should define your local-storage PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandradev1
  labels:
    app: cassandra
    environment: dev
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  local:
    path: "/data1/cassandradev1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node1
Also keep in mind that, unlike hostPath, /data1/cassandradev1 must already exist on my-node1: local-storage does not create the path automatically, and if the path is missing when you deploy the StatefulSet, the pod will fail with a mount-related error.
This should resolve your issue. Hope this helps.
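As a concrete sketch (assuming SSH access to my-node1), create the directory on the node first and then check that the claim generated by the volumeClaimTemplate binds:
ssh my-node1 'sudo mkdir -p /data1/cassandradev1'
kubectl get pv cassandradev1
# volumeClaimTemplates name claims as <template-name>-<statefulset-name>-<ordinal>
kubectl get pvc cassandra-data-cassandra-set-0
kubectl describe pod cassandra-set-0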
EDIT: I set up a Cassandra StatefulSet with local-storage using the YAML files below. I have omitted some ConfigMaps, so it will not work exactly as-is, but please check what is different from your setup:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  generation: 1
  labels:
    state: cassandra
  name: cassandra
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  serviceName: cassandra
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      creationTimestamp: null
      labels:
        app: cassandra
    spec:
      containers:
        - args:
            - chmod -R 777 /logs/; /on_start.sh
          command:
            - /bin/sh
            - -c
          image: <image>
          imagePullPolicy: Always
          name: cassandra
          ports:
            - containerPort: 9042
              protocol: TCP
          resources: {}
          volumeMounts:
            - mountPath: /data
              name: data
      imagePullSecrets:
        - name: gcr-imagepull-json-key
  volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: local-storage
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: cassandra-data-vol-0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-cassandra-0
    namespace: default
  local:
    path: /data/cassandra-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ip-10-0-1-91.ec2.internal
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
Make sure /data/cassandra-0 exists on the node before you create the PV. Let me know if you face any issues.
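A note on the claimRef above: it pre-binds the PV to the claim that the volumeClaimTemplate named data will create for the first replica (data-cassandra-0), so no other PVC can grab this volume. Once everything is applied, the binding can be verified with, for example:
kubectl get pv cassandra-data-vol-0   # STATUS should move from Available to Bound
kubectl get pvc data-cassandra-0 -o wide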

Will storageClass kubernetes.io/no-provisioner work for multi-node cluster?

Cluster:
1 master
2 workers
I am deploying a StatefulSet that uses local volumes via PVs (kubernetes.io/no-provisioner storageClass) with 3 replicas.
I created 2 PVs, one for each worker node.
Expectation: pods will be scheduled on both workers, sharing the same volume.
Result: 3 stateful pods are created on a single worker node.
YAML:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-1
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-2
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node2
---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
    - name: test-headless
      port: 8000
  clusterIP: None
  selector:
    app: test
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app: test
spec:
  ports:
    - name: test
      port: 8000
      protocol: TCP
      nodePort: 30063
  type: NodePort
  selector:
    app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-stateful
spec:
  selector:
    matchLabels:
      app: test
  serviceName: stateful-service
  replicas: 6
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: container-1
          image: <Image-name>
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8000
          volumeMounts:
            - name: localvolume
              mountPath: /tmp/
      volumes:
        - name: localvolume
          persistentVolumeClaim:
            claimName: example-local-claim
This happens because Kubernetes does not spread pods across nodes on its own; the mechanism for controlling placement is pod (anti-)affinity.
To distribute the pods across all workers, you can use pod anti-affinity.
There is also a soft (preferred) form of the rule, which is not strict and still lets all your pods be scheduled. With a hard (required) rule, the StatefulSet looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - my-app
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
        - name: app-name
          image: k8s.gcr.io/super-app:0.8
          ports:
            - containerPort: 21
              name: web
With the required rule shown above, every pod must land on a different worker, and any replicas that cannot be placed on a fresh node will stay Pending. If you want the spread to be best-effort instead, so that pods are still scheduled when there are not enough workers, use the preferred (soft) form of the rule, sketched below.
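A minimal sketch of just the affinity block in its preferred form, reusing the app: my-app label from the example above (the weight is arbitrary):
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: kubernetes.io/hostname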