Persistent Volume Claim is not getting bound for AWS EBS volume - kubernetes

I'm new to AWS EKS (Elastic Kubernetes Service). I'm trying to run SonarQube on EKS using a Fargate profile (serverless).
Following are my Kubernetes manifest files:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
  namespace: cicd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:8.6.1-community
          resources:
            requests:
              cpu: 500m
              memory: 1024Mi
            limits:
              cpu: 2000m
              memory: 2048Mi
          volumeMounts:
            - mountPath: "/opt/sonarqube/data/"
              name: sonar-data
            - mountPath: "/opt/sonarqube/extensions/"
              name: sonar-extensions
          env:
            - name: "SONARQUBE_JDBC_USERNAME"
              valueFrom:
                configMapKeyRef:
                  name: configuration
                  key: USERNAME
            - name: "SONARQUBE_JDBC_URL"
              valueFrom:
                configMapKeyRef:
                  name: configuration
                  key: URL
            - name: "SONARQUBE_JDBC_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: db-password
                  key: password
          ports:
            - containerPort: 9000
              protocol: TCP
      volumes:
        - name: sonar-data
          persistentVolumeClaim:
            claimName: ebs-claim-sonar-data
        - name: sonar-extensions
          persistentVolumeClaim:
            claimName: ebs-claim-sonar-extensions
Storage Classes for Sonar data and Sonar extensions:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc-sonar-data
  namespace: cicd
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc-sonar-extensions
  namespace: cicd
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
My Persistent Volume Claims (PVCs):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim-sonar-data
  namespace: cicd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc-sonar-data
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim-sonar-extensions
  namespace: cicd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc-sonar-extensions
  resources:
    requests:
      storage: 50Gi
Please ignore the ConfigMaps and Secrets. The problem is that the PVCs are not getting bound to the storage classes, so the deployment fails with the error below:
Pod not supported on Fargate: volumes not supported: sonar-data not supported because PVC ebs-claim-sonar-data not bound, sonar-extensions not supported because: PVC ebs-claim-sonar-extensions not bound
I tried to debug this a lot but was not able to find the root cause. I would really appreciate it if someone could help me with this.
Also, just to check: if I create only my StorageClass objects using kubectl create -f <sc file>, should I then expect to see EBS volumes being provisioned in the AWS console?
Please assist, thanks.
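For anyone hitting the same thing, the binding state and the reason a claim is still Pending can be inspected with kubectl; this is only a debugging sketch, assuming the cicd namespace and claim names from the manifests above:
# List storage classes and claims, and check the STATUS column of the claims
kubectl get sc
kubectl get pvc -n cicd
# The Events section at the bottom explains why a claim has not bound yet
kubectl describe pvc ebs-claim-sonar-data -n cicd
Also note that with volumeBindingMode: WaitForFirstConsumer, creating the StorageClass alone does not provision anything; an EBS volume would only be requested once a pod that uses the claim is actually scheduled.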

Related

Kubernetes PersistentVolume issues in GCP ( Deployment stuck in pending )

Hi, I'm trying to create a persistent volume for my MongoDB on Kubernetes on Google Cloud Platform, and it's stuck in Pending.
Here is my manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: account-mongo-depl
  labels:
    app: account-mongo
spec:
  replicas: 1
  serviceName: 'account-mongo'
  selector:
    matchLabels:
      app: account-mongo
  template:
    metadata:
      labels:
        app: account-mongo
    spec:
      volumes:
        - name: account-mongo-storage
          persistentVolumeClaim:
            claimName: account-db-bs-claim
      containers:
        - name: account-mongo
          image: mongo
          volumeMounts:
            - mountPath: '/data/db'
              name: account-mongo-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: account-mongo-srv
spec:
  type: ClusterIP
  selector:
    app: account-mongo
  ports:
    - name: account-db
      protocol: TCP
      port: 27017
      targetPort: 27017
Here is my list of pods. My pod is stuck in Pending until it fails.
If you're on GKE, you should already have dynamic provisioning set up.
storageClassName: do-block-storage
accessModes:
  - ReadWriteOnce
"do-block-storage"? Were you running on DigitalOcean previously? You should be able to remove the storageClassName line and use the default provisioner Google provides.
For example, here's a snippet from one of my own StatefulSets:
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      ## Specify storage class to use nfs (requires nfs provisioner. Leave unset for dynamic provisioning)
      #storageClassName: "nfs"
      resources:
        requests:
          storage: 10G
I do not specify a storage class here, so it uses the default one, which provisions as a persistent disk on GCP
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  2y245d
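If the storageClassName line is removed from the claim above, keep in mind that a PVC spec is mostly immutable, so the stuck claim has to be recreated; a rough sketch (the manifest filename is hypothetical):
# Delete the Pending claim and re-apply the manifest without storageClassName
kubectl delete pvc account-db-bs-claim
kubectl apply -f account-mongo.yaml
# The claim should now bind via the default storage class shown above
kubectl get pvc account-db-bs-claim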

WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding. Auto provisioning does not work

Hi, I know this might be a possible duplicate, but I could not get the answer from that question.
I have a Prometheus deployment and would like to give it a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.retention.time=60d"
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Now neither the PVC nor the deployment can be scheduled, because the PVC waits for the deployment and vice versa. As far as I know, we have a cluster with automatic provisioning, so I cannot just create a PV myself. How can I solve this problem? Other deployments use PVCs in the same style and it works.
It's because of the namespace. PVC is a namespaced object (you can look here). Your PVC is in the default namespace; moving it to the monitoring namespace should work.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pv-claim
  namespace: monitoring
spec:
  storageClassName: default
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
Is your PVC deployed in the same namespace as the Deployment?
Also, make sure the StorageClass has volumeBindingMode: WaitForFirstConsumer:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  ...
volumeBindingMode: WaitForFirstConsumer
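A quick way to confirm which namespace the claim actually landed in, and that it binds once moved (just a sketch, assuming kubectl points at this cluster):
# Look for the claim across namespaces; it should be in monitoring, not default
kubectl get pvc --all-namespaces | grep prometheus-pv-claim
# With WaitForFirstConsumer it stays Pending until the Prometheus pod is scheduled, then binds
kubectl get pvc prometheus-pv-claim -n monitoring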

Kubernetes Minio Volume Node Affinity Conflict

I have set up a testing k3d cluster with four agents and a server.
I have a storage class defined as follows:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
with a PV defined as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: basic-minio-storage
  labels:
    storage-type: object-store-path
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/basic_minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3d-test-agent-0
The PVC that I have defined is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: basic-minio
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 500Gi
  selector:
    matchLabels:
      storage-type: object-store-path
My StatefulSet is:
# Create a simple single node Minio linked to root drive
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: basic-minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: basic-minio
  serviceName: "basic-minio"
  template:
    metadata:
      labels:
        app: basic-minio
    spec:
      containers:
        - name: basic-minio
          image: minio/minio:RELEASE.2021-10-10T16-53-30Z
          imagePullPolicy: IfNotPresent
          args:
            - server
            - /data
          env:
            - name: MINIO_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: minio-root-user
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: minio-root-password
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: storage
              mountPath: "/data"
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: minio-pv-claim
In my Kubernetes dashboard, I can see that the PV is provisioned and ready.
The PVC has been set up and has bound to the PV.
But my pod shows the error: 0/5 nodes are available: 5 node(s) had volume node affinity conflict.
What is causing this issue and how can I debug it?
Your (local) volume is created on the worker node k3d-test-agent-0, but your pod is not scheduled to run on this node. This is not a good approach, but if you must run it this way, you can direct the pod to run on this host:
...
spec:
  nodeSelector:
    kubernetes.io/hostname: k3d-test-agent-0
  containers:
    - name: basic-minio
  ...
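To see the mismatch directly, compare where the volume is pinned with where the pod was scheduled; a debugging sketch, assuming the names and namespace above:
# The Node Affinity section shows the volume is tied to k3d-test-agent-0
kubectl describe pv basic-minio-storage
# The NODE column shows where the pod actually landed; after adding the nodeSelector it should be k3d-test-agent-0
kubectl get pods -n minio -o wide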

PersistentVolume not utilized in Docker Desktop (MacOS) hosted Kubernetes cluster

I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume, so that the actual data is retained in a local macOS directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db, as if nothing is really set up in terms of persistent volumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g. a small Micronaut-based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
        - name: fo-persons-mongodb-volume-pvc
          persistentVolumeClaim:
            claimName: persons-mongodb-pvc
      containers:
        - name: fo-persons-mongodb
          image: mongo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: fo-persons-mongodb-volume-pvc
              mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. Seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume:
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: Not setting the node affinity, and using hostPath instead of local.path, both in the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
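A quick sanity check after applying the updated descriptors (a sketch, assuming the fo namespace and names above):
# Both the volume and the claim should report STATUS Bound
kubectl get pv fo-persons-mongodb-volume
kubectl get pvc persons-mongodb-pvc -n fo
# Once MongoDB writes data, its files should show up in the host directory
ls /Users/mike/kubernetes/fo/persons/mongodb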

Minikube: use persistent volume (shared disk) and mount it to host

I am trying to mount a Linux directory as a shared directory for multiple containers in Minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-share-storage
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  local:
    path: "/data/myapp/share/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-share-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: myapp-server
  name: myapp-server
spec:
  selector:
    matchLabels:
      io.kompose.service: myapp-server
  template:
    metadata:
      labels:
        io.kompose.service: myapp-server
    spec:
      containers:
        - name: myapp-server
          image: myregistry.com:5000/server-myapp:alpine
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /data/myapp/share
              name: myapp-share
          env:
            - name: storage__root_directory
              value: /data/myapp/share
      volumes:
        - name: myapp-share
          persistentVolumeClaim:
            claimName: myapp-share-claim
status: {}
It works, with pitfalls: StatefulSets are not supported; they bring deadlock errors:
pending PVC: waiting for first consumer to be created before binding
pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a Minikube PersistentVolumeClaim without a PersistentVolume (it will be created automatically). However:
The volume is created in /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim)
Minikube doesn't honor the mount request
How can I make it just work?
Using your YAML file I managed to create the volumes and deploy it without issue, but I had to use the command minikube mount /mydir/:/data/myapp/share/ after starting Minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working.
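Put together, the working sequence looks roughly like this (a sketch using the paths from the question; the manifest filename is hypothetical, and the mount command has to stay running, e.g. in a separate terminal):
minikube start --insecure-registry="myregistry.com:5000"
# Run the mount in its own terminal (or background it); the share goes away when this process is killed
minikube mount /tmp/myapp/k8s/:/data/myapp/share/ &
kubectl apply -f myapp-share.yaml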