Getting "1 node(s) didn't find available persistent volumes to bind" installing DSE OpsCenter on Kubernetes - kubernetes

I am trying to install DSE OpsCenter on Kubernetes. Below is my cluster file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ops-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: config-clusters-volume-opscenter-0
  volumeMode: Filesystem
  storageClassName: ops-storage
  local:
    path: /data/k8s-data/cassandra-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ubuntuserver
---
apiVersion: v1
kind: Service
metadata:
  name: opscenter-ext-lb
  labels:
    app: opscenter
spec:
  type: LoadBalancer
  ports:
  - port: 8888
    name: opsc-gui-port
  - port: 8443
    name: opsc-secure-port
  selector:
    app: opscenter
---
apiVersion: v1
kind: Service
metadata:
  name: opscenter
  labels:
    app: opscenter
spec:
  ports:
  - port: 8888
    name: opsc-gui-port
  - port: 8443
    name: opsc-secure-port
  - port: 61620
    name: port-61620
  clusterIP: None
  selector:
    app: opscenter
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opscenter
spec:
  serviceName: "opscenter"
  replicas: 1
  selector:
    matchLabels:
      app: opscenter
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: opscenter
    spec:
      containers:
      - name: opscenter
        image: gcr.io/datastax-public/dse-opscenter:6.5.0
        imagePullPolicy: IfNotPresent
        lifecycle:
          postStart:
            exec:
              command: ["/update_admin_password.sh"]
        resources:
          requests:
            cpu: "2"
            memory: "4000Mi"
        env:
        - name: DS_LICENSE
          value: accept
        - name: OPSC_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: opsc
              key: admin_password
        ports:
        - containerPort: 8888
          name: opsc-gui-port
        - containerPort: 61620
          name: port-61620
        volumeMounts:
        - name: config-volume
          mountPath: /config
        - name: config-clusters-volume
          mountPath: /opt/opscenter/conf/clusters
        - name: ssl-store-volume
          mountPath: /var/lib/opscenter/ssl
      volumes:
      - name: config-volume
        configMap:
          name: opsc-config
      - name: ssl-store-volume
        configMap:
          name: opsc-ssl-config
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntuserver
  volumeClaimTemplates:
  - metadata:
      name: config-clusters-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ops-storage
      resources:
        requests:
          storage: 10Gi
But when I apply this and list the pods, I get the error below:
0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
I have tried several approaches but none of them worked. Can someone please help me with this?
Any tutorial or link that shows how to install DSE OpsCenter on Kubernetes would be helpful too.
Please note: I need to use local storage for the above.

Assuming you've installed DSE using the cass-operator: OpsCenter is not designed to be deployed on a Kubernetes cluster and it is not a supported configuration, so it will definitely not work.
The recommendation is to use open-source tools like https://github.com/datastax/metric-collector-for-apache-cassandra to monitor your cluster instead of OpsCenter.
For the same reason, K8ssandra.io already has all of these tools bundled in and pre-configured when you launch Cassandra on Kubernetes:
Reaper for automated repairs
Medusa for backups and restores
Metrics Collector for monitoring with Prometheus + Grafana
Traefik templates for k8s cluster ingress
Stargate.io - a data gateway for connecting to Cassandra using REST API, GraphQL API and JSON/Doc API
K8ssandra uses the same cass-operator under the hood. Cheers!
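If you decide to try K8ssandra, a minimal sketch of a Helm values file is shown below. The layout and every field name here are assumptions based on the 1.x chart (keys change between releases), so verify them against the chart documentation before installing; the repo URL https://helm.k8ssandra.io/stable and chart name k8ssandra/k8ssandra are likewise assumptions to confirm.
# values.yaml - hypothetical sketch for a K8ssandra install; field names are
# assumptions to verify against the chart version you actually use
cassandra:
  version: "4.0.0"        # K8ssandra deploys open-source Cassandra, not DSE
  datacenters:
  - name: dc1
    size: 3
reaper:
  enabled: true           # automated repairs
medusa:
  enabled: false          # backups/restores; needs object storage configured
stargate:
  enabled: true           # REST / GraphQL / Document APIs
It would then be installed with something like helm install my-cluster k8ssandra/k8ssandra -f values.yaml after adding the repo.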

Related

Kubernetes: Store files in local hard disk using persistent volumes with Minikube

I want to mount a folder that is located on my local hard disk (ex. /home/logon/volumes/algovens/ids/app_data/) on a pod:
/home/logon/volumes/algovens/ids/app_data/ should be mounted at /app/app_data/ on the pod.
I create a persistent volume and a persistent volume claim, then reference the PVC in my pod .yaml file.
I apply all three YAML files (PV, PVC, pod) to the Kubernetes cluster.
When I get access to the container with an interactive bash session, I can see the folder I configured to mount from my local hard disk (/app/app_data/ on the pod), but it's still empty.
However, there are files and folders in the source folder on my local hard disk (/home/logon/volumes/algovens/ids/app_data/).
My configuration files:
identityserver-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: identityserver-pv
  labels:
    name: identityserver-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  local:
    path: /home/logon/volumes/algovens/ids/app_data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
identityserver-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: identityserver-pvc
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
identityserver-deployment.yaml
# Deployment for identityserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityserver-deployment
  labels:
    app: identityserver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identityserver
  template:
    metadata:
      labels:
        app: identityserver # pod label
    spec:
      volumes:
      - name: identityserver-pvc
        persistentVolumeClaim:
          claimName: identityserver-pvc
      containers:
      - name: identityserver-container
        image: localhost:5000/algovens-identityserver:v1.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: identityserver-pvc
          mountPath: "/app/app_data/"
        env:
        - name: ALGOVENS_CORS_ORIGINS
          valueFrom:
            configMapKeyRef:
              name: identityserver-config
              key: ALGOVENS_CORS_ORIGINS
---
# Service for identityserver
apiVersion: v1
kind: Service
metadata:
  name: identityserver-service
spec:
  type: NodePort # External service. default value is ClusterIP
  selector:
    app: identityserver
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30100

Kubernetes Minio Volume Node Affinity Conflict

I have set up a testing k3d cluster with 4 agents and a server.
I have a storage class defined thus:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
with a pv defined thus:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: basic-minio-storage
  labels:
    storage-type: object-store-path
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/basic_minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3d-test-agent-0
the pvc that I have defined is like:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: basic-minio
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 500Gi
  selector:
    matchLabels:
      storage-type: object-store-path
my deployment is like:
# Create a simple single node Minio linked to root drive
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: basic-minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: basic-minio
  serviceName: "basic-minio"
  template:
    metadata:
      labels:
        app: basic-minio
    spec:
      containers:
      - name: basic-minio
        image: minio/minio:RELEASE.2021-10-10T16-53-30Z
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /data
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: minio-root-user
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: minio-root-password
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/data"
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-pv-claim
In my Kubernetes dashboard I can see that the PV is provisioned and ready, and the PVC has bound to it.
But my pod shows the error: 0/5 nodes are available: 5 node(s) had volume node affinity conflict.
What is causing this issue and how can I debug it?
Your (local) volume is created on the worker node k3d-test-agent-0, but none of your pods are scheduled to run on that node. This is not a good approach, but if you must run it this way, you can direct all pods to run on this host:
...
spec:
  nodeSelector:
    kubernetes.io/hostname: k3d-test-agent-0
  containers:
  - name: basic-minio
...
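A related option, not spelled out in the answer above but recommended by the Kubernetes docs for local volumes, is to switch the StorageClass to delayed binding so the scheduler takes the PV's nodeAffinity into account when placing the pod. volumeBindingMode cannot be changed on an existing class, so this is a sketch of a recreated one:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# WaitForFirstConsumer delays binding the PVC to the local PV until a pod that
# uses the claim is scheduled, so the pod and the volume end up on the same node
volumeBindingMode: WaitForFirstConsumer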

Persistent Storage - Pi K8s Cluster - NFS version transport protocol not supported

I have a Raspberry Pi Cluster consisting of 1-Master 20-Nodes:
192.168.0.92 (Master)
192.168.0.112 (Node w/ USB Drive)
I mounted a USB drive at /media/hdd and set a label purpose=volume on it.
Using the following I was able to set up an NFS server:
apiVersion: v1
kind: Namespace
metadata:
  name: storage
  labels:
    app: storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  namespace: storage
spec:
  capacity:
    storage: 3.5Ti
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /media/hdd
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: purpose
          operator: In
          values:
          - volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
  namespace: storage
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 3Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: storage
  labels:
    app: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
      name: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:11-arm
        env:
        - name: SHARED_DIRECTORY
          value: /exports
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: local-claim
      nodeSelector:
        purpose: volume
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  namespace: storage
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  clusterIP: 10.96.0.11
  selector:
    app: nfs-server
And I was even able to make a persistent volume with this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-volume
  labels:
    directory: mysql
spec:
  capacity:
    storage: 200Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /mysql
    server: 10.244.19.5
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-nfs-claim
spec:
  storageClassName: slow
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      directory: mysql
But when I try to use the volume like so:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-nfs-claim
I get an "NFS version or transport protocol is not supported" error.
When you see the mount.nfs: requested NFS version or transport protocol is not supported error, there are three main reasons:
NFS services are not running on the NFS server
NFS utils are not installed on the client
The NFS service is hung on the NFS server
According to this article there are three solutions to resolve the problem with your error.
First one:
Log in to the NFS server and check the status of the NFS services. If the command
service nfs status reports that the NFS services are stopped on the server, start them with service nfs start, then try mounting the NFS share from the client again.
Second one:
If your problem isn't resolved after trying the first solution,
try installing the nfs-utils package on your server.
Third one:
Open the file /etc/sysconfig/nfs and check the parameters below:
# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
Removing the hash from the RPCNFSDARGS lines turns off support for the listed protocol versions, so clients using those NFS versions won't be able to connect to the NFS server to mount a share. If any of these lines are uncommented on your server, comment them back out, restart the NFS server service, and then try mounting from the client again.
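If the server side looks healthy, it can also help to pin the NFS version the Kubernetes node uses when mounting the volume. This is a minimal sketch based on the mysql-nfs-volume PV above, assuming the server speaks v4.1; the nfsvers value is an assumption you should match to whatever your server actually exports:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-volume
  labels:
    directory: mysql
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
  - nfsvers=4.1           # assumption: set this to a version your server supports
  nfs:
    path: /mysql
    server: 10.244.19.5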

K8s pod has unbound immediate PersistentVolumeClaims - mongoDB

I'm trying to deploy MongoDB in my Kubernetes cluster (Google Cloud Platform). Right now I'm focused on a local minikube version of the deployment. The problem is that I'm getting an error like this:
pod has unbound immediate PersistentVolumeClaims
These are my config files which set up MongoDB inside minikube:
Service file:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
Stateful set file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
Storage class file:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
What am I missing here?
I think the problem is that you are trying to use
provisioner: kubernetes.io/gce-pd
which will not work locally, as it is intended for GCE Persistent Disks.
For minikube, you can create a hostPath PV/PVC instead. Read more
I work with minikube during feature development and run MongoDB as well. I would recommend using hostPath when working with minikube; here is my volume definition:
volume
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "blog-repoflow-resources-data-volume"
namespace: repoflow-blog-namespace
labels:
service: "resources-data-service"
fsOwner: "1001"
fsGroup: "0"
fsMode: "775"
spec:
capacity:
storage: "500Mi"
accessModes:
- "ReadWriteOnce"
storageClassName: local-storage
hostPath:
path: /home/docker/production/blog.repoflow.com/volumes/blog-repoflow-resources-data-volume
The complete cluster is up and running and open source: repository.
Also check the StatefulSet, volume claim, and service definitions.
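For completeness, here is a sketch of a StorageClass and claim that would bind to the hostPath volume above; the claim name is hypothetical and the size simply mirrors the PV, so adjust both to your setup:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-repoflow-resources-data-claim   # hypothetical name
  namespace: repoflow-blog-namespace
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi                          # matches the PV capacity above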

Will storageClass kubernetes.io/no-provisioner work for multi-node cluster?

Cluster:
1 master
2 workers
I am deploying a StatefulSet with 3 replicas that uses local volumes through PVs (the kubernetes.io/no-provisioner storageClass).
I created 2 PVs, one for each worker node.
Expectation: the pods will be scheduled on both workers and share the same volume.
Result: all 3 stateful pods are created on a single worker node.
YAML:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-1
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-2
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node2
---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: test-headless
    port: 8000
  clusterIP: None
  selector:
    app: test
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app: test
spec:
  ports:
  - name: test
    port: 8000
    protocol: TCP
    nodePort: 30063
  type: NodePort
  selector:
    app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-stateful
spec:
  selector:
    matchLabels:
      app: test
  serviceName: stateful-service
  replicas: 6
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: container-1
        image: <Image-name>
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8000
        volumeMounts:
        - name: localvolume
          mountPath: /tmp/
      volumes:
      - name: localvolume
        persistentVolumeClaim:
          claimName: example-local-claim
This happens because Kubernetes does not try to distribute pods across nodes on its own; the mechanism it provides for controlling placement is pod affinity/anti-affinity.
To distribute the pods across all workers, you can use pod anti-affinity.
Furthermore, you can use soft (preferred) anti-affinity (the differences I explain here); it isn't strict and still allows all of your pods to be scheduled. For example, the StatefulSet will look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # soft (preferred) anti-affinity: spread replicas across nodes when
          # possible, but still schedule them if there are not enough nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - my-app
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: app-name
        image: k8s.gcr.io/super-app:0.8
        ports:
        - containerPort: 21
          name: web
This StatefulSet will try to place each pod on a different worker; if there are not enough workers, it will still schedule pods on nodes where other replicas already run.