Installing RabbitMQ on Minikube - kubernetes

I'm currently building a backend that, among other things, involves sending RabbitMQ messages from localhost into a K8s cluster where containers can run and pick up specific messages.
So far I've been using Minikube to carry out all of my Docker and K8s development but have run into a problem when trying to install RabbitMQ.
I've been following the RabbitMQ Cluster Operator official documentation (installing) (using). I got to the "Create a RabbitMQ Instance" section and ran into this error:
1 pod has unbound immediate persistentVolumeClaims
I fixed it by continuing with the tutorial and adding a PV and PVC to my RabbitmqCluster YAML file. I tried to apply it again and came across my next issue:
1 insufficient cpu
I've tried messing around with resource limits and requests in the YAML file but no success yet. After Googling and doing some general research I noticed that my specific problem and setup (Minikube and RabbitMQ) don't seem to be very common. My question is: have I gone beyond the scope or use case of Minikube by trying to install external services like RabbitMQ? If so, what should my next step be?
If not, are there any useful tutorials out there for installing RabbitMQ in Minikube?
If it helps, here's my current YAML file for the RabbitMQCluster:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
spec:
  persistence:
    storageClassName: standard
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-pvc
spec:
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  hostPath:
    path: /mnt/app/rabbitmq
    type: DirectoryOrCreate
Edit:
Command used to start Minikube:
minikube start
Output:
πŸ˜„ minikube v1.17.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”„ Restarting existing docker container for "minikube" ...
πŸŽ‰ minikube 1.18.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.18.1
πŸ’‘ To disable this notice, run: 'minikube config set WantUpdateNotification false'
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
πŸ”Ž Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

According to the command you used to start Minikube, the error is because you don't have enough resources assigned to your cluster.
According to the source code of the RabbitMQ Cluster Operator, it seems that it needs 2 CPUs.
You need to adjust the number of CPUs (and probably the memory as well) when you initialize your cluster. Below is an example that starts a Kubernetes cluster with 4 CPUs and 8 GB of RAM:
minikube start --cpus=4 --memory 8192
If you want to check your currently allocated resources, you can run kubectl describe node.

Raising the CPUs and memory definitely gets things un-stuck, but if you are running Minikube on a dev machine - which is probably a laptop - this is a pretty big resource requirement. I get that RabbitMQ runs in some really big scenarios, but can't it be configured to work in small ones?
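It can, at least to a degree. As far as I know the RabbitmqCluster spec exposes resource overrides (spec.resources and spec.replicas in current operator versions), so on a constrained dev cluster you can shrink the requests below the operator's defaults. Treat the values below as a hedged sketch for local development, not a production recommendation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
spec:
  replicas: 1
  resources:
    requests:
      cpu: 500m      # well below the operator's default request, for dev only
      memory: 1Gi
    limits:
      cpu: 500m
      memory: 1Gi
  persistence:
    storageClassName: standard
    storage: 5Gi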

Related

how to connect vmware storage to kubernetes built using rancher 2.8

The cluster nodes are on-prem VMware servers; we used Rancher just to build the k8s cluster.
The build was successful, but when trying to host apps that use PVCs we have problems: dynamic volume provisioning isn't happening and the PVCs are stuck in the 'Pending' state.
The VMware storage class is being used; our vSphere admins confirmed that the VMs have visibility to the datastores, so ideally it should work.
While configuring the cluster we used the cloud provider credentials according to the Rancher docs.
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      datacenters: nxs
      insecure-flag: true
      port: '443'
      soap-roundtrip-count: 0
      user: k8s_volume_svc#vsphere.local
Storage class yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
parameters:
  datastore: ds1_K8S_0004
  diskformat: zeroedthick
reclaimPolicy: Delete
PVC yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arango
  namespace: arango
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nxs01-k8s-0004
Now I want to understand why my PVCs are stuck in the Pending state - are there any other steps I missed?
I saw in the Rancher documentation that a Storage Policy has to be given as an input:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass
A VMware document refers to it as an optional parameter, and also has a statement at the top saying it doesn't apply to tools that use CSI (Container Storage Interface):
https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html
I found that Rancher is using a CSI driver called rshared.
So is this storage policy mandatory? Is it what is stopping me from provisioning a VMDK file?
I gave the documentation for creating the storage policy to the vSphere admins; they said it is for vSAN, whereas our datastores are on VMAX. I couldn't understand the difference or find a corresponding doc for VMAX.
It would be a great help if this got fixed! :)
The whole thing was just about the path defined for the storage end: in the cloud config YAML the path was wrong. The vSphere admins gave us the path where the VMs reside, when they should have given the path where the storage resides.
Once this was corrected the PVC came to the Bound state.
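For anyone hitting the same thing, here is a hedged sketch of where that path lives in the Rancher/RKE cloud provider config; the field layout follows the RKE vsphereCloudProvider schema as I understand it, and the server, folder and datastore values are placeholders:
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      insecure-flag: true
      user: k8s_volume_svc#vsphere.local
      datacenters: nxs
    workspace:
      server: vcenter.example.local      # placeholder vCenter address
      datacenter: nxs
      folder: k8s-dummy-vms              # placeholder folder used by the provisioner
      default-datastore: ds1_K8S_0004    # must point at where the storage resides, not the VM path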

Data from host to kubernetes pod container

I'm just starting this trip into cloud native and Kubernetes (Minikube for now), but I'm stuck because I cannot pass files into pod containers and persist them there.
I have Nginx, php-fpm and MariaDB containers. Now I just need to test the app in Kubernetes (docker-compose is running OK), doing the same thing I was doing in docker-compose.
How can I mount volumes in this scenario?
Volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/docker/volumes/sylius-standard-mysql-sylius-dev-data/_data/sylius
Claim File:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Thank you for the guidance...
It depends on which Minikube driver you're using. Check out https://minikube.sigs.k8s.io/docs/handbook/mount/ for a full overview, but basically you have to make sure the host folder is shared with the guest VM; then hostPath volumes will work as normal. You may want to try Docker Desktop instead, as it somewhat streamlines this process.
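As a hedged sketch of that approach with a VM-based driver (the host and guest paths below are just placeholders): share the host directory into the Minikube VM with minikube mount, then have the PV's hostPath refer to the in-VM path.
# share a host directory into the Minikube VM (this command keeps running in the foreground)
minikube mount /home/me/sylius-data:/mnt/sylius-data

# then in the PersistentVolume above, point hostPath at the in-VM path:
#   hostPath:
#     path: /mnt/sylius-data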
Found on the minikube GitHub repo, in feature request: mount host volumes into docker driver:
Currently we are mounting the /var directory as a docker volume, so
that's the current workaround.
i.e. use this host directory for getting things into the container?
See e.g. docker volume inspect minikube for the details on it.
So you may want to try to use the /var dir as a workaround.
If the previous solution doesn't meet your expectations and you still want to use Docker as your Minikube driver - don't, because (as far as I know) you can't use extra mounts with the docker driver. Use a VM driver instead.
Another solution: if you don't like the idea of using a VM, use kind (Kubernetes in Docker).
kind supports extra mounts. To configure it, check the kind documentation on extra mounts:
Extra Mounts
Extra mounts can be used to pass through storage on the host to a kind node for persisting data, mounting through code etc.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # add a mount from /path/to/my/files on the host to /files on the node
    extraMounts:
      - hostPath: /path/to/my/files/
        containerPath: /files
And the rest is like you described: you need to create a PV with the same hostPath as specified by containerPath.
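For example, a minimal sketch of such a PV for the kind config above (the name and size are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: files-pv
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /files   # must match containerPath from the kind extraMounts config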
You could also run Minikube directly on the host, without any VM or container, by specifying --driver=none with minikube start, which can be useful in some cases, but check the Minikube docs for more information.

AZDATA BDC CREATE stuck. Control containers pending. Scheduling error on NFS PVC

I am very new to Linux, Docker and Kubernetes. I need to set up an on-premises POC to showcase BDC.
What I have installed:
1. Ubuntu 19.10
2. Kubernetes Cluster
3. Docker
4. NFS
5. Settings and prerequisites, but I am obviously missing stuff.
This has been done with stitched-together tutorials. I am stuck on "AZDATA BDC CREATE". Error below:
Scheduling error on POD PVC.
Some more information:
NFS information
Storage class info
More info 2019-12-20:
PVs & PVCs bound on the NFS side
Dynamic volume provisioning, together with a StorageClass, allows the cluster to provision PersistentVolumes on demand. In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
First make sure you have defined the StorageClass properly. You have defined an nfs-dynamic class, but it is not set as the default storage class, which is why your claims cannot bind volumes from it. You have two options:
1. Mark it as the default storage class by executing the command below:
$ kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
2. Define the storage class you used in the PVC configuration file.
Here is an example configuration of such a file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
Simply add the line storageClassName: nfs-dynamic.
Then make sure you have followed the steps from this instruction: nfs-kubernetes.
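Applied to your case, the claim would look something like this minimal sketch (the claim name and size are placeholders; nfs-dynamic is the class you already defined):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: nfs-dynamic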

How to install Ceph on a Kubernetes cluster

We want to use Ceph, but we want to use Docker and Kubernetes to deploy new instances of Ceph quickly.
I tried to use the default image from Docker Hub: ceph/daemon-base. But it didn't work.
I tried to use ceph-container. It doesn't seem to work either.
This is my last deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ceph3-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ceph3
  template:
    metadata:
      labels:
        app: ceph3
    spec:
      containers:
        - name: ceph
          image: ceph/daemon-base:v3.0.5-stable-3.0-luminous-centos-7
          resources:
            limits:
              memory: 512Mi
              cpu: "500m"
            requests:
              memory: 256Mi
              cpu: "250m"
          volumeMounts:
            - mountPath: /etc/ceph
              name: etc-ceph
            - mountPath: /var/lib/ceph
              name: lib-ceph
      volumes:
        - name: etc-ceph
          hostPath:
            path: /etc/ceph
        - name: lib-ceph
          hostPath:
            path: /var/lib/ceph
Has anyone already installed a Ceph instance on Kubernetes?
I tried to follow the tutorial here.
But the pods are not working:
pod/ceph-mds-7b49574f48-vhvtl 0/1 Pending 0 81s
pod/ceph-mon-75c49c4fd5-2cq2r 0/1 CrashLoopBackOff 3 81s
pod/ceph-mon-75c49c4fd5-6nprj 0/1 Pending 0 81s
pod/ceph-mon-75c49c4fd5-7vrp8 0/1 Pending 0 81s
pod/ceph-mon-check-5df985478b-d87rs 1/1 Running 0 81s
The common practice for deploying stateful systems on Kubernetes is to use an Operator to manage and codify the application's lifecycle. Rook is an operator that provides Ceph lifecycle management on Kubernetes clusters.
Documentation for using Rook to deploy Ceph clusters can be found at https://rook.io/docs/rook/v1.1/ceph-storage.html
For a basic introduction, you can use the Rook Storage Quickstart guide.
The core Ceph team is highly involved in working on Rook and with the Rook community, Rook is widely deployed within the Kubernetes community for distributed storage applications, and the Ceph Days event has now explicitly added Rook to become Ceph + Rook Days.
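Roughly, the quickstart boils down to applying the example manifests from the Rook repository. Exact paths and file names vary between Rook releases, so treat this as a hedged sketch matching the v1.1 docs linked above:
git clone --single-branch --branch release-1.1 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml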
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:ceph:default" cannot list resource "pods" in API group "" in the namespace "ceph".
This error means your default service account doesn't have access to pod resources in the ceph namespace; look into the documentation on how to deal with RBAC and how to grant access. There is the same error reported on GitHub.
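As a hedged sketch (the Role and RoleBinding names are illustrative), granting the ceph:default service account read access to pods in that namespace would look something like:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: ceph
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: ceph
subjects:
  - kind: ServiceAccount
    name: default
    namespace: ceph
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io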
Install Ceph
To install Ceph I would recommend you use Helm.
Install helm
There are a few ways to install Helm; you can find them there.
Personally I use the Helm GitHub releases to download the latest version. At the time of answering this question the latest release was 2.15.0. This is how you can install it:
1. Download Helm (if you want to install the current version of Helm, check the Helm GitHub releases link above, since there might be a newer one; all you then need to do is change v2.15.0 in the wget and tar commands to the new version, for example v2.16.0)
wget https://get.helm.sh/helm-v2.15.0-linux-amd64.tar.gz
2. Unpack it
tar -zxvf helm-v2.15.0-linux-amd64.tar.gz
3. Find the helm binary in the unpacked directory, and move it to its desired destination
mv linux-amd64/helm /usr/local/bin/helm
4. Install Tiller
The easiest way to install Tiller into the cluster is simply to run helm init. This will validate that Helm's local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default (kubectl config view). Once it connects, it will install Tiller into the kube-system namespace.
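On clusters with RBAC enabled, Tiller typically also needs a service account with sufficient permissions; a common (if permissive) Helm 2 pattern is the following sketch:
# create a service account for Tiller and give it cluster-admin (fine for a POC, too broad for production)
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller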
And then follow the official documentation to install Ceph.

Kubernetes Ceph StorageClass with dynamic provisioning

I'm trying to set up my Kubernetes cluster with a Ceph cluster using a StorageClass, so that with each PVC a new PV is created automatically inside the Ceph cluster.
But it doesn't work. I've tried a lot, read a lot of documentation and tutorials, and can't figure out what went wrong.
I've created 2 secrets, one for the Ceph admin user and one for another user, kube, which I created with the commands below to grant access to a Ceph OSD pool.
Creating the pool:
sudo ceph osd pool create kube 128
Creating the user:
sudo ceph auth get-or-create client.kube mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' \
-o /etc/ceph/ceph.client.kube.keyring
After that I exported both the keys and converted them to Base64 with:
sudo ceph auth get-key client.admin | base64 and sudo ceph auth get-key client.kube | base64
I used those values inside my secret.yaml to create kubernetes secrets.
apiVersion: v1
kind: Secret
type: "kubernetes.io/rbd"
metadata:
  name: ceph-secret
data:
  key: QVFCb3NxMVpiMVBITkJBQU5ucEEwOEZvM1JlWHBCNytvRmxIZmc9PQo=
And another one named ceph-user-secret.
Then I created a storage class to use the Ceph cluster:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: publicIpofCephMon1:6789,publicIpofCephMon2:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
To test my setup I created a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
But it gets stuck in the pending state:
# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-eng   Pending                                      standard       25m
Also, no images are created inside the Ceph kube pool.
Do you have any recommendations on how to debug this problem?
I tried installing the ceph-common Ubuntu package on all Kubernetes nodes. I also switched the kube-controller-manager Docker image to an image provided by AT&T which includes the ceph-common package:
https://github.com/att-comdev/dockerfiles/tree/master/kube-controller-manager
The network is fine; I can access my Ceph cluster from inside a pod and from every Kubernetes host.
I would be glad if anyone has any ideas!
You must use the access mode ReadWriteOnce.
As you can see at https://kubernetes.io/docs/concepts/storage/persistent-volumes/ (Persistent Volumes section), RBD volumes do not support ReadWriteMany mode. Choose a different volume plugin (CephFS, for example) if you need to read and write data on the PV from several pods.
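In other words, a hedged sketch of the corrected claim from the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteOnce   # RBD supports ReadWriteOnce; ReadWriteMany is not available for this plugin
  resources:
    requests:
      storage: 1Gi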
As an expansion on the accepted answer...
RBD is a remote block device, i.e. an external hard drive, like iSCSI. The filesystem is interpreted by the client container, so it can only be written by a single user or corruption will happen.
CephFS is a network-aware filesystem similar to NFS or SMB/CIFS. It allows multiple writes to different files. The filesystem is interpreted by the Ceph server, so it can accept writes from multiple clients.