I deployed HDFS into Minikube using a Helm chart, on a multi-node Minikube cluster (3 nodes). After deploying HDFS, every time I run minikube start the pods restart from scratch and I lose the data I had put on HDFS. So I tried to apply a PVC:
persistence:
  nameNode:
    enabled: true
    storageClass: standard-ssd
    accessMode: ReadWriteOnce
    size: 50Gi
  dataNode:
    enabled: true
    storageClass: standard-hdd
    accessMode: ReadWriteOnce
    size: 200Gi
but I get this error: error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
How can I solve this problem?
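For reference, a persistence block like the one above is normally passed to the chart when installing or upgrading the release, rather than applied on its own; a rough sketch, with placeholder release and chart names:

helm upgrade --install hdfs ./charts/hdfs -f values.yaml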
Related
I'm trying to deploy IBM-MQ to Kubernetes (Rancher) using helmfile. I was getting the same error as here.
It wasn't working with the storage class "longhorn", but it was working with the storage class "local-path". I added security: initVolumeAsRoot: true to my helmfile, and now it looks like this:
....
- name: ibm-mq
  ...
  chart: ibm-stable-charts/ibm-mqadvanced-server-dev
  values:
    - license: accept
      security:
        initVolumeAsRoot: true
      resources:
        limits:
          cpu: 800m
          memory: 800Mi
        requests:
          cpu: 500m
          memory: 512Mi
      image:
        tag: latest
      dataPVC:
        storageClassName: "longhorn"
        size: 500Mi
...
But in Lens, under Stateful Sets, I can see that it can't create a pod because of this error: create Pod ibm-mq-0 in StatefulSet ibm-mq failed error: pods "ibm-mq-0" is forbidden: failed quota: default: must specify limits.cpu,limits.memory.
But the same helmfile without the security section worked fine: it didn't fail because of limits, but it did fail because of problems with longhorn (as in the question I linked). If I change the storage class to local-path (without security) it works fine, but the problem is that with local-path I need to delete the volume manually after every restart, which is not what I want (for example, a database works on longhorn without deleting the volume after every restart). What might be the problem here? I'm running it with helmfile -n namespace apply.
UPD: I tried to place all the values in the order they're presented here, but it didn't work. I'm using Helm 3, helmfile v0.141.0, kubectl 1.22.2.
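Side note on the quota error itself: a namespace ResourceQuota that requires limits can also be satisfied by a LimitRange that injects default limits into any container that doesn't declare its own, including injected init containers; a rough sketch, with arbitrary numbers and a placeholder name:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits          # placeholder name
  namespace: namespace          # the namespace used with helmfile -n
spec:
  limits:
    - type: Container
      default:                  # default limits for containers that set none
        cpu: 500m
        memory: 512Mi
      defaultRequest:           # default requests for containers that set none
        cpu: 250m
        memory: 256Mi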
I'm just starting my journey into cloud native and Kubernetes (Minikube for now), but I'm stuck because I cannot pass files into pod containers and have them persist.
I have Nginx, php-fpm and MariaDB containers. Now I just need to test the app in Kubernetes (it runs fine under docker-compose), i.e. the same way I was running it with docker-compose.
How can I mount volumes in this scenario?
Volume file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/docker/volumes/sylius-standard-mysql-sylius-dev-data/_data/sylius
Claim File:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
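The Pod where I'd consume the claim would look roughly like this (the image, password and mount path are just placeholders for my MariaDB container):

apiVersion: v1
kind: Pod
metadata:
  name: mariadb
spec:
  containers:
    - name: mariadb
      image: mariadb:10.6               # placeholder image/tag
      env:
        - name: MARIADB_ROOT_PASSWORD
          value: changeme               # placeholder value
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql     # where MariaDB keeps its data
  volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysql-pv-claim       # the claim defined above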
Thank you for the guidance...
It depends on which Minikube driver you're using. Check out https://minikube.sigs.k8s.io/docs/handbook/mount/ for a full overview, but basically you have to make sure the host folder is shared with the guest VM, then hostPath volumes will work as normal. You may want to try Docker Desktop instead as it somewhat streamlines this process.
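For example, with a VM-based driver you can share a host directory into the node with minikube mount; the paths below are just an illustration:

# share a host folder into the Minikube node (example paths)
minikube mount $HOME/appdata:/appdata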
Found on the minikube GitHub repo, in the feature request "mount host volumes into docker driver":
Currently we are mounting the /var directory as a docker volume, so
that's the current workaround.
i.e. use this host directory for getting things into the container?
See e.g. docker volume inspect minikube for the details on it.
So you may want to try using the /var directory as a workaround.
If the previous solution doesn't meet your expectations and you still want to use docker as your minikube driver - don't, because you can't use extra mounts with docker (as far as I know). Use a VM.
Another solution: if you don't like the idea of using a VM, use kind (Kubernetes in Docker).
kind supports extra mounts. To configure it, check the kind documentation on extra mounts:
Extra Mounts
Extra mounts can be used to pass through storage on the host to a kind node for persisting data, mounting through code etc.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /path/to/my/files/
    containerPath: /files
And the rest is like you described: You need to create a PV with the same hostPath as specified by containerPath.
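For the kind config above, the matching PV could look something like this (name, size and storage class are just examples):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: files-pv
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /files        # same path as containerPath in the kind config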
You could also use minikube without any driver by specifying --driver=none with minikube start, which can be useful in some cases, but check the minikube docs for more information.
I'm currently building a backend that, among other things, involves sending RabbitMQ messages from localhost into a K8s cluster where containers can run and pick up specific messages.
So far I've been using Minikube to carry out all of my Docker and K8s development, but I have run into a problem when trying to install RabbitMQ.
I've been following the RabbitMQ Cluster Operator official documentation (installing) (using). I got to the "Create a RabbitMQ Instance" section and ran into this error:
1 pod has unbound immediate persistentVolumeClaims
I fixed it by continuing with the tutorial and adding a PV and PVC to my RabbitMQCluster YAML file. I tried to apply it again and came across my next issue:
1 insufficient cpu
I've tried messing around with resource limits and requests in the YAML file but with no success yet. After Googling and doing some general research, I noticed that my specific problem and setup (Minikube and RabbitMQ) don't seem to be very common. My question is: have I gone beyond the scope or intended use case of Minikube by trying to install external services like RabbitMQ? If so, what should my next step be?
If not, are there any useful tutorials out there for installing RabbitMQ in Minikube?
If it helps, here's my current YAML file for the RabbitMQCluster:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
spec:
  persistence:
    storageClassName: standard
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbimq-pvc
spec:
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  hostPath:
    path: /mnt/app/rabbitmq
    type: DirectoryOrCreate
Edit:
Command used to start Minikube:
minikube start
Output:
😄 minikube v1.17.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🎉 minikube 1.18.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.18.1
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
According to the command you used to start minikube, the error occurs because you don't have enough resources assigned to your cluster.
Judging by the source code of the rabbitmq cluster operator, it seems that it needs 2 CPUs.
You need to adjust the number of CPUs (and probably the memory as well) when you initialize your cluster. Below is an example that starts a Kubernetes cluster with 4 CPUs and 8 GB of RAM:
minikube start --cpus=4 --memory 8192
If you want to check your currently allocated resources, you can run kubectl describe node.
Raising CPUs and memory definitely gets things unstuck, but if you are running minikube on a dev machine - which is probably a laptop - this is a pretty big resource requirement. I get that Rabbit runs in some really big scenarios, but can't it be configured to work in small ones?
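For what it's worth, the RabbitmqCluster CRD does expose a resources override (at least in recent operator versions), so on a laptop the requests can be shrunk; a rough sketch with arbitrary numbers, to be checked against the operator version you run:

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
spec:
  replicas: 1
  resources:
    requests:
      cpu: 500m        # below the operator's default request
      memory: 1Gi
    limits:
      cpu: 500m
      memory: 1Gi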
I am trying to set up persistent storage with the new prometheus-community helm chart. I have modified the helm values file as shown below. Currently, when the chart is reinstalled (I use Tiltfiles for this), the PVC is deleted and therefore the data is not persisted.
I assume the problem could have something to do with the fact that there is no StatefulSet running for the server, but I am not sure how to fix it.
(The solutions from here do not solve my problem, as they are for the old chart.)
server:
  persistentVolume:
    enabled: true
    storageClass: default
    accessModes:
      - ReadWriteOnce
    size: 8Gi
I enabled the statefulset on the prometheus server and now it seems to work.
server:
  persistentVolume:
    enabled: true
    storageClass: default-hdd-retain
    accessModes:
      - ReadWriteOnce
    size: 40Gi
  statefulSet:
    enabled: true
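To confirm the data now survives a redeploy, I just check that the PVC created by the StatefulSet keeps its name and stays Bound across the reinstall (the namespace here is only an example):

kubectl get pvc -n monitoring
# the server PVC should still be listed with STATUS Bound after the chart is reinstalled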
I changed the size of a PVC.
Following documentation I found online, I went through the steps below.
I first added the following line to the StorageClass manifest:
allowVolumeExpansion: true
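So the StorageClass now looks roughly like this (the provisioner shown is only an example for rook-ceph; keep whatever your class already uses):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-blockp
provisioner: rook-ceph.rbd.csi.ceph.com   # example; keep your existing provisioner and parameters
allowVolumeExpansion: true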
After changing the size with the following command, I deleted the pod so that it would be recreated with the PVC.
But at the end of these steps, the size of the PVC does not change.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-fp
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-blockp
The result of these steps should be a resized PVC, but the size does not change.
What is the version of your Kubernetes cluster? The PVC resize feature is enabled by default only for k8s version 1.11 and above. For prior versions of k8s, the ExpandPersistentVolumes feature gate and the PersistentVolumeClaimResize admission controller need to be enabled explicitly.
What is the backend storage provider? Does it support volume resize through PVCs?
As of now, the providers below support PVC resize:
AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD
You can find more information at https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
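If both conditions are met, the resize itself is usually just a matter of increasing spec.resources.requests.storage on the live claim, for example (the claim name is taken from the question, the new size is arbitrary):

kubectl patch pvc pvc-fp -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
# then watch until the new capacity is reflected
kubectl get pvc pvc-fp -w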