I have installed the Helm 3 binary locally on my Mac. I understand that Helm 3 does not use Tiller on the Kubernetes cluster. How do I configure Helm 3 to know where to find my cluster? I have looked all through the documentation but found nothing.
OK, so I figured it out:
Helm uses your kubeconfig file, the same one kubectl uses (~/.kube/config by default, or whatever the KUBECONFIG environment variable points to).
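For example, to point Helm 3 at a particular cluster (the config path and context name below are placeholders):

export KUBECONFIG=$HOME/.kube/my-cluster-config
helm ls

or per invocation, using Helm's global flags:

helm ls --kubeconfig ~/.kube/my-cluster-config --kube-context my-context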
Related
I am new to k8s and helm. I am deploying an open-source distributed system called Zeebe that provides helm charts for k8s deployment. I have seen that even after executing the helm uninstall command, the persistent volume claims and persistent volumes do not get deleted.
A workaround stated in this helm GitHub issue is to define a helm hook. Being new to helm, I cannot find an example to try this out. The only file I have edited so far when installing and uninstalling the helm chart is the values.yaml file. Kindly guide me on how and where to write a helm hook for the Zeebe deployment so that it deletes its persistent volumes on helm uninstall.
Thanks.
I finally found a solution to this. All PVs and PVCs could be deleted with the command:
kubectl delete pvc -l app=camunda-platform
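For reference, the hook approach from the GitHub issue would look roughly like the manifest below, placed in a chart's templates/ directory (hooks must be part of the installed chart, so in practice this means a small wrapper chart around Zeebe). This is an untested sketch: the label selector just mirrors the command above, and the Job's service account would additionally need RBAC permission to delete PVCs.

apiVersion: batch/v1
kind: Job
metadata:
  name: delete-pvcs-on-uninstall
  annotations:
    "helm.sh/hook": post-delete            # run after `helm uninstall` removes the release
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: delete-pvcs
          image: bitnami/kubectl           # any image that ships kubectl works
          command: ["kubectl", "delete", "pvc", "-l", "app=camunda-platform"]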
We want to use TDengine on Kubernetes, but I don't see any docs. Are there any problems running it in k8s, or anything else to be aware of?
As helm charts are the popular way to deploy services on Kubernetes, it would be wonderful if we could run helm install tdengine to install a cluster.
If possible, I can contribute a helm chart and test it in my cluster.
How about these two?
https://github.com/taosdata/TDengine-Operator/tree/3.0/helm/tdengine
https://docs.tdengine.com/deployment/helm/
They can help you run a TDengine database cluster on K8s with Helm.
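Since the chart lives inside that repository, the install flow is roughly the following (the release and namespace names are placeholders; check the repo for current branch and chart layout):

git clone -b 3.0 https://github.com/taosdata/TDengine-Operator.git
helm install tdengine ./TDengine-Operator/helm/tdengine -n tdengine --create-namespace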
I am looking for a helm chart that contains the default (most commonly used) configuration for ZooKeeper (config for statefulset, netpol, service, etc.), v3.5.0+. A chart that I can then use to deploy a ZK cluster in a k8s cluster.
I was hoping to obtain such a Helm chart (the values.yaml file) from Bitnami (e.g. https://charts.bitnami.com/, or https://bitnami.com/stack/zookeeper/helm). But no luck.
Any ideas or pointers would be greatly appreciated.
Thank you,
Ahmed.
You can get the default values.yaml from the GitHub repository or through the helm command:
helm show values bitnami/zookeeper > values.yaml
Or from the GitHub repository directly; every chart's page links to its repository, e.g.:
bitnami/zookeeper/values.yaml
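For example, to fetch the defaults, edit them, and deploy (the release name my-zookeeper is just a placeholder):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm show values bitnami/zookeeper > values.yaml
# edit values.yaml (statefulset, network policy, service settings, ...)
helm install my-zookeeper bitnami/zookeeper -f values.yaml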
I am trying to deploy a MongoDB replicaset in my kind Kubernetes cluster.
Well, my first step is to run this command:
helm upgrade --install mongodb bitnami/mongodb \
  --set mongodb.global.storageClass=standard \
  --set mongodbRootPassword=root \
  --set mongodbUsername=mongoUser \
  --set mongodbPassword=mongoUser \
  --set mongodbDatabase=articles \
  --set mongodb.persistence.size=5Gi \
  --set mongodb.service.type=NodePort \
  --set mongodb.service.nodePort=30005 \
  --set mongodb.metrics.enabled=false \
  --set replicaSet.enabled=true \
  --namespace replicaset-data
But I have two questions:
How can I connect to this replicaSet?
The next question is about persistent data. I have extraMounts defined in my kind cluster, but I have not found a way to use it with this helm chart.
Could anyone help me with this?
Thanks a lot!
How can I connect to this replicaSet?
You can use a K8s Service to connect to these replicas: a Service of type LoadBalancer, port-forwarding, or an Ingress.
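For example, for quick local access you can port-forward the Service (the names below follow the install command in the question; check the actual service name first with kubectl get svc -n replicaset-data):

kubectl port-forward svc/mongodb-headless 27017:27017 -n replicaset-data
mongosh "mongodb://mongoUser:mongoUser@127.0.0.1:27017/articles"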
The next question is about persistent data. I have extraMounts defined in my kind cluster, but I have not found a way to use it with this helm chart.
You can use a PV and PVC in the K8s cluster to persist your database's data.
Read more at: https://github.com/bitnami/charts/issues/3291
Parameters: https://artifacthub.io/packages/helm/bitnami/mongodb#persistence-parameters
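As a sketch of the kind angle: extraMounts only makes a host directory visible on the kind node; to consume it from the chart you still create a hostPath PV and a matching PVC yourself (all names and the /data path below are assumptions; /data should be the containerPath from your kind extraMounts entry):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-data
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual       # static binding: PV and PVC must use the same class
  hostPath:
    path: /data                  # the containerPath from the kind extraMounts entry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
  namespace: replicaset-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 5Gi

In standalone mode the chart can then be pointed at that claim with --set persistence.existingClaim=mongodb-data (see the persistence parameters link above).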
Well.
I finally have an answer to the questions... it is not a definitive answer, but it is progress.
Everything I say here is about the bitnami/mongodb Helm chart.
Firstly, I think it is better to use a values.yaml file to deploy the chart; that way you can see all the available parameters. There you can see that if you deploy the chart in standalone mode, you can indicate the name of an existing PVC, so you can bind your deployment to your PVC.
However, you can also see in values.yaml that in replicaset mode you cannot set this parameter (you can set others, like the size, the type...).
On the other hand, the URL for connecting to mongo from inside the Kubernetes cluster is:
mongodb://<user>:<password>@<service-name>.<namespace>:<port>/?authMechanism=DEFAULT&directConnection=true
If you deploy mongo in standalone mode, the service name is "mongodb" by default. If you deploy mongo in replicaset mode, the service name is "mongodb-headless" by default.
So, knowing that, it is easy to set the environment variable in your client service to connect to the mongo service.
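With the values from the install command in the question, that would be something like the following (verify the actual service name and port with kubectl get svc -n replicaset-data):

mongodb://mongoUser:mongoUser@mongodb-headless.replicaset-data:27017/articles?authMechanism=DEFAULT&directConnection=true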
So, the remaining question is: is there a way to set the PVC in replicaset mode, and if so, how?
Are there any problems when using helm 2 and helm 3 in parallel - on the same cluster?
The reason behind this is that the Terraform helm provider is still NOT available for helm 3. But with another application we'd like to proceed with helm 3.
Have you maybe tried this? Or did you run into some problems?
Helm 2 and Helm 3 can be installed concurrently to manage the same cluster. This works when Helm 2 uses ConfigMaps for storage, because Helm 3 uses Secrets for storage. There is, however, a conflict when Helm 2 uses Secrets for storage and stores them in the same namespace as the release. The conflict occurs because Helm 3 uses different labels and ownership for the secret objects than Helm 2 does. It can therefore try to create a release that it thinks does not exist, but will then fail because Helm 2 already has a secret with that name in that namespace.
Additionally, Helm 2 releases can be migrated so that Helm 3 manages releases previously handled by Helm 2; ref. https://github.com/helm/helm-2to3. This also works when Helm 2 uses ConfigMaps for storage, because Helm 3 uses Secrets. There is again a conflict, however, when Helm 2 uses Secrets, because of the same naming convention.
A possible solution around this would be for Helm 3 to use a different naming convention for release versions.
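For the migration route, the helm-2to3 plugin flow looks roughly like this (the release name is a placeholder; the convert command also supports --dry-run for a safe preview):

helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config          # copy Helm 2 config (repos, plugins) to Helm 3
helm 2to3 convert my-release   # convert one release's stored versions to Helm 3
helm 2to3 cleanup              # remove Helm 2 data once everything is migrated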
There is no problem using them in parallel. However, you need to treat them somewhat like separate tools, meaning that Helm 3 won't list (or in any way manage) your releases from Helm 2, and vice versa.