I am trying to deploy a MongoDB replica set in my kind Kubernetes cluster.
My first step is to run this command:
helm upgrade --install mongodb bitnami/mongodb \
  --set mongodb.global.storageClass=standard \
  --set mongodbRootPassword=root \
  --set mongodbUsername=mongoUser \
  --set mongodbPassword=mongoUser \
  --set mongodbDatabase=articles \
  --set mongodb.persistence.size=5Gi \
  --set mongodb.service.type=NodePort \
  --set mongodb.service.nodePort=30005 \
  --set mongodb.metrics.enabled=false \
  --set replicaSet.enabled=true \
  --namespace replicaset-data
But I have two questions:
How can I connect to this replica set?
The next question is about persistent data. I have extraMounts defined in my kind cluster, but I have not found a way to use them with this Helm chart.
Could anyone help me with this?
Thanks a lot!
How can I connect to this replica set?
You can use a Kubernetes Service to connect to the replicas: a Service of type LoadBalancer, a port-forward, or an Ingress.
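For example, a quick way to test the connection from your own machine is a port-forward (the service name and credentials here are assumptions based on the chart's defaults and the install command above):

# Forward the MongoDB service to localhost
kubectl port-forward -n replicaset-data svc/mongodb-headless 27017:27017
# In another terminal, connect with the mongo shell
mongosh "mongodb://mongoUser:mongoUser@localhost:27017/articles"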
The next question is about persistent data. I have extraMounts defined in my kind cluster, but I have not found a way to use them with this Helm chart.
You can use a PV and PVC with the K8s cluster to persist your database's data.
Read more at: https://github.com/bitnami/charts/issues/3291
Parameters: https://artifacthub.io/packages/helm/bitnami/mongodb#persistence-parameters
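As a rough sketch, a kind extraMount can be surfaced to the chart through a hostPath PersistentVolume and a matching claim that standalone mode can consume via persistence.existingClaim (the path, names, and storage class below are assumptions to adapt to your kind config):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /data/mongodb   # the containerPath from the kind extraMounts entry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
  namespace: replicaset-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi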
Well.
I finally have an answer to the questions... it is not a definitive answer, but it is progress.
Everything I say here is about the bitnami/mongodb Helm chart.
First, I think it is better to deploy the chart with a values.yaml file, so you can see all the available parameters. There you can see that if you deploy the chart in standalone mode, you can specify the name of an existing PVC, and so bind your deployment to your PVC.
However, you can also see in the values.yaml that in replicaset mode you cannot set this parameter (you can set others, such as the size, the type...).
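For illustration, a minimal values.yaml along those lines might look like this (parameter names follow recent versions of the bitnami/mongodb chart and may differ in older ones; existingClaim only takes effect in standalone mode, as noted above):

architecture: standalone   # or "replicaset"
auth:
  rootPassword: root
  usernames: ["mongoUser"]
  passwords: ["mongoUser"]
  databases: ["articles"]
persistence:
  existingClaim: mongodb-pvc   # honored in standalone mode only
  size: 5Gi

You would then deploy it with: helm upgrade --install mongodb bitnami/mongodb -f values.yaml --namespace replicaset-data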
On the other hand, the URL to connect to MongoDB from inside the Kubernetes cluster is:
mongodb://<user>:<password>@<service-name>.<namespace>:<port>/?authMechanism=DEFAULT&directConnection=true
If you deploy MongoDB in standalone mode, the service name is "mongodb" by default. If you deploy it in replicaset mode, the service name is "mongodb-headless" by default.
Knowing that, it is easy to set the environment variable in your client service to connect to the MongoDB service.
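With the values from the question above, that comes out as something like this (assuming the chart's default port 27017):

mongodb://mongoUser:mongoUser@mongodb-headless.replicaset-data:27017/articles?authMechanism=DEFAULT&directConnection=true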
So, the remaining question is: is there a way to set the PVC in replicaset mode? And if so, how?
Related
I need to set up NiFi in Kubernetes (microk8s), in a VM (Ubuntu, using VirtualBox) using a helm chart. The end goal is to have two-way communication with Kafka, which is also already deployed in Kubernetes.
I have found a helm chart for NiFi available through Cetic here. Kafka is already set up to allow external access through a NodePort, so my assumption is that I should do the same for NiFi (at least for simplicity's sake), though any alternative solution is welcome.
From the documentation, there is NodePort access optionality:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). You’ll be able to contact the NodePort service, from outside the cluster, by requesting NodeIP:NodePort.
Additionally, the documentation states (paraphrasing):
service.type defaults to NodePort
However, this does not appear to be true for the helm file, given that the default value in the chart's values.yaml file has service.type=ClusterIP.
I have very little experience with any of these technologies, so my question is, how do I actually set up the NiFi helm chart YAML file to allow two-way communication (presumably via NodePorts)? Is it as simple as "requesting NodeIP:NodePort", and if so, how do I do this?
UPDATE
I attempted JM Robles's approach (which does not use helm), but the API version used for Ingress is out-of-date and I haven't been able to figure out how to fix it.
I also tried GetInData's approach, but the helm commands provided result in: Error: unknown command "nifi" for "helm".
I found an answer, for any faced with a similar problem. As of late January 2023, the following can be used to set up NiFi as described in the question:
helm repo add cetic https://cetic.github.io/helm-charts
helm repo update
helm install -n <namespace> \
  --set persistence.enabled=True \
  --set service.type=NodePort \
  --set properties.sensitiveKey=<key you want> \
  --set auth.singleUser.username=<your username> \
  --set auth.singleUser.password=<password you select, must be at least 12 characters> \
  nifi cetic/nifi
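Once it is running, "requesting NodeIP:NodePort" is exactly what it sounds like. A sketch of how you might find the address (the /nifi path is NiFi's default UI path, and the service name assumes the release name "nifi" used above):

# Node IP: the INTERNAL-IP column
kubectl get nodes -o wide
# NodePort: the second number in the PORT(S) column, e.g. 8443:3xxxx/TCP
kubectl get svc -n <namespace> nifi
# Then open https://<NodeIP>:<NodePort>/nifi in a browser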
I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. To alleviate the manual burden, we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands I would need to run in the chart there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question 1: is it possible to include kubectl commands in my Helm chart?
Question 2: is it possible to add a dependency on a chart that is only available remotely, e.g. the postgres dependency above?
Question 3: if you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
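As a sketch of the second approach, a reset script built from the commands in the question itself could look like:

# Start over with a completely clean cluster
minikube delete
minikube start

# Reinstall everything via Helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test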
I want to set up Elasticsearch on a Kubernetes cluster using Helm. I can set up Elasticsearch without persistence, using the Helm command below.
helm install --name elasticsearch incubator/elasticsearch \
--set master.persistence.enabled=false \
--set data.persistence.enabled=false \
--set image.tag=6.4.2 \
--namespace logging
However, I am not able to use it with persistence. Moreover, I am confused, as I am using neither cloud-based storage (AWS, GCE) nor NFS; I am using local VM storage.
I added a disk to my VM environment and formatted it as ext4. Now I am trying to use it as a persistent disk for my Elasticsearch deployment.
I have tried many approaches, but none have worked so far.
If you need any information about my setup, I would be happy to provide it.
I just need a solution that works.
I don't believe this chart will support local storage.
Looking at the volumeClaimTemplate, such as in master-statefulset.yaml, shows that it is missing key parameters for a local volume setup (such as path, nodeAffinity, and volumeBindingMode) described here. If you are using a cloud deployment, just use a cloud volume claim. If you have deployed a cluster on-premises or just onto your computer, then you should fork the chart and adjust the volume claims to meet the requirements for local storage, roughly as sketched below.
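As a minimal sketch of what the chart's claims would need to bind to, here is a local PersistentVolume plus its StorageClass (the path, capacity, and node name are assumptions to adapt to your VM):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/es-data   # the ext4 disk mounted on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your-node-name>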
Either way, in your future posts you should include relevant logs. With Kubernetes errors it is helpful to see all parts of the stack: Kubernetes control plane logs, object events (like the output from describing the volume claim), Helm logs, Elasticsearch pod logs failing to discover a volume, etc.
I used Helm to install the Prometheus operator and kube-prometheus into my Kubernetes cluster using the following commands:
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring --set rbacEnable=false
helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=false --namespace monitoring
Everything is running fine; however, I want to set up email alerts, and in order to do so I must configure the SMTP settings in the "custom.ini" file, according to the Grafana website. I am fairly new to Kubernetes and Helm charts, so I have no idea which command I would use to access this file or make updates to it. Is it possible to do so without having to redeploy?
Can anyone provide me with a command to update custom values?
You could pass grafana.env values to add the SMTP-related settings: GF_SMTP_ENABLED=true, GF_SMTP_HOST, GF_SMTP_USER and GF_SMTP_PASSWORD should do the trick. The prometheus-operator chart relies on the upstream stable/grafana chart (although it is still using the 1.25 version).
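A hedged example of how that might look with the release from the question (the SMTP host and credentials are placeholders; check the chart's values.yaml for the exact grafana.env structure):

helm upgrade kube-prometheus coreos/kube-prometheus \
  --namespace monitoring \
  --set global.rbacEnable=false \
  --set grafana.env.GF_SMTP_ENABLED=true \
  --set grafana.env.GF_SMTP_HOST=smtp.example.com:587 \
  --set grafana.env.GF_SMTP_USER=<smtp-user> \
  --set grafana.env.GF_SMTP_PASSWORD=<smtp-password>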
I'm deploying Traefik to my Kubernetes cluster using Helm. Here's what I have at the moment:
helm upgrade --install load-balancer --wait \
  --set ssl.enabled=true,ssl.enforced=true,acme.enabled=true,acme.email=an@email.com \
  stable/traefik
I'm trying to configure Let's Encrypt. According to this documentation, you add the domains to the bottom of the .toml file.
Looking at the code for the helm chart, there's no provision for such configuration.
Is there another way to do this or do I need to fork the chart to create my own variation of the .toml file?
Turns out this is the chicken-and-egg problem described here.
For the Helm chart, if acme.enabled is set to true, then Traefik will automatically generate and serve certificates for domains configured in Kubernetes ingress rules. This is the purpose of the onHostRule = true line in the yaml file (referenced above).
To use Traefik with Let's Encrypt, we have to create an A record in our DNS server that points to the ip address of our load balancer. Which we can't do until Traefik is up and running. However, this configuration needs to exist before Traefik starts.
The only solution (at this stage) is to kill the first Pod after the A record configuration has propagated.
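For example (assuming the chart's default app=traefik label and the default namespace), restarting that first Pod once DNS has propagated might look like:

kubectl delete pod -l app=traefik
# The Deployment immediately creates a replacement Pod, which can now pass the ACME challenge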
Note that the stable/traefik chart now supports the ACME DNS-01 protocol. By using DNS it avoids the chicken and egg problem.
See: https://github.com/kubernetes/charts/tree/master/stable/traefik#example-aws-route-53
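As a rough sketch based on that README (the acme.* parameter names follow the chart's documented Route 53 example; the credentials and region are placeholders):

helm upgrade --install load-balancer --wait \
  --set ssl.enabled=true \
  --set acme.enabled=true \
  --set acme.email=an@email.com \
  --set acme.challengeType=dns-01 \
  --set acme.dnsProvider.name=route53 \
  --set acme.dnsProvider.route53.AWS_ACCESS_KEY_ID=<key-id> \
  --set acme.dnsProvider.route53.AWS_SECRET_ACCESS_KEY=<secret> \
  --set acme.dnsProvider.route53.AWS_REGION=us-east-1 \
  stable/traefik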