Sharing common components in a Helm chart

I have a Kubernetes cluster in which I would like to deploy various company-dependent pods and services.
All of them need some common components (e.g. ingress, traefik, postgres).
Therefore I designed this chart structure:
- myproject
  - charts
    - ingress
    - traefik
    - postgres
  - templates
    - svc1
    - pod1
    - svc2
    - pod2
My idea was to control the company-dependent pods/services via environment variables and do deployments like this:
helm install --set env="dev" --set company="cat" ./myproject
or
helm install --set env="prod" --set company="dog" ./myproject
svc1, svc2, ... read the env values.
Anyway, this construct doesn't work. I get an error that some common component already exists.
I understand this.
I think one way to avoid the problem would be to create a separate chart for ingress, traefik, etc. and install it first.
But I have the feeling that this is not the right way. What would be a good solution to this problem?

Are all the charts developed by you, or are you using 3rd-party charts? Depending on how you are using the charts, the solution might change.
Have you tried something related to DRY? This article is very helpful for applying DRY to Helm.
It would also be very helpful if you shared the error that is happening.
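As a sketch of the split the question already hints at (the chart and release names here are illustrative, not from the original setup), you would install the shared infrastructure once as its own release and then install one company-specific release per company:
helm install common ./common-infra    # hypothetical chart holding ingress, traefik, postgres
helm install cat-dev ./myproject --set env=dev --set company=cat
helm install dog-prod ./myproject --set env=prod --set company=dog
Because each release has a unique name and the shared components live in their own release, the per-company installs no longer collide over already-existing resources.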

Related

Helmfile with additional resource without chart

I know this is maybe a weird question, but I want to ask whether it's possible to also manage single resources (e.g. a ConfigMap or Secret) without a separate chart?
For example, I install an nginx-ingress and would like to additionally apply a Secret that includes HTTP basic-authentication data.
I can just reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate Helm chart just to apply the basic-auth Secret?
I have many releases that need a single additional resource (like a JSON ConfigMap or a single Secret), and it would be cumbersome to always need a separate chart for each release.
Thank you!
Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster), so you could in principle kubectl apply a file in a hook. Helmfile also has some integration with Kustomize, and it might be possible to add resources this way. As you've noted, you can also write local charts and put whatever YAML you need in those.
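A minimal sketch of the hook approach, assuming the Secret manifest lives in a local file named basic-auth-secret.yaml (the file and release names are illustrative):
releases:
  - name: nginx-ingress
    chart: ingress-nginx/ingress-nginx
    hooks:
      # runs kubectl on the host before helmfile syncs this release
      - events: ["presync"]
        showlogs: true
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]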
The occasional chart does support including either arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile's values: block to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...

Setting up NiFi for use with Kafka in Kubernetes using Helm in a VirtualBox

I need to set up NiFi in Kubernetes (microk8s), in a VM (Ubuntu, using VirtualBox) using a helm chart. The end goal is to have two-way communication with Kafka, which is also already deployed in Kubernetes.
I have found a helm chart for NiFi available through Cetic here. Kafka is already set up to allow external access through a NodePort, so my assumption is that I should do the same for NiFi (at least for simplicity's sake), though any alternative solution is welcome.
From the documentation, there is NodePort access optionality:
NodePort: Exposes the service on each Node's IP at a static port (the NodePort). You'll be able to contact the NodePort service, from outside the cluster, by requesting NodeIP:NodePort.
Additionally, the documentation states (paraphrasing):
service.type defaults to NodePort
However, this does not appear to be true for this chart, given that the default in the chart's values.yaml is service.type=ClusterIP.
I have very little experience with any of these technologies, so my question is, how do I actually set up the NiFi helm chart YAML file to allow two-way communication (presumably via NodePorts)? Is it as simple as "requesting NodeIP:NodePort", and if so, how do I do this?
UPDATE
I attempted JM Robles's approach (which does not use helm), but the API version used for Ingress is out-of-date and I haven't been able to figure out how to fix it.
I also tried GetInData's approach, but the helm commands provided result in: Error: unknown command "nifi" for "helm".
I found an answer, for anyone facing a similar problem. As of late January 2023, the following can be used to set up NiFi as described in the question:
helm repo add cetic https://cetic.github.io/helm-charts
helm repo update
helm install -n <namespace> \
  --set persistence.enabled=true \
  --set service.type=NodePort \
  --set properties.sensitiveKey=<key you want> \
  --set auth.singleUser.username=<your username> \
  --set auth.singleUser.password=<password you select, must be at least 12 characters> \
  nifi cetic/nifi
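To then reach the UI, one approach (a sketch; the service and namespace names follow from the install above) is to look up the assigned NodePort and a node address:
# find the NodePort assigned to the NiFi service (release name "nifi" from the install above)
kubectl get svc -n <namespace> nifi
# find a node address to pair with that port
kubectl get nodes -o wide
# the UI should then be reachable at http(s)://<node-ip>:<node-port>/nifi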

How do I use crossplane to Install helm charts (with provider-helm) into other cluster

I'm evaluating crossplane as our go-to tool to deploy our clients' different solutions and have struggled with one issue:
We want to install crossplane to one cluster on GCP (which we create manually) and use that crossplane to provision new cluster on which we can install helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell crossplane to install the helm charts into other clusters than itself.
This is what we have tried so far:
The provider-config in the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting to more easily generate a kubeconfig for the new cluster, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; we get errors like: "PodUnschedulable: Cannot schedule pods: gvisor").
I have only tried crossplane for a couple of days so I'm aware that I might be approaching this from a completely wrong angle but I do like the promise of crossplane and its approach compared to Terraform and alike.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second test with the kubeconfig feels quite complicated right now (many steps in the correct order to achieve it).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig file of the remote cluster which needs to be provided as a Kubernetes secret and referenced from ProviderConfig. So, as long as you've provided a proper kubeconfig to an external cluster that is accessible from your Crossplane cluster (a.k.a. control plane), provider-helm should be able to deploy the release to the remote cluster.
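For reference, a Secret like the one referenced by the ProviderConfig above can be created from a local kubeconfig file (the file name here is a placeholder):
kubectl -n crossplane-system create secret generic cluster-credentials \
  --from-file=kubeconfig=./remote-cluster.kubeconfig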
So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like an incompatibility between your cluster and the release, e.g. the external cluster only allows pods that run with gVisor, and the application you want to install with provider-helm is missing the corresponding labels.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration to the external cluster via the Helm CLI, using the same kubeconfig you built.
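For example (the release and chart names are placeholders):
KUBECONFIG=./remote-cluster.kubeconfig helm install test-release ./my-chart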
Regarding the inconvenience of building the kubeconfig: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide this. However, if you see an alternative that makes things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for this.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well (e.g. GKE with provider-gcp), then you can compose a Helm ProviderConfig together with a GKE Cluster resource, which would create the appropriate secret and ProviderConfig whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
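For completeness, a provider-helm Release that targets the remote cluster through the Secret-based ProviderConfig above might look like this (the chart details are illustrative):
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-release
spec:
  # points at the ProviderConfig named "default" above, which holds the remote kubeconfig
  providerConfigRef:
    name: default
  forProvider:
    chart:
      name: nginx
      repository: https://charts.bitnami.com/bitnami
      version: "13.2.0"
    namespace: default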

How to set up Prometheus Operator with Grafana to enable basic Kubernetes monitoring

I followed a bunch of tutorials on how to monitor Kubernetes with Prometheus and Grafana, all referring to a deprecated Helm operator.
According to the tutorials, Grafana comes out of the box complete with cluster monitoring.
In practice, Grafana is not installed with the chart
helm install prometheus-operator stable/prometheus -n monitor
nor is it installed with the newer community repo
helm install prometheus-operator prometheus-community/prometheus -n monitor
I installed the Grafana chart independently
helm install grafana-operator grafana/grafana -n monitor
And through the UI I tried to connect using in-cluster URLs:
prometheus-operator-server.monitor.svc.cluster.local:80
prometheus-operator-alertmanager.monitor.svc.cluster.local:80
The UI test indicates success but produces no metrics.
Is there a ready-made Helm operator with out-of-the-box Grafana?
How can Grafana interact with Prometheus?
You've used the wrong charts. Currently the project is named kube-prometheus-stack:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
If you look at values.yaml you'll notice switches for everything, including Prometheus, all the exporters, Grafana, all the standard dashboards, alerts for Kubernetes, and so on. It's all installed by one chart, and it's all linked together out of the box.
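A minimal install of that chart might look like this (the release name and namespace are illustrative):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitor --create-namespace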
The only additional thing you might need is an Ingress/ELB for Grafana, Prometheus, and Alertmanager, so you can open them without port-forwarding (don't forget to add oauth2-proxy or something similar, because it's all open with no password by default).
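For example, a values sketch for exposing Grafana through an Ingress (the hostname is a placeholder, and you'd still want authentication in front of it):
# excerpt of a values file passed to kube-prometheus-stack via -f
grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.example.com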
I wouldn't bother; look at PaaS offerings like Datadog, New Relic, etc. What you are describing becomes a costly nightmare at scale. It's just not worth the hassle for what you get, imho.

How to bind a Kubernetes resource to a Helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release (e.g. by specifying some tags in the YAML file)?
As far as I know, you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for your question.