Is there a way in Helm to share values in sibling charts? - kubernetes-helm

I have a simple structure of charts:
chart1
  templates/
    deployment.yaml
    configmap.yaml
    service.yaml
chart2
  templates/
    deployment.yaml
    configmap.yaml
    service.yaml
redis
  templates/
    deployment.yaml
    service.yaml
Now chart2 depends on redis and needs it to run, and chart1 depends on both redis and chart2 (basically they are both services that use redis to store info, and chart1 sends requests to chart2).
When I install chart2 on its own everything is fine, but when I install chart1 it tries to install both its own redis and the redis that is a subchart of chart2 (which is the same redis).
To prevent this collision I use a tag that stops the second redis from installing, so by installing chart1 I also install chart2 and a single instance of redis.
The problem is that chart2 needs to know the name of the redis service (which is generated dynamically at install time), and I don't have access to it from chart2.
I use the template "redis.fullname" to name all the resources of redis. chart1 has access to this template because redis is its subchart at installation time (via .Subcharts.redis), but redis is not a subchart of chart2 in this instance, so chart2 doesn't have access to "redis.fullname" and therefore can't use the correct service name in its configmap.
I hope I made sense describing the problem. Is there any solution to this?

If you have control over the code in your charts, you can use global values: https://helm.sh/docs/chart_template_guide/subcharts_and_globals/
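A minimal sketch of how that could look, assuming a hypothetical global key redisServiceName (the key name is illustrative, not part of the charts above). In the top-level chart's values.yaml:

# values.yaml of chart1 (the top-level chart)
global:
  redisServiceName: my-release-redis

Because global values are visible to every chart in the dependency tree, chart2 can read the same value in its templates:

# chart2/templates/configmap.yaml (excerpt)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-chart2-config
data:
  redis-host: {{ .Values.global.redisServiceName | quote }}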

Related

How do I use Crossplane to install Helm charts (with provider-helm) into another cluster

I'm evaluating Crossplane as our go-to tool for deploying our clients' different solutions and have struggled with one issue:
We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.
This is what we have tried so far:
The ProviderConfig from the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works, but installs everything into the same cluster as Crossplane.
And the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting to generate a kubeconfig for the new cluster more easily, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; we get errors like "PodUnschedulable Cannot schedule pods: gvisor").
I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second attempt with the kubeconfig feels quite complicated right now (many steps that have to happen in the correct order).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig for the remote cluster, provided as a Kubernetes Secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
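For reference, a Secret like the one referenced above could be created along these lines (the kubeconfig filename is illustrative):

kubectl -n crossplane-system create secret generic cluster-credentials \
  --from-file=kubeconfig=./external-cluster.kubeconfig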
So it looks like you're on the right track configuring provider-helm, and since you observed something getting deployed to the external cluster, you must have provided a valid kubeconfig and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like an incompatibility between your cluster and the release, e.g. the external cluster only allows pods that run with gVisor, and the application you want to install with provider-helm is missing the corresponding settings or labels.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration into the external cluster via the helm CLI, using the same kubeconfig you built.
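For example (release name, chart path, and kubeconfig filename are illustrative):

helm install my-release ./my-chart --kubeconfig ./external-cluster.kubeconfig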
Regarding the inconvenience of building the kubeconfig: provider-helm needs some way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide it. However, if you see an alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE clusters with provider-gcp, then you can compose a Helm ProviderConfig together with a GKE Cluster resource, so that the appropriate Secret and ProviderConfig are created automatically whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
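For completeness, a minimal sketch of a provider-helm Release that targets the remote cluster by referencing such a ProviderConfig (chart coordinates and names are illustrative):

apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-app
spec:
  providerConfigRef:
    name: default  # the ProviderConfig holding the remote cluster's kubeconfig
  forProvider:
    namespace: default
    chart:
      repository: https://charts.bitnami.com/bitnami
      name: nginx
      version: "15.0.0"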

Dynamically refresh pods on secrets update on kubernetes while using helm chart

I am creating deployment and service manifests using Helm charts, and also secrets with Helm, but separately, not together with the deployments and services.
The secrets are loaded as environment variables at the pod level.
We are looking to refresh or restart pods when we update the secrets with new content.
Kubernetes does not itself support this feature at the moment; there is a feature request in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can use a custom solution to achieve this; one of the popular ones is Reloader.
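A minimal sketch of how Reloader is typically wired up, assuming it is already installed in the cluster (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Reloader watches this Secret and rolls the Deployment's pods when it changes
    secret.reloader.stakater.com/reload: "my-app-secret"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx  # illustrative image
          envFrom:
            - secretRef:
                name: my-app-secret  # the secret loaded as env variables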

How to pass dynamic data to helm subchart

I'm using the mongodb Helm chart and the mongo-express one. mongodb generates its name based on my release name, so it is dynamic. The mongodb service name will be something like my-release-mongodb.
mongo-express requires you to pass mongodbServer, the location at which mongodb can be reached. How can I provide this value to mongo-express if it is generated and can change depending on the release name?
Helm doesn't directly have this ability. (See also helm - programmatically override subchart values.yaml.) It has a couple of ways to propagate configured values from a subchart to a parent but not to use computed values, or to send these values to a sibling chart.
In the particular case of Services created by a subchart, I've generally considered the Service name to be part of the chart's "API": you know the Service will be named {{ .Release.Name }}-mongodb, and you just have to hard-code that in the consuming chart.
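A sketch of what that hard-coding could look like in the consuming chart (the ConfigMap layout is illustrative; mongodbServer is the key the question mentions):

# mongo-express chart, templates/configmap.yaml (excerpt)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-mongo-express-config
data:
  mongodbServer: {{ printf "%s-mongodb" .Release.Name | quote }}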
If you're launching this under a single "umbrella" chart, this is a little more straightforward: both parts have the same release name, so you can construct the Service name the same way. (Umbrella charts have other limitations. If you have multiple services that should each have an independent MongoDB installation, Helm will only deploy the database once for the whole umbrella chart; but you can still hit this same problem making HTTP calls between microservices.)
If they're totally separate installations, you may need to pick the release name yourself and pass it in as a value.
helm install thedb ./mongodb
helm install theapp ./mongo-express --set serviceName=thedb-mongodb
This is also a place where a still higher-level tool like Helmfile or Helmsman can come in handy, since that would let you specify these parameters in a fixed file.
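A minimal Helmfile sketch of the two-release setup above (paths and names follow the commands above; the inline values layout is illustrative):

# helmfile.yaml
releases:
  - name: thedb
    chart: ./mongodb
  - name: theapp
    chart: ./mongo-express
    values:
      - serviceName: thedb-mongodb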

kubernetes: deploying kong helm chart

I deploy kong via helm on my kubernetes cluster but I can't configure it as I want.
helm install stable/kong -f values.yaml
values.yaml:
{
"persistence.size":"1Gi",
"persistence.storageClass":"my-kong-storage"
}
Unfortunately, the created PersistentVolumeClaim stays at 8G instead of 1Gi. Even adding "persistence.enabled":false has no effect on the deployment. So I think all my configuration is bad.
What should be a good configuration file?
I am using a Rancher Kubernetes deployment on bare-metal servers.
I use Local Persistent Volumes (working well with a mongo-replicaset deployment).
What you are trying to do is configure a dependency chart (a.k.a. subchart), which is a little different from configuring the main chart when it comes to writing values.yaml. Also note that the content of values.yaml should not be surrounded with curly braces, so remove those from the code you posted in the question. Here is how you can do it:
As postgresql is a dependency chart of kong, you have to use the name of the dependency chart as a key and nest the options you need to modify under it, in the following form:
<dependency-chart-name>:
  <configuration-key-name>: <configuration-value>
For Rancher you have to write it as follows:
#values.yaml for rancher
postgresql.persistence.storageClass: "my-kong-storage"
postgresql.persistence.size: "1Gi"
If instead you are using Helm itself with vanilla Kubernetes, you can write values.yaml as below:
#values.yaml for helm
postgresql:
  persistence:
    storageClass: "my-kong-storage"
    size: "1Gi"
More about Dealing with SubChart values
More about Postgresql chart configuration
Please tell us which cluster setup you are using: a cloud-managed service, or a custom Kubernetes setup?
The problem you are facing is that there is a "minimum size" of storage to be provisioned. For example, in IBM Cloud it is 20 GB.
So even if 2 GB are requested in the PVC, you will end up with a 20 GB PV.
Please check the documentation of your NFS provisioner / StorageClass.

Automate helm chart installation when new Kubernetes namespace is created

I'm creating a multi-tenancy Kubernetes infrastructure.
I created a Helm chart for my app, and now I need to automate the chart installation whenever a new namespace is created.
For example, when the namespace client1 is created, I need to run helm install myrepo/myapp --name client1.
How can I get the namespace-creation event, and the namespace name?
You can either keep running a script that executes kubectl get namespace every once in a while and compares the current result with the previous one; when you find a newly created namespace, you then execute helm install myrepo/myapp --name client1. Or you can run an application inside your cluster that lists all namespaces, compares the current list with a cached one, and, whenever it finds a new namespace, calls the Helm client to install your app. For the latter, if you are using Go, I would recommend the Kubernetes client-go to list the namespaces in the cluster, and you can refer to the open-source project pipeline for the Helm client-go part that installs your app. A polling sketch of the first approach is shown below.
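A minimal polling sketch of the script approach (the interval and the release naming are illustrative; it reuses the Helm 2 style install command from the question):

#!/bin/sh
# Remember the namespaces that currently exist
known=$(kubectl get namespaces -o name)
while true; do
  sleep 30
  current=$(kubectl get namespaces -o name)
  # Install the chart for every namespace that appeared since the last check
  for ns in $current; do
    if ! echo "$known" | grep -qx "$ns"; then
      helm install myrepo/myapp --name "${ns#namespace/}"
    fi
  done
  known=$current
done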