Helm sub-chart used by multiple instances of the parent - kubernetes

I have followed the Helm Subchart documentation to create a parent chart with a sub-chart.
When I install the parent chart, the sub-chart comes along for the ride. Great!
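For reference, the setup looks roughly like this: the parent's Chart.yaml declares the sub-chart as a dependency (the name, version, and repository path here are illustrative):
dependencies:
  - name: my-subchart
    version: 0.1.0
    repository: file://../my-subchart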
However, if I try to install another instance of the parent chart with a different name, I get an error saying the sub-chart already exists.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Service, namespace: my-namespace, name: my-subcharts-service
I had expected the sub-chart to be installed if it did not already exist, and if it did exist I expected everything to be OK, i.e. I thought it would work like a package manager such as pip/yum/npm.
Is there a way to share a sub-chart (or another Helm construct) between multiple instances of a parent chart?

Related

Helm installation throws an error for cluster-wide objects

I have a problem with a Helm chart that I would like to use to deploy multiple instances of my app in many namespaces. Let's say these would be ns1, ns2 and ns3.
I run helm upgrade with the --install option and it goes well for ns1, but when I run it a second time for ns2 I get an error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "my-psp" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ns2": current value is "ns1"
I found many topics about this problem, but every time the only answer is to delete the old object and install it again with Helm. I don't want to do that - I would like to get two or more instances of my app that use k8s objects common to many namespaces.
What can I do in this situation? I know I could change the names of those objects with every deployment, but that would be really messy. The second idea is to move those objects to another chart and deploy it just once, but sadly that is a ton of work, so I would like to avoid it. Is it possible to ignore this error somehow and still make this work?
Found out the solution. The easiest way is to add a lookup block into your templates:
{{- if not (lookup "policy/v1beta1" "PodSecurityPolicy" "" "my-psp") }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
...
{{- end }}
With this config the object will be created only if an object with the same name does not already exist. (Note that lookup returns an empty result during helm template and helm install --dry-run, so the object is always rendered in those modes.)
It may not be a perfect solution, but if you know what you're doing it can save a lot of time.

Helm Release with existing resources

Previously we only used helm template to generate the manifests and apply them to the cluster; recently we started planning to use helm install to manage our deployments, but we ran into the following problem:
Our deployment is a simple backend API which contains an Ingress, a Service, and a Deployment; when there is a new commit, the pipeline is triggered to deploy.
We plan to use the short commit SHA as the image tag and the Helm release name. Here is the command:
helm upgrade --install releaseName repo/chartName -f value.yaml --set image.tag=SHA
This runs perfectly fine the first time, but when I create another release it fails with the following error message:
rendered manifests contain a resource that already exists. Unable to continue with install: Service "app-svc" in namespace "ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "rel-124": current value is "rel-123"
The error message is pretty clear about what the issue is, but I am just wondering: what's the "correct" way of using Helm in this case?
It is not practical for me to uninstall everything for a new release, and I also don't want to keep using the same release.
You are already doing it the "right" way - just don't change the release name. That's the key Helm uses to identify resources. It seems that you previously used a different release name (rel-123) than you are using now (rel-124).
To fix your immediate problem, you should be able to proceed by updating the value of the meta.helm.sh/release-name annotation on the problematic resource. Something like this should do it:
kubectl annotate --overwrite service app-svc -n ns meta.helm.sh/release-name=rel-124
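If you want to double-check the annotation afterwards, something like this works (the dots in the annotation key have to be escaped in the jsonpath expression):
kubectl get service app-svc -n ns -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'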

Update helm chart values for different environments

I have Helm charts created for a microservice that I have built, and everything is working as expected. Now I have created a new k8s namespace and I want to deploy the same Helm charts there as in my old namespace. However, I have just one value that needs to be different while everything else remains the same.
Do I have to create another values.yaml for the new namespace, copy everything over, and update the one field I want changed? Or is there another way? I do not want to use the --set method of passing updates on the command line.
David suggested the right way. You can use a different values.yaml in which you specify the namespace where you want to deploy the chart:
$ helm install -f another-namespace-values.yaml <my-release> .
It's also entirely possible to launch a Helm chart with multiple values files.
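When several -f flags are given, the rightmost file takes precedence, so you can keep the shared defaults in values.yaml and put the one differing value in a small per-namespace file (the file names here are illustrative):
$ helm install my-release . -f values.yaml -f another-namespace-values.yaml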
For more reading, please check the values section of the Helm docs.

How to pass dynamic data to helm subchart

I'm using the mongodb Helm chart and the mongo-express one. mongodb derives its resource names from my release name, so the name is dynamic: the mongodb Service will be named something like my-release-mongodb.
mongo-express requires you to pass mongodbServer - the location at which mongodb can be reached. How can I provide this value to mongo-express if it is generated and can change depending on the release name?
Helm doesn't directly have this ability. (See also helm - programmatically override subchart values.yaml.) It has a couple of ways to propagate configured values from a subchart to a parent but not to use computed values, or to send these values to a sibling chart.
In the particular case of Services created by a subchart, I've generally considered the Service name as part of the chart's "API": you know the Service will be named {{ .Release.Name }}-mongodb and you just have to hard-code that in the consuming chart.
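A minimal sketch of that hard-coding, assuming the consuming chart sets an environment variable in its Deployment template (the variable name MONGODB_SERVER is illustrative, not necessarily what mongo-express actually reads):
env:
  - name: MONGODB_SERVER
    value: "{{ .Release.Name }}-mongodb"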
If you're launching this under a single "umbrella" chart, this is a little more straightforward. Both parts have the same release name, so you can construct the service name the same way. (Umbrella charts have other limitations – if you have multiple services that each should have an independent MongoDB installation, Helm will only deploy the database once for the whole umbrella chart – but you can still hit this same problem making HTTP calls between microservices.)
If they're totally separate installations, you may need to pick the release name yourself and pass it in as a value.
helm install thedb ./mongodb
helm install theapp ./mongo-express --set serviceName=thedb-mongodb
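The mongo-express chart then needs to consume that value somewhere in its own templates, e.g. as {{ .Values.serviceName }} (the value name serviceName is an assumption about how the consuming chart is written).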
This is also a place where a still higher-level tool like Helmfile or Helmsman can come in handy, since that would let you specify these parameters in a fixed file.
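As a sketch, a minimal helmfile.yaml for the two releases above might look like this (the chart paths and the serviceName value carry over the assumptions from the commands above):
releases:
  - name: thedb
    chart: ./mongodb
  - name: theapp
    chart: ./mongo-express
    set:
      - name: serviceName
        value: thedb-mongodb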

Rendered manifests contain a resource that already exists. Could not get information about the resource: resource name may not be empty

I installed Helm 3 on my Windows laptop, where I have the kube config configured as well. But when I try to install my local Helm chart, I'm getting the below error.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
I tried helm ls --all --all-namespaces but I don't see anything. Please help me!
I think you have to check whether you left any resource without a name: field.
I had the same issue. In values.yaml I had
name:
and in deployment.yaml I tried to access this "name" via {{ .Values.name }}. I found out that {{ .Values.name }} didn't work for me at all; I had to use the built-in object {{ .Chart.Name }} in deployment.yaml instead. Ref: https://helm.sh/docs/chart_template_guide/builtin_objects/
If you want to access the "name", you can put it into values.yaml for example like this:
something:
  name:
and then access it from deployment.yaml (for example) like {{ .Values.something.name }}
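Putting that together, a minimal sketch (the actual value my-app is illustrative):
values.yaml:
something:
  name: my-app
deployment.yaml:
metadata:
  name: {{ .Values.something.name }}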
Had the same error message. I solved the problem by running helm lint on the folder of a dependency chart that I had just added. That pointed me to some bad assignments of values.
Beware: helm lint on the parent folder didn't highlight any problem in the dependencies folders.
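So it's worth linting each dependency folder explicitly, for example (the path is illustrative):
helm lint ./charts/my-dependency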
I suppose either the same resource already exists in the namespace where you are trying to install, or your Helm chart is trying to create the same resource twice.
Try to create a new namespace and run helm install there; if you still face the issue, then there is definitely some issue with your helm install.
I faced the same error; the fix was to correct the sub-chart name in the values.yaml file of the main chart.
Your best bet would be to run helm template . in the chart directory and verify that the name and namespace fields are not empty. This was the case with me, at least.
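A rough way to spot empty names in the rendered output (this assumes an empty field renders as a bare name: at the end of a line):
helm template . | grep -n -B2 'name: *$'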
Most likely, one of the deployments you removed left behind a ClusterRole.
Check if you have one with kubectl get clusterrole
Once you find it, you can delete it with kubectl delete clusterrole <clusterrolename>