Rendered manifests contain a resource that already exists. Could not get information about the resource: resource name may not be empty - kubernetes

I installed Helm 3 on my Windows laptop, where I also have my kubeconfig configured. But when I try to install my local Helm chart, I'm getting the below error.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
I tried helm ls --all --all-namespaces but I don't see anything. Please help me!

I think you have to check whether you left any resource without a name: field.
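For example, a rendered manifest like the following (a made-up sketch, not taken from your chart) triggers exactly this error, because metadata.name ended up empty:
apiVersion: v1
kind: Service
metadata:
  name:          # empty because the value it was templated from was never set
spec:
  ports:
    - port: 80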

I had the same issue. In values.yaml I had
name:
and in deployment.yaml I tried to access this "name" via {{ .Values.name }}. I found out that {{ .Values.name }} doesn't work for me at all. I had to use {{ .Chart.Name }} in deployment.yaml as the built-in object. ref: https://helm.sh/docs/chart_template_guide/builtin_objects/
If you want to access the "name", you can put it into values.yaml for example like this:
something:
  name:
and then access it from deployment.yaml (for example) like {{ .Values.something.name }}.
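In deployment.yaml this could look like the following (a minimal sketch combining both points, falling back to the built-in chart name when the value is left empty):
metadata:
  name: {{ .Values.something.name | default .Chart.Name }}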

Had the same error message. I solved the problem by running helm lint on the folder of a dependency chart that I had just added. That pointed me to some bad assignment of values.
Beware: helm lint on the parent folder didn't highlight any problem in the dependency folders.
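For example (the sub-chart folder name here is just a placeholder):
helm lint charts/<dependency-chart>    # lint the sub-chart directly, not only the parent chart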

I suppose the same resource already exists in the namespace where you are trying to install, or your Helm chart is trying to create the same resource twice.
Try to create a new namespace and run helm install there; if you still face the issue, then there is definitely some issue with your Helm chart itself.
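A quick way to test this (release name, chart path and namespace are placeholders):
kubectl create namespace helm-test
helm install my-release ./my-chart --namespace helm-test    # if this also fails, the problem is in the chart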

I faced the same error; the fix was to correct the sub-chart name in the values.yaml file of the main chart.

Your best bet would be to run helm template . in the chart directory and verify that the name and namespace fields are not empty. This was the case for me, at least.
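Something like this (a rough sketch) renders the chart locally and flags any name field that came out empty:
helm template . > rendered.yaml
grep -n 'name: *$' rendered.yaml    # lines where a rendered name: has no value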

Most likely, one of the deployments you removed left behind a ClusterRole.
Check if you have one with kubectl get clusterrole
Once you find it, you can delete it with kubectl delete clusterrole <clusterrolename>

Related

How to add automountServiceAccountToken: false using Helm

I have been trying to add automountServiceAccountToken: false into a deployment using Helm, but my changes are not reflected inside the deployment in Kubernetes.
I tried below in helpers.tpl
{{- "<chart-name>.automountserviceaccounttoken" }}
{{- default "false" .Values.automountserviceaccounttoken.name }}
{{- end }}
in app-deployment.yaml
automountServiceAccountToken: {{- include "<chart-name>.automountserviceaccounttoken" . }}
in values.yaml
automountServiceAccountToken: false
But I can't see the changes. Please guide
You can give it a try with the following troubleshooting steps:
1. In the helpers.tpl file you are taking the automountserviceaccounttoken value from values.yaml. In values.yaml you mentioned automountServiceAccountToken: false, but in the tpl file you are accessing the value as automountserviceaccounttoken.name, and there is no attribute called name under automountserviceaccounttoken in the values file. Although you are using a default value in the function, sometimes it may not include it. So correct the value in values.yaml.
2. Debug the deployed Helm chart by using the command helm template <template-name>. It will render the generated templates along with the values. Check whether your desired values are reflected or not.
3. In case you are redeploying the chart, try upgrading it with helm upgrade [RELEASE] [CHART] and make sure your values are reflected.
4. Before installing the Helm chart, running with --dry-run gives you the templates with compiled values, so a dry run helps to confirm the templates. Command for a dry run: helm install chart-name . --dry-run
For more information, refer to the official documentation.
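As a minimal sketch of one way to align the helper with values.yaml (mychart is a placeholder chart name, and here the helper simply reads the flat key instead of a non-existent name attribute):
# values.yaml
automountserviceaccounttoken: false
# templates/_helpers.tpl
{{- define "mychart.automountserviceaccounttoken" -}}
{{- .Values.automountserviceaccounttoken -}}
{{- end -}}
# templates/app-deployment.yaml (plain "{{ include }}" with no "-", so the space after the colon is preserved)
automountServiceAccountToken: {{ include "mychart.automountserviceaccounttoken" . }}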

how to define entrypoint command in dependency helm chart

I have this issue: I need to set up oauth2-proxy in Kubernetes via Helm, and I need it to use an injected Vault secret for the configuration of the proxy. I know that this would be possible by defining
'command': ['sh', '-c', 'source /vault/secrets/client-secret && ']
in some override-values.yaml I would create, but the problem is that this Helm chart's values.yaml file does not provide any keyword like "command", and I am using it as a dependency chart, so I cannot directly edit its manifests.
Is there any way I can define the command for a pod of a dependency Helm chart even if it does not have a command key in its values? Chart link: https://artifacthub.io/packages/helm/oauth2-proxy/oauth2-proxy if somebody wants to see it.
I also tried to reference the secrets in the configuration file for the proxy, but I got an error that I should not provide values like this: client_secret=$(cat /vault/secrets/secret), among many other things.
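For clarity, the override I have in mind would look roughly like this in override-values.yaml (command is a hypothetical key here; as said above, the chart's values.yaml does not actually expose it, which is the whole problem):
command:
  - sh
  - -c
  - source /vault/secrets/client-secret && <original oauth2-proxy entrypoint>   # placeholder for the real entrypoint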

Helm installation throws an error for cluster-wide objects

I have a problem with helm chart that I would like to use to deploy multiple instances of my app in many namespaces. Let's say it would be ns1, ns2 and ns3.
I run helm upgrade with the --install option and it goes well for ns1, but when I want to run it a second time for ns2 I get an error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "my-psp" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ns2": current value is "ns1"
I found many topics about this problem, but every time the only answer is to delete the old object and install it again with Helm. I don't want to do that - I would like to get two or more instances of my app that use k8s objects common to many namespaces.
What can I do in this situation? I know I could change the names of those objects with every deployment, but that would be really messy. A second idea is to move those objects to another chart and deploy it just once, but sadly that is a ton of work, so I would like to avoid it. Is it possible to ignore this error somehow and still make this work?
Found the solution. The easiest way is to add a lookup block into your templates:
{{- if not (lookup "policy/v1beta1" "PodSecurityPolicy" "" "my-psp") }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: my-psp
...
{{- end }}
With this config the object will be created only in case an object with the same name does not exist yet.
It may not be a perfect solution, but if you know what you are doing, you can save a lot of time.

Helm template order for CustomResourceDefinition and ClickHouseInstallation

I have created a Helm chart directory called clickhouse.
Inside the templates subdirectory I have a crd.yaml (kind: CustomResourceDefinition) which has to be applied before the installation.yaml (kind: ClickHouseInstallation). Right now the installation.yaml is applied first when I run the command
$ helm upgrade -i clickhouse ./charts/clickhouse
How do I change the order?
Notes:
I noted that there's a static install order by reading through this thread. Since ClickHouseInstallation is not part of that list, I'm curious how Helm orders it and how to change that order.
Also, here are the YAML files:
crd.yaml
installation.yaml
I think you can try to use Helm hooks like
annotations:
"helm.sh/hook": post-install
Let your crd.yaml have pre-install and your installation.yaml could then have post-install. Please look through the docs for Helm hooks; there might be some downsides with regard to what you want to achieve.
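A minimal sketch of what that could look like in the two templates (only the relevant metadata shown; everything else stays as in your files):
# templates/crd.yaml
metadata:
  annotations:
    "helm.sh/hook": pre-install
# templates/installation.yaml
metadata:
  annotations:
    "helm.sh/hook": post-install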
Another way to solve this (it might be trivial and not so elegant) would be to create a separate Helm chart for the installation.yaml and then just run the CRD chart first.

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know - you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for your question.