Can we add labels to Nodes using Helm template - kubernetes

I am trying to add labels to the nodes using a Helm chart, but I am getting an error while deploying.
YAML template:
apiVersion: v1
kind: Node
metadata:
  name: {{ index (lookup "v1" "Node" "" "").items 0 "metadata" "name" }}
  labels:
    content-strange: "true"
  name: {{ index (lookup "v1" "Node" "" "").items 1 "metadata" "name" }}
  labels:
    content-strange: "true"
  name: {{ index (lookup "v1" "Node" "" "").items 2 "metadata" "name" }}
  labels:
    content-strange: "true"
Error
helm install famous famous.1.1.tgz -n famous-ns1
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: Node "10.x.x.x" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "famous"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "famous-ns1"

You can't use Helm to modify existing objects. Helm works by running its templating engine to construct complete Kubernetes manifests, and then submits them to the cluster. This process assumes that the objects are wholly owned by Helm: they don't already exist, and nothing other than helm upgrade will modify them.
The error you're getting here is in fact because the Node objects already exist; Kubernetes creates them when the actual nodes (physical machines, cloud instances, VMs) are created and joined to the cluster. You can't modify these using Helm.
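If all you need is the content-strange: "true" label on each node, a simpler route is to set it outside of Helm with kubectl; a minimal sketch (the label key is taken from the question):
kubectl label node <node-name> content-strange=true
# or label every node in the cluster at once
kubectl label nodes --all content-strange=true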

Related

Helm lookup function to dynamically fetch a value from a ConfigMap

I am attempting to use the Helm lookup function to dynamically look up the key ORGANIZATION_NAME from a ConfigMap and use that value.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-beat
  labels:
    app: celery-beat
    tags.datadoghq.com/env: {{ (lookup "v1" "configmap" "default" "api-env").items.ORGANIZATION_NAME | quote }}
...
But I am getting the error:
Error: UPGRADE FAILED: template: celery-beat/templates/deployment.yaml:9:66: executing "celery-beat/templates/deployment.yaml" at <"api-env">: nil pointer evaluating interface {}.ORGANIZATION_NAME
The .items key is only populated when the lookup returns a list of objects, i.e. when no resource name is given in the lookup.
Since you are looking up the single ConfigMap named api-env in the default namespace by name, you can access the data you want directly:
(lookup "v1" "configmap" "default" "api-env").data.ORGANIZATION_NAME

Install the dapr Helm chart in a second namespace while it is already installed in another namespace in the same cluster

I am trying to install a second dapr Helm chart in namespace "test" while it is already installed in namespace "dev" in the same cluster.
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr
An already-installed release exists with the following name:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dapr dev 1 2021-10-06 21:16:27.244997 +0100 +01 deployed dapr-1.4.2 1.4.2
I get the following error
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "test": current value is "dev"
I tried specifying a different version for the installation, but with no success:
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr \
--version 1.4.0
I'm starting to think the current chart does not allow for multiple instances (development and testing) on the same cluster.
Has anyone faced the same issue?
Thank you.
The existing dapr chart applies cluster-wide resources whose names are generated with no consideration of the namespace. So, when trying to install a second release, a name conflict occurs with the pre-existing cluster-wide resources:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr-dev"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "uat": current value is "dev"
I had to edit the chart:
git clone https://github.com/dapr/dapr.git
I edited the RBAC resources in the dapr_rbac subchart so that the resource names now take the namespace into account, e.g. in dapr_rbac/templates/ClusterRoleBinding.yaml.
Previous file:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator
...
The edit changes the metadata name on all such resources:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator-{{ .Release.Namespace }}
...
The same logic has been applied to the MutatingWebhookConfiguration in the dapr_sidecar_injector subchart, in dapr_sidecar_injector/templates/dapr_sidecar_injector_webhook_config.yaml.
For the full edits, please see the forked repo at:
https://github.com/redaER7/dapr/tree/DEV/charts/dapr
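For illustration only, the webhook edit follows the same pattern; this is a hedged sketch rather than the exact upstream file (the resource name may differ in the real chart):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: dapr-sidecar-injector-{{ .Release.Namespace }}
...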

How to fix Helm namespaces when the namespace was previously specified in the templates, but is now set via the helm -n namespace flag

Some time ago we deployed many different releases in which we specified the namespaces in the templates themselves, e.g.:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Name }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
...
But we have now realized that this is not the correct approach; instead, you should use the -n namespace flag (see here).
In general, templates should not define a namespace. This is because Helm installs objects into the namespace provided with the --namespace flag. By omitting this information, it also provides templates with some flexibility for post-render operations (like helm template | kubectl create --namespace foo -f -)
So if we fix our files
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
...
and run now:
helm upgrade --install --debug -n myproject123 -f helm/configs/myproject123.yaml myproject123 helm
We get following errors:
history.go:56: [debug] getting history for release myproject123
Release "myproject123" does not exist. Installing it now.
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /Users/myuser/coding/myrepo/helm
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
helm.go:81: [debug] Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/private/tmp/helm-20210310-44407-1006esy/pkg/action/install.go:276
main.runInstall
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/install.go:242
main.newUpgradeCmd.func2
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/upgrade.go:115
github.com/spf13/cobra.(*Command).execute
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/helm.go:80
runtime.main
/usr/local/Cellar/go/1.16/libexec/src/runtime/proc.go:225
runtime.goexit
/usr/local/Cellar/go/1.16/libexec/src/runtime/asm_amd64.s:1371
make: *** [ns_upgrade] Error 1
Any ideas how this can be fixed?
It is not possible for us to delete everything and then install it again, due to downtime (and the number of projects we have already deployed).
Use {{ .Release.Namespace }} instead of {{ .Release.Name }}. Then you will be able to override the namespace during installation via the CLI.
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Namespace }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
...
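If the objects that were already deployed then block the install with the invalid ownership metadata error, one option besides deleting them is to let Helm adopt them by adding the release metadata, as described in the answers further down. A hedged sketch against the Namespace object named in the error above, assuming the release name and -n value from the helm upgrade command shown (repeat for any other object the error reports):
kubectl annotate namespace myproject123 meta.helm.sh/release-name=myproject123 --overwrite
kubectl annotate namespace myproject123 meta.helm.sh/release-namespace=myproject123 --overwrite
kubectl label namespace myproject123 app.kubernetes.io/managed-by=Helm --overwrite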

Deploying a Helm release forcefully when deployments, svcs, etc. with the same name are already running in the same namespace

How do I deploy a Helm release for the first time when there is already a deployment, svc, etc. running with the same name?
Is there any way to import the running config, which is not being handled by Helm?
Or is deleting the same-name objects the only way to deploy the Helm release for the first time? (I don't want to change the release names because that would break the communication between the microservices.)
Deleting the objects will cause downtime, and I want to avoid that.
The error I am getting while deploying with the same name:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
Is there any other approach?
Thanks
Addressing the error message and part of the question:
How to deploy the helm release for the first time when there's already the deployment, svc, etc. running with the same name.
You can't deploy resources with Helm that weren't created by Helm; you will get exactly the message you've encountered. However, you can annotate and label the existing resources that were not added by Helm so that Helm "imports" (adopts) them and acts on them. Please try this on a test environment first, as it could redeploy some resources.
There is already a similar answer on how to annotate resources:
Stackoverflow.com: Answers: Use Helm 3 for existing resources deployed with kubectl
See this Helm 3 feature: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
Assuming the following situation:
Deployment created outside of Helm (example below).
Helm chart with an equivalent templated Deployment in templates/ (example below).
Creating the Deployment below without Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Assuming that the above file was applied with kubectl apply and an equivalent (templated) version also resides in the templates/ directory of your chart, you will get the following error when you try to run $ helm install release_name .:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "nginx" in namespace "default" exists and cannot be imported into the current release: ...
By running the commands mentioned in the answer I linked, you can annotate and label your resources so that Helm no longer produces that error message.
After that you can run $ helm install release_name . and provision your resources with the desired changes.
Additional resources:
Jacky-jiang.medium.com: Import existing resources in Helm3
A nice oneliner to annotate all resources in a helm release to be adopted by the new release:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE && rm -rf "$x"
Or, if you also moved the release to a new namespace:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE "meta.helm.sh/release-namespace"=$NEW_NAMESPACE && rm -rf "$x"
A more common approach is to use the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
As can be seen in different Helm chart providers (for example Bitnami charts, External-Dns, Nginx ingress controller, and more).
(*) Read more on the K8s Recommended Labels and Helm standard labels sections.
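A hedged sketch of where these labels typically sit in a templated manifest (the Service name here is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}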

helm not creating the resources

I have tried to run Helm for the first time. I have deployment.yaml, service.yaml, and ingress.yaml files along with values.yaml and chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: abc
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        ports:
        - containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8080
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
  selector:
    app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed adding app.kubernetes.io/managed-by, but I still keep getting the error:
rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
It renders the file locally correctly.
helm list --all --all-namespaces returns nothing.
Please help.
You already have some resources, e.g. the Service abc in the given namespace xyz, that you're now trying to install via a Helm chart.
Delete those and install them via helm install.
$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
Once you have these resources deployed via Helm, you will be able to run helm upgrade to change their properties.
Remove the "app.kubernetes.io/managed-by" label from your yaml's, this will be added by Helm.
The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a somewhat longer explanation and also some context on the topic.
What happened?
It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).
Why does Helm throw the error?
Helm doesn't allow a resource to be owned by more than one
deployment.
It is the responsibility of the chart creator to ensure that the chart
produce unique resources only.
How can you solve this?
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that
already exists in the target cluster if the existing resource has the
correct meta.helm.sh/release-name and
meta.helm.sh/release-namespace annotations, and matches the label
selector app.kubernetes.io/managed-by=Helm. This facilitates
zero-downtime migrations to Helm 3 for managing existing deployments,
and allows Helm to "adopt" existing resources that it previously
created.
(*) I think that the meta.helm.sh scope is a less common approach today.
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen in different Helm chart providers (Bitnami, Nginx ingress controller, External-Dns, for example), use the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
(*) Notice: there are some CD tools, like ArgoCD, that automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
Option 3 - Delete old resources.
It might be relevant in your specific case where the old resources might not be relevant anymore.
For those who need some context
What are those labels?
Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more on the Recommended Labels section.
Are they added by helm?
No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
So it is the role of the chart maintainer to add those labels.
What is the best way to add them?
Many Helm chart providers add them in the _helpers.tpl file and have all resources include it:
labels: {{ include "my-chart.labels" . | nindent 4 }}
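A hedged sketch of such a helper; the helper name my-chart.labels and the exact label set are illustrative, not a fixed convention:
{{/* templates/_helpers.tpl */}}
{{- define "my-chart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}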
The trick here is to chase the error message.
For example, in the case below the error message points at something wrong with the Service "abc" in namespace "xyz":
Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
Simply delete that Service from the mentioned namespace:
kubectl -n xyz delete svc abc
Then try the installation/deployment again. It may happen that a similar issue appears, but for a different resource, as shown in the example below:
Release "nok-sec-sip-tls-crd" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Role "nok-sec-sip-tls-crd-role" in namespace "debu" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nok-sec-sip-tls-crd": current value is "nok-sec-sip"
Again, use kubectl to delete the resource mentioned in the error message. For example, in the above case the offending resource can be deleted with the command below:
kubectl delete role nok-sec-sip-tls-crd-role -n debu
I was getting this error because I was trying to upgrade the Helm chart with the wrong release name, so it conflicted with the existing resources in the same namespace.
I was running this command with the wrong release name:
helm upgrade --install --namespace <namespace> wrong-releasename <chart-folder>
and got similar errors:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap \"cmname\" in namespace \"namespace\" exists and cannot be imported into the current release
invalid ownership metadata; label validation error: missing key \"app.kubernetes.io/managed-by\": must be set to \"Helm\"; annotation validation error: missing key \"meta.helm.sh/release-name\": must be set to \"wrong-releasename\"; annotation validation error: missing key \"meta.helm.sh/release-namespace\": must be set to \"namespace\"
I checked the existing Helm releases in the same namespace and used the listed release name to upgrade my Helm chart:
helm ls -n <namespace>
helm upgrade --install --namespace <namespace> releasename <chart-folder>
Here's a faster and more thorough way to get rid of Argo CD so it can be reinstalled:
helm list -A # see argocd in namespace argocd
helm uninstall argocd -n argocd
kubectl delete namespace argocd
The last line gets rid of all secrets and other resources not cleaned up by uninstalling the Helm chart, and it was needed in my environment; otherwise I got the same sorts of errors about duplicate resources that you were seeing.
We use GitOps via Flux, and I was getting the same rendered manifests contain a resource that already exists error. For me, the problem was that I had accidentally defined a resource with the same name in two different files, so Helm was trying to create it twice. I removed the duplicate resource definition from one of the files to fix it.