Helm lookup function: dynamically fetch a value from a ConfigMap - Kubernetes

I am attempting to use the Helm lookup function to dynamically look up the key ORGANIZATION_NAME from a ConfigMap and use that value.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-beat
  labels:
    app: celery-beat
    tags.datadoghq.com/env: {{ (lookup "v1" "configmap" "default" "api-env").items.ORGANIZATION_NAME | quote }}
...
But I am getting the error:
Error: UPGRADE FAILED: template: celery-beat/templates/deployment.yaml:9:66: executing "celery-beat/templates/deployment.yaml" at <"api-env">: nil pointer evaluating interface {}.ORGANIZATION_NAME

The .items key is only populated when the lookup returns a list of objects, i.e. when the resource name is left empty. When you look up a single named object, as here, Helm returns that object directly, so there is no .items to index into.
Since there is only one ConfigMap named api-env in the default namespace, you can access the data you want directly:
(lookup "v1" "configmap" "default" "api-env").data.ORGANIZATION_NAME


Can we add labels to Nodes using Helm template

I am trying to add labels to the nodes using a Helm chart, but I am getting an error while deploying.
YAML template:
apiVersion: v1
kind: Node
metadata:
  name: {{ index (lookup "v1" "Node" "" "").items 0 "metadata" "name" }}
  labels:
    content-strange: "true"
  name: {{ index (lookup "v1" "Node" "" "").items 1 "metadata" "name" }}
  labels:
    content-strange: "true"
  name: {{ index (lookup "v1" "Node" "" "").items 2 "metadata" "name" }}
  labels:
    content-strange: "true"
Error
helm install famous famous.1.1.tgz -n famous-ns1
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: Node "10.x.x.x" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "famous"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "famous-ns1"
You can't use Helm to modify existing objects. Helm works by running its templating engine to construct complete Kubernetes manifests and then submitting them to the cluster. This process assumes that the objects are wholly owned by Helm: they don't already exist, and nothing other than helm upgrade will modify them.
The error you're getting here is in fact because the Node objects already exist; Kubernetes creates them when the actual nodes (physical machines, cloud instances, VMs) get created and are joined to the cluster. You can't modify these using Helm.
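If the goal is simply to get that label onto the nodes, the usual approach is to do it outside of Helm, for example with kubectl. A minimal sketch; the label key comes from the question, and the single node name is the one from the error message:
# Label every node in the cluster
kubectl label nodes --all content-strange=true
# Or label a single node by name
kubectl label node 10.x.x.x content-strange=true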

How to debug Unknown field error in Kubernetes?

This might be a rookie question. I am not well versed with Kubernetes. I added this to my deployment.yaml:
ad.datadoghq.com/helm-chart.check_names: |
  ["openmetrics"]
ad.datadoghq.com/helm-chart.init_configs: |
  [{}]
ad.datadoghq.com/helm-chart.instances: |
  [
    {
      "prometheus_url": "http://%%host%%:7071/metrics",
      "namespace": "custom-metrics",
      "metrics": [ "jvm*" ]
    }
  ]
But I get this error
error validating data: [ValidationError(Deployment.spec.template.metadata): unknown field "ad.datadoghq.com/helm-chart.check_names" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta, ValidationError(Deployment.spec.template.metadata)
What does this error mean? Does it mean that I need to define ad.datadoghq.com/helm-chart.check_names somewhere? If so, where?
You are probably adding this in the wrong place. According to your error message, you're adding it directly under Deployment.spec.template.metadata.
You can check the official Helm deployment template and this example in the documentation: values such as ad.datadoghq.com/helm-chart.check_names are annotations, so they need to be defined under the path Deployment.spec.template.metadata.annotations.
annotations: a map of string keys and values that can be used by
external tooling to store and retrieve arbitrary metadata about this
object
(see the annotations docs)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datadog-cluster-agent
  namespace: default
spec:
  selector:
    matchLabels:
      app: datadog-cluster-agent
  template:
    metadata: # <- not directly under 'metadata'
      labels:
        app: datadog-cluster-agent
        name: datadog-agent
      annotations: # <- add here
        ad.datadoghq.com/datadog-cluster-agent.check_names: '["prometheus"]'
        ad.datadoghq.com/datadog-cluster-agent.init_configs: '[{}]'
        ad.datadoghq.com/datadog-cluster-agent.instances: '[{"prometheus_url": "http://%%host%%:5000/metrics","namespace": "datadog.cluster_agent","metrics": ["go_goroutines","go_memstats_*","process_*","api_requests","datadog_requests","external_metrics", "cluster_checks_*"]}]'
    spec:
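As a general way to debug such validation errors, you can render the chart locally and let the API server validate the manifests before anything is created. A rough sketch, where the release name and chart path are placeholders:
# Render the manifests and have the API server validate them without creating anything
helm template my-release ./my-chart | kubectl apply --dry-run=server -f -
# Or just render and inspect the output that Helm would submit
helm template my-release ./my-chart --debug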

How to fix helm namespaces when the namespace was specified in the templates before, but is now set via the helm -n namespace flag

Some time ago we deployed many different releases where we specified the namespaces in the templates themselves, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Name }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
...
But we have now realized that this is not the correct approach; instead, you should use the -n namespace flag (see here):
In general, templates should not define a namespace. This is because Helm installs objects into the namespace provided with the --namespace flag. By omitting this information, it also provides templates with some flexibility for post-render operations (like helm template | kubectl create --namespace foo -f -)
So if we fix our files
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
...
and run now:
helm upgrade --install --debug -n myproject123 -f helm/configs/myproject123.yaml myproject123 helm
We get the following errors:
history.go:56: [debug] getting history for release myproject123
Release "myproject123" does not exist. Installing it now.
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /Users/myuser/coding/myrepo/helm
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
helm.go:81: [debug] Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/private/tmp/helm-20210310-44407-1006esy/pkg/action/install.go:276
main.runInstall
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/install.go:242
main.newUpgradeCmd.func2
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/upgrade.go:115
github.com/spf13/cobra.(*Command).execute
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/helm.go:80
runtime.main
/usr/local/Cellar/go/1.16/libexec/src/runtime/proc.go:225
runtime.goexit
/usr/local/Cellar/go/1.16/libexec/src/runtime/asm_amd64.s:1371
make: *** [ns_upgrade] Error 1
Any ideas how this can be fixed?
It is not possible for us to delete everything and then install it again, due to downtime (and the number of projects we have already deployed).
Use {{ .Release.Namespace }} instead of {{ .Release.Name }}. Then you will be able to override the namespace during installation via the CLI.
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Namespace }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
...
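If deleting and reinstalling really is not an option, one workaround that is sometimes used: newer Helm 3 releases (3.2+) will adopt an existing object into a release if it carries Helm's ownership label and annotations, so you can patch the conflicting objects instead of recreating them. A rough sketch for the namespace from the error message above (the same pattern applies to any other conflicting resource); treat this as an assumption to verify against your Helm version:
kubectl label namespace myproject123 app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate namespace myproject123 \
  meta.helm.sh/release-name=myproject123 \
  meta.helm.sh/release-namespace=myproject123 --overwrite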

Setup a Kubernetes Namespace and Roles Using Helm

I am trying to set up Kubernetes for my company. In that process I am trying to learn Helm.
One of the tasks I have is to set up automation that takes a supplied namespace name parameter, creates the namespace, and sets up the correct permissions in that namespace for the deployment user account.
I can do this simply with a script that uses kubectl apply like this:
kubectl create namespace $namespaceName
kubectl create rolebinding deployer-edit --clusterrole edit --user deployer --namespace $namespaceName
But I am wondering if I should set up things like this using Helm charts. As I look at Helm charts, it seems that everything is a deployment. I am not sure that this fits the model of "deploying" things. It is more just a general setup of a namespace that will then allow deployments into it. But I want to try it out as a Helm chart if it is possible.
How can I create a Kubernetes namespace and rolebinding using Helm?
A Namespace is a Kubernetes object and it can be described in YAML, so Helm can create one. mdaniel's answer below describes the syntax for doing it for a single Namespace and the corresponding RoleBinding.
There is a chicken-and-egg problem if you are trying to use this syntax to create the Helm installation namespace, though. In Helm 3, metadata about the installation is stored in Kubernetes objects, usually in the same namespace you're installing into:
helm install release-name ./a-chart-that-creates-a-namespace --namespace ns
If the namespace doesn't already exist, then Helm can't retrieve the installation metadata; or, if it does, then the declaration of the Namespace object in the chart will conflict with an existing object in the cluster. You can create other objects this way (like RoleBindings) but Namespaces themselves are a problem.
But! You can create other namespaces safely. You can also use Helm's templating constructs to create multiple objects based on what's present in the .Values configuration. So if your values.yaml file (possibly environment-specific) has
namespaces: [service-a, service-b]
clusterRole: edit
user: deploy
Then you can write a template file like
{{- $top := . }}
{{- range $namespace := .Values.namespaces -}}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ $namespace }}
  name: deployer-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ $top.Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: {{ $top.Values.user }}
{{ end -}}
This will create two YAML documents for each item in .Values.namespaces. Since the range looping construct overwrites the . special variable, we save its value in a $top local variable before we start, and then use $top.Values where we'd otherwise need to reference .Values. We also need to make sure to explicitly name the metadata: { namespace: } of each object we create, since we're not using the default installation namespace.
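With the example values.yaml above, the rendered output would look roughly like this (shown only to illustrate the loop; the pair for service-b is analogous):
---
apiVersion: v1
kind: Namespace
metadata:
  name: service-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: service-a
  name: deployer-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: deploy
# ...followed by the same Namespace/RoleBinding pair for service-b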
You need to make sure the namespace you pass to helm install --namespace isn't one of the namespaces you're managing with this chart.
This would let you have a single chart that manages all of the per-service namespaces. If you need to change the set of services, you can just update the chart values and helm upgrade. The one other caution is that this will happily delete namespaces with no warning if you remove a value from the .Values.namespaces list, and it will take everything in that namespace with it (notably, any PersistentVolumeClaims that have data you might need).
Almost any chart for an install that needs to interact with Kubernetes itself will include RBAC resources, so it is for sure not just Deployments:
# templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.bindingName }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: {{ .Values.user }}
Then a values.yaml isn't strictly required, but it helps folks know what values can be provided:
# values.yaml
bindingName: deployment-edit
clusterRole: edit
user: deployer
Helm v3 has --create-namespace, which will create the provided --namespace if it doesn't already exist. That isn't very declarative, but it does achieve the end result, just like the kubectl version.
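For example, a rough sketch of installing such a chart, where the release name and chart path are placeholders:
# --create-namespace tells Helm to create ns1 first if it doesn't already exist
helm install deployer-rbac ./rbac-chart --namespace ns1 --create-namespace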
It's also theoretically possible to have the chart create the Namespace, but I would not bet that helm uninstall will do the right thing with the namespaced RoleBinding, since the order of item removal matters a lot:
# templates/00namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.theNamespace }}
and then run helm --namespace kube-system ... or any NS other than the real one, since it doesn't yet exist

Prevent creating a secret if it already exists

At the moment I have the following secret set up:
apiVersion: v1
kind: Secret
metadata:
  name: my-repository-key
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
Unfortunately, I have 2 subcharts using the same secret, which causes an issue when I try to install them using Helm.
Per the Stack Overflow answer, I've tried using the following line to prevent re-creation of the secret:
{{- if not (lookup "v1" "Secret" "" "my-repository-key") }}
Unfortunately it did not work, and I'm unable to debug the lookup for the time being.
How do I prevent the creation with a lookup? Is there a better way?
In Helm charts, Kubernetes objects are often named with a prefix that's the name of the current release plus the name of the current chart. That will make the name unique, even if there are related subcharts that declare similar secrets. (A secret is pretty small and duplicating it between two subcharts shouldn't be an operational problem.)
metadata:
  name: "{{ .Release.Name }}-{{ .Chart.Name }}-key"
If you created the chart with helm create, this pattern is common enough that the new-chart template includes a helper template that generates this. If the chart only has a single secret, you can use the default name:
metadata:
  name: "{{ include "chartname.fullname" . }}"
Or, up to some corner cases around naming, you can add a suffix to it:
metadata:
  name: "{{ include "chartname.fullname" . }}-key"