helm: 'lookup' function always returns empty map - kubernetes-helm

The relevant docs: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
My helm version:
$ helm version
version.BuildInfo{Version:"v3.4.1",
GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29",
GitTreeState:"dirty", GoVersion:"go1.15.4"}
Minimal example to reproduce:
Create a new helm chart and install it.
$ helm create my-chart
$ helm install my-chart ./my-chart
Create a simple ConfigMap.
# my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  someKey: someValue
Upgrade the existing chart so that the ConfigMap is applied.
$ helm upgrade my-chart ./my-chart
Confirm that the ConfigMap exists.
$ kubectl -n default get configmap my-configmap
Which returns as expected:
NAME           DATA   AGE
my-configmap   1      12m
Try to use the lookup function to reference the existing ConfigMap.
# my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  someKey: someValue
  someOtherKey: {{ (lookup "v1" "ConfigMap" "default" "my-configmap").data.someValue }}
Then do a dry-run of the upgrade.
$ helm upgrade my-chart ./my-chart --dry-run
You will be met with a nil pointer error:
Error: UPGRADE FAILED: template: my-chart/templates/configmap.yaml:9:54: executing "my-chart/templates/configmap.yaml" at <"my-configmap">: nil pointer evaluating interface {}.someValue
What am I doing wrong?

This is expected behavior if you are using the --dry-run flag.
From the documentation:

Keep in mind that Helm is not supposed to contact the Kubernetes API Server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case.
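
One way to make the template tolerate the empty result is to capture the lookup and guard the field access. A minimal sketch, assuming the chart from the question:

# my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  someKey: someValue
{{- $existing := lookup "v1" "ConfigMap" "default" "my-configmap" }}
{{- if $existing }}
  someOtherKey: {{ $existing.data.someKey }}
{{- end }}

During --dry-run the lookup returns an empty dict, the if branch is skipped, and the render succeeds; during a real upgrade the existing value is used.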

Related

Error: unable to recognize "mongo-configmap.yaml": no matches for kind "ConfigMap" in version "V1"

I am following a MongoDB tutorial on Kubernetes, but when I create the configuration map, it gives me this error:
error: unable to recognize "mongo-configmap.yaml": no matches for kind "ConfigMap" in version "V1"
This is the mongo-configmap.yaml file:
apiVersion: V1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service
The version should be lowercase:
apiVersion: v1
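For completeness, the corrected manifest (the question's file with only the apiVersion fixed):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service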
You can run the command below to inspect the resource's attributes:
kubectl explain cm | head
Or:
kubectl explain cm --recursive | grep -i <attribute>
KIND:     ConfigMap
VERSION:  v1

DESCRIPTION:
     ConfigMap holds configuration data for pods to consume.

FIELDS:
   apiVersion   <string>
   binaryData   <map[string]string>
   data         <map[string]string>

How to fix Helm namespaces when the namespace was specified in the templates before, but is now set via the helm -n namespace flag

Some time ago we deployed many different releases in which we specified the namespaces in the templates themselves, e.g.:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Name }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
  ...
But we have now realized that this is not the correct approach; instead, you should use the -n namespace flag (see here).
In general, templates should not define a namespace. This is because Helm installs objects into the namespace provided with the --namespace flag. By omitting this information, it also provides templates with some flexibility for post-render operations (like helm template | kubectl create --namespace foo -f -)
So if we fix our files:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
  ...
and now run:
helm upgrade --install --debug -n myproject123 -f helm/configs/myproject123.yaml myproject123 helm
We get following errors:
history.go:56: [debug] getting history for release myproject123
Release "myproject123" does not exist. Installing it now.
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /Users/myuser/coding/myrepo/helm
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
helm.go:81: [debug] Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
        /private/tmp/helm-20210310-44407-1006esy/pkg/action/install.go:276
main.runInstall
        /private/tmp/helm-20210310-44407-1006esy/cmd/helm/install.go:242
main.newUpgradeCmd.func2
        /private/tmp/helm-20210310-44407-1006esy/cmd/helm/upgrade.go:115
github.com/spf13/cobra.(*Command).execute
        /Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
        /Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
        /Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
        /private/tmp/helm-20210310-44407-1006esy/cmd/helm/helm.go:80
runtime.main
        /usr/local/Cellar/go/1.16/libexec/src/runtime/proc.go:225
runtime.goexit
        /usr/local/Cellar/go/1.16/libexec/src/runtime/asm_amd64.s:1371
make: *** [ns_upgrade] Error 1
Any ideas how this can be fixed?
It is not possible for us to delete everything and then reinstall it, due to downtime (and the number of projects we have already deployed).
Use {{ .Release.Namespace }} instead of {{ .Release.Name }}. Then you will be able to override the namespace during installation via the CLI.
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Namespace }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
  ...
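
For resources that already exist under the old release metadata (the "invalid ownership metadata" error above), Helm >= 3.2 can adopt them once their ownership annotations match the new release. A hedged sketch, not from the original answer, using the release name from the question; test it on a non-critical resource first:

kubectl annotate namespace myproject123 meta.helm.sh/release-name=myproject123 --overwrite
kubectl annotate namespace myproject123 meta.helm.sh/release-namespace=myproject123 --overwrite
kubectl label namespace myproject123 app.kubernetes.io/managed-by=Helm --overwrite

After that, re-running the helm upgrade --install command should take ownership of the resource instead of failing.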

Helm upgrade that does a rolling Pod restart if chart values change

I have a simple Helm chart that consists of a Deployment and a ConfigMap. The ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.APP_NAMESPACE }}-config
data:
  LOGGED_OUT_MSG: "{{ .Values.LOGGED_OUT_MSG }}"
The ConfigMap is referenced via envFrom in the Pod template:
...
envFrom:
  - configMapRef:
      name: {{ .Values.APP_NAMESPACE }}-config
For one of my non-production environments I have the file override.yaml:
# override.yaml
LOGGED_OUT_MSG: "You are logged out (DEV)"
I then do a Helm upgrade like this:
$ helm upgrade -f override.yaml mychart .
I assumed that if I changed override.yaml and ran the above helm upgrade command, Helm would notice that the value of LOGGED_OUT_MSG had changed and do a rolling restart of my Pods. However, that does not happen. Instead, I have to delete the Pods manually for the change to come through.
Is there a way to run helm upgrade so that changes in override.yaml trigger Helm to do a rolling restart of the Pods?
There is no way to do this by default, AFAIK.
You are looking for Reloader by Stakater.
"Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets."
This will require installing the tool in your cluster and adding an annotation to your deployment.
https://github.com/stakater/Reloader
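
For example, with Reloader installed, you opt a Deployment in via an annotation on its metadata. A sketch based on Reloader's README (the name and elided spec are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app  # placeholder
  annotations:
    reloader.stakater.com/auto: "true"  # roll pods when referenced ConfigMaps/Secrets change
spec:
  ...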

Helm how to define .Release.Name value

I have created a basic Helm chart using the helm create command. When checking the template for the Ingress, it prefixes the app name with the string RELEASE-NAME, like this: RELEASE-NAME-microapp.
How can I change .Release.Name value?
$ helm template --kube-version 1.11.1 microapp/
# Source: microapp/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-microapp
  labels:
    app: microapp
    chart: microapp-0.1.0
    release: RELEASE-NAME
    heritage: Tiller
  annotations:
    kubernetes.io/ingress.class: nginx
This depends on what version of Helm you have; helm version can tell you this.
In Helm version 2, it's the value of the helm install --name parameter or, absent that, a name Helm chooses itself. If you're checking what might be generated via helm template, that also takes a --name parameter.
In Helm version 3, it's the first parameter to the helm install command. Helm won't generate a name automatically unless you explicitly ask it to with helm install --generate-name. helm template takes the same options.
Also, in Helm 3, if you want to specify a name explicitly for helm template, use the --name-template flag, e.g. helm template --name-template=dummy to use the name dummy instead of RELEASE-NAME.
As of helm 3.9 the flag is --release-name, making the command: helm template --release-name <release name>
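
For example, with Helm 3 and the chart directory from the question:

$ helm install dummy ./microapp     # release name is the first positional argument
$ helm template dummy ./microapp    # renders with "dummy" in place of RELEASE-NAME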

How to create a namespace if it doesn't exists from HELM templates?

I have a kind: Namespace template yaml, as per below:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
  namespace: ""
How do I make helm install create the above namespace ({{ .Values.namespace }}) if and only if that namespace doesn't exist in the target Kubernetes cluster?
Thanks.
This feature is implemented in helm >= 3.2 (Pull Request)
Use --create-namespace in addition to --namespace <namespace>
For Helm 2, it's best to avoid creating the namespace as part of your chart content if at all possible, and to let Helm manage it. helm install with the --namespace=<namespace_name> option should create the namespace for you automatically. You can reference that namespace in your chart with {{ .Release.Namespace }}. There's currently only one example of creating a namespace in the public helm/charts repo, and it uses a manual flag for checking whether to create it, along the lines of the sketch below.
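A sketch of that manual-flag pattern; createNamespace is a hypothetical value you would define in values.yaml yourself:

# templates/namespace.yaml
{{- if .Values.createNamespace }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
{{- end }}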
For Helm 3, the functionality has changed, and there's a GitHub issue on this.
There are some differences in the commands between Helm versions.
For Helm 2, just use --namespace; for Helm 3, you need to use --namespace and --create-namespace.
Helm 2 Example:
helm install stable/nginx-ingress --name ingress-nginx --namespace ingress-nginx --wait
Helm 3 Example:
helm install ingress-nginx stable/nginx-ingress --namespace ingress-nginx --create-namespace --wait
For Terraform users, set the create_namespace attribute to true:
resource "helm_release" "kube_prometheus_stack" {
name = ...
repository = ...
chart = ...
namespace = ...
create_namespace = true
}