nil pointer evaluating interface when installing a helm chart - kubernetes

I'm trying to install a chart into my cluster, but I'm getting an error:
Error: template: go-api/templates/deployment.yaml:18:24: executing "go-api/templates/deployment.yaml"
at <.Values.deployment.container.name>: nil pointer evaluating interface {}.name
However, I executed the same commands for two other charts and they worked fine.
The template file I'm using is this:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  labels: {{- include "chart.labels" . | nindent 4 }}
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels: {{ include "chart.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.deployment.container.name }}
          image: {{ .Values.deployment.container.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.deployment.container.port }}

This can happen if the Helm values you're using to install don't have that particular block:
namespace: default
deployment:
  name: a-name
  replicas: 1
  # but no container:
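Conversely, supplying the missing block in the values file (or via --set) resolves the error. A minimal example, with placeholder container values you would replace with your own:
namespace: default
deployment:
  name: a-name
  replicas: 1
  container:
    name: a-container        # placeholder values; match your actual image
    image: my/image:latest
    port: 8080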
To avoid this specific error in the template code, it's useful to pick out the parent dictionary into a variable; then if the parent is totally absent, you can decide what to do about it. This technique is a little more useful if there are optional fields or sensible defaults:
{{- $deployment := .Values.deployment | default dict }}
metadata:
  name: {{ $deployment.name | default (include "chart.fullname" .) }}
spec:
  {{- if $deployment.replicas }}
  replicas: {{ $deployment.replicas }}
  {{- end }}
If you really can't work without the value, Helm has a required function that can print a more specific error message.
{{- $deployment := .Values.deployment | required "deployment configuration is required" }}
(My experience has been that required values are somewhat frustrating as an end user, particularly if you're trying to run someone else's chart, and I would try to avoid this if possible.)
Given what you show, it's also possible you're making the chart too configurable. The container name, for example, is mostly a detail that only appears if you have a multi-container pod (or are using Istio); the container port is a fixed attribute of the image you're running. You can safely fix these values in the Helm template file, and then it's reasonable to provide defaults for things like the replica count or image name (consider setting the repository name, image name, and tag as separate variables).
{{- $deployment := .Values.deployment | default dict }}
{{- $registry := $deployment.registry | default "docker.io" }}
{{- $image := $deployment.image | default "my/image" }}
{{- $tag := $deployment.tag | default "latest" }}
containers:
  - name: app # fixed
    image: {{ printf "%s/%s:%s" $registry $image $tag }}
    {{- with .Values.imagePullPolicy }}
    imagePullPolicy: {{ . }}
    {{- end }}
    ports:
      - name: http
        containerPort: 3000 # fixed

If the value is defined in your values file and you're still getting the error, the issue could be that you're accessing that value inside range or a similar function that changes the context.
For example, to use a named template mySuffix that references .Values inside a range loop with the template function, we need to pass $ to the template function instead of the usual .:
{{- define "mySuffix" -}}
{{- .Values.suffix }}
{{- end }}
...
{{- range .Values.listOfValues }}
echo {{ template "mySuffix" $ }}
{{- end }}
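For illustration, hypothetical values that would drive this snippet could look like:
suffix: -dev            # consumed by the mySuffix named template
listOfValues:
  - alpha
  - beta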

Related

Starting container multiple times with different environment variables

I have a container image that loads multiple large files on startup. When the container restarts, all files have to be loaded again.
What I want to do now is to start six instances that each load only one file, selected by an environment variable. Now my question is how to configure this. What I could do is create a new deployment+service for each file, but that seems incorrect because 99% of the content is the same; only the environment variable is different. Another option would be to have one pod with multiple containers and one gateway-like container. But then, when the pod restarts, all files are loaded again.
What's the best strategy to do this?
Ideally, you should keep it as deployment+service and create the five or six different Secrets or ConfigMaps you need, each storing the environment variables your application requires.
Inject these Secrets or ConfigMaps one by one into each different Deployment.
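A minimal sketch of that pattern, using hypothetical names (loader-config-1, FILE_TO_LOAD, my/loader); you would create one such ConfigMap and Deployment pair per file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: loader-config-1
data:
  FILE_TO_LOAD: /data/file-1.bin    # assumed variable name and path
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loader-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: loader-1
  template:
    metadata:
      labels:
        app: loader-1
    spec:
      containers:
        - name: loader
          image: my/loader:latest         # assumed image
          envFrom:
            - configMapRef:
                name: loader-config-1     # only this reference changes per Deployment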
Another option would be to have one pod with multiple containers and one gateway-like container.
That doesn't look like a scalable approach if you are running the five or six containers inside a single pod plus a gateway container.
Although this does not directly address the question, I am providing an answer on how you could create multiple container instances from one deployment that differ only in their environment variables, because this question pops up when googling for said approach.
For your question, the answer by Harsh Manvar is the correct, Kubernetes-approved way of handling it.
I had the same problem and found a solution which needs to be refined a bit more.
With Helm you can specify a map of key-value pairs inside your values.yaml, which would look like this:
envVars:
  key1: value1
  key2: value2
Now you need to modify your deployment.yaml to loop over this map and inject the values as environment variables into your containers:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "chart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        {{- range $key, $value := .Values.envVars }}
        - name: {{ $.Chart.Name }}-{{ $key }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
        {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
There are some notes that need to be considered, though:
We are using {{- range $key, $value := .Values.envVars }} inside the containers spec to create multiple containers inside one pod. This may not be wanted due to resource restrictions.
You can reference the key-value pairs with $key and $value, as specified in the range command.
As the scope inside the range block is set to .Values.envVars, every other call outside of that scope needs to be made from the root scope. In Helm the root scope is referenced with the $ sign, e.g. {{ $.Chart.Name }}.
You need to incorporate your key or value into the container name, or you would be creating duplicates; hence the - name: {{ $.Chart.Name }}-{{ $key }} portion.
It should be possible to use the same range command to create multiple deployments based on one file if it is put at the top (see the sketch after these notes). Just keep in mind to fix the scope of other calls.
It should NOT be possible to create multiple (different) pods out of one deployment, because the Kubernetes structure does not expect multiple template sections within the spec of a deployment; multiple pods would simply be replicas. (The template inside deployment.spec is of type PodTemplateSpec.)
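A rough sketch of that multiple-Deployment variant, reusing the same envVars map and image values from above (only the skeleton is shown; selector, resources, and the rest follow the same $-scoping rules):
{{- range $key, $value := .Values.envVars }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $.Chart.Name }}-{{ $key }}   # the key keeps each Deployment name unique
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $.Chart.Name }}-{{ $key }}
  template:
    metadata:
      labels:
        app: {{ $.Chart.Name }}-{{ $key }}
    spec:
      containers:
        - name: {{ $.Chart.Name }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          env:
            - name: envVarName
              value: {{ $value | quote }}
{{- end }}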
Anyway, this took me quite some time to crack, I hope it is helpful to someone.

Helm upgrade error. atlassian-jira-software ingress

I'm trying to update the Helm ingress for Atlassian Jira Software.
I have this ingress template:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "atlassian-jira-software.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app: {{ template "atlassian-jira-software.name" . }}
    chart: {{ template "atlassian-jira-software.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  {{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
    {{- end }}
{{- end }}
Execute this command:
helm upgrade --dry-run -n atlassian jira .
The output of this command:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "jira-atlassian-jira-software" in namespace "atlassian" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "jira"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "atlassian"
kubectl version --short
The output:
Client Version: v1.19.12 Server Version: v1.19.13-eks-8df270
Please, help me!
Was the ingress originally installed by Helm? Check out its "labels" section:
kubectl get ingress jira-atlassian-jira-software -o json
If you don't find the expected values (as described in the error messages) and you are sure you know what you are doing, you can try adding the labels yourself by editing the ingress:
kubectl edit ingress jira-atlassian-jira-software
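Based on the keys listed in the error message, the metadata you would need to add should look roughly like this (the release name and namespace values come straight from that error output):
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: jira
    meta.helm.sh/release-namespace: atlassian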
If you do this, make sure that you run a diff before you do the helm upgrade again (to ensure that you see what is going to happen in advance and that you don't blow away anything you did not intend to):
helm diff upgrade -n atlassian jira .

Kubernetes write once, deploy anywhere

Consider that I have built a complex Kubernetes deployment/workload consisting of deployments, stateful sets, services, operators, CRDs with specific configuration etc… The workload/deployment was created by individual commands (kubectl create, helm install…)…
1-) Is there a way to dynamically (not manually) generate a script or a special file that describes the deployment and that could be used to redeploy/reinstall my deployment without going through each command one by one again?
2-) Is there a domain-specific language (DSL) or something similar through which one can describe a Kubernetes deployment independently of the final target Kubernetes cluster, whether GKE, AWS, Azure, or on-premises… kind of write once, deploy anywhere?
Thanks.
I think Kustomize and Helm are your best bet. With Helm we can write the chart as a template and decide which parts get rendered using Go templating and conditions.
For example, look at the configuration file below, which has a few conditions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      annotations:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
      - name: {{ .Values.appName }}
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.hasSecretVolume }}
        volumeMounts:
        - name: {{ .Values.appName }}-volume-sec
          mountPath: {{ .Values.secretVolumeMountPath }}
        {{- end }}
        {{- if or .Values.env.configMap .Values.env.secrets }}
        envFrom:
        {{- if .Values.env.configMap }}
        - configMapRef:
            name: {{ .Values.appName }}-env-configmap
        {{- end }}
        {{- if .Values.env.secrets }}
        - secretRef:
            name: {{ .Values.appName }}-env-secret
        {{- end }}
        {{- end }}
        ports:
        - containerPort: {{ .Values.containerPort }}
          protocol: TCP
        {{- if .Values.railsContainerHealthChecks }}
{{ toYaml .Values.railsContainerHealthChecks | indent 8 }}
        {{- end }}
      {{- if .Values.hasSecretVolume }}
      volumes:
      - name: {{ .Values.appName }}-volume-sec
        secret:
          secretName: {{ .Values.appName }}-volume-sec
      {{- end }}
      {{- if .Values.imageCredentials }}
      imagePullSecrets:
      - name: {{ .Values.imageCredentials.secretName }}
      {{- end }}
For instance, this condition checks for a secret volume and mounts it:
{{- if .Values.hasSecretVolume }}
In case you are interested in helm and generic templating, you can refer to this medium blog: https://medium.com/srendevops/helm-generic-spring-boot-templates-c9d9800ddfee

Helm require value without using it

Is it possible to have a required .Value without using it in the template?
For example, in my case I want to require a password to be set for a mongodb subchart, but I won't use it in my templates. So can I have something like below in a template:
{{- required 'You must set a mongodb password' .Values.mongodb.mongodbPassword | noPrint -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "cloud.fullname" . }}
  labels:
    {{- include "cloud.labels" . | nindent 4 }}
    app.kubernetes.io/component: cloud
spec:
  replicas: {{ .Values.cloud.minReplicaCount }}
  selector:
    ....
And the result would be something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blablablabla
  ...
Possibly the most direct way is to use sprig's fail function.
{{- if not .Values.mongodb.mongodbPassword -}}
{{- fail "You must set a mongodb password" -}}
{{- end -}}
Assigning the required expression to a variable (that you never use) will probably also have the desired effect.
{{- $unused := required "You must set a mongodb password" .Values.mongodb.mongodbPassword -}}
Yes, it is possible. Let's consider the values.yaml file below:
values.yaml:
mongodb:
  mongodbPassword: "AbDEX***"
So, you want to generate the deployment file only if the password is set. You can do this by using the if block of Go templating. If the password field is non-empty, the deployment YAML will be generated; otherwise it will not be.
{{- if .Values.mongodb.mongodbPassword }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "cloud.fullname" . }}
  labels:
    {{- include "cloud.labels" . | nindent 4 }}
    app.kubernetes.io/component: cloud
spec:
  replicas: {{ .Values.cloud.minReplicaCount }}
  selector:
    ....
{{- end }}
Reference:
{{if pipeline}} T1 {{end}}
If the value of the pipeline is empty, no output is generated;
otherwise, T1 is executed. The empty values are false, 0, any nil pointer or
interface value, and any array, slice, map, or string of length zero.
Dot is unaffected.

Helm - how to call helper functions in a loop?

I'm trying to define n StatefulSets where n is the number of nodes required, set in values.yaml as nodeCount. I get an error that looks to be scope related, but I can't seem to get the scope sorted out. Am I missing something here?
The relevant content in my StatefulSet .yaml file:
{{ range $k, $v := until ( .Values.nodeCount | int) }}
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ $.Release.Name }}
  labels:
    app: {{ $.Release.Name }}
    chart: {{ template "myapp-on-kube.chart" . }} # here's my call to _helpers
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
The relevant content in _helpers.tpl:
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "myapp-on-kube.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
The error I get:
Error: render error in "myapp-on-kube/templates/statefulset.yaml": template: myapp-on-kube/templates/_helpers.tpl:31:25: executing "myapp-on-kube.chart" at <.Chart.Name>: can't evaluate field Chart in type int
Several of the Go templating constructs change the meaning of . to be the thing that's being looped over, and you need to use $ to refer to the initial value. Most of your template correctly refers to e.g. $.Release.Name, but when you invoke the helper template, it's using the current context rather than the root value. Change:
chart: {{ template "myapp-on-kube.chart" $ }}
(Note that the template as you have it will declare several StatefulSets all with the same name, which won't go well. I might create just one StatefulSet with replicas: {{ .Values.nodeCount }}.)
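A minimal sketch of that single-StatefulSet alternative, outside of any range so the plain . scope works; the serviceName and container details here are placeholders you would replace with your own:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: {{ template "myapp-on-kube.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.nodeCount }}
  serviceName: {{ .Release.Name }}   # assumes a matching headless Service
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: myapp                # placeholder; use your real container spec
          image: my/image:latest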