What does {{ annotation .ObjectMeta `abc` `def` }} mean in helm template

I'm new to Helm. When I work with Istio, I see something like {{ annotation ... }} several times; more details:
spec:
  containers:
  - name: istio-proxy
{{- if contains "/" (annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image) }}
    image: "{{ annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image }}"
{{- else }}
    image: "{{ .ProxyImage }}"
{{- end }}
You can find the above code in the Istio GitHub repository.
I have read the Helm docs, so I think annotation is a function and all the others (i.e. .ObjectMeta, sidecar.istio.io/proxyImage, .Values.global.proxy.image) are just arguments. Am I right?
But I have no idea what the annotation function is. It would be better if anyone could point me in the right direction.

I just went hunting for the annotation template function in the Istio code and it's implemented by getAnnotation. The last argument to the function is a default value to use (presumably if the pod annotation doesn't exist).
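For illustration, here is a minimal Go sketch of what such a lookup-with-default function could look like (an assumption about its shape, not Istio's actual source; the function name is invented):

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationOrDefault returns the named pod annotation, falling back to the
// supplied default when the annotation is missing.
func annotationOrDefault(meta *metav1.ObjectMeta, name string, defaultValue interface{}) string {
    if value, ok := meta.Annotations[name]; ok {
        return value
    }
    return fmt.Sprintf("%v", defaultValue)
}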
As David Maze said in the comments above, the annotation function is not part of Helm. And the template in question is not actually a Helm template, but is an Istio template stored as a static file in the Helm chart. Its raw content is shoved into a ConfigMap without being processed by the Helm renderer (bringing the literal string ".Values.global.proxy.image" into the ConfigMap as part of the data); e.g., see the sidecar injector configmap. The Istio control-plane will then read this Istio template from the ConfigMap volume and render it with Go text/template, deserializing it into a struct with an ObjectMeta member, similar to:
type SidecarTemplateData struct {
    ...
    ObjectMeta *metav1.ObjectMeta
    Spec       *corev1.PodSpec
    ...
}
^ ObjectMeta and Spec belong to the pod that is being injected with a sidecar.
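Roughly, the rendering step could look like the following sketch (hedged: the function and variable names are invented for illustration, not Istio's actual code), wiring a custom annotation function into the template's FuncMap:

import (
    "bytes"
    "text/template"
)

// renderSidecar renders the Istio sidecar template against the pod data,
// registering "annotation" as a custom template function.
func renderSidecar(sidecarTemplate string, data SidecarTemplateData) (string, error) {
    funcMap := template.FuncMap{
        "annotation": annotationOrDefault, // the sketch above
    }
    t, err := template.New("inject").Funcs(funcMap).Parse(sidecarTemplate)
    if err != nil {
        return "", err
    }
    var buf bytes.Buffer
    if err := t.Execute(&buf, &data); err != nil {
        return "", err
    }
    return buf.String(), nil
}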
If you now read the Istio template you pasted again, you'll see the istio-proxy container will run the image defined by a pod's sidecar.istio.io/proxyImage annotation, falling back to .Values.global.proxy.image if the annotation doesn't exist. But it will only do so if the returned value contains a slash (/), otherwise it will use the value of .ProxyImage for the sidecar image. Fun times.

Related

Yaml dynamic variables

So, I'm just starting with YAML and k8s, and maybe this question comes from a lack of understanding of how YAML and Helm work together.
But I was wondering if I can declare a variable inside the values.yaml file that will be changed during the run of the scripts?
I was thinking about accumulating a value for each pod I am starting, which would be saved as an environment variable in each pod. I can manually create a different value for each pod, but I was wondering if there is an automatic way to do so?
Hope my question is clear :)
Helm allows for conditionals using its templating. For example, I can have this in my values.yaml
environment: preprod
And then this inside a yaml within my helm chart
{{ if eq .Values.environment "preprod" }}
## Do preprod stuff here
{{ end }}
{{ if eq .Values.environment "prod" }}
## Do prod stuff here
{{ end }}
This means if I ran helm install, then .Values.environment would resolve to "preprod" and the block within the {{ if eq .Values.environment "preprod" }}...{{ end }} would be printed in the yaml.
If I wanted to override that default, I can by adding the --set switch (details here)
helm install --set environment=prod
Which would cause the .Values.environment variable to resolve to "prod" instead, and the block within {{ if eq .Values.environment "prod" }} ... {{ end }} would be output instead.
Helm templates are stateless, and the variables structure is immutable.
Can I declare a variable inside the values.yaml file that will be changed during the run of the scripts?
That's not possible, no.
If you have experience with functional programming, there are some related tricks you can use in the context of Helm templates. A template only takes one parameter, but you can make that parameter be a list, and then pass some state forward through a series of recursive template calls. If you're tempted to do this, consider writing a Kubernetes operator instead: even if you have to learn Go to do it, the language is much more mainstream and practical than the template language, and it's much easier to test.
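As a hedged sketch of what that trick looks like (the template name and values are invented for illustration), a recursive helper that threads an accumulator through its single list parameter:

{{- define "mychart.join" -}}
{{- $items := index . 0 -}}
{{- $acc := index . 1 -}}
{{- if $items -}}
{{- include "mychart.join" (list (rest $items) (printf "%s-%s" $acc (first $items))) -}}
{{- else -}}
{{- $acc -}}
{{- end -}}
{{- end -}}

Calling {{ include "mychart.join" (list (list "a" "b" "c") "x") }} would produce x-a-b-c; the "state" is carried entirely in the argument list, because template variables themselves can't be mutated.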
This having been said:
... accumulating value for each pod I am starting ...
If all you're asking for is a set of Pods that are very similar except that they have sequential names, this is one of the things a StatefulSet provides.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: some-name
spec:
  replicas: 5
The Pods generated by this StatefulSet will be named some-name-0, some-name-1, and so on. Your application code can see these names via the hostname command and language-specific equivalents. That could meet your needs.
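If you want the name as an explicit environment variable rather than reading the hostname, one common pattern (standard Kubernetes downward API; the variable name here is illustrative) is:

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name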
If you need something more complex, you can also use a template range loop to generate a series of documents. Each document needs to begin with a --- YAML start-of-document marker. Be aware that range rebinds the . special template variable that appears in constructs like .Values, so I tend to save it away first.
{{- $top := . }}
{{- range $i := until 5 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "mychart.fullname" $top }}-{{ $i }}
spec: { ... }
{{- end }}
You should almost always use higher-level constructs like Deployments, StatefulSets, or Jobs instead of creating bare Pods. Trying to fit within their patterns will usually be a little easier than trying to manually create several very slightly different Pods.

Import parent template with subchart values

I have multiple subcharts with applications and a parent chart that will deploy them.
All subcharts have the same manifests for the underlying application, so I decided to create a library and put the general variables from the subcharts in it.
Example from lib:
{{- define "app.connect.common.release.common_libs.servicetemplate" -}}
apiVersion: v1
kind: Service
metadata:
labels:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
name: {{ .Values.application.name }}-service
namespace: {{ .Values.global.environment.namespace }}
spec:
type: LoadBalancer
ports:
- name: https
port: 443
targetPort: 8080
- name: http
port: 80
targetPort: 8080
selector:
app: {{ .Values.application.name }}
status:
loadBalancer: {}
{{- end }}
I declared a dependency in Chart.yaml and executed helm dep up. Then in my subchart I'm importing this template. But when I try to run --dry-run on the parent chart, I receive the following error:
Error: template: app.connect.common.release/charts/app.connect.common.release.chtmgr/templates/service.yaml:1:4: executing "app.connect.common.release/charts/app.connect.common.release.chtmgr/templates/service.yaml" at <include "app.connect.common.release.common_libs.servicetemplate" .>: error calling include: template: app.connect.common.release/charts/app.connect.common.release.chtmgr/charts/app.connect.common.release.common_libs/templates/_helpers.tpl:169:18: executing "app.connect.common.release.common_libs.servicetemplate" at <.Values.application.name>: nil pointer evaluating interface {}.name
My values.yaml in the subchart:
application:
  name: chtmgr-api
  image: cht-mgr-api
I get the same error with a named template.
Is it possible to put general values from the subchart in a parent template (for example _helpers.tpl) and import them in the subchart?
If not, how do you implement this?
I've checked a lot of resources but still have no idea whether I'm going in the right direction.
The Helm template define action creates a "function". It implicitly takes a single "parameter", using the special variable name ., and .Values is actually a lookup in .. It does not "capture" .Values at the point where it is defined; it uses the Values property of the parameter that's passed to it.
This means the template will behave differently when it's called in different contexts. As the Helm documentation on Subcharts and Global Variables describes, when executing the subchart, the top-level . parameter will have its Values replaced by the subchart's key in the primary values.
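To make the scoping concrete, a hedged illustration (the subchart name below is a placeholder): values the parent chart sets for a subchart are nested under that subchart's name, but inside the subchart's own templates they appear at the top level of .Values.

# parent chart's values.yaml
mysubchart:            # placeholder for the subchart's name
  application:
    name: chtmgr-api

# inside mysubchart's templates, the same value is simply:
#   {{ .Values.application.name }}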
There are three ways to work around this:
If you're using Helm 3, you can directly import a value from the subchart into the parent. (I'm not clear exactly which version of Helm added this, or whether the syntax works in a separate requirements.yaml file.) Declare the subchart dependency in your Chart.yaml as
dependencies:
  - name: subchart
    import-values:
      - child: application
        parent: application
and the template you show should work unmodified.
(There's a more involved path that involves the subchart explicitly exporting values to the parent. I'm not sure if this is that useful to you: the paths will still be different in the two charts, and in any case Helm values can't contain computed values.)
You already use .Values.global in your example; this will have the same value in the parent chart and all included charts.
# vvvvvv inserted "global" here
name: {{ .Values.global.application.name }}-service
namespace: {{ .Values.global.environment.namespace }}
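With this approach the shared settings live under global: in the parent chart's values.yaml, and every subchart sees the same values. A hedged sketch (the namespace value is illustrative):

global:
  environment:
    namespace: my-namespace
  application:
    name: chtmgr-api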
You can also use template logic to try to look in more places for the application dictionary. This will only work from the parent chart and the subchart proper and not any other sibling charts; I believe it will also only work if the configuration is embedded in the parent chart's values.yaml or an external helm install -f values file, but won't find content in the included chart's values.yaml.
{{/* Get the subchart configuration, or an empty dict (assuming we're in the parent) */}}
{{- $subchart := .Values.subchart | default dict -}}
{{/* Find the application values (assuming first we're in the subchart) */}}
{{- $application := .Values.application | default $subchart.application | default dict -}}
{{/* Then use it */}}
name: {{ $application.name }}-service
This logic sets the variable $application to .Values.application if that's set. If it's not set (you're in the parent chart), it in effect looks up .Values.subchart.application, and if that's not available either it uses an empty dictionary. The default function is (again, using programming terminology) eagerly evaluated: even when you're not falling back to the default, Helm will always look up .Values.subchart, so we use a second variable $subchart to hold either .Values.subchart or an empty dict (when you're in the subchart). Looking up a nonexistent key in an empty dict isn't an error, but looking up a key on an unset value is.
Found a solution. The problem was related to the chart/template names: all of them included '.' in their names, which causes problems. Don't include dots in your template/chart names. An example of how not to do it: "ibext.common.connect.etc".
My assumption
I'm quite new to Helm, but my assumption is that when the template engine looks into the subchart, it resolves variables as .Values.subchartName; in my case that was .Values.ibext.common.connect.etc. So when it encountered .Values.ibext it couldn't work out what it was (but Helm doesn't report anything).
But this is only my assumption. If someone understands the template engine's behaviour in this case, please reveal the secret.
Be aware that the *.tpl files must be in the templates folder. Double-check all folder names and the structure.

Using a node selector in a Helm chart to assign pods to a specific node pool

I'm trying to assign pods to a specific node as part of the helm command, so in the end the deployment YAML should look like this:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of a Jenkinsfile deployment:
`sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}`"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value in the deployment YAML, and as a result pods are not assigned to the specific node I want. Any idea what is wrong? Should I put a placeholder somewhere in my chart template, or is that not a must?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
{{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
{{- end }}
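With that in place, the value can come from a values file or the command line; a hedged usage sketch matching the pool name from the question:

# values.yaml, or equivalently: helm upgrade ... --set nodeSelector.node-name=dev-cpu-pool
nodeSelector:
  node-name: dev-cpu-pool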

Create kubernetes resources with helm only if custom resource definition exists

I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (ServiceMonitor used by prometheus-operator).
I am looking for a way, how to "tell" helm that I'd want to create this resource only if such a CRD is defined in the cluster OR to ignore errors only caused by the fact that such a CRD is missing.
Is that possible and how can I achieve that?
Helm's Capabilities object can tell you if an entire API class is installed in the cluster. I don't think it can test for a specific custom resource type.
In your .tpl files, you can wrap the entire file in a {{ if }}...{{ end }} block. Helm doesn't especially care if the rendered version of a file is empty.
That would lead you to a file like:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
{{ end -}}
That would get installed if the operator is installed in the cluster, and skipped if not.
If you are on Helm 3 you can put your CRD in the crds/ directory. Helm will treat it differently, see the docs here.
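A hedged sketch of that chart layout (file names are illustrative):

mychart/
  Chart.yaml
  crds/
    servicemonitor-crd.yaml    # installed before the templates and never run through the template engine
  templates/
    servicemonitor.yaml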
In Helm 2 there is another mechanism using the crd-install hook. You can add the following to your CRD:
annotations:
  "helm.sh/hook": crd-install
There are some limitations with this approach so if you are using Helm 3 that would be preferred.
In Helm v3, you can test for specific resources:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
spec:
  ...
{{- end }}
https://helm.sh/docs/chart_template_guide/builtin_objects/

Best way to DRY up deployments that all depend on a very similar init-container

I have 10 applications to deploy to Kubernetes. Each of the deployments depends on an init container that is basically identical except for a single parameter (and it doesn't make conceptual sense for me to decouple this init container from the application). So far I've been copy-pasting this init container into each deployment.yaml file, but I feel like there's got to be a better way of doing this!
I haven't seen a great solution in my research; the only thing I can think of so far is to use something like Helm to package up the init container and deploy it in some dependency-based way (Argo?).
Has anyone else with this issue found a solution they were satisfied with?
A Helm template can contain an arbitrary amount of text, just so long as when all of the macros are expanded it produces a valid YAML Kubernetes manifest. ("Valid YAML" is trickier than it sounds because the indentation matters.)
The simplest way to do this would be to write a shared Helm template that included the definition for the init container:
_init_container.tpl:
{{- define "common.myinit" -}}
name: myinit
image: myname/myinit:{{ .Values.initTag }}
# Other things from a container spec
{{ end -}}
Then in your deployment, include this:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
        - {{ include "common.myinit" . | indent 10 | trim }}
Then you can copy the _init_container.tpl file into each of your individual services.
If you want to avoid the copy-and-paste (reasonable enough) you can create a Helm chart that contains only templates and no actual Kubernetes resources. You need to set up some sort of repository to hold this chart. Put the _init_container.tpl into that shared chart, declare it as a dependency in the chart metadata, and reference the template in your deployment YAML in the same way (Go template names are shared across all included charts).
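A hedged sketch of that dependency declaration in each service's Chart.yaml (the chart name, version, and repository URL are placeholders):

dependencies:
  - name: common
    version: 0.1.0
    repository: https://charts.example.com/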