Is it possible to abstract boilerplate YAML from helm subchart templates? - kubernetes

Is there a way to move boilerplate YAML from subchart templates into the parent chart's _helpers.tpl or values.yaml?
Helm project layout:
MyHelmApp
│
├── Chart.yaml
├── values.yaml
├── templates
│   ├── _helpers.tpl
│   ├── configmap.yaml
│   └── app-ingress-rules.yaml
│
└── charts
    ├── app1
    │   ├── Chart.yaml
    │   ├── templates
    │   │   ├── _helpers.tpl
    │   │   ├── deployment.yaml
    │   │   └── service.yaml
    │   └── values.yaml
    ├── app2
    │   ├── Chart.yaml
    │   ├── templates
    │   │   ├── _helpers.tpl
    │   │   ├── deployment.yaml
    │   │   └── service.yaml
    │   └── values.yaml
    └── app3
        ├── Chart.yaml
        ├── templates
        │   ├── _helpers.tpl
        │   ├── deployment.yaml
        │   └── service.yaml
        └── values.yaml
To elaborate: I have 3 app subcharts and they each have boilerplate YAML in their templates. The values in those templates are defined in the parent chart's values.yaml, and the variable paths are identical. For instance:
The deployment.yaml files in app1, app2, and app3 all contain:
livenessProbe:
  httpGet:
    path: {{ .Values.health.livenessProbe.path }}
    port: {{ .Values.health.livenessProbe.port }}
  initialDelaySeconds: {{ .Values.health.livenessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.health.livenessProbe.periodSeconds }}
  timeoutSeconds: {{ .Values.health.livenessProbe.timeoutSeconds }}
  successThreshold: {{ .Values.health.livenessProbe.successThreshold }}
  failureThreshold: {{ .Values.health.livenessProbe.failureThreshold }}
readinessProbe:
  tcpSocket:
    port: {{ .Values.health.readinessProbe.port }}
  initialDelaySeconds: {{ .Values.health.readinessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.health.readinessProbe.periodSeconds }}
  timeoutSeconds: {{ .Values.health.readinessProbe.timeoutSeconds }}
  successThreshold: {{ .Values.health.readinessProbe.successThreshold }}
  failureThreshold: {{ .Values.health.readinessProbe.failureThreshold }}
values.yaml (in the parent chart)
app1:
  health:
    livenessProbe:
      path: /system_health
      port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3
    readinessProbe:
      port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3
app2:
  health:
    livenessProbe:
      path: /system_health
      port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3
    readinessProbe:
      port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3
(etc)
Back to the question: I'd like to grab what's identical in the subchart templates and move it to one centralized place like the parent chart's _helpers.tpl or values.yaml; is this possible? Can you provide an example of how you'd do this?

Helm renders all of the YAML it's given, from all parent and dependency charts together, in a single execution environment with a shared template namespace. In theory, the app1 chart can depend on a template that's defined in the parent's _helpers.tpl, and in the specific layout you show, it will work.
Because of this environment setup, it's also possible to write a "chart" that doesn't actually produce any YAML of its own but only contains templates; Helm 3 formalizes this as a "library chart". An even better layout would be to put your shared templates in such a library chart and have the application charts reference it.
MyHelmApp
└── charts
    ├── app1
    │   ├── Chart.yaml
    │   └── templates/...
    └── common
        ├── Chart.yaml
        └── templates
            └── _helpers.tpl   (but no *.yaml)
Now MyHelmApp depends on app1, app2, and app3, and each of those depends on common. This would let you install any of those independently of their siblings.
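Concretely, each application chart declares common as a dependency, and in Helm 3 the common chart can be marked as a library chart so it never produces installable resources of its own. A minimal sketch, with names assumed from the tree above (Helm 2 would declare the same dependency in requirements.yaml instead, and the file:// path is just one way to point at a sibling chart):
# charts/common/Chart.yaml
apiVersion: v2
name: common
version: 0.1.0
type: library    # templates only, nothing gets installed from this chart

# charts/app1/Chart.yaml
apiVersion: v2
name: app1
version: 0.1.0
dependencies:
  - name: common
    version: 0.1.0
    repository: file://../common
After that, helm dependency update charts/app1 vendors common into app1, so the chart can also be installed on its own.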
Helm doesn't have a way to "push" fragments of YAML into objects in other locations (Kustomize, which ships with relatively recent versions of kubectl, can do this). Each object has to declare for itself any potential customization that could be allowed. So in each chart you'd have to declare:
spec:
{{ include "common.probes" . | indent 2 }}
The common chart just defines templates, and when they're invoked they'd use the version of .Values that's localized to the subchart.
{{- define "common.probes" -}}
livenessProbe:
  httpGet:
    path: {{ .Values.health.livenessProbe.path }}
  # ...and so on for the remaining probe fields
{{- end -}}
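Spelled out in full, the helper in common/templates/_helpers.tpl could look like the following; this is only a sketch that mirrors the probe fields from the question:
{{- define "common.probes" -}}
livenessProbe:
  httpGet:
    path: {{ .Values.health.livenessProbe.path }}
    port: {{ .Values.health.livenessProbe.port }}
  initialDelaySeconds: {{ .Values.health.livenessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.health.livenessProbe.periodSeconds }}
  timeoutSeconds: {{ .Values.health.livenessProbe.timeoutSeconds }}
  successThreshold: {{ .Values.health.livenessProbe.successThreshold }}
  failureThreshold: {{ .Values.health.livenessProbe.failureThreshold }}
readinessProbe:
  tcpSocket:
    port: {{ .Values.health.readinessProbe.port }}
  initialDelaySeconds: {{ .Values.health.readinessProbe.initialDelaySeconds }}
  periodSeconds: {{ .Values.health.readinessProbe.periodSeconds }}
  timeoutSeconds: {{ .Values.health.readinessProbe.timeoutSeconds }}
  successThreshold: {{ .Values.health.readinessProbe.successThreshold }}
  failureThreshold: {{ .Values.health.readinessProbe.failureThreshold }}
{{- end -}}
Because the include is rendered with the subchart's own context, app1 reads the values the parent defines under app1.health, app2 reads app2.health, and so on, without any change to the shared template.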

Related

Kustomize - Imported (resource) patch doesn't work

I am not managing to include a kustomization.yaml that only contains patches. As a minimal example of the problem I'm solving, I can illustrate with the folder structure below:
.
├── kustomization.yaml
├── nested
│   ├── kustomization.yaml
│   └── serviceaccount-patch.yaml
└── service-account.yaml
The files are:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: top
resources:
  - service-account.yaml
  - nested

# service-account.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: base
spec:
  automountServiceAccountToken: false

# nested/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: nested
patches:
  - path: serviceaccount-patch.yaml
    target:
      kind: ServiceAccount
      name: base

# nested/serviceaccount-patch.yaml
- op: replace
  path: /metadata/name
  value: replaced
The patch is not applied. When putting the patch straight into kustomization.yaml it does work. I'd like to keep this structure, as my actual use case is to have the following:
.
├── base
│   ├── env
│   │   ├── app
│   │   │   ├── kustomization.yaml
│   │   │   └── service-account.yaml
│   │   └── kustomization.yaml
│   ├── kustomization.yaml
│   └── shared
│       ├── kustomization.yaml
│       └── pod.yaml
├── overlay
│   └── dev
│       ├── env
│       │   ├── app
│       │   │   ├── kustomization.yaml
│       │   │   └── serviceaccount-patch2.yaml
│       │   ├── kustomization.yaml
│       │   └── serviceaccount-patch.yaml
│       ├── kustomization.yaml
│       └── shared
│           └── kustomization.yaml
Here, I would have two parts for each environment: env and shared. I would override the namespace in overlay/dev/env/kustomization.yaml. That file would also include the base at base/env, making sure that the namespace of the shared base is not modified. The above also illustrates which patch works (serviceaccount-patch.yaml) and which doesn't (serviceaccount-patch2.yaml).
If there is a better way to achieve this, I'd love to hear it.
A patch can only be applied to resources that are generated by the kustomization.yaml that includes the patch. In other words, running kustomize build without the patch defined must create the resources that you want to patch.
Running kustomize build in your nested directory produces no output -- its kustomization.yaml includes no resources, so your patch is a no-op.
I'm not entirely sure I understand your desired layout; in particular, it's not clear what the difference between dev/ and dev/env/ is meant to be. You would generally set things up like this:
.
├── base
│   ├── kustomization.yaml
│   └── service-account.yaml
└── overlay
    ├── dev
    │   ├── kustomization.yaml
    │   └── serviceaccount-patch.yaml
    ├── us-east
    │   ├── kustomization.yaml
    │   └── ...
    └── us-west
        ├── kustomization.yaml
        └── ...
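In this layout, base/kustomization.yaml simply lists the manifests that the overlays will patch; a minimal sketch, assuming only the ServiceAccount from above:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service-account.yaml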
Where overlay/dev/kustomization.yaml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: serviceaccount-patch.yaml
    target:
      kind: ServiceAccount
      name: base
Running kustomize build overlay/dev shows that the patch is applied as expected:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: replaced
spec:
  automountServiceAccountToken: false

Kustomize/Kubernetes - Nested envFrom injection

I'm trying to add environment-variable injection based on envFrom.
A simplified structure looks something like this:
├── base
│   ├── backend
│   │   ├── backend.properties
│   │   └── app1
│   │       ├── app1_backend.properties
│   │       ├── deployment.yaml
│   │       ├── ingress.yaml
│   │       └── kustomization.yaml
│   ├── common.properties
│   ├── frontend
│   │   ├── app1
│   │   │   ├── app1_frontend.properties
│   │   │   ├── deployment.yaml
│   │   │   ├── ingress.yaml
│   │   │   ├── kustomization.yaml
│   │   │   └── service.yaml
│   │   ├── frontend.properties
│   │   └── kustomization.yaml
│   └── kustomization.yaml
I would like to generate properties at the main level (common), at the backend/frontend level, and at the particular app level.
So I added the following patch at the main level, and it works:
- op: add
  path: /spec/template/spec/containers/0/envFrom
  value:
    - configMapRef:
        name: common-properties
and the following patch in the nested directories (backend/frontend/particular app):
- op: add
  path: "/spec/template/spec/containers/0/envFrom/-"
  value:
    configMapRef:
      name: backend-properties
But it doesn't work with the following error:
add operation does not apply: doc is missing path: "/spec/template/spec/containers/0/envFrom/-": missing value
I have seen some examples on GitHub where that syntax was used: https://github.com/search?l=YAML&p=1&q=%2Fspec%2Ftemplate%2Fspec%2Fcontainers%2F0%2FenvFrom%2F-&type=Code (you have to be logged in to see results), and I'm not sure whether it stopped working in a specific Kustomize version (I'm using the newest version, 4.5.3) or never worked at all.
I have already written some Kustomize patches, and the /- syntax has usually worked fine for fields that already exist in the manifest.
Is it possible to inject that envFrom at different levels?
It's hard to diagnose your problem without a reproducible example, but if I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    spec:
      containers:
        - name: example
          image: docker.io/alpine:latest
          envFrom:
            - configMapRef:
                name: example-config
And use this kustomization.yaml, which includes your patch without changes:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: example
    patch: |-
      - op: add
        path: "/spec/template/spec/containers/0/envFrom/-"
        value:
          configMapRef:
            name: backend-properties
Then everything seems to work, and I get the resulting output from kustomize build:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: example-config
            - configMapRef:
                name: backend-properties
          image: docker.io/alpine:latest
          name: example
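One plausible cause of the "doc is missing path" error, if your base Deployment itself doesn't define envFrom at all, is ordering: kustomize builds the inner kustomizations (backend/frontend/app) before the outer one, so the outer patch that creates envFrom has not run yet when the nested /- patch is applied, and the JSON Patch /- append only works on a list that already exists at that point. A sketch of a workaround, creating the list at the innermost level that needs it (the configMap name here just reuses your example):
- op: add
  path: /spec/template/spec/containers/0/envFrom
  value:
    - configMapRef:
        name: backend-properties
The outer levels can then safely use the /- form to append their own configMapRef entries.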

Kustomize: how to reference a value from a ConfigMap in another resource/overlay?

I have a couple of overlays (dev, stg, prod) pulling data from multiple bases where each base contains a single service so that each overlay can pick and choose what services it needs. I generate the manifests from the dev/stg/prod directories.
A simplified version of my Kubernetes/Kustomize directory structure looks like this:
├── base
│   ├── ServiceOne
│   │   ├── kustomization.yaml
│   │   └── service_one_config.yaml
│   ├── ServiceTwo
│   │   ├── kustomization.yaml
│   │   └── service_two_config.yaml
│   └── ConfigMap
│       ├── kustomization.yaml
│       └── config_map_constants.yaml
└── overlays
    ├── dev
    │   ├── kustomization.yaml
    │   └── dev_patch.yaml
    ├── stg
    │   ├── kustomization.yaml
    │   └── stg_patch.yaml
    └── prod
        ├── kustomization.yaml
        └── prod_patch.yaml
Under base/ConfigMap, the config_map_constants.yaml file contains key/value pairs that are not secrets:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: myApp
  name: global-config-map
  namespace: myNamespace
data:
  aws_region: "us-west"
  env_id: "1234"
If an overlay just needs a default value, it should reference the key/value pair as is, and if it needs a custom value, I would use a patch to override the value.
The kustomization.yaml in base/ConfigMap looks like this and refers to the ConfigMap as a resource:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - config_map_constants.yaml
QUESTION: how do I reference "aws_region" in my overlays' yaml files so that I can retrieve the value?
For example, I want to be able to do something like this in base/ServiceOne/service_one_config.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myApp
    aws_region: ../ConfigMap/${aws_region} #pseudo syntax
  name: service_one
spec:
  env_id: ../ConfigMap/${env_id} #pseudo syntax
I am able to build the ConfigMap and append it to my services but I am struggling to find how to reference its contents within other resources.
EDIT:
Kustomize version: v4.5.2
You can try using the replacements feature: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/
For your scenario, if you want to reference aws_region in your Service's labels, you need to create a replacement file.
replacements/region.yaml
source:
  kind: ConfigMap
  name: global-config-map
  fieldPath: data.aws_region
targets:
  - select:
      kind: Service
      name: service_one
    fieldPaths:
      - metadata.labels.aws_region
And add it to your kustomization.yaml:
replacements:
  - path: replacements/region.yaml
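For context, a complete overlays/dev/kustomization.yaml tying this together might look roughly like the following; the resource paths are assumed from the directory layout in the question:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/ConfigMap
  - ../../base/ServiceOne
replacements:
  - path: replacements/region.yaml
with replacements/region.yaml sitting next to it in the overlay directory.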
Kustomize output should be similar to this:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myApp
    aws_region: us-west
  name: service_one

helm hook post-install shell script execution

I am trying to set up a Helm post-install hook and am seeing the error below.
ERROR:
sh: script/jenkins.sh: not found
postinstall.yaml content
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
spec:
  template:
    spec:
      containers:
        - name: post-install-jenkins-job
          image: alpine:3.3
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh", "-c", "scripts/jenkins.sh" ]
      restartPolicy: Never
      terminationGracePeriodSeconds: 0
Folder structure of the Helm package: scripts/jenkins.sh is the script that I am trying to execute from postinstall.yaml as a post-install Helm hook.
riq-agent
├── Chart.yaml
├── README.md
├── scripts
│   └── jenkins.sh
├── templates
│   ├── NOTES.txt
│   ├── Untitled-1.yml
│   ├── _helpers.tpl
│   ├── awssecret.yaml
│   ├── clusterrolebinding.yaml
│   ├── configurationFiles-configmap.yaml
│   ├── deployment.yaml
│   ├── hook-aws-ecr.yaml
│   ├── initializationFiles-configmap.yaml
│   ├── postinstall.yaml
│   ├── pvc.yaml
│   ├── secrets.yaml
│   ├── serviceaccount.yaml
│   ├── servicemonitor.yaml
│   ├── svc.yaml
│   └── tests
│       ├── test-configmap.yaml
│       └── test.yaml
└── values.yaml
Are there any mistakes in the way I am trying to execute a shell script (stored within the Helm package) from a Helm hook?
In order to be executed, scripts/jenkins.sh needs to be part of the post-install-jenkins-job container's filesystem, for example mounted as a volume. You can populate a volume with data stored in a ConfigMap.
postinstall-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-postinstall-configmap
data:
  jenkins.sh: |-
{{ .Files.Get "scripts/jenkins.sh" | indent 4 }}
postinstall.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
spec:
  template:
    spec:
      containers:
        - name: post-install-jenkins-job
          image: alpine:3.3
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh", "-c", "/opt/scripts/jenkins.sh" ]
          volumeMounts:
            - name: config-volume
              mountPath: /opt/scripts
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Release.Name }}-postinstall-configmap
            defaultMode: 0777
      restartPolicy: Never
      terminationGracePeriodSeconds: 0
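As a side note, you may also want a helm.sh/hook-delete-policy annotation so completed hook Jobs are cleaned up automatically instead of lingering after each release; for example:
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    # delete the Job once it has run successfully
    "helm.sh/hook-delete-policy": hook-succeeded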

Access parent (chart) context from subcharts

I have a Helm chart comprising a few subcharts, something like this:
├── Chart.yaml
├── README.md
├── charts
│   ├── nginx-1.0.0.tgz
│   └── redis-1.0.0.tgz
├── index.yaml
├── requirements.lock
├── requirements.yaml
├── subcharts
│   ├── nginx
│   │   ├── Chart.yaml
│   │   └── templates
│   │       ├── deployment.yaml
│   │       └── service.yaml
│   └── redis
│       ├── Chart.yaml
│       ├── templates
│       │   ├── deployment.yaml
│       │   └── service.yaml
│       └── values.yaml
└── values.yaml
In my root-level values.yaml, I define the per-chart values under the corresponding YAML node, i.e. the file looks something like this:
redis:
  namespace: default
  replicas: 1
  image: redis
  tag: 5.0-alpine
  port: 6379
  imagePullPolicy: IfNotPresent
  serviceType: ClusterIP
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 1500Mi
nginx:
  namespace: default
  istio:
    enabled: false
  replicas: 1
  image: redash/nginx
  tag: latest
  port: 80
  imagePullPolicy: IfNotPresent
  serviceType: LoadBalancer
And these values (the ones under the <subchartname> hierarchy) are accessed as follows in a subchart template:
spec:
  replicas: {{ default 1 .Values.replicas }}
i.e. there is no reference to the subchart name, given that this becomes the root context for the template.
Is there a way for my subchart templates to access the parent (root) context values?
I want to do this so I can share values across subcharts in a DRY way.
i.e. more or less my question becomes: how should I access, from my subchart templates, root-level values such as the following?
myvariable1: value1
myvariable2: value2
I think you need to use global chart values, which can be shared across subcharts. See the documentation: https://helm.sh/docs/topics/chart_template_guide/subcharts_and_globals/#global-chart-values
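As a rough sketch of how that looks (the value names here are illustrative, not taken from the question): the parent chart's values.yaml declares the shared values under the global: key, and every subchart template can then read them through .Values.global, alongside its own chart-scoped values:
# parent values.yaml
global:
  namespace: default
  imagePullPolicy: IfNotPresent
redis:
  replicas: 1

# any subchart template, e.g. subcharts/redis/templates/deployment.yaml
metadata:
  namespace: {{ .Values.global.namespace }}
spec:
  replicas: {{ default 1 .Values.replicas }}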