I'm trying to inject env vars into my Helm chart deployment file. My values file looks like this.
values.yaml
envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret
I want to iterate through the Secret and ConfigMap values. This is what I did in the deployment.yaml file:
envFrom:
{{- range $item := .Values.envFrom }}
{{- $item | toYaml | nindent 14 }}
{{- end }}
But I didn't get the desired result.
When you range over the list, each $item is a map, so toYaml renders it without the leading dash and the output is no longer a YAML list. You don't need the loop; you can use the defined value directly (adjust the nindent value to match where envFrom sits in your manifest):
...
envFrom:
{{- toYaml .Values.envFrom | nindent 6 }}
...
Or, instead of using range, you can use with.
Here is an example:
values.yaml:
envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
  namespace: test
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      # {{- with .Values.envFrom }} can go here instead if you don't
      # want this container to have an envFrom key at all when envFrom
      # is not defined in values.yaml.
      # If you do that, remove the one below.
      envFrom:
      {{- with .Values.envFrom }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
  restartPolicy: Never
The output is:
$ helm template test .
---
# Source: test/templates/test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
  namespace: test
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: my-config
        - secretRef:
            name: my-secret
  restartPolicy: Never
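As the comment in the template notes, the with block can instead wrap the envFrom: key itself, so the key is omitted entirely when .Values.envFrom is not set. A minimal sketch of that variant for the same container:

      {{- with .Values.envFrom }}
      envFrom:
        {{- toYaml . | nindent 8 }}
      {{- end }}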
I have a Helm template for a CronJob in a chart:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ include "gdp-chart.fullname" . }}-importjob
  labels:
    {{- include "gdp-chart.labels" . | nindent 4 }}
spec:
  suspend: {{ .Values.import.suspend }}
  schedule: {{ .Values.import.schedule }}
  jobTemplate:
    metadata:
      name: import-job
    spec:
      template:
        spec:
          containers:
          - image: curlimages/curl
            name: import-job
            args:
            - "$(GDP_INT_IMPORT_URL)"
            {{- with .Values.import.env }}
            env:
              {{- toYaml . | nindent 12 }}
            {{- end }}
          restartPolicy: Never
I want to change spec.suspend from the command line. If I set it from true to false, it works; suspend is set to false:
helm upgrade --reuse-values --set import.suspend=false gdp gdp-api
but if I try to set it from false to true, the value of suspend does not change to true:
helm upgrade --reuse-values --set import.suspend='true' gdp gdp-api
NAME                SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
gdp-api-importjob   0 0 31 2 *   False     0        7h32m           3d7h
Why is this the case?
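For what it's worth, the stored release state can be inspected to see what the upgrade actually computed; the release is named gdp per the commands above:

$ helm get values gdp                  # user-supplied values for the current revision
$ helm get manifest gdp | grep suspend # what was actually rendered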
Assuming I have this values.yaml in my Helm chart:
tasks:
  - name: test-production-dev
    env:
      - production
      - dev
  - name: test-dev
    env:
      - dev
  - name: test-all
environment_variables:
  STAGE: dev
I would like to run my CronJobs based on these values:
if .env doesn't exist, run any time.
if .env exists, run only if environment_variables.STAGE is in the .env list.
This is what I've done so far (with no luck):
{{- range $.Values.tasks}}
# check if $value.env not exists OR contains stage
{{if or .env (hasKey .env "$.Values.environment_variables.STAGE") }}
apiVersion: batch/v1
kind: CronJob
...
{{- end}}
---
{{- end}}
A few things go wrong in that attempt: hasKey operates on dicts, not lists; quoting "$.Values.environment_variables.STAGE" passes a literal string rather than the value; and the first operand should be (not .env). For list membership there is the has function, used in plan b below.
values.yaml
tasks:
  - name: test-production-dev
    env:
      - production
      - dev
  - name: test-dev
    env:
      - dev
  - name: test-all
  - name: test-production
    env:
      - production
environment_variables:
  STAGE: dev
templates/xxx.yaml
plan a (loop over .env and set a flag)
...
{{- range $.Values.tasks }}
{{- $flag := false }}
{{- if .env }}
{{- range .env }}
{{- if eq . $.Values.environment_variables.STAGE }}
{{- $flag = true }}
{{- end }}
{{- end }}
{{- else }}
{{- $flag = true }}
{{- end }}
{{- if $flag }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
{{- end }}
{{- end }}
...
plan b (use has to test list membership)
...
{{- range $.Values.tasks }}
{{- if or (not .env) (has $.Values.environment_variables.STAGE .env) }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
{{- end }}
{{- end }}
...
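Plan b leans on Sprig's has function, which takes the element first and the list second and returns true when the list contains the element. A tiny illustration (not part of the chart):

{{ has "dev" (list "production" "dev") }}

This renders true; with STAGE: dev it is why test-production is the only task filtered out.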
output
...
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-production-dev
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-dev
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-all
...
I want to loop through a values file to create a namespace and a NetworkPolicy in/for that namespace, except for default: for default I only want to create the policy, not the namespace, since it is there by default.
values file:
namespaces:
  - name: default
  - name: test1
  - name: test2
template file:
# Loop through the namespace names and create the namespaces
{{- range $namespaces := .Values.namespaces }}
{{- if ne "default" }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespaces.name }}
---
{{- end }}
{{- end }}
# Loop through the namespace names and create a network policy for each namespace
{{- range $namespaces := .Values.namespaces }}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ $namespaces.name }}-networkpolicy
  namespace: {{ $namespaces.name }}
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: {{ $namespaces.name }}
---
{{- end }}
The error I get is:
Error: UPGRADE FAILED: template: namespaces/templates/namespaces.yaml:3:7: executing "namespaces/templates/namespaces.yaml" at <ne>: wrong number of args for ne: want 2 got 1
It's probably something simple, but I'm not seeing it. I hope someone can help.
ne needs two arguments, and {{- if ne "default" }} passes only one (the error says exactly that: want 2 got 1). Comparing the namespace name against "default" explicitly worked for me:
# Loop through the namespace names and create the namespaces
{{- range $namespaces := .Values.namespaces }}
{{- if ne $namespaces.name "default" }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespaces.name }}
---
{{- end }}
{{- end }}
# Loop through the namespace names and create a network policy for each namespace
{{- range $namespaces := .Values.namespaces }}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ $namespaces.name }}-networkpolicy
  namespace: {{ $namespaces.name }}
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: {{ $namespaces.name }}
---
{{- end }}
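With the values above, the fixed template should render Namespace objects for test1 and test2 only, plus a NetworkPolicy for all three namespaces, default included. A sketch of the expected (abbreviated) output:

$ helm template .
apiVersion: v1
kind: Namespace
metadata:
  name: test1
---
apiVersion: v1
kind: Namespace
metadata:
  name: test2
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-networkpolicy
  namespace: default
...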
The task is to range over the workers collection and, if the current worker has autoscaling.enabled=true, create an HPA for it.
I've tried comparing .autoscaling.enabled to "true", but it returned "error calling eq: incompatible types for comparison". People say this actually means that .autoscaling.enabled is nil, so {{ if .autoscaling.enabled }} somehow doesn't see the variable and assumes it doesn't exist.
Values:
...
workers:
  - name: worker1
    command: somecommand1
    memoryRequest: 500Mi
    memoryLimit: 1400Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: worker2
    command: somecommand2
    memoryRequest: 512Mi
    memoryLimit: 1300Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: workerWithAutoscaling
    command: somecommand3
    memoryRequest: 600Mi
    memoryLimit: 2048Mi
    cpuRequest: 150m
    cpuLimit: 400m
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilization: 50
      targetMemoryUtilization: 50
...
template:
...
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    ...
  name: "hpa-{{ .name }}-{{ $.Realeas.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
    {{- with .targetCPUUtilization }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ . }}
    {{- end }}
    {{- with .targetMemoryUtilization }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ . }}
    {{- end }}
---
{{- end }}
{{- end }}
I expect the manifest for one HPA that targets workerWithAutoscaling, but the actual output is completely empty.
Your use of {{- range .Values.workers }} and {{- if .autoscaling.enabled }} is fine. You are not getting any values because .minReplicas, .maxReplicas, etc. are inside the .autoscaling scope. (Also note the typo $.Realeas.Name in your metadata; it should be $.Release.Name, as below.)
See Modifying scope using with in the Helm flow-control docs.
Adding {{- with .autoscaling }} solves the issue:
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  {{- with .autoscaling }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .targetCPUUtilization }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .targetMemoryUtilization }}
  {{- end }}
{{- end }}
{{- end }}
$ helm template .
---
# Source: templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-workerWithAutoscaling-release-name"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workerWithAutoscaling
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 50
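One caveat worth flagging: autoscaling/v2beta1 was removed in Kubernetes 1.25, where the utilization target moved under a target field. A sketch of the equivalent metrics block against autoscaling/v2, assuming the same values layout:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
...
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .targetCPUUtilization }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .targetMemoryUtilization }}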
I have an application that requires a configurable number of master nodes and replicas. Is there any way to dynamically generate n StatefulSets, where n is the number of master nodes? The number of master nodes is currently set in values.yaml.
Yes, it is possible with the until function, which produces the list of integers [0, n) that you can range over.
values.yaml:
masterCount: 5
templates/statefulset.yaml:
{{ range $k, $v := until (.Values.masterCount | int) }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-{{ $v }}
spec:
  serviceName: "nginx-{{ $v }}"
  replicas: 3
  selector:
    matchLabels:
      app: nginx-{{ $v }}
  template:
    metadata:
      labels:
        app: nginx-{{ $v }}
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
{{ end }}
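To confirm the count is picked up, render the chart locally with an override; a quick check, assuming the chart is in the current directory:

$ helm template . --set masterCount=2 | grep 'name: nginx-'
  name: nginx-0
  name: nginx-1

until 2 yields the list [0 1], so two StatefulSets are generated.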