Is it possible in a Helm values file to refer to another value in the same file? I'm rendering most env vars in a for loop and they are fine, but this one depends on some other value and I need to duplicate it or reference it somehow inside values:

image:
  repository: nginx
  tag: stable
someCustomVal:
  - name: x
    value: xx
  - name: y
    value: yy
  - name: z
    value: {{ .Values.image.tag }}

By the way, the config above doesn't work, but I'm looking for an equivalent. I could just set the z value outside of the for loop in the deployment, but that wouldn't look nice, so I'm looking for an alternative way to reference it.
This isn't Kubernetes-specific; you can do this with YAML anchors:

$ cat example.yaml
image:
  repository: nginx
  tag: &imagetag stable
someCustomVal:
  - name: x
    value: xx
  - name: y
    value: yy
  - name: z
    value: *imagetag
$ ruby -ryaml -rpp -e 'pp YAML.load_file("example.yaml")'
{"image"=>{"repository"=>"nginx", "tag"=>"stable"},
 "someCustomVal"=>
  [{"name"=>"x", "value"=>"xx"},
   {"name"=>"y", "value"=>"yy"},
   {"name"=>"z", "value"=>"stable"}]}
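One caveat: anchors are resolved when the YAML file is parsed, so an override such as --set image.tag=1.25 changes image.tag but not the already-expanded z value. If the reference needs to survive overrides, an alternative is to store a template string in values and render it with Helm's tpl function. A sketch (the env loop below is assumed to live in your deployment template):

```yaml
# values.yaml: keep the reference as a template string
someCustomVal:
  - name: z
    value: "{{ .Values.image.tag }}"

# templates/deployment.yaml: render each value through tpl
env:
  {{- range .Values.someCustomVal }}
  - name: {{ .name }}
    value: {{ tpl (.value | toString) $ | quote }}
  {{- end }}
```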
I'm trying to inject env vars in my Helm chart deployment file. My values file looks like this.

values.yaml:

envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret

I want to iterate through the secret and configmap values. This is what I did in the deployment.yaml file:

envFrom:
{{- range $item := .Values.envFrom }}
{{- $item | toYaml | nindent 14 }}
{{- end }}

But I didn't get the desired result.
You can directly use the defined value like:

...
    envFrom:
      {{- toYaml .Values.envFrom | nindent 6 }}
...

Or, instead of using range, you can use with.
Here is an example:

values.yaml:

envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret

pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
  namespace: test
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      # {{- with .Values.envFrom }} can be here if you don't
      # want to define envFrom in this container when envFrom
      # is not defined in values.yaml.
      # If you want to do that, remove the one below.
      envFrom:
      {{- with .Values.envFrom }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
  restartPolicy: Never

The output is:

$ helm template test .
---
# Source: test/templates/test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
  namespace: test
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: my-config
        - secretRef:
            name: my-secret
  restartPolicy: Never
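As to why the asker's range version misbehaved: toYaml on each list item emits the mapping without a leading "- ", so the rendered items stop being a YAML sequence (and nindent 14 did not match the surrounding indentation either). If a per-item loop is really needed, e.g. to filter items, a sketch that restores the dash:

```yaml
envFrom:
  {{- range .Values.envFrom }}
  - {{ toYaml . | indent 4 | trim }}
  {{- end }}
```

Here indent 4 pushes the nested keys to the right of the dash, and trim strips the padding from the first line so it sits directly after "- ".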
This Kubernetes Job object has an env var whose value is a string wrapped in one pair of single quotes: '"0.0.0.0/0"'.
clientSet, err := initClientSet()
if err != nil {
    klog.ErrorS(err, "failed to init clientSet")
    return err
}
ctx := context.Background()
job := &batchv1.Job{
    TypeMeta: metav1.TypeMeta{
        Kind:       "Job",
        APIVersion: "batch/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: "tf-poc",
    },
    Spec: batchv1.JobSpec{
        Template: v1.PodTemplateSpec{
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:            "tf-poc",
                    Image:           "nginx:1.9.4",
                    ImagePullPolicy: v1.PullIfNotPresent,
                    Env:             []v1.EnvVar{{Name: "TF_VAR_security_ips", Value: "'\"0.0.0.0/0\"'"}},
                }},
                RestartPolicy: v1.RestartPolicyOnFailure,
            },
        },
    },
}
j, err := clientSet.BatchV1().Jobs("default").Create(ctx, job, metav1.CreateOptions{})
After the Job was created by Kubernetes client-go (as above) or controller-runtime, each single quote appeared to have become three single quotes:
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 53c93df0-b3b5-4dbc-b1d8-2a77316176a1
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 53c93df0-b3b5-4dbc-b1d8-2a77316176a1
        job-name: tf-poc
    spec:
      containers:
        - env:
            - name: TF_VAR_security_ips
              value: '''"0.0.0.0/0"'''
Here is a Kubernetes Job manifest in a YAML file:

apiVersion: batch/v1
kind: Job
metadata:
  name: poc
spec:
  backoffLimit: 2147483647
  completions: 1
  parallelism: 1
  template:
    spec:
      containers:
        - command:
            - bash
            - -c
            - tail -f /dev/null
          env:
            - name: TF_VAR_security_ips
              value: '"0.0.0.0/0"'
          image: nginx:1.9.4
          imagePullPolicy: IfNotPresent
          name: terraform-executor
      restartPolicy: OnFailure
If I create it with kubectl apply -f, it works as expected:

spec:
  backoffLimit: 2147483647
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 1525d501-09f4-419e-8989-eb27ea4ddab5
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 1525d501-09f4-419e-8989-eb27ea4ddab5
        job-name: poc
    spec:
      containers:
        - command:
            - bash
            - -c
            - tail -f /dev/null
          env:
            - name: TF_VAR_security_ips
              value: '"0.0.0.0/0"'

How can I make client-go or controller-runtime not generate three single quotes, and just keep the original number of single quotes?
You're getting the correct value from your Go code and just seeing a YAML serialization artifact.
In YAML, strings can be wrapped in single or double quotes. Since the value string starts with a quote character, it has to be quoted so that it's possible to escape the quotes inside the string. The serializer chose single quotes; inside a single-quoted string a double single quote '' is the way to escape a single quote (and other characters can't be escaped).
#      v                v start/end of string quoting
value: '''"0.0.0.0/0"'''
#       ^^          ^^ escaped single quotes
You could equivalently do this with double quotes, and then it would look exactly like your Go code:

#      v                v start/end of string quoting
value: "'\"0.0.0.0/0\"'"
#        ^^          ^^ escaped double quotes

Your latter examples are not producing the same string: they are a YAML single-quoted string that contains double quotes but no single quotes. (Try kubectl exec job/poc -- sh -c 'echo $TF_VAR_security_ips' and see what comes back.)
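Since earlier examples here use Ruby's stdlib YAML, a quick sketch to convince yourself this is purely a serialization artifact: dump the exact string the Go code sets and load it back. (Which quoting style the emitter picks is an implementation detail; the round trip is what matters.)

```ruby
require 'yaml'

# The exact string the Go code sets: '"0.0.0.0/0"'
original = %q('"0.0.0.0/0"')

dumped = { 'value' => original }.to_yaml
puts dumped # the emitter escapes the quotes, e.g. value: '''"0.0.0.0/0"'''

# Round-tripping recovers the original string unchanged
raise 'round-trip failed' unless YAML.safe_load(dumped)['value'] == original
```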
Hope you are doing fine. I got this error:

error converting YAML to JSON: yaml: line 33: found character that cannot start any token

while trying to deploy this CronJob on my k8s cluster. Can you please check and let me know if you have any clues about the reason for this error? The file is as follows:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: resourcecleanup
spec:
  # 10:00 UTC == 1200 CET
  schedule: '0 10 * * 1-5'
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            iam.amazonaws.com/role: arn:aws:iam::%%AWS_ACCOUNT_NUMBER%%:role/k8s/pod/id_ResourceCleanup
        spec:
          containers:
            - name: resourcecleanup
              image: cloudcustodian/c7n
              args:
                - run
                - -v
                - -s
                - /tmp
                - -f
                - /tmp/.cache/cloud-custodian.cache
                - /home/custodian/delete-unused-ebs-volumes-policies.yaml
              volumeMounts:
                - name: cleanup-policies
                  mountPath: /home/custodian/delete-unused-ebs-volumes-policies.yaml
                  subPath: delete-unused-ebs-volumes-policies.yaml
              env:
                - name: AWS_DEFAULT_REGION
                  value: %%AWS_REGION%%
          volumes:
            - name: cleanup-policies
              configMap:
                name: cleanup-policies
          restartPolicy: Never
---
Change:

value: %%AWS_REGION%%

to:

value: "%%AWS_REGION%%"

Strings containing any of the following characters must be quoted:

:, {, }, [, ], ,, &, *, #, ?, |, -, <, >, =, !, %, @, `
I could not find this in the Kubernetes docs, but the Ansible YAML syntax documentation says:

In addition to ' and ", there are a number of characters that are special (or reserved) and cannot be used as the first character of an unquoted scalar: [] {} > | * & ! % # ` @ ,
The problem could also come from your indentation method: try using spaces, not tabs, for your indentation, with 2 spaces per level. Hope this helps.
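The error is reproducible with any YAML parser, since % is the YAML directive indicator and cannot start a plain (unquoted) scalar. A sketch using Ruby's stdlib YAML, as in the earlier answer:

```ruby
require 'yaml'

# Unquoted %%AWS_REGION%% is rejected by the parser
begin
  YAML.safe_load('value: %%AWS_REGION%%')
rescue Psych::SyntaxError => e
  puts e.message # mentions "found character that cannot start any token"
end

# Quoting the scalar makes it parse fine
puts YAML.safe_load('value: "%%AWS_REGION%%"')['value'] # %%AWS_REGION%%
```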
I have created common Helm charts. In the values.yml file, I have a set of env variables that need to be set as part of the deployment.yaml file.

Snippet of the values file:

env:
  name: ABC
  value: 123
  name: XYZ
  value: 567
  name: PQRS
  value: 345

In deployment.yaml, when the values are referred to, only the last name/value pair is set; the other values are overwritten. How can I read/set all the names/values in the deployment file?
I've gone through a few iterations of how to handle setting sensitive environment variables. Something like the following is the simplest solution I've come up with so far:
template:
  {{- if or $.Values.env $.Values.envSecrets }}
  env:
    {{- range $key, $value := $.Values.env }}
    - name: {{ $key }}
      value: {{ $value | quote }}
    {{- end }}
    {{- range $key, $secret := $.Values.envSecrets }}
    - name: {{ $key }}
      valueFrom:
        secretKeyRef:
          name: {{ $secret }}
          key: {{ $key | quote }}
    {{- end }}
  {{- end }}

values:

env:
  ENV_VAR: value
envSecrets:
  SECRET_VAR: k8s-secret-name

Pros:

- The syntax is pretty straightforward.
- Keys are easily mergeable. This came in useful when creating CronJobs with shared secrets; I was able to easily override "global" values using the following:

{{- range $key, $secret := merge (default dict .envSecrets) $.Values.globalEnvSecrets }}

Cons:

- This only works for secret keys that exactly match the name of the environment variable, but that seems to be the typical use case.
This is how I solved it in a common helm-chart I developed previously:

env:
  {{- if .Values.env }}
  {{- toYaml .Values.env | indent 12 }}
  {{- end }}

In the values.yaml:

env:
  - name: ENV_VAR
    value: value
  # or
  - name: ENV_VAR
    valueFrom:
      secretKeyRef:
        name: secret_name
        key: secret_key

An important thing to note here is the indentation. Incorrect indentation might still produce a valid Helm chart (YAML file), but the Kubernetes API will give an error.
It looks like you've made a typo and forgot your dashes. Without the dashes, YAML evaluates env as a single object instead of a list and overwrites values in unexpected ways. Your env should look more like this (note that env values must be strings, so quote the numbers):

env:
  - name: ABC
    value: "123"
  - name: XYZ
    value: "567"
  - name: PQRS
    value: "345"
  - name: SECRET
    valueFrom:
      secretKeyRef:
        name: name
        key: key
https://www.convertjson.com/yaml-to-json.htm can help visualize how the yaml is being interpreted and investigate syntax issues.
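The overwriting is easy to see with a YAML parser: without dashes the repeated name/value keys form one mapping where the last key wins, while with dashes you get a list. A sketch using Ruby's stdlib YAML (consistent with the Ruby one-liner used earlier in this document):

```ruby
require 'yaml'

without_dashes = <<~YAML
  env:
    name: ABC
    value: 123
    name: XYZ
    value: 567
YAML

with_dashes = <<~YAML
  env:
    - name: ABC
      value: 123
    - name: XYZ
      value: 567
YAML

# Duplicate keys collapse: only the last name/value pair survives
p YAML.safe_load(without_dashes)['env'] # only the XYZ/567 pair remains

# As a list, both entries are kept
p YAML.safe_load(with_dashes)['env'].length # 2
```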
You could let the chart user decide whether to take an environment variable from a secret, provide the value directly, or take it from the downward API, all in values.yaml:

env:
  FOO:
    value: foo
  BAR:
    valueFrom:
      secretKeyRef:
        name: bar
        key: barKey
  POD_NAME:
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

and render it in deployment.yaml:

spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: {{ .Chart.Name }}
          env:
            {{- range $name, $item := .Values.env }}
            - name: {{ $name }}
              {{- $item | toYaml | nindent 14 }}
            {{- end }}
          # ...

This is relatively simple and flexible. It has the shortcoming of not keeping the order of the environment variables, which can break dependent environment variables. I have written a somewhat longer story on how to support correct ordering as well: An Advanced API for Environment Variables in Helm Charts.
I have an application that requires a configurable number of master nodes and replicas. Is there any way to dynamically generate n StatefulSets, where n is the number of master nodes? The number of master nodes is currently set in values.yaml.
Yes, it is possible with the until function: until n returns the list of integers from 0 up to (but not including) n, so with masterCount: 5 the template below renders five StatefulSets named nginx-0 through nginx-4.

values.yaml:

masterCount: 5

templates/statefulset.yaml:

{{ range $k, $v := until ( .Values.masterCount | int) }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-{{ $v }}
spec:
  serviceName: "nginx-{{ $v }}"
  replicas: 3
  selector:
    matchLabels:
      app: nginx-{{ $v }}
  template:
    metadata:
      labels:
        app: nginx-{{ $v }}
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
{{ end }}