Ansible Kubernetes Job: inject a random string into args

I am thinking about creating a Kubernetes Job in Ansible with a random string (password) generated on the fly and injected into the args/command line. However, I am not sure whether what I am trying to achieve will work, as the Jinja template below already imports data from the values YAML file.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ deployment.namespace }} <- taken from the values YAML
  name: create-secret
  labels:
    app: test
    app.kubernetes.io/name: create-secret
    app.kubernetes.io/component: test
    app.kubernetes.io/part-of: test
    app.kubernetes.io/managed-by: test
  annotations:
spec:
  backoffLimit: 0
  template:
    metadata:
      namespace: {{ deployment.namespace }} <- taken from the values YAML
      name: create-secret
      labels:
        app: test
        app.kubernetes.io/name: create-secret
        app.kubernetes.io/component: test
        app.kubernetes.io/part-of: test
        app.kubernetes.io/managed-by: test
    spec:
      restartPolicy: Never
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret {{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) }} --name 'test'"]
          image: {{ registry.host }}/{{ images.docker.image.name }}:{{ images.docker.image.tag }} <- taken from the values YAML

It'll work fine, but (as you pointed out) because golang/helm uses the same template delimiters as Jinja2 ({{), you'll need to take one of two approaches: either wrap every golang set of mustaches in {{ "{{" }} so that Jinja2 emits the literal text {{ in the resulting file, or change the Jinja2 template delimiters to something other than {{.
example 1
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ "{{" }} deployment.namespace {{ "}}" }}
  name: create-secret
  ...
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret {{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) }} --name 'test'"]
          image: {{ "{{" }} registry.host {{ "}}" }}/{{ "{{" }} images.docker.image.name {{ "}}:{{" }} images.docker.image.tag {{ "}}" }} <- taken from the values YAML
Although you'll also likely want to use | quote for that random_string, since in my local example it produced a password of 2-b19e2k#HUF=k` and that ` will be interpreted by the sh -c, leading to an error.
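Applied to example 1, only the args line would change, along these lines:
          args: ["-c", "somecommand create --secret {{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) | quote }} --name 'test'"]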
example 2
# my-job.yml.j2
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ deployment.namespace }} <- taken from the values YAML
  name: create-secret
  ...
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret [% lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) %] --name 'test'"]
          image: {{ registry.host }}/{{ images.docker.image.name }}:{{ images.docker.image.tag }} <- taken from the values YAML

- template:
    src: my-job.yml.j2
    dest: my-job.yml
    variable_start_string: '[%'
    variable_end_string: '%]'
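After that template task runs, the rendered my-job.yml should contain the expanded password (the value below reuses the sample from earlier in this answer) while the {{ }} expressions pass through untouched for the later golang/helm pass, roughly:
# my-job.yml (rendered by Ansible)
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ deployment.namespace }}
  name: create-secret
  ...
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret 2-b19e2k#HUF=k` --name 'test'"]
          image: {{ registry.host }}/{{ images.docker.image.name }}:{{ images.docker.image.tag }}
which also shows why the | quote suggestion above matters: without it, the trailing ` lands in the shell command unescaped.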

Related

env variable error converting YAML to JSON: yaml: did not find expected key

I have a deployment file which takes the environment variables from the values.yaml file.
Also I want to add one more variable named "PURPOSE".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          {{- toYaml .Values.envVariables | nindent 10 }}
          - name: PURPOSE
            value: "SCHEDULER"
The error I get is the following:
error converting YAML to JSON: yaml: line 140: did not find expected key
The env variables from the values file work fine; the problem seems to be the variable "PURPOSE".
The problem was the formatting of the environment block. I used the solution below to fix the error:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          - name: PURPOSE
            value: "SCHEDULER"
          {{- toYaml .Values.envVariables | nindent 10 }}
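For illustration, with a hypothetical .Values.envVariables list like the one below (the entries are made up, not from the question), toYaml ... | nindent 10 renders each item at the same indentation as the hand-written PURPOSE entry, so the env list stays valid YAML:
# values.yaml (illustrative)
envVariables:
  - name: LOG_LEVEL
    value: info
  - name: TZ
    value: UTC

# rendered env block
        env:
          - name: PURPOSE
            value: "SCHEDULER"
          - name: LOG_LEVEL
            value: info
          - name: TZ
            value: UTC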

Unable to deploy Kubernetes secrets using Helm

I'm trying to create my first Helm release on an AKS cluster using a GitLab pipeline,
but when I run the following command
- helm upgrade server ./aks/server
    --install
    --namespace demo
    --kubeconfig ${CI_PROJECT_DIR}/.kube/config
    --set image.name=${CI_PROJECT_NAME}/${CI_PROJECT_NAME}-server
    --set image.tag=${CI_COMMIT_SHA}
    --set database.user=${POSTGRES_USER}
    --set database.password=${POSTGRES_PASSWORD}
I receive the following error:
"Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data:
decode base64: illegal base64 data at input byte 8, error found in #10 byte of ..."
It looks like something is not working with the secrets file, but I don't understand what.
The secret.yaml template file is defined as follows:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
I will also add the deployment and the service .yaml files.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      stack: node
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        tier: backend
        stack: node
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          imagePullPolicy: IfNotPresent
          env:
            - name: User
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: User
                  optional: false
            - name: Host
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Host
                  optional: false
            - name: Database
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Database
                  optional: false
            - name: Password
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Password
                  optional: false
            - name: Ports
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Ports
                  optional: false
          resources:
            limits:
              cpu: "1"
              memory: "128M"
          ports:
            - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  type: ClusterIP
  selector:
    tier: backend
    stack: node
    app: {{ .Values.app.name }}
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Any hint?
You have to encode the secret values to base64.
Check the encoding-functions doc.
Try the code below:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user | b64enc }}
  Host: {{ .Values.database.host | b64enc }}
  Database: {{ .Values.database.name | b64enc }}
  Password: {{ .Values.database.password | b64enc }}
  Port: {{ .Values.database.port | b64enc }}
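One caveat (an assumption about this chart, not something stated in the question): b64enc operates on strings, so if a value such as database.port is defined as a number in values.yaml, it may need to be stringified first, for example:
  Port: {{ .Values.database.port | toString | b64enc }}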
Otherwise, use stringData instead of data.
stringData lets you create the secret without base64-encoding the values.
Check the example in the link:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
stringData:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
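To sanity-check the rendered Secret locally before the pipeline runs it, something like this should work (assuming the template lives at templates/secret.yaml inside the chart, and passing whichever --set values it references):
helm template server ./aks/server \
  --namespace demo \
  --set database.user=myuser \
  --set database.password=mypassword \
  --show-only templates/secret.yaml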

Helm3: Create .properties files recursively in Configmap

Below are files that I have:
users-values.yaml file :
users:
  - foo
  - baz
other-values.yaml file:
foo_engine=postgres
foo_url=some_url
foo_username=foofoo
baz_engine=postgres
baz_url=some_url
baz_username=bazbaz
config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-catalog
data:
  {{- range $user := .Values.users }}
  {{ . }}: |
    engine.name={{ printf ".Values.%s_engine" ($user) }}
    url={{ printf ".Values.%s_url" ($user) }}
    username={{ printf ".Values.%s_username" ($user) }}
  {{- end }}
deployment-coordinator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-coordinator"
  labels:
    app.kubernetes.io/name: "{{ .Release.Name }}-coordinator"
spec:
  replicas: 1
  ...
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "{{ .Release.Name }}-coordinator"
    spec:
      volumes:
        - name: config
          configMap:
            name: test-catalog
      ...
      volumeMounts:
        - name: config
          mountPath: "/etc/config"
Then, I do a helm install test mychart.
When I exec into the pod, and cd to /etc/config, I expect to see foo.properties and baz.properties files in there, and each file looks like:
foo.properties: |
  engine.name=postgres
  url=some_url
  username=foofoo
baz.properties: |
  engine.name=postgres
  url=some_url
  username=bazbaz
The answer from Pawel below solved the error I got previously (unexpected bad character U+0022 '"' in command), but the files are still not created in the /etc/config directory.
So I was wondering if it's even possible to create the .properties files using a Helm range as shown in my config.yaml file above.
The reason I want to do it the way shown below is that I have more than 10 users to create .properties files for, not just foo and baz, so I thought it would be easier if I could loop over them.
data:
  {{- range $user := .Values.users }}
  {{ . }}: |
    engine.name={{ printf ".Values.%s_engine" ($user) }}
    url={{ printf ".Values.%s_url" ($user) }}
    username={{ printf ".Values.%s_username" ($user) }}
  {{- end }}
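For reference, printf here only builds the literal text ".Values.foo_engine"; it never looks the value up. A minimal sketch of the same loop using index against flat top-level values (assuming keys like foo_engine/foo_url/foo_username are actually defined in the chart's values, and adding the .properties suffix to match the expected file names) would be:
data:
  {{- range $user := .Values.users }}
  {{ $user }}.properties: |
    engine.name={{ index $.Values (printf "%s_engine" $user) }}
    url={{ index $.Values (printf "%s_url" $user) }}
    username={{ index $.Values (printf "%s_username" $user) }}
  {{- end }}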

Helm - How to write a file in a Volume using ConfigMap?

I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
  nodetype: free
configHocon: |-
  streams {
    monitoring {
      custom {
        uri = ${?URI}
        method = ${?METHOD}
      }
    }
  }
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-streams-configmap
data:
  config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
      - env:
        {{- range $key, $value := .Values.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        image: {{ .Values.image }}
        name: {{ .Values.name }}
        volumeMounts:
        - name: config-hocon
          mountPath: /config
        ports:
        - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
      - name: config-hocon
        configmap:
          name: custom-streams-configmap
          items:
          - key: config.hocon
            path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The problem was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
      - env:
        {{- range $key, $value := .Values.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        image: {{ .Values.image }}
        name: {{ .Values.name }}
        volumeMounts:
        - name: config-hocon
          mountPath: /config
        ports:
        - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
      - name: config-hocon
        configMap:
          name: custom-streams-configmap
          items:
          - key: config.hocon
            path: config.hocon
status: {}
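With the corrected configMap key, the file should now be mounted; a quick way to confirm (the pod name is a placeholder for whatever the new pod is called) is:
kubectl exec -it <pod-name> -n streaming -- ls /config
# should now list config.hocon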

helm nested variable reference not happening

I am not able to reference a variable inside a nested variable in Helm. I want to retrieve the app1_image name and tag using the value of each app's label variable. How can I do that?
values.yaml:
apps:
  - name: web-server
    label: app1
    command: /root/web.sh
    port: 80
  - name: app-server
    label: app2
    command: /root/app.sh
    port: 8080
app1_image:
  name: nginx
  tag: v1.0
app2_image:
  name: tomcat
  tag: v1.0
deployment.yaml:
{{- range $apps := .Values.apps }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $apps.name }}
  labels:
    app: {{ $apps.name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app:
  template:
    metadata:
      labels:
        app: {{ $apps.name }}
    spec:
      containers:
        - name: {{ $apps.name }}
          image: {{ $.Values.$apps.label.image }}: {{ $.Values.$apps.label.tag }}
          ports:
            - containerPort: {{ $apps.port }}
{{- end }}
The core Go text/template language includes an index function that you can use as a more dynamic version of the . operator. Given the values file you show, you could do the lookup (inside the loop) with something like:
{{- $key := printf "%s_image" $apps.label }}
{{- $settings := index $.Values $key | required (printf "could not find top-level settings for %s" $key) }}
        - name: {{ $apps.name }}
          image: {{ $settings.name }}:{{ $settings.tag }}
You could probably rearrange the layout of the values.yaml file to make this clearer. You might also experiment with what you can provide via multiple helm install -f options to override settings at install time; if you can keep all of these settings in one place, it is easier to manage.
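For example, one possible rearrangement (a sketch, not taken from the original answer) is to nest the image settings under each app entry so no dynamic lookup is needed at all:
apps:
  - name: web-server
    label: app1
    command: /root/web.sh
    port: 80
    image:
      name: nginx
      tag: v1.0
  - name: app-server
    label: app2
    command: /root/app.sh
    port: 8080
    image:
      name: tomcat
      tag: v1.0
The template could then reference {{ $apps.image.name }}:{{ $apps.image.tag }} directly inside the loop.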