Range issue in Go template in Vault configuration in k8s

I don't know Golang at all but need to implement Go template syntax in my Kubernetes config (where HashiCorp Vault is configured). What I'm trying to do is modify a file in order to change its format. So the source looks like this:
data: map[key1:value1]
metadata: map[created_time:2021-10-06T21:02:18.41643371Z deletion_time: destroyed:false version:1]
The part of the Kubernetes config where the Go template is used to format the file is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      component: test
  template:
    metadata:
      labels:
        component: test
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'test'
        vault.hashicorp.com/agent-inject-secret-config: 'secret/data/test/config'
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/test/config" -}}
          {{ range $k, $v := .Data.data }}
          export {{ $k }}={{ $v | quote }}
          {{ end }}
          {{- end}}
    spec:
      serviceAccountName: test
      containers:
        - name: test
          image: ${IMAGE}
          ports:
            - containerPort: 3000
But the error I'm getting is:
runtime error encountered: error="template server: (dynamic): parse: template: :2: unexpected "," in range"
EDIT:
To deploy Vault on k8s I'm using the Vault Helm chart.

From what I can see, you have env variables in your YAML file (${REPLICAS}, ${IMAGE}), which makes me think that you are using something like cat file.yml | envsubst | kubectl apply --wait=true -f - in order to replace those env vars with the real values.
The issue with this is that $k and $v are also being replaced with '' (since you do not have those env vars in your system).
One ugly but effective solution is to export k='$k' and export v='$v' (single-quoted so the shell does not expand them), which makes envsubst substitute them with themselves and generates your YAML file correctly.
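A minimal sketch of both options, assuming the manifest is in file.yml and that 3 and myrepo/test:latest are just placeholder values for REPLICAS and IMAGE:

# Option 1: restrict envsubst to the variables you actually want replaced,
# so the Go-template variables ($k, $v) are left untouched.
export REPLICAS=3 IMAGE=myrepo/test:latest
envsubst '${REPLICAS} ${IMAGE}' < file.yml | kubectl apply --wait=true -f -

# Option 2: the workaround above - make $k and $v substitute to themselves.
export k='$k' v='$v'
envsubst < file.yml | kubectl apply --wait=true -f -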

Related

Access an input file from vault template injector

I use Vault to retrieve some secrets that I put inside a configuration file. All works fine until the configuration gets bigger and I want to split it into sub-configs in a folder. The issue is that those files can't be imported by the Go templating used to fill in the passwords.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  ...
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-init-first: "true"
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/secret-volume-path-my-config: "/my-path/etc"
        vault.hashicorp.com/agent-inject-file-my-config: "my-app.conf"
        vault.hashicorp.com/agent-inject-secret-my-config: secret/data/my-app/config
        vault.hashicorp.com/agent-inject-template-my-config: |
          {{- $file := .Files }}
          {{ .Files.Get "configurations/init.conf" }}
          {{- with secret "secret/data/my-app/config" -}}
          ...
          {{- end }}
The file configurations/init.conf, for example, doesn't seem to be visible to the Vault injector and so gets simply replaced by <no value>. Is there a way to make those files in configurations/* visible to the Vault injector, maybe by mounting them somewhere?
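One possible workaround (not from the original post; all names here are illustrative): the agent-inject templates are rendered by consul-template, not Helm, so .Files is not available there, but files that are mounted into the pod can be read by path. You could ship configurations/* in a ConfigMap and mount it next to the injected secret, for example:

# Hypothetical: package the static config files so they exist inside the pod,
# then reference the mounted path from the agent-inject template instead of .Files.Get.
kubectl create configmap my-app-init-conf --from-file=configurations/init.conf
# ...and add a volume/volumeMount for my-app-init-conf to the DaemonSet pod spec.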

Conftest Exception Rule Fails with Kustomization & Helm

In one of my projects I have several k8s resources which are built, composed and packaged using Helm & Kustomize. I wrote a few OPA tests using Conftest, where one of the checks is to avoid running containers as root. So here is my deployment.yaml in my base folder:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.app.namespace }}
  labels:
    name: {{ .Values.app.name }}
    #app: {{ .Values.app.name }}
    component: {{ .Values.plantSimulatorService.component }}
    part-of: {{ .Values.app.name }}
    managed-by: helm
    instance: {{ .Values.app.name }}
    version: {{ .Values.app.version }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        runAsNonRoot: true
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.plantSimulatorService.image.repository }}:{{ .Values.plantSimulatorService.image.tag }}
          ports:
            - containerPort: {{ .Values.plantSimulatorService.ports.containerPort }} # Get this value from ConfigMap
I then have a patch file (flux-patch-prod.yaml) in my overlays folder that looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    fluxPatchFile: prodPatchFile
  annotations:
    flux.weave.works/locked: "true"
    flux.weave.works/locked_msg: Lock deployment in production
    flux.weave.works/locked_user: Joesan <github.com/joesan>
  name: plant-simulator-prod
  namespace: {{ .Values.app.namespace }}
I have now written the Conftest rule in my base.rego file, which looks like this:
# Check container is not run as root
deny_run_as_root[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}

exception[rules] {
  kubernetes.is_deployment
  input.metadata.name == "plant-simulator-prod"
  rules := ["run_as_root"]
}
But when I ran them (I have the helm-conftest plugin installed), I got the following error:
FAIL - Containers must not run as root in Deployment plant-simulator-prod
90 tests, 80 passed, 3 warnings, 7 failures
Error: plugin "conftest" exited with error
I have no idea how to get this working. I do not want to end up copying the contents from deployment.yaml back into flux-patch-prod.yaml, as it would defeat the whole purpose of using Kustomize in the first place. Any idea how to get this fixed? I have been stuck on this issue since yesterday!
I managed to get this fixed, but the error messages that were thrown were not very helpful. Maybe it gets better with future releases of Conftest.
So here is what I had to do:
I went to the https://play.openpolicyagent.org/ playground to test out my files. As I copied my rego rules over there, I noticed the following error message:
policy.rego:25: rego_unsafe_var_error: var msg is unsafe
I got suspicious, and as I paid closer attention to the line to see what the actual problem was, I had to change from:
msg = sprintf("Containers must not run as root in Deployment %s", [name])
To:
msg = sprintf("Containers must not run as root in Deployment %s", [input.name])
And it worked as expected! Silly mistake, but the error messages from Conftest were not helpful enough to figure this out at first glance!
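As a side note, a quick way to iterate on this kind of policy failure locally (chart and policy paths are illustrative) is to render the chart and pipe the output straight into Conftest:

# Evaluate the rendered manifests against the local policy folder.
helm template ./my-chart | conftest test --policy ./policy -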

Can configuration for a program running in a container/pod be placed in a Deployment yaml instead of a ConfigMap yaml?

Can a configuration for a program running in a container/pod be placed in a Deployment yaml instead of a ConfigMap yaml, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  spec:
    containers:
      - env:
          - name: "MyConfigKey"
            value: "MyConfigValue"
Single environment
Putting values in environment variables in the Deployment works.
Problem: you should not do your development work on the production environment, so you will need at least one other environment.
Using docker, containers and Kubernetes makes it very easy to create more than one environment.
Multiple environments
When you want to use more than one environment, you want to keep the differences between them as small as possible. This is important for detecting problems quickly and for limiting the management effort needed.
Problem: maintaining the differences between environments while also avoiding unique problems (config drift / snowflake servers).
Therefore, keep as much as possible common across the environments, e.g. use the same Deployment.
Only use unique instances of ConfigMap, Secret and probably Ingress for each app and environment.
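As an illustration (the names and values here are hypothetical, not from the answer), the per-environment part can be as small as one ConfigMap created with different values per namespace, while the Deployment stays identical:

# Same Deployment everywhere; only the ConfigMap content differs per environment.
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug -n staging
kubectl create configmap app-config --from-literal=LOG_LEVEL=warn -n production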
This is my approach when you want to set env vars directly from the Deployment:
If you are using Helm:
Helm values.yaml file:
deployment:
  env:
    enabled: false
    vars:
      KEY1: VALUE1
      KEY2: VALUE2
Deployment template (templates/deployment.yaml):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: ...
          {{- if .Values.deployment.env.enabled }}
          env:
            {{- range $key, $val := .Values.deployment.env.vars }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end}}
          {{- end }}
          ...
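With that template, the variables can be switched on per install without touching the chart itself; for example (release and chart names are placeholders):

helm upgrade --install my-release ./my-chart \
  --set deployment.env.enabled=true \
  --set deployment.env.vars.KEY1=VALUE1 \
  --set deployment.env.vars.KEY2=VALUE2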
And if you just want to apply it directly with kubectl from a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: ...
          env:
            - name: key1
              value: value1
            - name: key2
              value: value2
          ...
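A quick way to confirm the variables actually ended up in the container (the deployment name is a placeholder):

kubectl apply -f deployment.yaml
kubectl exec deploy/<deployment-name> -- env | grep -i key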

Helm Error converting YAML to JSON: yaml: line 20: did not find expected key

I don't really know what the error is here; it's a simple Helm deploy with a _helpers.tpl. It doesn't make sense and is probably a stupid mistake. The code:
deploy.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
{{ include "metadata.name" . }}-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          vars: {{- include "envs.var" .Values.secret.data }}
_helpers.tpl
{{- define "envs.var"}}
{{- range $key := . }}
- name: {{ $key | upper | quote}}
  valueFrom:
    secretKeyRef:
      key: {{ $key | lower }}
      name: {{ $key }}-auth
{{- end }}
{{- end }}
values.yaml
secret:
  data:
    username: root
    password: test
the error
Error: YAML parse error on mychart/templates/deploy.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
This problem happens because of indentation. You can resolve it by updating the line to:
env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
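When hunting down this kind of error it can also help to look at the fully rendered output; as far as I know, helm template with --debug still prints the generated YAML even when it fails to parse, which makes the bad indentation easy to spot:

# Render the chart locally and inspect the generated manifest around the reported line.
helm template ./mychart --debug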
The simplest way to resolve this kind of issue is to use tools. These are mostly indentation issues, and they can be resolved very easily using the right tool; yaml-lint is one such tool:
npm install -g yaml-lint
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
× YAML Lint failed for C:/Users/mnadeem6/vsc-workspaces/grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
restartPolicy: Always
^
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
√ YAML Lint successful.

Deploying a kubernetes job via helm

I am new to helm and I have tried to deploy a few tutorial charts. Had a couple of queries:
I have a Kubernetes job which I need to deploy. Is it possible to deploy a job via helm?
Also, currently my Kubernetes job is deployed from my custom Docker image and it runs a bash script to complete the job. I want to pass a few parameters to this chart/job so that the bash commands take the input parameters. That's the reason I decided to move to Helm, since it provides more flexibility. Is that possible?
You can use Helm. Helm installs all the Kubernetes resources defined inside the templates folder, such as Jobs, Pods, ConfigMaps and Secrets. You can control the order of installation with Helm hooks. Helm offers hooks like pre-install, post-install and pre-delete with respect to the deployment. If two or more jobs use the same hook (e.g. pre-install), their hook weights are compared to decide the installation order.
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
Many times you need to change the variables in the script per environment. So instead of hardcoding variables in the script, you can pass parameters to the script by setting them as environment variables in your custom Docker image. Change the values in values.yaml instead of changing them in your script.
values.yaml
key1:
  someKey1: value1
key2:
  someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "custom-docker-image:v1"
          command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }} ]
          env:
            # setting KEY1 as an environment variable in the container;
            # its value in the container is value1 (read from values.yaml)
            - name: KEY1
              value: {{ .Values.key1.someKey1 }}
            - name: KEY2
              value: {{ .Values.key2.someKey2 }}
runjob.sh
# you can access the variables from the environment
echo $KEY1
echo $KEY2
# some stuff
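To tie it together, a sketch of installing the chart and checking what the hook job printed (the release name is a placeholder; the job name comes from the example above):

# Override the script parameters at install time instead of editing values.yaml.
helm install my-release ./mychart --set key1.someKey1=value1 --set key2.someKey2=value2
# Inspect the output of the hook job.
kubectl logs job/post-install-job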
You can use Helm Hooks to run jobs. Depending on how you set up your annotations you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the doc is as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "alpine:3.3"
          command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
You can pass your parameters as secrets or configMaps to your job as you would to a pod.
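For instance (the names and values are illustrative), the parameters could be created out of band and then consumed by the job's pod spec via env, envFrom or volume mounts, exactly as for any other pod:

kubectl create secret generic job-params --from-literal=DB_PASSWORD=changeme
kubectl create configmap job-config --from-literal=TARGET_ENV=staging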
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
  name: myJob
spec:
  template:
    spec:
      containers:
        - name: myJob
          image: myImage
          args: {{ .Values.args }}
Command (powershell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -
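For reference, an equivalent invocation from a POSIX shell (same chart, only the quoting changes) should look something like this:

# Quote the braces for the shell; Helm treats {arg1,arg2,arg3} as a list value for args.
helm template helm-chart --set 'args={arg1,arg2,arg3}' | kubectl apply -f -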