Conftest Exception Rule Fails with Kustomization & Helm - kubernetes

I have several k8s resources in one of my projects that are built, composed and packaged using Helm & Kustomize. I wrote a few OPA tests using Conftest, where one of the checks is to avoid running containers as root. So here is my deployment.yaml in my base folder:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.app.namespace }}
  labels:
    name: {{ .Values.app.name }}
    #app: {{ .Values.app.name }}
    component: {{ .Values.plantSimulatorService.component }}
    part-of: {{ .Values.app.name }}
    managed-by: helm
    instance: {{ .Values.app.name }}
    version: {{ .Values.app.version }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        runAsNonRoot: true
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.plantSimulatorService.image.repository }}:{{ .Values.plantSimulatorService.image.tag }}
          ports:
            - containerPort: {{ .Values.plantSimulatorService.ports.containerPort }} # Get this value from ConfigMap
I then have a patch file (flux-patch-prod.yaml) in my overlays folder that looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    fluxPatchFile: prodPatchFile
  annotations:
    flux.weave.works/locked: "true"
    flux.weave.works/locked_msg: Lock deployment in production
    flux.weave.works/locked_user: Joesan <github.com/joesan>
  name: plant-simulator-prod
  namespace: {{ .Values.app.namespace }}
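For context, the base and the overlay are wired together with a kustomization.yaml roughly along these lines (the exact paths and file names here are illustrative, not copied from my repo):
# overlays/prod/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - flux-patch-prod.yaml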
I have now written the Conftest policy in my base.rego file, which looks like this:
# Check container is not run as root
deny_run_as_root[msg] {
    kubernetes.is_deployment
    not input.spec.template.spec.securityContext.runAsNonRoot
    msg = sprintf("Containers must not run as root in Deployment %s", [name])
}

exception[rules] {
    kubernetes.is_deployment
    input.metadata.name == "plant-simulator-prod"
    rules := ["run_as_root"]
}
But when I run them (I have the helm-conftest plugin installed), I get the following error:
FAIL - Containers must not run as root in Deployment plant-simulator-prod
90 tests, 80 passed, 3 warnings, 7 failures
Error: plugin "conftest" exited with error
I have no idea how to get this working. I do not want to end up copying the contents of deployment.yaml back into flux-patch-prod.yaml, as that would defeat the whole purpose of using Kustomize in the first place. Any idea how to get this fixed? I have been stuck on this issue since yesterday!

I managed to get this fixed, but the error messages that were thrown up were not that helpful. Maybe it gets better in the next release versions of Conftest.
So here is what I had to do:
I went to the https://play.openpolicyagent.org/ playground to test out my files. As I copied my rego rules over there, I noticed the following error message:
policy.rego:25: rego_unsafe_var_error: var msg is unsafe
That made me suspicious, and as I paid closer attention to the line to see what the actual problem was, I had to change:
msg = sprintf("Containers must not run as root in Deployment %s", [name])
To:
msg = sprintf("Containers must not run as root in Deployment %s", [input.name])
And it worked as expected! A silly mistake, but the error messages from Conftest were not that useful for figuring it out at first glance!

Related

helm - run pods and dependencies with a predefined flow order

I am using K8S with Helm.
I need to run pods and dependencies in a predefined flow order.
How can I create Helm dependencies that run a pod only once (i.e. populate the database for the first time) and exit after the first success?
Also, if I have several pods, I want to run a pod only when certain conditions occur, and only after another pod has been created.
I need to build 2 pods, as described in the following:
I have a database.
1st step is to create the database.
2nd step is to populate the db.
Once I populate the db, this job needs to finish.
3rd step is another pod (not the db pod) that uses that database, and it is always in listen mode (it never stops).
Can I define the order in which the dependencies run (and not always in parallel)?
From what I see of the helm create command, there are templates for deployment.yaml and service.yaml; maybe pod.yaml is a better choice?
What are the best chart types for this scenario?
Also, I need to know what the chart hierarchy is.
i.e.: when having a chart of type listener, one pod for database creation, and one pod for database population (that is deleted when finished), I may have a chart tree hierarchy that explains the flow.
The main chart uses the populated data (after all the sub-charts and templates have run properly - BTW, can I have several templates for the same chart?).
What is the correct tree flow?
Thanks.
There is a fixed order in which Helm will create resources, and you cannot influence it apart from hooks.
Helm hooks can cause more problems than they solve, in my experience. This is because most often they actually rely on resources which are only available after the hooks are done, for example configmaps, secrets and service accounts / rolebindings. This leads you to move more and more things into the hook lifecycle, which isn't idiomatic IMO. It also leaves them dangling when uninstalling a release.
I tend to use jobs and init containers that block until the jobs are done.
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
---
apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: bitnami/kubectl
          args:
            - wait
            - pod/mysql
            - --for=condition=ready
            - --timeout=120s
      containers:
        - name: migration
          image: myapp
          args: [--migrate]
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: wait-for-migration
          image: bitnami/kubectl
          args:
            - wait
            - job/migration
            - --for=condition=complete
            - --timeout=120s
      containers:
        - name: myapp
          image: myapp
          args: [--server]
Moving the migration into its own Job is beneficial if you want to scale your application horizontally. Your migration needs to run only once, so it doesn't make sense to run it for each deployed replica.
Also, in case a pod crashes and restarts, the migration doesn't need to run again. So having it in a separate one-time Job makes sense.
The main chart structure would look like this.
.
├── Chart.lock
├── charts
│   └── mysql-8.8.26.tgz
├── Chart.yaml
├── templates
│   ├── deployment.yaml    # waits for db migration job
│   └── migration-job.yaml # waits for mysql statefulset master pod
└── values.yaml
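The mysql subchart shown under charts/ would be pulled in via a dependencies entry in Chart.yaml, roughly like this (the parent chart's name/version and the Bitnami repository URL are assumptions):
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: mysql
    version: 8.8.26
    repository: https://charts.bitnami.com/bitnami
Running helm dependency update then produces the Chart.lock and the packaged .tgz shown in the tree above.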
You can achieve this using Helm hooks and K8s Jobs. Below is the same setup defined for a Rails application.
The first step is to define a k8s Job to create and populate the db:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "my-chart.name" . }}-db-prepare
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ template "my-chart.name" . }}-db-prepare
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["rake", "db:extensions", "db:migrate", "db:seed"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      initContainers:
        - name: init-wait-for-dependencies
          image: wshihadeh/wait_for:v1.2
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecretName }}
      restartPolicy: Never
Note the following:
1- The Job definition has Helm hooks so that it runs on each deployment and is the first task:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded
2- The container command takes care of preparing the db:
command: ["/docker-entrypoint.sh"]
args: ["rake", "db:extensions", "db:migrate", "db:seed"]
3- The Job will not start until the db connection is up (this is achieved via initContainers):
args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
The second step is to define the application Deployment object. This can be a regular Deployment object (make sure that you don't use Helm hooks). Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "my-chart.name" . }}-web
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.webReplicaCount }}
  selector:
    matchLabels:
      app: {{ template "my-chart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
        service: web
    spec:
      imagePullSecrets:
        - name: {{ .Values.imagePullSecretName }}
      containers:
        - name: {{ template "my-chart.name" . }}-web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["web"]
          envFrom:
            - configMapRef:
                name: {{ template "my-chart.name" . }}-configmap
            - secretRef:
                name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: {{ .Values.restartPolicy }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
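For completeness, a minimal values.yaml sketch with the keys these two templates reference; every value below is a placeholder, not something from the original setup:
image:
  repository: registry.example.com/myapp
  tag: "1.0.0"
  pullPolicy: IfNotPresent
existingSecret: ""            # set this to reuse an existing Secret instead of <chart>-secrets
imagePullSecretName: regcred
webReplicaCount: 2
restartPolicy: Always
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []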
If I understand correctly, you want to build a dependency chain in your deployment strategy to ensure certain things are prepared before any of your applications starts. In your case, you want a deployed and pre-populated database before your app starts.
I propose not to build a dependency chain like this, because it makes things complicated in your deployment pipeline and prevents proper scaling of your deployment processes if you start to deploy more than a couple of apps in the future. In highly dynamic environments like Kubernetes, every deployment should be able to check the prerequisites it needs to start on its own, without depending on an order of deployments.
This can be achieved with a combination of initContainers and probes. Both can be specified per deployment to prevent it from failing if certain prerequisites are not met and/or to fulfill certain prerequisites before a service starts routing traffic to your deployment (in your case the database).
In short:
- to populate a database volume before the database starts, use an initContainer
- to let the database serve traffic after its initialization and prepopulation, define probes to check for these conditions. Your database will only start to serve traffic after its livenessProbe and readinessProbe have succeeded. If it needs extra time, protect the pod from being terminated with a startupProbe.
- to ensure the deployment of your app does not start and fail before the database is ready, use an initContainer to check if the database is ready to serve traffic before your app starts (a rough manifest sketch of this combination follows after the links below).
check out
https://12factor.net/ (general guideline for dynamic systems)
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
for more information.
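A rough manifest sketch of this probe + initContainer combination, assuming a PostgreSQL database, a Service named postgres, and placeholder images/credentials (none of these names come from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14
          env:
            - name: POSTGRES_PASSWORD
              value: example              # demo-only credential
          readinessProbe:                 # only route traffic once the db accepts connections
            exec:
              command: ["pg_isready", "-U", "postgres"]
            periodSeconds: 5
          startupProbe:                   # give a slow first initialization time before liveness kicks in
            exec:
              command: ["pg_isready", "-U", "postgres"]
            failureThreshold: 30
            periodSeconds: 10
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: wait-for-postgres        # block the app until the database answers
          image: postgres:14
          command: ["sh", "-c", "until pg_isready -h postgres -U postgres; do sleep 2; done"]
      containers:
        - name: myapp
          image: myapp:latest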

In a Helm Chart, how can I block the upgrade until a deployment in it is complete?

I am using Rancher Pipelines and catalogs to run Helm Charts like this:
.rancher-pipeline.yml
stages:
  - name: Deploy app-web
    steps:
      - applyAppConfig:
          catalogTemplate: cattle-global-data:chart-web-server
          version: 0.4.0
          name: ${CICD_GIT_REPO_NAME}-${CICD_GIT_BRANCH}-serv
          targetNamespace: ${CICD_GIT_REPO_NAME}
          answers:
            pipeline.sequence: ${CICD_EXECUTION_SEQUENCE}
  ...
  - name: Another chart needs to wait until the previous one success
    ...
And in the chart-web-server app, it has a deployment:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-dpy
  labels:
    {{- include "labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
      {{- include "labels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
        {{- include "labels" . | nindent 8 }}
    spec:
      containers:
        - name: "web-server-{{ include "numericSafe" .Values.git.commitID }}"
          image: "{{ .Values.harbor.host }}/{{ .Values.web.imageTag }}"
          imagePullPolicy: Always
          env:
            ...
          ports:
            - containerPort: {{ .Values.web.port }}
              protocol: TCP
          resources:
            {{- .Values.resources | toYaml | nindent 12 }}
Now, I need the pipeline to be blocked until the deployment is upgraded since I want to do some server testing in the following stages.
My idea is to use a Helm hook: if I can create a Job hooked to post-install and post-upgrade that waits for the deployment to be completed, I can then block the whole pipeline until the deployment (a web server) is updated.
Does this idea work? If so, how can I write such a blocking and detecting Job?
This does not appear to be supported, from what I can find of their code. It would appear they just shell out to helm upgrade; you would need to use the --wait mode.
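For reference, the kind of blocking hook Job the question sketches would look roughly like this. Helm only marks a release as finished after its hook Jobs run to completion, so the rollout check below holds things up until the new pods are ready. The bitnami/kubectl image, the service account (which needs RBAC to read deployments), and the timeout are assumptions:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-wait-for-rollout
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: {{ .Release.Name }}-rollout-checker   # assumed SA with get/watch on deployments
      restartPolicy: Never
      containers:
        - name: wait
          image: bitnami/kubectl
          args:
            - rollout
            - status
            - deployment/{{ .Release.Name }}-dpy
            - --timeout=10m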

HELM cannot find Deployment.spec.template.spec.containers[0]

I am building a boilerplate HELM Chart, but HELM cannot find the container name. I have tried a hard coded name as well as various formulations of the variable. Nothing works. I am stumped. Please help!
ERROR MSG
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): missing required field "name" in io.k8s.api.core.v1.Container
deployment.yaml
apiVersion: "apps/ {{ .Release.ApiVersion }}"
kind: Deployment
metadata:
name: {{ .Release.Name }}
labels:
app: {{ .Values.deploy.image.name }}
spec:
replicas: {{ .Values.deploy.replicas }}
selector:
matchLabels:
app: {{ .Values.deploy.image.name }}
template:
metadata:
labels:
app: {{ .Values.deploy.image.name }}
spec:
containers:
- name: {{ .Values.deploy.image.name }}
image: {{ .Values.deploy.image.repository }}
imagePullPolicy: {{ .Values.deploy.image.pullPolicy }}
resources: {}
values.yaml
deploy:
  type: ClusterIP
  replicas: 5
  image:
    name: test
    repository: k8stest
    pullPolicy: IfNotPresent
service:
  name: http
  protocol: TCP
  port: 80
  targetPort: 8000
Your example works for me just fine; I copy-pasted your code and only changed apiVersion to apps/v1. Since you say you have tried to hard-code the name and it still isn't working for you, I would think the problem is somewhere in the whitespace characters.

Helm Error converting YAML to JSON: yaml: line 20: did not find expected key

I don't really know what the error is here; it is a simple Helm deploy with a _helpers.tpl. It doesn't make sense and is probably a stupid mistake. The code:
deploy.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "metadata.name" . }}-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          vars: {{- include "envs.var" .Values.secret.data }}
_helpers.tpl
{{- define "envs.var"}}
{{- range $key := . }}
- name: {{ $key | upper | quote}}
  valueFrom:
    secretKeyRef:
      key: {{ $key | lower }}
      name: {{ $key }}-auth
{{- end }}
{{- end }}
values.yaml
secret:
  data:
    username: root
    password: test
the error
Error: YAML parse error on mychart/templates/deploy.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
This problem happens because of indentation. You can resolve it by updating the line to:
env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
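In context, that corrected line sits at the container level of deploy.yaml, for example (the surrounding indentation is what matters; nindent 12 lines the generated entries up under env:):
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env: {{- include "envs.var" .Values.secret.data | nindent 12 }}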
The simplest way to resolve this kind of issue is to use tooling; these are mostly indentation issues and can be resolved very easily with the right tool. yaml-lint is one such tool:
npm install -g yaml-lint
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
× YAML Lint failed for C:/Users/mnadeem6/vsc-workspaces/grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
restartPolicy: Always
^
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
√ YAML Lint successful.

How to get a pod index inside a helm chart

I'm deploying a Kubernetes stateful set and I would like to get the pod index inside the helm chart so I can configure each pod with this pod index.
For example in the following template I'm using the variable {{ .Values.podIndex }} to retrieve the pod index in order to use it to configure my app.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: Always
          name: {{ .Values.name }}
          command: ["launch"],
          args: ["-l","{{ .Values.podIndex }}"]
          ports:
            - containerPort: 4000
      imagePullSecrets:
        - name: gitlab-registry
You can't do this in the way you're describing.
Probably the best path is to change your Deployment into a StatefulSet. Each pod launched from a StatefulSet has an identity, and each pod's hostname gets set to the name of the StatefulSet plus an index. If your launch command looks at hostname, it will see something like name-0 and know that it's the first (index 0) pod in the StatefulSet.
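A minimal sketch of that first path, reusing names from the question's template; the downward-API env var and the shell wrapper that strips the ordinal off the pod name are assumptions about how launch could consume the index:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.name }}
spec:
  serviceName: {{ .Values.name }}        # StatefulSets need a governing (headless) Service
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: Always
          env:
            - name: POD_NAME               # e.g. myapp-2; equivalent to reading the hostname
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          # strip everything up to the last "-" to get the ordinal (2), then pass it to launch
          command: ["sh", "-c", "exec launch -l ${POD_NAME##*-}"]
          ports:
            - containerPort: 4000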
A second path would be to create n single-replica Deployments using Go templating. This wouldn't be my preferred path, but you can:
{{- range $podIndex := until (int .Values.replicaCount) }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $.Values.name }}-{{ $podIndex }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $.Values.name }}-{{ $podIndex }}
  template:
    metadata:
      labels:
        app: {{ $.Values.name }}-{{ $podIndex }}
    spec:
      containers:
        - name: {{ $.Values.name }}
          command: ["launch"]
          args: ["-l", "{{ $podIndex }}"]
{{- end }}
The actual flow here is that Helm reads in all of the template files and produces a block of YAML files, then submits these to the Kubernetes API server (with no templating directives at all), and the Kubernetes machinery acts on it. You can see what's being submitted by running helm template. By the time a Deployment is creating a Pod, all of the template directives have been stripped out; you can't make fields in the pod spec dependent on things like which replica it is or which node it got scheduled on.