K8s, Helm Secrets and Volumes

I'm having a hard time putting together the K8s syntax for declaring a secret in a file and then mounting it in an accessible way. In short, I want to store Postgres credentials (client key, client cert, server CA cert) in files, in the simplest way possible. When I try to install, it can't find the volumes, even though I thought they were defined in the deployment.yaml attached below.
How do I tell Helm/K8s that yes, these secrets should be mounted as volumes, and create them if needed?
My secrets defined:
> kubectl get secrets | grep postgres
postgres-client-cert   Opaque   1   20m
postgres-client-key    Opaque   1   18m
postgres-server-ca     Opaque   1   18m
Failed attempt to deploy chart:
helm upgrade --install $APP-$TARGET_ENV ./.helm -f ./.helm/values-$TARGET_ENV.yaml -n $TARGET_ENV
W0609 18:56:15.653459 1198995 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Error: UPGRADE FAILED: cannot patch "myapp-app" with kind Deployment: Deployment.apps "myapp-app" is invalid: [spec.template.spec.containers[0].volumeMounts[1].name: Not found: "postgres-client-key-volume", spec.template.spec.containers[0].volumeMounts[2].name: Not found: "postgres-server-ca-volume"]
.helm/templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
    {{- include "app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount | default 1 }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "app.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ .Chart.Name }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          env:
            - name: TIME_UPDATED
              value: {{ now | date "2006-01-02T15:04:05" }}
            - name: SENTRY_ENV
              value: {{ .Values.deployment.SENTRY_ENV }}
            - name: PORT
              value: {{ .Values.deployment.containerPort | quote }}
            {{- toYaml .Values.deployment.env | nindent 12 }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.sha1 | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: postgres-client-cert-volume
              mountPath: "/var/run/secrets/postgres-client-cert"
              readOnly: true
            - name: postgres-client-key-volume
              mountPath: "/var/run/secrets/postgres-client-key"
              readOnly: true
            - name: postgres-server-ca-volume
              mountPath: "/var/run/secrets/postgres-server-ca"
              readOnly: true
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: postgres-client-cert-volume
          secret:
            secretName: postgres-client-cert
            optional: false
        - name: postgres-client-key
          secret:
            secretName: postgres-client-key
            optional: false
        - name: postgres-server-ca
          secret:
            secretName: postgres-server-ca
            optional: false
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Thanks,
Woodsman

The Deployment's volumeMounts are looking for volumes named postgres-client-key-volume and postgres-server-ca-volume, but in the volumes section you've named them postgres-client-key and postgres-server-ca. Make the names match (rename either the volumes or the mounts) and the upgrade will go through.
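For reference, here is what the volumes section looks like with the names aligned to the volumeMounts (a sketch reusing the secret names from the question; Helm/K8s will not create these Secrets for you, it only mounts them, so they have to exist in the namespace first):

      volumes:
        - name: postgres-client-cert-volume
          secret:
            secretName: postgres-client-cert
            optional: false
        - name: postgres-client-key-volume
          secret:
            secretName: postgres-client-key
            optional: false
        - name: postgres-server-ca-volume
          secret:
            secretName: postgres-server-ca
            optional: false

If a Secret is missing, you can create it from the certificate file before the upgrade, e.g. kubectl create secret generic postgres-client-cert --from-file=client-cert.pem=./client-cert.pem (the file names here are placeholders).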

Related

3 node(s) didn't match Pod's node affinity/selector error

When I try to deploy a Docker image to an EKS cluster using Helm, I get this error:
Warning FailedScheduling 33s (x5 over 4m58s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector.
Here is the deployment.yaml file from the Helm chart I am using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    {{- toYaml .Values.iamLabels | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      app: helm
  template:
    metadata:
      labels:
        app: helm
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "helm-chart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Values.image.name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.image.containerPort }}
          envFrom:
            - secretRef:
                name: {{ .Values.image.envSecretName }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Does anyone have a solution for this? I've been stuck on it for a long time.
When I try to use this template, I get the error above: 3 node(s) didn't match Pod's node affinity/selector.
Either your nodes are not ready and carry some custom taints, or your Helm deployment has an affinity that no node matches.
Try rendering your Helm deployment with helm template and check that it doesn't contain an affinity (or nodeSelector) that your nodes can't satisfy.
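For example, you can check both possibilities like this (the release name, chart path and values file are placeholders for your own):

helm template my-release ./helm-chart -f values.yaml | grep -B 2 -A 10 -E 'affinity|nodeSelector|tolerations'

kubectl get nodes
kubectl describe nodes | grep -A 3 Taints

If the rendered manifests contain an affinity or nodeSelector that none of your nodes' labels satisfy, or the nodes carry taints your Pod doesn't tolerate, the scheduler reports exactly this FailedScheduling event.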

Build Failed: Deployment.apps "<service_name>" is invalid: spec.template.spec.containers: Required value

I got this error after I added the volumeMounts and volumes into the deployment.yaml.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if not .Values.developerMode }}
      volumes:
        - name: config-vol
          configMap:
            name: service-config
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- range .Values.env }}
            - name: {{ .name }}
              value: {{ .value | quote }}
            {{- end }}
          ports:
            - containerPort: {{ .Values.container.port }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if not .Values.developerMode }}
          volumeMounts:
            - mountPath: /src/config
              name: config-vol
          livenessProbe:
            httpGet:
              path: /health
              port: {{ .Values.service.targetPort }}
            initialDelaySeconds: 10
            periodSeconds: 15
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: {{ .Values.service.targetPort }}
            initialDelaySeconds: 10
            periodSeconds: 15
            failureThreshold: 3
      {{- if not .Values.developerMode }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{ end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- end }}
      {{- end }}
My ConfigMap:
apiVersion: v1
data:
  config.json: |
    {{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Release.Namespace }}
The build worked before I added the volumes and volumeMounts, and I don't know why it fails now; I think everything should be correct.
I just figured out the root cause. Two {{ end }} tags are placed in the wrong position: the last two {{ end }} tags should go under the securityContext and resources sections instead of at the very bottom of the file.
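One way to read that fix is to close each {{- if not .Values.developerMode }} right after the block it guards, so the containers section is never swallowed by the conditional; a sketch of the relevant parts (the probes are left unconditional here, which is an assumption):

    spec:
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if not .Values.developerMode }}
      volumes:
        - name: config-vol
          configMap:
            name: service-config
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          # ...image, env, ports as before...
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if not .Values.developerMode }}
          volumeMounts:
            - mountPath: /src/config
              name: config-vol
          {{- end }}
          livenessProbe:
            # ...unchanged...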

Init container for Helm3 multi pod deployment

I have a Helm deployment that deploys a pod with 2 containers.
Now I need to add an init container for one of those containers.
I'm new to Helm, so kindly share a snippet to achieve this. Under spec I have defined 2 containers, where container 1 depends on container 2: container 2 should be up first, and then I need to run the init container for container 1.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test.fullname" . }}
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "test.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "testLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "test.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ .Values.cloudsqlproxySa }}
      automountServiceAccountToken: true
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }} # For this I need to include the init container.
          securityContext:
            {{- toYaml .Values.test.securityContext | nindent 12 }}
          image: "{{ .Values.test.image.repository }}:{{ .Values.test.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.test.image.pullPolicy }}
          ports:
            - name: {{ .Values.test.port.name }}
              containerPort: {{ .Values.test.port.containerPort }}
              protocol: {{ .Values.test.port.protocol }}
          livenessProbe:
            httpGet:
              path: /
              port: {{ .Values.test.port.containerPort }}
          readinessProbe:
            httpGet:
              path: /
              port: {{ .Values.test.port.containerPort }}
          envFrom:
            - configMapRef:
                name: {{ .Values.configmap.name }}
          resources:
            {{- toYaml .Values.test.resources | nindent 12 }}
          volumeMounts:
            - name: gcp-bigquery-credential-file
              mountPath: /secret
              readOnly: true
        - name: {{ .Chart.Name }}-gce-proxy
          securityContext:
            {{- toYaml .Values.cloudsqlproxy.securityContext | nindent 12 }}
          image: "{{ .Values.cloudsqlproxy.image.repository }}:{{ .Values.cloudsqlproxy.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.cloudsqlproxy.image.pullPolicy }}
          command:
            - "/cloud_sql_proxy"
            - "-instances={{ .Values.cloudsqlConnection }}=tcp:{{ .Values.cloudsqlproxy.port.containerPort }}"
          ports:
            - name: {{ .Values.cloudsqlproxy.port.name }}
              containerPort: {{ .Values.cloudsqlproxy.port.containerPort }}
          resources:
            {{- toYaml .Values.cloudsqlproxy.resources | nindent 12 }}
          volumeMounts:
            - name: gcp-bigquery-credential-file
              mountPath: /secret
              readOnly: true
      volumes:
        - name: gcp-bigquery-credential-file
          secret:
            secretName: {{ .Values.bigquerysecret.name }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Posting this as a community wiki out of the comments; feel free to edit and expand.
As @anemyte responded in the comments, it's not possible to start an init container after the main container has started; that is the whole logic behind init containers. See Understanding init-containers.
A possible solution from @DavidMaze is to separate the containers into different Deployments and set up the application container to restart itself until the proxy container is up and running. Full quote:
If the init container exits with an error if it can't reach the proxy
container, and you run the proxy container in a separate deployment,
then you can have a setup where the application container restarts
until the proxy is up and running. That would mean splitting this into
two separate files in the templates directory
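That said, if you do split the proxy into its own Deployment (and Service), an init container on the application Deployment can simply wait for it. A minimal sketch, assuming a Service named cloudsql-proxy in front of the proxy and reusing the port value from the chart:

      initContainers:
        - name: wait-for-cloudsql-proxy
          image: busybox:1.36
          command:
            - sh
            - -c
            - >-
              until nc -z cloudsql-proxy {{ .Values.cloudsqlproxy.port.containerPort }};
              do echo waiting for cloudsql-proxy; sleep 2; done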

Helm - Few deployments in a loop

I have a list of customer IDs that I want to pass to values.yaml in the Helm chart, and then create a deployment for each customer. Is that possible? This is what I want to pass in values.yaml:
customer:
  - 62
  - 63
and this is my deployment template:
https://gist.github.com/JacobAmar/8c45e98f9c34bfd662b9fd11a534b9d5
I'm getting this error when I install the chart:
"parse error at (clientmodule/templates/deployment.yaml:51): unexpected EOF"
I also want to pass that customer ID to the default command in the container. Thanks for the help :)
OK, so I found out why Helm was only creating a deployment for the last item in the list: Helm uses "---" as a separator between the YAML documents it renders. So now my template looks like this and it works :)
{{ range .Values.customer.id }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "clientmodule-customer-{{ . }}"
  labels:
    {{- include "clientmodule.labels" $ | nindent 4 }}
    customer: "{{ . }}"
spec:
  {{- if not $.Values.autoscaling.enabled }}
  replicas: {{ $.Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "clientmodule.selectorLabels" $ | nindent 6 }}
  template:
    metadata:
      {{- with $.Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "clientmodule.selectorLabels" $ | nindent 8 }}
    spec:
      {{- with $.Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "clientmodule.serviceAccountName" $ }}
      securityContext:
        {{- toYaml $.Values.podSecurityContext | nindent 8 }}
      containers:
        - name: clientmodule-customer-{{ . }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          command: ["sh","-c",{{$.Values.command}}]
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
      {{- with $.Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
---
{{- end }}
You can refer to this answer too: looping over helm
Once you have entered the range, you should pass the chart scope with $, for example $.Values.podAnnotations.
More info in the docs.
{{ range .Values.customer }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "clientmodule-customer-{{ . }}"
  labels:
    {{- include "clientmodule.labels" $ | nindent 4 }}
spec:
  {{- if not $.Values.autoscaling.enabled }}
  replicas: {{ $.Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "clientmodule.selectorLabels" $ | nindent 6 }}
  template:
    metadata:
      {{- with $.Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "clientmodule.selectorLabels" $ | nindent 8 }}
    spec:
      {{- with $.Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "clientmodule.serviceAccountName" $ }}
      securityContext:
        {{- toYaml $.Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ $.Chart.Name }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          command: ["sh","-c",{{$.Values.command}}]
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
      {{- with $.Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
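For completeness, a values.yaml sketch that fits these loop templates (the key names are taken from the templates above; the repository and command values are placeholders):

replicaCount: 1
autoscaling:
  enabled: false
image:
  repository: myrepo/clientmodule
  tag: "1.0.0"
  pullPolicy: IfNotPresent
command: "echo starting customer worker"
customer:
  - 62
  - 63

Note that the accepted template ranges over .Values.customer.id rather than .Values.customer, so the list has to live under whichever path the range actually uses. If you also need the customer ID inside the container, one simple way is to set an env var such as CUSTOMER_ID with value "{{ . }}" inside the loop and read it from the command.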

Error: Deployment.apps "geared-marsupi-buildachart" is invalid:

helm install geared-marsupi ./buildachart/
Error: Deployment.apps "geared-marsupi-buildachart" is invalid: spec.template.spec.containers[1].volumeMounts[0].name: Not found: "postgres_sql"
It looks like an indentation issue, but somehow it is not working and fails.
I have two containers: one is Drupal, which does not need any volumes, and the second is PostgreSQL, to which I have added volumes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "buildachart.fullname" . }}
  labels:
    {{- include "buildachart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "buildachart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "buildachart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "buildachart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" #| default .Chart.AppVersion
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
        - name: postgres #{{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: postgres #"{{ .Values.image.repository1 }}:{{ .Values.image.tag }}" #| default .Chart.AppVersion
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: POSTGRES_USER
              value: "{{ .Values.user }}" #"user"
            - name: POSTGRES_PASSWORD
              value: "{{ .Values.password }}" #"password"
            - name: POSTGRES_DB
              value: "{{ .Values.db }}" #"test"
          volumeMounts:
            - name: postgres_sql
              mountPath: /var/lib/postgresql/data
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      volumes:
        - name: postgres_sql
          hostPath:
            path: /data/postgresdb_path
            type: Directory
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
I tried putting volumes directly under the containers key, and then under the name key of the second postgres container, but got the same result.
The working YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
spec:
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: drupal
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
        - name: postgres
          image: postgres
          env:
            - name: POSTGRES_USER
              value: test
            - name: POSTGRES_PASSWORD
              value: passwd
          volumeMounts:
            - name: postgres-vol
              mountPath: /var/lib/postgresql/data
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
      volumes:
        - name: postgres-vol
          hostPath:
            path: /data
            type: Directory
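As for the original template, note that its volumes block sits inside {{- with .Values.nodeSelector }}, so it only renders when a nodeSelector value is set, and the name postgres_sql would be rejected anyway because volume names must be valid DNS-1123 labels (lowercase alphanumerics and dashes). A sketch of the end of the pod spec with the volumes moved out of the with block and renamed (the volumeMounts entry has to use the same name):

      volumes:
        - name: postgres-sql
          hostPath:
            path: /data/postgresdb_path
            type: Directory
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}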