I need to deploy three different certificates to different namespaces using a Helm chart.
I created one template per certificate in the same file and wrapped each one in an if condition, so that only the certificate I pass as a parameter to my helm install command gets deployed.
My secret.yaml looks like this:
{{- if eq .Values.val "paris_turf_support" }}
{{- range .Values.namespaces.paris_turf_support }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: "tls-paris-turf.support"
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-paris-turf.support.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-paris-turf.support.key" | b64enc }}
{{- end }}
{{ else if eq .Values.val "geny_sports" }}
{{- range .Values.namespaces.geny_sports }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: "tls-geny-sports.com"
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-geny-sports.com.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-geny-sports.com.key" | b64enc }}
{{- end }}
{{ else if eq .Values.val "paris_turf_com" }}
{{- range .Values.namespaces.paris_turf_com }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: "tls-paris-turf.com"
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-paris-turf.com.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-paris-turf.com.key" | b64enc }}
{{- end }}
{{- end }}
When I run this command to install the Helm chart:
helm install secret-rel ./secret --values=./secret/values/dev.yaml --namespace=secret --set val="paris_turf_com"
I get this error:
Error: YAML parse error on secret/templates/secret.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
I need your help, please.
"mapping values are not allowed in this context" means that there is an error in the .yaml which makes it invalid.
There are plenty of online tools that can be used to validate YAML syntax, such as YAML Lint.
In your particular use case the error says that there is an issue with line 9. Looking at your config we can see that you are missing indentation in lines 9 and 10. It should look like this instead:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: tls-paris-turf.support
  namespace: {{ $ns }}
Also, you don't need to use double quotes (" ") for naming your Secrets. And as you already noticed, you should add a --- line before {{- end }} so each iteration produces a separate YAML document.
I hope it helps.
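You can also render the chart locally and check the generated YAML before installing, reusing the same chart path and values file from your install command:

helm template secret-rel ./secret --values=./secret/values/dev.yaml --set val="paris_turf_com"

The output is plain YAML, so you can paste it straight into a validator.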
Finally I fixed the problem. This is my secret.yaml:
{{- if eq .Values.val "paris_turf_support" }}
{{- range .Values.namespaces.paris_turf_support }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: tls-paris-turf.support
  namespace: {{ $ns }}
data:
  tls.crt: {{ $.Files.Get "tls-paris-turf.support.crt" | b64enc }}
  tls.key: {{ $.Files.Get "tls-paris-turf.support.key" | b64enc }}
---
{{- end }}
{{ else if eq .Values.val "geny_sports" }}
{{- range .Values.namespaces.geny_sports }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: tls-geny-sports.com
  namespace: {{ $ns }}
data:
  tls.crt: {{ $.Files.Get "tls-geny-sports.com.crt" | b64enc }}
  tls.key: {{ $.Files.Get "tls-geny-sports.com.key" | b64enc }}
---
{{- end }}
{{ else if eq .Values.val "paris_turf_com" }}
{{- range .Values.namespaces.paris_turf_com }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: tls-paris-turf.com
  namespace: {{ $ns }}
data:
  tls.crt: {{ $.Files.Get "tls-paris-turf.com.crt" | b64enc }}
  tls.key: {{ $.Files.Get "tls-paris-turf.com.key" | b64enc }}
---
{{- end }}
{{- end }}
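For reference, the values file (dev.yaml) this template expects only needs to carry the namespaces map, since val is passed with --set. The namespace names below are placeholders, not the real dev.yaml:

namespaces:
  paris_turf_support:
    - support-dev
  geny_sports:
    - geny-dev
  paris_turf_com:
    - front-dev
    - back-dev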
I have the following ConfigMap file, app-configmap-mdc.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "app.fullname" . }}-app-mdc
  labels:
    app.kubernetes.io/name: "app"
    {{- include "app.labels" . | nindent 4 }}
data:
  mdc.properties: |
    {{- range .Values.app.log.mdc.properties }}
    {{ . }}
    {{- end }}
I want to automatically restart the pods when app.log.mdc.properties changes, so I added a checksum annotation to the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "app.fullname" . }}-app
  labels:
    app.kubernetes.io/name: "app"
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "app"
      annotations:
        checksum/mdc: {{ include (print $.Template.BasePath "/app-configmap-mdc.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: app-mdc
              mountPath: /app/config/mdc.properties
              subPath: mdc.properties
      volumes:
        - name: app-mdc
          configMap:
            name: "{{ include "app.fullname" . }}-app-mdc"
...
But when I run helm upgrade, the pods don't restart and the checksum/mdc annotation value doesn't change in the metadata, even though the content of the app-app-mdc ConfigMap does change.
So it looks like the checksum is not recalculated during helm upgrade.
What am I doing wrong?
Values:
global:
  # Parameters for all docker registries of the installed product
  image:
    productRepository: docker-dev-local.comp.com/ps
    externalRepository: docker.comp.com
    pullPolicy: IfNotPresent
  imagePullSecrets:
    - name: docker-dev-local
  serviceAccount:
    name: user
  extraLabels: {}
  priorityClassName: ""
# Parameters for product "APP"
app:
  monitoring:
    jolokia: {}
  log:
    scanPeriodInSec: 30
    mdc:
      properties: {}
  configuration:
    appConfigName: app_config
  # Parameters for component "app"
  replicaCount: 2
  minAvailable: 1
  resources:
    limits:
      cpu: 1
      memory: 1536Mi
    requests:
      cpu: 1
      memory: 1024Mi
  securityContext:
    privileged: false
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsUser: 1001
  service:
    type: LoadBalancer
    ports:
      http-api:
        port: 5235
        protocol: TCP
        appProtocol: http
        targetPort: 5235
    annotations:
      service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "JSESSIONID"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
  # Example ingress settings
  ingress:
    enabled: false
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/affinity: cookie
      nginx.ingress.kubernetes.io/affinity-mode: persistent
      nginx.ingress.kubernetes.io/affinity-canary-behavior: sticky
      nginx.ingress.kubernetes.io/session-cookie-name: EPMINGRESSCOOKIE
    hosts:
      - host: app-app.standname.mf.cloud.nexign.com
        paths:
          - /
    tls: []
  nodeSelector: {}
  affinity: {}
  tolerations: []
  # Configuration of the Java environment
  java:
    maxMem: 1024M
    minMem: 512M
# Application configuration
Template:
{{/*
Name of the product
*/}}
{{- define "app.productname" -}}
app
{{- end -}}
{{/*
Name of the product group
*/}}
{{- define "app.productgroup" -}}
bin
{{- end -}}
{{/*
Full name of the chart.
*/}}
{{- define "app.fullname" -}}
{{- if contains .Chart.Name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Chart name with version
*/}}
{{- define "app.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "app.labels" -}}
helm.sh/chart: "{{ include "app.chart" . }}"
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/part-of: "{{ include "app.productname" . }}"
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.global.extraLabels }}
{{ toYaml . }}
{{- end }}
{{- end -}}
{{/*
Common selectors
*/}}
{{- define "app.selectorLabels" -}}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/part-of: "{{ include "app.productname" . }}"
{{- end -}}
{{/*
Common annotations
*/}}
{{- define "app.annotations" -}}
logging: json
{{- end -}}
{{- define "app.app.propertiesHash" -}}
{{- $env := include (print $.Template.BasePath "/app-configmap-env.yaml") . | sha256sum -}}
{{ print $env | sha256sum }}
{{- end -}}
{{/*
Service account name
*/}}
{{- define "app.serviceAccountName" -}}
{{ default "default" .Values.global.serviceAccount.name }}
{{- end -}}```
The problem was with resource quotas.
After the ConfigMap change, a new ReplicaSet tried to start, but there were not enough resources available for its pod.
So the new pod stayed in the Pending state and the old ReplicaSet with the old pod kept running.
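If you run into the same symptom, you can confirm that it is a quota or capacity problem with standard kubectl commands (the namespace is a placeholder here):

kubectl get pods -n my-namespace                         # look for the new pod stuck in Pending
kubectl describe pod <pending-pod> -n my-namespace       # the Events section explains why it cannot be scheduled
kubectl describe replicaset -n my-namespace              # quota rejections show up in the ReplicaSet events
kubectl describe resourcequota -n my-namespace           # compare Used against Hard limits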
When I try to deploy a Docker image to an EKS cluster using Helm, I am getting this error:
Warning FailedScheduling 33s (x5 over 4m58s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector.
Here is the deployment.yaml file of the Helm chart I am using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    {{- toYaml .Values.iamLabels | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      app: helm
  template:
    metadata:
      labels:
        app: helm
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "helm-chart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Values.image.name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.image.containerPort }}
          envFrom:
            - secretRef:
                name: {{ .Values.image.envSecretName }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Does anyone have a solution for this? I have been stuck on it for a long time.
Whenever I try to use this template I get the above error: 3 node(s) didn't match Pod's node affinity/selector.
Either your nodes are not ready or have some custom taints, or your Helm deployment has an affinity set.
Try templating your Helm deployment and check that you don't have an affinity there.
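A few commands that can help narrow it down (the release name and chart path are placeholders):

kubectl get nodes                                        # are all nodes Ready?
kubectl describe nodes | grep -A 3 Taints                # any custom taints you would need to tolerate?
helm template my-release ./helm-chart | grep -A 10 -E 'affinity|nodeSelector|tolerations'

If the rendered output contains an affinity or nodeSelector that no node satisfies, fix the corresponding entry in values.yaml.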
I'm having a hard time putting together the K8s syntax for declaring a secret in a file and then mounting it in an accessible way. In short, in the simplest way possible, I want to store Postgres credentials (client key, client cert, server CA cert) in files. When I try to install, it can't find the volume, yet I thought the volume was defined in the deployment.yaml attached below.
How do I tell Helm/K8s that yes, these secrets should be mounted as volumes, create them if needed?
My secrets defined:
>kubectl get secrets |grep postgres
postgres-client-cert Opaque 1 20m
postgres-client-key Opaque 1 18m
postgres-server-ca Opaque 1 18m
Failed attempt to deploy chart:
helm upgrade --install $APP-$TARGET_ENV ./.helm -f ./.helm/values-$TARGET_ENV.yaml -n $TARGET_ENV
W0609 18:56:15.653459 1198995 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Error: UPGRADE FAILED: cannot patch "myapp-app" with kind Deployment: Deployment.apps "myapp-app" is invalid: [spec.template.spec.containers[0].volumeMounts[1].name: Not found: "postgres-client-key-volume", spec.template.spec.containers[0].volumeMounts[2].name: Not found: "postgres-server-ca-volume"]
.helm/templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
    {{- include "app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount | default 1 }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "app.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ .Chart.Name }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          env:
            - name: TIME_UPDATED
              value: {{ now | date "2006-01-02T15:04:05" }}
            - name: SENTRY_ENV
              value: {{ .Values.deployment.SENTRY_ENV }}
            - name: PORT
              value: {{ .Values.deployment.containerPort | quote }}
            {{- toYaml .Values.deployment.env | nindent 12 }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.sha1 | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: postgres-client-cert-volume
              mountPath: "/var/run/secrets/postgres-client-cert"
              readOnly: true
            - name: postgres-client-key-volume
              mountPath: "/var/run/secrets/postgres-client-key"
              readOnly: true
            - name: postgres-server-ca-volume
              mountPath: "/var/run/secrets/postgres-server-ca"
              readOnly: true
          ports:
            - name: http
              containerPort: {{ .Values.deployment.containerPort }}
              protocol: TCP
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: postgres-client-cert-volume
          secret:
            secretName: postgres-client-cert
            optional: false
        - name: postgres-client-key
          secret:
            secretName: postgres-client-key
            optional: false
        - name: postgres-server-ca
          secret:
            secretName: postgres-server-ca
            optional: false
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Thanks,
Woodsman
The Deployment is looking for a volume named postgres-server-ca-volume but you've got it named postgres-server-ca on line 74 (and likewise postgres-client-key-volume vs postgres-client-key).
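Renaming the entries in the volumes section so they match the volumeMounts names should fix the upgrade, for example:

volumes:
  - name: postgres-client-cert-volume
    secret:
      secretName: postgres-client-cert
      optional: false
  - name: postgres-client-key-volume
    secret:
      secretName: postgres-client-key
      optional: false
  - name: postgres-server-ca-volume
    secret:
      secretName: postgres-server-ca
      optional: false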
I have a list of customer ids that I want to pass to the values.yml of my Helm chart, and then create a deployment for each customer. Is that possible? This is what I want to pass in values.yml:
customer:
  - 62
  - 63
And this is my deployment template:
https://gist.github.com/JacobAmar/8c45e98f9c34bfd662b9fd11a534b9d5
I am getting this error when installing the chart:
"parse error at (clientmodule/templates/deployment.yaml:51): unexpected EOF"
I also want to pass that customer id to the default command in the container. Thanks for the help :)
OK, so I found out why Helm was only creating a deployment for the last item in the list: Helm uses "---" as a separator between YAML documents. So now my template looks like this and it works :)
{{ range .Values.customer.id }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "clientmodule-customer-{{ . }}"
  labels:
    {{- include "clientmodule.labels" $ | nindent 4 }}
    customer: "{{ . }}"
spec:
  {{- if not $.Values.autoscaling.enabled }}
  replicas: {{ $.Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "clientmodule.selectorLabels" $ | nindent 6 }}
  template:
    metadata:
      {{- with $.Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "clientmodule.selectorLabels" $ | nindent 8 }}
    spec:
      {{- with $.Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "clientmodule.serviceAccountName" $ }}
      securityContext:
        {{- toYaml $.Values.podSecurityContext | nindent 8 }}
      containers:
        - name: clientmodule-customer-{{ . }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          command: ["sh","-c",{{ $.Values.command }}]
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
      {{- with $.Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
---
{{- end }}
You can refer to this answer too: looping over helm
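Note that this template ranges over .Values.customer.id, so the values file needs to nest the list under an id key, for example:

customer:
  id:
    - 62
    - 63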
Once you are inside range, you need to reference the chart scope with $, for example $.Values.podAnnotations.
More info in the docs.
{{ range .Values.customer }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "clientmodule-customer-{{ . }}"
  labels:
    {{- include "clientmodule.labels" $ | nindent 4 }}
spec:
  {{- if not $.Values.autoscaling.enabled }}
  replicas: {{ $.Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "clientmodule.selectorLabels" $ | nindent 6 }}
  template:
    metadata:
      {{- with $.Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "clientmodule.selectorLabels" $ | nindent 8 }}
    spec:
      {{- with $.Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "clientmodule.serviceAccountName" $ }}
      securityContext:
        {{- toYaml $.Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ $.Chart.Name }}
          securityContext:
            {{- toYaml $.Values.securityContext | nindent 12 }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          command: ["sh","-c",{{ $.Values.command }}]
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
      {{- with $.Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $.Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
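As for passing the customer id to the container's default command: inside the range, {{ . }} is the current customer id, so you can interpolate it into the command or expose it as an environment variable. The --customer-id flag and CUSTOMER_ID variable below are only illustrative placeholders; use whatever your application actually expects:

          command: ["sh", "-c", "{{ $.Values.command }} --customer-id {{ . }}"]
          env:
            - name: CUSTOMER_ID
              value: "{{ . }}"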
The stable/mongodb chart has a secrets.yaml that looks like the following.
{{ if and .Values.usePassword (not .Values.existingSecret) -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mongodb.fullname" . }}
  labels:
    app: {{ template "mongodb.name" . }}
    chart: {{ template "mongodb.chart" . }}
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  {{- if .Values.mongodbRootPassword }}
  mongodb-root-password: {{ .Values.mongodbRootPassword | b64enc | quote }}
  {{- else }}
  mongodb-root-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- if and .Values.mongodbUsername .Values.mongodbDatabase }}
  {{- if .Values.mongodbPassword }}
  mongodb-password: {{ .Values.mongodbPassword | b64enc | quote }}
  {{- else }}
  mongodb-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- end }}
  {{- if .Values.replicaSet.enabled }}
  {{- if .Values.replicaSet.key }}
  mongodb-replica-set-key: {{ .Values.replicaSet.key | b64enc | quote }}
  {{- else }}
  mongodb-replica-set-key: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- end }}
{{- end }}
I want to provide some of the values using my values.yaml file. Is this possible since stable/mongodb 5.20.0 is a subchart/dependency I am referencing? I've tried naming values the same in my app's values.yaml, but they don't seem to overwrite them when I do a test run using helm template.
Thanks!
You need to add an alias for the mongodb dependency in your requirements.yaml and then nest the MongoDB values under that alias in your own values.yaml.
https://helm.sh/docs/developing_charts/#alias-field-in-requirements-yaml
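A minimal sketch of what that looks like (the repository URL should be whichever repo you already pull stable/mongodb from, and the credential values below are placeholders):

# requirements.yaml
dependencies:
  - name: mongodb
    version: 5.20.0
    repository: https://charts.helm.sh/stable
    alias: mongodb

# values.yaml of your chart
mongodb:
  usePassword: true
  mongodbRootPassword: change-me
  mongodbUsername: my-user
  mongodbDatabase: my-db
  mongodbPassword: change-me

Running helm template again should then show the subchart's Secret rendered with your values instead of the random ones.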