I have this ConfigMap file, app-configmap-mdc.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "app.fullname" . }}-app-mdc
labels:
app.kubernetes.io/name: "app"
{{- include "app.labels" . | nindent 4 }}
data:
mdc.properties: |
{{- range .Values.app.log.mdc.properties }}
{{ . }}
{{- end }}
I want pods to restart automatically whenever app.log.mdc.properties changes,
so I add a checksum annotation to the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "app.fullname" . }}-app
labels:
app.kubernetes.io/name: "app"
spec:
template:
metadata:
labels:
app.kubernetes.io/name: "app"
annotations:
checksum/mdc: {{ include (print $.Template.BasePath "/app-configmap-mdc.yaml") . | sha256sum }}
spec:
containers:
- name: app
volumeMounts:
- name: app-mdc
mountPath: /app/config/mdc.properties
subPath: mdc.properties
volumes:
- name: app-mdc
configMap:
name: "{{ include "app.fullname" . }}-app-mdc"
...
But when I run helm upgrade, the pods don't restart: the checksum/mdc annotation value doesn't change in the pod template metadata, even though the value of the ConfigMap app-app-mdc does change.
So it looks like the checksum is not being recalculated during helm upgrade.
What am I doing wrong?
Values:
global:
# Parameters for all docker registry of installation product
image:
productRepository: docker-dev-local.comp.com/ps
externalRepository: docker.comp.com
pullPolicy: IfNotPresent
imagePullSecrets:
- name: docker-dev-local
serviceAccount:
name: user
extraLabels: {}
priorityClassName: ""
# Parameters for product "APP"
app:
monitoring:
jolokia: {}
log:
scanPeriodInSec: 30
mdc:
properties: {}
configuration:
appConfigName: app_config
# Parameters for component "app"
replicaCount: 2
minAvailable: 1
resources:
limits:
cpu: 1
memory: 1536Mi
requests:
cpu: 1
memory: 1024Mi
securityContext:
privileged: false
runAsNonRoot: true
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
service:
type: LoadBalancer
ports:
http-api:
port: 5235
protocol: TCP
appProtocol: http
targetPort: 5235
annotations:
service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "JSESSIONID"
service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
# Example ingress settings
ingress:
enabled: false
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/affinity-canary-behavior: sticky
nginx.ingress.kubernetes.io/session-cookie-name: EPMINGRESSCOOKIE
hosts:
- host: app-app.standname.mf.cloud.nexign.com
paths:
- /
tls: []
nodeSelector: {}
affinity: {}
tolerations: []
# Configuration of Java environment
java:
maxMem: 1024M
minMem: 512M
# Application configuration
Template:
{{/*
Name of the product
*/}}
{{- define "app.productname" -}}
app
{{- end -}}
{{/*
Name of the product group
*/}}
{{- define "app.productgroup" -}}
bin
{{- end -}}
{{/*
Full name of the chart.
*/}}
{{- define "app.fullname" -}}
{{- if contains .Chart.Name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Chart name with version
*/}}
{{- define "app.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "app.labels" -}}
helm.sh/chart: "{{ include "app.chart" . }}"
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/part-of: "{{ include "app.productname" . }}"
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
{{- with .Values.global.extraLabels }}
{{ toYaml . }}
{{- end }}
{{- end -}}
{{/*
Common selectors
*/}}
{{- define "app.selectorLabels" -}}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/part-of: "{{ include "app.productname" . }}"
{{- end -}}
{{/*
Common annotations
*/}}
{{- define "app.annotations" -}}
logging: json
{{- end -}}
{{- define "app.app.propertiesHash" -}}
{{- $env := include (print $.Template.BasePath "/app-configmap-env.yaml") . | sha256sum -}}
{{ print $env | sha256sum }}
{{- end -}}
{{/*
Service account name
*/}}
{{- define "app.serviceAccountName" -}}
{{ default "default" .Values.global.serviceAccount.name }}
{{- end -}}
The problem was with resource quotas.
After the ConfigMap changed, a new ReplicaSet tried to start, but the required resources were not available under the quota.
So its pod stayed Pending, and the old ReplicaSet with the old pod kept running.
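The checksum technique itself is sound: sha256sum over the rendered template changes whenever any rendered byte changes, which is what triggers the rollout. A quick local sketch of that property:

```shell
# sha256sum produces a different 64-hex-char digest whenever the input changes,
# even by a single character -- the basis of the checksum-annotation trick
old=$(printf 'key1=value1\n' | sha256sum | cut -d' ' -f1)
new=$(printf 'key1=value2\n' | sha256sum | cut -d' ' -f1)
echo "$old"
echo "$new"
```

If the annotation really does change in the rendered output (check with `helm template . | grep checksum/mdc`) but pods stay up, inspect the new ReplicaSet's events with `kubectl describe rs`; quota exhaustion shows up there as failed pod creation.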
Related
When I am trying to deploy a Docker image to an EKS cluster using Helm, I am getting this error:
Warning FailedScheduling 33s (x5 over 4m58s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector.
Here is the deployment.yaml file from the Helm chart I am using:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "helm-chart.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
{{- toYaml .Values.iamLabels | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
app: helm
template:
metadata:
labels:
app: helm
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "helm-chart.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Values.image.name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.image.containerPort }}
envFrom:
- secretRef:
name: {{ .Values.image.envSecretName }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
livenessProbe:
{{- toYaml .Values.livenessProbe | nindent 12 }}
readinessProbe:
{{- toYaml .Values.readinessProbe | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Does anyone have a solution for this? I've been stuck on it for a long time.
When I try to use this template I get the error above: 3 node(s) didn't match Pod's node affinity/selector.
Either your nodes are not ready and carry some custom taints, or your Helm deployment has an affinity that no node satisfies.
Run helm template on your chart and check that the rendered Deployment doesn't contain an unexpected affinity or nodeSelector.
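For illustration, if the cluster's nodes carried a label and taint like the ones below (role=web and dedicated=web:NoSchedule are hypothetical names, not from the question), the chart's values would need matching entries. A sketch:

```yaml
# values.yaml sketch -- the label and taint names here are hypothetical examples
nodeSelector:
  role: web                # pods only schedule on nodes labeled role=web
tolerations:
  - key: "dedicated"       # tolerate a custom taint dedicated=web:NoSchedule
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
```

Compare any such entries against the actual node labels (`kubectl get nodes --show-labels`) and taints (`kubectl describe nodes`).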
I'm having a hard time putting together the K8s syntax for declaring a secret in a file, then mounting it in an accessible way. In short, in the simplest way, I want to store Postgres credentials (client key, client cert, server-ca cert) in files. When I try to install, it can't find the volume. I thought the volume is defined in the deployment yaml attached below.
How do I tell Helm/K8s that yes, these secrets should be mounted as volumes, create them if needed?
My secrets defined:
>kubectl get secrets |grep postgres
postgres-client-cert Opaque 1 20m
postgres-client-key Opaque 1 18m
postgres-server-ca Opaque 1 18m
Failed attempt to deploy chart:
helm upgrade --install $APP-$TARGET_ENV ./.helm -f ./.helm/values-$TARGET_ENV.yaml -n $TARGET_ENV
W0609 18:56:15.653459 1198995 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Error: UPGRADE FAILED: cannot patch "myapp-app" with kind Deployment: Deployment.apps "myapp-app" is invalid: [spec.template.spec.containers[0].volumeMounts[1].name: Not found: "postgres-client-key-volume", spec.template.spec.containers[0].volumeMounts[2].name: Not found: "postgres-server-ca-volume"]
.helm/templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}
labels:
app: {{ .Chart.Name }}
{{- include "app.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount | default 1 }}
{{- end }}
selector:
matchLabels:
{{- include "app.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "app.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
env:
- name: TIME_UPDATED
value: {{ now | date "2006-01-02T15:04:05" }}
- name: SENTRY_ENV
value: {{ .Values.deployment.SENTRY_ENV }}
- name: PORT
value: {{ .Values.deployment.containerPort | quote }}
{{- toYaml .Values.deployment.env | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.sha1 | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: postgres-client-cert-volume
mountPath: "/var/run/secrets/postgres-client-cert"
readOnly: true
- name: postgres-client-key-volume
mountPath: "/var/run/secrets/postgres-client-key"
readOnly: true
- name: postgres-server-ca-volume
mountPath: "/var/run/secrets/postgres-server-ca"
readOnly: true
ports:
- name: http
containerPort: {{ .Values.deployment.containerPort }}
protocol: TCP
livenessProbe:
{{- toYaml .Values.livenessProbe | nindent 12 }}
readinessProbe:
{{- toYaml .Values.readinessProbe | nindent 12 }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumes:
- name: postgres-client-cert-volume
secret:
secretName: postgres-client-cert
optional: false
- name: postgres-client-key
secret:
secretName: postgres-client-key
optional: false
- name: postgres-server-ca
secret:
secretName: postgres-server-ca
optional: false
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Thanks,
Woodsman
The chart declares a volumeMount named postgres-server-ca-volume, but in the volumes section you've named that volume postgres-server-ca (and likewise postgres-client-key-volume vs. postgres-client-key). Each volume name must match its volumeMount name exactly.
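A minimal corrected volumes section, renaming the two mismatched entries so they line up with the volumeMounts already declared in the container:

```yaml
volumes:
  - name: postgres-client-cert-volume
    secret:
      secretName: postgres-client-cert
      optional: false
  - name: postgres-client-key-volume   # was "postgres-client-key"
    secret:
      secretName: postgres-client-key
      optional: false
  - name: postgres-server-ca-volume    # was "postgres-server-ca"
    secret:
      secretName: postgres-server-ca
      optional: false
```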
I got this error after I added the volumeMounts and volumes into the deployment.yaml.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "chart.fullname" . }}
labels:
{{- include "chart.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "chart.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "chart.selectorLabels" . | nindent 8 }}
spec:
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
{{- if not .Values.developerMode }}
volumes:
- name: config-vol
configMap:
name: service-config
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
{{- range .Values.env }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
ports:
- containerPort: {{ .Values.container.port }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if not .Values.developerMode }}
volumeMounts:
- mountPath: /src/config
name: config-vol
livenessProbe:
httpGet:
path: /health
port: {{ .Values.service.targetPort }}
initialDelaySeconds: 10
periodSeconds: 15
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: {{ .Values.service.targetPort }}
initialDelaySeconds: 10
periodSeconds: 15
failureThreshold: 3
{{- if not .Values.developerMode }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{ end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
My ConfigMap:
apiVersion: v1
data:
config.json: |
{{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
name: service-config
namespace: {{ .Release.Namespace }}
The build worked before I added the volumes and volumeMounts, and I don't know why it failed, since I think everything should be correct.
I just figured out the root cause: two {{ end }} tags were placed in the wrong position. The last two {{ end }} should go under securityContext and resources, closing each {{- if not .Values.developerMode }} right after the block it guards.
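As a sketch of the fix, each {{- if not .Values.developerMode }} is closed immediately after the block it guards rather than at the end of the file (container fields other than the mounts are elided here):

```yaml
spec:
  {{- if not .Values.developerMode }}
  volumes:
    - name: config-vol
      configMap:
        name: service-config
  {{- end }}
  containers:
    - name: {{ .Chart.Name }}
      # ...image, env, ports, resources, probes as before...
      {{- if not .Values.developerMode }}
      volumeMounts:
        - mountPath: /src/config
          name: config-vol
      {{- end }}
```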
I have a Helm deployment which deploys a two-container pod.
Now I need to add an init container to one of the containers in the pod.
I'm new to Helm; kindly share a snippet to achieve this. Under spec I have defined two containers, where container 1 depends on container 2: container 2 should be up first, and then the init container for container 1 should run.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "test.fullname" . }}
namespace: {{ .Values.global.namespace }}
labels:
{{- include "test.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "test.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "test.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ .Values.cloudsqlproxySa }}
automountServiceAccountToken: true
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }} # For this I need to include the init container.
securityContext:
{{- toYaml .Values.test.securityContext | nindent 12 }}
image: "{{ .Values.test.image.repository }}:{{ .Values.test.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.test.image.pullPolicy }}
ports:
- name: {{ .Values.test.port.name }}
containerPort: {{ .Values.test.port.containerPort }}
protocol: {{ .Values.test.port.protocol }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.test.port.containerPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.test.port.containerPort }}
envFrom:
- configMapRef:
name: {{ .Values.configmap.name }}
resources:
{{- toYaml .Values.test.resources | nindent 12 }}
volumeMounts:
- name: gcp-bigquery-credential-file
mountPath: /secret
readOnly: true
- name: {{ .Chart.Name }}-gce-proxy
securityContext:
{{- toYaml .Values.cloudsqlproxy.securityContext | nindent 12 }}
image: "{{ .Values.cloudsqlproxy.image.repository }}:{{ .Values.cloudsqlproxy.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.cloudsqlproxy.image.pullPolicy }}
command:
- "/cloud_sql_proxy"
- "-instances={{ .Values.cloudsqlConnection }}=tcp:{{ .Values.cloudsqlproxy.port.containerPort }}"
ports:
- name: {{ .Values.cloudsqlproxy.port.name }}
containerPort: {{ .Values.cloudsqlproxy.port.containerPort }}
resources:
{{- toYaml .Values.cloudsqlproxy.resources | nindent 12 }}
volumeMounts:
- name: gcp-bigquery-credential-file
mountPath: /secret
readOnly: true
volumes:
- name: gcp-bigquery-credential-file
secret:
secretName: {{ .Values.bigquerysecret.name }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Posting this as a community wiki out of the comments; feel free to edit and expand.
As @anemyte responded in the comments, it is not possible to start an init container after the main container has started: that is the logic behind init containers (see "Understanding init containers" in the Kubernetes documentation).
A possible solution, from @DavidMaze, is to separate the containers into different Deployments and have the application container restart itself until the proxy container is up and running. Full quote:
If the init container exits with an error if it can't reach the proxy
container, and you run the proxy container in a separate deployment,
then you can have a setup where the application container restarts
until the proxy is up and running. That would mean splitting this into
two separate files in the templates directory
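With the proxy split out into its own Deployment behind a Service, a common pattern is an init container on the application pod that blocks until the proxy answers. In the sketch below the Service name cloudsql-proxy and port 5432 are assumptions, substitute your own, and note that BusyBox's nc must support -z in the image you use:

```yaml
# deployment.yaml sketch -- Service name and port are hypothetical
initContainers:
  - name: wait-for-proxy
    image: busybox:1.36
    command:
      - sh
      - -c
      # loop until a TCP connection to the (assumed) proxy Service succeeds
      - until nc -z cloudsql-proxy 5432; do echo waiting for proxy; sleep 2; done
```

This cannot work for two containers in the same pod, since all init containers finish before any app container starts; it only works once the proxy lives in a separate Deployment.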
I need to deploy three different certificates into different namespaces using a Helm chart.
I created one template per certificate in the same file and put an if condition around each one, so that only the needed certificate is deployed, selected by a parameter I pass to my helm install command.
My secret.yaml looks like this:
{{- if eq .Values.val "paris_turf_support" }}
{{- range .Values.namespaces.paris_turf_support }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: "tls-paris-turf.support"
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-paris-turf.support.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-paris-turf.support.key" | b64enc }}
{{- end }}
{{ else if eq .Values.val "geny_sports" }}
{{- range .Values.namespaces.geny_sports }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: "tls-geny-sports.com"
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-geny-sports.com.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-geny-sports.com.key" | b64enc }}
{{- end }}
{{ else if eq .Values.val "paris_turf_com" }}
{{- range .Values.namespaces.paris_turf_com }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: "tls-paris-turf.com"
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-paris-turf.com.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-paris-turf.com.key" | b64enc }}
{{- end }}
{{- end }}
When I run this command to install the Helm chart:
helm install secret-rel ./secret --values=./secret/values/dev.yaml --namespace=secret --set val="paris_turf_com"
I get this error :
Error: YAML parse error on secret/templates/secret.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
I need your help, please.
mapping values are not allowed in this context means there is a syntax error that makes the YAML invalid.
There are plenty of online tools that can be used to validate YAML syntax, such as YAML Lint.
In your particular case the error points at line 9. Looking at your config, the indentation is missing on lines 9 and 10 (the name and namespace keys under metadata). It should look like this instead:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: tls-paris-turf.support
namespace: {{ $ns }}
Also, you don't need double quotes for naming your Secrets. And, as you already noticed, you should emit a --- document separator before each {{- end }} so the Secrets produced by the range loop are split into separate YAML documents.
I hope it helps.
I finally fixed the problem; this is my secret.yaml:
{{- if eq .Values.val "paris_turf_support" }}
{{- range .Values.namespaces.paris_turf_support }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: tls-paris-turf.support
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-paris-turf.support.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-paris-turf.support.key" | b64enc }}
---
{{- end }}
{{ else if eq .Values.val "geny_sports" }}
{{- range .Values.namespaces.geny_sports }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: tls-geny-sports.com
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-geny-sports.com.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-geny-sports.com.key" | b64enc }}
---
{{- end }}
{{ else if eq .Values.val "paris_turf_com" }}
{{- range .Values.namespaces.paris_turf_com }}
{{- $ns := . -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
name: tls-paris-turf.com
namespace: {{ $ns }}
data:
tls.crt: {{ $.Files.Get "tls-paris-turf.com.crt" | b64enc }}
tls.key: {{ $.Files.Get "tls-paris-turf.com.key" | b64enc }}
---
{{- end }}
{{- end }}
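For reference, b64enc in these templates is plain base64 of the file bytes; the same value can be reproduced on the command line (placeholder data here, not a real certificate):

```shell
# base64-encode content the same way Helm's b64enc does (no extra wrapping for short input)
printf 'placeholder-cert-data' | base64
# → cGxhY2Vob2xkZXItY2VydC1kYXRh
```

This is handy for checking that what landed in the Secret (`kubectl get secret tls-paris-turf.com -o yaml`) decodes back to the expected file contents.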