Helm - How to write a file in a Volume using ConfigMap? - kubernetes

I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
  nodetype: free
configHocon: |-
  streams {
    monitoring {
      custom {
        uri = ${?URI}
        method = ${?METHOD}
      }
    }
  }
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-streams-configmap
data:
  config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
      - env:
        {{- range $key, $value := .Values.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        image: {{ .Values.image }}
        name: {{ .Values.name }}
        volumeMounts:
        - name: config-hocon
          mountPath: /config
        ports:
        - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
      - name: config-hocon
        configmap:
          name: custom-streams-configmap
          items:
          - key: config.hocon
            path: config.hocon
status: {}
When I install the chart via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need config.hocon to be written into the /config folder. Can anyone tell me what is wrong with the configuration?

I was able to resolve the issue. The problem was using configmap instead of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
      - env:
        {{- range $key, $value := .Values.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        image: {{ .Values.image }}
        name: {{ .Values.name }}
        volumeMounts:
        - name: config-hocon
          mountPath: /config
        ports:
        - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
      - name: config-hocon
        configMap:
          name: custom-streams-configmap
          items:
          - key: config.hocon
            path: config.hocon
status: {}
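To double-check the fix, you can render the chart locally and then read the mounted file inside the running pod. A quick check, using the pod name from the question (the exact helm template syntax may differ slightly depending on your Helm version):
helm template custom-streams -f values.yaml
kubectl exec -it custom-streams-55b45b7756-fb292 -n streaming -- cat /config/config.hocon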

Related

env variable error converting YAML to JSON: yaml: did not find expected key

I have a deployment file which takes the environment variables from the values.yaml file.
I also want to add one more variable named "PURPOSE".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          {{- toYaml .Values.envVariables | nindent 10 }}
          - name: PURPOSE
            value: "SCHEDULER"
The error I get is the following:
error converting YAML to JSON: yaml: line 140: did not find expected key
The env variables from the values file work fine; the problem seems to be the variable "PURPOSE".
The problem was the formatting of the environment block. I used the solution below to fix the error:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          - name: PURPOSE
            value: "SCHEDULER"
          {{- toYaml .Values.envVariables | nindent 10 }}
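With this ordering the rendered env section comes out as one consistently indented list, which is what the YAML parser expects. A rough sketch of the rendered output, assuming envVariables contains a single entry named EXAMPLE_VAR:
        env:
          - name: PURPOSE
            value: "SCHEDULER"
          - name: EXAMPLE_VAR
            value: "example"
Mixing hand-written entries at one indentation level with a nindent block at another is what typically produces the "did not find expected key" error.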

Unable to deploy Kubernetes secrets using Helm

I'm trying to create my first Helm release on an AKS cluster using a GitLab pipeline,
but when I run the following command
- helm upgrade server ./aks/server
--install
--namespace demo
--kubeconfig ${CI_PROJECT_DIR}/.kube/config
--set image.name=${CI_PROJECT_NAME}/${CI_PROJECT_NAME}-server
--set image.tag=${CI_COMMIT_SHA}
--set database.user=${POSTGRES_USER}
--set database.password=${POSTGRES_PASSWORD}
I receive the following error:
"Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data:
decode base64: illegal base64 data at input byte 8, error found in #10 byte of ..."
It looks like something is not working with the secrets file, but I don't understand what.
The secret.yaml template file is defined as follows:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
I will also add the deployment and the service .yaml files.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      stack: node
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        tier: backend
        stack: node
        app: {{ .Values.app.name }}
    spec:
      containers:
      - name: {{ .Values.app.name }}
        image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
        imagePullPolicy: IfNotPresent
        env:
        - name: User
          valueFrom:
            secretKeyRef:
              name: server-secret
              key: User
              optional: false
        - name: Host
          valueFrom:
            secretKeyRef:
              name: server-secret
              key: Host
              optional: false
        - name: Database
          valueFrom:
            secretKeyRef:
              name: server-secret
              key: Database
              optional: false
        - name: Password
          valueFrom:
            secretKeyRef:
              name: server-secret
              key: Password
              optional: false
        - name: Ports
          valueFrom:
            secretKeyRef:
              name: server-secret
              key: Ports
              optional: false
        resources:
          limits:
            cpu: "1"
            memory: "128M"
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  type: ClusterIP
  selector:
    tier: backend
    stack: node
    app: {{ .Values.app.name }}
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
Any hint?
You have to encode the secret values to base64.
Check the doc on encoding-functions.
Try the code below:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user | b64enc }}
  Host: {{ .Values.database.host | b64enc }}
  Database: {{ .Values.database.name | b64enc }}
  Password: {{ .Values.database.password | b64enc }}
  Port: {{ .Values.database.port | b64enc }}
Alternatively, use stringData instead of data. stringData lets you create the secret without encoding the values to base64 (note there is no b64enc here). Check the example in the link:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
stringData:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
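One caveat with either variant: b64enc expects a string, and Secret values must be strings, so if database.port is a plain number in values.yaml (or is passed as a number via --set) the template may fail to render or produce an invalid Secret. A hedged workaround is to force the value to a string first, for example:
Port: {{ .Values.database.port | toString | b64enc }}
or, with stringData:
Port: {{ .Values.database.port | quote }}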

Why can I cd /root in a pod container even after specifying proper "securityContext"?

I have a helm chart with deployment.yaml having the following params:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: {{ .Values.newAppName }}
chart: {{ template "newApp.chart" . }}
release: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
name: {{ .Values.deploymentName }}
spec:
replicas: {{ .Values.numReplicas }}
selector:
matchLabels:
app: {{ .Values.newAppName }}
template:
metadata:
labels:
app: {{ .Values.newAppName }}
namespace: {{ .Release.Namespace }}
annotations:
some_annotation: val
some_annotation: val
spec:
serviceAccountName: {{ .Values.podRoleName }}
containers:
- env:
- name: ENV_VAR1
value: {{ .Values.env_var_1 }}
image: {{ .Values.newApp }}:{{ .Values.newAppVersion }}
imagePullPolicy: Always
command: ["/opt/myDir/bin/newApp"]
args: ["-c", "/etc/config/newApp/{{ .Values.newAppConfigFileName }}"]
name: {{ .Values.newAppName }}
ports:
- containerPort: {{ .Values.newAppTLSPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
readinessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 2
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
volumeMounts:
- mountPath: /etc/config/newApp
name: config-volume
readOnly: true
- mountPath: /etc/config/metrics
name: metrics-volume
readOnly: true
- mountPath: /etc/version/container
name: container-info-volume
readOnly: true
- name: {{ template "newAppClient.name" . }}-client
image: {{ .Values.newAppClientImage }}:{{ .Values.newAppClientVersion }}
imagePullPolicy: Always
args: ["run", "--server", "--config-file=/newAppClientPath/config.yaml", "--log-level=debug", "/newAppClientPath/pl"]
volumeMounts:
- name: newAppClient-files
mountPath: /newAppClient-path
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
volumes:
- name: config-volume
configMap:
name: {{ .Values.newAppConfigMapName }}
- name: container-info-volume
configMap:
name: {{ .Values.containerVersionConfigMapName }}
- name: metrics-volume
configMap:
name: {{ .Values.metricsConfigMapName }}
- name: newAppClient-files
configMap:
name: {{ .Values.newAppClientConfigMapName }}
items:
- key: config
path: config.yaml
This helm chart is consumed by Jenkins and then deployed by Spinnaker onto AWS EKS service.
One security measure we enforce is that the /root directory must be private in all our containers; access should be denied when a user manually tries to enter it after
kubectl exec -it -n namespace_name pod_name -c container_name bash
into the container.
But when I enter the container terminal, why can I still
cd /root
inside the container when it is running as non-root?
EXPECTED: It should give the following error which it is not giving:
cd root/
bash: cd: root/: Permission denied
OTHER VALUES THAT MIGHT BE USEFUL TO DEBUG:
Output of "ls -la" inside the container:
dr-xr-x--- 1 root root 18 Jul 26 2021 root
As you can see, r and x SHOULD BE UNSET for OTHER on the root folder.
Output of "id" inside the container:
bash-4.2$ id
uid=1000 gid=0(root) groups=0(root),1000
Using a Helm chart locally to reproduce the error: the same three securityContext params, when used locally in a simple Go program's Helm chart, yield the desired result.
Deployment.yaml of helm chart:
apiVersion: {{ template "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "fullname" . }}
template:
metadata:
labels:
app: {{ template "fullname" . }}
spec:
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
Output of ls -la inside the container on local setup:
drwx------ 2 root root 4096 Jan 25 00:00 root
You can always cd into / on a UNIX system as a non-root user, so you can do it inside your container as well. However, creating a file there, for example, should fail with Permission denied.
Check the following.
# Run a container as non-root
docker run -it --rm --user 7447 busybox sh
# Check that it's possible to cd into '/'
cd /
# Try creating file
touch some-file
touch: some-file: Permission denied
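To reproduce the same behaviour on Kubernetes rather than plain Docker, a minimal pod along these lines (the name and UID are only examples) should act the same way:
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-test
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 7447
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
Then kubectl exec -it nonroot-test -- sh lets you cd / just fine, while touch /some-file fails with Permission denied.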

error parsing templates/deployment.yaml: json: line 1: invalid character '{' looking for beginning of object key string

I'm getting the following error when I try to deploy Nexus using Kubernetes.
Command: kubectl apply -f templates/deployment.yaml
error parsing templates/deployment.yaml: json: line 1: invalid
character '{' looking for beginning of object key string
Has anybody faced this issue?
Please find below the code I'm trying:
{{- if .Values.localSetup.enabled }}
apiVersion: apps/v1
kind: Deployment
{{- else }}
apiVersion: apps/v1
kind: StatefulSet
{{- end }}
metadata:
labels:
app: nexus
name: nexus
spec:
replicas: 1
selector:
matchLabels:
app: nexus
template:
metadata:
labels:
app: nexus
spec:
{{- if .Values.localSetup.enabled }}
volumes:
- name: nexus-data
persistentVolumeClaim:
claimName: nexus-pv-claim
- name: nexus-data-backup
persistentVolumeClaim:
claimName: nexus-backup-pv-claim
{{- end }}
containers:
- name: nexus
image: "quay.io/travelaudience/docker-nexus:3.15.2"
imagePullPolicy: Always
env:
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
resources:
requests:
cpu: 250m
memory: 4800Mi
ports:
- containerPort: {{ .Values.nexus.dockerPort }}
name: nexus-docker-g
- containerPort: {{ .Values.nexus.nexusPort }}
name: nexus-http
volumeMounts:
- mountPath: "/nexus-data"
name: nexus-data
- mountPath: "/nexus-data/backup"
name: nexus-data-backup
{{- if .Values.useProbes.enabled }}
livenessProbe:
httpGet:
path: {{ .Values.nexus.livenessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.livenessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.livenessProbe.failureThreshold }}
{{- if .Values.nexus.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.livenessProbe.timeoutSeconds }}
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.nexus.readinessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.readinessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.readinessProbe.failureThreshold }}
{{- if .Values.nexus.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.readinessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.nexusProxy.enabled }}
- name: nexus-proxy
image: "quay.io/travelaudience/docker-nexus-proxy:2.4.0_8u191"
imagePullPolicy: Always
env:
- name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
value: "GoogleHC"
- name: CLOUD_IAM_AUTH_ENABLED
value: "false"
- name: BIND_PORT
value: {{ .Values.nexusProxy.targetPort | quote }}
- name: ENFORCE_HTTPS
value: "false"
{{- if .Values.localSetup.enabled }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusLocalDockerhost }}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusLocalHttphost }}
{{- else }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusDockerHost}}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusHttpHost }}
{{- end }}
- name: UPSTREAM_DOCKER_PORT
value: {{ .Values.nexus.dockerPort | quote }}
- name: UPSTREAM_HTTP_PORT
value: {{ .Values.nexus.nexusPort | quote }}
- name: UPSTREAM_HOST
value: "localhost"
ports:
- containerPort: {{ .Values.nexusProxy.targetPort }}
name: proxy-port
{{- end }}
{{- if .Values.nexusBackup.enabled }}
- name: nexus-backup
image: "quay.io/travelaudience/docker-nexus-backup:1.4.0"
imagePullPolicy: Always
env:
- name: NEXUS_AUTHORIZATION
value: false
- name: NEXUS_BACKUP_DIRECTORY
value: /nexus-data/backup
- name: NEXUS_DATA_DIRECTORY
value: /nexus-data
- name: NEXUS_LOCAL_HOST_PORT
value: "localhost:8081"
- name: OFFLINE_REPOS
value: "maven-central maven-public maven-releases maven-snapshots"
- name: TARGET_BUCKET
value: "gs://nexus-backup"
- name: GRACE_PERIOD
value: "60"
- name: TRIGGER_FILE
value: .backup
volumeMounts:
- mountPath: /nexus-data
name: nexus-data
- mountPath: /nexus-data/backup
name: nexus-data-backup
terminationGracePeriodSeconds: 10
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: nexus-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
- metadata:
name: nexus-data-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
Any leads would be appreciated!
Regards
Mani
The template you provided here is part of a Helm chart, which is meant to be deployed with the Helm CLI, not with kubectl apply.
More info on using Helm is here.
You can also find instructions for installing Nexus with Helm in the official stable Helm chart.
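If you are new to Helm, the rough workflow looks like this (the chart directory name below is only a placeholder):
# render the templates locally to inspect the generated manifests
helm template ./nexus-chart --values values.yaml
# install or upgrade the release on the cluster
helm upgrade --install nexus ./nexus-chart --values values.yaml
kubectl apply only understands plain Kubernetes manifests, so the {{ ... }} directives have to be rendered by Helm first.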
Hope this helps.

Helm: Passing array values through --set

I have a cronjob Helm chart; I can define many jobs in values.yaml and cronjob.yaml will provision them. I have run into an issue when setting the image tag ID on the command line: the following command throws no errors, but it won't update the job's image tag to the new one.
helm upgrade cronjobs cronjobs/ --wait --set job.myservice.image.tag=b70d744
The cronjobs still run with the old image tag. How can I resolve this?
here is my cronjobs.yaml
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: "{{ $job.namespace }}"
name: "{{ $release_name }}-{{ $job.name }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ $job.concurrencyPolicy }}
failedJobsHistoryLimit: {{ $job.failedJobsHistoryLimit }}
suspend: {{ $job.suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job.name }}
spec:
containers:
- image: "{{ $job.image.repository }}:{{ $job.image.tag }}"
imagePullPolicy: {{ $job.image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job.name }}
args:
{{ toYaml $job.args | indent 12 }}
env:
{{ toYaml $job.image.env | indent 12 }}
volumeMounts:
- name: nfs
mountPath: "{{ $job.image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ $job.image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ $job.image.server }}"
path: "{{ $job.image.nfspath }}"
readOnly: false
schedule: {{ $job.schedule | quote }}
successfulJobsHistoryLimit: {{ $job.successfulJobsHistoryLimit }}
{{- end }}
here is my values.yaml
jobs:
- name: myservice
  namespace: default
  image:
    repository: xxx.com/myservice
    tag: fe4544
    pullPolicy: Always
    secret: xxx
    nfslogpath: "/var/logs/"
    nfsserver: "xxx"
    nfspath: "/nfs/xxx/cronjobs/"
    nfsreadonly: false
    env:
  schedule: "*/5 * * * *"
  args:
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  suspend: false
- name: myservice2
  namespace: default
  image:
    repository: xxxx/myservice2
    tag: 1dff39a
    pullPolicy: IfNotPresent
    secret: xxxx
    nfslogpath: "/var/logs/"
    nfsserver: "xxxx"
    nfspath: "/nfs/xxx/cronjobs/"
    nfsreadonly: false
    env:
  schedule: "*/30 * * * *"
  args:
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 2
  concurrencyPolicy: Forbid
  suspend: false
If you need to pass array values, you can use curly braces (Unix shells require quotes):
--set test={x,y,z}
--set "test={x,y,z}"
Result YAML:
test:
- x
- y
- z
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
EDITED: added double quotes for Unix shells like bash
Update for Helm 2.5.0
As of Helm 2.5.0, it is possible to access list items using an array index syntax.
For example, --set servers[0].port=80 becomes:
servers:
- port: 80
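The same documentation also covers values containing special characters: a comma inside a --set value has to be escaped with a backslash so it is not split into a list, and dots in key names can be escaped the same way, for example:
--set name=value1\,value2
--set nodeSelector."kubernetes\.io/role"=master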
For the sake of completeness I'll post a more complex example with Helm 3.
Let's say that you have this in your values.yaml file:
extraEnvVars:
- name: CONFIG_BACKEND_URL
  value: "https://api.example.com"
- name: CONFIG_BACKEND_AUTH_USER
  value: "admin"
- name: CONFIG_BACKEND_AUTH_PWD
  value: "very-secret-password"
You can --set just the value for the CONFIG_BACKEND_URL this way:
helm install ... --set "extraEnvVars[0].value=http://172.23.0.1:36241"
The other two variables (i.e. CONFIG_BACKEND_AUTH_USER and CONFIG_BACKEND_AUTH_PWD) will be read from the values.yaml file since we're not overwriting them with a --set.
Same for extraEnvVars[0].name which will be CONFIG_BACKEND_URL as per values.yaml.
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
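If you want to see what such an override produces before touching the cluster, you can render the chart with a dry run (release and chart names here are placeholders):
helm install my-release ./my-chart --dry-run --debug --set "extraEnvVars[0].value=http://172.23.0.1:36241"
The rendered manifest at the end of the output shows the merged env list, so you can confirm the remaining entries still come from values.yaml.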
Since you are using an array in your values.yaml file, please see the related issue.
Alternative solution
Your values.yaml is missing values for args and env; I've set them in my example and changed the indent to 14.
In your cronjob.yaml, server: "{{ $job.image.server }}" renders as null because values.yaml defines nfsserver, so I've changed it to .image.nfsserver.
Instead of using an array, just separate your services as in the example below:
values.yaml
jobs:
  myservice:
    namespace: default
    image:
      repository: xxx.com/myservice
      tag: fe4544
      pullPolicy: Always
      secret: xxx
      nfslogpath: "/var/logs/"
      nfsserver: "xxx"
      nfspath: "/nfs/xxx/cronjobs/"
      nfsreadonly: false
      env:
        key: val
    schedule: "*/5 * * * *"
    args:
      key: val
    failedJobsHistoryLimit: 1
    successfulJobsHistoryLimit: 3
    concurrencyPolicy: Forbid
    suspend: false
  myservice2:
    namespace: default
    image:
      repository: xxxx/myservice2
      tag: 1dff39a
      pullPolicy: IfNotPresent
      secret: xxxx
      nfslogpath: "/var/logs/"
      nfsserver: "xxxx"
      nfspath: "/nfs/xxx/cronjobs/"
      nfsreadonly: false
      env:
        key: val
    schedule: "*/30 * * * *"
    args:
      key: val
    failedJobsHistoryLimit: 1
    successfulJobsHistoryLimit: 2
    concurrencyPolicy: Forbid
    suspend: false
In your cronjob.yaml use {{- range $job, $val := .Values.jobs }} to iterate over values.
Use $job where you used {{ $job.name }}.
Access values like suspend with {{ .suspend }} instead of {{ $job.suspend }}
cronjob.yaml
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job, $val := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: {{ .namespace }}
name: "{{ $release_name }}-{{ $job }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ .concurrencyPolicy }}
failedJobsHistoryLimit: {{ .failedJobsHistoryLimit }}
suspend: {{ .suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job }}
spec:
containers:
- image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job }}
args:
{{ toYaml .args | indent 14 }}
env:
{{ toYaml .image.env | indent 14 }}
volumeMounts:
- name: nfs
mountPath: "{{ .image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ .image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ .image.nfsserver }}"
path: "{{ .image.nfspath }}"
readOnly: false
schedule: {{ .schedule | quote }}
successfulJobsHistoryLimit: {{ .successfulJobsHistoryLimit }}
{{- end }}
Passing values using --set:
helm upgrade cronjobs cronjobs/ --wait --set jobs.myservice.image.tag=b70d744
Example:
helm install --debug --dry-run --set jobs.myservice.image.tag=my123tag .
...
HOOKS:
MANIFEST:
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice
spec:
containers:
- image: "xxx.com/myservice:my123tag"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxx
volumes:
- name: nfs
nfs:
server: "xxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/5 * * * *"
successfulJobsHistoryLimit: 3
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice2"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice2
spec:
containers:
- image: "xxxx/myservice2:1dff39a"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice2
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxxx
volumes:
- name: nfs
nfs:
server: "xxxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/30 * * * *"
successfulJobsHistoryLimit: 2
Hope that helps!
On Helm 3, this works for me:
--set "servers[0].port=80" --set "servers[1].port=8080"