helm/kubernetes not installing all cronjobs in list - kubernetes

I have a helm chart which involves a loop over a range of values. The chart includes a statefulset, pvc and cronjob. If I pass it a list with 4 values, all is well, but if I pass it a list of 12 values, most of the cronjobs just don't appear in the final template (i.e. using helm install --dry-run --debug).
Can anyone explain what might be causing this? I googled to see if I could find information about maximum length of templates but couldn't find anything...
helm template renders the manifest just fine, so maybe Kubernetes is rejecting the cronjobs for some reason?
Is there a recommended approach for when you need to create many almost-duplicates of a manifest?
EXAMPLE: The chart template looks something like this:
{{ $env := .Release.Namespace }}
{{ $image_tag := .Values.image_tag }}
{{ $aws_account_id := .Values.aws_account_id }}
{{ $update_time := .Values.update_time }}
{{- range $collector := .Values.collectors }}
{{- $colname := $collector.name }}
apiVersion: v1
kind: Service
metadata:
  name: {{ $colname }}
  labels:
    app: {{ $colname }}
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: {{ $colname }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ $colname }}
  labels:
    app: {{ $colname }}
spec:
  selector:
    matchLabels:
      app: {{ $colname }}
  serviceName: {{ $colname }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ $colname }}
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: {{ $colname }}
        imagePullPolicy: Always
        image: {{ $aws_account_id }}.dkr.ecr.eu-west-1.amazonaws.com/d:{{ $image_tag }}
        volumeMounts:
        - name: {{ $colname }}-a-claim
          mountPath: /home/me/a
        - name: {{ $colname }}-b-claim
          mountPath: /home/me/b
        - name: {{ $colname }}-c-claim
          mountPath: /home/me/c
        env:
        - name: COLLECTOR
          value: {{ $colname }}
        - name: ENV
          value: {{ $env }}
  volumeClaimTemplates:
  - metadata:
      name: {{ $colname }}-a-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 50Gi
  - metadata:
      name: {{ $colname }}-b-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 10Gi
  - metadata:
      name: {{ $colname }}-c-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $colname }}-c-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $colname }}-cron
spec:
  schedule: {{ $update_time }}
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            fsGroup: 1000
          containers:
          - name: {{ $colname }}
            image: {{ $aws_account_id }}.dkr.ecr.eu-west-1.amazonaws.com/d:{{ $image_tag }}
            env:
            - name: COLLECTOR
              value: {{ $colname }}
            volumeMounts:
            - name: c-storage
              mountPath: /home/me/c
          restartPolicy: Never
          volumes:
          - name: c-storage
            persistentVolumeClaim:
              claimName: {{ $colname }}-c-claim
---
{{ end }}
and I'm passing values like:
collectors:
- name: a
- name: b
- name: c
- name: d
- name: e
- name: f
- name: g
- name: h
- name: i
- name: j
- name: k
- name: l
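As a quick sanity check (a sketch, not from the original post; it assumes the chart directory is the current directory and a reasonably recent kubectl), you can render the chart locally, count the generated CronJobs, and feed the same output through a client-side dry run to surface any validation errors:
# render locally and count the CronJob documents that come out
helm template . | grep -c "kind: CronJob"
# run the rendered manifests through a client-side dry run to see whether any are rejected
helm template . | kubectl apply --dry-run=client -f -
If the count already comes out short here, the problem is in the templating; if it is correct but the dry run reports errors, the problem is on the Kubernetes side.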

Related

Kubernetes deployment stuck on pending after create pvc

I'm trying to create persistent storage to share with all of my applications in the K8s cluster.
storageClass.yaml file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
persistentVolume.yaml file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /base-xapp/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - juniper-ric
persistentVolumeClaim.yaml file:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
and finally, this is the deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.appName }}-deployment
labels:
app: {{ .Values.appName }}
xappRelease: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Values.appName }}
template:
metadata:
labels:
app: {{ .Values.appName }}
xappRelease: {{ .Release.Name }}
spec:
containers:
- name: {{ .Values.appName }}
image: "{{ .Values.image }}:{{ .Values.tag }}"
imagePullPolicy: IfNotPresent
ports:
- name: rmr
containerPort: {{ .Values.rmrPort }}
protocol: TCP
- name: rtg
containerPort: {{ .Values.rtgPort }}
protocol: TCP
volumeMounts:
- name: app-cfg
mountPath: {{ .Values.routingTablePath }}{{ .Values.routingTableFile }}
subPath: {{ .Values.routingTableFile }}
- name: app-cfg
mountPath: {{ .Values.routingTablePath }}{{ .Values.vlevelFile }}
subPath: {{ .Values.vlevelFile }}
- name: {{ .Values.appName }}-persistent-storage
mountPath: {{ .Values.appName }}/data
envFrom:
- configMapRef:
name: {{ .Values.appName }}-configmap
volumes:
- name: app-cfg
configMap:
name: {{ .Values.appName }}-configmap
items:
- key: {{ .Values.routingTableFile }}
path: {{ .Values.routingTableFile }}
- key: {{ .Values.vlevelFile }}
path: {{ .Values.vlevelFile }}
- name: {{ .Values.appName }}-persistent-storage
persistentVolumeClaim:
claimName: my-claim
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.appName }}-rmr-service
labels:
xappRelease: {{ .Release.Name }}
spec:
selector:
app: {{ .Values.appName }}
type : NodePort
ports:
- name: rmr
protocol: TCP
port: {{ .Values.rmrPort }}
targetPort: {{ .Values.rmrPort }}
- name: rtg
protocol: TCP
port: {{ .Values.rtgPort }}
targetPort: {{ .Values.rtgPort }}
When I deploy the container, the pod status stays Pending:
base-xapp-deployment-6799d6cbf6-lgjks 0/1 Pending 0 3m25s
This is the output of kubectl describe:
Name: base-xapp-deployment-6799d6cbf6-lgjks
Namespace: near-rt-ric
Priority: 0
Node: <none>
Labels: app=base-xapp
pod-template-hash=6799d6cbf6
xappRelease=base-xapp
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/base-xapp-deployment-6799d6cbf6
Containers:
base-xapp:
Image: base-xapp:0.1.0
Ports: 4565/TCP, 4561/TCP
Host Ports: 0/TCP, 0/TCP
Environment Variables from:
base-xapp-configmap ConfigMap Optional: false
Environment: <none>
Mounts:
/rmr_route from app-cfg (rw,path="rmr_route")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxmwm (ro)
/vlevel from app-cfg (rw,path="vlevel")
base-xapp/data from base-xapp-persistent-storage (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
app-cfg:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: base-xapp-configmap
Optional: false
base-xapp-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-claim
ReadOnly: false
kube-api-access-rxmwm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10s (x6 over 4m22s) default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "my-claim" not found.
This is the output of kubectl for the relevant resources:
get pv:
dan@linux$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-local-pv 50Mi RWO Retain Available my-local-storage 6m2s
get pvc:
dan@linux$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-claim Pending my-local-storage 36m
You're missing spec.volumeName in your PVC manifest.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  volumeName: my-local-pv # this line was missing
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      name: my
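After applying the updated claim, the binding can be checked with something like this (a sketch; the resource names come from the manifests above and the namespace from the describe output in the question):
kubectl get pvc my-claim -n near-rt-ric
# once statically bound, STATUS should change from Pending to Bound and VOLUME should show my-local-pv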
I can see your deployment has the namespace near-rt-ric, but your PVC doesn't have a namespace, so it was probably placed in the default namespace.
Use this command to check: kubectl get pvc -A
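If the claim does turn out to live in the wrong namespace, one option (a sketch, reusing the near-rt-ric namespace from the question) is to pin the namespace explicitly in the PVC manifest:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
  namespace: near-rt-ric
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 50Mi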

Helm - How to write a file in a Volume using ConfigMap?

I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
  nodetype: free
configHocon: |-
  streams {
    monitoring {
      custom {
        uri = ${?URI}
        method = ${?METHOD}
      }
    }
  }
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-streams-configmap
data:
  config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configmap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The issue was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configMap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
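A quick way to catch this kind of casing mistake before deploying (a sketch, not part of the original answer; it assumes a reasonably recent kubectl and renders the chart from its directory) is to pipe the rendered output through client-side validation, which should flag unknown fields such as configmap:
helm template . -f values.yaml | kubectl apply --dry-run=client --validate=true -f -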

error parsing templates/deployment.yaml: json: line 1: invalid character '{' looking for beginning of object key string

I'm getting the following error when I try to deploy Nexus using Kubernetes.
Command: kubectl apply -f templates/deployment.yaml
error parsing templates/deployment.yaml: json: line 1: invalid
character '{' looking for beginning of object key string
Has anybody faced this issue?
Please find below the code I'm trying:
{{- if .Values.localSetup.enabled }}
apiVersion: apps/v1
kind: Deployment
{{- else }}
apiVersion: apps/v1
kind: StatefulSet
{{- end }}
metadata:
labels:
app: nexus
name: nexus
spec:
replicas: 1
selector:
matchLabels:
app: nexus
template:
metadata:
labels:
app: nexus
spec:
{{- if .Values.localSetup.enabled }}
volumes:
- name: nexus-data
persistentVolumeClaim:
claimName: nexus-pv-claim
- name: nexus-data-backup
persistentVolumeClaim:
claimName: nexus-backup-pv-claim
{{- end }}
containers:
- name: nexus
image: "quay.io/travelaudience/docker-nexus:3.15.2"
imagePullPolicy: Always
env:
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
resources:
requests:
cpu: 250m
memory: 4800Mi
ports:
- containerPort: {{ .Values.nexus.dockerPort }}
name: nexus-docker-g
- containerPort: {{ .Values.nexus.nexusPort }}
name: nexus-http
volumeMounts:
- mountPath: "/nexus-data"
name: nexus-data
- mountPath: "/nexus-data/backup"
name: nexus-data-backup
{{- if .Values.useProbes.enabled }}
livenessProbe:
httpGet:
path: {{ .Values.nexus.livenessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.livenessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.livenessProbe.failureThreshold }}
{{- if .Values.nexus.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.livenessProbe.timeoutSeconds }}
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.nexus.readinessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.readinessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.readinessProbe.failureThreshold }}
{{- if .Values.nexus.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.readinessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.nexusProxy.enabled }}
- name: nexus-proxy
image: "quay.io/travelaudience/docker-nexus-proxy:2.4.0_8u191"
imagePullPolicy: Always
env:
- name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
value: "GoogleHC"
- name: CLOUD_IAM_AUTH_ENABLED
value: "false"
- name: BIND_PORT
value: {{ .Values.nexusProxy.targetPort | quote }}
- name: ENFORCE_HTTPS
value: "false"
{{- if .Values.localSetup.enabled }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusLocalDockerhost }}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusLocalHttphost }}
{{- else }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusDockerHost}}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusHttpHost }}
{{- end }}
- name: UPSTREAM_DOCKER_PORT
value: {{ .Values.nexus.dockerPort | quote }}
- name: UPSTREAM_HTTP_PORT
value: {{ .Values.nexus.nexusPort | quote }}
- name: UPSTREAM_HOST
value: "localhost"
ports:
- containerPort: {{ .Values.nexusProxy.targetPort }}
name: proxy-port
{{- end }}
{{- if .Values.nexusBackup.enabled }}
- name: nexus-backup
image: "quay.io/travelaudience/docker-nexus-backup:1.4.0"
imagePullPolicy: Always
env:
- name: NEXUS_AUTHORIZATION
value: false
- name: NEXUS_BACKUP_DIRECTORY
value: /nexus-data/backup
- name: NEXUS_DATA_DIRECTORY
value: /nexus-data
- name: NEXUS_LOCAL_HOST_PORT
value: "localhost:8081"
- name: OFFLINE_REPOS
value: "maven-central maven-public maven-releases maven-snapshots"
- name: TARGET_BUCKET
value: "gs://nexus-backup"
- name: GRACE_PERIOD
value: "60"
- name: TRIGGER_FILE
value: .backup
volumeMounts:
- mountPath: /nexus-data
name: nexus-data
- mountPath: /nexus-data/backup
name: nexus-data-backup
terminationGracePeriodSeconds: 10
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: nexus-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
- metadata:
name: nexus-data-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
Any leads would be appreciated!
Regards
Mani
The template you provided here is part of a Helm chart, which should be deployed using the Helm CLI, not with kubectl apply.
More info on using Helm is here.
You can also find instructions for installing Nexus with Helm in the official stable Helm chart.
Hope this helps.
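For example (a sketch using Helm 3 syntax; the chart path and release name are placeholders), the chart containing this template would either be installed directly with the Helm CLI, or rendered to plain Kubernetes YAML first and only then applied with kubectl:
# install the chart directly
helm install nexus ./nexus-chart -f values.yaml
# or render it to plain YAML and apply that
helm template nexus ./nexus-chart -f values.yaml | kubectl apply -f -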

Helm: Passing array values through --set

I have a cronjob Helm chart; I can define many jobs in values.yaml, and cronjob.yaml will provision them. I have an issue when setting the image tag id on the command line: the following command throws no errors, but it won't update the jobs' image tag to the new one.
helm upgrade cronjobs cronjobs/ --wait --set job.myservice.image.tag=b70d744
The cronjobs still run with the old image tag. How can I resolve this?
Here is my cronjobs.yaml:
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: "{{ $job.namespace }}"
name: "{{ $release_name }}-{{ $job.name }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ $job.concurrencyPolicy }}
failedJobsHistoryLimit: {{ $job.failedJobsHistoryLimit }}
suspend: {{ $job.suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job.name }}
spec:
containers:
- image: "{{ $job.image.repository }}:{{ $job.image.tag }}"
imagePullPolicy: {{ $job.image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job.name }}
args:
{{ toYaml $job.args | indent 12 }}
env:
{{ toYaml $job.image.env | indent 12 }}
volumeMounts:
- name: nfs
mountPath: "{{ $job.image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ $job.image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ $job.image.server }}"
path: "{{ $job.image.nfspath }}"
readOnly: false
schedule: {{ $job.schedule | quote }}
successfulJobsHistoryLimit: {{ $job.successfulJobsHistoryLimit }}
{{- end }}
Here is my values.yaml:
jobs:
- name: myservice
namespace: default
image:
repository: xxx.com/myservice
tag: fe4544
pullPolicy: Always
secret: xxx
nfslogpath: "/var/logs/"
nfsserver: "xxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
schedule: "*/5 * * * *"
args:
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
concurrencyPolicy: Forbid
suspend: false
- name: myservice2
namespace: default
image:
repository: xxxx/myservice2
tag: 1dff39a
pullPolicy: IfNotPresent
secret: xxxx
nfslogpath: "/var/logs/"
nfsserver: "xxxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
schedule: "*/30 * * * *"
args:
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 2
concurrencyPolicy: Forbid
suspend: false
If you need to pass array values, you can use curly braces (a Unix shell requires quotes):
--set test={x,y,z}
--set "test={x,y,z}"
Result YAML:
test:
- x
- y
- z
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
EDIT: added double quotes for Unix shells like bash.
Update for Helm 2.5.0
As of Helm 2.5.0, it is possible to access list items using an array index syntax.
For example, --set servers[0].port=80 becomes:
servers:
- port: 80
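Applied to the values.yaml in the question (a sketch, assuming myservice stays the first entry of the jobs list), the index syntax would look like:
helm upgrade cronjobs cronjobs/ --wait --set jobs[0].image.tag=b70d744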
For the sake of completeness I'll post a more complex example with Helm 3.
Let's say that you have this in your values.yaml file:
extraEnvVars:
  - name: CONFIG_BACKEND_URL
    value: "https://api.example.com"
  - name: CONFIG_BACKEND_AUTH_USER
    value: "admin"
  - name: CONFIG_BACKEND_AUTH_PWD
    value: "very-secret-password"
You can --set just the value for the CONFIG_BACKEND_URL this way:
helm install ... --set "extraEnvVars[0].value=http://172.23.0.1:36241"
The other two variables (i.e. CONFIG_BACKEND_AUTH_USER and CONFIG_BACKEND_AUTH_PWD) will be read from the values.yaml file since we're not overwriting them with a --set.
Same for extraEnvVars[0].name which will be CONFIG_BACKEND_URL as per values.yaml.
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
Since you are using an array in your values.yaml file, please see the related issue.
Alternative solution
Your values.yaml is missing values for args and env. I've set them in my example and changed the indent to 14.
In your cronjob.yaml, server: "{{ $job.image.server }}" evaluates to null, so I've changed it to .image.nfsserver.
Instead of using an array, just separate your services as in the example below:
values.yaml
jobs:
myservice:
namespace: default
image:
repository: xxx.com/myservice
tag: fe4544
pullPolicy: Always
secret: xxx
nfslogpath: "/var/logs/"
nfsserver: "xxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
key: val
schedule: "*/5 * * * *"
args:
key: val
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
concurrencyPolicy: Forbid
suspend: false
myservice2:
namespace: default
image:
repository: xxxx/myservice2
tag: 1dff39a
pullPolicy: IfNotPresent
secret: xxxx
nfslogpath: "/var/logs/"
nfsserver: "xxxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
key: val
schedule: "*/30 * * * *"
args:
key: val
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 2
concurrencyPolicy: Forbid
suspend: false
In your cronjob.yaml use {{- range $job, $val := .Values.jobs }} to iterate over values.
Use $job where you used {{ $job.name }}.
Access values like suspend with {{ .suspend }} instead of {{ $job.suspend }}
cronjob.yaml
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job, $val := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: {{ .namespace }}
name: "{{ $release_name }}-{{ $job }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ .concurrencyPolicy }}
failedJobsHistoryLimit: {{ .failedJobsHistoryLimit }}
suspend: {{ .suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job }}
spec:
containers:
- image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job }}
args:
{{ toYaml .args | indent 14 }}
env:
{{ toYaml .image.env | indent 14 }}
volumeMounts:
- name: nfs
mountPath: "{{ .image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ .image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ .image.nfsserver }}"
path: "{{ .image.nfspath }}"
readOnly: false
schedule: {{ .schedule | quote }}
successfulJobsHistoryLimit: {{ .successfulJobsHistoryLimit }}
{{- end }}
Passing values using --set :
helm upgrade cronjobs cronjobs/ --wait --set jobs.myservice.image.tag=b70d744
Example:
helm install --debug --dry-run --set jobs.myservice.image.tag=my123tag .
...
HOOKS:
MANIFEST:
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice
spec:
containers:
- image: "xxx.com/myservice:my123tag"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxx
volumes:
- name: nfs
nfs:
server: "xxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/5 * * * *"
successfulJobsHistoryLimit: 3
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice2"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice2
spec:
containers:
- image: "xxxx/myservice2:1dff39a"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice2
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxxx
volumes:
- name: nfs
nfs:
server: "xxxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/30 * * * *"
successfulJobsHistoryLimit: 2
Hope that helps!
On Helm 3, this works for me:
--set "servers[0].port=80" --set "servers[1].port=8080"

Data is empty when accessing config file in k8s configmap with Helm

I am trying to use a ConfigMap in my deployment with Helm charts. It seems files can be accessed with Helm according to the docs here: https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md
This is my deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: "{{ template "service.fullname" . }}"
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: "{{ template "service.fullname" . }}"
spec:
containers:
- name: "{{ .Chart.Name }}"
image: "{{ .Values.registryHost }}/{{ .Values.userNamespace }}/{{ .Values.projectName }}/{{ .Values.serviceName }}:{{.Chart.Version}}"
volumeMounts:
- name: {{ .Values.configmapName}}configmap-volume
mountPath: /app/config
ports:
- containerPort: 80
name: http
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
volumes:
- name: {{ .Values.configmapName}}configmap-volume
configMap:
name: "{{ .Values.configmapName}}-configmap"
My configmap is accessing a config file. Here's the configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
The chart directory looks like this:
files/
--runtime-config.json
templates/
--configmap.yaml
--deployment.yaml
--ingress.yaml
--service.yaml
Chart.yaml
values.yaml
And this is what my runtime-config.json file looks like:
{
  "GameModeConfiguration": {
    "command": "xx",
    "modeId": 10,
    "sessionId": 11
  }
}
The problem is that when I install my chart (even in dry-run mode), the data for my ConfigMap is empty: the data from the config file is not added to my ConfigMap declaration. This is how it looks when I do a dry run:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: "runtime-configmap"
labels:
app: "runtime"
data:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: "whimsical-otter-runtime-service"
labels:
chart: "runtime-service-unknown/version"
spec:
replicas: 1
template:
metadata:
labels:
app: "whimsical-otter-runtime-service"
spec:
containers:
- name: "runtime-service"
image: "gcr.io/xxx-dev/xxx/runtime_service:unknown/version"
volumeMounts:
- name: runtimeconfigmap-volume
mountPath: /app/config
ports:
- containerPort: 80
name: http
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
volumes:
- name: runtimeconfigmap-volume
configMap:
name: "runtime-configmap"
---
What am I doing wrong that I don't get data?
The replacement of the variable within the string does not work:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
But you can generate the string using the printf function like this:
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 2 }}
Apart from the syntax problem pointed out by @adebasi, you still need to put this content under a key to get a valid ConfigMap YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
  my-file: |
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 4}}
Or you can use the handy configmap helper:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ (.Files.Glob "files/*").AsConfig | indent 2 }}
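With the .Files.Glob helper, each key in data is the file's base name (runtime-config.json here), so the mounted file name changes accordingly. A quick way to check the rendered result (a sketch, assuming Helm 3 and a placeholder release name) is:
helm template my-release . --show-only templates/configmap.yaml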