Helm: Passing array values through --set - kubernetes-helm

I have a CronJob Helm chart: I can define many jobs in values.yaml and cronjob.yaml will provision them. I have hit an issue when setting the image tag ID on the command line; the following command throws no errors, but it does not update the job's image tag to the new one.
helm upgrade cronjobs cronjobs/ --wait --set job.myservice.image.tag=b70d744
The cronjobs still run with the old image tag. How can I resolve this?
Here is my cronjobs.yaml:
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: "{{ $job.namespace }}"
name: "{{ $release_name }}-{{ $job.name }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ $job.concurrencyPolicy }}
failedJobsHistoryLimit: {{ $job.failedJobsHistoryLimit }}
suspend: {{ $job.suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job.name }}
spec:
containers:
- image: "{{ $job.image.repository }}:{{ $job.image.tag }}"
imagePullPolicy: {{ $job.image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job.name }}
args:
{{ toYaml $job.args | indent 12 }}
env:
{{ toYaml $job.image.env | indent 12 }}
volumeMounts:
- name: nfs
mountPath: "{{ $job.image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ $job.image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ $job.image.server }}"
path: "{{ $job.image.nfspath }}"
readOnly: false
schedule: {{ $job.schedule | quote }}
successfulJobsHistoryLimit: {{ $job.successfulJobsHistoryLimit }}
{{- end }}
Here is my values.yaml:
jobs:
- name: myservice
namespace: default
image:
repository: xxx.com/myservice
tag: fe4544
pullPolicy: Always
secret: xxx
nfslogpath: "/var/logs/"
nfsserver: "xxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
schedule: "*/5 * * * *"
args:
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
concurrencyPolicy: Forbid
suspend: false
- name: myservice2
namespace: default
image:
repository: xxxx/myservice2
tag: 1dff39a
pullPolicy: IfNotPresent
secret: xxxx
nfslogpath: "/var/logs/"
nfsserver: "xxxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
schedule: "*/30 * * * *"
args:
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 2
concurrencyPolicy: Forbid
suspend: false

If you need to pass array values, you can use curly braces (a Unix shell requires quotes):
--set test={x,y,z}
--set "test={x,y,z}"
Result YAML:
test:
- x
- y
- z
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
Edit: added the double-quoted form for Unix shells such as bash.
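For illustration, a hypothetical template snippet that consumes a list set via --set "test={x,y,z}" could simply range over it:
# hypothetical snippet - ranges over the list passed with --set "test={x,y,z}"
test:
{{- range .Values.test }}
  - {{ . }}
{{- end }}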

Update for Helm 2.5.0
As of Helm 2.5.0, it is possible to access list items using an array index syntax.
For example, --set servers[0].port=80 becomes:
servers:
- port: 80
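Multiple fields of the same list element can be combined in one flag as well; per the same Helm docs, --set servers[0].port=80,servers[0].host=example becomes:
servers:
- port: 80
  host: example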

For the sake of completeness I'll post a more complex example with Helm 3.
Let's say that you have this in your values.yaml file:
extraEnvVars:
- name: CONFIG_BACKEND_URL
value: "https://api.example.com"
- name: CONFIG_BACKEND_AUTH_USER
value: "admin"
- name: CONFIG_BACKEND_AUTH_PWD
value: "very-secret-password"
You can --set just the value for the CONFIG_BACKEND_URL this way:
helm install ... --set "extraEnvVars[0].value=http://172.23.0.1:36241"
The other two variables (i.e. CONFIG_BACKEND_AUTH_USER and CONFIG_BACKEND_AUTH_PWD) will be read from the values.yaml file since we're not overwriting them with a --set.
Same for extraEnvVars[0].name, which will remain CONFIG_BACKEND_URL as per values.yaml.
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
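To illustrate, with the values.yaml above and that single --set, the effective values (and hence the rendered env entries) would roughly be:
extraEnvVars:
- name: CONFIG_BACKEND_URL
  value: "http://172.23.0.1:36241"
- name: CONFIG_BACKEND_AUTH_USER
  value: "admin"
- name: CONFIG_BACKEND_AUTH_PWD
  value: "very-secret-password"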

Since you are using an array in your values.yaml file, please see the related issue.
Alternative solution
Your values.yaml is missing values for args and env; I've set them in my example and changed the indent to 14.
In your cronjob.yaml, server: "{{ $job.image.server }}" renders as null (values.yaml defines nfsserver), so I've changed it to .image.nfsserver.
Instead of using an array, just separate your services as in the example below:
values.yaml
jobs:
myservice:
namespace: default
image:
repository: xxx.com/myservice
tag: fe4544
pullPolicy: Always
secret: xxx
nfslogpath: "/var/logs/"
nfsserver: "xxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
key: val
schedule: "*/5 * * * *"
args:
key: val
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
concurrencyPolicy: Forbid
suspend: false
myservice2:
namespace: default
image:
repository: xxxx/myservice2
tag: 1dff39a
pullPolicy: IfNotPresent
secret: xxxx
nfslogpath: "/var/logs/"
nfsserver: "xxxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
key: val
schedule: "*/30 * * * *"
args:
key: val
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 2
concurrencyPolicy: Forbid
suspend: false
In your cronjob.yaml, use {{- range $job, $val := .Values.jobs }} to iterate over the values.
Use {{ $job }} where you previously used {{ $job.name }}.
Access values such as suspend with {{ .suspend }} instead of {{ $job.suspend }}.
cronjob.yaml
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job, $val := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: {{ .namespace }}
name: "{{ $release_name }}-{{ $job }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ .concurrencyPolicy }}
failedJobsHistoryLimit: {{ .failedJobsHistoryLimit }}
suspend: {{ .suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job }}
spec:
containers:
- image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job }}
args:
{{ toYaml .args | indent 14 }}
env:
{{ toYaml .image.env | indent 14 }}
volumeMounts:
- name: nfs
mountPath: "{{ .image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ .image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ .image.nfsserver }}"
path: "{{ .image.nfspath }}"
readOnly: false
schedule: {{ .schedule | quote }}
successfulJobsHistoryLimit: {{ .successfulJobsHistoryLimit }}
{{- end }}
Passing values using --set:
helm upgrade cronjobs cronjobs/ --wait --set jobs.myservice.image.tag=b70d744
Example:
helm install --debug --dry-run --set jobs.myservice.image.tag=my123tag .
...
HOOKS:
MANIFEST:
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice
spec:
containers:
- image: "xxx.com/myservice:my123tag"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxx
volumes:
- name: nfs
nfs:
server: "xxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/5 * * * *"
successfulJobsHistoryLimit: 3
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice2"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice2
spec:
containers:
- image: "xxxx/myservice2:1dff39a"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice2
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxxx
volumes:
- name: nfs
nfs:
server: "xxxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/30 * * * *"
successfulJobsHistoryLimit: 2
Hope that helps!

On Helm 3, this works for me:
--set "servers[0].port=80" --set "servers[1].port=8080"

Related

How to convert a Kubernetes Deployment into a Kubernetes CronJob using a Helm chart

I am running my Spring Boot application's Docker image on Kubernetes using a Helm chart.
Below are the details.
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "xyz.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "xyz.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "xyz.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Chart.yaml
apiVersion: v2
name: xyz
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: <APP_VERSION_PLACEHOLDER>
values.yaml
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
### - If we want 3 instances then we mention 3 - then 3 pods will be created on the server
### - For staging env we usually keep 1
replicaCount: 1
image:
### --->We can also give local Image details also here
### --->We can create image in Docker repository and use that image URL here
repository: gcr.io/mgcp-109-xyz-operations/projectname
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: "xyz"
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
schedule: "*/5 * * * *"
###SMS2-40 - There are 2 ways how we want to serve our applications-->1st->LoadBalancer or 2-->NodePort
service:
type: NodePort
port: 8087
liveness: /actuator/health/liveness
readiness: /actuator/health/readiness
###service:
### type: ClusterIP
### port: 80
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
#application:
# configoveride: "config/application.properties"
templates/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "xyz.selectorLabels" . | nindent 4 }}
I ran my application first without cronjob.yaml.
Once my application was running on Kubernetes, I tried to convert it into a Kubernetes cron job, so I deleted templates/deployment.yaml and added templates/cronjob.yaml instead.
After I deployed my application it ran, but when I do
kubectl get cronjobs
it shows No resources found in default namespace.
What am I doing wrong here? I'm unable to figure it out.
I use the command below to install my Helm chart: helm upgrade --install chartname
Not sure if your file is only partially pasted, but it is not ended properly; an EOF error may occur when the chart is templated.
The end part of the CronJob should be:
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
The full file should look something like this:
apiVersion: batch/v1
kind: CronJob
metadata:
name: test
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
I just tested the above and it's working fine.
Command to test the Helm chart template:
helm template <chart name> . --output-dir ./yaml
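As a sketch (assuming the chart directory is the current directory and the chart is named xyz), the rendered manifests are written per template under the output directory, so you can inspect the CronJob before installing:
helm template xyz . --output-dir ./yaml
# rendered files land under <output-dir>/<chart-name>/templates/, e.g.:
cat ./yaml/xyz/templates/cronjob.yaml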
I was also deploying deployment.yaml, which was a mistake, so I deleted the deployment.yaml file and kept only the cronjob.yaml file, whose content is given below:
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{ include "xyz.labels" . | nindent 4 }}
spec:
schedule: "{{ .Values.schedule }}"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: DD_ENV
value: {{ .Values.datadog.env }}
- name: DD_SERVICE
value: {{ include "xyz.name" . }}
- name: DD_VERSION
value: {{ include "xyz.AppVersion" . }}
- name: DD_LOGS_INJECTION
value: "true"
- name: DD_RUNTIME_METRICS_ENABLED
value: "true"
volumeMounts:
- mountPath: /app/config
name: logback
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
volumes:
- configMap:
name: {{ include "xyz.name" . }}
name: logback
backoffLimit: 0
metadata:
{{ with .Values.podAnnotations }}
annotations:
{{ toYaml . | nindent 8 }}
labels:
{{ include "xyz.selectorLabels" . | nindent 8 }}
{{- end }}
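Once this installs cleanly, you can verify the CronJob exists and trigger a one-off run without waiting for the schedule (the CronJob name below is a placeholder for whatever xyz.fullname renders to):
kubectl get cronjobs
kubectl create job --from=cronjob/<cronjob-name> manual-test-run
kubectl logs job/manual-test-run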

Can we have two different values for each replica in a Kubernetes StatefulSet?

I have the current implementation below, which uses the same configuration for every replica.
Is there a possibility to get different values for each replica that is created?
StatefulSet file:
{{- $outer := . -}}
{{- range $idx, $app := .Values.appliance_type }}
{{- with $outer -}}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ $app.name }}
labels:
app: "{{ $app.appName }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
spec:
replicas: {{ $app.replicaCount }}
serviceName: {{ $app.serviceName }}
selector:
matchLabels:
app: {{ $app.appName }}
template:
metadata:
labels:
app: {{ $app.appName }}
spec:
containers:
- name: selfcheck
image: {{ .Values.image.registry }}/{{ .Values.da.pod.selfcheck.repository}}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CDNHOSTNAME
value: '{{ $app.hostname }}'
- name: CUSER
value: '{{ .Values.da.conf.consoleUser }}'
- name: CPASSWORD
value: '{{ .Values.da.conf.consolePassword }}'
{{ end }}
{{- end -}}
===================
values.yaml
appliance_type:
- name: sethu
hostname : s1
replicaCount: 2
serviceName: da
appName: test-ac
- name: ram
hostname : r1
replicaCount: 1
serviceName: ida
appName: test-ia
===================
Actual results:
It will create 3 pods:
sethu-0 => CDNHOSTNAME (s1)
sethu-1 => CDNHOSTNAME (s1)
ram-0 => CDNHOSTNAME (r1)
Needed results:
sethu-0 => CDNHOSTNAME (s1)
sethu-1 => CDNHOSTNAME (s2)
ram-0 => CDNHOSTNAME (r1)
The hostnames of sethu-0 and sethu-1 need to take different values from values.yaml, something like the configuration below - but it is not working:
appliance_type:
- name: sethu
hostname :
- s1
- s2
replicaCount: 2
serviceName: da
appName: test-ac
- name: ram
replicaCount: 1
serviceName: ida
appName: test-ia

Why can I cd /root in a pod container even after specifying proper "securityContext"?

I have a Helm chart with a deployment.yaml that has the following params:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: {{ .Values.newAppName }}
chart: {{ template "newApp.chart" . }}
release: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
name: {{ .Values.deploymentName }}
spec:
replicas: {{ .Values.numReplicas }}
selector:
matchLabels:
app: {{ .Values.newAppName }}
template:
metadata:
labels:
app: {{ .Values.newAppName }}
namespace: {{ .Release.Namespace }}
annotations:
some_annotation: val
some_annotation: val
spec:
serviceAccountName: {{ .Values.podRoleName }}
containers:
- env:
- name: ENV_VAR1
value: {{ .Values.env_var_1 }}
image: {{ .Values.newApp }}:{{ .Values.newAppVersion }}
imagePullPolicy: Always
command: ["/opt/myDir/bin/newApp"]
args: ["-c", "/etc/config/newApp/{{ .Values.newAppConfigFileName }}"]
name: {{ .Values.newAppName }}
ports:
- containerPort: {{ .Values.newAppTLSPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
readinessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 2
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
volumeMounts:
- mountPath: /etc/config/newApp
name: config-volume
readOnly: true
- mountPath: /etc/config/metrics
name: metrics-volume
readOnly: true
- mountPath: /etc/version/container
name: container-info-volume
readOnly: true
- name: {{ template "newAppClient.name" . }}-client
image: {{ .Values.newAppClientImage }}:{{ .Values.newAppClientVersion }}
imagePullPolicy: Always
args: ["run", "--server", "--config-file=/newAppClientPath/config.yaml", "--log-level=debug", "/newAppClientPath/pl"]
volumeMounts:
- name: newAppClient-files
mountPath: /newAppClient-path
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
volumes:
- name: config-volume
configMap:
name: {{ .Values.newAppConfigMapName }}
- name: container-info-volume
configMap:
name: {{ .Values.containerVersionConfigMapName }}
- name: metrics-volume
configMap:
name: {{ .Values.metricsConfigMapName }}
- name: newAppClient-files
configMap:
name: {{ .Values.newAppClientConfigMapName }}
items:
- key: config
path: config.yaml
This helm chart is consumed by Jenkins and then deployed by Spinnaker onto AWS EKS service.
A security measure that we ensure is that /root directory should be private in all our containers, so basically it should deny permission when a user tries to manually do the same after
kubectl exec -it -n namespace_name pod_name -c container_name bash
into the container.
But when I enter the container terminal why can I still
cd /root
inside the container when it is running as non-root?
Expected: it should give the following error, which it is not giving:
cd root/
bash: cd: root/: Permission denied
Other values that might be useful for debugging:
Output of ls -la inside the container:
dr-xr-x--- 1 root root 18 Jul 26 2021 root
As you can see, r and x are unset for "other" on the root folder.
Output of "id" inside the container:
bash-4.2$ id
uid=1000 gid=0(root) groups=0(root),1000
Using a helm chart locally to reproduce the error ->
The same 3 securityContext params when used locally in a simple Go program helm chart yields the desired result.
Deployment.yaml of helm chart:
apiVersion: {{ template "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "fullname" . }}
template:
metadata:
labels:
app: {{ template "fullname" . }}
spec:
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
Output of ls -la inside the container on local setup:
drwx------ 2 root root 4096 Jan 25 00:00 root
You can always cd into / on a UNIX system as a non-root user, so you can do it inside your container as well. However, creating a file there, for example, should fail with Permission denied.
Check the following.
# Run a container as non-root
docker run -it --rm --user 7447 busybox sh
# Check that it's possible to cd into '/'
cd /
# Try creating file
touch some-file
touch: some-file: Permission denied
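The same check can be done inside the pod itself (a sketch, reusing the exec command from the question):
kubectl exec -it -n namespace_name pod_name -c container_name -- sh
# inside the container, as the non-root user:
cd /              # succeeds: traversing / is allowed for everyone
touch /some-file  # fails: Permission denied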

Helm - How to write a file in a Volume using ConfigMap?

I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
nodetype: free
configHocon: |-
streams {
monitoring {
custom {
uri = ${?URI}
method = ${?METHOD}
}
}
}
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-streams-configmap
data:
config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configmap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The problem was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configMap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
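After upgrading the release with the corrected key, you can confirm the file is actually mounted (the pod name will differ after the redeploy; the one below is just the pattern from the question):
kubectl exec -it custom-streams-55b45b7756-fb292 -n streaming -- ls /config
kubectl exec -it custom-streams-55b45b7756-fb292 -n streaming -- cat /config/config.hocon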

error parsing templates/deployment.yaml: json: line 1: invalid character '{' looking for beginning of object key string

I'm getting the following error when I try to deploy Nexus using Kubernetes.
Command: kubectl apply -f templates/deployment.yaml
error parsing templates/deployment.yaml: json: line 1: invalid
character '{' looking for beginning of object key string
Has anybody faced this issue?
Please find below the code I'm trying:
{{- if .Values.localSetup.enabled }}
apiVersion: apps/v1
kind: Deployment
{{- else }}
apiVersion: apps/v1
kind: StatefulSet
{{- end }}
metadata:
labels:
app: nexus
name: nexus
spec:
replicas: 1
selector:
matchLabels:
app: nexus
template:
metadata:
labels:
app: nexus
spec:
{{- if .Values.localSetup.enabled }}
volumes:
- name: nexus-data
persistentVolumeClaim:
claimName: nexus-pv-claim
- name: nexus-data-backup
persistentVolumeClaim:
claimName: nexus-backup-pv-claim
{{- end }}
containers:
- name: nexus
image: "quay.io/travelaudience/docker-nexus:3.15.2"
imagePullPolicy: Always
env:
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
resources:
requests:
cpu: 250m
memory: 4800Mi
ports:
- containerPort: {{ .Values.nexus.dockerPort }}
name: nexus-docker-g
- containerPort: {{ .Values.nexus.nexusPort }}
name: nexus-http
volumeMounts:
- mountPath: "/nexus-data"
name: nexus-data
- mountPath: "/nexus-data/backup"
name: nexus-data-backup
{{- if .Values.useProbes.enabled }}
livenessProbe:
httpGet:
path: {{ .Values.nexus.livenessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.livenessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.livenessProbe.failureThreshold }}
{{- if .Values.nexus.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.livenessProbe.timeoutSeconds }}
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.nexus.readinessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.readinessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.readinessProbe.failureThreshold }}
{{- if .Values.nexus.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.readinessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.nexusProxy.enabled }}
- name: nexus-proxy
image: "quay.io/travelaudience/docker-nexus-proxy:2.4.0_8u191"
imagePullPolicy: Always
env:
- name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
value: "GoogleHC"
- name: CLOUD_IAM_AUTH_ENABLED
value: "false"
- name: BIND_PORT
value: {{ .Values.nexusProxy.targetPort | quote }}
- name: ENFORCE_HTTPS
value: "false"
{{- if .Values.localSetup.enabled }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusLocalDockerhost }}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusLocalHttphost }}
{{- else }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusDockerHost}}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusHttpHost }}
{{- end }}
- name: UPSTREAM_DOCKER_PORT
value: {{ .Values.nexus.dockerPort | quote }}
- name: UPSTREAM_HTTP_PORT
value: {{ .Values.nexus.nexusPort | quote }}
- name: UPSTREAM_HOST
value: "localhost"
ports:
- containerPort: {{ .Values.nexusProxy.targetPort }}
name: proxy-port
{{- end }}
{{- if .Values.nexusBackup.enabled }}
- name: nexus-backup
image: "quay.io/travelaudience/docker-nexus-backup:1.4.0"
imagePullPolicy: Always
env:
- name: NEXUS_AUTHORIZATION
value: false
- name: NEXUS_BACKUP_DIRECTORY
value: /nexus-data/backup
- name: NEXUS_DATA_DIRECTORY
value: /nexus-data
- name: NEXUS_LOCAL_HOST_PORT
value: "localhost:8081"
- name: OFFLINE_REPOS
value: "maven-central maven-public maven-releases maven-snapshots"
- name: TARGET_BUCKET
value: "gs://nexus-backup"
- name: GRACE_PERIOD
value: "60"
- name: TRIGGER_FILE
value: .backup
volumeMounts:
- mountPath: /nexus-data
name: nexus-data
- mountPath: /nexus-data/backup
name: nexus-data-backup
terminationGracePeriodSeconds: 10
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: nexus-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
- metadata:
name: nexus-data-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
Any leads would be appreciated!
Regards
Mani
The template you provided here is part of a Helm chart, which should be deployed with the Helm CLI, not with kubectl apply.
More info on using Helm is here.
You can also find instructions for installing Nexus with Helm in the official stable Helm chart.
Hope this helps.
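As a sketch (assuming Helm 3 and that the chart's root directory, containing Chart.yaml and templates/, is the current directory), rendering and installing would look like:
# render locally first to catch template errors
helm template nexus . > rendered.yaml
# install or upgrade the release
helm upgrade --install nexus .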