Data is empty when accessing config file in k8s configmap with Helm - kubernetes

I am trying to use a ConfigMap in my deployment with Helm charts. It seems like files can be accessed with Helm according to the docs here: https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md
This is my deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "{{ template "service.fullname" . }}"
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: "{{ template "service.fullname" . }}"
    spec:
      containers:
      - name: "{{ .Chart.Name }}"
        image: "{{ .Values.registryHost }}/{{ .Values.userNamespace }}/{{ .Values.projectName }}/{{ .Values.serviceName }}:{{.Chart.Version}}"
        volumeMounts:
        - name: {{ .Values.configmapName}}configmap-volume
          mountPath: /app/config
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: {{ .Values.configmapName}}configmap-volume
        configMap:
          name: "{{ .Values.configmapName}}-configmap"
My ConfigMap reads its data from a config file. Here's the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
The chart directory looks like this:
files/
--runtime-config.json
templates/
--configmap.yaml
--deployment.yaml
--ingress.yaml
--service.yaml
Chart.yaml
values.yaml
And this is what my runtime-config.json file looks like:
{
  "GameModeConfiguration": {
    "command": "xx",
    "modeId": 10,
    "sessionId": 11
  }
}
The problem is, when I install my chart (even in dry-run mode), the data section of my ConfigMap is empty. It doesn't add the data from the config file to my ConfigMap declaration. This is what it looks like when I do a dry run:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "runtime-configmap"
  labels:
    app: "runtime"
data:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "whimsical-otter-runtime-service"
  labels:
    chart: "runtime-service-unknown/version"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: "whimsical-otter-runtime-service"
    spec:
      containers:
      - name: "runtime-service"
        image: "gcr.io/xxx-dev/xxx/runtime_service:unknown/version"
        volumeMounts:
        - name: runtimeconfigmap-volume
          mountPath: /app/config
        ports:
        - containerPort: 80
          name: http
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: runtimeconfigmap-volume
        configMap:
          name: "runtime-configmap"
---
What am I doing wrong that I don't get data?

Variable substitution inside the string does not work:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
But you can generate the string using the printf function like this:
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 2 }}
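The same path construction can also be written with an intermediate variable, which keeps the logic a bit more readable (just a sketch; $configFile is a local template variable introduced here for illustration):

{{- $configFile := printf "files/%s-config.json" .Values.configmapName }}
{{ .Files.Get $configFile | indent 2 }}

Either way, the point is that the path string has to be built with a template function such as printf; a nested template action inside a quoted string is never parsed.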

Apart from the syntax problem pointed out by @adebasi, you still need to put this content under a key to get a valid ConfigMap YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
  my-file: |
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 4}}
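With configmapName set to runtime (as in the dry-run output above), that template should render to something like this (a sketch of the expected output, not an actual dry run):

apiVersion: v1
kind: ConfigMap
metadata:
  name: "runtime-configmap"
  labels:
    app: "runtime"
data:
  my-file: |
    {
      "GameModeConfiguration": {
        "command": "xx",
        "modeId": 10,
        "sessionId": 11
      }
    }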
Or you can use the handy configmap helper:
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ (.Files.Glob "files/*").AsConfig | indent 2 }}
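Note that .Files.Glob "files/*" picks up every file under files/, and AsConfig keys each entry by the file's base name, so with the directory layout above the data entry comes out keyed as runtime-config.json rather than my-file, roughly like this (again a sketch, not verbatim output):

data:
  runtime-config.json: |
    {
      "GameModeConfiguration": { "command": "xx", "modeId": 10, "sessionId": 11 }
    }

This also means the file mounted into the container under /app/config will be named runtime-config.json.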

Related

AKS - Pods created by HPA trigger are getting terminated immediately after they are created

When we looked at the events in AKS, we observed the following error for all the pods that were created and then terminated:
2m47s Warning FailedMount pod/app-fd6c6b8d9-ssr2t Unable to attach or mount volumes: unmounted volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc], unattached volumes=[log-volume config-volume log4j2 secrets-app-inline kube-api-access-z49xc]: timed out waiting for the condition
We already have 2 replicas running for the application, so we don't think the error is due to the access modes of the volumes.
Below is the HPA config:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: app-cpu-hpa
  namespace: namespace-dev
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageValue: 500m
Below is the deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
    group: app
    obs: appd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/app: runtime/default
      labels:
        app: app
        group: app
        obs: appd
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          runAsGroup: 2000
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        resources:
          limits:
            cpu: {{ .Values.app.limits.cpu }}
            memory: {{ .Values.app.limits.memory }}
          requests:
            cpu: {{ .Values.app.requests.cpu }}
            memory: {{ .Values.app.requests.memory }}
        env:
        - name: LOG_DIR_PATH
          value: /opt/apps/
        volumeMounts:
        - name: log-volume
          mountPath: /opt/apps/app/logs
        - name: config-volume
          mountPath: /script/start.sh
          subPath: start.sh
        - name: log4j2
          mountPath: /opt/appdynamics-java/ver21.9.0.33073/conf/logging/log4j2.xml
          subPath: log4j2.xml
        - name: secrets-app-inline
          mountPath: "/mnt/secrets-app"
          readOnly: true
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/info
            port: {{ .Values.metrics.port }}
            scheme: "HTTP"
            httpHeaders:
            - name: Authorization
              value: "Basic XXX50aXXXXXX=="
            - name: cache-control
              value: "no-cache"
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
          initialDelaySeconds: 60
        livenessProbe:
          httpGet:
            path: /actuator/info
            port: {{ .Values.metrics.port }}
            scheme: "HTTP"
            httpHeaders:
            - name: Authorization
              value: "Basic XXX50aXXXXXX=="
            - name: cache-control
              value: "no-cache"
          initialDelaySeconds: 300
          periodSeconds: 5
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - name: log-volume
        persistentVolumeClaim:
          claimName: {{ .Values.apppvc.name }}
      - name: config-volume
        configMap:
          name: {{ .Values.configmap.name }}-configmap
          defaultMode: 0755
      - name: secrets-app-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "app-kv-secret"
          nodePublishSecretRef:
            name: secrets-app-creds
      - name: log4j2
        configMap:
          name: log4j2
          defaultMode: 0755
      restartPolicy: Always
      imagePullSecrets:
      - name: {{ .Values.imagePullSecrets }}
Can someone please let me know where the config might be going wrong?

Why can I cd /root in a pod container even after specifying proper "securityContext"?

I have a Helm chart whose deployment.yaml has the following params:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.newAppName }}
    chart: {{ template "newApp.chart" . }}
    release: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.deploymentName }}
spec:
  replicas: {{ .Values.numReplicas }}
  selector:
    matchLabels:
      app: {{ .Values.newAppName }}
  template:
    metadata:
      labels:
        app: {{ .Values.newAppName }}
      namespace: {{ .Release.Namespace }}
      annotations:
        some_annotation: val
        some_annotation: val
    spec:
      serviceAccountName: {{ .Values.podRoleName }}
      containers:
      - env:
        - name: ENV_VAR1
          value: {{ .Values.env_var_1 }}
        image: {{ .Values.newApp }}:{{ .Values.newAppVersion }}
        imagePullPolicy: Always
        command: ["/opt/myDir/bin/newApp"]
        args: ["-c", "/etc/config/newApp/{{ .Values.newAppConfigFileName }}"]
        name: {{ .Values.newAppName }}
        ports:
        - containerPort: {{ .Values.newAppTLSPort }}
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /v1/h
            port: {{ .Values.newAppTLSPort }}
            scheme: HTTPS
          initialDelaySeconds: 1
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 20
        readinessProbe:
          httpGet:
            path: /v1/h
            port: {{ .Values.newAppTLSPort }}
            scheme: HTTPS
          initialDelaySeconds: 2
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 20
        volumeMounts:
        - mountPath: /etc/config/newApp
          name: config-volume
          readOnly: true
        - mountPath: /etc/config/metrics
          name: metrics-volume
          readOnly: true
        - mountPath: /etc/version/container
          name: container-info-volume
          readOnly: true
      - name: {{ template "newAppClient.name" . }}-client
        image: {{ .Values.newAppClientImage }}:{{ .Values.newAppClientVersion }}
        imagePullPolicy: Always
        args: ["run", "--server", "--config-file=/newAppClientPath/config.yaml", "--log-level=debug", "/newAppClientPath/pl"]
        volumeMounts:
        - name: newAppClient-files
          mountPath: /newAppClient-path
      securityContext:
        fsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      volumes:
      - name: config-volume
        configMap:
          name: {{ .Values.newAppConfigMapName }}
      - name: container-info-volume
        configMap:
          name: {{ .Values.containerVersionConfigMapName }}
      - name: metrics-volume
        configMap:
          name: {{ .Values.metricsConfigMapName }}
      - name: newAppClient-files
        configMap:
          name: {{ .Values.newAppClientConfigMapName }}
          items:
          - key: config
            path: config.yaml
This helm chart is consumed by Jenkins and then deployed by Spinnaker onto AWS EKS service.
A security measure we enforce is that the /root directory should be private in all our containers; basically, permission should be denied when a user tries to enter it manually after running
kubectl exec -it -n namespace_name pod_name -c container_name bash
to get into the container.
But when I enter the container's terminal, why can I still
cd /root
inside the container when it is running as non-root?
EXPECTED: It should give the following error, which it does not:
cd root/
bash: cd: root/: Permission denied
OTHER VALUES THAT MIGHT BE USEFUL TO DEBUG:
Output of "ls -la" inside the container:
dr-xr-x--- 1 root root 18 Jul 26 2021 root
As you can see, the r and x bits are unset for "other" on the root folder.
Output of "id" inside the container:
bash-4.2$ id
uid=1000 gid=0(root) groups=0(root),1000
Using a Helm chart locally to reproduce the error:
The same three securityContext params, when used locally in a simple Go program's Helm chart, yield the desired result.
deployment.yaml of the local Helm chart:
apiVersion: {{ template "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
    spec:
      securityContext:
        fsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /
              port: {{ .Values.service.internalPort }}
          readinessProbe:
            httpGet:
              path: /
              port: {{ .Values.service.internalPort }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
Output of ls -la inside the container on local setup:
drwx------ 2 root root 4096 Jan 25 00:00 root
You can always cd into / on a UNIX system as a non-root user, so you can do it inside your container as well. However, creating a file there, for example, should fail with Permission denied.
Check the following.
# Run a container as non-root
docker run -it --rm --user 7447 busybox sh
# Check that it's possible to cd into '/'
cd /
# Try creating file
touch some-file
touch: some-file: Permission denied
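The same check can be run against the pod from the question (a sketch; namespace_name, pod_name and container_name are the placeholders used above, and it assumes a POSIX shell is present in the image):

# Exec into the running container
kubectl exec -it -n namespace_name pod_name -c container_name -- sh
# Entering /root may be allowed here because the group bits are r-x and the user has gid=0(root),
# but writing into it should still be denied
cd /root
touch /root/some-file    # expected: Permission denied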

Trouble with Go templates in Helm 3

I'm trying to write my first Helm chart.
This is my deployment.
In this part it works: containerPort: {{ .Values.port }}
But it does not work in these lines:
value: {{ .Values.port | quote }}
value: {{ .Value.logs | quote }}
I don't understand why, and the error message doesn't help me.
Please help.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          ports:
            - name: http
              containerPort: {{ .Values.port }}
              protocol: TCP
        - env:
            - name: PORT
              value: {{ .Values.port | quote }}
            - name: LOGS
              value: {{ .Value.logs | quote }}
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
And this is my
values.yaml
port: 8080
logs: "/logs/access.log"
replicaCount: 1
image:
  repository: #
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "develop"
helm lint or helm install gives an error message:
gitlab-runner:~$ helm install test ./test --dry-run --debug
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/gitlab-runner/test
Error: template: test/templates/deployment.yaml:28:28: executing "test/templates/deployment.yaml" at <.Value.logs>: nil pointer evaluating interface {}.logs
helm.go:88: [debug] template: test/templates/deployment.yaml:28:28: executing "test/templates/deployment.yaml" at <.Value.logs>: nil pointer evaluating interface {}.logs
I don't understand what I'm doing wrong. And I'm sorry for my bad English ^^
plan 1
deployment.yaml
- env:
    - name: PORT
      value: "{{ .Values.port }}"
    - name: LOGS
      value: "{{ .Value.logs }}"
values.yaml
port: 8080
logs: /logs/access.log
plan 2
deployment.yaml
- env:
    - name: PORT
      value: {{ .Values.port | quote }}
    - name: LOGS
      value: {{ .Value.logs | quote }}
values.yaml
port: 8080
logs: /logs/access.log
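For what it's worth, the nil-pointer error at <.Value.logs> points at the misspelled root object: Helm exposes chart values as .Values (plural), and .Value does not exist, so it evaluates to nil. A sketch of the env block spelled that way, which should render with the values.yaml above:

- env:
    - name: PORT
      value: {{ .Values.port | quote }}
    - name: LOGS
      value: {{ .Values.logs | quote }}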

helm/kubernetes not installing all cronjobs in list

I have a helm chart which involves a loop over a range of values. The chart includes a statefulset, pvc and cronjob. If I pass it a list with 4 values, all is well, but if I pass it a list of 12 values, most of the cronjobs just don't appear in the final template (i.e. using helm install --dry-run --debug).
Can anyone explain what might be causing this? I googled to see if I could find information about maximum length of templates but couldn't find anything...
helm template creates the manifest just fine, so maybe it's Kubernetes that is rejecting the cronjobs for some reason?
Is there a recommended approach for when you need to create many almost-duplicates of a manifest?
EXAMPLE: The chart template looks something like this
{{ $env := .Release.Namespace }}
{{ $image_tag := .Values.image_tag }}
{{ $aws_account_id := .Values.aws_account_id }}
{{- range $collector := .Values.collectors }}
apiVersion: v1
kind: Service
metadata:
  name: {{ $colname }}
  labels:
    app: {{ $colname }}
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: {{ $colname }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ $colname }}
  labels:
    app: {{ $colname }}
spec:
  selector:
    matchLabels:
      app: {{ $colname }}
  serviceName: {{ $colname }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ $colname }}
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: {{ $colname }}
        imagePullPolicy: Always
        image: {{ $aws_account_id }}.dkr.ecr.eu-west-1.amazonaws.com/d:{{ $image_tag }}
        volumeMounts:
        - name: {{ $colname }}-a-claim
          mountPath: /home/me/a
        - name: {{ $colname }}-b-claim
          mountPath: /home/me/b
        - name: {{ $colname }}-c-claim
          mountPath: /home/me/c
        env:
        - name: COLLECTOR
          value: {{ $collector }}
        - name: ENV
          value: {{ $env }}
  volumeClaimTemplates:
  - metadata:
      name: {{ $colname }}-a-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 50Gi
  - metadata:
      name: {{ $colname }}-b-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 10Gi
  - metadata:
      name: {{ $colname }}-c-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $colname }}-c-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $colname }}-cron
spec:
  schedule: {{ $update_time }}
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            fsGroup: 1000
          containers:
          - name: {{ $colname }}
            image: {{ $aws_account_id }}.dkr.ecr.eu-west-1.amazonaws.com/d:{{ $image_tag }}
            env:
            - name: COLLECTOR
              value: {{ $collector_name }}
            volumeMounts:
            - name: c-storage
              mountPath: /home/me/c
          restartPolicy: Never
          volumes:
          - name: c-storage
            persistentVolumeClaim:
              claimName: {{ $colname }}-c-claim
---
{{ end }}
and I'm passing values like:
collectors:
- name: a
- name: b
- name: c
- name: d
- name: e
- name: f
- name: g
- name: h
- name: i
- name: j
- name: k
- name: l

Azure DevOps error: "unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec"

I am using Azure DevOps and getting unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec while doing helm install:
2019-07-05T10:49:11.0064690Z ##[warning]Can't find command extension for ##vso[telemetry.command]. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
2019-07-05T09:56:41.1837910Z Error: validation failed: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec
2019-07-05T09:56:41.1980030Z ##[error]Error: validation failed: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "clusterfitusecaseapihelm.fullname" . }}
  labels:
{{ include "clusterfitusecaseapihelm.labels" . | indent 4 }}
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "clusterfitusecaseapihelm.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "clusterfitusecaseapihelm.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Chart.Name }}
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: {{ .Values.environment }}
        resources:
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory }}
        livenessProbe:
          httpGet:
            path: /api/version
            port: 80
          initialDelaySeconds: 90
          timeoutSeconds: 10
          periodSeconds: 15
        readinessProbe:
          httpGet:
            path: /api/version
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
          periodSeconds: 15
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - mountPath: /app/config
          name: {{ include "clusterfitusecaseapihelm.name" . }}
          readOnly: true
      volumes:
      - name: {{ include "clusterfitusecaseapihelm.name" . }}
      imagePullPolicy: Always
      imagePullSecrets:
      - name: regsecret
I tried this as well, but it failed:
imagePullPolicy is a property of a Container object, not a Pod object, so you need to move this setting inside the containers: list (next to image:).
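A minimal sketch of that fix, using only lines that already appear in the chart above (imagePullPolicy moved from the pod spec into the container entry, next to image:):

    spec:
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: Always
        name: {{ .Chart.Name }}
      imagePullSecrets:
      - name: regsecret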