Trouble with Go templates in Helm 3 - Kubernetes

I'm trying to write my first Helm chart.
Here is my deployment.
This part works: containerPort: {{ .Values.port }}
But these do not:
value: {{ .Values.port | quote }}
value: {{ .Value.logs | quote }}
I don't understand why, and the error message doesn't help me.
Please help.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
ports:
- name: http
containerPort: {{ .Values.port }}
protocol: TCP
- env:
- name: PORT
value: {{ .Values.port | quote }}
- name: LOGS
value: {{ .Value.logs | quote }}
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
And this is my values.yaml:
port: 8080
logs: "/logs/access.log"
replicaCount: 1
image:
repository: #
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: "develop"
helm lint or helm install gives this error message:
gitlab-runner:~$ helm install test ./test --dry-run --debug
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/gitlab-runner/test
Error: template: test/templates/deployment.yaml:28:28: executing "test/templates/deployment.yaml" at <.Value.logs>: nil pointer evaluating interface {}.logs
helm.go:88: [debug] template: test/templates/deployment.yaml:28:28: executing "test/templates/deployment.yaml" at <.Value.logs>: nil pointer evaluating interface {}.logs
I don't understand what I'm doing wrong.
(And I'm sorry for my bad English ^^)

Plan 1
deployment.yaml
- env:
- name: PORT
value: "{{ .Values.port }}"
- name: LOGS
value: "{{ .Value.logs }}"
values.yaml
port: 8080
logs: /logs/access.log
Plan 2
deployment.yaml
- env:
- name: PORT
value: {{ .Values.port | quote }}
- name: LOGS
value: {{ .Value.logs | quote }}
values.yaml
port: 8080
logs: /logs/access.log
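For what it's worth, the error above points straight at the problem: at <.Value.logs> the template engine reports a nil pointer because the built-in object is called .Values (plural); .Value is undefined, so .Value.logs cannot be resolved. Both plan 1 and plan 2 still contain .Value.logs. A minimal sketch of the env section with just that one letter changed (all values taken from the values.yaml above); note also that env: belongs inside the same container entry as image: and ports:, so it should not start a new list item of its own:
env:
- name: PORT
  value: {{ .Values.port | quote }}
- name: LOGS
  value: {{ .Values.logs | quote }}
You can check the rendered output with the same command as above, helm install test ./test --dry-run --debug, or with helm template ./test.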

Related

Why can I cd /root in a pod container even after specifying proper "securityContext"?

I have a Helm chart whose deployment.yaml has the following params:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: {{ .Values.newAppName }}
chart: {{ template "newApp.chart" . }}
release: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
name: {{ .Values.deploymentName }}
spec:
replicas: {{ .Values.numReplicas }}
selector:
matchLabels:
app: {{ .Values.newAppName }}
template:
metadata:
labels:
app: {{ .Values.newAppName }}
namespace: {{ .Release.Namespace }}
annotations:
some_annotation: val
some_annotation: val
spec:
serviceAccountName: {{ .Values.podRoleName }}
containers:
- env:
- name: ENV_VAR1
value: {{ .Values.env_var_1 }}
image: {{ .Values.newApp }}:{{ .Values.newAppVersion }}
imagePullPolicy: Always
command: ["/opt/myDir/bin/newApp"]
args: ["-c", "/etc/config/newApp/{{ .Values.newAppConfigFileName }}"]
name: {{ .Values.newAppName }}
ports:
- containerPort: {{ .Values.newAppTLSPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
readinessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 2
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
volumeMounts:
- mountPath: /etc/config/newApp
name: config-volume
readOnly: true
- mountPath: /etc/config/metrics
name: metrics-volume
readOnly: true
- mountPath: /etc/version/container
name: container-info-volume
readOnly: true
- name: {{ template "newAppClient.name" . }}-client
image: {{ .Values.newAppClientImage }}:{{ .Values.newAppClientVersion }}
imagePullPolicy: Always
args: ["run", "--server", "--config-file=/newAppClientPath/config.yaml", "--log-level=debug", "/newAppClientPath/pl"]
volumeMounts:
- name: newAppClient-files
mountPath: /newAppClient-path
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
volumes:
- name: config-volume
configMap:
name: {{ .Values.newAppConfigMapName }}
- name: container-info-volume
configMap:
name: {{ .Values.containerVersionConfigMapName }}
- name: metrics-volume
configMap:
name: {{ .Values.metricsConfigMapName }}
- name: newAppClient-files
configMap:
name: {{ .Values.newAppClientConfigMapName }}
items:
- key: config
path: config.yaml
This Helm chart is consumed by Jenkins and then deployed by Spinnaker onto the AWS EKS service.
One security measure we enforce is that the /root directory should be private in all our containers, so it should deny permission when a user tries to access it manually after
kubectl exec -it -n namespace_name pod_name -c container_name bash
into the container.
But when I enter the container terminal, why can I still run
cd /root
inside the container when it is running as non-root?
EXPECTED: It should give the following error which it is not giving:
cd root/
bash: cd: root/: Permission denied
OTHER VALUES THAT MIGHT BE USEFUL TO DEBUG:
Output of "ls -la" inside the container:
dr-xr-x--- 1 root root 18 Jul 26 2021 root
As you can see, r and x are unset for OTHER on the root folder, as they should be.
Output of "id" inside the container:
bash-4.2$ id
uid=1000 gid=0(root) groups=0(root),1000
Using a Helm chart locally to reproduce the error:
The same three securityContext params, when used locally in a simple Go-program Helm chart, yield the desired result.
Deployment.yaml of helm chart:
apiVersion: {{ template "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "fullname" . }}
template:
metadata:
labels:
app: {{ template "fullname" . }}
spec:
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
Output of ls -la inside the container on local setup:
drwx------ 2 root root 4096 Jan 25 00:00 root
You can always cd into / on a UNIX system as a non-root user, so you can do it inside your container as well. However, creating a file there, for example, should fail with Permission denied.
Check the following.
# Run a container as non-root
docker run -it --rm --user 7447 busybox sh
# Check that it's possible to cd into '/'
cd /
# Try creating file
touch some-file
touch: some-file: Permission denied
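The same check can be run in the pod from the question, reusing the placeholder names from the kubectl exec command above (a sketch; the exact error text may vary by shell):
# cd succeeds even as a non-root user, but creating a file should be denied
kubectl exec -it -n namespace_name pod_name -c container_name -- sh -c 'cd / && touch some-file'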

Helm - How to write a file in a Volume using ConfigMap?

I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
nodetype: free
configHocon: |-
streams {
monitoring {
custom {
uri = ${?URI}
method = ${?METHOD}
}
}
}
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-streams-configmap
data:
config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configmap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The issue was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configMap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
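A way to catch field-name typos like configmap vs configMap before deploying is to render the chart and let kubectl's schema validation check the result. This is a sketch using Helm 3 syntax with the names from the question (the commands above use Helm 2); it assumes kubectl access to a cluster:
# Render the templates and validate the manifests; depending on the kubectl version,
# an unknown field such as "configmap" should be reported here
helm template custom-streams ./custom-streams -f values.yaml | kubectl apply --dry-run=client -f -
# Show the documented casing of the volume source field
kubectl explain deployment.spec.template.spec.volumes.configMap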

Azure Devops Error : "unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec"

I am using Azure DevOps, and getting unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec while doing helm install:
2019-07-05T10:49:11.0064690Z ##[warning]Can't find command extension for ##vso[telemetry.command]. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
2019-07-05T09:56:41.1837910Z Error: validation failed: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec
2019-07-05T09:56:41.1980030Z ##[error]Error: validation failed: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "imagePullPolicy" in io.k8s.api.core.v1.PodSpec
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "clusterfitusecaseapihelm.fullname" . }}
labels:
{{ include "clusterfitusecaseapihelm.labels" . | indent 4 }}
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: {{ include "clusterfitusecaseapihelm.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "clusterfitusecaseapihelm.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Chart.Name }}
env:
- name: ASPNETCORE_ENVIRONMENT
value: {{ .Values.environment }}
resources:
requests:
cpu: {{ .Values.resources.requests.cpu }}
memory: {{ .Values.resources.requests.memory }}
limits:
cpu: {{ .Values.resources.limits.cpu }}
memory: {{ .Values.resources.limits.memory }}
livenessProbe:
httpGet:
path: /api/version
port: 80
initialDelaySeconds: 90
timeoutSeconds: 10
periodSeconds: 15
readinessProbe:
httpGet:
path: /api/version
port: 80
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 15
ports:
- containerPort: 80
name: http
volumeMounts:
- mountPath: /app/config
name: {{ include "clusterfitusecaseapihelm.name" . }}
readOnly: true
volumes:
- name: {{ include "clusterfitusecaseapihelm.name" . }}
imagePullPolicy: Always
imagePullSecrets:
- name: regsecret
Tried this also but failed:
imagePullPolicy is a property of a Container object, not a Pod object, so you need to move this setting inside the containers: list (next to image:).
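For illustration, a minimal sketch of the corrected placement in the deployment above, keeping imagePullSecrets (which really is a PodSpec field) where it is:
spec:                        # pod template spec (spec.template.spec)
  containers:
  - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
    imagePullPolicy: Always  # container-level field, next to image:
    name: {{ .Chart.Name }}
  imagePullSecrets:
  - name: regsecret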

Helm: Passing array values through --set

I have a cronjob Helm chart: I can define many jobs in values.yaml, and cronjob.yaml provisions my jobs. I have hit an issue when setting the image tag ID on the command line: the following command throws no errors, but it won't update the jobs' image tag to the new one.
helm upgrade cronjobs cronjobs/ --wait --set job.myservice.image.tag=b70d744
The cronjobs still run with the old image tag. How can I resolve this?
Here is my cronjobs.yaml:
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: "{{ $job.namespace }}"
name: "{{ $release_name }}-{{ $job.name }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ $job.concurrencyPolicy }}
failedJobsHistoryLimit: {{ $job.failedJobsHistoryLimit }}
suspend: {{ $job.suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job.name }}
spec:
containers:
- image: "{{ $job.image.repository }}:{{ $job.image.tag }}"
imagePullPolicy: {{ $job.image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job.name }}
args:
{{ toYaml $job.args | indent 12 }}
env:
{{ toYaml $job.image.env | indent 12 }}
volumeMounts:
- name: nfs
mountPath: "{{ $job.image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ $job.image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ $job.image.server }}"
path: "{{ $job.image.nfspath }}"
readOnly: false
schedule: {{ $job.schedule | quote }}
successfulJobsHistoryLimit: {{ $job.successfulJobsHistoryLimit }}
{{- end }}
Here is my values.yaml:
jobs:
- name: myservice
namespace: default
image:
repository: xxx.com/myservice
tag: fe4544
pullPolicy: Always
secret: xxx
nfslogpath: "/var/logs/"
nfsserver: "xxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
schedule: "*/5 * * * *"
args:
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
concurrencyPolicy: Forbid
suspend: false
- name: myservice2
namespace: default
image:
repository: xxxx/myservice2
tag: 1dff39a
pullPolicy: IfNotPresent
secret: xxxx
nfslogpath: "/var/logs/"
nfsserver: "xxxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
schedule: "*/30 * * * *"
args:
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 2
concurrencyPolicy: Forbid
suspend: false
If you need to pass array values, you can use curly braces (Unix shells require quotes):
--set test={x,y,z}
--set "test={x,y,z}"
Result YAML:
test:
- x
- y
- z
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
EDIT: added double quotes for Unix shells like bash
Update for Helm 2.5.0
As of Helm 2.5.0, it is possible to access list items using an array index syntax.
For example, --set servers[0].port=80 becomes:
servers:
- port: 80
For the sake of completeness I'll post a more complex example with Helm 3.
Let's say that you have this in your values.yaml file:
extraEnvVars:
- name: CONFIG_BACKEND_URL
value: "https://api.example.com"
- name: CONFIG_BACKEND_AUTH_USER
value: "admin"
- name: CONFIG_BACKEND_AUTH_PWD
value: "very-secret-password"
You can --set just the value for the CONFIG_BACKEND_URL this way:
helm install ... --set "extraEnvVars[0].value=http://172.23.0.1:36241"
The other two variables (i.e. CONFIG_BACKEND_AUTH_USER and CONFIG_BACKEND_AUTH_PWD) will be read from the values.yaml file since we're not overwriting them with a --set.
Same for extraEnvVars[0].name which will be CONFIG_BACKEND_URL as per values.yaml.
Source: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
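In other words, the values the chart effectively sees after that --set would be the following (a sketch of the merge result):
extraEnvVars:
- name: CONFIG_BACKEND_URL
  value: "http://172.23.0.1:36241"
- name: CONFIG_BACKEND_AUTH_USER
  value: "admin"
- name: CONFIG_BACKEND_AUTH_PWD
  value: "very-secret-password"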
Since you are using an array in your values.yaml file, please see the related issue.
Alternative solution
Your values.yaml is missing values for args and env. I've set them in my example, and changed the indent to 14.
In your cronjob.yaml the server: "{{ $job.image.server }}" value is null, so I've changed it to .image.nfsserver.
Instead of using an array, just separate your services as in the example below:
values.yaml
jobs:
myservice:
namespace: default
image:
repository: xxx.com/myservice
tag: fe4544
pullPolicy: Always
secret: xxx
nfslogpath: "/var/logs/"
nfsserver: "xxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
key: val
schedule: "*/5 * * * *"
args:
key: val
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
concurrencyPolicy: Forbid
suspend: false
myservice2:
namespace: default
image:
repository: xxxx/myservice2
tag: 1dff39a
pullPolicy: IfNotPresent
secret: xxxx
nfslogpath: "/var/logs/"
nfsserver: "xxxx"
nfspath: "/nfs/xxx/cronjobs/"
nfsreadonly: false
env:
key: val
schedule: "*/30 * * * *"
args:
key: val
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 2
concurrencyPolicy: Forbid
suspend: false
In your cronjob.yaml, use {{- range $job, $val := .Values.jobs }} to iterate over the values.
Use $job where you used {{ $job.name }}.
Access values like suspend with {{ .suspend }} instead of {{ $job.suspend }}.
cronjob.yaml
{{- $chart_name := .Chart.Name }}
{{- $chart_version := .Chart.Version | replace "+" "_" }}
{{- $release_name := .Release.Name }}
{{- range $job, $val := .Values.jobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: {{ .namespace }}
name: "{{ $release_name }}-{{ $job }}"
labels:
chart: "{{ $chart_name }}-{{ $chart_version }}"
spec:
concurrencyPolicy: {{ .concurrencyPolicy }}
failedJobsHistoryLimit: {{ .failedJobsHistoryLimit }}
suspend: {{ .suspend }}
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ $release_name }}
cron: {{ $job }}
spec:
containers:
- image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.imagePullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
name: {{ $job }}
args:
{{ toYaml .args | indent 14 }}
env:
{{ toYaml .image.env | indent 14 }}
volumeMounts:
- name: nfs
mountPath: "{{ .image.nfslogpath }}"
restartPolicy: OnFailure
imagePullSecrets:
- name: {{ .image.secret }}
volumes:
- name: nfs
nfs:
server: "{{ .image.nfsserver }}"
path: "{{ .image.nfspath }}"
readOnly: false
schedule: {{ .schedule | quote }}
successfulJobsHistoryLimit: {{ .successfulJobsHistoryLimit }}
{{- end }}
Passing values using --set:
helm upgrade cronjobs cronjobs/ --wait --set jobs.myservice.image.tag=b70d744
Example:
helm install --debug --dry-run --set jobs.myservice.image.tag=my123tag .
...
HOOKS:
MANIFEST:
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice
spec:
containers:
- image: "xxx.com/myservice:my123tag"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxx
volumes:
- name: nfs
nfs:
server: "xxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/5 * * * *"
successfulJobsHistoryLimit: 3
---
# Source: foo/templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: default
name: "illmannered-iguana-myservice2"
labels:
chart: "foo-0.1.0"
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
suspend: false
jobTemplate:
spec:
template:
metadata:
labels:
app: illmannered-iguana
cron: myservice2
spec:
containers:
- image: "xxxx/myservice2:1dff39a"
imagePullPolicy:
ports:
- name: http
containerPort: 80
protocol: TCP
name: myservice2
args:
key: val
env:
key: val
volumeMounts:
- name: nfs
mountPath: "/var/logs/"
restartPolicy: OnFailure
imagePullSecrets:
- name: xxxx
volumes:
- name: nfs
nfs:
server: "xxxx"
path: "/nfs/xxx/cronjobs/"
readOnly: false
schedule: "*/30 * * * *"
successfulJobsHistoryLimit: 2
Hope that helps!
On Helm 3, this works for me:
--set "servers[0].port=80" --set "servers[1].port=8080"

Data is empty when accessing config file in k8s configmap with Helm

I am trying to use a ConfigMap in my deployment with Helm charts. It seems that files can be accessed with Helm, according to the docs here: https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md
This is my deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: "{{ template "service.fullname" . }}"
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: "{{ template "service.fullname" . }}"
spec:
containers:
- name: "{{ .Chart.Name }}"
image: "{{ .Values.registryHost }}/{{ .Values.userNamespace }}/{{ .Values.projectName }}/{{ .Values.serviceName }}:{{.Chart.Version}}"
volumeMounts:
- name: {{ .Values.configmapName}}configmap-volume
mountPath: /app/config
ports:
- containerPort: 80
name: http
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
volumes:
- name: {{ .Values.configmapName}}configmap-volume
configMap:
name: "{{ .Values.configmapName}}-configmap"
My ConfigMap reads a config file. Here's the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ .Values.configmapName}}-configmap"
labels:
app: "{{ .Values.configmapName}}"
data:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
The charts directory looks like this:
files/
--runtime-config.json
templates/
--configmap.yaml
--deployment.yaml
--ingress.yaml
--service.yaml
Chart.yaml
values.yaml
And this is what my runtime-config.json file looks like:
{
"GameModeConfiguration": {
"command": "xx",
"modeId": 10,
"sessionId": 11
}
}
The problem is that when I install my chart (even in dry-run mode), the data for my ConfigMap is empty. It doesn't add the data from the config file into my ConfigMap declaration. This is what it looks like when I do a dry run:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: "runtime-configmap"
labels:
app: "runtime"
data:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: "whimsical-otter-runtime-service"
labels:
chart: "runtime-service-unknown/version"
spec:
replicas: 1
template:
metadata:
labels:
app: "whimsical-otter-runtime-service"
spec:
containers:
- name: "runtime-service"
image: "gcr.io/xxx-dev/xxx/runtime_service:unknown/version"
volumeMounts:
- name: runtimeconfigmap-volume
mountPath: /app/config
ports:
- containerPort: 80
name: http
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
timeoutSeconds: 5
volumes:
- name: runtimeconfigmap-volume
configMap:
name: "runtime-configmap"
---
What am I doing wrong that I don't get data?
The replacement of the variable within the string does not work:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
But you can generate a string using the printf function like this:
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 2 }}
Apart from the syntax problem pointed out by @adebasi, you still need to put this content under a key to get a valid ConfigMap YAML:
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ .Values.configmapName}}-configmap"
labels:
app: "{{ .Values.configmapName}}"
data:
my-file: |
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 4}}
Or you can use the handy configmap helper:
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ .Values.configmapName}}-configmap"
labels:
app: "{{ .Values.configmapName}}"
data:
{{ (.Files.Glob "files/*").AsConfig | indent 2 }}
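As a quick sanity check (Helm 3 syntax shown, with my-release as a placeholder release name; with Helm 2 the equivalent is the helm install --dry-run --debug used above), you can render just the ConfigMap and confirm that the data section is populated:
helm template my-release . --show-only templates/configmap.yaml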