multiple configMap files in Helm - kubernetes

Somehow I cannot load environment variables and I have the following error when the pod starts:
Error: Could not find or load main class
Caused by: java.lang.ClassNotFoundException:
The structure of my Helm chart (posted as an image in the original question) keeps the referenced files under a files/ directory at the chart root.
I have the following configuration in the configmap.yaml Helm template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.nameOverride }}-config
data:
  application.yaml: {{ tpl (.Files.Get "files/application.yaml") . | quote }}
  appdynamicscontrollerconfig.yaml: {{ tpl (.Files.Get "files/appdynamics-controller-config.yaml") . | quote }}
  javaconfigmap.yaml: {{ tpl (.Files.Get "files/java-config-map.yaml") . | quote }}
The deployment.yaml Helm template:
containers:
  - name: {{ .Values.nameOverride }}
    env:
      - name: APPLICATION
        valueFrom:
          configMapKeyRef:
            name: {{ .Values.nameOverride }}-config
            key: application.yaml
      - name: APPDYNAMICS_CONTROLLER_CONFIG
        valueFrom:
          configMapKeyRef:
            name: {{ .Values.nameOverride }}-config
            key: appdynamicscontrollerconfig.yaml
      - name: JAVA_OPTS
        valueFrom:
          configMapKeyRef:
            name: {{ .Values.nameOverride }}-config
            key: javaconfigmap.yaml
    volumeMounts:
      - mountPath: /cs/app/config
        name: {{ .Values.nameOverride }}-config
        readOnly: true
volumes:
  - name: {{ .Values.nameOverride }}-config
    configMap:
      defaultMode: 420
      name: {{ .Values.nameOverride }}-config
      items:
        - key: application.yaml
          path: application.yaml
        - key: appdynamicscontrollerconfig.yaml
          path: appdynamics-controller-config.yaml
        - key: javaconfigmap.yaml
          path: java-config-map.yaml
Am I referencing the files that contain the environment variables incorrectly?
Probably, but I couldn't find any documentation on this.
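Worth noting what the templates above actually inject: configMapKeyRef puts the entire file content into the environment variable, so $JAVA_OPTS ends up holding a whole YAML document rather than JVM flags, which would explain the "Could not find or load main class" error. A minimal sketch of an alternative, assuming the flags can live in a plain string key (the javaOpts key and its value are illustrative, not from the original chart):
# configmap.yaml - keep the file entries for the volume mount, but add a
# plain scalar key for the JVM flags (hypothetical "javaOpts" value)
data:
  javaOpts: "-Xms256m -Xmx512m"
# deployment.yaml - reference the scalar key instead of a whole file
env:
  - name: JAVA_OPTS
    valueFrom:
      configMapKeyRef:
        name: {{ .Values.nameOverride }}-config
        key: javaOpts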


Unable to deploy Kubernetes secrets using Helm

I'm trying to create my first Helm release on an AKS cluster using a GitLab pipeline,
but when I run the following command
- helm upgrade server ./aks/server
    --install
    --namespace demo
    --kubeconfig ${CI_PROJECT_DIR}/.kube/config
    --set image.name=${CI_PROJECT_NAME}/${CI_PROJECT_NAME}-server
    --set image.tag=${CI_COMMIT_SHA}
    --set database.user=${POSTGRES_USER}
    --set database.password=${POSTGRES_PASSWORD}
I receive the following error:
"Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data:
decode base64: illegal base64 data at input byte 8, error found in #10 byte of ..."
It looks like something is not working with the secrets file, but I don't understand what.
The secret.yaml template file is defined as follows:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
I will also add the deployment and the service .yaml files.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      stack: node
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        tier: backend
        stack: node
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          imagePullPolicy: IfNotPresent
          env:
            - name: User
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: User
                  optional: false
            - name: Host
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Host
                  optional: false
            - name: Database
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Database
                  optional: false
            - name: Password
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Password
                  optional: false
            - name: Ports
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Ports
                  optional: false
          resources:
            limits:
              cpu: "1"
              memory: "128M"
          ports:
            - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  type: ClusterIP
  selector:
    tier: backend
    stack: node
    app: {{ .Values.app.name }}
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Any hint?
You have to encode the secret values to base64.
Check the docs on encoding functions.
Try the code below:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user | b64enc }}
  Host: {{ .Values.database.host | b64enc }}
  Database: {{ .Values.database.name | b64enc }}
  Password: {{ .Values.database.password | b64enc }}
  Port: {{ .Values.database.port | b64enc }}
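One caveat worth adding (not in the original answer): b64enc expects a string, so if database.port is set as a bare number in values.yaml, rendering fails with a type error; piping through Sprig's toString first avoids that:
# coerce the numeric port to a string before base64-encoding it
Port: {{ .Values.database.port | toString | b64enc }}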
Alternatively, use stringData instead of data.
stringData lets you create the secret without base64-encoding the values.
Check the example in the link:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
stringData:
  User: {{ .Values.database.user | quote }}
  Host: {{ .Values.database.host | quote }}
  Database: {{ .Values.database.name | quote }}
  Password: {{ .Values.database.password | quote }}
  Port: {{ .Values.database.port | quote }}
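Either way, the result can be checked without touching the cluster, or by decoding a key from the live Secret; a quick sketch using the release name and namespace from the question (the --set values are illustrative):
# render locally and inspect the Secret manifest
helm template server ./aks/server --set database.user=admin --set database.password=changeme
# or decode one key from the deployed Secret
kubectl get secret server-secret -n demo -o jsonpath='{.data.User}' | base64 -d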

Helm referring to kubernetes secrets in environment variables

I have some environment variables that I'm using in a helm installation and want to hide the password using a k8s secret.
values.yaml
env:
  USER_EMAIL: "test@test.com"
  USER_PASSWORD: "p8ssword"
I want to add the password via a kubernetes secret mysecrets, created using
# file: mysecrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecrets
type: Opaque
data:
  test_user_password: cGFzc3dvcmQ=
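For reference, cGFzc3dvcmQ= is just the base64 encoding of the word password; it can be reproduced on the command line (the -n matters, otherwise a trailing newline gets encoded too):
echo -n 'password' | base64
# cGFzc3dvcmQ=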
and then add this to values.yaml
- name: TEST_USER_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysecrets
      key: test_user_password
I then use the following in the deployment
env:
  {{- range $key, $value := $.Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
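For context, the loop above renders every entry as a literal value, which is why the secretKeyRef form has nowhere to go:
env:
  - name: USER_EMAIL
    value: "test@test.com"
  - name: USER_PASSWORD
    value: "p8ssword"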
Is it possible to mix formats for environment variables in values.yaml i.e.,
env:
  USER_EMAIL: "test@test.com"
  - name: USER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysecrets
        key: test_user_password
Or is there a way of referring to the secret in line in the original format?
Plan 1: one of the simplest approaches.
Inject the YAML directly: keep the env block in values.yaml exactly as it should appear in the manifest, so plain key/value entries and secretKeyRef entries can sit side by side in their final form.
As follows:
values.yaml
env:
  - name: "USER_EMAIL"
    value: "test@test.com"
  - name: "USER_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: mysecrets
        key: test_user_password
deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    env:
      {{ toYaml .Values.env | nindent xxx }}
(ps: xxx --> the actual indent depth)
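A concrete version of the same idea, assuming the usual spec.template.spec.containers nesting, where the env list items sit at column 12 (adjust the number to your own layout):
      containers:
        - name: {{ .Chart.Name }}
          env:
            {{- toYaml .Values.env | nindent 12 }}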
Plan 2: distinguish the cases by tagging each entry with a type.
As follows:
values.yaml
env:
  USER_EMAIL:
    type: "kv"
    value: "test@test.com"
  USER_PASSWORD:
    type: "secretRef"
    name: mysecrets
    key: test_user_password
  USER_CONFIG:
    type: "configmapRef"
    name: myconfigmap
    key: mycm
deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    env:
      {{- range $k, $v := .Values.env }}
      - name: {{ $k | quote }}
        {{- if eq $v.type "kv" }}
        value: {{ $v.value | quote }}
        {{- else if eq $v.type "secretRef" }}
        valueFrom:
          secretKeyRef:
            name: {{ $v.name | quote }}
            key: {{ $v.key | quote }}
        {{- else if eq $v.type "configmapRef" }}
        valueFrom:
          configMapKeyRef:
            name: {{ $v.name | quote }}
            key: {{ $v.key | quote }}
        {{- end }}
      {{- end }}
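Rendered against the values above, that template produces the following (note that range walks map keys in sorted order):
env:
  - name: "USER_CONFIG"
    valueFrom:
      configMapKeyRef:
        name: "myconfigmap"
        key: "mycm"
  - name: "USER_EMAIL"
    value: "test@test.com"
  - name: "USER_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: "mysecrets"
        key: "test_user_password"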

Helm - How to write a file in a Volume using ConfigMap?

I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
  nodetype: free
configHocon: |-
  streams {
    monitoring {
      custom {
        uri = ${?URI}
        method = ${?METHOD}
      }
    }
  }
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-streams-configmap
data:
  config.hocon: |-
{{ .Values.configHocon | indent 4 }}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
        - env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          image: {{ .Values.image }}
          name: {{ .Values.name }}
          volumeMounts:
            - name: config-hocon
              mountPath: /config
          ports:
            - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
        - name: config-hocon
          configmap:
            name: custom-streams-configmap
            items:
              - key: config.hocon
                path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The problem was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
        - env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          image: {{ .Values.image }}
          name: {{ .Values.name }}
          volumeMounts:
            - name: config-hocon
              mountPath: /config
          ports:
            - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
        - name: config-hocon
          configMap:
            name: custom-streams-configmap
            items:
              - key: config.hocon
                path: config.hocon
status: {}
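After upgrading with the corrected casing, the file should appear in the mounted directory; a quick check along the lines of the question's session (pod name is a placeholder):
kubectl exec -it <pod-name> -n streaming -- ls /config
# config.hocon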

Issues in Deployment.yaml file

I got an error in my Deployment.yaml file. I define env variables in this file and assign their values in the values file. I get a syntax error in this file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/name: {{ include "name" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/name: {{ include "name" . }}
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/name: {{ include "name" . }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources: {}
          env:
            - name: MONGODB_ADDRESS
              value: {{ .Values.mongodb.db.address }}
            - name: MONGODB
              value: "akira-article"
            - name: MONGODB_USER
              value: {{ .Values.mongodb.db.user | quote }}
            - name: MONGODB_PASS
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: mongodb-password
            - name: MONGODB_AUTH_DB
              value: {{ .Values.mongodb.db.name | quote }}
            - name: DAKEN_USERID
              value: {{ .Values.mongodb.db.userId | quote }}
            - name: DAKEN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: daken-pass
            - name: JWT_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: jwt-Privat-Key
            - name: WEBSITE_NAME
              value: {{ .Values.website.Name }}
            - name: WEBSITE_SHORT_NAME
              value: {{ .Values.website.shortName }}
            - name: AKIRA_HTTP_PORT
              value: {{ .Values.website.port }}
          ports:
            - containerPort: {{ .Values.service.port }}
I got this error:
Error: Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container:
v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects "
or n, but found 8, error found in #10 byte of
...|,"value":8080}],"ima|..., bigger context
...|,"value":"AA"},{"name":"AKIRA_HTTP_PORT","value":8080}],"image":"dr.xenon.team/websites/akira-fronte|...
The answer to your problem is in the Helm documentation: QUOTE STRINGS, DON'T QUOTE INTEGERS.
When you are working with string data, you are always safer quoting the strings than leaving them as bare words:
name: {{ .Values.MyName | quote }}
But when working with integers do not quote the values. That can, in many cases, cause parsing errors inside of Kubernetes.
port: {{ .Values.Port }}
This remark does not apply to env variable values, which are expected to be strings even when they represent integers:
env:
  - name: HOST
    value: "http://host"
  - name: PORT
    value: "1234"
I'm assuming you have left the port value of AKIRA_HTTP_PORT without quotes; that's why you are getting the error.
You can read the docs about Template Functions and Pipelines.
With AKIRA_HTTP_PORT: "8080" in values.yaml, write the env variable as:
env:
  - name: AKIRA_HTTP_PORT
    value: {{ .Values.website.port | quote }}
That should work.
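To confirm the rendered value is a quoted string before deploying, something like the following works (release and chart names are placeholders):
helm template my-release ./my-chart | grep -A 1 AKIRA_HTTP_PORT
#   - name: AKIRA_HTTP_PORT
#     value: "8080"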

Helm chart: reference a secret gives name: %!s(<nil>)-%!s(<nil>)

I am creating a Helm chart. When doing a dry run I get an error:
Error: YAML parse error on vstsagent/templates/vsts-buildrelease-agent.yaml: error converting YAML to JSON: yaml: line 28: found character that cannot start any token
The dry run also outputs the secret and deployment YAML file which I created. The part where it goes wrong in the deployment shows:
- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: %!s(<nil>)-%!s(<nil>)
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: %!s(<nil>)-%!s(<nil>)
      key: TOKEN
The output from the dry run for the secret looks fine.
The templates I created:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.fullname" . }}
type: Opaque
data:
  ACCOUNT: {{ .Values.chart.secret.account }}
  TOKEN: {{ .Values.chart.secret.token }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "chart.fullname" . }}
  labels:
    app: {{ template "chart.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        release: {{ .Release.Name }}
        app: {{ template "chart.name" . }}
      annotations:
        agentVersion: {{ .Values.chart.image.tag }}
    spec:
      containers:
        - name: {{ template "chart.name" . }}
          image: {{ .Values.chart.image.name }}
          imagePullPolicy: {{ .Values.chart.image.pullPolicy }}
          env:
            - name: ACCOUNT
              valueFrom:
                secretKeyRef:
                  name: {{ template "chart.fullname" }}
                  key: ACCOUNT
            - name: TOKEN
              valueFrom:
                secretKeyRef:
                  name: {{ template "chart.fullname" }}
                  key: TOKEN
The _helpers.tpl looks like this:
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
Where am I going wrong in this?
I missed 2 dots....
- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: {{ template "chart.fullname" . }}
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: {{ template "chart.fullname" . }}
      key: TOKEN
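For context on why the dots matter: template "chart.fullname" takes the current context as its second argument, and the helper reads .Release.Name and .Chart.Name from it. Called without the dot, both resolve to nil, and printf "%s-%s" renders each nil as %!s(<nil>), which is exactly the name seen in the dry-run output:
# no context passed: .Release.Name and .Chart.Name are nil inside the helper
name: {{ template "chart.fullname" }}    # renders %!s(<nil>)-%!s(<nil>)
# context passed: the helper can resolve the release and chart names
name: {{ template "chart.fullname" . }}  # renders e.g. myrelease-vstsagent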