I'm trying to create my first Helm release on an AKS cluster using a GitLab pipeline,
but when I run the following command
- helm upgrade server ./aks/server
    --install
    --namespace demo
    --kubeconfig ${CI_PROJECT_DIR}/.kube/config
    --set image.name=${CI_PROJECT_NAME}/${CI_PROJECT_NAME}-server
    --set image.tag=${CI_COMMIT_SHA}
    --set database.user=${POSTGRES_USER}
    --set database.password=${POSTGRES_PASSWORD}
I receive the following error:
"Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data:
decode base64: illegal base64 data at input byte 8, error found in #10 byte of ..."
It looks like something is not working with the secrets file, but I don't understand what.
The secret.yaml template file is defined as follows:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
I will also add the deployment and the service .yaml files.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      stack: node
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        tier: backend
        stack: node
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          imagePullPolicy: IfNotPresent
          env:
            - name: User
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: User
                  optional: false
            - name: Host
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Host
                  optional: false
            - name: Database
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Database
                  optional: false
            - name: Password
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Password
                  optional: false
            - name: Ports
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Ports
                  optional: false
          resources:
            limits:
              cpu: "1"
              memory: "128M"
          ports:
            - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  type: ClusterIP
  selector:
    tier: backend
    stack: node
    app: {{ .Values.app.name }}
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Any hint?
You have to encode the Secret values to base64.
Check the encoding-functions section of the docs, and try the code below:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user | b64enc }}
  Host: {{ .Values.database.host | b64enc }}
  Database: {{ .Values.database.name | b64enc }}
  Password: {{ .Values.database.password | b64enc }}
  Port: {{ .Values.database.port | b64enc }}
Alternatively, use stringData instead of data.
stringData lets you create the Secret without encoding the values to base64.
Check the example in the link:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
stringData:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
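One caveat, as an aside not in the original answer: b64enc operates on strings, so if a value such as database.port is defined as a bare integer in values.yaml (or passed via --set without quotes), rendering may fail with a type error. Quoting the value in values.yaml, or piping it through toString first ({{ .Values.database.port | toString | b64enc }}), avoids that. You can also check what the Secret renders to locally before running the pipeline; the template path below is assumed and the values are placeholders:

helm template server ./aks/server \
  --namespace demo \
  --set database.user=myuser \
  --set database.password=mypassword \
  --show-only templates/secret.yaml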
Related
I have a deployment file that takes its environment variables from the values.yaml file.
I also want to add one more variable named "PURPOSE".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          {{- toYaml .Values.envVariables | nindent 10 }}
            - name: PURPOSE
              value: "SCHEDULER"
The error I get is the following:
error converting YAML to JSON: yaml: line 140: did not find expected key
The env variables from the values file work fine;
the problem seems to be the "PURPOSE" variable.
The problem was the formatting of the env block.
I used the solution below to fix the error:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          - name: PURPOSE
            value: "SCHEDULER"
          {{- toYaml .Values.envVariables | nindent 10 }}
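For this to render, envVariables is assumed (it is not shown in the question) to be a plain list of name/value maps in values.yaml, since toYaml emits it verbatim into the env list. A placeholder example:

envVariables:
  - name: LOG_LEVEL
    value: "info"
  - name: TZ
    value: "UTC"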
My current implementation, shown below, applies the same configuration to every replica.
Is there a way to get different values for each replica?
statefulset file:
{{- $outer := . -}}
{{- range $idx, $app := .Values.appliance_type }}
{{- with $outer -}}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ $app.name }}
  labels:
    app: "{{ $app.appName }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
spec:
  replicas: {{ $app.replicaCount }}
  serviceName: {{ $app.serviceName }}
  selector:
    matchLabels:
      app: {{ $app.appName }}
  template:
    metadata:
      labels:
        app: {{ $app.appName }}
    spec:
      containers:
        - name: selfcheck
          image: {{ .Values.image.registry }}/{{ .Values.da.pod.selfcheck.repository }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CDNHOSTNAME
              value: '{{ $app.hostname }}'
            - name: CUSER
              value: '{{ .Values.da.conf.consoleUser }}'
            - name: CPASSWORD
              value: '{{ .Values.da.conf.consolePassword }}'
{{ end }}
{{- end -}}
===================
values.yaml
appliance_type:
  - name: sethu
    hostname: s1
    replicaCount: 2
    serviceName: da
    appName: test-ac
  - name: ram
    hostname: r1
    replicaCount: 1
    serviceName: ida
    appName: test-ia
===================
Actual results: it creates 3 pods:
sethu-0 => CDNHOSTNAME (s1)
sethu-1 => CDNHOSTNAME (s1)
ram-0 => CDNHOSTNAME (r1)
Needed results:
sethu-0 => CDNHOSTNAME (s1)
sethu-1 => CDNHOSTNAME (s2)
ram-0 => CDNHOSTNAME (r1)
The hostname for sethu-0 and sethu-1 needs to take different values from values.yaml,
with a configuration like the one below - but it's not working:
appliance_type:
  - name: sethu
    hostname:
      - s1
      - s2
    replicaCount: 2
    serviceName: da
    appName: test-ac
  - name: ram
    replicaCount: 1
    serviceName: ida
    appName: test-ia
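Helm renders the StatefulSet manifest once, so templating alone cannot give each replica of the same StatefulSet a different CDNHOSTNAME. A hedged sketch of one workaround, assuming the hostname list from the values above, an image with a POSIX shell, and a hypothetical entrypoint path: instead of templating CDNHOSTNAME directly, pass the whole list to every pod and let each pod pick its entry from its ordinal (the pod name ends in -0, -1, ...):

          env:
            - name: CDNHOSTNAMES
              value: '{{ join "," $app.hostname }}'   # renders e.g. "s1,s2"
          command: ["sh", "-c"]
          args:
            - |
              ORDINAL="${HOSTNAME##*-}"
              export CDNHOSTNAME="$(echo "$CDNHOSTNAMES" | cut -d, -f$((ORDINAL + 1)))"
              exec /opt/selfcheck/run.sh   # hypothetical entrypoint

An alternative is to range over the hostnames and render one single-replica StatefulSet per hostname, at the cost of losing the shared sethu-N naming.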
Somehow I cannot load environment variables and I have the following error when the pod starts:
Error: Could not find or load main class
Caused by: java.lang.ClassNotFoundException:
The structure of my Helm chart:
I have the following configuration in the configmap.yaml Helm template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.nameOverride }}-config
data:
  application.yaml: {{ tpl (.Files.Get "files/application.yaml") . | quote }}
  appdynamicscontrollerconfig.yaml: {{ tpl (.Files.Get "files/appdynamics-controller-config.yaml") . | quote }}
  javaconfigmap.yaml: {{ tpl (.Files.Get "files/java-config-map.yaml") . | quote }}
The deployment.yaml Helm template:
containers:
  - name: {{ .Values.nameOverride }}
    env:
      - name: APPLICATION
        valueFrom:
          configMapKeyRef:
            name: {{ .Values.nameOverride }}-config
            key: application.yaml
      - name: APPDYNAMICS_CONTROLLER_CONFIG
        valueFrom:
          configMapKeyRef:
            name: {{ .Values.nameOverride }}-config
            key: appdynamicscontrollerconfig.yaml
      - name: JAVA_OPTS
        valueFrom:
          configMapKeyRef:
            name: {{ .Values.nameOverride }}-config
            key: javaconfigmap.yaml
volumes:
  - configMap:
      defaultMode: 420
      name: {{ .Values.nameOverride }}-config
      items:
        - key: application.yaml
          path: application.yaml
        - key: appdynamicscontrollerconfig.yaml
          path: appdynamics-controller-config.yaml
        - key: javaconfigmap.yaml
          path: java-config-map.yaml
    name: {{ .Values.nameOverride }}-config
volumeMounts:
  - mountPath: /cs/app/config
    name: {{ .Values.nameOverride }}-config
    readOnly: true
Am I referencing the files that contain the environment variables incorrectly?
Probably, but I couldn't find any documentation on it.
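A couple of diagnostic checks may help narrow this down (the release and resource names below are placeholders): inspect what the rendered ConfigMap contains and what the container actually receives. If JAVA_OPTS turns out to hold the entire java-config-map.yaml document rather than JVM flags, that alone would explain the "Could not find or load main class" failure:

kubectl get configmap my-release-config -o yaml
kubectl exec deploy/my-release -- printenv JAVA_OPTS APPLICATION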
I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
  nodetype: free
configHocon: |-
  streams {
    monitoring {
      custom {
        uri = ${?URI}
        method = ${?METHOD}
      }
    }
  }
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-streams-configmap
data:
  config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
        - env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          image: {{ .Values.image }}
          name: {{ .Values.name }}
          volumeMounts:
            - name: config-hocon
              mountPath: /config
          ports:
            - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
        - name: config-hocon
          configmap:
            name: custom-streams-configmap
            items:
              - key: config.hocon
                path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The problem was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
        - env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          image: {{ .Values.image }}
          name: {{ .Values.name }}
          volumeMounts:
            - name: config-hocon
              mountPath: /config
          ports:
            - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
        - name: config-hocon
          configMap:
            name: custom-streams-configmap
            items:
              - key: config.hocon
                path: config.hocon
status: {}
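As a side note not in the original answer: a typo like configmap vs configMap can usually be caught before installing by validating the rendered manifests, e.g. with Helm 3 syntax:

helm template custom-streams ./custom-streams -f values.yaml \
  | kubectl apply --dry-run=client --validate=true -f -

kubectl's validation should flag the unknown configmap field in the volume definition.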
I got an error in my Deployment.yaml file. I defined env variables in this file and assigned their values in the values file, and I'm getting a syntax error in this file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/name: {{ include "name" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/name: {{ include "name" . }}
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/name: {{ include "name" . }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources: {}
          env:
            - name: MONGODB_ADDRESS
              value: {{ .Values.mongodb.db.address }}
            - name: MONGODB
              value: "akira-article"
            - name: MONGODB_USER
              value: {{ .Values.mongodb.db.user | quote }}
            - name: MONGODB_PASS
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: mongodb-password
            - name: MONGODB_AUTH_DB
              value: {{ .Values.mongodb.db.name | quote }}
            - name: DAKEN_USERID
              value: {{ .Values.mongodb.db.userId | quote }}
            - name: DAKEN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: daken-pass
            - name: JWT_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: jwt-Privat-Key
            - name: WEBSITE_NAME
              value: {{ .Values.website.Name }}
            - name: WEBSITE_SHORT_NAME
              value: {{ .Values.website.shortName }}
            - name: AKIRA_HTTP_PORT
              value: {{ .Values.website.port }}
          ports:
            - containerPort: {{ .Values.service.port }}
I got this error:
Error: Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container:
v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects "
or n, but found 8, error found in #10 byte of
...|,"value":8080}],"ima|..., bigger context
...|,"value":"AA"},{"name":"AKIRA_HTTP_PORT","value":8080}],"image":"dr.xenon.team/websites/akira-fronte|...
The answer to your problem is in the Helm documentation: QUOTE STRINGS, DON'T QUOTE INTEGERS.
When you are working with string data, you are always safer quoting the strings than leaving them as bare words:
name: {{ .Values.MyName | quote }}
But when working with integers do not quote the values. That can, in many cases, cause parsing errors inside of Kubernetes.
port: {{ .Values.Port }}
This remark does not apply to env variable values, which are expected to be strings even when they represent integers:
env:
  - name: HOST
    value: "http://host"
  - name: PORT
    value: "1234"
I'm assuming you have put the port value of AKIRA_HTTP_PORT in values.yaml without quotes, so it renders as a bare integer; that's why you are getting the error.
You can read the docs about Template Functions and Pipelines.
With the port defined as a string in values.yaml (e.g. website.port: "8080"), write the env variable as:
env:
  - name: AKIRA_HTTP_PORT
    value: {{ .Values.website.port | quote }}
It should work.
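For completeness, a values.yaml snippet matching the template above (an assumption about the chart's structure):

website:
  port: "8080"

Keeping the port as a quoted string is the safest option; with | quote in the template, a bare integer would also render correctly as a string.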