Helm chart not picking up correct values - kubernetes

I'm trying to assign static IPs for Load Balancers in GKE to services by storing them in the values.yaml file as:
ip:
  sandbox:
    service1: xxx.xxx.201.74
    service2: xxx.xxx.80.114
  dev:
    service1: xxx.xxx.249.203
    service2: xxx.xxx.197.77
  test:
    service1: xxx.xxx.123.212
    service2: xxx.xxx.194.133
  prod:
    service1: xxx.xx.244.211
    service2: xxx.xxx.207.177
Everything works fine until I deploy to prod, which fails with:
Error: UPGRADE FAILED: template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
helm.go:94: [debug] template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
and the relevant part of service2-service.yaml looks like:
apiVersion: v1
kind: Service
metadata:
  annotations:
    appName: {{ include "common.fullname" . }}
    componentName: service2
  labels:
    io.kompose.service: service2
  name: service2
spec:
  ports:
    - name: "{{ .Values.service.service2.ports.name }}"
      port: {{ .Values.service.service2.ports.port }}
      protocol: {{ .Values.service.service2.ports.protocol }}
      targetPort: {{ .Values.service.service2.ports.port }}
  type: LoadBalancer
  {{ if eq .Values.target.deployment.namespace "sandbox" }}
  loadBalancerIP: {{ .Values.ip.sandbox.service2 }}
  {{ else if eq .Values.target.deployment.namespace "dev" }}
  loadBalancerIP: {{ .Values.ip.dev.service2 }}
  {{ else if eq .Values.target.deployment.namespace "test" }}
  loadBalancerIP: {{ .Values.ip.test.service2 }}
  {{ else if eq .Values.target.deployment.namespace "prod" }}
  loadBalancerIP: {{ .Values.ip.prod.service2 }}
  {{ else }}
  {{ end }}
  selector:
    io.kompose.service: service2
status:
  loadBalancer: {}
Any clue why it is complaining that the value is nil (empty)?

It could be due to a surrounding function or block changing the context under which the values defined in values.yaml are resolved. Normally, inside a range we can use $ for the global scope, e.g. appName: {{ include "common.fullname" $ }}.
When I tested the same template with a static value for appName it worked for me, so there is no issue with access to values.yaml unless nil is actually being set at .Values.ip.prod.service2.
Otherwise, as you mentioned, guarding the multi-level lookup with parentheses, {{ (.Values.ip.prod).service2 }}, will resolve the nil-pointer issue.
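As a side note, the whole if/else chain can be collapsed by looking the namespace up in the ip map with index (a sketch, assuming the environment names in values.yaml exactly match .Values.target.deployment.namespace):

  {{- /* pick the per-environment block dynamically instead of hard-coding it */}}
  {{- $env := .Values.target.deployment.namespace }}
  {{- with index .Values.ip $env }}
  loadBalancerIP: {{ .service2 }}
  {{- end }}

Because with skips its block when the lookup returns nil, an unknown namespace simply omits loadBalancerIP instead of failing the render.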

Related

env variable error converting YAML to JSON: yaml: did not find expected key

I have a deployment file which takes its environment variables from the values.yaml file.
I also want to add one more variable named "PURPOSE".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          {{- toYaml .Values.envVariables | nindent 10 }}
        - name: PURPOSE
          value: "SCHEDULER"
The error I get is the following:
error converting YAML to JSON: yaml: line 140: did not find expected key
The env variables from the values file work fine; the problem seems to be the variable "PURPOSE".
The problem was the formatting of the env block: the hand-written PURPOSE entry did not line up with the list entries emitted by nindent 10. I used the solution below to fix the error:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.scheduler.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.scheduler.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.scheduler.name }}
    spec:
      containers:
      - name: {{ .Values.scheduler.name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.scheduler.targetPort }}
        imagePullPolicy: Always
        env:
          - name: PURPOSE
            value: "SCHEDULER"
          {{- toYaml .Values.envVariables | nindent 10 }}
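Alternatively (a sketch, assuming you are free to restructure the values file), keep every variable, including PURPOSE, in values.yaml and let the single toYaml call render the whole list, so there is no hand-written entry left to mis-indent:

# values.yaml (hypothetical entries alongside PURPOSE)
envVariables:
  - name: LOG_LEVEL
    value: "info"
  - name: PURPOSE
    value: "SCHEDULER"

Either way, rendering the chart locally with helm template and inspecting the generated Deployment is a quick way to catch indentation mismatches before they reach the cluster.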

How to mount Vault secret as a file in Kubernetes?

I'm using HashiCorp Vault in Kubernetes. I'm trying to mount a secret file into the main folder where my application resides, so that it ends up at /usr/share/nginx/html/.env while the application files live in /usr/share/nginx/html. But the container does not start because of that. I suspect that /usr/share/nginx/html was overwritten by Vault (annotation: vault.hashicorp.com/secret-volume-path). How can I mount only the file /usr/share/nginx/html/.env?
My annotations:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: configs/data/app/dev
vault.hashicorp.com/agent-inject-template-.env: |
  {{- with secret (print "configs/data/app/dev") -}}{{- range $k, $v := .Data.data -}}
  {{ $k }}={{ $v }}
  {{ end }}{{- end -}}
vault.hashicorp.com/role: app
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html
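For context, if the secret at configs/data/app/dev held, say, the keys API_KEY and DB_URL (hypothetical names), this template would render a classic .env file at the secret volume path:

API_KEY=abc123
DB_URL=postgres://db:5432/app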
I tried to replicate the use case, but I got an error:
2022/10/21 06:42:12 [error] 29#29: *9 directory index of "/usr/share/nginx/html/" is forbidden, client: 20.1.48.169, server: localhost, request: "GET / HTTP/1.1", host: "20.1.55.62:80"
so it seems that Vault changed the directory permissions as well when it created .env in that path. Here is the config:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: kv/develop/us-west-2/app1-secrets
vault.hashicorp.com/agent-inject-template-.env: |
  {{ with secret "kv/develop/us-west-2/app1-secrets" }}
  {{ range $k, $v := .Data.data }}
  {{ $k }} = "{{ $v }}"
  {{ end }}
  {{ end }}
vault.hashicorp.com/agent-limits-ephemeral: ""
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html/
vault.hashicorp.com/agent-inject-file-.env: .env
vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
vault.hashicorp.com/role: rolename
The workaround was to override the command of the desired container; for this use case I used nginx:
command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
This keeps the Vault-injected file at /vault/secret/.env and copies it into the web root at startup, so /usr/share/nginx/html itself is never replaced.
Here is the complete example with the dummy value my-app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debug-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        vault.hashicorp.com/agent-init-first: "true"
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-.env: kv/my-app/develop/us-west-2/develop-my-app
        vault.hashicorp.com/agent-inject-template-.env: |
          {{ with secret "kv/my-app/develop/us-west-2/develop-my-app" }}
          {{ range $k, $v := .Data.data }}
          {{ $k }} = "{{ $v }}"
          {{ end }}
          {{ end }}
        vault.hashicorp.com/agent-limits-ephemeral: ""
        vault.hashicorp.com/secret-volume-path: /vault/secret/
        vault.hashicorp.com/agent-inject-file-.env: .env
        vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
        vault.hashicorp.com/role: my-app-develop-my-app
    spec:
      serviceAccountName: develop-my-app
      containers:
        - name: debug
          image: nginx
          command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
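To confirm the workaround, exec into the running container (for example with kubectl exec) and check that /usr/share/nginx/html/.env contains the rendered secrets while the rest of the html directory is intact.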

How to add a PersistentVolumeClaim to a deployment running GitLab AutoDevops?

What am I trying to achieve?
We are using a self-hosted GitLab instance and use GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim")
Now GitLab AutoDevops uses a default Helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). So my line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "appname" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "appname" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "appname" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: {{ template "imagename" . }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
{{- if .Values.postgresql.managed }}
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: password
        - name: POSTGRES_HOST
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: privateIP
{{- end }}
        - name: DATABASE_URL
          value: {{ .Values.application.database_url | quote }}
        - name: GITLAB_ENVIRONMENT_NAME
          value: {{ .Values.gitlab.envName | quote }}
        - name: GITLAB_ENVIRONMENT_URL
          value: {{ .Values.gitlab.envURL | quote }}
        ports:
        - name: "{{ .Values.service.name }}"
          containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
        volumeMounts:
        - mountPath: /data
          name: test-pvc
      volumes:
      - name: test-pvc
        persistentVolumeClaim:
          claimName: test-api-claim
What is the problem?
Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the script refers to the valueFrom.secretKeyRef.name in the yaml:
- name: POSTGRES_HOST
  valueFrom:
    secretKeyRef:
      name: app-postgres
      key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the yaml file and I double-checked the indentation.
Two questions:
Could there be something wrong with my yaml?
Does anyone know of another solution to achieve my desired behavior? (adding PVC to the deployment so that pods actually use it?)
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default Helm chart; check the pvc.yaml and deployment.yaml templates.
Given that, it should be enough to set the corresponding values in .gitlab/auto-deploy-values.yaml to meet your needs; check the defaults in values.yaml for more context.
As for the parse error in your modified chart, one thing worth checking is that the added volumeMounts and volumes keys sit after the {{- end -}} that closes the initializeCommand guard; its trailing -}} trims the newline that follows it, which can glue the next key onto the previous rendered line and produce exactly this kind of "did not find expected key" error.
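For example, an override along these lines might work (a sketch only; the persistence keys come from the chart's values.yaml and have changed between auto-deploy-app versions, so verify them against the version you deploy with):

# .gitlab/auto-deploy-values.yaml (hypothetical values)
persistence:
  enabled: true
  volumes:
    - name: data
      mount:
        path: /data              # where the pods should see the files
      claim:
        accessMode: ReadOnlyMany # read-only, as the question requires
        size: 1Gi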

helm nested variable reference not happening

I am not able to reference a variable inside a nested variable in Helm. I want to retrieve the app1_image name and tag using the value of each app's label field. How can I do that?
values.yaml:
apps:
  - name: web-server
    label: app1
    command: /root/web.sh
    port: 80
  - name: app-server
    label: app2
    command: /root/app.sh
    port: 8080

app1_image:
  name: nginx
  tag: v1.0

app2_image:
  name: tomcat
  tag: v1.0
deployment.yaml:
{{- range $apps := .Values.apps }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $apps.name }}
  labels:
    app: {{ $apps.name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app:
  template:
    metadata:
      labels:
        app: {{ $apps.name }}
    spec:
      containers:
        - name: {{ $apps.name }}
          image: {{ $.Values.$apps.label.image }}: {{ $.Values.$apps.label.tag }}
          ports:
            - containerPort: {{ $apps.port }}
{{- end }}
The core Go text/template language includes an index function that you can use as a more dynamic version of the . operator. Given the values file you show, you could do the lookup (inside the loop) as something like:
{{- $key := printf "%s_image" $apps.label }}
{{- $settings := index $.Values $key | required (printf "could not find top-level settings for %s" $key) }}
- name: {{ $apps.name }}
  image: {{ $settings.name }}:{{ $settings.tag }}
You could probably rearrange the layout of the values.yaml file to make this clearer. You might also experiment with multiple helm install -f options to override settings at install time; if you can keep all of these settings in one place, they are easier to manage.
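For instance, one possible rearrangement (a hypothetical layout, not the one in the question) nests the image settings inside each app entry, so no dynamic key lookup is needed at all:

apps:
  - name: web-server
    label: app1
    command: /root/web.sh
    port: 80
    image:
      name: nginx
      tag: v1.0

with the template reduced to image: {{ $apps.image.name }}:{{ $apps.image.tag }}. Also note that a template emitting one Deployment per loop iteration needs a --- document separator at the start of the range body so the rendered output stays valid multi-document YAML.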

Issues in Deployment.yaml file

I got an error in my Deployment.yaml file. I have defined env variables in this file and assigned their values in the values file. I get a syntax error in this file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/name: {{ include "name" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/name: {{ include "name" . }}
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/name: {{ include "name" . }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources: {}
          env:
            - name: MONGODB_ADDRESS
              value: {{ .Values.mongodb.db.address }}
            - name: MONGODB
              value: "akira-article"
            - name: MONGODB_USER
              value: {{ .Values.mongodb.db.user | quote }}
            - name: MONGODB_PASS
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: mongodb-password
            - name: MONGODB_AUTH_DB
              value: {{ .Values.mongodb.db.name | quote }}
            - name: DAKEN_USERID
              value: {{ .Values.mongodb.db.userId | quote }}
            - name: DAKEN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: daken-pass
            - name: JWT_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: jwt-Privat-Key
            - name: WEBSITE_NAME
              value: {{ .Values.website.Name }}
            - name: WEBSITE_SHORT_NAME
              value: {{ .Values.website.shortName }}
            - name: AKIRA_HTTP_PORT
              value: {{ .Values.website.port }}
          ports:
            - containerPort: {{ .Values.service.port }}
I got this error:
Error: Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container:
v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects "
or n, but found 8, error found in #10 byte of
...|,"value":8080}],"ima|..., bigger context
...|,"value":"AA"},{"name":"AKIRA_HTTP_PORT","value":8080}],"image":"dr.xenon.team/websites/akira-fronte|...
The answer to your problem is in the Helm documentation: QUOTE STRINGS, DON'T QUOTE INTEGERS.
When you are working with string data, you are always safer quoting the strings than leaving them as bare words:
name: {{ .Values.MyName | quote }}
But when working with integers, do not quote the values; that can, in many cases, cause parsing errors inside of Kubernetes.
port: {{ .Values.Port }}
This remark does not apply to env variable values, which are expected to be strings even when they represent integers:
env:
  - name: HOST
    value: "http://host"
  - name: PORT
    value: "1234"
I'm assuming the rendered value of AKIRA_HTTP_PORT ended up as a bare integer (the error context shows "value":8080 without quotes); that's why you are getting the error.
You can read the docs about Template Functions and Pipelines.
So with website.port: 8080 in values.yaml, write the env variable as:
env:
  - name: AKIRA_HTTP_PORT
    value: {{ .Values.website.port | quote }}
It should then work.
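With quote applied, the rendered manifest carries the port as a string, which is what the Kubernetes API expects for env values:

env:
  - name: AKIRA_HTTP_PORT
    value: "8080"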