Helm secrets that replace variables inside configuration files - Kubernetes

I am trying to deploy a REST API application in Kubernetes with Helm. Some of the configuration files have credentials in them, and I would like to replace the variables inside the Helm templates with Kubernetes Secrets during deployment.
Does anyone have a pointer to documentation where I can explore this, please?

If you only have a few environment variables, you can set them directly in the deployment file; however, the best practice is to create a Secret and inject them all into the deployment. Here is a direct example of injecting values into the deployment (a Secret-backed sketch follows at the end of this answer):
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Chart.Name }}-deployment"
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: "{{ .Chart.Name }}-selector"
      version: "current"
  revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
  template:
    metadata:
      labels:
        app: "{{ .Chart.Name }}-selector"
        version: "current"
    spec:
      containers:
      - name: "{{ .Chart.Name }}"
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.service.servicePort }}
        resources:
          requests:
            cpu: "{{ .Values.image.resources.requests.cpu }}"
            memory: "{{ .Values.image.resources.requests.memory }}"
        env:
        - name: PORT
          value: "{{ .Values.service.servicePort }}"
        {{- if .Values.image.livenessProbe }}
        livenessProbe:
{{ toYaml .Values.image.livenessProbe | indent 10 }}
        {{- end }}
        {{- if .Values.image.readinessProbe }}
        readinessProbe:
{{ toYaml .Values.image.readinessProbe | indent 10 }}
        {{- end }}
values.yaml
image:
  repository: nodeserver
  tag: 1.0.0
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 200m
      memory: 300Mi
  readinessProbe: {}
  # Example (replace readinessProbe: {} with the following):
  # readinessProbe:
  #   httpGet:
  #     path: /ready
  #     port: 3000
  #   initialDelaySeconds: 3
  #   periodSeconds: 5
  livenessProbe: {}
  # Example (replace livenessProbe: {} with the following):
  # livenessProbe:
  #   httpGet:
  #     path: /live
  #     port: 3000
  #   initialDelaySeconds: 40
  #   periodSeconds: 10
service:
  name: Node
  type: NodePort
  servicePort: 3000
You can see this block inside deployment.yaml:
env:
- name: PORT
  value: "{{ .Values.service.servicePort }}"
It fetches the value from the values.yaml file:
service:
  name: Node
  type: NodePort
  servicePort: 3000
If you don't want to update the values.yaml file, you can also override the value on the command line:
helm install chart my-chart -n namespace-name --set service.servicePort=5000
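As a hedged sketch of the Secret-based best practice mentioned at the start of this answer (the Secret name here is a placeholder, not part of the chart above): assuming a Secret called my-app-credentials already exists in the same namespace, all of its keys can be injected into the container as environment variables alongside the plain PORT value:
        env:
        - name: PORT
          value: "{{ .Values.service.servicePort }}"
        envFrom:
        - secretRef:
            name: my-app-credentials   # hypothetical Secret holding the credentials
With envFrom, every key in the Secret becomes an environment variable of the same name inside the container, which matches the "inject them all into the deployment" approach described above.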

Create a Secret template in your templates folder. Then you can pass the values through the Helm CLI.
For example, here is my secret.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: {{ .Values.password | b64enc }}
Now I can set the value for password as below:
helm install my-chart-instance my-chart -n my-namespace --set password=my-secret-value
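To tie this back to the original question, the rendered Secret can then be referenced from the deployment template instead of templating the credential into a config file in plain text. A minimal sketch of the container env block, assuming the mysecret name above (the variable name APP_PASSWORD is hypothetical):
        env:
        - name: APP_PASSWORD            # hypothetical variable read by the application
          valueFrom:
            secretKeyRef:
              name: mysecret            # the Secret rendered from secret.yaml above
              key: password
Note that --set password=... still leaves the value in your shell history; for real credentials a values file kept out of version control, or a plugin such as helm-secrets, is usually preferable.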

Related

How can I make sure my health checks work in Kubernetes?

I'm currently trying to add health checks to my API through my Helm chart. However, when I run this service under Kubernetes, the pod seems to be up but it shows 0/1 Running, and when I try to hit the URL for the service it does not return anything.
This is how my deployment looks:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test.name" . }}
    helm.sh/chart: {{ include "test.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "test.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          readinessProbe:
            httpGet:
              path: /health
              port: 8001
            initialDelaySeconds: 60
            periodSeconds: 60
          livenessProbe:
            httpGet:
              path: /health
              port: 8001
            initialDelaySeconds: 60
            periodSeconds: 60
          startupProbe:
            httpGet:
              path: /health
              port: 8001
            failureThreshold: 60
            periodSeconds: 60
          env:
            {{- range .Values.variables }}
            - name: {{ .name }}
              value: {{ .value }}
            {{- end }}
          ports:
            - name: http
              containerPort: 80001
              protocol: TCP
If I delete the health checks, I get 1/1 Running. Does anyone know if I have to add something else to the health checks?
Your health check specification looks syntactically fine.
But your port number seems wrong: the maximum possible TCP port is 65535, and you specify 80001. Probably a zero too many?
Verify which port your application is listening on inside the container and use that port.
You can verify it by running the app as a local Docker container and calling the health check with curl.
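For example, a quick local check could look like this (the image name is a placeholder for your own API image, and 8001 is the port the probes point at):
docker run -d --name healthcheck-test -p 8001:8001 my-api-image:latest   # hypothetical image
curl -v http://localhost:8001/health                                     # should return HTTP 200 if path and port are right
docker rm -f healthcheck-test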

Helm error converting YAML to JSON: yaml: line 24: did not find expected key

Getting an error: the postgres deployment for the service is failing. I checked the YAML with yamllint and it is valid, but I am still getting the error. The deployment file contains a ServiceAccount, a Service and a StatefulSet.
install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /builds/xxx/xyxy/xyxyxy/xx/xxyy/src/main/helm
Error: YAML parse error on postgresdeployment.yaml: error converting YAML to JSON: yaml: line 24: did not find expected key
helm.go:75: [debug] error converting YAML to JSON: yaml: line 24: did not find expected key
YAML parse error on postgresdeployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
    /home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
    /home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
    /home/circleci/helm.sh/helm/pkg/action/install.go:489
helm.sh/helm/v3/pkg/action.(*Install).Run
    /home/circleci/helm.sh/helm/pkg/action/install.go:230
main.runInstall
    /home/circleci/helm.sh/helm/cmd/helm/install.go:223
main.newUpgradeCmd.func1
    /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:113
github.com/spf13/cobra.(*Command).execute
    /go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
    /go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
    /go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
main.main
    /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1357
Is there any invalid YAML syntax? Is any indentation missing? Which key is missing here?
postgresdeployment.yaml:
{{- if contains "-dev" .Values.istio.suffix }}
# Postgre ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres
---
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
---
# Postgre StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      serviceAccountName: postgres
      securityContext:
        {{- toYaml .Values.securityContext | nindent 8 }}
      terminationGracePeriodSeconds: {{ default 60 .Values.terminationGracePeriodSeconds }}
      volumes:
{{ include "xxx.volumes.logs.spec" . | indent 8 }}
        - emptyDir: { }
          name: postgres-disk
      containers:
        - name: postgres
          image: "{{ template "xxx.dockerRegistry.hostport" . }}/postgres:latest"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: postgres
              containerPort: 5432
          livenessProbe:
            tcpSocket:
              port: 5432
            failureThreshold: 3
            initialDelaySeconds: 240
            periodSeconds: 45
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 5432
            failureThreshold: 2
            initialDelaySeconds: 180
            periodSeconds: 5
            timeoutSeconds: 20
          resources:
            {{ if .Values.lowResourceMode }}
            {{- toYaml .Values.resources.low | nindent 12 }}
            {{ else }}
            {{- toYaml .Values.resources.high | nindent 12 }}
            {{ end }}
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: postgres-disk
              mountPath: /var/lib/postgresql/data
{{- end }}
The templating mustaches in Helm (and its golang text/template peer) must be one token; otherwise YAML believes that { opens a dict, and then the second { tries to open a child dict, and just like in JSON that's not a valid structure.
So you'll want:
      serviceAccountName: postgres
      securityContext:
        {{- toYaml .Values.securityContext | nindent 8 }}
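One more thing worth noting: yamllint on the raw template only checks the file before Helm has rendered it, while the reported line 24 refers to the rendered manifest. A quick way to inspect and lint the rendered output could be (run from the chart directory; --debug is meant to keep the rendered text even when parsing fails):
helm template --debug . > rendered.yaml 2>&1
yamllint rendered.yaml    # or just open rendered.yaml and look around the reported line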

acumos AI clio installation fails with "error converting YAML to JSON"

I have been trying to install the Clio release.
VM :
ubuntu 18.04
16 Cores
32 GB RAM
500 GB Storage.
Command :
bash /home/ubuntu/system-integration/tools/aio_k8s_deployer/aio_k8s_deployer.sh all acai-server ubuntu generic
Almost all steps of the installation completed successfully, but during "setup-lum" I got the error below.
Error:
YAML parse error on lum-helm/templates/deployment.yaml:
error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
Workaround:
I was able to get past these errors (tested via helm install --dry-run) by:
a. removing the resources, affinity and tolerations blocks
b. replacing "Release.Name" with the actual release value (e.g. license-clio-configmap)
but when I run the full installation command, those Helm charts are updated again.
Full error :
...
helm install -f kubernetes/values.yaml --name license-clio --namespace default --debug ./kubernetes/license-usage-manager/lum-helm
[debug] Created tunnel using local port: '46109'
[debug] SERVER: "127.0.0.1:46109"
[debug] Original chart version: ""
[debug] CHART PATH: /deploy/system-integration/AIO/lum/kubernetes/license-usage-manager/lum-helm
YAML parse error on lum-helm/templates/deployment.yaml: error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
YAML of deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "lum-helm.fullname" . }}
  labels:
    app: {{ template "lum-helm.name" . }}
    chart: {{ template "lum-helm.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "lum-helm.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "lum-helm.name" . }}
        release: {{ .Release.Name }}
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox:1.28
          command:
            - 'sh'
            - '-c'
            - >
              until nc -z -w 2 {{ .Release.Name }}-postgresql {{ .Values.postgresql.servicePort }} && echo postgresql ok;
                do sleep 2;
              done
      containers:
        - name: {{ .Chart.Name }}
          image: nexus3.acumos.org:10002/acumos/lum-server:default
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}-postgresql
                  key: postgresql-password
            - name: NODE
          volumeMounts:
            - name: config-volume
              mountPath: /opt/app/lum/etc/config.json
              subPath: lum-config.json
          ports:
            - name: http
              containerPort: 2080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Release.Name }}-configmap
This error was resolved as per "Error trying to install Acumos Clio using AIO":
I provided an image tag (1.3.2) in my actual value.yaml and the lum deployment was successful.
In the Acumos setup there are two copies of setup-lum.sh and values.yaml:
the actual one:
~/system-integration/AIO/lum/kubernetes/value.yaml
and the runtime copy:
~/aio_k8s_deployer/deploy/system-integration/AIO/lum/kubernetes/value.yaml
I found this workaround:
Uncommented the IMAGE-TAG line in the values.yaml file.
Commented the following lines in the setup-lum.sh file (these were already executed at the first run, and this way I skipped the overwriting problem):
rm -frd kubernetes/license-usage-manager
git clone "https://gerrit.acumos.org/r/license-usage-manager" \
  kubernetes/license-usage-manager

Knative service with Keycloak gatekeeper sidecar

I am trying to deploy the following service:
{{- if .Values.knativeDeploy }}
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
{{- if .Values.service.name }}
  name: {{ .Values.service.name }}
{{- else }}
  name: {{ template "fullname" . }}
{{- end }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  template:
    spec:
      containers:
        - image: quay.io/keycloak/keycloak-gatekeeper:9.0.3
          name: gatekeeper-sidecar
          ports:
            - containerPort: {{ .Values.keycloak.proxyPort }}
          env:
            - name: KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ template "keycloakclient" . }}
                  key: secret
          args:
            - --resources=uri=/*
            - --discovery-url={{ .Values.keycloak.url }}/auth/realms/{{ .Values.keycloak.realm }}
            - --client-id={{ template "keycloakclient" . }}
            - --client-secret=$(KEYCLOAK_CLIENT_SECRET)
            - --listen=0.0.0.0:{{ .Values.keycloak.proxyPort }} # listen on all interfaces
            - --enable-logging=true
            - --enable-json-logging=true
            - --upstream-url=http://127.0.0.1:{{ .Values.service.internalPort }} # To connect with the main container's port
          resources:
{{ toYaml .Values.gatekeeper.resources | indent 12 }}
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
{{- range $pkey, $pval := .Values.env }}
            - name: {{ $pkey }}
              value: {{ quote $pval }}
{{- end }}
          envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: {{ .Values.probePath }}
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
            successThreshold: {{ .Values.livenessProbe.successThreshold }}
            timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
          readinessProbe:
            httpGet:
              path: {{ .Values.probePath }}
              port: {{ .Values.service.internalPort }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
            successThreshold: {{ .Values.readinessProbe.successThreshold }}
            timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
Which fails with the following error:
Error from server (BadRequest): error when creating "/tmp/helm-template-workdir-290082188/jx/output/namespaces/jx-staging/env/charts/docs/templates/part0-ksvc.yaml": admission webhook "webhook.serving.knative.dev" denied the request: mutation failed: expected exactly one, got both: spec.template.spec.containers'
Now, if I read the specs (https://knative.dev/v0.15-docs/serving/getting-started-knative-app/), I can see this example:
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"
Which has exactly the same structure. Now, my questions are:
How can I validate my YAML without waiting for a deployment? IntelliJ has a Kubernetes plugin, but I can't find a machine-consumable CRD schema for serving.knative.dev/v1 (https://knative.dev/docs/serving/spec/knative-api-specification-1.0/).
Does Knative allow multiple containers? (That configuration works perfectly with apiVersion: apps/v1, kind: Deployment.)
Multi-container is an alpha feature in Knative version 0.16.
This feature needs to be enabled by setting multi-container to enabled in the config-features ConfigMap, so edit that ConfigMap using
kubectl edit cm config-features -n knative-serving and enable the feature.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
  labels:
    serving.knative.dev/release: devel
  annotations:
    knative.dev/example-checksum: "983ddf13"
data:
  _example: |
    ...
    # Indicates whether multi container support is enabled
    multi-container: "enabled"
    ...
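If you prefer to script the change rather than use kubectl edit, a merge patch along these lines should work; note that the effective setting has to be a real key under data (the _example block is only commented documentation):
kubectl patch configmap config-features -n knative-serving \
  --type merge -p '{"data":{"multi-container":"enabled"}}'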
What version of Knative are you using?
Support for multiple containers was added as an alpha feature in 0.16. If you're not using 0.16 or later or don't have the alpha flag enabled, the request will probably be blocked.
There were a number of edge cases to define for multi-container support in Knative, so the default was to be conservative and only allow one container until the constraints had been explored.
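If it helps, one way to check which Knative Serving version is installed is to look at the labels Knative applies to its own namespace (assuming the standard release labels are present):
kubectl get namespace knative-serving -o jsonpath='{.metadata.labels}'
# look for a serving.knative.dev/release or serving.knative.dev/version label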

How to refer to a whole structure from values.yaml instead of specifying fields one by one?

I am trying to deploy a Helm chart on minikube in a local VirtualBox VM using the Helm command shown below.
I am referencing the livenessProbe and readinessProbe configuration directly from values.yaml in the deployment.yaml, as shown below. However, following this approach gives me the error specified below; if I change it to refer to each attribute value independently, the chart installs and the pod deploys successfully.
livenessProbe:
  - {{ .Values.monitorConfig.liveness }}
readinessProbe:
  - {{ .Values.monitorConfig.readiness }}
Can anyone please let me know what can be done to avoid the error, and why?
Thank you.
Helm Command
helm install --debug -n pspk ./pkg/helm/my-service/
Error
Error: release pspk failed: Deployment in version "v1beta1" cannot be
handled as a Deployment: v1beta1.Deployment.Spec:
v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec:
v1.PodSpec.Containers: []v1.Container: v1.Container.LivenessProbe:
readObjectStart: expect { or n, but found [, error found in #10 byte
of ...|ssProbe":["map[failu|..., bigger context
...|"imagePullPolicy":"IfNotPresent","livenessProbe":["map[failureThreshold:3
httpGet:map[path:/greeting|...
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: "{{ .Release.Name }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 50443
              protocol: TCP
            - name: grpc
              containerPort: 50051
              protocol: TCP
          livenessProbe:
            - {{ .Values.monitorConfig.liveness }}
          readinessProbe:
            - {{ .Values.monitorConfig.readiness }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
values.yaml
replicaCount: 2
application:
  track: stable
image:
  repository: test/go-k8s
  tag: 0.1.1
  pullPolicy: IfNotPresent
# SQL migration scripts
service:
  enabled: false
  type: NodePort
  port: 80
  grpc_port: 50051
env:
  # POSTGRES_HOST
  postgresHost: localhost
  # POSTGRES_PORT
  postgresPort: "5432"
  # POSTGRES_SSL_MODE
  postgresSSLMode: "disable"
  # POSTGRES_DB
  postgresDB: test
  # POSTGRES_USER
  postgresUser: test
  # POSTGRES_PASSWORD
  postgresPassword: "test"
monitorConfig:
  liveness:
    httpGet:
      path: "/greeting"
      port: 50443
    periodSeconds: 2
    timeoutSeconds: 10
    initialDelaySeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readiness:
    httpGet:
      path: "/greeting"
      port: 50443
    periodSeconds: 2
    timeoutSeconds: 10
    initialDelaySeconds: 5
    failureThreshold: 3
    successThreshold: 1
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
You need to do two things to make this work correctly: explicitly serialize the value as YAML, and make the indentation correct. This tends to look something like
livenessProbe:
  - {{ .Values.monitorConfig.liveness | toYaml | indent 8 | trim }}
The default serialization will be a Go-native dump format, which isn't YAML and leads to the weird map[failureThreshold:1] output; toYaml fixes this. indent 8 puts spaces at the front of every line in the resulting block (you will need to adjust the "8"). trim removes leading and trailing spaces. (toYaml is Helm-specific and isn't documented well; the other two functions come from the Sprig support library.)
You should double-check this output with
helm template -n pspk ./pkg/helm/my-service/
and if it doesn't look like valid YAML, adjust it further.
In your YAML:
livenessProbe:
  - {{ .Values.monitorConfig.liveness }}
readinessProbe:
  - {{ .Values.monitorConfig.readiness }}
you insert your values into sequence items. Sequence items in YAML are started with -. However, the contents of livenessProbe are expected to be a YAML mapping. The error message is poor, but it tells you what goes wrong:
expect { or n, but found [,
{ starts a YAML mapping (in flow style), [ starts a YAML sequence (in flow style). The message tells you that the start of a YAML mapping is expected, but the start of a YAML sequence is found. Note that since you're using block style, you don't actually use { and [ here.
So to fix it, simply remove the - so that your inserted mapping (as seen in your values.yaml) is the direct value of livenessProbe and not contained in a sequence:
livenessProbe:
  {{ .Values.monitorConfig.liveness }}
readinessProbe:
  {{ .Values.monitorConfig.readiness }}
Thanks to the community answers/comments and the Helm template guide, this can be combined into:
{{- if .Values.monitorConfig.liveness }}
livenessProbe:
{{ toYaml .Values.monitorConfig.liveness | indent 12 }}
{{- end }}
This will give more flexibility.
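For reference, with the monitorConfig values shown above the combined template should render to an ordinary probe block, roughly like this (toYaml sorts keys alphabetically, and the final indentation depends on where the block sits in your template):
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /greeting
    port: 50443
  initialDelaySeconds: 5
  periodSeconds: 2
  successThreshold: 1
  timeoutSeconds: 10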