MariaDB helm chart crash - kubernetes

I am currently working with Helm and Kubernetes.
I would like to deploy MISP in Kubernetes, but unfortunately the Helm chart always fails on the MariaDB part.
In the logs of the MariaDB pod I see the following error message, but I currently have no idea how to fix it.
2022-11-23 11:13:29 0 [Note] /opt/bitnami/mariadb/sbin/mysqld: ready for connections.
Version: '10.6.11-MariaDB' socket: '/opt/bitnami/mariadb/tmp/mysql.sock' port: 3306 Source distribution
2022-11-23 11:13:31 3 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
Reading datadir from the MariaDB server failed. Got the following error when executing the 'mysql' command line client
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
FATAL ERROR: Upgrade failed
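(Editor's note, not part of the original question.) With the Bitnami MariaDB subchart, this ERROR 1045 very often comes from a PersistentVolumeClaim left behind by an earlier install, which still holds the old credentials and ignores the passwords now set in values.yaml. A minimal cleanup sketch, assuming the release lives in the misp namespace and the claim follows the usual data-<release>-mariadb-0 naming (this deletes the database data):

kubectl get pvc -n misp
helm uninstall <release> -n misp
kubectl delete pvc data-<release>-mariadb-0 -n misp
helm install <release> . -n misp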
Here is my values.yaml:
# Default values for misp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: coolacid/misp-docker
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart
# appVersion. N.B. in the particular case of coolacid's
# Dockerization of MISP, the misp-docker repo has multiple different
# images, and the tags not only distinguish between versions, but
# also between images.
tag: ""
imagePullSecrets: []
nameOverride: "misp"
fullnameOverride: "misp-chart"
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
#
# It appears some of the supervisord scripts need to be root,
# because they write files in /etc/cron.d.
#
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
global:
storageClass: local-path
# Mariadb chart defaults to a single node.
mariadb:
# If you don't set mariadb.auth.password and
# mariadb.auth.root_password, you cannot effectively helm upgrade
# this chart.
auth:
username: misp
database: misp
password: misp
#root_password: misp
image:
# Without this, you don't get any logs the database server puts
# out, only nice colorful things said by supervisord scripts, such
# as "===> Starting the database."
debug: true
# Redis chart settings here are for a single node.
redis:
usePassword: false
cluster:
enabled: false
master:
persistence:
storageClass: local-path
mispModules:
enabled: true
# A hostname to connect to Redis. Ignored if empty.
redis:
hostname: ""
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
initialSetup: true
# Creating the GNUPG secrets:
# pwgen -s 32
# kubectl create secret -n misp generic --from-literal='passphrase=<PASSPHRASE>' misp-gnupg-passphrase
# cd /tmp
# mkdir mgpgh
# gpg --homedir=mgpgh --gen-key
# # ^^ when you are generating the key you say what email address it is for
# mkdir mgpghs
# gpg --homedir=mgpgh --export-secret-keys -a -o mgpghs/gnupg-private-key
# kubectl create secret -n misp generic --from-file=mgpghs misp-gnupg-private-key
# rm -rf mgpgh mgpghs
gnupg:
# A Secret containing a GnuPG private key. You must construct this
# yourself.
privateKeySecret: "misp-gnupg-private-key"
# A Secret with the passphrase to unlock the private key.
passphraseSecret: "misp-gnupg-passphrase"
# The email address for which the key was created.
emailAddress: "me@mycompany.com"
# This is constructed by the container's scripts; don't change it
#homeDirectory: "/var/www/.gnupg"
homeDirectory: "/var/www/MISP/.gnupg"
passphraseFile: "/var/www/MISP/.gnupg-passphrase"
importing:
image:
repository: 'olbat/gnupg'
pullPolicy: IfNotPresent
tag: 'light'
# Authentication/authorization via OpenID Connect. See
# <https://github.com/MISP/MISP/tree/2.4/app/Plugin/OidcAuth>. Values
# here are named with snake_case according to the convention in that
# documentation, not camelCase as is usual with Helm.
#oidc:
# Use OIDC for authn/authz.
# enabled: false
# provider_url: "https://keycloak.example.com/auth/realms/example_realm/protocol/openid-connect/auth"
# client_id: "misp"
# client_secret: "01234567-5768-abcd-cafe-012345670123"
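One editor's aside before the deployment template: the commented-out root_password key above would not be picked up by the Bitnami MariaDB subchart, whose values use camelCase keys under auth. If you want a fixed root password so that helm upgrade keeps working, the block would look roughly like this (passwords are placeholders):

mariadb:
  auth:
    username: misp
    database: misp
    password: misp
    rootPassword: misp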
and here is my templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "misp.fullname" . }}
labels:
{{- include "misp.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "misp.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "misp.selectorLabels" . | nindent 8 }}
spec:
volumes:
- name: gnupg-home
emptyDir: {}
{{- with .Values.gnupg }}
- name: private-key
secret:
secretName: {{ .privateKeySecret }}
- name: passphrase
secret:
secretName: {{ .passphraseSecret }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "misp.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: "gnupg-import"
securityContext:
runAsNonRoot: true
{{- /* FIXME: is this always right? */}}
runAsUser: 33
{{- with .Values.gnupg }}
{{- with .importing.image }}
image: {{ printf "%s:%s" .repository .tag | quote }}
imagePullPolicy: {{ .pullPolicy }}
{{- end }}
volumeMounts:
- name: private-key
mountPath: /tmp/misp-gpg.priv
subPath: gnupg-private-key
- name: passphrase
mountPath: /tmp/misp-gpg.passphrase
subPath: passphrase
- name: gnupg-home
mountPath: {{ .homeDirectory }}
command:
- 'gpg'
- '--homedir'
- {{ .homeDirectory }}
- '--batch'
- '--passphrase-file'
- '/tmp/misp-gpg.passphrase'
- '--import'
- '/tmp/misp-gpg.priv'
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# this is what coolacid's docker compose.yml says
- name: CRON_USER_ID
value: "1"
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
- name: INIT
value: {{ .Values.initialSetup | quote }}
# Don't redirect port 80 traffic to port 443.
- name: NOREDIR
value: "true"
- name: HOSTNAME
{{- if .Values.ingress.enabled }}
value: {{ printf "https://%s" (mustFirst .Values.ingress.hosts).host | quote }}
{{- else }}
value: {{ printf "http://%s" (include "misp.fullname" .) | quote }}
{{- end }}
- name: MYSQL_HOST
{{/* FIXME: can't use the mariadb.primary.fullname template because it gets evaluated in the context of this chart, and doesn't work right */}}
value: {{ .Release.Name }}-mariadb
# we allow MYSQL_PORT to take its default
- name: MYSQL_DATABASE
value: {{ .Values.mariadb.auth.database }}
- name: MYSQL_USER
value: {{ .Values.mariadb.auth.username }}
- name: MYSQL_PASSWORD
value: {{ .Values.mariadb.auth.password }}
#valueFrom:
# secretKeyRef:
# name: {{ printf "%s-%s" .Release.Name "mariadb" | quote }}
# key: mariadb-password
- name: REDIS_FQDN
value: {{ .Release.Name }}-redis-master
{{- if .Values.mispModules.enabled }}
- name: MISP_MODULES_FQDN
value: {{ printf "http://%s" (include "mispModules.fullname" .) | quote }}
{{- end }}
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
#volumeMounts:
# - name: passphrase
# mountPath: {{ .Values.gnupg.passphraseFile }}
# subPath: passphrase
# - name: gnupg-home
# mountPath: {{ .Values.gnupg.homeDirectory }}
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
startupProbe:
httpGet:
path: /
port: http
timeoutSeconds: 5
failureThreshold: 200
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
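As a quick sanity check on the credentials themselves (an editor's sketch, assuming the chart is installed as release misp in namespace misp; adjust the names and image tag to your setup), a throwaway client pod can attempt the same login the MISP container uses:

kubectl run mariadb-client --rm -it --restart=Never -n misp \
  --image=docker.io/bitnami/mariadb:10.6 --command -- \
  mysql -h misp-mariadb -u misp -pmisp misp

If this also fails with ERROR 1045, the problem is in the database state or credentials rather than in the MISP deployment template.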
Thank you for your help.
Greetings, Tob
I have already tried changing the version of MariaDB.

Related

How to convert a Kubernetes deployment into a Kubernetes cron job using a Helm chart

I am running my Spring Boot application Docker image on Kubernetes using a Helm chart.
Below are the details:
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "xyz.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "xyz.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "xyz.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Chart.yaml
apiVersion: v2
name: xyz
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: <APP_VERSION_PLACEHOLDER>
values.yaml
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
### - If we want 3 instances then we mention 3 - then 3 pods will be created on the server
### - For the staging env we usually keep 1
replicaCount: 1
image:
### ---> We can also give local image details here
### ---> We can create an image in a Docker repository and use that image URL here
repository: gcr.io/mgcp-109-xyz-operations/projectname
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: "xyz"
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
schedule: "*/5 * * * *"
### SMS2-40 - There are 2 ways to serve our applications --> 1st -> LoadBalancer or 2nd -> NodePort
service:
type: NodePort
port: 8087
liveness: /actuator/health/liveness
readiness: /actuator/health/readiness
###service:
### type: ClusterIP
### port: 80
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
#application:
# configoveride: "config/application.properties"
templates/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "xyz.selectorLabels" . | nindent 4 }}
I ran my application first without cronjob.yaml.
Once my application was running on Kubernetes, I tried to convert it into a Kubernetes cron job, so I deleted templates/deployment.yaml and added templates/cronjob.yaml instead.
After I deployed my application it ran, but when I do
kubectl get cronjobs
it shows No resources found in default namespace.
What am I doing wrong here? I am unable to figure it out.
I use the following command to install my Helm chart: helm upgrade --install chartname
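(Editor's aside, not from the original thread.) Before digging into the chart itself, it is worth ruling out that the release simply landed in a namespace other than default:

kubectl get cronjobs --all-namespaces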
Not sure whether your file was cut off, but it is not ended properly; an EOF error may appear when the chart is templated.
The end part of the cronjob should be:
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
The full file should look something like this:
apiVersion: batch/v1
kind: CronJob
metadata:
name: test
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
I just tested the above and it's working fine.
Command to test the Helm chart templates:
helm template <chart name> . --output-dir ./yaml
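With --output-dir the rendered manifests are written to disk instead of stdout, so you can read exactly what would be applied. Assuming the chart is named xyz, the rendered CronJob should end up at roughly this path (a sketch; the path depends on the chart name):

helm template xyz . --output-dir ./yaml
cat ./yaml/xyz/templates/cronjob.yaml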
I was also deploying deployment.yaml, which was a mistake, so I deleted the deployment.yaml file and kept only the cronjob.yaml file, whose content is given below:
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{ include "xyz.labels" . | nindent 4 }}
spec:
schedule: "{{ .Values.schedule }}"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: DD_ENV
value: {{ .Values.datadog.env }}
- name: DD_SERVICE
value: {{ include "xyz.name" . }}
- name: DD_VERSION
value: {{ include "xyz.AppVersion" . }}
- name: DD_LOGS_INJECTION
value: "true"
- name: DD_RUNTIME_METRICS_ENABLED
value: "true"
volumeMounts:
- mountPath: /app/config
name: logback
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
volumes:
- configMap:
name: {{ include "xyz.name" . }}
name: logback
backoffLimit: 0
metadata:
{{ with .Values.podAnnotations }}
annotations:
{{ toYaml . | nindent 8 }}
labels:
{{ include "xyz.selectorLabels" . | nindent 8 }}
{{- end }}
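Once this renders and installs cleanly, the CronJob can be checked and even triggered once by hand without waiting for the schedule (an editor's sketch; the resource and job names are placeholders):

kubectl get cronjob <release-fullname>
kubectl create job --from=cronjob/<release-fullname> manual-test-run
kubectl logs job/manual-test-run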

Helm - deep merge in containers field in chart with two deployments

I have a library chart:
# only part from it
containers:
- name: {{ .Chart.Name }}
{{- if .Values.config.command }}
command: {{ .Values.config.command }}
{{- end }}
resources:
{{- toYaml .Values.config.resources | nindent 10 }}
{{- if .Values.config.containerPort}}
ports:
- containerPort: {{ .Values.config.containerPort }}
{{- end}}
envFrom:
{{- if .Values.config.envFrom }}
{{- toYaml .Values.config.envFrom | nindent 10 }}
{{- end }}
...
# from the Common Helm Helper Chart
{{- define "common-chartlib.deployment" -}}
{{- include "common-chartlib.util.merge" (append . "common-chartlib.deployment.tpl") -}}
{{- end -}}
There is an application chart that contains two deployments that differ only in the command field value:
# values
command1: ["123"]
command2: ["456"]
# deployment1
spec:
containers:
- name: deployment1
command: {{ .Values.config.command1 }}
# deployment2
spec:
containers:
- name: deployment1
command: {{ .Values.config.command2 }}
If I run helm template I will get:
containers:
- command:
- 123
name: backend
# other fields like ports, envFrom, resources were removed
volumes:
- name: backend-private-key
secret:
secretName: backend-private-key
As you can see, all fields except name and command were removed after merging.
Expected result:
containers:
- command:
- 123
name: backend
# other fields taken from library chart like ports, envFrom, resources must NOT be removed
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: backend
volumes:
- name: backend-private-key
secret:
secretName: backend-private-key
Output of helm version:
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.6"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:31:32Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/amd64"}
Please help.
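(Editor's note on the likely cause, not part of the original thread.) Helm's merge and mergeOverwrite functions, which the common-chartlib.util.merge helper is typically built on, deep-merge maps but treat lists as atomic values, so the containers list coming from the overriding template replaces the library's list wholesale instead of being merged per element. A minimal sketch that can be dropped into any scratch template to observe this:

{{- $lib := dict "containers" (list (dict "name" "backend" "ports" (list (dict "containerPort" 8000)))) }}
{{- $app := dict "containers" (list (dict "name" "backend" "command" (list "123"))) }}
{{- /* the source dict wins and the list is replaced, not merged element-wise */}}
{{ mergeOverwrite $lib $app | toYaml }}

Rendering this prints a containers entry with only name and command; the ports entry from the first dict is gone, which matches the behaviour described above.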

How to set environment variables in Helm?

I have the following deployment definition:
...
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{ if .Values.env.enabled }}
env:
{{- range .Values.env.vars }}
?????What comes here?????
{{- end }}
{{ end }}
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
...
in the values.yaml, I have defined:
env:
enabled: false
vars: []
What I would like to do is set environment variables dynamically via --set, for instance:
helm template user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set env.enabled=true \
--set env.vars.POSTGRES_URL="jdbc:postgresql://localhost:5432/users" \
--set env.vars.POSTGRES_USER="dbuser" \
./svc
after rendering, it should show:
...
containers:
- name: demo
image: game.example/demo-game
env:
- name: POSTGRES_URL
value: jdbc:postgresql://localhost:5432/users
...
And how do I set the following option via --set?
- name: UI_PROPERTIES_FILE_NAME
valueFrom:
configMapKeyRef:
name: game-demo
key: ui_properties_file_name
You can access values passed with --set through .Values. Note that env.enabled arrives as a boolean, so test it directly rather than comparing it to the string "true":
{{- if .Values.env.enabled }}
env:
  - name: POSTGRES_USER
    value: {{ .Values.env.vars.POSTGRES_USER }}
  - name: POSTGRES_URL
    value: {{ .Values.env.vars.POSTGRES_URL }}
{{- end }}
Try the above.
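For the dynamic case in the question, where arbitrary names are passed with --set env.vars.<NAME>=<value>, ranging over the map is the more general pattern; a sketch, not verified against this exact chart:

{{- if .Values.env.enabled }}
env:
{{- range $name, $value := .Values.env.vars }}
  - name: {{ $name }}
    value: {{ $value | quote }}
{{- end }}
{{- end }}

Structured entries such as the configMapKeyRef example are awkward to express with plain --set; they are usually easier to pass through a values file, or with --set-json on newer Helm versions.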

Getting Readiness & Liveness probe failed: HTTP probe failed with statuscode: 503 when starting Varnish using a Helm chart on Kubernetes

I am trying to create a Helm chart for Varnish to be deployed on a Kubernetes cluster. When running the Helm package, which uses the Varnish image from the Docker community, it throws the following errors:
Readiness probe failed: HTTP probe failed with statuscode: 503
Liveness probe failed: HTTP probe failed with statuscode: 503
I have shared values.yaml, deployment.yaml, varnish-config.yaml, and varnish.vcl below.
Any suggested approach would be welcome.
Values.yaml:
# Default values for tt.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
#vcl 4.0;
#import std;
#backend default {
# .host = "www.varnish-cache.org";
# .port = "80";
# .first_byte_timeout = 60s;
# .connect_timeout = 300s;
#}
varnishBackendService: "www.varnish-cache.org"
varnishBackendServicePort: "80"
image:
repository: varnish
tag: 6.0.6
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
type: ClusterIP
port: 80
#probes:
# enabled: true
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
limits:
memory: 128Mi
requests:
memory: 64Mi
#resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
Deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ include "varnish.fullname" . }}
labels:
app: {{ include "varnish.name" . }}
chart: {{ include "varnish.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
# annotations:
# sidecar.istio.io/rewriteAppHTTPProbers: "true"
spec:
volumes:
- name: varnish-config
configMap:
name: {{ include "varnish.fullname" . }}-varnish-config
items:
- key: default.vcl
path: default.vcl
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: VARNISH_VCL
value: /etc/varnish/default.vcl
volumeMounts:
- name: varnish-config
mountPath : /etc/varnish/
ports:
- name: http
containerPort: 80
protocol: TCP
targetPort: 80
livenessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
failureThreshold: 3
initialDelaySeconds: 45
timeoutSeconds: 10
periodSeconds: 20
readinessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
initialDelaySeconds: 10
timeoutSeconds: 15
periodSeconds: 5
restartPolicy: "Always"
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
varnish-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "varnish.fullname" . }}-varnish-config
labels:
app: {{ template "varnish.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
data:
default.vcl: |-
{{ $file := (.Files.Get "config/varnish.vcl") }}
{{ tpl $file . | indent 4 }}
varnish.vcl:
# VCL version 5.0 is not supported so it should be 4.0 or 4.1 even though actually used Varnish version is 6
vcl 4.1;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
#.host = "{{ default "google.com" .Values.varnishBackendService }}";
.host = "{{ .Values.varnishBackendService }}";
.port = "{{ .Values.varnishBackendServicePort }}";
#.port = "{{ default "80" .Values.varnishBackendServicePort }}";
.first_byte_timeout = 60s;
.connect_timeout = 300s ;
.probe = {
.url = "/";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
backend server2 {
.host = "74.125.24.105:80";
.probe = {
.url = "/";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
import directors;
sub vcl_init {
new vdir = directors.round_robin();
vdir.add_backend(default);
vdir.add_backend(server2);
}
#sub vcl_recv {
# if (req.url ~ "/healthcheck"){
# error 200 "imok";
# set req.http.Connection = "close";
# }
#}
The fact that Kubernetes returns an HTTP 503 error for both the readiness and the liveness probes means that there's probably something wrong with the connection to your backend.
Interestingly, that's beside the point. Those probes aren't there to perform an end-to-end test of your HTTP flow; they are only there to verify that the service they are monitoring is responding.
That's why you can just return a synthetic HTTP response when capturing requests that point to /healthcheck.
Here's the VCL code to do it:
sub vcl_recv {
if(req.url == "/healthcheck") {
return(synth(200,"OK"));
}
}
That doesn't explain why you're getting an HTTP 503 error, but at least the probes will work.
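To confirm the synthetic response once the updated VCL is live, port-forwarding to the deployment and hitting /healthcheck should return the synthesized 200 (a sketch; the deployment name is a placeholder):

kubectl port-forward deployment/<varnish-deployment> 8080:80
curl -i http://localhost:8080/healthcheck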

Getting error: Varnish returns HTTP/1.1 503 Backend fetch failed / X-Cache: uncached when executing curl -IL (external IP)

I have created a Helm chart for a Varnish cache server running in a Kubernetes cluster. While testing with the generated external IP, it throws the error shared below.
HTTP/1.1 503 Backend fetch failed
Date: Tue, 17 Mar 2020 08:20:52 GMT
Server: Varnish
Content-Type: text/html; charset=utf-8
Retry-After: 5
X-Varnish: 570521
Age: 0
Via: 1.1 varnish (Varnish/6.3)
X-Cache: uncached
Content-Length: 283
Connection: keep-alive
I am sharing varnish.vcl, values.yaml, and deployment.yaml below. Any suggestions on how to resolve this? I have hardcoded the backend server as .host = "www.varnish-cache.org" with port "80". My requirement is that when executing curl -IL I get a cached response (X-Cache: cached), not the uncached response served straight from the backend as shown above.
varnish.vcl:
# VCL version 5.0 is not supported so it should be 4.0 or 4.1 even though actually used Varnish version is 6
vcl 4.1;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
{{ .Values.varnishconfigData | indent 2 }}
sub vcl_recv {
if(req.url == "/healthcheck") {
return(synth(200,"OK"));
}
}
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "cached";
} else {
set resp.http.X-Cache = "uncached";
}
}
values.yaml:
# Default values for tt.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
#varnishBackendService: "www.varnish-cache.org"
#varnishBackendServicePort: "80"
image:
repository: varnish
tag: 6.3
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
# type: ClusterIP
type: LoadBalancer
port: 80
# externalIPs: 192.168.0.10
varnishconfigData: |-
backend default {
.host = "www.varnish-cache.org";
.host = "100.68.38.132"
.port = "80";
.first_byte_timeout = 60s;
.connect_timeout = 300s ;
.probe = {
.url = "/";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
sub vcl_backend_response {
set beresp.ttl = 5m;
}
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
limits:
memory: 128Mi
requests:
memory: 64Mi
#resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
Deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ include "varnish.fullname" . }}
labels:
app: {{ include "varnish.name" . }}
chart: {{ include "varnish.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
spec:
volumes:
- name: varnish-config
configMap:
name: {{ include "varnish.fullname" . }}-varnish-config
items:
- key: default.vcl
path: default.vcl
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
# command: ["/bin/sh"]
# args: ["-c", "while true; do service varnishd status, varnishlog; sleep 10;done"]
env:
- name: VARNISH_VCL
value: /etc/varnish/default.vcl
volumeMounts:
- name: varnish-config
mountPath : /etc/varnish/
ports:
- name: http
containerPort: 80
protocol: TCP
targetPort: 80
livenessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
failureThreshold: 3
initialDelaySeconds: 45
timeoutSeconds: 10
periodSeconds: 20
readinessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
initialDelaySeconds: 10
timeoutSeconds: 15
periodSeconds: 5
restartPolicy: "Always"
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
The HTTP/1.1 503 Backend fetch failed error indicates that Varnish is unable to connect to the backend host and port.
I advise you to try some good old manual debugging:
Access the bash shell of one of the containers
Open /etc/varnish/default.vcl and check the exact hostname & port that were parsed into the backend definition
Make sure curl is installed and try to curl the hostname on that specific port
Maybe even install telnet and try to see if the port of the hostname is accepting connections
Basically you'll try to figure out if there is a network configuration that is prohibiting you from making an outbound connection, or if there's something else preventing Varnish from making the fetch.
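A minimal sketch of such a debugging session (pod name, namespace, and the rendered backend host are placeholders; the apt-get commands assume the Debian-based official varnish image):

kubectl exec -it <varnish-pod> -n <namespace> -- bash
# inside the container:
cat /etc/varnish/default.vcl                      # check the rendered .host and .port
apt-get update && apt-get install -y curl telnet
curl -I http://<backend-host>:80/
telnet <backend-host> 80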