I'm using HashiCorp Vault in Kubernetes. I'm trying to mount a secret file into the main folder where my application resides: the secret would end up at /usr/share/nginx/html/.env, while the application files are in /usr/share/nginx/html. But the container does not start because of that. I suspect that /usr/share/nginx/html was overwritten by Vault (annotation: vault.hashicorp.com/secret-volume-path). How can I mount only the file /usr/share/nginx/html/.env?
My annotations:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: configs/data/app/dev
vault.hashicorp.com/agent-inject-template-.env: |
{{- with secret (print "configs/data/app/dev") -}}{{- range $k, $v := .Data.data -}}
{{ $k }}={{ $v }}
{{ end }}{{- end -}}
vault.hashicorp.com/role: app
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html
I tried to replicate the use case, but I got an error
2022/10/21 06:42:12 [error] 29#29: *9 directory index of "/usr/share/nginx/html/" is forbidden, client: 20.1.48.169, server: localhost, request: "GET / HTTP/1.1", host: "20.1.55.62:80"
So it seems like Vault changed the directory permissions as well when it created .env in that path. Here is the config:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: kv/develop/us-west-2/app1-secrets
vault.hashicorp.com/agent-inject-template-.env: |
"{{ with secret "kv/develop/us-west-2/app1-secrets" }}
{{ range $k, $v := .Data.data }}
{{ $k }} = "{{ $v }}"
{{ end }}
{{ end }} "
vault.hashicorp.com/agent-limits-ephemeral: ""
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html/
vault.hashicorp.com/agent-inject-file-.env: .env
vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
vault.hashicorp.com/role: rolename
The workaround was to override the command of the desired container; for this use case, I used nginx:
command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
Here is the complete example with dummy values for my-app:
apiVersion: apps/v1
kind: Deployment
metadata:
name: debug-app
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
annotations:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: kv/my-app/develop/us-west-2/develop-my-app
vault.hashicorp.com/agent-inject-template-.env: |
"{{ with secret "kv/my-app/develop/us-west-2/develop-my-app" }}
{{ range $k, $v := .Data.data }}
{{ $k }} = "{{ $v }}"
{{ end }}
{{ end }} "
vault.hashicorp.com/agent-limits-ephemeral: ""
vault.hashicorp.com/secret-volume-path: /vault/secret/
vault.hashicorp.com/agent-inject-file-.env: .env
vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
vault.hashicorp.com/role: my-app-develop-my-app
spec:
serviceAccountName: develop-my-app
containers:
- name: debug
image: nginx
command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
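To double-check that the workaround produced only the file and left the rest of the web root intact, you could exec into the running container; this is just a quick verification sketch using the deployment and container names from the example above:
# the .env file should sit next to the normal nginx content
kubectl exec deploy/debug-app -c debug -- ls -la /usr/share/nginx/html
# and contain the rendered key/value lines from the template
kubectl exec deploy/debug-app -c debug -- cat /usr/share/nginx/html/.env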
I am currently working with Helm and Kubernetes.
I would like to deploy MISP in Kubernetes, but unfortunately the Helm chart always fails on the MariaDB.
In the logs of the MariaDB pod I see the following error message, but unfortunately I currently have no idea how to fix this.
2022-11-23 11:13:29 0 [Note] /opt/bitnami/mariadb/sbin/mysqld: ready for connections.
Version: '10.6.11-MariaDB' socket: '/opt/bitnami/mariadb/tmp/mysql.sock' port: 3306 Source distribution
2022-11-23 11:13:31 3 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
Reading datadir from the MariaDB server failed. Got the following error when executing the 'mysql' command line client
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
FATAL ERROR: Upgrade failed
Here is my values.yaml:
# Default values for misp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: coolacid/misp-docker
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart
# appVersion. N.B. in the particular case of coolacid's
# Dockerization of MISP, the misp-docker repo has multiple different
# images, and the tags not only distinguish between versions, but
# also between images.
tag: ""
imagePullSecrets: []
nameOverride: "misp"
fullnameOverride: "misp-chart"
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
#
# It appears some of the supervisord scripts need to be root,
# because they write files in /etc/cron.d.
#
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
global:
storageClass: local-path
# Mariadb chart defaults to a single node.
mariadb:
# If you don't set mariadb.auth.password and
# mariadb.auth.root_password, you cannot effectively helm upgrade
# this chart.
auth:
username: misp
database: misp
password: misp
#root_password: misp
image:
# Without this, you don't get any logs the database server puts
# out, only nice colorful things said by supervisord scripts, such
# as "===> Starting the database."
debug: true
# Redis chart settings here are for a single node.
redis:
usePassword: false
cluster:
enabled: false
master:
persistence:
storageClass: local-path
mispModules:
enabled: true
# A hostname to connect to Redis. Ignored if empty.
redis:
hostname: ""
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
initialSetup: true
# Creating the GNUPG secrets:
# pwgen -s 32
# kubectl create secret -n misp generic --from-literal='passphrase=<PASSPHRASE>' misp-gnupg-passphrase
# cd /tmp
# mkdir mgpgh
# gpg --homedir=mgpgh --gen-key
# # ^^ when you are generating the key you say what email address it is for
# mkdir mgpghs
# gpg --homedir=mgpgh --export-secret-keys -a -o mgpghs/gnupg-private-key
# kubectl create secret -n misp generic --from-file=mgpghs misp-gnupg-private-key
# rm -rf mgpgh mgpghs
gnupg:
# A Secret containing a GnuPG private key. You must construct this
# yourself.
privateKeySecret: "misp-gnupg-private-key"
# A Secret with the passphrase to unlock the private key.
passphraseSecret: "misp-gnupg-passphrase"
# The email address for which the key was created.
emailAddress: "me@mycompany.com"
# This is constructed by the container's scripts; don't change it
#homeDirectory: "/var/www/.gnupg"
homeDirectory: "/var/www/MISP/.gnupg"
passphraseFile: "/var/www/MISP/.gnupg-passphrase"
importing:
image:
repository: 'olbat/gnupg'
pullPolicy: IfNotPresent
tag: 'light'
# Authentication/authorization via OpenID Connect. See
# <https://github.com/MISP/MISP/tree/2.4/app/Plugin/OidcAuth>. Values
# here are named with snake_case according to the convention in that
# documentation, not camelCase as is usual with Helm.
#oidc:
# Use OIDC for authn/authz.
# enabled: false
# provider_url: "https://keycloak.example.com/auth/realms/example_realm/protocol/openid-connect/auth"
# client_id: "misp"
# client_secret: "01234567-5768-abcd-cafe-012345670123"
and here is my templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "misp.fullname" . }}
labels:
{{- include "misp.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "misp.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "misp.selectorLabels" . | nindent 8 }}
spec:
volumes:
- name: gnupg-home
emptyDir: {}
{{- with .Values.gnupg }}
- name: private-key
secret:
secretName: {{ .privateKeySecret }}
- name: passphrase
secret:
secretName: {{ .passphraseSecret }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "misp.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: "gnupg-import"
securityContext:
runAsNonRoot: true
{{- /* FIXME: is this always right? */}}
runAsUser: 33
{{- with .Values.gnupg }}
{{- with .importing.image }}
image: {{ printf "%s:%s" .repository .tag | quote }}
imagePullPolicy: {{ .pullPolicy }}
{{- end }}
volumeMounts:
- name: private-key
mountPath: /tmp/misp-gpg.priv
subPath: gnupg-private-key
- name: passphrase
mountPath: /tmp/misp-gpg.passphrase
subPath: passphrase
- name: gnupg-home
mountPath: {{ .homeDirectory }}
command:
- 'gpg'
- '--homedir'
- {{ .homeDirectory }}
- '--batch'
- '--passphrase-file'
- '/tmp/misp-gpg.passphrase'
- '--import'
- '/tmp/misp-gpg.priv'
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# this is what coolacid's docker compose.yml says
- name: CRON_USER_ID
value: "1"
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
- name: INIT
value: {{ .Values.initialSetup | quote }}
# Don't redirect port 80 traffic to port 443.
- name: NOREDIR
value: "true"
- name: HOSTNAME
{{- if .Values.ingress.enabled }}
value: {{ printf "https://%s" (mustFirst .Values.ingress.hosts).host | quote }}
{{- else }}
value: {{ printf "http://%s" (include "misp.fullname" .) | quote }}
{{- end }}
- name: MYSQL_HOST
{{/* FIXME: can't use the mariadb.primary.fullname template because it gets evaluated in the context of this chart, and doesn't work right */}}
value: {{ .Release.Name }}-mariadb
# we allow MYSQL_PORT to take its default
- name: MYSQL_DATABASE
value: {{ .Values.mariadb.auth.database }}
- name: MYSQL_USER
value: {{ .Values.mariadb.auth.username }}
- name: MYSQL_PASSWORD
value: {{ .Values.mariadb.auth.password }}
#valueFrom:
# secretKeyRef:
# name: {{ printf "%s-%s" .Release.Name "mariadb" | quote }}
# key: mariadb-password
- name: REDIS_FQDN
value: {{ .Release.Name }}-redis-master
{{- if .Values.mispModules.enabled }}
- name: MISP_MODULES_FQDN
value: {{ printf "http://%s" (include "mispModules.fullname" .) | quote }}
{{- end }}
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
#volumeMounts:
# - name: passphrase
# mountPath: {{ .Values.gnupg.passphraseFile }}
# subPath: passphrase
# - name: gnupg-home
# mountPath: {{ .Values.gnupg.homeDirectory }}
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
startupProbe:
httpGet:
path: /
port: http
timeoutSeconds: 5
failureThreshold: 200
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Thank you for your help.
Greetings, Tob
I have tried changing the version of MariaDB.
I have a Helm chart with a deployment.yaml that has the following params:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: {{ .Values.newAppName }}
chart: {{ template "newApp.chart" . }}
release: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
name: {{ .Values.deploymentName }}
spec:
replicas: {{ .Values.numReplicas }}
selector:
matchLabels:
app: {{ .Values.newAppName }}
template:
metadata:
labels:
app: {{ .Values.newAppName }}
namespace: {{ .Release.Namespace }}
annotations:
some_annotation: val
some_annotation: val
spec:
serviceAccountName: {{ .Values.podRoleName }}
containers:
- env:
- name: ENV_VAR1
value: {{ .Values.env_var_1 }}
image: {{ .Values.newApp }}:{{ .Values.newAppVersion }}
imagePullPolicy: Always
command: ["/opt/myDir/bin/newApp"]
args: ["-c", "/etc/config/newApp/{{ .Values.newAppConfigFileName }}"]
name: {{ .Values.newAppName }}
ports:
- containerPort: {{ .Values.newAppTLSPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
readinessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 2
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
volumeMounts:
- mountPath: /etc/config/newApp
name: config-volume
readOnly: true
- mountPath: /etc/config/metrics
name: metrics-volume
readOnly: true
- mountPath: /etc/version/container
name: container-info-volume
readOnly: true
- name: {{ template "newAppClient.name" . }}-client
image: {{ .Values.newAppClientImage }}:{{ .Values.newAppClientVersion }}
imagePullPolicy: Always
args: ["run", "--server", "--config-file=/newAppClientPath/config.yaml", "--log-level=debug", "/newAppClientPath/pl"]
volumeMounts:
- name: newAppClient-files
mountPath: /newAppClient-path
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
volumes:
- name: config-volume
configMap:
name: {{ .Values.newAppConfigMapName }}
- name: container-info-volume
configMap:
name: {{ .Values.containerVersionConfigMapName }}
- name: metrics-volume
configMap:
name: {{ .Values.metricsConfigMapName }}
- name: newAppClient-files
configMap:
name: {{ .Values.newAppClientConfigMapName }}
items:
- key: config
path: config.yaml
This helm chart is consumed by Jenkins and then deployed by Spinnaker onto AWS EKS service.
A security measure that we enforce is that the /root directory should be private in all our containers, so basically it should deny permission when a user tries to access it manually after
kubectl exec -it -n namespace_name pod_name -c container_name bash
into the container.
But when I enter the container terminal, why can I still
cd /root
inside the container when it is running as non-root?
EXPECTED: It should give the following error, which it is not giving:
cd root/
bash: cd: root/: Permission denied
OTHER VALUES THAT MIGHT BE USEFUL TO DEBUG:
Output of "ls -la" inside the container:
dr-xr-x--- 1 root root 18 Jul 26 2021 root
As you can see, r and x are unset for OTHER on the root folder
Output of "id" inside the container:
bash-4.2$ id
uid=1000 gid=0(root) groups=0(root),1000
Using a Helm chart locally to reproduce the error:
The same three securityContext params, when used locally in a simple Go program's Helm chart, yield the desired result.
deployment.yaml of the Helm chart:
apiVersion: {{ template "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "fullname" . }}
template:
metadata:
labels:
app: {{ template "fullname" . }}
spec:
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
Output of ls -la inside the container on local setup:
drwx------ 2 root root 4096 Jan 25 00:00 root
You can always cd into / on a UNIX system as a non-root user, so you can do it inside your container as well. However, creating a file there, for example, should fail with Permission denied.
Check the following.
# Run a container as non-root
docker run -it --rm --user 7447 busybox sh
# Check that it's possible to cd into '/'
cd /
# Try creating file
touch some-file
touch: some-file: Permission denied
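A minimal sketch of the same check against the pod from the question, reusing the placeholder namespace, pod and container names from the kubectl exec command above:
# cd into /root succeeds for this user
kubectl exec -it -n namespace_name pod_name -c container_name -- sh -c 'cd /root && echo ok'
# but creating a file there should still fail with Permission denied
kubectl exec -it -n namespace_name pod_name -c container_name -- touch /root/some-file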
I have defined values.yaml as follows:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
nodetype: free
configHocon: |-
streams {
monitoring {
custom {
uri = ${?URI}
method = ${?METHOD}
}
}
}
And configmap.yaml as follows:
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-streams-configmap
data:
config.hocon: {{ .Values.configHocon | indent 4}}
Lastly, I have defined deployment.yaml as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configmap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need config.hocon written to the /config folder. Can anyone let me know what is wrong with the configuration?
I was able to resolve the issue. The problem was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configMap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
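After upgrading the release with the corrected spelling, a quick way to confirm the mount (the pod name below is the placeholder from the question and will change after the rollout) would be:
# the config.hocon key should now show up as a file under the mount path
kubectl exec -it custom-streams-55b45b7756-fb292 -n streaming -- ls -la /config
kubectl exec -it custom-streams-55b45b7756-fb292 -n streaming -- cat /config/config.hocon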
I am not able to reference a variable inside a nested variable in Helm. I want to retrieve the image name and tag (app1_image) using the value of the app's label field. How can I do that?
values.yaml:
apps:
- name: web-server
label: app1
command: /root/web.sh
port: 80
- name: app-server
label: app2
command: /root/app.sh
port: 8080
app1_image:
name: nginx
tag: v1.0
app2_image:
name: tomcat
tag: v1.0
deployment.yaml:
{{- range $apps := .Values.apps }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $apps.name }}
labels:
app: {{ $apps.name }}
spec:
replicas: 1
selector:
matchLabels:
app:
template:
metadata:
labels:
app: {{ $apps.name }}
spec:
containers:
- name: {{ $apps.name }}
image: {{ $.Values.$apps.label.image }}: {{ $.Values.$apps.label.tag }}
ports:
- containerPort: {{ $apps.port}}
{{- end }}
The core Go text/template language includes an index function that you can use as a more dynamic version of the . operator. Given the values file you show, you could do the lookup (inside the loop) as something like:
{{- $key := printf "%s_image" $apps.label }}
{{- $settings := index $.Values $key | required (printf "could not find top-level settings for %s" $key) }}
- name: {{ $apps.name }}
image: {{ $settings.name }}:{{ $settings.tag }}
You could probably rearrange the layout of the values.yaml file to make this clearer. You also might experiment with what you can provide with multiple helm install -f options to override options at install time; if you can keep all of these settings in one place it is easier to manage.
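For example, one possible rearrangement (just a sketch, not the only option) nests the image settings under each entry in apps, so no index lookup is needed at all:
apps:
  - name: web-server
    label: app1
    command: /root/web.sh
    port: 80
    image:
      name: nginx
      tag: v1.0
  - name: app-server
    label: app2
    command: /root/app.sh
    port: 8080
    image:
      name: tomcat
      tag: v1.0
With that layout, the container image line inside the loop becomes simply:
image: {{ $apps.image.name }}:{{ $apps.image.tag }}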
I am trying to create a Helm chart for Varnish to be deployed and run on a Kubernetes cluster. When running the Helm package, which uses the Varnish image from the Docker community, it throws these errors:
Readiness probe failed: HTTP probe failed with statuscode: 503
Liveness probe failed: HTTP probe failed with statuscode: 503
I have shared values.yaml, deployment.yaml, varnish-config.yaml, and varnish.vcl below.
Any solution approach would be welcome.
Values.yaml:
# Default values for tt.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
#vcl 4.0;
#import std;
#backend default {
# .host = "www.varnish-cache.org";
# .port = "80";
# .first_byte_timeout = 60s;
# .connect_timeout = 300s;
#}
varnishBackendService: "www.varnish-cache.org"
varnishBackendServicePort: "80"
image:
repository: varnish
tag: 6.0.6
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
type: ClusterIP
port: 80
#probes:
# enabled: true
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
limits:
memory: 128Mi
requests:
memory: 64Mi
#resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
Deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ include "varnish.fullname" . }}
labels:
app: {{ include "varnish.name" . }}
chart: {{ include "varnish.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ include "varnish.name" . }}
release: {{ .Release.Name }}
# annotations:
# sidecar.istio.io/rewriteAppHTTPProbers: "true"
spec:
volumes:
- name: varnish-config
configMap:
name: {{ include "varnish.fullname" . }}-varnish-config
items:
- key: default.vcl
path: default.vcl
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: VARNISH_VCL
value: /etc/varnish/default.vcl
volumeMounts:
- name: varnish-config
mountPath : /etc/varnish/
ports:
- name: http
containerPort: 80
protocol: TCP
targetPort: 80
livenessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
failureThreshold: 3
initialDelaySeconds: 45
timeoutSeconds: 10
periodSeconds: 20
readinessProbe:
httpGet:
path: /healthcheck
port: http
port: 80
initialDelaySeconds: 10
timeoutSeconds: 15
periodSeconds: 5
restartPolicy: "Always"
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
varnish-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "varnish.fullname" . }}-varnish-config
labels:
app: {{ template "varnish.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
data:
default.vcl: |-
{{ $file := (.Files.Get "config/varnish.vcl") }}
{{ tpl $file . | indent 4 }}
varnish.vcl:
# VCL version 5.0 is not supported, so it should be 4.0 or 4.1 even though the Varnish version actually in use is 6
vcl 4.1;
import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
#.host = "{{ default "google.com" .Values.varnishBackendService }}";
.host = "{{ .Values.varnishBackendService }}";
.port = "{{ .Values.varnishBackendServicePort }}";
#.port = "{{ default "80" .Values.varnishBackendServicePort }}";
.first_byte_timeout = 60s;
.connect_timeout = 300s ;
.probe = {
.url = "/";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
backend server2 {
.host = "74.125.24.105:80";
.probe = {
.url = "/";
.timeout = 1s;
.interval = 5s;
.window = 5;
.threshold = 3;
}
}
import directors;
sub vcl_init {
new vdir = directors.round_robin();
vdir.add_backend(default);
vdir.add_backend(server2);
}
#sub vcl_recv {
# if (req.url ~ "/healthcheck"){
# error 200 "imok";
# set req.http.Connection = "close";
# }
#}
The fact that Kubernetes returns an HTTP 503 error for both the readiness and the liveness probes means that there's probably something wrong with the connection to your backend.
Interestingly, that's beside the point. Those probes aren't there to perform an end-to-end test of your HTTP flow. The probes are only there to verify that the service they are monitoring is responding.
That's why you can just return a synthetic HTTP response when capturing requests that point to /healthcheck.
Here's the VCL code to do it:
sub vcl_recv {
if(req.url == "/healthcheck") {
return(synth(200,"OK"));
}
}
That doesn't explain why you're getting an HTTP 503 error, but at least the probes will work.
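To dig into the remaining HTTP 503 itself, you could check whether Varnish considers the backend healthy from inside the running pod; the pod name here is a placeholder, and this assumes the varnishadm and varnishlog binaries are available in the image:
# a backend reported as "Sick" means the probe defined in the VCL is failing
kubectl exec -it <varnish-pod-name> -- varnishadm backend.list
# watch the health probe results as they happen
kubectl exec -it <varnish-pod-name> -- varnishlog -g raw -i Backend_health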