I have a library chart:
# only part from it
      containers:
      - name: {{ .Chart.Name }}
        {{- if .Values.config.command }}
        command: {{ .Values.config.command }}
        {{- end }}
        resources:
          {{- toYaml .Values.config.resources | nindent 10 }}
        {{- if .Values.config.containerPort }}
        ports:
        - containerPort: {{ .Values.config.containerPort }}
        {{- end }}
        envFrom:
          {{- if .Values.config.envFrom }}
          {{- toYaml .Values.config.envFrom | nindent 10 }}
          {{- end }}
...
# from the Common Helm Helper Chart
{{- define "common-chartlib.deployment" -}}
{{- include "common-chartlib.util.merge" (append . "common-chartlib.deployment.tpl") -}}
{{- end -}}
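For reference, that merge helper in the common chart family is usually implemented by rendering both templates, parsing them with fromYaml, and combining the results with sprig's merge; a minimal sketch of that pattern, assuming the stock implementation (verify against your copy of the helper):
{{- define "common-chartlib.util.merge" -}}
{{- $top := first . -}}
{{- $overrides := fromYaml (include (index . 1) $top) | default (dict) -}}
{{- $tpl := fromYaml (include (index . 2) $top) | default (dict) -}}
{{- toYaml (merge $overrides $tpl) -}}
{{- end -}}
Note that merge combines maps key by key but does not merge lists element by element; whichever side supplies a list (such as containers) supplies it wholesale, which would explain why fields defined only in the library template disappear when the override also defines the containers list.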
There is an application chart that contains two deployments that differ only in the command field value:
# values
command1: ["123"]
command2: ["456"]

# deployment1
spec:
  containers:
  - name: deployment1
    command: {{ .Values.config.command1 }}

# deployment2
spec:
  containers:
  - name: deployment1
    command: {{ .Values.config.command2 }}
If I run helm template, I get:
containers:
- command:
  - 123
  name: backend
  # other fields like ports, envFrom, resources were removed
volumes:
- name: backend-private-key
  secret:
    secretName: backend-private-key
As you can see, all fields except name and command were removed after merging.
Expected result:
containers:
- command:
  - 123
  name: backend
  # other fields taken from library chart like ports, envFrom, resources must NOT be removed
  ports:
  - containerPort: 8000
  envFrom:
  - configMapRef:
      name: backend
volumes:
- name: backend-private-key
  secret:
    secretName: backend-private-key
Output of helm version:
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.6"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:31:32Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/amd64"}
Please help.
I am running my Spring Boot application Docker image on Kubernetes using a Helm chart.
Below are the details:
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "xyz.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "xyz.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "xyz.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Chart.yaml
apiVersion: v2
name: xyz
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: <APP_VERSION_PLACEHOLDER>
values.yaml
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
### - If we want 3 instances then we will mention 3 - then 3 pods will be created on the server
### - For staging env we usually keep 1
replicaCount: 1
image:
### --->We can also give local Image details also here
### --->We can create image in Docker repository and use that image URL here
repository: gcr.io/mgcp-109-xyz-operations/projectname
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: "xyz"
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
schedule: "*/5 * * * *"
###SMS2-40 - There are 2 ways how we want to serve our applications-->1st->LoadBalancer or 2-->NodePort
service:
type: NodePort
port: 8087
liveness: /actuator/health/liveness
readiness: /actuator/health/readiness
###service:
### type: ClusterIP
### port: 80
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
#application:
# configoveride: "config/application.properties"
templates/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "xyz.selectorLabels" . | nindent 4 }}
I ran my application first without cronjob.yaml.
Once my application was running on Kubernetes, I tried to convert it into a Kubernetes CronJob, so I deleted templates/deployment.yaml and added templates/cronjob.yaml instead.
After I deployed my application it ran, but when I do
kubectl get cronjobs
it shows No resources found in default namespace.
What am I doing wrong here? I am unable to figure it out.
I use the command below to install my Helm chart: helm upgrade --install chartname
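(As an aside, helm upgrade --install expects both a release name and a chart path, and kubectl only looks in the current namespace by default; a hedged example where xyz, ./ and staging are placeholders for the real release name, chart directory and namespace:
helm upgrade --install xyz ./ --namespace staging --create-namespace
kubectl get cronjobs -n staging
)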
Not sure if your file got cut off, but it is not ended properly; there may be an EOF error when the chart is rendered.
The end part of the CronJob should be:
          {{- with .Values.nodeSelector }}
          nodeSelector:
            {{- toYaml . | nindent 12 }}
          {{- end }}
The full file should look something like this:
apiVersion: batch/v1
kind: CronJob
metadata:
name: test
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
I just tested the above and it works fine.
Command to test the Helm chart templates:
helm template <chart name> . --output-dir ./yaml
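For example (assuming the chart is named xyz and lives in the current directory), the rendered manifests are written under the chart name inside the output directory and can be inspected or dry-run before installing:
helm template xyz . --output-dir ./yaml
ls ./yaml/xyz/templates
kubectl apply --dry-run=client -f ./yaml/xyz/templates/cronjob.yaml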
I was also deploying deployment.yaml, which was a mistake, so I deleted the deployment.yaml file and kept only the cronjob.yaml file, whose content is given below:
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{ include "xyz.labels" . | nindent 4 }}
spec:
schedule: "{{ .Values.schedule }}"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: DD_ENV
value: {{ .Values.datadog.env }}
- name: DD_SERVICE
value: {{ include "xyz.name" . }}
- name: DD_VERSION
value: {{ include "xyz.AppVersion" . }}
- name: DD_LOGS_INJECTION
value: "true"
- name: DD_RUNTIME_METRICS_ENABLED
value: "true"
volumeMounts:
- mountPath: /app/config
name: logback
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
volumes:
- configMap:
name: {{ include "xyz.name" . }}
name: logback
backoffLimit: 0
metadata:
{{ with .Values.podAnnotations }}
annotations:
{{ toYaml . | nindent 8 }}
labels:
{{ include "xyz.selectorLabels" . | nindent 8 }}
{{- end }}
I am currently dealing with Helm and Kubernetes.
I would like to deploy MISP in Kubernetes, but unfortunately the Helm chart always fails with the MariaDB.
In the logs of the MariaDB pod I see the following error message, but I currently have no idea how to fix this.
2022-11-23 11:13:29 0 [Note] /opt/bitnami/mariadb/sbin/mysqld: ready for connections.
Version: '10.6.11-MariaDB' socket: '/opt/bitnami/mariadb/tmp/mysql.sock' port: 3306 Source distribution
2022-11-23 11:13:31 3 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
Reading datadir from the MariaDB server failed. Got the following error when executing the 'mysql' command line client
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
FATAL ERROR: Upgrade failed
Here is my values.yaml:
# Default values for misp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: coolacid/misp-docker
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart
# appVersion. N.B. in the particular case of coolacid's
# Dockerization of MISP, the misp-docker repo has multiple different
# images, and the tags not only distinguish between versions, but
# also between images.
tag: ""
imagePullSecrets: []
nameOverride: "misp"
fullnameOverride: "misp-chart"
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
#
# It appears some of the supervisord scripts need to be root,
# because they write files in /etc/cron.d.
#
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
global:
storageClass: local-path
# Mariadb chart defaults to a single node.
mariadb:
# If you don't set mariadb.auth.password and
# mariadb.auth.root_password, you cannot effectively helm upgrade
# this chart.
auth:
username: misp
database: misp
password: misp
#root_password: misp
image:
# Without this, you don't get any logs the database server puts
# out, only nice colorful things said by supervisord scripts, such
# as "===> Starting the database."
debug: true
# Redis chart settings here are for a single node.
redis:
usePassword: false
cluster:
enabled: false
master:
persistence:
storageClass: local-path
mispModules:
enabled: true
# A hostname to connect to Redis. Ignored if empty.
redis:
hostname: ""
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
initialSetup: true
# Creating the GNUPG secrets:
# pwgen -s 32
# kubectl create secret -n misp generic --from-literal='passphrase=<PASSPHRASE>' misp-gnupg-passphrase
# cd /tmp
# mkdir mgpgh
# gpg --homedir=mgpgh --gen-key
# # ^^ when you are generating the key you say what email address it is for
# mkdir mgpghs
# gpg --homedir=mgpgh --export-secret-keys -a -o mgpghs/gnupg-private-key
# kubectl create secret -n misp generic --from-file=mgpghs misp-gnupg-private-key
# rm -rf mgpgh mgpghs
gnupg:
# A Secret containing a GnuPG private key. You must construct this
# yourself.
privateKeySecret: "misp-gnupg-private-key"
# A Secret with the passphrase to unlock the private key.
passphraseSecret: "misp-gnupg-passphrase"
# The email address for which the key was created.
emailAddress: "me@mycompany.com"
# This is constructed by the container's scripts; don't change it
#homeDirectory: "/var/www/.gnupg"
homeDirectory: "/var/www/MISP/.gnupg"
passphraseFile: "/var/www/MISP/.gnupg-passphrase"
importing:
image:
repository: 'olbat/gnupg'
pullPolicy: IfNotPresent
tag: 'light'
# Authentication/authorization via OpenID Connect. See
# <https://github.com/MISP/MISP/tree/2.4/app/Plugin/OidcAuth>. Values
# here are named with snake_case according to the convention in that
# documentation, not camelCase as is usual with Helm.
#oidc:
# Use OIDC for authn/authz.
# enabled: false
# provider_url: "https://keycloak.example.com/auth/realms/example_realm/protocol/openid-connect/auth"
# client_id: "misp"
# client_secret: "01234567-5768-abcd-cafe-012345670123"
and here is my templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "misp.fullname" . }}
labels:
{{- include "misp.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "misp.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "misp.selectorLabels" . | nindent 8 }}
spec:
volumes:
- name: gnupg-home
emptyDir: {}
{{- with .Values.gnupg }}
- name: private-key
secret:
secretName: {{ .privateKeySecret }}
- name: passphrase
secret:
secretName: {{ .passphraseSecret }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "misp.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: "gnupg-import"
securityContext:
runAsNonRoot: true
{{- /* FIXME: is this always right? */}}
runAsUser: 33
{{- with .Values.gnupg }}
{{- with .importing.image }}
image: {{ printf "%s:%s" .repository .tag | quote }}
imagePullPolicy: {{ .pullPolicy }}
{{- end }}
volumeMounts:
- name: private-key
mountPath: /tmp/misp-gpg.priv
subPath: gnupg-private-key
- name: passphrase
mountPath: /tmp/misp-gpg.passphrase
subPath: passphrase
- name: gnupg-home
mountPath: {{ .homeDirectory }}
command:
- 'gpg'
- '--homedir'
- {{ .homeDirectory }}
- '--batch'
- '--passphrase-file'
- '/tmp/misp-gpg.passphrase'
- '--import'
- '/tmp/misp-gpg.priv'
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# this is what coolacid's docker compose.yml says
- name: CRON_USER_ID
value: "1"
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
- name: INIT
value: {{ .Values.initialSetup | quote }}
# Don't redirect port 80 traffic to port 443.
- name: NOREDIR
value: "true"
- name: HOSTNAME
{{- if .Values.ingress.enabled }}
value: {{ printf "https://%s" (mustFirst .Values.ingress.hosts).host | quote }}
{{- else }}
value: {{ printf "http://%s" (include "misp.fullname" .) | quote }}
{{- end }}
- name: MYSQL_HOST
{{/* FIXME: can't use the mariadb.primary.fullname template because it gets evaluated in the context of this chart, and doesn't work right */}}
value: {{ .Release.Name }}-mariadb
# we allow MYSQL_PORT to take its default
- name: MYSQL_DATABASE
value: {{ .Values.mariadb.auth.database }}
- name: MYSQL_USER
value: {{ .Values.mariadb.auth.username }}
- name: MYSQL_PASSWORD
value: {{ .Values.mariadb.auth.password }}
#valueFrom:
# secretKeyRef:
# name: {{ printf "%s-%s" .Release.Name "mariadb" | quote }}
# key: mariadb-password
- name: REDIS_FQDN
value: {{ .Release.Name }}-redis-master
{{- if .Values.mispModules.enabled }}
- name: MISP_MODULES_FQDN
value: {{ printf "http://%s" (include "mispModules.fullname" .) | quote }}
{{- end }}
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
#volumeMounts:
# - name: passphrase
# mountPath: {{ .Values.gnupg.passphraseFile }}
# subPath: passphrase
# - name: gnupg-home
# mountPath: {{ .Values.gnupg.homeDirectory }}
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
startupProbe:
httpGet:
path: /
port: http
timeoutSeconds: 5
failureThreshold: 200
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Thank you for your help.
Greetings, Tob
I tried changing the version of the MariaDB.
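The comment in the values file above already points at one likely cause: without a fixed mariadb.auth.password and root password, a helm upgrade cannot reliably authenticate against a database volume created by an earlier release. A minimal sketch of pinning both (the values are placeholders, and the exact key name for the root password in the Bitnami MariaDB subchart is an assumption to verify against its values.yaml):
mariadb:
  auth:
    username: misp
    database: misp
    password: misp
    rootPassword: change-me   # assumption: the Bitnami subchart key is rootPassword, not root_password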
What am I trying to achieve?
We are using a self-hosted GitLab instance and use GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim")
Now GitLab AutoDevops uses a default helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). So my line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code-snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
name: {{ template "trackableappname" . }}
annotations:
{{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
{{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
labels:
app: {{ template "appname" . }}
track: "{{ .Values.application.track }}"
tier: "{{ .Values.application.tier }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
selector:
matchLabels:
app: {{ template "appname" . }}
track: "{{ .Values.application.track }}"
tier: "{{ .Values.application.tier }}"
release: {{ .Release.Name }}
{{- end }}
replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
strategy:
type: {{ .Values.strategyType | quote }}
{{- end }}
template:
metadata:
annotations:
checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
{{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
{{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
labels:
app: {{ template "appname" . }}
track: "{{ .Values.application.track }}"
tier: "{{ .Values.application.tier }}"
release: {{ .Release.Name }}
spec:
imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
containers:
- name: {{ .Chart.Name }}
image: {{ template "imagename" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.application.secretName }}
envFrom:
- secretRef:
name: {{ .Values.application.secretName }}
{{- end }}
env:
{{- if .Values.postgresql.managed }}
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: app-postgres
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: app-postgres
key: password
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: app-postgres
key: privateIP
{{- end }}
- name: DATABASE_URL
value: {{ .Values.application.database_url | quote }}
- name: GITLAB_ENVIRONMENT_NAME
value: {{ .Values.gitlab.envName | quote }}
- name: GITLAB_ENVIRONMENT_URL
value: {{ .Values.gitlab.envURL | quote }}
ports:
- name: "{{ .Values.service.name }}"
containerPort: {{ .Values.service.internalPort }}
livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
httpGet:
path: {{ .Values.livenessProbe.path }}
scheme: {{ .Values.livenessProbe.scheme }}
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
tcpSocket:
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
exec:
command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
httpGet:
path: {{ .Values.readinessProbe.path }}
scheme: {{ .Values.readinessProbe.scheme }}
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
tcpSocket:
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
exec:
command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
volumeMounts:
- mountPath: /data
name: test-pvc
volumes:
- name: test-pvc
persistentVolumeClaim:
claimName: test-api-claim
What is the problem?
Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the script refers to the valueFrom.secretKeyRef.name in the yaml:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: app-postgres
key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the yaml file and I double checked the indentation.
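For reference, in a plain Deployment manifest the two added keys belong at different levels: volumeMounts is nested under the container entry, while volumes sits at the pod spec level inside the template. A minimal sketch, independent of the chart's templating (names and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest
          volumeMounts:
            - name: test-pvc
              mountPath: /data
      volumes:
        - name: test-pvc
          persistentVolumeClaim:
            claimName: test-api-claim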
Two questions
Could there be something wrong with my yaml?
Does anyone know of another solution to achieve my desired behavior? (adding PVC to the deployment so that pods actually use it?)
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default Helm chart.
Check the pvc.yaml and deployment.yaml.
Given that, it should be enough to edit values in .gitlab/auto-deploy-values.yaml to meet your needs. Check defaults in values.yaml for more context.
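A rough sketch of what that values file could look like; the key names under persistence are an assumption from memory and should be checked against the chart's values.yaml linked above:
# .gitlab/auto-deploy-values.yaml
persistence:
  enabled: true
  volumes:
    - name: data
      mount:
        path: /data
      claim:
        accessMode: ReadWriteOnce
        size: 1Gi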
My current setup involves Helm charts and Kubernetes.
I have a requirement where I have to replace a property in the configMap.yaml file with an environment variable declared in the deployment.yaml file.
Here is a section of my configMap.yaml which declares the property file:
data:
  rest.properties: |
    map.dirs=/data/maps
    catalog.dir=/data/catalog
    work.dir=/data/tmp
    map.file.extension={{ .Values.rest.mapFileExtension }}
    unload.time=1
    max.flow.threads=10
    max.map.threads=50
    trace.level=ERROR
    run.mode={{ .Values.runMode }}
    {{- if eq .Values.cache.redis.location "external" }}
    redis.host={{ .Values.cache.redis.host }}
    {{- else if eq .Values.cache.redis.location "internal" }}
    redis.host=localhost
    {{- end }}
    redis.port={{ .Values.cache.redis.port }}
    redis.stem={{ .Values.cache.redis.stem }}
    redis.database={{ .Values.cache.redis.database }}
    redis.logfile=redis.log
    redis.loglevel=notice
    exec.log.dir=/data/logs
    exec.log.file.count=5
    exec.log.file.size=100
    exec.log.level=all
    synchronous.timeout=300
    {{- if .Values.global.linkIntegration.enabled }}
    authentication.enabled=false
    authentication.server=https://{{ .Release.Name }}-product-design-server:443
    config.dir=/opt/runtime/config
    {{- end }}
    {{- if .Values.keycloak.enabled }}
    authentication.keycloak.enabled={{ .Values.keycloak.enabled }}
    authentication.keycloak.serverUrl={{ .Values.keycloak.serverUrl }}
    authentication.keycloak.realmId={{ .Values.keycloak.realmId }}
    authentication.keycloak.clientId={{ .Values.keycloak.clientId }}
    authentication.keycloak.clientSecret=${HIP_KEYCLOAK_CLIENT_SECRET}
    {{- end }}
I need to replace ${HIP_KEYCLOAK_CLIENT_SECRET}, which is defined in the deployment.yaml file as shown below:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- if .Values.keycloak.enabled }}
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
            {{ end }}
The idea is to have the property file in the deployed pod under /opt/runtime/rest.properties.
Here is my complete deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "lnk-service.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "lnk-service.name" . }}
helm.sh/chart: {{ include "lnk-service.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "lnk-service.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "lnk-service.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
{{- if .Values.global.hImagePullSecret }}
imagePullSecrets:
- name: {{ .Values.global.hImagePullSecret }}
{{- end }}
securityContext:
runAsUser: 998
runAsGroup: 997
fsGroup: 997
volumes:
- name: configuration
configMap:
name: {{ include "lnk-service.fullname" . }}-server-config
- name: core-configuration
configMap:
name: {{ include "lnk-service.fullname" . }}-server-core-config
- name: hch-configuration
configMap:
name: {{ include "lnk-service.fullname" . }}-hch-config
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
{{- if .Values.global.linkIntegration.enabled }}
claimName: lnk-shared-px
{{- else }}
claimName: {{ include "pvc.name" . }}
{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
- name: hch-data
{{- if .Values.global.linkIntegration.enabled }}
persistentVolumeClaim:
claimName: {{ include "unicapvc.fullname" . }}
{{- else }}
emptyDir: {}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
#command: ['/bin/sh']
#args: ['-c', 'echo $HIP_KEYCLOAK_CLIENT_SECRET']
#command: [ "/bin/sh", "-c", "export" ]
#command: [ "/bin/sh", "-ce", "export" ]
command: [ "/bin/sh", "-c", "export --;trap : TERM INT; sleep infinity & wait" ]
#command: ['sh', '-c', 'sed -i "s/REPLACEME/$HIP_KEYCLOAK_CLIENT_SECRET/g" /opt/runtime/rest.properties']
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: "HIP_CLOUD_LICENSE_SERVER_URL"
value: {{ include "license.url" . | quote }}
- name: "HIP_CLOUD_LICENSE_SERVER_ID"
value: {{ include "license.id" . | quote }}
{{- if .Values.keycloak.enabled }}
- name: HIP_KEYCLOAK_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ .Values.keycloak.secret }}
key: clientSecret
{{ end }}
envFrom:
- configMapRef:
name: {{ include "lnk-service.fullname" . }}-server-env
{{- if .Values.rest.extraEnvConfigMap }}
- configMapRef:
name: {{ .Values.rest.extraEnvConfigMap }}
{{- end }}
{{- if .Values.rest.extraEnvSecret }}
- secretRef:
name: {{ .Values.rest.extraEnvSecret }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.image.port }}
protocol: TCP
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: configuration
mountPath: /opt/runtime/rest.properties
subPath: rest.properties
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
I have tried init containers and replacing the string in rest.properties, which works; however, it involves creating volumes with emptyDir.
Can someone kindly tell me if there is a simpler way to do this?
confd will give you a solution: you can tell it to watch the ConfigMap and replace all the environment variable placeholders expected by the file with the env values that have been set.
Change your ConfigMap to create the file rest.properties.template.
Use an InitContainer that runs cat rest.properties.template | envsubst > rest.properties. The InitContainer can use any Docker container that includes envsubst.
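A minimal sketch of that init container approach, reusing names from the deployment above (the image is a placeholder for any image that ships envsubst, and the mount paths are illustrative):
      volumes:
        - name: rest-properties-template
          configMap:
            name: {{ include "lnk-service.fullname" . }}-server-config
        - name: rest-properties
          emptyDir: {}
      initContainers:
        - name: render-rest-properties
          image: envsubst-image:latest   # placeholder: any image providing envsubst (gettext)
          command:
            - sh
            - -c
            - envsubst < /template/rest.properties.template > /work/rest.properties
          env:
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
          volumeMounts:
            - name: rest-properties-template
              mountPath: /template
            - name: rest-properties
              mountPath: /work
      containers:
        - name: {{ .Chart.Name }}
          volumeMounts:
            - name: rest-properties
              mountPath: /opt/runtime/rest.properties
              subPath: rest.properties
The ConfigMap key would then be rest.properties.template instead of rest.properties, as suggested above.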
Thanks for the responses.
Solution 1: use init containers.
Solution 2: change the code to read the value from environment variables.
We chose solution 2.
Thank you all for your responses.
I have defined the values.yaml like the following:
name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
  nodetype: free
configHocon: |-
  streams {
    monitoring {
      custom {
        uri = ${?URI}
        method = ${?METHOD}
      }
    }
  }
And configmap.yaml like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-streams-configmap
data:
  config.hocon: {{ .Values.configHocon | indent 4 }}
Lastly, I have defined the deployment.yaml like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
        - env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          image: {{ .Values.image }}
          name: {{ .Values.name }}
          volumeMounts:
            - name: config-hocon
              mountPath: /config
          ports:
            - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
        - name: config-hocon
          configmap:
            name: custom-streams-configmap
            items:
              - key: config.hocon
                path: config.hocon
status: {}
When I run the container via:
helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
Then the pods are running fine, but I cannot see the config.hocon file in the container:
$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
I need the config.hocon written in the /config folder. Can anyone let me know what is wrong with the configurations?
I was able to resolve the issue. The issue was using configmap in place of configMap in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ default 1 .Values.replicas }}
  strategy: {}
  template:
    spec:
      containers:
        - env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
          image: {{ .Values.image }}
          name: {{ .Values.name }}
          volumeMounts:
            - name: config-hocon
              mountPath: /config
          ports:
            - containerPort: {{ .Values.port }}
      restartPolicy: {{ .Values.restartPolicy }}
      volumes:
        - name: config-hocon
          configMap:
            name: custom-streams-configmap
            items:
              - key: config.hocon
                path: config.hocon
status: {}
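To verify after upgrading the release, the file should now appear in the container (names follow the values above; deploy/custom-streams assumes a reasonably recent kubectl):
helm upgrade custom-streams custom-streams -f values.yaml
kubectl exec -n streaming deploy/custom-streams -- ls /config
kubectl exec -n streaming deploy/custom-streams -- cat /config/config.hocon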