What am I trying to achieve?
We are using a self-hosted GitLab instance and GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read-only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim"); a sketch of both manifests is shown below
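For reference, the two objects look roughly like the following. Only the claim name "test-api-claim" comes from the steps above; the PV name, hostPath path, capacity, access mode and storage class are illustrative assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-api-pv                 # assumed name
spec:
  capacity:
    storage: 1Gi                    # assumed size
  accessModes:
    - ReadWriteOnce
  storageClassName: manual          # assumed; must match the claim below
  hostPath:
    path: /srv/app-files            # assumed directory on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-api-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
(The pod can still mount the volume read-only by setting readOnly: true on its volumeMount.)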
Now GitLab AutoDevops uses a default helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if one is found). My line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
name: {{ template "trackableappname" . }}
annotations:
{{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
{{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
labels:
app: {{ template "appname" . }}
track: "{{ .Values.application.track }}"
tier: "{{ .Values.application.tier }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
selector:
matchLabels:
app: {{ template "appname" . }}
track: "{{ .Values.application.track }}"
tier: "{{ .Values.application.tier }}"
release: {{ .Release.Name }}
{{- end }}
replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
strategy:
type: {{ .Values.strategyType | quote }}
{{- end }}
template:
metadata:
annotations:
checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
{{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
{{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
labels:
app: {{ template "appname" . }}
track: "{{ .Values.application.track }}"
tier: "{{ .Values.application.tier }}"
release: {{ .Release.Name }}
spec:
imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
containers:
- name: {{ .Chart.Name }}
image: {{ template "imagename" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.application.secretName }}
envFrom:
- secretRef:
name: {{ .Values.application.secretName }}
{{- end }}
env:
{{- if .Values.postgresql.managed }}
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: app-postgres
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: app-postgres
key: password
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: app-postgres
key: privateIP
{{- end }}
- name: DATABASE_URL
value: {{ .Values.application.database_url | quote }}
- name: GITLAB_ENVIRONMENT_NAME
value: {{ .Values.gitlab.envName | quote }}
- name: GITLAB_ENVIRONMENT_URL
value: {{ .Values.gitlab.envURL | quote }}
ports:
- name: "{{ .Values.service.name }}"
containerPort: {{ .Values.service.internalPort }}
livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
httpGet:
path: {{ .Values.livenessProbe.path }}
scheme: {{ .Values.livenessProbe.scheme }}
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
tcpSocket:
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
exec:
command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
httpGet:
path: {{ .Values.readinessProbe.path }}
scheme: {{ .Values.readinessProbe.scheme }}
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
tcpSocket:
port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
exec:
command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
volumeMounts:
- mountPath: /data
name: test-pvc
volumes:
- name: test-pvc
persistentVolumeClaim:
claimName: test-api-claim
What is the problem?
Now when I trigger the pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the template refers to the valueFrom.secretKeyRef.name in the YAML:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: app-postgres
key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the YAML file and I have double-checked the indentation.
Two questions
Could there be something wrong with my YAML?
Does anyone know of another solution to achieve my desired behavior (adding the PVC to the deployment so that the pods actually use it)? A generic sketch of where these keys normally sit is shown below for reference.
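For orientation, this is not the auto-deploy chart but a minimal plain-Kubernetes sketch (all names and the image are placeholders except the claim name): volumeMounts belongs inside the container entry, while volumes is a sibling of containers at the pod spec level.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: example-image:latest    # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
              readOnly: true             # matches the read-only requirement
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: test-api-claim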
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default helm chart.
Check the pvc.yaml and deployment.yaml templates.
Given that, it should be enough to edit the values in .gitlab/auto-deploy-values.yaml to meet your needs (see the sketch below). Check the defaults in values.yaml for more context.
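For illustration only, a .gitlab/auto-deploy-values.yaml along these lines should wire an existing claim into the deployment when the chart version includes the persistence templates. The key names below are an assumption about the auto-deploy-app values schema, so verify them against the chart's values.yaml before relying on them.
# .gitlab/auto-deploy-values.yaml (sketch; confirm key names against the
# auto-deploy-app chart version you actually deploy with)
persistence:
  enabled: true
  volumes:
    - name: data
      mount:
        path: /data                  # where the node's files should appear in the pod
      claim:
        claimName: test-api-claim    # the PVC created manually on the cluster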
Related
I am running my Spring Boot application's Docker image on Kubernetes using a Helm chart.
Below are the details:
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "xyz.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "xyz.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "xyz.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Chart.yaml
apiVersion: v2
name: xyz
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: <APP_VERSION_PLACEHOLDER>
values.yaml
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
### - If we want 3 instances then we mention 3; then 3 pods will be created on the server
### - For staging env we usually keep 1
replicaCount: 1
image:
### ---> We can also give local image details here
### ---> We can create an image in a Docker repository and use that image URL here
repository: gcr.io/mgcp-109-xyz-operations/projectname
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: "xyz"
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
schedule: "*/5 * * * *"
### SMS2-40 - There are 2 ways we can serve our applications --> 1st: LoadBalancer or 2nd: NodePort
service:
type: NodePort
port: 8087
liveness: /actuator/health/liveness
readiness: /actuator/health/readiness
###service:
### type: ClusterIP
### port: 80
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
#application:
# configoveride: "config/application.properties"
templates/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "xyz.selectorLabels" . | nindent 4 }}
I ran my application first without cronjob.yaml.
Once my application was running on Kubernetes, I tried to convert it into a Kubernetes CronJob, hence I deleted templates/deployment.yaml and instead added templates/cronjob.yaml.
After I deployed my application it ran, but when I do
kubectl get cronjobs
it shows No resources found in default namespace.
What am I doing wrong here? I am unable to figure it out.
I use the command below to install my Helm chart: helm upgrade --install chartname
Not sure if your file is only pasted in half, but it is not ended properly; an EOF error might appear when the chart is tested.
The end part for the cronjob:
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
The full file should be something like
apiVersion: batch/v1
kind: CronJob
metadata:
name: test
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
I just tested the above and it's working fine.
Command to test the Helm chart template:
helm template <chart name> . --output-dir ./yaml
I was also deploying deployment.yaml, which was a mistake, so I deleted the deployment.yaml file and kept only the cronjob.yaml file, whose content is given below:
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{ include "xyz.labels" . | nindent 4 }}
spec:
schedule: "{{ .Values.schedule }}"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: DD_ENV
value: {{ .Values.datadog.env }}
- name: DD_SERVICE
value: {{ include "xyz.name" . }}
- name: DD_VERSION
value: {{ include "xyz.AppVersion" . }}
- name: DD_LOGS_INJECTION
value: "true"
- name: DD_RUNTIME_METRICS_ENABLED
value: "true"
volumeMounts:
- mountPath: /app/config
name: logback
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
volumes:
- configMap:
name: {{ include "xyz.name" . }}
name: logback
backoffLimit: 0
metadata:
{{ with .Values.podAnnotations }}
annotations:
{{ toYaml . | nindent 8 }}
labels:
{{ include "xyz.selectorLabels" . | nindent 8 }}
{{- end }}
I'm using a Helm chart to control what environment variables are set for a certain container in a deployment.
In my values.yaml, I have an entry called env which is a dictionary:
image:
repository: xxxx.yyyyy.com/myimage
pullPolicy: IfNotPresent
# Enviroment variables that will be passed to the container.
env: {}
Now, I'll pass variables to the env dict using --set:
helm upgrade mydeployment chart --set env.VARIABLE=test
However, this must be transformed into a list to adhere to the Kubernetes YAML schema:
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
# This should come from that dict
env:
- name: VARIABLE
value: "test"
I don't know how to use the template language from Helm (Sprig / Go) to achieve that. Is it even possible?
To iterate through the map, the core Go text/template language provides a range keyword that works on maps and arrays.
{{ range $key, $value := .Values.env }}
...
{{ end }}
Inside of this you can put arbitrary text. Helm doesn't require this to be any particular kind of YAML construct, just so long as the final result is valid YAML. For this setup a typical loop would look like
env:
{{- range $key, $value := .Values.env }}
- name: {{ quote $key }}
value: {{ quote $value }}
{{- end }}
You do need to be careful with indentation here. As a rule of thumb it often will work to include a - "swallow whitespace" indicator inside the open {{ and to not include one inside the close }}. The - name: must be at least as indented as the env: above it (ignoring the range line), and value: must be aligned with name:. I might put all of the template-language lines (the range and end) starting at the first column, even if they're embedded in a structure that's nested more.
spec:
template:
spec:
containers:
- name: {{ template "chart.fullname" . }}
env:
{{- range $key, $value := .Values.env }}
- name: {{ quote $key }}
value: {{ quote $value }}
{{- end }}
image: {{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}
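As a quick sanity check, rendering the chart with two --set values shows the loop expanding the map into the list form Kubernetes expects; note that range walks map keys in sorted key order. The release name and chart path below are placeholders.
# helm template myrelease ./chart --set env.OTHER_VARIABLE=42 --set env.VARIABLE=test
# rendered excerpt:
      env:
        - name: "OTHER_VARIABLE"
          value: "42"
        - name: "VARIABLE"
          value: "test"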
My current setup involves Helm charts and Kubernetes.
I have a requirement where I have to replace a property in the configMap.yaml file with an environment variable declared in the deployment.yaml file.
Here is a section of my configMap.yaml which declares a property file:
data:
rest.properties: |
map.dirs=/data/maps
catalog.dir=/data/catalog
work.dir=/data/tmp
map.file.extension={{ .Values.rest.mapFileExtension }}
unload.time=1
max.flow.threads=10
max.map.threads=50
trace.level=ERROR
run.mode={{ .Values.runMode }}
{{- if eq .Values.cache.redis.location "external" }}
redis.host={{ .Values.cache.redis.host }}
{{- else if eq .Values.cache.redis.location "internal" }}
redis.host=localhost
{{- end }}
redis.port={{ .Values.cache.redis.port }}
redis.stem={{ .Values.cache.redis.stem }}
redis.database={{ .Values.cache.redis.database }}
redis.logfile=redis.log
redis.loglevel=notice
exec.log.dir=/data/logs
exec.log.file.count=5
exec.log.file.size=100
exec.log.level=all
synchronous.timeout=300
{{- if .Values.global.linkIntegration.enabled }}
authentication.enabled=false
authentication.server=https://{{ .Release.Name }}-product-design-server:443
config.dir=/opt/runtime/config
{{- end }}
{{- if .Values.keycloak.enabled }}
authentication.keycloak.enabled={{ .Values.keycloak.enabled }}
authentication.keycloak.serverUrl={{ .Values.keycloak.serverUrl }}
authentication.keycloak.realmId={{ .Values.keycloak.realmId }}
authentication.keycloak.clientId={{ .Values.keycloak.clientId }}
authentication.keycloak.clientSecret=${HIP_KEYCLOAK_CLIENT_SECRET}
{{- end }}
I need to replace ${HIP_KEYCLOAK_CLIENT_SECRET}, which is defined in the deployment.yaml file as shown below:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
{{- if .Values.keycloak.enabled }}
- name: HIP_KEYCLOAK_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ .Values.keycloak.secret }}
key: clientSecret
{{ end }}
The idea is to have the property file in the deployed pod under /opt/runtime/rest.properties.
Here is my complete deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "lnk-service.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "lnk-service.name" . }}
helm.sh/chart: {{ include "lnk-service.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "lnk-service.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "lnk-service.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
{{- if .Values.global.hImagePullSecret }}
imagePullSecrets:
- name: {{ .Values.global.hImagePullSecret }}
{{- end }}
securityContext:
runAsUser: 998
runAsGroup: 997
fsGroup: 997
volumes:
- name: configuration
configMap:
name: {{ include "lnk-service.fullname" . }}-server-config
- name: core-configuration
configMap:
name: {{ include "lnk-service.fullname" . }}-server-core-config
- name: hch-configuration
configMap:
name: {{ include "lnk-service.fullname" . }}-hch-config
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
{{- if .Values.global.linkIntegration.enabled }}
claimName: lnk-shared-px
{{- else }}
claimName: {{ include "pvc.name" . }}
{{- end }}
{{- else }}
emptyDir: {}
{{- end }}
- name: hch-data
{{- if .Values.global.linkIntegration.enabled }}
persistentVolumeClaim:
claimName: {{ include "unicapvc.fullname" . }}
{{- else }}
emptyDir: {}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
#command: ['/bin/sh']
#args: ['-c', 'echo $HIP_KEYCLOAK_CLIENT_SECRET']
#command: [ "/bin/sh", "-c", "export" ]
#command: [ "/bin/sh", "-ce", "export" ]
command: [ "/bin/sh", "-c", "export --;trap : TERM INT; sleep infinity & wait" ]
#command: ['sh', '-c', 'sed -i "s/REPLACEME/$HIP_KEYCLOAK_CLIENT_SECRET/g" /opt/runtime/rest.properties']
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: "HIP_CLOUD_LICENSE_SERVER_URL"
value: {{ include "license.url" . | quote }}
- name: "HIP_CLOUD_LICENSE_SERVER_ID"
value: {{ include "license.id" . | quote }}
{{- if .Values.keycloak.enabled }}
- name: HIP_KEYCLOAK_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ .Values.keycloak.secret }}
key: clientSecret
{{ end }}
envFrom:
- configMapRef:
name: {{ include "lnk-service.fullname" . }}-server-env
{{- if .Values.rest.extraEnvConfigMap }}
- configMapRef:
name: {{ .Values.rest.extraEnvConfigMap }}
{{- end }}
{{- if .Values.rest.extraEnvSecret }}
- secretRef:
name: {{ .Values.rest.extraEnvSecret }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.image.port }}
protocol: TCP
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: configuration
mountPath: /opt/runtime/rest.properties
subPath: rest.properties
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
I have tried init containers and replacing the string in rest.properties, which works; however, it involves creating volumes with emptyDir.
Can someone kindly tell me if there is a simpler way to do this?
confd will give you a solution: you can tell it to look at the ConfigMap and substitute all the environment variable references that the file expects with the env values that have been set.
Change your ConfigMap to create the file rest.properties.template.
Use an InitContainer that runs cat rest.properties.template | envsubst > rest.properties. The InitContainer can use any Docker image that includes envsubst; a sketch of the wiring follows below.
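A sketch of that approach, fitted to the deployment above: the ConfigMap ships rest.properties.template instead of rest.properties, an initContainer renders it into a shared emptyDir, and the main container mounts the rendered file. The image name and the rendered-config volume are assumptions; the secret reference mirrors the one already defined in the deployment.
      initContainers:
        - name: render-rest-properties
          image: registry.example.com/tools/envsubst:latest   # placeholder; any image that ships envsubst (GNU gettext) works
          command:
            - /bin/sh
            - -c
            - envsubst < /config/rest.properties.template > /rendered/rest.properties
          env:
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
          volumeMounts:
            - name: configuration           # the existing ConfigMap volume, now holding the .template file
              mountPath: /config
            - name: rendered-config         # new emptyDir volume shared with the main container
              mountPath: /rendered
The main container would then mount rendered-config at /opt/runtime/rest.properties with subPath: rest.properties instead of mounting the ConfigMap directly, and the pod's volumes: list would gain an entry - name: rendered-config with emptyDir: {}.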
Thanks for the responses.
Solution 1: use init containers.
Solution 2: we changed the code to read the value from environment variables.
We chose Solution 2.
Thank you all for your responses.
I got an error in my deployment.yaml file. I have defined env entries in this file and assigned their values in the values file. I got a syntax error in this file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/name: {{ include "name" . }}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/name: {{ include "name" . }}
template:
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/name: {{ include "name" . }}
spec:
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources: {}
env:
- name: MONGODB_ADDRESS
value: {{ .Values.mongodb.db.address }}
- name: MONGODB
value: "akira-article"
- name: MONGODB_USER
value: {{ .Values.mongodb.db.user | quote }}
- name: MONGODB_PASS
valueFrom:
secretKeyRef:
name: {{ include "name" . }}
key: mongodb-password
- name: MONGODB_AUTH_DB
value: {{ .Values.mongodb.db.name | quote }}
- name: DAKEN_USERID
value: {{ .Values.mongodb.db.userId | quote }}
- name: DAKEN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "name" . }}
key: daken-pass
- name: JWT_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: {{ include "name" . }}
key: jwt-Privat-Key
- name: WEBSITE_NAME
value: {{ .Values.website.Name }}
- name: WEBSITE_SHORT_NAME
value: {{ .Values.website.shortName }}
- name: AKIRA_HTTP_PORT
value: {{ .Values.website.port }}
ports:
- containerPort: {{ .Values.service.port }}
I got this error:
Error: Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container:
v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects "
or n, but found 8, error found in #10 byte of
...|,"value":8080}],"ima|..., bigger context
...|,"value":"AA"},{"name":"AKIRA_HTTP_PORT","value":8080}],"image":"dr.xenon.team/websites/akira-fronte|...
The answer to your problem is available in the Helm documentation: QUOTE STRINGS, DON'T QUOTE INTEGERS.
When you are working with string data, you are always safer quoting the strings than leaving them as bare words:
name: {{ .Values.MyName | quote }}
But when working with integers do not quote the values. That can, in many cases, cause parsing errors inside of Kubernetes.
port: {{ .Values.Port }}
This remark does not apply to env variable values, which are expected to be strings even if they represent integers:
env:
- name: HOST
value: "http://host"
- name: PORT
value: "1234"
I'm assuming you have not put the port value of AKIRA_HTTP_PORT inside quotes; that's why you are getting the error.
You can read the docs about Template Functions and Pipelines.
With AKIRA_HTTP_PORT: "8080" in values.yaml, in the env variables write:
env:
- name: AKIRA_HTTP_PORT
value: {{ .Values.website.port | quote }}
It should work.
I am creating a Helm chart. When doing a dry run I get an error:
Error: YAML parse error on vstsagent/templates/vsts-buildrelease-agent.yaml: error converting YAML to JSON: yaml: line 28: found character that cannot start any token
The dry run also outputs the Secret and Deployment YAML files which I created. The part where it goes wrong in the Deployment shows:
- name: ACCOUNT
valueFrom:
secretKeyRef:
name: %!s(<nil>)-%!s(<nil>)
key: ACCOUNT
- name: TOKEN
valueFrom:
secretKeyRef:
name: %!s(<nil>)-%!s(<nil>)
key: TOKEN
The output from the dry run for the secret looks fine.
The templates I created:
apiVersion: v1
kind: Secret
metadata:
name: {{ template "chart.fullname" . }}
type: Opaque
data:
ACCOUNT: {{ .Values.chart.secret.account }}
TOKEN: {{ .Values.chart.secret.token }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "chart.fullname" . }}
labels:
app: {{ template "chart.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
release: {{ .Release.Name }}
app: {{ template "chart.name" . }}
annotations:
agentVersion: {{ .Values.chart.image.tag }}
spec:
containers:
- name: {{ template "chart.name" . }}
image: {{ .Values.chart.image.name }}
imagePullPolicy: {{ .Values.chart.image.pullPolicy }}
env:
- name: ACCOUNT
valueFrom:
secretKeyRef:
name: {{ template "chart.fullname" }}
key: ACCOUNT
- name: TOKEN
valueFrom:
secretKeyRef:
name: {{ template "chart.fullname" }}
key: TOKEN
The _helper.tpl looks like this:
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
Where am I going wrong in this?
I missed 2 dots (the trailing . that passes the current context to the template):
- name: ACCOUNT
valueFrom:
secretKeyRef:
name: {{ template "chart.fullname" . }}
key: ACCOUNT
- name: TOKEN
valueFrom:
secretKeyRef:
name: {{ template "chart.fullname" . }}
key: TOKEN