Helm chart never entering the if condition - Kubernetes

In my values.yaml file the values below are defined, including one specific value called "environment":
image:
  repository: my_repo_url
  tag: my_tag
  pullPolicy: IfNotPresent
releaseName: cron_script
schedule: "0 10 * * *"
namespace: deploy_cron
rav_admin_password: asdf
environment: testing
testing_forwarder_ip: 10.2.71.21
prod_us_forwarder_ip: 10.2.71.15
Now, in my Helm chart, I need to assign a value to a new variable based on this environment value. I have written the code below for that, but it never seems to enter the if/else block at all:
{{- $fwip := .Values.prod_us_forwarder_ip }}
{{- if contains .Values.environment "testing" }}
{{- $fwip := .Values.testing_forwarder_ip }}
{{- end }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "{{ .Values.releaseName }}"
  namespace: "{{ .Values.namespace }}"
  labels:
    ....................................
    ....................................
    ....................................
spec:
  restartPolicy: Never
  containers:
    - name: "{{ .Values.releaseName }}"
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: IfNotPresent
      args:
        - python3
        - test.py
        - --data
        - 100
        - {{ $fwip }}
In the above code I always get the $fwip value as 10.2.71.21; whatever the environment value is, testing or production, I get the same value for both.
And if I don't declare the variable $fwip before the if/else statement, I get a "$fwip variable is not defined" error. So I am not sure why the if/else statement is not taking effect at all. How can I debug this further?

This is a scoping problem with template variables and local variables.
The $fwip inside the if should be assigned with = instead of := :
{{- $fwip := .Values.prod_us_forwarder_ip }}
{{- if contains .Values.environment "testing" }}
{{- $fwip = .Values.testing_forwarder_ip }}
{{- end }}
I have translated it into Go code to make it easier to understand.
(In Go, := means declaration plus assignment, while = means assignment only.)
// :=
env := "testing"
test := "10.2.71.21"
prod := "10.2.71.15"
fwip := prod
if strings.Contains(env, "testing") {
    fwip := test
    fmt.Println(fwip) // 10.2.71.21
}
fmt.Println(fwip) // 10.2.71.15

// =
env := "testing"
test := "10.2.71.21"
prod := "10.2.71.15"
fwip := prod
if strings.Contains(env, "testing") {
    fwip = test
    fmt.Println(fwip) // 10.2.71.21
}
fmt.Println(fwip) // 10.2.71.21
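To double-check the fix without installing anything, you can render the chart locally. A minimal sketch of the corrected block (note also that sprig's contains takes the substring to look for as its first argument, so the argument order in the original is worth double-checking too), with the render command shown as a comment:
{{- $fwip := .Values.prod_us_forwarder_ip }}
{{- if contains "testing" .Values.environment }}
{{- $fwip = .Values.testing_forwarder_ip }}
{{- end }}
# render the chart to see which value lands in the manifest, e.g.:
#   helm template . --set environment=production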

Related

How to amend a condition in Helm charts

Currently, I am checking whether lifecycle hooks are enabled and, if so, adding some extra delay:
{{- $delay := hasKey .Values "shutdownDelay" | ternary .Values.shutdownDelay 30 }}
{{- $graceperiod := hasKey .Values.service "terminationGracePeriodSeconds" | ternary .Values.service.terminationGracePeriodSeconds 120 }}
{{- $extraDelay := .Values.lifecycleHooks.enabled | ternary $delay 0 }}
terminationGracePeriodSeconds: {{ add $graceperiod $extraDelay }}
I want to cover the use case where, if .Values.lifecycleHooks.postStart and .Values.lifecycleHooks.preStop have some values, then the extra delay should not be added to terminationGracePeriodSeconds.
The values.yaml looks like
#shutdownDelay: 40
lifecycleHooks:
  enabled: true
  postStart:
    exec:
      command:
        - echo
        - "Run after starting container"
  preStop:
    exec:
      command:
        - echo
        - "Run before stopping container"
service:
  terminationGracePeriodSeconds: 120
So if the preStop hook value is defined as in values.yaml, then it should not add any delay to the termination grace period.
The question is not very specific, but if you are looking for an "if"
condition with "and"/"or", the examples below might be helpful.
As per your explanation, assuming string values for the hooks: if lifecycleHooks.postStart is "false" and lifecycleHooks.prestart is "true", then terminationGracePeriodSeconds won't get the extra delay; otherwise the else branch adds the extra delay.
{{- if and (eq .Values.lifecycleHooks.postStart "false") (eq .Values.lifecycleHooks.prestart "true") }}
terminationGracePeriodSeconds: {{ $graceperiod }}
{{- else}}
terminationGracePeriodSeconds: {{ add $graceperiod $extraDelay }}
{{- end }}
conditional or
{{- if or (eq .Values.lifecycleHooks.postStart "false") (eq .Values.lifecycleHooks.prestart "true") }}
terminationGracePeriodSeconds: {{ add $graceperiod $extraDelay }}
{{- end }}
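If postStart and preStop are maps as in the values.yaml above (rather than "true"/"false" strings), a simple presence check is probably closer to what the question asks for. A minimal sketch, assuming the $graceperiod and $extraDelay variables defined earlier in the question:
{{- if and .Values.lifecycleHooks.postStart .Values.lifecycleHooks.preStop }}
terminationGracePeriodSeconds: {{ $graceperiod }}
{{- else }}
terminationGracePeriodSeconds: {{ add $graceperiod $extraDelay }}
{{- end }}
Non-empty maps are truthy in Go templates, so the and is only true when both hooks are actually defined.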

Helm - Check if a value does not exist OR is part of a list

Assuming I have this values.yaml under my helm chart -
tasks:
  - name: test-production-dev
    env:
      - production
      - dev
  - name: test-dev
    env:
      - dev
  - name: test-all
environment_variables:
  STAGE: dev
I would like to run my CronJob based on these values:
If .env doesn't exist - run at any time.
If .env exists - run only if environment_variables.STAGE is in the .env list.
This is what I've done so far (with no luck):
{{- range $.Values.tasks}}
# check if $value.env not exists OR contains stage
{{if or .env (hasKey .env "$.Values.environment_variables.STAGE") }}
apiVersion: batch/v1
kind: CronJob
...
{{- end}}
---
{{- end}}
values.yaml
tasks:
  - name: test-production-dev
    env:
      - production
      - dev
  - name: test-dev
    env:
      - dev
  - name: test-all
  - name: test-production
    env:
      - production
environment_variables:
  STAGE: dev
template/xxx.yaml
plan a
...
{{- range $.Values.tasks }}
{{- $flag := false }}
{{- if .env }}
{{- range .env }}
{{- if eq . $.Values.environment_variables.STAGE }}
{{- $flag = true }}
{{- end }}
{{- end }}
{{- else }}
{{- $flag = true }}
{{- end }}
{{- if $flag }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
{{- end }}
{{- end }}
...
plan b
...
{{- range $.Values.tasks }}
{{- if or (not .env) (has $.Values.environment_variables.STAGE .env) }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
{{- end }}
{{- end }}
...
output
...
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-production-dev
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-dev
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-all
...
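One detail worth double-checking: neither plan emits a --- between iterations, so once more than one CronJob is rendered, the combined output above is not a valid multi-document YAML stream. A minimal sketch of plan b with the separator added:
{{- range $.Values.tasks }}
{{- if or (not .env) (has $.Values.environment_variables.STAGE .env) }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
{{- end }}
{{- end }}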

Helm Template iterating over map to create multiple jobs

I'm trying to iterate over a map in a Helm chart to create multiple Kubernetes CronJobs. Since I had trouble generating multiple manifests from a single template, I used '---' to separate the manifests; otherwise it kept generating only one manifest.
{{- range $k, $job := .Values.Jobs }}
{{- if $job.enabled }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $job.name }}
  namespace: {{ $.Release.Namespace }}
spec:
  schedule: {{ $job.schedule }}
  startingDeadlineSeconds: xxx
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: x
  failedJobsHistoryLimit: x
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: {{ $job.name }}
              image: {{ $.Values.cronJobImage }}
              command:
                - /bin/sh
                - -c
                - curl {{ $.Values.schedulerBaseUrl }}/{{ $job.url }}
          restartPolicy: Never
---
{{- end }}
{{ end }}
values.yaml
Jobs:
  - name: "xxx-job"
    enabled: true
    schedule: "00 18 * * *"
    url: "jobs/xxx"
  - name: "xxx-job"
    enabled: true
    schedule: "00 18 * * *"
    url: "jobs/xxx"
This way it works and generates all the Jobs defined in values.yaml. I was wondering, is there any better way to do this?
I have the same situation and am having trouble writing tests; we are using https://github.com/quintush/helm-unittest/blob/master/DOCUMENT.md for unit tests.
The problem is: which document index do we have to use in this case to pick out each separated manifest? For example, in the case above it iterates four times, and in two cases the assertion will fail!
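For what it's worth, helm-unittest lets each assertion target a specific rendered document with documentIndex (0-based, counted per template file), so a test along these lines might work; a rough sketch with made-up file and job names, not verified against the chart above:
suite: cronjob documents
templates:
  - cronjob.yaml          # hypothetical template file name
tests:
  - it: renders the first enabled job
    asserts:
      - equal:
          path: metadata.name
          value: xxx-job
        documentIndex: 0    # index of the manifest within this template's rendered output
Jobs skipped by the if $job.enabled guard should not add a document, so the indexes normally count only the manifests that actually render.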

In Minikube, the Spark driver does not mount the hostPath when the driver runs from a deployed sparkapplication.yaml

I am a newbie with Spark and Minikube. I ran into this problem while running a Spark job from sparkapplication.yaml: the Spark driver and executors are created successfully, but none of them mounts the hostPath. I followed Tom Louis's minikube-spark example. Everything runs fine if I put the data into the Spark job image directly via a Dockerfile COPY.
Currently, the data (*.csv) sits in: local folder - (mounted) - minikube - (not mounted) - Spark driver pod.
I don't know why the hostPath is not mounted; there may be some mistake I made.
Can anybody take a look at my problem? Appreciated!
template/sparkapplication.yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
name: {{ .Release.Name | trunc 63 }}
labels:
chartname: {{ .Chart.Name | trunc 63 | quote }}
release: {{ .Release.Name | trunc 63 | quote }}
revision: {{ .Release.Revision | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
version: {{ .Chart.Version | quote }}
spec:
type: Scala
mode: cluster
image: {{ list .Values.imageRegistry .Values.image | join "/" | quote }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.imagePullSecrets }}
- {{ . | quote }}
{{- end }}
{{- end }}
mainClass: {{ .Values.mainClass | quote }}
mainApplicationFile: {{ .Values.jar | quote }}
{{- if .Values.arguments }}
arguments:
{{- range .Values.arguments }}
- {{ . | quote }}
{{- end }}
{{- end }}
sparkVersion: {{ .Values.sparkVersion | quote }}
restartPolicy:
type: Never
{{- if or .Values.jarDependencies .Values.fileDependencies .Values.sparkConf .Values.hadoopConf }}
deps:
{{- if .Values.jarDependencies }}
jars:
{{- range .Values.jarDependencies }}
- {{ . | quote }}
{{- end }}
{{- end }}
{{- if .Values.fileDependencies }}
files:
{{- range .Values.fileDependencies }}
- {{ . | quote }}
{{- end }}
{{- end }}
{{- if .Values.sparkConf }}
sparkConf:
{{- range $conf, $value := .Values.sparkConf }}
{{ $conf | quote }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.hadoopConf }}
hadoopConf:
{{- range $conf, $value := .Values.hadoopConf }}
{{ $conf | quote }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- end }}
driver:
{{- if .Values.envSecretKeyRefs }}
envSecretKeyRefs:
{{- range $name, $value := .Values.envSecretKeyRefs }}
{{ $name }}:
name: {{ $value.name}}
key: {{ $value.key}}
{{- end }}
{{- end }}
{{- if .Values.envVars }}
envVars:
{{- range $name, $value := .Values.envVars }}
{{ $name }}: {{ $value | quote }}
{{- end }}
{{- end }}
securityContext:
runAsUser: {{ .Values.userId }}
cores: {{ .Values.driver.cores }}
coreLimit: {{ .Values.driver.coreLimit | default .Values.driver.cores | quote }}
memory: {{ .Values.driver.memory }}
hostNetwork: {{ .Values.hostNetwork }}
labels:
release: {{ .Release.Name | trunc 63 | quote }}
revision: {{ .Release.Revision | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
version: {{ .Chart.Version | quote }}
serviceAccount: {{ .Values.serviceAccount }}
{{- if .Values.javaOptions }}
javaOptions: {{ .Values.javaOptions | quote}}
{{- end }}
{{- if .Values.mounts }}
volumeMounts:
{{- range $name, $path := .Values.mounts }}
- name: {{ $name }}
mountPath: {{ $path }}
{{- end }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
executor:
{{- if .Values.envVars }}
envVars:
{{- range $name, $value := .Values.envVars }}
{{ $name | quote }}: {{ $value | quote }}
{{- end }}
{{- end }}
securityContext:
runAsUser: {{ .Values.userId }}
cores: {{ .Values.executor.cores }}
coreLimit: {{ .Values.executor.coreLimit | default .Values.executor.cores | quote }}
instances: {{ .Values.executor.instances }}
memory: {{ .Values.executor.memory }}
labels:
release: {{ .Release.Name | trunc 63 | quote }}
revision: {{ .Release.Revision | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
version: {{ .Chart.Version | quote }}
serviceAccount: {{ .Values.serviceAccount }}
{{- if .Values.javaOptions }}
javaOptions: {{ .Values.javaOptions }}
{{- end }}
{{- if .Values.mounts }}
volumeMounts:
{{- range $name, $path := .Values.mounts }}
- name: {{ $name }}
mountPath: {{ $path }}
{{- end }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
{{- if .Values.jmxExporterJar }}
monitoring:
exposeDriverMetrics: true
exposeExecutorMetrics: true
prometheus:
port: {{ .Values.jmxPort | default 8090 }}
jmxExporterJar: {{ .Values.jmxExporterJar }}
{{- end }}
{{- if .Values.volumes }}
volumes:
- name: input-data
hostPath:
path: /input-data
- name: output-data
hostPath:
path: /output-data
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 4 }}
{{- end }}
values.yaml
# Generated by build.sbt. Please don't manually update
version: 0.1
sparkVersion: 3.0.2
image: kaspi/kaspi-sparkjob:0.1
jar: local:///opt/spark/jars/kaspi-kaspi-sparkjob.jar
mainClass: kaspi.sparkjob
fileDependencies: []
environment: minikube
serviceAccount: spark-spark
imageRegistry: localhost:5000
arguments:
  - "/mnt/data-in/"
  - "/mnt/data-out/"
volumes:
  - name: input-data
    hostPath:
      path: /input-data
  - name: output-data
    hostPath:
      path: /output-data
mounts:
  input-data: /mnt/data-in
  output-data: /mnt/data-out
driver:
  cores: 1
  memory: "2g"
executor:
  instances: 2
  cores: 1
  memory: "1g"
hadoopConf:
sparkConf:
hostNetwork: false
imagePullPolicy: Never
userId: 0
build.sbt
val sparkVersion = "3.0.2"
val sparkLibs = Seq(
"org.apache.spark" %% "spark-core" % sparkVersion,
"org.apache.spark" %% "spark-sql" % sparkVersion,
"org.apache.spark" %% "spark-streaming" % sparkVersion,
"org.apache.spark" %% "spark-mllib" % sparkVersion
)
lazy val commonSettings = Seq(
organization := "kaspi",
scalaVersion := "2.12.13",
version := "0.1",
libraryDependencies ++= sparkLibs
)
val domain = "kaspi"
// for building FAT jar
lazy val assemblySettings = Seq(
assembly / assemblyOption := (assemblyOption in assembly).value.copy(includeScala = false),
assembly / assemblyOutputPath := baseDirectory.value / "output" / s"${domain}-${name.value}.jar"
)
val targetDockerJarPath = "/opt/spark/jars"
val baseRegistry = sys.props.getOrElse("baseRegistry", default = "localhost:5000")
// for building docker image
lazy val dockerSettings = Seq(
imageNames in docker := Seq(
ImageName(s"$domain/${name.value}:latest"),
ImageName(s"$domain/${name.value}:${version.value}"),
),
buildOptions in docker := BuildOptions(
cache = false,
removeIntermediateContainers = BuildOptions.Remove.Always,
pullBaseImage = BuildOptions.Pull.Always
),
dockerfile in docker := {
// The assembly task generates a fat JAR file
val artifact: File = assembly.value
val artifactTargetPath = s"$targetDockerJarPath/$domain-${name.value}.jar"
new Dockerfile {
from(s"$baseRegistry/spark-runner:0.1")
}.add(artifact, artifactTargetPath)
}
)
// Include "provided" dependencies back to default run task
lazy val runLocalSettings = Seq(
// https://stackoverflow.com/questions/18838944/how-to-add-provided-dependencies-back-to-run-test-tasks-classpath/21803413#21803413
Compile / run := Defaults
.runTask(
fullClasspath in Compile,
mainClass in (Compile, run),
runner in (Compile, run)
)
.evaluated
)
lazy val root = (project in file("."))
.enablePlugins(sbtdocker.DockerPlugin)
.enablePlugins(AshScriptPlugin)
.settings(
commonSettings,
assemblySettings,
dockerSettings,
runLocalSettings,
name := "kaspi-sparkjob",
Compile / mainClass := Some("kaspi.sparkjob"),
Compile / resourceGenerators += createImporterHelmChart.taskValue
)
// Task to create helm chart
lazy val createImporterHelmChart: Def.Initialize[Task[Seq[File]]] = Def.task {
val chartFile = baseDirectory.value / "helm" / "Chart.yaml"
val valuesFile = baseDirectory.value / "helm" / "values.yaml"
val chartContents =
s"""# Generated by build.sbt. Please don't manually update
|apiVersion: v1
|name: $domain-${name.value}
|version: ${version.value}
|appVersion: ${version.value}
|description: ETL Job
|home: https://github.com/jyyoo0530/kaspi
|sources:
| - https://github.com/jyyoo0530/kaspi
|maintainers:
| - name: Jeremy Yoo
| email: jyyoo0530#gmail.com
| url: https://www.linkedin.com/in/jeeyoungyoo
|""".stripMargin
val valuesContents =
s"""# Generated by build.sbt. Please don't manually update
|version: ${version.value}
|sparkVersion: ${sparkVersion}
|image: $domain/${name.value}:${version.value}
|jar: local://$targetDockerJarPath/$domain-${name.value}.jar
|mainClass: ${(Compile / run / mainClass).value.getOrElse("__MAIN_CLASS__")}
|fileDependencies: []
|environment: minikube
|serviceAccount: spark-spark
|imageRegistry: localhost:5000
|arguments:
| - "/mnt/data-in/"
| - "/mnt/data-out/"
|volumes:
| - name: input-data
| hostPath:
| path: /input-data
| - name: output-data
| hostPath:
| path: /output-data
|mounts:
| input-data: /mnt/data-in
| output-data: /mnt/data-out
|driver:
| cores: 1
| memory: "2g"
|executor:
| instances: 2
| cores: 1
| memory: "1g"
|hadoopConf:
|sparkConf:
|hostNetwork: false
|imagePullPolicy: Never
|userId: 0
|""".stripMargin
IO.write(chartFile, chartContents)
IO.write(valuesFile, valuesContents)
Seq(chartFile, valuesFile)
}
lazy val showVersion = taskKey[Unit]("Show version")
showVersion := {
println((version).value)
}
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
****** 2021/2/25 UPDATE ******
I tried the yaml below for test purposes, and the volume from the hostPath mounted successfully in the Pod. There is no real difference, except that the object characteristics differ: one is a plain "container", the other is a "driver"/"executor", etc.
(The same problem happened while using gaffer-hdfs, where the Kubernetes object names are "namenode", "datanode", etc.)
Could using a custom Kubernetes object name be the problem?
But if it still inherits the container properties, there is no reason for it not to be mounted.
... so... still struggling! :)
apiVersion: v1
kind: Pod
metadata:
  name: hostpath
  namespace: spark-apps
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: volumepath
          mountPath: /mnt/data
  volumes:
    - name: volumepath
      hostPath:
        path: /input-data
        type: Directory

Multiple Env Variables in Helm Charts

I have created a common Helm chart. In the values.yml file, I have a set of env variables that need to be set as part of the deployment.yaml file.
Snippet of the values file:
env:
  name: ABC
  value: 123
  name: XYZ
  value: 567
  name: PQRS
  value: 345
In deployment.yaml, when the values are referenced, only the last name/value pair is set; the other values are overwritten. How do I read/set all of the names/values in the deployment file?
I've gone through a few iterations of how to handle setting sensitive environment variables. Something like the following is the simplest solution I've come up with so far:
template:
{{- if or $.Values.env $.Values.envSecrets }}
env:
  {{- range $key, $value := $.Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
  {{- range $key, $secret := $.Values.envSecrets }}
  - name: {{ $key }}
    valueFrom:
      secretKeyRef:
        name: {{ $secret }}
        key: {{ $key | quote }}
  {{- end }}
{{- end }}
values:
env:
  ENV_VAR: value
envSecrets:
  SECRET_VAR: k8s-secret-name
Pros:
syntax is pretty straightforward
keys are easily mergeable. This came in useful when creating CronJobs with shared secrets. I was able to easily override "global" values using the following:
{{- range $key, $secret := merge (default dict .envSecrets) $.Values.globalEnvSecrets }}
Cons:
This only works for secret keys that exactly match the name of the environment variable, but it seems like that is the typical use case.
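For the sample values above, that template should render to roughly the following (a sketch of the expected output, not copied from a real run):
env:
  - name: ENV_VAR
    value: "value"
  - name: SECRET_VAR
    valueFrom:
      secretKeyRef:
        name: k8s-secret-name
        key: "SECRET_VAR"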
This is how I solved it in a common helm-chart I developed previously:
env:
{{- if .Values.env }}
{{- toYaml .Values.env | indent 12 }}
{{- end }}
In the values.yaml:
env:
  - name: ENV_VAR
    value: value
  # or
  - name: ENV_VAR
    valueFrom:
      secretKeyRef:
        name: secret_name
        key: secret_key
An important thing to note here is the indentation. Incorrect indentation might still produce a valid Helm chart (YAML file), but the Kubernetes API will give an error.
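A variant that avoids counting spaces by hand is nindent, which starts the block on a new line and indents every line of it. A minimal sketch, assuming the env block sits at the usual container level of a Deployment (the container name is made up):
      containers:
        - name: my-app
          env:
            {{- toYaml .Values.env | nindent 12 }}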
It looks like you've made a typo and forgotten your dashes. Without the dashes, YAML evaluates env as a single object instead of a list and overwrites values in unexpected ways.
Your env should look more like this:
env:
  - name: ABC
    value: 123
  - name: XYZ
    value: 567
  - name: PQRS
    value: 345
  - name: SECRET
    valueFrom:
      secretKeyRef:
        name: name
        key: key
https://www.convertjson.com/yaml-to-json.htm can help visualize how the yaml is being interpreted and investigate syntax issues.
You could let the chart user decide, in values.yaml, whether to take environment variables from a secret, provide the value directly, or take it from the downward API:
env:
  FOO:
    value: foo
  BAR:
    valueFrom:
      secretKeyRef:
        name: bar
        key: barKey
  POD_NAME:
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
and render it in the deployment.yaml
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: {{ .Chart.Name }}
          env:
            {{- range $name, $item := .Values.env }}
            - name: {{ $name }}
              {{- $item | toYaml | nindent 14 }}
            {{- end }}
          # ...
This is relatively simple and flexible.
It has the shortcoming of not keeping the order of the environment variables. This can break dependent environment variables.
I have written a somewhat longer story on how to support correct ordering as well: An Advanced API for Environment Variables in Helm Charts.
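To illustrate the ordering point: Kubernetes only expands $(VAR) references against variables defined earlier in the env list, and Helm iterates map keys alphabetically, so a map-based layout can break a case like this (names are made up for illustration):
env:
  - name: BASE_URL
    value: "http://api.example.com"
  - name: HEALTH_URL
    value: "$(BASE_URL)/health"   # resolves only because BASE_URL appears first in the list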