Helm: execute a bash script to choose the proper image

Helmfile:
spec:
  containers:
    - name: {{ .Values.app.name }}
      image: {{ .Values.image.name }}   # <-- execute shell script here
      imagePullPolicy: Always
      ports:
        - containerPort: 8081
      env:
        - name: BACKEND_HOST
          value: {{ .Values.backend.host }}
I want to execute a bash script that checks whether this image exists; if not, another image should be used instead. How can I do this with Helm? Or is there any other way to solve it?

Helm doesn't have any way to call out to other processes, make network connections, or do any other sort of external lookup (with one specific exception where it can read Kubernetes objects out of the cluster). You'd have to pass this value in when you run the helm install command instead:
helm install release-name ./chart-directory \
  --set image.name=$(the command you want to run)
If this is getting run from part of some larger process, you may find it easier to write a JSON or YAML file that can be passed to the helm install -f option instead of dynamically calling out to the script; the helm install --set option has some unusual syntax and behavior. You can even go one step further and check that per-installation YAML file into source control, and have another step in your deployment pipeline notice the commit and actually do the installation ("GitOps" style).
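For example, the deploy script could write a small per-release values file and install with it; a minimal sketch, assuming a hypothetical image-values.yaml and a made-up image name:
# image-values.yaml -- written by the deploy script after it has decided which image to use
image:
  name: registry.example.com/myapp:v1.2.3

helm install release-name ./chart-directory -f image-values.yaml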

Related

Different name required to override value in Helm subchart

I have read the Helm docs and various StackOverflow questions - this is not (I hope!) a lazy question. I'm having an issue overriding a single particular value in a Helm chart, not having trouble with the concept in general.
I'm trying to install the Gitea helm chart on a k8s cluster on Raspberry Pis (that is, on arm64 architecture). Since the default memcached dependency chart is from Bitnami, who don't support arm64, I have overridden the image appropriately (to arm64v8/memcached).
However, this new image has a different entrypoint - /entrypoint.sh instead of /run.sh. Referencing the relevant part of the template, I believed I needed to override memcached.args, but that didn't work as expected:
$ cat values.yaml
memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  args:
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false
$ helm template gitea-charts/gitea --values values.yaml
[...]
# Source: gitea/charts/memcached/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-memcached
  namespace: gitea
  labels: [...]
spec:
  selector:
    matchLabels: [...]
  replicas: 1
  template:
    metadata:
      labels: [...]
    spec:
      [...]
      serviceAccountName: release-name-memcached
      containers:
        - name: memcached
          image: docker.io/arm64v8/memcached:1.6.17
          imagePullPolicy: "IfNotPresent"
          args:
            - /run.sh # <----- this should be `/entrypoint.sh`
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          ports:
            - name: memcache
              containerPort: 11211
[...]
However, when I instead overrode memcached.arguments, the expected behaviour occurred - the contents of memcached.arguments rendered in the template's args (or, if memcached.arguments was empty, no args were rendered)
Where is this mapping from arguments to args taking place?
Note in particular that the Bitnami chart docs refer to args, so this is unexpected - though note also that the Bitnami chart's values.yaml refers to arguments in the comment (this is what prompted me to try this "obviously wrong" approach!). In the "Upgrade to 5.0.0 notes", we see "arguments has been renamed to args." - but the Gitea chart is using a >5.0.0 version of the Bitnami chart.
Your reasoning is correct, and the current parameter name is indeed args (arguments is deprecated; someone just forgot to update the comment).
Now, why does arguments work for you rather than args? I think you're just using the old version of the chart, from before the parameter was renamed. I checked, and:
The Gitea chart pulls memcached version 5.9.0 from the repo https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami.
This corresponds to the Helm chart https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz.
When you extract that chart archive, you can see it's the old version of the chart, with arguments not yet renamed to args.
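So with the memcached chart version that Gitea actually vendors, the override that matches what was reported working would look something like this; a sketch using the old parameter name and the image details from the question:
memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  arguments:            # old parameter name still used by this version of the memcached chart
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false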

Can a Deployment with multiple ReplicaSets run a different CMD command?

I want to create a few Pods from the same image (I have the Dockerfile), so I want to use ReplicaSets,
but the final CMD command needs to be different for each container.
For example
(https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container will do:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
Also, I would like to move the CMD value out of a hard-coded list, so that the value there is a variable (it will be set in a loop, so each Pod will have to be created separately).
Is it possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to cp the whole file. Most of the boilerplate is Kubernetes boilerplate and every Deployment will have these parts; little of it is specific to any given application.
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
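To make that concrete, a sketch of what the second Deployment file could look like, assuming a .Values.two.replicas setting and a made-up npm script name; only the name and the command line differ from the first file:
# templates/deployment-two.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-two
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.two.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          # with an entrypoint wrapper script, override args: and leave command: alone
          args:
            - npm
            - run
            - other-task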

How to pull an environment variable into deployment.yml?

I created a Jenkins pipeline that builds a Docker image and then deploys a Helm chart for me. The problem is that the tag of the image I push to Docker Hub changes with every Jenkins build number.
deployment.yaml
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
values.yaml:
image:
  repository: photop/micro_focus
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "%image_tag%"
Jenkinsfile:
stage ('Deploy&Operate HM'){
  steps{
    script{
      bat 'minikube start'
      bat 'kubectl create deployment %BUILD_NUMBER% --image="%BUILD_NUMBER%":latest'
      bat 'helm install test-%BUILD_NUMBER% ./micro --set image_tag=%BUILD_NUMBER%'
Output:
Failed to apply default image tag "photop/micro_focus:%image_tag%": couldn't parse image reference "photop/micro_focus:%image_tag%": invalid reference format
How do I get the Jenkins build number into the image tag, instead of the literal %image_tag% ending up in the reference:
photop/micro_focus:%image_tag%
The kubectl create deployment line already creates a Deployment that is applied directly to your cluster, since you did not specify --dry-run=client. So it's not clear why you also run helm install; doing both makes the setup ambiguous. But I could be wrong and may be misunderstanding the intent of doing it this way.
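For what it's worth, Helm addresses nested values with dotted paths, so with the values.yaml above the override would need to target image.tag rather than a top-level image_tag; a sketch of what that install step could look like, reusing the chart path and release name from the question:
bat 'helm install test-%BUILD_NUMBER% ./micro --set image.tag=%BUILD_NUMBER%'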

Kubernetes cache clearing and handling

I am using Kubernetes with Helm 3.8.0, with Windows Docker Desktop configured on WSL2.
Sometimes, after running helm install and getting a container, the container that is created behind the scenes is an old container that was created before (even after restarting the computer).
For example: the YAML now declares password: 12345 and database: test; before, I ran the container YAML with password: 11111 and database: my_database.
Now when I run helm install mychart ./mychart --namespace test-chart --create-namespace for the chart in the current folder, the container runs with password: 11111 and database: my_database, instead of the new parameters I provided. There is no current YAML with the old password, so I don't understand why the container runs with the old one.
I tried several things, such as docker system prune and restarting Windows Docker Desktop, but I still get the old container, which cannot even be seen in Windows Docker Desktop (I have enabled Settings -> Kubernetes -> Show system containers).
After some investigation, I realized this may be because Kubernetes has its own garbage collection handling of containers, and that is why I may be referring to an old container even though I didn't mean to.
In my case, I am creating a Job template (I didn't put any line referencing this Job in the _helpers.tpl file - I never changed that file, and I don't know whether that may cause a problem).
Here is my job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myChart.fullname" . }}-migration
  labels:
    name: {{ include "myChart.fullname" . }}-migration
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-300"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: {{ template "myChart.name" . }}
        release: {{ .Release.Namespace }}
    spec:
      initContainers:
        - name: wait-mysql
          image: {{ .Values.mysql.image }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "12345"
            - name: MYSQL_DATABASE
              value: test
          command:
            - /bin/sh
            - -c
            - |
              service mysql start &
              until mysql -uroot -p12345 -e 'show databases'; do
                echo `date +%H:%M:%S`' - Waiting for mysql...'
                sleep 5
              done
      containers:
        - name: migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: {{- toYaml .Values.image.cmd | nindent 12}}
      restartPolicy: Never
In the Job, a database is first created, and after that it is populated with data by code.
Also, are the annotations (hooks) necessary?
After running helm install myChart ./myChart --namespace my-namespace --create-namespace, I realized that I am using a very old container, which I don't really need.
I didn't understand whether writing metadata like the following example (from the Garbage Collection docs) really helps, and what to put in uid when I don't know it or don't have it.
metadata:
  ...
  ownerReferences:
    - apiVersion: extensions/v1beta1
      controller: true
      blockOwnerDeletion: true
      kind: ReplicaSet
      name: my-repset
      uid: d9607e19-f88f-11e6-a518-42010a800195
Sometimes I really want to reference an existing Pod (or container) from several templates (use the same container, which is not stateless, such as a database container - one template for the Pod and the other for the Job). How can I do that, too?
Is there any command (on the command line, or some other method) that clears everything cached by garbage collection, or avoids using garbage collection at all? (What are the main benefits of Kubernetes GC?)

How to set java environment variables in a helm chart?

What is the best practice for setting environment variables for a Java app's deployment in a Helm chart, so that I can use the same chart for dev and prod environments? I have separate Kubernetes deployments for both environments.
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc ..."
Similarly, my prod variables would be something like
"-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc ..."
Now, how can I leverage Helm to write a single chart but create separate sets of Pods with different properties according to the environment, as in
helm install my-app --set env=prod ./test-chart
or
helm install my-app --set env=dev ./test-chart
The best way is to use a single deployment template and a separate values file for each environment.
This does not have to be limited to environment variables used in the application;
the same approach can be applied to any environment-specific configuration.
Example:
deployment.yaml
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "{{ .Values.opts }}"
values-dev.yaml
# system opts
opts: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc "
values-prod.yaml
# system opts
opts: "-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc "
Then specify the relevant values file in the helm command.
For example, deploying to the dev environment:
helm install -f values-dev.yaml my-app ./test-chart
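And for prod, the same chart with the other values file:
helm install -f values-prod.yaml my-app ./test-chart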