Helm function to set value based on a variable? - kubernetes

I'm learning Helm to set up my 3 AWS EKS clusters - sandbox, staging, and production.
How can I set up my templates so some values are derived based on which cluster the chart is being installed in? For example, in my myapp/templates/deployment.yaml I may want
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
I may want replicas to be either 1, 2, or 4 depending on whether I'm installing the chart in my sandbox, staging, or production cluster, respectively. I want to do the same trick for the CPU and memory requests and limits for my pods, for example.
I was thinking of having something like this in my values.yaml file
environments:
  - sandbox
  - staging
  - production

perClusterValues:
  replicas:
    - 1
    - 2
    - 4
  cpu:
    requests:
      - 256m
      - 512m
      - 1024m
    limits:
      - 512m
      - 1024m
      - 2048m
  memory:
    requests:
      - 1024Mi
      - 1024Mi
      - 2048Mi
    limits:
      - 2048Mi
      - 2048Mi
      - 3072Mi
So if I install a helm chart in the sandbox environment, I want to be able to do
$ helm install myapp myapp --set environment=sandbox
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  # In pseudo-code, in my YAML files:
  # get the index value from the .Values.environments list
  # based on the passed-in environment parameter
  {{ $myIndex = indexOf .Values.environments .Value.environment }}
  replicas: {{ .Values.perClusterValues.replicas $myIndex }}
  {{- end }}
I hope you understand my logic, but what is the correct syntax? Or is this even a good approach?

You can use the helm install -f option to pass an extra YAML values file in, and this takes precedence over the chart's own values.yaml file. So using exactly the template structure you already have, you can provide alternate values files
# sandbox.yaml
autoscaling:
  enabled: false
replicaCount: 1

# production.yaml
autoscaling:
  enabled: true
replicaCount: 5
And then when you go to deploy the chart, run it with
helm install myapp . -f production.yaml
(You can also helm install --set replicaCount=3 to override specific values, but the --set syntax is finicky and unusual; using a separate YAML file per environment is probably easier. Some tooling might be able to take advantage of JSON files also being valid YAML to write out additional deploy-time customizations.)
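The same approach covers the per-environment CPU and memory numbers from the question: put a flat resources: block in each environment file and have the template render it. This is a sketch; the .Values.resources key and the template fragment below are not in the question's chart, they just follow the conventional helm create layout, and the numbers are the question's sandbox values.

# sandbox.yaml (sketch)
autoscaling:
  enabled: false
replicaCount: 1
resources:
  requests:
    cpu: 256m
    memory: 1024Mi
  limits:
    cpu: 512m
    memory: 2048Mi

and in the container spec of templates/deployment.yaml:

          resources:
            {{- toYaml .Values.resources | nindent 12 }}

Then helm install myapp . -f sandbox.yaml picks up all of the sandbox settings in one place.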

Related

Can a deploy with multiple ReplicaSets run CMD different command?

I want to create a few pods from the same image (I have the Dockerfile), so I want to use ReplicaSets.
But the final CMD command needs to be different for each container.
For example (https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container will do:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
I would also like to move the CMD value out of a list, so that the value there can be a variable (it will be in a loop, so each Pod will have to be created separately).
Is it possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to cp the whole file. Most of the boilerplate is Kubernetes boilerplate and every Deployment will have these parts; little of it is specific to any given application.
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
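For completeness, the second Deployment might look like the following. This is a sketch: the .Values.two.replicas key and the npm run start arguments are assumptions, and it uses args: rather than command: so that an entrypoint wrapper script in the image would still be invoked, per the note above.

# templates/deployment-two.yml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-two
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.two.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          args:
            - npm
            - run
            - start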

kubernetes cache clear and handling

I am using Kubernetes with Helm 3.8.0, with Windows Docker Desktop configured on WSL2.
Sometimes, after running helm install and getting a container, the container that is created behind the scenes is an old container that was created before (even after restarting the computer).
I.e., now the YAML declares password: 12345 and database: test; before, I tried to run the container YAML with password: 11111 and database: my_database.
Now when I do helm install mychart ./mychart --namespace test-chart --create-namespace for the current folder's chart, the container is running with password: 11111 and database: my_database, instead of the new parameters provided. There is no current YAML code with the old password, so I don't understand why the container runs with the old one.
I did several actions, such as docker system prune and restarting Windows Docker Desktop, but I still get the old container, which cannot be seen even in Windows Docker Desktop (I have checked the option Settings -> Kubernetes -> Show system containers).
After some investigation, I realized that this may be because Kubernetes has its own garbage collection handling of containers, and that is why I may be referring to an old container even though I didn't mean to.
In my case, I am creating a job template (I didn't put any line that references this job in the _helpers.tpl file - I never changed that file, and I don't know whether that may cause a problem).
Here is my job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myChart.fullname" . }}-migration
  labels:
    name: {{ include "myChart.fullname" . }}-migration
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-300"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 1
  template:
    metadata:
      labels:
        app: {{ template "myChart.name" . }}
        release: {{ .Release.Namespace }}
    spec:
      initContainers:
        - name: wait-mysql
          image: {{ .Values.mysql.image }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "12345"
            - name: MYSQL_DATABASE
              value: test
          command:
            - /bin/sh
            - -c
            - |
              service mysql start &
              until mysql -uroot -p12345 -e 'show databases'; do
                echo `date +%H:%M:%S`' - Waiting for mysql...'
                sleep 5
              done
      containers:
        - name: migration
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: {{- toYaml .Values.image.cmd | nindent 12 }}
      restartPolicy: Never
In the job there is a database, which is created first; after that, its data is populated by code.
Also, are the annotations (hooks) necessary?
After running helm install myChart ./myChart --namespace my-namespace --create-namespace, I realized that I am using a very old container, which I don't really need.
I didn't understand whether writing the metadata as in the following example (from the Garbage Collection docs) really helps, and what to put in uid when I don't know it or don't have it.
metadata:
  ...
  ownerReferences:
    - apiVersion: extensions/v1beta1
      controller: true
      blockOwnerDeletion: true
      kind: ReplicaSet
      name: my-repset
      uid: d9607e19-f88f-11e6-a518-42010a800195
Sometimes I really want to reference an existing pod (or container) from several templates (using the same container, which is not stateless, such as a database container: one template for the pod and the other for the job). How can I do that, too?
Is there any command (on the command line, or some kind of method) that clears everything cached by garbage collection, or avoids using garbage collection at all? (What are the main benefits of Kubernetes' garbage collection?)

How to provide Vault secrets for a Flink application custom resource in Kubernetes

I would like to provide secrets from a Hashicorp Vault for the Apache Flink jobs running in a Kubernetes cluster.
These credentials will be used to access a state backend for checkpointing and savepoints. The state backend could be, for example, MinIO S3 storage.
Could someone provide a working example for a FlinkApplication operator please given the following setup?
Vault secrets for username and password (or an access key):
vault kv put vvp/storage/config username=user password=secret
vault kv put vvp/storage/config access-key=minio secret-key=minio123
k8s manifest of the Flink application custom resource:
apiVersion: flink.k8s.io/v1beta1
kind: FlinkApplication
metadata:
  name: processor
  namespace: default
spec:
  image: stream-processor:0.1.0
  deleteMode: None
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: vvp-flink-job
        vault.hashicorp.com/agent-inject-secret-storage-config.txt: vvp/data/storage/config
  flinkConfig:
    taskmanager.memory.flink.size: 1024mb
    taskmanager.heap.size: 200
    taskmanager.network.memory.fraction: 0.1
    taskmanager.network.memory.min: 10mb
    web.upload.dir: /opt/flink
  jobManagerConfig:
    resources:
      requests:
        memory: "1280Mi"
        cpu: "0.1"
    replicas: 1
  taskManagerConfig:
    taskSlots: 2
    resources:
      requests:
        memory: "1280Mi"
        cpu: "0.1"
  flinkVersion: "1.14.2"
  jarName: "stream-processor-1.0-SNAPSHOT.jar"
  parallelism: 3
  entryClass: "org.StreamingJob"
  programArgs: >
    --name value
Dockerfile of the Flink application:
FROM maven:3.8.4-jdk-11 AS build
ARG revision
WORKDIR /
COPY src /src
COPY pom.xml /
RUN mvn -B -Drevision=${revision} package
# runtime
FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
The flink-config.yaml contains the following examples:
# state.backend: filesystem
# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints
# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints
The end goal is to replace the hardcoded secrets or set them somehow from the vault:
state.backend: filesystem
s3.endpoint: http://minio:9000
s3.path.style.access: true
s3.access-key: minio
s3.secret-key: minio123
Thank you.
Once you have the Vault variables set, you can add annotations to the deployment to pull variables out of Vault into the deployment:
annotations:
  vault.hashicorp.com/agent-image: <Agent image>
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-secrets: kv/<Path-of-secret>
  vault.hashicorp.com/agent-inject-template-secrets: |2
    {{- with secret "kv/<Path-of-secret>" -}}
    #!/bin/sh
    set -e
    {{- range $key, $value := .Data.data }}
    export {{ $key }}={{ $value }}
    {{- end }}
    exec "$@"
    {{- end }}
  vault.hashicorp.com/auth-path: auth/<K8s cluster for auth>
  vault.hashicorp.com/role: app
This will create the file inside your pod.
When your application runs, it should execute this file first, and the environment variables will get injected into the pod.
So the Vault annotation will still create a single file, the same as the .txt you are getting now, but instead we template it like:
{{- range $key, $value := .Data.data }}
export {{ $key }}={{ $value }}
{{- end }}
It keeps the key and value but injects export in front of each pair. Now the file is a kind of shell script, and once it gets executed at application startup, it injects the variables at the OS level.
Keep this file in the repo and add it to the Docker image as ./bin/runapp:
#!/bin/bash
if [ -f '/vault/secrets/secrets' ]; then
  source '/vault/secrets/secrets'
fi
node <path-inside-docker>/index.js # Sorry, I don't know Scala or Java
package.json
"start": "./bin/runapp",
Dockerfile
ADD ./bin/runapp ./
EXPOSE 4444
CMD ["npm", "start"]
The Vault-injected file inside the pod, at /vault/secrets/secrets (or your configured path), will look something like:
#!/bin/sh
set -e
export development=false
export production=true
exec "$@"
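If the end goal is the s3.* entries from the question rather than environment variables, the same injector annotations can render the secret in Flink's key: value format instead. A sketch, reusing the vvp/data/storage/config path from the question; the flink-s3.conf file name is arbitrary, and index is used because the secret keys contain hyphens:

vault.hashicorp.com/agent-inject-secret-flink-s3.conf: vvp/data/storage/config
vault.hashicorp.com/agent-inject-template-flink-s3.conf: |
  {{- with secret "vvp/data/storage/config" -}}
  s3.access-key: {{ index .Data.data "access-key" }}
  s3.secret-key: {{ index .Data.data "secret-key" }}
  {{- end }}

The rendered file then shows up at /vault/secrets/flink-s3.conf inside the pod; the image's entrypoint (or a small wrapper like the runapp script above) still has to merge it into the Flink configuration before the job starts.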

Use of Umbrella Chart in CI/CD Pipeline w/ Multiple Contractors

I am new to this group. Glad to have connected.
I am wondering if someone has experience in using an umbrella helm chart in a CI/CD process?
In our project, we have 2 separate developer contractors. Each contractor is responsible for specific microservices.
We are using Harbor as our repository for charts and accompanying container images and GitLab for our code repo and CI/CD orchestrator...via GitLab runners.
The plan is to use an umbrella chart to deploy all of the approximately 60 microservices as one system.
I am interested in hearing from any groups that have taken a similar approach and how they treated/handled the umbrella chart in their CI/CD process.
Thank you for any input/guidance.
We use a similar kind of pattern, where we have 30+ microservices.
We have a GitHub repo for the base charts.
The base-microservice chart has all sorts of Kubernetes templates (like HPA, ConfigMap, Secrets, Deployment, Service, Ingress, etc.), each having the option to be enabled or disabled.
Note: the base chart can even contain other charts too.
E.g. this base chart has a dependency on the nginx-ingress chart:
apiVersion: v2
name: base-microservice
description: A base helm chart for deploying a microservice in Kubernetes
type: application
version: 0.1.6
appVersion: 1
dependencies:
  - name: nginx-ingress
    version: "~1.39.1"
    repository: "alias:stable"
    condition: nginx-ingress.enabled
Below is an example template for secrets.yaml template:
{{- if .Values.secrets.enabled -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "base-microservice.fullname" . }}
type: Opaque
data:
{{- toYaml .Values.secrets.data | nindent 2 }}
{{- end }}
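The enable/disable switches live in the base chart's own values.yaml. A sketch of the relevant defaults, based only on the keys that appear in the templates and overrides in this answer (the default values themselves are assumptions):

# base-microservice/values.yaml (sketch)
nameOverride: ""
image:
  repository: ""
resources: {}
probe:
  initialDelaySeconds: 60
secrets:
  enabled: false
  data: {}
ingress:
  enabled: false
nginx-ingress:
  enabled: false

Each microservice then overrides only what it needs, as shown further below.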
Now, when a commit happens in this base-charts repo, as part of the CI process (along with other things) we:
1. Check whether a Helm index already exists in the charts repository.
2. If it exists, download the existing index and merge the currently generated index with it: helm repo index --merge oldindex/index.yaml .
3. If it does not exist, create a new Helm index (helm repo index .). Then upload the archived charts and the index.yaml to our charts repository.
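Since the question mentions GitLab runners, that publish step can be expressed as a CI job in the base-charts repo. A minimal sketch; the helper image tag and the Harbor chart-repository URL are placeholders, and the actual upload step depends on how your Harbor project is set up:

# .gitlab-ci.yml (sketch)
publish-base-chart:
  stage: publish
  image: alpine/helm:3.8.0
  script:
    - helm package base-microservice -d dist/
    - mkdir -p oldindex
    # fetch the existing index if the chart repository already has one
    - wget -q -O oldindex/index.yaml https://harbor.example.com/chartrepo/library/index.yaml || true
    - if [ -s oldindex/index.yaml ]; then helm repo index dist/ --merge oldindex/index.yaml; else helm repo index dist/; fi
    - echo "now upload dist/*.tgz and dist/index.yaml to the chart repository"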
Now, in each of our microservices, we have a charts directory, inside which we have only 2 files:
Chart.yaml
values.yaml
The Chart.yaml for this microservice A looks like:
apiVersion: v2
name: my-service-A
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1
dependencies:
  - name: base-microservice
    version: "0.1.6"
    repository: "alias:azure"
And the values.yaml for this microservice A has the values that need to be overridden from the base-microservice defaults, e.g.:
base-microservice:
  nameOverride: my-service-A
  image:
    repository: myDockerRepo/my-service-A
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      cpu: 300m
      memory: 500Mi
  probe:
    initialDelaySeconds: 120
  nginx-ingress:
    enabled: true
  ingress:
    enabled: true
Now while doing Continuous Deployment of this microservice, we have these steps (among others):
Fetch helm dependencies (helm dependency update ./charts/my-service-A)
Deploy my release to kubernetes (helm upgrade --install my-service-a ./charts/my-service-A)

Kubernetes w/ helm: MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal

I'm using a script to run a helm command which upgrades my k8s deployment.
Before, I used kubectl to deploy directly; since I've moved to Helm and started using charts, I see an error on the k8s pods after deploying:
MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal
My script looks similar to:
value1="foo"
value2="bar"
helm upgrade deploymentName --debug --install --atomic --recreate-pods --reset-values --force --timeout 900 pathToChartDir --set value1="$value1" --set value2="$value2"
The deployment.yaml is as following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentName
spec:
  selector:
    matchLabels:
      run: deploymentName
  replicas: 2
  template:
    metadata:
      labels:
        run: deploymentName
        app: appName
    spec:
      containers:
        - name: deploymentName
          image: {{ .Values.image.acr.registry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
          volumeMounts:
            - name: secret
              mountPath: /secrets
              readOnly: true
          ports:
            - containerPort: 1234
          env:
            - name: DOTENV_CONFIG_PATH
              value: "/secrets/env"
      volumes:
        - name: secret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: "kvcreds"
            options:
              usepodidentity: "false"
              tenantid: {{ .Values.tenantid }}
              subscriptionid: {{ .Values.subsid }}
              resourcegroup: {{ .Values.rg }}
              keyvaultname: {{ .Values.kvname }}
              keyvaultobjecttype: secret
              keyvaultobjectname: {{ .Values.objectname }}
As can be seen, the error relates to the secret volume and its values.
I've triple checked there is no line-break or anything like that in the values.
I've run helm lint - no errors found.
I've run helm template - nothing strange or missing in output.
Update:
I've copied the output of helm template and put in a deploy.yaml file.
Then used kubectl apply -f deploy.yaml to manually deploy the service, and... it works.
That makes me think it's actually some kind of bug in Helm. Does that make sense?
Update 2:
I've also tried replacing the azure/kv volume with an emptyDir volume, and I was able to deploy using Helm. It looks like a specific issue of Helm with the azure/kv volume?
Any ideas for a workaround?
A completely correct answer requires that I say the actual details of your \r problem might be different from mine.
I found the issue in my case by looking in the kv log of the AKS node (/var/log/kv-driver.log). In my case, the error was:
Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Access denied. Caller was not found on any access policy.\r\n
You can learn to SSH into the node on this page:
https://learn.microsoft.com/en-us/azure/aks/ssh
If you want to follow the solution, I opened an issue:
https://github.com/Azure/kubernetes-keyvault-flexvol/issues/121