Helm: break loop (range) in template - kubernetes

I have an application that is hosted in multiple environments, and a Helm chart is used to deploy it. I have this values.yaml:
app:
  component: mobile
  type: web
  env: prd    # overridden with external parameters at deployment time (dev / stg / uat)
image:
  repository: ********.dkr.ecr.ap-south-1.amazonaws.com/mobile
  pullPolicy: IfNotPresent
  versions:
    v1:
      name: stable
      replicaCount: 2
      tag: latest
    v2:
      name: release
      replicaCount: 1
      tag: latest
The Deployment template iterates over versions v1 and v2 (canary fashion). Canary deployment is performed only on the PRD environment, so on DEV / STG / UAT only one version is deployed and the loop needs to iterate only once for those environments.
{{- range $version, $val := $.Values.image.versions }}
---
apiVersion: apps/v1
kind: Deployment
I can set the number of required pods for v2 to 0, but that still creates unnecessary metadata (Deployment, ReplicaSet).
So is there any way to break the loop in a Helm template with a condition (env: prd) so that the loop does not iterate over v2?
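One commonly suggested workaround (a minimal sketch assuming the values layout above, not taken from the original post): Go templates in many Helm versions have no break, so the loop body can be guarded with an if so that v2 only renders when app.env is prd:

{{- range $version, $val := $.Values.image.versions }}
{{- if or (eq $.Values.app.env "prd") (eq $version "v1") }}
---
apiVersion: apps/v1
kind: Deployment
# ... rest of the per-version Deployment spec ...
{{- end }}
{{- end }}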

Related

Can a Deployment with multiple ReplicaSets run a different CMD per container?

I want to create a few Pods from the same image (I have the Dockerfile), so I want to use ReplicaSets, but the final CMD command needs to be different for each container.
For example
(https://www.devspace.sh/docs/5.x/configuration/images/entrypoint-cmd):
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - dev
And the other container will do:
image:
  frontend:
    image: john/appfrontend
    cmd:
      - run
      - <new value>
I would also like to move the CMD value out of a list, so the value there can be a variable (it will be in a loop, so each Pod will have to be created separately).
Is it possible?
You can't directly do this as you've described it. A ReplicaSet manages some number of identical Pods, where the command, environment variables, and every other detail except for the Pod name are the same across every replica.
In practice you don't usually directly use ReplicaSets; instead, you create a Deployment, which creates one or more ReplicaSets, which create Pods. The same statement and mechanics apply to Deployments, though.
Since this is specifically in the context of a Helm chart, you can have two separate Deployment YAML files in your chart, but then use Helm templating to reduce the amount of code that needs to be repeated. You can add a helper template to templates/_helpers.tpl that contains most of the data for a container:
# templates/_helpers.tpl
{{- define "myapp.container" -}}
image: my-image:{{ .Values.tag }}
env:
  - name: FOO
    value: bar
  - name: ET
    value: cetera
{{ end -}}
Now you can have two template Deployment files, but provide a separate command: for each.
# templates/deployment-one.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.name" . }}-one
  labels:
{{ include "myapp.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.one.replicas }}
  template:
    metadata:
      labels:
{{ include "myapp.labels" . | indent 8 }}
    spec:
      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          command:
            - npm
            - run
            - dev
There is still a fair amount to copy and paste, but you should be able to cp the whole file. Most of the boilerplate is Kubernetes boilerplate and every Deployment will have these parts; little of it is specific to any given application.
If your image has a default CMD (this is good practice) then you can omit the command: override on one of the Deployments, and it will run that default CMD.
In the question you make specific reference to Dockerfile CMD. One important terminology difference is that Kubernetes command: overrides Docker ENTRYPOINT, and Kubernetes args: matches CMD. If you are using an entrypoint wrapper script, in this example you will need to provide args: instead of command: so that the wrapper is still invoked.
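For example, a hedged variant of the container section above (reusing the same hypothetical helper) that overrides CMD while keeping the entrypoint wrapper in place:

      containers:
        - name: frontend
{{ include "myapp.container" . | indent 10 }}
          # args: maps to Docker CMD, so the image's ENTRYPOINT wrapper still runs first
          args:
            - npm
            - run
            - dev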

Helm function to set value based on a variable?

I'm learning Helm to set up my 3 AWS EKS clusters - sandbox, staging, and production.
How can I set up my templates so that some values are derived based on which cluster the chart is being installed in? For example, in my myapp/templates/deployment.yaml I may want
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
I may want replicas to be 1, 2, or 4 depending on whether I'm installing the chart in my sandbox, staging, or production cluster, respectively. I want to do the same trick for the CPU and memory requests and limits for my pods, for example.
I was thinking of having something like this in my values.yaml file
environments:
  - sandbox
  - staging
  - production
perClusterValues:
  replicas:
    - 1
    - 2
    - 4
  cpu:
    requests:
      - 256m
      - 512m
      - 1024m
    limits:
      - 512m
      - 1024m
      - 2048m
  memory:
    requests:
      - 1024Mi
      - 1024Mi
      - 2048Mi
    limits:
      - 2048Mi
      - 2048Mi
      - 3072Mi
So if I install a helm chart in the sandbox environment, I want to be able to do
$ helm install myapp myapp --set environment=sandbox
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  # In pseudo-code, in my YAML files:
  # get the index from the .Values.environments list
  # based on the passed-in environment parameter
  {{ $myIndex = indexOf .Values.environments .Values.environment }}
  replicas: {{ .Values.perClusterValues.replicas $myIndex }}
  {{- end }}
I hope you understand my logic, but what is the correct syntax? Or is this even a good approach?
You can use the helm install -f option to pass an extra YAML values file in, and this takes precedence over the chart's own values.yaml file. So using exactly the template structure you already have, you can provide alternate values files
# sandbox.yaml
autoscaling:
  enabled: false
replicaCount: 1

# production.yaml
autoscaling:
  enabled: true
replicaCount: 5
And then when you go to deploy the chart, run it with
helm install myapp . -f production.yaml
(You can also helm install --set replicaCount=3 to override specific values, but the --set syntax is finicky and unusual; using a separate YAML file per environment is probably easier. Some tooling might be able to take advantage of JSON files also being valid YAML to write out additional deploy-time customizations.)
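(If you do prefer a single values file keyed by environment, as in the question, one possible sketch is to key the per-cluster values by environment name and look them up with the built-in index function; this layout is an assumption and not part of the answer above.)

# values.yaml (hypothetical layout)
environment: sandbox
perClusterValues:
  sandbox:
    replicas: 1
  staging:
    replicas: 2
  production:
    replicas: 4

# deployment.yaml
replicas: {{ index .Values.perClusterValues .Values.environment "replicas" }}

helm install myapp . --set environment=production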

Use of Umbrella Chart in CI/CD Pipeline w/ Multiple Contractors

I am new to this group. Glad to have connected.
I am wondering if someone has experience in using an umbrella helm chart in a CI/CD process?
In our project, we have 2 separate developer contractors. Each contractor is responsible for specific microservices.
We are using Harbor as our repository for charts and accompanying container images and GitLab for our code repo and CI/CD orchestrator...via GitLab runners.
The plan is to use an umbrella chart to deploy all approx 60 microservices as one system.
I am interested in hearing from any groups that have taken a similar approach and how they treated/handled the umbrella chart in their CI/CD process.
Thank you for any input/guidance.
VR,
We use a similar kind of pattern, where we have 30+ microservices.
We have a GitHub repo for base charts.
The base-microservice chart has all sorts of Kubernetes templates (like HPA, ConfigMap, Secret, Deployment, Service, Ingress etc.), each having the option to be enabled or disabled.
Note: the base chart can even contain other charts too,
e.g. this base chart has a dependency on the nginx-ingress chart:
apiVersion: v2
name: base-microservice
description: A base helm chart for deploying a microservice in Kubernetes
type: application
version: 0.1.6
appVersion: 1
dependencies:
  - name: nginx-ingress
    version: "~1.39.1"
    repository: "alias:stable"
    condition: nginx-ingress.enabled
Below is an example secrets.yaml template:
{{- if .Values.secrets.enabled -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "base-microservice.fullname" . }}
type: Opaque
data:
{{- toYaml .Values.secrets.data | nindent 2 }}
{{- end }}
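For illustration, a hypothetical values fragment that would drive this template (the keys follow the template above; the data entry is a placeholder and must be base64-encoded):

secrets:
  enabled: true
  data:
    DB_PASSWORD: cGFzc3dvcmQ=   # base64 of "password"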
Now when a commit happens in this base-charts repo, as part of the CI process (along with other things) we do the following (a rough shell sketch is given after these steps):
Check if a Helm index already exists in the charts repository.
If it exists, download the existing index and merge the currently generated index with it -> helm repo index --merge oldindex/index.yaml .
If it does not exist, create a new Helm index -> helm repo index . Then upload the archived charts and the index.yaml to our charts repository.
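The sketch below assumes a generic chart repository URL and local paths (both illustrative, not from the original answer):

helm package base-microservice/
if curl -sfO https://charts.example.com/index.yaml; then
  mkdir -p oldindex && mv index.yaml oldindex/index.yaml
  helm repo index --merge oldindex/index.yaml .
else
  helm repo index .
fi
# then upload the generated *.tgz archives and index.yaml to the charts repository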
Now in each of our microservices we have a charts directory, inside which we have only 2 files:
Chart.yaml
values.yaml
The Chart.yaml for this microservice A looks like:
apiVersion: v2
name: my-service-A
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1
dependencies:
  - name: base-microservice
    version: "0.1.6"
    repository: "alias:azure"
And the values.yaml for this microservice A holds the values that need to override the base-microservice defaults, e.g.:
base-microservice:
  nameOverride: my-service-A
  image:
    repository: myDockerRepo/my-service-A
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      cpu: 300m
      memory: 500Mi
  probe:
    initialDelaySeconds: 120
  nginx-ingress:
    enabled: true
  ingress:
    enabled: true
Now while doing Continuous Deployment of this microservice, we have these steps (among others), sketched as a CI job below:
Fetch helm dependencies (helm dependency update ./charts/my-service-A)
Deploy my release to kubernetes (helm upgrade --install my-service-a ./charts/my-service-A)
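A hedged sketch of what that deploy job could look like in .gitlab-ci.yml (the job name, stage, and the chart repository URL behind the azure alias are assumptions):

deploy-my-service-A:
  stage: deploy
  script:
    - helm repo add azure https://charts.example.com   # repository alias referenced in Chart.yaml
    - helm dependency update ./charts/my-service-A
    - helm upgrade --install my-service-a ./charts/my-service-A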

How to set java environment variables in a helm chart?

What is the best practice for setting environment variables for a Java app's deployment in a Helm chart, so that I can use the same chart for dev and prod environments? I have separate Kubernetes deployments for both environments.
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc ..."
Similarly, my prod variables would be something like
"-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc ..."
Now, how can I leverage Helm to write a single chart but create separate sets of pods with different properties according to the environment, as in
helm install my-app --set env=prod ./test-chart
or
helm install my-app --set env=dev ./test-chart
The best way is to use a single deployment template with a separate values file for each environment.
This is not limited to environment variables used in the application; the same approach can be applied to any environment-specific configuration.
Example:
deployment.yaml
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "{{ .Values.opts }}"
values-dev.yaml
# system opts
opts: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc "
values-prod.yaml
# system opts
opts: "-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc "
Then specify the relevant values file in the helm command.
For example, deploying to the dev environment:
helm install -f values-dev.yaml my-app ./test-chart
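And likewise for prod, using the same chart with the prod values file:
helm install -f values-prod.yaml my-app ./test-chart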

How to pass gitlab ci/cd variables to kubernetes(AKS) deployment.yaml

I have a Node.js (Express) project checked into GitLab, and it is running in Kubernetes. I know we can set env variables in Kubernetes (on Azure, AKS) in the deployment.yaml file.
How can I pass GitLab CI/CD env variables to the Kubernetes (AKS) deployment.yaml file?
You can develop your own Helm charts. This will pay off in the long run.
Another approach: an easy and versatile way is to put ${MY_VARIABLE} placeholders into the deployment.yaml file. Then, during the pipeline run, in the deployment job, use the envsubst command to substitute the vars with their respective values and deploy the file.
Example deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-${MY_VARIABLE}
  labels:
    app: nginx
spec:
  replicas: 3
  (...)
Example job:
(...)
deploy:
  stage: deploy
  script:
    - envsubst < deployment.yaml > deployment-${CI_JOB_NAME}.yaml
    - kubectl apply -f deployment-${CI_JOB_NAME}.yaml
I'm going to give you an easy solution that may or may not be "the solution".
To do what you want, you could simply create a Secret from your GitLab env variables during the CD stage, before launching your deployment. This lets you consume the Secret as env variables inside the deployment.
If you go this route, you will need to think about how to delete or replace the Secret when you want to update the values, so the process stays idempotent.
Another solution would be to create the thing you are deploying as a Helm Chart. This would allow you to have specific variables (called values) that you can use in the templating and override at install / upgrade time.
There are many articles around getting setup with something like this.
Here is one specifically around the context of CI/CD: https://medium.com/@gajus/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680
Another specifically around GitLab: https://medium.com/@yanick.witschi/automated-kubernetes-deployments-with-gitlab-helm-and-traefik-4e54bec47dcf
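A minimal sketch of such a Helm-based deploy job (the chart path and the image.tag value name are assumptions about the chart, not given in the question):

deploy:
  stage: deploy
  script:
    - helm upgrade --install myapp ./chart --set image.tag=${CI_COMMIT_SHORT_SHA}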
For future readers. Another way is to use a template file and generate deployment.yaml from the template using envsubst.
Template file:
# template/deployment.tmpl
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-deployment
  namespace: strapi
  labels:
    app: strapi
# deployment specifications
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi
  # pod specifications
  template:
    metadata:
      labels:
        app: strapi
    # pod blueprints
    spec:
      containers:
        - name: strapi-container
          image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry-secret
deploy stage in .gitlab-ci.yml
(...)
deploy:
  stage: deploy
  script:
    # deploy resources in k8s cluster
    - envsubst < strapi-deployment.tmpl > strapi-deployment.yaml
    - kubectl apply -f strapi-deployment.yaml
As defined here in image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}, IMAGE_TAG is an environment variable defined in GitLab. envsubst goes through strapi-deployment.tmpl, substitutes any variables defined there, and generates the strapi-deployment.yaml file.
The sed command helped me with this.
In Deployment.yaml, use a placeholder, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  # Other configs bla-bla-bla
spec:
  containers:
    - name: app
      image: my.registry./myapp:<VERSION>
And in .gitlab-ci.yml use sed:
deploy:
  stage: deploy
  image: kubectl-img
  script:
    # - kubectl bla-bla-bla whatever you want to do before the apply command
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" Deployment.yaml
    - kubectl apply -f Deployment.yaml
So the resulting Deployment.yaml will contain the CI_COMMIT_SHORT_SHA value instead of <VERSION>.