Blue/Green Deployment with Helm Charts - Kubernetes

We can deploy applications using Helm charts with
helm install --name the-release helm/the-service-helm --namespace myns
and we can do a rolling upgrade of the deployment using
helm upgrade --recreate-pods the-release helm/the-service-helm --namespace myns
Is there a way to use Helm charts to achieve blue/green deployments?

Let's start with the definition
Since there are many deployment strategies, it is worth pinning down what blue/green actually means.
As Martin Fowler puts it:
The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
Blue/green is not directly supported by Helm, but there are workarounds
As per Helm issue #3518, it's not recommended to use Helm on its own for blue/green or canary deployments.
There are at least three solutions built on top of Helm, covered below.
There is also a community Helm chart for this case.
Helm itself (TL;DR: not recommended)
Helm itself is not intended for this use case. See the maintainers' explanation:
direct support for blue / green deployment pattern in helm · Issue #3518 · helm/helm
Helm works more in the sense of a traditional package manager, upgrading charts from one version to the next in a graceful manner (thanks to pod liveness/readiness probes and deployment update strategies), much like how one expects something like apt upgrade to work. Blue/green deployments are a very different beast compared to the package manager style of upgrade workflows; blue/green sits at a level higher in the toolchain because the use cases around these deployments require step-in/step-out policies, gradual traffic migrations and rollbacks. Because of that, we decided that blue/green deployments are something out of scope for Helm, though a tool that utilizes Helm under the covers (or something parallel like istio) could more than likely be able to handle that use case.
Other solutions based on Helm
There are at least three solutions built on top of Helm, described and compared here:
Shipper
Istio
Flagger
Shipper by Booking.com - DEPRECATED
bookingcom/shipper: Kubernetes native multi-cluster canary or blue-green rollouts using Helm
It does this by relying on Helm, and using Helm Charts as the unit of configuration deployment. Shipper's Application object provides an interface for specifying values to a Chart just like the helm command line tool.
Shipper consumes Charts directly from a Chart repository like ChartMuseum, and installs objects into clusters itself. This has the nice property that regular Kubernetes authentication and RBAC controls can be used to manage access to Shipper APIs.
Istio
You can try something like this:
kubectl create -f <(istioctl kube-inject -f cowsay-v1.yaml) # deploy v1
kubectl create -f <(istioctl kube-inject -f cowsay-v2.yaml) # deploy v2
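Deploying both versions only gets you halfway; the actual blue/green switch happens in Istio's traffic routing. As a rough sketch (not from the original answer; it assumes a Kubernetes Service named cowsay and version labels v1/v2 on the pods), a DestinationRule plus a VirtualService pin all traffic to one subset, and flipping the subset is the cut-over:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cowsay
spec:
  host: cowsay
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cowsay
spec:
  hosts:
    - cowsay
  http:
    - route:
        - destination:
            host: cowsay
            # Flip this subset between v1 and v2 to switch blue/green.
            subset: v2
Applying the edited VirtualService is the "switch the router" step from Fowler's definition: all incoming requests move to the new version at once.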
Flagger
There is a guide written by the Flagger team: Blue/Green Deployments - Flagger.
It shows how to automate blue/green deployments with Flagger and Kubernetes (sketched below).
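The heart of that setup is a Canary resource configured with a fixed number of analysis iterations, which makes Flagger promote in a blue/green style rather than shifting traffic gradually. The names, port, and metric below are illustrative, not copied from the guide:
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
spec:
  provider: kubernetes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 8080
  analysis:
    # Blue/green style: test the new version for a fixed number of
    # iterations, then switch all traffic over in one step.
    interval: 30s
    threshold: 2
    iterations: 10
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m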
You might try Helm itself with a dedicated chart
Also, as Kamol Hasan recommended, you can try this chart: puneetsaraswat/HelmCharts/blue-green.
blue.yml sample
{{ if .Values.blue.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "blue-green.fullname" . }}-blue
  labels:
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "blue-green.name" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  # apps/v1 Deployments require an explicit selector.
  selector:
    matchLabels:
      app: {{ template "blue-green.name" . }}
      release: {{ .Release.Name }}
      slot: blue
  template:
    metadata:
      labels:
        app: {{ template "blue-green.name" . }}
        release: {{ .Release.Name }}
        slot: blue
    spec:
      containers:
        - name: {{ template "blue-green.name" . }}-blue
          image: nginx:stable
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          # This (and the volumes section below) mounts the config map as a volume.
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: wwwdata-volume
      volumes:
        - name: wwwdata-volume
          configMap:
            name: {{ template "blue-green.fullname" . }}
{{ end }}
Medium blog post: Blue/Green Deployments using Helm Charts
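What actually flips traffic in this kind of chart is a Service whose selector targets one slot. The chart linked above ships its own service template; the following is only a sketch of the idea, with .Values.productionSlot as a hypothetical value name:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "blue-green.fullname" . }}
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: {{ template "blue-green.name" . }}
    release: {{ .Release.Name }}
    # Hypothetical value: switching it between "blue" and "green"
    # (e.g. helm upgrade ... --set productionSlot=green) moves live traffic.
    slot: {{ .Values.productionSlot }}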

Related

Install newer version of chart side by side

I'm trying to implement canary deployments with Istio, but first I have to deploy pods from the old chart version (already managed to do that) and pods from the new chart version.
I created a new version of my chart, and the chart was created successfully.
Now I try to use the helm install command to deploy the new version side by side with the old one.
I pass a new release name (my-release-v2) to the command in order to avoid overriding the old version, but I get an error that the release name in the chart must match the release name.
At this stage I'm a bit puzzled. Should I override it in values.yaml, and if so, how exactly? Is this a best practice?
OK, I figured this one out; posting it in case it helps someone.
The release name should be unique. A good practice is to use the application name (the chart's fullname) along with the intended version in the helm install command.
Then, we can apply the same practice to the Deployment object that deploys our pods, so it is unique per version. (Relevant part of deployment.yaml:)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include ".Chart.Name.fullname" . }}-{{ .Chart.AppVersion }}
And of course, for a future Istio selector, create a version label on the pods (relevant part of deployment.yaml):
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app.kubernetes.io/version: {{ .Chart.AppVersion }}
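Putting it together, installing the two versions side by side then comes down to two releases of the same chart (Helm 3 syntax; the release names and chart archives below are illustrative):
# Each release name embeds the version, so the two releases can coexist;
# the chart's appVersion drives the version label that Istio selects on.
helm install my-app-v1 ./my-app-1.0.0.tgz
helm install my-app-v2 ./my-app-2.0.0.tgz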

helm dependencies with different namespaces

Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a "super" Helm chart that lists those charts as dependencies and installs them in one shot. My problem is that, since these tools live in different namespaces, I am not sure where to specify the namespace in which each particular dependency (chart) should be installed. For example, if the following is the Chart.yaml of my super chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions but I believe conditions will only take a boolean as a value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says that we can do it in Helm 3, but it is mostly about how to keep releases in different namespaces. It does not specifically answer my question.
There is no builtin way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1 # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => run helm install/upgrade on all releases, or
helmfile apply => same as sync, but do a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values or want to have something similar to how umbrella Chart values are handled, have a look at helmfile: a simple trick to handle values intuitively
The way I solved this for my clusters is with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of the ease of bootstrapping it provides.
Assuming ArgoCD is in place, the structure is a single Helm chart containing a template for each of the child charts it will deploy, each managed via Argo's Application CRD. You will notice there is a field in the CRD, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template that governs my cert-manager chart deployment to the cert-manager namespace looks like this:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired value file(s) in each child chart's directory specified:
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"

Helm best practices

I am new to Helm and like the idea of using it to version deployments and package them as artifacts in JFrog Artifactory, but one thing I am unclear about is how easy it is to create a chart.
I am comfortable with Kubernetes manifests, and creating one is very simple; you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now for creating a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
Curious whether Helm has shortcuts like the one kubectl provides, to create charts easily by keying in the required values on the command line while generating the chart?
Also, is there a migration utility available that supports converting deployment manifests to Helm charts?
helm create does what you are looking for. It creates a directory with all the basic stuff so that you don't need to manually create each file/directory. However, it can't create the content of a chart it knows nothing about.
But there is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you can replace whatever you want to make "dynamic" with the placeholders used by Helm. That's it.
So, in other words, just keep exporting as you are (although I strongly suggest you stop doing this and create proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
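The matching values.yaml for that template then only needs the two keys referenced above (the values themselves are just examples):
name: my-service   # falls back to .Chart.Name when left empty
port: 8080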

using node selector helm chart to assign pods to a specific node pool

I'm trying to assign pods to a specific node pool as part of the helm command, so in the end the deployment YAML should look like this:
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of my Jenkinsfile deployment:
sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value in the deployment YAML, and as a result the pods are not assigned to the specific node pool I want. Any idea what is wrong? Should I put a placeholder in my chart template, or is that not a must?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
      {{- end }}
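With that block in the template, the value can come from values.yaml or the command line and will render into the pod spec. The label key below mirrors the one in the question and is only illustrative:
# on the command line
helm upgrade --install my-release ./my-chart --set nodeSelector.node-name=dev-cpu-pool

# renders in the Deployment's pod spec as
nodeSelector:
  node-name: dev-cpu-pool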

How to parameterize image version when passing yaml for container creation

Is there any way to pass the image version from a variable/config when passing a manifest .yaml to the kubectl command?
Example:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:${IMAGE_VERSION}
          imagePullPolicy: Always
          resources:
            limits:
              cpu: "1.2"
              memory: 100Mi
          ports:
            - containerPort: 80
The use case is to launch a specific image version that is set at the Kubernetes level, so that the variable is resolved by Kubernetes itself on the server side.
k8s manifest files are static YAML/JSON.
If you would like to template the manifests (and manage multiple resources in a bundle-like fashion), I strongly recommend you have a look at Helm.
I've recently created a Workshop which focuses precisely on the "Templating" features of Helm.
Helm does a lot more than just templating; it is built as a full-fledged package manager for Kubernetes applications (think Apt/Yum/Homebrew).
If you want to handle everything client side, have a look at https://github.com/errordeveloper/kubegen
Although, at some point, you will likely need the other features of Helm, and a migration will be needed when that time comes; I recommend biting the bullet and going for Helm straight up.
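For the concrete case in the question, the Helm route boils down to moving the manifest into a chart template and replacing the shell-style variable with a Helm value; the value name below is just an example, not a convention:
# templates/nginx-rc.yaml (fragment)
      containers:
        - name: nginx
          image: "nginx:{{ .Values.imageVersion }}"
which you would then install with something like helm install nginx-rc ./nginx-chart --set imageVersion=1.25.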
After looking into this recently, we decided to just go with sed: wrap kubectl apply in a small bash script and replace the placeholders before running apply.
We did look into more sophisticated tooling, but we only found Helm. However, Helm is a complex piece of technology that does far more than just templating. It changes your workflow a lot, as you no longer deploy using kubectl and need a Helm package repo around to push your packages to. Our assessment was that Helm is not useful for deploying our application, and using it just for templating is overkill.
Here is an example of how to do this with sed (it is an excerpt from my typical CircleCI config):
replaces="s/{.Namespace}/$CIRCLE_BRANCH/;";
replaces="$replaces s/{.CiBuild}/$CIRCLE_BUILD_NUM/; ";
replaces="$replaces s/{.CiCommit}/$CIRCLE_SHA1/; ";
replaces="$replaces s/{.CiUser}/$CIRCLE_USERNAME/; ";
cat ./k8s/app.yaml | sed -e "$replaces" | ./kubectl --kubeconfig=`pwd`/.kube/config apply --namespace=$NAMESPACE -f -
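For this to work, app.yaml has to contain the matching placeholders that sed rewrites; a minimal illustrative fragment (the label names are made up for the example):
# ./k8s/app.yaml (fragment)
metadata:
  labels:
    ci-build: "{.CiBuild}"
    ci-commit: "{.CiCommit}"
    deployed-by: "{.CiUser}"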