Helm best practices - kubernetes

I am new to Helm and I like the idea of versioning deployments and packaging them as artifacts in JFrog Artifactory, but one thing I am unclear about is how easy it is to create a chart.
I am comfortable with Kubernetes manifests, and creating them is very simple because you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML fields, as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now, to create a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
Curious if Helm has shortcuts like the one kubectl provides, so that the required values can be supplied on the command line while generating the chart?
Also, is there a migration utility available that converts deployment manifests to Helm charts?

helm create does what you are looking for. It creates a directory with all the basic scaffolding so that you don't need to create each file/directory manually. However, it can't create the content of a chart it knows nothing about.
There is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you replace whatever you want to make "dynamic" with the placeholders used by Helm. That's it.
So, in other words, just keep exporting as you are (although I strongly suggest you stop doing that and write proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
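For context, a minimal values.yaml to back this template might look like the following; the name and port are illustrative values, not taken from the answer above.
# values.yaml (illustrative)
name: my-service   # omit to fall back to .Chart.Name
port: 8080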

Related

Using ingress host from k8s secret in helm

I am trying to deploy a service using Helm. The cluster is Azure AKS, and I have one DNS zone associated with the cluster that can be used for ingress.
The issue is that the DNS zone is stored in a Kubernetes Secret, and I want to use it as the host in the Ingress, like below:
spec:
  tls:
    - hosts:
        - {{ .Chart.Name }}.{{ .Values.tls.host }}
  rules:
    - host: {{ .Chart.Name }}.{{ .Values.tls.host }}
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: {{ .Chart.Name }}
                port:
                  number: 80
            path: "/"
I want the .Values.tls.host value to come from the Secret. Currently, it is hardcoded in the values.yaml file.
This is a community wiki answer posted for better visibility. Feel free to expand it.
As of the current version of Helm (3.8.0), it is not possible to read values directly from a Secret with the standard approach.
Based on the information from the Helm website:
A template directive is enclosed in {{ and }} blocks.
The values that are passed into a template can be thought of as namespaced
objects, where a dot (.) separates each namespaced element.
Objects are passed into a template from the template engine and can be:
Release
Values
Chart
Files
Capabilities
Template
Contents for Values objects can come from multiple sources:
The values.yaml file in the chart
If this is a subchart, the values.yaml file of a parent chart
A values file if passed into helm install or helm upgrade with the -f flag (helm install -f myvals.yaml ./mychart)
Individual parameters passed with --set (such as helm install --set foo=bar ./mychart)
Helm cannot fetch values from Kubernetes, and Ingress manifests do not support this type of reference.
You need to do this outside of Helm and stitch the commands together, in your pipeline, from a script, or manually in the shell. For example:
helm install my-app chart/ \
--set tls.host="$(kubectl get secret my-dns-zone \
-o go-template='{{index .data "dns-zone" | base64decode }}')"
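If you want to check the rendered host before installing, a dry run of the same approach works too; this is just a sketch reusing the secret name and key assumed above.
# Render the chart locally and inspect the generated Ingress host
helm template my-app chart/ \
  --set tls.host="$(kubectl get secret my-dns-zone \
    -o go-template='{{index .data "dns-zone" | base64decode }}')" \
  | grep host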

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and reference them across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize, as sketched below.
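A minimal sketch of that approach, with hypothetical directory and app names (not from the original answer):
# base/kustomization.yaml - the shared Deployment/Service spec lives in the base
resources:
  - deployment.yaml
  - service.yaml

# overlays/app-a/kustomization.yaml - reuses the base under a different name
resources:
  - ../../base
namePrefix: app-a-
commonLabels:
  app.kubernetes.io/name: app-a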
Use Helm: in ArgoCD, create a pipeline with a helm:3 container and a helm-chart directory or repository. Pull the chart repository and deploy with Helm, using values.yaml for the dynamic values you need. You will also need to add a kubeconfig file to your pipeline, but that is a separate issue.
This is the best suggestion I can offer; for further detail I would need to inspect ArgoCD.
I faced this problem and resolved it using Helm 3 charts:
a Chart.yaml file, where I set the chart name and version;
a values.yaml file, where I define all the variables to use for a specific environment;
a values-test.yaml file to use, for example, in a test environment, containing only the variables that change from one environment to another (layered as shown below).
I hope that helps you resolve your issue.
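With that layout, the environment-specific file is layered on top of the chart's base values at install time (values passed with -f override the chart's values.yaml); the release and chart names below are placeholders.
# Deploy to the test environment: values-test.yaml overrides the chart's values.yaml
helm upgrade --install my-release ./my-chart \
  -f ./my-chart/values-test.yaml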
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699) - this can be very annoying, especially for multi-environment setups. Fortunately, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following is an example values.yaml along with the template that generates your Vault Application:
values.yaml
server: 1.2.3.4 # Target kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
                ...
These values then override any base configuration residing in the values.yaml of the chart at {{ .Values.vault.repoURL }}, which can hold the configuration that does not change between environments.
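To tie it together, the generator chart would then be installed once per environment with its own values file; the names here are placeholders, not part of the original answer.
# One set of ArgoCD Applications per environment
helm install argocd-apps-dev ./my-argocd-application-helm \
  -f values-dev.yaml --namespace argocd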

Using a node selector in a Helm chart to assign pods to a specific node pool

I'm trying to assign pods to a specific node as part of the helm command, so in the end the deployment YAML should look like this:
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of a Jenkinsfile deployment:
sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value in the deployment YAML, and as a result the pods are not assigned to the specific node I want. Any idea what is wrong? Should I put a placeholder in my chart template, or is that not a must?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
      {{- end }}
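With that block in the template, the selector from the question can be supplied on the command line and will render into the Deployment; the release and chart names below are placeholders, and the key name matches the desired output above.
# Supply the selector at upgrade time
helm upgrade --install my-release helm/my-chart \
  --set nodeSelector.node-name=dev-cpu-pool

# Rendered result inside the pod spec:
#   nodeSelector:
#     node-name: dev-cpu-pool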

Blue Green Deployment with Helm Charts

We could deploy applications using Helm charts with
helm install --name the-release helm/the-service-helm --namespace myns
And we could do a rolling upgrade of the deployment using
helm upgrade --recreate-pods the-release helm/the-service-helm --namespace myns
Is there a way to use Helm charts to achieve blue/green deployments?
Let's start from the definition
Since there are many deployment strategies, let's begin with a definition.
As per Martin Fowler:
The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
Blue/green is not recommended in Helm, but there are workarounds
As per Helm issue #3518, it's not recommended to use Helm for blue/green or canary deployments.
There are at least three solutions built on top of Helm; see below.
There is also a Helm chart for that case.
Helm itself (TL;DR: not recommended)
Helm itself is not intended for this case. See the maintainers' explanation:
direct support for blue / green deployment pattern in helm · Issue #3518 · helm/helm
Helm works more in the sense of a traditional package manager, upgrading charts from one version to the next in a graceful manner (thanks to pod liveness/readiness probes and deployment update strategies), much like how one expects something like apt upgrade to work. Blue/green deployments are a very different beast compared to the package manager style of upgrade workflows; blue/green sits at a level higher in the toolchain because the use cases around these deployments require step-in/step-out policies, gradual traffic migrations and rollbacks. Because of that, we decided that blue/green deployments are something out of scope for Helm, though a tool that utilizes Helm under the covers (or something parallel like istio) could more than likely be able to handle that use case.
Other solutions based on Helm
There are at least three solutions built on top of Helm, described and compared here:
Shipper
Istio
Flagger.
Shipper by Booking.com - DEPRECATED
bookingcom/shipper: Kubernetes native multi-cluster canary or blue-green rollouts using Helm
It does this by relying on Helm, and using Helm Charts as the unit of configuration deployment. Shipper's Application object provides an interface for specifying values to a Chart just like the helm command line tool.
Shipper consumes Charts directly from a Chart repository like ChartMuseum, and installs objects into clusters itself. This has the nice property that regular Kubernetes authentication and RBAC controls can be used to manage access to Shipper APIs.
Kubernetes native multi-cluster canary or blue-green rollouts using Helm
Istio
You can try something like this:
kubectl create -f <(istioctl kube-inject -f cowsay-v1.yaml) # deploy v1
kubectl create -f <(istioctl kube-inject -f cowsay-v2.yaml) # deploy v2
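With both versions running, the blue/green switch itself happens in Istio's routing configuration rather than in Helm. A rough sketch of a VirtualService that sends all traffic to v2; the service name and subsets are assumptions, and a matching DestinationRule defining the v1/v2 subsets is also required.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cowsay
spec:
  hosts:
    - cowsay
  http:
    - route:
        - destination:
            host: cowsay
            subset: v1
          weight: 0     # "blue": idle
        - destination:
            host: cowsay
            subset: v2
          weight: 100   # "green": live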
Flagger.
There is guide written by Flagger team: Blue/Green Deployments - Flagger
This guide shows you how to automate Blue/Green deployments with Flagger and Kubernetes
You might try Helm itself
Also, as Kamol Hasan recommended, you can try that chart: puneetsaraswat/HelmCharts/blue-green.
blue.yml sample
{{ if .Values.blue.enabled }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "blue-green.fullname" . }}-blue
  labels:
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "blue-green.name" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "blue-green.name" . }}
        release: {{ .Release.Name }}
        slot: blue
    spec:
      containers:
        - name: {{ template "blue-green.name" . }}-blue
          image: nginx:stable
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          # This (and the volumes section below) mount the config map as a volume.
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: wwwdata-volume
      volumes:
        - name: wwwdata-volume
          configMap:
            name: {{ template "blue-green.fullname" . }}
{{ end }}
Medium blog post: Blue/Green Deployments using Helm Charts
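In this pattern the cutover is usually done by a Service whose selector targets one slot at a time; a sketch of what that could look like in the same chart (the productionSlot value is an assumption, not copied from the referenced chart):
apiVersion: v1
kind: Service
metadata:
  name: {{ template "blue-green.fullname" . }}
spec:
  selector:
    app: {{ template "blue-green.name" . }}
    slot: {{ .Values.productionSlot }}  # set to "blue" or "green" to switch traffic
  ports:
    - port: 80
      targetPort: http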

Best way to DRY up deployments that all depend on a very similar init-container

I have 10 applications to deploy to Kubernetes. Each of the deployments depends on an init container that is basically identical except for a single parameter (and it doesn't make conceptual sense for me to decouple this init container from the application). So far I've been copy-pasting this init container into each deployment.yaml file, but I feel like there has to be a better way of doing this!
I haven't seen a great solution in my research; the only thing I can think of so far is to use something like Helm to package up the init container and pull it in through some dependency-based mechanism (Argo?).
Has anyone else with this issue found a solution they were satisfied with?
A Helm template can contain an arbitrary amount of text, just so long as when all of the macros are expanded it produces a valid YAML Kubernetes manifest. ("Valid YAML" is trickier than it sounds because the indentation matters.)
The simplest way to do this would be to write a shared Helm template that included the definition for the init container:
_init_container.tpl:
{{- define "common.myinit" -}}
name: myinit
image: myname/myinit:{{ .Values.initTag }}
# Other things from a container spec
{{ end -}}
Then in your deployment, include this:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
        - {{ include "common.myinit" . | indent 10 | trim }}
Then you can copy the _init_container.tpl file into each of your individual services.
If you want to avoid the copy-and-paste (reasonable enough), you can create a Helm chart that contains only templates and no actual Kubernetes resources. You need to set up some sort of repository to hold this chart. Put the _init_container.tpl into that shared chart, declare it as a dependency in the chart metadata, and reference the template in your deployment YAML in the same way (Go template names are shared across all included charts).
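A minimal sketch of that dependency declaration (Helm 3); the chart name, version, and repository URL are placeholders.
# Chart.yaml of each service chart
apiVersion: v2
name: my-service
version: 0.1.0
dependencies:
  - name: common          # the shared chart holding _init_container.tpl
    version: 0.1.0
    repository: https://charts.example.com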