Using ingress host from k8s secret in helm - kubernetes

I am trying to deploy a service using Helm. The cluster is Azure AKS, and I have one DNS zone associated with the cluster that can be used for ingress.
But the issue is that the DNS zone is stored in a Kubernetes Secret, and I want to use it as the host in the ingress, like below:
spec:
  tls:
    - hosts:
        - {{ .Chart.Name }}.{{ .Values.tls.host }}
  rules:
    - host: {{ .Chart.Name }}.{{ .Values.tls.host }}
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: {{ .Chart.Name }}
                port:
                  number: 80
            path: "/"
I want the .Values.tls.host value to come from the Secret. Currently, it is hardcoded in the values.yaml file.

This is a community wiki answer posted for better visibility. Feel free to expand it.
For the current version of Helm (3.8.0), it is not possible to use values taken directly from a Secret with the standard approach.
Based on the information from the Helm website:
A template directive is enclosed in {{ and }} blocks.
The values that are passed into a template can be thought of as namespaced
objects, where a dot (.) separates each namespaced element.
Objects are passed into a template from the template engine and can be:
Release
Values
Chart
Files
Capabilities
Template
Contents for Values objects can come from multiple sources:
The values.yaml file in the chart
If this is a subchart, the values.yaml file of a parent chart
A values file if passed into helm install or helm upgrade with the -f flag (helm install -f myvals.yaml ./mychart)
Individual parameters passed with --set (such as helm install --set foo=bar ./mychart)

Helm cannot fetch values from Kubernetes, and the Ingress manifest does not support this type of reference.
You need to do this outside of Helm and stitch the commands together, in your pipeline, from a script, or manually in the shell.
helm install my-app chart/ \
  --set tls.host="$(kubectl get secret my-dns-zone \
    -o go-template='{{ index .data "dns-zone" | base64decode }}')"
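If this runs in a pipeline, the same idea can be split into explicit steps; a minimal shell sketch, assuming (as in the command above) a Secret named my-dns-zone with a dns-zone key in the current namespace:

#!/usr/bin/env sh
set -eu

# Read the DNS zone out of the Secret (decoded by the go-template).
DNS_ZONE="$(kubectl get secret my-dns-zone \
  -o go-template='{{ index .data "dns-zone" | base64decode }}')"

# Pass it to the chart exactly as if it had been set in values.yaml.
helm upgrade --install my-app chart/ --set tls.host="$DNS_ZONE"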

Related

helm dependencies with different namespaces

Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a super Helm chart in which I plan to add the charts of my tools (as mentioned above) and install them in one shot. My problem is that, since these tools live in different namespaces, I am not sure where to specify the namespace key for where each particular dependency (chart) should be installed. For example, if below is the Chart.yaml of my super Helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions, but I believe conditions only take a boolean as a value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says we can do it in Helm 3, but it is mostly about how to keep releases in different namespaces. It does not specifically answer my question.
There is no builtin way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1  # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => run helm install/upgrade on all releases, or
helmfile apply => same as sync, but do a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values or want to have something similar to how umbrella Chart values are handled, have a look at helmfile: a simple trick to handle values intuitively
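If you also need per-release values with helmfile, each release can take its own value files or inline values; a minimal sketch (the file path is hypothetical):

releases:
  - name: chart1
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
    values:
      - values/foo-values.yaml   # hypothetical per-release values file
      - replicaCount: 2          # inline values work too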
The way I solved this for my clusters is with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of the ease of its bootstrapping capabilities.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy and manage via Argo's Application CRD. You will notice there is a field in that CRD, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs

  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart

    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}

      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3

  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired value file(s) in each child chart's directory specified.
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and refer to them across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and a migration to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize.
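As a rough sketch of that approach (the directory layout and names are hypothetical), each overlay reuses the shared spec from a base and only changes the name prefix and labels:

# overlays/service-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Reuse the shared Deployment/Service spec from the base.
resources:
  - ../../base

# Give every resource in this overlay its own name and label.
namePrefix: service-a-
commonLabels:
  app: service-a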
Use Helm. In ArgoCD, create a pipeline with a helm:3 container and create a Helm chart directory or repository. Pull the chart repository and deploy with Helm. Use values.yaml for the dynamic values you want. Also, you will need to add a kubeconfig file to your pipeline, but that is another issue.
This is the best suggestion I can give; for further information I would need to inspect your ArgoCD setup.
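As a rough illustration of such a pipeline step, a minimal sketch assuming a GitLab-CI-style job, the public alpine/helm image, and a kubeconfig provided as a file-type CI variable (all names here are hypothetical):

deploy:
  image:
    name: alpine/helm:3.12.0
    entrypoint: [""]        # override the image's helm entrypoint so the job's shell can run
  script:
    - export KUBECONFIG="$KUBECONFIG_FILE"   # kubeconfig supplied by the pipeline
    - helm upgrade --install my-app ./helm-chart -f ./helm-chart/values.yaml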
I was faced with this problem and I resolved it using Helm 3 charts:
A Chart.yaml file where I indicate my release name and version.
A values.yaml file where I define all the variables to use for a specific environment.
A values-test.yaml file to use, for example, in a test environment, where you only put the variables that must be changed from one environment to another.
I hope that can help you resolve your issue.
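With that layout, the environment-specific file is layered on top of the chart's default values.yaml at install time; for example (the production file name is hypothetical):

# test environment: values-test.yaml overrides only what differs from values.yaml
helm install my-release ./my-chart -f values-test.yaml

# another environment simply passes a different override file
helm upgrade my-release ./my-chart -f values-prod.yaml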
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699); this can be very annoying, especially for multi-environment setups. ArgoCD, however, provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template which generates your Vault application:
values.yaml
server: 1.2.3.4 # Target kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
        ...
These values will then override any base configuration residing in the values.yaml of the chart at {{ .Values.vault.repoURL }}, which can contain config that doesn't change between environments.

Helm best practices

I am new to Helm and liked the idea of Helm creating versions for the deployments and packaging them as artifacts in JFrog Artifactory, but one thing that I am unclear about is the ease of creating charts.
I am comfortable with Kubernetes manifests, and creating one is very simple since you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML tags, as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now, for creating a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
I am curious whether Helm has shortcuts like the ones kubectl provides, to create charts easily by keying in the required values through the command line while generating the chart.
Also, is there a migration utility available that supports converting deployment manifests to Helm charts?
helm create does what you are looking for. It creates a directory with all the basic stuff so that you don't need to manually create each file/directory. However, it can't create the content of a Chart it has no clue about.
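For reference, the scaffold generated by helm create looks roughly like this (exact contents vary slightly between Helm versions):

$ helm create mychart
mychart/
  Chart.yaml        # chart name, version, description
  values.yaml       # default values referenced by the templates
  charts/           # subchart dependencies
  templates/        # deployment.yaml, service.yaml, ingress.yaml, _helpers.tpl, NOTES.txt, ...
  templates/tests/  # a basic connection test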
But there is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you can replace whatever you want to make "dynamic" with the placeholders used by Helm. That's it.
So, in other words, just keep exporting as you are (though I strongly suggest you stop doing this and create proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
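A minimal values.yaml that satisfies this template could look like the following; name is optional because of the default fallback to .Chart.Name:

# values.yaml
name: my-service   # optional; falls back to .Chart.Name when omitted
port: 8080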

kube helm charts - multiple values files

It might be a simple question, but I can't find anywhere whether it's doable.
Is it possible to have values files for Helm charts (stable/jenkins, let's say) and have two different values files for the same chart?
I would like to have some values like these in values_a.yaml:
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  ...
  password: {{ .Values.secrets.masterPassword }}
and in values_b.yaml, which will be encrypted with AWS KMS:
secrets:
  masterPassword: xxx
The above code doesn't work, and I wanted to know: since you can put those variables in Kubernetes manifests like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.config.name }}
  namespace: {{ .Values.config.namespace }}
  ...
can they somehow be passed to other values files as well?
EDIT:
If it were possible, I would just put
master:
  password: xxx
in values_b.yaml, but variables cannot be duplicated, and the official Helm chart expects the master.password value from that file, so I have to somehow pass it there, but in an encrypted way.
I'm not quite sure, but this feature of Helm might help you.
Helm gives you the ability to pass custom values files which take precedence over the fields of the main values.yaml when performing helm install or helm upgrade.
For Helm 3
$ helm install <name> ./mychart -f myValues.yaml
For Helm 2
$ helm install --name <name> ./mychart --values myValues.yaml
The valid answer is from David Maze, in the comments on Kamol Hasan's response.
You can use multiple -f or --values options: helm install ... -f values_a.yaml -f values_b.yaml. But you can't use templating in any Helm values file unless the chart specifically supports it (using the tpl function).
If you use multiple -f options, later values files override earlier ones.
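For completeness, "supports it using the tpl function" means the chart itself renders the value string as a template; a minimal illustration of what such a chart template could look like (this is not part of the stable/jenkins chart, just a sketch):

# templates/secret.yaml in a chart that supports templated values
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Chart.Name }}-master
stringData:
  # tpl renders the string value as a template against the chart context, so a values
  # file may contain password: '{{ .Values.secrets.masterPassword }}' and it resolves here.
  password: {{ tpl .Values.master.password . | quote }}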

Blue Green Deployment with Helm Charts

We could deploy applications using Helm charts with
helm install --name the-release helm/the-service-helm --namespace myns
and we could do a rolling upgrade of the deployment using
helm upgrade --recreate-pods the-release helm/the-service-helm --namespace myns
Is there a way to use 'Helm Charts' to achieve 'Blue/Green' Deployments?
Let's start with definitions
Since there are many deployment strategies, let's start with the definition.
As per Martin Fowler:
The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.
Blue/green is not recommended in Helm, but there are workaround solutions
As per Helm issue #3518, it's not recommended to use Helm for blue/green or canary deployments.
There are at least 3 solutions built on top of Helm; see below.
However, there is also a Helm chart for that case.
Helm itself (TL;DR: not recommended)
Helm itself is not intended for this case. See their explanation:
direct support for blue / green deployment pattern in helm · Issue #3518 · helm/helm
Helm works more in the sense of a traditional package manager, upgrading charts from one version to the next in a graceful manner (thanks to pod liveness/readiness probes and deployment update strategies), much like how one expects something like apt upgrade to work. Blue/green deployments are a very different beast compared to the package manager style of upgrade workflows; blue/green sits at a level higher in the toolchain because the use cases around these deployments require step-in/step-out policies, gradual traffic migrations and rollbacks. Because of that, we decided that blue/green deployments are something out of scope for Helm, though a tool that utilizes Helm under the covers (or something parallel like istio) could more than likely be able to handle that use case.
Other solutions based on Helm
There are at least three solutions built on top of Helm, described and compared here:
Shipper
Istio
Flagger.
Shipper by Booking.com - DEPRECATED
bookingcom/shipper: Kubernetes native multi-cluster canary or blue-green rollouts using Helm
It does this by relying on Helm, and using Helm Charts as the unit of configuration deployment. Shipper's Application object provides an interface for specifying values to a Chart just like the helm command line tool.
Shipper consumes Charts directly from a Chart repository like ChartMuseum, and installs objects into clusters itself. This has the nice property that regular Kubernetes authentication and RBAC controls can be used to manage access to Shipper APIs.
Istio
You can try something like this:
kubectl create -f <(istioctl kube-inject -f cowsay-v1.yaml) # deploy v1
kubectl create -f <(istioctl kube-inject -f cowsay-v2.yaml) # deploy v2
Flagger.
There is guide written by Flagger team: Blue/Green Deployments - Flagger
This guide shows you how to automate Blue/Green deployments with Flagger and Kubernetes
You might try Helm itself
Also, as Kamol Hasan recommended, you can try that chart: puneetsaraswat/HelmCharts/blue-green.
blue.yml sample
{{ if .Values.blue.enabled }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "blue-green.fullname" . }}-blue
  labels:
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "blue-green.name" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "blue-green.name" . }}
        release: {{ .Release.Name }}
        slot: blue
    spec:
      containers:
        - name: {{ template "blue-green.name" . }}-blue
          image: nginx:stable
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          # This (and the volumes section below) mount the config map as a volume.
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: wwwdata-volume
      volumes:
        - name: wwwdata-volume
          configMap:
            name: {{ template "blue-green.fullname" . }}
{{ end }}
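The blue/green switch in this pattern is done by pointing a Service at one slot or the other via its selector; a minimal sketch of such a Service template, assuming a productionSlot value (a hypothetical name) that is flipped between blue and green at upgrade time:

apiVersion: v1
kind: Service
metadata:
  name: {{ template "blue-green.fullname" . }}
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
  selector:
    app: {{ template "blue-green.name" . }}
    # Traffic follows whichever slot is marked live, e.g.
    # helm upgrade the-release ./blue-green --set productionSlot=green
    slot: {{ .Values.productionSlot }}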
Medium blog post: Blue/Green Deployments using Helm Charts