How to set environment related values.yaml in Helm subcharts? - kubernetes

I am currently deploying my applications in a Kubernetes cluster using Helm. Now I also need to be able to modify some parameters in the values.yaml file for different environments.
For simple charts with only one level this is easy: keep separate values-local.yaml and values-prod.yaml files and pass the right one to the --values flag, e.g. helm install --values values-local.yaml.
But if I have a second layer of subcharts, which also need different values per environment, I cannot set a custom values.yaml for them.
Assuming the following structure:
chart/
  Chart.yaml
  values-local.yaml
  values-prod.yaml
  charts/
    foo-app/
      Chart.yaml
      values-local.yaml
      values-prod.yaml
      templates/
        deployments.yaml
        services.yaml
This will not work since Helm expects a values.yaml file in each subchart.
My workaround right now is to have an if-else construct in the subchart's values.yaml and switch on a global variable that is set in the parent's values file:
foo-app/values.yaml
{{- if .Values.global.env.local }}
foo-app:
  replicas: 1
{{- else if .Values.global.env.dev }}
foo-app:
  replicas: 2
{{- end }}
parent/values-local.yaml
global:
  env:
    local: true
parent/values-prod.yaml
global:
  env:
    prod: true
But I hope there is a better approach out there so that I do not need to rely on these custom flags.
I hope you can help me out on this.

Here is how I would do it (see the Helm documentation on overriding subchart values for reference):
In your child chart (foochart), define the number of replicas as a value:
foochart/values.yaml
...
replicas: 1
...
foochart/templates/deployment.yaml
...
spec:
  replicas: {{ .Values.replicas }}
...
Then, in your main chart's values files:
values-local.yaml
foochart:
  replicas: 1
values-prod.yaml
foochart:
  replicas: 2
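Then pick the environment at install time exactly as for a single-level chart, and Helm merges the foochart block into the subchart's values (the release name here is a placeholder; with Helm 2 you would omit it or use --name):
helm install my-release . --values values-prod.yaml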

Just an idea, needs to be fleshed out a bit more...
At KubeCon I saw a talk where they introduced a Kubernetes operator called Lostromos. The idea was to simplify deployments to support multiple different environments and make maintaining these kinds of things easier. It uses Custom Resource Definitions. I wonder if you can leverage Lostromos in this case. Your subcharts would have a single values.yaml, but you would use Lostromos and a CRD to feed in the properties you need. So you would deploy the custom resource instead, and that resource would trigger Lostromos to deploy your Helm chart.
Just something to get the ideas going but seemed like it might be worth exploring.

I'm currently getting my chart from stable/jenkins and am trying to set my values.yaml file. I have made the appropriate changes and try to run helm install -n --values= stable/jenkins, but it continues to install the default values instead of the modified YAML file I created. To be more specific, I commented out the plugin requirements in the YAML file since they have been causing my pod status to stay on 'Init:0/1' in Kubernetes.

Related

Deploying multiple versions of a single application with Helm in the same namespace

I have a situation where I have an application for which I would like to run several sets of instances configured differently. From reading online, I gather people usually handle this by having several versions of the same application in their clusters.
But let me describe the use case at a high level. The application is a component that takes as configuration a dataset and a set of instructions stating how to process that dataset. The dataset is actually a datasource.
So in the same namespace, we would like to, for instance, process 2 datasets.
So it is like having two deployments for the same application. Each dataset has different requirements, hence we should be able to have deployment 1 scale to 10 instances and deployment 2 scale to 5 instances.
The thing is, it is the same application, and so far it is the same Helm chart and deployment definition.
The question is: what options exist to handle this at this time?
Examples, pointers and articles are welcome.
So far I found the following article the most promising:
https://itnext.io/support-multiple-versions-of-a-service-in-kubernetes-using-helm-ce26adcb516d
Another thing I thought about is duplicating the deployment chart into 2 subcharts whose folder names differ.
Helm supports this pretty straightforwardly.
In Helm terminology, you would write a chart that describes how to install one copy of your application. This creates Kubernetes Deployments and other manifests; but it has templating that allows parts of the application to be filled in at deploy time. One copy of the installation is a release, but you can have multiple releases, in the same or different Kubernetes namespaces.
For example, say you have a YAML template for a Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-processor
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - env:
            - name: DATASET_NAME
              value: {{ .Values.dataset }}
          # and the other things that usually go into a container spec
When you go to deploy this, you can create a values file:
# a.yaml
replicas: 10
dataset: dataset-1
And you can deploy it:
helm install \
one \ # release name
. \ # chart location
-f a.yaml # additional values to use
If you use kubectl get deployment, you will see one-processor, and if you look at it in detail, you will see it has 10 replicas and its environment variable is set to dataset-1.
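For instance, the listing would look roughly like this (the counts and age are of course illustrative):
$ kubectl get deployment
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
one-processor   10/10   10           10          2m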
You can create a second deployment with different settings in the same namespace:
# b.yaml
replicas: 5
dataset: dataset-2
helm install two . -f b.yaml
Or in a different namespace:
helm install three . -n other-namespace -f c.yaml
It's theoretically possible to have a chart that only installs other subcharts (an umbrella chart), but there are some practical issues with it, most notably that Helm will want to install only one copy of a given chart no matter where it appears in the chart hierarchy. There are other higher-level tools like Helmsman and Helmfile that would allow you to basically describe these multiple helm install commands in a single file.
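As a rough sketch (assuming the chart lives in the current directory and reusing the values files above), a helmfile.yaml describing the three installs could look like:
releases:
  - name: one
    chart: .
    values:
      - a.yaml
  - name: two
    chart: .
    values:
      - b.yaml
  - name: three
    namespace: other-namespace
    chart: .
    values:
      - c.yaml
Running helmfile apply then performs all three installs/upgrades in one command.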
You can "cascade" the values YAML files to achieve what you want. For example, you could define common.yaml to be all the common settings for your application. Then, each separate instance would be a second YAML file.
Here is an example. Let's say that the file common.yaml looks like this:
namespace: myapp-dev
pod-count: 1
use_ssl: true
image-name: debian:buster-slim
... more ...
Let's say you want two Deployments, one that scales to 5 replicas and one that scales to 10. You would create two more files:
# local5.yaml
pod-count: 5
and
# local10.yaml
pod-count: 10
Note that you do not have to repeat the settings in common.yaml. To deploy the five-replica version you do something like this:
$ helm install -f common.yaml -f local5.yaml five .
To deploy the 10-replica version:
$ helm install -f common.yaml -f local10.yaml ten .
The YAML files cascade with the later file overriding the earlier.
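So for the five-replica release, the effective values Helm sees are the contents of common.yaml with only pod-count replaced, roughly:
namespace: myapp-dev
pod-count: 5
use_ssl: true
image-name: debian:buster-slim
... more ...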

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services. These have the same spec but different names. Is it possible to store the spec in individual files and refer to them across all the deployment files?
Helm is a good fit for the solution.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize.
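For example, a per-variant kustomization.yaml can look roughly like this (paths and label values are made up):
# overlays/dataset-a/kustomization.yaml
resources:
  - ../../base
namePrefix: dataset-a-
commonLabels:
  variant: dataset-a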
Use Helm. In ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with Helm. Use values.yaml for the dynamic values you want to use. Also, you will need to add a kubeconfig file to your pipeline, but that is another issue.
This is the best suggestion I can give; for further information I would need to inspect ArgoCD.
I was faced with this problem and I resolved it using Helm 3 charts:
A Chart.yaml file, where I indicate my release name and version.
A values.yaml, where I define all the variables to use for a specific environment.
A values-test.yaml, a file to use, for example, in a test environment, where you should only put the variables that must change from one environment to another.
I hope that can help you to resolve your issue.
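At install time the environment-specific file is then layered over the defaults, e.g. (release and chart names are placeholders):
helm install my-release ./my-chart -f values-test.yaml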
I would also suggest using Helm. However a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699) - this can be very annoying, especially for multi-environment setups. However, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following is an example values.yaml together with the template that generates your Vault application:
values.yaml
server: 1.2.3.4 # Target kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
        ...
These values will then override any base configuration residing in the values.yaml specified by {{ .Values.vault.repoURL }} which can contain config which doesn't change for each environment.

Helm best practices

I am new to Helm and liked the idea of using Helm to create versions for the deployments and package them as artifacts in JFrog Artifactory, but one thing I am unclear about is the ease of creating charts.
I am comfortable with Kubernetes manifests, and creating them is very simple since you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML fields as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now for creating a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
Curious whether Helm has shortcuts like the one kubectl provides, to create charts easily by keying in the required values through the command line while generating the chart?
Also, is there a migration utility available that supports converting deployment manifests to Helm charts?
helm create does what you are looking for. It creates a directory with all the basic stuff so that you don't need to manually create each file/directory. However, it can't create the content of a Chart it has no clue about.
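For instance, helm create mychart scaffolds roughly this layout (the exact set of files varies by Helm version):
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl
    NOTES.txt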
But there is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you can replace whatever you want to make "dynamic" with the placeholders used by Helm. That's it.
So, in other words, just keep exporting as you are (though I strongly suggest you stop doing this and create proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
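The matching values.yaml then only needs the handful of values the template references, e.g. (the name and port are just examples):
name: my-service
port: 8080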

helm - programmatically override subchart values.yaml

I'm writing a helm chart that uses the stable/redis chart as a subchart.
I need to override the storage class name used for both microservices within my chart, and within the redis chart.
I'm using helm 2.12.3
I would like to be able to specify redis.master.persistence.storageClass in terms of a template, like so
storage:
  storageClasses:
    name: azurefile
redis:
  usePassword: false
  master:
    persistence:
      storageClass: {{ $.Values.storage.storageClasses.name }}
Except, as I understand, templates aren't supported within values.yaml
As this is a public chart, I'm not able to modify it to depend on a global value as described here in the documentation
I considered using {{ $.Values.redis.master.persistence.storageClass }} elsewhere in my chart rather than {{ $.Values.storage.storageClasses.name }}, but this would:
Not hide the complexity of the dependencies of my chart
Not scale if I was to add yet another subchart dependency
In my values.yaml file I have:
storage:
storageClasses:
name: azurefile
redis:
master:
persistence:
storageClass: azurefile
I would like to specify a single value in values.yaml that can be overwritten at chart deploy time.
e.g. like this
helm install --set storage.storageClasses.name=foo mychart
rather than
helm install --set storage.storageClasses.name=foo --set redis.master.persistence.storageClass=foo mychart
As you correctly mentioned, Helm values files are plain YAML files which cannot contain any templates. For your use case, you'd need to use a templating system for your values files as well, which basically means you are generating your values files on the fly. I'd suggest taking a look at helmfile. It lets you share values files across multiple charts and application environments.
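A rough sketch of what that could look like with helmfile (paths and the storageClassName key are assumptions):
# helmfile.yaml
environments:
  default:
    values:
      - storageClassName: azurefile
releases:
  - name: mychart
    chart: ./mychart
    values:
      - values.yaml.gotmpl
# values.yaml.gotmpl
storage:
  storageClasses:
    name: {{ .Values.storageClassName }}
redis:
  master:
    persistence:
      storageClass: {{ .Values.storageClassName }}
Because helmfile renders the .gotmpl values file before installing the chart, the single storageClassName value flows into both places, and it can still be overridden per environment or on the command line via helmfile's state values flags.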

Best way to DRY up deployments that all depend on a very similar init-container

I have 10 applications to deploy to Kubernetes. Each of the deployments depends on an init container that is basically identical except for a single parameter (and it doesn't make conceptual sense for me to decouple this init container from the application). So far I've been copy-pasting this init container into each deployment.yaml file, but I feel like there's got to be a better way of doing this!
I haven't seen a great solution from my research, though the only thing I can think of so far is to use something like Helm to package up the init container and deploy it in some dependency-based way (Argo?).
Has anyone else with this issue found a solution they were satisfied with?
A Helm template can contain an arbitrary amount of text, just so long as when all of the macros are expanded it produces a valid YAML Kubernetes manifest. ("Valid YAML" is trickier than it sounds because the indentation matters.)
The simplest way to do this would be to write a shared Helm template that included the definition for the init container:
_init_container.tpl:
{{- define "common.myinit" -}}
name: myinit
image: myname/myinit:{{ .Values.initTag }}
# Other things from a container spec
{{ end -}}
Then in your deployment, include this:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
        - {{ include "common.myinit" . | indent 10 | trim }}
Then you can copy the _init_container.tpl file into each of your individual services.
If you want to avoid the copy-and-paste (reasonable enough), you can create a Helm chart that contains only templates and no actual Kubernetes resources. You need to set up some sort of repository to hold this chart. Put the _init_container.tpl into that shared chart, declare it as a dependency in the chart metadata, and reference the template in your deployment YAML in the same way (Go template names are shared across all included charts).
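For Helm 3, that dependency declaration in each service's Chart.yaml looks roughly like this (chart name, version and repository URL are placeholders):
apiVersion: v2
name: my-service
version: 0.1.0
dependencies:
  - name: common
    version: 0.1.0
    repository: https://charts.example.com
After helm dependency update pulls the shared chart in, the include "common.myinit" call resolves exactly as before.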