Install single Kubernetes deployment multiple times - kubernetes

I have a Helm chart that installs different kubernetes resources to deploy my application.
One of those resources is a deployment that has two flavors, one for the client part of the app and one for the server part, so in practice they are two deployments. Most of their manifests (YAML files) are exactly the same; the only important difference is that each one refers to a different ConfigMap in order to get specific values for some of its properties (particularly the type, client or server, and the number of replicas). This doesn't seem very efficient, since I'm duplicating code for the deployments, but it's the way I found to do it.
On the other hand, for the configmaps I made use of Helm's template feature ({{ include }}) so I have a "main" configmap template which has all the common content, and two separate configmaps specifying the differences for each deployment and including the main template.
So far so good, even though there may be some unnecessary code duplication that I wouldn't know how to avoid.
The problem is that multiple variants of the above two deployments came into play. For example, I may want to deploy a client-type pod with property X having a certain value, and two server-type pods with property X having a different value. So following my approach, I would have to start creating more deployment yaml files to cover all possible combinations: type=client & X=Y, type=client & X=Z, type=server & X=Y, type=server & X=Z and so on. And the only purpose of this is to be able to specify how many replicas I want for each kind or combination.
Is there any way (using Helm or other Kubernetes related framework) to have a single deployment yaml file and be able to install it multiple times specifying only the properties that vary and the number of replicas for that variation?
For example:
I want:
3 replicas that have "type=client" and "X=1"
2 replicas that have "type=server" and "X=1"
4 replicas that have "type=client" and "X=2"
1 replica that has "type=server" and "X=3"
where type and X are properties (data) in some configmap.
Hope it's clear enough, otherwise please let me know, thanks.

In Helm there are a couple of ways to approach this. You need to bring the settings up to Helm's configuration layer (they would be in values.yaml or provided via a mechanism like helm install --set); you can't extract them out of the ConfigMap.
One approach is to have your Helm chart install only a single instance of the Deployment and the corresponding ConfigMap. Have a single templates/deployment.yaml file that includes lines like:
name: {{ .Release.Name }}-{{ .Chart.Name }}-{{ .Values.type }}-{{ .Values.X }}
replicas: {{ .Values.replicas }}
env:
  - name: TYPE
    value: {{ .Values.type }}
  - name: X
    value: {{ quote .Values.X }}
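For the chart to render sensibly without --set, you'd also want defaults for those keys in values.yaml; a minimal sketch (the key names and values are only illustrative):
# values.yaml defaults
type: client
X: "1"
replicas: 1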
Then you can deploy multiple copies of it:
helm install c1 . --set type=client --set X=1 --set replicas=3
helm install s1 . --set type=server --set X=1 --set replicas=2
You mention that you're generating similar ConfigMaps using templates already, and you can also use that same approach for any YAML structure. A template takes a single parameter, and one trick that's possible is to pass a list as that parameter. The other important detail to remember is that the top-level names like .Values are actually field lookups in a special object ., which can get reassigned in several contexts, so you may need to explicitly pass around and reference the top-level object.
Say your template needs the top-level values, and also some extra configuration settings:
{{- define "a.deployment" -}}
{{- $top := index . 0 -}}
{{- $config := index . 1 -}}
metadata:
  name: {{ include "chart.name" $top }}-{{ $config.type }}-{{ $config.X }}
{{ end -}}
Note that we unpack the two values from the single list parameter, then pass $top in places where we might expect to pass . as a parameter.
You can have a top-level file per variant of this. For example, templates/deployment-server-1.yaml might contain:
{{- $config := dict "type" "server" "X" "1" -}}
{{- include "a.deployment" (list . $config) -}}
Here . is the top-level object; we're embedding that and the config dictionary into a single list parameter to match what the template expects. You could use any templating constructs in the dict call if some of the values were specified in Helm configuration.
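For example, the per-variant file could mix literals with chart configuration (serverX here is a hypothetical key in values.yaml):
{{- $config := dict "type" "server" "X" (.Values.serverX | default "1") -}}
{{- include "a.deployment" (list . $config) -}}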
Finally, there's not actually a rule that a YAML file contains only a single object. If your Helm configuration just lists out the variants, you can loop through them and emit them all:
{{- /* range will reassign . so save its current value */ -}}
{{- $top := . -}}
{{- range .Values.installations -}}
{{- /* Now . is one item from the installations list */ -}}
{{- /* This is the YAML start-of-document marker: */ -}}
---
{{ include "a.deployment" (list $top .) -}}
{{- end -}}
You'd just list out all of the variants and settings in the Helm values.yaml (or, again, an externally provided helm install -f more-values.yaml file):
installations:
  - type: client
    X: 1
    replicas: 3
  - type: server
    X: 1
    replicas: 2
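For completeness, here is an expanded sketch of the a.deployment helper consuming those per-variant settings. It reuses the hypothetical chart.name helper from above and is deliberately incomplete (a real Deployment also needs a matching selector and pod labels):
{{- define "a.deployment" -}}
{{- $top := index . 0 -}}
{{- $config := index . 1 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.name" $top }}-{{ $config.type }}-{{ $config.X }}
spec:
  replicas: {{ $config.replicas }}
  template:
    spec:
      containers:
        - name: app
          env:
            - name: TYPE
              value: {{ $config.type }}
            - name: X
              value: {{ quote $config.X }}
{{ end -}}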

Related

Yaml dynamic variables

So, I'm just starting with YAML and k8s, and maybe this question comes from a lack of understanding of how YAML and Helm work together.
But I was wondering if I can declare a variable inside the values.yaml file that will be changed during the run of the scripts?
I was thinking about accumulating a value for each pod I start, which would be saved as an environment variable in each pod. I can manually create a different value for each pod, but I was wondering if there is an automatic way to do so?
Hope my question is clear :)
Helm allows for conditionals using its templating. For example, I can have this in my values.yaml
environment: preprod
And then this inside a yaml within my helm chart
{{ if eq .Values.environment "preprod" }}
## Do preprod stuff here
{{ end }}
{{ if eq .Values.environment "prod" }}
## Do prod stuff here
{{ end }}
This means if I ran helm install, then .Values.environment would resolve to "preprod" and the block within the {{ if eq .Values.environment "preprod" }}...{{ end }} would be printed in the yaml.
If I want to override that default, I can do so by adding the --set switch:
helm install --set environment=prod
Which would cause the .Values.environment variable to resolve to "prod" instead, and the block within {{ if eq .Values.environment "prod" }} ... {{ end }} would be output instead.
Helm templates are stateless, and the variables structure is immutable.
Can I declare a variable inside the values.yaml file that will be changed during the run of the scripts?
That's not possible, no.
If you have experience with functional programming, there are some related tricks you can use in the context of Helm templates. A template only takes one parameter, but you can make that parameter be a list, and then pass some state forward through a series of recursive template calls. If you're tempted to do this, consider writing a Kubernetes operator instead: even if you have to learn Go to do it, the language is much more mainstream and practical than the template language, and it's much easier to test.
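As a purely illustrative sketch of that trick (a hypothetical helper, not something Helm provides), a template that sums a list by passing an accumulator forward through recursive calls might look like:
{{- define "mychart.sum" -}}
{{- $items := index . 0 -}}
{{- $acc := index . 1 -}}
{{- if $items -}}
{{- include "mychart.sum" (list (rest $items) (add $acc (first $items))) -}}
{{- else -}}
{{- $acc -}}
{{- end -}}
{{- end -}}
Calling {{ include "mychart.sum" (list (list 1 2 3) 0) }} would render 6; it works, but it illustrates why an operator is usually the saner choice for stateful logic.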
This having been said:
... accumulating value for each pod I am starting ...
If all you're asking for is a set of Pods that are very similar except that they have sequential names, this is one of the things a StatefulSet provides.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: some-name
spec:
  replicas: 5
The Pods generated by this StatefulSet will be named some-name-0, some-name-1, and so on. Your application code can see these names via the hostname command and language-specific equivalents. That could meet your needs.
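If it's inconvenient to call hostname, one common alternative (the standard Kubernetes downward API, shown here as a sketch) is to expose the Pod name as an environment variable in the StatefulSet's pod template:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name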
If you need something more complex, you can also use a template range loop to generate a series of documents. Each document needs to begin with a --- YAML start-of-document marker. You need to be aware that range rebinds the . special template variable that appears in constructs like .Values, so I tend to save that away first.
{{- $top := . }}
{{- range $i := until 5 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "mychart.fullname" $top }}-{{ $i }}
spec: { ... }
{{- end }}
You should almost always use higher-level constructs like Deployments, StatefulSets, or Jobs instead of creating bare Pods. Trying to fit within their patterns will usually be a little easier than trying to manually create several very slightly different Pods.

Import parent template with subchart values

I have multiple subcharts with applications and a parent chart that will deploy them.
All subcharts have the same manifests for the underlying application. Therefore I decided to create a library and put general variables from subcharts in it.
Example from lib:
{{- define "app.connect.common.release.common_libs.servicetemplate" -}}
apiVersion: v1
kind: Service
metadata:
labels:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
name: {{ .Values.application.name }}-service
namespace: {{ .Values.global.environment.namespace }}
spec:
type: LoadBalancer
ports:
- name: https
port: 443
targetPort: 8080
- name: http
port: 80
targetPort: 8080
selector:
app: {{ .Values.application.name }}
status:
loadBalancer: {}
{{- end }}
I declared a dependency in Chart.yaml and executed helm dep up. Then in my subchart I import this template. But when I try to run --dry-run on the parent chart, I receive the following error:
Error: template: app.connect.common.release/charts/app.connect.common.release.chtmgr/templates/service.yaml:1:4: executing "app.connect.common.release/charts/app.connect.common.release.chtmgr/templates/service.yaml" at <include "app.connect.common.release.common_libs.servicetemplate" .>: error calling include: template: app.connect.common.release/charts/app.connect.common.release.chtmgr/charts/app.connect.common.release.common_libs/templates/_helpers.tpl:169:18: executing "app.connect.common.release.common_libs.servicetemplate" at <.Values.application.name>: nil pointer evaluating interface {}.name
My values.yaml in the subchart:
application:
  name: chtmgr-api
  image: cht-mgr-api
I get the same error with a named template.
Is it possible to put general values from a subchart in a parent template (for example _helper.tpl) and import it in the subchart?
If not, how do you implement this?
I've checked a lot of resources but still have no idea whether I'm going in the right direction.
The Helm template define action creates a "function". It implicitly takes a single "parameter", using the special variable name ., and .Values is actually a lookup in .. It does not "capture" .Values at the point where it is defined; it uses the Values property of the parameter that's passed to it.
This means the template will behave differently when it's called in different contexts. As the Helm documentation on Subcharts and Global Variables describes, when executing the subchart, the top-level . parameter will have its Values replaced by the subchart's key in the primary values.
There are three ways to work around this:
If you're using Helm 3, you can directly import a value from the subchart into the parent. (I'm not clear exactly which version of Helm this was added in, or whether the syntax works in a separate requirements.yaml file.) Declare the subchart dependency in your Chart.yaml as
dependencies:
  - name: subchart
    import-values:
      - child: application
        parent: application
and the template you show should work unmodified.
(There's a more involved path that involves the subchart explicitly exporting values to the parent. I'm not sure if this is that useful to you: the paths will still be different in the two charts, and in any case Helm values can't contain computed values.)
You already use .Values.global in your example; this will have the same value in the parent chart and all included charts.
#                vvvvvv inserted "global" here
name: {{ .Values.global.application.name }}-service
namespace: {{ .Values.global.environment.namespace }}
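That assumes the parent chart's values.yaml (or an externally supplied values file) defines the shared settings under global; a sketch, with an illustrative namespace:
global:
  environment:
    namespace: some-namespace
  application:
    name: chtmgr-api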
You can also use template logic to try to look in more places for the application dictionary. This will only work from the parent chart and the subchart proper and not any other sibling charts; I believe it will also only work if the configuration is embedded in the parent chart's values.yaml or an external helm install -f values file, but won't find content in the included chart's values.yaml.
{{/* Get the subchart configuration, or an empty dict (assuming we're in the parent) */}}
{{- $subchart := .Values.subchart | default dict -}}
{{/* Find the application values (assuming first we're in the subchart) */}}
{{- $application := .Values.application | default $subchart.application | default dict -}}
{{/* Then use it */}}
name: {{ $application.name }}-service
This logic sets the variable $application to .Values.application if it is set (you're in the subchart). If it's not set (you're in the parent chart), it in effect looks up .Values.subchart.application, and if that's not available either it uses an empty dictionary. Note that (again, using programming terminology) default is eagerly evaluated: even if you're not falling back to the default, Helm will still evaluate .Values.subchart, which is why we use the second variable $subchart to hold either .Values.subchart or an empty dict (when you're in the subchart). In both cases, looking up a nonexistent key in an empty dict isn't an error, but looking up a key on an unset value is.
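For that fallback to find anything from the parent chart, the parent's values.yaml (or a helm install -f file) needs the settings nested under the subchart's name; a sketch, assuming the dependency is literally named subchart:
subchart:
  application:
    name: chtmgr-api
    image: cht-mgr-api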
Found a solution. The problem was related to the chart/template names: all of them included '.' in their names, which causes problems. Don't include dots in your template/chart names. Example of how not to do it: "ibext.common.connect.etc".
My assumption
I'm quite a newbie in Helm, but my assumption is that when the template engine looks in the subchart it resolves variables along a path like .Values.subchartName, which in my case was .Values.ibext.common.connect.etc. So when it encountered .Values.ibext it couldn't understand what it was (but Helm doesn't show anything).
But this is only my assumption. If someone understands the behaviour of the template engine in this case, please reveal the secret.
Be aware that the *.tpl files must be in the templates folder. Double-check all folder names and the structure.

Can Helm conditionally install main chart based on parameter in values.yaml

I am not clear whether dependencies in Helm 3 are just for subcharts.
I have
license: false in values.yaml
And I need to install my chart only if license is set to true.
I went through
https://helm.sh/docs/topics/charts/#tags-and-condition-fields-in-dependencies
but I couldn't find a way to block the main chart installation.
That's correct: dependencies are used for sub-charts of your main chart. If you need to deploy your main chart only under certain conditions, I would suggest following the same approach as the default chart template. For example, you will find a file called serviceaccount.yaml which has the following condition:
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
...
{{- end -}}
This means the whole block will not be rendered unless it meets the specified condition. In your case, you need to set a condition in all of the chart's templates, regardless of the kind:
{{- if .Values.license -}}
...
{{- end -}}
This answer helped me:
How to fail a helm release based on inputs in values.yaml
I just have to add this condition to one template file and the Helm chart won't be rendered.
The problem with the {{- if .Values.license -}} condition is that the Helm release would still be provisioned, with the K8s resources being empty. Also, it would have to be added to all template files.
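For reference, a minimal sketch of that fail-based approach, using Helm's fail template function (the message text is only illustrative); it can live in any single template file:
{{- if not .Values.license }}
{{- fail "This chart requires license=true to be installed" }}
{{- end }}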

How to use Kubeseal to seal a helm-templated secret?

Imagine a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
  secret.yaml: |
    {{ if eq .Values.env "prod" }}
    foo: bar-prod
    foo2: bar2_prod
    {{ else if eq .Values.env "dev" }}
    foo: bar-dev
    {{ end }}
Is it possible to seal this using Kubeseal?
Upon doing it now, I get invalid map key: map[interface {}]interface {}{"include \"test-cicd.fullname\" .":interface {}(nil)} which is probably because it is not a "valid" yaml file.
One thing that I tried was:
1. Removing the helm templating lines
2. Generating the sealedsecret
3. Templating the sealedsecret using helm
But by doing this, the sealedsecret could not be decrypted by the cluster-side operator at deployment time.
mkmik gave an answer to my question on GitHub, so I'm quoting it here as well just for the record.
So, you're composing a secret value with client-side templating.
Parts of your secret.yaml file are secret, yet parts must be templating directives (the if) and hence cannot be encrypted.
You have two options:
you encrypt your secrets somehow using some client-side vault software, possibly with helm integration (e.g. https://github.com/futuresimple/helm-secrets). That requires every user (and CI environment) that applies that helm chart, to be able to decrypt the secrets.
you re-factor your secrets so that secrets are "atomic", and use sealed-secrets to benefit from its "one-way encryption" approach, which allows your devops users (and CI automation) to apply the helm charts without ever seeing the secret values themselves.
The rest of this answer assumes you picked option (2)
Now, since you decided to use Helm, you have to deal with the fact that Helm templates are not JSON/YAML files; they are Go templates, and hence they cannot be manipulated by tools designed to manipulate structured data formats.
Luckily, kubeseal has a --raw option that allows you to encrypt individual secret values and put them manually in whatever file format you're using to describe your k8s resources.
So, assuming you want to create a Helm template for a SealedSecret resource, which takes the name and label values as parameters, and also chooses which secrets to include based on a boolean prod/dev parameter, this example might work for you:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  annotations:
    # this is because the name is a deployment time parameter
    # consider also using "cluster-wide" if the namespace is also a parameter
    # please make sure you understand the implications, see README
    sealedsecrets.bitnami.com/namespace-wide: "true"
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test-cicd.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/managed-by: {{ .Release.Service }}
  encryptedData:
    {{ if eq .Values.env "prod" }}
    foo: AgASNmKx2+QYbbhSxBE0KTa91sDBeNSaicvgBPW8Y/q/f806c7lKfF0mnxzEirjBsvF67C/Yp0fwSokIpKyy3gXtatg8rhf8uiQAA3VjJGkl5VYLcad0t6hKQyIfHsD7wrocm36uz9hpH30DRPWtL5qy4Z+zbzHj8AvEV+xTpBHCSyJPF2hyvHXTr6iQ6KJrAKy04MDwjyQzllN5OQJT2w4zhVgTxXSg/c7m50U/znbcJ1x5vWLXLSeiDRrsJEJeNoPQM8OHmosf5afSOTDWQ4IhG3srSBfDExSFGBIC41OT2CUUmCCtrc9o61LJruqshZ3PkiS7PqejytgwLpw/GEnj2oa/uNSStiP9oa9mCY6IUMujwjF9rKLIT456DlrnsS0bYXO2NmYwSfFX+KDbEhCIVFMbMupMSZp9Ol2DTim5SLIgIza/fj0CXaO3jGiltSQ0aM8gLSMK9n3c1V+X5hKmzMI3/Xd01QmhMmwqKp+oy21iidLJjtz67EiWyfIg1l7hiD5IIVlM9Gvg3k67zij5mOcXPkFnMmUQhQWxVKgAf4z8qEgprt03C+q+Wwwt25UDhQicpwoGtVQzU5ChJi09ja5LeW4RrvDf2B5KRp9HXoj1eu93MMl1Kcnx+X7uVT5OqQz28c4wOLT4FDItFzh8zREGZbiG/B3o1vI8MmwvxXj++pQ7SfBxoz9Xe8gmQ7BuXno=
    foo2: AgAkaTBYcESwogPiauZ15YbNldmk4a9esyYuR2GDt7hNcv+ycPLHmnsJcYs0hBtqucmrO3HbgCy/hQ6dMRCY12RA7w7XsFqNjZy3kavnhqwM6YkHntK2INwercRNQpO6B9bH6MxQTXcxfJbPqaPt30iTnTAhtpN47lueoyIoka4WWzwG/3PAikXhIlkTaq0hrclRJHRqg4z8Kmcaf5A/BRL2xX8syHbjA7MK9/OoK+zytv+LGrbLLHUtuhNNNQ2PG9u05rP6+59wRduQojEDtB9FTCa+daS+04/F4H1vi6XUNnjkK+Xna1T2Eavyuq2GieKj/7ig96et/4HoTAz44zwVhh8/pk0IFC8srcH3p+rFtZZmjvbURrFahEjFZbav3BDMBNhrU8SI3MDN0Abiyvz4vJJfSxIYcyLD1EQ507q7ZXrqYN/v1EiYgYUACi0JGxSWHB9TlCkZOAdCl+hroXEhBN2u5utLJ12njBQJ8ACNQDOYf+CmtV0y7foCZ6Aaap0pV7a8twyqK8c17kImzfi102Zel8ALfLAzdAXBV9c1+1pH76turnTCE33aSMQlaVF3VTmFQWqB8uIO/FQhZDPo8u/ki3L8J31nepup4/WE7i59IT0/9qGh2LKql4oAv6v4D7qtKziN6DvG7bsJlj14Dln0roiTfTWEEnBqdDER+GKZJlKayOWsPQdN0Wp+2KVfwLM=
    {{ else if eq .Values.env "dev" }}
    foo: AgAkaTBYcESwogPi..........
    {{ end }}
An alternative approach would be to have two templates, one for prod and one for dev and use Helm templating logic to pick the right file depending on which environment you're deploying to.
Anyway, each of those base64 blobs can be produced with:
$ kubeseal --raw --scope namespace-wide --from-file=yoursecret.txt
Pro-tip, you can pipe the secret if it's not in a file:
$ echo -n yoursecret | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin
Then you have to paste the output of that command into your Helm Go template.
My approach
1. Use different .values.yml files for different environments
2. Create .secrets.yml files to store secret values (include in .gitignore)
3. Make a git pre-commit hook that uses kubeseal --raw to encrypt the individual secrets and then write them to the values file (see the sketch below)
4. Store the values file in git.
I wrote a gist on this: https://gist.github.com/foogunlana/b75175b4ff62bc07258ea78274c698cd
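For illustration only, a rough sketch of such a hook, assuming a flat .secrets.yml of key: value pairs and reusing the kubeseal --raw invocation shown earlier (real parsing and scope flags would need more care):
#!/bin/sh
# Hypothetical pre-commit hook: seal each entry of .secrets.yml into sealed-values.yaml
set -e
echo "encryptedData:" > sealed-values.yaml
while IFS=': ' read -r key value; do
  [ -n "$key" ] || continue
  sealed=$(printf '%s' "$value" | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin)
  echo "  $key: $sealed" >> sealed-values.yaml
done < .secrets.yml
git add sealed-values.yaml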
I would not put credentials from different environments into a single secret, as it can be deployed into different clusters with different sealed-secrets controllers.
Why don't you just use separate secret files for each environment?
To seal a secret I use the following command:
kubeseal --name=name-of-the-config --controller-namespace=fluxcd \
--controller-name=sealed-secrets --format yaml \
< secret.yaml > sealedsecret.yaml
You can detect the controller-name and controller-namespace of the helm release by:
kubectl get HelmRelease -A -o jsonpath="{.items[?(@.spec.chart.name=='sealed-secrets')]}"

Can I add arbitrary config to a pod spec deployed with a helm chart without modifying the helm chart?

I'm using this Helm chart to deploy: https://github.com/helm/charts/tree/master/stable/atlantis
It deploys this stateful set: https://github.com/helm/charts/blob/master/stable/atlantis/templates/statefulset.yaml
Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart without having to modify the chart? For example I want to add an env: var that gets its value from a secret to the pod spec of the stateful set this chart deploys
Can I create my own helm chart that references this helm chart and add to the config of the pod spec? again without modifying the original chart?
EDIT: what I'm talking about is adding an env var like this:
env:
  - name: GET_THIS_VAR_IN_ATLANTIS
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: abc
Maybe I can create another chart as a parent of this chart and override the entire env: block?
Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart without having to modify the chart?
You can only make changes that the chart itself supports.
If you look at the StatefulSet definition you linked to, there are a lot of {{ if .Values.foo }} knobs there. This is a fairly customizable chart and you probably can change most things. As a chart author, you'd have to explicitly write all of these conditionals and macro expansions in.
For example I want to add an env: var that gets its value from a secret to the pod spec of the stateful set this chart deploys
This very specific chart contains a block
{{- range $key, $value := .Values.environment }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
so you could write a custom Helm YAML values file and add in
environment:
  arbitraryKey: "any fixed value you want"
and then use the helm install -f option to supply that option when you install the chart.
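For example (the release name here is arbitrary):
helm install my-atlantis stable/atlantis -f my-values.yaml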
This chart does not support injecting environment values from secrets, beyond a half-dozen specific values it supports by default (e.g., GitHub tokens).
As I say, this isn't generic at all: this is very specific to what this specific chart supports in its template expansions.
I should have marked the previous answer as the answer, but things have changed in Helm 3.
While there is still no built-in way of patching a chart, there is now built-in support for a "post renderer": https://helm.sh/docs/topics/advanced/
So calling kustomize as a post renderer would probably be what most would suggest now with Helm 3.
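A rough sketch of that approach for the env var from the question (the StatefulSet and container names below are assumptions; check what the chart actually renders with helm template first):
kustomize-post-renderer.sh (must be executable; Helm pipes the fully rendered manifests to stdin and reads the patched result from stdout):
#!/bin/sh
# Save Helm's rendered output, then let kustomize apply the patches
cat > all.yaml
kustomize build .
kustomization.yaml:
resources:
  - all.yaml
patches:
  - path: add-env.yaml
add-env.yaml, a strategic merge patch that adds the secret-backed env var (metadata.name and the container name must match the chart's rendered StatefulSet):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: atlantis
spec:
  template:
    spec:
      containers:
        - name: atlantis
          env:
            - name: GET_THIS_VAR_IN_ATLANTIS
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: abc
Then install with:
helm install atlantis stable/atlantis --post-renderer ./kustomize-post-renderer.sh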