I have 10 microservices on Kubernetes with Helm 3 charts, and I noticed that all of them share the same standard structure: deployment, service, HPA, network policies, etc. The <helm_chart_name>/templates directory is 99% identical across all of them, with some if statements at the top of each file controlling whether we want to deploy that resource:
{{ if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
  ...
spec:
  scaleTargetRef:
    ...
{{ end }}
and in values we pass yes/no for whether we want each resource. Is there a tool to easily scaffold a template for such Helm charts, i.e. to create a Helm chart with these five manifests pre-populated with references to values as above?
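(For illustration, the values.yaml side of this pattern would look something like the following; the deployment name is just a placeholder.)
hpa:
  create: true
deployment:
  name: my-service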
What you need is a Library Chart:
A library chart is a type of Helm chart that defines chart primitives
or definitions which can be shared by Helm templates in other charts.
This allows users to share snippets of code that can be re-used across
charts, avoiding repetition and keeping charts DRY.
You can find more details and examples in the linked documentation.
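As a minimal sketch of how that could look for the HPA above (the chart name common and the template name common.hpa are hypothetical):
# common/Chart.yaml -- the shared library chart
apiVersion: v2
name: common
type: library
version: 0.1.0

# common/templates/_hpa.yaml -- the shared definition
{{- define "common.hpa" -}}
{{- if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.deployment.name }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}
{{- end }}
{{- end }}

# <helm_chart_name>/templates/hpa.yaml -- each microservice chart only includes it
{{ include "common.hpa" . }}
Each microservice chart also has to declare common under dependencies: in its Chart.yaml so the template name resolves.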
I think the closest thing to what I want is https://helm.sh/docs/topics/library_charts/
Related
We are deploying multiple microservices using Helm charts, and all of those microservices use one shared object, such as a ConfigMap.
The requirement is that each microservice is installed/upgraded with "helm upgrade --install", but the deployment should not try to create the shared ConfigMap for every microservice.
The shared ConfigMap has to be deployed only once, yet all the microservices have to use it. How can we achieve this with Helm, and what chart structure and concepts apply to this case?
Please be more specific in your question: are you deploying each of your microservices separately?
Let's suppose you have 3 microservices: A, B and C. If you want to use a shared ConfigMap for all of them, you have the following options:
Create a top-level chart that includes A, B and C as subcharts, as well as the ConfigMap definition. This way, when you deploy the top-level chart, the ConfigMap is created once (a minimal sketch follows below).
If you need to deploy each chart independently, create a separate chart that contains only the ConfigMap, deploy it first, and then deploy the A, B and C charts.
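For the first option, a minimal sketch of the top-level chart (chart names, versions and paths are placeholders):
# top-level-chart/Chart.yaml
apiVersion: v2
name: top-level-chart
version: 0.1.0
dependencies:
  - name: A
    version: 0.1.0
    repository: file://../A
  - name: B
    version: 0.1.0
    repository: file://../B
  - name: C
    version: 0.1.0
    repository: file://../C
The shared ConfigMap manifest lives in top-level-chart/templates/, so it is rendered exactly once per release, while A, B and C are pulled in as subcharts via helm dependency update.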
Update:
Here is an example of how you can perform a lookup to check whether a resource already exists, and create it only if it does not:
{{- $secretName := (tpl .Values.your.secret.secretName .) }}
{{- $secret := (lookup "v1" "Secret" .Release.Namespace $secretName) }}
{{- if not $secret }}
{{- $defUsername := (b64enc "test") }}
{{- $defPassword := (b64enc "test") }}
apiVersion: v1
kind: Secret
metadata:
  name: "{{ $secretName }}"
  namespace: "{{ .Release.Namespace }}"
type: Opaque
data:
  username: {{ $defUsername }}
  password: {{ $defPassword }}
{{- end }}
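One caveat with this approach: lookup only works against a live cluster, so during helm template or helm install --dry-run it returns an empty map and the Secret above will always be rendered.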
I created a new chart with 2 PodPresets and 2 Deployments. When I run helm install, the Deployment (Pod) objects are created first and then the PodPresets, hence the values from my PodPresets are not applied to the Pods. But when I manually create the PodPresets first and then the Deployments, the presets are applied properly. Is there a way I can specify in Helm which object should be created first?
Posting this as Community Wiki for better visibility, as the answer was provided in comments below another answer made by @Rastko.
PodPresets
A Pod Preset is an API resource for injecting additional runtime
requirements into a Pod at creation time. Using a Pod Preset allows
pod template authors to not have to explicitly provide all information
for every pod. This way, authors of pod templates consuming a specific
service do not need to know all the details about that service.
For more information, please check the official docs.
Order of deploying objects in Helm
The order of deploying objects is hardcoded in Helm. The list can be found here.
In addition, if a resource is not in the list, it will be created last.
Answer to the question from the comments
To achieve an order different from the default one, you can create two Helm charts, where the one with the Deployments is installed afterwards, with a pre-install hook making sure that the presets are already there.
The pre-install hook annotation executes after templates are rendered, but before any resources are created in Kubernetes.
This workaround was mentioned on Github thread. Example for service:
apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    "helm.sh/hook": "pre-install"
As additional information, it is possible to define a weight for a hook, which helps build a deterministic execution order; hooks are sorted by weight in ascending order before they run.
annotations:
  "helm.sh/hook-weight": "5"
For more details regarding this annotation, please check this Stack Overflow question.
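Keep in mind that resources created by hooks are not tracked as part of the release, so helm uninstall will not remove them unless you also set a "helm.sh/hook-delete-policy" annotation.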
Since you are using Helm charts and have full control of this part, why not make optional parts in your helm charts that you can activate with an external value?
This would be a much more "Helm-native" way:
{{- if eq .Values.prodSecret "enabled" }}
- name: prod_db_password
  valueFrom:
    secretKeyRef:
      name: prod_db_password
      key: password
{{- end }}
Then you just need to add --set prodSecret=enabled when executing your Helm chart.
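For example (the release and chart names are placeholders):
helm install my-release ./my-chart --set prodSecret=enabled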
I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (ServiceMonitor used by prometheus-operator).
I am looking for a way to "tell" helm that I want to create this resource only if such a CRD is defined in the cluster, OR to ignore errors caused only by the fact that such a CRD is missing.
Is that possible and how can I achieve that?
Helm's Capabilities object can tell you if an entire API class is installed in the cluster. I don't think it can test for a specific custom resource type.
In your .tpl files, you can wrap the entire file in a {{ if }}...{{ end }} block. Helm doesn't especially care if the rendered version of a file is empty.
That would lead you to a file like:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
{{ end -}}
That would get installed if the operator is installed in the cluster, and skipped if not.
If you are on Helm 3 you can put your CRD in the crds/ directory. Helm will treat it differently, see the docs here.
In Helm 2 there is another mechanism using the crd-install hook. You can add the following to your CRD:
annotations:
  "helm.sh/hook": crd-install
There are some limitations with this approach so if you are using Helm 3 that would be preferred.
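With Helm 3 the layout would be something like this (the CRD file name is a placeholder):
mychart/
  Chart.yaml
  crds/
    servicemonitor-crd.yaml    # plain manifest; files in crds/ are not templated
  templates/
    ...
Helm installs everything under crds/ before the rest of the chart, but it never upgrades or deletes CRDs afterwards.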
In Helm v3, you can test for specific resources:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
spec:
  ...
{{- end }}
https://helm.sh/docs/chart_template_guide/builtin_objects/
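One caveat: helm template does not connect to the cluster, so this check comes up empty there unless you supply the API versions yourself via the --api-versions flag (e.g. helm template --api-versions monitoring.coreos.com/v1/ServiceMonitor .).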
I'm using this helm chart to deploy: https://github.com/helm/charts/tree/master/stable/atlantis
It deploys this stateful set: https://github.com/helm/charts/blob/master/stable/atlantis/templates/statefulset.yaml
Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart, without having to modify the chart? For example, I want to add an env var that gets its value from a secret to the pod spec of the StatefulSet this chart deploys.
Can I create my own helm chart that references this helm chart and add to the config of the pod spec? again without modifying the original chart?
EDIT: What I'm talking about is adding an env var like this:
env:
  - name: GET_THIS_VAR_IN_ATLANTIS
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: abc
Maybe I can create another chart as a parent of this chart and override the entire env: block?
Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart without having to modify the chart?
You can only make changes that the chart itself supports.
If you look at the StatefulSet definition you linked to, there are a lot of {{ if .Values.foo }} knobs there. This is a fairly customizable chart and you can probably change most things. As a chart author, you'd have to explicitly write all of these conditionals and macro expansions in.
For example I want to add an env: var that gets its value from a secret to the pod spec of the stateful set this chart deploys
This very specific chart contains a block
{{- range $key, $value := .Values.environment }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
so you could write a custom Helm YAML values file and add in
environment:
arbitraryKey: "any fixed value you want"
and then use the helm install -f option to supply that file when you install the chart.
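For example (the release and file names are placeholders):
helm install my-atlantis stable/atlantis -f custom-values.yaml
where custom-values.yaml contains the environment: block above.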
This chart does not support injecting environment values from secrets, beyond a half-dozen specific values it supports by default (e.g., GitHub tokens).
As I say, this isn't generic at all: this is very specific to what this specific chart supports in its template expansions.
I should have marked the previous answer as the answer, but things have changed in Helm 3.
While there is still no built-in way of patching a chart, there is now built-in support for a "post renderer": https://helm.sh/docs/topics/advanced/
So calling kustomize as a post renderer is probably what most would suggest now with Helm 3.
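A rough sketch of that setup (the script name is hypothetical): a post renderer is any executable that receives the fully rendered manifests on stdin and returns the patched manifests on stdout, so a thin wrapper around kustomize could look like:
#!/bin/sh
# kustomize-wrapper.sh: save the rendered chart, then let kustomize patch it
cat > all.yaml
kustomize build .
rm all.yaml
where kustomization.yaml lists all.yaml as a resource together with a patch adding the env var, and the chart is installed with:
helm upgrade --install atlantis stable/atlantis --post-renderer ./kustomize-wrapper.sh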
I have 10 applications to deploy to Kubernetes. Each of the deployments depends on an init container that is basically identical except for a single parameter (and it doesn't make conceptual sense for me to decouple this init container from the application). So far I've been copy-pasting this init container into each deployment.yaml file, but I feel like there has to be a better way of doing this!
I haven't seen a great solution in my research; the only thing I can think of so far is to use something like Helm to package up the init container and deploy it in some dependency-based way (Argo?).
Has anyone else with this issue found a solution they were satisfied with?
A Helm template can contain an arbitrary amount of text, just so long as when all of the macros are expanded it produces a valid YAML Kubernetes manifest. ("Valid YAML" is trickier than it sounds because the indentation matters.)
The simplest way to do this would be to write a shared Helm template that included the definition for the init container:
_init_container.tpl:
{{- define "common.myinit" -}}
name: myinit
image: myname/myinit:{{ .Values.initTag }}
# Other things from a container spec
{{ end -}}
Then in your deployment, include this:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
        - {{ include "common.myinit" . | indent 10 | trim }}
Then you can copy the _init_container.tpl file into each of your individual services.
If you want to avoid the copy-and-paste (reasonable enough), you can create a Helm chart that contains only templates and no actual Kubernetes resources. You need to set up some sort of repository to hold this chart. Put the _init_container.tpl into that shared chart, declare it as a dependency in the chart metadata, and reference the template in your deployment YAML in the same way (Go template names are shared across all included charts), as sketched below.
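A minimal sketch of that dependency declaration (the chart name common and the repository URL are hypothetical):
# Chart.yaml of each application chart
apiVersion: v2
name: my-service
version: 0.1.0
dependencies:
  - name: common
    version: 0.1.0
    repository: https://example.com/charts
After helm dependency update, the include "common.myinit" call resolves against the shared chart's templates.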