kube helm charts - multiple values files - kubernetes

It might be a simple question, but I can't find anywhere whether it's doable: is it possible to use two different values files for a Helm chart (stable/jenkins, let's say)?
I would like values_a.yaml to have some values like these:
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  ...
  password: {{ .Values.secrets.masterPassword }}
and in values_b.yaml, which will be encrypted with AWS KMS:
secrets:
  masterPassword: xxx
The above doesn't work, and I wanted to know: since you can use those variables in Kubernetes manifests like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.config.name }}
  namespace: {{ .Values.config.namespace }}
...
can they somehow be passed to other values files as well?
EDIT:
If it were possible, I would just put

master:
  password: xxx

in values_b.yaml, but keys cannot be duplicated across files, and the official Helm chart expects the master.password value from that file - so I have to somehow pass it there, but in an encrypted way.

I'm not quite sure, but this feature of Helm might help you.
Helm gives you the ability to pass a custom values file whose fields take precedence over the fields of the chart's main values.yaml when performing helm install or helm upgrade.
For Helm 3
$ helm install <name> ./mychart -f myValues.yaml
For Helm 2
$ helm install --name <name> ./mychart --values myValues.yaml

The valid answer is from David Maze in the comments on Kamol Hasan's answer:
You can use multiple -f or --values options: helm install ... -f values_a.yaml -f values_b.yaml. But you can't use templating in any Helm values file unless the chart specifically supports it (using the tpl function).
If you use multiple -f options, later values files override earlier ones.
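A sketch of how that could look for the question's files (assuming values_b.yaml is decrypted, e.g. from KMS, before being handed to Helm):

# values_a.yaml - plain, unencrypted settings
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"

# values_b.yaml - stored encrypted, decrypted just before install
master:
  password: xxx

helm install jenkins stable/jenkins -f values_a.yaml -f values_b.yaml

Helm deep-merges the maps from both files, so the master.password key from values_b.yaml simply joins the other master.* keys; nothing needs to be duplicated or templated.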

Related

helm dependencies with different namespaces

Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a super Helm chart in which I plan to add the Helm charts (of my tools, as mentioned above) and install them in one shot. My problem is that, since these tools live in different namespaces, I am not sure where to specify the namespace key indicating where a particular dependency (chart) should be installed. For example, if below is the Chart.yaml of my super Helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions, but I believe conditions only take a boolean as a value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says we can do it in Helm 3, but it is mostly about how to keep releases separated between namespaces. It does not specifically answer my question.
There is no builtin way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1 # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => runs helm install/upgrade on all releases, or
helmfile apply => same as sync, but does a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
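For instance, each release can also carry its own values files; a minimal sketch (the path here is hypothetical):

releases:
  - name: chart1
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
    values:
      - envs/dev/chart1-values.yaml  # hypothetical per-release values file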
PS: if you struggle with values or want to have something similar to how umbrella Chart values are handled, have a look at helmfile: a simple trick to handle values intuitively
The way I solved this for my clusters was with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of its ease-of-bootstrapping capabilities.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy, each managed via Argo's Application CRD. You will notice the CRD has a field, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired values file(s) in each child chart's directory specified:
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and refer to them across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and a migration to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize, as sketched below.
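A minimal kustomization.yaml sketch of that idea (file names and values are hypothetical):

# kustomization.yaml
resources:
  - base/deployment.yaml   # the shared spec, written once
  - base/service.yaml
namePrefix: team-a-        # prepended to every resource name
commonLabels:
  variant: team-a          # stamped onto every resource and its selectors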
Use Helm: in ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with Helm, using values.yaml for the dynamic values you want. You will also need to add a kubeconfig file to your pipeline, but that is another issue.
This is the best offer I can give; for further information I would need to inspect ArgoCD.
I was faced with this problem and I resolved it using Helm 3 charts:
a Chart.yaml file where I indicate my release name and version;
a values.yaml where I define all the variables to use for a specific environment;
a values-test.yaml to use, for example, in a test environment, where you only put the variables that must change from one environment to another.
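In practice that layering is just (a sketch; release and chart names are placeholders):

helm install myrelease ./mychart -f ./mychart/values-test.yaml

The chart's own values.yaml is applied by default, and values-test.yaml overrides only the environment-specific keys.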
I hope that helps you resolve your issue.
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699) - this can be very annoying, especially for multi-environment setups. ArgoCD, though, provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following is an example values.yaml together with the template that generates your Vault application, i.e.:
values.yaml
server: 1.2.3.4 # Target kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
  ...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
                ...
These values will then override any base configuration residing in the values.yaml of the repo specified by {{ .Values.vault.repoURL }}, which can hold the config that doesn't change per environment.

helm deploy with no objects

I'm building a very simple chart with Helm.
It consists of just one object ("/templates/pod.yaml"), which has to be deployed only if a parameter in the Values.yaml file is true.
To give an example of my case, this is what I have:
/templates/pod.yaml
{{- if eq .Values.shoudBeDeployed true }}
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
{{- end}}
Values.yaml
shoudBeDeployed: true
So when I use shoudBeDeployed with a true value, helm installs it correctly.
My problem is that when shoudBeDeployed is false, helm doesn't deploy anything (as I expected), but it shows the following message:
Error: release CHART_NAME failed: no objects visited
And if I execute helm ls I get that CHART_NAME is deployed with STATUS FAILED.
My question is whether there is a way to avoid having it as a failed Helm deploy, so that I don't see it when using the command helm ls.
I know that I could move the logic of the shoudBeDeployed variable outside the chart, and then deploy the chart or not depending on its value, but I would like to know if there is a solution using just Helm.
@pcampana I think there is no way to stop a Helm deployment if there is nothing to deploy. But here is a trick you can use to delete a Helm release if it is FAILED:

helm install --name temp demo --atomic

where demo is the Helm chart directory and temp is the release name. The release name is mandatory for this to work.
One scenario is when you see the error

Error: release temp failed: no objects visited

you can use the above command to deploy the Helm chart.
I think this might be useful for you.
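Another possible workaround (a sketch, not from the chart above): render one tiny placeholder object unconditionally, so the release never contains zero manifests:

# templates/placeholder.yaml - always rendered, so "no objects visited" cannot occur
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-placeholder
data:
  shoudBeDeployed: {{ .Values.shoudBeDeployed | quote }}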

How to reference a custom value file for sub-charts in helm?

I have been implementing a Helm sub-chart by following the Helm subchart documentation, and it worked for me as described. This works fine with the default values files. But when I try to reference my own values file, the values are not there in the ConfigMap.
My value file is values.staging.yaml.
e.g.:
config.yaml in mysubchart
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  salad: {{ .Values.dessert }}
values.staging.yaml in mysubchart
dessert: banana
values.yaml in mysubchart
dessert: cake
Only 'cake' is referenced as the value; I need 'banana' to be referenced instead.
I have tried the following commands:
helm install --dry-run --debug mychart --values mychart/charts/mysubchart/values.staging.yaml
helm install --dry-run --debug --name mychart mychart -f mychart/charts/mysubchart/values.staging.yaml
helm install --name mychart mychart -f mychart/charts/mysubchart/values.staging.yaml
In each case the ConfigMap does not pick up the value from values.staging.yaml.
Is there a way to do this?
Thank you!
As described in Overriding Values of a Child Chart in your link, you need to wrap the subchart values in a key matching the name of the subchart.
Any values file you pass with helm install -f is always interpreted at the top level, even if it's physically located in a subchart's directory. A typical values file could look like
mysubchart:
  dessert: banana
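With the file's contents wrapped that way, the install command from the question works unchanged, even with the file still in the subchart's directory:

helm install --name mychart mychart -f mychart/charts/mysubchart/values.staging.yaml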

Setting nested data structures from helm command line?

I'm installing the prometheus-redis-exporter Helm chart. Its Deployment object has a way to inject annotations:
# deployment.yaml
...
  template:
    metadata:
      annotations:
{{ toYaml .Values.annotations | indent 8 }}
Typically, if I were providing a values file, I could just do this:
# values.yaml
annotations:
  foo: bar
  bash: baz
And then install the chart with:
helm install --values values.yaml
However, in some cases it is simpler for me to specify these values on the command line with --set instead; I'm just not sure how to specify a nested structure like that.
How can I set the above annotations object when installing a Helm chart on the command line:
helm install --set <what_goes_here>
The Helm documentation has a section, The Format and Limitations of --set, which contains what you are looking for.
--set outer.inner=value results in:
outer:
  inner: value
Therefore your whole helm command looks like this:
helm install --set annotations.foo=bar,annotations.bash=baz stable/prometheus-redis-exporter
Just to add: if you are looking to override a key with a "." in the key name, add a backslash ("\") before the ".".
For example, with values (taken from Grafana):
grafana.ini:
  server:
    root_url: https://my.example.com
To edit the root_url value we would pass
--set grafana\.ini.server.root_url=https://your.example.com