Reassign a value in helm chart - kubernetes-helm

Following is the values.yaml in the helm chart:
global:
  namespace: istio
chart-1:
  istioNamespace: istio
chart-2:
  targetNamespace: istio
Is there a way where istioNamespace and targetNamespace can refer to global.namespace?

Since this is YAML, you can make use of its anchors, aliases, and merge keys to re-use values/data.
In your case, you could do something like this:
In a YAML document, you can refer to a previously defined anchor with an alias.
global:
  namespace: &ns "istio"
chart-1:
  istioNamespace: *ns
chart-2:
  targetNamespace: *ns
NOTE: if you try to override this with another values.yaml file, it might not work as expected.
This is useful when you are doing it in the same YAML file.
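For instance, if you later override global.namespace from another values file, the aliases keep the value captured when the original file was parsed; a sketch of that caveat:
# override.yaml
global:
  namespace: prod
# chart-1.istioNamespace and chart-2.targetNamespace still resolve to "istio",
# because the *ns aliases were expanded when the original values.yaml was parsed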
Here is a reference link, and it is in the official docs as well.
I have tried a few of them in my day-to-day work with Helm charts, and they work pretty well with values.yaml.
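Merge keys, mentioned above, can also cut repetition when several sections share the same set of values. A minimal sketch (merge keys are a YAML 1.1 feature; Helm's parser has handled them in my experience, but behaviour varies across YAML implementations):
defaults: &defaults
  namespace: istio
  imagePullPolicy: IfNotPresent
chart-1:
  <<: *defaults          # inherits namespace and imagePullPolicy
chart-2:
  <<: *defaults
  namespace: other       # keys set locally win over merged keys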

Related

Using same spec across different deployment in argocd

I am currently using Kustomize. We have multiple deployments and services that have the same spec but different names. Is it possible to store the spec in individual files and refer to them across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize.
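For reference, a minimal kustomization.yaml sketch of that approach (file and prefix names are hypothetical):
# kustomization.yaml
resources:
  - base/deployment.yaml
  - base/service.yaml
namePrefix: billing-                  # prepended to every resource name
commonLabels:
  app.kubernetes.io/name: billing    # applied to resources and their selectors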
Use Helm: in ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with Helm, using values.yaml for the dynamic values you want. You will also need to add a kubeconfig file to your pipeline, but that is a separate issue.
This is the best suggestion I can give. For further information I would need to inspect your ArgoCD setup.
I was faced with this problem and I resolved it using Helm 3 charts:
- A Chart.yaml file where I indicate my release name and version.
- A values.yaml where I define all the variables to use for a specific environment.
- A values-test.yaml file to use, for example, in a test environment, where you should only put the variables that must change from one environment to another (layered as shown below).
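At install time the layering then looks something like this (release and chart names are hypothetical):
# later -f flags take precedence over earlier ones,
# so values-test.yaml only has to override what differs
helm install my-release ./my-chart -f values.yaml -f values-test.yaml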
I hope that can help you to resolve your issue.
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699) - this can be very annoying, especially for multi-environment setups. Fortunately, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template which generates your Vault application:
values.yaml
server: 1.2.3.4 # Target Kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
        ...
These values will then override any base configuration residing in the values.yaml specified by {{ .Values.vault.repoURL }}, which can contain config that doesn't change for each environment.
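A sketch of how such a generator chart might then be driven per environment (names and paths are hypothetical):
# one values file per environment produces one set of ArgoCD Applications
helm install argocd-apps-dev ./my-argocd-application-helm \
  -f environments/dev/values.yaml -n argocd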

Helm - documentation specification for very unusual and not regular Kubernetes YAML format

In the Microsoft docs I found that I should define this Helm values YAML file for creating a Kubernetes ingress controller:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
As you can easily notice, it DOES NOT have the usual Kubernetes apiVersion and kind specification.
And further on, the same link says that I need to execute this helm command to create the ingress:
helm install nginx-ingress ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  ..............
As you see, the suggested Helm file is not very usual, but I would like to stick to those official Microsoft instructions!
Again, it does not have a specification for creating an ingress controller using the regular apiVersion and kind notation, like in many links and examples that can be found on the internet:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
Can I set custom ports for a Kubernetes ingress to listen on besides 80 / 443?
So I feel really confused! Can you please help me with this? I need to set its port, but I could not find specification documentation and examples for this YAML example from Microsoft, which actually works:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Where and how can I find the correct syntax to "translate" the regular apiVersion and kind specification into this format?
Why do they create confusion with this different format?
Please help! Thanks
The -f flag does not set a "manifest" as stated in the Microsoft docs. As per helm install --help:
-f, --values strings   specify values in a YAML file or a URL (can specify multiple)
The default values file contains the values to be passed into the chart.
User-supplied values passed with -f are merged with the default values file to generate the final manifest. The precedence order is:
- The values.yaml file in the chart
- If this is a subchart, the values.yaml file of a parent chart
- A values file passed into helm install or helm upgrade with the -f flag (helm install -f myvals.yaml ./mychart)
- Individual parameters passed with --set (such as helm install --set foo=bar ./mychart)
The list above is in order of specificity: values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
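As a concrete illustration of that precedence (chart, file, and key names are hypothetical):
# chart values.yaml:  foo: alpha
# myvals.yaml:        foo: beta
# final value:        foo: charlie   (--set wins)
helm install myrelease ./mychart -f myvals.yaml --set foo=charlie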
What you are doing is overriding the controller value on top of the default values file. You can find the original/default values for the ingress-nginx chart here.
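If you want to inspect the full set of default values you can override this way, Helm can print a chart's bundled values.yaml (Helm 3 syntax):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm show values ingress-nginx/ingress-nginx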

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  updating-applications/
  fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file 'jenkins/jenkins.yaml' from the root of the location in my git repo that Flux is monitoring. Adding this file does nothing... I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped to get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command I gave at the top of this post to work with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting (see the commands after the example below). One thing that I'm seeing is that you're using https for git. You may want to double-check, but I don't recall ever seeing any documentation configuring chart pulls via git to use anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyhow:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
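For the log check suggested above, something like this should work, assuming Flux and the helm-operator run in the flux namespace as in the tutorial:
kubectl -n flux logs deployment/flux
kubectl -n flux logs deployment/flux-helm-operator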

Helm chart deployment ordering

I created a new chart with 2 PodPresets and 2 deployments. When I run helm install, the deployment (pod) objects are created first and then the PodPresets; hence the values from my PodPresets are not applied to the pods. But when I manually create the PodPresets first and then the deployments, the presets are applied properly. Is there a way I can specify in Helm which object should be created first?
Posting this as a Community Wiki for better visibility, as the answer was provided in comments below another answer made by @Rastko.
PodPresets
A Pod Preset is an API resource for injecting additional runtime requirements into a Pod at creation time. Using a Pod Preset allows pod template authors to not have to explicitly provide all information for every pod. This way, authors of pod templates consuming a specific service do not need to know all the details about that service.
For more information, please check official docs.
Order of deploying objects in Helm
The order of deploying objects is hardcoded in Helm. The list can be found here.
In addition, if a resource is not in the list, it will be created last.
Answer to the question from the comments:
To achieve an order different from the default one, you can create two Helm charts, where the chart with the deployments is executed after the one containing the presets, with a pre-install hook making sure that the presets are already there.
The pre-install hook annotation allows a resource to be created after templates are rendered, but before any other resources are created.
This workaround was mentioned on Github thread. Example for service:
apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    "helm.sh/hook": "pre-install"
As additional information, it is possible to define a weight for a hook, which will help build a deterministic execution order.
annotations:
  "helm.sh/hook-weight": "5"
For more details regarding this annotation, please check this Stack Overflow question.
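Putting the two annotations together for the PodPreset case, a sketch (PodPreset is an alpha resource, settings.k8s.io/v1alpha1, and must be enabled in the cluster; the names and the injected env var are hypothetical):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: my-preset
  annotations:
    "helm.sh/hook": "pre-install"   # created before regular resources
    "helm.sh/hook-weight": "0"      # lower weights run first
spec:
  selector:
    matchLabels:
      role: app                     # pods this preset applies to
  env:
    - name: DB_PORT
      value: "6379"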
Since you are using Helm charts and have full control of this part, why not make optional parts in your Helm charts that you can activate with an external value?
This would be a much more "Helm-native" way:
{{- if eq .Values.prodSecret "enabled" }}
- name: prod_db_password
  valueFrom:
    secretKeyRef:
      name: prod-db-password # Secret names must be DNS-1123 compliant, so no underscores
      key: password
{{- end }}
Then you just need to add --set prodSecret=enabled when executing your Helm chart.

helm - programmatically override subchart values.yaml

I'm writing a helm chart that uses the stable/redis chart as a subchart.
I need to override the storage class name used for both microservices within my chart, and within the redis chart.
I'm using helm 2.12.3
I would like to be able to specify redis.master.persistence.storageClass in terms of a template, like so:
storage:
  storageClasses:
    name: azurefile
redis:
  usePassword: false
  master:
    persistence:
      storageClass: {{ $.Values.storage.storageClasses.name }}
Except, as I understand it, templates aren't supported within values.yaml.
As this is a public chart, I'm not able to modify it to depend on a global value as described here in the documentation.
I considered using {{ $.Values.redis.master.persistence.storageClass }} elsewhere in my chart rather than {{ $.Values.storage.storageClasses.name }}, but this would:
- Not hide the complexity of the dependencies of my chart
- Not scale if I were to add yet another subchart dependency
In my values.yaml file I have:
storage:
  storageClasses:
    name: azurefile
redis:
  master:
    persistence:
      storageClass: azurefile
I would like to specify a single value in values.yaml that can be overwritten at chart deploy time, e.g. like this:
helm install --set storage.storageClasses.name=foo mychart
rather than:
helm install --set storage.storageClasses.name=foo --set redis.master.persistence.storageClass=foo mychart
As you correctly mentioned, Helm values files are plain YAML files which cannot contain any templates. For your use case, you'd need to use a templating system for your values files as well, which basically means you are generating your values files on the fly. I'd suggest taking a look at helmfile. It lets you share values files across multiple charts and application environments, as sketched below.
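For illustration, a minimal helmfile sketch (paths are hypothetical; helmfile renders values files ending in .gotmpl as Go templates before handing them to Helm):
# helmfile.yaml
releases:
  - name: mychart
    chart: ./mychart
    values:
      - values.yaml.gotmpl

# values.yaml.gotmpl - rendered before Helm sees it
storage:
  storageClasses:
    name: {{ env "STORAGE_CLASS" | default "azurefile" }}
redis:
  master:
    persistence:
      storageClass: {{ env "STORAGE_CLASS" | default "azurefile" }}
This keeps a single knob (here the STORAGE_CLASS environment variable) driving both your chart's value and the subchart's value.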