Using the same spec across different deployments in ArgoCD - Kubernetes

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and reference them across all the deployment files?

Helm is a good fit for this.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize.
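For reference, a minimal Kustomize sketch of that approach (the base/overlay layout and names below are illustrative, not taken from the original setup):
# base/kustomization.yaml (the shared spec lives once in the base)
resources:
  - deployment.yaml
  - service.yaml

# overlays/app-a/kustomization.yaml (each variant only renames and relabels the shared spec)
resources:
  - ../../base
namePrefix: app-a-
commonLabels:
  app.kubernetes.io/name: app-a
Running kustomize build overlays/app-a (or pointing an ArgoCD Application at that path) then produces the same Deployment and Service with the app-a- prefix and labels applied.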

Use Helm: create a pipeline with a helm:3 container and keep your chart in a helm-chart directory or its own repository. Pull the chart repository and deploy with helm, using values.yaml for the dynamic values you want to set. You will also need to add a kubeconfig file to your pipeline, but that is a separate issue.
This is the best suggestion I can give; for anything more specific I would need to look at the ArgoCD setup.
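As a rough sketch only (GitLab-style CI syntax assumed; the image tag, repository URL, release and chart names are placeholders, not something from the answer):
deploy:
  image: alpine/helm:3.11.1   # any helm:3 container image will do
  script:
    # the kubeconfig is assumed to be injected into the job, e.g. as a CI file variable
    - helm repo add myrepo https://charts.example.com
    - helm upgrade --install my-release myrepo/my-chart -f values.yaml --namespace my-namespace --kubeconfig "$KUBECONFIG"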

I was faced with this problem and resolved it using Helm 3 charts:
Chart.yaml, where I set the chart name and version.
values.yaml, where I define all the variables to use for a given environment.
values-test.yaml, a file to use, for example, in a test environment, containing only the variables that must change from one environment to another.
I hope this helps you resolve your issue.
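A minimal sketch of how those files layer (the keys below are invented for illustration):
# values.yaml: defaults shared by every environment
replicaCount: 3
image:
  tag: "1.0.0"

# values-test.yaml: only the keys that differ in the test environment
replicaCount: 1
At deploy time the environment file is layered on top of the defaults, e.g. helm upgrade --install my-release . -f values-test.yaml.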

I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699); this can be very annoying, especially for multi-environment setups. Fortunately, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template which generates your Vault application.
values.yaml
server: 1.2.3.4 # Target kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
        ...
These values then override any base configuration residing in the values.yaml of the chart at {{ .Values.vault.repoURL }}, which can hold the config that doesn't change between environments.
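For completeness, the values.yaml living in that chart repository would then carry only the environment-invariant pieces, something along these lines (a sketch, not taken from the answer):
# helm/vault-chart/values.yaml in the vault-helm repository
vault:
  server:
    ingress:
      enabled: true
    resources:
      requests:
        memory: 256Mi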

Related

Using ingress host from k8s secret in helm

I am trying to deploy a service using Helm. The cluster is Azure AKS, and I have one DNS zone associated with the cluster that can be used for ingress.
The issue is that the DNS zone is stored in a k8s Secret, and I want to use it as the host in the ingress, like below:
spec:
  tls:
    - hosts:
        - {{ .Chart.Name }}.{{ .Values.tls.host }}
  rules:
    - host: {{ .Chart.Name }}.{{ .Values.tls.host }}
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: {{ .Chart.Name }}
                port:
                  number: 80
            path: "/"
I want the .Values.tls.host value to come from the Secret. Currently, it is hardcoded in the values.yaml file.
This is a community wiki answer posted for better visibility. Feel free to expand it.
For the current version of Helm (3.8.0), it does not seem possible to read values directly from a Secret using the standard approach.
Based on the information from Helm website:
A template directive is enclosed in {{ and }} blocks.
The values that are passed into a template can be thought of as namespaced
objects, where a dot (.) separates each namespaced element.
Objects are passed into a template from the template engine and can be:
- Release
- Values
- Chart
- Files
- Capabilities
- Template
Contents for Values objects can come from multiple sources:
- The values.yaml file in the chart
- If this is a subchart, the values.yaml file of a parent chart
- A values file if passed into helm install or helm upgrade with the -f flag (helm install -f myvals.yaml ./mychart)
- Individual parameters passed with --set (such as helm install --set foo=bar ./mychart)
Helm cannot fetch values from Kubernetes, and ingress manifests do not support this type of reference.
You need to do this outside of Helm and stitch the commands together, in your pipeline, from a script, or manually in the shell:
helm install my-app chart/ \
  --set tls.host="$(kubectl get secret my-dns-zone \
    -o go-template='{{index .data "dns-zone" | base64decode }}')"

helm dependencies with different namespaces

Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a super Helm chart in which I plan to add the Helm charts of these tools and install them in one shot. My problem is that, because the tools live in different namespaces, I am not sure where to specify the namespace key for where a particular dependency (chart) should be installed. For example, if below is the Chart.yaml of my super Helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions but I believe conditions will only take a boolean as a value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says we can do it in Helm 3, but it is mostly about how to keep releases separated between different namespaces. It does not specifically answer my question.
There is no built-in way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1   # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => run helm install/upgrade on all releases, or
helmfile apply => same as sync, but do a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values, or want something similar to how umbrella chart values are handled, have a look at "helmfile: a simple trick to handle values intuitively".
The way I solved this for my clusters is with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of its bootstrapping capabilities.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy, each managed via Argo's Application CRD. You will notice there is a field in that CRD, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired value file(s) inside each child chart's directory specified.
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"

Helm best practices

I am new to Helm and like the idea of using Helm to version deployments and package them as artifacts in JFrog Artifactory, but one thing I am unclear about is how easy they are to create.
I am comfortable with Kubernetes manifests, and creating them is very simple since you don't have to handcraft the YAML.
You can simply run a kubectl command in dry-run mode and export most of the YAML fields, as below:
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-manifest.yaml
Now, to create a Helm chart, I need to run helm create and key in all the values needed by the Helm YAML files.
Does Helm offer shortcuts like kubectl's, so the required values can be keyed in from the command line while generating the chart?
Also, is there a migration utility available that converts deployment manifests into Helm charts?
helm create does what you are looking for. It creates a directory with all the basic scaffolding so that you don't need to create each file/directory by hand. However, it can't generate the content of a chart it knows nothing about.
But there is no magic behind the scenes: a chart consists of templates and values. The templates are the same YAML files you are used to working with, except that you replace whatever you want to make "dynamic" with the placeholders Helm uses. That's it.
So, in other words, just keep exporting as you are (though I strongly suggest you stop doing this and create proper files suited to your needs) and add placeholders ({{ .Values.foo }}).
For example, this is the template for a service I have:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name | default .Chart.Name }}
spec:
  ports:
    - port: {{ .Values.port }}
      protocol: TCP
      targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name | default .Chart.Name }}
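A values.yaml that pairs with this template can be as small as (the names here are illustrative):
# values.yaml
name: my-service
port: 8080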

Can I add arbitrary config to a pod spec deployed with a helm chart without modifying the helm chart?

I'm using this Helm chart to deploy: https://github.com/helm/charts/tree/master/stable/atlantis
It deploys this stateful set: https://github.com/helm/charts/blob/master/stable/atlantis/templates/statefulset.yaml
Is there a way I can add arbitrary config values to a pod spec that was deployed with a Helm chart without having to modify the chart? For example, I want to add an env var that gets its value from a Secret to the pod spec of the StatefulSet this chart deploys.
Can I create my own Helm chart that references this chart and adds to the config of the pod spec, again without modifying the original chart?
EDIT: what I'm talking about is adding an env var like this:
env:
  - name: GET_THIS_VAR_IN_ATLANTIS
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: abc
Maybe I can create another chart as a parent of this chart and override the entire env: block?
Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart without having to modify the chart?
You can only make changes that the chart itself supports.
If you look at the StatefulSet definition you linked to, there are a lot of {{ if .Values.foo }} knobs there. This is a fairly customizable chart and you probably can change most things. As a chart author, though, you'd have to explicitly write all of these conditionals and macro expansions in.
For example I want to add an env: var that gets its value from a secret to the pod spec of the stateful set this chart deploys
This very specific chart contains a block
{{- range $key, $value := .Values.environment }}
  - name: {{ $key }}
    value: {{ $value | quote }}
{{- end }}
so you could write a custom Helm YAML values file and add in
environment:
  arbitraryKey: "any fixed value you want"
and then use the helm install -f option to supply that file when you install the chart.
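For example (the release name and values file name are placeholders):
helm install atlantis stable/atlantis -f my-values.yaml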
This chart does not support injecting environment values from secrets, beyond a half-dozen specific values it supports by default (e.g., GitHub tokens).
As I say, this isn't generic at all: this is very specific to what this specific chart supports in its template expansions.
I should have marked the previous answer as the accepted one, but things have changed in Helm 3.
While there is still no built-in way of patching a chart, there is now built-in support for a "post renderer" (https://helm.sh/docs/topics/advanced/).
So calling kustomize as a post renderer is probably what most would suggest now with Helm 3.
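A minimal sketch of that approach, tying it back to the env var from the question (the wrapper script name, StatefulSet name, and patch path are assumptions; kustomize must be available on the PATH):
# kustomization.yaml: patches the manifests helm has already rendered
resources:
  - all.yaml
patches:
  - target:
      kind: StatefulSet
      name: atlantis           # adjust to the StatefulSet name the chart actually renders
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-   # assumes the container already has an env list
        value:
          name: GET_THIS_VAR_IN_ATLANTIS
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: abc
and a tiny executable wrapper, kustomize-post-renderer.sh, that Helm pipes the rendered manifests through:
#!/bin/sh
cat > all.yaml        # capture helm's rendered output from stdin
kustomize build .     # print the patched manifests back to helm on stdout
which is wired in with helm install atlantis stable/atlantis --post-renderer ./kustomize-post-renderer.sh.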

helm - programmatically override subchart values.yaml

I'm writing a helm chart that uses the stable/redis chart as a subchart.
I need to override the storage class name used for both microservices within my chart, and within the redis chart.
I'm using helm 2.12.3
I would like to be able to specify redis.master.persistence.storageClass in terms of a template, like so
storage:
  storageClasses:
    name: azurefile

redis:
  usePassword: false
  master:
    persistence:
      storageClass: {{ $.Values.storage.storageClasses.name }}
Except, as I understand, templates aren't supported within values.yaml
As this is a public chart, I'm not able to modify it to depend on a global value as described here in the documentation
I considered using {{ $.Values.redis.master.persistence.storageClass }} elsewhere in my chart rather than {{ $.Values.storage.storageClasses.name }}, but this would:
- Not hide the complexity of the dependencies of my chart
- Not scale if I were to add yet another subchart dependency
In my values.yaml file I have:
storage:
  storageClasses:
    name: azurefile

redis:
  master:
    persistence:
      storageClass: azurefile
I would like to specify a single value in values.yaml that can be overwritten at chart deploy time.
e.g. like this
helm install --set storage.storageClasses.name=foo mychart
rather than
helm install --set storage.storageClasses.name=foo --set redis.master.persistence.storageClass=foo mychart
As you correctly mentioned, Helm values files are plain YAML files which cannot contain templates. For your use case, you would need a templating system for your values files as well, which basically means generating your values files on the fly. I'd suggest taking a look at helmfile: it lets you share values files across multiple charts and application environments.
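A sketch of how that could look with helmfile (chart paths and the state value name are illustrative; helmfile renders helmfile.yaml as a template, so environment values can be referenced inline):
# helmfile.yaml
environments:
  default:
    values:
      - storageClassName: azurefile

releases:
  - name: myapp
    chart: ./mychart
    values:
      - storage:
          storageClasses:
            name: {{ .Values.storageClassName }}
  - name: redis
    chart: stable/redis
    values:
      - master:
          persistence:
            storageClass: {{ .Values.storageClassName }}
A single override at deploy time then flows into both charts, e.g. helmfile --state-values-set storageClassName=foo sync (flag name as in recent helmfile versions).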