Helm dependencies - Reference to values inside the Helm dependency (dependency outputs) - kubernetes

I'll give the context first, then the problem: I am interested in centralising a set of values that would otherwise live in my values.yaml.
My initial plan was to create a ConfigMap with the centralised values that I could load using the lookup Helm function. Sadly for me, the CD tool I use (ArgoCD) doesn't support lookup.
My current chain of thought is to create a dummy Helm chart that contains the centralised values and set it as a dependency. Can I get some outputs out of this dependency that can be used elsewhere? If yes, how do I refer to them in values.yaml?

One approach could be like this:
Create a folder structure such as this:
yourappfolder
- Chart.yaml
- values.yaml // the main values file
- values.dev.yaml // contains env-specific values and overrides values.yaml
and publish a completely new Helm chart, e.g. my-generic-chart, to a registry with default values already in place, then add it to Chart.yaml as a dependency:
# Chart.yaml
apiVersion: v2
name: myapplication
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: my-generic-chart
    version: 5.6.0
    repository: "URL to your my-generic-chart repository"
and don't forget to put all the values under the my-generic-chart key:
# values.yaml
my-generic-chart:
  image: nginx
  imageTag: 1
  envs:
    log-format: "json"
    log-level: "none"
  ..
  ..
# values.dev.yaml
my-generic-chart:
  imageTag: 1.1
  envs:
    log-level: "debug"
  ..
  ..
The values.dev.yaml file will override the values in values.yaml, and both of them together will override the defaults in the generic chart's own values.yaml file.
Now you have to create a generic chart that fits all of your applications, or create one generic chart per type of application.
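As an illustration, assuming the chart above lives in the current directory and my-generic-chart's templates read these keys, a dev install could look like this (the release name is a placeholder):

# fetch the my-generic-chart dependency declared in Chart.yaml
helm dependency update .
# values.yaml is applied by default; values.dev.yaml is layered on top of it
helm install myapplication . -f values.dev.yaml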

I figured out it is possible in two ways:
1. Exporting/importing child values: https://helm.sh/docs/topics/charts/#importing-child-values-via-dependencies and I found a great related answer: How to use parameters of one subchart in another subchart in Helm (a sketch of this approach follows below).
2. Using "templates": https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#sharing-templates-with-subcharts "Parent charts and subcharts can share templates. Any defined block in any chart is available to other charts."
For the second way, I create one chart, "data", that doesn't create any resources but defines template functions (for example "data.project"), and then I use it from the other charts:
├── Chart.lock
├── Chart.yaml
├── charts
│   ├── data
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   └── _helpers.tpl
│   │   └── values.yaml
│   └── sub0
│       ├── Chart.yaml
│       ├── templates
│       │   └── configmap.yaml
│       └── values.yaml
└── values.yaml
charts/data/templates/_helpers.tpl contains:
{{- define "data.project" -}}
project_name
{{- end }}
The top Chart.yaml contains:
---
apiVersion: v2
name: multi
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: data
    version: 0.1.0
    repository: "file://charts/data" # While testing locally; once shared in a central repository this achieves the centralised data information
    import-values:
      - data
  - name: sub0
    version: 0.1.0
    repository: "file://charts/sub0"
Then from anywhere I can do:
"{{ include \"data.project\" . }}"
And obtain the value.
Of course, the "data" chart will need to live in a separate repository so it can be shared.

Related

Flux v2 - How to Deploy Same Helm Chart, Multiple Times, Into Different Namespaces

We are building out a small cluster for a dev team.
I've been working through this repo: https://github.com/fluxcd/flux2-kustomize-helm-example
The infrastructure part went fine.
Now, instead of apps, I need to create a way for each developer to deploy/maintain their own version of the application they are working on.
├── clusters
│   └── qa
│       ├── deploys.bak
│       ├── flux-system
│       │   ├── gotk-components.yaml
│       │   ├── gotk-sync.yaml
│       │   └── kustomization.yaml
│       └── infrastructure.yaml
├── deploys
│   ├── base
│   ├── dev1
│   ├── dev2
│   ├── dev3
│   └── staging
In deploys/base it would be great to specify a Namespace, a HelmRelease, and a Kubernetes Secret.
Then in deploys/dev1 it would be great if we could include the base but have a way of overriding the namespace everything goes into.
So you would have namespaces app-dev1, app-dev2 etc.
This would allow us to only really have to override the ingress information, and the image tag for the app.
Thanks for any information on this.
You need to add a patch to your kustomization.
patches:
  - target:
      kind: HelmRelease
      name: .*-helm-release
      version: v2beta1
    patch: |-
      - op: add
        path: /spec/targetNamespace
        value: dev1
      - op: replace
        path: /metadata/namespace
        value: dev1
Add this to every env that you want.
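Putting it together, a hypothetical deploys/dev1/kustomization.yaml (the ../base path and the HelmRelease name pattern are assumptions based on the layout above) might look like:

# deploys/dev1/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: app-dev1   # rewrites metadata.namespace on all included resources
resources:
  - ../base
patches:
  - target:
      kind: HelmRelease
      name: .*-helm-release
      version: v2beta1
    patch: |-
      - op: add
        path: /spec/targetNamespace
        value: app-dev1

The namespace field should cover the metadata.namespace replacement shown in the patch above, so only the targetNamespace patch is still needed per environment.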

Helm: use packaged values.dev.yaml for install

Given the chart structure:
└── myChart
    ├── Chart.yaml
    ├── templates
    │   ├── ...
    │   └── service.yaml
    ├── values.dev.yaml
    └── values.yaml
values.dev.yaml gets packaged into the chart .tgz. Is it possible to use values.dev.yaml for values (-f) when installing?

Flux V2 not pushing new image version to git repo

I've upgraded from Flux V1 to V2. It all went fairly smoothly, but I can't seem to get the ImageUpdateAutomation to work. Flux knows I have images to update, but it doesn't change the container image in the deployment.yaml manifest and commit the changes to GitHub. I have no errors in my logs, so I'm at a bit of a loss as to what to do next.
I have a file structure that looks something like this:
├── README.md
├── staging
│   ├── api
│   │   ├── deployment.yaml
│   │   ├── automation.yaml
│   │   └── service.yaml
│   ├── app
│   │   ├── deployment.yaml
│   │   ├── automation.yaml
│   │   └── service.yaml
│   ├── flux-system
│   │   ├── gotk-components.yaml
│   │   ├── gotk-sync.yaml
│   │   └── kustomization.yaml
│   ├── image_update_automation.yaml
My staging/api/automation.yaml is pretty straightforward:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: api
  namespace: flux-system
spec:
  image: xxx/api
  interval: 1m0s
  secretRef:
    name: dockerhub
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: api
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: api
  policy:
    semver:
      range: ">=1.0.0"
My staging/image_update_automation.yaml looks something like this:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  git:
    checkout:
      ref:
        branch: master
    commit:
      author:
        email: fluxcdbot@users.noreply.github.com
        name: fluxcdbot
      messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
    push:
      branch: master
  interval: 1m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  update:
    path: ./staging
    strategy: Setters
Everything seems to be ok here:
❯ flux get image repository
NAME   READY   MESSAGE                          LAST SCAN                   SUSPENDED
api    True    successful scan, found 23 tags   2021-07-28T17:11:02-06:00   False
app    True    successful scan, found 18 tags   2021-07-28T17:11:02-06:00   False
❯ flux get image policy
NAME   READY   MESSAGE                                             LATEST IMAGE
api    True    Latest image tag for 'xxx/api' resolved to: 1.0.1   xxx/api:1.0.1
app    True    Latest image tag for 'xxx/app' resolved to: 3.2.1   xxx/app:3.2.1
As you can see from the policy output, the LATEST IMAGE for api is 1.0.1; however, when I view the current versions of my app and api, they have not been updated.
kubectl get deployment api -n xxx -o json | jq '.spec.template.spec.containers[0].image'
"xxx/api:0.1.5"
Any advice on this would be much appreciated.
My issue was that I didn't add the setter marker comment after the image declaration in my deployment YAML. Honestly, I'm surprised this is a comment and not an annotation.
spec:
  containers:
    - image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}
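Once the marker is in place, you can trigger a run immediately instead of waiting for the interval and then check the automation's status; a quick sketch with the flux CLI, using the object names from the manifests above:

# trigger the ImageUpdateAutomation right away
flux reconcile image update flux-system
# inspect whether it pushed a commit
flux get image update flux-system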

Kustomize - Create multi and single deployment using same namespace

Kustomize directory structure
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── prod
        ├── kustomization.yaml
        ├── namespace-a
        │   ├── deployment-a1
        │   │   ├── kustomization.yaml
        │   │   └── patch.yaml
        │   ├── deployment-a2
        │   │   ├── kustomization.yaml
        │   │   └── patch.yaml
        │   ├── kustomization.yaml
        │   └── namespace.yaml
        ├── namespace-b
        │   ├── deployment-b1
        │   │   ├── kustomization.yaml
        │   │   └── patch.yaml
        │   ├── deployment-b2
        │   │   ├── kustomization.yaml
        │   │   └── patch.yaml
        │   ├── kustomization.yaml
        │   └── namespace.yaml
        └── namespace-c
As you can see above, I have a prod environment with namespace-a, namespace-b, and a few more.
To create the deployments for all of them, I can simply run the command below:
> kustomize build overlays/prod
which works flawlessly: both namespaces are created along with the other deployment files for all deployments.
To create the deployments for only namespace-a:
> kustomize build overlays/prod/namespace-a
That also works. :)
But that's not where the story ends for me, at least.
I would like to keep the current functionality and also be able to deploy deployment-a1, deployment-a2, and so on individually:
> kustomize build overlays/prod/namespace-a/deployment-a1
If I put the namespace.yaml inside the deployment-a1 folder and add it to kustomization.yaml, then the above command works, but the previous two fail with an error because now we have two namespace files with the same name.
I have 2 queries:
Can this directory structure be improved?
How can I create a namespace with a single deployment without breaking the other functionality?
Full code can be seen here
In your particular case, in the most ideal scenario, all the required namespaces should already be created before running the kustomize command.
However, I know that you would like to create namespaces dynamically as needed.
Using a Bash script as some kind of wrapper can definitely help with this approach, but I'm not sure if you want to use this.
Below, I'll show you how this can work, and you can choose if it's right for you.
First, I created a kustomize-wrapper script that requires two arguments:
The name of the Namespace you want to use.
Path to the directory containing the kustomization.yaml file.
kustomize-wrapper.sh
$ cat kustomize-wrapper.sh
#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ]; then
  echo "Pass required arguments!"
  echo "Usage: $0 NAMESPACE KUSTOMIZE_PATH"
  exit 1
else
  NAMESPACE=$1
  KUSTOMIZE_PATH=$2
fi

echo "Creating namespace"
sed -i "s/name:.*/name: ${NAMESPACE}/" namespace.yaml
kubectl apply -f namespace.yaml

echo "Setting namespace: ${NAMESPACE} in the kustomization.yaml file"
sed -i "s/namespace:.*/namespace: ${NAMESPACE}/" base/kustomization.yaml

echo "Deploying resources in the ${NAMESPACE}"
kustomize build ${KUSTOMIZE_PATH} | kubectl apply -f -
As you can see, this script creates a namespace using the namespace.yaml file as the template. It then sets the same namespace in the base/kustomization.yaml file and finally runs the kustomize command with the path you provided as the second argument.
namespace.yaml
$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name:
base/kustomization.yaml
$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace:
resources:
  - deployment.yaml
Directory structure
$ tree
.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
├── kustomize-wrapper.sh
├── namespace.yaml
└── overlays
    └── prod
        ├── deployment-a1
        │   ├── kustomization.yaml
        │   └── patch.yaml
        ├── deployment-a2
        │   ├── kustomization.yaml
        │   └── patch.yaml
        └── kustomization.yaml
We can check if it works as expected.
Creating the namespace-a Namespace along with app-deployment-a1 and app-deployment-a2 Deployments:
$ ./kustomize-wrapper.sh namespace-a overlays/prod
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created
deployment.apps/app-deployment-a2 created
Creating only the namespace-a Namespace and app-deployment-a1 Deployment:
$ ./kustomize-wrapper.sh namespace-a overlays/prod/deployment-a1
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created

Referring to a resource YAML from another directory in kustomization

I have a resource YAML file in the folder structure given below:
base
---- first.yaml
main
---- kustomization.yaml
In kustomization.yaml I am referring to first.yaml as:
resources:
- ../base/first.yaml
But I am getting an error when I run kubectl apply -f kustomization.yaml:
accumulating resources: accumulating resources from '../base/first.yaml': security; file '../base/first.yaml' is not in or below '../base'
How can I reference first.yaml from the base folder in the kustomization in the main folder?
Kustomize cannot refer to individual resources in parent directories; it can only refer to resources in the current directory or child directories, but it can refer to other Kustomize directories (ones containing their own kustomization.yaml).
The following would be a valid configuration for what you have:
.
├── base
│   ├── main
│   │   ├── kustomization.yaml
│   │   └── resource.yaml
│   └── stuff
│       ├── first.yaml
│       └── kustomization.yaml
└── cluster
    └── kustomization.yaml
Contents of base/main/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
Contents of base/stuff/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- first.yaml
Contents of cluster/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
- ../base/stuff
Alternatively, run kustomize build from one level above: kustomize build ./main. You aren't allowed to .. up past the directory where kustomize started, just to be safe.
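Concretely, a minimal sketch of the directory-reference pattern for the original two-folder layout (this assumes you give base its own kustomization.yaml so that main can pull in the whole directory instead of the individual file):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - first.yaml

# main/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base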