Kustomize HelmChartInflationGeneration Error With ChartName Not Found - kubernetes

I have the following chartInflator.yml file:
apiVersion: builtin
kind: ChartInflator
metadata:
  name: project-helm-inflator
chartName: helm-k8s
chartHome: ../../../helm-k8s/
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
When I ran it using this, I got the error message below:
$ kustomize build .
Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty
Here is my project structure:
project
  - helm-k8s
    - values.yml
    - Chart.yml
    - templates
      - base
        - project-namespace.yml
        - grafana
          - grafana-service.yml
          - grafana-deployment.yml
          - grafana-datasource-config.yml
        - prometheus
          - prometheus-service.yml
          - prometheus-deployment.yml
          - prometheus-config.yml
          - prometheus-roles.yml
        - kustomization.yml
      - prod
        - kustomization.yml
      - test
        - kustomization.yml

I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is here. Reading that implies several changes:
- Include the inflator directly in your kustomization.yaml in the helmCharts section.
- Use name instead of chartName.
- Set chartHome in the helmGlobals section rather than per-chart.
That gets us something like this in our kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmGlobals:
  chartHome: ../../../helm-k8s/

helmCharts:
  - name: helm-k8s
    releaseName: project-monitoring-chart
    values: ../../values.yaml
    releaseNamespace: project-monitoring-ns
I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that contains your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from outside of the chart templates directory.
You may need to add --load-restrictor LoadRestrictionsNone when calling kustomize build for this to work; by default, the chartHome location must be contained by the same directory that contains your kustomization.yaml.
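Putting that together, the build invocation would look something like the following; note that recent kustomize versions also require the --enable-helm flag before the helmCharts field is processed, so the exact flags depend on your kustomize version:
$ kustomize build --enable-helm --load-restrictor LoadRestrictionsNone .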
Update
To make sure things are clear, this is what I'm recommending:
1. Remove the kustomize bits from your helm chart, so that it looks like this.
2. Publish your helm charts somewhere. I've set up GitHub Pages for that repository and published the charts at http://oddbit.com/open-electrons-deployments/.
3. Use kustomize to deploy the chart with transformations. Here we add a -prod suffix to all the resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: open-electrons-monitoring
    repo: http://oddbit.com/open-electrons-deployments/

nameSuffix: -prod
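Assuming that kustomization lives in something like your existing prod overlay directory (a guess based on your layout), inflating the published chart should then only need kustomize's Helm support flag, for example:
$ kustomize build --enable-helm ./prod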

Related

Patching multiple resources with Kustomize

I have a kustomization.yaml file defined which consists of two resources that are downloaded from a git repo. Both of these resources want to create a Namespace with the same name, which creates an error when I try to build the final kustomization file: may not add resource with an already registered id: ~G_v1_Namespace|~X|rabbitmq-system. I tried using patches to get rid of this unwanted namespace; however, it looks like that works only if ONE resource is defined. As soon as I add the other resource, it stops working.
bases:
  - ../../base
namespace: test
resources:
  - https://github.com/rabbitmq/cluster-operator/releases/download/v1.13.0/cluster-operator.yml
  - https://github.com/rabbitmq/messaging-topology-operator/releases/download/v1.6.0/messaging-topology-operator.yaml
patchesStrategicMerge:
  - |-
    apiVersion: v1
    kind: Namespace
    metadata:
      name: rabbitmq-system
    $patch: delete
What I think is happening is that Kustomize loads the resources first, finds two identical Namespaces defined, and doesn't take the patches into consideration. Can I fix this behavior somehow?

Jenkins deployment with Kustomize - how to add JENKINS_OPTS

I feel like this should be an already asked question, but I'm having difficulties finding a concrete answer. I'm deploying Jenkins through ArgoCD by defining the deployment via kustomize (kubernetes yaml). I want to inject a prefix to have Jenkins start on /jenkins, but I don't see a way to add it. I saw online that I can have an env tag, but no full example of this was available. Where would I inject a prefix value if using kubernetes yaml for a Jenkins deployment?
So, I solved this issue myself, and I'd like to post the answer as this is the top searched question when searching "Kustomize Jenkins_opts".
In your project, assuming you are using Kustomize to deploy Jenkins (This will work with any app deployment where you want to inject values when deploying), you should have a project structure similar to this:
ProjectA
|
|---> app.yaml //contains the yaml definitions for your deployment
|---> kustomize.yaml //entry file to run Kustomize to deploy your app
Add a new file to your project structure. Name it whatever you want; I named mine app-env.yaml. It will look something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  template:
    spec:
      containers:
        - name: jenkins
          env:
            - name: JENKINS_OPTS
              value: --prefix=/jenkins
This patch injects the JENKINS_OPTS environment variable (here carrying the --prefix flag, which sets the URL prefix Jenkins starts under) into the Jenkins container at deployment time. You can add multiple env variables and inject any value you want. My example uses Jenkins-specific flags since this question centers on Jenkins, but it works for any app. Add this file to your Kustomize file from earlier:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: kustomize-
resources:
  - app.yaml
patchesStrategicMerge:
  - app-env.yaml
When your app is deployed via K8s, it will run your app's startup process while passing in the values defined in your env patch file. Hope this helps anyone else.
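As a quick sanity check before deploying, you can render the result locally with kustomize build . (or kubectl kustomize .). With the example files above, the merged Deployment should contain roughly the following relevant fields; this is only a sketch, and the kustomize- prefix comes from the namePrefix in the kustomization:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-jenkins   # namePrefix applied by Kustomize
spec:
  template:
    spec:
      containers:
        - name: jenkins
          env:
            - name: JENKINS_OPTS
              value: --prefix=/jenkins
          # ...plus whatever else app.yaml defines for this container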

How to share resources/patches with multiple overlays using kustomize?

I have kube-prometheus deployed to multiple environments using kustomize.
kube-prometheus is a base and each environment is an overlay.
Let's say I want to deploy dashboards to overlays, which means I need to deploy the same ConfigMaps and the same patch to each overlay.
Ideally, I want to avoid changing the base (it is declared outside of my repo), keep things DRY, and not copy the same configs all over the place.
Is there a way to achieve this?
Folder structure:
/base/
  /kube-prometheus/
/overlays/
  /qa/      <---
  /dev/     <--- I want to share resources+patches between those
  /staging/ <---
The proper way to do this is using components.
Components can encapsulate both resources and patches together.
In my case, I wanted to add ConfigMaps (resource) and mount these ConfigMaps into my Deployment (patch) without repeating the patches.
So my overlay would look like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/kube-prometheus/ # Base
components:
  - ../../components/grafana-aws-dashboards/ # Folder with kustomization.yaml that includes both resources and patches
And this is the component:
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
  - grafana-dashboard-aws-apigateway.yaml
  - grafana-dashboard-aws-auto-scaling.yaml
  - grafana-dashboard-aws-ec2-jwillis.yaml
  - grafana-dashboard-aws-ec2.yaml
  - grafana-dashboard-aws-ecs.yaml
  - grafana-dashboard-aws-elasticache-redis.yaml
  - grafana-dashboard-aws-elb-application-load-balancer.yaml
  - grafana-dashboard-aws-elb-classic-load-balancer.yaml
  - grafana-dashboard-aws-lambda.yaml
  - grafana-dashboard-aws-rds-os-metrics.yaml
  - grafana-dashboard-aws-rds.yaml
  - grafana-dashboard-aws-s3.yaml
  - grafana-dashboard-aws-storagegateway.yaml
patchesStrategicMerge:
  - grafana-mount-aws-dashboards.yaml
This approach is documented here:
https://kubectl.docs.kubernetes.io/guides/config_management/components/
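For completeness, here is a hypothetical sketch of what the grafana-mount-aws-dashboards.yaml patch referenced above might contain; the Deployment name, namespace, container name, mount path, and ConfigMap name are illustrative assumptions, not taken from the actual setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      containers:
        - name: grafana
          volumeMounts:
            - name: grafana-dashboard-aws-ec2
              mountPath: /grafana-dashboard-definitions/0/aws-ec2
              readOnly: true
      volumes:
        - name: grafana-dashboard-aws-ec2
          configMap:
            name: grafana-dashboard-aws-ec2
The same pattern (one volume plus one volumeMount per dashboard ConfigMap) would repeat for each of the resources listed in the component.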
In your main kustomization, or some top-level overlay, you should be able to reference a common folder or repository.
Have you tried something like this:
resources:
  - github.com/project/repo?ref=x.y.z
If this doesn't answer your question, could you please edit your post and give us some context?

helm dependencies with different namespaces

Right now, I have to install multiple helm charts in different namespaces for my product to work. I am trying to create a super helm chart in which I am planning to add the helm charts (of my tools, as mentioned above) and install them in one shot. My problem is that, as these tools are in different namespaces, I am not sure where to specify the namespace key for where I want that particular dependency (chart) to be installed. For example, if below is the Chart.yaml of my super helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions but I believe conditions will only take a boolean as a value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says we can do it in Helm 3, but it is mostly about how to keep releases separate between different namespaces. It does not specifically answer my question.
There is no builtin way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1 # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => run helm install/upgrade on all releases, or
helmfile apply => same as sync, but do a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values or want to have something similar to how umbrella Chart values are handled, have a look at helmfile: a simple trick to handle values intuitively
The way I solved this for my clusters was with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of these easy bootstrapping capabilities.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy, each managed via Argo's Application CRD. You will notice there is a field in the CRD, spec.destination.namespace, which governs which namespace the chart will be deployed to.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired value file(s) in each child chart's directory specified:
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"
Below is a screenshot of one of my app-of-apps directories to complete the example.
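As a rough text sketch of that layout (directory and file names here are illustrative assumptions, not the exact ones from my repo), the parent chart typically looks like:
app-of-apps/
  Chart.yaml                  # ordinary Helm chart metadata for the parent chart
  values.yaml                 # the values file shown above
  templates/
    cert-manager.yaml         # the Application manifest shown above
    cluster-autoscaler.yaml
    external-dns.yaml
    ingress-nginx.yaml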

Kubectl - How to Read Ingress Hosts from Config Variables?

I have a ConfigMap with a variable for my domain:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  MY_DOMAIN: mydomain.com
and my goal is to use the MY_DOMAIN variable inside my Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
    - hosts:
⮕       - config.MY_DOMAIN
      secretName: mytls
  rules:
⮕   - host: config.MY_DOMAIN
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 3000
But obviously the config above is not valid. So how can this be achieved?
The configMapRef and secretRef sources for envFrom, and the configMapKeyRef and secretKeyRef sources for valueFrom, are only available for environment variables, which means they cannot be used in this context. The desired functionality is not available in vanilla Kubernetes as of 1.18.0.
However, it can be done. Helm and Kustomize are probably the two best ways to accomplish this, but it could also be done with sed or awk. Helm is a templating engine for Kubernetes manifests: you create generic manifests, template out the deltas between your desired manifests and the generic ones as variables, and then provide a variables file. At runtime, the variables from your variables file are automatically injected into the template for you.
Another way to accomplish this is with Kustomize, which is what I would personally recommend. Kustomize is like Helm in that it produces customized manifests from generic ones, but it doesn't do so through templating. Kustomize is unique in that it performs merge patches between YAML or JSON files at runtime. These patches are referred to as overlays, so Kustomize is often called an overlay engine to differentiate it from traditional templating engines. Kustomize can also work with recursive directory trees of bases and overlays, which makes it much more scalable for environments where dozens, hundreds, or thousands of manifests might need to be generated from boilerplate generic examples.
So how do we do this? Well, with Kustomize you would first define a kustomization.yml file. Within it you would define your resources. In this case, myingress:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - myingress.yml
So create an example directory and make a subdirectory called base inside it. Create ./example/base/kustomization.yml and populate it with the kustomization above. Now create a ./example/base/myingress.yml file and populate it with the example myingress file you gave above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
    - hosts:
        - config.MY_DOMAIN
      secretName: mytls
  rules:
    - host: config.MY_DOMAIN
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 3000
Now we need to define our first overlay. We'll create two different domain configurations to provide an example of how overlays work. First create a ./example/overlays/domain_a directory, then create a kustomization.yml file within it with the following contents:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base/
patchesStrategicMerge:
  - ing_patch.yml
configMapGenerator:
  - name: config_a
    literals:
      - MY_DOMAIN='domain_a'
At this point we have defined ing_patch.yml and config_a in this file. ing_patch.yml will serve as our ingress Patch and config_a will serve as our configMap. However, in this case we'll be taking advantage of a Kustomize feature known as a configMapGenerator rather than manually creating configMap files for single literal key:value pairs.
Now that we have done this, we have to actually make our first patch! Since the deltas in your ingress are pretty small, it's not that hard. Create ./example/overlays/domain_a/ing_patch.yml and populate it with:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
    - hosts:
        - domain.a.com
  rules:
    - host: domain.a.com
Perfect, you have created your first overlay. Now you can use kubectl or kustomize to generate your resultant manifest to apply to the Kubernetes API Server.
Kubectl Build: kubectl kustomize ./example/overlays/domain_a
Kustomize Build: kustomize build ./example/overlays/domain_a
Run one of the above build commands and review the STDOUT produced in your terminal. Notice how it contains two resources, myingress and the generated config_a ConfigMap? And how myingress contains the domain configuration present in your overlay's patch?
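For reference, the Ingress portion of that output should look roughly like the following (field ordering and the generated ConfigMap's name-suffix hash will vary by kustomize version):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
    - host: domain.a.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 3000
  tls:
    - hosts:
        - domain.a.com
      secretName: mytls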
So, at this point you're probably asking: why does Kustomize exist if kubectl supports these features by default? Well, Kustomize started as an external project, and the standalone kustomize binary is often a newer release than the version bundled with kubectl.
The next step is to create a second overlay. So go ahead and cp your first overlay over: cp -r ./example/overlays/domain_a ./example/overlays/domain_b.
Now that you have done that, open ./example/overlays/domain_b/ing_patch.yml in a text editor and change the contents to look like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
    - hosts:
        - domain.b.com
  rules:
    - host: domain.b.com
Save the file and then build your two separate overlays:
kustomize build ./example/overlays/domain_a
kustomize build ./example/overlays/domain_b
Notice how each generated stream of STDOUT varies based on the patch present in the overlay directory? You can continue to abstract this pattern by making your bases the overlays for other bases, or by making your overlays the bases for other overlays. Doing so can allow you to scale this project in extremely powerful and efficient ways. Apply them to your API server if you wish:
kubectl apply -k ./example/overlays/domain_a
kubectl apply -k ./example/overlays/domain_b
This is only the beginning of Kustomize really. As you might have guessed after seeing the configMapGenerator field in the kustomization.yml file for each overlay, Kustomize has a LOT of features baked in. It can add labels to all of your resources, it can override their namespaces or container image information, etc.
I hope this helps. Let me know if you have any other questions.