Patching multiple resources with Kustomize - kubernetes

I have a kustomization.yaml file that pulls in two resources downloaded from a git repo. Both of these resources want to create a Namespace with the same name, which produces an error when I try to build the final kustomization: may not add resource with an already registered id: ~G_v1_Namespace|~X|rabbitmq-system. I tried using patches to get rid of the duplicate namespace; however, that only seems to work if ONE resource is defined. As soon as I add the other resource, it stops working.
bases:
- ../../base
namespace: test
resources:
- https://github.com/rabbitmq/cluster-operator/releases/download/v1.13.0/cluster-operator.yml
- https://github.com/rabbitmq/messaging-topology-operator/releases/download/v1.6.0/messaging-topology-operator.yaml
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: rabbitmq-system
  $patch: delete
What I think is happening is that Kustomize loads the resources first, finds two identical Namespaces, and doesn't take the patches into consideration at that point. Can I fix this behavior somehow?
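One workaround, sketched here rather than taken from an accepted answer, is to give each remote manifest its own intermediate kustomization and delete the duplicate Namespace there, so the top-level kustomization only ever sees one copy. The directory name rabbitmq/topology-operator below is hypothetical:
# rabbitmq/topology-operator/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/rabbitmq/messaging-topology-operator/releases/download/v1.6.0/messaging-topology-operator.yaml
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: rabbitmq-system
  $patch: delete
The overlay above would then list the cluster-operator URL and rabbitmq/topology-operator under resources, leaving a single rabbitmq-system Namespace in the final build.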

Related

Kustomize HelmChartInflationGenerator Error With ChartName Not Found

I have the following chartInflator.yml file:
apiVersion: builtin
kind: ChartInflator
metadata:
  name: project-helm-inflator
chartName: helm-k8s
chartHome: ../../../helm-k8s/
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
When I ran it using this, I got the error message below:
$ kustomize build .
Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty
Here is my project structure:
project
- helm-k8s
  - values.yml
  - Chart.yml
  - templates
    - base
      - project-namespace.yml
      - grafana
        - grafana-service.yml
        - grafana-deployment.yml
        - grafana-datasource-config.yml
      - prometheus
        - prometheus-service.yml
        - prometheus-deployment.yml
        - prometheus-config.yml
        - prometheus-roles.yml
      - kustomization.yml
    - prod
      - kustomization.yml
    - test
      - kustomization.yml
I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is here. Reading that implies several changes:
Include the inflator directly in your kustomization.yaml in the helmCharts section.
Use name instead of chartName.
Set chartHome in the helmGlobals section rather than per-chart.
That gets us something like this in our kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
  chartHome: ../../../helm-k8s/
helmCharts:
- name: helm-k8s
  releaseName: project-monitoring-chart
  values: ../../values.yaml
  releaseNamespace: project-monitoring-ns
I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that contains your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from outside of the chart templates directory.
You may need to add --load-restrictor LoadRestrictionsNone when calling kustomize build for this to work; by default, the chartHome location must be contained by the same directory that contains your kustomization.yaml.
Update
To make sure things are clear, this is what I'm recommending:
Remove the kustomize bits from your helm chart, so that it looks like this.
Publish your helm charts somewhere. I've set up github pages for that repository and published the charts at http://oddbit.com/open-electrons-deployments/.
Use kustomize to deploy the chart with transformations. Here we add a -prod suffix to all the resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: open-electrons-monitoring
  repo: http://oddbit.com/open-electrons-deployments/
nameSuffix: -prod
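Note that kustomize only inflates Helm charts when the integration is explicitly enabled at build time, so (assuming a reasonably current kustomize release) the build is invoked roughly like:
kustomize build --enable-helm .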

How to share resources/patches with multiple overlays using kustomize?

I have kube-prometheus deployed to multiple environments using kustomize.
kube-prometheus is a base and each environment is an overlay.
Let's say I want to deploy dashboards to overlays, which means I need to deploy the same ConfigMaps and the same patch to each overlay.
Ideally, I want to avoid changing the base, since it is declared outside of my repo, and to keep things DRY rather than copying the same configs all over the place.
Is there a way to achieve this?
Folder structure:
/base/
  /kube-prometheus/
/overlays/
  /qa/      <---
  /dev/     <--- I want to share resources+patches between those
  /staging/ <---
The proper way to do this is using components.
Components can encapsulate both resources and patches together.
In my case, I wanted to add ConfigMaps (resource) and mount these ConfigMaps into my Deployment (patch) without repeating the patches.
So my overlay would look like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/kube-prometheus/ # Base
components:
- ../../components/grafana-aws-dashboards/ # Folder with kustomization.yaml that includes both resources and patches
And this is the component:
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- grafana-dashboard-aws-apigateway.yaml
- grafana-dashboard-aws-auto-scaling.yaml
- grafana-dashboard-aws-ec2-jwillis.yaml
- grafana-dashboard-aws-ec2.yaml
- grafana-dashboard-aws-ecs.yaml
- grafana-dashboard-aws-elasticache-redis.yaml
- grafana-dashboard-aws-elb-application-load-balancer.yaml
- grafana-dashboard-aws-elb-classic-load-balancer.yaml
- grafana-dashboard-aws-lambda.yaml
- grafana-dashboard-aws-rds-os-metrics.yaml
- grafana-dashboard-aws-rds.yaml
- grafana-dashboard-aws-s3.yaml
- grafana-dashboard-aws-storagegateway.yaml
patchesStrategicMerge:
- grafana-mount-aws-dashboards.yaml
This approach is documented here:
https://kubectl.docs.kubernetes.io/guides/config_management/components/
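For illustration only (the patch file isn't shown in the original answer, and the Deployment name, namespace, and mount path below are assumptions about a typical kube-prometheus base), grafana-mount-aws-dashboards.yaml might look roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      containers:
      - name: grafana
        volumeMounts:
        - name: grafana-dashboard-aws-ec2
          mountPath: /grafana-dashboard-definitions/0/aws-ec2
          readOnly: false
      volumes:
      - name: grafana-dashboard-aws-ec2
        configMap:
          name: grafana-dashboard-aws-ec2
with one volume/volumeMount pair per dashboard ConfigMap listed in the component.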
In your main kustomization, or in some top-level overlay, you should be able to reference a common folder or repository.
Have you tried something like this:
resources:
- github.com/project/repo?ref=x.y.z
If this doesn't answer your question, could you please edit your post and give us some context?

Kubectl - How to Read Ingress Hosts from Config Variables?

I have a ConfigMap with a variable for my domain:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  MY_DOMAIN: mydomain.com
and my goal is to use the MY_DOMAIN variable inside my Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
⮕   - config.MY_DOMAIN
    secretName: mytls
  rules:
⮕ - host: config.MY_DOMAIN
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 3000
But obviously the config above is not valid. So how can this be achieved?
The configMapRef and secretRef sources for envFrom, and the configMapKeyRef and secretKeyRef sources for valueFrom, are only available for environment variables, which means they cannot be used in this context. The desired functionality is not available in vanilla Kubernetes as of 1.18.0.
However, it can be done. Helm and Kustomize are probably the two best ways to accomplish this, but it could also be done with sed or awk. Helm is a templating engine for Kubernetes manifests: you write generic manifests, template out the deltas between your desired manifests and the generic ones as variables, and then provide a variables file. At runtime, the variables from your variables file are automatically injected into the template for you.
Another way to accomplish this is with Kustomize, which is what I would personally recommend. Kustomize is like Helm in that it produces customized manifests from generic ones, but it doesn't do so through templating. Kustomize is unique in that it performs merge patches between YAML or JSON files at runtime. These patches are referred to as overlays, so Kustomize is often described as an overlay engine to differentiate it from traditional templating engines. Because it can be used with recursive directory trees of bases and overlays, it scales much better in environments where dozens, hundreds, or thousands of manifests might need to be generated from boilerplate generic examples.
So how do we do this? Well, with Kustomize you would first define a kustomization.yml file. Within it you would define your resources. In this case, myingress:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- myingress.yml
So create an example directory and make a subdirectory called base inside it. Create ./example/base/kustomization.yml and populate it with the kustomization above. Now create a ./example/base/myingress.yml file and populate it with the example myingress file you gave above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - config.MY_DOMAIN
    secretName: mytls
  rules:
  - host: config.MY_DOMAIN
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 3000
Now we need to define our first overlay. We'll create two different domain configurations to provide an example of how overlays work. First create a ./example/overlays/domain_a directory and create a kustomization.yml file within it with the following contents:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base/
patchesStrategicMerge:
- ing_patch.yml
configMapGenerator:
- name: config_a
  literals:
  - MY_DOMAIN='domain_a'
At this point we have defined ing_patch.yml and config_a in this file. ing_patch.yml will serve as our ingress Patch and config_a will serve as our configMap. However, in this case we'll be taking advantage of a Kustomize feature known as a configMapGenerator rather than manually creating configMap files for single literal key:value pairs.
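For reference (this output is a sketch, not part of the original answer), the configMapGenerator above emits a ConfigMap whose name gets a content-hash suffix, roughly:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config_a-<hash>
data:
  MY_DOMAIN: domain_a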
Now that we have done this, we have to actually make our first patch! Since the deltas in your ingress are pretty small, it's not that hard. Create ./example/overlays/domain_a/ing_patch.yml and populate it with:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - domain.a.com
  rules:
  - host: domain.a.com
Perfect, you have created your first overlay. Now you can use kubectl or kustomize to generate your resultant manifest to apply to the Kubernetes API Server.
Kubectl Build: kubectl kustomize ./example/overlays/domain_a
Kustomize Build: kustomize build ./example/overlays/domain_a
Run one of the above Build commands and review the STDOUT produced in your terminal. Notice how it contains two resources, myingress and the generated ConfigMap? And how myingress contains the domain configuration present in your overlay's patch?
So, at this point you're probably asking: why does Kustomize exist if kubectl supports these features by default? Well, Kustomize started as an external project, and the standalone kustomize binary often runs a newer release than the version built into kubectl.
The next step is to create a second overlay. So go ahead and cp your first overlay over: cp -r ./example/overlays/domain_a ./example/overlays/domain_b.
Now that you have done that, open ./example/overlays/domain_b/ing_patch.yml in a text editor and change the contents to look like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - domain.b.com
  rules:
  - host: domain.b.com
Save the file and then build your two separate overlays:
kustomize build ./example/overlays/domain_a
kustomize build ./example/overlays/domain_b
Notice how each generated stream of STDOUT varies based on the patch present in the overlay directory? You can continue to abstract this pattern by making your bases the overlays for other bases, or your overlays the bases for other overlays. Doing so allows you to scale this project in extremely powerful and efficient ways. Apply them to your API server if you wish:
kubectl apply -k ./example/overlays/domain_a
kubectl apply -k ./example/overlays/domain_b
This is only the beginning of Kustomize really. As you might have guessed after seeing the configMapGenerator field in the kustomization.yml file for each overlay, Kustomize has a LOT of features baked in. It can add labels to all of your resources, it can override their namespaces or container image information, etc.
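For example (not from the original answer, just an illustration of those built-in transformers), a kustomization.yml can carry entries like:
commonLabels:
  app.kubernetes.io/part-of: example
images:
- name: myservice
  newTag: "1.2.3"
namespace: example-ns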
I hope this helps. Let me know if you have any other questions.

How to override a namespace override

In the following scenario I have my containers defined in ../base/.
In this /dev/ directory I want to start all the deployments and statefulsets in namespace dev.
The rub is that I also want to run the local-path-storage CSI in the local-path-storage namespace. kustomize will override it and create it in the "dev" namespace.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base
resources:
- local-path-storage.yaml
How can I undo the namespace override for just local-path-storage.yaml?
This functionality doesn't exist in Kustomize yet. There's an open issue addressing this, but no open PRs at the time of this writing.
The quickest solution here is to remove the namespace setting in dev/kustomization.yaml and hand-set the namespace in all resources in dev.
Another option, shamelessly copied from the issue I cited earlier, is to create a transformer to get around this:
#!/usr/bin/env /usr/bin/python3

import sys
import yaml

# The plugin config file (passed as the first argument) provides the target namespace.
with open(sys.argv[1], "r") as stream:
    try:
        data = yaml.safe_load(stream)
    except yaml.YAMLError as exc:
        print("Error parsing NamespaceTransformer input", file=sys.stderr)

# Cluster-scoped kinds that must not get a namespace.
# See kubectl api-resources --namespaced=false
denylist = [
    "ComponentStatus",
    "Namespace",
    "Node",
    "PersistentVolume",
    "MutatingWebhookConfiguration",
    "ValidatingWebhookConfiguration",
    "CustomResourceDefinition",
    "APIService",
    "MeshPolicy",
    "TokenReview",
    "SelfSubjectAccessReview",
    "SelfSubjectRulesReview",
    "SubjectAccessReview",
    "CertificateSigningRequest",
    "ClusterIssuer",
    "BGPConfiguration",
    "ClusterInformation",
    "FelixConfiguration",
    "GlobalBGPConfig",
    "GlobalFelixConfig",
    "GlobalNetworkPolicy",
    "GlobalNetworkSet",
    "HostEndpoint",
    "IPPool",
    "PodSecurityPolicy",
    "NodeMetrics",
    "ClusterRoleBinding",
    "ClusterRole",
    "ClusterRbacConfig",
    "PriorityClass",
    "StorageClass",
    "VolumeAttachment",
]

try:
    # Resources arrive on stdin as a multi-document YAML stream; namespaced kinds
    # without an explicit namespace get the one from the plugin config.
    for yaml_input in yaml.safe_load_all(sys.stdin):
        if yaml_input["kind"] not in denylist:
            if "namespace" not in yaml_input["metadata"]:
                yaml_input["metadata"]["namespace"] = data["namespace"]
        print("---")
        print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
    print("Error parsing YAML input\n\n%s\n\n" % exc, file=sys.stderr)
Unfortunately it is not possible: the namespace override in a kustomization assumes all resources belong to the same namespace.
Your alternatives are:
Create a separate kustomization for resources that do not belong to the same namespace.
Deploy resources that do not need kustomization by using kubectl apply -f .
Use an alternative replacement approach like the one suggested by Eric Staples.
I generally create one kustomization per set of resources that are deployed together in a namespace, which keeps each kustomization simple and independent of any other resources.
This has been possible since kustomize 4.5.6 by adding a namespaceTransformer with the field unsetOnly set to true.
Here is an example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
resources:
- local-path-storage.yaml
transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer
  metadata:
    name: notImportantHere
    namespace: dev
  unsetOnly: true
This should set the namespace to dev for all resources that DO NOT already have a namespace set (note that the transformer is used here instead of the top-level namespace: dev field), so resources like local-path-storage that declare their own namespace keep it.
Link to namespaceTransformer spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_namespacetransformer_
I am faced with the same problem.
My approach to this problem is to break it up into multiple steps.
I would have stepone, steptwo folders.
tree ./project/
./project/
├── stepone
│   ├── base
│   └── overlay
└── steptwo
    ├── base
    └── overlay
Now I can move the part of the deployment that should not have the namespace override into steptwo or vice versa. Depending on your deployment needs.
I am working on complex transitions from a helm template with over 200 files outputted from the templates.
I am simply breaking the deployment up into different steps and using kustomize at each step to manage just the portion of the deployment where isolation is required.
It does add some effort, but it still gives the isolation I need until kustomize finds a good way to handle this complexity of the namespace override. This takes @Diego-mendes's answer and encapsulates the different parts into their own folders.
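As a sketch of what that split might look like for the namespace question above (paths and contents are assumptions, not from the original answer), stepone carries everything that should be forced into dev, while steptwo carries the resources that must keep their own namespace:
# stepone/overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
- ../base
# steptwo/overlay/kustomization.yaml -- no namespace override here
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
Each step is then built and applied separately, e.g. kubectl apply -k stepone/overlay followed by kubectl apply -k steptwo/overlay.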

Is kustomize for k8s backward chaining?

The README for kustomize says that
It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.
Does this analogy extend beyond the fact that files are used to declare what is needed?
Or is kustomize backward chaining like make, in that it reads all of its input before working out what it has to do, rather than working sequentially through the input the way bash steps through a shell script?
EDIT: Jeff Regan, of the Kustomize team at Google, explains the model for the way kustomize works towards the beginning of his talk Kustomize: Kubernetes Configuration Customization. He also shows how kustomize may be daisy chained so the output of one kustomize may serve as the input to another kustomize. It seems that, as pointed out by ITChap below, kustomize starts by gathering all the resources referenced in the kustomization.yml file in the base dir. It then executes a series of steps sequentially to perform the required substitutions and transformations, repeating the substitution/transformation step as often as needed to complete. It then emits the generated YAML on stdout. So I would say that it is not backward chaining like make but rather somewhere in between. HTH.
What I noticed so far is that kustomize will first accumulate the content of all the base resources, then apply the transformations from your kustomization.yml files. If you have multiple levels of overlays, it doesn't seem to pass the result from one level to the next.
Let's consider the following:
./base/pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
./base/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pod.yml
./overlays/l1/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
nameSuffix: "-l1"
./overlays/l2/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../l1
nameSuffix: "-l2"
When running kustomize build overlays/l2 you are going to get a pod named test-l1-l2 as expected.
But if you try to patch the base pod you would have to reference the pod using:
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: test
  path: patch.yml
in your ./overlays/l1/kustomization.yml but also in ./overlays/l2/kustomization.yml. At the time the patch of l2 is applied, the referenced resource is still test and not test-l1.
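For completeness (the original answer doesn't show it), patch.yml can be any JSON6902 patch; a minimal hypothetical one that just adds a label would be:
# ./overlays/l1/patch.yml
- op: add
  path: /metadata/labels
  value:
    patched-by: l1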
I don't know kustomize well enough to understand the intention behind this but these are my observations. Hope it answers your question.
PS: this might change with https://github.com/kubernetes-sigs/kustomize/issues/1036