Exclude Resource in kustomization.yaml - kubernetes

I have a kustomize base that I'd like to reuse without editing it. Unfortunately, it creates a namespace I don't want. I'd like to simply remove that resource from consideration when compiling the manifests and add a resource of my own, since I can't patch a namespace to change its name.
Can this be done? How?

You can omit a specific resource by using the delete directive of a strategic merge patch, like this.
Folder structure
$ tree .
.
├── base
│   ├── kustomization.yaml
│   └── namespace.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml
    └── prod
        ├── delete-ns-b.yaml
        └── kustomization.yaml
File content
$ cat base/kustomization.yaml
resources:
- namespace.yaml
$ cat base/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ cat overlays/dev/kustomization.yaml
bases:
- ../../base
$ cat overlays/prod/delete-ns-b.yaml
$patch: delete
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ cat overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- delete-ns-b.yaml
Behavior of kustomize
$ kustomize build overlays/dev
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ kustomize build overlays/prod
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
In this case, we have two namespaces in the base folder. In dev, kustomize produces two namespaces because there is no patch. In prod, kustomize produces only one namespace because the delete patch removes namespace ns-b.

I found that my understanding of not being able to change a namespace's name was incorrect. Using the patch capability, you actually can change the name of a resource, including a namespace.
This is what I ended up using:
patches:
- target:
    kind: Namespace
    name: application
  patch: |-
    - op: replace
      path: /metadata/name
      value: my-application
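For context, a complete overlay kustomization.yaml using this rename patch might look like the following sketch; the base path is an assumption for illustration:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base   # hypothetical path to the reused base
patches:
- target:
    kind: Namespace
    name: application   # name of the namespace as defined in the base
  patch: |-
    - op: replace
      path: /metadata/name
      value: my-application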

I encountered this problem and eventually took a different approach to solving it. It's worth thinking back through your requirements and asking yourself why you want kustomize to omit a resource. In my case, and I would imagine this is the most common use case, I wanted kustomize to omit a resource because I didn't want to apply it to the target Kubernetes cluster, yet kustomize doesn't provide an easy way to do this. Would it not be better for the filtering to take place when applying the resources to the cluster rather than when generating them?
The solution I eventually applied was to filter the resources by label when applying to the cluster: add an exclusion label in an overlay to prevent the resource from being applied.
e.g.
$ kustomize build . | kubectl apply -l apply-resource!=no -f -
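To make a resource opt out, an overlay can attach the exclusion label with a strategic merge patch. A minimal sketch, assuming the apply-resource label key used above and a hypothetical namespace to exclude:
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: unwanted-namespace   # hypothetical resource to skip at apply time
    labels:
      apply-resource: "no"
The kubectl selector apply-resource!=no then skips the labeled resource while still applying everything else, since != also matches resources that don't carry the label at all.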

Related

Why did flux fail decoding Kubernetes YAML? missing Resource metadata

I am following Building a GitOps pipeline with EKS.
After flux create source and flux create kustomization, my fleet-intra repo looks like this:
tree -L 3
.
└── adorable-mushroom-1655535375
    ├── flux-system
    │   ├── gotk-components.yaml
    │   ├── gotk-sync.yaml
    │   └── kustomization.yaml
    └── guestbook-gitops.yaml
I have a problem with the kustomization:
flux get kustomizations
NAME         REVISION                                       SUSPENDED  READY  MESSAGE
flux-system  main/75d8189db82f1c2c77d22a9deb6baea06f179d0c  False      False  failed to decode Kubernetes YAML from /tmp/kustomization-985813897/adorable-mushroom-1655535375/guestbook-gitops.yaml: missing Resource metadata
My guestbook-gitops.yaml looks like this
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: guestbook-gitops
  namespace: flux-system
spec:
  interval: 1h0m0s
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: guestbook-gitops
What is wrong with metadata?
The link you shared includes a sample kustomization.yaml. You can prepare your kustomize YAML in a similar manner and check it.
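As a sketch of that shape, a minimal kustomization.yaml in the repository path the Kustomization points at (./deploy here) could look like this; the listed manifests are assumptions:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml   # hypothetical manifest in ./deploy
- service.yaml      # hypothetical manifest in ./deploy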

Patching multiple resources with Kustomize

I have a kustomization.yaml file defined which consists of two resources that are downloaded from a git repo. Both of these resources want to create a Namespace with the same name, which produces an error when I try to build the final kustomization: may not add resource with an already registered id: ~G_v1_Namespace|~X|rabbitmq-system. I tried using patches to get rid of the unwanted namespace; however, it looks like that works only if ONE resource is defined. As soon as I add the other resource, it stops working.
bases:
- ../../base
namespace: test
resources:
- https://github.com/rabbitmq/cluster-operator/releases/download/v1.13.0/cluster-operator.yml
- https://github.com/rabbitmq/messaging-topology-operator/releases/download/v1.6.0/messaging-topology-operator.yaml
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: rabbitmq-system
  $patch: delete
What I think is happening is that Kustomize loads the resources first, finds two identical Namespaces defined, and doesn't take the patches into consideration. Can I fix this behavior somehow?

How to upgrade multiple releases at once with Helm for the same chart

I have multiple apps based on the same chart deployed with Helm. Let's imagine you deploy your app multiple times with different configurations:
helm install myapp-01 mycharts/myapp
helm install myapp-02 mycharts/myapp
helm install myapp-03 mycharts/myapp
And after I update the chart files, I want to update all the releases, or maybe a certain range of releases. I managed to create a PoC script like this:
helm list -f myapp -o json | jq -r '.[].name' | while read i; do helm upgrade ${i} mycharts/myapp; done
While this works, I would need to do a lot more to get full functionality and error control.
Is there any CLI tool or something I can use in a CI/CD environment to update a big number of releases (say hundreds of them)? I've been investigating Rancher and Autohelm, but I couldn't find such functionality.
Thanks to the tip provided by @Jonas I've managed to create a simple structure to deploy and update lots of pods with the same image base.
I created a folder structure like this:
├── kustomization.yaml
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   └── service.yaml
└── overlays
    ├── one
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── two
        ├── deployment.yaml
        └── kustomization.yaml
So the main trick here is to have a kustomization.yaml file in the main folder that points to every app:
resources:
- overlays/one
- overlays/two
namePrefix: winnp-
Then in the base/kustomization.yaml I point to the base files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
- namespace.yaml
And then in each app I use namespaces, suffixes, and commonLabels for the deployments and services, plus a patch to rename the base namespace:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns-one
nameSuffix: "-one"
commonLabels:
  app: vbserver-one
bases:
- ../../base
patchesStrategicMerge:
- deployment.yaml
patches:
- target:
    version: v1 # apiVersion
    kind: Namespace
    name: base
  patch: |-
    - op: replace
      path: /metadata/name
      value: ns-one
Now, with a simple command I can deploy or modify all the apps:
kubectl apply -k .
So to update the image I only have to change the deployment.yaml file with the new image and run the command again.
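For illustration, the overlay's deployment.yaml strategic merge patch carrying the image change might look like this sketch; the deployment, container, and image names are hypothetical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vbserver            # must match the name in base/deployment.yaml
spec:
  template:
    spec:
      containers:
      - name: vbserver      # must match the container name in the base
        image: myregistry/vbserver:2.0.0   # the updated image tag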
I uploaded a full example of what I did in this GitHub repo

How to override a namespace override

In the following scenario I have my containers defined in ../base/.
In this /dev/ directory I want to start all the deployments and statefulsets in namespace dev.
The rub is that I also want to run the local-path-storage CSI in the local-path-storage namespace. kustomize will override it and create it in the "dev" namespace.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base
resources:
- local-path-storage.yaml
How can I undo the namespace override for just local-path-storage.yaml?
This functionality doesn't exist in Kustomize yet. There's an open issue addressing this, but no open PRs at the time of this writing.
The quickest solution here is to remove the namespace setting in dev/kustomization.yaml and hand-set the namespace in all resources in dev.
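Hand-setting the namespace just means writing it into each resource's metadata directly; a minimal sketch with a hypothetical deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: dev   # set explicitly instead of relying on the kustomization's namespace field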
Another option, shamelessly copied from the issue I cited earlier, is to create a transformer to get around this:
#!/usr/bin/env python3
import sys
import yaml

# The transformer config file (argv[1]) supplies the target namespace.
with open(sys.argv[1], "r") as stream:
    try:
        data = yaml.safe_load(stream)
    except yaml.YAMLError:
        print("Error parsing NamespaceTransformer input", file=sys.stderr)
        sys.exit(1)

# Cluster-scoped kinds that should never receive a namespace.
# See kubectl api-resources --namespaced=false
denylist = [
    "ComponentStatus",
    "Namespace",
    "Node",
    "PersistentVolume",
    "MutatingWebhookConfiguration",
    "ValidatingWebhookConfiguration",
    "CustomResourceDefinition",
    "APIService",
    "MeshPolicy",
    "TokenReview",
    "SelfSubjectAccessReview",
    "SelfSubjectRulesReview",
    "SubjectAccessReview",
    "CertificateSigningRequest",
    "ClusterIssuer",
    "BGPConfiguration",
    "ClusterInformation",
    "FelixConfiguration",
    "GlobalBGPConfig",
    "GlobalFelixConfig",
    "GlobalNetworkPolicy",
    "GlobalNetworkSet",
    "HostEndpoint",
    "IPPool",
    "PodSecurityPolicy",
    "NodeMetrics",
    "ClusterRoleBinding",
    "ClusterRole",
    "ClusterRbacConfig",
    "PriorityClass",
    "StorageClass",
    "VolumeAttachment",
]

try:
    # Read all resources from stdin, add the namespace to any namespaced
    # resource that does not already have one, and re-emit everything.
    for yaml_input in yaml.safe_load_all(sys.stdin):
        if yaml_input["kind"] not in denylist:
            if "namespace" not in yaml_input["metadata"]:
                yaml_input["metadata"]["namespace"] = data["namespace"]
        print("---")
        print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
    print("Error parsing YAML input\n\n%s\n\n" % exc, file=sys.stderr)
Unfortunately it is not possible; the namespace override in a kustomization assumes all resources belong to the same namespace.
Your alternatives are:
Create a separate kustomization for resources that do not belong to the same namespace.
Deploy resources that do not need kustomization with kubectl apply -f.
Use an alternative replacement approach like the one suggested by Eric Staples.
I generally create one kustomization per set of resources that are deployed together in a namespace, to keep each kustomization simple and independent of any other resources.
This has been possible since kustomize 4.5.6 by adding a namespaceTransformer with the field unsetOnly set to true.
Here is an example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
resources:
- local-path-storage.yaml
transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer
  metadata:
    name: notImportantHere
    namespace: dev
  unsetOnly: true
This should set the namespace to dev for all resources that DO NOT have a namespace set.
Link to namespaceTransformer spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_namespacetransformer_
I am faced with the same problem.
My approach to this problem is to break it up into multiple steps.
I would have stepone, steptwo folders.
tree ./project/
./project/
├── stepone
│   ├── base
│   └── overlay
└── steptwo
    ├── base
    └── overlay
Now I can move the part of the deployment that should not have the namespace override into steptwo or vice versa. Depending on your deployment needs.
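As a sketch of that split (paths and contents are assumptions), the two steps' kustomizations might differ only in whether they set a namespace:
# stepone/overlay/kustomization.yaml: resources that should land in dev
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base

# steptwo/overlay/kustomization.yaml: resources that keep their own namespace
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base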
I am working on a complex transition from a helm template with over 200 files output from the templates.
I am simply breaking the deployment up into different steps and using kustomize at each step to manage just the portion of the deployment where isolation is required.
It does add some effort, but it still gives the isolation I need until kustomize finds a good way to handle this complexity of the namespace override. This takes @Diego-mendes's answer and encapsulates the different parts into their own folders.

Adjusting Kubernetes configurations depending on environment

I want to describe my services in Kubernetes template files. Is it possible to parameterise values like the number of replicas, so that I can set them at deploy time?
The goal here is to be able to run my services locally in minikube (where I'll only need one replica) and have them be as close to those running in staging/live as possible.
I'd like to be able to change the number of replicas, use locally mounted volumes and make other minor changes, without having to write separate template files that would inevitably diverge from each other.
Helm
Helm is becoming the standard for templatizing Kubernetes deployments. A Helm chart is a directory of YAML files with Go template placeholders:
---
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
You define the default value of a 'value' in the 'values.yaml' file
replicaCount: 1
You can optionally overwrite the value using the --set command-line flag:
helm install foo --set replicaCount=42
Helm can also point to an external values file:
helm install foo -f ./dev.yaml
helm install foo -f ./prod.yaml
dev.yaml
---
replicaCount: 1
prod.yaml
---
replicaCount: 42
Another advantage of Helm over simpler solutions like envsubst is that Helm supports plugins. One powerful plugin is helm-secrets, which lets you encrypt sensitive data using PGP keys. https://github.com/futuresimple/helm-secrets
If using helm + helm-secrets, your setup may look like the following, with your code in one repo and your data in another.
git repo with helm charts
stable
|__mysql
|  |__Values.yaml
|  |__Charts
|__apache
|  |__Values.yaml
|  |__Charts
incubator
|__mysql
|  |__Values.yaml
|  |__Charts
|__apache
|  |__Values.yaml
|  |__Charts
Then another git repo contains the environment-specific data:
values
|__mysql
   |__dev
   |  |__values.yaml
   |  |__secrets.yaml
   |__prod
      |__values.yaml
      |__secrets.yaml
You then have a wrapper script that references the values and the secrets files
helm secrets upgrade foo --install -f ./values/foo/$environment/values.yaml -f ./values/foo/$environment/secrets.yaml
envsubst
As mentioned in other answers, envsubst is a very powerful yet simple way to make your own templates. An example from kiminehart
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: ${GOOS}
GOOS=amd64 envsubst < mytemplate.tmpl > mydeployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: amd64
Kubectl
There is a feature request to allow kubectl to do some of the same things as Helm and allow for variable substitution. However, there is a background document that strongly suggests the feature will never be added, and that templating is instead left to external tools like Helm and envsubst.
(edit)
Kustomize
Kustomize is a new project developed by Google that is very similar to Helm. Basically you have two folders, base and overlays. You then run kustomize build someapp/overlays/production and it will generate the YAML for that environment.
someapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── configMap.yaml
│   └── service.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── staging/
        ├── kustomization.yaml
        └── cpu_count.yaml
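For illustration, the production overlay's files might contain something like the following sketch; only the file names appear in the tree above, so the contents here are assumptions:
# someapp/overlays/production/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- replica_count.yaml

# someapp/overlays/production/replica_count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: someapp     # must match the deployment name in the base
spec:
  replicas: 5       # hypothetical production replica count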
It is simpler and has less overhead than helm, but does not have plugins for managing secrets. You could combine kustomize with sops or envsubst to manage secrets.
https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/
I'm hoping someone will give me a better answer, but in the meantime, you can feed your configuration through envsubst (see gettext and this for mac).
Example config, test.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${NUM_REPLICAS}
  ...
Then run:
$ NUM_REPLICAS=2 envsubst < test.yaml | kubectl apply -f -
deployment "test" configured
The final dash is required. This doesn't solve the problem with volumes, of course, but it helps a little. You could write a script or Makefile to automate this for each environment.