Handle "shared" resources with Kustomize - kubernetes

All namespaces in my cluster are supposed to trust the same root CA. I have a monorepo with all my Kustomize files, and I'm trying to avoid having to add the root CA certificate everywhere.
My idea was to go for something like this in my kustomization files:
# [project_name]/[namespace_name]/bases/project/kustomization.yml
configMapGenerator:
- name: trusted-root-ca
  files:
  - ../../../../root-ca/root-ca.pem
So at least if I want to update the root CA, I do it in one place. This results in:
file sources: [../../../../root-ca/root-ca.pem]: security; file 'root-ca/root-ca.pem' is not in or below [project_name]/[namespace_name]/bases/project/
So I guess this is not the way to go. (Reading Common config across multiple environments and applications with Kustomize, I can see why it's behaving like that, and disabling that behavior seems to be a bad idea.) I'm looking for a better way to do this.

This seems like a good place to use a component. For example, if I have my files organized like this:
.
├── components
│   └── trusted-root-ca
│       ├── kustomization.yaml
│       └── root-ca.pem
└── projects
    ├── kustomization.yaml
    ├── project1
    │   ├── kustomization.yaml
    │   └── namespace.yaml
    └── project2
        ├── kustomization.yaml
        └── namespace.yaml
And in components/trusted-root-ca/kustomization.yaml I have:
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
configMapGenerator:
- name: trusted-root-ca
  files:
  - root-ca.pem
generatorOptions:
  disableNameSuffixHash: true
Then in projects/project1/kustomization.yaml I can write:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project1
components:
- ../../components/trusted-root-ca
resources:
- namespace.yaml
And similarly in projects/project2/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project2
components:
- ../../components/trusted-root-ca
resources:
- namespace.yaml
And in projects/kustomization.yaml I have:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- project1
- project2
Then if, from the top directory, I run kustomize build projects, the output will look like:
apiVersion: v1
kind: Namespace
metadata:
  name: project1
spec: {}
---
apiVersion: v1
kind: Namespace
metadata:
  name: project2
spec: {}
---
apiVersion: v1
data:
  root-ca.pem: |
    ...cert data here...
kind: ConfigMap
metadata:
  name: trusted-root-ca
  namespace: project1
---
apiVersion: v1
data:
  root-ca.pem: |
    ...cert data here...
kind: ConfigMap
metadata:
  name: trusted-root-ca
  namespace: project2
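If you then want to create everything directly, rather than piping the build output into kubectl, you can apply the same directory in one step (assuming your kubectl bundles a kustomize version recent enough to support components):
kubectl apply -k projects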

Related

Secret hash issue when run once-off job in kustomize

When using Kustomize, I am trying to use a Job to perform a once-off task. But somehow Kustomize just doesn't recognise the hashed secret. Below is the relevant code.
.
├── base
│   └── postgres.yaml
├── jobs
│   ├── postgres-cli.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── postgres-secrets.properties
base/postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14-alpine
        envFrom:
        - secretRef:
            name: postgres-secrets
        # ...
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/postgres.yaml
secretGenerator:
- name: postgres-secrets
  envs:
  - postgres-secrets.properties
overlays/dev/postgres-secrets.properties
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123
jobs/postgres-cli.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-cli
spec:
  template:
    metadata:
      name: postgres-cli
    spec:
      restartPolicy: Never
      containers:
      - image: my-own-image
        name: postgres-cli
        envFrom:
        - secretRef:
            name: postgres-secrets # error here: the secret is not recognised
        # ...
jobs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Component
commonLabels:
  environment: local
resources:
- ./postgres-cli.yaml
To start my stack, I run kubectl apply -k ./overlays/dev.
Then, when I want to run the postgres-cli job with kubectl apply -k ./jobs, it complains: secret "postgres-secrets" not found.
Is there a way to find the secret when applying the job?
.
├── base
│   └── postgres.yaml
├── jobs
│   ├── postgres-cli.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── postgres-secrets.properties
With this structure, the secret is generated by the overlay at the dev level. You are trying to run the job from a separate kustomization, so the generated secret (with its hash suffix) is not part of that build and the name reference in the Job never gets rewritten. To run this you have two options: either include the job in the dev overlay and apply everything with kubectl apply -k ./overlays/dev, or move the overlay into jobs so there is only one kustomization. A sketch of the first option follows.
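A minimal sketch of a combined dev overlay, assuming jobs/kustomization.yaml is changed to a plain Kustomization that lists postgres-cli.yaml:
# overlays/dev/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/postgres.yaml
- ../../jobs
secretGenerator:
- name: postgres-secrets
  envs:
  - postgres-secrets.properties
Because the Job is now part of the same build as the secretGenerator, kustomize rewrites its secretRef to the hashed secret name, and a single kubectl apply -k ./overlays/dev deploys both.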

How to Add Logic to Kustomize File

I am trying to deploy a K8s application using Kustomize. Up to now I have done simple implementations where we have a few of the K8s files such as ingress.yaml with something like the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingressname
  namespace: namespace
  labels:
    app: appname
spec:
  tls:
  - hosts:
    - $(variable1)
    secretName: $(variable2)-tls
Under my overlays directory for each environment, I then have another kustomization.yaml which gives the values in a configmap:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- path
configMapGenerator:
- behavior: merge
  literals:
  - variable1=foo
  - variable2=bar
  name: configmapname
images:
- name: imagename
  newName: registryurl
This works well, but now I need to do something more complicated. Say for example I have multiple ingress. Instead of creating multiple base ingress yaml files, how can I have one base yaml file that creates every ingress based on the values in my overlay file? Is that possible?
Kustomize isn't a templating system and doesn't do variable substitution. It can perform a variety of YAML patching tricks, so one option you have is to start with a base manifest like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingressname
spec:
  tls:
  - hosts: []
    secretName:
And then patch it in your kustomization.yaml files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
    kind: Ingress
    name: ingressname
  patch: |
    - op: replace
      path: /spec/tls
      value:
      - hosts:
        - host1.example.com
        secretName: host1-tls
What I've shown here works well if you have an application consisting of a single Ingress and you want to produce multiple variants (maybe one per cluster, or per namespace, or something). That is, you have:
A Deployment
A Service
An Ingress
(etc.)
Then you would have one directory for each variant of the app, giving you a layout something like:
.
├── base
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── variant1
    │   └── kustomization.yaml
    └── variant2
        └── kustomization.yaml
If your application has multiple Ingress resources, and you want to apply the same patch to all of them, Kustomize can do that. If you were to modify the patch in your kustomization.yaml so that it looks like this instead:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
    kind: Ingress
    name: ".*"
  patch: |
    - op: replace
      path: /spec/tls
      value:
      - hosts:
        - host1.example.com
        secretName: host1-tls
This would apply the same patch to all matching Ingress resources (which is "all of them", in this case, because we used .* as our match expression).
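If the Ingress resources need different values rather than the same one, a reasonable variation (a sketch, using hypothetical names ingress-a and ingress-b) is to list one patch entry per Ingress in the same kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
    kind: Ingress
    name: ingress-a
  patch: |
    - op: replace
      path: /spec/tls
      value:
      - hosts:
        - a.example.com
        secretName: a-tls
- target:
    kind: Ingress
    name: ingress-b
  patch: |
    - op: replace
      path: /spec/tls
      value:
      - hosts:
        - b.example.com
        secretName: b-tls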

How can I replace variables in annotation via Kustomize?

Any ideas how can I replace variables via Kustomize? I simply want to use a different ACCOUNT_ID and IAM_ROLE_NAME for each overlay.
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/${IAM_ROLE_NAME}
Thanks in advance!
Kustomize doesn't use "variables". The way you would typically handle this is by patching the annotation in an overlay. That is, you might start with a base directory that looks like:
base
├── kustomization.yaml
└── serviceaccount.yaml
Where serviceaccount.yaml contains your ServiceAccount manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  annotations:
    eks.amazonaws.com/role-arn: "THIS VALUE DOESN'T MATTER"
And kustomization.yaml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- serviceaccount.yaml
Then in your overlays, you would replace the eks.amazonaws.com/role-arn annotation by using a patch. For example, if you had an overlay called production, you might end up with this layout:
.
├── base
│   ├── kustomization.yaml
│   └── serviceaccount.yaml
└── overlay
    └── production
        ├── kustomization.yaml
        └── patch_aws_creds.yaml
Where overlay/production/patch_aws_creds.yaml looks like:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1234:role/production-role
And overlay/production/kustomization.yaml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- patch_aws_creds.yaml
With this in place, running...
kustomize build overlay/production
...would generate output using your production role information, and so forth for any other overlays you choose to create.
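For reference, the build output for the production overlay would look roughly like this (a sketch; the exact field ordering may differ):
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1234:role/production-role
  name: my-service-account
  namespace: my-namespace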
If you don't like the format of the strategic merge patch, you can use a json patch document instead. Here's what it would look like inline in your kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- target:
    version: v1
    kind: ServiceAccount
    name: my-service-account
  patch: |-
    - op: replace
      path: /metadata/annotations/eks.amazonaws.com~1role-arn
      value: arn:aws:iam::1234:role/production-role
I don't think this really gets you anything, though.
You can resolve this case using the JSON Pointer escape sequence ~1.
Replace the / in the annotation key with ~1 in the path:
path: /metadata/annotations/eks.amazonaws.com~1role-arn
Ref: https://jsonpatch.com/#json-pointer

kustomize: how to pass `newTag` from command line

I am using https://kustomize.io/ and below is my kustomization.yaml file.
I have multiple docker images, and during deployment they all get the same tag. I can manually change all the tag values and then run kubectl apply -k . from the command prompt.
The question is: I don't want to change this file manually; I want to pass the tag value as a command-line argument to the kubectl apply -k . command. Is there a way to do that? Thanks.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
images:
- name: foo/bar
  newTag: "36"
- name: zoo/too
  newTag: "36"
In my opinion, using the "file" approach is the correct way and the best solution, even for your test scenario.
By "the correct way" I mean this is how you should work with kustomize: keep your environment-specific data in a separate directory.
kustomize supports the best practice of storing one's entire configuration in a version control system.
Before kustomize build . you can change those values using:
kustomize edit set image foo/bar:12.5
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
images:
- name: foo/bar
  newTag: "12.5"
Using envsubst approach:
deployment.yaml and kustomization.yaml in base directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: test
        image: foo/bar:1.2
directory tree with test overlay:
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        └── kustomization2.yaml
Create new kustomization2.yaml with variables in overlay/test directory:
cd overlays/test
cat kustomization2.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
images:
- name: foo/bar
  newTag: "${IMAGE_TAG}"
export IMAGE_TAG="2.2.11" ; envsubst < kustomization2.yaml > kustomization.yaml ; kustomize build .
output:
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
  containers:
  - image: foo/bar:2.2.11
    name: test
Files in the directory after envsubst:
.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── kustomization2.yaml
        └── kustomization.yaml
You can always pipe the result of kustomize build . into kubectl to change the image on the fly:
kustomize build . | kubectl set image -f - test=nginx3:4 --local -o yaml
Output:
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
  containers:
  - image: nginx3:4
    name: test
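If you also want to apply the result in the same step, you can keep piping; a sketch, where test is the container name from the example Deployment:
kustomize build . | kubectl set image -f - test=nginx3:4 --local -o yaml | kubectl apply -f -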
Note: there is a built-in solution. From the kustomize documentation on build-time side effects from CLI args or env variables:
Changing kustomize build configuration output as a result of additional arguments or flags to build, or by consulting shell environment variable values in build code, would frustrate that goal.
kustomize instead offers kustomization file edit commands. Like any shell command, they can accept environment variable arguments.
For example, to set the tag used on an image to match an environment variable, run
kustomize edit set image nginx:$MY_NGINX_VERSION
as part of some encapsulating work flow executed before kustomize build.
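Putting the documented approach together for the original two-image case, a wrapper might look like this (a sketch; IMAGE_TAG is a hypothetical variable supplied by your pipeline):
#!/bin/sh
set -e
# Update both tags in kustomization.yaml, then build and apply in one pass.
kustomize edit set image foo/bar:"${IMAGE_TAG}" zoo/too:"${IMAGE_TAG}"
kustomize build . | kubectl apply -f -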

Selectively apply nameprefix/namesuffix in kustomize

Currently we are using ${HOME}/bin/kustomize edit set nameprefix prefix1.
But it is adding the nameprefix to all of our resources, like deployment.yaml and service.yaml.
We want to apply the nameprefix to deployment.yaml only and not to service.yaml.
Posting for better visibility:
If you are using:
kustomize edit set nameprefix prefix1
this command will set namePrefix inside your current kustomization.
As stated in the question, this is how it works: namePrefix will be applied to all resources specified in kustomization.yaml.
Please consider the following scenario using the idea of an overlay and base with kustomization.
Tested with kustomize/v4.0.1.
The base declares resources and settings shared in common, and the overlay declares additional differences.
.
├── base
│   ├── [deployment.yaml]  Deployment nginx
│   ├── [kustomization.yaml]  Kustomization
│   └── [service.yaml]  Service nginx
└── overlay
    └── prod
        ├── [kustomization.yaml]  Kustomization
        └── kustomizeconfig
            └── [deploy-prefix-transformer.yaml]  PrefixSuffixTransformer customPrefixer
base: common files
#deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
#kustomization.yaml
resources:
- deployment.yaml
- service.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
overlay/prod: kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
nameSuffix: -Suffix1
transformers:
- ./kustomizeconfig/deploy-prefix-transformer.yaml
overlay/prod/kustomizeconfig: deploy-prefix-transformer.yaml
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customPrefixer
prefix: "deploymentprefix-"
fieldSpecs:
- kind: Deployment
  path: metadata/name
As you can see, using this structure and builtin plugin PrefixSuffixTransformer you can get the desired effect:
kustomize build overlay/prod/
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-Suffix1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentprefix-nginx-Suffix1
spec:
  selector:
    matchLabels:
      run: nginx
This configuration (overlay/prod/kustomization.yaml) applies nameSuffix: -Suffix1 to all resources specified in the base directory, and the PrefixSuffixTransformer additionally adds prefix: "deploymentprefix-" to deployment.metadata.name, as defined in kustomizeconfig/deploy-prefix-transformer.yaml.
There is a GitHub issue about this:
is it possible to have kustomization file avoid adding prefixes to few kinds?
And there are two examples provided by @jbrette with which you can achieve what you need:
no prefix to secret
canary using skip
Additionally, you can take a look at these pull requests:
https://github.com/kubernetes/enhancements/pull/1232
https://github.com/kubernetes-sigs/kustomize/pull/1491
For those who stumble across this: I had an issue getting it to work with the ServiceAccount type.
The issue was that I needed to prevent the suffix from being added. Apparently namePrefix "should" prevent that too, but in practice I had to add:
nameSuffix:
- path: metadata/name
  apiVersion: v1
  kind: serviceaccount
  skip: true
Note the kind in lowercase. Using the standard ServiceAccount kind makes it fail.
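For completeness, a fieldspec override like this normally lives in its own file and is wired into the kustomization through the configurations field, roughly like so (a sketch; the file name kustomizeconfig/namesuffix-skip.yaml is arbitrary):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
nameSuffix: -prod
resources:
- serviceaccount.yaml
configurations:
- kustomizeconfig/namesuffix-skip.yaml
where kustomizeconfig/namesuffix-skip.yaml contains the nameSuffix block shown above.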