patch kubernetes cronjob with kustomize - kubernetes

I am trying to patch a cronjob, but somehow it doesn't work as I would expect. I use the same folder structure for a deployment and that works.
This is the folder structure:
.
├── base
│   ├── kustomization.yaml
│   └── war.cron.yaml
└── overlays
    └── staging
        ├── kustomization.yaml
        ├── war.cron.patch.yaml
        └── war.cron.staging.env
base/kustomization.yaml
---
kind: Kustomization
resources:
  - war.cron.yaml
base/war.cron.yaml
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service
              imagePullPolicy: IfNotPresent
              command:
                - python
                - run.py
              args:
                - sync-events
              envFrom:
                - secretRef:
                    name: war-event-cron-secret
          restartPolicy: OnFailure
Then I am trying to patch this in the staging overlay.
overlays/staging/kustomization.yaml
---
kind: Kustomization
namespace: staging
bases:
  - "../../base"
patchesStrategicMerge:
  - war.cron.patch.yaml
secretGenerator:
  - name: war-event-cron-secret
    behavior: create
    envs:
      - war.cron.staging.env
overlays/staging/war.cron.patch.yaml
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service:nightly
              args:
                - sync-events
                - --debug
But the result of kustomize build overlays/staging/ is not what I want. The command is gone and the secret is not referenced.
apiVersion: v1
data:
  ...
kind: Secret
metadata:
  name: war-event-cron-secret-d8m6bh7284
  namespace: staging
type: Opaque
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
  namespace: staging
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - args:
                - sync-events
                - --debug
              image: my-registry/war-service:nightly
              name: war-event-cron
          restartPolicy: OnFailure
  schedule: '*/5 * * * *'

It's a known bug in kustomize; check and follow this topic (created about a month ago) on GitHub for more information.
For now, the fix for your issue is to use apiVersion: batch/v1beta1 instead of apiVersion: batch/v1 in the base/war.cron.yaml and overlays/staging/war.cron.patch.yaml files.
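For illustration, a minimal sketch of the patch file with that workaround applied (the base manifest gets the same one-line apiVersion change):
# overlays/staging/war.cron.patch.yaml -- same patch as above, but on the
# batch/v1beta1 API group, as the workaround from the linked issue suggests
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service:nightly
              args:
                - sync-events
                - --debug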

Related

How to change container name and image in kustomization.yaml

I want to change all the dev-app references to demo-app using kustomize.
In my base deployment.yaml I have the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
        - name: my-docker-secret
      containers:
        - name: dev-app
          image: the-my-app
          imagePullPolicy: Always
          ports:
            - containerPort: 1234
      restartPolicy: Always
In my overlays/demo kustomization.yaml I have the following:
bases:
  - ../../base
resources:
  - deployment.yaml
namespace: demo-app
images:
  - name: the-my-app
    newName: my.docker.registry.com/my-project/my-app
    newTag: test
When I do this:
kubectl apply -k my-kustomization-dir
The result looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
        - name: my-docker-secret
      containers:
        - name: dev-app
          image: my.docker.registry.com/my-project/my-app:test
          imagePullPolicy: Always
          ports:
            - containerPort: 1234
      restartPolicy: Always
I want to change the name in the container to demo-app.
containers:
  - name: dev-app
If possible, please help me with the best way to replace all of the dev-app name, label, and service tags with demo-app.
A way to do this is to replicate your deployment file into your overlays folder and change what you need to change, for example:
Path structure:
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── prod
        ├── deployment.yaml
        └── kustomization.yaml
deployment file of base/ folder:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
        - name: my-docker-secret
      containers:
        - name: dev-app
          image: the-my-app
          imagePullPolicy: Always
          ports:
            - containerPort: 1234
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 1
              memory: 500M
deployment file of overlays/prod folder:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: demo-app
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        service: demo-app
    spec:
      imagePullSecrets:
        - name: my-docker-secret
      containers:
        - name: demo-app
          image: the-my-app
          imagePullPolicy: Always
          ports:
            - containerPort: 1234
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 1
              memory: 500M
Kustomization file of overlays/prod:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: deployment.yaml
    target:
      kind: Deployment
    options:
      allowNameChange: true
namespace: prod-demo-app
images:
  - name: the-my-app
    newName: my.docker.registry.com/my-project/my-app
    newTag: test
I suggest you use Helm, a much more robust templating engine. You can also combine Helm and Kustomize: Helm for templating, and Kustomize for resource management, patches for specific configurations, and overlays.
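As an illustration of that combination, newer kustomize releases can inflate a Helm chart directly from the kustomization (built with kustomize build --enable-helm); a rough sketch, where the chart name, repo and values are placeholders and not taken from your setup:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demo-app
helmCharts:                            # requires: kustomize build --enable-helm
  - name: my-app                       # placeholder chart name
    repo: https://charts.example.com   # placeholder chart repository
    version: 1.2.3
    releaseName: demo-app
    valuesInline:                      # Helm does the templating...
      image:
        repository: my.docker.registry.com/my-project/my-app
        tag: test
patches:                               # ...and kustomize layers patches on top
  - path: deployment.yaml
    target:
      kind: Deployment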

Kubernetes patch multiple resources not working

I'm trying to apply the same job history limits to a number of CronJobs using a patch like the following, named kubeJobHistoryLimit.yml:
apiVersion: batch/v1beta1
kind: CronJob
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
My kustomization.yml looks like:
bases:
  - ../base
configMapGenerator:
  - name: inductions-config
    env: config.properties
patches:
  - path: kubeJobHistoryLimit.yml
    target:
      kind: CronJob
patchesStrategicMerge:
  - job_specific_patch_1.yml
  - job_specific_patch_2.yml
  ...
resources:
  - secrets-uat.yml
And at some point in my CI pipeline I have:
kubectl --kubeconfig $kubeconfig apply --force -k ./
The kubectl version is 1.21.9.
The issue is that the job history limit values don't seem to be getting picked up. Is there something wrong w/ the configuration or the version of K8s I'm using?
With kustomize 4.5.2, your patch as written doesn't apply; it fails with:
Error: trouble configuring builtin PatchTransformer with config: `
path: kubeJobHistoryLimit.yml
target:
  kind: CronJob
`: unable to parse SM or JSON patch from [apiVersion: batch/v1
kind: CronJob
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
]
This is because it's missing metadata.name, which is required, even if it's ignored when patching multiple objects. If I modify the patch to look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ignored
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
It seems to work.
If I have base/cronjob1.yaml that looks like:
apiVersion: batch/v1
kind: CronJob
metadata:
name: cronjob1
spec:
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 5
jobTemplate:
spec:
template:
spec:
containers:
- command:
- sleep
- 60
image: docker.io/alpine:latest
name: example
schedule: 30 3 * * *
Then, using the above patch and an overlay/kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: kubeJobHistoryLimit.yml
    target:
      kind: CronJob
I see the following output from kustomize build overlay:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob2
spec:
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - command:
                - sleep
                - 60
              image: docker.io/alpine:latest
              name: example
  schedule: 30 3 * * *
  successfulJobsHistoryLimit: 1
You can see the two attributes have been updated correctly.
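As a side note, if the limits should only apply to a subset of CronJobs, the patch target can be narrowed; a sketch with a made-up name prefix (target.name accepts a regular expression):
patches:
  - path: kubeJobHistoryLimit.yml
    target:
      kind: CronJob
      name: "nightly-.*"   # hypothetical prefix; only matching CronJobs get the patch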

Replace contents of an item in a list using Kustomize

I'm having difficulty trying to get kustomize to replace contents of an item in a list.
My kustomize file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resource.yaml
patches:
  - patch.yaml
My patch.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - name: web-service-migration
          env:
            - name: PG_DATABASE
              value: web-pgdb
My resource.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - name: web-service-migration
          env:
            - name: PG_DATABASE
              valueFrom:
                secretKeyRef:
                  name: web-pgdb
                  key: database
kustomize build returns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - env:
            - name: PG_DATABASE
              value: web-pgdb
              valueFrom:
                secretKeyRef:
                  key: database
                  name: web-pgdb
          name: web-service-migration
What I want kustomize build to return:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - env:
            - name: PG_DATABASE
              value: web-pgdb
          name: web-service-migration
If I remember correctly, patches in kustomize use strategic merge by default, so you need to nullify valueFrom; your patch should look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - name: web-service-migration
          env:
            - name: PG_DATABASE
              value: web-pgdb
              valueFrom: null
More details about strategic merge patch and how to delete maps: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps
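An alternative, if you would rather not depend on the null handling of the strategic merge, is a JSON 6902 patch that removes valueFrom explicitly; a sketch assuming the init container and its env entry both sit at index 0 (the indexes are positional and must match your resource):
# kustomization.yaml (excerpt)
patches:
  - target:
      kind: Deployment
      name: web-service
    patch: |-
      - op: remove
        path: /spec/template/spec/initContainers/0/env/0/valueFrom
      - op: add
        path: /spec/template/spec/initContainers/0/env/0/value
        value: web-pgdb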

How to deal with a namespace different from the globally set one?

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
resources:
- r1a.yaml
- r1b.yaml
- r1c.yaml
- r1d.yaml
- r1e.yaml
- r2.yaml # needs to be placed in namespace ns2
Let's assume the situation above. The problem is that the objects specified in r2.yaml would be placed in ns1 even if ns2 is explicitly referenced in metadata.namespace.
How do I deal with this? Or how can I solve it (I assume there are multiple options)?
I've looked into this and I came up with one idea.
├── base
│   ├── [nginx.yaml] Deployment nginx ns: default
│   ├── [nginx2.yaml] Deployment nginx ns: default
│   ├── [nginx3.yaml] Deployment nginx ns: default
│   ├── [nginx4.yaml] Deployment nginx ns: default
│   ├── [nginx5.yaml] Deployment nginx ns: nginx
│   └── [kustomization.yaml] Kustomization
└── prod
    ├── [kustomization.yaml] Kustomization
    └── [patch.yaml] patching namespace
You need to have two directories; in this setup they are base and prod. In the base directory you keep your base YAMLs and the kustomization.yaml file. In my scenario I have 6 YAMLs: nginx1/2/3/4.yaml based on the Kubernetes documentation, and nginx5.yaml, which looks the same but additionally sets metadata.namespace: nginx.
In base directory:
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - nginx1.yaml
  - nginx2.yaml
  - nginx3.yaml
  - nginx4.yaml
  - nginx5.yaml
And 5 YAMLs with nginx.
In the prod directory:
You should have 2 files: kustomization.yaml and patch.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
bases:
  - ../base
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: nginx-deployment-5
    path: patch.yaml
$ cat patch.yaml
- op: replace
  path: /metadata/namespace
  value: nginx
When you run kustomize build . in the prod directory, nginx-deployment/-2/-3/-4 will be in namespace ns1 and nginx-deployment-5 will be in namespace nginx.
~/prod (project)$ kustomize build .
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-5
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.14.2
          name: nginx
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.14.2
          name: nginx
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  namespace: ns1
spec:
Useful links:
Kustomize Builtin Plugins
Customizing
Kustomization Patches
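For what it's worth, on newer kustomize versions the same override can be written with the unified patches field (and resources instead of bases); a sketch of prod/kustomization.yaml under that assumption:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
resources:
  - ../base
patches:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: nginx-deployment-5
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: nginx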

kustomize, secretGenerator & patchesStrategicMerge: envFrom.secretRef not reading hashed secret name

In my kustomization.yaml I have:
...
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env
patchesStrategicMerge:
  - app.yaml
And then in my app.yaml (the patch) I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - secretRef:
                name: db-env
When I try to build this via kustomize build k8s/development I get back:
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
    - envFrom:
        - secretRef:
            name: db-env
      name: server
When it should be:
- envFrom:
    - secretRef:
        name: db-env-4g95hhmhfc
How do I get the secretGenerator name hashing to apply to patchesStrategicMerge too?
Or alternatively, what's the proper way to inject some environment vars into a deployment for a specific overlay?
This is for development.
My file structure is like:
❯ tree k8s
k8s
├── base
│   ├── app.yaml
│   └── kustomization.yaml
├── development
│   ├── app.yaml
│   ├── golinks.sql
│   ├── kustomization.yaml
│   ├── mariadb.yaml
│   ├── my.cnf
│   └── my.env
└── production
    ├── ingress.yaml
    └── kustomization.yaml
Where base/kustomization.yaml is:
namespace: go-mpen
resources:
  - app.yaml
images:
  - name: server
    newName: reg/proj/server
and development/kustomization.yaml is:
resources:
  - ../base
  - mariadb.yaml
configMapGenerator:
  - name: mariadb-config
    files:
      - my.cnf
  - name: initdb-config
    files:
      - golinks.sql # TODO: can we mount this w/out a config file?
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env
patchesStrategicMerge:
  - app.yaml
This works fine for me with kustomize v3.8.4. Can you please check your version, and whether disableNameSuffixHash is perhaps set to true.
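The version can be checked with kustomize version, and the option mentioned above would sit in the kustomization like this if someone had set it (it defaults to false):
generatorOptions:
  disableNameSuffixHash: true   # when true, generated Secrets keep their plain names (no hash suffix)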
Here are the manifests used by me to test this:
➜ app.yaml deployment.yaml kustomization.yaml my.env
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - secretRef:
                name: db-env
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
and my kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env
patchesStrategicMerge:
  - app.yaml
resources:
  - deployment.yaml
And here is the result:
apiVersion: v1
data:
  ASD: MTIz
kind: Secret
metadata:
  name: db-env-f5tt4gtd7d
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.14.2
          name: nginx
          ports:
            - containerPort: 80
        - envFrom:
            - secretRef:
                name: db-env-f5tt4gtd7d
          name: server