Kustomize - Merge multiple configMapGenerators - kubernetes

So I'm dealing with a structure like this:
.
├── 1
│   ├── env-vars
│   └── kustomization.yaml
├── 2
│   ├── env-vars
│   └── kustomization.yaml
├── env-vars
├── kustomization.yaml
└── shared
    ├── env-vars
    └── kustomization.yaml
where the env-vars file at each level contains some environment variables, and:
$ cat kustomization.yaml
bases:
- 1/
- 2/
namePrefix: toplevel-
configMapGenerator:
- name: env-cm
  behavior: merge
  envs:
  - env-vars
$ cat 1/kustomization.yaml
bases:
- ./../shared
namePrefix: first-
configMapGenerator:
- name: env-cm
  behavior: merge
  envs:
  - env-vars
$ cat 2/kustomization.yaml
bases:
- ./../shared
namePrefix: second-
configMapGenerator:
- name: env-cm
  behavior: merge
  envs:
  - env-vars
$ cat shared/kustomization.yaml
configMapGenerator:
- name: env-cm
  behavior: create
  envs:
  - env-vars
I'm essentially trying to create 2 ConfigMaps with some shared values (injected from different sources: the shared directory and the top-level directory).
kustomize build . fails with conflict errors about finding multiple objects:
Error: merging from generator <blah>: found multiple objects <blah> that could accept merge of ~G_v1_ConfigMap|~X|env-cm
Unfortunately I need to use merge on the top-level configMapGenerator, since some labels are injected into the 1 and 2 ConfigMaps (so creating a top-level ConfigMap, although it addresses the env-vars, excludes the labels).
Any suggestions on how to address this issue are appreciated.

I believe this should solve your issue.
The kustomization.yaml located in the base, or /:
$ cat kustomization.yaml
resources:
- ./1
- ./2
namePrefix: toplevel-
configMapGenerator:
- name: first-env-cm
  behavior: merge
  envs:
  - env-vars
- name: second-env-cm
  behavior: merge
  envs:
  - env-vars
With the help of search I found this GitHub issue, which I'd say is the same issue, and then a pull request with the relevant code changes. We can see that during a kustomize render, merge behavior was changed to look up the currentId instead of the originalId. Knowing that, we can address the exact "pre-rendered" ConfigMaps separately, via their already-prefixed names first-env-cm and second-env-cm.
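As a rough sketch of what kustomize build . should now emit (hedged: the name suffixes are content hashes, and the data keys depend on your env-vars files), you get two ConfigMaps whose names carry both prefixes:
$ kustomize build .
apiVersion: v1
data:
  # keys merged from shared/env-vars, 1/env-vars and the top-level env-vars
kind: ConfigMap
metadata:
  name: toplevel-first-env-cm-<hash>
---
apiVersion: v1
data:
  # keys merged from shared/env-vars, 2/env-vars and the top-level env-vars
kind: ConfigMap
metadata:
  name: toplevel-second-env-cm-<hash>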

Related

Unable to remove a "Sizelimit" property using kustomize

I have a sizeLimit property under emptyDir set to 2Gi in my base template file. I want to remove the sizeLimit and just have emptyDir: {}. I've been unable to achieve this using Kustomize overlays. I will detail my folder structure and kustomization YAMLs below.
Folder Structure:
application
├── majorbase
│   ├── kustomization.yaml
│   └── resources
│       └── web-template.yaml
├── minorbase
│   ├── kustomization.yaml
│   └── resources
└── myoverlays
    ├── kustomization.yaml
    └── resources
        └── my-new-web.yaml
The folder myoverlays contains the following contents in its kustomization.yaml file:
bases:
- ../minorbase
patchesStrategicMerge:
- resources/my-new-web.yaml
The folder minorbase contains the following contents in its kustomization.yaml file:
bases:
- ../majorbase
The folder majorbase contains the following contents in its kustomization.yaml file:
resources:
- resources/web-template.yaml
The section I want to edit looks like this in the majorbase template:
volumes:
- name: test-vol
  emptyDir:
    sizeLimit: "2Gi"
The above configuration needs to be updated using overlays as below.
volumes:
- name: test-vol
  emptyDir: {}
This is where my problem lies. Kustomize just picks up the 2Gi value from the base whenever I remove the sizeLimit in my overlays. When I set a different value for sizeLimit, such as "1Gi", in my overlays file, Kustomize picks up the change. What is the cause of this behaviour? Is it possible to achieve what I'm trying to do here?
NB: This answer assumes a recent version of Kustomize (I'm running 4.5.2 locally). Your examples are using deprecated syntax (the bases section was deprecated in version 2.1.0, for example).
Your problem is that you're using a strategicMerge patch, and you're merging an empty map ({}) with {"sizeLimit": "2Gi"}. If you merge an empty map with anything, it's a no-op: you end up with the "anything".
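To illustrate the no-op (a sketch of the merge semantics, not actual Kustomize output):
# base:   emptyDir: {sizeLimit: "2Gi"}
# patch:  emptyDir: {}
# result: emptyDir: {sizeLimit: "2Gi"}   <- the empty map neither adds nor removes keys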
To explicitly delete an element, you have a few choices.
You can use the $patch: replace directive (you can find an example of that here) to have Kustomize replace the emptyDir element, rather than merging the contents. That would look like:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
  - name: test-vol
    emptyDir:
      $patch: replace
The corresponding kustomization.yaml might look something like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: resources/my-new-web.yaml
Alternatively, you can use a JSONPatch patch, which is good for explicitly deleting fields:
- path: /spec/volumes/0/emptyDir/sizeLimit
  op: remove
Where kustomization.yaml would look like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- target:
    kind: Pod
    name: example
  path: resources/my-new-web.yaml
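With either approach, the relevant fragment of the kustomize build output should end up as follows (a sketch; the rest of the Pod comes through from the base unchanged):
spec:
  volumes:
  - name: test-vol
    emptyDir: {}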
You can find a complete runnable demonstration of this here.

Kustomize overlays when using a shared ConfigMap

I have an environment made of pods that address their target environment based on an environment variable called CONF_ENV that could be test, stage or prod.
The application running inside the Pod has the same source code across environments, the configuration file is picked according to the CONF_ENV environment variable.
I've encapsulated this CONF_ENV in *.properties files, just because I may have to add more environment variables later, but I make sure that each properties file contains the expected CONF_ENV, e.g.:
test.properties has CONF_ENV=test,
prod.properties has CONF_ENV=prod, and so on...
I struggle to make this work with Kustomize overlays, because I want to define a ConfigMap as a shared resource across all the pods within the same overlay, e.g. test (each pod in its own directory, along with other stuff when needed).
So the idea is:
base/ (shared) with the definition of the Namespace, the ConfigMap (and potentially other shared resources)
base/pod1/ with the definition of pod1 picking from the shared ConfigMap (this defaults to test, but in principle it could be different)
Then the overlays:
overlay/test that patches the base with CONF_ENV=test (e.g. for overlay/test/pod1/ and so on)
overlay/prod/ that patches the base with CONF_ENV=prod (e.g. for overlay/prod/pod1/ and so on)
Each directory has its own kustomization.yaml.
The above doesn't work: when I go into e.g. overlay/test/pod1/ and invoke kubectl kustomize . to check the output YAML, I get all sorts of errors depending on how I define the lists for the YAML keys bases: or resources:.
I am trying to share the ConfigMap across the entire CONF_ENV environment in an attempt to minimize the boilerplate YAML by leveraging the patching-pattern with Kustomize.
The Kubernetes / Kustomize YAML directory structure works like this:
├── base
│   ├── configuration.yaml                # I am trying to share this!
│   ├── kustomization.yaml
│   ├── my_namespace.yaml                 # I am trying to share this!
│   ├── my-scheduleset-etl-misc
│   │   ├── kustomization.yaml
│   │   └── my_scheduleset_etl_misc.yaml
│   ├── my-scheduleset-etl-reporting
│   │   ├── kustomization.yaml
│   │   └── my_scheduleset_etl_reporting.yaml
│   └── test.properties                   # I am trying to share this!
└── overlay
    └── test
        ├── kustomization.yaml            # here I want to tell it: "go and pick up the shared resources in the base dir"
        ├── my-scheduleset-etl-misc
        │   ├── kustomization.yaml
        │   └── test.properties           # I've tried to share this one level above, but also to add it inside the "leaf" level for a given pod
        └── my-scheduleset-etl-reporting
            └── kustomization.yaml
The command kubectl with Kustomize:
sometimes complains that the shared namespace does not exist:
error: merging from generator &{0xc001d99530 { map[] map[]} {{ my-schedule-set-props merge {[CONF_ENV=test] [] [] } <nil>}}}:
id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"my-schedule-set-props", Namespace:""}
does not exist; cannot merge or replace
sometimes doesn't allow to have shared resources inside an overlay:
error: loading KV pairs: env source files: [../test.properties]:
security; file '/my/path/to/yaml/overlay/test/test.properties'
is not in or below '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
sometimes doesn't allow cycles when I am trying to have multiple bases - the shared resources and the original pod definition:
error: accumulating resources: accumulation err='accumulating resources from '../':
'/my/path/to/yaml/overlay/test' must resolve to a file':
cycle detected: candidate root '/my/path/to/yaml/overlay/test'
contains visited root '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
The overlay kustomization.yaml files inside the pod dirs have:
bases:
- ../ # tried with/without this to share the ConfigMap
- ../../../base/my-scheduleset-etl-misc/
The kustomization.yaml at the root of the overlay has:
bases:
- ../../base
The kustomization.yaml at the base dir contains this configuration for the ConfigMap:
# https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9
configMapGenerator:
- name: my-schedule-set-props
  namespace: my-ss-schedules
  envs:
  - test.properties
vars:
- name: CONF_ENV
  objref:
    kind: ConfigMap
    name: my-schedule-set-props
    apiVersion: v1
  fieldref:
    fieldpath: data.CONF_ENV
configurations:
- configuration.yaml
With configuration.yaml containing:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
varReference:
- path: spec/confEnv/value
  kind: Pod
How do I do this?
How do I make sure that I minimise the amount of YAML by sharing the ConfigMap stuff and the Pod definitions as much as I can?
If I understand your goal correctly, I think you may be grossly over-complicating things. I think you want a common properties file defined in your base, but you want to override specific properties in your overlays. Here's one way of doing that.
In base, I have:
$ cd base
$ tree
.
├── example.properties
├── kustomization.yaml
└── pod1
    ├── kustomization.yaml
    └── pod.yaml
Where example.properties contains:
SOME_OTHER_VAR=somevalue
CONF_ENV=test
And kustomization.yaml contains:
resources:
- pod1
configMapGenerator:
- name: example-props
  envs:
  - example.properties
I have two overlays defined, test and prod:
$ cd ../overlays
$ tree
.
├── prod
│   ├── example.properties
│   └── kustomization.yaml
└── test
    └── kustomization.yaml
test/kustomization.yaml looks like this:
resources:
- ../../base
It's just importing the base without any changes, since the value of CONF_ENV from the base directory is test.
prod/kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
configMapGenerator:
- name: example-props
  behavior: merge
  envs:
  - example.properties
And prod/example.properties looks like:
CONF_ENV=prod
If I run kustomize build overlays/test, I get as output:
apiVersion: v1
data:
  CONF_ENV: test
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-7245222b9b
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-7245222b9b
    image: docker.io/alpine
    name: alpine
If I run kustomize build overlays/prod, I get:
apiVersion: v1
data:
  CONF_ENV: prod
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-h4b5tc869g
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-h4b5tc869g
    image: docker.io/alpine
    name: alpine
That is, everything looks as you would expect given the configuration in base, but we have provided a new value for CONF_ENV.
You can find all these files here.
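As a usage note (assuming you deploy with kubectl, which has Kustomize built in), the overlays can be rendered or applied directly:
kubectl kustomize overlays/test   # render only
kubectl apply -k overlays/prod    # render and apply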

kustomize edit set image doesn't work with kustomize multibases and common base

I am using this example:
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
├── dev
│   └── kustomization.yaml
├── kustomization.yaml
├── production
│   └── kustomization.yaml
└── staging
    └── kustomization.yaml
and in the kustomization.yaml file in the root:
resources:
- ./dev
- ./staging
- ./production
I also have the image transformer code in the dev, staging, and production kustomization.yaml files:
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
To build a single deployment manifest, I use:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which simply works!
To build the deployment manifests for all overlays (dev, staging, production), I use:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which uses the kustomization.yaml in the root, which contains all the resources (dev, staging, production).
It does work, and the final build is printed to the console, but without the image tag.
It seems like kustomize edit set image only updates the kustomization.yaml of the current directory.
Is there anything that can be done to handle this scenario in an easy and efficient way, so that the final output contains the image tag for all deployments as well?
To test please use this repo
It took some time to realise what happens here. I'll explain step by step what happens and how it should work.
What happens
First, I re-created the same structure:
$ tree
.
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
├── dev
│   └── kustomization.yaml
├── kustomization.yaml
└── staging
    └── kustomization.yaml
When you run this command for single deployment:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
you change the working directory to dev, manually override the image name to gcr.io/my-platform/my-app, add the tag 0.0.2, and then render the deployment.
The thing is, the previously added transformer code gets overridden by the command above. You can remove the transformer code, run the command above, and get the same result. After running the command you will find that your dev/kustomization.yaml looks like:
resources:
- ./../base
namePrefix: dev-
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
  newTag: 0.0.2
Then, what happens when you run this command from the main directory:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
kustomize first goes into the overlays and applies the transformer code located in each overlay's kustomization.yaml. When that part is finished, the image name is no longer my-app, but gcr.io/my-platform/my-app.
At this point the kustomize edit command's image override tries to find an image named my-app, can't do so, and therefore does NOT apply the tag.
What to do
You need to use the transformed image name if you run kustomize edit in the main working directory:
$ kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4 && kustomize build .
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: dev-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: stag-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
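For reference, here is a sketch of what that kustomize edit call writes into the root kustomization.yaml (assuming it previously had no images entry; with * as the new name, only the tag is overridden and the image name is left alone):
# appended by `kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4`;
# existing fields such as resources: are preserved
images:
- name: gcr.io/my-platform/my-app
  newTag: 0.0.4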

Flux V2 not pushing new image version to git repo

I've upgraded from Flux V1 to V2. It all went fairly smoothly, but I can't seem to get the ImageUpdateAutomation to work. Flux knows I have images to update, but it doesn't change the container image in the deployment.yaml manifest and commit the changes to GitHub. I have no errors in my logs, so I'm at a bit of a loss as to what to do next.
I have a file structure that looks something like this:
├── README.md
├── staging
│   ├── api
│   │   ├── deployment.yaml
│   │   ├── automation.yaml
│   │   └── service.yaml
│   ├── app
│   │   ├── deployment.yaml
│   │   ├── automation.yaml
│   │   └── service.yaml
│   ├── flux-system
│   │   ├── gotk-components.yaml
│   │   ├── gotk-sync.yaml
│   │   └── kustomization.yaml
│   ├── image_update_automation.yaml
My staging/api/automation.yaml is pretty straightforward:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: api
  namespace: flux-system
spec:
  image: xxx/api
  interval: 1m0s
  secretRef:
    name: dockerhub
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: api
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: api
  policy:
    semver:
      range: ">=1.0.0"
My staging/image_update_automation.yaml looks something like this:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  git:
    checkout:
      ref:
        branch: master
    commit:
      author:
        email: fluxcdbot@users.noreply.github.com
        name: fluxcdbot
      messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
    push:
      branch: master
  interval: 1m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  update:
    path: ./staging
    strategy: Setters
Everything seems to be ok here:
❯ flux get image repository
NAME    READY   MESSAGE                         LAST SCAN                   SUSPENDED
api     True    successful scan, found 23 tags  2021-07-28T17:11:02-06:00   False
app     True    successful scan, found 18 tags  2021-07-28T17:11:02-06:00   False
❯ flux get image policy
NAME    READY   MESSAGE                                             LATEST IMAGE
api     True    Latest image tag for 'xxx/api' resolved to: 1.0.1   xxx/api:1.0.1
app     True    Latest image tag for 'xxx/app' resolved to: 3.2.1   xxx/app:3.2.1
As you can see from the policy output, the LATEST IMAGE for api is 1.0.1; however, when I view the current version of my app and api, they have not been updated.
kubectl get deployment api -n xxx -o json | jq '.spec.template.spec.containers[0].image'
"xxx/api:0.1.5"
Any advice on this would be much appreciated.
My issue was that I didn't add the marker comment after my image declaration in my deployment YAML. More details. Honestly, I'm surprised this is not an annotation instead of a comment.
spec:
  containers:
  - image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}
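As a side note, based on the Flux image automation docs, the setter marker can also target just part of the image reference when your manifest splits it into separate fields; a sketch (flux-system:api being the <namespace>:<ImagePolicy-name> pair):
image: docker.io/xxx/api # {"$imagepolicy": "flux-system:api:name"}
tag: 0.1.5 # {"$imagepolicy": "flux-system:api:tag"}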

referring a resource yaml from another directory in kustomization

I have a resource yaml file in a folder structure given below
base
---- first.yaml
main
---- kustomization.yaml
In kustomization.yaml I am referring to first.yaml as:
resources:
- ../base/first.yaml
But I am getting an error when I run kubectl apply -f kustomization.yaml:
accumulating resources: accumulating resources from '../base/first.yaml': security; file '../base/first.yaml' is not in or below '../base'
How can I reference the first.yaml resource in the base folder from the kustomization in the main folder?
Kustomize cannot refer to individual resources in parent directories; it can only refer to resources in the current directory or its subdirectories, but it can refer to other Kustomize directories.
The following would be a valid configuration for what you have:
.
├── base
│   ├── main
│   │   ├── kustomization.yaml
│   │   └── resource.yaml
│   └── stuff
│       ├── first.yaml
│       └── kustomization.yaml
└── cluster
    └── kustomization.yaml
Contents of base/main/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
Contents of base/stuff/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- first.yaml
Contents of cluster/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
- ../base/stuff
Alternatively, run kustomize build from one folder up, so the kustomization is one folder down: kustomize build ./main. You aren't allowed to .. up past where kustomize started from, just to be safe.
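And with the restructured layout from the answer above, the entry point is the cluster directory, e.g.:
kustomize build ./cluster    # render the composed configuration
kubectl apply -k ./cluster   # or render and apply via kubectl's built-in Kustomize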