I have a resource yaml file in a folder structure given below
base
---- first.yaml
main
---- kustomization.yaml
In kustomization.yaml I am referring to first.yaml as:
resources:
- ../base/first.yaml
But I am getting an error when I run kubectl apply -f kustomization.yaml:
accumulating resources: accumulating resources from '../base/first.yaml': security; file '../base/first.yaml' is not in or below '../base'
How can I reference the first.yaml resource from the base folder in the kustomization.yaml in the main folder?
Kustomize cannot refer to individual resource files in parent directories; it can only refer to resources in the current directory or below. It can, however, refer to other directories that contain their own kustomization.yaml.
The following would be a valid configuration for what you have:
.
├── base
│ ├── main
│ │ ├── kustomization.yaml
│ │ └── resource.yaml
│ └── stuff
│ ├── first.yaml
│ └── kustomization.yaml
└── cluster
└── kustomization.yaml
Contents of base/main/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
Contents of base/stuff/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- first.yaml
Contents of cluster/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
- ../base/stuff
Run kustomize build from the parent directory, e.g. kustomize build ./main. For security reasons, kustomize does not let you .. up to plain files above the directory it started from.
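For the layout in the question, a minimal sketch of the fix (assuming you add a kustomization.yaml to base and reference the directory rather than the file):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- first.yaml

# main/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base

From the directory containing base and main, kustomize build ./main (or, with a reasonably recent kubectl, kubectl apply -k main/) should then resolve without the security error.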
I have an environment made of pods that address their target environment based on an environment variable called CONF_ENV that could be test, stage or prod.
The application running inside the Pod has the same source code across environments, the configuration file is picked according to the CONF_ENV environment variable.
I've encapsulated this CONF_ENV in *.properties files, just because I may have to add more environment variables later, but I make sure that each properties file contains the expected CONF_ENV, e.g.:
test.properties has CONF_ENV=test,
prod.properties has CONF_ENV=prod, and so on...
I struggle to make this work with Kustomize overlays, because I want to define a ConfigMap as a shared resource across all the pods within the same overlay, e.g. test (each pod in its own directory, along with other stuff when needed).
So the idea is:
base/ (shared) with the definition of the Namespace, the ConfigMap (and potentially other shared resources)
base/pod1/ with the definition of pod1 picking from the shared ConfigMap (this defaults to test, but in principle it could be different)
Then the overlays:
overlay/test that patches the base with CONF_ENV=test (e.g. for overlay/test/pod1/ and so on)
overlay/prod/ that patches the base with CONF_ENV=prod (e.g. for overlay/prod/pod1/ and so on)
Each directory has its own kustomization.yaml.
The above doesn't work: when I go into e.g. overlay/test/pod1/ and invoke kubectl kustomize . to check the output YAML, I get all sorts of errors depending on how I define the lists under the bases: or resources: keys.
I am trying to share the ConfigMap across the entire CONF_ENV environment in an attempt to minimize the boilerplate YAML by leveraging the patching-pattern with Kustomize.
The Kubernetes / Kustomize YAML directory structure works like this:
├── base
│ ├── configuration.yaml # I am trying to share this!
│ ├── kustomization.yaml
│ ├── my_namespace.yaml # I am trying to share this!
│ ├── my-scheduleset-etl-misc
│ │ ├── kustomization.yaml
│ │ └── my_scheduleset_etl_misc.yaml
│ ├── my-scheduleset-etl-reporting
│ │ ├── kustomization.yaml
│ │ └── my_scheduleset_etl_reporting.yaml
│ └── test.properties # I am trying to share this!
└── overlay
└── test
├── kustomization.yaml # here I want to say "go and pick up the shared resources in the base dir"
├── my-scheduleset-etl-misc
│ ├── kustomization.yaml
│ └── test.properties # I've tried to share this one level above, but also to add this inside the "leaf" level for a given pod
└── my-scheduleset-etl-reporting
└── kustomization.yaml
The command kubectl with Kustomize:
sometimes complains that the shared namespace does not exist:
error: merging from generator &{0xc001d99530 { map[] map[]} {{ my-schedule-set-props merge {[CONF_ENV=test] [] [] } <nil>}}}:
id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"my-schedule-set-props", Namespace:""}
does not exist; cannot merge or replace
sometimes doesn't allow having shared resources inside an overlay:
error: loading KV pairs: env source files: [../test.properties]:
security; file '/my/path/to/yaml/overlay/test/test.properties'
is not in or below '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
sometimes complains about cycles when I try to have multiple bases, i.e. the shared resources and the original pod definition:
error: accumulating resources: accumulation err='accumulating resources from '../':
'/my/path/to/yaml/overlay/test' must resolve to a file':
cycle detected: candidate root '/my/path/to/yaml/overlay/test'
contains visited root '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
The overlay kustomization.yaml files inside the pod dirs have:
bases:
- ../ # tried with/without this to share the ConfigMap
- ../../../base/my-scheduleset-etl-misc/
The kustomization.yaml at the root of the overlay has:
bases:
- ../../base
The kustomization.yaml at the base dir contains this configuration for the ConfigMap:
# https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9
configMapGenerator:
- name: my-schedule-set-props
  namespace: my-ss-schedules
  envs:
  - test.properties
vars:
- name: CONF_ENV
  objref:
    kind: ConfigMap
    name: my-schedule-set-props
    apiVersion: v1
  fieldref:
    fieldpath: data.CONF_ENV
configurations:
- configuration.yaml
With configuration.yaml containing:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
varReference:
- path: spec/confEnv/value
  kind: Pod
How do I do this?
How do I make sure that I minimise the amount of YAML by sharing the ConfigMap and the Pod definitions as much as I can?
If I understand your goal correctly, I think you may be grossly over-complicating things. I think you want a common properties file defined in your base, but you want to override specific properties in your overlays. Here's one way of doing that.
In base, I have:
$ cd base
$ tree
.
├── example.properties
├── kustomization.yaml
└── pod1
├── kustomization.yaml
└── pod.yaml
Where example.properties contains:
SOME_OTHER_VAR=somevalue
CONF_ENV=test
And kustomization.yaml contains:
resources:
- pod1
configMapGenerator:
- name: example-props
  envs:
  - example.properties
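The tree also lists pod1/kustomization.yaml and pod1/pod.yaml; their contents aren't shown here, but a minimal sketch consistent with the build output further down would be:

# pod1/kustomization.yaml
resources:
- pod.yaml

# pod1/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: alpine
    image: docker.io/alpine
    command: ["sleep", "1800"]
    envFrom:
    - configMapRef:
        name: example-props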
I have two overlays defined, test and prod:
$ cd ../overlays
$ tree
.
├── prod
│ ├── example.properties
│ └── kustomization.yaml
└── test
└── kustomization.yaml
test/kustomization.yaml looks like this:
resources:
- ../../base
It's just importing the base without any changes, since the value of CONF_ENV from the base directory is test.
prod/kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
configMapGenerator:
- name: example-props
  behavior: merge
  envs:
  - example.properties
And prod/example.properties looks like:
CONF_ENV=prod
If I run kustomize build overlays/test, I get as output:
apiVersion: v1
data:
  CONF_ENV: test
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-7245222b9b
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-7245222b9b
    image: docker.io/alpine
    name: alpine
If I run kustomize build overlays/prod, I get:
apiVersion: v1
data:
  CONF_ENV: prod
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-h4b5tc869g
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-h4b5tc869g
    image: docker.io/alpine
    name: alpine
That is, everything looks as you would expect given the configuration in base, but we have provided a new value for CONF_ENV.
You can find all these files here.
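If you prefer applying an overlay directly instead of piping kustomize build output into kubectl, the built-in kustomize support should also work (assuming a reasonably recent kubectl; 1.21+ bundles a newer kustomize):

kubectl apply -k overlays/test
kubectl apply -k overlays/prod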
I am using this example:
├── base
│ ├── kustomization.yaml
│ └── pod.yaml
├── dev
│ └── kustomization.yaml
├── kustomization.yaml
├── production
│ └── kustomization.yaml
└── staging
└── kustomization.yaml
and in the kustomization.yaml file in the root:
resources:
- ./dev
- ./staging
- ./production
I also have the image transformer code in the dev, staging, and production kustomization.yaml files:
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
To build a single deployment manifest, I use:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which simply works!
To build the deployment manifests for all overlays (dev, staging, production), I use:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which uses the kustomization.yaml in the root, which contains all resources (dev, staging, production).
It does work and the final build is printed to the console, but without the image tag.
It seems like kustomize edit set image only updates the kustomization.yaml of the current directory.
Is there anything that can be done to handle this scenario in an easy and efficient way, so that the final output contains the image tag for all deployments as well?
To test, please use this repo.
It took some time to realise what happens here. I'll explain step by step what happens and how it should work.
What happens
Firstly I re-created the same structure:
$ tree
.
├── base
│ ├── kustomization.yaml
│ └── pod.yaml
├── dev
│ └── kustomization.yaml
├── kustomization.yaml
└── staging
└── kustomization.yaml
When you run this command for a single deployment:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
you change the working directory to dev, override the image my-app with gcr.io/my-platform/my-app, add the tag 0.0.2, and then render the deployment.
The thing is, the previously added transformer code gets overridden by the command above. You can remove the transformer code, run the command above, and get the same result. After running the command you will find that your dev/kustomization.yaml looks like this:
resources:
- ./../base
namePrefix: dev-
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
  newTag: 0.0.2
Then, what happens when you run this command from the main directory:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
kustomize first goes into the overlays and applies the transformer code located in each overlay's kustomization.yaml. When this part is finished, the image name is no longer my-app but gcr.io/my-platform/my-app.
At this point the image transformer written by kustomize edit into the root kustomization.yaml looks for an image named my-app, can't find one, and therefore does NOT apply the tag.
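For illustration (a sketch of what ends up in the root kustomization.yaml; the exact field order may differ), the entry written by kustomize edit no longer matches anything, because the overlays have already renamed the image:

images:
- name: my-app                        # never matches: the overlays already renamed my-app
  newName: gcr.io/my-platform/my-app
  newTag: 0.0.2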
What to do
You need to use the transformed image name when you run kustomize edit in the main working directory:
$ kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4 && kustomize build .
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: dev-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: stag-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
I've upgraded from Flux v1 to v2. It all went fairly smoothly, but I can't seem to get the ImageUpdateAutomation to work. Flux knows I have images to update, but it doesn't change the container image in the deployment.yaml manifest and commit the changes to GitHub. I have no errors in my logs, so I'm at a bit of a loss as to what to do next.
I have a file structure that looks something like this:
├── README.md
├── staging
│ ├── api
│ │ ├── deployment.yaml
│ │ ├── automation.yaml
│ │ └── service.yaml
│ ├── app
│ │ ├── deployment.yaml
│ │ ├── automation.yaml
│ │ └── service.yaml
│ ├── flux-system
│ │ ├── gotk-components.yaml
│ │ ├── gotk-sync.yaml
│ │ └── kustomization.yaml
│ ├── image_update_automation.yaml
My staging/api/automation.yaml is pretty straightforward:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: api
  namespace: flux-system
spec:
  image: xxx/api
  interval: 1m0s
  secretRef:
    name: dockerhub
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: api
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: api
  policy:
    semver:
      range: ">=1.0.0"
My staging/image_update_automation.yaml looks something like this:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  git:
    checkout:
      ref:
        branch: master
    commit:
      author:
        email: fluxcdbot@users.noreply.github.com
        name: fluxcdbot
      messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
    push:
      branch: master
  interval: 1m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  update:
    path: ./staging
    strategy: Setters
Everything seems to be ok here:
❯ flux get image repository
NAME READY MESSAGE LAST SCAN SUSPENDED
api True successful scan, found 23 tags 2021-07-28T17:11:02-06:00 False
app True successful scan, found 18 tags 2021-07-28T17:11:02-06:00 False
❯ flux get image policy
NAME READY MESSAGE LATEST IMAGE
api True Latest image tag for 'xxx/api' resolved to: 1.0.1 xxx/api:1.0.1
app True Latest image tag for 'xxx/app' resolved to: 3.2.1 xxx/app:3.2.1
As you can see from the policy output, the LATEST IMAGE for api is 1.0.1; however, when I view the current versions of my app and api, they have not been updated.
kubectl get deployment api -n xxx -o json | jq '.spec.template.spec.containers[0].image'
"xxx/api:0.1.5"
Any advice on this would be much appreciated.
My issue was that I didn't add the marker comment after the image declaration in my deployment YAML. More details. Honestly, I'm surprised this is a comment rather than an annotation.
spec:
  containers:
  - image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}
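Once that marker is in place, the image-automation-controller should rewrite the tag to the version resolved by the policy (1.0.1 in the output above) and commit the change, so the line ends up looking something like:

spec:
  containers:
  - image: docker.io/xxx/api:1.0.1 # {"$imagepolicy": "flux-system:api"}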
So I'm dealing with a structure like this:
.
├── 1
│ ├── env-vars
│ └── kustomization.yaml
├── 2
│ ├── env-vars
│ └── kustomization.yaml
├── env-vars
├── kustomization.yaml
└── shared
├── env-vars
└── kustomization.yaml
where the env-vars file at each level has some env vars, and:
$cat kustomization.yaml
bases:
- 1/
- 2/
namePrefix: toplevel-
configMapGenerator:
- name: env-cm
  behavior: merge
  envs:
  - env-vars
$cat 1/kustomization.yaml
bases:
- ./../shared
namePrefix: first-
configMapGenerator:
- name: env-cm
  behavior: merge
  envs:
  - env-vars
$cat 2/kustomization.yaml
bases:
- ./../shared
namePrefix: second-
configMapGenerator:
- name: env-cm
  behavior: merge
  envs:
  - env-vars
$cat shared/kustomization.yaml
configMapGenerator:
- name: env-cm
  behavior: create
  envs:
  - env-vars
I'm essentially trying to create 2 ConfigMaps with some shared values (which are injected from different sources: the shared directory and the top-level directory).
kustomize build . fails with a conflict error because it finds multiple objects:
Error: merging from generator <blah>: found multiple objects <blah> that could accept merge of ~G_v1_ConfigMap|~X|env-cm
Unfortunately I need to use merge in the top-level configMapGenerator, since some labels are injected into the 1 and 2 ConfigMaps (so creating a top-level ConfigMap would address the env vars but exclude the labels).
Any suggestion on how to address this issue is appreciated.
I believe this should solve your issue.
The kustomization.yaml located at the top level (the base, /):
$ cat kustomization.yaml
resources:
- ./1
- ./2
namePrefix: toplevel-
configMapGenerator:
- name: first-env-cm
  behavior: merge
  envs:
  - env-vars
- name: second-env-cm
  behavior: merge
  envs:
  - env-vars
With the help of search I found this GitHub issue, which I'd say is the same issue, and then a pull request with the relevant code changes. We can see that during a kustomize render, the merge behaviour was changed to look for the currentId instead of the originalId. Knowing that, we can address the exact "pre-rendered" ConfigMaps separately.
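As a quick sanity check (a sketch; the generated hash suffixes will differ), the build should now emit the two merged ConfigMaps instead of erroring out:

$ kustomize build . | grep "name: toplevel-"
  name: toplevel-first-env-cm-<hash>
  name: toplevel-second-env-cm-<hash>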
Kustomize directory structure
├── base
│ ├── deployment.yaml
│ └── kustomization.yaml
└── overlays
└── prod
├── kustomization.yaml
├── namespace-a
│ ├── deployment-a1
│ │ ├── kustomization.yaml
│ │ └── patch.yaml
│ ├── deployment-a2
│ │ ├── kustomization.yaml
│ │ └── patch.yaml
│ ├── kustomization.yaml
│ └── namespace.yaml
├── namespace-b
│ ├── deployment-b1
│ │ ├── kustomization.yaml
│ │ └── patch.yaml
│ ├── deployment-b2
│ │ ├── kustomization.yaml
│ │ └── patch.yaml
│ ├── kustomization.yaml
│ └── namespace.yaml
└── namespace-c
As you can see above, I have a prod environment with namespace-a, namespace-b, and a few more.
To create deployments for all of them, I can simply run the command below:
> kustomize overlays/prod
This works flawlessly; both namespaces are created, along with the other deployment files for all deployments.
To create a deployment for only namespace-a:
> kustomize overlays/prod/namespace-a
That also works. :)
But that's not where the story ends for me, at least.
I would like to keep the current functionality and also be able to deploy deployment-a1, deployment-a2, and so on:
> kustomize overlays/prod/namespace-a/deployment-a1
If I put the namespace.yaml inside the deployment-a1 folder and add it to its kustomization.yaml,
then the above command works, but the previous two fail with an error because we now have two namespace files with the same name.
I have 2 queries.
Can this directory structure be improved?
How can I create a namespace with a single deployment without breaking the other functionality?
Full code can be seen here
In your particular case, in the most ideal scenario, all the required namespaces should already be created before running the kustomize command.
However, I know that you would like to create namespaces dynamically as needed.
Using a Bash script as some kind of wrapper can definitely help with this approach, but I'm not sure if you want to use this.
Below, I'll show you how this can work, and you can choose if it's right for you.
First, I created a kustomize-wrapper script that requires two arguments:
The name of the Namespace you want to use.
Path to the directory containing the kustomization.yaml file.
kustomize-wrapper.sh
$ cat kustomize-wrapper.sh
#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ]; then
echo "Pass required arguments !"
echo "Usage: $0 NAMESPACE KUSTOMIZE_PATH"
exit 1
else
NAMESPACE=$1
KUSTOMIZE_PATH=$2
fi
echo "Creating namespace"
sed -i "s/name:.*/name: ${NAMESPACE}/" namespace.yaml
kubectl apply -f namespace.yaml
echo "Setting namespace: ${NAMESPACE} in the kustomization.yaml file"
sed -i "s/namespace:.*/namespace: ${NAMESPACE}/" base/kustomization.yaml
echo "Deploying resources in the ${NAMESPACE}"
kustomize build ${KUSTOMIZE_PATH} | kubectl apply -f -
As you can see, this script creates a namespace using the namespace.yaml file as the template. It then sets the same namespace in the base/kustomization.yaml file and finally runs the kustomize command with the path you provided as the second argument.
namespace.yaml
$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name:
base/kustomization.yaml
$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace:
resources:
- deployment.yaml
Directory structure
$ tree
.
├── base
│ ├── deployment.yaml
│ └── kustomization.yaml
├── kustomize-wrapper.sh
├── namespace.yaml
└── overlays
└── prod
├── deployment-a1
│ ├── kustomization.yaml
│ └── patch.yaml
├── deployment-a2
│ ├── kustomization.yaml
│ └── patch.yaml
└── kustomization.yaml
We can check if it works as expected.
Creating the namespace-a Namespace along with app-deployment-a1 and app-deployment-a2 Deployments:
$ ./kustomize-wrapper.sh namespace-a overlays/prod
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created
deployment.apps/app-deployment-a2 created
Creating only the namespace-a Namespace and app-deployment-a1 Deployment:
$ ./kustomize-wrapper.sh namespace-a overlays/prod/deployment-a1
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created