kustomize, secretGenerator & patchesStrategicMerge: envFrom.secretRef not reading hashed secret name - kubernetes

In my kustomization.yaml I have:
...
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
patchesStrategicMerge:
- app.yaml
And then in my app.yaml (the patch) I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
      - name: server
        envFrom:
        - secretRef:
            name: db-env
When I try to build this via kustomize build k8s/development I get:
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
  - envFrom:
    - secretRef:
        name: db-env
    name: server
When it should be:
- envFrom:
  - secretRef:
      name: db-env-4g95hhmhfc
How do I get the secretGenerator name hashing to apply to patchesStrategicMerge too?
Or alternatively, what's the proper way to inject some environment vars into a deployment for a specific overlay?
This is for development.
My file structure is like:
❯ tree k8s
k8s
├── base
│   ├── app.yaml
│   └── kustomization.yaml
├── development
│   ├── app.yaml
│   ├── golinks.sql
│   ├── kustomization.yaml
│   ├── mariadb.yaml
│   ├── my.cnf
│   └── my.env
└── production
    ├── ingress.yaml
    └── kustomization.yaml
Where base/kustomization.yaml is:
namespace: go-mpen
resources:
- app.yaml
images:
- name: server
  newName: reg/proj/server
and development/kustomization.yaml is:
resources:
- ../base
- mariadb.yaml
configMapGenerator:
- name: mariadb-config
  files:
  - my.cnf
- name: initdb-config
  files:
  - golinks.sql # TODO: can we mount this w/out a config file?
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
patchesStrategicMerge:
- app.yaml

This works fine for me with kustomize v3.8.4. Can you please check your version, and whether disableNameSuffixHash is perhaps set to true?
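For reference, this is roughly what to look for; a minimal sketch of a kustomization.yaml in which the hash suffix would be suppressed (generatorOptions.disableNameSuffixHash is real kustomize syntax, the rest just mirrors your setup):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# If this block is present, generated Secret/ConfigMap names keep their plain
# names and references in patches are not rewritten to the hashed name.
generatorOptions:
  disableNameSuffixHash: true
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
Removing generatorOptions (or setting disableNameSuffixHash: false) restores the default behavior, where db-env becomes db-env-<hash> and the secretRef in your patch is updated to match.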
Here are the manifests I used to test this:
➜ app.yaml deployment.yaml kustomization.yaml my.env
app.yaml
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
      - name: server
        envFrom:
        - secretRef:
            name: db-env
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
and my kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
patchesStrategicMerge:
- app.yaml
resources:
- deployment.yaml
And here is the result:
apiVersion: v1
data:
  ASD: MTIz
kind: Secret
metadata:
  name: db-env-f5tt4gtd7d
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
      - envFrom:
        - secretRef:
            name: db-env-f5tt4gtd7d
        name: server

Related

How to change container name and image in kustomization.yaml

I want to change all occurrences of dev-app to demo-app using kustomize.
In my base deployment.yaml I have the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: dev-app
        image: the-my-app
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
      restartPolicy: Always
In my overlays/demo kustomization.yaml I have the following:
bases:
- ../../base
resources:
- deployment.yaml
namespace: demo-app
images:
- name: the-my-app
  newName: my.docker.registry.com/my-project/my-app
  newTag: test
When I run:
kubectl apply -k my-kustomization-dir
the result looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: dev-app
        image: my.docker.registry.com/my-project/my-app:test
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
      restartPolicy: Always
I want to change the name in the container to demo-app:
containers:
- name: dev-app
If possible, please help me with the best way to replace all the dev-app names, labels, and service tags with demo-app.
A way to do this is to replicate your deployment file into your overlays folder and change what you need to change, for example:
Path structure:
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── prod
        ├── deployment.yaml
        └── kustomization.yaml
deployment file of base/ folder:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: dev-app
        image: the-my-app
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 500M
deployment file of overlays/prod folder:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: demo-app
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        service: demo-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: demo-app
        image: the-my-app
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 500M
Kustomization file of overlays/prod:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: deployment.yaml
  target:
    kind: Deployment
  options:
    allowNameChange: true
namespace: prod-demo-app
images:
- name: the-my-app
  newName: my.docker.registry.com/my-project/my-app
  newTag: test
Alternatively, I suggest you use Helm, a much more robust templating engine. You can also combine Helm and Kustomize: Helm for templating, and Kustomize for resource management, patches for specific configurations, and overlays.
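A common way to wire the two together (just a sketch; the chart path, release name, and file names here are hypothetical) is to render the chart with helm template and then treat the rendered output as a plain kustomize resource:
helm template my-release ./my-chart > base/rendered.yaml
# base/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- rendered.yaml
# overlays/prod/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- deployment.yaml
Helm owns the templating, while the overlay stays responsible for environment-specific patches such as the name and image changes shown above.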

Secret hash issue when run once-off job in kustomize

When using kustomize, I am trying to use a Job to perform a once-off task. But somehow kustomize just doesn't recognise the hashed secret. Below is the relevant code.
.
├── base
│   └── postgres.yaml
├── jobs
│   ├── postgres-cli.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── postgres-secrects.properties
base/postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14-alpine
        envFrom:
        - secretRef:
            name: postgres-secrets
        # ...
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/postgres.yaml
secretGenerator:
- name: postgres-secrets
  envs:
  - postgres-secrets.properties
overlays/dev/postgres-secrects.properties
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123
jobs/postgres-cli.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-cli
spec:
  template:
    metadata:
      name: postgres-cli
    spec:
      restartPolicy: Never
      containers:
      - image: my-own-image
        name: postgres-cli
        envFrom:
        - secretRef:
            name: postgres-secrets # error here: the secret name is not recognised
        # ...
jobs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Component
commonLabels:
  environment: local
resources:
- ./postgres-cli.yaml
To start my stack, I run kubectl apply -k ./overlays/dev.
Then, to run the postgres-cli job, I run kubectl apply -k ./jobs, and it complains: secret "postgres-secrets" not found.
Is there a way for the job to find the generated secret when it is applied?
.
├── base
│   └── postgres.yaml
├── jobs
│   ├── postgres-cli.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── postgres-secrects.properties
With this structure, the secret is generated in the overlay at the dev level. You are trying to run the job from a separate kustomization, where the secret is not known. To make this work you have two options: either build everything with kubectl apply -k ./overlays/dev, or merge the overlay and the jobs directory so there is only one kustomization.
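A minimal sketch of the second option (assuming postgres-cli.yaml is moved, or copied, next to the overlay's kustomization.yaml; the exact layout is up to you):
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/postgres.yaml
- postgres-cli.yaml   # the Job now lives in this overlay
secretGenerator:
- name: postgres-secrets
  envs:
  - postgres-secrets.properties
Because the Job is built by the same kustomization that generates the secret, kustomize rewrites its secretRef to the hashed name postgres-secrets-<hash>, and a single kubectl apply -k ./overlays/dev deploys both.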

patch kubernetes cronjob with kustomize

I am trying to patch a cronjob, but somehow it doesn't work as I would expect. I use the same folder structure for a deployment and that works.
This is the folder structure:
.
├── base
│   ├── kustomization.yaml
│   └── war.cron.yaml
└── overlays
    └── staging
        ├── kustomization.yaml
        ├── war.cron.patch.yaml
        └── war.cron.staging.env
base/kustomization.yaml
---
kind: Kustomization
resources:
- war.cron.yaml
base/war.cron.yaml
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: war-event-cron
            image: my-registry/war-service
            imagePullPolicy: IfNotPresent
            command:
            - python
            - run.py
            args:
            - sync-events
            envFrom:
            - secretRef:
                name: war-event-cron-secret
          restartPolicy: OnFailure
Then I am trying to patch this in the staging overlay.
overlays/staging/kustomization.yaml
---
kind: Kustomization
namespace: staging
bases:
- "../../base"
patchesStrategicMerge:
- war.cron.patch.yaml
secretGenerator:
- name: war-event-cron-secret
  behavior: create
  envs:
  - war.cron.staging.env
overlays/staging/war.cron.patch.yaml
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: war-event-cron
            image: my-registry/war-service:nightly
            args:
            - sync-events
            - --debug
But the result of kustomize build overlays/staging/ is not what I want. The command is gone and the secret is not referenced.
apiVersion: v1
data:
  ...
kind: Secret
metadata:
  name: war-event-cron-secret-d8m6bh7284
  namespace: staging
type: Opaque
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
  namespace: staging
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - args:
            - sync-events
            - --debug
            image: my-registry/war-service:nightly
            name: war-event-cron
          restartPolicy: OnFailure
  schedule: '*/5 * * * *'
It's a known bug in kustomize - check and follow this topic (created about a month ago) on GitHub for more information.
For now, the fix for your issue is to use apiVersion: batch/v1beta1 instead of apiVersion: batch/v1 in the base/war.cron.yaml and overlays/staging/war.cron.patch.yaml files.
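In other words, only the apiVersion line changes in both files; a sketch of the patch file after applying the workaround:
overlays/staging/war.cron.patch.yaml
---
apiVersion: batch/v1beta1  # workaround: must match base/war.cron.yaml
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: war-event-cron
            image: my-registry/war-service:nightly
            args:
            - sync-events
            - --debug
With both files on batch/v1beta1, the strategic merge keeps the command from the base and the secretRef is rewritten to the hashed war-event-cron-secret-<hash> name.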

How to deal with a namespace different from the globally set one?

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
resources:
- r1a.yaml
- r1b.yaml
- r1c.yaml
- r1d.yaml
- r1e.yaml
- r2.yaml # needs to be placed in namespace ns2
Let's assume the above situation. The problem is that the objects specified in r2.yaml would be placed in ns1 even if ns2 is explicitly referenced in metadata.namespace.
How do I deal with this? Or how can I solve it (as I assume there are multiple options)?
I've looked into this and I came up with one idea.
├── base
│   ├── [nginx.yaml]  Deployment nginx ns: default
│   ├── [nginx2.yaml] Deployment nginx ns: default
│   ├── [nginx3.yaml] Deployment nginx ns: default
│   ├── [nginx4.yaml] Deployment nginx ns: default
│   ├── [nginx5.yaml] Deployment nginx ns: nginx
│   └── [kustomization.yaml] Kustomization
└── prod
    ├── [kustomization.yaml] Kustomization
    └── [patch.yaml] patching namespace
You need to have 2 directories, which in this setup are base and prod. In the base directory you should have your base YAMLs and a kustomization.yaml file. In my scenario I have 6 YAMLs: nginx1/2/3/4.yaml based on the Kubernetes documentation, and nginx5.yaml, which looks the same but additionally sets metadata.namespace: nginx.
In base directory:
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- nginx1.yaml
- nginx2.yaml
- nginx3.yaml
- nginx4.yaml
- nginx5.yaml
And 5 YAMLs with nginx.
In the prod directory:
You should have 2 files: kustomization.yaml and patch.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
bases:
- ../base
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: nginx-deployment-5
  path: patch.yaml
$ cat patch.yaml
- op: replace
  path: /metadata/namespace
  value: nginx
When you run kustomize build . in the prod directory, nginx-deployment/-2/-3/-4 will all be in namespace ns1, and nginx-deployment-5 will be in namespace nginx.
~/prod (project)$ kustomize build .
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-5
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  namespace: ns1
spec:
Useful links:
Kustomize Builtin Plugins
Customizing
Kustomization Patches

How to mount jar file to tomcat container

I have a folder in my project which contains one properties file and one jar file (a DB driver).
I need to copy both of these files to the /usr/local/tomcat/lib directory in my pod. I am not sure how to achieve this in a Kubernetes YAML file. Below is my YAML where I am trying to do this using a ConfigMap, but pod creation fails with the error "configmap references non-existent config key: app.properties".
The target /usr/local/tomcat/lib already contains other jar files, so I am using a ConfigMap to avoid overriding the entire directory and only add the 2 files specific to my application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatdeployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: appvolume
          mountPath: /usr/local/data
        - name: config
          mountPath: /usr/local/tomcat/lib
          subPath: ./configuration
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      volumes:
      - name: appvolume
      - name: config
        configMap:
          name: config-map
          items:
          - key: app.properties
            path: app.properties
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  key: app.properties
Current Directory structure...
.
├── configuration
│   ├── app.properties
│   └── mysql-connector-java-5.1.21.jar
├── deployment.yaml
└── service.yaml
Please share your valuable feedback on how to achieve this.
Regards.
Please try this:
kubectl create configmap config-map --from-file=app.properties --from-file=mysql-connector-java-5.1.21.jar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatdeployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: config
          mountPath: /usr/local/tomcat/lib/conf
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: config-map
or
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatdeployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat3
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: config
          mountPath: /usr/local/tomcat/lib/app.properties
          subPath: app.properties
        - name: config
          mountPath: /usr/local/tomcat/lib/mysql-connector-java-5.1.21.jar
          subPath: mysql-connector-java-5.1.21.jar
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: config-map
          items:
          - key: app.properties
            path: app.properties
          - key: mysql-connector-java-5.1.21.jar
            path: mysql-connector-java-5.1.21.jar
It's normal to get this error: in your volume declaration you referenced the key app.properties, but in the ConfigMap the key is key and its value is app.properties. So in the volume declaration you must change:
volumes:
- name: appvolume
- name: config
  configMap:
    name: config-map
    items:
    - key: app.properties
      path: app.properties
to:
volumes:
- name: appvolume
- name: config
  configMap:
    name: config-map
    items:
    - key: key
      path: app.properties
For more, you can refer here: add-configmap-data-to-a-volume