Secret hash issue when run once-off job in kustomize - kubernetes

When using kustomize, I am trying to use a Job to perform a once-off task, but kustomize doesn't recognise the hashed secret name. Below are the relevant files.
.
├── base
│   └── postgres.yaml
├── jobs
│   ├── postgres-cli.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── postgres-secrects.properties
base/postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14-alpine
          envFrom:
            - secretRef:
                name: postgres-secrets
          # ...
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/postgres.yaml
secretGenerator:
  - name: postgres-secrets
    envs:
      - postgres-secrets.properties
overlays/dev/postgres-secrects.properties
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123
jobs/postgres-cli.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-cli
spec:
  template:
    metadata:
      name: postgres-cli
    spec:
      restartPolicy: Never
      containers:
        - image: my-own-image
          name: postgres-cli
          envFrom:
            - secretRef:
                name: postgres-secrets # error here: cannot resolve the secret
          # ...
jobs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Component
commonLabels:
  environment: local
resources:
  - ./postgres-cli.yaml
To start my stack, I run kubectl apply -k ./overlays/dev.
Then, to run the postgres-cli job, I run kubectl apply -k ./jobs, and it complains: secret "postgres-secrets" not found
Is there a way to resolve the generated secret when applying the job?

.
├── base
│   └── postgres.yaml
├── jobs
│   ├── postgres-cli.yaml
│   └── kustomization.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── postgres-secrects.properties
With this structure, the secret is generated in the dev overlay. You are running the job from a separate kustomization, so the generated secret (and its hashed name) is not part of that build. There are two ways to make this work: apply everything together with kubectl apply -k ./overlays/dev, or move the job into the overlay so that a single kustomization generates the secret and the job, as sketched below.
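A minimal sketch of the second option, merging the job into the dev overlay so the secret and the Job are generated in one build (the added resource path is illustrative):
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/postgres.yaml
  - ../../jobs/postgres-cli.yaml  # the Job is now part of the same build
secretGenerator:
  - name: postgres-secrets
    envs:
      - postgres-secrets.properties
Because the Job is rendered by the same kustomization as the secretGenerator, kustomize rewrites its secretRef to the hashed name (postgres-secrets-<hash>).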

Related

Handle "shared" resources with Kustomize

All namespaces in my cluster are supposed to trust the same root CA. I have a mono repo with all my Kustomize files, and I'm trying to avoid having to add the root CA certificate everywhere.
My idea was to go for something like that in my kustomization files:
# [project_name]/[namespace_name]/bases/project/kustomization.yml
configMapGenerator:
  - name: trusted-root-ca
    files:
      - ../../../../root-ca/root-ca.pem
So at least if I want to update the root CA, I do it in one place. This results in:
file sources: [../../../../root-ca/root-ca.pem]: security; file 'root-ca/root-ca.pem' is not in or below [project_name]/[namespace_name]/bases/project/
So I guess this is not the way to go. (Reading Common config across multiple environments and applications with Kustomize I can see why it behaves like that, and disabling that behavior seems to be a bad idea.) I'm looking for a better way to do this.
This seems like a good place to use a component. For example, if I have my files organized like this:
.
├── components
│   └── trusted-root-ca
│       ├── kustomization.yaml
│       └── root-ca.pem
└── projects
    ├── kustomization.yaml
    ├── project1
    │   ├── kustomization.yaml
    │   └── namespace.yaml
    └── project2
        ├── kustomization.yaml
        └── namespace.yaml
And in components/trusted-root-ca/kustomization.yaml I have:
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
configMapGenerator:
  - name: trusted-root-ca
    files:
      - root-ca.pem
generatorOptions:
  disableNameSuffixHash: true
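(Note that disableNameSuffixHash is what keeps the generated name fixed at trusted-root-ca, so every project can reference it by a stable name. The trade-off is that updating root-ca.pem does not change the ConfigMap's name, so workloads consuming it are not automatically restarted.)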
Then in projects/project1/kustomization.yaml I can write:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project1
components:
  - ../../components/trusted-root-ca
resources:
  - namespace.yaml
And similarly in projects/project2/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project2
components:
  - ../../components/trusted-root-ca
resources:
  - namespace.yaml
And in projects/kustomization.yaml I have:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - project1
  - project2
Then if, from the top directory, I run kustomize build projects, the output will look like:
apiVersion: v1
kind: Namespace
metadata:
  name: project1
spec: {}
---
apiVersion: v1
kind: Namespace
metadata:
  name: project2
spec: {}
---
apiVersion: v1
data:
  root-ca.pem: |
    ...cert data here...
kind: ConfigMap
metadata:
  name: trusted-root-ca
  namespace: project1
---
apiVersion: v1
data:
  root-ca.pem: |
    ...cert data here...
kind: ConfigMap
metadata:
  name: trusted-root-ca
  namespace: project2
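Both namespaces and both ConfigMaps can then be applied in one go, e.g. with kubectl apply -k projects.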

Kustomize/Kubernetes - Nested envFrom injection

I'm trying to inject environment variables using envFrom.
A simplified structure looks something like this:
├── base
│   ├── backend
│   │   ├── backend.properties
│   │   └── app1
│   │       ├── app1_backend.properties
│   │       ├── deployment.yaml
│   │       ├── ingress.yaml
│   │       └── kustomization.yaml
│   ├── common.properties
│   ├── frontend
│   │   ├── app1
│   │   │   ├── app1_frontend.properties
│   │   │   ├── deployment.yaml
│   │   │   ├── ingress.yaml
│   │   │   ├── kustomization.yaml
│   │   │   └── service.yaml
│   │   ├── frontend.properties
│   │   └── kustomization.yaml
│   └── kustomization.yaml
I would like to generate properties at the main (common) level, at the backend/frontend level, and at the individual app level.
So I tried adding the following patch at the main level, and it works:
- op: add
  path: /spec/template/spec/containers/0/envFrom
  value:
    - configMapRef:
        name: common-properties
and the following patch in the nested directories (backend/frontend/individual app):
- op: add
  path: "/spec/template/spec/containers/0/envFrom/-"
  value:
    configMapRef:
      name: backend-properties
But it doesn't work with the following error:
add operation does not apply: doc is missing path: "/spec/template/spec/containers/0/envFrom/-": missing value
I have seen some examples on GitHub where that syntax was used: https://github.com/search?l=YAML&p=1&q=%2Fspec%2Ftemplate%2Fspec%2Fcontainers%2F0%2FenvFrom%2F-&type=Code (you have to be logged in to see results), and I'm not sure whether it stopped working in a specific Kustomize version (I'm using the newest, 4.5.3) or never worked at all.
I have written some Kustomize patches before, and the /- syntax has usually worked fine for arrays that already exist in the manifest.
Is it possible to inject that envFrom at different levels?
It's hard to diagnose your problem without a reproducible example, but if I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    spec:
      containers:
        - name: example
          image: docker.io/alpine:latest
          envFrom:
            - configMapRef:
                name: example-config
And use this kustomization.yaml, which includes your patch without changes:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: example
    patch: |-
      - op: add
        path: "/spec/template/spec/containers/0/envFrom/-"
        value:
          configMapRef:
            name: backend-properties
Then everything seems to work, and I get the resulting output from kustomize build:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: example-config
            - configMapRef:
                name: backend-properties
          image: docker.io/alpine:latest
          name: example
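One likely cause of the original error, for what it's worth: a JSON patch add targeting .../envFrom/- can only append to an array that already exists in the document being patched; per RFC 6902, the operation fails (here as "doc is missing path") when it does not. So the patch that creates the array must apply to the same resource before any patch that appends to it. A sketch, reusing the patches from the question:
# first patch: create the envFrom array
- op: add
  path: /spec/template/spec/containers/0/envFrom
  value:
    - configMapRef:
        name: common-properties
# later patches: append to the now-existing array
- op: add
  path: /spec/template/spec/containers/0/envFrom/-
  value:
    configMapRef:
      name: backend-properties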

Kustomize: how to reference a value from a ConfigMap in another resource/overlay?

I have a couple of overlays (dev, stg, prod) pulling data from multiple bases, where each base contains a single service, so that each overlay can pick and choose the services it needs. I generate the manifests from the dev/stg/prod directories.
A simplified version of my Kubernetes/Kustomize directory structure looks like this:
├── base
│   ├── ServiceOne
│   │   ├── kustomization.yaml
│   │   └── service_one_config.yaml
│   ├── ServiceTwo
│   │   ├── kustomization.yaml
│   │   └── service_two_config.yaml
│   └── ConfigMap
│       ├── kustomization.yaml
│       └── config_map_constants.yaml
└── overlays
    ├── dev
    │   ├── kustomization.yaml
    │   └── dev_patch.yaml
    ├── stg
    │   ├── kustomization.yaml
    │   └── stg_patch.yaml
    └── prod
        ├── kustomization.yaml
        └── prod_patch.yaml
Under base/ConfigMap, the config_map_constants.yaml file contains non-secret key/value pairs:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: myApp
  name: global-config-map
  namespace: myNamespace
data:
  aws_region: "us-west"
  env_id: "1234"
If an overlay just needs a default value, it should reference the key/value pair as is, and if it needs a custom value, I would use a patch to override the value.
kustomization.yaml from base/ConfigMap looks like this and refers to ConfigMap as a resource:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - config_map_constants.yaml
QUESTION: how do I reference "aws_region" in my overlays' yaml files so that I can retrieve the value?
For example, I want to be able to do something like this in base/ServiceOne/service_one_config.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myApp
    aws_region: ../ConfigMap/${aws_region} #pseudo syntax
  name: service_one
spec:
  env_id: ../ConfigMap/${env_id} #pseudo syntax
I am able to build the ConfigMap and append it to my services, but I am struggling to figure out how to reference its contents from other resources.
EDIT:
Kustomize version: v4.5.2
You can try using the replacements feature: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/
For your scenario, if you want to reference aws_region in your Service labels, you need to create a replacement file.
replacements/region.yaml
source:
  kind: ConfigMap
  name: global-config-map
  fieldPath: data.aws_region
targets:
  - select:
      kind: Service
      name: service_one
    fieldPaths:
      - metadata.labels.aws_region
And add it to your kustomization.yaml
replacements:
  - path: replacements/region.yaml
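For context, the overlay's kustomization.yaml as a whole might then look something like this (the resource paths are illustrative; adjust them to your layout):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/ServiceOne
  - ../../base/ConfigMap
replacements:
  - path: replacements/region.yaml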
The kustomize output should be similar to this:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myApp
    aws_region: us-west
  name: service_one

kustomize: how to pass `newTag` from command line

I am using https://kustomize.io/ and below is my kustomization.yaml file.
I have multiple docker images, and during deployment they all get the same tag. I can manually change all the tag values and run kubectl apply -k . from the command prompt.
The question is: I don't want to change this file manually. I want to pass the tag value as a command-line argument to the kubectl apply -k . command. Is there a way to do that? Thanks.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: foo/bar
    newTag: "36"
  - name: zoo/too
    newTag: "36"
In my opinion, the "file" approach is the correct way and the best solution, even for your test scenario.
By "the correct way" I mean this is how you should work with kustomize: keep your environment-specific data in separate directories.
kustomize supports the best practice of storing one's entire configuration in a version control system.
Before kustomize build . you can change the tag using:
kustomize edit set image foo/bar:12.5
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
images:
  - name: foo/bar
    newTag: "12.5"
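If you want to script this (for example in CI), a minimal sketch could look like the following; the overlay path and the IMAGE_TAG variable are illustrative:
cd overlays/test
kustomize edit set image foo/bar:"$IMAGE_TAG"  # rewrites newTag in kustomization.yaml
kustomize build . | kubectl apply -f -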
Using the envsubst approach, with deployment.yaml and kustomization.yaml in the base directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: test
          image: foo/bar:1.2
Directory tree with the test overlay:
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        └── kustomization2.yaml
Create a new kustomization2.yaml with variables in the overlays/test directory:
cd overlays/test
cat kustomization2.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: foo/bar
    newTag: "${IMAGE_TAG}"
export IMAGE_TAG="2.2.11" ; envsubst < kustomization2.yaml > kustomization.yaml ; kustomize build .
output:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - image: foo/bar:2.2.11
      name: test
Files in the directory after envsubst:
.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── kustomization2.yaml
        └── kustomization.yaml
You can always pipe the result of kustomize build . into kubectl to change the image on the fly:
kustomize build . | kubectl set image -f - test=nginx3:4 --local -o yaml
Output:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - image: nginx3:4
      name: test
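Note that --local -o yaml only prints the modified manifests without contacting the cluster; to actually deploy the result, pipe it once more into kubectl apply:
kustomize build . | kubectl set image -f - test=nginx3:4 --local -o yaml | kubectl apply -f -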
Note: this is a deliberate design decision. From the kustomize docs ("Build-time side effects from CLI args or env variables"):
Changing kustomize build configuration output as a result of additional arguments or flags to build, or by consulting shell environment variable values in build code, would frustrate that goal.
kustomize instead offers kustomization file edit commands. Like any shell command, they can accept environment variable arguments.
For example, to set the tag used on an image to match an environment variable, run
kustomize edit set image nginx:$MY_NGINX_VERSION
as part of some encapsulating workflow executed before kustomize build.

kustomize, secretGenerator & patchesStrategicMerge: envFrom.secretRef not reading hashed secret name

In my kustomization.yaml I have:
...
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env
patchesStrategicMerge:
  - app.yaml
And then in my app.yaml (the patch) I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - secretRef:
                name: db-env
When I try to build this via kustomize build k8s/development, I get back:
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
    - envFrom:
        - secretRef:
            name: db-env
      name: server
When it should be:
- envFrom:
    - secretRef:
        name: db-env-4g95hhmhfc
How do I get the secretGenerator name hashing to apply to patchesStrategicMerge too?
Or alternatively, what's the proper way to inject some environment variables into a deployment for a specific overlay?
This is for development.
My file structure is:
❯ tree k8s
k8s
├── base
│   ├── app.yaml
│   └── kustomization.yaml
├── development
│   ├── app.yaml
│   ├── golinks.sql
│   ├── kustomization.yaml
│   ├── mariadb.yaml
│   ├── my.cnf
│   └── my.env
└── production
    ├── ingress.yaml
    └── kustomization.yaml
Where base/kustomization.yaml is:
namespace: go-mpen
resources:
  - app.yaml
images:
  - name: server
    newName: reg/proj/server
and development/kustomization.yaml is:
resources:
  - ../base
  - mariadb.yaml
configMapGenerator:
  - name: mariadb-config
    files:
      - my.cnf
  - name: initdb-config
    files:
      - golinks.sql # TODO: can we mount this w/out a config file?
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env
patchesStrategicMerge:
  - app.yaml
This works fine for me with kustomize v3.8.4. Can you please check your version, and make sure disableNameSuffixHash is not perhaps set to true?
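For reference, that option looks like this in a kustomization.yaml; when present, generated Secret and ConfigMap names keep no hash suffix:
generatorOptions:
  disableNameSuffixHash: true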
Here are the manifests I used to test this: app.yaml, deployment.yaml, kustomization.yaml, my.env.
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - secretRef:
                name: db-env
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
and my kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env
patchesStrategicMerge:
  - app.yaml
resources:
  - deployment.yaml
And here is the result:
apiVersion: v1
data:
  ASD: MTIz
kind: Secret
metadata:
  name: db-env-f5tt4gtd7d
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:1.14.2
          name: nginx
          ports:
            - containerPort: 80
        - envFrom:
            - secretRef:
                name: db-env-f5tt4gtd7d
          name: server
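As a general note: kustomize rewrites references to generated Secrets and ConfigMaps only for resources that are part of the same build, and that includes resources brought in via patchesStrategicMerge, as shown above. If the hash is not being applied in your case, the usual suspects are an older kustomize version (including the one vendored into older kubectl releases) or generatorOptions.disableNameSuffixHash: true.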