use Kustomize to create additional namespaces with a suffix - kubernetes

I'm currently battling an issue with kustomize and not having much luck.
I have my config set up and am using kustomize (v4.5.7) to keep separate base, variant and environment configurations. I'm trying to use this setup to deploy a copy of my dev environment onto the same cluster, using different namespaces and a suffix.
The idea is that everything would be deployed with a suffix appended to its name (I got this working, but it only renames the objects, not the namespaces) and dropped into separate namespaces carrying the same suffix.
I'm currently defining all the namespaces with the following config:
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
Now I want to be able to deploy copies of the namespace named mynamespace-mysuffix.
I've managed to implement a suffix for the names of the objects, alongside a PrefixSuffixTransformer that updates the namespace field of the created objects to mynamespace-mysuffix.
Unfortunately this doesn't update the Namespace configuration itself and leaves it intact. In short, it tries to deploy the objects into namespaces that do not exist.
This is the working PrefixSuffixTransformer amending the namespace set in the various objects:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customSuffixer
suffix: "-mysuffix"
fieldSpecs:
- path: metadata/name
- path: metadata/namespace
I'm trying, so far unsuccessfully, to target the Namespace objects with the following additional PrefixSuffixTransformer:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: nsSuffixer
suffix: "-mysuffix"
fieldSpecs:
- kind: Namespace
  path: metadata/name
I was hoping this last part would work, but no success. Does anyone have any suggestions as to how I can get the additional namespaces created with a suffix?

If I understand your question correctly, the solution is just to add a namespace: declaration to the kustomization.yaml file in your variants.
For example, if I have a base directory that contains:
namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
spec: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: example
resources:
- namespace.yaml
- service.yaml
And I create a variant in overlays/example, with this kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
nameSuffix: -mysuffix
Then running kustomize build overlays/example results in:
apiVersion: v1
kind: Namespace
metadata:
  name: example
spec: {}
---
apiVersion: v1
kind: Service
metadata:
  name: example-mysuffix
  namespace: example
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
As you have described in your question, the Namespace resource wasn't renamed by the nameSuffix configuration. But if I simply add a namespace: declaration to the kustomization.yaml, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: example-mysuffix
resources:
- ../../base
nameSuffix: -mysuffix
Then I get the desired output:
apiVersion: v1
kind: Namespace
metadata:
  name: example-mysuffix
spec: {}
---
apiVersion: v1
kind: Service
metadata:
  name: example-mysuffix
  namespace: example-mysuffix
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
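To actually deploy the variant, you can point kubectl at the overlay directly (assuming a kubectl version with built-in kustomize support, i.e. v1.14 or later), or pipe the kustomize output into it:
kubectl apply -k overlays/example
# or equivalently:
kustomize build overlays/example | kubectl apply -f -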

Related

How to remove ingress annotation using Kustomize

In the Base Ingress file I have added the annotation nginx.ingress.kubernetes.io/auth-snippet, and it needs to be removed in one of the environments.
Base Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-snippet: test
I created an ingress-patch.yml in overlays and added the below:
- op: remove
  path: /metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet
But it gives the below error when executing kustomize build:
Error: remove operation does not apply: doc is missing path: "/metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet": missing value
The path /metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet doesn't work because / is the character that JSON Pointer uses to separate elements in the document; there's no way for the parser to know that the / in nginx.ingress.kubernetes.io/auth-snippet means something different from the / in /metadata/annotations.
The JSON Pointer RFC (RFC 6901, which defines the syntax used for the path component of a patch) tells us that we need to escape / characters using ~1. If we have the following in ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    example-annotation: foo
    nginx.ingress.kubernetes.io/auth-snippet: test
And write our kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress.yaml
patches:
- target:
    kind: Ingress
    name: ingress
  patch: |
    - op: remove
      path: /metadata/annotations/nginx.ingress.kubernetes.io~1auth-snippet
Then the output of kustomize build is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    example-annotation: foo
  name: ingress

How to use Kustomize to configure Traefik 2.x IngressRoute (metadata.name, spec.routes[0].services[0].name & spec.routes[0].match = Host() )

We have an EKS cluster running with Traefik deployed in CRD style (full setup on GitHub) and want to deploy our app https://gitlab.com/jonashackt/microservice-api-spring-boot with the Kubernetes objects Deployment, Service and IngressRoute (see the configuration repository here). The manifests look like this:
deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
      branch: main
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
        branch: main
    spec:
      containers:
      - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: microservice-api-spring-boot
        ports:
        - containerPort: 8098
      imagePullSecrets:
      - name: gitlab-container-registry
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: microservice-api-spring-boot
spec:
  ports:
  - port: 80
    targetPort: 8098
  selector:
    app: microservice-api-spring-boot
    branch: main
traefik-ingress-route.yml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: microservice-api-spring-boot-ingressroute
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`microservice-api-spring-boot-BRANCHNAME.tekton-argocd.de`)
    kind: Rule
    services:
    - name: microservice-api-spring-boot
      port: 80
We already use Kustomize, and especially the kustomize CLI (installed on a Mac or in GitHub Actions with brew install kustomize), with the following folder structure:
├── deployment.yml
├── kustomization.yaml
├── service.yml
└── traefik-ingress-route.yml
Our kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yml
- service.yml
- traefik-ingress-route.yml
images:
- name: registry.gitlab.com/jonashackt/microservice-api-spring-boot
  newTag: foobar
commonLabels:
  branch: foobar
nameSuffix: foobar
Now, dynamically adding a suffix to the Deployment's, Service's and IngressRoute's .metadata.name from within our GitHub Actions workflow is easy with the kustomize CLI (because we want the suffix to start with a -, we need to use the -- -barfoo syntax here):
kustomize edit set namesuffix -- -barfoo
Check the result with
kustomize build .
Also changing the .spec.selector.matchLabels.branch, .spec.template.metadata.labels.branch and .spec.selector.branch in the Deployment and Service is no problem:
kustomize edit set label branch:barfoo
Changing the .spec.template.spec.containers[0].image of our Deployment works with:
kustomize edit set image registry.gitlab.com/jonashackt/microservice-api-spring-boot:barfoo
But looking into our IngressRoute, it seems that .spec.routes[0].services[0].name and .spec.routes[0].match = Host() can't be changed with Kustomize out of the box?! So how can we change both fields without the need for additional tooling like yq or even sed/envsubst?
1. Change the IngressRoute's .spec.routes[0].services[0].name with Kustomize
Changing the IngressRoute's .spec.routes[0].services[0].name is possible with Kustomize using a NameReference transformer (see docs here) - luckily I found inspiration in this issue. Therefore we need to include the configurations keyword in our kustomization.yaml:
nameSuffix: foobar
configurations:
# Tie the target Service's metadata.name to the IngressRoute's spec.routes.services.name.
# Once the Service name is changed, the referred service name in the IngressRoute will be changed as well.
- nameReference.yml
We also need to add a file called nameReference.yml:
nameReference:
- kind: Service
  fieldSpecs:
  - kind: IngressRoute
    path: spec/routes/services/name
As you can see, we tie the Service's name to the IngressRoute's spec/routes/services/name. Now running
kustomize edit set namesuffix barfoo
will not only change the metadata.name of the Deployment, Service and IngressRoute - but also the .spec.routes[0].services[0].name of the IngressRoute, since it is now linked to the metadata.name of the Service. Note that this only works if both the referrer and the target have a name field.
2. Change a part of the IngressRoute's .spec.routes[0].match = Host()
The second part of the question asks how to change a part of the IngressRoute's .spec.routes[0].match = Host(). There's an open issue about this in the Kustomize GitHub project. Right now Kustomize doesn't support this use case out of the box - the only option would be writing a custom generator plugin for Kustomize. As this might not be a preferred option, there's another way, inspired by this blog post. Since we can create YAML files inline in our console using the cat > ./myyamlfile.yml <<EOF ... EOF syntax, we can also use inline variable substitution.
So first, define the branch name as a variable:
RULE_HOST_BRANCHNAME=foobar
And then use the described syntax to create an ingressroute-patch.yml file inline:
cat > ./ingressroute-patch.yml <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: microservice-api-spring-boot-ingressroute
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: Host(\`microservice-api-spring-boot-$RULE_HOST_BRANCHNAME.tekton-argocd.de\`)
    kind: Rule
    services:
    - name: microservice-api-spring-boot
      port: 80
EOF
The last step is to reference the ingressroute-patch.yml file in a patchesStrategicMerge section of our kustomization.yaml, like this:
patchesStrategicMerge:
- ingressroute-patch.yml
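For reference, the final kustomization.yaml that ties all of the above together might look roughly like this (a sketch based on the folder structure and edits shown earlier; the image tag, label and nameSuffix values are still meant to be set dynamically via kustomize edit):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yml
- service.yml
- traefik-ingress-route.yml
images:
- name: registry.gitlab.com/jonashackt/microservice-api-spring-boot
  newTag: barfoo
commonLabels:
  branch: barfoo
nameSuffix: -barfoo
configurations:
- nameReference.yml
patchesStrategicMerge:
- ingressroute-patch.yml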
Now running kustomize build . should output the correct Deployment, Service and IngressRoute for our setup:
apiVersion: v1
kind: Service
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-barfoo
spec:
  ports:
  - port: 80
    targetPort: 8098
  selector:
    app: microservice-api-spring-boot
    branch: barfoo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-barfoo
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
      branch: barfoo
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
        branch: barfoo
    spec:
      containers:
      - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot:barfoo
        name: microservice-api-spring-boot
        ports:
        - containerPort: 8098
      imagePullSecrets:
      - name: gitlab-container-registry
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    branch: barfoo
  name: microservice-api-spring-boot-ingressroute-barfoo
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`microservice-api-spring-boot-barfoo.tekton-argocd.de`)
    services:
    - name: microservice-api-spring-boot-barfoo
      port: 80

Use variable in a patchesJson6902 of a kustomization.yaml file

I would like to set the name field in a Namespace resource and also replace the namespace field in a Deployment resource with the same value, for example my-namespace.
Here is kustomization.yaml:
namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /metadata/name
      value: <NAMESPACE>
  target:
    kind: Namespace
    name: system
    version: v1
resources:
- manager.yaml
and manager.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
I tried using kustomize edit set namespace my-namespace && kustomize build, but it only changes the namespace field in the Deployment object.
Is there a way to change both fields without using sed, in 'pure' kustomize, and without having to manually change the value in kustomization.yaml?
Is there a way to change both fields without using sed, in 'pure' kustomize, and without having to manually change the value in kustomization.yaml?
I managed to achieve something similar with the following configuration:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
And here is the output of the command that you used:
➜ kustomize kustomize edit set namespace my-namespace7 && kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace7
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
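For completeness, kustomize edit set namespace my-namespace7 simply writes the namespace field into the kustomization.yaml, which afterwards looks something like this (a sketch; only the namespace line changes):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace7
resources:
- deployment.yaml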
What is happening here is that once you set the namespace globally in kustomization.yaml, it is applied to all of your targets, which looks to me like an easier way to achieve what you want.
I cannot test your config without the manager_patch.yaml content. If you wish to pursue your approach further, you will have to update the question with that file's content.

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD. While creating a deployment with a single nginx pod, I would like to access it on port 80 from a remote server, and locally as well. The pod stays in a pending state; after running kubectl get pods, around 40 seconds later on average the pod disappears and a different nginx pod name starts up, and this seems to happen in a loop.
The error is
* W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created this correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve my error above?
All of my .yaml files for this are below:
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pods should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      # Run this image
      - image: nginx:1.14
        name: nginx
        ports:
        - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          resource:
            kind: nginx-service
            name: nginx-deployment
          #service:
          #  name: nginx
          #  port: 80
          #serviceName: nginx
          #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
- apiGroups: ["example.com"]
  resources: ["ResourceAll"]
  verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
- kind: ServiceAccount
  name: user
  namespace: example

Selectively apply nameprefix/namesuffix in kustomize

Currently we are using ${HOME}/bin/kustomize edit set nameprefix prefix1
But it is adding the nameprefix to all of our resources, like deployment.yaml and service.yaml.
We want to apply the nameprefix to deployment.yaml only and not apply it to service.yaml.
Posting for better visibility:
If you are using:
kustomize edit set nameprefix prefix1
This command will set namePrefix inside your current kustomization.
As stated in the question, this is how it works: namePrefix will be applied to all resources specified inside kustomization.yaml.
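For illustration, after running kustomize edit set nameprefix prefix1 the kustomization.yaml simply gains a namePrefix entry, something like this (resource file names taken from the question):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: prefix1
resources:
- deployment.yaml
- service.yaml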
Please consider the following scenario using the idea of an overlay and base with kustomization.
Tested with:
kustomize/v4.0.1
The base declares resources and settings shared in common, and the overlay declares additional differences.
.
├── base
│   ├── [deployment.yaml] Deployment nginx
│   ├── [kustomization.yaml] Kustomization
│   └── [service.yaml] Service nginx
└── prod
    ├── [kustomization.yaml] Kustomization
    └── kustomizeconfig
        └── [deploy-prefix-transformer.yaml] PrefixSuffixTransformer customPrefixer
base: common files
#deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
#kustomization.yaml
resources:
- deployment.yaml
- service.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
overlay/prod: kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
nameSuffix: -Suffix1
transformers:
- ./kustomizeconfig/deploy-prefix-transformer.yaml
overlay/prod/kustomizeconfig: deploy-prefix-transformer.yaml
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customPrefixer
prefix: "deploymentprefix-"
fieldSpecs:
- kind: Deployment
  path: metadata/name
As you can see, using this structure and the builtin PrefixSuffixTransformer plugin, you can get the desired effect:
kustomize build overlay/prod/
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-Suffix1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentprefix-nginx-Suffix1
spec:
  selector:
    matchLabels:
      run: nginx
This configuration (overlay/prod/kustomization.yaml) applies nameSuffix: -Suffix1 to all resources specified in the base directory, while the PrefixSuffixTransformer (in this specific example) adds prefix: "deploymentprefix-" only to the Deployment's metadata.name:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customPrefixer
prefix: "deploymentprefix-"
fieldSpecs:
- kind: Deployment
  path: metadata/name
/kustomizeconfig/deploy-prefix-transformer.yaml
There is a GitHub issue about that:
is it possible to have kustomization file avoid adding prefixes to few kinds ?
And there are two examples provided by @jbrette with which you can achieve what you need:
no prefix to secret
canary using skip
Additionally you can take a look at these pull requests:
https://github.com/kubernetes/enhancements/pull/1232
https://github.com/kubernetes-sigs/kustomize/pull/1491
For those who stumble across this: I had an issue getting it to work with the ServiceAccount type.
The issue was that I needed to prevent the suffix from being added. Apparently namePrefix "should" prevent that too, but in actuality I had to add:
nameSuffix:
- path: metadata/name
  apiVersion: v1
  kind: serviceaccount
  skip: true
Note the lowercase kind. Using the standard ServiceAccount spelling for kind makes it fail.
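A minimal sketch of how that snippet could be wired in, assuming it is saved as kustomizeconfig/sa-skip-suffix.yaml (the file names here are only examples):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- serviceaccount.yaml
nameSuffix: -mysuffix
configurations:
- kustomizeconfig/sa-skip-suffix.yaml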