How to override a namespace override - kubernetes

In the following scenario I have my containers defined in ../base/.
In this dev/ directory I want to start all the deployments and statefulsets in the dev namespace.
The rub is that I also want to run the local-path-storage CSI in the local-path-storage namespace, but kustomize overrides that and creates it in the dev namespace as well.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base
resources:
- local-path-storage.yaml
How can I undo the namespace override for just local-path-storage.yaml?

This functionality doesn't exist in Kustomize yet. There's an open issue addressing this, but no open PRs at the time of this writing.
The quickest solution here is to remove the namespace setting in dev/kustomization.yaml and hand-set the namespace in every resource that should live in dev.
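A minimal sketch of what that looks like (the Deployment shown is just a placeholder for one of your resources):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# note: no namespace: field here
bases:
- ../base
resources:
- local-path-storage.yaml
and every namespaced resource that should land in dev carries its own namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app        # placeholder name
  namespace: dev      # hand-set instead of relying on the kustomization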
Another option, shamelessly copied from the issue I cited earlier, is to create a transformer to get around this:
#!/usr/bin/env python3

import sys
import yaml

# The transformer configuration file is passed as the first argument;
# it must contain the target namespace, e.g. "namespace: dev".
with open(sys.argv[1], "r") as stream:
    try:
        data = yaml.safe_load(stream)
    except yaml.YAMLError:
        print("Error parsing NamespaceTransformer input", file=sys.stderr)
        sys.exit(1)

# Cluster-scoped kinds that must not be given a namespace.
# See: kubectl api-resources --namespaced=false
denylist = [
    "ComponentStatus",
    "Namespace",
    "Node",
    "PersistentVolume",
    "MutatingWebhookConfiguration",
    "ValidatingWebhookConfiguration",
    "CustomResourceDefinition",
    "APIService",
    "MeshPolicy",
    "TokenReview",
    "SelfSubjectAccessReview",
    "SelfSubjectRulesReview",
    "SubjectAccessReview",
    "CertificateSigningRequest",
    "ClusterIssuer",
    "BGPConfiguration",
    "ClusterInformation",
    "FelixConfiguration",
    "GlobalBGPConfig",
    "GlobalFelixConfig",
    "GlobalNetworkPolicy",
    "GlobalNetworkSet",
    "HostEndpoint",
    "IPPool",
    "PodSecurityPolicy",
    "NodeMetrics",
    "ClusterRoleBinding",
    "ClusterRole",
    "ClusterRbacConfig",
    "PriorityClass",
    "StorageClass",
    "VolumeAttachment",
]

try:
    # Resources arrive on stdin as a multi-document YAML stream.
    for yaml_input in yaml.safe_load_all(sys.stdin):
        if yaml_input["kind"] not in denylist:
            if "namespace" not in yaml_input["metadata"]:
                yaml_input["metadata"]["namespace"] = data["namespace"]
        print("---")
        print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
    print("Error parsing YAML input\n\n%s\n\n" % exc, file=sys.stderr)

Unfortunately it is not possible out of the box: the namespace override in a kustomization assumes all resources belong to the same namespace.
Your alternatives are:
Create a separate kustomization for resources that do not belong to the same namespace.
Deploy resources that do not need kustomization directly with kubectl apply -f.
Use an alternative replacement approach like the one suggested by Eric Staples.
I generally create one kustomization per set of resources that are deployed together in a namespace, to keep each kustomization simple and independent of any other resources.
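For the scenario in the question, that split could look roughly like this (a sketch; folder and file names are only suggestions):
# dev/kustomization.yaml -- everything that belongs in the dev namespace
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base

# local-path-storage/kustomization.yaml -- no namespace override here
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- local-path-storage.yaml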

It is possible since kustomize 4.5.6 by adding a namespaceTransformer. You want to set the field unsetOnly to true.
Here is an example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
resources:
- local-path-storage.yaml
transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer
  metadata:
    name: notImportantHere
    namespace: dev
  unsetOnly: true
This should set the namespace to dev for all resources that DO NOT have a namespace set.
Link to namespaceTransformer spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_namespacetransformer_
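Resources in local-path-storage.yaml that already declare their namespace are left untouched, so something like the following (sketched, not the full manifest, with the name abbreviated) stays in local-path-storage while everything coming from ../base ends up in dev:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner   # name abbreviated for the example
  namespace: local-path-storage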

I am faced with the same problem.
My approach to this problem is to break it up into multiple steps.
I would have stepone, steptwo folders.
tree ./project/
./project/
├── stepone
│   ├── base
│   └── overlay
└── steptwo
    ├── base
    └── overlay
Now I can move the part of the deployment that should not have the namespace override into steptwo or vice versa. Depending on your deployment needs.
I am working on complex transitions from a helm template with over 200 files output from the templates.
I am simply breaking the deployment up into different steps and using kustomize at each step to manage just the portion of the deployment where isolation is required.
It does add some effort, but it still gives me the isolation I need until kustomize finds a good way to handle this complexity of the namespace override. This takes @Diego-mendes's answer and encapsulates the different parts into their own folders.
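The steps are then built and applied one after the other, for example (a sketch, assuming each overlay is self-contained):
$ kustomize build stepone/overlay | kubectl apply -f -
$ kustomize build steptwo/overlay | kubectl apply -f -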

Related

Patching multiple resources with Kustomize

I have a kustomization.yaml file defined which consists of two resources that are downloaded from a git repo. Both of these resources want to create a Namespace with the same name, which produces an error when I try to build the final kustomization: 'may not add resource with an already registered id: ~G_v1_Namespace|~X|rabbitmq-system'. I tried using patches to get rid of this unwanted namespace; however, that only works if ONE resource is defined. As soon as I add the other resource, it stops working.
bases:
- ../../base
namespace: test
resources:
- https://github.com/rabbitmq/cluster-operator/releases/download/v1.13.0/cluster-operator.yml
- https://github.com/rabbitmq/messaging-topology-operator/releases/download/v1.6.0/messaging-topology-operator.yaml
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: rabbitmq-system
  $patch: delete
What I think is happening is Kustomize loads the resources first and finds two identical Namespaces defined and it doesn't take patches into consideration. Can I fix this behavior somehow?

ConfigMap that can reference current Namespace

I'm working with a Pod (Shiny Proxy) that talks to Kubernetes API to start other pods. I'm wanting to make this generic, and so don't want to hardcode the namespace (because I intend to have multiple of these, deployed probably as an OpenShift Template or similar).
I am using Kustomize to set the namespace on all objects. Here's what my kustomization.yaml looks like for my overlay:
bases:
- ../../base
namespace: shiny
commonAnnotations:
  technical_contact: A Local Developer <somedev@example.invalid>
Running Shiny Proxy and having it start the pods I need it to (I have service accounts and RBAC already sorted) works, so long as in the configuration for Shiny Proxy I specify (hard-code) the namespace that the new pods should be generated in. The default namespace that Shiny Proxy will use is (unfortunately) 'default', which is inappropriate for my needs.
Currently for the configuration I'm using a ConfigMap (perhaps I should move to a Kustomize ConfigMapGenerator).
The ConfigMap in question is currently like the following:
apiVersion: v1
kind: ConfigMap
metadata:
name: shiny-proxy
data:
application_yml: |
...
container-backend: kubernetes
kubernetes:
url: https://kubernetes.default.svc.cluster.local:443
namespace: shiny
...
The above works, but 'shiny' is hardcoded; I would like to be able to do something like the following:
namespace: { .metadata.namespace }
But this doesn't appear to work in a ConfigMap, and I don't see anything in the documentation that would lead me to believe that it would, or that a similar thing is possible within the ConfigMap machinery.
Looking over the Kustomize documentation doesn't fill me with clarity either, particularly as the configuration file is essentially plain text (and not a YAML document as far as the ConfigMap is concerned). I've seen some use of Vars, but https://github.com/kubernetes-sigs/kustomize/issues/741 leads me to believe that's a non-starter.
Is there a nice declarative way of handling this? Or should I be looking to have the templating smarts happen within the container? That seems kinda wrong to me, but I am still new to Kubernetes (and OpenShift).
I'm using CodeReady Containers 1.24 (OpenShift 4.7.2) which is essentially Kubernetes 1.20 (IIRC). I'm preferring to keep this fairly well aligned with Kubernetes without getting too OpenShift specific, but this is still early days.
Thanks,
Cameron
If you don't want to hard-code a specific data in your manifest file, you can consider using Kustomize plugins. In this case, the sedtransformer plugin may be useful. This is an example plugin, maintained and tested by the kustomize maintainers, but not built-in to kustomize.
As you can see in the Kustomize plugins guide:
Kustomize offers a plugin framework allowing people to write their own resource generators and transformers.
For more information on creating and using Kustomize plugins, see Extending Kustomize.
I will create an example to illustrate how you can use the sedtransformer plugin in your case.
Suppose I have a shiny-proxy ConfigMap:
NOTE: I don't specify a namespace, I use namespace: NAMESPACE instead.
$ cat cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: NAMESPACE
    something_else:
      something: something
To use the sedtransformer plugin, we first need to create the plugin’s configuration file which contains a YAML configuration object:
NOTE: In argsOneLiner: I specify that NAMESPACE should be replaced with shiny.
$ cat sedTransformer.yaml
apiVersion: someteam.example.com/v1
kind: SedTransformer
metadata:
  name: sedtransformer
argsOneLiner: s/NAMESPACE/shiny/g
Next, we need to put the SedTransformer Bash script in the right place.
When loading, kustomize will first look for an executable file called
$XDG_CONFIG_HOME/kustomize/plugin/${apiVersion}/LOWERCASE(${kind})/${kind}
I create the necessary directories and download the SedTransformer script from GitHub:
NOTE: The downloaded script needs to be executable.
$ mkdir -p $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ cd $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ wget https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer
$ chmod a+x SedTransformer
Finally, we can check if it works as expected:
NOTE: To use this plugin, you need to provide the --enable-alpha-plugins flag.
$ tree
.
├── cm.yaml
├── kustomization.yaml
└── sedTransformer.yaml
0 directories, 3 files
$ cat kustomization.yaml
resources:
- cm.yaml
transformers:
- sedTransformer.yaml
$ kustomize build --enable-alpha-plugins .
apiVersion: v1
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    something_else:
      something: something
kind: ConfigMap
metadata:
  name: shiny-proxy
Using the sedtransformer plugin can be especially useful if you want to replace NAMESPACE in a number of places.
I found the easiest way of doing this was to use an entrypoint script in the container that harvests the downward API (?) service credentials (specifically the namespace secret) that get mounted in the container, and exposes the value as an environment variable.
export SHINY_K8S_NAMESPACE=`cat /run/secrets/kubernetes.io/serviceaccount/namespace`
cd /opt/shiny-proxy/working
exec java ${JVM_OPTIONS} -jar /opt/shiny-proxy/shiny-proxy.jar
Within the application configuration (shiny-proxy), it supports the use of environment variables in its configuration file, so I can refer to the pod's namespace using ${SHINY_K8S_NAMESPACE}
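For illustration, the kubernetes section of the application_yml from the ConfigMap above then reads something like this (a sketch of the relevant fragment, not the full file):
container-backend: kubernetes
kubernetes:
  url: https://kubernetes.default.svc.cluster.local:443
  namespace: ${SHINY_K8S_NAMESPACE}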
Although, I've just now seen the idea of a fieldRef (from https://docs.openshift.com/enterprise/3.2/dev_guide/downward_api.html), which would be generalisable to things other than just the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
spec:
  containers:
    - name: env-test-container
      image: gcr.io/google_containers/busybox
      command: ["/bin/sh", "-c", "env"]
      env:
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
  restartPolicy: Never

Exclude Resource in kustomization.yaml

I have a kustomize base that I'd like to re-use without editing it. Unfortunately, it creates a namespace I don't want to create. I'd like to simply remove that resource from consideration when compiling the manifests and add a resource for mine since I can't patch a namespace to change the name.
Can this be done? How?
You can omit a specific resource by using the delete directive of a strategic merge patch, like this.
Folder structure
$ tree .
.
├── base
│   ├── kustomization.yaml
│   └── namespace.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml
    └── prod
        ├── delete-ns-b.yaml
        └── kustomization.yaml
File content
$ cat base/kustomization.yaml
resources:
- namespace.yaml
$ cat base/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ cat overlays/dev/kustomization.yaml
bases:
- ../../base
$ cat overlays/prod/delete-ns-b.yaml
$patch: delete
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ cat overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- delete-ns-b.yaml
Behavior of kustomize
$ kustomize build overlays/dev
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ kustomize build overlays/prod
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
In this case, we have two namespaces in the base folder. In dev, kustomize produces two namespaces because there is no patch. In prod, kustomize produces only one namespace because the delete patch removes namespace ns-b.
I found that my understanding of not being able to change a namespace name was incorrect. Using the patch capability, you actually can change the name of a resource including namespaces.
This is what I ended up using:
patches:
- target:
    kind: Namespace
    name: application
  patch: |-
    - op: replace
      path: /metadata/name
      value: my-application
I encountered this problem and eventually took a different approach to solving it. It's worth thinking back through your requirements and asking yourself why you would want kustomize to omit a resource. In my case, and I would imagine this is the most common use case, I wanted kustomize to omit a resource because I didn't want to apply it to the target kubernetes cluster, but kustomize doesn't provide an easy way to do this. Would it not be better for the filtering to take place when applying the resources to the cluster rather than when generating them? The solution I eventually applied was to filter the resources by label when applying to the cluster: you can add an exclusion label in an overlay to prevent the resource from being applied.
e.g.
$ kustomize build . | kubectl apply -l apply-resource!=no -f -
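The exclusion label itself can be attached in the overlay with a patch against the resource you want to skip. A sketch (the Namespace named application is carried over from the earlier example, and the label key only has to match the selector used in the apply command):
patches:
- target:
    kind: Namespace
    name: application
  patch: |-
    apiVersion: v1
    kind: Namespace
    metadata:
      name: application
      labels:
        apply-resource: "no"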

Is kustomize for k8s backward chaining?

The README for kustomize says that
It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.
Does this analogy extend beyond the fact that files are used to declare what is needed?
Or, is kustomize backward chaining like make, in that it reads all command input before working out what it has to do, rather than working sequentially and stepping through the command input like bash working through a shell script?
EDIT: Jeff Regan, of the Kustomize team at Google, explains the model for the way kustomize works towards the beginning of his talk Kustomize: Kubernetes Configuration Customization. He also shows how kustomize may be daisy-chained so the output of one kustomize run serves as the input to another. It seems that, as pointed out by ITChap below, kustomize starts by gathering all the resources referenced in the kustomization.yml file in the base dir. It then executes sequentially through a series of steps to perform the required substitutions and transformations, repeating the substitution/transformation step as often as needed to complete. It then spits out the generated YAML on stdout. So I would say that it is not backward chaining like make, but rather somewhere in between. HTH.
What I noticed so far is that kustomize will first accumulate the content of all the base resources and then apply the transformations from your kustomization.yml files. If you have multiple levels of overlays, it doesn't seem to pass the result from one level to the next.
Let's consider the following:
./base/pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
./base/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pod.yml
./overlays/l1/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
nameSuffix: "-l1"
./overlays/l2/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../l1
nameSuffix: "-l2"
When running kustomize build overlays/l2 you are going to get a pod named test-l1-l2 as expected.
But if you try to patch the base pod you would have to reference the pod using:
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: test
  path: patch.yml
in your ./overlays/l1/kustomization.yml but also in ./overlays/l2/kustomization.yml. At the time the patch of l2 is applied, the referenced resource is still test and not test-l1.
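For completeness, patch.yml in that snippet is an ordinary JSON 6902 patch file; a trivial sketch (the image value is arbitrary) would be:
- op: replace
  path: /spec/containers/0/image
  value: busybox:1.36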
I don't know kustomize well enough to understand the intention behind this but these are my observations. Hope it answers your question.
PS: this might change with https://github.com/kubernetes-sigs/kustomize/issues/1036

Adjusting Kubernetes configurations depending on environment

I want to describe my services in kubernetes template files. Is it possible to parameterise values like the number of replicas, so that I can set them at deploy time?
The goal here is to be able to run my services locally in minikube (where I'll only need one replica) and have them be as close to those running in staging/live as possible.
I'd like to be able to change the number of replicas, use locally mounted volumes and make other minor changes, without having to write separate template files that would inevitably diverge from each other.
Helm
Helm is becoming the standard for templatizing kubernetes deployments. A helm chart is a directory of yaml files with Go template placeholders:
---
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
You define the default for a value in the values.yaml file:
replicaCount: 1
You can optionally override the value on the command line with --set:
helm install foo --set replicaCount=42
Helm can also point to an external values file:
helm install foo -f ./dev.yaml
helm install foo -f ./prod.yaml
dev.yaml
---
replicaCount: 1
prod.yaml
---
replicaCount: 42
Another advantage of Helm over simpler solutions like envsubst is that Helm supports plugins. One powerful plugin is helm-secrets, which lets you encrypt sensitive data using PGP keys: https://github.com/futuresimple/helm-secrets
If using helm + helm-secrets your setup may look like the following where your code is in one repo and your data is in another.
git repo with helm charts
stable
|__mysql
|  |__Values.yaml
|  |__Charts
|__apache
|  |__Values.yaml
|  |__Charts
incubator
|__mysql
|  |__Values.yaml
|  |__Charts
|__apache
|  |__Values.yaml
|  |__Charts
Then in another git repo that contains the environment specific data
values
|__mysql
|  |__dev
|  |  |__values.yaml
|  |  |__secrets.yaml
|  |__prod
|     |__values.yaml
|     |__secrets.yaml
You then have a wrapper script that references the values and the secrets files
helm secrets upgrade foo --install -f ./values/foo/$environment/values.yaml -f ./values/foo/$environment/secrets.yaml
envsubst
As mentioned in other answers, envsubst is a very powerful yet simple way to make your own templates. An example from kiminehart
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: ${GOOS}
GOOS=amd64 envsubst < mytemplate.tmpl > mydeployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: amd64
Kubectl
There is a feature request to allow kubectl to support some of the same features as helm and allow for variable substitution. There is a background document that strongly suggests the feature will never be added, and that it is instead up to external tools like Helm and envsubst to manage templating.
(edit)
Kustomize
Kustomize is a new project developed by Google that is very similar to helm. Basically you have two folders, base and overlays. You then run kustomize build someapp/overlays/production and it generates the YAML for that environment.
someapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── configMap.yaml
│   └── service.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── staging/
        ├── kustomization.yaml
        └── cpu_count.yaml
It is simpler and has less overhead than helm, but does not have plugins for managing secrets. You could combine kustomize with sops or envsubst to manage secrets.
https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/
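A crude sketch of that combination, assuming one of the overlay manifests contains an envsubst placeholder such as ${DB_PASSWORD} (the variable name and value here are made up):
$ DB_PASSWORD=not-a-real-secret kustomize build someapp/overlays/production | envsubst | kubectl apply -f -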
I'm hoping someone will give me a better answer, but in the meantime, you can feed your configuration through envsubst (part of gettext; a port is available for mac).
Example config, test.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${NUM_REPLICAS}
...
Then run:
$ NUM_REPLICAS=2 envsubst < test.yaml | kubectl apply -f -
deployment "test" configured
The final dash is required. This doesn't solve the problem with volumes of course, but it helps a little. You could write a script or makefile to automate this for each environment.
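A minimal sketch of such a wrapper script (the file name and default value are made up):
#!/bin/sh
# deploy.sh -- substitute environment variables into a template and apply it
# usage: NUM_REPLICAS=3 ./deploy.sh test.yaml
: "${NUM_REPLICAS:=1}"   # default to a single replica for local minikube
export NUM_REPLICAS
envsubst < "$1" | kubectl apply -f -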