Kubectl - How to Read Ingress Hosts from Config Variables? - kubernetes

I have a ConfigMap with a variable for my domain:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  MY_DOMAIN: mydomain.com
and my goal is to use the MY_DOMAIN variable inside my Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    ⮕ - config.MY_DOMAIN
    secretName: mytls
  rules:
  ⮕ - host: config.MY_DOMAIN
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 3000
But obviously the config above is not valid. So how can this be achieved?

The configMapRef/configMapKeyRef and secretRef/secretKeyRef sources used with envFrom and valueFrom are only available for container environment variables, which means they cannot be used in this context. The desired functionality is not available in vanilla Kubernetes as of 1.18.0.
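For reference, here is a minimal sketch of the only place those references are valid, a container's environment; the Deployment name and image below are hypothetical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                      # hypothetical Deployment, for illustration only
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        env:
        - name: MY_DOMAIN          # arrives as an environment variable only
          valueFrom:
            configMapKeyRef:
              name: config         # the ConfigMap from the question
              key: MY_DOMAIN
Nothing equivalent exists for Ingress spec fields such as host.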
However, it can be done. Helm and Kustomize are probably the two best ways to accomplish this, though it could also be done with sed or awk. Helm is a templating engine for Kubernetes manifests: you write generic manifests, replace the values that differ between environments with template variables, and provide a values file. At render time, the values from that file are injected into the template for you.
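To make that concrete, here is a rough sketch of what the Helm approach could look like; the values file layout and the myDomain key are assumptions for illustration, not something taken from your setup:
# values.yaml
myDomain: mydomain.com
# templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - {{ .Values.myDomain }}
    secretName: mytls
  rules:
  - host: {{ .Values.myDomain }}
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 3000
Running helm install with an environment-specific values file then renders the host fields for you.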
Another way to accomplish this is with Kustomize, which is what I would personally recommend. Kustomize is like Helm in that it produces customized manifests from generic ones, but it doesn't do so through templating. Kustomize is unique in that it performs merge patches between YAML or JSON files at build time. These patches are referred to as overlays, so Kustomize is often described as an overlay engine to differentiate it from traditional templating engines. The reason is that Kustomize can be used with recursive directory trees of bases and overlays, which makes it much more scalable for environments where dozens, hundreds, or thousands of manifests might need to be generated from generic boilerplate.
So how do we do this? Well, with Kustomize you would first define a kustomization.yml file. Within it you would define your Resources. In this case, myingress:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- myingress.yml
So create an example directory and make a subdirectory called base inside it. Create ./example/base/kustomization.yml and populate it with the kustomization above. Now create a ./example/base/myingress.yml file and populate it with the example myingress file you gave above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - config.MY_DOMAIN
    secretName: mytls
  rules:
  - host: config.MY_DOMAIN
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 3000
Now we need to define our first overlay. We'll create two different domain configurations to show how overlays work. First create a ./example/overlays/domain_a directory and create a kustomization.yml file within it with the following contents:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- ing_patch.yml
configMapGenerator:
- name: config-a
  literals:
  - MY_DOMAIN=domain.a.com
At this point we have referenced ing_patch.yml and config-a in this file. ing_patch.yml will serve as our Ingress patch and config-a will serve as our ConfigMap. However, in this case we're taking advantage of a Kustomize feature known as a configMapGenerator rather than manually creating ConfigMap files for single literal key:value pairs.
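For reference, the generator will emit a ConfigMap shaped roughly like this; the hash suffix is a placeholder, Kustomize appends the real content hash at build time:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-a-<hash>   # Kustomize appends a hash of the data
data:
  MY_DOMAIN: domain.a.com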
Now that we have done this, we have to actually make our first patch! Since the deltas in your ingress are pretty small, it's not that hard. Create ./example/overlays/domain_a/ing_patch.yml and populate it with:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - domain.a.com
  rules:
  - host: domain.a.com
Perfect, you have created your first overlay. Now you can use kubectl or kustomize to generate your resultant manifest to apply to the Kubernetes API Server.
Kubectl Build: kubectl kustomize ./example/overlays/domain_a
Kustomize Build: kustomize build ./example/overlays/domain_a
Run one of the above build commands and review the STDOUT produced in your terminal. Notice how it contains two resources, the myingress Ingress and the generated config-a ConfigMap, and how myingress contains the domain configuration present in your overlay's patch?
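The Ingress in that output should carry the overlay's domain in both host fields, roughly as follows; exact field ordering and list-merge behaviour can vary by Kustomize version, so verify that secretName survives the patch:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: domain.a.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 3000
  tls:
  - hosts:
    - domain.a.com
    secretName: mytls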
So, at this point you're probably asking: why does Kustomize exist as a separate tool if kubectl supports the feature by default? Kustomize started as an external project, and the standalone Kustomize binary often runs a newer release than the version bundled into kubectl.
The next step is to create a second overlay. So go ahead and cp your first overlay over: cp -r ./example/overlays/domain_a ./example/overlays/domain_b.
Now that you have done that, open ./example/overlays/domain_b/ing_patch.yml in a text editor and change the contents to look like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  tls:
  - hosts:
    - domain.b.com
  rules:
  - host: domain.b.com
Save the file and then build your two separate overlays:
kustomize build ./example/overlays/domain_a
kustomize build ./example/overlays/domain_b
Notice how each generated stream of STDOUT varies based on the patch present in the overlay directory? You can continue to abstract this pattern by making your bases overlays of other bases, or by making your overlays the bases for other overlays. Doing so allows you to scale this project in extremely powerful and efficient ways. Apply them to your API server if you wish:
kubectl apply -k ./example/overlays/domain_a
kubectl apply -k ./example/overlays/domain_b
This is only the beginning of Kustomize, really. As you might have guessed after seeing the configMapGenerator field in each overlay's kustomization.yml, Kustomize has a lot of features baked in. It can add common labels to all of your resources, override their namespaces or container image information, and more.
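For example, a single kustomization.yml could compose a previous overlay as its base and apply several of those transformations at once; this is a hedged sketch for a hypothetical ./example/overlays/domain_a_prod directory, and the namespace, label, and image values are made up:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../domain_a            # an overlay can itself serve as a base
namespace: production    # override the namespace on every resource
commonLabels:
  environment: prod      # add a label to every resource
images:
- name: myservice
  newTag: "1.2.3"        # retag this image wherever a pod spec references it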
I hope this helps. Let me know if you have any other questions.

Related

Patching multiple resources with Kustomize

I have a kustomization.yaml file defined which consists of two resources that are downloaded from a git repo. Both of these resources want to create a Namespace with the same name, which causes an error when I try to build the final kustomization: may not add resource with an already registered id: ~G_v1_Namespace|~X|rabbitmq-system. I tried using patches to get rid of this unwanted Namespace; however, it looks like that only works if ONE resource is defined. As soon as I add the other resource, it stops working.
bases:
- ../../base
namespace: test
resources:
- https://github.com/rabbitmq/cluster-operator/releases/download/v1.13.0/cluster-operator.yml
- https://github.com/rabbitmq/messaging-topology-operator/releases/download/v1.6.0/messaging-topology-operator.yaml
patchesStrategicMerge:
- |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: rabbitmq-system
  $patch: delete
What I think is happening is that Kustomize loads the resources first, finds two identical Namespaces defined, and doesn't take the patches into consideration. Can I fix this behavior somehow?

What is the role of ControllerManagerConfig generated in kubebuilder

I use kubebuilder to quickly develop a k8s operator, and now I save the YAML built by kustomize to a file in the following way:
create: manifests kustomize ## Create chart
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/default --output yamls
I found a configmap, but it is not referenced by other resources.
apiVersion: v1
data:
  controller_manager_config.yaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 31568e44.ys7.com
kind: ConfigMap
metadata:
  name: myoperator-manager-config
  namespace: myoperator-system
I am a little curious about what it does. Can I delete it? I really appreciate any help with this.
It is another way to provide controller configuration. See https://book.kubebuilder.io/component-config-tutorial/custom-type.html for details.
However, for it to take effect, it has to be mounted into your deployment and the file path has to be provided via the --config flag (the name of the flag can be different in your case, but "config" is what the tutorial uses; it depends on your code).
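A hedged sketch of what that wiring could look like in the manager Deployment; the mount path and flag name are assumptions based on the tutorial, not taken from your project:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myoperator-controller-manager
  namespace: myoperator-system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - name: manager
        image: controller:latest
        args:
        - --config=/controller_manager_config.yaml   # flag name as used in the tutorial
        volumeMounts:
        - name: manager-config
          mountPath: /controller_manager_config.yaml
          subPath: controller_manager_config.yaml
      volumes:
      - name: manager-config
        configMap:
          name: myoperator-manager-config             # the ConfigMap you found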

ConfigMap that can reference current Namespace

I'm working with a Pod (Shiny Proxy) that talks to the Kubernetes API to start other pods. I want to make this generic, so I don't want to hardcode the namespace (because I intend to have multiple of these, deployed probably as an OpenShift Template or similar).
I am using Kustomize to set the namespace on all objects. Here's what my kustomization.yaml looks like for my overlay:
bases:
- ../../base
namespace: shiny
commonAnnotations:
  technical_contact: A Local Developer <somedev@example.invalid>
Running Shiny Proxy and having it start the pods I need it to (I have service accounts and RBAC already sorted) works, so long as in the configuration for Shiny Proxy I specify (hard-code) the namespace that the new pods should be generated in. The default namespace that Shiny Proxy will use is (unfortunately) 'default', which is inappropriate for my needs.
Currently for the configuration I'm using a ConfigMap (perhaps I should move to a Kustomize ConfigMapGenerator)
The ConfigMap in question is currently like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    ...
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    ...
The above works, but 'shiny' is hardcoded; I would like to be able to do something like the following:
namespace: { .metadata.namespace }
But this doesn't appear to work in a ConfigMap, and I don't see anything in the documentation that would lead me to believe that it would, or that a similar thing is possible within the ConfigMap machinery.
Looking over the Kustomize documentation doesn't fill me with clarity either, particularly as the configuration file is essentially plain text (and not a YAML document as far as the ConfigMap is concerned). I've seen some use of vars, but https://github.com/kubernetes-sigs/kustomize/issues/741 leads me to believe that's a non-starter.
Is there a nice declarative way of handling this? Or should I be looking to have the templating smarts happen within the container, which seems kinda wrong to me, but I am still new to Kubernetes (and OpenShift)
I'm using CodeReady Containers 1.24 (OpenShift 4.7.2) which is essentially Kubernetes 1.20 (IIRC). I'm preferring to keep this fairly well aligned with Kubernetes without getting too OpenShift specific, but this is still early days.
Thanks,
Cameron
If you don't want to hard-code specific data in your manifest files, you can consider using Kustomize plugins. In this case, the SedTransformer plugin may be useful. This is an example plugin, maintained and tested by the kustomize maintainers, but not built into kustomize.
As you can see in the Kustomize plugins guide:
Kustomize offers a plugin framework allowing people to write their own resource generators and transformers.
For more information on creating and using Kustomize plugins, see Extending Kustomize.
I will create an example to illustrate how you can use the sedtransformer plugin in your case.
Suppose I have a shiny-proxy ConfigMap:
NOTE: I don't specify a namespace, I use namespace: NAMESPACE instead.
$ cat cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shiny-proxy
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: NAMESPACE
    something_else:
      something: something
To use the sedtransformer plugin, we first need to create the plugin’s configuration file which contains a YAML configuration object:
NOTE: In argsOneLiner: I specify that NAMESPACE should be replaced with shiny.
$ cat sedTransformer.yaml
apiVersion: someteam.example.com/v1
kind: SedTransformer
metadata:
  name: sedtransformer
argsOneLiner: s/NAMESPACE/shiny/g
Next, we need to put the SedTransformer Bash script in the right place.
When loading, kustomize will first look for an executable file called
$XDG_CONFIG_HOME/kustomize/plugin/${apiVersion}/LOWERCASE(${kind})/${kind}
I create the necessary directories and download the SedTransformer script from GitHub:
NOTE: The downloaded script needs to be executable.
$ mkdir -p $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ cd $HOME/.config/kustomize/plugin/someteam.example.com/v1/sedtransformer
$ wget https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer
$ chmod a+x SedTransformer
Finally, we can check if it works as expected:
NOTE: To use this plugin, you need to provide the --enable-alpha-plugins flag.
$ tree
.
├── cm.yaml
├── kustomization.yaml
└── sedTransformer.yaml
0 directories, 3 files
$ cat kustomization.yaml
resources:
- cm.yaml
transformers:
- sedTransformer.yaml
$ kustomize build --enable-alpha-plugins .
apiVersion: v1
data:
  application_yml: |
    container-backend: kubernetes
    kubernetes:
      url: https://kubernetes.default.svc.cluster.local:443
      namespace: shiny
    something_else:
      something: something
kind: ConfigMap
metadata:
  name: shiny-proxy
Using the sedtransformer plugin can be especially useful if you want to replace NAMESPACE in a number of places.
I found the easiest way of doing this was to use an entrypoint script in the container that reads the namespace from the service account credentials automatically mounted into the container and exposes it as an environment variable:
# Read the pod's namespace from the mounted service account and expose it as an env var
export SHINY_K8S_NAMESPACE=`cat /run/secrets/kubernetes.io/serviceaccount/namespace`
cd /opt/shiny-proxy/working
exec java ${JVM_OPTIONS} -jar /opt/shiny-proxy/shiny-proxy.jar
Within the application configuration (shiny-proxy), it supports the use of environment variables in its configuration file, so I can refer to the pod's namespace using ${SHINY_K8S_NAMESPACE}
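That makes the relevant fragment of the ConfigMap from the question look roughly like this (a sketch based on the configuration shown earlier):
application_yml: |
  ...
  container-backend: kubernetes
  kubernetes:
    url: https://kubernetes.default.svc.cluster.local:443
    namespace: ${SHINY_K8S_NAMESPACE}
  ...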
Although, I've just now seen the idea of a fieldRef (from https://docs.openshift.com/enterprise/3.2/dev_guide/downward_api.html), which would be generalisable to things other than just the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
spec:
  containers:
  - name: env-test-container
    image: gcr.io/google_containers/busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: MY_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  restartPolicy: Never

How to deploy a bunch of yaml files?

I would like to deploy a bunch of YAML files (https://github.com/quay/quay/tree/master/deploy/k8s) on my Kubernetes cluster and would like to know what the best approach is to deploy them all at once.
You can apply a whole folder directly:
kubectl create -f ./<foldername>
kubectl apply -f ./<foldername>
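For the quay manifests linked in the question, that might look like the following; the clone path is an assumption based on the repository layout:
git clone https://github.com/quay/quay.git
kubectl apply -f ./quay/deploy/k8s/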
You can also pass multiple files in one command:
kubectl apply -f test.yaml,test-1.yaml
You can also merge all the YAML files into a single file and manage it further. Merge YAML documents using the --- separator. For example:
apiVersion: v1
kind: Service
metadata:
  name: test-data
  labels:
    app: test-data
spec:
  ports:
  - name: http
    port: 80
    targetPort: 9595
  - name: https
    port: 9595
    targetPort: 9595
  selector:
    app: test-data
    tier: frontend
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  ports:
  - name: http
    port: 80
    targetPort: 9595
  - name: https
    port: 9595
    targetPort: 9595
  selector:
    app: test-app
    tier: frontend
kubectl apply -f <folder-name>
A simple way to deploy all files in a given folder.
You may consider using Helm (The package manager for Kubernetes). Just like we use yum or apt-get for Linux, we use helm for k8s.
Using Helm, you can deploy multiple resources (bunch of YAMLs) in one go. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Also, you don't need to combine all your YAMLs; they can remain separate as part of a given chart. Besides, if one chart depends on another, you can use the helm dependency feature.
The reason I use Helm is that whenever I deploy a chart, Helm tracks it as a release. Any change to a chart gets a new release revision. This way, upgrades (or rollbacks) become very easy, and you can confidently say what went out as part of a given release.
Also, if you have different microservices that have stuff in common, Helm provides a feature called library charts, with which you can create definitions that can be re-used across charts, thus keeping your charts DRY.
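As a usage sketch (the chart path and release name here are placeholders):
helm install myapp ./myapp-chart    # first deploy creates release revision 1
helm upgrade myapp ./myapp-chart    # each change produces a new revision
helm history myapp                  # see what went out in each revision
helm rollback myapp 1               # roll back to revision 1 if needed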
Have a look at this introductory video: https://www.youtube.com/watch?v=Zzwq9FmZdsU&t=2s
I would advise combining the YAMLs into one. The purpose of deployment and service YAMLs is to deploy your application onto the cluster in one fell swoop. You can define many deployments and services within a single file. In your case, a tool such as Kustomize will help you combine them. Kustomize functionality comes built into kubectl.
You can combine your YAMLs into one multi-resource file using the --- separator, i.e.:
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  ...
---
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  ...
Then make a kustomization.yaml which combines all your multi-resource yamls. There is a good guide on this here: https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a
The Kubernetes documentation is here: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
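A minimal sketch of such a kustomization.yaml, where the file names are placeholders for your combined manifests:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- services.yaml      # a multi-resource file like the one above
- deployments.yaml   # any other combined manifests
You can then apply everything with kubectl apply -k .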

Is kustomize for k8s backward chaining?

The README for kustomize says that
It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.
Does this analogy extend beyond the fact that files are used to declare what is needed?
Or, is kustomize backward chaining like make, in that it reads all of its input before working out what it has to do, rather than working sequentially through the input like bash stepping through a shell script?
EDIT: Jeff Regan, of the Kustomize team at Google, explains the model for the way kustomize works towards the beginning of his talk Kustomize: Kubernetes Configuration Customization. He also shows how kustomize may be daisy chained, so the output of one kustomize build may serve as the input to another. It seems that, as pointed out by ITChap below, kustomize starts by gathering all the resources referenced in the kustomization.yml file in the base dir. It then executes a series of steps to perform the required substitutions and transformations, repeating the substitution/transformation step as often as needed to complete, and finally spits out the generated YAML on stdout. So I would say that it is not backward chaining like make but rather somewhere in between. HTH.
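As a rough sketch of that daisy chaining, where the directory names are placeholders:
# Build one kustomization and feed its output into another as a plain resource
kustomize build ./first-stage > ./second-stage/first-stage-output.yml
# ./second-stage/kustomization.yml lists first-stage-output.yml under resources:
kustomize build ./second-stage | kubectl apply -f -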
What I have noticed so far is that kustomize will first accumulate the content of all the base resources and then apply the transformations from your kustomization.yml files. If you have multiple levels of overlays, it doesn't seem to pass the result from one level to the next.
Let's consider the following:
./base/pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
./base/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pod.yml
./overlays/l1/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
nameSuffix: "-l1"
./overlays/l2/kustomization.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../l1
nameSuffix: "-l2"
When running kustomize build overlays/l2 you are going to get a pod named test-l1-l2 as expected.
But if you try to patch the base pod you would have to reference the pod using:
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: test
  path: patch.yml
in your ./overlays/l1/kustomization.yml but also in ./overlays/l2/kustomization.yml. At the time the patch of l2 is applied, the referenced resource is still test and not test-l1.
I don't know kustomize well enough to understand the intention behind this but these are my observations. Hope it answers your question.
PS: this might change with https://github.com/kubernetes-sigs/kustomize/issues/1036