Patch all containers in a Deployment with Kustomize - kubernetes

I need to add volumes and related mount points to all containers of a deployment with Kustomize.
I'm trying a JSON patch for that:
- op: add
  path: /spec/template/spec/volumes/-
  value: [...]
---
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/-
  value: [...]
This only works on one container (and somehow adds it to the second container when there are two, despite the index 0). It will also fail with "doc missing value" if there is no existing volumes or volumeMounts entry yet.
If I try to use regular patches, I need to specify the container name. Since I have 8 deployments with different container names, I'm looking for something flexible enough to avoid repeating the patch 8 times.
The Kustomize docs show an example of patching multiple Deployments here: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md but that doesn't cover adding one resource to all containers of each Deployment.
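For illustration, this is roughly the kind of kustomization.yaml I'm experimenting with (resource and volume names are placeholders, and this assumes a reasonably recent kustomize); the target selector lets one patch hit every Deployment, but it still only addresses a single container index:
resources:
- deployment-1.yaml
- deployment-2.yaml
patches:
- target:
    kind: Deployment
  patch: |-
    - op: add
      path: /spec/template/spec/volumes/-
      value:
        name: extra-config
        emptyDir: {}
    # only touches container 0, and the '-' append still fails
    # when the volumeMounts list doesn't exist yet
    - op: add
      path: /spec/template/spec/containers/0/volumeMounts/-
      value:
        name: extra-config
        mountPath: /etc/extra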

Related

Facing "The Pod "web-nginx" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers" applying pod with initcontainers

I was trying to create a file before the application comes up in the Kubernetes cluster, using initContainers.
But when I set up the pod.yaml and try to apply it with "kubectl apply -f pod.yaml", it throws the error quoted in the title.
Like the error says, you cannot update a Pod by adding or removing containers. To quote the documentation (https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement):
Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch, and replace have some limitations
This is because you usually don't create Pods directly; instead you use Deployments, Jobs, StatefulSets (and more), which are higher-level resources that define Pod templates. When you modify the template, Kubernetes simply deletes the old Pod and then schedules the new version.
In your case:
You could delete the Pod first, then create it again with the new spec you defined. But take into account that the Pod may be scheduled on a different node of the cluster (if you have more than one) and may get a different IP address, since Pods are disposable entities.
Or change your definition to a slightly more complex one, a Deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), which can be changed as desired; each time you change its definition, the old Pod is removed and a new one is scheduled.
From the spec of your Pod, I see that you are using a volume to share data between the init container and the main container. This is the optimal way, but you don't necessarily need to use a hostPath. If the only need for the volume is to share data between the init container and the other containers, you can simply use the emptyDir type, which acts as a temporary volume that can be shared between containers and that is cleaned up when the Pod is removed from the cluster for any reason.
You can check the documentation here: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
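As a minimal sketch of those two suggestions combined (a Deployment instead of a bare Pod, emptyDir instead of hostPath; names, images and the generated file are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-nginx
  template:
    metadata:
      labels:
        app: web-nginx
    spec:
      initContainers:
      - name: prepare-file
        image: busybox
        # the init container writes the file the app expects into the shared volume
        command: ["sh", "-c", "echo hello > /work-dir/index.html"]
        volumeMounts:
        - name: workdir
          mountPath: /work-dir
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: workdir
        # temporary volume shared by the containers, removed together with the Pod
        emptyDir: {}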

Kustomize: How to set metadata.name of a resource we need to patch [duplicate]

This question already has answers here:
How can I create a namespace with kustomize?
(6 answers)
Closed 1 year ago.
How do you set metadata.name from a variable value when kustomizing a base resource?
For example, when creating a namespace whose name we don't know in advance, but which we still need to "kustomize", e.g. by adding commonLabels to it?
The way Kustomize operates is that you kustomize a base resource that is already defined with an apiVersion, a kind and a metadata.name, so I haven't found a way to set the final resource name afterwards.
If I understand you correctly, there are a few options depending on your needs:
Use Helm
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Use PodPreset
You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables etc into pods at creation time.
Use ConfigMaps
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
You can modify your deployments dynamically and then run the kubectl replace -f FILE command, or use the kubectl edit DEPLOYMENT command to apply the changes automatically.
Please let me know if that helped.

Install Order in K8s

I have a set of YAML files of different kinds:
1 PVC
1 PV (the above PVC claims this PV)
1 Service
1 StatefulSet object (the above Service is for this StatefulSet)
1 ConfigMap (the above StatefulSet uses this ConfigMap)
Does the install order of these objects matter when bringing up an application using them?
If you do kubectl apply -f dir on a directory containing all of those files then it should work, at least if you have the latest version as there have been bugs raised and addressed in this area.
However, there are some dependencies that aren't hard dependencies and are still under discussion. For this reason, some people choose to order the resources themselves, or use a deployment tool like Helm, which deploys resources in a certain order.

How to make an environment variable different across two pods of the same deployment in kubernetes?

Based on this it is possible to create environment variables that are the same across all the pods of the deployment that you define.
Is there a way to instruct Kubernetes deployment to create pods that have different environment variables?
Use case:
Let's say that I have a monitoring container and I want to create 4 replicas of it. This container has a service that sends mail if an environment variable says so. E.g., if the env var IS_MASTER is true, then the service proceeds to send those e-mails.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
  template:
    ...
    spec:
      containers:
      - env:
        - name: IS_MASTER
          value: <------------- True only in one of the replicas
(In my case I'm using Helm, but the same can be done without Helm as well.)
What you are looking for is, as far as I know, more like an anti-pattern than impossible.
From what I understand, you seem to be looking to deploy a scalable/HA monitoring platform that wouldn't mail X times on alerts, so you can either make a sidecar container that will talk to its siblings and "elect" the master-mailer (a StatefulSet will make it easier in this case), or just separate the mailer from the monitoring and make them talk to each other through a Service. That would allow you to load-balance both monitoring and mailing separately.
monitoring-1 \                / mailer-1
monitoring-2 ---> mailer.svc --- mailer-2
monitoring-3 /                \ mailer-3
Any mailing request will be handled by one and only one mailer from the pool, but that's assuming your Monitoring Pods aren't all triggered together on alerts... If that's not the case, then regardless of your "master" election for the mailer, you will have to tackle that first.
And by tackling that first I mean adding a master-election logic to your monitoring platform, to orchestrate master fail-overs on events, there are a few ways to do so, but it really depends on what your monitoring platform is and can do...
Although, if your replicas are just there to extend compute power somehow and your master is expected to be static, then simply use a StatefulSet, and add a one liner at runtime doing if hostname == $statefulset-name-0 then MASTER, but I feel like it's not the best idea.
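A rough sketch of that one-liner (StatefulSet name, image and entrypoint path are placeholders):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: monitoring
spec:
  serviceName: monitoring
  replicas: 4
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      containers:
      - name: monitoring
        image: your-monitoring-image   # placeholder
        command: ["sh", "-c"]
        args:
        # the ordinal-0 Pod becomes the "master" mailer, the others stay passive
        - |
          if [ "$(hostname)" = "monitoring-0" ]; then
            export IS_MASTER=true
          else
            export IS_MASTER=false
          fi
          exec /entrypoint.sh    # placeholder for the image's real entrypoint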
By definition, each Pod in a Deployment is identical to the other replicas, so this is not possible in the YAML definition.
An optional solution would be to override the Pod's command, have it compute the value of the variable, set it (export IS_MASTER=${resolved_value}), and then trigger the default entrypoint for the container.
It means you'll have to figure out a logic to implement this (i.e. how does the pod know it should be IS_MASTER=true?). This is an implementation detail that can be done with a DB or other shared common resource used as a flag or semaphore.
All the Pod replicas in the deployment will have the same environment variables and no unique value to identify a particular Pod. Creating multiple Deployments is a better solution.
Not sure why; the OP asks for only one Deployment. One solution is to use StatefulSets. The Pod names would then be web-0, web-1, web-2 and so on. In the code, check the hostname: if it is web-0, send the emails, otherwise do something else.
It's a dirty solution, but I can't think of a better solution than creating multiple deployments.
One other solution is to use the same Helm chart for both cases and run one Helm deployment for each case. You can override env variables with Helm (using --set foo.deployment.isFirst="0" or "1").
Please note that Helm/K8s will not allow you to POST the very same configuration twice.
So you will have to conditionally apply some Kubernetes-specific configuration (Secrets, ConfigMaps, etc.) on the first deployment only.
{{- if eq .Values.foo.deployment.isFirst "1" }}
...
...
{{- end }}

Automatic subdirectories in Kubernetes configmaps?

(A very similar question was asked about 2 years ago, though it was specifically about secrets, I doubt the story is any different for configmaps... but at the least, I can present the use case and why the existing workarounds aren't viable for us.)
Given a simple, cut-down deployment.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        volumeMounts:
        - name: vol
          mountPath: /app/Configuration
      volumes:
      - name: vol
        configMap:
          name: configs
and the matching configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configs
  labels:
    k8s-app: example
data:
  example1.json: |-
    {
      "key1": "value1"
    }
  example2.json: |-
    {
      "key2": "value2"
    }
the keys in configmap.yaml, whatever they may be, are simply created as files, without deployment.yaml needing to be modified or have any specifics other than the mountPath.
The problem is that the actual structure has subfolders to handle region-specific values that override the root ones:
Configuration \ example1.json
Configuration \ example2.json
Configuration \ us \ example1.json
Configuration \ us \ ca \ example2.json
The number and nature of these could obviously vary, for as many different countries and regions imaginable and for each separately configured module. The intent was to provide a tool to the end user that would allow them to set up and manage these configurations, which would behind the scenes automatically generate the configmap.yaml and update it in kubernetes.
However, unless there's a trick I haven't found yet, this seems to be outside of kubernetes's abilities, in a couple ways.
First of all, there is no syntax that allows one to specify configmap keys that are directories, nor include a subdirectory path in a key:
data:
  # one possible approach (currently complains that it doesn't validate '[-._a-zA-Z0-9]+')
  /us/example1.json: |-
    {
      "key1": "value1"
    }
  # another idea; this obviously results in 'invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"'
  us:
    example2.json: |-
      {
        "key2": "value2"
      }
So what are our options to accomplish this?
Wellll, we could map the keys to specific locations using the items: -key: path: approach in the deployment.yaml's volumes: -configMap: node (see the sketch after this list),
and/or generate several nodes in the deployment.yaml's volumeMounts: node,
using either subPath: (which is basically the same as using items: -key: -path: in the volumes: configMap:),
or individual separate configmaps for each subdirectory, and mounting them all as different volumes in the deployment.yaml.
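To spell out the first of those options with the example files above (the flattened key names are placeholders, since '/' isn't allowed in keys), deployment.yaml would have to grow an entry like this for every single file:
volumes:
- name: vol
  configMap:
    name: configs
    items:
    # each file has to be listed explicitly; the path may contain subdirectories
    - key: example1.json
      path: example1.json
    - key: us_example1.json
      path: us/example1.json
    - key: us_ca_example2.json
      path: us/ca/example2.json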
All of these methods would require massive and incredibly verbose changes to the deployment.yaml, leaking out knowledge it shouldn't have any reason to know about, making it mutable and continually re-generated rather than static, complicating rolling out settings updates to deployed pods, etc. etc. etc. It's just Not Good. And all of that just to have mapped one directory, just because it contains subdirectories...
Surely this CAN'T be the way it's SUPPOSED to work? What am I missing? How should I proceed?
From a "container-native" perspective, having a large file system tree of configuration files that the application processes at startup to arrive at its canonical configuration is an anti-pattern. Better to have a workflow that produces a single file, which can be stored in a ConfigMap and easily inspected in its final form. See, for instance, nginx ingress.
But obviously not everyone is rewriting their apps to better align with the kubernetes approach. The simplest way then to get a full directory tree of configuration files into a container at deploy time is to use initContainers and emptyDir mounts.
Package the config file tree into a container (sometimes called a "data-only" container), and have the container start script just copy the config tree into the emptyDir mount. The application can then consume the tree as it expects to.
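The relevant part of such a Deployment might look like this (image names are placeholders, and the config image needs a shell and cp available, e.g. a busybox base rather than scratch):
spec:
  template:
    spec:
      initContainers:
      - name: copy-configs
        # "data-only" image that carries the Configuration tree
        image: registry.example.com/app-config:1.0
        command: ["sh", "-c", "cp -r /Configuration/. /config/"]
        volumeMounts:
        - name: config
          mountPath: /config
      containers:
      - name: example
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: config
          mountPath: /app/Configuration
      volumes:
      - name: config
        emptyDir: {}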
Depending on the scale of your config tree, another viable option might be to simulate a subtree with, say, underscores instead of slashes in the file "paths" inside the configmap. This loses general filesystem performance (which should never be a problem if you are only reading configs) and forces you to rewrite a little bit of your application's code (file pattern traversal instead of directory traversal when accessing configs), but it should solve your use case at a fairly cheap price.
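For example, with double underscores standing in for the path separator (the separator choice is arbitrary, and the values are just illustrative):
data:
  example1.json: |-
    { "key1": "value1" }
  us__example1.json: |-
    { "key1": "us override" }
  us__ca__example2.json: |-
    { "key2": "ca override" }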
A few workarounds:
have a data only container built with the data on it...
FROM scratch
... # copy data here
then add it as a sidecar mounting the volume on the other container...
create a tar ball from the config, convert it to a configmap, mount in a container and change the container command to untar the config before start...
rename the files with some special char instead of /, like us#example.json, and use a script to mv them back into place at startup.
All of this is very hacky... The best scenario is to refactor them to live in a flat folder and create the ConfigMap with something like kustomize:
kustomize edit add configmap my-configmap --from-file='./*.json'
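Or the equivalent generator in kustomization.yaml directly (the ConfigMap name is illustrative; note that kustomize appends a content hash to the generated name by default):
configMapGenerator:
- name: my-configmap
  files:
  - example1.json
  - example2.json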