Is there a way to make kustomize merge a heading in a YAML file instead of completely replacing that heading with the overlay YAML?

I have a base YAML and an overlay YAML, and I want to merge the two using kustomize. When I run kustomize build I do get output, but it is not what I expect: instead of filling in the missing information from the overlay, kustomize replaces the whole heading of the base with the one from the overlay ("containers" in my case). The behaviour I need is for it to fill in the missing information from the overlay instead of replacing the existing content.
base yaml:
apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - name: temp
    image: temp
    imagePullPolicy: Always
    command: temp
    args:
      temp
    envFrom:
    - configMapRef:
        name: temp
    volumeMounts:
  volumes:
overlay yaml:
apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
Expected result after kustomize build:
apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - name: temp
    image: temp
    imagePullPolicy: Always
    command: temp
    args:
      ["sleep 9000"]
    envFrom:
    - configMapRef:
        name: temp
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
What I'm getting:
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: temp
  name: temp
spec:
  containers:
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
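The kustomization files themselves are not shown above; roughly, they look like this (directory and file names simplified), with the overlay applied as a patch on top of the base:

base/kustomization.yaml:

resources:
- pod.yaml

overlay/kustomization.yaml:

bases:
- ../base
patchesStrategicMerge:
- overlay.yaml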

In your base.yaml the value for the key containers is a sequence (node). In your overlay.yaml the value for the key containers is a mapping. Of course those two cannot be merged.
Not knowing kustomize at all, it seems logical that because those cannot be merged, the overlay replaces that whole sequence node with the mapping node. Your expectation that the mapping of the overlay is merged with a mapping that happens to be an item (in this case the only item) in the sequence of the base seems completely arbitrary. Which item would need to be taken if there had been multiple items? The first? The last? The last one before item five that is a mapping?
If your overlay.yaml looked like:
apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - volumeMounts: # < created a sequence item here by inserting an item indicator
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
then I could understand your expectation (and maybe the above change can be applied to make it work, I don't have a way to test).
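One more note, again without being able to test it: if kustomize applies the overlay as a strategic-merge patch, it should merge items of the containers list by their name field. In that case an overlay that repeats the container's name and only adds the volumeMounts ought to be merged into the existing container instead of replacing the whole list:

apiVersion: v1
kind: Pod
metadata:
  name: temp
spec:
  containers:
  - name: temp
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath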

I find that the easiest way to deal with this is to use JSONPatch. I would remove the empty fields of the base as in:
apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - name: temp
    image: temp
    imagePullPolicy: Always
    command: temp
    args:
      temp
    envFrom:
    - configMapRef:
        name: temp
Then in your overlay create a new patch, for example named create_volume.yml:
- op: add
  path: /spec/volumes/-
  value:
    name: temppathname
    hostPath:
      type: temp
      path: temppath
- op: add
  path: /spec/containers/0/volumeMounts/-
  value:
    name: temppathname
    mountPath: /temppath
And finally in the overlay kustomization.yml add:
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: temp
  path: create_volume.yml
If it doesn't work you might have to play with the API group in the patch target. I have only patched Deployments so far, and there my target would be:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: temp
  path: create_volume.yml
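For completeness, the whole overlay kustomization.yml would then look roughly like this (the relative path to the base is an assumption on my part, adjust it to your layout):

bases:
- ../base
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: temp
  path: create_volume.yml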

Related

Kubernetes Deployment: Is there a way to copy a file from a mount directory to the root directory / after the deployment manifest is applied

The problem is that the mount path cannot be /, but I need to move the demo.txt file into / once the container is created.
I have this sample deployment.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo-configfile
data:
  myfile: |
    This my demo file's text info
    This is just dummy text
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      name: demo-configmaps-test
  template:
    metadata:
      labels:
        name: demo-configmaps-test
    spec:
      containers:
      - name: demo-container
        image: alpine
        imagePullPolicy: Always
        command: ['sh', '-c', 'sleep 36000']
        volumeMounts:
        - name: demo-files
          mountPath: /demo/files
      volumes:
      - name: demo-files
        configMap:
          name: demo-configfile
          items:
          - key: myfile
            path: demo.txt
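One way to do this (a sketch, not from the original thread): keep the ConfigMap mounted where it is allowed and copy the file to / with a postStart lifecycle hook once the container has started, e.g.:

      containers:
      - name: demo-container
        image: alpine
        imagePullPolicy: Always
        command: ['sh', '-c', 'sleep 36000']
        lifecycle:
          postStart:
            exec:
              # copies the mounted file into the root directory right after start
              command: ['sh', '-c', 'cp /demo/files/demo.txt /demo.txt']
        volumeMounts:
        - name: demo-files
          mountPath: /demo/files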

Replacing properties file in container using configmaps in kubernetes

I am trying to replace a properties file in a container using a ConfigMap and a volumeMount in the deployment.yaml file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
      - name: agent-2
        image: agent:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
          name: "applictaion-conf"
          subPath: "application.properties"
      volumes:
      - name: applictaion-conf
        configMap:
          name: dddeagent-configproperties
          items:
          - key: "application.properties"
            path: "application.properties"
Below is a snippet of the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000
After deployment, the complete folder structure is mounted instead of the single file, because of which all the files in the existing folder are deleted.
Version - 1.21.13
I checked this configuration and there are a few misspellings. You are referring to the ConfigMap "dddeagent-configproperties", but you have defined a ConfigMap object named "agent-configp".
configMap:
  name: dddeagent-configproperties
Should be:
configMap:
  name: agent-configp
Besides that, there are a few indentation errors, so I will paste fixed files at the end of the answer.
To the point of your question: your approach is correct, and when I tested it in my setup everything worked properly without any issues. I created a sample pod and mounted the ConfigMap the same way you are doing it (into a directory that already contains other files). The ConfigMap was mounted as a file, as it should be, and the other files were still available in the directory.
Mounts:
  /app/upload/test-folder/file-1 from application-conf (rw,path="application.properties")
Your approach is the same as described here.
Please double check that, on a pod without the ConfigMap mounted, the directory /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf really exists and the other files are there. As your image is not publicly available, I checked with the tomcat image, and the /usr/local/tomcat/webapps/ directory is empty. Note that even if this directory is empty, Kubernetes will create the agent/WEB-INF/classes/conf directories and the application.properties file there when you mount the file.
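A quick way to inspect that directory inside a running pod (the pod name here is a placeholder):

kubectl exec -it <pod-name> -- ls -la /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf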
Fixed deployment and ConfigMap files with good indentation and without misspellings:
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
      - name: agent-2
        image: agent:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
          name: "application-conf"
          subPath: "application.properties"
      volumes:
      - name: application-conf
        configMap:
          name: agent-configp
          items:
          - key: "application.properties"
            path: "application.properties"
ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=1000

Define/change Kubernetes SSH key file name in a YAML

I have a secret:
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
and deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-ssh-auth
          mountPath: /root/.ssh
      volumes:
      - name: secret-ssh-auth
        secret:
          secretName: secret-ssh-auth
          defaultMode: 0400
It creates a file at the path /root/.ssh/ssh-privatekey, while I want the file to be named /root/.ssh/id_rsa instead.
I know we can solve it by running a kubectl command, but I want to handle it inside the YAML file.
So, how can I do that in the YAML file?
Based on the Kubernetes documentation, the ssh-privatekey key is mandatory. In this case you can leave it empty via the stringData key, and then define another key under data, like this:
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -
data:
  id_rsa: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
Got the same problem, and resolved it by simply defining spec.volumes like this, which renames the key to the path value:
volumes:
- name: privatekey
  secret:
    secretName: private-key
    items:
    - key: ssh-privatekey
      path: id_rsa
    defaultMode: 384  # 384 decimal = 0600 octal
then refer to it inside the container definition:
containers:
- name: xxx
  volumeMounts:
  - name: privatekey
    mountPath: /path/to/.ssh

Unable to create Pod in Kubernetes, getting an error

I am learning Kubernetes and creating a Pod with a ConfigMap. I created a YAML file to create the Pod, but I am getting the error below:
error: error parsing yuvi.yaml: error converting YAML to JSON: YAML: line 13: mapping values are not allowed in this context
My configmap name: yuviconfigmap
YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: yuvipod1
spec:
  containers:
  - name: yuvicontain
    image: nginx
    volumeMounts:
    - name: yuvivolume
      mountPath: /etc/voulme
    volumes:
    - name: yuvivolume
      configMap:
        name: yuviconfigmap
The yaml is not as per what kubernetes expects. Below should work.
apiVersion: v1
kind: Pod
metadata:
  name: yuvipod1
spec:
  containers:
  - name: yuvicontain
    image: nginx
    volumeMounts:
    - name: yuvivolume
      mountPath: /etc/voulme
  volumes:
  - name: yuvivolume
    configMap:
      name: yuviconfigmap
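As a general tip (not part of the original answer), a client-side dry run catches this kind of parsing error before anything reaches the cluster:

kubectl apply --dry-run=client -f yuvi.yaml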

Is there a way to dynamically add values in deployment.yml files?

I have a deployment.yml file where I'm mounting the service's logging folder to a folder on the host machine.
The issue is that when I run multiple instances from the same deployment.yml file, e.g. when scaling up, all the instances log to the same file. Is there a way to solve this by dynamically creating a folder on the host machine based on the container ID or something similar? Any suggestions are appreciated.
My current deployment.yml file is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:6.8.6
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: "/etc/logstash/"
      - name: logs
        hostPath:
          path: "/var/logs/logstash"
There are some fields in Kubernetes which you can get dynamically, like node name, pod name, pod IP, etc. Refer to this doc for examples: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
Here is an example where you set the node name as an environment variable:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
You can change your deployment so that it creates a file with the node name added to its name; that way you get a different file on each node. The recommended approach is to create a DaemonSet instead of a Deployment, which will spawn one pod on each selected node (selection can be done using a node selector).
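Another option along the same lines (my sketch, not from the original answer): volumeMounts support subPathExpr, which expands an environment variable into the mount sub-path, so every pod writes its logs into its own sub-directory of the host path:

      containers:
      - name: logstash
        image: logstash:6.8.6
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: logs
          mountPath: /usr/share/logstash/logs/
          subPathExpr: $(POD_NAME)  # each pod logs under /var/logs/logstash/<pod-name>
      volumes:
      - name: logs
        hostPath:
          path: "/var/logs/logstash"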
You can use sed to dynamically substitute some values, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:6.8.6
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: {path}
      - name: logs
        hostPath:
          path: "/var/logs/logstash"
Now, to dynamically fill in the path, I simply run:
sed -i 's|{path}|"/etc/logstash/"|g' deployment.yml
In this way, you can substitute as many values as you want before deploying the file.
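A related approach (my sketch, not from the original answer) is to put an environment-variable placeholder in the manifest and render it with envsubst from GNU gettext instead of sed; the variable name here is just an example:

# deployment.yml contains:   path: "${LOG_CONFIG_PATH}"
export LOG_CONFIG_PATH=/etc/logstash/
envsubst < deployment.yml | kubectl apply -f -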