Is there a way to dynamically add values in deployment.yml files? - kubernetes

I have a deployment.yml file where I'm mounting the service's logging folder to a folder on the host machine.
The issue is that when I run multiple instances from the same deployment.yml file, for example when scaling up, all the instances log to the same file. Is there a way to solve this by dynamically creating a folder on the host machine based on the container ID or something similar? Any suggestions are appreciated.
My current deployment.yml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: logstash:6.8.6
          volumeMounts:
            - mountPath: /usr/share/logstash/config/
              name: config
            - mountPath: /usr/share/logstash/logs/
              name: logs
      volumes:
        - name: config
          hostPath:
            path: "/etc/logstash/"
        - name: logs
          hostPath:
            path: "/var/logs/logstash"

There are some fields in Kubernetes which you can get dynamically, like the node name, pod name, pod IP, etc. Refer to this doc for examples: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
Here is an example where you set the node name as an environment variable:
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
You can change your deployment in such a way that it creates a log file with the node name appended to it; this way you get a different file name on each node, as sketched below. The recommended approach is to use a DaemonSet instead of a Deployment, which spawns one pod on each selected node (selection can be done using a node selector).
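For illustration, here is a minimal sketch of that idea (the args override and the --path.logs flag are assumptions about how you start Logstash, not taken from your manifest): the node name exposed through the Downward API is expanded into the log path, so each node writes into its own subdirectory of the shared hostPath mount.
containers:
  - name: logstash
    image: logstash:6.8.6
    env:
      - name: MY_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
    # $(MY_NODE_NAME) is expanded by Kubernetes because the variable is defined in env above
    args: ["--path.logs", "/usr/share/logstash/logs/$(MY_NODE_NAME)"]
    volumeMounts:
      - mountPath: /usr/share/logstash/logs/
        name: logs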

You can use sed to substitute values dynamically.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: logstash:6.8.6
          volumeMounts:
            - mountPath: /usr/share/logstash/config/
              name: config
            - mountPath: /usr/share/logstash/logs/
              name: logs
      volumes:
        - name: config
          hostPath:
            path: {path}
        - name: logs
          hostPath:
            path: "/var/logs/logstash"
Now, when I want to fill in the path dynamically, I simply run:
sed -i 's|{path}|"/etc/logstash/"|g' deployment.yml
In this way, you can put as many values as you want before deploying the file.

Related

Replacing properties file in container using configmaps in kubernetes

I am trying to replace a properties file in a container using a ConfigMap and a volumeMount in the deployment.yaml file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
        - name: agent-2
          image: agent:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
              name: "applictaion-conf"
              subPath: "application.properties"
      volumes:
        - name: applictaion-conf
          configMap:
            name: dddeagent-configproperties
            items:
              - key: "application.properties"
                path: "application.properties"
Below is a snippet from the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000
After deployment, the complete folder structure is getting mounted instead of the single file, because of which all the existing files in that folder are getting deleted.
Kubernetes version: 1.21.13
I checked this configuration and there are a few misspellings. You are referring to the ConfigMap "dddeagent-configproperties", but you have defined a ConfigMap object named "agent-configp".
configMap:
  name: dddeagent-configproperties
Should be:
configMap:
  name: agent-configp
Besides that, there are a few indentation errors, so I will paste the fixed files at the end of the answer.
To the point of your question: your approach is correct, and as I tested in my setup, everything works properly without any issues. I created a sample pod that mounts the ConfigMap the same way you do (into a directory that contains other files). The ConfigMap was mounted as a file, as it should be, and the other files were still available in the directory.
Mounts:
/app/upload/test-folder/file-1 from application-conf (rw,path="application.properties")
Your approach is the same as described here.
Please double check that on a pod without the mounted ConfigMap the directory /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf really exists and other files are there. As your image is not publicly available, I checked with the tomcat image, and the /usr/local/tomcat/webapps/ directory is empty. Note that even if this directory is empty, Kubernetes will create the agent/WEB-INF/classes/conf directories and the application.properties file there when you mount the file.
Fixed Deployment and ConfigMap files with correct indentation and without misspellings:
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
        - name: agent-2
          image: agent:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
              name: "application-conf"
              subPath: "application.properties"
      volumes:
        - name: application-conf
          configMap:
            name: agent-configp
            items:
              - key: "application.properties"
                path: "application.properties"
Config file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=1000

How to fetch configmap from kubernetes pod

I have one Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
RUN chmod -R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads this config internally (Spring Boot functionality).
Now I want to separate this configuration using a ConfigMap. For that I simply created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
    compression:
      enabled: true
      mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
      min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment yml is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: ms-console
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: console-configmap
      imagePullSecrets:
        - name: regcresd
My question is: because I commented out the config folder in the Dockerfile, the pods throw an exception at startup since there is no configuration. How do I inject this console-configmap into my deployment? I have shared what I already tried, but I keep getting the same issue.
First of all, how are you consuming the .yml file in your application? If you consume its contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents as a config file inside the container. In that case you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: ms-console
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /app/config
              name: config
      volumes:
        - name: config
          configMap:
            name: console-configmap
      imagePullSecrets:
        - name: regcresd
The file will be available at the path /app/config/console-server.yml. You have to modify it as per your needs.
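If the application still expects the file at its old location, one possible addition (an assumption on my part, not from the original answer) is to point Spring Boot explicitly at the mounted file through an environment variable in the same container spec:
env:
  - name: SPRING_CONFIG_ADDITIONAL_LOCATION # relaxed binding for spring.config.additional-location (Spring Boot 2.x)
    value: /app/config/console-server.yml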
If you need to load key:value pairs from the config file as environment variables, then the below spec would work:
envFrom:
  - configMapRef:
      name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

How to use pod hostname in the subPathExpr field in a volume for kind Deployment in Kubernetes v1.13.0

Not able to use subPathExpr or subPath in a volume when kind is Deployment.
Tried using subPath with an environment variable, but it was not creating the folder with the variable's value; the folder is created literally as ${xyz}.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: abc
  template:
    metadata:
      labels:
        app: abc
    spec:
      env:
        - name: NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - mountPath: /opt/logs
          name: abc
          subPath: $(NAME)
      volumes:
        - name: abc
          hostPath:
            path: /opt/abc
            type: Directory
I want to create the volume directory with the pod hostname, but I am not able to.
For example, if the pod name is xyzservice-3216544-fv4, I want the volume directory to be /opt/abc/xyzservice-3216544-fv4.
What is your Kubernetes cluster version?
Using subPath with expanded environment variables (subPathExpr) is a new feature (alpha) in v1.14, so it will not work on v1.13.0.
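For reference, here is a minimal sketch of how that looks on v1.14+ (the container name, image and command are placeholders, not taken from your manifest; while the feature is alpha it also has to be enabled via a feature gate). subPathExpr, unlike subPath, expands the $(POD_NAME) variable defined through the Downward API, so each pod gets its own directory under the hostPath:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: abc
  template:
    metadata:
      labels:
        app: abc
    spec:
      containers:
        - name: abc
          image: busybox # placeholder image
          command: ["sh", "-c", "sleep 3600"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - mountPath: /opt/logs
              name: abc
              # subPathExpr (not subPath) expands environment variables at mount time
              subPathExpr: $(POD_NAME)
      volumes:
        - name: abc
          hostPath:
            path: /opt/abc
            type: Directory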

How to dynamically populate values into Kubernetes yaml files

I would like to pass in some of the values in the Kubernetes yaml files during runtime, for example by reading them from a config/properties file.
What is the best way to do that?
In the below example, I do not want to hardcode the port value; instead, I want to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: 33044 # (looking to read this port from config file)
          env:
            - name: INPUT_PORT
              value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: {{ .Values.myTestService.containerPort }}
          env:
            - name: INPUT_PORT
              value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster, run helm install mychart-VERSION.tgz. That will deploy your chart to the cluster. The version number is set within the Chart.yaml file.
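For completeness, a minimal Chart.yaml could look like this (the name, description and version below are placeholders):
apiVersion: v1
name: mychart
version: 0.1.0
description: Chart for the logstash test ReplicationController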
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to hold external configuration such as property files.
First, create a ConfigMap artifact out of your properties file as follows:
kubectl create configmap my-config --from-file=db.properties
Then in your Deployment yaml you can consume it either as a volume mount or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      volumeMounts:
        - name: config-volume
          mountPath: /etc/creds # <mount path>
  volumes:
    - name: config-volume
      configMap:
        name: my-config
Here, under mountPath, you need to provide the location in your container where your property file should reside, and under the configMap name you should give the name of the ConfigMap you created.
Environment variables way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: <property name>
Here, under the configMapKeyRef section, name should be the name of the ConfigMap you created (e.g. my-config), and key should be the key from your properties data whose value you want to expose; Kubernetes will resolve the value automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created; containerPort is one of them.
You can, however, add a new container to a pod and open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment file (say, with sed) and run kubectl replace -f FILE, or use kubectl edit DEPLOYMENT, which applies the changes automatically.

Kubernetes: Configure Deployment to mount directory from local Kubernetes host?

I need to provide access to the file /var/docker.sock on the Kubernetes host (actually, a GKE instance) to a container running on that host.
To do this I'd like to mount the directory into the container, by configuring the mount in the deployment.yaml for the container deployment.
How would I specify this in the deployment configuration?
Here is the current configuration I have for the deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: appd-sa-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: appd-sa-agent
    spec:
      containers:
        - name: appd-sa-agent
          image: docker.io/archbungle/appd-sa-agent:latest
          ports:
            - containerPort: 443
          env:
            - name: APPD_HOST
              value: "https://graffiti201707132327203.saas.appdynamics.com"
How would I specify mounting the local host file path to a directory mount point in the container?
Thanks!
T.
You need to define a hostPath volume.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: appd-sa-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: appd-sa-agent
    spec:
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
      containers:
        - name: appd-sa-agent
          image: docker.io/archbungle/appd-sa-agent:latest
          volumeMounts:
            - name: docker-socket
              mountPath: /var/run/docker.sock
          ports:
            - containerPort: 443
          env:
            - name: APPD_HOST
              value: "https://graffiti201707132327203.saas.appdynamics.com"
You need to use the hostPath option. Here is a sample yaml file:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath