Kubernetes pod-level configuration externalization in a Spring Boot app

I need some help from the community; I'm still new to Kubernetes and Spring Boot. Thanks all in advance.
What I need is to have 4 pods running in a Kubernetes environment, each with a slightly different configuration. For example, I have a property called regions in one of my Java classes that extracts its value from application.yml, like:
@Value("${regions}")
private String regions;
Now, after deploying to Kubernetes, I want to have 4 pods running (I can configure that in the Helm chart), and in each pod the regions field should have a different value.
Is this achievable? Can anyone please give any advice?

If you want to run 4 pods with different configurations, you have to create 4 different Deployments in Kubernetes.
You can create a different ConfigMap for each one, storing either the whole application.yaml file or individual environment variables, and inject it into the corresponding Deployment.
Here is how to store a whole application.yaml inside a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-first
data:
  application.yaml: |-
    data: test
    region: first-region
In the same way, you can create the ConfigMap for the second deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-second
data:
  application.yaml: |-
    data: test
    region: second-region
You can then inject each ConfigMap into its Deployment.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-app
  name: hello-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: hello-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /etc/nginx/app.yaml
              name: yaml-file
              readOnly: true
      volumes:
        - configMap:
            name: yaml-region-second
            optional: false
          name: yaml-file
Accordingly, you can also template this in a Helm chart.
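For instance, here is a rough sketch of what a per-region templated ConfigMap could look like; the value name region, the values file layout, and the release-based naming are assumptions for illustration, not part of the original answer:
# values.yaml (assumed layout)
region: first-region

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-region-{{ .Release.Name }}
data:
  application.yaml: |-
    region: {{ .Values.region }}
Installing the chart once per region (or with different values files) then gives you one ConfigMap/Deployment pair per region.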
If you just want to pass a single environment variable instead of storing the whole file inside a ConfigMap, you can add the value directly to the Deployment.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
    - name: env-print-demo
      image: bash
      env:
        - name: REGION
          value: "one"
        - name: HONORIFIC
          value: "The Most Honorable"
        - name: NAME
          value: "Kubernetes"
      command: ["echo"]
      args: ["$(REGION) $(HONORIFIC) $(NAME)"]
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
For each Deployment, the environment will be different, and in Helm you can also update or override it dynamically from the CLI.
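For example, assuming a chart that exposes region as a value (as in the sketch above, an assumption for illustration), something like:
# one release per region, overriding the value from the CLI
helm install app-region-one ./mychart --set region=one
helm install app-region-two ./mychart --set region=two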

Related

Passing values from initContainers to container spec

I have a Kubernetes Deployment with the spec below that gets installed via Helm 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: gatekeeper
          image: my-gatekeeper-image:some-sha
          args:
            - --listen=0.0.0.0:80
            - --client-id=gk-client
            - --discovery-url={{ .Values.discoveryUrl }}
I need to pass the discoveryUrl value as a helm value, which is the public IP address of the nginx-ingress pod that I deploy via a different helm chart. I install the above deployment like below:
helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discovery_url=${INGRESS_IP}
This works fine; however, instead of these two helm3 install commands, I now want a single helm3 install that creates both the nginx-ingress and the gatekeeper deployment.
I understand that in the initContainer of my-gatekeeper-image we can get the nginx-ingress IP address, but I am not able to understand how to set that as an environment variable or pass it to the container spec.
There are some Stack Overflow questions that mention that we can create a persistent volume or a secret to achieve this, but I am not sure how that would work if we have to delete them. I do not want to create any extra objects and have to maintain their lifecycle.
It is not possible to do this without mounting a shared volume. However, the volume can be backed by an in-memory emptyDir instead of a block storage device, so there is no extra lifecycle management. The way to achieve that is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper
data:
  gatekeeper.sh: |-
    #!/usr/bin/env bash
    set -e
    INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
    # Do other validations/cleanup
    echo $INGRESS_IP > /opt/gkconf/discovery_url
    exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      name: gatekeeper
      labels:
        app: gatekeeper
    spec:
      initContainers:
        - name: gkinit
          command: [ "/opt/gk-init.sh" ]
          image: 'bitnami/kubectl:1.12'
          volumeMounts:
            - mountPath: /opt/gkconf
              name: gkconf
            - mountPath: /opt/gk-init.sh
              name: gatekeeper
              subPath: gatekeeper.sh
              readOnly: false
      containers:
        - name: gatekeeper
          image: my-gatekeeper-image:some-sha
          # ENTRYPOINT of the above image should read the
          # file /opt/gkconf/discovery_url and then launch
          # the actual gatekeeper binary
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /opt/gkconf
              name: gkconf
      volumes:
        - name: gkconf
          emptyDir:
            medium: Memory
        - name: gatekeeper
          configMap:
            name: gatekeeper
            defaultMode: 0555
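As the comments in the manifest note, the ENTRYPOINT of my-gatekeeper-image has to consume the file written by the init container. A minimal sketch of such an entrypoint, assuming the binary is called gatekeeper (an assumption) and reusing the flags from the question:
#!/usr/bin/env bash
set -e
# read the IP written by the init container and hand it to the real binary
DISCOVERY_URL=$(cat /opt/gkconf/discovery_url)
exec gatekeeper --listen=0.0.0.0:80 --client-id=gk-client --discovery-url="${DISCOVERY_URL}"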
Using init containers is indeed a valid solution, but be aware that by doing so you are adding complexity to your deployment.
This is because you would also need to create a ServiceAccount with permissions to read Service objects from inside the init container (see the RBAC sketch below). Then, once you have the IP, you can't just set an environment variable on the gatekeeper container without recreating the pod, so you would need to save the IP to a shared file, for example, and read it from there when starting gatekeeper.
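A minimal sketch of the RBAC objects this would require, assuming the default namespace; the names here are purely for illustration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gatekeeper-init
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gatekeeper-init-service-reader
subjects:
  - kind: ServiceAccount
    name: gatekeeper-init
    namespace: default
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
The Deployment's pod spec would then reference it via serviceAccountName: gatekeeper-init.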
Alternatively, you can reserve an IP address if your cloud provider supports this feature and use that static IP when deploying the nginx Service:
apiVersion: v1
kind: Service
[...]
type: LoadBalancer
loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Let me know if you have any questions or if something needs clarification.

How to copy a local file into a helm deployment

I'm trying to deploy several pods in Kubernetes using a Mongo image with an initialization script in them. I'm using Helm for the deployment. Since I'm starting from the official Mongo Docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at the beginning to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using Helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in Helm, so this will be done in all the pods of the deployment.
You can use a ConfigMap. As an example, let's put an nginx configuration file into a container via a ConfigMap. We have a directory called nginx at the same level as values.yaml, and inside it the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-config-file
            items:
              - key: nginx.conf
                path: nginx.conf
      containers:
        - name: ...
          image: ...
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
You can also check the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
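For completeness, a rough sketch of the initContainers alternative, where an init container writes a script into an emptyDir shared with the Mongo container; the script content and names here are hypothetical, just to illustrate the shared volume:
spec:
  volumes:
    - name: initdb
      emptyDir: {}
  initContainers:
    - name: write-init-script
      image: busybox
      # hypothetical script content, only to show how the file reaches the shared volume
      command: ["sh", "-c", "echo 'db = db.getSiblingDB(\"mydb\");' > /initdb/init-mongo.js"]
      volumeMounts:
        - name: initdb
          mountPath: /initdb
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: initdb
          mountPath: /docker-entrypoint-initdb.d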

configmap change doesn't reflect automatically on respective pods

apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice1
  replicas: 3 # tells deployment to run 3 pods matching the template
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice1
    spec:
      containers:
        - name: consoleservice
          image: chintamani/insightvu:ms-console1
          readinessProbe:
            httpGet:
              path: /
              port: 8385
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
          ports:
            - containerPort: 8384
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /deploy/config
              name: config
      volumes:
        - name: config
          configMap:
            name: console-config
For creating the ConfigMap, I am using this command:
kubectl create configmap console-config --from-file=deploy/config
When I change the ConfigMap, the change isn't reflected automatically; every time I have to restart the pod. How can I do it automatically?
Thank you guys, I was able to fix it. I am using Reloader to reflect changes on the pods whenever anything is changed inside the ConfigMap:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
Then add the annotation inside your deployment.yml file:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
  annotations:
    configmap.reloader.stakater.com/reload: "console-config"
It will restart your pods gradually.
Pods and ConfigMaps are completely separate in Kubernetes, and pods don't restart themselves automatically when a ConfigMap changes.
There are a few alternatives to achieve this.
Use Wave, a Kubernetes controller that looks for a specific annotation and updates the Deployment whenever the ConfigMap changes: https://github.com/pusher/wave
Use https://github.com/stakater/Reloader; Reloader can watch ConfigMap changes and update the pods to pick up the new configuration:
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    metadata:
You can add a custom configHash annotation to the Deployment and, in CI/CD or while deploying the application, use yq to replace that value with the hash of the ConfigMap. On any change in the ConfigMap, Kubernetes will detect the changed annotation on the Deployment and reload the pods with the new configuration.
yq w --inplace deployment.yaml spec.template.metadata.annotations.configHash $(kubectl get cm/configmap -oyaml | sha256sum)
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: application
spec:
  selector:
    matchLabels:
      app: consoleservice1
  replicas: 3
  template:
    metadata:
      labels:
        app: consoleservice1
      annotations:
        configHash: ""

How to add external yml config file into Kubernetes config map

I have a config file inside a config folder, i.e. console-service.yml. I am trying to load it at runtime using a ConfigMap; below is my deployment yml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: consoleservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: docker.example.com/app:1
          volumeMounts:
            - name: console-config-volume
              mountPath: /config/console-server.yml
              subPath: console-server.yml
              readOnly: true
      volumes:
        - name: console-config-volume
          configMap:
            name: console-config
kind: ConfigMap
apiVersion: v1
metadata:
  name: consoleservice
data:
  pool.size.core: 1
  pool.size.max: 16
I am new to ConfigMaps. How can I read the .yml configuration from the config/ location?
There are two possible solutions for your problem.
1. Embed your file directly into the ConfigMap
This could look similar to this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: some-yaml
data:
  file.yaml: |
    pool:
      size:
        core: 1
        max: 16
2. Create a ConfigMap from your YAML file
This would be done using kubectl:
kubectl create configmap some-yaml \
  --from-file=./some-yaml-file.yaml
This creates a ConfigMap containing the selected file. You can add multiple files to a single ConfigMap, as shown below.
You can find more information in the documentation.
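For example, by repeating the --from-file flag (the second file name here is a placeholder):
kubectl create configmap some-yaml \
  --from-file=./some-yaml-file.yaml \
  --from-file=./another-file.yaml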

How to dynamically populate values into Kubernetes yaml files

I would like to pass in some of the values in Kubernetes YAML files at runtime, for example by reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead, I want to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: 33044 # (looking to read this port from config file)
          env:
            - name: INPUT_PORT
              value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: {{ .Values.myTestService.containerPort }}
          env:
            - name: INPUT_PORT
              value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster, use helm install mychart-VERSION.tgz; that will deploy your chart to the cluster. The version number is set within the Chart.yaml file.
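A quick sketch of those commands, including a CLI override of one of the values above; the release name and version number are placeholders, and Helm 3 requires a release name (or --generate-name):
helm package mychart
helm install my-release mychart-0.1.0.tgz --set myTestService.containerPort=33045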
You can use Kubernetes ConfigMaps for this. ConfigMaps are designed to hold external configuration such as property files.
First, create a ConfigMap out of your property file as follows:
kubectl create configmap my-config --from-file=db.properties
Then, in your Deployment YAML, you can provide it as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      volumeMounts:
        - name: config-volume
          mountPath: /etc/creds # <mount path>
  volumes:
    - name: config-volume
      configMap:
        name: my-config
Here, under mountPath, you need to provide the location in your container where the property file should reside. And under the configMap name you should give the name of the ConfigMap you created.
Environment variables:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: <property name>
Here, under the configMapKeyRef section, name should be the name of the ConfigMap you created, e.g. my-config, and key should be the key in the ConfigMap whose value you want; Kubernetes will resolve the value of the property automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created. containerPort is one of them.
You can add a new container to a pod, though, and open a new port.
For the parameters you CAN change, you can do it either by dynamically creating or modifying the original deployment file (say, with sed) and running the kubectl replace -f FILE command, or through the kubectl edit DEPLOYMENT command, which applies the changes automatically.
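A rough sketch of the first approach, assuming the logstash.yaml and config.yaml from the question:
# read the port from config.yaml and substitute it into the manifest before replacing the object
PORT=$(awk '/logstash_port:/ {print $2}' config.yaml)
sed "s/containerPort: .*/containerPort: ${PORT}/" logstash.yaml | kubectl replace -f -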