How to copy a local file into a helm deployment - mongodb

I'm trying to deploy several pods in Kubernetes using a mongo image with an initialization script in them. I'm using Helm for the deployment. Since I'm starting from the official Mongo Docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at the beginning to initialize some parameters of my Mongo.
What I don't know is how I can insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using Helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in Helm, so this will be done in all the pods of the deployment.

You can use a ConfigMap. Let's put an nginx configuration file into the container via a ConfigMap. We have a directory called nginx at the same level as values.yaml; inside it we have the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-config-file
            items:
              - key: nginx.conf
                path: nginx.conf
      containers:
        - name: ...
          image: ...
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
You can also check the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
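Adapted to the MongoDB case from the question, a minimal sketch could look like the following (the scripts/mongo-init.js path inside the chart, the ConfigMap name and the container details are assumptions; adjust them to your chart):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init-script
data:
  mongo-init.js: |-
{{ .Files.Get "scripts/mongo-init.js" | indent 4 }}
---
# relevant part of the Deployment pod template
    spec:
      volumes:
        - name: mongo-init
          configMap:
            name: mongo-init-script
      containers:
        - name: mongo
          image: mongo
          volumeMounts:
            - name: mongo-init
              mountPath: /docker-entrypoint-initdb.d/mongo-init.js
              subPath: mongo-init.js
Every pod created by the Deployment then gets the script mounted into /docker-entrypoint-initdb.d, so the official Mongo entrypoint runs it on first start.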

Related

Kubernetes: Validation Error (Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container

I am doing my first deployment in Kubernetes. I've hosted my API in my namespace and it's up and running, so I tried to connect my API to MongoDB and added my database details in ConfigMaps via Rancher.
I tried to invoke the DB in my deployment YAML file but got an error stating Unknown Field - ConfigMapref.
Below is my deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec
  replicas: 2
  selector:
    matchLables:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: always
        ports:
        - containerPort: 80
        configMapRef:
        - name: myfirstprojectdb # This is the name of the config map created via rancher
myfirstprojectdb ConfigMap will store all the details like the database name, username, password, etc.
On executing the pipeline I get the below error.
How do I need to refer my config map in deployment yaml?
Validation Error(Deployment.spec.template.spec.container[0]): unknown field "ConfigMapref" in io.k8s.api.core.v1.Container
There are some more typos (e.g. the missing : after spec, and Always should be capitalized). Also, indentation should be consistent in the whole YAML file - see YAML indentation and separation.
I corrected your YAML so it passes the API server's check and added the ConfigMap reference (assuming it contains environment variables):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfistproject
  namespace: Owncloud
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myfirstproject
      version: 1.0.0
  template:
    metadata:
      labels:
        app: myfirstproject
        version: 1.0.0
    spec:
      containers:
      - name: myfirstproject
        image: **my image repo location**
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: myfirstprojectdb
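For reference, a ConfigMap consumable this way holds plain key/value pairs; a hypothetical example (the keys and values below are made up, yours come from what was created in Rancher):
apiVersion: v1
kind: ConfigMap
metadata:
  name: myfirstprojectdb
data:
  DB_HOST: mongodb.default.svc.cluster.local
  DB_NAME: myfirstprojectdb
  DB_USER: apiuser
With envFrom, each key in data becomes an environment variable in the container.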
Useful link:
Configure all key-value pairs in a ConfigMap as container environment variables which is related to this question.

Replacing properties file in container using configmaps in kubernetes

I am trying to replace a properties file in the container using a ConfigMap and a volumeMount in the deployment.yaml file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
        - name: agent-2
          image: agent:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
              name: "applictaion-conf"
              subPath: "application.properties"
      volumes:
        - name: applictaion-conf
          configMap:
            name: dddeagent-configproperties
            items:
              - key: "application.properties"
                path: "application.properties"
Below is snippet from configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000
After deployment, the complete folder structure is getting mounted instead of a single file, because of which all the files in the existing folder are getting deleted.
Version: 1.21.13
I checked this configuration and there are a few misspellings. You are referring to the ConfigMap "dddeagent-configproperties", but you have defined a ConfigMap object named "agent-configp".
configMap:
  name: dddeagent-configproperties
Should be:
configMap:
  name: agent-configp
Besides that, there are a few indentation errors, so I will paste the fixed files at the end of the answer.
To the point of your question: your approach is correct, and when I tested it in my setup everything worked properly without any issues. I created a sample pod and mounted the ConfigMap the same way you are doing it (into a directory where other files exist). The ConfigMap was mounted as a file, as it should be, and the other files were still available in the directory.
Mounts:
  /app/upload/test-folder/file-1 from application-conf (rw,path="application.properties")
Your approach is the same as described here.
Please double-check that on a pod without the mounted ConfigMap, the directory /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf really exists and the other files are there. As your image is not publicly available, I checked with the tomcat image, and the /usr/local/tomcat/webapps/ directory is empty. Note that even if this directory is empty, Kubernetes will create the agent/WEB-INF/classes/conf directories and the application.properties file there when you mount the file.
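A quick way to verify this from a running pod (the pod name is a placeholder):
kubectl exec <pod-name> -- ls -la /usr/local/tomcat/webapps/agent/WEB-INF/classes/conf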
Fixed deployment and ConfigMap files with good indentation and without misspellings:
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-properties
spec:
  selector:
    matchLabels:
      app: agent-2
  replicas: 2
  template:
    metadata:
      labels:
        app: agent-2
    spec:
      containers:
        - name: agent-2
          image: agent:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/usr/local/tomcat/webapps/agent/WEB-INF/classes/conf/application.properties"
              name: "application-conf"
              subPath: "application.properties"
      volumes:
        - name: application-conf
          configMap:
            name: agent-configp
            items:
              - key: "application.properties"
                path: "application.properties"
Config file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-configp
data:
  application.properties: |-
    AGENT_HOME = /var/ddeagenthome
    LIC_MAXITERATION=5
    LIC_MAXDELAY=10000

How to use dynamic/variable image tag in a Kubernetes deployment?

In our project, which also uses Kustomize, our base deployment.yaml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:IMAGE_TAG # <------------------------------
        ports:
        - containerPort: 80
Then we use sed to replace IMAGE_TAG with the version of the image we want to deploy.
Is there a more sophisticated way to do this, rather than editing the text yaml file using sed?
There is a specific transformer for this called the images transformer.
You can keep your deployment as it is, with or without tag:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
and then in your kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: nginx
    newTag: MYNEWTAG
Do keep in mind that this will replace the tag of all the nginx images of all the resources included in your kustomization file. If you need to run multiple versions of nginx, you can replace the image name in your deployment with a placeholder and have different entries in the transformer.
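If the tag comes from a CI pipeline, you can also let kustomize update the kustomization file for you instead of editing it by hand (the overlay path and tag value here are just examples):
cd path/to/overlay
kustomize edit set image nginx=nginx:1.23.4
kubectl apply -k .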
It is possible to use an image tag from an environment variable, without having to edit files for each different tag. This is useful if your image tag needs to vary without changing version-controlled files.
Standard kubectl is enough for this purpose. In short, use a configMapGenerator with data populated from environment variables. Then add replacements that refer to this ConfigMap data to replace relevant image tags.
Example
Continuing with your example deployment.yaml, you could have a kustomization.yaml file in the same folder that looks like so:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Generate a ConfigMap based on the environment variables in the file `.env`.
configMapGenerator:
  - name: my-config-map
    envs:
      - .env

replacements:
  - source:
      # Replace any matches by the value of environment variable `MY_IMAGE_TAG`.
      kind: ConfigMap
      name: my-config-map
      fieldPath: data.MY_IMAGE_TAG
    targets:
      - select:
          # In each Deployment resource …
          kind: Deployment
        fieldPaths:
          # … match the image of container `nginx` …
          - spec.template.spec.containers.[name=nginx].image
        options:
          # … but replace only the second part (image tag) when split by ":".
          delimiter: ":"
          index: 1

resources:
  - deployment.yaml
In the same folder, you need a file .env with the environment variable name only (note: just the name, no value assigned):
MY_IMAGE_TAG
Now MY_IMAGE_TAG from the local environment is integrated as the image tag when running kubectl kustomize, kubectl apply --kustomize, etc.
Demo:
MY_IMAGE_TAG=foobar kubectl kustomize .
This prints the generated image tag, which is foobar as desired:
# …
spec:
  # …
  template:
    # …
    spec:
      containers:
      - image: nginx:foobar
        name: nginx
        ports:
        - containerPort: 80
Alternatives
Keep the following in mind from the configMapGenerator documentation:
Note: It's recommended to use the local environment variable population functionality sparingly - an overlay with a patch is often more maintainable. Setting values from the environment may be useful when they cannot easily be predicted, such as a git SHA.
If you are simply looking to share a fixed image tag between multiple files, see the already suggested images transformer.
@evolutics' answer is probably the "proper" way to do it, but if you are only customizing the image tag on every deployment, you could consider putting an $IMAGE_TAG variable in deployment.yaml and using the envsubst command.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:$IMAGE_TAG
        ports:
        - containerPort: 80
IMAGE_TAG=my_image_tag envsubst '${IMAGE_TAG}' < deployment.yaml > final_deployment.yaml

How to fetch configmap from kubernetes pod

I have one Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
CMD chmod +R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to separate this configuration using a ConfigMap, so I simply created one ConfigMap storing all the configuration details.
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
my deployment yml is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so while running the pods it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? I already shared what I tried, but I am getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume the .yml file's contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents as a config file inside the container. If that is the case, you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
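If the application does not pick the file up from that path automatically, you can point Spring Boot at the mounted location explicitly; a sketch assuming Spring Boot 2's standard spring.config.additional-location mechanism and the mount path used above:
        # added to the consoleservice container spec
        env:
        - name: SPRING_CONFIG_ADDITIONAL_LOCATION
          value: /app/config/console-server.yml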
Do you need to load key/value pairs from the config file as environment variables? Then the spec below would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

How to dynamically populate values into Kubernetes yaml files

I would like to pass in some of the values in Kubernetes YAML files at runtime, e.g. by reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead, I want to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: 33044 (looking to read this port from config file)
        env:
        - name: INPUT_PORT
          value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm Charts helps you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: {{ .Values.myTestService.containerPort }}
        env:
        - name: INPUT_PORT
          value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster you can use helm install mychart-VERSION.tgz, which will then deploy your chart to the cluster. The version number is set within the Chart.yaml file.
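Values can also be overridden at install time without editing values.yaml; for example, to change the port (the value here is just an example):
helm install mychart-VERSION.tgz --set myTestService.containerPort=33055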
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to include external configuration files such as property files.
First, create a ConfigMap artifact out of your properties file as follows:
kubectl create configmap my-config --from-file=db.properties
Then in your Deployment YAML you can consume it either as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      volumeMounts:
        - name: config-volume
          mountPath: /etc/creds   # <mount path>
  volumes:
    - name: config-volume
      configMap:
        name: my-config
Here, under mountPath you need to provide the location in your container where the property file should reside, and under the configMap name you should give the name of the ConfigMap you created.
Environment variables way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: <property name>
Here, under configMapKeyRef, name should be the name of the ConfigMap you created (e.g. my-config) and key should be the key from the ConfigMap whose value you want to expose as the environment variable; Kubernetes will resolve the value of the property automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created, and containerPort is one of them.
You can add a new container to a pod, though, and open a new port there.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say with sed) and run kubectl replace -f FILE, or use kubectl edit DEPLOYMENT, which applies the changes automatically.
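For example, a minimal sketch of the sed-and-replace approach described above (the port values are placeholders):
sed 's/containerPort: 33044/containerPort: 33055/' logstash.yaml | kubectl replace -f -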