Kubernetes: mix ConfigMap and hostPath volumes

Here is my deployment file, which I am applying with the command kubectl apply -f deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scan-deployment
  labels:
    app: scan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scan
  template:
    metadata:
      labels:
        app: scan
    spec:
      volumes:
      - name: shared-files
        configMap:
          name: shared-files
      containers:
      - name: scan-svc
        image: scan:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: shared-files
          mountPath: /shared
Here I have my test.txt file, located at ./shared/test.txt, which I have configured using a ConfigMap.
The thing is that I also have subfolders on my local disk, ./shared/xml and ./shared/report, and I want to map /shared into the container but am not able to do it. I don't want to map config files from these subfolders. The deployed code creates reports in XML and PDF format and saves them in the respective folders, and I want to map those back to the local disk.

ConfigMaps don't support recursive directories. When you create a ConfigMap from the ./shared directory, only the regular files in the directory are included; subdirectories are ignored.
You should create a regular volume from the ./shared directory and mount it into the container.
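A minimal sketch of that approach, assuming the pod is scheduled on the node that holds the files (the absolute path below is a placeholder; hostPath does not accept relative paths):
volumes:
- name: shared-files
  hostPath:
    path: /absolute/path/to/shared # placeholder; must be an absolute path on the node
    type: Directory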

It would be nice if you told us why you are not able to do it, but I assume it overwrites your content under /shared, which is expected behavior.
When you map /shared on the host to /shared in the container, it is mapped correctly, but once the ConfigMap volume is mounted there, it replaces the content of that path with whatever is in the ConfigMap.
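One hedged way to get both behaviors, assuming the layout from the question (the shared-output volume name and the absolute host path are illustrative, not from the original): mount the ConfigMap key as a single file with subPath so it does not replace the directory, and use a hostPath volume for the output subfolders:
volumeMounts:
- name: shared-files
  mountPath: /shared/test.txt
  subPath: test.txt # only this file comes from the ConfigMap
- name: shared-output
  mountPath: /shared/xml
  subPath: xml # the report folders come from the host volume
- name: shared-output
  mountPath: /shared/report
  subPath: report
volumes:
- name: shared-files
  configMap:
    name: shared-files
- name: shared-output
  hostPath:
    path: /absolute/path/to/shared # placeholder; must be absolute
    type: Directory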

Related

How to create a volume that mounts a file whose path is configured in a ConfigMap

I'll describe my target, then show what I have done to achieve it... My goal is to:
create a ConfigMap that holds the path of a properties file
create a Deployment that has a volume mounting the file from the path configured in the ConfigMap
What I have done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
      - name: my-client-container
        image: {{ .Values.image.client }}
        imagePullPolicy: {{ .Values.pullPolicy.client }}
        ports:
        - containerPort: 80
        env:
        - name: MY_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: my-configmap
              key: my_properties_file_name
        volumeMounts:
        - name: config
          mountPath: "/etc/config"
          readOnly: true
      imagePullSecrets:
      - name: secret-private-registry
      volumes:
      # You set volumes at the Pod level, then mount them into containers inside that Pod
      - name: config
        configMap:
          # Provide the name of the ConfigMap you want to mount.
          name: my-configmap
          # An array of keys from the ConfigMap to create as files
          items:
          - key: "my_properties_file_name"
            path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the value indicated as the file name in the ConfigMap), not the content of the properties file as I actually have it on my local disk.
How can I mount that file, using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Alternatively, you can use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
With either method, you are actually passing a snapshot of the file's content on the local disk to Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
Kubernetes is designed so that kubectl can run against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessible in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server.
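If you do need to push a fresh snapshot after editing the local file, one common idiom (assuming a kubectl recent enough to support --dry-run=client) is to regenerate the ConfigMap and apply it in place:
kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -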

Kubernetes ConfigMap removes all the contents of an existing directory

I have created a ConfigMap and a Pod YAML file.
I have tried multiple solutions but none worked for me.
kubectl describe cm cf3
Name: cf3
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
index.html:
----
hii im marimmo
Events: <none>
pod yaml file
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: manya97/manya_tomcat:0.1
    volumeMounts:
    - name: config-volume
      mountPath: /apache-tomcat-8.0.32/webapps/SampleWebApp/index.html
      subPath: index.html
  volumes:
  - name: config-volume
    configMap:
      name: cf3
  restartPolicy: Never
This was supposed to replace the existing index.html file, but somehow it removes all the content of SampleWebApp and places only index.html there.
I don't know if I have done it the correct way; I want to replace only the content of index.html. It might be that mounting simply works this way, I do not know.
Mounts are always directory-based. So having a mount in your YAML file tells k8s to mount the contents of the ConfigMap (which could be one or more files) into the directory.
Whatever was inside the directory before the mount is then gone.
See the official documentation here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
There is a hint saying "Caution: If there are some files in the mount directory, they will be deleted."
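As a sketch of the behavior described there, using the paths from the question, here are the two variants side by side: mounting the ConfigMap at the directory itself replaces everything in it, while a subPath mount targets only the one file:
# Variant 1: replaces the whole directory with the ConfigMap's keys
volumeMounts:
- name: config-volume
  mountPath: /apache-tomcat-8.0.32/webapps/SampleWebApp
# Variant 2: mounts only index.html, leaving sibling files in place
volumeMounts:
- name: config-volume
  mountPath: /apache-tomcat-8.0.32/webapps/SampleWebApp/index.html
  subPath: index.html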

How to mount a data file in Kubernetes via PVC?

I want to persist a data file via PVC with GlusterFS in Kubernetes. When I mount the directory it works, but when I try to mount the file it fails, because the file gets mounted as the directory type. How can I mount the data file in k8s?
how can I mount the data file in k8s?
This is often application-specific and there are several ways to do it, but mainly you want to read about subPath.
Generally, you can choose to:
use subPath to separate config files.
Mount the volume/path as a directory at some other location and then link the file to a specific place within the pod (in rare cases where mixing with other config files or directory permissions in the same dir is an issue, or where the application's boot/start policy prevents files from being mounted at pod start but requires them to be present after some initialization is performed; really edge cases).
Use ConfigMaps (or even Secrets) to hold configuration files. Note that if using subPath with a ConfigMap or Secret the pod won't get updates there automatically, but this is the more common way of handling configuration files, and your conf/interpreter.json looks like a fine example...
Notes to keep in mind:
Mounting "overlaps" the underlying path, so you have to mount the file all the way down to the file itself in order to share its folder with other files. Sharing up to a folder would get you a folder with a single file in it, which is usually not what is required.
If you use ConfigMaps then you have to reference the individual file with subPath in order to mount it, even if you have a single file in the ConfigMap. Something like this:
containers:
- volumeMounts:
  - name: my-config
    mountPath: /my-app/my-config.json
    subPath: config.json
volumes:
- name: my-config
  configMap:
    name: cm-my-config-map-example
Edit:
Here is a full example of mounting a single example.sh script file into the /bin directory of a container using a ConfigMap.
You can adjust this example to place any file, with any permissions, in any desired folder. Replace my-namespace with any desired namespace (or remove it completely for the default one).
ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: my-namespace
  name: cm-example-script
data:
  example-script.sh: |
    #!/bin/bash
    echo "Yaaaay! It's an example!"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: example-deployment
  labels:
    app: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - image: ubuntu:16.04
        name: example-app-container
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /bin/example-script.sh
          subPath: example-script.sh
          name: example-script
      volumes:
      - name: example-script
        configMap:
          name: cm-example-script
          defaultMode: 0744
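To sanity-check the script mount once the pod is Running, something along these lines should work (the pod is looked up via the app: example-app label from the manifest above):
POD=$(kubectl -n my-namespace get pod -l app=example-app -o jsonpath='{.items[0].metadata.name}')
kubectl -n my-namespace exec "$POD" -- /bin/example-script.sh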
Here is a full example of mounting a single test.txt file into the /bin directory of a container using a persistent volume (the file already exists in the root of the volume).
If you wish to mount from a persistent volume instead of a ConfigMap, here is another example of mounting in much the same way (test.txt is mounted at /bin/test.txt). Note two things: test.txt must already exist on the PV, and I'm using a StatefulSet just to run with an automatically provisioned PVC; you can adjust accordingly...
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: my-namespace
  name: ss-example-file-mount
spec:
  serviceName: svc-example-file-mount
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - image: ubuntu:16.04
        name: example-app-container
        stdin: true
        tty: true
        volumeMounts:
        - name: persistent-storage-example
          mountPath: /bin/test.txt
          subPath: test.txt
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage-example
    spec:
      storageClassName: sc-my-storage-class-for-provisioning-pv
      accessModes: [ ReadWriteOnce ]
      resources:
        requests:
          storage: 2Gi
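Once the pod is Running you can verify the single-file mount the same way (ss-example-file-mount-0 is the pod name a one-replica StatefulSet produces):
kubectl -n my-namespace exec ss-example-file-mount-0 -- ls -l /bin/test.txt
kubectl -n my-namespace exec ss-example-file-mount-0 -- cat /bin/test.txt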

How to dynamically populate values into Kubernetes yaml files

I would like to pass some of the values in Kubernetes YAML files in at runtime, e.g. reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead I'd like to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: 33044 # looking to read this port from a config file
        env:
        - name: INPUT_PORT
          value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: {{ .Values.myTestService.containerPort }}
        env:
        - name: INPUT_PORT
          value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster you can use helm install mychart-VERSION.tgz, which will deploy your chart to the cluster. The version number is set in the Chart.yaml file.
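Values can also be overridden per environment at install time without editing values.yaml; with Helm 3 syntax (where a release name is required) that might look like:
helm install my-release mychart-VERSION.tgz --set myTestService.containerPort=8080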
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to hold external configuration files such as property files.
First create a ConfigMap artifact out of your property file as follows:
kubectl create configmap my-config --from-file=db.properties
Then in your Deployment YAML you can consume it as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: test
        ports:
        - containerPort: 33044
        volumeMounts:
        - name: config-volume
          mountPath: /etc/creds # <mount path>
      volumes:
      - name: config-volume
        configMap:
          name: my-config
Here under mountPath you need to provide the location in your container where your property file should reside, and under the configMap name you should give the name of the ConfigMap you created.
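Each key of the ConfigMap then shows up as a file under the mount path, so with the my-config map created above you could check it with something like (the pod name is a placeholder):
kubectl exec <pod-name> -- cat /etc/creds/db.properties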
Environment variable way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: test
        ports:
        - containerPort: 33044
        env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: db.properties
Here under the configMapKeyRef section, name should be the name of the ConfigMap you created (e.g. my-config) and key should be the key within it (here db.properties, the file name used when creating the ConfigMap); Kubernetes will resolve the value of that key into the environment variable automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created; containerPort is one of them.
You can add a new container to a pod though, and open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say, with sed) and run a kubectl replace -f FILE command, or use kubectl edit DEPLOYMENT, which applies the changes automatically.
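As a hedged sketch of that sed approach against the config.yaml from the question (it assumes the key is laid out exactly as shown there and that the object already exists in the cluster; otherwise use kubectl apply):
PORT=$(awk '/^logstash_port:/ {print $2}' config.yaml)
sed "s/33044/${PORT}/" logstash.yaml | kubectl replace -f -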

Use relative paths in Kubernetes config

The goal is to orchestrate both production and local development environments using Kubernetes. The problem is that hostPath doesn't work with relative path values. This results in slightly differing configuration files on each developer's machine to accommodate the different project locations (i.e. "/my/absolute/path/to/the/project"):
apiVersion: v1
kind: Service
metadata:
  name: some-service
  labels:
    app: app
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-deploy
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx:1.13.12-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol_example
          mountPath: /var/www/html
      volumes:
      - name: vol_example
        hostPath:
          path: "/my/absolute/path/to/the/project"
          type: Directory
How can relative paths be used in Kubernetes config files? Variable replacements (such as $(PWD)/project) have been tried but didn't seem to work. If config variables can work with volumes, that might help, but I'm unsure how to achieve this.
As mentioned here, kubectl will never support variable substitution.
You can create a Helm chart for your app (YAML). It supports YAML template variables (among various other features), so you'll be able to pass the hostPath parameter based on the development or production environment.
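A minimal sketch of the Helm route, assuming a chart value named projectPath (the value name is hypothetical):
# values.yaml
projectPath: /my/absolute/path/to/the/project
# templates/deployment.yaml (volumes section)
volumes:
- name: vol_example
  hostPath:
    path: {{ .Values.projectPath }}
    type: Directory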
Not a native Kubernetes solution but you can manually edit the .yaml file 'on-the-fly' before applying it with kubectl.
In your .yaml file use a substitution that is not likely to become ambiguous in the volume section:
volumes:
- name: vol_example
  hostPath:
    path: {{path}}/relative/path
    type: Directory
Then to apply the manifest run:
cat deployment.yaml | sed "s+{{path}}+$(pwd)+g" | kubectl apply -f -
Note: sed is used with the separator + because $(pwd) will result in a path that includes one or more /, which is the conventional sed separator.