I am trying to set configurable host paths in Kubernetes, but I am facing issues. I created a ConfigMap which holds the path, and I am trying to replace a placeholder in the Deployment with the ConfigMap value. Here is my configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: app
spec:
  template:
    spec:
      containers:
        - name: php
          env:
            - name: PHP_FOLDER
              valueFrom:
                configMapKeyRef:
                  name: local-paths
                  key: CODE_PATH
      volumes:
        - name: src-code
          hostPath:
            path: PHP_FOLDER
            type: Directory
I also tried
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  namespace: app
spec:
  template:
    spec:
      containers:
        - name: php
          env:
            - name: PHP_FOLDER
              valueFrom:
                configMapKeyRef:
                  name: local-paths
                  key: CODE_PATH
      volumes:
        - name: src-code
          hostPath:
            path: $(PHP_FOLDER)
            type: Directory
I either get:
Error: Error response from daemon: create $(PHP_FOLDER): "$(PHP_FOLDER)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
or:
MountVolume.SetUp failed for volume "src-code" : hostPath type check failed: PHP_FOLDER is not a directory
You just can't use environment variables in YAML files directly. ConfigMaps and Secrets are resolved at runtime, which means their values are not available until the container starts running, whereas YAML parsing happens before the Pod or Deployment is even created. It's worth getting this timeline clear before using Kubernetes in production, or even for testing.
In this case the best practice would be to use a bash script to substitute the values in the YAML file just before deploying, and to automate that step.
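For example, a minimal sketch (the template file name and host path are illustrative):

#!/bin/sh
# Substitute the placeholder with the real host path, then deploy.
CODE_PATH=/home/user/src/php
sed "s|PHP_FOLDER|${CODE_PATH}|g" deployment.template.yaml > deployment.yaml
kubectl apply -f deployment.yaml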
You cannot use a variable for path definition.
As other users stated correctly, you cannot use env variables in plain YAML/kubectl. I suggest you add a lean templating layer before the deployment command and template the manifest with e.g. envsubst or Ansible.
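A minimal envsubst sketch, assuming the manifest uses shell-style ${PHP_FOLDER} placeholders (envsubst does not expand the Kubernetes $(VAR) syntax):

# PHP_FOLDER must be exported so envsubst can see it.
export PHP_FOLDER=/home/user/src/php
envsubst < deployment.template.yaml | kubectl apply -f -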
I'll describe my goal first, then show what I have done to achieve it. My goal is to:
create a configmap that holds a path for properties file
create a deployment, that has a volume mounting the file from the path configured in configmap
What I had done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
        - name: my-client-container
          image: {{ .Values.image.client }}
          imagePullPolicy: {{ .Values.pullPolicy.client }}
          ports:
            - containerPort: 80
          env:
            - name: MY_PROPERTIES_FILE_NAME
              valueFrom:
                configMapKeyRef:
                  name: my-configmap
                  key: my_properties_file_name
          volumeMounts:
            - name: config
              mountPath: "/etc/config"
              readOnly: true
      imagePullSecrets:
        - name: secret-private-registry
      volumes:
        # You set volumes at the Pod level, then mount them into containers inside that Pod
        - name: config
          configMap:
            # Provide the name of the ConfigMap you want to mount.
            name: my-configmap
            # An array of keys from the ConfigMap to create as files
            items:
              - key: "my_properties_file_name"
                path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the file name as indicated in the ConfigMap), and not the content of the properties file as I actually have it on my local disk.
How can I mount that file, using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Or you can use the kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
With either method, you are actually passing a snapshot of the file's content on your local disk for Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
Kubernetes is designed so that you can run kubectl against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessible in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server.
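For example, a common way to refresh the snapshot after editing the local file (re-using the names from above; older kubectl versions use plain --dry-run) is to regenerate the manifest and apply it:

kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -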
I have two ConfigMap files. One is supposed to be "secret" values and the other has regular values and should import the secrets.
Here's the sample secret ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
And the regular ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: regular-cm
data:
  SOME_CONFIG: "123"
  USING_SEKRET: $(MY_SEKRET)
And my deployment is as follows:
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my-container
          envFrom:
            - configMapRef:
                name: secret-cm
            - configMapRef:
                name: regular-cm
I was hoping that my variable USING_SEKRET would end up as "SEKRET" because of the order in which the envFrom sources are imported, but it just appears as "$(MY_SEKRET)" in the Pods.
I've also tried setting the dependent variable as an env entry directly in the Deployment, but it results in the same problem:
kind: Deployment
...
env:
  - name: MY_SEKRET
    # Not the expected result because the variable is openly visible but should be hidden
    value: 'SEKRET'
I was trying to follow the documentation guides, based on "Define an environment dependent variable for a container", but I haven't seen examples similar to what I want to do.
Is there a way to do this?
EDIT:
To explain the idea behind this structure: the whole secret-cm file will be encrypted in the repository, so not all peers will be able to see its contents.
On the other hand, I still want everyone to be able to see where its variables are used, hence the dependency in regular-cm.
That way, authorized peers can run kubectl commands and the variable replacements from secret-cm would work properly, while for everyone else the file stays hidden.
You did not explain why you want to define two ConfigMaps (one getting a value from another), but I am assuming that you want the parameter name defined in the ConfigMap to be independent of the parameter name used by your container in the Pod. If that is the case, then create your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
Then, in your Deployment, use the env variable from the ConfigMap:
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my-container
          env:
            - name: USING_SEKRET
              valueFrom:
                configMapKeyRef:
                  name: secret-cm
                  key: MY_SEKRET
Now when you access the env variable $USING_SEKRET, it will have the value 'SEKRET'.
In case your requirement is different, ignore this response and provide more details.
I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
        - name: secret-volume
          secret:
            secretName: mysecret
      containers:
        - name: k8s-webapp-test
          image: dockerstore/k8s-webapp-test:1.0.4
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret-volume
              mountPath: "/secrets"
              readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I use Windows nodes). When I try to edit config.yaml, located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the mount as readOnly: false, I cannot write to the file. How can I modify it?
As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the Secret and then copies it to the desired location, where you will be able to modify it.
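A minimal sketch of that approach, copying the Secret into a writable emptyDir (volume and container names are illustrative, and the busybox image assumes Linux nodes; Windows nodes would need an equivalent image):

spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: mysecret
    - name: writable-secrets
      emptyDir: {}
  initContainers:
    - name: copy-secrets
      image: busybox
      # Copy the read-only Secret files into the writable volume.
      command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
      volumeMounts:
        - name: secret-volume
          mountPath: /secrets-ro
          readOnly: true
        - name: writable-secrets
          mountPath: /secrets
  containers:
    - name: k8s-webapp-test
      image: dockerstore/k8s-webapp-test:1.0.4
      volumeMounts:
        - name: writable-secrets
          mountPath: /secrets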
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created:
lifecycle:
  postStart:
    exec:
      command:
        - "/bin/sh"
        - "-c"
        - >
          cp -r /secrets ~/secrets;
You can create Secrets from within a Pod, but it seems you need to use the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
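For illustration, a rough sketch of such a call from inside a Pod, using the mounted service account token (the service account needs RBAC permission to create Secrets; the namespace, Secret name, and payload are illustrative):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets \
  -d '{"apiVersion": "v1", "kind": "Secret", "metadata": {"name": "my-new-secret"}, "stringData": {"key": "value"}}'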
I would like to pass in some of the values in Kubernetes YAML files during runtime, for example by reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead, I'd like to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: 33044 # looking to read this port from a config file
          env:
            - name: INPUT_PORT
              value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: {{ .Values.myTestService.containerPort }}
          env:
            - name: INPUT_PORT
              value: "5044"
Once finished, you can package the chart using helm package mychart. To deploy it to your Kubernetes cluster, run helm install mychart-VERSION.tgz; that will deploy the chart to the cluster. The version number is set in the Chart.yaml file.
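You can also override individual values at install time without editing values.yaml, for example:

helm install mychart-VERSION.tgz --set myTestService.containerPort=8080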
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to hold external configuration such as property files.
First create a ConfigMap artifact out of your property like follows:
kubectl create configmap my-config --from-file=db.properties
Then in your deployment YAML you can consume it either as a volume mount or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: test
          ports:
            - containerPort: 33044
          volumeMounts:
            - name: config-volume
              mountPath: /etc/creds # <mount path>
      volumes:
        - name: config-volume
          configMap:
            name: my-config
Here, under mountPath, you provide the location inside the container where your property file should reside. And under the configMap's name, you give the name of the ConfigMap you created.
Environment variable way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: test
          ports:
            - containerPort: 33044
          env:
            - name: DB_PROPERTIES
              valueFrom:
                configMapKeyRef:
                  name: my-config
                  key: <property name>
Here, under the configMapKeyRef section, name is the ConfigMap you created (e.g. my-config) and key is the entry from the ConfigMap whose value should be injected; Kubernetes will resolve the value of the property internally.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a Pod is created, and containerPort is one of them.
You can, however, add a new container to a Pod and open a new port on it.
The parameters you CAN change can be updated either by dynamically creating or modifying the original deployment file (say, with sed) and running the kubectl replace -f FILE command, or through the kubectl edit DEPLOYMENT command, which applies the changes automatically.
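For example, a minimal sketch of the replace approach (here bumping the replica count, which is a mutable field):

sed 's/replicas: 1/replicas: 2/' logstash.yaml | kubectl replace -f -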
I am trying to write a deployment for a k8s pod.
I have the following in the deploy.yaml file:
apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: __DEPLOY_NAME__-__ENV__
  namespace: __RG_NAME__
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __DEPLOY_NAME__-__ENV__
    spec:
      containers:
        - name: __DEPLOY_NAME__-__ENV__
          image: __CONTAINER_REGISTRY__/__IMAGE_NAME__
          env:
            - name: NODE_ENV
              value: __ENV__
          imagePullPolicy: Always
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
          ports:
            - containerPort: __PORT__
          volumes:
            - name: config-volume
              configMap:
                name: config
              configMap:
                name: oauth
I am trying to use two different ConfigMaps, named 'config' and 'oauth', as volume mounts in the same Pod. When I tried the above code I got the following error:
error validating data: found invalid field volumes for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
I am not sure whether what I want to achieve is feasible, and if not, how I should define the volume mounts.
First: fix the indentation of your volumes block; it should be two spaces less (not a child of containers: but a sibling of it).
Second: you should create two distinct volumes with distinct names, and then add a volume mount for each of them.
Third: if you need to merge files from them, you might want to try mounting particular files with subPath.
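Putting the first two points together, a minimal sketch of the corrected block (mount paths are illustrative):

spec:
  containers:
    - name: __DEPLOY_NAME__-__ENV__
      image: __CONTAINER_REGISTRY__/__IMAGE_NAME__
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        - name: oauth-volume
          mountPath: /etc/oauth
  volumes:
    - name: config-volume
      configMap:
        name: config
    - name: oauth-volume
      configMap:
        name: oauth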