Use relative paths in Kubernetes config

The goal is to orchestrate both production and local development environments using Kubernetes. The problem is that hostPath doesn't accept relative path values. This results in slightly different configuration files on each developer's machine to accommodate the different project locations (e.g. "/my/absolute/path/to/the/project"):
apiVersion: v1
kind: Service
metadata:
  name: some-service
  labels:
    app: app
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-deploy
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx:1.13.12-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol_example
          mountPath: /var/www/html
      volumes:
      - name: vol_example
        hostPath:
          path: "/my/absolute/path/to/the/project"
          type: Directory
How can relative paths be used in Kubernetes config files? Variable replacements (such as $(PWD)/project) have been tried, but didn't seem to work. If config variables can be used with volumes, that might help, but it's unclear how to achieve this.

As mentioned here, kubectl will never support variable substitution.
You can create a Helm chart for your app's YAML. Helm supports template variables (among various other features), so you'll be able to pass the hostPath parameter depending on whether you target development or production.
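For instance, a minimal sketch (the chart layout and the value name projectPath are illustrative, not from the original question). In values.yaml:

projectPath: /my/absolute/path/to/the/project

and in templates/deployment.yaml, the volume section becomes:

volumes:
- name: vol_example
  hostPath:
    path: {{ .Values.projectPath }}
    type: Directory

Each developer can then override the value at install time, e.g. helm install ./mychart --set projectPath=$(pwd).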

Not a native Kubernetes solution, but you can manually edit the .yaml file 'on-the-fly' before applying it with kubectl.
In your .yaml file, use a substitution that is not likely to become ambiguous in the volume section:
volumes:
- name: vol_example
  hostPath:
    path: {{path}}/relative/path
    type: Directory
Then to apply the manifest run:
cat deployment.yaml | sed s+{{path}}+$(pwd)+g | kubectl apply -f -
Note: sed is used with the separator + because $(pwd) will expand to a path containing one or more / characters, / being the conventional sed separator.

How to use dynamic/variable image tag in a Kubernetes deployment?

In our project, which also uses Kustomize, our base deployment.yaml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:IMAGE_TAG # <------------------------------
        ports:
        - containerPort: 80
Then we use sed to replace IMAGE_TAG with the version of the image we want to deploy.
Is there a more sophisticated way to do this, rather than editing the text yaml file using sed?
There is a specific transformer for this called the images transformer.
You can keep your deployment as it is, with or without a tag:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
and then in your kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: nginx
  newTag: MYNEWTAG
Do keep in mind that this will replace the tag of all the nginx images of all the resources included in your kustomization file. If you need to run multiple versions of nginx, you can replace the image name in your deployment with a placeholder and have different entries in the transformer.
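If the tag changes on every build, it can also be set from a script rather than by editing kustomization.yaml by hand. A minimal sketch, assuming the standalone kustomize CLI is installed and run from the directory containing kustomization.yaml:

kustomize edit set image nginx=nginx:MYNEWTAG

This rewrites the images: entry of kustomization.yaml in place, so a CI pipeline can inject the freshly built tag before running kustomize build.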
It is possible to use an image tag from an environment variable, without having to edit files for each different tag. This is useful if your image tag needs to vary without changing version-controlled files.
Standard kubectl is enough for this purpose. In short, use a configMapGenerator with data populated from environment variables. Then add replacements that refer to this ConfigMap data to replace relevant image tags.
Example
Continuing with your example deployment.yaml, you could have a kustomization.yaml file in the same folder that looks like so:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Generate a ConfigMap based on the environment variables in the file `.env`.
configMapGenerator:
- name: my-config-map
  envs:
  - .env
replacements:
- source:
    # Replace any matches by the value of environment variable `MY_IMAGE_TAG`.
    kind: ConfigMap
    name: my-config-map
    fieldPath: data.MY_IMAGE_TAG
  targets:
  - select:
      # In each Deployment resource …
      kind: Deployment
    fieldPaths:
    # … match the image of container `nginx` …
    - spec.template.spec.containers.[name=nginx].image
    options:
      # … but replace only the second part (image tag) when split by ":".
      delimiter: ":"
      index: 1
resources:
- deployment.yaml
In the same folder, you need a file .env with the environment variable name only (note: just the name, no value assigned):
MY_IMAGE_TAG
Now MY_IMAGE_TAG from the local environment is integrated as the image tag when running kubectl kustomize, kubectl apply --kustomize, etc.
Demo:
MY_IMAGE_TAG=foobar kubectl kustomize .
This prints the generated image tag, which is foobar as desired:
# …
spec:
  # …
  template:
    # …
    spec:
      containers:
      - image: nginx:foobar
        name: nginx
        ports:
        - containerPort: 80
Alternatives
Keep the following in mind from the configMapGenerator documentation:
Note: It's recommended to use the local environment variable population functionality sparingly - an overlay with a patch is often more maintainable. Setting values from the environment may be useful when they cannot easily be predicted, such as a git SHA.
If you are simply looking to share a fixed image tag between multiple files, see the already suggested images transformer.
@Evolutics' answer is probably the "proper" way to do it, but if you are customizing only the image tag on every deployment, you could consider putting an $IMAGE_TAG variable in deployment.yaml and using the envsubst command.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:$IMAGE_TAG
        ports:
        - containerPort: 80
IMAGE_TAG=my_image_tag envsubst '${IMAGE_TAG}' < deployment.yaml > final_deployment.yaml
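If the intermediate file is not needed, the result can be piped straight to kubectl (same deployment.yaml as above):

IMAGE_TAG=my_image_tag envsubst '${IMAGE_TAG}' < deployment.yaml | kubectl apply -f -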

Use kustomize to set hostPath path

Is it possible to use kustomize to specify a volume hostPath from an env variable?
I have a Kubernetes manifest that describes my deployment consisting of a container.
During development, I use a different image (that contains dev tools) and mount code from my host into the container. This way I can make code changes without having to re-deploy.
I'm using a patchesStrategicMerge to replace the production image with the one I want to use during dev, and mount the code into the container, i.e.
kustomization.yaml
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- my-service.yaml
my-service.yaml
---
apiVersion: apps/v1
...
...
spec:
  containers:
  - name: myservice
    image: myservice-dev-image:1.0.0
    command: ['CompileDaemon', '--build=make build', '--command=./myservice']
    volumeMounts:
    - name: code
      mountPath: /go/src/app
  volumes:
  - name: code
    hostPath:
      path: /source/mycodepath/github.com/myservice
What I'd like to do is make the path configurable via an environment variable, so that I don't have to check my specific path (/source/mycodepath/) into git, and so that other developers can easily use it in their own environment.
Is it possible to do this with kustomize?
Create the following directory structure:
k8s
k8s/base
k8s/overlays
k8s/overlays/bob
k8s/overlays/sue
First we need to create the base. The base is the default template and it provides the bits that apply to both people. In k8s/base create a file called app.yaml and populate it with the following (actually paste yours here; you can put other common bits in there too, separated by a --- and a new line).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  strategy:
    type: RollingUpdate
  replicas: 1
  template:
    metadata:
      labels:
        name: myservice
        app: myservice
    spec:
      containers:
      - name: myservice
        image: myservice-dev-image:1.0.0
        command: ['CompileDaemon', '--build=make build', '--command=./myservice']
        volumeMounts:
        - name: code
          mountPath: /go/src/app
      volumes:
      - name: code
        hostPath:
          path: /xxx
Next in the same directory (k8s/base) create another file called kustomization.yaml and populate with:
resources:
- app.yaml
Next we will create two overlays: one for Bob and one for Sue.
In k8s/overlays/bob let's create Bob's custom changes as app.yaml and populate with the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  template:
    spec:
      volumes:
      - name: code
        hostPath:
          path: /users/bob/code
Now also in k8s/overlays/bob create another file called kustomization.yaml with the following:
resources:
- ../../base
patchesStrategicMerge:
- app.yaml
We can copy the two files in k8s/overlays/bob into the k8s/overlays/sue directory and just change the path in the volumes: bit.
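For example, Sue's k8s/overlays/sue/app.yaml would be identical apart from the hostPath (the path below is illustrative):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  template:
    spec:
      volumes:
      - name: code
        hostPath:
          path: /users/sue/code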
Next we need to do a kustomize build to generate the resulting versions for Bob and Sue.
If the k8s directory is in your code directory, open a terminal (with Kustomize installed) and run:
kustomize build k8s/overlays/bob
That should show you what Bob's kustomization will look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: myservice
        name: myservice
    spec:
      containers:
      - command:
        - CompileDaemon
        - --build=make build
        - --command=./myservice
        image: myservice-dev-image:1.0.0
        name: myservice
        volumeMounts:
        - mountPath: /go/src/app
          name: code
      volumes:
      - hostPath:
          path: /users/bob/code
        name: code
To apply that you can run:
kustomize build k8s/overlays/bob | kubectl apply -f -
To apply Sue you can run:
kustomize build k8s/overlays/sue | kubectl apply -f -
YAML is sensitive about spaces and I'm not sure this will sit well in a Stack Overflow answer, so I've put it on GitHub as well: https://github.com/just1689/kustomize-local-storage

Passing values from initContainers to container spec

I have a Kubernetes deployment with the below spec that gets installed via Helm 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: gatekeeper
        image: my-gatekeeper-image:some-sha
        args:
        - --listen=0.0.0.0:80
        - --client-id=gk-client
        - --discovery-url={{ .Values.discoveryUrl }}
I need to pass the discoveryUrl value as a Helm value; it is the public IP address of the nginx-ingress pod that I deploy via a different Helm chart. I install the above deployment like below:
helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discoveryUrl=${INGRESS_IP}
This works fine; however, instead of these two helm3 install commands, I now want a single helm3 install that creates both the nginx-ingress and the gatekeeper deployment.
I understand that in the initContainer of my-gatekeeper-image we can get the nginx-ingress IP address, but I am not able to understand how to set that as an environment variable or pass it to the container spec.
There are some Stack Overflow questions that mention that we can create a persistent volume or secret to achieve this, but I am not sure how that would work if we have to delete them. I do not want to create any extra objects and maintain their lifecycle.
It is not possible to do this without mounting a volume, but the volume can be backed by an in-memory store (an emptyDir with medium: Memory) instead of a block storage device. That way there is no extra lifecycle management. The way to achieve that is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper
data:
  gatekeeper.sh: |-
    #!/usr/bin/env bash
    set -e
    INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
    # Do other validations/cleanup
    echo $INGRESS_IP > /opt/gkconf/discovery_url;
    exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      name: gatekeeper
      labels:
        app: gatekeeper
    spec:
      initContainers:
      - name: gkinit
        command: [ "/opt/gk-init.sh" ]
        image: 'bitnami/kubectl:1.12'
        volumeMounts:
        - mountPath: /opt/gkconf
          name: gkconf
        - mountPath: /opt/gk-init.sh
          name: gatekeeper
          subPath: gatekeeper.sh
          readOnly: false
      containers:
      - name: gatekeeper
        image: my-gatekeeper-image:some-sha
        # ENTRYPOINT of above image should read the
        # file /opt/gkconf/discovery_url and then launch
        # the actual gatekeeper binary
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
        volumeMounts:
        - mountPath: /opt/gkconf
          name: gkconf
      volumes:
      - name: gkconf
        emptyDir:
          medium: Memory
      - name: gatekeeper
        configMap:
          name: gatekeeper
          defaultMode: 0555
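For illustration, the ENTRYPOINT wrapper mentioned in the comment above could look like this (a sketch; the binary path /gatekeeper and the remaining flags are assumptions based on the question's args):

#!/usr/bin/env bash
set -e
# Read the IP discovered by the init container
DISCOVERY_URL=$(cat /opt/gkconf/discovery_url)
# Hand over to the real gatekeeper binary (hypothetical path)
exec /gatekeeper --listen=0.0.0.0:80 --client-id=gk-client --discovery-url="${DISCOVERY_URL}"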
Using init containers is indeed a valid solution, but you need to be aware that by doing so you are adding complexity to your deployment.
This is because you would also need to create a ServiceAccount with permissions to read Service objects from inside the init container. Then, once you have the IP, you can't just set an env variable for the gatekeeper container without recreating the pod, so you need to save the IP e.g. to a shared file and read it from there when starting gatekeeper.
Alternatively, you can reserve an IP address if your cloud provider supports this feature, and use this static IP when deploying the nginx service:
apiVersion: v1
kind: Service
[...]
spec:
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
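If both charts are installed together as subcharts of a single umbrella chart, the reserved address only needs to be defined once, e.g. as a Helm global value (a sketch; the value name staticIP is illustrative):

# values.yaml of the umbrella chart
global:
  staticIP: "YOUR.IP.ADDRESS.HERE"

Both subcharts can then reference {{ .Values.global.staticIP }}: the nginx chart for loadBalancerIP and the gatekeeper chart for --discovery-url, which removes the need to look the IP up at runtime.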
Let me know if you have any questions or if something needs clarification.

How to fetch configmap from kubernetes pod

I have a Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
RUN chmod -R 777 /deploy/ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to separate this configuration out using a ConfigMap; for that I created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
my deployment yml is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so when the pod runs it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? What I tried is already shared above, but I'm getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume it as environment variables, your config should just work fine. But I suspect you want to read the contents of the config file inside the container. If that is the case, you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
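If the application does not pick the file up automatically, one option is to point Spring Boot at it explicitly (a sketch; spring.config.location is a standard Spring Boot property, and the command below mirrors the Dockerfile's CMD, which it overrides):

containers:
- name: consoleservice
  image: ms-console
  command: ["java", "-jar", "/deploy/ms.console.jar", "console",
            "--spring.config.location=file:/app/config/console-server.yml"]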
If you need to load key-value pairs from the config file as environment variables, then the below spec would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

How to dynamically populate values into Kubernetes yaml files

I would like to pass some of the values in Kubernetes YAML files at runtime, e.g. by reading them from a config/properties file.
What is the best way to do that?
In the below example, I do not want to hardcode the port value; instead, the port number should be read from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: 33044 # (looking to read this port from config file)
        env:
        - name: INPUT_PORT
          value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: {{ .Values.myTestService.containerPort }}
        env:
        - name: INPUT_PORT
          value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster, use helm install mychart-VERSION.tgz. That will deploy your chart to the cluster. The version number is set in the Chart.yaml file.
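Individual values can also be overridden at install time without editing values.yaml, for example:

helm install mychart-VERSION.tgz --set myTestService.containerPort=33045

which takes precedence over the value baked into the chart.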
You can use Kubernetes ConfigMaps for this. ConfigMaps were introduced to hold external configuration such as property files.
First, create a ConfigMap artifact out of your property file as follows:
kubectl create configmap my-config --from-file=db.properties
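The resulting object holds the whole file under a key named after the file, roughly like this (a sketch of the generated ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  db.properties: |
    # contents of db.properties, verbatim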
Then in your Deployment yaml you can provide it as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
  - name: test
    image: test
    ports:
    - containerPort: 33044
    volumeMounts:
    - name: config-volume
      mountPath: /etc/creds # <mount path>
  volumes:
  - name: config-volume
    configMap:
      name: my-config
Here under mountPath you need to provide the location in your container where your property file should reside, and under the configMap name you should give the name of the ConfigMap you created.
Environment variable way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
  - name: test
    image: test
    ports:
    - containerPort: 33044
    env:
    - name: DB_PROPERTIES
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: <property name>
Here, under the configMapKeyRef section, name should be the name of the ConfigMap you created (e.g. my-config), and key should be the key from your property file whose value you want to inject; Kubernetes will resolve the value automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created, and containerPort is one of them; changing it means the pod has to be recreated (for a Deployment this happens automatically on rollout).
The parameters you CAN change can be updated either by dynamically creating or modifying the original manifest (say with sed) and running the kubectl replace -f FILE command, or through the kubectl edit DEPLOYMENT command, which applies the changes automatically.
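For example, following the sed pattern used earlier in this thread, you could rewrite the port in the manifest and swap the live object in one pipeline (the port values here are illustrative):

sed 's/33044/33055/' logstash.yaml | kubectl replace -f -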