Is it possible to import environment variables from a separate .yml file into the deployment file? My container requires environment variables.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: <removed>
        imagePullPolicy: Always
        env:
        - name: NODE_ENV
          value: "TEST"
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
vars.yml
NODE_ENV: TEST
What I'd like is to declare my variables in a separate file and simply import them into the deployment.
What you describe sounds like a Helm use case. If your deployment were part of a Helm chart/template, you could have different values files (which are YAML) and inject values from them into the template based on the parameters you pass at install time. Helm is a common choice for managing environment-specific config.
But note that if you just want to substitute an environment variable into your YAML, rather than pulling it from another YAML file, a popular way to do that is envsubst.
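For example, a minimal sketch of the envsubst approach, assuming a template file deployment.tpl.yml whose env section contains a $NODE_ENV placeholder (the file name is illustrative):
export NODE_ENV=TEST
envsubst < deployment.tpl.yml | kubectl apply -f -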
Related
How do I use an env variable defined inside the deployment? For example, in the YAML file below I try to use the env variable CONT_NAME to set the container name, but it does not work. Could you please help me with how to do this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: $CONT_NAME
        image: nginx:1.7.9
        env:
        - name: CONT_NAME
          value: nginx
        ports:
        - containerPort: 80
You can't use variables to set values inside a deployment natively. If you want to do that, you have to process the file before running kubectl; take a look at this post. The best option for this, focused on parametrizing and standardizing deployments, is Helm.
In those situations I just replace the $CONT_NAME in the yaml with the correct value right before applying the yaml.
sed -ie "s/\$CONT_NAME/$CONT_NAME/g" yourYamlFile.yaml
If you're using Flux CD, it has built-in support for variable interpolation (link).
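A rough sketch of Flux's post-build variable substitution, assuming a Flux Kustomization object (the repository name and path are illustrative); the manifest would then reference the variable as ${CONT_NAME}:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  path: ./deploy
  sourceRef:
    kind: GitRepository
    name: my-repo
  postBuild:
    substitute:
      CONT_NAME: "nginx"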
I am unable to deploy this file by using
kubectl apply -f command
[Deployment YAML screenshot]
I have provided the corrected YAML for your deployment below. It is important that all lines are indented correctly. Hyphens (-) indicate a list item, so they are not required on every line.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-deployment
  namespace: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: abc-deployment
  template:
    metadata:
      labels:
        app: abc-deployment
    spec:
      containers:
      - name: abc-deployment
        image: anyimage
        ports:
        - containerPort: 80
        env:
        - name: APP_VERSION
          value: v1
        - name: ENVIRONMENT
          value: "123"
        - name: DATA
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: data
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
      imagePullSecrets:
      - name: abc-secret
As a side note, the way envFrom was used is incorrect. To pull a single key from a ConfigMap, use env with valueFrom.configMapKeyRef inside the container's env section, as shown for the DATA variable above. envFrom itself belongs at the container level alongside env and imports an entire ConfigMap or Secret (see the sketch below).
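A minimal sketch of envFrom for comparison, reusing the abc-configmap name from above; every key in the ConfigMap becomes an environment variable:
      containers:
      - name: abc-deployment
        image: anyimage
        envFrom:
        - configMapRef:
            name: abc-configmap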
If you are using Visual Studio Code, there is an official Kubernetes extension from Microsoft that provides Intellisense (suggestions) and alerts you to errors.
Hope this helps.
I have a Kubernetes deployment with the spec below that gets installed via Helm 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: gatekeeper
        image: my-gatekeeper-image:some-sha
        args:
        - --listen=0.0.0.0:80
        - --client-id=gk-client
        - --discovery-url={{ .Values.discoveryUrl }}
I need to pass the discoveryUrl value as a helm value, which is the public IP address of the nginx-ingress pod that I deploy via a different helm chart. I install the above deployment like below:
helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discoveryUrl=${INGRESS_IP}
This works fine. However, instead of these two helm3 install commands, I now want a single helm3 install that creates both the nginx-ingress and the gatekeeper deployments.
I understand that in the initContainer of my-gatekeeper-image we can get the nginx-ingress IP address, but I do not understand how to set that as an environment variable or pass it to the container spec.
There are some Stack Overflow questions that mention creating a persistent volume or a secret to achieve this, but I am not sure how that would work if we have to delete them. I do not want to create any extra objects and manage their lifecycle.
It is not possible to do this without mounting a shared volume. However, the volume can be backed by an in-memory store (an emptyDir with medium: Memory) instead of a block storage device, so there is no extra lifecycle management to do. The way to achieve that is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper
data:
  gatekeeper.sh: |-
    #!/usr/bin/env bash
    set -e
    INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
    # Do other validations/cleanup
    echo $INGRESS_IP > /opt/gkconf/discovery_url;
    exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      name: gatekeeper
      labels:
        app: gatekeeper
    spec:
      initContainers:
      - name: gkinit
        command: [ "/opt/gk-init.sh" ]
        image: 'bitnami/kubectl:1.12'
        volumeMounts:
        - mountPath: /opt/gkconf
          name: gkconf
        - mountPath: /opt/gk-init.sh
          name: gatekeeper
          subPath: gatekeeper.sh
          readOnly: false
      containers:
      - name: gatekeeper
        image: my-gatekeeper-image:some-sha
        # ENTRYPOINT of the above image should read the
        # file /opt/gkconf/discovery_url and then launch
        # the actual gatekeeper binary
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
        volumeMounts:
        - mountPath: /opt/gkconf
          name: gkconf
      volumes:
      - name: gkconf
        emptyDir:
          medium: Memory
      - name: gatekeeper
        configMap:
          name: gatekeeper
          defaultMode: 0555
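For completeness, the image's ENTRYPOINT could be a small wrapper along these lines (a sketch only; the binary path and flags are assumptions based on the args shown earlier):
#!/usr/bin/env bash
set -e
# Read the IP written by the init container and hand it to the real binary
DISCOVERY_URL=$(cat /opt/gkconf/discovery_url)
exec /usr/local/bin/gatekeeper --listen=0.0.0.0:80 --client-id=gk-client --discovery-url="${DISCOVERY_URL}"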
Using init containers is indeed a valid solution, but be aware that it adds complexity to your deployment.
This is because you would also need to create a ServiceAccount with permissions to read Service objects from inside the init container (a sketch follows below). Then, once you have the IP, you can't just set an environment variable on the gatekeeper container without recreating the pod, so you would need to save the IP to a shared file, for example, and read it from there when starting gatekeeper.
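A rough sketch of the RBAC objects that would be needed (the names are illustrative; the pod spec would also need serviceAccountName: gatekeeper):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gatekeeper
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gatekeeper-service-reader
subjects:
- kind: ServiceAccount
  name: gatekeeper
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io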
Alternatively, you can reserve an IP address if your cloud provider supports this feature and use that static IP when deploying the nginx service:
apiVersion: v1
kind: Service
[...]
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Let me know if you have any questions or if something needs clarification.
I would like to pass some of the values in my Kubernetes YAML files at deploy time, e.g. by reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead I'd like to read the port number from a config file.
Example:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: 33044   # looking to read this port from config file
        env:
        - name: INPUT_PORT
          value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: logstash
        ports:
        - containerPort: {{ .Values.myTestService.containerPort }}
        env:
        - name: INPUT_PORT
          value: "5044"
Once finished, you can package your chart into a chart archive using helm package mychart. To deploy it to your Kubernetes cluster, use helm install mychart-VERSION.tgz; that will deploy your chart to the cluster. The version number is set in the Chart.yaml file.
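For example (a sketch; the release name my-release is illustrative, and note that with Helm 3 a release name, or --generate-name, is required on install):
helm package mychart
helm install my-release ./mychart-VERSION.tgz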
You can use Kubernetes ConfigMaps for this. ConfigMaps are designed to hold external configuration such as property files.
First, create a ConfigMap from your properties file as follows:
kubectl create configmap my-config --from-file=db.properties
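The resulting object would look roughly like this (a sketch; the key is the file name, and the contents shown are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  db.properties: |
    logstash_port=33044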
Then, in your deployment YAML, you can consume it either as a volume mount or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
  - name: test
    image: test
    ports:
    - containerPort: 33044
    volumeMounts:
    - name: config-volume
      mountPath: /etc/creds   # <mount path>
  volumes:
  - name: config-volume
    configMap:
      name: my-config
Here, mountPath is the location inside your container where the property file should reside, and the configMap name should be the name of the ConfigMap you created.
Environment variables way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
  - name: test
    image: test
    ports:
    - containerPort: 33044
    env:
    - name: DB_PROPERTIES
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: <property name>
Here, under the configMapKeyRef section, name should be the name of the ConfigMap you created (e.g. my-config) and key should be the key in that ConfigMap whose value Kubernetes will resolve and inject into the environment variable.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created. containerPort is one of them.
You can, however, add a new container to a pod and open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say, with sed) and run kubectl replace -f FILE, or use kubectl edit DEPLOYMENT, which applies the changes automatically.
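For example, a minimal sketch of the sed approach against the logstash.yaml above (the new port value is illustrative):
sed 's/containerPort: 33044/containerPort: 8080/' logstash.yaml | kubectl replace -f -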
The goal is to orchestrate both production and local development environments using Kubernetes. The problem is that hostPath doesn't work with relative path values. This results in slightly different configuration files on each developer's machine to accommodate the different project locations (e.g. "/my/absolute/path/to/the/project"):
apiVersion: v1
kind: Service
metadata:
  name: some-service
  labels:
    app: app
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-deploy
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx:1.13.12-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol_example
          mountPath: /var/www/html
      volumes:
      - name: vol_example
        hostPath:
          path: "/my/absolute/path/to/the/project"
          type: Directory
How can relative paths be used in Kubernetes config files? Variable replacements (such as $(PWD)/project) have been tried but didn't seem to work. If config variables could be used with volumes, that might help, but I'm unsure how to achieve this.
As mentioned here, kubectl will never support variable substitution.
You can create a Helm chart for your app (YAML). Helm supports YAML template variables (among various other features), so you'll be able to pass the hostPath parameter based on development or production.
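A rough sketch of how that could look (the value name projectPath is an assumption):
# values.yaml
projectPath: /my/absolute/path/to/the/project

# templates/deployment.yaml (volumes section)
      volumes:
      - name: vol_example
        hostPath:
          path: {{ .Values.projectPath }}
          type: Directory
At install time you could then override it, e.g. helm install my-app ./chart --set projectPath=$(pwd)/project.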
Not a native Kubernetes solution, but you can manually edit the .yaml file on the fly before applying it with kubectl.
In the volumes section of your .yaml file, use a placeholder that is not likely to become ambiguous:
volumes:
- name: vol_example
  hostPath:
    path: {{path}}/relative/path
    type: Directory
Then to apply the manifest run:
cat deployment.yaml | sed s+{{path}}+$(pwd)+g | kubectl apply -f -
Note: sed is used with the separator + because $(pwd) will expand to a path containing one or more / characters, which is the conventional sed separator.