I am trying to pass variables to a Kubernetes YAML file from Ansible, but somehow the values are not being populated.
Here is my playbook:
- hosts: master
  gather_facts: no
  vars:
    logstash_port: 5044
  tasks:
    - name: Creating kubernetes pod
      command: kubectl create -f logstash.yml
logstash.yml:
apiVersion: v1
kind: Pod
metadata:
  name: logstash
spec:
  containers:
    - name: logstash
      image: logstash
      ports:
        - containerPort: {{ logstash_port }}
Is there a better way to pass arguments to a Kubernetes YAML file that is being invoked using the command task?
What you are trying to do has no chance of working. Kubernetes (the kubectl command) has nothing to do with the Jinja2 syntax you are trying to use in logstash.yml, and it has no access to Ansible objects (for multiple reasons).
Instead, use the k8s_raw module to manage Kubernetes objects.
You can include the Kubernetes manifest directly in the definition declaration, where you can use Jinja2 templating:
- k8s_raw:
    state: present
    definition:
      apiVersion: v1
      kind: Pod
      metadata:
        name: logstash
      spec:
        containers:
          - name: logstash
            image: logstash
            ports:
              - containerPort: "{{ logstash_port }}"
Or you can leave your logstash.yml as is, and feed it using the template lookup plugin:
- k8s_raw:
    state: present
    definition: "{{ lookup('template', 'path/to/logstash.yml') | from_yaml }}"
Notice that if you use a Jinja2 template directly in the Ansible code, you must quote it. This is not necessary with the template lookup plugin.
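As a side note, on newer Ansible versions the k8s_raw module has been superseded by the k8s module (kubernetes.core.k8s in collection form); the same parameters should carry over. A minimal sketch, assuming the kubernetes.core collection is installed:
- kubernetes.core.k8s:
    state: present
    definition: "{{ lookup('template', 'path/to/logstash.yml') | from_yaml }}"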
Related
I'm running a StatefulSet where each replica requires its own unique configuration. To achieve that I'm currently using a configuration with two containers per Pod:
An initContainer prepares the configuration and stores it in a shared volume
The main container consumes the configuration by reading the contents of the shared volume and passing them to the program as CLI flags.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  serviceName: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      initContainers:
        - name: generate-config
          image: myjqimage:latest
          command: [ "/bin/sh" ]
          args:
            - -c
            - |
              set -eu -o pipefail
              POD_INDEX="${HOSTNAME##*-}"
              # A configuration is stored as a JSON array in a Secret
              # E.g., [{"param1":"string1","param2":"string2"}]
              echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param1' > /config/param1
              echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param2' > /config/param2
          env:
            - name: MY_APP_CONFIG
              valueFrom:
                secretKeyRef:
                  name: my-app
                  key: config
          volumeMounts:
            - name: configs
              mountPath: "/config"
      containers:
        - name: my-app
          image: myapp:latest
          command:
            - /bin/sh
          args:
            - -c
            - |
              /myapp --param1 $(cat /config/param1) --param2 $(cat /config/param2)
          volumeMounts:
            - name: configs
              mountPath: "/config"
      volumes:
        - name: configs
          emptyDir:
            medium: "Memory"
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app
  namespace: default
  labels:
    app.kubernetes.io/name: my-app
type: Opaque
data:
  config: W3sicGFyYW0xIjoic3RyaW5nMSIsInBhcmFtMiI6InN0cmluZzIifV0=
Now I want to switch to distroless for my main container. Distroless images contain only the dependencies required to run the program (glibc in my case) and ship without a shell. So while previously I could run cat to output the contents of a file, now I'm a bit stuck.
Instead of reading the contents from a file, I now have to pass the CLI flags as environment variables. Something like this:
containers:
  - name: my-app
    image: myapp:latest
    command: ["/myapp", "--param1", "$(PARAM1)", "--param2", "$(PARAM2)"]
    env:
      - name: PARAM1
        value: somevalue1
      - name: PARAM2
        value: somevalue2
Again, each Pod in a StatefulSet should have a unique configuration. I.e., PARAM1 and PARAM2 should be unique across the Pods in a StatefulSet. How do I achieve that?
Options I considered:
Using Debug Containers -- a new feature of K8s. Somehow use it to edit the configuration of a running container at runtime and inject the required variables. But the feature only became beta in 1.23, and I don't want to mutate my StatefulSet at runtime, since I'm using a GitOps approach and store the configuration in Git. It would probably cause continuous configuration drift
Using a Job to mutate the configuration at runtime. Again, this looks very ugly and violates GitOps principles
Using shareProcessNamespace. Unsure if it can help but maybe I can somehow inject the environment variables from within the initContainer
Limitations:
The application only supports configuration provided through CLI flags: no environment variables and no loading the config from a file
Can environment variables passed to containers be composed from environment variables that already exist? Something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env:
            - name: URL
              value: $(HOST):$(PORT)
Helm with its variables seems like a better way of handling these kinds of use cases.
In the example below you have a deployment snippet with values and variables:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "image/thomas:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        - name: URL
          value: {{ .Values.host }}:{{ .Values.port }}
And here is one of the ways of deploying it with some custom variables:
helm upgrade --install myChart . \
  --set image.tag=v2.5.4 \
  --set host=example.com \
  --set-string port=12345
Helm also lets you use template functions. For example, default falls back to a default value when one isn't provided, and required prints a message and refuses to install the chart if you don't specify the value. There is also the include function, which lets you bring in another template and pass the result to other template functions.
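For instance, a minimal sketch of both functions in a template snippet (the value names here are illustrative, not taken from the chart above):
env:
  - name: URL
    value: {{ required "host is required" .Values.host }}:{{ .Values.port | default "8080" }}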
Within a single Pod spec, this works with exactly the syntax you described, but the referenced environment variables must be defined earlier in the same container's env list. See Define Dependent Environment Variables in the Kubernetes documentation.
env:
  - name: HOST
    value: host.example.com
  - name: PORT
    value: '80'
  - name: URL
    value: '$(HOST):$(PORT)'
Beyond this, a Kubernetes YAML file needs to be totally standalone, and you can't use environment variables on the system running kubectl to affect the file content. Other tooling like Helm fills this need better; see thomas's answer for an example.
These manifests are complete files; there is no good built-in way to use variables in them, though you can work around it.
Use a command like the one below to substitute a placeholder and pipe the result to kubectl:
sed -e "s#%%HOST%%#http://whatever#" file.yml | kubectl apply -f -
That said, I would suggest using Helm.
Read more:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Is it possible to import environment variables from a different .yml file into the deployment file? My container requires environment variables.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: <removed>
          imagePullPolicy: Always
          env:
            - name: NODE_ENV
              value: "TEST"
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
vars.yml
NODE_ENV: TEST
What I'd like is to declare my variables in a separate file and simply import them into the deployment.
What you describe sounds like a helm use case. If your deployment were part of a helm chart/template then you could have different values files (which are yaml) and inject the values from them into the template based on your parameters at install time. Helm is a common choice for helping to manage env-specific config.
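As a rough sketch of that idea (assuming the deployment becomes a chart template and vars.yml is passed as a values file with helm install -f vars.yml), the env entry could reference the value like this:
# vars.yml, used as a values file
NODE_ENV: TEST

# templates/deployment.yml (excerpt)
env:
  - name: NODE_ENV
    value: {{ .Values.NODE_ENV | quote }}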
But note that if you just want to inject an environment variable into your YAML, rather than taking it from another YAML file, then a popular way to do that is envsubst.
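For example, a minimal envsubst sketch (the variable name is illustrative), assuming the manifest contains a $NODE_ENV placeholder:
export NODE_ENV=TEST
envsubst < deployment.yml | kubectl apply -f -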
I would like to pass some of the values into Kubernetes YAML files at runtime, for example by reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead, the port number should be read from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: 33044  # looking to read this port from config file
          env:
            - name: INPUT_PORT
              value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: {{ .Values.myTestService.containerPort }}
          env:
            - name: INPUT_PORT
              value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster, use helm install mychart-VERSION.tgz. That will deploy your chart to the cluster. The version number is set within the Chart.yaml file.
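A minimal Chart.yaml might look like this (the name, description, and version are illustrative):
apiVersion: v1
name: mychart
description: A chart for the test ReplicationController
version: 0.1.0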
You can use Kubernetes ConfigMaps for this. ConfigMaps are designed to hold external configuration such as property files.
First, create a ConfigMap out of your property file as follows:
kubectl create configmap my-config --from-file=db.properties
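This produces a ConfigMap whose single key is the file name and whose value is the file contents, roughly like this sketch (the property shown is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  db.properties: |
    logstash_port=33044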
Then, in your deployment YAML, you can consume it either as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      volumeMounts:
        - name: config-volume
          mountPath: /etc/creds  # <mount path>
  volumes:
    - name: config-volume
      configMap:
        name: my-config
Here, mountPath is the location inside your container where the property file should reside, and under configMap, name should be the name of the ConfigMap you created.
Environment variables:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: db.properties
Here, under configMapKeyRef, name should be the name of the ConfigMap you created (e.g. my-config) and key should be the key inside it (db.properties, when created with --from-file=db.properties); Kubernetes will resolve the value of the property automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created. containerPort is one of them.
You can add a new container to a pod though. And open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say, with sed) and run kubectl replace -f FILE, or use kubectl edit DEPLOYMENT, which applies the changes automatically.
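For example, a rough sketch of the sed-plus-replace approach for this ReplicationController (the substitution shown is illustrative):
sed -e "s/replicas: 1/replicas: 3/" logstash.yaml | kubectl replace -f -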
I need to be able to run a shell script to initialize my pods in Kubernetes (my script initializes my DB cluster).
I don't want to bake the script into my Dockerfile, because I get the image directly from the web and don't want to touch it.
So I want to know whether there is a way to get my script into one of my volumes so I can execute it like this:
spec:
  containers:
    - name: command-demo-container
      image: debian
      command: ["./init.sh"]
  restartPolicy: OnFailure
It depends on what exactly your init script does, but init containers should be helpful in such cases. Init containers run before the main application container starts and can do preparation work such as creating configuration files.
You would still need your own Docker image, but it doesn't have to be the same image as the database one.
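A rough sketch of that pattern (the images and script contents are illustrative): an init container drops the script into a shared emptyDir volume, and the main container runs it:
spec:
  initContainers:
    - name: write-init-script
      image: busybox
      # preparation step: place the init script in the shared volume
      command: ["sh", "-c", "echo 'echo initializing the db cluster' > /work/init.sh"]
      volumeMounts:
        - name: work
          mountPath: /work
  containers:
    - name: command-demo-container
      image: debian
      command: ["sh", "/work/init.sh"]
      volumeMounts:
        - name: work
          mountPath: /work
  volumes:
    - name: work
      emptyDir: {}
  restartPolicy: OnFailure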
I finally decided to take the approach of creating a ConfigMap that contains the script we want to run, and then referencing that ConfigMap from a volume.
A short explanation:
In my pod.yaml file there is a volumeMount at /pgconf, which is the directory from which the Docker image reads any SQL script you put there and runs it when the pod starts.
Under volumes I reference the ConfigMap name (postgres-init-script-configmap), which is the name defined inside the configmap.yaml file.
There is no need to create the ConfigMap manually with kubectl; the pod will pick up the configuration from the ConfigMap as long as you place configmap.yaml in the same directory as pod.yaml.
My pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: "{{.Values.container.name.primary}}"
  labels:
    name: "{{.Values.container.name.primary}}"
spec:
  securityContext:
    fsGroup: 26
  restartPolicy: {{default "Always" .Values.restartPolicy}}
  containers:
    - name: {{.Values.container.name.primary}}
      image: "{{.Values.image.repository}}/{{.Values.image.container}}:{{.Values.image.tag}}"
      ports:
        - containerPort: {{.Values.container.port}}
      env:
        - name: PGHOST
          value: /tmp
        - name: PG_PRIMARY_USER
          value: primaryuser
        - name: PG_MODE
          value: primary
      resources:
        requests:
          cpu: {{ .Values.resources.cpu }}
          memory: {{ .Values.resources.memory }}
      volumeMounts:
        - mountPath: /pgconf
          name: init-script
          readOnly: true
  volumes:
    - name: init-script
      configMap:
        name: postgres-init-script-configmap
My configmap.yaml (which contains the SQL script that will initialize the DB):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-script-configmap
data:
  setup.sql: |-
    CREATE USER david WITH PASSWORD 'david';