Can environment variables passed to containers be composed from environment variables that already exist? Something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env:
            - name: URL
              value: $(HOST):$(PORT)
Helm with its variables seems like a better way of handling that kind of use case.
In the example below you have a deployment snippet with values and variables:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "image/thomas:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        - name: URL
          value: {{ .Values.host }}:{{ .Values.port }}
And here is one of the ways of deploying it with some custom variables:
helm upgrade --install myChart . \
  --set image.tag=v2.5.4 \
  --set host=example.com \
  --set-string port=12345
Helm also allows you to use template functions. You can use default, which falls back to a default value when one isn't provided, and required, which displays a message and refuses to continue installing the chart if you don't specify the value (a small sketch of both is shown below). There is also the include function, which lets you bring in another template and pass its result to other template functions.
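For illustration, here is a minimal sketch of how default and required could be combined in the env block above (host and port are the values from the example; the "localhost" fallback is made up):

env:
  - name: URL
    value: {{ .Values.host | default "localhost" }}:{{ required "port is required" .Values.port }}

With this, installing without --set port=... aborts with the given message, while a missing host silently falls back to localhost.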
Within a single Pod spec, this works with exactly the syntax you described, but the environment variables must be defined (earlier) in the same Pod spec. See Define Dependent Environment Variables in the Kubernetes documentation.
env:
  - name: HOST
    value: host.example.com
  - name: PORT
    value: '80'
  - name: URL
    value: '$(HOST):$(PORT)'
Beyond this, a Kubernetes YAML file needs to be totally standalone, and you can't use environment variables on the system running kubectl to affect the file content. Other tooling like Helm fills this need better; see thomas's answer for an example.
These manifests are complete, standalone files, and there is no good native way to use variables inside them. You can, however, substitute placeholders yourself:
use a command like the one below to replace the placeholder and pipe the result to kubectl.
sed -e "s#%%HOST%%#http://whatever#" file.yml | kubectl apply -f -
Though I would suggest using Helm.
Read more:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
I followed this explanation on how to inject vault secrets as environment variables into a Kubernetes container:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'web'
        vault.hashicorp.com/agent-inject-secret-config: 'secret/data/web'
        # Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "secret/data/web" -}}
            export api_key="{{ .Data.data.payments_api_key }}"
          {{- end }}
    spec:
      serviceAccountName: web
      containers:
        - name: web
          image: alpine:latest
          command:
            ['sh', '-c']
          args:
            ['source /vault/secrets/config && <entrypoint script>']
          ports:
            - containerPort: 9090
The problem with this approach is that it overrides the entrypoint of the container image (as pointed out here), which I would like to avoid.
Is it possible to import a vault secret as an environment variable without overriding the default command and arguments of my underlying image?
A solution that reliably substitutes <entrypoint script> with the original entrypoint of my image without hardcoding it would also be ok.
A solution that reliably substitutes <entrypoint script> with the original entrypoint of my image without hardcoding it would also be ok.
The linked solution you quoted is an example of such a solution.
To elaborate, let's say your image's original entrypoint is <entrypoint script>. According to the Vault docs, you need to source the rendered secrets file. The solution you shared overrides the original entrypoint using command, so it becomes sh -c, and its arguments (args) are 'source /vault/secrets/config && <entrypoint script>'. When the new entrypoint runs, it sources the secrets and then runs the original entrypoint.
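As a concrete sketch, assuming a hypothetical image whose original entrypoint is /app/start.sh, the relevant part of the container spec would look like this:

command: ['sh', '-c']
# source the rendered secrets file, then exec the original entrypoint (hypothetical path)
args: ['source /vault/secrets/config && exec /app/start.sh']

The exec is optional, but it lets the original process replace the wrapper shell so it receives signals directly.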
How do I use an env variable defined inside a Deployment? For example, in the YAML file down below I try to use the env CONT_NAME to set the container name, but it does not succeed. Could you please help with how to do it?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: $CONT_NAME
          image: nginx:1.7.9
          env:
            - name: CONT_NAME
              value: nginx
          ports:
            - containerPort: 80
You can't use variables to set values inside a Deployment natively. If you want to do that you have to process the file before running kubectl; take a look at this post. The best option for parametrizing and standardizing deployments is to use Helm.
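As a hedged sketch of the Helm approach, the container name could come from a made-up containerName value in values.yaml:

spec:
  containers:
    - name: {{ .Values.containerName | default "nginx" }}
      image: nginx:1.7.9
      ports:
        - containerPort: 80

and then be set at install time with something like helm upgrade --install myapp . --set containerName=nginx.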
In those situations I just replace the $CONT_NAME in the yaml with the correct value right before applying the yaml.
sed -ie "s/\$CONT_NAME/$CONT_NAME/g" yourYamlFile.yaml
If you're using Flux CD, it has the ability to do variable interpolation (link).
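For reference, a rough sketch of that feature with Flux's kustomize-controller (field names as in the Flux v2 docs; the apiVersion may differ by Flux version, and the repository name and path are placeholders). Manifests managed by this Kustomization would then reference the variable as ${CONT_NAME}:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-repo
  postBuild:
    substitute:
      CONT_NAME: "nginx"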
I'm trying to deploy an application that uses PostgreSQL as a database to my minikube. I'm using Helm as a package manager, and have added a PostgreSQL dependency to my requirements.yaml. Now the question is, how do I set the postgres user, db and password for that deployment? Here's my templates/application.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "sgm.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "sgm.fullname" . }}
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "sgm.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "sgm.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "sgm.fullname" . }}
    spec:
      containers:
        - name: sgm
          image: mainserver/sgm
          env:
            - name: POSTGRES_HOST
              value: {{ template "postgres.fullname" . }}.default.svc.cluster.local
I've tried adding a ConfigMap as described in the postgres Helm chart's GitHub README, but it seems like I'm doing something wrong.
This is lightly discussed in the Helm documentation: your chart's values.yaml file contains configuration blocks for the charts it includes. The GitHub page for the Helm stable/postgresql chart lists out all of the options.
Either in your chart's values.yaml file, or in a separate YAML file you pass to the helm install -f option, you can set parameters like
postgresql:
  postgresqlDatabase: stackoverflow
  postgresqlPassword: enterImageDescriptionHere
Note that the chart doesn't create a non-admin user (unlike its sibling MySQL chart). If you're okay with the "normal" database user having admin-level privileges (like creating and deleting databases) then you can set postgresqlUser here too.
In your own chart you can reference these values like any other
- name: PGUSER
  value: {{ .Values.postgresql.postgresqlUser }}
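For example, assuming the overrides above are saved in a hypothetical postgres-values.yaml next to your chart, you could deploy with:

helm upgrade --install sgm . -f postgres-values.yaml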
Is it possible to import environment variables from a different .yml file into the deployment file? My container requires environment variables.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: <removed>
          imagePullPolicy: Always
          env:
            - name: NODE_ENV
              value: "TEST"
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
vars.yml
NODE_ENV: TEST
What I'd like is to declare my variables in a separate file and simply import them into the deployment.
What you describe sounds like a helm use case. If your deployment were part of a helm chart/template then you could have different values files (which are yaml) and inject the values from them into the template based on your parameters at install time. Helm is a common choice for helping to manage env-specific config.
But note that if you just want to inject an environment variable into your YAML, rather than taking it from another YAML file, a popular way to do that is envsubst.
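A minimal envsubst sketch, assuming the manifest references ${NODE_ENV} instead of a hardcoded value and that envsubst (from GNU gettext) is installed:

# export the variable, render the manifest, and pipe it straight to kubectl
export NODE_ENV=TEST
envsubst < deployment.yml | kubectl apply -f -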
I am trying to pass variables to a Kubernetes YAML file from Ansible, but somehow the values are not being populated.
Here is my playbook:
- hosts: master
  gather_facts: no
  vars:
    logstash_port: 5044
  tasks:
    - name: Creating kubernetes pod
      command: kubectl create -f logstash.yml
logstash.yml:
apiVersion: v1
kind: Pod
metadata:
  name: logstash
spec:
  containers:
    - name: logstash
      image: logstash
      ports:
        - containerPort: {{ logstash_port }}
Is there a better way to pass arguments to a Kubernetes YAML file that is being invoked using the command task?
What you are trying to do has no chance of working. Kubernetes (the kubectl command) has nothing to do with Jinja2 syntax, which you try to use in the logstash.yml, and it has no access to Ansible objects (for multiple reasons).
Instead, use the k8s_raw module to manage Kubernetes objects.
You can include the Kubernetes manifest directly in the definition declaration, and there you can use Jinja2 templates:
- k8s_raw:
    state: present
    definition:
      apiVersion: v1
      kind: Pod
      metadata:
        name: logstash
      spec:
        containers:
          - name: logstash
            image: logstash
            ports:
              - containerPort: "{{ logstash_port }}"
Or you can leave your logstash.yml as is, and feed it using the template lookup plugin:
- k8s_raw:
    state: present
    definition: "{{ lookup('template', 'path/to/logstash.yml') | from_yaml }}"
Notice that if you use a Jinja2 expression directly in the Ansible code you must quote it (otherwise the YAML parser would trip over the braces before Jinja2 ever runs); that's not necessary with the template plugin.