config:
  temp.yaml: |-
    instances:
      - server: <The value that is going to be passed from helm template --set>
        port: 443

command: --set config."temp\.yaml.instances[0].server"=server_url
I have a helm template like in the above example. As can be seen, temp.yaml is an inline multi-line string, but I need to set a value that's inside that inline part. Is it somehow possible to do this using only the "helm template --set" command?
| declares a multi-line string; you cannot treat the string like a struct.
helm yaml doc
In addition, --set is used to override values in values.yaml. And keys in values.yaml cannot contain ., so there cannot be keys such as temp.yaml.
helm --set
One possible implementation
values.yaml
config:
  temp: |-
    instances:
      - server: 127.0.0.1
        port: 443
templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  temp.yaml: |-
{{ $.Values.config.temp | indent 4 }}
cmd:
helm template --debug test .
output:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  temp.yaml: |-
    instances:
      - server: 127.0.0.1
        port: 443
cmd 2:
helm --debug template test . --set config.temp=\
'instances:
  - server: 1.2.3.4
    port: 443'
output 2:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  temp.yaml: |-
    instances:
      - server: 1.2.3.4
        port: 443
You can only helm install --set (or --set-string) an entire string value; you can't selectively replace things inside a string or tell Helm that a string value is actually YAML.
It looks like you're trying to expose an entire ConfigMap data structure as configurable settings. You might be able to restructure your application so most of the ConfigMap is generated in Helm template code, and only the things that need to be configured are exposed as Helm values. For example, if your ConfigMap looks like
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.name" . }}
  labels:
{{ include "mychart.labels" . | indent 4 }}
data:
  temp.yaml: |-
    instances:
{{ .Values.instances | toYaml | indent 6 }}
Now your values.yaml file contains the "instances" data as a YAML structure, not a string
# values.yaml, or some `helm install -f` file
instances:
  - server: 127.0.0.1
    port: 443
and the ... | toYaml | indent sequence will convert that structure back to YAML text and indent it appropriately for the ConfigMap output.
If you do this then you can use helm install --set in pretty much exactly the way you describe initially.
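For example, overriding the first instance's server from the command line (a sketch; the release name test is a placeholder, matching the earlier commands):
helm template test . --set instances[0].server=server_url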
Related
I don't know Golang at all but need to implement Go template syntax in my Kubernetes config (where HashiCorp Vault is configured). What I'm trying to do is modify a file in order to change its format. So the source looks like this:
data: map[key1:value1]
metadata: map[created_time:2021-10-06T21:02:18.41643371Z deletion_time: destroyed:false version:1]
The Kubernetes config part where the Go template is used to format the file is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      component: test
  template:
    metadata:
      labels:
        component: test
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'test'
        vault.hashicorp.com/agent-inject-secret-config: 'secret/data/test/config'
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/test/config" -}}
          {{ range $k, $v := .Data.data }}
          export {{ $k }}={{ $v | quote }}
          {{ end }}
          {{- end }}
    spec:
      serviceAccountName: test
      containers:
        - name: test
          image: ${IMAGE}
          ports:
            - containerPort: 3000
But the error I'm getting is:
runtime error encountered: error="template server: (dynamic): parse: template: :2: unexpected "," in range"
EDIT:
To deploy Vault on k8s I'm using the Vault Helm chart.
From what I can see, you have env variables in your yaml file (${REPLICAS}, ${IMAGE}), which makes me think that you are using something like cat file.yml | envsubst | kubectl apply --wait=true -f - in order to replace those env vars with real values.
The issue with this is that $k and $v are also being replaced with '' (since you do not have those env vars in your system).
One ugly but effective solution is to export k='$k' and export v='$v', which will generate your yaml file correctly, as sketched below.
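A minimal sketch of that workaround, assuming the pipeline above (the REPLICAS/IMAGE values are illustrative):
# make envsubst emit $k and $v literally instead of expanding them to ''
export k='$k' v='$v'
export REPLICAS=3 IMAGE=myrepo/test:latest
cat file.yml | envsubst | kubectl apply --wait=true -f -
Alternatively, GNU envsubst accepts a SHELL-FORMAT argument listing the only variables to substitute, e.g. cat file.yml | envsubst '${REPLICAS} ${IMAGE}' | kubectl apply -f -, which leaves $k and $v untouched without the extra exports.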
Can environment variables passed to containers be composed from environment variables that already exist? Something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env:
            - name: URL
              value: $(HOST):$(PORT)
Helm with its variables seems like a better way of handling that kind of use case.
In the example below you have a deployment snippet with values and variables:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "image/thomas:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        - name: URL
          value: {{ .Values.host }}:{{ .Values.port }}
And here is one of the ways of deploying it with some custom variables:
helm upgrade --install myChart . \
  --set image.tag=v2.5.4 \
  --set host=example.com \
  --set-string port=12345
Helm also allows you to use template functions. There is default, which falls back to a default value if one isn't filled in, and required, which displays a message and refuses to go further with installing the chart if you don't specify the value. There is also the include function, which allows you to bring in another template and pass its results to other template functions.
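For instance, a sketch combining both functions with the value names from the snippet above:
env:
  - name: URL
    value: {{ required "host is required!" .Values.host }}:{{ .Values.port | default "8080" }}
Here rendering fails with the given message unless host is set, while port quietly falls back to 8080.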
Within a single Pod spec, this works with exactly the syntax you described, but the environment variables must be defined (earlier) in the same Pod spec. See Define Dependent Environment Variables in the Kubernetes documentation.
env:
  - name: HOST
    value: host.example.com
  - name: PORT
    value: '80'
  - name: URL
    value: '$(HOST):$(PORT)'
Beyond this, a Kubernetes YAML file needs to be totally standalone, and you can't use environment variables on the system running kubectl to affect the file content. Other tooling like Helm fills this need better; see @thomas's answer for an example.
These manifests are complete files; there is no good way to use variables in them, though you can work around it.
Use the command below to substitute the placeholder and pipe the result to kubectl:
sed -e "s#%%HOST%%#http://whatever#" file.yml | kubectl apply -f -
Though I would suggest using Helm.
Read more:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Can configuration for a program running in a container/pod be placed in a Deployment yaml instead of a ConfigMap yaml, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  spec:
    containers:
      - env:
          - name: "MyConfigKey"
            value: "MyConfigValue"
Single environment
Putting values in environment variables in the Deployment works.
Problem: you should not work directly against the production environment, so you will need at least one other environment.
Using Docker, containers and Kubernetes makes it very easy to create more than one environment.
Multiple environments
When you want to use more than one environment, you want to keep the differences between them as small as possible. This is important for detecting problems quickly and for limiting the management effort needed.
Problem: maintaining the differences between environments while avoiding unique problems (config drift / snowflake servers).
Therefore, keep as much as possible common between the environments, e.g. use the same Deployment.
Only use unique instances of ConfigMap, Secret and probably Ingress for each app and environment, as sketched below.
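A minimal sketch of that idea, assuming each environment supplies its own ConfigMap named app-config while the Deployment stays identical:
# same container spec deployed in every environment
containers:
  - name: app
    envFrom:
      - configMapRef:
          name: app-config   # contents differ per environment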
This is my approach when you want to set env vars directly from the Deployment.
If you are using Helm:
Helm values.yaml file:
deployment:
  env:
    enabled: false
    vars:
      KEY1: VALUE1
      KEY2: VALUE2
Deployment template templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: ...
          {{- if .Values.deployment.env.enabled }}
          env:
            {{- range $key, $val := .Values.deployment.env.vars }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end }}
          {{- end }}
          ...
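You can then switch the block on and override individual vars at render time, e.g. (a sketch using the values layout above):
helm template . \
  --set deployment.env.enabled=true \
  --set deployment.env.vars.KEY1=NEWVALUE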
And if you just want to apply it directly with a kubectl command and a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: ...
          env:
            - name: key1
              value: value1
            - name: key2
              value: value2
          ...
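Then apply it as usual:
kubectl apply -f deployment.yaml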
I'm trying to deploy an application that uses PostgreSQL as a database to my minikube. I'm using Helm as a package manager, and have added the PostgreSQL dependency to my requirements.yaml. Now the question is, how do I set the postgres user, db and password for that deployment? Here's my templates/application.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "sgm.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "sgm.fullname" . }}
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "sgm.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "sgm.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "sgm.fullname" . }}
    spec:
      containers:
        - name: sgm
          image: mainserver/sgm
          env:
            - name: POSTGRES_HOST
              value: {{ template "postgres.fullname" . }}.default.svc.cluster.local
I've tried adding a configmap as stated in the postgres Helm chart's GitHub README, but it seems like I'm doing something wrong.
This is lightly discussed in the Helm documentation: your chart's values.yaml file contains configuration blocks for the charts it includes. The GitHub page for the Helm stable/postgresql chart lists out all of the options.
Either in your chart's values.yaml file, or in a separate YAML file you pass to the helm install -f option, you can set parameters like
postgresql:
  postgresqlDatabase: stackoverflow
  postgresqlPassword: enterImageDescriptionHere
Note that the chart doesn't create a non-admin user (unlike its sibling MySQL chart). If you're okay with the "normal" database user having admin-level privileges (like creating and deleting databases) then you can set postgresqlUser here too.
In your own chart you can reference these values like any other:
- name: PGUSER
  value: {{ .Values.postgresql.postgresqlUser }}
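The same nested keys can also be set on the command line at install time, e.g. (a sketch; the release name sgm is a placeholder, and with Helm 2 you would write helm install --name sgm . instead):
helm install sgm . \
  --set postgresql.postgresqlDatabase=stackoverflow \
  --set postgresql.postgresqlPassword=enterImageDescriptionHere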
I have read the documentation regarding the configmap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
From what I understand, I can create a config map (game-config-2) from two files
(game.properties and ui.properties) using
kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/kubectl/game.properties --from-file=configure-pod-container/configmap/kubectl/ui.properties
Now I see the configmap
kubectl describe configmaps game-config-2
Name:         game-config-2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
game.properties: 158 bytes
ui.properties: 83 bytes
How can I use that configmap? I tried this way:
envFrom:
  - configMapRef:
      name: game-config-2
But this is not working; the env variables are not being picked up from the configmap. Or can I have two configMapRef entries under envFrom?
Yes, a pod or deployment can get envFrom a bunch of configMapRef entries:
spec:
  containers:
    - name: encouragement-api
      image: registry-......../....../encouragement.api
      ports:
        - containerPort: 80
      envFrom:
        - configMapRef:
            name: general-config
        - configMapRef:
            name: private-config
Best to create them from yaml files for k8s law and order:
config_general.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: general-config
data:
  HOSTNAME: Develop_hostname
  COMPUTERNAME: Develop_compname
  ASPNETCORE_ENVIRONMENT: Development
encouragement-api/config_private.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: private-config
data:
  PRIVATE_STUFF: real_private
apply the two configmaps:
kubectl apply -f config_general.yaml
kubectl apply -f encouragement-api/config_private.yaml
Run it, exec into the pod, and run env | grep PRIVATE && env | grep HOSTNAME.
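For example (a sketch; the pod name is a placeholder you would look up with kubectl get pods):
kubectl exec -it encouragement-api-<pod-hash> -- sh -c 'env | grep PRIVATE && env | grep HOSTNAME'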
I have config_general.yaml lying around in the same repo as the developers' code, so they can change it however they like. Passwords and sensitive values are kept in the config_private.yaml file, which sits elsewhere (an encrypted S3 bucket), and the values there are base64 encoded for an extra bit of security.
One solution to this problem is to create a ConfigMap with multiple data key/values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  game.properties: |
    <paste file content here>
  ui.properties: |
    <paste file content here>
Just don't forget the | symbol before pasting the content of the files.
Multiple --from-env-file arguments are not allowed; if you pass the flag more than once, only the last file is used.
Multiple --from-file arguments will work for you.
Eg:
cat config1.txt
var1=val1
cat config2.txt
var3=val3
var4=val4
kubectl create cm details2 --from-env-file=config1.txt --from-env-file=config2.txt -o yaml --dry-run
Output
apiVersion: v1
data:
  var3: val3
  var4: val4
kind: ConfigMap
metadata:
  name: details2
k create cm details2 --from-file=config1.txt --from-file=config2.txt -o yaml --dry-run
Output
apiVersion: v1
data:
  config1.txt: |
    var1=val1
  config2.txt: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  name: details2
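Note the difference: the env-file form produces plain keys (var3, var4) that a pod can consume directly, while the from-file form stores whole files under filename keys. A minimal sketch of consuming the first form in a pod spec:
envFrom:
  - configMapRef:
      name: details2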
If you use Helm, it is much simpler.
Create a ConfigMap template like this
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Values.configMapName }}
data:
  {{ .Values.gameProperties.file.name }}: |
{{ tpl (.Files.Get .Values.gameProperties.file.path) . | indent 4 }}
  {{ .Values.uiProperties.file.name }}: |
{{ tpl (.Files.Get .Values.uiProperties.file.path) . | indent 4 }}
and two files with the key:value pairs, like this game.properties:
GAME_NAME: NFS
and another file ui.properties:
GAME_UI: NFS UI
and values.yaml should look like this:
configMapName: game-config-2
gameProperties:
  file:
    name: game.properties
    path: "properties/game.properties"
uiProperties:
  file:
    name: ui.properties
    path: "properties/ui.properties"
You can verify that the templates interpolate the values from the values.yaml file by running helm template .; you can expect this as output:
kind: ConfigMap
apiVersion: v1
metadata:
  name: game-config-2
data:
  game.properties: |
    GAME_NAME: NFS
  ui.properties: |
    GAME_UI: NFS UI
I am not sure if you can load all key:value pairs from a specific file in a configmap as environment variables in a pod, but you can load all key:value pairs from a specific configmap as environment variables in a pod. See below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
Verify that the pod shows the below env variables:
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
As @Emruz_Hossain mentioned, if game.properties and ui.properties have only env variables, then this can work for you:
kubectl create configmap game-config-2 --from-env-file=configure-pod-container/configmap/kubectl/game.properties --from-env-file=configure-pod-container/configmap/kubectl/ui.properties