I have the documentation regarding the configmap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
From what I understand, I can create a ConfigMap (game-config-2) from two files
(game.properties and ui.properties) using
kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/kubectl/game.properties --from-file=configure-pod-container/configmap/kubectl/ui.properties
Now I see the configmap
kubectl describe configmaps game-config-2
Name: game-config-2
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
game.properties: 158 bytes
ui.properties: 83 bytes
How can I use that configmap? I tried this way:
envFrom:
- configMapRef:
    name: game-config-2
But this is not working; the env variables are not picked up from the ConfigMap. Also, can I have two configMapRef entries under envFrom?
Yes, a Pod or Deployment can get env vars from a bunch of configMapRef entries:
spec:
  containers:
  - name: encouragement-api
    image: registry-......../....../encouragement.api
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
        name: general-config
    - configMapRef:
        name: private-config
It's best to create them from YAML files, for k8s law and order:
config_general.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: general-config
data:
  HOSTNAME: Develop_hostname
  COMPUTERNAME: Develop_compname
  ASPNETCORE_ENVIRONMENT: Development
encouragement-api/config_private.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: private-config
data:
  PRIVATE_STUFF: real_private
Apply the two ConfigMaps:
kubectl apply -f config_general.yaml
kubectl apply -f encouragement-api/config_private.yaml
Run and exec into the pod, then run env | grep PRIVATE && env | grep HOSTNAME to verify.
I have config_general.yaml lying around in the same repo as the developers' code, so they can change it however they like. Passwords and sensitive values are kept in config_private.yaml, which sits elsewhere (an encrypted S3 bucket), and the values there are base64-encoded for an extra bit of security.
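For reference, one way to produce such base64-encoded values for config_private.yaml is the base64 CLI (just a sketch; a plain ConfigMap will not decode them for you the way a Secret does, so the application has to decode them itself):
echo -n 'real_private' | base64       # prints cmVhbF9wcml2YXRl
echo 'cmVhbF9wcml2YXRl' | base64 -d   # prints real_private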
One solution to this problem is to create a ConfigMap with multiple data keys/values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  game.properties: |
    <paste file content here>
  ui.properties: |
    <paste file content here>
Just don't forget the | symbol before pasting the content of the files.
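Since the values here are whole files rather than single variables, a ConfigMap like this is typically consumed as a volume rather than via envFrom. A minimal sketch of a Pod doing that (the pod name, container name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: game-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "ls /config && sleep 3600"]
    volumeMounts:
    - name: conf-volume
      mountPath: /config
      readOnly: true
  volumes:
  - name: conf-volume
    configMap:
      name: conf   # the ConfigMap defined above; each key becomes a file under /config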
Multiple --from-env-file are not allowed.
Multiple --from-file will work for you.
Eg:
cat config1.txt
var1=val1
cat config2.txt
var3=val3
var4=val4
kubectl create cm details2 --from-env-file=config1.txt --from-env-file=config2.txt -o yaml --dry-run
Output
apiVersion: v1
data:
  var3: val3
  var4: val4
kind: ConfigMap
metadata:
  name: details2
k create cm details2 --from-file=config1.txt --from-file=config2.txt -o yaml --dry-run
Output
apiVersion: v1
data:
  config1.txt: |
    var1=val1
  config2.txt: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  name: details2
If you use Helm, it is much simpler.
Create a ConfigMap template like this
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Values.configMapName }}
data:
  {{ .Values.gameProperties.file.name }}: |
    {{ tpl (.Files.Get .Values.gameProperties.file.path) . }}
  {{ .Values.uiProperties.file.name }}: |
    {{ tpl (.Files.Get .Values.uiProperties.file.path) . }}
and two files with the key:value pairs, like this game.properties:
GAME_NAME: NFS
and another file, ui.properties:
GAME_UI: NFS UI
and values.yaml should look like this:
configMapName: game-config-2
gameProperties:
  file:
    name: game.properties
    path: "properties/game.properties"
uiProperties:
  file:
    name: ui.properties
    path: "properties/ui.properties"
You can verify whether the templates interpolate the values from the values.yaml file with helm template .; you can expect this output:
kind: ConfigMap
apiVersion: v1
metadata:
  name: game-config-2
data:
  game.properties: |
    GAME_NAME: NFS
  ui.properties: |
    GAME_UI: NFS UI
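To actually create the ConfigMap in the cluster from this chart, something along these lines should work (the release name my-release is just a placeholder):
helm template my-release .                 # render locally and inspect the ConfigMap
helm install my-release .                  # or install it into the cluster
kubectl describe configmap game-config-2   # verify the result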
I am not sure if you can load all key:value pairs from a specific file in a ConfigMap as environment variables in a Pod, but you can load all key:value pairs from a specific ConfigMap as environment variables in a Pod. See below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
Verify that the Pod shows the env variables below:
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
As #Emruz_Hossain mentioned, if game.properties and ui.properties contain only env variables, then this can work for you:
kubectl create configmap game-config-2 --from-env-file=configure-pod-container/configmap/kubectl/game.properties --from-env-file=configure-pod-container/configmap/kubectl/ui.properties
Related
I have different sets of environment variables per deployment/microservice, and the values for each environment (dev/test/qa) are different.
Do I need an overlay file for each deployment/microservice for each environment (dev/test/qa), or can I manage with a single overlay per environment?
Deployment - app-1.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: "B1"
        - name: "D1"
          value: "E1"
Deployment - app-2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-2
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: example:2.0
        env:
        - name: "X1"
          value: "Y1"
        - name: "P1"
          value: "Q1"
You can keep everything inside a single YAML file and divide the YAML as needed.
You can use --- to merge the YAML configuration files into one file, as in the example below.
In a single YAML file you can add everything: Secret, Deployment, Service, etc., as required.
However, if you have a different cluster to manage for each environment, then applying a single YAML file might create issues; in that case, it's better to keep the files separate.
If you are planning to set up CI/CD and automation for deployment, I would suggest keeping a single deployment file with a variable approach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: DEPLOYMENT_NAME
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: IMAGE_NAME
        env:
        - name: "A1"
          value: "B1"
Using the sed command, you can replace the values of IMAGE_NAME, DEPLOYMENT_NAME, A1, etc. at run time based on the environment, and as soon as your file is ready you can apply it from the CI/CD server.
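A rough sketch of what that substitution step could look like in a CI/CD script (the template file name and the substituted values below are placeholders):
# replace the placeholders in the template and apply the result
sed -e "s|IMAGE_NAME|registry.example.com/app:1.2.3|g" \
    -e "s|DEPLOYMENT_NAME|example-dev|g" \
    -e "s|B1|dev-value|g" \
    deployment-template.yaml | kubectl apply -f -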
Single merged file :
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: "B1"
        - name: "D1"
          value: "E1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-2
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: example:2.0
        env:
        - name: "X1"
          value: "Y1"
        - name: "P1"
          value: "Q1"
EDIT
If managing the environment variables is the only concern, you can also use a Secret or ConfigMap to hold them.
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: dev-env-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  extra: YmFyCg==
  A1: QjE=
  D1: RTE=
This is a single Secret file storing all the dev environment variables.
To inject all variables into the dev Deployment, add the config below to the deployment file, so all the variables inside that ConfigMap or Secret get injected into the Deployment:
envFrom:
- secretRef:
    name: dev-env-sample
https://kubernetes.io/docs/concepts/configuration/secret/
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dev-configmap
data:
  extra: YmFyCg==
  A1: B1
  D1: E1
and you can inject the ConfigMap into the Deployment using:
envFrom:
- configMapRef:
    name: dev-configmap
The difference between a Secret and a ConfigMap is that a Secret stores the data in base64-encoded form while a ConfigMap stores it in plain text.
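For example, kubectl can show how a plain value ends up base64-encoded inside a Secret; a quick sketch using --dry-run so nothing is actually created in the cluster:
kubectl create secret generic dev-env-sample --from-literal=A1=B1 --dry-run=client -o yaml
# the generated output contains:
# data:
#   A1: QjE=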
You can also merge multiple Secrets or ConfigMaps into a single YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: dev-env-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  extra: YmFyCg==
  A1: QjE=
  D1: RTE=
---
apiVersion: v1
kind: Secret
metadata:
  name: stag-env-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  extra: YmFyCg==
  A1: QjE=
  D1: RTE=
---
apiVersion: v1
kind: Secret
metadata:
  name: prod-env-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  extra: YmFyCg==
  A1: QjE=
  D1: RTE=
Inject the Secret into the Deployment as needed per environment.
dev-env-sample can be added into the deployment file for the dev environment.
You can use variables in the env value field as below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: ${A1_VALUE}
        - name: "D1"
          value: ${D1_VALUE}
Then, on dev env you can do the following.
export A1_VALUE=<your dev env value for A1>
export D1_VALUE=<your dev env value for D1>
envsubst < example-1.yaml | kubectl apply -f -
Then, on qa env you can do the following.
export A1_VALUE=<your qa env value for A1>
export D1_VALUE=<your qa env value for D1>
envsubst < example-1.yaml | kubectl apply -f -
You can also put those env variables in a file. For example, you can have the following two env files.
dev.env file
A1_VALUE=a1_dev
D1_VALUE=b1_dev
qa.env file
A1_VALUE=a1_qa
D1_VALUE=b1_qa
Then, on dev environment, just run:
❯ source dev.env
❯ envsubst < example-1.yaml| kubectl apply -f -
On qa environment, just run:
❯ source qa.env
❯ envsubst < example-1.yaml| kubectl apply -f -
Note that you have to install envsubst on your machine.
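envsubst ships with GNU gettext; on Debian/Ubuntu it can typically be installed like this (package names vary by distro):
sudo apt-get install gettext-base   # Debian/Ubuntu
brew install gettext                # macOS; envsubst comes with gettext (you may need to link it)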
I'll describe my target and then show what I have done to achieve it... My goal is to:
create a configmap that holds a path for properties file
create a deployment, that has a volume mounting the file from the path configured in configmap
What I had done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
      - name: my-client-container
        image: {{ .Values.image.client }}
        imagePullPolicy: {{ .Values.pullPolicy.client }}
        ports:
        - containerPort: 80
        env:
        - name: MY_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: my-configmap
              key: my_properties_file_name
        volumeMounts:
        - name: config
          mountPath: "/etc/config"
          readOnly: true
      imagePullSecrets:
      - name: secret-private-registry
      volumes:
      # You set volumes at the Pod level, then mount them into containers inside that Pod
      - name: config
        configMap:
          # Provide the name of the ConfigMap you want to mount.
          name: my-configmap
          # An array of keys from the ConfigMap to create as files
          items:
          - key: "my_properties_file_name"
            path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the value given in the ConfigMap), and not the content of the properties file as I actually have it on my local disk.
How can I mount that file, using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Or you can also use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
With either method, you are actually passing a snapshot of the content of the file on the local disk to Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
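One common way to refresh the ConfigMap after editing my.properties locally is to regenerate it and re-apply (a sketch using the same names as above):
kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -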
The design of Kubernetes allows running kubectl commands against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessible in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server.
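A minimal sketch of what the NFS-backed variant could look like on the cluster side (the server address and export path are placeholders; the same share would also have to be mounted on your local machine):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/config/my.properties && sleep 3600"]
    volumeMounts:
    - name: shared-config
      mountPath: /etc/config
  volumes:
  - name: shared-config
    nfs:
      server: nfs.example.com   # hypothetical NFS server reachable from the cluster nodes
      path: /exports/config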
I'm using GKE and Helm v3 and I'm trying to create/reserve a static IP address using ComputeAddress and then to create DNS A record with the previously reserved IP address.
Reserve IP address
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: ip-address
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  location: global
Get reserved IP address
kubectl get computeaddress ip-address -o jsonpath='{.spec.address}'
Create DNS A record
apiVersion: dns.cnrm.cloud.google.com/v1beta1
kind: DNSRecordSet
metadata:
  name: dns-record-a
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  name: "{{ .Release.Name }}.example.com"
  type: "A"
  ttl: 300
  managedZoneRef:
    external: example-com
  rrdatas:
  - **IP-ADDRESS-VALUE** <----
Is there a way to reference the IP address value, created by ComputeAddress, in the DNSRecordSet resource?
Basically, I need something similar to the output values in Terraform.
Thanks!
Currently, it is not possible to assign a different value as a string (IP address) to the rrdatas field, so you are not able to "call" another resource like the IP address created before. You need to put the value in the x.x.x.x format.
It's interesting that something similar exists for GKE Ingress, where we can reference a reserved IP address and a managed SSL certificate using annotations:
annotations:
  kubernetes.io/ingress.global-static-ip-name: my-static-address
I have no idea why there is not something like this for DNSRecordSet resource. Hopefully, GKE will introduce it in the future.
Instead of running two commands, I've found a workaround by using Helm's hooks.
First, we need to define a Job as a post-install and post-upgrade hook, which will pick up the reserved IP address when it becomes ready and then create the appropriate DNSRecordSet resource with it. The script which retrieves the IP address and the manifest for the DNSRecordSet are passed through a ConfigMap and mounted into the Pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-dns-record-set-hook"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-dns-record-set-hook"
    spec:
      restartPolicy: OnFailure
      containers:
      - name: post-install-job
        image: alpine:latest
        command: ['sh', '-c', '/opt/run-kubectl-command-to-set-dns.sh']
        volumeMounts:
        - name: volume-dns-record-scripts
          mountPath: /opt
        - name: volume-writable
          mountPath: /mnt
      volumes:
      - name: volume-dns-record-scripts
        configMap:
          name: dns-record-scripts
          defaultMode: 0777
      - name: volume-writable
        emptyDir: {}
ConfigMap definition with the script and manifest file:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: dns-record-scripts
data:
  run-kubectl-command-to-set-dns.sh: |-
    # install kubectl command
    apk add curl && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/linux/amd64/kubectl && \
    chmod u+x kubectl && \
    mv kubectl /bin/kubectl
    # wait for reserved IP address to be ready
    kubectl wait --for=condition=Ready computeaddress/ip-address
    # get reserved IP address
    IP_ADDRESS=$(kubectl get computeaddress ip-address -o jsonpath='{.spec.address}')
    echo "Reserved address: $IP_ADDRESS"
    # update IP_ADDRESS in manifest
    sed "s/##IP_ADDRESS##/$IP_ADDRESS/g" /opt/dns-record.yml > /mnt/dns-record.yml
    # create DNS record
    kubectl apply -f /mnt/dns-record.yml
  dns-record.yml: |-
    apiVersion: dns.cnrm.cloud.google.com/v1beta1
    kind: DNSRecordSet
    metadata:
      name: dns-record-a
      annotations:
        cnrm.cloud.google.com/project-id: project-id
    spec:
      name: "{{ .Release.Name }}.example.com"
      type: A
      ttl: 300
      managedZoneRef:
        external: example-com
      rrdatas:
      - "##IP_ADDRESS##"
And, finally, for the (default) Service Account to be able to retrieve the IP address and create/update the DNSRecordSet, we need to assign some roles to it:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dnsrecord-setter
rules:
- apiGroups: ["compute.cnrm.cloud.google.com"]
  resources: ["computeaddresses"]
  verbs: ["get", "list"]
- apiGroups: ["dns.cnrm.cloud.google.com"]
  resources: ["dnsrecordsets"]
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dnsrecord-setter
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dnsrecord-setter
Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}#domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.
As #Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for
${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a
SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica-and-rollout-strategy.yaml
secretGenerator:
- literals:
  - db-password=12345
  name: sl-demo-app
  type: Opaque
kustomize build run in your project directory will create, among others, the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
More details can be found in the article.
If we want to use this secret from our deployment, we just have, like
before, to add a new layer definition which uses the secret.
For example, this file will mount the db-password value as an environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: "DB_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: sl-demo-app
              key: db.password
In your Deployment definition file it may look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        env:
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              name: git-secret
              key: git.password
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}#domain.de
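Tying this back to the build script, one possible flow is to generate the Secret with kustomize and then apply the kustomization (the secret name git-secret and key git.password here simply mirror the Deployment snippet above):
# run inside the kustomization directory from the build script
kustomize edit add secret git-secret --from-literal=git.password=$PASSWORD
kubectl apply -k .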
I have a requirement where I push a bunch of key-value pairs to a text/JSON file. After that, I want to import the key-value data into a ConfigMap and consume this ConfigMap within a Pod using the kubernetes-client APIs.
Any pointers on how to get this done would be great.
TIA
You can do it in two ways.
Create ConfigMap from file as is.
In this case you will get a ConfigMap with the filename as the key and the file data as the value.
For example, you have file your-file.json with content {key1: value1, key2: value2, keyN: valueN}.
And your-file.txt with content
key1: value1
key2: value2
keyN: valueN
kubectl create configmap name-of-your-configmap --from-file=your-file.json
kubectl create configmap name-of-your-configmap-2 --from-file=your-file.txt
As a result:
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap
data:
  your-file.json: |
    {key1: value1, key2: value2, keyN: valueN}
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap-2
data:
  your-file.txt: |
    key1: value1
    key2: value2
    keyN: valueN
After this you can mount either ConfigMap into a Pod; for example, let's mount your-file.json:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh","-c","cat /etc/config/keys" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: name-of-your-configmap
      items:
      - key: your-file.json
        path: keys
  restartPolicy: Never
Now you can read that data from /etc/config/keys inside the Pod (the file is named after the path given in items). Remember that this data is read-only.
Create ConfigMap from file with environment variables.
You can use special syntax to define pairs of key: value in file.
These syntax rules apply:
Each line in a file has to be in VAR=VAL format.
Lines beginning with # (i.e. comments) are ignored.
Blank lines are ignored.
There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).
You have file your-env-file.txt with content
key1=value1
key2=value2
keyN=valueN
kubectl create configmap name-of-your-configmap-3 --from-env-file=your-env-file.txt
As a result:
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap-3
data:
  key1: value1
  key2: value2
  keyN: valueN
Now you can use ConfigMap data as Pod environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-2
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: name-of-your-configmap-3
          key: key1
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: name-of-your-configmap-3
          key: key2
    - name: SOME_VAR
      valueFrom:
        configMapKeyRef:
          name: name-of-your-configmap-3
          key: keyN
  restartPolicy: Never
Now you can use these variables inside the Pod.
For more information, check the documentation.
I can also recommend Kustomize for this task. You can use it as part of your deployment pipeline to generate the K8s configuration (not only ConfigMaps, but also Deployments, NetworkPolicies, Services etc.).
In kustomize you'd need a ConfigMapGenerator. There are different options. In your case env is suitable.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# generate a ConfigMap named my-system-env-<some-hash> where each key/value pair in
# env.txt appears as a data entry (separated by \n).
- name: my-system-env
  env: env.txt
Other options like files will load the whole content of the file into a single value of the ConfigMap.
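For contrast, a files-based generator might look like the sketch below; here the whole env.txt would end up as a single value under the key env.txt (this variant is an illustration, not part of the original answer):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# generate a ConfigMap named my-system-files-<some-hash> with one key, env.txt,
# whose value is the entire content of the file
- name: my-system-files
  files:
  - env.txt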
To expose the key-value pairs from the env/text file as container environment variables of a Pod as-is:
Create a ConfigMap from the file using
kubectl create configmap special-config --from-env-file=<key value pairs file>
then update the spec of the Pod's container that needs these key-value pairs to:
envFrom:
- configMapRef:
    name: special-config
Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never