ConfigMap Kubernetes YAML: space in value is causing error - kubernetes

For some strange and unknown reason, when I use a ConfigMap with key value pairs that will be set as environment variables in the pods (using envFrom), my pods fail to start.
Here is the ConfigMap portion of my YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: all-config
data:
  # DB configuration
  dbServer: "host.docker.internal"
  dbPort: "3306"
  # problematic config
  validationQuery: 'Select 1'
If I comment out the validationQuery key/value pair, the pod starts. If I leave it in, it fails. If I remove the space, it runs! Very strange behavior, as it all boils down to a single whitespace character.
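(For context, a minimal pod spec consuming this ConfigMap via envFrom would look like the sketch below; the pod name and image are illustrative placeholders, not the real ones.)
apiVersion: v1
kind: Pod
metadata:
  name: example-pod       # placeholder name
spec:
  containers:
  - name: app             # placeholder container
    image: my-app:latest  # placeholder image
    envFrom:
    - configMapRef:
        name: all-config  # every key becomes an environment variable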
Any ideas on why this fails and how users have been getting around this? Can someone try to reproduce?

I honestly believe it's something in your application not liking environment variables with spaces. I tried this myself, and I can see the environment variable with the space nice and dandy when I shell into the pod/container.
PodSpec:
...
spec:
  containers:
  - command:
    - /bin/sleep
    - infinity
    env:
    - name: WHATEVER
      valueFrom:
        configMapKeyRef:
          key: myenv
          name: j
...
$ kubectl get cm j -o=yaml
apiVersion: v1
data:
  myenv: Select 1
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-10T20:44:02Z
  name: j
  namespace: default
  resourceVersion: "11111111"
  selfLink: /api/v1/namespaces/default/configmaps/j
  uid: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaa
root@mypod-xxxxxxxxxx-xxxxx:/# echo $WHATEVER
Select 1
root@mypod-xxxxxxxxxx-xxxxx:/#
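One plausible explanation: if the application or its entrypoint script expands the variable unquoted, ordinary shell word splitting turns 'Select 1' into two separate words. A quick sketch of the difference:
$ VALIDATION_QUERY='Select 1'
$ printf '[%s]\n' $VALIDATION_QUERY     # unquoted: split into two arguments
[Select]
[1]
$ printf '[%s]\n' "$VALIDATION_QUERY"   # quoted: one argument
[Select 1]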

Related

Set node label to the pod environment variable

How to set Node label to Pod environment variable? I need to know the label topology.kubernetes.io/zone value inside the pod.
The Downward API currently does not support exposing node labels to pods/containers. There is an open issue about that on GitHub, but it is unclear when, if ever, it will be implemented.
That leaves the only option: getting node labels from the Kubernetes API, just as kubectl does. It is not easy to implement, especially if you want labels as environment variables. I'll give you an example of how it can be done with an initContainer, curl, and jq, but if possible, I suggest you implement this in your application instead, as it will be easier and cleaner.
To make a request for labels you need permission to do so. Therefore, the example below creates a service account with permission to get ("describe") nodes. Then the script in the initContainer uses the service account to make the request and extracts the labels from the JSON. The test container sources the environment variables from the file and echoes one.
Example:
# Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: describe-nodes
  namespace: <insert-namespace-name-where-the-app-is>
---
# Create a cluster role that is allowed to perform describe ("get") over ["nodes"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: describe-nodes
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: describe-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: describe-nodes
subjects:
- kind: ServiceAccount
  name: describe-nodes
  namespace: <insert-namespace-name-where-the-app-is>
---
# Proof of concept pod
apiVersion: v1
kind: Pod
metadata:
  name: get-node-labels
spec:
  # Service account to get node labels from the Kubernetes API
  serviceAccountName: describe-nodes
  # A volume to keep the extracted labels
  volumes:
  - name: node-info
    emptyDir: {}
  initContainers:
  # The container that extracts the labels
  - name: get-node-labels
    # The image needs the 'curl' and 'jq' tools in it.
    # I used the curl image and ran it as root to install 'jq'
    # during runtime.
    # THIS IS A BAD PRACTICE UNSUITABLE FOR PRODUCTION.
    # Make an image where both are present.
    image: curlimages/curl
    # Remove securityContext if you have an image with both curl and jq
    securityContext:
      runAsUser: 0
    # It'll put labels here
    volumeMounts:
    - mountPath: /node
      name: node-info
    env:
    # Pass the node name to the environment
    - name: NODENAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: APISERVER
      value: https://kubernetes.default.svc
    - name: SERVICEACCOUNT
      value: /var/run/secrets/kubernetes.io/serviceaccount
    - name: SCRIPT
      value: |
        set -eo pipefail
        # Install jq; you don't need this line if the image has it
        apk add jq
        TOKEN=$(cat ${SERVICEACCOUNT}/token)
        CACERT=${SERVICEACCOUNT}/ca.crt
        # Get node labels into a JSON file
        curl --cacert ${CACERT} \
          --header "Authorization: Bearer ${TOKEN}" \
          -X GET ${APISERVER}/api/v1/nodes/${NODENAME} | jq .metadata.labels > /node/labels.json
        # Extract 'topology.kubernetes.io/zone' from the JSON
        NODE_ZONE=$(jq '."topology.kubernetes.io/zone"' -r /node/labels.json)
        # and save it into a file in a format suitable for sourcing
        echo "export NODE_ZONE=${NODE_ZONE}" > /node/zone
    command: ["/bin/ash", "-c"]
    args:
    - 'echo "$$SCRIPT" > /tmp/script && ash /tmp/script'
  containers:
  # A container that needs the label value
  - name: test
    image: debian:buster
    command: ["/bin/bash", "-c"]
    # Source the env variable from the file, echo NODE_ZONE, and keep running doing nothing
    args: ["source /node/zone && echo $$NODE_ZONE && cat /dev/stdout"]
    volumeMounts:
    - mountPath: /node
      name: node-info
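To try the proof of concept, apply the manifest and read the test container's log. The file name below is an assumption, and the zone in the output is just an example; yours depends on the cluster:
$ kubectl apply -f get-node-labels.yaml
$ kubectl logs get-node-labels -c test
us-east-1a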
You could use an initContainer:
...
spec:
  initContainers:
  - name: node2pod
    image: <image-with-k8s-access>
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
...
Ref: Node Label to Pod
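Inside that init container, the label could then be fetched directly; a minimal sketch, assuming the image ships kubectl, the pod's service account is allowed to get nodes, and a shared emptyDir is mounted at the hypothetical path /node-info:
# Runs inside the node2pod init container
kubectl get node "$NODE_NAME" \
  -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}' \
  > /node-info/zone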
Edit/Update: A similar solution could be Inject node labels into Kubernetes pod.

Kubernetes: Define environment variables dependent on other ones using "envFrom"

I have two ConfigMap files. One is supposed to be "secret" values and the other has regular values and should import the secrets.
Here's the sample secret ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
And the regular ConfigMap file:
kind: ConfigMap
metadata:
  name: regular-cm
data:
  SOME_CONFIG: 123
  USING_SEKRET: $(MY_SEKRET)
And my deployment is as follows:
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: my_container
        envFrom:
        - configMapRef:
            name: secret-cm
        - configMapRef:
            name: regular-cm
I was hoping that my variable USING_SEKRET would be "SEKRET" because of the order in which the envFrom sources are imported, but it just appears as "$(MY_SEKRET)" in the Pods.
I've also tried setting the dependent variable as an env directly at the Deployment but it results on the same problem:
kind: Deployment
...
env:
- name: MY_SEKRET
  # Not the expected result because the variable is openly visible but should be hidden
  value: 'SEKRET'
I was trying to follow the documentation guides, based on the Define an environment dependent variable for a container but I haven't seen examples similar to what I want to do.
Is there a way to do this?
EDIT:
To explain my idea behind this structure, secret-cm whole file will be encrypted at the repository so not all peers will be able to see its contents.
On the other hand, I still want to be able to show everyone where its variables are used, hence the dependency on regular-cm.
With that, authorized peers can run kubectl commands and variable replacements of secret-cm would work properly but for everyone else the file is hidden.
You did not explain why you want to define two ConfigMaps (one getting a value from another), but I am assuming that you want the parameter name defined in the ConfigMap to be independent of the parameter name used by your container in the pod. If that is the case, create your ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
Then, in your deployment, use the env variable from the ConfigMap:
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: my_container
        env:
        - name: USING_SEKRET
          valueFrom:
            configMapKeyRef:
              name: secret-cm
              key: MY_SEKRET
Now when you access the env variable $USING_SEKRET, it will show the value 'SEKRET'.
In case your requirement is different, ignore this response and provide more details.
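Another documented option, if you can move the value into a real Secret, is a dependent environment variable: Kubernetes expands $(VAR) references to variables defined earlier in the same env list. A sketch, where the Secret name secret-values is an assumption:
kind: Deployment
...
env:
- name: MY_SEKRET
  valueFrom:
    secretKeyRef:
      name: secret-values  # hypothetical Secret holding the value
      key: MY_SEKRET
- name: USING_SEKRET
  value: "$(MY_SEKRET)"    # expanded because MY_SEKRET is defined above it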

Kubernetes Kustomize: replace variable in patch file

Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}@domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.
As @Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica-and-rollout-strategy.yaml
secretGenerator:
- literals:
  - db-password=12345
  name: sl-demo-app
  type: Opaque
Running kustomize build in your project directory will create, among others, the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
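The data value is just the base64-encoded literal, which you can verify locally:
$ echo 'MTIzNDU=' | base64 -d
12345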
You can find more details in the article.
If we want to use this secret from our deployment, we just have, like before, to add a new layer definition which uses the secret.
For example, this file will mount the db-password value as an environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: "DB_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: sl-demo-app
              key: db.password
In your Deployment definition file it may look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        env:
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              name: git-secret
              key: git.password
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}@domain.de
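To wire this up from a build script, one possible flow (a sketch; the secret name git-secret and key git.password are taken from the snippet above, and $PASSWORD is assumed to come from the CI environment) is:
# In the build/deploy script
kustomize edit add secret git-secret --from-literal=git.password="$PASSWORD"
kubectl apply -k .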

kubernetes: How to create and use configmap from multiple files

I have read the documentation regarding the ConfigMap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
From what I understand, I can create a ConfigMap (game-config-2) from two files (game.properties and ui.properties) using:
kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/kubectl/game.properties --from-file=configure-pod-container/configmap/kubectl/ui.properties
Now I can see the ConfigMap:
$ kubectl describe configmaps game-config-2
Name:         game-config-2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
game.properties:  158 bytes
ui.properties:    83 bytes
How can I use that configmap? I tried this way:
envFrom:
- configMapRef:
    name: game-config-2
But this is not working; the env variables are not picked up from the ConfigMap. Or can I have two configMapRef entries under envFrom?
Yes, a pod or deployment can get env from a bunch of configMapRef entries:
spec:
  containers:
  - name: encouragement-api
    image: registry-......../....../encouragement.api
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
        name: general-config
    - configMapRef:
        name: private-config
It's best to create them from YAML files, for k8s law and order:
config_general.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: general-config
data:
  HOSTNAME: Develop_hostname
  COMPUTERNAME: Develop_compname
  ASPNETCORE_ENVIRONMENT: Development
encouragement-api/config_private.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: private-config
data:
  PRIVATE_STUFF: real_private
Apply the two ConfigMaps:
$ kubectl apply -f config_general.yaml
$ kubectl apply -f encouragement-api/config_private.yaml
Run and exec into the pod, then run env | grep PRIVATE && env | grep HOSTNAME.
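For example (the pod name is a placeholder):
$ kubectl exec -it <pod-name> -- env | grep PRIVATE
PRIVATE_STUFF=real_private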
I have config_general.yaml lying around in the same repo as the developers' code, so they can change it however they like. Passwords and sensitive values are kept in the config_private.yaml file, which sits elsewhere (an encrypted S3 bucket), and the values there are base64 encoded for an extra bit of security.
One solution to this problem is to create a ConfigMap with multiple data keys/values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  game.properties: |
    <paste file content here>
  ui.properties: |
    <paste file content here>
Just don't forget the | symbol before pasting the content of the files.
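Note that with this layout each file's content is a single value, so it is usually consumed as mounted files rather than via envFrom; a minimal sketch of the pod side (the container name and image are placeholders):
spec:
  containers:
  - name: app             # placeholder
    image: my-app:latest  # placeholder
    volumeMounts:
    - name: conf
      mountPath: /etc/conf  # game.properties and ui.properties appear as files here
  volumes:
  - name: conf
    configMap:
      name: conf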
Multiple --from-env-file flags are not allowed.
Multiple --from-file flags will work for you.
E.g.:
$ cat config1.txt
var1=val1

$ cat config2.txt
var3=val3
var4=val4
$ kubectl create cm details2 --from-env-file=config1.txt --from-env-file=config2.txt -o yaml --dry-run
Output
apiVersion: v1
data:
  var3: val3
  var4: val4
kind: ConfigMap
metadata:
  name: details2
$ kubectl create cm details2 --from-file=config1.txt --from-file=config2.txt -o yaml --dry-run
Output
apiVersion: v1
data:
  config1.txt: |
    var1=val1
  config2.txt: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  name: details2
If you use Helm, it is much simpler.
Create a ConfigMap template like this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Values.configMapName }}
data:
  {{ .Values.gameProperties.file.name }}: |
{{ tpl (.Files.Get .Values.gameProperties.file.path) . | indent 4 }}
  {{ .Values.uiProperties.file.name }}: |
{{ tpl (.Files.Get .Values.uiProperties.file.path) . | indent 4 }}
and two files with key: value pairs, like this game.properties:
GAME_NAME: NFS
and another file, ui.properties:
GAME_UI: NFS UI
and values.yaml should look like this:
configMapName: game-config-2
gameProperties:
  file:
    name: game.properties
    path: "properties/game.properties"
uiProperties:
  file:
    name: ui.properties
    path: "properties/ui.properties"
You can verify that the template interpolates the values from the values.yaml file by running helm template .; you can expect this as output:
kind: ConfigMap
apiVersion: v1
metadata:
  name: game-config-2
data:
  game.properties: |
    GAME_NAME: NFS
  ui.properties: |
    GAME_UI: NFS UI
I am not sure if you can load all key:value pairs from a specific file in a ConfigMap as environment variables in a pod, but you can load all key:value pairs from a specific ConfigMap as environment variables in a pod. See below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
Verify that the pod shows the env variables below:
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
As @Emruz_Hossain mentioned, if game.properties and ui.properties have only env variables, then this can work for you:
kubectl create configmap game-config-2 --from-env-file=configure-pod-container/configmap/kubectl/game.properties --from-env-file=configure-pod-container/configmap/kubectl/ui.properties

applying task in k8s pod

I'm trying to run kubectl apply -f pod.yaml but getting this error. Any hint?
error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-10.0.1
  namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: bded587f4604
imagePullSecrets: ["testo", "awsecr-cred"]
nodeSelector:
  kubernetes.io/hostname: 11-4730
tasks:
- name: traind
  command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
  completions: 1
  inputs:
    datasets:
    - name: poa
      version: 2018-
      mountPath: /in/0
You have an indentation error in your pod.yaml definition with imagePullSecrets (it must sit under spec, as must nodeSelector), and you need to specify a - name: entry for your imagePullSecrets. It should be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
  - name: testawsecr-cred
...
Note that imagePullSecrets: is plural and an array, so you can specify credentials for multiple registries. Also note that tasks is not a valid field in a Pod spec at all, which is why the validator still rejects it.
If you are using Docker you can also specify multiple credentials in ~/.docker/config.json.
If you have the same credentials in imagePullSecrets: and configs in ~/.docker/config.json, the credentials are merged.
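The file format for multiple registries looks roughly like this (the registry hosts and the base64 auth strings are placeholders):
{
  "auths": {
    "registry.example.com":       { "auth": "<base64 of user:password>" },
    "other-registry.example.com": { "auth": "<base64 of user:password>" }
  }
}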