Kubernetes: assign ConfigMap as environment variables on Deployment

I am trying to deploy my image to Azure Kubernetes Service. I use the command:
kubectl apply -f mydeployment.yml
And here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: mycr.azurecr.io/my-api
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: my-existing-config-map
I have the ConfigMap my-existing-config-map created with a bunch of values in it, but the deployment doesn't add these values as environment variables.
The ConfigMap was created from a ".env" file this way:
kubectl create configmap my-existing-config-map --from-file=.env
What am I missing here?

If your .env file is in this format
a=b
c=d
you need to use --from-env-file=.env instead.
To be more explanatory, using --from-file=aa.xx creates a ConfigMap that looks like this:
aa.xx: |
  file content here....
  ....
  ....
When the ConfigMap is used with envFrom.configMapRef, it just creates one env variable, "aa.xx", containing the whole file content. In the case where the filename starts with '.', like .env, the env variable is not even created, because the name violates UNIX environment variable naming rules.
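For comparison, here is a quick sketch (assuming the same .env file with the entries a=b and c=d shown above): recreating the ConfigMap with --from-env-file produces one key per line, which envFrom can then expose as separate environment variables.
kubectl delete configmap my-existing-config-map
kubectl create configmap my-existing-config-map --from-env-file=.env
kubectl get configmap my-existing-config-map -o yaml
# apiVersion: v1
# kind: ConfigMap
# data:
#   a: b
#   c: d
# ...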

As you are using a .env file, the format of the file is important.
Create a config.env file in the following format, which can include comments:
echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
Create the ConfigMap:
kubectl create cm config --from-env-file=config.env
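The comments and blank lines are dropped; the resulting ConfigMap should contain only the key-value pairs (a trimmed sketch of the kubectl get cm config -o yaml output):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  var1: val1
  var2: val2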
Use the ConfigMap in your Pod definition file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    envFrom:
    - configMapRef:
        name: config
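To confirm the variables reached the container, you can check the environment of the running Pod (variable names follow the config.env example above):
kubectl exec nginx -- env | grep var
# var1=val1
# var2=val2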

Related

Injecting environment variables to Postgres pod from Hashicorp Vault

I'm trying to set the POSTGRES_PASSWORD, POSTGRES_USER and POSTGRES_DB environment variables in a Kubernetes Pod, running the official postgres docker image, with values injected from Hashicorp Vault.
The issue I experience is that the Postgres Pod will not start and provides no logs as to what might have caused it to stop.
I'm trying to source the injected secrets on startup using the args /bin/bash -c source /vault/secrets/backend. Nothing seems to happen once this command is reached. If I add an echo statement in front of source, it is displayed in the kubectl logs.
Steps taken so far include removing the - args part of the configuration and setting the required POSTGRES_PASSWORD variable directly with a test value. When that is done the pod starts, and I can exec into it and verify that the secrets are indeed injected and that I'm able to source them. Running cat on the secrets file gives me the following output:
export POSTGRES_PASSWORD="jiasjdi9u2easjdu##djasj#!-d2KDKf"
export POSTGRES_USER="postgres"
export POSTGRES_DB="postgres"
To me this indicates that the Vault injection is working as expected and that this part is configured according to my needs.
*edit: commands after sourcing are indeed run. Tested with an echo command.
My configuration is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-db
  namespace: planet9-demo
  labels:
    app: postgres-db
    environment: development
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres-db
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-backend: secret/data/backend
        vault.hashicorp.com/agent-inject-template-backend: |
          {{ with secret "secret/backend/database" -}}
          export POSTGRES_PASSWORD="{{ .Data.data.adminpassword}}"
          export POSTGRES_USER="{{ .Data.data.postgresadminuser}}"
          export POSTGRES_DB="{{ .Data.data.postgresdatabase}}"
          {{- end }}
        vault.hashicorp.com/role: postgresDB
      labels:
        app: postgres-db
        tier: backend
    spec:
      containers:
      - args:
        - /bin/bash
        - -c
        - source /vault/secrets/backend
        name: postgres-db
        image: postgres:latest
        resources:
          requests:
            cpu: 300m
            memory: 1Gi
          limits:
            cpu: 400m
            memory: 2Gi
        volumeMounts:
        - name: postgres-pvc
          mountPath: /mnt/data
          subPath: postgres-data/planet9-demo
        env:
        - name: PGDATA
          value: /mnt/data
      restartPolicy: Always
      serviceAccount: sa-postgres-db
      serviceAccountName: sa-postgres-db
      volumes:
      - name: postgres-pvc
        persistentVolumeClaim:
          claimName: postgres-pvc
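Note that the args above replace the postgres image's normal startup, so the shell only sources the secrets and then exits without ever starting the database. A common pattern (a sketch assuming the official image's docker-entrypoint.sh, not a confirmed fix for this setup) is to source the file and then exec the original entrypoint in the same shell:
        args:
        - /bin/bash
        - -c
        # source the Vault-rendered file, then hand off to the image's normal entrypoint
        - source /vault/secrets/backend && exec docker-entrypoint.sh postgres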

How do I attach a configmap to a deployment in Kubernetes?

Based on the instructions found here (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/) I am trying to create an nginx deployment and configure it using a ConfigMap. I can successfully access nginx using curl (yea!) but the ConfigMap does not appear to be "sticking." The only thing it is supposed to do right now is forward the traffic along. I have seen the thread here (How do I load a configMap in to an environment variable?); although I am using the same format, their answer was not relevant.
Can anyone tell me how to properly configure the ConfigMap? The YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: sandbox
spec:
  selector:
    matchLabels:
      run: nginx
      app: dsp
      tier: frontend
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        app: dsp
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        # Define the environment variable
        - name: nginx-conf
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: nginx-conf
              # Specify the key associated with the value
              key: nginx.conf
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
The nginx.conf is:
# The identifier Backend is internal to nginx, and used to name this specific upstream
upstream Backend {
    # hello is the internal DNS name used by the backend Service inside Kubernetes
    server dsp;
}
server {
    listen 80;
    location / {
        # The following statement will proxy traffic to the upstream named Backend
        proxy_pass http://Backend;
    }
}
I turn it into a configmap using the following line
kubectl create configmap -n sandbox nginx-conf --from-file=apps/nginx.conf
You need to mount the ConfigMap rather than use it as an environment variable, as the setting is not in a key-value format.
Your Deployment YAML should look like this:
containers:
- name: nginx
  image: nginx
  volumeMounts:
  - mountPath: /etc/nginx
    name: nginx-conf
volumes:
- name: nginx-conf
  configMap:
    name: nginx-conf
    items:
    - key: nginx.conf
      path: nginx.conf
You need to create (apply) the ConfigMap beforehand. You can create it from a file:
kubectl create configmap nginx-conf --from-file=nginx.conf
or you can write the ConfigMap manifest directly:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    # The identifier Backend is internal to nginx, and used to name this specific upstream
    upstream Backend {
        # hello is the internal DNS name used by the backend Service inside Kubernetes
        server dsp;
    }
    ...
    }
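Once applied, you can confirm the rendered file inside a running pod (using the namespace and names from the question's YAML):
kubectl exec -n sandbox deploy/nginx -- cat /etc/nginx/nginx.conf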

Can I get a ConfigMap value from an external file?

I have this ConfigMap defined:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    app: my-config
data:
  myConfiguration.json: |
    {
      "configKey": [
        {
          "key" : "value"
        },
        {
          "key" : "value"
        }
      ]
    }
And this is how I use it in my pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: someimage
  name: someimage
spec:
  selector:
    matchLabels:
      app: someimage
  replicas: 1
  template:
    metadata:
      labels:
        app: someimage
    spec:
      containers:
      - image: someimage
        name: someimage
        command:
        - mb
        - --configfile
        - /configFolder/myConfig.json
        ports:
        - containerPort: 2525
        volumeMounts:
        - name: config-volume
          mountPath: /configFolder
      hostname: somehost
      restartPolicy: Always
      nodeSelector:
        beta.kubernetes.io/os: linux
      volumes:
      - name: config-volume
        configMap:
          name: my-config
          items:
          - key: myConfiguration.json
            path: myConfiguration.json
My question is: is it possible to keep the value of myConfiguration.json (the JSON string) in a separate file, separate from the ConfigMap, in order to keep it clean? How would I need to change the deployment and ConfigMap YAML definitions so that I do not have to change the application?
Important: I cannot use any separate templating tool.
Thanks.
Yes you can! Using Kustomize.
Kustomize is a kubectl sub-command introduced in 1.14, and it has a lot of features that will help customize your deployments.
To do that you'll have to use ConfigMap generators. This will require an additional file, kustomization.yaml.
So if your deployment YAML file is deployment.yaml and your ConfigMap's name is my-config, then the kustomization.yaml should look something like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: my-config
  files:
  - myConfiguration.json
  - myConfiguration2.json # you can use multiple files
To run kustomize you'll have to use kubectl apply with the -k option.
Edit: Kustomize will append a hash of your ConfigMaps' contents to their names. With that it can track changes to your configuration and trigger a redeploy for you whenever it changes, so there is no need to delete your pods whenever your ConfigMaps are altered.
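For example, run it from the directory containing kustomization.yaml (the hash suffix shown below is illustrative, not a real value):
kubectl apply -k .
kubectl get configmap
# NAME                   DATA   AGE
# my-config-7h2gk4c9dm   2      5s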

How to fetch configmap from kubernetes pod

I have one Spring Boot microservice running in a Docker container. Below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
CMD chmod +R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to separate this configuration using a ConfigMap. For that I simply created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name:         console-configmap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
my deployment yml is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice
replicas: 1 # tells deployment to run 3 pods matching the template
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice
spec:
containers:
- name: consoleservice
image: ms-console
ports:
- containerPort: 8384
imagePullPolicy: Always
envFrom:
- configMapRef:
name: console-configmap
imagePullSecrets:
- name: regcresd
My doubt is: I commented out the config folder COPY in the Dockerfile, so while running the pods it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? What I tried is already shared above, but I am getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume your yml file contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents from the config file inside the container. If that is the case, you have to create a volume out of the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available in the path /app/config/console-server.yml. You have to modify it as per your needs.
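If Spring Boot does not pick up the mounted file on its own, one option (an assumption about your setup, not something stated in the question) is to point it at the mounted path via an environment variable, which relaxed binding maps to spring.config.additional-location on Spring Boot 2.x:
        env:
        - name: SPRING_CONFIG_ADDITIONAL_LOCATION
          value: /app/config/console-server.yml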
Do you need to load key:value pairs from the config file as environment variables? Then the below spec would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume. The following link would be helpful:
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/

K8S deployments with shared environment variables

We have a set of deployments (sets of pods) that all use the same Docker image. Examples:
web api
web admin
web tasks worker nodes
data tasks worker nodes
...
They all require a set of environment variables that are common, for example the location of the database host, secret keys to external services, etc. They also have a set of environment variables that are not common.
Is there any way one could either:
Reuse a template where environment variables are defined
Load environment variables from file and set them on the pods
The optimal solution would be one that is namespace aware, as we separate the test, stage and prod environment using kubernetes namespaces.
Something similar to Docker's env_file would be nice, but I cannot find any examples or references related to this. The only thing I can find is setting env via Secrets, but that is not clean and way too verbose, as I still need to write all the environment variables for each deployment.
You can create a ConfigMap with all the common key:value pairs of env variables.
Then you can reuse the ConfigMap to declare all of its values as environment variables in each Deployment.
Here is an example taken from the Kubernetes official docs.
Create a ConfigMap containing multiple key-value pairs:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Use envFrom to define all of the ConfigMap’s data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config # All the key-value pairs will be taken as environment key-value pairs
    env:
    - name: uncommon
      value: "uncommon value"
  restartPolicy: Never
You can specify the uncommon env variables in the env field.
Now, to verify that the environment variables are actually available, see the logs:
$ kubectl logs -f test-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
SPECIAL_LEVEL=very
uncommon=uncommon value
SPECIAL_TYPE=charm
...
Here, it is visible that all the provided environment variables are available.
You can also create a Secret first, then use the newly created Secret in your countless deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: lord/auth
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
The value can then be read in the Node.js application as process.env.JWT_KEY.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
      - name: tickets
        image: lord/tickets
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
Again, in the second service the value is read as process.env.JWT_KEY.
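To confirm that both deployments see the same value (assuming the images contain env and grep):
kubectl exec deploy/auth-depl -- env | grep JWT_KEY
kubectl exec deploy/tickets-depl -- env | grep JWT_KEY
# JWT_KEY=my_awesome_jwt_secret_code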