k8s: configMap does not work in deployment - kubernetes

We recently ran into an issue with using environment variables inside a container.
OS: windows 10 pro
k8s cluster: minikube
k8s version: 1.18.3
1. The way that doesn't work, though it's the preferred way for us
Here is the deployment.yaml using 'envFrom':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
      - name: db
        image: "postgres:9.4"
        ports:
        - name: http
          containerPort: 5432
          protocol: TCP
        envFrom:
        - configMapRef:
            name: db-configmap
Here is db.properties:
POSTGRES_HOST_AUTH_METHOD=trust
step 1:
kubectl create configmap db-configmap --from-file=./db.properties
step 2:
kubectl apply -f ./deployment.yaml
step 3:
kubectl get pod
Running the above command gives the following result:
db-8d7f7bcb9-7l788 0/1 CrashLoopBackOff 1 9s
That indicates the environment variable POSTGRES_HOST_AUTH_METHOD was not injected.
2. The way that works (though we can't use this approach)
Here is the deployment.yaml using 'env':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
      - name: db
        image: "postgres:9.4"
        ports:
        - name: http
          containerPort: 5432
          protocol: TCP
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
step 1:
kubectl apply -f ./deployment.yaml
step 2:
kubectl get pod
Running the above command gives the following result:
db-fc58f998d-nxgnn 1/1 Running 0 32s
The above indicates the environment variable is injected, so the db starts.
What did I do wrong in the first case?
Thank you in advance for the help.
Update:
Here is the configmap:
kubectl describe configmap db-configmap
Name: db-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db.properties:
----
POSTGRES_HOST_AUTH_METHOD=trust

For creating the ConfigMap for use case 1, please use the command below:
kubectl create configmap db-configmap --from-env-file db.properties
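With --from-env-file, each line of db.properties becomes its own key, which is what envFrom expects. A quick way to check (output sketched here, assuming the default namespace):
kubectl get configmap db-configmap -o yaml
# expected data section (sketch):
# data:
#   POSTGRES_HOST_AUTH_METHOD: trust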

Are you missing the key? (See "key:" below.) I also think you need to provide the name of the environment variable; people usually reuse the key name for it, but you don't have to. I've used the same value ("POSTGRES_HOST_AUTH_METHOD") below as both the environment variable name and the key name of the ConfigMap.
# start of env, where we add environment variables
env:
  # Define the environment variable
  - name: POSTGRES_HOST_AUTH_METHOD
    #value: "UseHardCodedValueToDebugSometimes"
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to the environment variable (above "name:")
        name: db-configmap
        # Specify the key associated with the value
        key: POSTGRES_HOST_AUTH_METHOD
My example (trying to use your values) comes from this generic example:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
pods/pod-single-configmap-env-variable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: special.how
  restartPolicy: Never
PS
You can use "describe" to take a looksie at your config-map, after you (think:) ) you have set it up correctly.
kubectl describe configmap db-configmap --namespace=IfNotDefaultNameSpaceHere

See what happens when you do it the way you described (exec into the pod and inspect the environment):
deployment# exb db-7785cdd5d8-6cstw
root@db-7785cdd5d8-6cstw:/# env | grep -i TRUST
db.properties=POSTGRES_HOST_AUTH_METHOD=trust
The environment variable that gets set is not POSTGRES_HOST_AUTH_METHOD; the file name ends up as the variable name instead.
Create the ConfigMap via
kubectl create cm db-configmap --from-env-file db.properties
and it will actually put the POSTGRES_HOST_AUTH_METHOD environment variable in the pod.
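For contrast, here is roughly how the two creation flags shape the ConfigMap data (sketched, not verbatim output):
# --from-file: the whole file becomes one key named after the file
kubectl create configmap db-configmap --from-file=./db.properties
# data:
#   db.properties: |
#     POSTGRES_HOST_AUTH_METHOD=trust

# --from-env-file: each KEY=VALUE line becomes its own key
kubectl create configmap db-configmap --from-env-file=db.properties
# data:
#   POSTGRES_HOST_AUTH_METHOD: trust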


I have a deployment.yaml file; when I try to deploy it in Kubernetes with the command kubectl apply -f, it throws a resource-not-found error

I am unable to deploy this file using the kubectl apply -f command.
[Screenshot of the deployment YAML]
I have provided the YAML file required for your deployment below. It is important that all lines are indented correctly. Hyphens (-) indicate a list item, so they are not required on every line.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-deployment
  namespace: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: abc-deployment
  template:
    metadata:
      labels:
        app: abc-deployment
    spec:
      containers:
      - name: abc-deployment
        image: anyimage
        ports:
        - containerPort: 80
        env:
        - name: APP_VERSION
          value: v1
        - name: ENVIRONMENT
          value: "123"
        - name: DATA
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: data
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
      imagePullSecrets:
      - name: abc-secret
As a side note, the way envFrom was used was incorrect: it belongs at the container level, as a sibling of env. Alternatively, individual keys can be referenced through env with configMapKeyRef, as shown for the DATA variable in the example above.
If you are using Visual Studio Code, there is an official Kubernetes extension from Microsoft that provides Intellisense (suggestions) and alerts you to errors.
Hope this helps.

Pass unique configuration to a distroless container

I'm running a StatefulSet where each replica requires its own unique configuration. To achieve that I'm currently using a configuration with two containers per Pod:
An initContainer prepares the configuration and stores it to a shared volume
A main container consumes the configuration by reading out the contents of the shared volume and passing them to the program as CLI flags.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  serviceName: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      initContainers:
      - name: generate-config
        image: myjqimage:latest
        command: [ "/bin/sh" ]
        args:
        - -c
        - |
          set -eu -o pipefail
          POD_INDEX="${HOSTNAME##*-}"
          # A configuration is stored as a JSON array in a Secret
          # E.g., [{"param1":"string1","param2":"string2"}]
          echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param1' > /config/param1
          echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param2' > /config/param2
        env:
        - name: MY_APP_CONFIG
          valueFrom:
            secretKeyRef:
              name: my-app
              key: config
        volumeMounts:
        - name: configs
          mountPath: "/config"
      containers:
      - name: my-app
        image: myapp:latest
        command:
        - /bin/sh
        args:
        - -c
        - |
          /myapp --param1 $(cat /config/param1) --param2 $(cat /config/param2)
        volumeMounts:
        - name: configs
          mountPath: "/config"
      volumes:
      - name: configs
        emptyDir:
          medium: "Memory"
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app
  namespace: default
  labels:
    app.kubernetes.io/name: my-app
type: Opaque
data:
  config: W3sicGFyYW0xIjoic3RyaW5nMSIsInBhcmFtMiI6InN0cmluZzIifV0=
Now I want to switch to distroless for my main container. Distroless images only contain the required dependencies to run the program (glibc in my case), and they don't include a shell. So where previously I could execute cat to output the contents of a file, now I'm a bit stuck.
Now, instead of reading the contents from a file, I should pass the CLI flags as environment variables. Something like this:
containers:
- name: my-app
  image: myapp:latest
  command: ["/myapp", "--param1", "$(PARAM1)", "--param2", "$(PARAM2)"]
  env:
  - name: PARAM1
    value: somevalue1
  - name: PARAM2
    value: somevalue2
Again, each Pod in a StatefulSet should have a unique configuration. I.e., PARAM1 and PARAM2 should be unique across the Pods in a StatefulSet. How do I achieve that?
Options I considered:
Using Debug Containers, a new feature of K8s: somehow use it to edit the configuration of a running container at runtime and inject the required variables. But the feature only became beta in 1.23, and I don't want to mutate my StatefulSet at runtime since I'm using a GitOps approach to store the configuration in Git; it would probably cause continuous configuration drift.
Using a Job to mutate the configuration at runtime. Again, this looks very ugly and violates GitOps principles.
Using shareProcessNamespace. I'm unsure if it can help, but maybe I can somehow inject the environment variables from within the initContainer.
Limitations:
The application only supports configuration provided through CLI flags. No environment variables, no loading the config from a file.

How to fetch configmap from kubernetes pod

I have a Spring Boot microservice running in a Docker container; below is the Dockerfile:
FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
RUN chmod -R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
Here my configuration is stored in an external folder, i.e. /config/console-server.yml, and when I start the application it loads the config internally (Spring Boot functionality).
Now I want to externalize this configuration using a ConfigMap, so I created one ConfigMap storing all the configuration details:
kubectl create configmap console-configmap --from-file=./config/console-server.yml
kubectl describe configmap console-configmap
Below are the description details:
Name: console-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
console-server.yml:
----
server:
  http:
    port: 8385
  compression:
    enabled: true
    mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
    min-response-size: 2048
---
spring:
  thymeleaf:
    prefix: classpath:/static
  application:
    name: console-service
  profiles:
    active: native
  servlet:
    multipart:
      max-file-size: 30MB
      max-request-size: 30MB
---
host:
  gateway: http://apigateway:4000
  webhook: http://localhost:9000
My deployment yml is:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: console-configmap
      imagePullSecrets:
      - name: regcresd
My doubt is: I commented out the config folder in the Dockerfile, so while running the pods it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? I have already shared what I tried, but I'm getting the same issues.
First of all, how are you consuming the .yml file in your application? If you consume the yml file contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents from the config file inside the container. If that is the case, you have to create a volume from the ConfigMap as follows:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: consoleservice1
spec:
  selector:
    matchLabels:
      app: consoleservice
  replicas: 1 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
      - name: consoleservice
        image: ms-console
        ports:
        - containerPort: 8384
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /app/config
          name: config
      volumes:
      - name: config
        configMap:
          name: console-configmap
      imagePullSecrets:
      - name: regcresd
The file will be available at the path /app/config/console-server.yml. Adjust the mount path to whatever your application expects.
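For Spring Boot to pick the file up from that mounted path, you typically have to point it there explicitly. A hedged sketch of how the container start command might look; the flag is standard Spring Boot, but the path and file name here are taken from this question, not verified:
# assumes Spring Boot 2.x; on 1.x, --spring.config.location can be used instead
java -jar /deploy/ms.console.jar console --spring.config.additional-location=/app/config/console-server.yml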
If you need to load key/value pairs from the config file as environment variables, then the below spec would work:
envFrom:
- configMapRef:
    name: console-configmap
If you need the config as a file inside the pod, then mount the ConfigMap as a volume; the following link would be helpful (see also the sketch after the link):
https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
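A minimal sketch of that volume approach, using the names from this question (same idea as the fuller deployment above):
# under the container:
volumeMounts:
- name: config
  mountPath: /app/config
# under the pod spec:
volumes:
- name: config
  configMap:
    name: console-configmap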

kubernetes timezone in POD with command and argument

I want to change the timezone with a command.
I know about applying hostPath.
Do you know how to apply a command?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
It works well within the container.
But I don't know how to apply it with command and arguments in the YAML file.
You can change the timezone of your pod by using a hostPath volume that points to a specific timezone file. Your YAML file will look something like:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Prague
      type: File
If you want it across all pods or deployments, you need to add the volume and volumeMounts to every deployment file and change the path value in the hostPath section to the timezone you want to set (a reusable fragment is sketched below).
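A minimal sketch of the fragment to repeat in each deployment (the timezone path is just an example):
# under each container:
volumeMounts:
- name: tz-config
  mountPath: /etc/localtime
# under each pod spec:
volumes:
- name: tz-config
  hostPath:
    path: /usr/share/zoneinfo/Europe/Prague
    type: File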
Setting the TZ environment variable as below works fine for me on GCP Kubernetes.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: demo
        image: gcr.io/project/image:master
        imagePullPolicy: Always
        env:
        - name: TZ
          value: Europe/Warsaw
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 0
In a Deployment, you can do it by creating a volumeMount on /etc/localtime and setting its value. Here is an example I have for MariaDB:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        ports:
        - containerPort: 3306
          name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
In order to add "hostPath" in the deployment config, as suggested in previous answers, you'll need to be a privileged user. Otherwise your deployment may fail on:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
1. Add allowedHostPaths: {} next to volumes.
2. Add the TZ environment variable, for example TZ=Asia/Jerusalem (see the snippet after this list).
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash".)
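A minimal sketch of option 2 in a container spec (the value is taken from the example above):
env:
- name: TZ
  value: Asia/Jerusalem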
For me, setting up volumes and volumeMounts didn't help; setting the TZ environment variable alone worked in my case.

Kubernetes deployment name from within a pod?

How can I find out the Kubernetes deployment/job name that spawned the current pod, from within the pod?
In many cases the hostname of the Pod equals the name of the Pod (you can access it via the HOSTNAME environment variable). However, that's not a reliable method of determining the Pod's identity.
You will want to use the Downward API, which allows you to expose metadata as environment variables and/or files on a volume.
The name and namespace of a Pod can be exposed as environment variables (fields: metadata.name and metadata.namespace) but the information about the creator of a Pod (which is the annotation kubernetes.io/created-by) can only be exposed as a file.
Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox
  labels: {app: busybox}
spec:
  selector: {matchLabels: {app: busybox}}
  template:
    metadata: {labels: {app: busybox}}
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - "sh"
        - "-c"
        - |
          echo "I am $MY_POD_NAME in the namespace $MY_POD_NAMESPACE"
          echo
          grep ".*" /etc/podinfo/*
          while :; do sleep 3600; done
        env:
        - name: MY_POD_NAME
          valueFrom: {fieldRef: {fieldPath: metadata.name}}
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo/
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: "labels"
            fieldRef: {fieldPath: metadata.labels}
          - path: "annotations"
            fieldRef: {fieldPath: metadata.annotations}
To see the output:
$ kubectl logs `kubectl get pod -l app=busybox -o name | cut -d / -f2`
Output:
I am busybox-1704453464-m1b9h in the namespace default
/etc/podinfo/annotations:kubernetes.io/config.seen="2017-02-16T16:46:57.831347234Z"
/etc/podinfo/annotations:kubernetes.io/config.source="api"
/etc/podinfo/annotations:kubernetes.io/created-by="{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"busybox-1704453464\",\"uid\":\"87b86370-f467-11e6-8d47-525400247352\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"191157\"}}\n"
/etc/podinfo/annotations:kubernetes.io/limit-ranger="LimitRanger plugin set: cpu request for container busybox"
/etc/podinfo/labels:app="busybox"
/etc/podinfo/labels:pod-template-hash="1704453464"
If you are using the Downward API to get the deployment name from inside the pod, and you want to avoid the volume mount approach, there is one opinionated way to get deployment info exposed to the pod as environment variables.
Template labels specified in a Deployment spec are added as pod labels to each pod of that deployment.
Example: the app label below will be added to all pods of this deployment.
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
...
It is a commonly followed convention (again, not necessarily true in your case) to keep the app label value the same as the deployment name, as shown in the above example. If your deployments follow this convention (mine did), you can expose this label's value (essentially, the deployment name) as an environment variable to the pod using the Downward API.
Continuing the above example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
Again, to clarify, this is not a guaranteed solution for your problem, as it still does not literally give the deployment name in env vars. It is just an opinionated approach that I found useful and thought would be good to share.
In my case, there were a lot of deployments (>20) and I didn't want to add the deployment name manually as an env variable for each deployment config. As my deployments already followed the above convention, I just copied the bit of YAML specifying the NAMESPACE and DEPLOYMENT_NAME variables into each deployment config.
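Once a pod from such a deployment is running, the values can be checked with something like this (the pod name is a placeholder):
kubectl exec <pod-name> -- env | grep -E 'NAMESPACE|DEPLOYMENT_NAME'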
References:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api