How to build a DSN env var from several ConfigMap resources? - kubernetes

For a service to work, it needs an environment variable called DSN that looks something like postgres://user:password@postgres.svc.cluster.local:5432/database. I built this value with a ConfigMap resource:
apiVersion: v1
kind: ConfigMap
metadata:
  name: idp-config
  namespace: diary
data:
  DSN: postgres://user:password@postgres.svc.cluster.local:5432/database
This ConfigMap is mounted as an environment variable in my service Pod. Since the user and password values differ per environment, and these PostgreSQL credentials live in other k8s resources (a Secret and a ConfigMap), how can I properly build this DSN environment variable in a k8s resource YAML so my service can connect to the database?

Digging into the Kubernetes docs, I was able to find an answer. According to Define Environment Variables for a Container:
Environment variables that you define in a Pod’s configuration can be used elsewhere in the configuration, for example in commands and arguments that you set for the Pod’s containers. In the example configuration below, the GREETING, HONORIFIC, and NAME environment variables are set to Warm greetings to, The Most Honorable, and Kubernetes, respectively. Those environment variables are then used in the CLI arguments passed to the env-print-demo container.
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: GREETING
      value: "Warm greetings to"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(GREETING) $(HONORIFIC) $(NAME)"]

Related

Kubernetes: Set environment variable in all pods

Is it possible to provide environment variables which will be set in all pods instead of configuring in each pods spec?
If not natively possible in Kubernetes, what would be an efficient method to accomplish it? We have Helm, but that still requires a lot of duplication.
This old answer suggested "PodPreset" which is no longer part of Kubernetes: Kubernetes - Shared environment variables for all Pods
You could do this using a mutating admission webhook to inject the environment variable into the pod manifest.
There are more details on implementing webhooks here.
I am not sure if you can do that for EVERY single pod in the cluster (if that is what you meant), but you CAN do it for every single pod within an application or service.
For example, via a Deployment, you can set a variable within the pod template, and all replicas will carry that value.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 5
  template:
    metadata:
      ...
    spec:
      containers:
      - image: nginx
        name: nginx
        ...
        env:
        - name: VAR_NAME     # <---
          value: "var_value" # <---
        ...
In this (edited) example, all 5 replicas of the nginx Deployment will have the environment variable VAR_NAME set to the value var_value.
You could also use a configmap (https://kubernetes.io/docs/concepts/configuration/configmap/) or secrets (https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables) to set environment variables from a shared location, depending on your requirements.
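For example (a sketch; the ConfigMap name shared-env and its keys are made up), the shared values could live in one ConfigMap and be imported wholesale with envFrom by every Deployment that needs them:

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-env        # assumed name for the shared ConfigMap
data:
  LOG_LEVEL: "info"
  REGION: "eu-west-1"

and in each Deployment's pod template:

      containers:
      - name: app
        image: nginx
        envFrom:
        - configMapRef:     # every key in shared-env becomes an env var
            name: shared-env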

Kubernetes exposes more environment variables than expected

I've faced a strange behaviour with K8s pods running in an AWS EKS cluster (version 1.14). The services are deployed via Helm 3 charts. The issue is that the pod receives more environment variables than expected.
The pod specification says that variables should be populated from a config map.
apiVersion: v1
kind: Pod
metadata:
  name: apigw-api-gateway-59cf5bfdc9-s6hrh
  namespace: development
spec:
  containers:
  - env:
    - name: JAVA_OPTS
      value: -server -XX:MaxRAMPercentage=75.0 -XX:+UseContainerSupport -XX:+HeapDumpOnOutOfMemoryError
    - name: GATEWAY__REDIS__HOST
      value: apigw-redis-master.development.svc.cluster.local
    envFrom:
    - configMapRef:
        name: apigw-api-gateway-env # <-- this is the map
    # the rest of spec is hidden
The config map apigw-api-gateway-env has this specification:
apiVersion: v1
data:
  GATEWAY__APP__ADMIN_LOPUSH: ""
  GATEWAY__APP__CUSTOMER_LOPUSH: ""
  GATEWAY__APP__DISABLE_RATE_LIMITS: "true"
  # here are other 'GATEWAY__' envs
  JMX_AUTH: "false"
  JMX_ENABLED: "true"
  # here are other 'JMX_' envs
kind: ConfigMap
metadata:
  name: apigw-api-gateway-env
  namespace: development
If I request a list of environment variables, I can find values from a different service. These values are not specified in the config map of the 'apigw' application; they are stored in a map for a 'lopush' application. Here is a sample.
/ # env | grep -i lopush | sort | head -n 4
GATEWAY__APP__ADMIN_LOPUSH=<hidden>
GATEWAY__APP__CUSTOMER_LOPUSH=<hidden>
LOPUSH_GAME_ADMIN_MOBILE_PORT=tcp://172.20.248.152:5050
LOPUSH_GAME_ADMIN_MOBILE_PORT_5050_TCP=tcp://172.20.248.152:5050
I've also noticed that this behaviour is somehow related to the order in which the services were launched. That could be just because some config maps didn't exist at that moment. For now, it seems like the pod receives variables from all config maps in the current namespace.
Has anyone faced this issue before? Is it possible that there are other criteria which force K8s to populate the environment from other maps?
If you mean the _PORT stuff, that's for compatibility with the old Docker Container Links system. All services in the namespace get automatically set up that way to make it easier to move things from older Docker-based systems.
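If those injected per-service variables are unwanted, they can be switched off per Pod with enableServiceLinks. A minimal sketch (the container name and image are assumptions; the variables for the default kubernetes API service may still be injected):

apiVersion: v1
kind: Pod
metadata:
  name: apigw-api-gateway
  namespace: development
spec:
  enableServiceLinks: false     # do not inject <SERVICE>_PORT / <SERVICE>_SERVICE_HOST vars
  containers:
  - name: api-gateway
    image: my-gateway-image     # assumed image name
    envFrom:
    - configMapRef:
        name: apigw-api-gateway-env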

Using sensitive environment variables in Kubernetes configMaps

I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod's spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var. The ConfigMap will be mounted as a volume from within the spec:
env:
- name: PASSWORD
  valueFrom:
    secretKeyRef:
      name: foo
      key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently, this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's a somewhat controversial topic because the discussion has continued for many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasoning and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend you this workaround:
Import Data to Config Map from Kubernetes Secret
It performs the substitution with a shell in the entrypoint of the container.
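The general idea looks roughly like the sketch below (not the linked article's exact script; the image name and java start command are assumptions, and the image is assumed to ship envsubst from gettext). Note that envsubst expects ${PASSWORD}-style placeholders in the ConfigMap value rather than $(PASSWORD):

spec:
  containers:
  - name: app
    image: my-app-image              # assumed image containing envsubst
    env:
    - name: PASSWORD                 # secret exposed only as an env var for the entrypoint
      valueFrom:
        secretKeyRef:
          name: foo
          key: pw
    command: ["sh", "-c"]
    args:
    # render the mounted template, then start the app (assumed to read /tmp/application.properties)
    - "envsubst < /config/application.properties > /tmp/application.properties && exec java -jar /app.jar"
    volumeMounts:
    - name: props
      mountPath: /config
  volumes:
  - name: props
    configMap:
      name: application.properties  # the ConfigMap above, using a ${PASSWORD} placeholder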

How do I load a configMap into an environment variable?

How do I load a configMap into an environment variable?
Things I've done
The Kubernetes documentation describes exactly this scenario, and I've been following it.
I've actually set up my configMap through Terraform with this:
resource "kubernetes_config_map" "production_database_host" {
metadata {
name = "production-database-host"
}
data {
connection_name = "${google_sql_database_instance.master.connection_name}"
}
}
But via Kubernetes, it would look like this:
apiVersion: v1
data:
  connection_name: this_string_is_redacted
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-12T05:49:49Z
  name: production-database-host
  namespace: default
  resourceVersion: "316273"
  selfLink: /api/v1/namespaces/default/configmaps/production-database-host
  uid: a1c06423-cde2-11e8-b615-42010a800235
(Fetched by running kubectl get configmap production-database-host -o yaml)
Now, I also have a working container, in a deployment, where I added an environment variable like so:
env:
- name: INSTANCE_CONNECTION_NAME
  valueFrom:
    configMapKeyRef:
      name: production-database-host
      key: connection_name
However, applying this config gives me:
$ kubectl apply -f .
error: error converting YAML to JSON: yaml: line 39: did not find expected key
What am I doing wrong here? Why won't this simply load this_string_is_redacted into the INSTANCE_CONNECTION_NAME environment variable?
Edit: All the source for my infrastructure is in this repo. The Terraform files are applied first; they create the Kubernetes cluster and add the configMap. Then I apply the Kubernetes config.
It was a formatting issue; unfortunately, the block:
env:
- name: INSTANCE_CONNECTION_NAME
  valueFrom:
    configMapKeyRef:
      name: production-database-host
      key: connection_name
was indented one space more than it should have been. Everything else works fine.
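For reference, in a Deployment the env block belongs inside the pod template's container entry, at the same indentation level as name and image. A minimal sketch (the Deployment name, labels, and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-image    # assumed image
        env:                   # env is a sibling of name/image inside the container entry
        - name: INSTANCE_CONNECTION_NAME
          valueFrom:
            configMapKeyRef:
              name: production-database-host
              key: connection_name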

How to access the service IP in Kubernetes by name?

Say I have a rabbitmq service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-rabbitmq
spec:
  ports:
  - port: 6379
  selector:
    app: my-rabbitmq
And I have another deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: A-worker
spec:
  replicas: 1
  containers:
  - name: a-worker
    image: worker-image
    ports:
    - containerPort: 80
    env:
    - name: rabbitmq_url
      value: XXXXXXXXXXXXX
Is there any way to set the service IP as an environment variable in my second deployment by some kind of selector? In other words, what should go in the value: XXXXXXXXXX field in the second deployment YAML? (Note: I know I can get the service IP with kubectl get services, but I'd like to know how to set this by the service name or label.) Any advice is welcome!
Kubernetes injects environment variables for a service's host, port, and protocol, among others, into pod containers (see this doc).
kubectl exec <pod> -- printenv is one way to check which env variables are set.
If the service is created after the pod, the env var may not be present, so killing (restarting) the pod is one way to make sure the new environment variables are populated.
The convention is uppercase <SERVICE_NAME>_SERVICE_HOST, with dashes in the service name converted to underscores.
You can set it explicitly in a pod spec using the following syntax.
- name: rabbitmq_url
  value: $(MY_RABBITMQ_SERVICE_HOST)
Bear in mind the variable is already injected by k8s and this is just aliasing it. You may want to update the reference in your application layer/script to use the k8s-injected environment variable for the service directly.
Reading between the lines (and I hope this helps):
K8s automatically creates service environment variables for you inside each pod. See https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables for details.
The other route is to enable kube-dns, in which case one can reach the service simply by using its name.
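With cluster DNS in place, the worker's env var can simply carry the Service's DNS name rather than an IP. A sketch, assuming the Service lives in the default namespace and using the port declared above:

env:
- name: rabbitmq_url
  # resolves via cluster DNS; the FQDN form is my-rabbitmq.default.svc.cluster.local
  value: my-rabbitmq:6379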