K8S deployments with shared environment variables

We have a set of deployments (sets of pods) that all use the same Docker image. Examples:
web api
web admin
web tasks worker nodes
data tasks worker nodes
...
They all require a set of common environment variables, for example the location of the database host, secret keys to external services, etc. They also have a set of environment variables that are not common.
Is there any way one could either:
Reuse a template where environment variables are defined
Load environment variables from a file and set them on the pods
The optimal solution would be one that is namespace aware, as we separate the test, stage and prod environments using Kubernetes namespaces.
Something similar to Docker's env_file would be nice, but I cannot find any examples or references related to this. The only thing I can find is setting env via secrets, but that is not clean and way too verbose, as I would still need to write out all the environment variables for each deployment.

You can create a ConfigMap with all the common key:value pairs of env variables.
Then you can reuse that ConfigMap to expose all of its values as environment variables in each Deployment.
Here is an example taken from the official Kubernetes docs.
Create a ConfigMap containing multiple key-value pairs:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Use envFrom to define all of the ConfigMap’s data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config # all key-value pairs become environment variables
      env:
        - name: uncommon
          value: "uncommon value"
  restartPolicy: Never
You can specify the uncommon env variables in the env field.
Now, to verify that the environment variables are actually available, check the logs:
$ kubectl logs -f test-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
SPECIAL_LEVEL=very
uncommon=uncommon value
SPECIAL_TYPE=charm
...
Here you can see that all the provided environment variables are available.
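Since the original question asks for something like Docker's env_file that is namespace aware, note that the shared ConfigMap itself can be built from such a file and created once per namespace. A minimal sketch, assuming a file named common.env and the test/stage/prod namespaces from the question:
# common.env holds the shared key=value pairs, e.g. DATABASE_HOST=db.internal
kubectl create configmap common-env --from-env-file=common.env -n test
kubectl create configmap common-env --from-env-file=common.env -n stage
kubectl create configmap common-env --from-env-file=common.env -n prod
Each Deployment can then pull in common-env via envFrom as above, and the values resolve per namespace.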

You can create a Secret first, then reference that newly created Secret in as many deployment files as you like, to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: lord/auth
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
Inside the container, the application (Node.js in this example) can then read the value via process.env.JWT_KEY.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
        - name: tickets
          image: lord/tickets
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
The tickets application reads the same shared value the same way, via process.env.JWT_KEY.
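Note that Secrets are namespaced, so for the test/stage/prod separation described in the question you would create the same Secret in each namespace, for example:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code -n stage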


Add ExternalSecret to Yaml file deploying to K8s

I'm trying to deploy a Kubernetes processor to a cluster on GCP GKE but the pod fails with the following error:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
This is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbt-core-processor
  namespace: prod
  labels:
    app: dbt-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbt-core
  template:
    metadata:
      labels:
        app: dbt-core
    spec:
      containers:
        - name: dbt-core-processor
          image: IMAGE
          resources:
            requests:
              cpu: 50m
              memory: 1Gi
            limits:
              cpu: 1
              memory: 2Gi
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              valueFrom:
                secretKeyRef:
                  name: service-account-credentials-dbt-test
                  key: service-account-credentials-dbt-test
---
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  backendType: gcpSecretsManager
  data:
    - key: service-account-credentials-dbt-test
      name: service-account-credentials-dbt-test
      version: latest
When I run kubectl apply -f deployment.yml I get the following error:
deployment.apps/dbt-core-processor created
error: unable to recognize "deployment.yml": no matches for kind "ExternalSecret" in version "kubernetes-client.io/v1"
This creates my processor but the pod fails to spin up the secrets:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
How do I add the secrets from my secrets manager in GCP to this deployment?
ExternalSecret is a custom resource definition (CRD), and it looks like it is not installed on your cluster.
Googling kubernetes-client.io/v1 suggests you may be following instructions from the old, archived project that first provided this CRD. Its GitHub repo points to a maintained project that has replaced it.
The good news is that the current project has what looks like comprehensive documentation, including a guide to installing the CRDs on your cluster and the proper configuration for the ExternalSecret resource.
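For reference, a rough sketch of the equivalent resource under the maintained External Secrets Operator; the apiVersion and the SecretStore named gcp-secret-store are assumptions that depend on the operator version and how you configure the store:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-secret-store        # assumed SecretStore configured for GCP Secret Manager
    kind: SecretStore
  target:
    name: service-account-credentials-dbt-test   # Kubernetes Secret the operator will create
  data:
    - secretKey: service-account-credentials-dbt-test   # key inside the created Secret
      remoteRef:
        key: service-account-credentials-dbt-test       # name of the secret in GCP Secret Manager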

Injecting environment variables to Postgres pod from Hashicorp Vault

I'm trying to set the POSTGRES_PASSWORD, POSTGRES_USER and POSTGRES_DB environment variables in a Kubernetes Pod, running the official postgres Docker image, with values injected from HashiCorp Vault.
The issue I'm experiencing is that the Postgres Pod will not start and provides no logs as to what might have caused it to stop.
I'm trying to source the injected secrets on startup using the args /bin/bash -c "source /vault/secrets/backend". Nothing seems to happen once this command is reached. If I add an echo statement in front of source, that does show up in the kubectl logs.
Steps taken so far include removing the args part of the configuration and setting the required POSTGRES_PASSWORD variable directly with a test value. When I do that, the pod starts, and I can exec into it and verify that the secrets are indeed injected and that I'm able to source them. Running cat on the injected file gives me the following output:
export POSTGRES_PASSWORD="jiasjdi9u2easjdu##djasj#!-d2KDKf"
export POSTGRES_USER="postgres"
export POSTGRES_DB="postgres"
To me this indicates that the Vault injection is working as expected and that this part is configured according to my needs.
*Edit: commands after the source are indeed run (tested with an echo command).
My configuration is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-db
  namespace: planet9-demo
  labels:
    app: postgres-db
    environment: development
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres-db
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-backend: secret/data/backend
        vault.hashicorp.com/agent-inject-template-backend: |
          {{ with secret "secret/backend/database" -}}
          export POSTGRES_PASSWORD="{{ .Data.data.adminpassword}}"
          export POSTGRES_USER="{{ .Data.data.postgresadminuser}}"
          export POSTGRES_DB="{{ .Data.data.postgresdatabase}}"
          {{- end }}
        vault.hashicorp.com/role: postgresDB
      labels:
        app: postgres-db
        tier: backend
    spec:
      containers:
        - args:
            - /bin/bash
            - -c
            - source /vault/secrets/backend
          name: postgres-db
          image: postgres:latest
          resources:
            requests:
              cpu: 300m
              memory: 1Gi
            limits:
              cpu: 400m
              memory: 2Gi
          volumeMounts:
            - name: postgres-pvc
              mountPath: /mnt/data
              subPath: postgres-data/planet9-demo
          env:
            - name: PGDATA
              value: /mnt/data
      restartPolicy: Always
      serviceAccount: sa-postgres-db
      serviceAccountName: sa-postgres-db
      volumes:
        - name: postgres-pvc
          persistentVolumeClaim:
            claimName: postgres-pvc
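A side note on the args block above: it replaces the image's default command, so the shell runs source and then exits immediately, which matches the silent crash described. A minimal sketch of the usual workaround, assuming the official postgres image's docker-entrypoint.sh, is to source the secrets and then exec the original entrypoint:
args:
  - /bin/bash
  - -c
  # source the Vault-rendered env file, then hand off to the image's normal startup
  - source /vault/secrets/backend && exec docker-entrypoint.sh postgres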

Get the Kubernetes uid of the Deployment that created the pod, from within the pod

I want to be able to know the Kubernetes uid of the Deployment that created the pod, from within the pod.
The reason for this is so that the Pod can spawn another Deployment and set the OwnerReference of that Deployment to the original Deployment (so it gets Garbage Collected when the original Deployment is deleted).
Taking inspiration from here, I've tried*:
Using field refs as env vars:
containers:
  - name: test-operator
    env:
      - name: DEPLOYMENT_UID
        valueFrom:
          fieldRef: {fieldPath: metadata.uid}
Using downwardAPI and exposing through files on a volume:
containers:
  - name: test-operator
    volumeMounts:
      - mountPath: /etc/deployment-info
        name: deployment-info
volumes:
  - name: deployment-info
    downwardAPI:
      items:
        - path: "uid"
          fieldRef: {fieldPath: metadata.uid}
*Both of these are under spec.template.spec of a resource of kind: Deployment.
However for both of these the uid is that of the Pod, not the Deployment. Is what I'm trying to do possible?
The behavior is correct: the Downward API exposes fields of the Pod, not of the Deployment/ReplicaSet that created it.
So the workaround is to set the name of the Deployment manually in spec.template.metadata.labels, then use the Downward API to inject those labels as env variables.
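A minimal sketch of that workaround, assuming a label named deployment-name that you set yourself to match the Deployment's name:
spec:
  template:
    metadata:
      labels:
        deployment-name: test-operator   # assumed label, kept in sync with the Deployment name manually
    spec:
      containers:
        - name: test-operator
          env:
            - name: DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['deployment-name']
This yields the Deployment's name rather than its UID; to get the UID itself the pod would have to ask the API server, e.g. kubectl get deployment test-operator -o jsonpath='{.metadata.uid}' with suitable RBAC.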
I think it's impossible to get the UID of the Deployment itself, but you can set any runAsUser range while creating the Deployment.
Try this command to get the runAsUser values of the existing pods:
kubectl get pod -o jsonpath='{range .items[*]}{.metadata.name}{" runAsUser: "}{.spec.containers[*].securityContext.runAsUser}{" fsGroup: "}{.spec.securityContext.fsGroup}{" seLinuxOptions: "}{.spec.securityContext.seLinuxOptions.level}{"\n"}{end}'
It's not exactly what you wanted, but it may serve as a hint.
To set the UID while creating the Deployment, see the example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toolbox2
  labels:
    app: toolbox2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: toolbox2
  template:
    metadata:
      labels:
        app: toolbox2
    spec:
      securityContext:
        supplementalGroups: [1000620001]
        seLinuxOptions:
          level: s0:c25,c10
      containers:
        - name: net-toolbox
          image: quay.io/wcaban/net-toolbox
          ports:
            - containerPort: 2000
          securityContext:
            runAsUser: 1000620001

How to access Kubernetes cluster environment variables within a container?

For example, I run a Pod in a public cloud cluster. The Pod has a main container running the app. The cluster has an environment variable named ABC. Within the main container, I wish to access the environment variable ABC. What is the best way to do so?
Very simple option
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Read more: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Option 1
If the variables are not particularly sensitive, you can store them in a ConfigMap; the ConfigMap gets injected into the Pod, and your app can access the variables from the OS environment.
Read more about ConfigMaps: https://kubernetes.io/docs/concepts/configuration/configmap/
Example ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Example Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
You can also inject files containing a list of variables, as this example shows.
Option 2
You can use a Secret if your variables are sensitive: https://kubernetes.io/docs/concepts/configuration/secret/
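A minimal sketch of exposing a Secret's keys as environment variables; the Secret name app-secrets and its key are made up for illustration:
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets   # hypothetical name
type: Opaque
stringData:           # plain-text input; the API server stores it base64-encoded
  DB_PASSWORD: s3cr3t
Then, in the Pod spec, every key becomes an environment variable:
envFrom:
  - secretRef:
      name: app-secrets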
Option 3
If you want the best-practice option, you can use Vault with Kubernetes to manage the variables of all your different microservices.
Vault: https://www.vaultproject.io/
Example: https://github.com/travelaudience/kubernetes-vault-example
It provides key-value pair management along with good security options.

kubernetes assign configmap as environment variables on deployment

I am trying to deploy my image to Azure Kubernetes Service. I use the command:
kubectl apply -f mydeployment.yml
And here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: mycr.azurecr.io/my-api
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: my-existing-config-map
I have the ConfigMap my-existing-config-map created with a bunch of values in it, but the deployment doesn't add these values as environment variables.
The ConfigMap was created from a ".env" file this way:
kubectl create configmap my-existing-config-map --from-file=.env
What am I missing here?
If your .env file is in this format:
a=b
c=d
you need to use --from-env-file=.env instead.
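That is, recreate the ConfigMap (keeping the name used in the question):
kubectl delete configmap my-existing-config-map
kubectl create configmap my-existing-config-map --from-env-file=.env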
To be more explanatory: using --from-file=aa.xx creates a ConfigMap that looks like this:
aa.xx: |
  file content here....
  ....
  ....
When that ConfigMap is used with envFrom.configMapRef, it just creates one env variable, "aa.xx", holding the whole file content. In the case of a filename that starts with '.', like .env, the env variable is not even created, because the name violates UNIX environment variable naming rules.
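You can check which shape your ConfigMap ended up with by inspecting it directly:
kubectl get configmap my-existing-config-map -o yaml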
As you are using a .env file, the format of the file is important.
Create a config.env file in the following format (it can include comments):
echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
Create the ConfigMap:
kubectl create cm config --from-env-file=config.env
Use the ConfigMap in your pod definition file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      resources: {}
      envFrom:
        - configMapRef:
            name: config
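To confirm the variables made it into the container (using the example names from config.env above):
kubectl exec nginx -- env | grep var1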