Assigning configmap entries to container's env variables - kubernetes

I have the ConfigMap below, which pulls secrets from GSM (Google Secret Manager).
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  labels:
    app: poc
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    set -euo pipefail
    echo $(gcloud secrets versions access --project=<project> --secret=<secret-name>) >> /var/config/dburl.env
---
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: init
    command: ["/tmp/entrypoint.sh"]
    volumeMounts:
    - mountPath: /tmp
      name: entrypoint
    - mountPath: /var/config
      name: secrets
  volumes:
  # volumes mounting
  ...
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: my-container
    volumeMounts:
    - mountPath: /var/config
      name: secrets
    env:
    - name: HOST
      ?? # Assign value fetched in configmap
How do I assign values from the files created by the ConfigMap to a container's environment variables? Or is there another approach to achieve this?
I need to send a couple of environment variables to the Spring Cloud Config service. It's hard to find any guide/documentation for this. Any help is appreciated!

One of the best ways to access secrets stored in Google Secret Manager from GKE is the External Secrets Operator, which you can install easily using Helm.
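For reference, a minimal Helm install might look like this (the chart and repo names are the External Secrets project's published ones; the namespace is just an example):
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace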
Once it is installed, create a service account with the role roles/secretmanager.secretAccessor, then download its credentials (key file) and save them in a Kubernetes Secret:
kubectl create secret generic gcpsm-secret --from-file=service-account-credentials=key.json
Then you can define your secret store (it is not exclusive to GCP; it also works with other secret managers, such as AWS Secrets Manager):
apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
  name: gcp-secret-manager
spec:
  provider:
    gcpsm:
      auth:
        secretRef:
          secretAccessKeySecretRef:
            name: gcpsm-secret # the secret you created in the first step
            key: service-account-credentials
      projectID: <your project id>
Now you can create an ExternalSecret, and the operator will read the secret from Secret Manager and create a Kubernetes Secret for you:
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  secretStoreRef:
    kind: SecretStore
    name: gcp-secret-manager
  target:
    name: k8s-secret # the k8s secret name
  data:
  - secretKey: host # the key name in the secret
    remoteRef:
      key: <secret-name in gsm>
Finally, in your Pod, you can access the secret like this:
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: init
    command: ["/tmp/entrypoint.sh"]
    volumeMounts:
    - mountPath: /tmp
      name: entrypoint
    - mountPath: /var/config
      name: secrets
  volumes:
  # volumes mounting
  ...
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: my-container
    volumeMounts:
    - mountPath: /var/config
      name: secrets
    env:
    - name: HOST
      valueFrom:
        secretKeyRef:
          name: k8s-secret
          key: host
If you update the secret value in Secret Manager, you should recreate the ExternalSecret to refresh the Kubernetes Secret value.
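Alternatively, the operator can re-sync on a schedule: the ExternalSecret spec accepts a refreshInterval field. A sketch (verify the field against the CRD version you have installed):
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  refreshInterval: 1h # re-read the value from Secret Manager every hour
  secretStoreRef:
    kind: SecretStore
    name: gcp-secret-manager
  target:
    name: k8s-secret
  data:
  - secretKey: host
    remoteRef:
      key: <secret-name in gsm>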

Related

How to pass environment variables to a script present in ConfigMap while accessing it as a volume in Kubernetes

I have the following ConfigMap, which has a variable called VAR. This variable should get its value from the workflow while the ConfigMap is accessed as a volume:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pod-cfg
data:
  test-pod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: test
        image: ubuntu
        command: ["/busybox/sh", "-c", "echo $VAR"]
Here is the Argo workflow, which fetches the script test-pod.yaml from the ConfigMap and adds it as a volume to the container. How can I pass the environment variable VAR so that it is replaced dynamically in the ConfigMap?
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: test-wf-
spec:
  entrypoint: main
  templates:
  - name: main
    container:
      image: "ubuntu"
      command: ["/bin/sh", "-c", "cat /mnt/vc/test"]
      volumeMounts:
      - name: vc
        mountPath: "/mnt/vc"
  volumes:
  - name: vc
    configMap:
      name: test-pod-cfg
      items:
      - key: test-pod.yaml
        path: test
To mount the ConfigMap as a volume and make the environment variable VAR available to the container, you will need to add a volume to the pod's spec and set the environment variable in the container's spec.
In the volume spec, you will need to add the ConfigMap as a volume source and set the path to the file containing the environment variable. For example:
spec:
  entrypoint: test-pod
  templates:
  - name: test-pod
    container:
      image: ubuntu
      command: ["/busybox/sh", "-c", "echo $VAR"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
      env:
      - name: VAR
        valueFrom:
          configMapKeyRef:
            name: test-pod-cfg
            key: test-pod.yaml
  volumes:
  - name: config
    configMap:
      name: test-pod-cfg
The environment variable VAR will then be available in the container with the value specified in the ConfigMap.
For more information, see the official Kubernetes documentation on configuring Pods to use ConfigMaps.

Best Practice for Operators for how to get Deployment's configuration

I am working with operator-sdk. In the controller we often need to create a Deployment object, and a Deployment resource has a lot of configuration items, such as environment variables or port definitions, as in the following. I am wondering what the best way is to get these values; I don't want to hard-code them, for example variable_a or variable_b.
You could put them in the CRD as spec fields and pass them to the operator's controller; or put them in a ConfigMap, pass the ConfigMap name to the controller, and let the controller read them from it; or put them in a template file that the controller then has to read.
What is the best way or best practice to deal with this situation? Thanks for sharing your ideas.
deployment := &appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{
        Name:      m.Name,
        Namespace: m.Namespace,
        Labels:    ls,
    },
    Spec: appsv1.DeploymentSpec{
        Replicas: &replicas,
        Selector: &metav1.LabelSelector{
            MatchLabels: ls,
        },
        Template: corev1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{
                Labels: ls,
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Image: "....",
                    Name:  m.Name,
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: port_a,
                        Name:          "tcpport",
                    }},
                    Env: []corev1.EnvVar{
                        {
                            Name:  "aaaa",
                            Value: variable_a,
                        },
                        {
                            Name:  "bbbb",
                            Value: variable_b,
                        },
                    },
                }},
            },
        },
    },
}
Using environment variables
It can be convenient for your app to receive its data as environment variables.
Environment variables from ConfigMap
For non-sensitive data, you can store your variables in a ConfigMap and then define container environment variables using the ConfigMap data.
Example from Kubernetes docs:
Create the ConfigMap first. File configmaps.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
Create the ConfigMap:
kubectl create -f ./configmaps.yaml
Then define the environment variables in the Pod specification, pod-multiple-configmap-env-variable.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: env-config
          key: log_level
  restartPolicy: Never
Create the Pod:
kubectl create -f ./pod-multiple-configmap-env-variable.yaml
Now in your controller you can read these environment variables: SPECIAL_LEVEL_KEY (which will give you the special.how value from the special-config ConfigMap) and LOG_LEVEL (which will give you the log_level value from the env-config ConfigMap).
For example:
specialLevelKey := os.Getenv("SPECIAL_LEVEL_KEY")
logLevel := os.Getenv("LOG_LEVEL")
fmt.Println("SPECIAL_LEVEL_KEY:", specialLevelKey)
fmt.Println("LOG_LEVEL:", logLevel)
Environment variables from Secret
If your data is sensitive, you can store it in a Secret and then use the Secret as environment variables.
To create a Secret manually:
You'll first need to encode your strings using base64.
# encode username
$ echo -n 'admin' | base64
YWRtaW4=
# encode password
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
Then create a Secret with the above data:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
Create a Secret with kubectl apply:
$ kubectl apply -f ./secret.yaml
Note that there are other ways to create a Secret; pick the one that works best for you (a string-literal example follows this list):
Creating a Secret using kubectl
Creating a Secret from a generator
Creating a Secret from files
Creating a Secret from string literals
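For instance, the string-literal form builds the same mysecret Secret without manual base64 encoding (kubectl encodes the values for you):
$ kubectl create secret generic mysecret \
    --from-literal=username=admin \
    --from-literal=password=1f2d1e2e67df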
Now you can use this created Secret for environment variables.
To use a secret in an environment variable in a Pod:
Create a secret or use an existing one. Multiple Pods can reference the same secret.
Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in env[].valueFrom.secretKeyRef.
Modify your image and/or command line so that the program looks for values in the specified environment variables.
Here is a Pod example from Kubernetes docs that shows how to use a Secret for environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
Finally, as stated in the docs:
Inside a container that consumes a secret via environment variables, the secret keys appear as normal environment variables containing the base64-decoded values of the secret data.
Now in your controller you can read these environment variables: SECRET_USERNAME (which will give you the username value from the mysecret Secret) and SECRET_PASSWORD (which will give you the password value from the mysecret Secret).
For example:
username := os.Getenv("SECRET_USERNAME")
password := os.Getenv("SECRET_PASSWORD")
Using volumes
You can also mount both ConfigMaps and Secrets as volumes in your Pods.
Populate a Volume with data stored in a ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
Using Secrets as files from a Pod:
To consume a Secret in a volume in a Pod:
Create a secret or use an existing one. Multiple Pods can reference the same secret.
Modify your Pod definition to add a volume under .spec.volumes[]. Name the volume anything, and have a .spec.volumes[].secret.secretName field equal to the name of the Secret object.
Add a .spec.containers[].volumeMounts[] to each container that needs the secret. Specify .spec.containers[].volumeMounts[].readOnly = true and .spec.containers[].volumeMounts[].mountPath to an unused directory name where you would like the secrets to appear.
Modify your image or command line so that the program looks for files in that directory. Each key in the secret data map becomes the filename under mountPath.
An example of a Pod that mounts a Secret in a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
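Inside the container, each key appears as a file whose content is the base64-decoded value; with the mount above you would see, for example:
$ ls /etc/foo
password
username
$ cat /etc/foo/username
admin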

Pod does not see secrets

A Pod created in the same (default) namespace as its Secret does not see values from it.
The Secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I launch a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: backend
    name: backend
    ports:
    - containerPort: 8000
  imagePullSecrets:
  - name: dockerhub-credentials
  volumes:
  - name: secret
    secret:
      secretName: backend-secret
But the pod crashes when it tries to read this environment variable via Python's os.environ['DEBUG'].
How do I make it work?
If you mount a secret as a volume, it is mounted in the directory you define, where each key name becomes a file name.
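For example, adding a volumeMount for the secret volume defined in the question would surface the keys as files (a sketch; the mountPath is arbitrary):
containers:
- image: backend
  name: backend
  volumeMounts:
  - name: secret # the volume defined in the question's spec
    mountPath: /etc/secrets
    readOnly: true
# inside the container the keys appear as files:
#   /etc/secrets/DEBUG
#   /etc/secrets/SECRET_KEY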
If instead you want to access the secret through your pod's environment, then you need to consume it as environment variables, like the following:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: backend
    name: backend
    ports:
    - containerPort: 8000
    env:
    - name: DEBUG
      valueFrom:
        secretKeyRef:
          name: backend-secret
          key: DEBUG
    - name: SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: backend-secret
          key: SECRET_KEY
  imagePullSecrets:
  - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I've used these lines at Deployment.spec.template.spec.containers:
containers:
- name: backend
  image: zuber93/wts_backend
  imagePullPolicy: Always
  envFrom:
  - secretRef:
      name: backend-secret
  ports:
  - containerPort: 8000

How to mount the properties file in Kubernetes configmap using manifest yaml

I use minikube on Windows 10 and am trying to test Kubernetes ConfigMaps with both literal values and an external file. First I wrote the manifest YAML below to create a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-config
data:
  mysql_root_password: password
  mysql_password: password
  mysql_database: test
---
apiVersion: v1
kind: Pod
metadata:
  name: blog-db
  labels:
    app: blog-mysql
spec:
  containers:
  - name: blog-mysql
    image: mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_root_password
    - name: MYSQL_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_password
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_database
    ports:
    - containerPort: 3306
The ConfigMap manifest above throws no errors; it works successfully. Next, I try to test a Kubernetes ConfigMap backed by a file.
== configmap.properties
mysql_root_password=password
mysql_password=password
mysql_database=test
But I am stuck at this part. Most ConfigMap examples use the kubectl command with the --from-file option, like below:
kubectl create configmap simple-config --from-file=configmap.properties
But I have no idea how to express the properties file in manifest YAML syntax. Any advice?
You cannot directly mount a properties file in a pod without first creating a ConfigMap from it. You can create a ConfigMap from an env file as below:
kubectl create configmap simple-config \
--from-env-file=configmap.properties
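If you would rather stay fully declarative, the ConfigMap that this command generates is equivalent to writing the key/value pairs directly into a manifest, which you can apply with kubectl apply -f like any other resource (a sketch of what --from-env-file produces):
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-config
data:
  mysql_root_password: password
  mysql_password: password
  mysql_database: test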

Setting postgres environmental variables running image

As the documentation shows, you should be setting the env vars when doing a docker run like the following:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres
This sets the superuser and password to access the database instead of the defaults of POSTGRES_PASSWORD='' and POSTGRES_USER='postgres'.
However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?
@P Ekambaram is correct, but I would like to go further into this topic and explain the whys and hows.
When passing passwords on Kubernetes, it's highly recommended to use encryption, and you can do this by using Secrets.
Creating your own Secrets (Doc)
To be able to use the secrets as described by @P Ekambaram, you need to have a Secret in your Kubernetes cluster.
To easily create a Secret, you can also create it from generators and then apply it to create the object on the API server. The generators should be specified in a kustomization.yaml inside a directory.
For example, to generate a Secret from literals username=admin and password=secret, you can specify the secret generator in kustomization.yaml as
# Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
EOF
Apply the kustomization directory to create the Secret object.
$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
Using Secrets as Environment Variables (Doc)
This is an example of a pod that uses secrets from environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
Source: the Kubernetes documentation on Secrets.
Use the YAML below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.2
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: "sampledb"
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "secret"
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    name: postgres
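To tie the two answers together, you could replace the plain-text POSTGRES_PASSWORD in the StatefulSet with a reference to the generated Secret. A sketch; note that kustomize appends a content hash to the generated name (e.g. db-user-pass-dddghtt9b5 from the earlier output), so reference the name it actually creates:
env:
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-user-pass-dddghtt9b5 # the kustomize-generated Secret name
      key: password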