How to mount the properties file in Kubernetes configmap using manifest yaml

I use minikube on Windows 10 and am trying to test Kubernetes ConfigMaps with both the literal type and the external-file type. First I wrote the manifest yaml below to create a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-config
data:
  mysql_root_password: password
  mysql_password: password
  mysql_database: test
---
apiVersion: v1
kind: Pod
metadata:
  name: blog-db
  labels:
    app: blog-mysql
spec:
  containers:
  - name: blog-mysql
    image: mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_root_password
    - name: MYSQL_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_password
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_database
    ports:
    - containerPort: 3306
The above ConfigMap yaml throws no errors; it works successfully. Next I try to test a Kubernetes ConfigMap backed by a file.
== configmap.properties
mysql_root_password=password
mysql_password=password
mysql_database=test
But I am stuck at this part. Most ConfigMap examples use the kubectl command with the --from-file option, like below:
kubectl create configmap simple-config --from-file=configmap.properties
But I have no idea how to mount the properties file using manifest yaml grammar alone. Any advice?

You cannot directly mount a properties file in a Pod without first creating a ConfigMap from it. You can create a ConfigMap from an env file as below:
kubectl create configmap simple-config \
--from-env-file=configmap.properties
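If you want to stay purely in manifest yaml, the equivalent is to embed the file's contents under a single key and mount the ConfigMap as a volume. A minimal sketch (the key name configmap.properties, the Pod name, and the mount path /etc/config are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-config-file
data:
  configmap.properties: |
    mysql_root_password=password
    mysql_password=password
    mysql_database=test
---
apiVersion: v1
kind: Pod
metadata:
  name: blog-db-file
spec:
  containers:
  - name: blog-mysql
    image: mysql:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config # the file appears as /etc/config/configmap.properties
  volumes:
  - name: config-volume
    configMap:
      name: simple-config-file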

Related

Assigning configmap entries to container's env variables

I have the below ConfigMap, which pulls secrets from Google Secret Manager (GSM).
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  labels:
    app: poc
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    set -euo pipefail
    echo $(gcloud secrets versions access --project=<project> --secret=<secret-name>) >> /var/config/dburl.env
---
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: init
    command: ["/tmp/entrypoint.sh"]
    volumeMounts:
    - mountPath: /tmp
      name: entrypoint
    - mountPath: /var/config
      name: secrets
  volumes:
  # volumes mounting
  ...
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: my-container
    volumeMounts:
    - mountPath: /var/config
      name: secrets
    env:
    - name: HOST
      ?? # Assign value fetched in configmap
How do I assign values from the files created by the ConfigMap to the container's env variables? Or is there another approach available to achieve this?
I need to send a couple of env variables to a Spring Cloud Config service. It's hard to find any guide/documentation for this. Any help is appreciated!
One of the best ways to access secrets in Google Secret Manager from GKE is the External Secrets Operator; you can install it easily using Helm.
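A typical install, assuming the chart location given in the External Secrets documentation:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets --create-namespace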
Once it is installed, you create a service account with the role roles/secretmanager.secretAccessor, then you download the creds (key file) and save them in a k8s secret:
kubectl create secret generic gcpsm-secret --from-file=service-account-credentials=key.json
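For completeness, the preceding service-account steps might look like this in gcloud (the account name eso-sa is hypothetical):
gcloud iam service-accounts create eso-sa
gcloud projects add-iam-policy-binding <your-project-id> \
  --member="serviceAccount:eso-sa@<your-project-id>.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
gcloud iam service-accounts keys create key.json \
  --iam-account=eso-sa@<your-project-id>.iam.gserviceaccount.com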
Then you can define your secret store (it is not exclusive to GCP; it works with other secret managers such as AWS Secrets Manager):
apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
  name: gcp-secret-manager
spec:
  provider:
    gcpsm:
      auth:
        secretRef:
          secretAccessKeySecretRef:
            name: gcpsm-secret # the secret you created in the first step
            key: service-account-credentials
      projectID: <your project id>
Now you can create an ExternalSecret, and the operator will read the secret from Secret Manager and create a k8s secret for you:
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  secretStoreRef:
    kind: SecretStore
    name: gcp-secret-manager
  target:
    name: k8s-secret # the k8s secret name
  data:
  - secretKey: host # the key name in the secret
    remoteRef:
      key: <secret-name in gsm>
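You can check that the operator has synced before wiring the secret into a Pod (plain kubectl against the CRD and the generated secret):
kubectl get externalsecret gcp-external-secret
kubectl get secret k8s-secret -o yaml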
Finally in your pod, you can access the secret by:
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: init
    command: ["/tmp/entrypoint.sh"]
    volumeMounts:
    - mountPath: /tmp
      name: entrypoint
    - mountPath: /var/config
      name: secrets
  volumes:
  # volumes mounting
  ...
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    name: my-container
    volumeMounts:
    - mountPath: /var/config
      name: secrets
    env:
    - name: HOST
      valueFrom:
        secretKeyRef:
          name: k8s-secret
          key: host
If you update the secret value in Secret Manager, you should recreate the ExternalSecret to update the k8s secret value.
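Newer operator versions can also re-sync on a schedule via spec.refreshInterval on the ExternalSecret; check whether your installed version supports it before relying on it:
spec:
  refreshInterval: 1h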

How to supply the value of a server in an NFS mount in a k8s Deployment via a ConfigMap

I'm writing a Helm chart where I need to supply the nfs.server value for the volume mount from a ConfigMap (efs-url in the example below).
There are examples in the docs on how to pass ConfigMap values into env variables, or even how to mount ConfigMaps. I understand how I can pass this value from values.yaml, but I just can't find an example of how it can be done using a ConfigMap.
I have control over this ConfigMap so I can reformat it as needed.
Am I missing something very obvious?
Is it even possible to do?
If not, what are the possible workarounds?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-url
data:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
      - name: efs-provisioner
        image: quay.io/external_storage/efs-provisioner:latest
        env:
        - name: FILE_SYSTEM_ID
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: file.system.id
        - name: AWS_REGION
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: aws.region
        - name: PROVISIONER_NAME
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: provisioner.name
        volumeMounts:
        - name: pv-volume
          mountPath: /persistentvolumes
      volumes:
      - name: pv-volume
        nfs:
          server: <<< VALUE SHOULD COME FROM THE CONFIG MAP >>>
          path: /
Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, because a ConfigMap
is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
Volume definitions like nfs.server are not on that list, so there is no way to substitute a ConfigMap value there. To read more about ConfigMaps and how they can be utilized, see the "ConfigMaps" section and the "Configure a Pod to Use a ConfigMap" section of the Kubernetes documentation.
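Since the chart is rendered by Helm anyway, the usual workaround is to template the value from values.yaml instead of a ConfigMap. A minimal sketch (key names are illustrative):
# values.yaml
efs:
  server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com

# templates/deployment.yaml (volume section only)
      volumes:
      - name: pv-volume
        nfs:
          server: {{ .Values.efs.server }}
          path: /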

ConfigMap value as input for another variable inside container

How can I use the ConfigMap-backed $LOCAL_IP_DB variable declared in the section below as input for another variable? $LOCAL_IP_DB is a generic key defined inside the db-secret ConfigMap, but another environment variable needs its value. How do I make this work?
spec:
  containers:
  - env:
    - name: LOCAL_IP_DB
      valueFrom:
        configMapKeyRef:
          name: db-secret
          key: LOCAL_IP_DB
    - name: LOG_Files
      value: \\${LOCAL_IP_DB}\redis\files\
The key is to use $() instead of ${}.
example-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: bash
    args: [printenv]
    env:
    - name: LOCAL_IP_DB
      valueFrom:
        configMapKeyRef:
          name: db-secret
          key: LOCAL_IP_DB
    - name: LOG_FILES
      value: \$(LOCAL_IP_DB)\redis\files\
example-configmap.yaml:
apiVersion: v1
data:
  LOCAL_IP_DB: 192.168.0.1
kind: ConfigMap
metadata:
  name: db-secret
test:
controlplane $ kubectl apply -f example-pod.yaml -f example-configmap.yaml
controlplane $ kubectl logs example | grep 192
LOCAL_IP_DB=192.168.0.1
LOG_FILES=\192.168.0.1\redis\files\
You can find more information about this substitution syntax in the Kubernetes documentation on dependent environment variables.
Note: if you want to manage secrets, a Secret is the recommended way to do that.

Providing a .env file in Kubernetes

How do I provide a .env file in Kubernetes? I am using a Node.js package that populates my process.env via my .env file.
You can do it in two ways: by setting env variables directly on the container, or via a ConfigMap.
Providing env variables for the container:
When you create a Pod, you can set environment variables for the containers that run in that Pod. To set environment variables, include the env field in the configuration file. Example:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Using ConfigMaps:
First you need to create a ConfigMap; an example is below. Here the data field holds your values as key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
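Because you already have a .env file, you can also generate an equivalent ConfigMap straight from it, assuming the file contains plain KEY=value lines (no export statements or shell quoting):
kubectl create configmap special-config --from-env-file=.env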
Now use envFrom to define all of the ConfigMap's data as container environment variables. Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
You can even specify individual fields with env, like below:
env:
- name: SPECIAL_LEVEL_KEY
  valueFrom:
    configMapKeyRef:
      name: special-config
      key: SPECIAL_LEVEL
- name: SPECIAL_TYPE_KEY
  valueFrom:
    configMapKeyRef:
      name: special-config
      key: SPECIAL_TYPE
Ref: configmap and env set

Error communicating with the configmap

Hi, I am using Google Kubernetes Engine to deploy my application. I tried to add a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: default
data:
  database_user: root
  database_password: root
  database_db: db
  database_port: 5432
  database_host: mypostgres
And then in my application deployment file I mapped my environment variables like the following:
spec:
  containers:
  - env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_host
    - name: DATABASE_NAME
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_db
    - name: DATABASE_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_password
    - name: DATABASE_USER
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_user
    - name: DATABASE_PORT
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_port
My service is not running and I get CreateContainerConfigError when I check the pod's status. When I describe my pod I get:
Error: Couldn't find key database_host
My question is: why is my deployment file not communicating with the ConfigMap I defined?
I created the ConfigMap via this command:
kubectl create configmap configmap --from-file=configmap.yaml
With --from-file, the entire file becomes the value of a single key named configmap.yaml, which is why the key database_host cannot be found. As mentioned in "kubectl create configmap --help":
--from-env-file='': Specify the path to a file to read lines of key=val pairs to create a configmap (i.e. a Docker .env file).
So you just need to make a file named conf with values like (no spaces around =, since env-file values are taken literally):
database_user=root
database_password=root
database_db=db
database_port=5432
database_host=mypostgres
and run: "kubectl create configmap coco-config --from-env-file=conf"
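If it worked, the resulting object should carry each pair as its own key; an abridged sketch of what kubectl get configmap coco-config -o yaml should return (note every value comes back as a string):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coco-config
data:
  database_db: db
  database_host: mypostgres
  database_password: root
  database_port: "5432"
  database_user: root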
UPDATE:
If you put your data values in quotes, the problem will be fixed. ConfigMap data values must be strings, and the unquoted 5432 is parsed as a YAML integer, which makes the manifest invalid:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: default
data:
  database_user: "root"
  database_password: "root"
  database_db: "db"
  database_port: "5432"
  database_host: "mypostgres"
Try --from-env-file instead; do not use the --from-file option for this.
Or simply apply your manifest directly: kubectl apply -f configmap.yaml
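Whichever route you take, you can confirm which keys the ConfigMap actually ended up with; with --from-file the only key would be the file name configmap.yaml, which explains the "Couldn't find key" error:
kubectl describe configmap configmap
kubectl get configmap configmap -o yaml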