Hi, I am using Google Kubernetes Engine to deploy my application. I tried to add a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: default
data:
  database_user: root
  database_password: root
  database_db: db
  database_port: 5432
  database_host: mypostgres
And then in my application deployment file I mapped my environment variables like the following:
spec:
  containers:
  - env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_host
    - name: DATABASE_NAME
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_db
    - name: DATABASE_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_password
    - name: DATABASE_USER
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_user
    - name: DATABASE_PORT
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_port
My service is not running and I get a
CreateContainerConfigError
When I describe the pod with kubectl describe pod, I see:
Error: Couldn't find key database_host
My question is: why is my deployment not picking up the ConfigMap I defined?
I created the ConfigMap via this command:
kubectl create configmap configmap --from-file=configmap.yaml
As mentioned in "kubectl create configmap --help":
--from-env-file='': Specify the path to a file to read lines of key=val pairs to create a configmap (i.e. a Docker
.env file).
so you just need to make a file named conf with content like:
database_user=root
database_password=root
database_db=db
database_port=5432
database_host=mypostgres
and run: "kubectl create configmap coco-config --from-env-file=conf"
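You can verify that each line became its own key (output sketched from the file above; kubectl renders the numeric-looking value as a quoted string):
kubectl get configmap coco-config -o yaml
apiVersion: v1
data:
  database_db: db
  database_host: mypostgres
  database_password: root
  database_port: "5432"
  database_user: root
kind: ConfigMap
metadata:
  name: coco-config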
UPDATE:
If you put your data values in quotes, the problem will be fixed. ConfigMap data values must be strings; unquoted, 5432 is parsed as a YAML integer and the API server rejects the manifest.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: default
data:
  database_user: "root"
  database_password: "root"
  database_db: "db"
  database_port: "5432"
  database_host: "mypostgres"
Try kubectl create configmap --from-env-file=configm
Do not use the --from-file flag: it stores the whole file under a single key named after the file, so per-entry keys like database_host are never created.
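To see why --from-file fails here, inspect what it actually created (output sketched; the names follow the command in the question):
kubectl get configmap configmap -o yaml
apiVersion: v1
data:
  configmap.yaml: |
    apiVersion: v1
    kind: ConfigMap
    ...
kind: ConfigMap
metadata:
  name: configmap
The entire file was stored as the value of one key, configmap.yaml, which is why the key database_host cannot be found.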
Try kubectl apply -f configmap.yaml
Related
How can I use a ConfigMap value, here $LOCAL_IP_DB, as input to another environment variable? $LOCAL_IP_DB is a key defined inside the db-secret ConfigMap, and another environment variable needs to reference it. How can I make that work?
spec:
  containers:
  - env:
    - name: LOCAL_IP_DB
      valueFrom:
        configMapKeyRef:
          name: db-secret
          key: LOCAL_IP_DB
    - name: LOG_Files
      value: \\${LOCAL_IP_DB}\redis\files\
The key is to use $(VAR) instead of ${VAR}: Kubernetes expands only $(VAR) references to other environment variables defined in the container.
example-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: bash
    args: [printenv]
    env:
    - name: LOCAL_IP_DB
      valueFrom:
        configMapKeyRef:
          name: db-secret
          key: LOCAL_IP_DB
    - name: LOG_FILES
      value: \$(LOCAL_IP_DB)\redis\files\
example-configmap.yaml:
apiVersion: v1
data:
  LOCAL_IP_DB: 192.168.0.1
kind: ConfigMap
metadata:
  name: db-secret
test:
controlplane $ kubectl apply -f example-pod.yaml -f example-configmap.yaml
controlplane $ kubectl logs example | grep 192
LOCAL_IP_DB=192.168.0.1
LOG_FILES=\192.168.0.1\redis\files\
You can find more information about this feature in the Kubernetes documentation on dependent environment variables.
Note: if you want to manage sensitive values, a Secret is the recommended way to do that.
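For example, the same value stored as a Secret instead of a ConfigMap (a sketch; the pod would then use secretKeyRef in place of configMapKeyRef):
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
stringData:
  LOCAL_IP_DB: 192.168.0.1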
I use minikube on Windows 10 and am trying out Kubernetes ConfigMaps with both literal values and an external file. First I wrote the manifest below, which creates a ConfigMap and a Pod that consumes it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-config
data:
  mysql_root_password: password
  mysql_password: password
  mysql_database: test
---
apiVersion: v1
kind: Pod
metadata:
  name: blog-db
  labels:
    app: blog-mysql
spec:
  containers:
  - name: blog-mysql
    image: mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_root_password
    - name: MYSQL_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_password
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: simple-config
          key: mysql_database
    ports:
    - containerPort: 3306
The manifest above throws no errors and works successfully. Next I tried to back the ConfigMap with a file.
== configmap.properties
mysql_root_password=password
mysql_password=password
mysql_database=test
But I am stuck at this part. Most ConfigMap examples use the kubectl command with the --from-file option, like below:
kubectl create configmap simple-config --from-file=configmap.properties
But I have no idea how to express the properties file in manifest YAML grammar. Any advice?
You cannot directly mount a properties file in a pod without first creating a ConfigMap from it. You can create the ConfigMap from an env file as below:
kubectl create configmap simple-config \
  --from-env-file=configmap.properties
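If you want a manifest rather than an imperative command, you can let kubectl generate one with a client-side dry run (a sketch; the keys follow the properties file above):
kubectl create configmap simple-config \
  --from-env-file=configmap.properties \
  --dry-run=client -o yaml > simple-config.yaml
The generated file looks roughly like:
apiVersion: v1
data:
  mysql_database: test
  mysql_password: password
  mysql_root_password: password
kind: ConfigMap
metadata:
  name: simple-config
and can then be applied like any other manifest with kubectl apply -f simple-config.yaml.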
As the documentation shows, you should set the env vars when doing a docker run, like the following:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres
This sets the superuser and password to access the database instead of the defaults of POSTGRES_PASSWORD='' and POSTGRES_USER='postgres'.
However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?
@P Ekambaram is correct, but I would like to go further into this topic and explain the whys and hows.
When passing passwords in Kubernetes, it's highly recommended to keep them out of plain-text manifests, and you can do this by using Secrets (which can additionally be encrypted at rest).
Creating your own Secrets (Doc)
To be able to use the secrets as described by @P Ekambaram, you need to have a Secret in your Kubernetes cluster.
To easily create a secret, you can also create a Secret from generators and then apply it to create the object on the API server. The generators should be specified in a kustomization.yaml inside a directory.
For example, to generate a Secret from literals username=admin and password=secret, you can specify the secret generator in kustomization.yaml as
# Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
EOF
Apply the kustomization directory to create the Secret object.
$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
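Note that kustomize appends a content hash to the generated name (db-user-pass-dddghtt9b5 above), and references inside the same kustomization are rewritten automatically. If you want a fixed name like the mysecret referenced below, you can create it directly (a sketch using the same literals):
kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=secret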
Using Secrets as Environment Variables (Doc)
This is an example of a pod that uses secrets from environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
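Once the pod has started, you can confirm the variables were injected (a quick check against the pod above; the redis image ships printenv):
kubectl exec secret-env-pod -- printenv SECRET_USERNAME SECRET_PASSWORD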
Source: the Kubernetes documentation on Secrets.
Use the below YAML
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.2
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: "sampledb"
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "secret"
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    name: postgres
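To keep the password out of the manifest, POSTGRES_PASSWORD can instead be read from a Secret (a sketch assuming a Secret named mysecret with a password key, as created earlier in this thread):
env:
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: password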
I have a secret.yaml file inside the templates directory with the following data:
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
stringData:
  password: password
I also have a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: appdbconfigmap
data:
  jdbcUrl: "jdbc:oracle:thin:@proxy:service"
  username: bhargav
I am using the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: expense-pod-sample-1
spec:
  containers:
  - name: expense-container-sample-1
    image: exm:1
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    envFrom:
    - configMapRef:
        name: appdbconfigmap
    env:
    - name: password
      valueFrom:
        secretKeyRef:
          name: appdbpassword
          key: password
When I run helm install, the pod comes up, but when my application tries to use the environment variable ${password}, it says the password is wrong. It does not complain about the username, which comes from the ConfigMap. This happens only with Helm; if I apply all the YAML files directly with kubectl, my application reads both username and password correctly.
Am I missing anything here?
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  password: cGFzc3dvcmQ=
You can also add the secret like this, converting the data to base64 yourself (use echo -n 'password' | base64, since a trailing newline would otherwise become part of the value), whereas stringData does the encoding automatically when you create the Secret.
Then try adding the secret to the environment like this:
envFrom:
- secretRef:
    name: test-secret
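You can then check what the container actually sees (a quick sanity check against the pod above, assuming the image includes printenv):
kubectl exec expense-pod-sample-1 -- printenv password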
I want to make some deployments in Kubernetes using Helm charts. Here is a sample override-values YAML that I use:
imageRepository: ""
ocbb:
  imagePullPolicy: IfNotPresent
  TZ: UTC
  logDir: /oms_logs
  tnsAdmin: /oms/ora_k8
  LOG_LEVEL: 3
  wallet:
    client:
    server:
    root:
db:
  deployment:
    imageName: init_db
    imageTag:
  host: 192.168.88.80
  port:
  service:
  alias:
  schemauser: pincloud
  schemapass:
  schematablespace: pincloud
  indextablespace: pincloudx
  nls_lang: AMERICAN_AMERICA.AL32UTF8
  charset: AL32UTF8
  pipelineschemauser: ifwcloud
  pipelineschemapass:
  pipelineschematablespace: ifwcloud
  pipelineindextablespace: ifwcloudx
  pipelinealias:
  queuename:
In this file I have to set some values involving credentials, for example schemapass and pipelineschemapass.
The documentation states I have to create Kubernetes Secrets for these and reference them in my YAML file with the same path hierarchy.
I generated some Kubernetes Secrets, for example:
kubectl create secret generic schemapass --from-literal=password='pincloud'
Now I don't know how to reference this newly generated Secret in my YAML file. Any tip on how to set the schemapass field in the chart to reference the Kubernetes Secret?
You cannot use a Kubernetes Secret in your values.yaml. In values.yaml you only specify the input parameters for the Helm chart, so it could be the secret name, but not the secret itself (or anything it resolves to).
If you want to use the secret in your container, then you can insert it as an environment variable:
env:
- name: SECRET_VALUE_ENV
  valueFrom:
    secretKeyRef:
      name: schemapass
      key: password
You can check more in the Hazelcast Enterprise Helm Chart. We do exactly that. You specify the secret name in values.yaml and then the secret is injected into the container using environment variable.
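In template form that pattern looks roughly like this (a sketch; secretsName is a hypothetical values.yaml key that holds the Secret's name):
env:
- name: SECRET_VALUE_ENV
  valueFrom:
    secretKeyRef:
      name: {{ .Values.secretsName }}
      key: password
with values.yaml supplying:
secretsName: schemapass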
You can reference Kubernetes values, whether Secrets or not, in Helm by specifying them in your container as environment variables.
Let your deployment be mongo.yml:
--
kind: Deployment
--
--
  containers:
  --
    env:
    - name: DB_URL
      valueFrom:
        configMapKeyRef:
          name: mongo-config
          key: mongo-url
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mongo-secret
          key: mongo-password
where mongo-secret is:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
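The base64 values above can be produced with echo -n; the -n matters, because a trailing newline would otherwise become part of the credential:
echo -n mongouser | base64      # bW9uZ291c2Vy
echo -n mongopassword | base64  # bW9uZ29wYXNzd29yZA==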