I am trying to set two env variables for mongo, namely MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, using a Kubernetes ConfigMap and Secret, as follows.
When I don't use the ConfigMap and Secret, i.e. I hardcode the username and password, it works, but when I try to replace them with the ConfigMap and Secret, it says
'Authentication failed.'
My username and password are the same value: admin.
Here's the YAML definition for these objects. Can someone help me see what is wrong?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  username: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-password
data:
  password: YWRtaW4K
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
        - name: mongodbtest
          image: mongo:3
          # env:
          #   - name: MONGO_INITDB_ROOT_USERNAME
          #     value: admin
          #   - name: MONGO_INITDB_ROOT_PASSWORD
          #     value: admin
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mongodb-username
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-password
                  key: password
Finally, after hours, I was able to find the solution: it was not something on the Kubernetes side, it was how I base64-encoded the value.
The correct way to encode is with the following command:
echo -n 'admin' | base64
Without -n, echo appends a trailing newline that gets encoded into the password, and that was my issue.
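For illustration, here is the difference the -n flag makes (output from a typical GNU/Linux shell):

# Without -n, the encoded value includes a trailing newline:
echo 'admin' | base64        # => YWRtaW4K
# With -n, only the five characters of 'admin' are encoded:
echo -n 'admin' | base64     # => YWRtaW4=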
Your deployment YAML is fine; just change spec.containers[0].env to spec.containers[0].envFrom:
spec:
  containers:
    - name: mongodbtest
      image: mongo:3
      envFrom:
        - configMapRef:
            name: mongodb-username
        - secretRef:
            name: mongodb-password
That will put all keys of your Secret and ConfigMap as environment variables in the deployment. Note that the variable names come from the keys themselves, so for the mongo image the keys must be named MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD (as in the manifests below), not username and password.
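As a quick sanity check, you can print the environment of the running container (assuming a kubectl version that accepts a deployment reference with exec):

kubectl exec deploy/mongodbtest -- printenv | grep MONGO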
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  name: mongo-cred
  namespace: default
and inject it into the deployment like this:
envFrom:
  - configMapRef:
      name: mongo-cred
The deployment will look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
        - name: mongodbtest
          image: mongo:3
          envFrom:
            - configMapRef:
                name: mongo-cred
If you want to keep the data in a Secret instead, that is the best practice for sensitive data (Secret values are base64-encoded and can be encrypted at rest). Inject it with:
envFrom:
  - secretRef:
      name: mongo-cred
You can create the secret with:
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4= # base64 of 'admin', no trailing newline
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4=
kind: Secret
type: Opaque
metadata:
  name: mongo-cred
  namespace: default
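Alternatively, a sketch of creating the same Secret with kubectl, which base64-encodes the values for you and avoids the trailing-newline pitfall entirely:

kubectl create secret generic mongo-cred \
  --namespace default \
  --from-literal=MONGO_INITDB_ROOT_USERNAME=admin \
  --from-literal=MONGO_INITDB_ROOT_PASSWORD=admin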
I am trying to create a deployment from an image that I build with a Dockerfile. My goal is to pass environment variables so I can access them inside the Dockerfile, similar to the postgres image available on Docker Hub. Additionally, I need a Service so I can access the database from a Spring Boot application. I would like to work this way because I want additional content inside the Dockerfile. I am not sure whether I can do it this way, but if yes, what am I doing wrong, and how can I access it from Spring Boot?
What I have tried so far:
apiVersion: v1
kind: Namespace
metadata:
  name: testns
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
  namespace: testns
data:
  dbname: test_database
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: testns
type: Opaque
data:
  username: cG9zdGdyZXN1c2Vy
  password: cGFzc3dvcmQxMjM0NTY=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-depl
  namespace: testns
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresdb
          image: testpostgres
          ports:
            - containerPort: 5432
          env:
            - name: DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-configmap
                  key: dbname
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: testns
spec:
  selector:
    app: postgresql
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 7000
      targetPort: 5432
      nodePort: 30000
And the related Dockerfile that I use to create the testpostgres image:
FROM postgres
ARG DB
ARG USERNAME
ARG PASSWORD
ENV POSTGRES_DB=$DB
ENV POSTGRES_USER=$USERNAME
ENV POSTGRES_PASSWORD=$PASSWORD
EXPOSE 5432
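Note that ARG values only exist at image build time, so as written they would have to be supplied when the image is built, for example (the values here decode from the ConfigMap and Secret above):

docker build \
  --build-arg DB=test_database \
  --build-arg USERNAME=postgresuser \
  --build-arg PASSWORD=password123456 \
  -t testpostgres .

The env entries the Deployment sets at runtime never reach ARG. Since the stock postgres entrypoint reads POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD at container start, setting those names directly in the Deployment's env would make the ARG/ENV indirection unnecessary.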
I'm having difficulty trying to get kustomize to replace the contents of an item in a list.
My kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resource.yaml
patches:
  - patch.yaml
My patch.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - name: web-service-migration
          env:
            - name: PG_DATABASE
              value: web-pgdb
My resource.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - name: web-service-migration
          env:
            - name: PG_DATABASE
              valueFrom:
                secretKeyRef:
                  name: web-pgdb
                  key: database
kustomize build returns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - env:
            - name: PG_DATABASE
              value: web-pgdb
              valueFrom:
                secretKeyRef:
                  key: database
                  name: web-pgdb
          name: web-service-migration
What I want kustomize build to return:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - env:
            - name: PG_DATABASE
              value: web-pgdb
          name: web-service-migration
If I remember correctly, patches in kustomize use a strategic merge by default, so you need to nullify valueFrom; your patch should look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
        - name: web-service-migration
          env:
            - name: PG_DATABASE
              value: web-pgdb
              valueFrom: null
More details about strategic merge patch and how to delete maps: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps
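As an alternative to nullifying the field, here is a sketch of the same fix as a JSON 6902 patch, assuming a kustomize version that supports inline patches with a target selector:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resource.yaml
patches:
  - target:
      kind: Deployment
      name: web-service
    patch: |-
      # remove the secretKeyRef, then set the plain value
      - op: remove
        path: /spec/template/spec/initContainers/0/env/0/valueFrom
      - op: add
        path: /spec/template/spec/initContainers/0/env/0/value
        value: web-pgdb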
I checked this: How to use Keycloak operator custom resource using external database connection. I am using CloudSQL from the Google platform as the external database source.
My configurations are:
keycloak-idm:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: kiwigrid-keycloak-idm
spec:
  instances: 3
  externalAccess:
    enabled: false
  externalDatabase:
    enabled: true
external db storage secret
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: kiwios-application
type: Opaque
stringData:
  POSTGRES_DATABASE: keycloak-storage
  POSTGRES_EXTERNAL_ADDRESS: pgsqlproxy.infra
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_HOST: keycloak-postgresql
  POSTGRES_USERNAME: keycloak-user
  POSTGRES_PASSWORD: S1ly3AValJYBNR-fsptLYdT74
  POSTGRES_SUPERUSER: "true"
storage database
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLDatabase
metadata:
  name: keycloak-storage
  namespace: kiwios-application
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
spec:
  charset: UTF8
  collation: en_US.UTF8
  instanceRef:
    name: keycloak-storage-instance-pg
    namespace: infra
storage users
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: keycloak-user
  namespace: kiwios-application
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
spec:
  instanceRef:
    name: keycloak-storage-instance-pg
    namespace: infra
  password:
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
The error shown in the Kubernetes console was attached as a screenshot (not reproduced here). It is not working. Can anyone please help me figure out what I am doing wrong?
Update: I dug deeper with the k9s console. As per keycloak-operator functionality, it creates an ExternalName service for the database connection, which here is keycloak-postgresql (screenshot not reproduced).
There is no error showing in the keycloak-operator console. Only keycloak-idm is not able to make a connection using this external name; it fails with an error (screenshot not reproduced).
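For reference, a minimal sketch of what such an operator-generated ExternalName service would roughly look like; the name and target are taken from the question, and the exact shape is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: keycloak-postgresql
  namespace: kiwios-application
spec:
  # resolves the service name to the external DNS name below
  type: ExternalName
  externalName: pgsqlproxy.infra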
This is what I am using for my Keycloak setup. Also, if you read the question, the asker mentions the secret issue in the update section.
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: POSTGRES
            - name: DB_ADDR
              value: postgres
            - name: DB_DATABASE
              value: keycloak
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: password
            - name: KEYCLOAK_HTTP_PORT
              value: "80"
            - name: KEYCLOAK_HTTPS_PORT
              value: "443"
            - name: KEYCLOAK_HOSTNAME
              value: keycloak.harshmanvar.tk # replace with ingress URL
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
You can try moving the env variables into the Secret you are using.
Example files : https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
Environment variables that Keycloak support : https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables
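A minimal sketch of that approach, assuming a hypothetical Secret named keycloak-env that holds the same keys the Deployment currently sets inline:

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-env # hypothetical name
type: Opaque
stringData:
  KEYCLOAK_USER: admin
  KEYCLOAK_PASSWORD: admin
  DB_USER: root
  DB_PASSWORD: password

Then, in the container spec, replace those env entries with:

envFrom:
  - secretRef:
      name: keycloak-env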
Have you tried it this way?
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: kiwios-application
type: Opaque
stringData:
  POSTGRES_DATABASE: "keycloak-storage"
  POSTGRES_EXTERNAL_ADDRESS: "pgsqlproxy.infra"
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_HOST: "keycloak-postgresql"
  POSTGRES_USERNAME: "keycloak-user"
  POSTGRES_PASSWORD: "S1ly3AValJYBNR-fsptLYdT74"
  POSTGRES_SUPERUSER: "true"
I have a secret:
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
and a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret-ssh-auth
              mountPath: /root/.ssh
      volumes:
        - name: secret-ssh-auth
          secret:
            secretName: secret-ssh-auth
            defaultMode: 0400
It creates a file at the path /root/.ssh/ssh-privatekey, while I want the file to be named /root/.ssh/id_rsa instead.
I know we can solve it by running a kubectl command, but I want to handle it inside the YAML file.
So, how can I do that in the YAML file?
Based on the Kubernetes documentation, the ssh-privatekey key is mandatory. In this case, you can leave it empty via the stringData key, then define another key under data, like this:
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -
data:
  id_rsa: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
Got the same problem, and resolved it by simply defining spec.volumes like this, which renames the key using the path value:
volumes:
  - name: privatekey
    secret:
      secretName: private-key
      items:
        - key: ssh-privatekey
          path: id_rsa
      defaultMode: 384 # decimal for octal 0600
then reference it inside the container definition:
containers:
  - name: xxx
    volumeMounts:
      - name: privatekey
        mountPath: /path/to/.ssh
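One caveat: a secret volume mounted at /root/.ssh replaces the entire directory contents. If other files need to survive there, a subPath mount of just the renamed key is a possible alternative (sketch):

containers:
  - name: xxx
    volumeMounts:
      - name: privatekey
        # mount only the single projected file, keeping the rest of .ssh intact
        mountPath: /root/.ssh/id_rsa
        subPath: id_rsa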
I have multiple Secrets in a Kubernetes cluster. All of them contain many values, for example:
apiVersion: v1
kind: Secret
metadata:
  name: paypal-secret
type: Opaque
data:
  PAYPAL_CLIENT_ID: base64_PP_client_id
  PAYPAL_SECRET: base64_pp_secret
stringData:
  PAYPAL_API: https://api.paypal.com/v1
  PAYPAL_HOST: api.paypal.com
I'm curious how to pass all of the values from all the Secrets to, for example, a ReplicaSet.
I tried this approach:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pp-debts
  labels:
    environment: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      environment: prod
  template:
    metadata:
      labels:
        environment: prod
    spec:
      containers:
        - name: integration-app
          image: my-container-image
          envFrom:
            - secretRef:
                name: intercom-secret
          envFrom:
            - secretRef:
                name: paypal-secret
          envFrom:
            - secretRef:
                name: postgres-secret
          envFrom:
            - secretRef:
                name: redis-secret
But when I connected to the pod and looked at the env variables, I was only able to see the values from redis-secret.
That is because envFrom appears as a map key four times, and most YAML parsers keep only the last occurrence of a duplicated key, so only the redis-secret entry survived parsing. Try using one envFrom with multiple entries under it, as below:
- name: integration-app
  image: my-container-image
  envFrom:
    - secretRef:
        name: intercom-secret
    - secretRef:
        name: paypal-secret
    - secretRef:
        name: postgres-secret
    - secretRef:
        name: redis-secret
There's an example at the bottom of this blog post by David Chua.