Authentication mongo deployed on kubernetes - mongodb

I tried to configure mongo with authentication on a kubernetes cluster. I deployed the following yaml:
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:4.0.0
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "admin"
        - name: MONGO_INITDB_ROOT_PASSWORD
          # Get password from secret
          value: "abc123changeme"
        command:
        - mongod
        - --auth
        - --replSet
        - rs0
        - --bind_ip
        - 0.0.0.0
        ports:
        - containerPort: 27017
          name: web
        volumeMounts:
        - name: mongo-ps
          mountPath: /data/db
      volumes:
      - name: mongo-ps
        persistentVolumeClaim:
          claimName: mongodb-pvc
When I tried to authenticate with username "admin" and password "abc123changeme" I received "Authentication failed.".
How can I configure the mongo admin username and password (I want to get the password from a secret)?
Thanks

The reason the environment variables don't work is that the MONGO_INITDB_* variables are consumed by the docker-entrypoint.sh script inside the image (https://github.com/docker-library/mongo/tree/master/4.0). When you define a command: in your Kubernetes manifest you override that entrypoint, so the script never runs (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/).
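For illustration, here is a minimal sketch (not the manifest from the question) of the entrypoint-preserving approach: supply only args: so the image's ENTRYPOINT (docker-entrypoint.sh) still runs and processes the MONGO_INITDB_* variables, and pull the password from a Secret. The secret name mongodb-secret and its keys are assumptions for the example.
containers:
- name: mongodb
  image: mongo:4.0.0
  env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: mongodb-secret   # assumed secret name
        key: username
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongodb-secret
        key: password
  # args: only overrides the image CMD; the ENTRYPOINT (docker-entrypoint.sh)
  # still runs, sees the leading '-' and prepends mongod itself
  args:
  - --auth
  - --bind_ip
  - 0.0.0.0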
See the below YML, which is adapted from a few of the examples I found online. Note these learning points:
cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the POD labels REGARDLESS of namespace, so it will try to hook up with any instance in the cluster. This cost me a few hours of head-scratching, as I'd removed the environment= labels from the example because we use namespaces to segregate our environments. Silly and obvious in retrospect, but extremely confusing at the time: the mongo logs were throwing all sorts of authentication errors and service-down errors because of the cross-talk.
I was new to ClusterRoleBindings, and it took me a while to realise they are cluster-level (despite needing to supply a namespace to get kubectl to accept them), which was causing mine to get overwritten between namespaces. Make sure you create unique names per environment: a ClusterRoleBinding is silently overwritten if the name isn't unique within the cluster, so a deployment in one namespace can break another.
MONGODB_DATABASE needs to be set to 'admin' for authentication to work.
I was following this example to configure authentication, which depended on a 'sleep 5' in the hope that the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough, so I initially increased it, as failure to create the adminUser obviously led to connection-refused issues. I later replaced the sleep with a while loop that pings mongo until the daemon responds, which is more foolproof.
If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
You need at least 3 nodes in a Mongo cluster!
The YML below should spin up and configure a mongo replicaset in kubernetes with persistent storage and authentication enabled.
If you connect into the pod...
kubectl exec -ti mongo-db-0 --namespace somenamespace -- /bin/bash
mongo shell is installed in the image so you should be able to connect to the replicaset with...
mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
And see that you get either rs0:PRIMARY> or rs0:SECONDARY>, indicating the pods are in a mongo replica set. Use rs.conf() from the PRIMARY to verify that.
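For a quick check of member state, the following also works from the shell on any member (plain mongo shell JavaScript):
rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr); })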
#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
  #echo -n "mongoadmin" | base64
  init.userid: bW9uZ29hZG1pbg==
  #echo -n "adminpassword" | base64
  init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
  name: mongo-init-credentials
  namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here)
apiVersion: v1
data:
  #echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
  mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
  name: mongo-key
  namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it the Pod List role
# note this is a ClusterRoleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-serviceaccount
  namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-somenamespace-serviceaccount-view
  namespace: somenamespace
subjects:
- kind: ServiceAccount
  name: mongo-serviceaccount
  namespace: somenamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
  namespace: somenamespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: somenamespace
  name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
  namespace: somenamespace
  name: mongo-db
  labels:
    name: mongo-db
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: somenamespace
  name: mongo-db
spec:
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        # Labels MUST match MONGO_SIDECAR_POD_LABELS
        # and MUST differentiate between other mongo
        # instances in the CLUSTER not just the namespace
        # as the sidecar will search the entire cluster
        # for something to configure
        app: mongo
        environment: somenamespace
    spec:
      #Run the Pod using the service account
      serviceAccountName: mongo-serviceaccount
      terminationGracePeriodSeconds: 10
      #Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mongo
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: mongo
        image: mongo:4.0.12
        command:
        #Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
        #in order to pass the new admin user id and password in
        - /bin/sh
        - -c
        - >
          if [ -f /data/db/admin-user.lock ]; then
            echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with runtime settings (clusterAuthMode)"
            #ensure wiredTigerCacheSize is set within the size of the containers memory limit
            mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
          else
            echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with setup setting (authMode)"
            mongod --auth;
          fi;
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - >
                if [ ! -f /data/db/admin-user.lock ]; then
                  echo "KUBERNETES LOG $HOSTNAME- no Admin-user.lock file found yet"
                  #replaced simple sleep, with ping and test.
                  while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
                  touch /data/db/admin-user.lock
                  if [ "$HOSTNAME" = "mongo-db-0" ]; then
                    echo "KUBERNETES LOG $HOSTNAME- creating admin user ${MONGODB_USERNAME}"
                    mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
                  fi;
                  echo "KUBERNETES LOG $HOSTNAME-shutting mongod down for final restart"
                  mongod --shutdown;
                fi;
        env:
        - name: MONGODB_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-init-credentials
              key: init.userid
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-init-credentials
              key: init.password
        ports:
        - containerPort: 27017
        livenessProbe:
          exec:
            command:
            - mongo
            - --eval
            - "db.adminCommand('ping')"
          initialDelaySeconds: 5
          periodSeconds: 60
          timeoutSeconds: 10
        readinessProbe:
          exec:
            command:
            - mongo
            - --eval
            - "db.adminCommand('ping')"
          initialDelaySeconds: 5
          periodSeconds: 60
          timeoutSeconds: 10
        resources:
          requests:
            memory: "350Mi"
            cpu: 0.05
          limits:
            memory: "1Gi"
            cpu: 0.1
        volumeMounts:
        - name: mongo-key
          mountPath: "/etc/secrets-volume"
          readOnly: true
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        # Sidecar searches for any POD in the CLUSTER with these labels
        # not just the namespace..so we need to ensure the POD is labelled
        # to differentiate it from other PODS in different namespaces
        - name: MONGO_SIDECAR_POD_LABELS
          value: "app=mongo,environment=somenamespace"
        - name: MONGODB_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-init-credentials
              key: init.userid
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-init-credentials
              key: init.password
        #don't be fooled by this..it's not your DB that
        #needs specifying, it's the admin DB as that
        #is what you authenticate against with mongo.
        - name: MONGODB_DATABASE
          value: admin
      volumes:
      - name: mongo-key
        secret:
          defaultMode: 0400
          secretName: mongo-key
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Supposing you created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
Here is a snippet showing how to get a value from a secret in a Kubernetes YAML file:
env:
- name: MONGO_INITDB_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: password
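If you prefer not to hand-encode the base64 values, the equivalent secret can be created directly from literals (the values here are simply the decoded versions of the ones above):
kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=1f2d1e2e67df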

I found this issue is related to a bug in docker-entrypoint.sh and occurs when numactl is detected on the node.
Try this simplified code (which moves numactl out of the way):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.0.0
        command:
        - /bin/bash
        - -c
        # mv is not needed for later versions e.g. 3.4.19 and 4.1.7
        - mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "xxxxx"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "xxxxx"
        ports:
        - containerPort: 27017
I raised an issue at:
https://github.com/docker-library/mongo/issues/330
Hopefully it will be fixed at some point so no need for the hack :o)

Adding this resolved the issue for me:
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
  value: "true"
Seems like the default is set to "false".
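For context, the ME_CONFIG_* variables are read by the mongo-express image, not by mongod itself, so this belongs in the mongo-express container spec. A rough sketch, assuming a mongo-express container pointing at a service named mongodb-service (both names are assumptions here):
containers:
- name: mongo-express
  image: mongo-express
  env:
  # hostname of the mongodb service (assumed name)
  - name: ME_CONFIG_MONGODB_SERVER
    value: mongodb-service
  - name: ME_CONFIG_MONGODB_ENABLE_ADMIN
    value: "true"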
If you are using Kubernetes you can check the reason for the failure by using the command:
kubectl logs <pod name>

This is what worked for me;
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: my-mongodb-pod
        image: mongo:4.4.3
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "someMongoUser"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "somePassword"
        - name: MONGO_REPLICA_SET
          value: "myReplicaSet"
        - name: MONGO_PORT
          value: "27017"
        # Note: disabling non-authenticated access in mongodb is kind of complicated[4]
        # Note: the `_getEnv` function is internal and undocumented[3].
        #
        # 1. https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
        # 2. https://stackoverflow.com/a/54726708/2768067
        # 3. https://stackoverflow.com/a/67037065/2768067
        # 4. https://www.mongodb.com/features/mongodb-authentication
        command:
        - /bin/sh
        - -c
        - |
          set -x # print each command as it is run
          set -e # fail if any command fails
          env;
          ps auxwww;
          printf "\n\t mongod:: start in the background \n\n";
          mongod \
            --port="${MONGO_PORT}" \
            --bind_ip_all \
            --replSet="${MONGO_REPLICA_SET}" \
            --quiet > /tmp/mongo.log.json 2>&1 &
          sleep 9;
          ps auxwww;
          printf "\n\t mongod: set master \n\n";
          mongo --port "${MONGO_PORT}" --eval '
            rs.initiate({});
            sleep(3000);';
          printf "\n\t mongod: add user \n\n";
          mongo --port "${MONGO_PORT}" --eval '
            db.getSiblingDB("admin").createUser({
              user: _getEnv("MONGO_INITDB_ROOT_USERNAME"),
              pwd: _getEnv("MONGO_INITDB_ROOT_PASSWORD"),
              roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
            });';
          printf "\n\t mongod: shutdown \n\n";
          mongod --shutdown;
          sleep 3;
          ps auxwww;
          printf "\n\t mongod: restart with authentication \n\n";
          mongod \
            --auth \
            --port="${MONGO_PORT}" \
            --bind_ip_all \
            --replSet="${MONGO_REPLICA_SET}" \
            --verbose=v
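Once the pod has restarted with --auth, a quick way to confirm the user works is to authenticate against the admin database from outside the shell (a usage sketch; the pod name mongo-0 follows from the StatefulSet name above, and the credentials are the MONGO_INITDB_ROOT_* values):
kubectl exec -it mongo-0 -- mongo --port 27017 \
  --authenticationDatabase admin -u someMongoUser -p somePassword \
  --eval 'db.getSiblingDB("admin").runCommand({ connectionStatus: 1 })'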

Related

Health check of Postgres database in separate pod from springboot application pod to ensure database connection - K8s

As a training exercise in my company I have to deploy my simple microservices app to a local minikube Kubernetes cluster.
My app is as follows:
Post microservice
User microservice
both communicate with each other through RestTemplate
the services use Postgres databases in separate containers, built on the official postgres image - each has its own database running in a Docker container.
I have both services running in containers alongside the 2 databases.
In docker-compose.yml I could simply add 'depends_on' to make sure that the application containers run only after the containers with the databases are ready and running.
How can I achieve the same thing in Kubernetes? I must run 8 pods (2 for the microservices and 2 for the databases, 2 instances of each), all pods in the same namespace. I must make sure that the pods with the databases start first, and only after those are ready do the pods with the microservices start.
All should work with the command
kubectl apply -f .
So, in short, the pod with the application should start after the pod with the database.
I have tried using an initContainer, but it doesn't work.
my post-manifest.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: post
  namespace: kubernetes-microservices-task2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: post
  template:
    metadata:
      labels:
        app: post
    spec:
      containers:
      - name: post
        image: norbertpedyk/post-image:varsion8
        envFrom:
        - configMapRef:
            name: post-config
        - secretRef:
            name: secret
        ports:
        - containerPort: 8081
      initContainers:
      - name: check-db
        image: busybox
        command: [ "sh", "-c", "until nslookup post-db; do echo waiting for post-db; sleep 2; done" ]
---
apiVersion: v1
kind: Service
metadata:
  name: post
  namespace: kubernetes-microservices-task2
spec:
  selector:
    app: post
  ports:
  - name: post
    port: 8081
    targetPort: 8081
my post-db-manifest.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: post-db
  namespace: kubernetes-microservices-task2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: post-db
  template:
    metadata:
      labels:
        app: post-db
    spec:
      containers:
      - name: post-db
        image: postgres
        env:
        - name: POSTGRES_DB
          value: "posts"
        - name: POSTGRES_USER
          value: "pedyk"
        - name: POSTGRES_PASSWORD
          value: "password"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-post-data
          mountPath: /data/postgres
      volumes:
      - name: postgres-post-data
        emptyDir: { }
---
apiVersion: v1
kind: Service
metadata:
  name: post-db
  namespace: kubernetes-microservices-task2
spec:
  selector:
    app: post-db
  ports:
  - name: http
    port: 5433
    targetPort: 5432
What I got is shown in the screenshot (image not included here).
This is an example only for the Post service; the same applies to the User service.
I must make sure that pods with databases will run first and only after those are ready, pods with microservices will run.
You shouldn't rely on a specific startup order for your application to function. You really want your application to wait until the database is ready; your initContainer idea is heading in the right direction, but you're not checking whether the database is ready, only whether its hostname resolves.
An easy way to check if postgres is ready is to use psql to run a simple query (SELECT 1) against it until it succeeds. We could do something like the following. In this example, I assume that we have a secret named postgres-credentials that looks like:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: wait-for-db
    component: postgres
  name: postgres-credentials-b8mfmtc4g6
  namespace: postgres
type: Opaque
stringData:
  POSTGRES_PASSWORD: secret
  POSTGRES_USER: postgres
  POSTGRES_DB: postgres
That secret is used to configure the postgres image; we can use the same secret to set up environment variables for psql:
initContainers:
- name: wait-for-db
  image: docker.io/postgres:14
  env:
  - name: PGHOST
    value: postgres
  - name: PGUSER
    valueFrom:
      secretKeyRef:
        key: POSTGRES_USER
        name: postgres-credentials-b8mfmtc4g6
  - name: PGPASSWORD
    valueFrom:
      secretKeyRef:
        key: POSTGRES_PASSWORD
        name: postgres-credentials-b8mfmtc4g6
  - name: PGDATABASE
    valueFrom:
      secretKeyRef:
        key: POSTGRES_DB
        name: postgres-credentials-b8mfmtc4g6
  command:
  - /bin/sh
  - -c
  - |
    while ! psql -c 'select 1' > /dev/null 2>&1; do
      echo "waiting for database"
      sleep 1
    done
    echo "database is ready!"
This initContainer runs psql -c 'select 1' until it succeeds, at which point the application pod can come up.
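If you'd rather not authenticate from the initContainer at all, the postgres image also ships pg_isready, which only checks that the server is accepting connections; a variant of the same loop (reusing the assumed PGHOST/PGUSER variables) could be:
while ! pg_isready -h "$PGHOST" -U "$PGUSER" > /dev/null 2>&1; do
  echo "waiting for database"
  sleep 1
done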
You can find a complete example deployment here (that uses a PgAdmin pod as an example application).
It's worth noting that this is really only half the picture: in an ideal world, you wouldn't need the initContainer because your application would already have the necessary logic to (a) wait for database availability at startup and (b) handle database interruptions at runtime. In other words, you want to be able to restart your database (or upgrade it, or whatever) while your application is running without having to restart your application.

Mongodb Authentication failed in Kubernetes (minikube)

Creating a mongodb from the below minikube deployment, I am not able to authenticate from the command line.
Basically I'd just want to list all databases because I suspect there is an issue with the connectivity to mongo-express.
I exec into the pod using kubectl exec -it mongodb-deployment-6b46455744-gfkzw -- /bin/bash,
run mongo to start up the CLI, and
db.auth("username", "password") gives MongoServerError: Authentication failed.
even though printenv gives
MONGO_INITDB_ROOT_PASSWORD=password
MONGO_INITDB_ROOT_USERNAME=username
any help?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_ENABLE_ADMIN
          value: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
Managed to connect using
mongosh --port 27017 --authenticationDatabase \
"admin" -u "myUserAdmin" -p
from https://docs.mongodb.com/manual/tutorial/authenticate-a-user/#std-label-authentication-auth-as-user and the Docker Hub page.
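Equivalently, from inside the mongo/mongosh shell you have to authenticate against the admin database, because that is where the MONGO_INITDB_ROOT_* user is created (the credentials below are just the ones from the question's printenv output):
use admin
db.auth("username", "password")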

Setting postgres environmental variables running image

As the documentation shows, you should be setting the env vars when doing a docker run like the following:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' -d postgres
This sets the superuser and password to access the database instead of the defaults of POSTGRES_PASSWORD='' and POSTGRES_USER='postgres'.
However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?
@P Ekambaram is correct, but I would like to go further into this topic and explain the "whys and hows".
When passing passwords on Kubernetes, it's highly recommended to use encryption and you can do this by using secrets.
Creating your own Secrets (Doc)
To be able to use the secrets as described by @P Ekambaram, you need to have a secret in your kubernetes cluster.
To easily create a secret, you can also create a Secret from generators and then apply it to create the object on the Apiserver. The generators should be specified in a kustomization.yaml inside a directory.
For example, to generate a Secret from literals username=admin and password=secret, you can specify the secret generator in kustomization.yaml as
# Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
EOF
Apply the kustomization directory to create the Secret object.
$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
Using Secrets as Environment Variables (Doc)
This is an example of a pod that uses secrets from environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
Source: here and here.
Use the below YAML
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.2
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: "sampledb"
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "secret"
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    name: postgres
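If you want to avoid the hard-coded password and reuse the secret-generator approach shown earlier, the env entry could instead reference the generated Secret (the name below is the generated one from the kubectl apply -k output above, so treat it as an example):
env:
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-user-pass-dddghtt9b5   # generated name from 'kubectl apply -k .'
      key: password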

Production Redis cluster with sharding in Kubernetes

I have tried the stable Redis helm chart to deploy a Redis cluster with 1 master and 3 slaves that replicate data written to the master. But it is a single point of failure - when I deleted the master, no new master pod was recreated. Also, the chart does not support data partitioning (sharding).
EDIT: I have created a Redis cluster using the helm redis-ha chart, but there is no option for sharding.
Aren't there Redis helm charts to deploy a production-ready HA cluster that supports partitioning (sharding)? Can you point me to resources I can use to set up a manageable Redis cluster? Primarily, my Redis is used for data caching, message processing & streaming.
You need Redis Sentinel.
If helm is an option, these links can be of help:
https://github.com/helm/charts/tree/master/stable/redis (notice the sentinel-related configuration parameters)
https://github.com/helm/charts/tree/master/stable/redis-ha
Here's a fine tutorial about setting up a 3-master / 3-slave redis cluster with partitioning. It's targeted at Rancher, but that part is optional; I just tested it on Azure Kubernetes Service and it works fine.
First, apply this yaml (ConfigMap, StatefulSet with 6 replicas, Service):
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  update-node.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.1-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
And the next step is to run this script to form the cluster (you will have to enter "yes" once interactively):
kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role; echo; done
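To sanity-check the result, you can ask any member for the overall cluster state and slot assignments, e.g.:
kubectl exec -it redis-cluster-0 -- redis-cli cluster info
kubectl exec -it redis-cluster-0 -- redis-cli cluster nodes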

Spring boot application pod fails to find the mongodb pod on Kubernetes cluster

I have a Spring Boot Application backed by MongoDB. Both are deployed on a Kubernetes cluster on Azure. My Application throws "Caused by: java.net.UnknownHostException: mongo-dev-0 (pod): Name or service not known" when it tries to connect to MongoDB.
I am able to connect to the mongo-dev-0 pod and run queries on the MongoDB, so there is no issue with Mongo itself, and it looks like Spring Boot is able to connect to the Mongo service and discover the pod behind the service.
How do I ensure the pods are discoverable by my Spring Boot Application?
How do I go about debugging this issue?
Any help is appreciated. Thanks in advance.
Here is my config:
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-dev
  labels:
    name: mongo-dev
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo-dev
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo-dev
spec:
  serviceName: "mongo-dev"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo-dev
        environment: dev
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo-dev
        image: mongo:3.4
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        - "--auth"
        - "--bind_ip"
        - 0.0.0.0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-dev-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo-dev,environment=dev"
        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: "mongo-dev"
  volumeClaimTemplates:
  - metadata:
      name: mongo-dev-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "devdisk"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: devdisk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS
  location: abc
  storageAccount: xyz
To be able to reach your mongodb pod via its service from your spring boot application, you have to start the mongodb pod and the corresponding service first, and then start your spring boot application pod (let's name it sb-pod).
You can enforce this order by using an initContainer in your sb-pod to wait for the database service to be available before starting. Something like:
initContainers:
- name: init-mongo-dev
  image: busybox
  command: ['sh', '-c', 'until nslookup mongo-dev; do echo waiting for mongo-dev; sleep 2; done;']
If you connect to your sb-pod using:
kubectl exec -it sb-pod -- bash
and type the env command, make sure you can see the environment variables
MONGO_DEV_SERVICE_HOST and MONGO_DEV_SERVICE_PORT
How about mongo-dev-0.mongo-dev.default.svc.cluster.local?
<pod-id>.<service-name>.<namespace>.svc.cluster.local
As in Stable Network ID.
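For completeness, a hedged sketch of how the Spring Boot side could then address the members through those stable DNS names (the property spring.data.mongodb.uri is standard; the credentials, database name and namespace are illustrative):
spring.data.mongodb.uri=mongodb://user:pass@mongo-dev-0.mongo-dev.default.svc.cluster.local:27017,mongo-dev-1.mongo-dev.default.svc.cluster.local:27017,mongo-dev-2.mongo-dev.default.svc.cluster.local:27017/mydb?replicaSet=rs0&authSource=admin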