Cannot connect to a mongodb service in a Kubernetes cluster - mongodb

I have a Kubernetes cluster on Google Cloud with a database service running in front of a mongodb deployment. I also have a series of microservices that are attempting to connect to that datastore.
However, they can't seem to find the host.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo
Here's my mongo deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk
          fsType: ext4
And an example of one of my services...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bandzest-artists
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bandzest-artists
    spec:
      containers:
      - name: artists-container
        image: gcr.io/<omitted>/artists:41040e8
        ports:
        - containerPort: 7000
        imagePullPolicy: Always
        env:
        - name: DB_HOST
          value: mongo
        - name: AWS_BUCKET_NAME
          value: <omitted>
        - name: AWS_ACCESS_KEY_ID
          value: <omitted>
        - name: AWS_SECRET_KEY
          value: <omitted>

First, check that the service is created
kubectl describe svc mongo
You should see that it has been created and that it is routing to your pod's IP. If you're wondering what your pod's IP is, you can check it via
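For example, a quick way to confirm the selector is actually matching a pod is to check the service's endpoints (a minimal check, assuming the service is named mongo in the default namespace):
kubectl get endpoints mongo
If the ENDPOINTS column is empty, the selector isn't matching any pods and the label/selector pair is the first thing to double-check.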
kubectl get po | grep mongo
which should return something like mongo-deployment-<guid>-<guid>; then do
kubectl describe po mongo-deployment-<guid>-<guid>
Make sure the pod started correctly and shows Running, not something like ImagePullBackOff. It looks like you're mounting a volume from a gcePersistentDisk; if you see your pod just hanging out in the ContainerCreating state, it's very likely you're not mounting the disk correctly. Make sure you create the disk before you try to mount it as a volume.
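If the disk doesn't exist yet, something like the following should create it (a sketch with placeholder size and zone; the name must match the pdName in your deployment):
gcloud compute disks create mongo-disk --size=10GB --zone=us-central1-a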
If it looks like your service is routing correctly, then you can check the logs of your pod to make sure it started mongo correctly:
kubectl logs mongo-deployment-<guid>-<guid>
If it looks like the pod and logs are correct, you can exec into the pod and make sure mongo is actually starting and working:
kubectl exec -it mongo-deployment-<guid>-<guid> sh
which should get you a shell inside the container (Pod), and from there you can try something like the following to see if your DB is running.
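For example (a minimal check, assuming the legacy mongo shell that ships in the mongo image):
mongo --eval 'db.adminCommand("ping")'
If that returns { "ok" : 1 }, mongod is up and accepting connections inside the pod.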

Related

Kubernetes deployment works but can't connect to postgresql from pgAdmin4

I have a deployment and service yaml file. I use minikube to run Kubernetes in my WSL.
postgres-deployment.yaml:
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  replicas: 1
  serviceName: postgresql-db-service
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
      - name: postgresql-db
        image: postgres:latest
        volumeMounts:
        - name: postgresql-db-disk
          mountPath: /data
        env:
        - name: POSTGRES_PASSWORD
          value: testpassword
        - name: PGDATA
          value: /data/pgdata
  # Volume Claim
  volumeClaimTemplates:
  - metadata:
      name: postgresql-db-disk
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 25Gi
postgres-service.yaml:
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-lb
spec:
  selector:
    app: postgresql-db
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
I run them with:
# kubectl apply -f postgres-deployment.yaml
# kubectl apply -f postgres-service.yaml
The deployment works, I get the Cluster IP of the service with kubectl get all.
I run pgAdmin with the command:
docker run -p 80:80 \
  -e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
  -e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
  -d dpage/pgadmin4
I try to connect to Postgres from pgAdmin, but I am unable to connect.
EDIT:
I changed the connection user to postgres; it still doesn't work.
I tried changing the LoadBalancer to ClusterIP and NodePort; that doesn't work either.
I tried switching my OS to Ubuntu, in case of some weird WSL issue; that doesn't work either.
To access Postgres locally, you have to use a NodePort service.
You need to find the node's IP and the NodePort.
To find the node's internal IP, run:
$ kubectl get nodes -o wide
For the port, run kubectl describe svc postgres-db-lb or kubectl get svc.
In pgAdmin, the host should be <node-ip> and the port <node-port>.
You can also run minikube service postgres-db-lb to find the URL.
EDIT
Or, more simply, minikube service <NAME_OF_SERVICE>.
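Put together, the lookup might look something like this (a sketch assuming the service has already been switched to type NodePort):
$ kubectl get nodes -o wide                # note the INTERNAL-IP of the minikube node
$ kubectl get svc postgres-db-lb           # the PORT(S) column shows the NodePort, e.g. 5432:3xxxx/TCP
$ minikube service postgres-db-lb --url    # or let minikube print a reachable URL directly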

How to access mysql pod in another pod(busybox)?

I have been given the task of connecting a MySQL pod with another working pod (preferably busybox), but I was not able to do it. Is there a way to do this? I have looked in many places, but the explanations were a bit complicated, as I am new to Kubernetes.
MySQL YAML config for Kubernetes
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the busybox container:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command starts a busybox container.
Run kubectl get pods to check the status of both pods.
Inside the busybox container you can run the following command to connect to MySQL:
mysql -h <MySQL service name> -u <Username> -p<password>
Ref docs:
MySQL: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
Busybox: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
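Note that the stock busybox image does not actually ship a mysql client, so one common alternative (a sketch along the lines of the single-instance MySQL example linked above; the mysql-client pod name is arbitrary) is to run a throwaway pod from the mysql image itself:
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
The -h mysql part resolves through the headless service defined above, and --rm removes the pod when you exit the shell.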

How can you connect to a database inside a k8 cluster that's behind a headless service?

Given a database that is part of a statefulset and behind a headless service, how can I use a local client (outside of the cluster) to access the database? Is it possible to create a separate service that targets a specific pod by its stable id?
There are multiple ways you can connect to this database service.
You can use
Port-forward : https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Service as LoadBalancer : https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Service as Nodeport : https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Example MySQL database running on K8s : https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The easiest way is to try port-forwarding:
kubectl port-forward -n <NAMESPACE Name> <POD name> 3306:3306
Using the above command you create a proxy from your local machine to the K8s cluster and can test against localhost:3306.
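For example, with the forward in place you could check connectivity from your machine using the stock mysql client (assuming the root password from the manifest above):
mysql -h 127.0.0.1 -P 3306 -u root -p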
This is not a method for production use cases; it can be used for debugging.
NodePort: exposes the port on the worker node IPs, so if a worker gets replaced during autoscaling, the IP may change over time.
I would recommend creating a new service with the respective labels and type LoadBalancer.
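As a sketch of that last suggestion (names are assumptions): if the database really runs as a StatefulSet, the statefulset.kubernetes.io/pod-name label is added automatically by the StatefulSet controller, so it can be used to target one specific pod such as mysql-0, which also answers the stable-id part of the question:
apiVersion: v1
kind: Service
metadata:
  name: mysql-0-external
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: mysql-0
  ports:
  - port: 3306
    targetPort: 3306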

Spring boot application pod fails to find the mongodb pod on Kubernetes cluster

I have a Spring Boot application backed by MongoDB. Both are deployed on a Kubernetes cluster on Azure. My application throws "Caused by: java.net.UnknownHostException: mongo-dev-0 (pod): Name or service not known" when it tries to connect to MongoDB.
I am able to connect to the mongo-dev-0 pod and run queries on the MongoDB, so there is no issue with Mongo itself, and it looks like Spring Boot is able to connect to the Mongo service and discover the pod behind it.
How do I ensure the pods are discoverable by my Spring Boot Application?
How do I go about debugging this issue?
Any help is appreciated. Thanks in advance.
Here is my config:
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-dev
  labels:
    name: mongo-dev
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo-dev
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo-dev
spec:
  serviceName: "mongo-dev"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo-dev
        environment: dev
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo-dev
        image: mongo:3.4
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        - "--auth"
        - "--bind_ip"
        - 0.0.0.0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-dev-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo-dev,environment=dev"
        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: "mongo-dev"
  volumeClaimTemplates:
  - metadata:
      name: mongo-dev-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "devdisk"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: devdisk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS
  location: abc
  storageAccount: xyz
To be able to reach your mongodb pod via its service from your spring boot application, you have to start the mongodb pod and the corresponding service first, and then start your spring boot application pod (let's name it sb-pod).
You can enforce this order by using an initContainer in your sb-pod to wait for the database service to be available before starting. Something like:
initContainers:
- name: init-mongo-dev
  image: busybox
  command: ['sh', '-c', 'until nslookup mongo-dev; do echo waiting for mongo-dev; sleep 2; done;']
If you connect to your sb-pod using:
kubectl exec -it sb-pod bash
and type the env command, make sure you can see the environment variables
MONGO_DEV_SERVICE_HOST and MONGO_DEV_SERVICE_PORT
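For example (assuming the pod name sb-pod used above):
kubectl exec -it sb-pod -- env | grep MONGO_DEV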
How about mongo-dev-0.mongo-dev.default.svc.cluster.local ?
<pod-id>.<service name>.<namespace>.svc.cluster.local
As in Stable Network ID.
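As a hedged sketch, a Spring Boot connection string using those stable per-pod DNS names might look like this (spring.data.mongodb.uri is Spring Boot's standard property; the credentials and database name are placeholders, and rs0 is the replica set name from the manifest above):
spring.data.mongodb.uri=mongodb://<user>:<password>@mongo-dev-0.mongo-dev.default.svc.cluster.local:27017,mongo-dev-1.mongo-dev.default.svc.cluster.local:27017,mongo-dev-2.mongo-dev.default.svc.cluster.local:27017/<database>?replicaSet=rs0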

Using a pod without using the node ip

I have a postgres pod running locally on a coreOS vm.
I am able to access Postgres using the IP of the minion it is on, but I'm attempting to set it up so that I don't have to know exactly which minion the pod is on and can still use Postgres.
Here is my pod
apiVersion: v1
kind: Pod
metadata:
  name: postgresql
  labels:
    role: postgres-client
spec:
  containers:
  - image: postgres:latest
    name: postgres
    ports:
    - containerPort: 5432
      hostPort: 5432
      name: pg-port
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: nfs.server
      path: /
And here is a service I tried to set up, but it doesn't seem correct:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgres-client
I'm guessing that the selector for your service is not finding any matching backends.
Try changing
app: postgres-client
to
role: postgres-client
in the service definition (or vice versa in the pod definition above).
The label selector has to match both the key and value (i.e. role and postgres-client). See the Labels doc for more details.
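A minimal sketch of the corrected service, assuming you keep the role: postgres-client label from the pod definition:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    role: postgres-client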