MongoDB Community Kubernetes Operator Connection - mongodb

I'm trying to deploy a MongoDB replica set using the MongoDB Community Kubernetes Operator in Minikube. To view the content of the database I would like to connect to the MongoDB replica set through MongoDB Compass.
I followed the instructions on the official GitHub, so:
Install the CRD
Install the necessary roles and role-bindings
Install the Operator
Deploy the Replicaset
The yaml file used for the replica set deployment is the following one:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo-rs
  namespace: mynamespace
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"
  persistent: true
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: user
      db: admin
      passwordSecretRef:
        name: user
      roles:
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: user-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
---
apiVersion: v1
kind: Secret
metadata:
  name: user
type: Opaque
stringData:
  password: password
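The manifest can be applied and the result checked roughly like this (a sketch; the file name is assumed, and the resource name accepted by kubectl get depends on the CRD version installed):
kubectl apply -f mongo-rs.yaml
kubectl get mongodbcommunity mongo-rs -n mynamespace   # status should eventually reach the Running phase
kubectl get pods -n mynamespace                        # mongo-rs-0, mongo-rs-1, mongo-rs-2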
The MongoDB resource deployed and the mongo-rs pods are all running. Also, I'm able to connect to the replica set directly through the mongo shell within the Kubernetes cluster.
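For reference, that in-cluster check looks roughly like this (a sketch; the container layout and shell binary inside the operator-managed pod depend on the operator version, and the credentials are the ones from the Secret above):
kubectl exec -it mongo-rs-0 -n mynamespace -- \
  mongo "mongodb://user:password@mongo-rs-0.mongo-rs-svc.mynamespace.svc.cluster.local:27017/?replicaSet=mongo-rs&authSource=admin"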
However, I'd also like to connect to the MongoDB replica set from outside the Kubernetes cluster, so in addition I've created a LoadBalancer service like the following one:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: LoadBalancer
  selector:
    app: mongo-rs-svc
  ports:
    - port: 27017
      nodePort: 30017
The pods (namely mongo-rs-0, mongo-rs-1, mongo-rs-2) are correctly bound to the service. On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command, which creates a tunnel for the service mongodb-service (for instance: 127.0.0.1:34873). But if I try to connect to the MongoDB replica set through the MongoDB Compass client using the connection string:
mongodb://user:password@127.0.0.1:34873/?authSource=admin&replicaSet=mongo-rs&readPreference=primary
the client cannot connect to MongoDB and returns the error:
getaddrinfo ENOTFOUND
mongo-rs-0.mongo-rs-svc.mynamespace.svc.cluster.local
Any suggestions on how to access the replica set from outside kubernetes?
Thanks in advance!
Edit: I know it's possible to connect from the outside using port forwarding, but I'd be interested in a more production-oriented approach.

Minikube is a development tool, so it may be sufficient for you to connect from your host (desktop) via localhost.
First, you can't simply use type LoadBalancer, because it round-robins between the mongodb instances, while only the primary of the replica set can accept writes.
Normally the mongo client with the right connection string will select the primary itself.
So use a NodePort service and you will get a connection to one mongodb instance.
Then do a kubectl port-forward <resource-type>/<resource-name> [local_port]:<pod_port> to the service.
Before that you may need to change the mongodb-service. As far as I know it's not nodePort, it's targetPort:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: NodePort
  selector:
    app: mongo-rs-svc
  ports:
    - port: 27017
      targetPort: 27017
So something like this:
kubectl port-forward svc/mongodb-service 27017:27017
After that you can reach MongoDB on localhost:27017; the Kubernetes service connects you to the mongo replica set.
mongo localhost:27017
should connect. Adjust the ports to your needs.
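One extra note, assuming a reasonably recent Compass or driver: with a replicaSet=... connection string the client will re-discover the members by their cluster-internal hostnames (the ENOTFOUND error above), so when going through a forwarded port it helps to force a direct connection instead, for example:
mongodb://user:password@localhost:27017/?authSource=admin&directConnection=true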
Hope this idea helps,

Related

Kubernetes how to access application in one namespace from another

I have the following components up and running in a kubernetes cluster
A Golang application in the namespace app1 that writes data to a MongoDB statefulset replica set
A MongoDB replica set (1 replica) running as a statefulset in the namespace ng-mongo
I need the Golang application to access the MongoDB database for read/write operations, so what I did was:
Create a headless service for the mongodb in the ng-mongo namespace as below:
# Source: mongo/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    app: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
      name: mongo
  clusterIP: None
  selector:
    role: mongo
And then I deployed the mongodb statefulset and initialized the replicaset as below:
kubectl exec -it mongo-0 -n ng-mongo -- mongosh
rs.initiate({_id: "rs0",members: [{_id: 0, host: "mongo-0"}]})
// gives output
{ ok: 1 }
Then I created an ExternalName service in the app1 namespace pointing to the above mongo service from step 1, as below:
# Source: app/templates/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: app1
  namespace: app1
spec:
  type: ExternalName
  externalName: mongo.ng-mongo.svc.cluster.local
  ports:
    - port: 27017
And finally, I configured my Golang application as follows:
// Connection URI
const mongo_uri = "mongodb://app1" // app1 is the name of the ExternalName service
<RETRACTED-CODE>
And then I ran the application, and checked the logs. Here is what I found:
2022/11/22 12:49:47 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-0:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp: lookup mongo-0 on 10.96.0.10:53: no such host }, ] }
Update: I haven't set any usernames or passwords for the mongodb
Can someone help me why this is happening?
After some digging, I was able to find the issue.
When specifying the host entry for rs.initiate({}), I should provide the FQDN of the relevant mongodb instance (in my case the mongo-0 pod). Therefore, my initialisation command should look like this:
rs.initiate({_id: "rs0",members: [{_id: 0, host: "mongo-0.mongo.ng-mongo.svc.cluster.local:27017"}]})
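To double-check that the member address is now the FQDN, something like the following should list the full hostname (assuming mongosh is available in the pod, as used above):
kubectl exec -it mongo-0 -n ng-mongo -- mongosh --quiet --eval 'rs.conf().members.map(m => m.host)'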
From my understanding of what you are trying to do,
Your Pod (Golang application) and the app1 Service are already in the same namespace.
However, looking at the log,
2022/11/22 12:49:47 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-0:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp: lookup mongo-0 on 10.96.0.10:53: no such host }, ] }
The log means that the domain named 'mongo-0' could not be found in DNS. (Note that the IP 10.96.0.10 is probably kube-dns.)
Your application tries to connect to the domain mongo-0, but the domain mongo-0 does not exist in DNS (that is, there is no service named mongo-0 in the app1 namespace).
What is the 'mongo-0' that your application is trying to access?
(The log clearly shows an attempt to access the domain mongo-0, while your Golang application's mongo_uri points to mongodb://app1.)
Finding out why your application is trying to connect to the mongo-0 domain will help solve the problem.
Hope this helps you.

How to access a database that is only accessible from a Kubernetes cluster locally?

I have a situation where I have a Kubernetes cluster that has access to a Postgres instance (which is not run in the Kubernetes cluster). The Postgres instance is not accessible from anywhere else.
What I would like to do is connect to it with my database tools locally. What I have found is kubectl port-forward, but I think this would only be a solution if the Postgres instance ran as a pod. What I basically need is a pod that forwards everything sent on port 8432 to the Postgres instance, and then I could use the port forward.
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
What is the right way to do this?
You can create a Service and Endpoints for your PostgreSQL instance:
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
  - addresses:
      - ip: ipAddressOfYourPGInstance
    ports:
      - port: 5432
And then use:
kubectl port-forward service/postgresql 5432:5432
You can run a Postgres client to connect to the Postgres instance, expose that pod using an Ingress, and access its UI over the URL.
As the Postgres client you can use pgAdmin: https://hub.docker.com/r/dpage/pgadmin4/
You can deploy it as your Postgres client and use it from the browser.
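A minimal sketch of such a pgAdmin deployment (the name, credentials, and the Service/Ingress wiring are placeholders to adapt, not part of the original answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin
          image: dpage/pgadmin4
          ports:
            - containerPort: 80   # pgAdmin web UI
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: changeme
In the pgAdmin UI you would then register the postgresql Service created above as a new server host.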

Accessing database with pgAdmin in Kubernetes cluster

In my current non-Kubernetes environment, if I need to access the Postgres database, I just setup an SSH tunnel with:
ssh -L 5432:localhost:5432 user@domain.com
I'm trying to figure out how to do something similar in a test Kubernetes cluster I am setting up in EKS, without creating a big security risk. For example, creating a path in the ingress controller to the database's port is a terrible idea.
The cluster would be setup where Postgres is in a pod, but all of the data is on persistent volume claim so that the data persists when the pod is destroyed.
How would one use pgAdmin to access the database in this kind of setup?
The kubectl command can forward TCP ports into a pod via the kube-apiserver:
kubectl port-forward {postgres-pod-ID} 5432:5432
If you are not using a cluster-admin user, the user will need to be bound to a role that allows it to create pods/portforward in the pod's namespace, for example:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pg-portforward
rules:
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
I have a similar kind of setup. My PostgreSQL database is deployed in a pod in a Kubernetes cluster running on AWS. These are the steps I performed to access this remote database from pgAdmin running on my local machine (following a video tutorial).
1 -> Modify the Service type of db from ClusterIP to NodePort as shown below.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
    - name: dbport
      port: 5432
      nodePort: 31000
  type: NodePort
2 -> Add a new rule to the security group of any node (EC2 instance) of your Kubernetes cluster that allows inbound TCP traffic on the node port (31000).
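For example, with the AWS CLI (the security-group ID and source CIDR are placeholders, not from the original setup):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31000 \
  --cidr 203.0.113.10/32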
3 -> Connect to this remote database using the public IPv4 address of the node (EC2 instance).
HostAddress: public IPv4 address of the node EC2 instance.
Port: 31000
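From the local machine, the same endpoint can be tested with psql before pointing pgAdmin at it (the user and database names are placeholders):
psql -h <node-public-ipv4> -p 31000 -U postgres -d postgres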

Access SQL Server database from Kubernetes Pod

My deployed Spring Boot application is trying to connect to an external SQL Server database from a Kubernetes pod, but every time it fails with the error:
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have tried to exec into the pod and can successfully ping the DB server without any issues.
Below are the solutions I have tried:
Created a Service and Endpoints, provided the DB IP in the configuration file, and tried to bring up the application in the pod
Tried using the internal IP from the Endpoints instead of the DB IP in the configuration, to see whether the internal IP resolves to the DB IP
Both cases gave the same result. Below is the YAML I am using to create the Service and Endpoints.
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
    - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
  - addresses:
      - ip: <<DB IP>>
    ports:
      - port: 1433
Please let me know if I am doing something wrong or missing something in this setup.
Additional information about the K8s setup:
It is a clustered-master topology with an external etcd cluster
OS on the nodes is CentOS
Able to ping the server from all nodes and the pods that are created
For this scenario an ExternalName service is very useful. It redirects traffic to an external address without you having to define an Endpoints object (note that externalName expects a DNS hostname rather than a raw IP):
kind: "Service"
apiVersion: "v1"
metadata:
namespace: "your-namespace"
name: "ftp"
spec:
type: ExternalName
externalName: your-ip
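To verify that the name resolves from inside the cluster, a throwaway test pod can be used (busybox:1.28 is chosen only because its nslookup output is predictable; adjust the names to your setup):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -n your-namespace -- nslookup ftp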
The issue was resolved by updating the deployment YAML with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints to access the DB. Thank you for all the input on the post.

How to connect MongoDB which is running on Kubernetes cluster through UI tools like MongoDB Compass or RoboMongo?

I have multiple instances of MongoDB deployed inside my Kubernetes cluster through Helm packages.
They are exposed as NodePort services.
How do I connect to those Mongo db instances through UI tools like MongoDB Compass and RoboMongo from outside the cluster?
Any help is appreciated.
You can use kubectl port-forward to connect to MongoDB from outside the cluster.
Run kubectl port-forward << name of a mongodb pod >> --namespace << mongodb namespace >> 27018:27018.
Now point your UI tool to localhost:27018 and kubectl will forward all connections to the pod inside the cluster.
Starting with Kubernetes 1.10+ you can also use this syntax to connect to a service (you don't have to find a pod name first):
kubectl port-forward svc/<< mongodb service name >> 27018:27018 --namespace << mongodb namespace>>
If it is not your production database you can expose it through a NodePort service:
# find mongo pod name
kubectl get pods
kubectl expose pod <<pod name>> --type=NodePort
# find new mongo service
kubectl get services
The last command will output something like:
mongodb-0 10.0.0.45 <nodes> 27017:32151/TCP 30s
Now you can access your mongo instance with mongo <<node-ip>>:32151
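The node IP can be looked up with, for example:
kubectl get nodes -o wide
# or, on minikube:
minikube ip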
Fetch the service associated with the mongo db:
kubectl get services -n <namespace>
Port forward using:
kubectl port-forward service/<service_name> -n <namespace> 27018:27017
Open Robomongo on localhost:27018
If that does not work, expose your mongo workload as a LoadBalancer service and use the IP provided by the service. Copy the LB IP and use it in Robo 3T. If it requires authentication, check my YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          volumeMounts:
            - name: data
              mountPath: "/data/db"
              subPath: "mongodb_data"
          ports:
            - containerPort: 27017
              protocol: TCP
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: xxxx
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: xxxx
      imagePullSecrets:
        - name: xxxx
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: xxx
Set the same credentials in the authentication tab in Robo 3T.
NOTE: I haven't included the Service section in the YAML since I exposed the Deployment as an LB directly in the GCP UI.
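For completeness, a YAML sketch of the equivalent LoadBalancer Service (the original setup created it through the GCP console instead, so treat this as an illustration):
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  type: LoadBalancer
  selector:
    app: mongodb   # matches the Deployment labels above
  ports:
    - port: 27017
      targetPort: 27017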