Connecting from external sources to MongoDB replica set in Kubernetes fails with getaddrinfo ENOTFOUND error but standalone works - mongodb

I have a MongoDB replica set running in Kubernetes (AWS EKS), created using Helm charts from Bitnami. The services are configured to be external facing and set to NodePort.
mongo-mongodb-0-external NodePort 10.100.83.252 27017:30030/TCP
mongo-mongodb-1-external NodePort 10.100.15.184 27017:30031/TCP
mongo-mongodb-2-external NodePort 10.100.90.128 27017:30032/TCP
mongo-mongodb-arbiter-headless ClusterIP None 27017/TCP
mongo-mongodb-headless ClusterIP None 27017/TCP
On my laptop, I can connect with the Mongo CLI to the replica set, but the connection fails from MongoDB Compass and Studio 3T.
The following works from the Mongo CLI on my laptop...
mongo 'mongodb://root:mypassword@k8s_node_ip:30030,k8s_node_ip:30031,k8s_node_ip:30032/mydb?authSource=admin'
...and this works...
mongo admin --host "k8s_node_ip:30030,k8s_node_ip:30031,k8s_node_ip:30032" --authenticationDatabase admin -u root -p mypassword
...but this fails in MongoDB Compass and Studio 3T...
mongodb://root:mypassword@k8s_node_ip:30030,k8s_node_ip:30031,k8s_node_ip:30032/mydb?authSource=admin
The error message is:
getaddrinfo ENOTFOUND mongo-mongodb-0.mongo-mongodb-headless.mynamespace.svc.cluster.local
Bizarrely, the following standalone connection works in Studio 3T:
mongodb://root:mypassword@k8s_node_ip:30030/?serverSelectionTimeoutMS=5000&connectTimeoutMS=10000&authSource=admin&authMechanism=SCRAM-SHA-256

When connecting to a replica set, the host:port pairs in the connection string are a seedlist.
The driver/client will attempt to connect to each host in the seedlist in turn until it gets a connection.
It runs the isMaster command to determine which node is primary, and to get a list of all replica set members.
Then it drops the original seedlist connection and attempts to connect to each replica set member using the host and port information retrieved.
The host information returned by isMaster usually matches the entries in rs.conf(), which are the hostnames used to initiate the replica set.
In your Kubernetes cluster, the nodes have internal hostnames that were used to initiate the replica set, but your external clients can't resolve them.
To get this to work, you will need the mongod nodes' isMaster command to return a different set of hostnames depending on where the client request is coming from. This is similar to split-horizon DNS.
Look over the Deploy a Replica Set documentation for mongodb/kubernetes, and the replicaSetHorizons setting.
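For illustration, split horizons can also be set directly in the replica set configuration (MongoDB 4.2 or newer); note that a client is only routed through a horizon when it connects over TLS, because the horizon is chosen from the TLS SNI name. A minimal mongo shell sketch, where the external hostnames are hypothetical stand-ins for names your clients can actually resolve, paired with the NodePorts from the question:
// Run against the PRIMARY. Requires MongoDB 4.2+ and TLS on the members.
cfg = rs.conf()
cfg.members[0].horizons = { external: "k8s-node-1.example.com:30030" }  // hypothetical external names
cfg.members[1].horizons = { external: "k8s-node-2.example.com:30031" }
cfg.members[2].horizons = { external: "k8s-node-3.example.com:30032" }
rs.reconfig(cfg)
The replicaSetHorizons setting in the Kubernetes operator is intended to manage this per-member horizon configuration for you.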

Make sure that you have added the replica set nodes to the /etc/hosts file on the host machine.
For example:
127.0.0.1 mongoset1 mongoset2 mongoset3
Note - 127.0.0.1 is your host machine, and mongoset1, mongoset2 and mongoset3 are the nodes (members) of the replica set.
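If you are unsure which names to add, a quick mongo shell check against any member you can already reach (a sketch; these commands only read the current configuration) lists the hostnames the client will be told to use:
// Hostnames advertised to drivers during discovery
db.isMaster().hosts
// The same hostnames as stored in the replica set configuration
rs.conf().members.map(function (m) { return m.host })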

Related

Unable to connect to mongodb replicaset Google Compute Engine

I have created a 3-node replica set for MongoDB on Google Compute Engine. For testing purposes I added 0.0.0.0/0 as the firewall rule, and I'm able to connect to each individual node from anywhere; all the instances work without any issue. But the problem is when I try to connect to the replica set using the following command:
mongo "mongodb://username:password@public-ip-1:27017,public-ip-2:27017,public-ip-3:27017/production?replicaSet=rs0"
When I run this from another instance in the same project, it works without any issue.
When I try from an instance in a different project or from my local machine, it throws the following error:
2020-08-22T14:36:40.579+0530 I NETWORK [thread1] getaddrinfo("mongodb-1-servers-vm-0") failed: nodename nor servname provided, or not known
2020-08-22T14:36:40.582+0530 I NETWORK [thread1] getaddrinfo("mongodb-1-servers-vm-1") failed: nodename nor servname provided, or not known
2020-08-22T14:36:40.582+0530 W NETWORK [thread1] Unable to reach primary for set rs0
From these instances I'm able to connect to the individual nodes separately:
mongo "mongodb://username:password@public-ip-1:27017/production"
mongo "mongodb://username:password@public-ip-2:27017/production"
mongo "mongodb://username:password@public-ip-3:27017/production"
What could be the problem?
The second question: the firewall has an option to add an App Engine service account. If I disable the 0.0.0.0/0 public access and add that rule instead, can I connect from my App Engine app to these instances?
When connecting to a replica set, the client uses a server discovery process. The basic steps are:
connect to the first host in the connection string
if that fails try the second
once a connection is established, run commands such as isMaster to:
discover the names of all of the replica set members
discover which node is currently primary
close the initial connection
connect to every member of the replica set using the hostnames and ports discovered in the previous step
What this means is that the hostnames returned from rs.conf() or rs.status() must be resolvable by every client that you want to be able to connect directly to the replica set.
From the error message, it looks like the nodes were added to the replica set using short names. If you add the short name for each node to the hosts file on the machine you are connecting from, it should work.
Alternatively, rebuild the replica set using publicly resolvable hostnames in the rs.initiate and rs.add commands.
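As a sketch of that rebuild, using the question's own public-ip placeholders (only do this when initiating a fresh set; on an existing set, change the host values with rs.reconfig() instead):
// Host values must be names or IPs that external clients can resolve and reach.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "public-ip-1:27017" },
    { _id: 1, host: "public-ip-2:27017" },
    { _id: 2, host: "public-ip-3:27017" }
  ]
})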

Cannot access local MongoDB from sam local

I have a Windows 10 machine where MongoDB is installed. I can connect to it from the command line. I run a NodeJS app with sam local. When I use the production environment, the app can access the Mongo Atlas cloud instance, but when I switch to the dev environment with a localhost MongoDB, it fails to connect.
The sam command starts Docker, so it is clear why it cannot connect to Mongo running on the Windows localhost. I found a relevant question: From inside of a Docker container, how do I connect to the localhost of the machine?. The problem is that I still cannot connect to my local MongoDB, even if I try:
"MONGODB_URI": "mongodb://docker.for.win.localhost:27018/bud?retryWrites=true&w=majority"
or
"MONGODB_URI": "mongodb://host.docker.internal:27018/bud?retryWrites=true&w=majority"
Error:
Request failed { MongoNetworkError: failed to connect to server [docker.for.win.localhost:27018] on first connect [MongoNetworkError: connect ECONNREFUSED 192.168.65.2:27018]
Has anybody faced this issue as well and overcome it? Mongo is installed directly on Windows, not in Docker.
If MongoDB is installed and running directly on Windows, it should be accessible via localhost:27017. The default port for mongod is 27017, as described in the MongoDB documentation.
Try using:
"MONGODB_URI": "mongodb://localhost:27017/bud?retryWrites=true&w=majority"
This applies if you are using a NETWORKS_DRIVER other than bridge (the default) for your NodeJS Docker container. Refer to the Docker network drivers documentation.
Other cases:
The default port for mongod is 27018 when running with --shardsvr command-line option or the shardsvr value for the clusterRole setting in a configuration file.
The default port for mongod is 27019 when running with --configsvr command-line option or the configsvr value for the clusterRole setting in a configuration file.
Remember that localhost (or any name) is just for your convenience; the TCP stack works on IP addresses. If you configure DNS (e.g. via the hosts file) to resolve a name to 127.0.0.1 inside the container, that does not mean your host: 127.0.0.1 always points to the container itself.
You could make the mongo service listen on your main IP and use that for the Docker app, but you can also leverage Hyper-V virtual network cards: set up mongo to listen not only on the host's loopback interface but also on the virtual one, and give the Docker app the IP of that interface. It stays on your virtual LAN, so it is not exposed publicly. However, Windows Firewall might block it, so make sure you set that network up as a private network (it will be marked as unidentified and is public by default, which usually has traffic blocked).
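A sketch of that setup (the 10.0.75.1 address is a hypothetical placeholder for the address of your Hyper-V/DockerNAT virtual interface; add your actual port options as needed):
mongod --bind_ip 127.0.0.1,10.0.75.1
The same effect can be achieved with the net.bindIp setting in the mongod configuration file.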

MongoDB replica set initiate() failing on Docker

I am trying to set up a MongoDB replica set on Docker following this guide. However, when I get to the rs.initiate() command, I get an error:
"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-local-002:27017 failed with Connection refused, mongo-local-003:27017 failed with Connection refused"
Edit: using the latest version of the Docker container (3.6).
Edit 2: the problem is actually with mongo 3.6 (3.4 works). I have tried binding each replica's IP to an IP in the Docker network, but that way I am not able to open the mongo console on the main replica (it says connection refused), even when passing the IP instead of the replica name.
Does anyone know what I am missing here?
Starting in MongoDB 3.6, the MongoDB binaries mongod and mongos bind to localhost by default. You need to start every mongod with --bind_ip. Try binding not just a specific IP but all IPv4 addresses: to do that, specify a bind IP of 0.0.0.0. That way you can verify whether binding to a specific IP is the problem.
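A minimal sketch of the relevant startup flags (rs0 is a placeholder for whatever replica set name and other options your containers already use):
mongod --replSet rs0 --bind_ip 0.0.0.0
On 3.6 and newer, --bind_ip_all is an equivalent shorthand for binding every interface.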

PyMongo MongoClient replica set won't connect

A bit of background: I used Bitnami to spin up a 3-node Mongo cluster on Azure (1 arbiter), with each mongod hosted on a separate VM. I've confirmed that the replica set exists and that the nodes can connect to one another. I've confirmed that when I take down my primary node, the secondary steps up, and when the primary returns, it assumes the primary position again.
My problem is that I can't connect to my MongoDB replica set using MongoClient when specifying the replica set. I get this error:
pymongo.errors.ServerSelectionTimeoutError: ArbiterIP:27017: [WinError
10061] No connection could be made because the target machine actively
refused it,PrimaryIP:27017: [WinError 10061] No connection could be
made because the target machine actively refused it,SecondaryIP:27017:
timed out
Using MongoClient, if I do
connection = MongoClient('MyIP1:27017', w=2)
it connects fine. When I do
connection = MongoClient('MyIP1:27017', w=2, replicaset="repsetname")
that's when I get the error.
Would it be related to the fact that the arbiter node has no user info for authentication?
Taking a stab at it: if you connect with the shell and do db.isMaster(), are the hostnames you see in the config the same as the hostnames like "MyIP" that you're passing to PyMongo?
It sounds like when you pass "replicaset=", PyMongo takes the hostnames from the isMaster response and connects to those instead of MyIP, but the way your replica set is configured, that set of hostnames isn't available.
For more info on why PyMongo acts this way:
https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#clients-use-the-hostnames-listed-in-the-replica-set-config-not-the-seed-list
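A quick way to check, sketched in the mongo shell against the node that already accepts connections (MyIP1 in the question):
// The hostnames PyMongo will switch to once replicaset= triggers discovery
db.isMaster().hosts
db.isMaster().primary
If those are internal names, either make them resolvable and reachable from the client machine or reconfigure the replica set with names that are.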

Google Cloud Mongo DB: External IP not connecting

I have created a ready-to-go MongoDB server on Google Cloud using the default parameters. Everything is working fine between the instances (there is communication and I can add DBs and collections). However, I can't connect to MongoDB from any external machine.
I created the firewall rules in GCP allowing all connections (0.0.0.0/0) on port 27017.
I am running the command:
giuseppe#ubuntu:~$ mongo --host rs0/104.154.xx.xxx,173.255.xxx.xxx,104.197.xxx.xxx
giuseppe#ubuntu:~$ mongo --host rs0/104.154.xxx.xxx:27017,173.255.xxx.xxx:27017,104.197.xxx.xxx:27017
I'm getting the same error on both of them. I don't know how to resolve this issue.
connecting to: rs0/104.154.41.xxx,173.255.xxx.xxx,104.197.22.xxx:27017/test
2015-03-18T19:47:33.770-0500 starting new replica set monitor for replica set rs0 with seeds 104.154.41.xxx:27017,104.197.22.1xx:27017,xx.255.114.xxx:27017
2015-03-18T19:47:33.770-0500 [ReplicaSetMonitorWatcher] starting
2015-03-18T19:47:34.119-0500 changing hosts to rs0/mongo-db-jff3:27017,mongo-db-vnc4:27017 from rs0/104.154.41.246:27017,1xx.197.22.xxx:27017,173.255.1xx.xx:27017
2015-03-18T19:47:34.493-0500 getaddrinfo("mongo-db-vnc4") failed: Name or service not known
2015-03-18T19:47:34.511-0500 getaddrinfo("mongo-db-jff3") failed: Name or service not known
2015-03-18T19:47:34.512-0500 Error: connect failed to replica set rs0/104.154.xxx.xxx:27017,173.2xx.xxx.68:27017,104.197.22.xxx:27017 at src/mongo/shell/mongo.js:148
EDIT:
Here are my firewall settings.
Did you configure the firewall rule in the Google Cloud console, provide a tag in your firewall rule, and tag your instance with the same tag as the firewall rule?
I explained how to open a port to the outside world in detail over here. Replace with your own port number.
I believe the issue here is that the ReplicaSetMonitorWatcher is changing hosts to rs0/mongo-db-jff3:27017, where mongo-db-jff3 is not reachable from your network. You need to configure the hosts in the replica set to something that you can reach (a static IP or a resolvable hostname).
https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/
Quick example: mongo into your PRIMARY (or a SECONDARY if you want to do it with no downtime):
cfg = rs.conf()
cfg.members[0].host = "mongodb0.example.net:27017"
cfg.members[1].host = "mongodb1.example.net:27017"
rs.reconfig(cfg)