I have a sharded, replicated mongodb cluster. I'm in the process of re-IPing the shards to be on a different subnet. Just started by re-IPing one secondary mongod. It now has the new IP and I flushed the DNS. However in Cloud Manager, that server shows as DOWN now.
What can I do to make Cloud Manager see that server again? I know Cloud Manager communicates with the shards via the automation agent installed on them, but I don't see any configuration in it about the IP address, etc.
Found the answer:
There was a hardcoded hosts file entry forcing the mongod server to the old IP address. Once the hosts file entries were removed, everything was OK. The hosts file (on Windows) is found at C:\Windows\System32\drivers\etc\hosts; the stale entries were on the 2 other replica set members (I have a sharded environment with 1 primary and 2 secondaries).
The hosts file contains one line per entry: an IP address followed by the hostname.
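For illustration, a stale entry like the following (hypothetical IP and hostname) would keep resolving the member to its old address until the line is removed or updated:
10.10.1.15   shard1-secondary2.example.local   # hypothetical old-subnet entry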
Related
Not able to connect to a mongo standalone node; getting the error below.
ERROR [cluster-ClusterId{value='asdfs', description='null'}-x.x.x.x:27017] org.mongodb.driver.cluster - Expecting replica set member, but found a STANDALONE. Removing x.x.x.x:27017 from client view of cluster.
Is it okay to give multiple IPs in the config file when there is only one mongo node?
Not for standalone connections, no.
Specify the address of your standalone only.
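For example, assuming the standalone is listening on x.x.x.x, the connection string (or driver seed list) should name just that one host and omit the replicaSet option entirely:
mongodb://x.x.x.x:27017/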
I have 4 servers each hosting a website and a mongo replica set.
MongoDB version: v3.4.13
Driver: PHP
Server 1 is PRIMARY.
Located on the west coast
Server 2 is SECONDARY with tag: { location: 'east' }
Located on the east coast
I'm connecting to the db with connection string: mongodb://localhost:27017/?replicaSet=rs&readPreference=nearest&readPreferenceTags=location:east
Server 3 and 4 are SECONDARY with no tags.
I want server 2 to read from its local database, but instead it is reading from the primary (or another secondary, I can't tell, but it's definitely not reading from its local database).
I suspect it's not reading from its own SECONDARY db because there is about a 3 second lag for any query I run.
How can I tell server 2 to read from its own local SECONDARY db?
Include readPreferenceTags in your connection string. Refer to the online doc.
By the way, you shouldn't have an even number of nodes in a replica set unless one of them has no voting rights.
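A minimal mongo shell sketch of both points, assuming server 2 is members[1] and one of the untagged secondaries is members[3] in rs.conf() (these indices are assumptions, check your own config):
cfg = rs.conf()
cfg.members[1].tags = { location: "east" }  // tag the east-coast secondary so readPreferenceTags=location:east can match it
cfg.members[3].priority = 0                 // a non-voting member must also have priority 0
cfg.members[3].votes = 0                    // drop one vote so the 4-node set has an odd number of voters
rs.reconfig(cfg)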
SOLUTION: All servers within the cluster need to be able to connect to all other servers within the cluster.
Turns out, some of my secondaries weren't able to connect to the other secondaries because of some rules placed in my iptables.
Once all servers within the cluster were able to connect to each other the speed increased dramatically.
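One quick way to spot this kind of connectivity problem (a sketch, run from the mongo shell on each member) is to look at how that member sees the others in rs.status(); any member it cannot reach will report health: 0:
rs.status().members.forEach(function (m) { print(m.name, m.stateStr, "health:", m.health) })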
I have created a MongoDB replica set using 5 EC2 instances on AWS. I added the nodes using rs.add("[IP_Address]") command.
I want to simulate a network partition in the replica set. In order to do that, I have specified 2 kinds of security groups: 'SG1' has port 27017 (the MongoDB port) opened; 'SG2' doesn't expose 27017.
I want to isolate 2 nodes from the replica set. When I apply SG2 on these 2 nodes (EC2 instances), ideally they should stop receiving reads and writes from the primary, as I am blocking port 27017 using security group SG2. But in my case, they are still receiving writes: data written on the primary still appears on the partitioned nodes. Can someone help? TYA.
Most firewalls, including AWS security groups, evaluate their rules when a connection is being opened. Changing the settings affects all new connections, but existing open connections are not re-evaluated when the new rules are applied.
MongoDB maintains long-lived connections between the hosts in a replica set, so replication traffic only gets blocked once those existing connections are dropped.
On Linux you can restart the networking which will reset the connections. You can do this after applying the new rules by running:
/etc/init.d/networking stop && /etc/init.d/networking start
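To verify that the partition has actually taken effect, one option (just a sketch) is to check the newest oplog entry on a partitioned node from the mongo shell; once replication is cut off, its timestamp stops advancing even as new writes land on the primary:
db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1)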
Using recent client libraries (pymongo 3.4, mongodb (nodejs) 2.2.27), I am having trouble connecting to my mongodb servers with replication.
The replicaset configuration contains either the internal ips of the servers or the hostnames. I'm getting the following error:
pymongo.errors.ServerSelectionTimeoutError: mongodbdriver20151129-arbiter-1:27017: [Errno 8] nodename nor servname provided, or not known,mongodbdriver20151129-instance-1:27017: [Errno 8] nodename nor servname provided, or not known,mongodbdriver20151129-instance-2:27017: [Errno 8] nodename nor servname provided, or not known
or
pymongo.errors.ServerSelectionTimeoutError: 10.0.0.5:27017: timed out,10.0.0.6:27017: timed out,10.0.0.4:27017: timed out
I am currently working around it by changing the replica set config to contain the external IPs of the servers, but I guess that would slow down the inter-server communication. How can I connect to my servers from an external location with the original rsconf?
[update] Note: I am trying to connect to the external ip of the servers and this worked fine when using pymongo 2.8 or mongodb (js) 2.1.4
[update] Follow this chat for more details/examples
Later versions of all officially supported MongoDB drivers (including the node driver) follow the Server Discovery and Monitoring spec (SDAM), which mandates that all drivers monitor all nodes in a replica set (see Monitoring).
The reason for this monitoring is to be able to discover the status of the whole replica set at all times, and to reconnect to a new primary should the current primary go offline for any reason. See What's the point of periodic monitoring.
To be able to monitor all nodes in a replica set, the driver must have access to each replica set member. Since your replica set is defined using internal IPs that are inaccessible to the driver, the driver cannot connect to them. This is the reason for the error you're seeing.
There are a couple of ways to solve this issue:
Use IP addresses or hostnames for the replica set configuration that are accessible by the driver (recommended).
Connect to one of the nodes without specifying a replica set, essentially treating the node as a standalone (not recommended).
If the older driver can connect without complaint, then either the driver is very outdated or it doesn't follow the SDAM spec properly, and it should not be used, since its behaviour cannot be guaranteed. MongoDB publishes the SDAM spec and mandates that all drivers follow it for a good reason.
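For the first (recommended) option, a minimal mongo shell sketch using hypothetical hostnames; each name must resolve to the correct member from both the other replica set members and the machine running the driver (e.g. via DNS or hosts entries):
cfg = rs.conf()
cfg.members[0].host = "mongo-0.example.com:27017"   // hypothetical, externally resolvable names
cfg.members[1].host = "mongo-1.example.com:27017"
cfg.members[2].host = "mongo-2.example.com:27017"
rs.reconfig(cfg)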
When adding a deployment to MongoDB Ops Manager, it is not correctly picking up the arbiter. The replica set consists of a primary, a secondary and an arbiter. I have installed the automation agent on all 3 members, and the monitoring and backup agents on the primary and secondary only.
Within the Deployment page, I click on the first servers button and everything is correct. Agents on all 3 servers are correct and present (green circle). Additionally, the server names are all shown as the correct hostname (fqdn). Versions of agents are consistent.
After adding the deployment, the primary and secondary nodes are picked up correctly, but the arbiter is not. Rather, it picks up the arbiter host, but by IP address. As such, it shows no agents at all.
From the primary and secondary members I can ping the arbiter and also connect to the arbiter using mongo --host --port.
I can't quite figure out what is wrong here and why I see all of the correct hosts in the servers section, but the deployment fails to correctly pick up the arbiter.
The problem was the replica set configuration: the member hostnames in rs.conf() did not match the case of what hostname -f returns.
To fix it, I updated the replica set config. Assume rs.conf() shows Mongo-Arbiter:27017 for members[2] while hostname -f returns the lowercase name. For example:
hostname -f:
mongo-arbiter
cfg = rs.conf()
cfg.members[2].host="mongo-arbiter:27017"
rs.reconfig(cfg)
After ensuring all members in rs.conf matched their respective hostname, I could add the deployment to Ops Manager.
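To double-check afterwards, a quick way (a sketch) to list the configured member hostnames from the mongo shell and compare them, case-sensitively, against each host's hostname -f output:
rs.conf().members.forEach(function (m) { print(m._id, m.host) })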