Expecting replica set member, but found a STANDALONE. Removing x.x.x.x:27017 from client view of cluster - mongodb

I am not able to connect to a standalone mongo node; I am getting the error below.
ERROR [cluster-ClusterId{value='asdfs', description='null'}-x.x.x.x:27017] org.mongodb.driver.cluster - Expecting replica set member, but found a STANDALONE. Removing x.x.x.x:27017 from client view of cluster.
Is it okay to give multiple IPs in the config file when there is only one mongo node?

Is it okay to give multiple IPs in the config file when there is only one mongo node?
Not for standalone connections, no.
Specify the address of your standalone only.
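As a minimal sketch of the answer above, a standalone connection should name one host, carry no replicaSet option, and may set directConnection=true so the driver does not attempt replica-set discovery. This assumes pymongo is installed; the host and port are placeholders.

```python
def standalone_uri(host, port=27017):
    # No replicaSet option, and directConnection=true so the driver
    # treats this node as a standalone instead of a set member.
    return f"mongodb://{host}:{port}/?directConnection=true"

def connect(host, port=27017):
    # Imported locally so the sketch can be read/run without pymongo installed.
    from pymongo import MongoClient
    return MongoClient(standalone_uri(host, port), serverSelectionTimeoutMS=5000)

print(standalone_uri("10.0.0.5"))  # mongodb://10.0.0.5:27017/?directConnection=true
```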

Related

MongoDB clients stopped connecting to secondary nodes of the replicaset

Our application stopped connecting to the MongoDB secondary replica set members. We have the read preference set to secondary.
mongodb://db0.example.com,db1.example.com,db2.example.com/?replicaSet=myRepl&readPreference=secondary&maxStalenessSeconds=120
Connections always go to the primary, overloading the primary node. This issue started after patching and a restart of the servers.
I tried mongo shell connectivity using the above string, and the command was abruptly terminated. I can still see the process for that connection on the server in ps -ef|grep mongo.
Has anyone faced this issue? Any troubleshooting tips are appreciated. The logs aren't showing anything related to the terminated/stopped connection process.
We were able to fix the issue. It was an issue on the spring boot side. When the right bean (we have two beans - one for primary and one for secondary connections) was injected, the connection was established to the secondary node for heavy reading and reporting purposes.
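The two-connection setup described in this fix can be sketched as two separate connection strings, one defaulting to the primary and one pinned to secondaries. The hostnames and replica set name below are the placeholders from the question, not real values.

```python
HOSTS = "db0.example.com,db1.example.com,db2.example.com"

def uri(read_preference):
    # Same seed list and staleness bound; only the read preference differs.
    return ("mongodb://" + HOSTS + "/?replicaSet=myRepl"
            "&readPreference=" + read_preference + "&maxStalenessSeconds=120")

primary_uri = uri("primary")      # writes and consistent reads
reporting_uri = uri("secondary")  # heavy reading / reporting workload
```

Each URI would back its own client (or Spring bean), so injecting the reporting client routes those reads to a secondary.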

How to use pymongo to see if my mongod node is primary?

I know we can connect to a replica set using pymongo, and it can automatically figure out which node is primary under normal circumstances. However, our replica set is running inside containers, so the hostnames registered in the replica set configuration are not reachable from outside the containers. I can only access each instance on a per-node basis by supplying the IP of the host where the container resides.
My question arises from the need to figure out whether the node I connect to is indeed primary. I know that under the mongo shell I could do either rs.isMaster() or db.isMaster().ismaster, but neither seems to work with my client connection via pymongo.
Hopefully my question is clear enough; if not, please leave a comment and I can try to elaborate. Any help is greatly appreciated. Thanks in advance. In the meantime, I will keep trying other things.
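One way to do this from pymongo is to run the hello command (isMaster on older servers) against the specific node and inspect the reply; this is a sketch, with the host/port as placeholders.

```python
def primary_from_hello(reply):
    """True if a hello/isMaster reply says the answering node is primary."""
    # Newer servers report isWritablePrimary; older ones report ismaster.
    return bool(reply.get("isWritablePrimary", reply.get("ismaster", False)))

def is_primary(host, port=27017):
    # Imported locally so the helper above stays usable without pymongo.
    from pymongo import MongoClient
    # directConnection=True talks to exactly this node, which matches the
    # per-node access pattern when container hostnames are unresolvable.
    client = MongoClient(host, port, directConnection=True,
                         serverSelectionTimeoutMS=5000)
    return primary_from_hello(client.admin.command("hello"))
```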

MongoDB error not master and slaveOk=false

I am using MongoDB with Loopback in my application with a loopback connector to the MongoDB. My application was working fine but now it throws an error
not master and slaveOk=false.
Try running rs.slaveOk() in a MongoDB shell.
You are attempting to connect to a secondary replica, while previously your app (connection) was likely set to connect to the primary, hence the error. If you use rs.secondaryOk() (slaveOk is deprecated now) you will possibly solve the connection problem, but it might not be what you want.
To make sure you are doing the right thing, consider whether you actually want to connect to the secondary replica instead of the primary. Usually, it's not what you want.
If you have permissions to amend the replica configuration
I suggest connecting with MongoDB Compass and executing rs.status() first to see the existing state and configuration of the cluster. Then, verify which replica is primary.
If necessary, adjust priorities in the replicaset configuration to assign primary status to the right replica. The highest priority number sets the replica as primary. This article shows how to do it right.
If you aren't able to change the replica configuration
Try a few things:
make sure your hostname points to the primary replica
if it is a local environment issue - make sure you added your local replica hostnames to the /etc/hosts pointing to 127.0.0.1
experiment with directConnection=true
experiment with multiple replica hosts and ?replicaSet=<name> - read this article (switch tabs to replica)
Most likely, your database configuration has changed and your connection string no longer reflects it correctly. Usually, slight adjustments to the connection string are needed, or simply checking which instance you want to connect to.
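The last two suggestions above can be sketched as connection URIs; the hostnames and set name below are placeholders, not values from the question.

```python
# Direct connection: talk to exactly one node, no replica-set discovery.
direct = "mongodb://localhost:27017/?directConnection=true"

# Seed list plus set name: the driver discovers all members of the set.
seeded = "mongodb://host1:27017,host2:27017/?replicaSet=myReplSetName"

print(direct)
print(seeded)
```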

IP address change caused shard to be DOWN in mongo MMS

I have a sharded, replicated mongodb cluster. I'm in the process of re-IPing the shards to be on a different subnet. Just started by re-IPing one secondary mongod. It now has the new IP and I flushed the DNS. However in Cloud Manager, that server shows as DOWN now.
What can I do to make MongoCloud see that server again? I know MongoCloud communicates with the shards via the mongo automation service that is installed on them, but I don't see any configuration in there about the IP address, etc.
Found the answer:
There was a hardcoded hosts file always forcing the mongod server to the old IP address. Once the hosts file entries were removed, everything was OK. The hosts file (in Windows) is found at C:\Windows\system32\drivers\etc\hosts on the 2 other replica set members (I have a sharded environment with 1 primary and 2 secondaries).
The hosts file contains one entry per line: an IP address followed by a hostname.
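For reference, hosts file entries look like this (the IPs and hostnames below are placeholders):

```
# Format: one entry per line, IP address followed by hostname(s)
10.0.1.15    mongo-shard1.example.net
10.0.1.16    mongo-shard2.example.net
```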

How to add new server in replica set in production

I am new to mongodb replica set.
According to the Replica Set reference, this should be the connection string in my application to connect to mongodb:
mongodb://db1.example.net,db2.example.net,db3.example.net:2500/?replicaSet=test
Suppose this is a production replica set (i.e. I cannot change application code or stop all the mongo servers), and I want to add another mongodb instance db4.example.net to the test replica set. How will I do that?
How my application will know about the new db4.example.net
If you are looking for a real-world scenario: when an existing server goes down due to hardware failure etc., it is natural to add another db server to the replica set to preserve redundancy. But how do you do that?
The list of replica set hosts in your connection string is a "seed list", and does not have to include all of the members of your replica set.
The MongoDB client driver used by your application will iterate through the seed list until it can successfully connect to a host, and use that host to request the current replica set configuration, which will list all current members of the replica set. Per the documentation, it is recommended to include at least two hosts in the connection string so that your driver can still connect in the event the first host happens to be down.
Any changes in replica set configuration (i.e. adding/removing members or election of a new primary) are automatically discovered by your client driver so you should not have to make any changes in the application configuration to add a new member to your replica set.
A change in replica set configuration may trigger an election for a new primary, so your application code should expect to handle transient errors for a few seconds during reconfiguration.
Some helpful mongo shell commands:
rs.conf() - display the current replication configuration
db.isMaster().primary - display the current primary
You should notice a version number in the configuration document returned by rs.conf(). This version is incremented on every configuration change so drivers and replica set nodes can check if they have a stale version of the config.
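The shell helpers above have driver-level equivalents; the sketch below summarises a replica set from the replSetGetConfig and hello command replies. The only assumption here is the documented shape of those replies.

```python
def replset_overview(conf_reply, hello_reply):
    """Summarise members, config version, and primary from command replies."""
    conf = conf_reply["config"]
    return {
        "version": conf["version"],                  # bumped on each reconfig
        "members": [m["host"] for m in conf["members"]],
        "primary": hello_reply.get("primary"),       # current primary, if known
    }

# With pymongo, this would be fed from:
#   replset_overview(client.admin.command("replSetGetConfig"),
#                    client.admin.command("hello"))
```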
How my application will know about the new db4.example.net
Just rs.add("db4.example.net") and your application should discover this host automatically.
In your scenario, if you are replacing an entirely dead host you would likely also want to rs.remove() the original host (after adding the replacement) to maintain the voting majority for your replica set.
Alternatively, rather than adding a host with a new name you could replace the dead host with a new server using the same hostname as previously configured. For example, if db3.example.net died, you could replace it with a new db3.example.net and follow the steps to Resync a replica set member.
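At the command level, rs.add() and rs.remove() boil down to: fetch the config, edit the members array, bump the version, and reconfigure. A sketch of the add step, assuming a pymongo client connected to the primary with admin privileges; the pure helper keeps the config edit checkable without a server.

```python
import copy

def with_new_member(conf, host):
    """Return a copy of the replset config with host appended and the
    version bumped, ready to pass to replSetReconfig."""
    new = copy.deepcopy(conf)
    new["members"].append(
        {"_id": max(m["_id"] for m in new["members"]) + 1, "host": host})
    new["version"] += 1  # reconfig requires a higher version number
    return new

def add_member(client, host):
    conf = client.admin.command("replSetGetConfig")["config"]
    client.admin.command({"replSetReconfig": with_new_member(conf, host)})
```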
A way to provide abstraction to your database is to set up a sharded cluster. In that case, the access point between your application and the database are the mongodb routers. What happens behind them is outside of the visibility of the application. You can add shards, remove shards, turn shards into replica-sets and change those replica-sets all you want. The application keeps talking with the routers, and the routers know which servers they need to forward them. You can change the cluster configuration at runtime by connecting to the routers with the mongo shell.
When you have questions about how to set up and administrate MongoDB clusters, please ask on http://dba.stackexchange.com.
But note that in the scenario you described, that wouldn't even be necessary. When one of your database servers has a hardware failure and your system administrators want to replace it without application downtime, they can just assign the same IP and hostname to the new server so the application doesn't even notice that it's a replacement.
When you want to know details about how to do this, you will find help on http://serverfault.com