How can I tell if a specific Mongo instance has active connections?

I'm upgrading a sharded cluster and want to turn one of three mongos instances off. I've guaranteed that new incoming connections will not take place because I disabled the box in my load balancer. However, I'm concerned there might be existing connections on the mongos instance still active.
I've run the following on the Mongo instance:
db._adminCommand("connPoolStats");
Do you have any tips on interpreting the result? Is this the correct command?

The cursorInfo command should work. If there are no more cursors, then it's ok to shut off the mongos. Any connections that still exist will simply fail over to another mongos through the load balancer when they try to reconnect (assuming they have an appropriate reconnection policy in place). The only thing you need to worry about is cursors, since they have state, which is taken care of by cursorInfo.
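As a rough sketch of what that check looks like in the shell (cursorInfo exists on older releases; from MongoDB 2.6 onward the same information is exposed through serverStatus, and cursorInfo was removed in 3.0):
// Run against the mongos you plan to shut down:
db.adminCommand("cursorInfo");
// { "totalOpen" : 0, "clientCursors_size" : 0, "timedOut" : 0, "ok" : 1 }
// totalOpen of 0 means no live cursors remain on this mongos.
// On newer versions, check instead:
db.serverStatus().metrics.cursor.open.total;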

Related

Mongodb localhost:27017 replica set switched to secondary localhost:27027

I'm fairly new to mongodb, just a couple of months. I just converted my mongodb database to support a secondary replica set so I can watch collections. I only added one secondary, which I'm guessing now may not be the best after reading that you should create an odd number, but it is a localhost on one machine. I went through the instructions and got replication working fine for half a day running my programs. But for some reason it recently switched the database on port 27017 from primary to secondary. Primary was previously on localhost:27017 and secondary was on localhost:27027. Now my normal program can't connect to localhost:27017 without an error, which I believe is because it is a secondary member now when it was primary before, assuming you can only connect to a primary. Here is the error msg.
Exception in thread "main" com.mongodb.MongoNotPrimaryException: Command failed with error 10107 (NotWritablePrimary): 'not master' on server localhost:27017.
I'm perplexed as to why mongodb switched the replica set primary in the first place. I doubt an error occurred, though it's certainly possible; I haven't had a single "localhost" error in months of development.
For now, ideally how would I switch 27017 back to being the primary? How do I do that so my existing programs can function again?
Eventually, when in production, what is the best methodology to handle this, assuming the application looks up a DNS entry for an IP address and the primary suddenly changes because of a failover?
Given that question 3 is a bit more involved, is there something I can do in my development environment to better simulate a production environment?
I use StackOverflow extensively but this is my first post so thanks for anyone who can provide advice.
Without knowing more about the replica set configuration and the circumstances of the switchover, I'm not sure anyone could confidently answer question 1, but it may not be important compared to question 3.
When you want to manually switch the primary you can manipulate the priority settings:
https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/
Or run manual commands to freeze or step down the current primary:
https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/#force-a-member-to-be-primary-using-database-commands
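As a rough shell sketch of both approaches (the member indexes are illustrative; check rs.conf() to see which entry is localhost:27017):
// Give the member on localhost:27017 the highest priority so it is
// preferred in elections (assumes it is members[0] in the config):
cfg = rs.conf();
cfg.members[0].priority = 2;
cfg.members[1].priority = 1;
rs.reconfig(cfg);
// Or, from the current primary, step down so another member can win:
rs.stepDown(120);  // do not seek re-election for 120 seconds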
The safest option is to ensure your application is aware of all replicas in the replica set. Then when you have these situations where something unexpected has happened the application will fail over to a writable db without any issues.
https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/connect-to-mongodb/#connect-to-a-replica-set
I can only suggest setting up some VMs or containers as replica set members to better represent a production environment.
https://hub.docker.com/_/mongo
I was able to solve the problem by using a connection string that includes both replica set members, which I was unaware I needed to do. For example, in Java:
mongoClient = MongoClients.create("mongodb://localhost:27017,localhost:27027");
This also worked for MongoDB Compass, so I was able to connect to the secondary database. I didn't know you needed to provide the addresses of all replica set members when connecting, but in retrospect it makes good sense in case something is down.
If you need a replica set for testing, you can create a single-node RS. Follow the instructions for creating a RS but only add one node.
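For example, a minimal single-node set for development might look like this (port, path, and set name are illustrative):
// Start mongod with a replica set name:
//   mongod --port 27017 --dbpath /data/rs0 --replSet rs0
// Then initiate the set with a single member from the shell:
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] });
rs.status();  // the lone member should report itself as PRIMARY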

Migrating MongoDB instances with no down-time

We are using MongoDB in a production environment and now, due to some issues with the current servers, I'm going to change the server and start a new MongoDB instance.
We have a replica set and a single mongod instance (two different MongoDB networks for different purposes). Now, first I should migrate the single mongod instance and then the whole replica set to the new server.
What I want to know is, how can I migrate both instances with no downtime? I don't want to shut down the server or stop write operations.
Thanks in advance.
First of all, you should never run MongoDB as a single instance in production. At a minimum you should have one primary, one secondary, and one arbiter.
Second, even with a replica set you will always have a bit of write downtime when you switch primaries, as writes are not possible during the election process. From the docs:
IMPORTANT: Elections are essential for independent operation of a replica set; however, elections take time to complete. While an election is in process, the replica set has no primary and cannot accept writes. MongoDB avoids elections unless necessary.
Elections will occur when, for example, you bring down the primary to move it to a new server or virtual instance, or when you upgrade the database version (like going from 2.4 to 2.6).
You can keep downtime to a minimum with an existing replica set by setting the appropriate options to allow queries to run against secondaries. Again from the docs:
Maintaining availability during a failover. Use primaryPreferred if you want an application to read from the primary under normal circumstances, but to allow stale reads from secondaries in an emergency. This provides a “read-only mode” for your application during a failover.
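A quick sketch of wiring that up; the read preference can go in the connection string or be set on the current shell connection (hosts are illustrative):
// In the connection string:
//   mongodb://db1,db2,db3/?replicaSet=rs0&readPreference=primaryPreferred
// Or for the current shell session:
db.getMongo().setReadPref("primaryPreferred");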
This takes care of reads at least. Writes are best dealt with by having your application retry failed writes, or queue them up.
Regarding your standalone instance, the documented procedure for converting it to a replica set is well tested and can be completed very quickly with minimal downtime:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
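The procedure boils down to a restart plus two shell commands; a sketch, with illustrative hostnames:
// 1. Restart the standalone mongod with a replica set name:
//      mongod --port 27017 --dbpath /data/db --replSet rs0
// 2. Initiate the set, then add new members as they come online:
rs.initiate();
rs.add("newhost.example.net:27017");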
You cannot have zero downtime (the new mongod will run on a new IP, so clients at least need to reconnect to it), but you can minimize downtime by building a geographically distributed replica set.
Please read
http://docs.mongodb.org/manual/tutorial/deploy-geographically-distributed-replica-set/
Use the given process, but please note:
Do not set priority 0 on the instances at the New Location, so that they can become primary when the old ones at the Old Location step down.
You still need to restart mongod in replica set mode at the Old Location.
You need three instances, including an arbiter, at the New Location if you want it to remain a replica set.
Once the data is fully in sync with the instances at the New Location, step down the instances at the Old Location one by one (as sketched below). Everything will then go to the New Location, but the catch is that traffic is still directed through a distant mongod.
So stop the mongod at the Old Location, start a new one at the New Location, and connect your applications to the New Location mongod.
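A shell sketch of that cut-over step, run against each Old Location primary in turn:
// From the primary at the Old Location:
rs.stepDown(300);  // yield the primary role for up to 300 seconds
rs.status();       // confirm a New Location member is now PRIMARY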
Note: I have not done this myself so far. I planned it once, but then ran into a problem (which was not with the hosting provider). In practice you may run into some issues.
A replica set is the feature provided by MongoDB to achieve high availability and automatic failover.
It is akin to a traditional master-slave configuration, but with the capability of automatic failover.
It is basically a group/cluster of mongod instances which communicate and replicate with each other to provide high availability and automatic failover.
In a replica set, a minimum of 2 and a maximum of 12 mongod instances can exist.
Several types of servers exist within a replica set; of these, one server is always primary.
http://blog.ajduke.in/2013/05/31/setup-mongodb-replica-set-in-4-steps/
John's answer is right. By the way, in your case you have no way to avoid downtime; you can only try to make it as short as possible.
You can prepare the new replica set and save its configuration.
Do the same for the single mongod instance: prepare a js file with its specific configuration (i.e. anything that lives in the admin database).
Disable client connections on the production servers.
Copy the data files from the old servers to the new ones (http://docs.mongodb.org/manual/core/backups/#backup-with-file-copies).
Apply your previously saved replica set config and configuration.
Done.
There are other approaches as well, such as adding a hidden secondary member to the replica set if you have a lot of data, so you can wait until it is up to date before stopping the production server. For the replica set you have many ways to handle a migration; with the single instance you don't have such options.
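For instance, the hidden member approach looks roughly like this (hostname is illustrative; older servers may also require an _id field in the member document):
// Hidden, priority-0 secondary: it replicates data but never becomes
// primary and is invisible to clients, so it can catch up before cut-over:
rs.add({ host: "newserver.example.net:27017", priority: 0, hidden: true });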

How to add a new server to a replica set in production

I am new to MongoDB replica sets.
According to the Replica Set Ref, this should be the connection string in my application to connect to MongoDB:
mongodb://db1.example.net,db2.example.net,db3.example.net:2500/?replicaSet=test
Suppose this is a production replica set (i.e. I cannot change application code or stop all the mongo servers), and I want to add another mongod instance, db4.example.net, to the test replica set. How will I do that?
How will my application know about the new db4.example.net?
If you are looking for a real-world scenario: when an existing server goes down due to hardware failure etc., it is natural to add another db server to the replica set to preserve the redundancy. But how do you do that?
The list of replica set hosts in your connection string is a "seed list", and does not have to include all of the members of your replica set.
The MongoDB client driver used by your application will iterate through the seed list until it can successfully connect to a host, and will use that host to request the current replica set configuration, which lists all current members of the replica set. Per the documentation, it is recommended to include at least two hosts in the connection string so that your driver can still connect in the event the first host happens to be down.
Any changes in replica set configuration (i.e. adding/removing members or election of a new primary) are automatically discovered by your client driver so you should not have to make any changes in the application configuration to add a new member to your replica set.
A change in replica set configuration may trigger an election for a new primary, so your application code should expect to handle transient errors for a few seconds during reconfiguration.
Some helpful mongo shell commands:
rs.conf() - display the current replication configuration
db.isMaster().primary - display the current primary
You should notice a version number in the configuration document returned by rs.conf(). This version is incremented on every configuration change so drivers and replica set nodes can check if they have a stale version of the config.
How my application will know about the new db4.example.net
Just rs.add("db4.example.net") and your application should discover this host automatically.
In your scenario, if you are replacing an entirely dead host you would likely also want to rs.remove() the original host (after adding the replacement) to maintain the voting majority for your replica set.
Alternatively, rather than adding a host with a new name you could replace the dead host with a new server using the same hostname as previously configured. For example, if db3.example.net died, you could replace it with a new db3.example.net and follow the steps to Resync a replica set member.
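Putting that together as a shell sketch (hostnames follow the question):
rs.add("db4.example.net:27017");     // add the replacement member first
rs.remove("db3.example.net:27017");  // then drop the dead host from the config
rs.conf().version;                   // the version increments after each change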
A way to provide abstraction to your database is to set up a sharded cluster. In that case, the access points between your application and the database are the mongodb routers. What happens behind them is outside of the visibility of the application. You can add shards, remove shards, turn shards into replica sets, and change those replica sets all you want. The application keeps talking to the routers, and the routers know which servers they need to forward each request to. You can change the cluster configuration at runtime by connecting to the routers with the mongo shell.
When you have questions about how to set up and administer MongoDB clusters, please ask on http://dba.stackexchange.com.
But note that in the scenario you described, that wouldn't even be necessary. When one of your database servers has a hardware failure and your system administrators want to replace it without application downtime, they can just assign the same IP and hostname to the new server so the application doesn't even notice that it's a replacement.
When you want to know details about how to do this, you will find help on http://serverfault.com

node-mongodb-native does not recover on replica set primary network failure?

Our MongoDB setup uses three replica set shards. Each webserver runs a mongos instance locally, and the client node.js processes connect through that using Mongoose (3.6.20) and node-mongodb-native. So node-mongodb-native just connects to mongos on localhost.
When a replica set primary goes down hard (we can simulate this by doing 'ifdown eth0' on the primary) mongos properly detects this, and also detects that a new primary has been elected. So far, so good. But node-mongodb-native's connections to the mongos instance are still open but not functional, and a restart of the node procs is required.
Our assumption was that mongos would just kill any established connections to the dead primary and node-mongodb-native would reconnect, but that seems to not be the case; both the server and the OS think these connections are open. By contrast, on primary stepDown, the clients fail over fine, connections are closed and reopened.
We are looking at socketTimeoutMS, but that seems incorrect since it causes disconnects for connections that are merely idle.
Are we missing configuration to our client or mongos, or do we have to implement our own pinging?
Based on experimentation and the following MongoDB bug, this appears to just be a shortcoming of mongos (or, if you prefer, of the client libraries) at this point. Right now it looks like 'write your own pinging logic in your app and trigger a reconnect when that fails', so that's what we are doing.
https://jira.mongodb.org/browse/SERVER-9041
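What we ended up with is roughly the following; a sketch against the 1.x node-mongodb-native API that Mongoose 3.6 wraps (the interval and error handling are illustrative):
// Periodically ping through the pooled connection; if the ping fails,
// force-close the wedged sockets and reopen the connection.
setInterval(function () {
  db.admin().ping(function (err) {
    if (err) {
      db.close(true, function () {      // true force-closes the sockets
        db.open(function (openErr) {
          if (openErr) console.error("reconnect failed:", openErr);
        });
      });
    }
  });
}, 5000);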

How a primary server going down is handled automatically in MongoDB replication

I don't have much hands-on coding experience. I have a doubt regarding MongoDB replica sets.
Below is the situation:
I have an alert monitoring application.
It is using mongodb with replica set with 3 nodes.
The application's Java code base keeps connecting to the primary and doing some transactions.
Now my question is this:
If the primary server goes down, how will it affect the application server?
I mean, will the app server log errors saying the connection failed,
OR
will the replica set automatically pick one of the slaves as the new master and allow the application server to continue its activity? How will that happen?
Thanks & Regards,
UDAY
The replica set will try to pick another server as the new primary. If you have three nodes and one goes down, the other two will negotiate which one becomes the new master. If two go down, or communication between the remaining members somehow breaks down, there will be no new master until the situation is recovered.
The official drivers support this automatic fail-over, as does the mongos routing server if you use it. So the application code does not need to do anything here.
I am not sure if there will be connection errors during the brief period of time this fail-over negotiation takes (you will probably get errors for a few seconds).
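One way to watch this happen in a test environment is to force an election from the shell; a sketch:
// On the current primary:
rs.stepDown(60);  // step down; the remaining members hold an election
rs.status().members.map(function (m) { return m.name + ": " + m.stateStr; });
// For a few seconds no member reports PRIMARY; the official drivers
// rediscover the topology and resume once a new primary is elected.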