I have a mongo 2.4.8 cluster. My software dynamically partitions data, and I now have about 30,000 sharded collections. The cluster currently contains only one shard (which is a replica set); it is a cluster to allow easy future expansion.
When I start a new mongos process and run show collections, it takes several hours to complete. During this time the mongos is unresponsive to all clients (but the cluster itself is fine). If I never run show collections, all other operations through the mongos work normally.
Eventually show collections completes, and after that the mongos works fine; running show collections again on the same mongos returns right away. I only found out there was a problem when I needed to restart a mongos for the first time in many months, during which time the collection count had risen greatly.
Logically, it would seem that the metadata transfer (about collections and chunks) from the config servers to the new mongos is the bottleneck. But neither side shows high CPU or network activity while this is going on.
Is this known behavior? How can I further investigate the problem?
I traced the problem to a faulty config server. After replacing it, everything is working fine again.
Details: the bad server didn't respond to queries, which were then re-sent to the other config servers. This added an effective latency to every request to the config servers, which was most pronounced in the show collections operation, because it does at least one round trip per collection between the mongos and the config servers, and does them all serially.
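For anyone investigating something similar: one way to probe each config server individually is to time a trivial metadata query directly against it. A rough sketch, with a placeholder hostname (use the hosts from your mongos --configdb setting):

    // connect straight to one config server, e.g.: mongo cfg1.example.com:27019/config
    var start = Date.now();
    print("sharded collections: " + db.getCollection("collections").count());
    print("chunks: " + db.chunks.count());
    print("elapsed ms: " + (Date.now() - start));

A query that hangs, or is much slower against one member than the others, points at that particular config server.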
Related
We are currently setting up our mongodb environment for production. At the moment we only have one dedicated mongodb database server. We will expand this in the near future with a 2nd server and I already indicated to the management that for the ideal situation we should get a 3rd server as well.
Since I already know we're going to use sharding and replication in the near future I want to be prepared for it.
The idea I have now is to start now with the Development Configuration (as mongo's documentation names it).
Whenever our second server becomes available, I would like to expand this setup to a configuration with 2 configuration servers and 2 shards (replica sets).
And of course, when our third server becomes available, move to the fully functional sharded cluster configuration.
While reading MongoDB's documentation I was struck by the note that the Development setup should not be used in production.
MongoDb Development Configuration
Keeping in mind that we will add more servers soon, would it be a bad idea to configure the Development Configuration now, so we can easily add the 2nd server to the cluster when it becomes available?
After setting up the 'development sharded setup' I've found my answer. Of course I'm happy to share it, in case anybody runs into the same questions I had when starting with this.
In my case, it was OK to start with the development setup until my new servers arrived. It was a temporary situation, and when my new servers arrived I was able to easily expand my replica sets. There are a number of reasons why this isn't advised for production:
To state the obvious, there is no replication yet. Since I was running all shards on one machine, there is a single point of failure. If the machine, or one node, goes down, the cluster won't work anymore.
Now this part is interesting. After I added a second server, I did have primary and secondary nodes. Primary nodes were used for writing and secondaries for reading. I had eliminated the lack of replication AND my data had higher availability. However, I noticed that with 2-member replica sets, if one member of the replica set went down (even if this was a secondary), the primary stepped down to a secondary as well. This has to do with the voting mechanism that MongoDB uses; see Markus' more detailed answer on this. Since there is then no primary left in the replica set, the cluster won't function anymore. If I were to use an arbiter, I could eliminate this problem as well.
When you have a 3-member replica set, automatic failover kicks in. Whenever a node goes down, a new primary is elected automatically and the cluster continues performing as before.
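As a minimal sketch of that, assuming three servers are available (hostnames are placeholders), the replica set can be initiated with two data-bearing members and an arbiter so that an election can always reach a majority:

    // run once in the mongo shell on the first member
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "server1.example.com:27017" },
        { _id: 1, host: "server2.example.com:27017" },
        { _id: 2, host: "server3.example.com:27017", arbiterOnly: true }  // votes, but holds no data
      ]
    })
    rs.status()  // should show one PRIMARY; stopping a member should trigger a new election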
During my tests I also got to a point where one of my mongod.exe instances stopped working due to an "Out of memory" exception. I was running a cluster with 3 replica sets, meaning every machine had at least 4 mongod.exe processes running (3 for the replica-set shards and one for the config server replica set). Besides having a query that wasn't optimized yet, I also noticed that the WiredTiger storage engine by default can use up to 50% of RAM minus one gigabyte. Perhaps it wasn't the best choice to have multiple replica-set shards on one machine, but I was able to eliminate the problem by capping WiredTiger's memory usage.
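For reference, this is roughly the setting I mean; a sketch of a mongod.conf fragment (the 1 GB value is just an example, tune it to how many mongod processes share the machine):

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1   # cap the WiredTiger cache for this mongod instance

The same cap can be given on the command line as --wiredTigerCacheSizeGB 1.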
I hope this answer helps anybody who's starting to set up replication and sharding for MongoDb.
Can we have this type of configuration?
Two servers running the following things each:
1. Mongo config server.
2. Mongo router.
3. Application.
Total 4 EC2 servers:
First server: running the web application & mongos.
Second server: running the web application & mongos.
Third server: running the first shard with a complete DB (say, for example, Demo).
Fourth server: running the second shard with a complete DB (say, for example, Demo).
Should both mongos instances point to one shard, named Shard1?
Yes, you can have multiple mongos instances running against a single shard. Think of the mongos instances as clients for the sharded cluster which have to run as a daemon process in order to keep metadata and heartbeats up to date.
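As a rough sketch (hostnames below are placeholders, and the exact --configdb syntax depends on the MongoDB version), each application server simply starts its own mongos pointed at the same config servers:

    # pre-3.2, mirrored config servers:
    mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019 --port 27017
    # 3.2+, config servers running as a replica set (here named "cfgRS"):
    mongos --configdb cfgRS/cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019 --port 27017

Both mongos instances read the same metadata, so each application can connect to the one running locally.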
Edit: as for having a complete DB, this is only possible for a single DB. You can have one DB on shard1 and the other DB on shard2, for example, but you can never have a single complete DB on two shards. To achieve the goal of having db1 on shard1 and db2 on shard2, you simply make the respective shard the primary shard of the respective database and don't shard any collection. Please read the docs for the movePrimary command for details.
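For example, something along these lines, run in the mongo shell against a mongos (the database and shard names below are placeholders; sh.status() shows the real shard names):

    use admin
    db.runCommand({ movePrimary: "Demo2", to: "shard0001" })  // unsharded data of "Demo2" now lives entirely on shard0001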
A bit off-topic:
However, running a single config server is strongly advised against, and for good reason. If the single config server goes down or gets corrupted, your cluster will be impossible to use, and recreating the sharded cluster will not be an easy task; it's going to be a lengthy process. So please, use three config servers.
I'm running 4 VMs (CentOS) on a single machine (Windows 2008 R2). The 4 VMs are set up as below:
1 mongos
1 mongo config server
2 mongod instances as shard servers
OK, everything was fine before a power-off accident. When the power came back, I manually rebooted all the VMs, and found the sharding settings were gone. I mean, the mongos can talk to the config server, but somehow the sharding data is lost and it shows the database is not sharded.
I set up the sharding based on the documentation on the MongoDB website (e.g. running some commands in the mongo shell to enable sharding for the db and each collection). Do I need to run all the mongo shell commands again after rebooting? Or is it supposed to recover automatically once sharding is enabled?
Thanks.
Once you've established a sharded cluster, it is certainly supposed to stay configured, even if servers fail, even if they all fail at the same time. Restarting the servers should bring everything up just the way it was before the outage. Based on your description, it is difficult to say what might have gone wrong. A dump of the config database, and the log files of all the affected servers, would be necessary to analyze what happened. This should perhaps be filed as a ticket with MongoDB support.
(It is, by the way, recommended to run three config servers rather than a single one, for availability reasons. But even so, even a single server should recover just fine after having failed. The three-server recommendation is only to make sure that there's always a live config server even if one of them should fail.)
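If it helps the analysis, the surviving sharding metadata can be inspected through a mongos in the mongo shell; a quick sketch, with a placeholder database name:

    sh.status()                          // lists shards, databases and chunk distribution
    use config
    db.databases.find({ _id: "mydb" })   // partitioned: true means sharding was enabled for that db
    db.collections.find()                // one document per sharded collection
    db.chunks.count()                    // should be non-zero if any collection was sharded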
I am using 64-bit MongoDB, and I am running tests across multiple shards. If I keep multiple shards on a single machine it works fine, but if I put the shards on different machines, sharding to the second shard fails. I have restricted the first shard's size to 10 MB; once it reaches that limit, data should start moving to the second shard, but that is not happening. Instead, it fails to store data on the second shard and keeps updating the first shard. The following are my shard details. In my environment I initially have two shards. The first shard is on my first machine, running along with my application. The second shard is on my second machine.
Configuration as follows:
On both of my machines I run a shard server, a config server, and a mongos. I have connected to mongo through mongos as follows: ./mongo hostname:27017/admin. I have added both shards (the first and the second) and enabled sharding at the database and collection level using a shard key.
Please let me know if I have gone wrong anywhere in the configuration.
Thanks in advance.
Your post could use some editing; it is very difficult to read.
It looks like you have 2 machines. On each machine you have:
mongod process serving as one shard
mongod process serving as a config server
mongos process
a copy of your application connecting to localhost:27017/admin
Please let me know if I have gone wrong anywhere in the configuration.
There are several possible problems here. Please check the following:
You can only have 1 or 3 config processes. It looks like you have 2; this will not work.
When you connect to localhost:27017/admin, are you connecting to mongos or mongod? Either one could be running on that port. Can you specify the ports for each process to help clarify? You must connect to mongos or the sharding will not happen (see the sketch after this list for a quick way to check).
Please look at the logs, they generally have output indicating what the server is doing. If there is no indication of "splits" or "chunks" happening, then your database may be configured incorrectly.
Your best bet is to start from the top and test each piece one at a time.
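A few quick shell checks along those lines (database and collection names are placeholders):

    db.adminCommand("isdbgrid")                     // succeeds only when connected to a mongos
    sh.status()                                     // shows the registered shards and chunk distribution
    use config
    db.chunks.find({ ns: "mydb.mycoll" }).count()   // more than one chunk means splitting has started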
Is it possible to set up a MongoDB replica set with the following scenario (and if so, how):
2 servers always online running MongoDB, one of them holding the main node, the other one a rescue copy;
n computers, each of them running MongoDB and only occasionally connected to the internet, holding nodes which need to synchronize with the main node when they go online.
Backup only. In order to do this, you'll have to set the priority of this node to 0. If your node is never going to be used as a master nor queried, you can also set buildIndexes to false.
More information here.
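A sketch of how such a member could be added (hostname and _id are placeholders; the _id just needs to be unique within the set):

    rs.add({
      _id: 2,
      host: "backup.example.com:27017",
      priority: 0,           // never eligible to become primary
      hidden: true,          // clients will not read from it
      buildIndexes: false    // must be set when the member is added, and requires priority 0
    })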
Intermittent slave. Due to limitations (mainly on the oplog queue), you can't have a slave halted for a very long time if you have many writes on your MongoDB, see here. However, you can use the mongodump and mongorestore tools directly over the network, or by script + syncing the backup file. More information here. Note that a restore will bring a db or collection into a server and recreate the indexes completely (if you restore the system.indexes collection too), which can take some time.
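A minimal sketch of that dump/restore approach, with placeholder hostnames and paths:

    # on a machine that can reach the always-on primary
    mongodump --host primary.example.com --port 27017 --db mydb --out /backup/mydb-dump
    # later, against the intermittently connected node once it is online
    mongorestore --host laptop.example.com --port 27017 --drop /backup/mydb-dump

The --drop flag drops each collection before restoring it, so the target ends up matching the dump.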