MongoDB db.collection.count() vs db.collection.find().length()

I would like to understand why these commands, when run from a mongos instance against the same MongoDB collection, return different numbers:
db.users.count()
db.users.find().length()
What can be the reason and can it be a sign of underlying issues?

I believe your collection is sharded.
Most sharded database solutions show this kind of discrepancy, because some commands examine the actual documents across all the shards, while others rely on each shard's cached metadata, which can include documents the shard no longer logically owns.
This is something to always keep in mind. It mostly applies to commands that:
count documents
return the document with the lowest value for a given field
return the document with the highest value for a given field
...
Found in the MongoDB docs:
count() is equivalent to the db.collection.find(query).count() construct. ...
Sharded Clusters
On a sharded cluster, db.collection.count() can result in an inaccurate count if orphaned documents exist or if a chunk migration is in progress. ...
So in the case of MongoDB, it is simply because MongoDB continuously rebalances documents across the shards in a background process (the balancer), to keep the distribution compliant with the sharding policy defined on the collection. A migration that is in progress, or orphaned documents left behind by a failed migration, can then inflate the metadata-based count.
Keep in mind that, to offer the best write performance, some sharded solutions first write a document to whichever node the client is connected to and only later move it to where it really belongs; this is one reason NoSQL databases are often described as eventually consistent. (MongoDB itself routes writes by shard key at write time, but the balancer still moves documents after the fact.)
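As a quick check (a minimal mongo shell sketch, using the "users" collection from the question), you can compare the metadata-based count with counts that actually examine documents:

// Fast, but reads per-shard metadata and may include orphaned documents
// left behind by chunk migrations.
db.users.count()

// Iterates the cursor and counts client-side, like find().length(), so
// documents are filtered by the shards and orphans are excluded.
db.users.find().itcount()

// The aggregation framework also filters orphaned documents on each shard
// ($count is available in MongoDB 3.4+).
db.users.aggregate([{ $count: "total" }])

If count() stays higher than itcount() even when no migration is running, cleaning up orphaned documents (e.g. with the cleanupOrphaned admin command) is worth investigating.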

Related

Writing on multiple shards in mongodb

Generally, a query that spreads across multiple shards is considered less optimized: it takes more time than reading from a single shard.
Does the same hold true for writes? If I write data that gets distributed among multiple shards, is that also less optimized?
If so, what is the best way to write a batch whose documents should go to different shards?
It depends on the operations, see https://docs.mongodb.com/manual/core/sharded-cluster-query-router/#sharding-mongos-targeted.
All insertOne() operations target to one shard. Each document in the insertMany() array targets to a single shard, but there is no guarantee all documents in the array insert into a single shard.
All updateOne(), replaceOne() and deleteOne() operations must include the shard key or _id in the query document. MongoDB returns an error if these methods are used without the shard key or _id.
Depending on the distribution of data in the cluster and the selectivity of the query, mongos may still perform a broadcast operation to fulfill these queries.
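To illustrate (a hedged sketch; the "events" collection and the { region: 1 } shard key are hypothetical): an unordered insertMany lets mongos dispatch documents to their shards in parallel instead of stopping at the first error, which is usually the best way to write a batch that spans shards.

// Batch insert spanning shards: each document is routed by its shard key.
db.events.insertMany(
  [
    { region: "eu",   type: "click" },
    { region: "us",   type: "click" },
    { region: "apac", type: "view"  }
  ],
  { ordered: false }  // shards can be written in parallel
)

// Targeted single-document write: the filter includes the shard key,
// so mongos routes it to exactly one shard.
db.events.updateOne(
  { region: "eu", type: "click" },
  { $set: { type: "tap" } }
)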

Should I shard all collections in my MongoDB or just some

I am running MongoDB cluster (backend to my website). I am converting my previous DB from being plain into sharded structure.
Question is: should I shard all my collections or only those I expect to grow a lot? I have some collections that will never get bigger than a few thousand documents, a few hundred thousand at most; should I shard them anyway? If yes, when? Right now during the conversion, or convert them without sharding and shard them later?
To rephrase the question: if a table is not too big, are there any benefits to it being sharded?
A common misconception is that the decision to shard is based upon the size of a collection. It is not. Common sense does dictate that a collection can reach a size that is too much to store on a single server, but the decision to shard is driven by operations, not size.
It makes sense that the collections that will "grow a lot" should be sharded, to distribute their operations across the cluster, while the quieter ones, such as your smaller collections, can happily remain on the primary shard.
As to when to shard them: that depends on the operations. Sharding is designed to scale out reads and writes so it is merely a question of when a collection needs to be scaled out.
You could have a collection of maybe 1,000 documents, but if the operations call for it to be sharded, then it needs sharding. Vice versa, you could have a collection of 1 billion documents that still doesn't merit sharding.
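For example (a sketch; the "blog" database, collection names, and shard key are hypothetical), you would enable sharding only where the operations demand it:

// Enable sharding for the database, then shard only the busy collection.
sh.enableSharding("blog")
sh.shardCollection("blog.posts", { authorId: 1, _id: 1 })

// Small, quiet collections such as "blog.authors" need nothing done:
// they simply remain on the database's primary shard.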

What is the performance of a query that doesn't contain the shard key in a sharded MongoDB environment?

The title says it all. Assume you have a sharded MongoDB environment and the user provides a query that doesn't contain the shard key. What is the actual performance of the query? What happens in the background?
The performance depends on any number of factors; however, the default action of MongoDB in this case is a global scatter-gather operation, whereby it sends the query to all shards and then merges the results (removing duplicates) to give you the end result.
Returning to the performance: it normally depends upon the indexes on each shard, how well each shard's subset of the data is optimised, and how much of the dataset's range each shard holds.
However, processing is parallel in sharding: all shards receive the query at once, and the mongos merges the results as they come in. So the performance is not "go to shard 1, get its results, then shard 2"; instead it is "go to all shards, each shard returns its results, and the mongos merges and returns them".
Here is a good presentation (with nice pictures) on exactly how queries with sharding work in certain situations: http://www.slideshare.net/mongodb/how-queries-work-with-sharding
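If you want to see which case you are in, running explain() through mongos shows how the query was routed (a sketch; the collection and field names are hypothetical):

// Run through mongos; inspect the winning plan in the output.
db.users.find({ email: "a@example.com" }).explain()

// A SINGLE_SHARD stage means the query was targeted to one shard;
// a SHARD_MERGE stage listing several shards means the query was
// broadcast and the results were merged.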
If the query runs against a sharded collection without the shard key, it is executed on all shards; if it runs against a non-sharded collection, MongoDB reads all the data from the single shard (the database's primary shard) that holds it.
Here is the link to the sharding FAQ in the MongoDB docs:
http://docs.mongodb.org/manual/faq/sharding/

When the sort field is not part of the shard key, will mongos sort the data returned by all the mongods?

When the sort field is not part of the shard key, mongos will send the query to all mongod instances. After all mongod instances return data, mongos will merge them.
Does this merge operation include a sort?
We know the sort field is not part of the shard key, so the returned data will be unordered and mongos must sort it. If so, when the returned data is very large, mongos will consume a lot of memory.
Is my understanding correct?
It's not the sort field that needs to be in the shard key, but rather the criteria you use to select the data. That is, if the mongos cannot determine from the fields in your query where the data lives, it will send the query to all shards, just as with any other query. Sorting on a non-shard-key field does not affect the ability of the mongos to route queries appropriately.
This is mentioned in the docs here:
https://docs.mongodb.org/v2.4/core/sharded-cluster-query-router/#how-mongos-handles-query-modifiers
The shards will receive the queries from mongos, they will sort their subset of results, and send them back to the mongos. The mongos then has to do a merge sort on the returned results before presenting them back. This is not as intensive as the full sort would be, since the results are ordered initially by the shards, but will still require resources. The amount of memory consumed will be related to the size of the result sets returned by the various shards.
Edit (May 2016): the above was true when originally answered in 2012, but (as pointed out in the comments below) the behavior changed with version 2.6 in 2014. The results are now sent to the primary shard of the sharded database to be merge sorted before being returned to the mongos (and then to the user). This makes a lot of sense, since mongos instances are far less likely to have the resources to perform a large sort, but it does mean you should pay close attention to which shard is the primary for any database that is sorted frequently, as it will see higher load as a result.
As of version 3.2, if the primary shard is not involved in the fetch (in other words, it does not contain any of the documents matched by the find), another shard may perform the merge instead.
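Putting it together (a hedged sketch: an "orders" collection sharded on { customerId: 1 }, sorted on a field outside the shard key):

// The filter lacks the shard key, so mongos broadcasts the query.
// Each shard sorts its own subset, ideally using an index, and the
// partial results are merge sorted before reaching the client.
db.orders.createIndex({ total: -1 })
db.orders.find({ status: "shipped" }).sort({ total: -1 }).limit(10)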

MongoDB Cluster and 100,000 Capped Collections

How does a MongoDB cluster distribute capped collections across nodes for balancing load? I am planning to use a capped collection for the comments of each post in a MongoDB-based CMS. Let's assume we have 100,000 posts and hence 100,000 capped collections, one storing the comments of each post. Will these capped collections be distributed evenly across the cluster for read and write scalability?
I don't want to shard a capped collection. I want to distribute all the capped collections evenly across the cluster for read and write scalability.
Let's assume we have 5 machines. When we create new collections, I need them to be created on different machines/nodes, and also redistributed when new machines are added.
1) When creating a collection (capped or not), it is placed on the primary shard of its database. A workaround would be to use one collection per database, so that MongoDB balances the databases across the cluster. The rule for this balancing is not clearly documented, but it depends mainly on the current load on each shard.
2) Believe me, you should use one big collection for all your posts and shard it in a clever way. That will ensure a really efficient, automatic balance of your data across the cluster.
Moreover, capped collections are not really space efficient, because all the space is pre-allocated for every collection (meaning you would have a lot of wasted space for nothing).
Unless you have a very good reason to use capped collections, you are better off sharding.
One piece of advice: use the 'postId' field in your shard key, as it will probably give the best performance (see the sketch below).
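A sketch of that design (the "cms" database, field names, and shard key are illustrative):

// One big comments collection instead of 100,000 capped ones.
sh.enableSharding("cms")
sh.shardCollection("cms.comments", { postId: 1, createdAt: 1 })

// All comments of a post share one shard-key range, so this query is
// routed to a single shard:
db.comments.find({ postId: 42 }).sort({ createdAt: -1 })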
Apparently sharding a capped collection is not implemented yet in MongoDB: Issue
Quote from a similar question:
But you can create multiple capped collections on different shards to increase write throughput; however, you must then run multiple queries to access all your data.