I have a MongoDB collection named post with 35 million objects. The collection has two secondary indexes defined as follows.
> db.post.getIndexKeys()
[
{
"_id" : 1
},
{
"namespace" : 1,
"domain" : 1,
"post_id" : 1
},
{
"namespace" : 1,
"post_time" : 1,
"tags" : 1 // this is an array field
}
]
I expect the following query, which simply filters by namespace and post_time, to run in a reasonable time without scanning all objects.
>db.post.find({post_time: {"$gte" : ISODate("2013-04-09T00:00:00Z"), "$lt" : ISODate("2013-04-09T01:00:00Z")}, namespace: "my_namespace"}).count()
7408
However, it takes MongoDB at least ten minutes to retrieve the result and, curiously, it manages to scan 70 million objects to do the job according to the explain function.
> db.post.find({post_time: {"$gte" : ISODate("2013-04-09T00:00:00Z"), "$lt" : ISODate("2013-04-09T01:00:00Z")}, namespace: "my_namespace"}).explain()
{
"cursor" : "BtreeCursor namespace_1_post_time_1_tags_1",
"isMultiKey" : true,
"n" : 7408,
"nscannedObjects" : 69999186,
"nscanned" : 69999186,
"nscannedObjectsAllPlans" : 69999186,
"nscannedAllPlans" : 69999186,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 378967,
"nChunkSkips" : 0,
"millis" : 290048,
"indexBounds" : {
"namespace" : [
[
"my_namespace",
"my_namespace"
]
],
"post_time" : [
[
ISODate("2013-04-09T00:00:00Z"),
ISODate("292278995-01--2147483647T07:12:56.808Z")
]
],
"tags" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "localhost:27017"
}
The difference between the number of objects in the collection (35 million) and the number of index entries scanned (70 million) must be caused by the lengths of the tags arrays (which are all equal to 2). Still, I don't understand why the post_time filter does not make use of the index.
Can you tell me what I might be missing?
(I am working on a decent machine with 24 cores and 96 GB RAM. I am using MongoDB 2.2.3.)
Found my answer in this question: Order of $lt and $gt in MongoDB range query
My index is a multikey index (because of tags) and I am running a range query (on post_time). Apparently, MongoDB cannot use both ends of the range as index bounds in this case, so it just picks the $gte clause, which comes first. Since my lower limit happens to be the lowest post_time value in the collection, MongoDB ends up scanning nearly all the index entries.
Unfortunately, this is not the whole story. Trying to solve the problem, I created non-multikey indexes too, but MongoDB insisted on using the bad one. That made me think that the problem was elsewhere. Finally, I had to drop the multikey index and create a new one without the tags field. Everything is fine now.
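For reference, the change amounted to something like the following (a sketch based on the index keys shown above; on MongoDB 2.2, ensureIndex is the way to build it):
// Drop the multikey index that the planner kept picking...
db.post.dropIndex({ namespace: 1, post_time: 1, tags: 1 })
// ...and replace it with a non-multikey one, so both ends of the post_time range can be used as index bounds.
db.post.ensureIndex({ namespace: 1, post_time: 1 })
// The original count should now scan only the matching range.
db.post.find({
    namespace: "my_namespace",
    post_time: { "$gte": ISODate("2013-04-09T00:00:00Z"), "$lt": ISODate("2013-04-09T01:00:00Z") }
}).count()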
I know it is not possible to remove the _id field in a MongoDB collection. However, my collection is so large that the index on the _id field prevents me from loading the other indexes into RAM. My machine has 125 GB of RAM and my collection stats are as follows:
db.call_records.stats()
{
"ns" : "stc_cdrs.call_records",
"count" : 1825338618,
"size" : 438081268320,
"avgObjSize" : 240,
"storageSize" : 468641284752,
"numExtents" : 239,
"nindexes" : 3,
"lastExtentSize" : 2146426864,
"paddingFactor" : 1,
"systemFlags" : 0,
"userFlags" : 1,
"totalIndexSize" : 165290709024,
"indexSizes" : {
"_id_" : 73450862016,
"caller_id_1" : 45919923504,
"receiver_id_1" : 45919923504
},
"ok" : 1
}
When I do a query like the following:
db.call_records.find({ "$or" : [ { "caller_id": 125091840205 }, { "receiver_id" : 125091840205 } ] }).explain()
{
"clauses" : [
{
"cursor" : "BtreeCursor caller_id_1",
"isMultiKey" : false,
"n" : 401,
"nscannedObjects" : 401,
"nscanned" : 401,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"caller_id" : [
[
125091840205,
125091840205
]
]
}
},
{
"cursor" : "BtreeCursor receiver_id_1",
"isMultiKey" : false,
"n" : 383,
"nscannedObjects" : 383,
"nscanned" : 383,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"receiver_id" : [
[
125091840205,
125091840205
]
]
it takes more than 15 seconds on average to return the results. The indexes on caller_id and receiver_id together should be around 90 GB, which is OK. However, the 73 GB index on _id makes this query very slow.
You are correct that you cannot remove the _id field from your documents. You also cannot remove the index on this field, so this is something you have to live with.
For some reason you start from the assumption that the _id index makes your query slow, which is unjustified and most probably wrong. That index is not used by this query; it just sits there untouched.
A few things I would try in your situation:
You have almost 2 billion documents in your collection; have you considered that this might be the right time to start sharding your database? In my opinion, you should (see the sketch after this list).
Use explain() with your query to figure out what actually slows it down.
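For the sharding suggestion, a minimal sketch of what the setup could look like, assuming the stc_cdrs database from the stats above and a hashed shard key on caller_id (the shard key choice is a real design decision, not a recommendation):
// A hashed key on caller_id spreads writes evenly, but receiver_id lookups then become scatter-gather across all shards.
db.call_records.ensureIndex({ caller_id: "hashed" })  // required before sharding a non-empty collection
sh.enableSharding("stc_cdrs")
sh.shardCollection("stc_cdrs.call_records", { caller_id: "hashed" })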
Looking at your query, I would also try to do the following:
change your document from
{
... something else ...
receiver_id: 234,
caller_id: 342
}
to
{
... something else ...
participants: [342, 234]
}
where participants is [caller_id, receiver_id] in that order; then you need only one index on this field. I know this will not make your indexes smaller, but I hope that, because you no longer need an $or clause, you will get results faster. P.S. If you do this, do not do it straight in production; test whether it gives you a significant improvement and only then change prod.
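A sketch of what that could look like (participants is the hypothetical new array field; note that the index on it becomes a multikey index):
// One multikey index on the participants array replaces the two single-field indexes for this lookup.
db.call_records.ensureIndex({ participants: 1 })
// Matches documents where the id appears as either caller or receiver, with no $or clause.
db.call_records.find({ participants: 125091840205 })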
There are a lot of potential issues here.
The first is that your indexes do not include all of the data returned. This means Mongo is getting the _id from the index and then using the _id to retrieve and return the document in question. So removing the _id index, even if you could, would not help.
Second, the query includes an OR. This forces Mongo to load both indexes so that it can read them and then retrieve the documents in question.
To improve performance, I think you have just a few choices:
Add the additional elements to the indexes and restrict the data returned to what is available in the index (this would show indexOnly: true in the explain results; see the sketch after this list)
Explore sharding as Skooppa.com mentioned.
Rework the query and/or the document to eliminate the OR condition.
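For the first option, a covered query could look roughly like this, assuming duration is the only field the processing actually needs (swap in whichever fields you really use; the receiver_id side of the $or would need a similar index):
// Hypothetical covered query: every returned field lives in the index, and _id is explicitly excluded from the projection.
db.call_records.ensureIndex({ caller_id: 1, duration: 1 })
db.call_records.find(
    { caller_id: 125091840205 },
    { _id: 0, caller_id: 1, duration: 1 }
).explain()  // indexOnly should now report true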
I have a collection of 1.8 billion records stored in mongodb, where each record looks like this:
{
"_id" : ObjectId("54c1a013715faf2cc0047c77"),
"service_type" : "JE",
"receiver_id" : NumberLong("865438083645"),
"time" : ISODate("2012-12-05T23:07:36Z"),
"duration" : 24,
"service_description" : "NQ",
"receiver_cell_id" : null,
"location_id" : "658_55525",
"caller_id" : NumberLong("475035504705")
}
I need to get all the records for 2 million specific users (I have the IDs of the users of interest in a text file) and process them before writing the results to a database. I have indexes on receiver_id and on caller_id (each is its own single-field index).
The current procedure I have is as follows:
for user in list_of_2million_users:
    user_records = collection.find({ "$or" : [ { "caller_id": user }, { "receiver_id" : user } ] })
    for record in user_records:
        process(record)
However, it takes 15 seconds on average to consume the user_records cursor (the process function is very simple, with a low running time), which makes it infeasible to process 2 million users. Any suggestions for speeding up the $or query? It seems to be the most time-consuming step.
db.call_records.find({ "$or" : [ { "caller_id": 125091840205 }, { "receiver_id" : 125091840205 } ] }).explain()
{
"clauses" : [
{
"cursor" : "BtreeCursor caller_id_1",
"isMultiKey" : false,
"n" : 401,
"nscannedObjects" : 401,
"nscanned" : 401,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"caller_id" : [
[
125091840205,
125091840205
]
]
}
},
{
"cursor" : "BtreeCursor receiver_id_1",
"isMultiKey" : false,
"n" : 383,
"nscannedObjects" : 383,
"nscanned" : 383,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"receiver_id" : [
[
125091840205,
125091840205
]
]
}
}
],
"cursor" : "QueryOptimizerCursor",
"n" : 784,
"nscannedObjects" : 784,
"nscanned" : 784,
"nscannedObjectsAllPlans" : 784,
"nscannedAllPlans" : 784,
"scanAndOrder" : false,
"nYields" : 753,
"nChunkSkips" : 0,
"millis" : 31057,
"server" : "some_server:27017",
"filterSet" : false
}
And this is the collection stats:
db.call_records.stats()
{
"ns" : "stc_cdrs.call_records",
"count" : 1825338618,
"size" : 438081268320,
"avgObjSize" : 240,
"storageSize" : 468641284752,
"numExtents" : 239,
"nindexes" : 3,
"lastExtentSize" : 2146426864,
"paddingFactor" : 1,
"systemFlags" : 0,
"userFlags" : 1,
"totalIndexSize" : 165290709024,
"indexSizes" : {
"_id_" : 73450862016,
"caller_id_1" : 45919923504,
"receiver_id_1" : 45919923504
},
"ok" : 1
}
I am running Ubuntu Server with 125 GB of RAM.
Note that I will run this analysis only once (it is not something I will do periodically).
If the indexes on caller_id and receiver_id form a single compound index, this query will do a collection scan instead of an index scan. Make sure each field has its own separate index, i.e.:
db.user_records.ensureIndex({caller_id:1})
db.user_records.ensureIndex({receiver_id:1})
You can confirm that your query is doing an index scan in the mongo shell:
db.user_records.find({'$or':[{caller_id:'example'},{receiver_id:'example'}]}).explain()
If the explain plan reports the cursor type as BtreeCursor, you're using an index scan. If it says BasicCursor, you're doing a collection scan, which is not good.
It would also be interesting to know the size of each index. For the best query performance, both indexes should be completely loaded into RAM. If the indexes are so large that only one (or neither!) fits into RAM, you will have to page them in from disk to look up the results. If they're too big to fit in your RAM, your options are not great: basically either splitting up the collection in some manner and re-indexing it, or getting more RAM. You could always get a RAM-heavy AWS instance just for the purpose of this analysis, since it is a one-off.
I am no expert in MongoDB, but I had a similar problem and the following suggestions helped me tackle it. I hope they help you too.
Your query is using the indexes and scanning only the matching documents, so there is no issue with your indexing. Still, I would suggest that you:
First of all, check the output of: mongostat --discover
Watch parameters such as page faults and index misses.
Have you tried warming up (that is, checking the query's performance after running it once so the data is already in memory)? What is the performance after warming up? If it is the same as before, there may be page faults.
If you are going to run this as a one-off analysis, I think warming up the database might help you.
I don't know why your approach is so slow.
But you might want to try these alternative approaches:
Use $in with many IDs at once. I'm not sure whether MongoDB handles millions of values well, but if it does not, sort the list of IDs and then split it into batches (see the sketch after this list).
Do a collection scan in the application and check each entry against a hash set containing the interesting IDs. This should have acceptable performance for a one-off script, especially since you're interested in so many IDs.
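A rough shell sketch of the $in idea, assuming the 2 million IDs have been loaded into a userIds array and processRecord is your own handler (both names are hypothetical):
// Query each single-field index with $in on a slice of the id list instead of issuing one $or query per user.
// Note: a record whose caller and receiver are both in the same batch will be seen twice.
var batchSize = 1000;
for (var i = 0; i < userIds.length; i += batchSize) {
    var batch = userIds.slice(i, i + batchSize);
    db.call_records.find({ caller_id: { $in: batch } }).forEach(processRecord);
    db.call_records.find({ receiver_id: { $in: batch } }).forEach(processRecord);
}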
We use MongoDB full-text search to find products in our database.
Unfortunately it is incredibly slow.
The collection contains 89.114.052 documents and I have the suspicion that the full-text index is not used.
Performing a search with explain(), nscannedObjects returns 133212.
Shouldn't this be 0 if an index is used?
My index:
{
"v" : 1,
"key" : {
"_fts" : "text",
"_ftsx" : 1
},
"name" : "textIndex",
"ns" : "search.products",
"weights" : {
"brand" : 1,
"desc" : 1,
"ean" : 1,
"name" : 3,
"shop_product_number" : 1
},
"default_language" : "german",
"background" : false,
"language_override" : "language",
"textIndexVersion" : 2
}
The complete test search:
> db.products.find({ $text: { $search: "playstation" } }).limit(100).explain()
{
"cursor" : "TextCursor",
"n" : 100,
"nscannedObjects" : 133212,
"nscanned" : 133212,
"nscannedObjectsAllPlans" : 133212,
"nscannedAllPlans" : 133212,
"scanAndOrder" : false,
"nYields" : 1041,
"nChunkSkips" : 0,
"millis" : 105,
"server" : "search2:27017",
"filterSet" : false
}
Please have a look at the question you asked:
".... The collection contains 89.114.052 documents and I have the suspicion, that the full text index is not used ...."
Your "nscanned" is only 133212 documents. Of course the index is used. If it were not, then 89,114,052 documents (because this is the English locale and not German) would have been reported in "nscanned", which would mean an index is not used.
Your query is slow. Well, it seems your hardware is not up to the task of keeping those 133212 documents in memory, or otherwise does not have a disk fast enough to "page" effectively. But this is not a MongoDB problem, it is yours.
You have over 100,000 documents that match your query, and even if you only want 100 of them you need to accept that this is how it works: MongoDB does not "give up" and yield control once it has matched 100 documents. The query pattern here finds all of the matches and then applies the "limit" to the cursor in order to return just the requested number.
Maybe some time in the future the "text" functionality will allow you to do things like you can in the aggregation version of $geoNear and specify "minimum" and "maximum" values for the "score" in order to narrow results. But right now it does not.
So either upgrade your hardware or use an external text-search solution if your problem is slow results when matching over 100,000 documents out of more than 89,000,000.
The query:
db.myColl.find({"M.ST": "mostrepresentedvalueinthecollection", "M.TS": new Date(2014,2,1)}).explain()
The explain output:
"cursor" : "BtreeCursor M.ST_1_M.TS_1",
"isMultiKey" : false,
"n" : 587606,
"nscannedObjects" : 587606,
"nscanned" : 587606,
"nscannedObjectsAllPlans" : 587606,
"nscannedAllPlans" : 587606,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 9992,
"nChunkSkips" : 0,
"millis" : 174820,
"indexBounds" : {
"M.ST" : [
[
"mostrepresentedvalueinthecollection",
"mostrepresentedvalueinthecollection"
]
],
"M.TS" : [
[
ISODate("2014-03-01T00:00:00Z"),
ISODate("2014-03-01T00:00:00Z")
]
]
},
"server" : "myServer"
Additional details: myColl contains about 40 million documents; the average object size is 300 bytes.
I don't get why indexOnly is not set to true; I have a compound index on {"M.ST": 1, "M.TS": 1}.
The mongo host is a Unix box with 16 GB RAM and 500 GB of disk space (spinning disk).
The total index size of the database is 10 GB. We get around 1K upserts/sec; of those 1K, 20 are inserts and the rest are increments.
We have another query that adds a third field ("M.X") to the find, together with a compound index on "M.ST", "M.X", "M.TS". That one is lightning fast and scans only 330 documents.
Any idea what could be wrong ?
Thanks.
EDIT: here's the structure of a sample document:
{
"_id" : "somestring",
"D" : {
"20140301" : {
"IM" : {
"CT" : 143
}
},
"20140302" : {
"IM" : {
"CT" : 44
}
},
"20140303" : {
"IM" : {
"CT" : 206
}
},
"20140314" : {
"IM" : {
"CT" : 5
}
}
},
"Y" : "someotherstring",
"IM" : {
"CT" : 1
},
"M" : {
"X" : 99999,
"ST" : "mostrepresentedvalueinthecollection",
"TS" : ISODate("2014-03-01T00:00:00.000Z")
},
}
The idea is to store some analytics metrics by month; the "D" field holds a subdocument with an entry for each day of the month.
EDIT:
This feature is not currently implemented; the corresponding JIRA ticket is SERVER-2104. You can vote for it, but for now, to use covered index queries you need to avoid dot notation / embedded documents.
I think you need to set a projection on that query, so that only fields contained in the index are returned and the query can be covered.
Try this:
db.myColl.find({"M.ST": "mostrepresentedvalueinthecollection", "M.TS": new Date(2014,2,1)}, { "M.ST": 1, "M.TS": 1, _id: 0 }).explain()
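Given SERVER-2104, one possible workaround (a sketch only, assuming you can afford to duplicate the two fields at the top level of the document) is to index the top-level copies so the projection can be covered:
// ST and TS are hypothetical top-level copies of M.ST and M.TS.
db.myColl.ensureIndex({ ST: 1, TS: 1 })
db.myColl.find(
    { ST: "mostrepresentedvalueinthecollection", TS: new Date(2014, 2, 1) },
    { _id: 0, ST: 1, TS: 1 }
).explain()  // indexOnly should now report true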
What's the most efficient way to find data in Mongo when the input is a single value and the collection's documents contain min/max ranges? E.g.:
record = { min: number, max: number, payload }
I need to locate the record whose min/max range contains a given number. The ranges never intersect, and there is no predictability about their size.
The collection has ~6M records in it. If I unpack the ranges (have records for each value in range), I would be looking at about 4B records instead.
I've created a compound index {min: 1, max: 1}, but a search using:
db.block.find({min: {$lte: value}, max: {$gte: value}})
... takes anywhere from a few to tens of seconds. Below is the output of explain() and getIndexes(). Is there any trick I can apply to make the search significantly faster?
NJmongo:PRIMARY> db.block.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "mispot.block",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"min" : 1,
"max" : 1
},
"ns" : "mispot.block",
"name" : "min_1_max_1"
}
]
NJmongo:PRIMARY> db.block.find({max:{$gte:1135194602},min:{$lte:1135194602}}).explain()
{
"cursor" : "BtreeCursor min_1_max_1",
"isMultiKey" : false,
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1199049,
"nscannedObjectsAllPlans" : 1199050,
"nscannedAllPlans" : 2398098,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 7534,
"nChunkSkips" : 0,
"millis" : 5060,
"indexBounds" : {
"min" : [
[
-1.7976931348623157e+308,
1135194602
]
],
"max" : [
[
1135194602,
1.7976931348623157e+308
]
]
},
"server" : "ccc:27017"
}
If the ranges of your block records never overlap, then you can accomplish this much faster with:
db.block.find({min:{$lte:value}}).sort({min:-1}).limit(1)
This query will return almost instantly since it can find the record with a simple lookup in the index.
The query you are running is slow because the two clauses each match millions of records that must be merged. In fact, I think your query would run faster (maybe much faster) with separate indexes on min and max, since the max part of your compound index can only be used for a given min, not to search for documents with a specific max.
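A small sketch of how that lookup might be wrapped up, assuming a value can fall in a gap between ranges (in that case the candidate record still needs a max check):
// Find the block with the greatest min <= value, then verify the value actually falls inside its [min, max] range.
var value = 1135194602;
var cursor = db.block.find({ min: { $lte: value } }).sort({ min: -1 }).limit(1);
var candidate = cursor.hasNext() ? cursor.next() : null;
if (candidate !== null && candidate.max >= value) {
    printjson(candidate);  // the matching block
} else {
    print("no block covers this value");
}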