I have a MongoDB collection with about 100,000 documents.
Each document has an array with roughly 100 elements. It is an array of strings like this:
features: [
    "0_Toyota",
    "29776_Grey",
    "101037_Hybrid",
    "240473_Iron Gray",
    "46290_Aluminium,Magnesium",
    "2787_14",
    "9350_1920 x 1080",
    "36303_Y",
    "310870_N",
    "57721_Y",
    ...
]
Queries like the one below are usually very fast, but they sometimes get very slow when one specific extra condition is included inside $and. I have no idea why this happens. When it gets slow, it takes more than 40 seconds. It always happens with the same extra condition, and it is quite possible that it also happens with other conditions.
db.products.find({
    $and: [
        { "features" : { "$eq" : "36303_N" } },
        { "features" : { "$eq" : "91135_IPS" } },
        { "features" : { "$eq" : "9350_1366 x 768" } },
        { "features" : { "$eq" : "178874_Y" } },
        { "features" : { "$eq" : "43547_Y" } },
        ...
    ]
})
I'm running the same MongoDB on my Unix laptop and on a Linux server instance.
I have also tried indexing the "features" field, with the same results.
Using $all in your Mongo query helps when you need to match several values in an array. First, create an index on features, then try a query like this:
db.products.find( { features: { $all: ["36303_N", "91135_IPS","others..."] } } )
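For reference, a minimal sketch of that index in the shell; a plain single-field index on the array is enough, since MongoDB builds it as a multikey index with one key per array element:
// single-field index on the array field (multikey index)
db.products.ensureIndex({ features: 1 })   // on newer servers: db.products.createIndex({ features: 1 })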
By the way, if your query is very slow, get the slow operation from your mongod log, share your MongoDB version, and check whether any writes are happening while you query (writes can block reads in some versions).
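If the slow operations are not already in the log, one way to capture them is the database profiler; a quick sketch, where the 100 ms threshold is just an example value:
// log operations slower than 100 ms to the system.profile collection
db.setProfilingLevel(1, 100)

// later, inspect the slowest recent operations
db.system.profile.find().sort({ millis: -1 }).limit(5).pretty()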
I have realized that the order of elements inside $all matters. I changed the order of the elements according to how many documents in the collection contain each one, ascending, making the query more selective.
Before, the query took ~40 seconds to execute; now, with the elements ordered, it takes ~22 seconds.
That is still many seconds, though.
I'm running some queries against a MongoDB 2.4.9 server to populate a datatable on a webpage. The user needs to be able to do a substring search across multiple fields, sort the data on various columns, and flip through the results in pages. I have to check multiple fields for matches since the user could be searching for anything related to the documents. There are about 300,000 documents in the collection, so the database is relatively small.
I have indexes created for the created_by, requester, desc.name, metaprogram.id, program.id, and arr.programid fields. I've also created indexes [("created", 1), ("created_by", 1), ("requester", 1)] and [("created_by", 1), ("requester", 1)] at the suggestion of Dex.
It's also worth mentioning that documents might not have all of the fields that are being searched for here. Some documents might have a metaprogram.id but not the other ID fields for example.
An example of a query I might run is
{
"$query" : {
"$and" : [
{
"created_by" : {"$ne" : "automation"},
"requester" : {"$in" : ["Broadway", "Spec", "Falcon"] }
},
{
"$or" : [
{"requester" : /month/i },
{"created_by" : /month/i },
{"desc.name" : /month/i },
{"metaprogram.id" : {"$in" : [708, 2314, 709 ] } },
{"program.id" : {"$in" : [708, 2314, 709 ] } },
{"arr.programid" : {"$in" : [708, 2314, 709 ] } }
]
}
]
},
"$orderby" : {
"created" : 1
}
}
with differing orderby, limit, and skip values as well.
Queries on average take 500-1500ms to complete.
I've looked into how to make it faster, but haven't been able to come up with anything. Some of the text searching features look handy, but as far as I know each collection supports at most one text index, and it doesn't support pagination (skips). I'm sure that prefix searching instead of regex substring matching would be faster as well, but I need substring matching.
Is there anything you can think of to improve the speed of a query like this?
It's quite hard to optimize a query when it's unpredictable.
Analyze how the system is being used and place indexes on the most popular fields.
Use .explain() to make sure the indexes are being used.
Also limit the results returned to a value of 50 or 100. The user doesn't need to see everything at once.
Try upgrading MongoDB to see if there's a performance improvement.
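As an illustration of the explain and limit suggestions, here is a sketch of checking the plan while capping the page size; the collection name is a placeholder and the filter is lifted from the question:
// "requests" is a placeholder collection name; check that the winning plan
// uses one of the existing indexes rather than a full collection scan
var cursor = db.requests.find({
    created_by: { $ne: "automation" },
    requester:  { $in: ["Broadway", "Spec", "Falcon"] }
}).sort({ created: 1 }).limit(50);

cursor.explain()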
Side note:
You might want to consider using ElasticSearch as a search engine instead of MongoDB. ElasticSearch would store the searchable fields and return the MongoDB ids for matched results. ElasticSearch is an order of magnitude faster as a search engine than MongoDB.
More info:
How to find queries not using indexes or slow in mongodb
Range query for MongoDB pagination
http://www.elasticsearch.org/overview/
Let's assume I would like to track computers on my network, storing for each MAC address the device name, port name, VLAN number, and a timestamp.
I could grab the mac-address-table from all switches at regular intervals, parse it, and dump that data into MongoDB.
The problem is: how do I store only the last 100 unique entries for each MAC address?
Capped collections are a no-go, because I would have to create a separate collection for each MAC, which is a bad idea.
The number of switches and MAC addresses may change over time, and new data might be inserted at irregular intervals.
The other idea I have is to write a query that looks up the timestamp of the 100th newest entry for each MAC address and removes all older entries, and run it after each batch of inserts (a rough sketch of what I mean is below). It may work, but it doesn't seem very efficient.
Do you have any better ideas?
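For example, a rough sketch of that trim-after-insert idea, assuming I store one document per observation in an entries collection (the collection and field names are just placeholders):
// for each known mac, find the timestamp of its 100th newest entry
// and delete everything older than that
db.entries.distinct("mac").forEach(function (mac) {
    var cursor = db.entries.find({ mac: mac }).sort({ ts: -1 }).skip(99).limit(1);
    if (cursor.hasNext()) {
        var cutoff = cursor.next().ts;
        db.entries.remove({ mac: mac, ts: { $lt: cutoff } });
    }
});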
Hmm... I found something interesting:
How about storing each version in an array, using the $push operator with the $slice modifier?
There are some examples in the docs:
http://docs.mongodb.org/manual/reference/operator/update/slice/
The other idea I have is to write a query that looks up the timestamp of the 100th newest entry for each MAC address and removes all older entries, and run it after each batch of inserts. It may work, but it doesn't seem very efficient.
That sounds good to me. It might be cleaner to use a cyclic buffer for this:
{
    mac : "AA:AA:AA:AA:AA",
    entryPointer : 2, // pointer to the next entry to be written
    lastEntries : [
        { "ip" : "127.0.0.1", "service" : "foo", ts : ISODate(...), ... },
        { "ip" : "127.0.0.1", "service" : "foo", ts : ISODate(...), ... },
        { "ip" : "255.255.255.255", "service" : "longestProbableServiceName", ts : ISODate("0001-01-01"), ... },
        ...
        { "ip" : "255.255.255.255", "service" : "longestProbableServiceName", ts : ISODate("0001-01-01"), ... }
    ]
}
An update would have to increment the pointer and overwrite the position in the array given by pointer % 100. It is helpful to preallocate the memory in the array, as shown above, to avoid fragmentation and reallocation overhead.
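A rough sketch of that update in the shell, assuming the document shape above (note that the read and the write are two separate operations, so this is not atomic):
// read the current pointer, then overwrite that slot and advance the pointer
var doc  = db.foo.findOne({ mac : "AA:AA:AA:AA:AA" }, { entryPointer : 1 });
var slot = doc.entryPointer % 100;

var update = { $inc : { entryPointer : 1 }, $set : {} };
update.$set["lastEntries." + slot] = { ip : "10.0.0.1", service : "foo", ts : new Date() }; // example entry

db.foo.update({ mac : "AA:AA:AA:AA:AA" }, update);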
As you pointed out, a similar effect can also be achieved using $push with the $slice modifier:
db.foo.update(
{ mac : "AA:AA:AA..." },
{
$push: {
lastEntries : {
$each: [ { "ip" : "012.002.003.012", ... } ],
$slice: -100
}
}
}
)
Pre-populating the array also comes with the advantage that the most recent entry is always at the last position.
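A sketch of what such pre-population could look like when a MAC is first seen; the placeholder values mirror the document above, and new Date(0) stands in for the dummy timestamp:
// create the mac document with 100 fixed-size placeholder slots up front,
// so later overwrites don't grow the document and force it to move on disk
var placeholder = { ip : "255.255.255.255", service : "longestProbableServiceName", ts : new Date(0) };
var slots = [];
for (var i = 0; i < 100; i++) { slots.push(placeholder); }

db.foo.insert({ mac : "AA:AA:AA:AA:AA", entryPointer : 0, lastEntries : slots });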
I have a simple collection of elements like this
{_id: n, xs: [...]}
I'm trying to count the total number of elements across all arrays:
db.testRace.aggregate([{ $unwind : "$xs" }, { $group : { _id : null, count : { $sum : 1 } } }])
It works great unless I start doing massive updates on this collection. Under a heavy load of update operations I get the wrong total: slightly bigger than it should be.
It can be easily reproduced.
First, generate some test data:
for(var i = 1; i <= 1000000; i++) {
db.testRace.insert({_id: i, xs: [i]});
}
Then simulate a lot of updates:
while(true) {
    var id = Math.floor((Math.random() * 1000000) + 1);
    var obj = db.testRace.find({_id: id}).next();
    obj.some = "change";                   // adding a field grows the document,
    db.testRace.update({_id: id}, obj);    // which can make it move on disk
}
While it is running, do the aggregate/unwind query.
Without load I get the right result: 1000000. But when there are a lot of updates I get bigger numbers, like 1001456.
And if I run a query like this:
db.testRace.aggregate([{ $unwind : "$xs" }, {$group: {_id:"$xs", count:{$sum: 1}}}, { $sort : { count : -1 } }, { $limit : 2 }]);
I get
"result" : [
{
"_id" : 996972,
"count" : 2
},
{
"_id" : 997789,
"count" : 2
}
],
So it seems the aggregation counts some records twice.
Is this expected behaviour, or am I doing the aggregation wrong?
I tested on a local MongoDB instance, version 2.4.9.
It's expected behavior, due to the way MongoDB handles read isolation. When you have a long-running query (and an aggregation that reads every single document is a long-running query) while that data is being updated, the updates may affect whether or not the updated data is returned by the query. Depending on what happens when, you could miss a document, receive it once, or receive it twice.
From the source code:
Any data inserted, deleted, or modified during a yield that should be
returned by a query may or may not be returned by that query. The
query could return: nothing; the data before; the data after; or both
the data before and the data after.
In short, there is no isolation between a query and an
insert/delete/update. AKA, READ_UNCOMMITTED.
https://github.com/mongodb/mongo/blob/master/src/mongo/db/exec/plan_stage.h
Your aggregation query yields mid-query, and during those yields some of the data is updated. This affects the results of the query.
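If a stable count matters more than speed, one workaround to sketch for the MMAPv1-era versions discussed here is to scan with snapshot(), which prevents a document that moves on disk from being returned twice, and sum the array sizes client-side; it still does not isolate the scan from concurrent inserts or deletes:
// client-side total using snapshot(), so a document that moves during the scan
// is not visited twice; slower than the aggregation, but stable under updates
var total = 0;
db.testRace.find({}, { xs : 1 }).snapshot().forEach(function (doc) {
    total += doc.xs.length;
});
print(total);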
Very interesting: map-reduce works fine on a single instance, but not on a sharded collection. As you can see below, I have a collection and wrote a simple map-reduce function.
mongos> db.tweets.findOne()
{
"_id" : ObjectId("5359771dbfe1a02a8cf1c906"),
"geometry" : {
"type" : "Point",
"coordinates" : [
131.71778292855996,
0.21856835860911106
]
},
"type" : "Feature",
"properties" : {
"isflu" : 1,
"cell_id" : 60079,
"user_id" : 35,
"time" : ISODate("2014-04-24T15:42:05.048Z")
}
}
mongos> db.tweets.find({"properties.user_id":35}).count()
44247
mongos> map_flow
function () { var key=this.properties.user_id; var value={ "cell_id":1}; emit(key,value); }
mongos> reduce2
function (key,values){ var ros={flows:[]}; values.forEach(function(v){ros.flows.push(v.cell_id);});return ros;}
mongos> db.tweets.mapReduce(map_flow,reduce2, { out:"flows2", sort:{"properties.user_id":1,"properties.time":1} })
But the results are not what I want:
mongos> db.flows2.find({"_id":35})
{ "_id" : 35, "value" : { "flows" : [ null, null, null ] } }
I get lots of nulls, and interestingly there are always three of them.
Does MongoDB map-reduce not work correctly on sharded collections?
The number one rule of MapReduce is:
thou shalt emit values of the same type as thy reduce function returneth
You broke this rule, so your MapReduce only works for small collections where reduce is called at most once for each key (that's the second rule of MapReduce: the reduce function may be called zero, one, or many times).
Your map function emits exactly this value {cell_id:1} for each document.
How does your reduce function use this value? Well, you return a value which is a document with an array, into which you push the cell_id value. This is strange already, because that value was 1, so I'm not sure why you wouldn't just emit 1 (if you wanted to count).
But look at what happens when multiple shards push a bunch of 1's into this flows array (whether or not it's what you intended, that's what your code is doing) and reduce is then called on several already-reduced values:
reduce(key, [ {flows:[1,1,1,1]},{flows:[1,1,1,1,1,1,1,1,1]}, etc ] )
Your reduce function now tries to take each member of the values array (each of which is a document with a single field, flows) and push v.cell_id onto your flows array. There is no cell_id field there, so of course you end up with null. And the three nulls could be because you have three shards?
I would recommend that you articulate to yourself what exactly you are trying to aggregate in this code, and then rewrite your map and your reduce to comply with the rules that mapReduce in MongoDB expects your code to follow.
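For example, if the goal is simply to collect the cell_ids seen for each user_id (my reading of the intent), a sketch that follows the rule could look like this; the emitted value and the reduced value now share the { flows: [...] } shape:
// map: emit the same shape that reduce returns
var map_flow = function () {
    emit(this.properties.user_id, { flows : [ this.properties.cell_id ] });
};

// reduce: merge partial { flows: [...] } values; safe to call any number of times
var reduce2 = function (key, values) {
    var merged = { flows : [] };
    values.forEach(function (v) {
        merged.flows = merged.flows.concat(v.flows);
    });
    return merged;
};

db.tweets.mapReduce(map_flow, reduce2, { out : "flows2" });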
I have a collection of md5 values in MongoDB. I'd like to find all duplicates. The md5 field is indexed. Do you know of any fast way to do that using map-reduce?
Or should I just iterate over all records and check for duplicates manually?
My current approach using map-reduce iterates over the collection almost twice (assuming that there is a very small number of duplicates):
res = db.files.mapReduce(
    function () {
        emit(this.md5, 1);
    },
    function (key, vals) {
        return Array.sum(vals);
    },
    { out: "md5_counts" }   // output collection name is arbitrary; required on current versions
)

// only md5 values that occur more than once are duplicates
db[res.result].find({value: {$gt: 1}}).forEach(
    function (obj) {
        db.duplicates.insert(obj);
    });
I personally found that on big databases (1 TB and more) the accepted answer is terribly slow. Aggregation is much faster. An example is below:
db.places.aggregate(
{ $group : {_id : "$extra_info.id", total : { $sum : 1 } } },
{ $match : { total : { $gte : 2 } } },
{ $sort : {total : -1} },
{ $limit : 5 }
);
It searches for documents whose extra_info.id occurs two or more times, sorts the results in descending order of that count, and prints the first 5.
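Adapted to the md5 collection from the question, the same pipeline would look roughly like this:
// group by md5 and keep only the values that occur more than once
db.files.aggregate(
    { $group : { _id : "$md5", total : { $sum : 1 } } },
    { $match : { total : { $gte : 2 } } }
);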
The easiest way to do it in one pass is to sort by md5 and then process appropriately.
Something like:
var previous_md5;
db.files.find( { "md5" : { $exists : true } }, { "md5" : 1 } ).sort( { "md5" : 1 } ).forEach( function(current) {
    if (current.md5 == previous_md5) {
        db.duplicates.update( { "_id" : current.md5 }, { "$inc" : { count : 1 } }, true );
    }
    previous_md5 = current.md5;
});
That little script sorts the md5 entries and loops through them in order. If an md5 is repeated, the matching documents will be "back-to-back" after sorting, so we just keep a pointer to previous_md5 and compare it to current.md5. If we find a duplicate, it gets dropped into the duplicates collection (using $inc to count the number of duplicates).
This script means that you only have to loop through the primary data set once. Then you can loop through the duplicates collection and perform clean-up.
You can do a group by that field and then query to get the duplicates (having a count > 1). http://www.mongodb.org/display/DOCS/Aggregation#Aggregation-Group
Although the fastest thing might be to just do a query which only returns that field and then do the aggregation in the client. Group/map-reduce needs access to the whole document, which is much more costly than providing just the data from the index (which is covered in 1.7.3+).
If this is a general problem you need to run periodically, you might want to keep a collection which is just {md5:value, count:value} so you can skip the aggregation, and it will be extremely fast when you need to cull duplicates.
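A minimal sketch of maintaining such a tally collection; the md5_counts name and the newFile variable are just placeholders for whatever your insert path uses:
// bump the per-md5 counter whenever a file document is inserted (upsert creates it on first sight)
db.md5_counts.update(
    { _id : newFile.md5 },       // newFile: the hypothetical document just inserted into db.files
    { $inc : { count : 1 } },
    true                         // upsert
);

// culling duplicates later is then a cheap query against the small tally collection
db.md5_counts.find({ count : { $gt : 1 } });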