I have Mongo 2.2.2 running on Windows 7 x64 on an eight-core i7 CPU. Our production servers run Red Hat Enterprise on 256-core machines with the same version of Mongo.
In my tests of the following call on my Windows machine:
db.users_v2_prod.aggregate( { $group : {_id : "$email", total : { $sum : 1 } } }, { $match : { total : { $gte : 3 } } }, { $sort : {total : -1} }, {$limit : 5} )
I noticed that Mongo underutilizes the available resources. During the query, total CPU load is ~10%. According to Process Explorer, computation occurs in only one thread; mongod seems to be using only 3 of the 8 cores I have, and even those are only partially used.
Could Mongo's engineers please explain the rationale for this implementation? I'm curious why not use more resources if they are available. Why not parallelize the load across all cores, since there is an index on the field I'm grouping by?
The query was executed on a collection with 6.5M documents (mongodump produces a 5GB file), so it's nothing crazy.
P.S. Bonus question: have you thought about using the GPU? I have a 1024-core GPU on my laptop :)
In all likelihood, the CPU is not the limiting factor here - that is true most of the time with typical MongoDB use cases. Your query does not look computationally intensive, so it's more likely hitting a limit in terms of paging data off disk or running out of RAM.
It's hard to say without seeing actual stats for the run (for that I would recommend having the host in MMS with munin-node installed), but I have rarely seen the CPU be the bottleneck on a MongoDB instance.
Having said all that, the parallelization can probably be improved, but it may not be the quickest thing to get implemented. If none of the above is happening or relevant, I would see if you can run multiple jobs in parallel, or perhaps split up the work on the client side to see if you can improve matters that way (see the sketch after the links below). You should also probably watch/vote/comment on these issues:
https://jira.mongodb.org/browse/SERVER-5091 (parallelize aggregating operations)
https://jira.mongodb.org/browse/SERVER-5088 (parallel query)
https://jira.mongodb.org/browse/SERVER-4504 (adding explain for aggregation framework) (added in 2.6)
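One way to split the work on the client side is to partition the collection into disjoint email ranges, run one pipeline per range from a separate client connection, and merge the partial results in your application. A minimal sketch, with the range boundaries being purely illustrative:

// Each range's pipeline would be run from its own client thread/connection;
// the shell loop below only shows the per-range query itself.
var ranges = [
    { email: { $lt: "i" } },
    { email: { $gte: "i", $lt: "q" } },
    { email: { $gte: "q" } }
];
ranges.forEach(function (r) {
    db.users_v2_prod.aggregate(
        { $match : r },
        { $group : { _id : "$email", total : { $sum : 1 } } },
        { $match : { total : { $gte : 3 } } },
        { $sort : { total : -1 } },
        { $limit : 5 }
    );
});

Because the ranges are disjoint, each email's count is complete within its range; the client then just has to merge the per-range top-5 lists and re-sort to get the overall top 5.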
Related
We have a problem with aggregation queries running for a long time (a couple of minutes).
Collection:
We have a collection of 250 million documents with about 20 fields per document.
The total size of the collection is 110GB.
We have indexes on the "our_id" and "dtKey" fields.
Hardware:
Memory:
24GB RAM (6 x 4GB DIMMs, 1333 MHz)
Disk:
11TB LVM volume built from 4 x 3TB disks:
600MB/s maximum instantaneous data transfers.
7200 RPM spindle. Average latency = 4.16ms
RAID 0
CPU:
2 x E5-2420 0 @ 1.90GHz
Total of 12 cores with 24 threads.
Dell R420.
Problem:
We are trying to make an aggregation query of the following:
db.our_collection.aggregate(
    [
        {
            "$match": {
                "$and": [
                    { "dtKey": { "$gte": 20140916 } },
                    { "dtKey": { "$lt": 20141217 } },
                    { "our_id": "111111111" }
                ]
            }
        },
        {
            "$project": {
                "field1": 1,
                "date": 1
            }
        },
        {
            "$group": {
                "_id": {
                    "day": { "$dayOfYear": "$date" },
                    "year": { "$year": "$date" }
                },
                "field1": { "$sum": "$field1" }
            }
        }
    ]
);
This query takes a couple of minutes to run; while it is running we can see the following:
The current Mongo operation yields more than 300K times
On iostat we see ~100% disk utilization
After this query is done, the data seems to be in cache and running it again finishes in a split second.
After running it for 3-4 different users, it seems that the first user's data has already been evicted from the cache and the query takes a long time again.
We have tested a count on the matching part and seen that we have users with 50K documents as well as users with 500K documents.
We tried running only the matching part:
db.pub_stats.aggregate(
    [
        {
            "$match": {
                "$and": [
                    { "dtKey": { "$gte": 20140916 } },
                    { "dtKey": { "$lt": 20141217 } },
                    { "our_id": "112162107" }
                ]
            }
        }
    ]
);
This query seems to take approximately 300-500MB of memory,
but after running the full query, it seems to take 3.5GB of memory.
Questions:
Why does the aggregation pipeline take so much memory?
How can we improve performance so that it runs in a reasonable time for an HTTP request?
Why does the aggregation pipeline take so much memory?
Just performing a $match doesn't have to read the actual documents; it can be answered from the indexes. Because the projection accesses field1, the actual documents have to be read, and they will probably be cached as well.
Also, grouping can be expensive. Normally, it should report an error if your grouping stage requires more than 100MB of memory - what version are you using? Grouping requires scanning the entire result set before yielding, and MongoDB will have to at least store a pointer or an index for each element in the groups. I guess the key reason for the memory increase is the former.
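If you are on MongoDB 2.6 or later and the $group stage really does hit the 100MB limit, you can allow the stage to spill to disk instead of erroring out; it's slower, but it avoids the in-memory cap. A minimal sketch:

db.our_collection.aggregate(
    [ /* the same pipeline as above */ ],
    { allowDiskUse: true }
);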
How can we improve performance so that it runs in a reasonable time for an HTTP request?
Your dtKey appears to encode time, and the grouping is also done based on time. I'd try to exploit that fact - for instance, by precomputing aggregates for each day and our_id combination. That makes a lot of sense if there are no other criteria and the data doesn't change much anymore.
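A minimal sketch of such a precomputation, assuming a hypothetical rollup collection daily_field1_totals that your application updates whenever a source document is written (the collection and field names are illustrative, not from the original post):

// One rollup document per (our_id, day):
db.daily_field1_totals.ensureIndex({ our_id: 1, dtKey: 1 }, { unique: true });

// On every write of a source document `doc`, bump the per-day running total:
db.daily_field1_totals.update(
    { our_id: doc.our_id, dtKey: doc.dtKey },
    { $inc: { field1: doc.field1 } },
    { upsert: true }
);

// The minutes-long aggregation then becomes a cheap indexed range read:
db.daily_field1_totals.find(
    { our_id: "111111111", dtKey: { $gte: 20140916, $lt: 20141217 } }
);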
Otherwise, I'd try to move the {"our_id":"111111111"} criterion to the first position, because equality should always precede range queries. I guess the query optimizer of the aggregation framework is smart enough, but it's worth a try. Also, you might want to try turning your two indexes into a single compound index { our_id, dtKey }. Index intersections are supported now, but I'm not sure how efficient that really is. Use the built-in profiler and .explain() to analyze your query.
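A sketch of both suggestions together (compound index with the equality field leading, and the equality criterion listed first in the $match):

// Compound index: equality field first, range field second.
db.our_collection.ensureIndex({ our_id: 1, dtKey: 1 });

// Reordered $match stage; the rest of the pipeline stays the same:
{ "$match": { "our_id": "111111111", "dtKey": { "$gte": 20140916, "$lt": 20141217 } } }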
Lastly, MongoDB is designed for write-heavy use, and scanning data sets of hundreds of GB from disk in a matter of milliseconds simply isn't feasible. If your dataset is larger than your RAM, you'll face massive IO delays on the scale of tens of milliseconds and upwards, tens or hundreds of thousands of times over, because of all the required disk operations. Remember that with random access you'll never get even close to the theoretical sequential disk transfer rates. If you can't precompute, I guess you'll need a lot more RAM. Maybe SSDs would help, but that is all just guesswork.
I am trying to run some wildcard/regex-based queries on a Mongo cluster from the Java driver.
Mongo Replica Set config:
3-member replica set
16 CPUs (hyperthreaded), 24GB RAM, Linux x86_64
Collection size: 6M documents, 7GB of data
The client is localhost (Mac OS X 10.8) with the latest mongo-java driver
Query using the Java driver with readPreference = primaryPreferred:
{ "$and" : [{ "$or" : [ { "country" : "united states"}]} , { "$or" : [ { "registering_organization" : { "$regex" : "^.*itt.*hartford.*$"}} , { "registering_organization" : { "$regex" : "^.*met.*life.*$"}} , { "registering_organization" : { "$regex" : "^.*cardinal.*health.*$"}}]}]}
I have a regular index on both "country" and "registering_organization". But as per the Mongo docs, a single query can utilize only one index, and I can see that from explain() on the above query as well.
So my question is: what is the best alternative to achieve better performance on the above query?
Should I break up the 'and' operations and do the intersection in memory? Going further, I will have 'not' operations in the query too.
I think my application may turn into reporting/analytics in the future, but that's not imminent and I am not looking to design for it yet.
There are so many things wrong with this query.
Your nested conditional with regexes will never get faster in MongoDB. MongoDB is not the best tool for "data discovery" (e.g. ad-hoc, multi-conditional queries for uncovering unknown information). MongoDB is blazing fast when you know the metrics you are generating. But, not for data discovery.
If this is a common query you are running, then I would create an attribute called "united_states_or_health_care", and set the value to the timestamp of the create date. With this method, you are moving your logic from your query to your document schema. This is one common way to think about scaling with MongoDB.
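A minimal sketch of that idea, with the flag set by your application at write time. The collection name (registrations) and the helper isHealthCareOrg are hypothetical placeholders:

// At insert/update time, precompute the flag instead of evaluating it at query time.
// isHealthCareOrg() is a hypothetical application-side check, not a MongoDB feature.
if (doc.country === "united states" && isHealthCareOrg(doc.registering_organization)) {
    doc.united_states_or_health_care = new Date();   // timestamp of the create date
}

// The expensive $and/$or/regex query then collapses to an indexed check:
db.registrations.ensureIndex({ united_states_or_health_care: 1 });
db.registrations.find({ united_states_or_health_care: { $exists: true } });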
If you are doing data discovery, you have a few different options:
Have your application concatenate the results of the different queries
Run query on a secondary MongoDB, and accept slower performance
Pipe your data to Postgresql using mosql. Postgres will run these data-discovery queries much faster.
Another Tip:
Your regexes are not anchored in a way that can use an index (a leading ^.* still forces a scan). It would be best to run your "registering_organization" attribute through a "findable_registering_organization" filter. The filter would break apart the organization into an array of queryable name subsets, and you would quit using the regexes entirely. +2 points if you can filter incoming names by an industry lookup.
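A rough sketch of that filter; the array field name (org_tokens) and collection name (registrations) are purely illustrative:

// At write time, break the organization name into lowercase tokens:
doc.org_tokens = doc.registering_organization
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(function (t) { return t.length > 0; });

// Multikey index over the token array:
db.registrations.ensureIndex({ org_tokens: 1 });

// "met life", "itt hartford", etc. become exact, indexed lookups instead of regexes:
db.registrations.find({ country: "united states", org_tokens: { $all: ["met", "life"] } });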
I have quite a bit of experience with Mongo, but am on the verge of tears of frustration about this problem (that of course popped up from nowhere a day before release).
Basically I am querying a database to retrieve a document, but it is often an order of magnitude (or even two) slower than it should be, which is especially odd since the query returns nothing.
Query:
//searchQuery ex: { "atomic.Basic^SessionId" : "a8297898-7fc9-435c-96be-9c5e60901e40" }
var doc = FindOne(searchQuery);
Explain:
{
    "cursor" : "BtreeCursor atomic.Basic^SessionId",
    "isMultiKey" : false,
    "n" : 0,
    "nscannedObjects" : 0,
    "nscanned" : 0,
    "nscannedObjectsAllPlans" : 0,
    "nscannedAllPlans" : 0,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "atomic.Basic^SessionId" : [
            [
                "a8297898-7fc9-435c-96be-9c5e60901e40",
                "a8297898-7fc9-435c-96be-9c5e60901e40"
            ]
        ]
    }
}
It is often taking 50-150 ms, even though mongotop reports at most 15 ms of read time (and that should be over several queries). There are only 6k documents in the database (only 2k or so are in the index, and the explain says it's using the index) and since the document being searched for isn't there, it can't be a deserialization issue.
It's not this bad on every query (sub ms most of the time) and surely the B-tree isn't large enough to have that much variance.
Any ideas will have my eternal gratitude.
MongoTop is not reporting the total query time. It reports the amount of time MongoDB is spending holding particular locks.
That query returned in 0ms according to the explain (which is extremely quick). What you are describing sounds like network latency. What is the latency when you ping the node? Is it possible the network is flaky?
What version of MongoDB are you using? Consider upgrading both MongoDB and the C# driver to the latest stable versions.
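If you want to measure the round trip from the application side rather than with an ICMP ping, a quick sketch from the mongo shell is:

// Average round-trip time of a trivial server command over 100 iterations:
var start = new Date();
for (var i = 0; i < 100; i++) { db.adminCommand({ ping: 1 }); }
print("avg round trip: " + (new Date() - start) / 100 + " ms");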
I have 2 EC2 instances, one as a MongoDB server and the other as a Python web app (same availability zone). The Python server connects to the Mongo server using PyMongo and everything works fine.
The problem is, when I profile execution time in Python, some calls (less than 5%) take up to a couple of seconds to return. I was able to narrow down the problem, and the delay is actually in the DB calls to the Mongo server.
The two causes I considered were:
1. The Mongo server is slow/over-loaded
2. Network latency
So I tried upgrading the Mongo server to a 4x faster instance, but the issue still happens (some calls take even 3 seconds to return). I assumed that since both servers are on EC2, network latency should not be a problem... but maybe I was wrong.
How can I confirm if the issue is actually the network itself? If so, what is the best way to solve it? Is there any other possible cause?
Any help is appreciated...
Thanks,
UPDATE: The entities that I am fetching are very small (and indexed), and usually the calls take only 0.01-0.02 seconds to finish.
UPDATE:
As suggested by James Wahlin, I enabled profiling on my Mongo server and got some interesting log lines:
Fri Mar 15 18:05:22 [conn88635] query db.UserInfoShared query: { $or: [ { _locked: { $exists: false } }, { _locked: { $lte: 1363370603.297361 } } ], _id: "750837091142" } ntoreturn:1 nscanned:1 nreturned:1 reslen:47 2614ms
Fri Mar 15 18:05:22 [conn88635] command db.$cmd command: { findAndModify: "UserInfoShared", fields: { _id: 1 }, upsert: true, query: { $or: [ { _locked: { $exists: false } }, { _locked: { $lte: 1363370603.297361 } } ], _id: "750837091142" }, update: { $set: { _locked: 1363370623.297361 } }, new: true } ntoreturn:1 reslen:153 2614ms
You can see these two calls took more than 2 seconds to finish. The _id field has a unique index and finding it should not have taken this much time. Maybe I have to post a new question for it, but can the MongoDB GLOBAL LOCK be the cause?
@James Wahlin, thanks a lot for helping me out.
As it turned out, the main cause of latency was the MongoDB GLOBAL LOCK itself. Our lock percentage was averaging 5%, with occasional peaks of 30-50%, and that resulted in the slow queries.
If you are facing this issue, the first thing you have to do is enable the MongoDB MMS service (mms.10gen.com), which will give you a lot of insight into what exactly is happening in your DB server.
In our case the LOCK PERCENTAGE was really high, and there were multiple reasons for it. The first thing to do to figure it out is to read the MongoDB documentation on concurrency:
http://docs.mongodb.org/manual/faq/concurrency/
The cause of locking can be at the application level, in MongoDB itself, or in hardware.
1) Our app was doing a lot of updates, and each update (more than 100 ops/sec) holds the global lock in MongoDB. The issue is that when an update hits an entry which is not in memory, Mongo has to load the data into memory first and then update it (in memory), and the whole process happens inside the global lock. If, say, the whole thing takes 1 second to complete (0.75 sec to load the data from disk and 0.25 sec to update it in memory), all the other update calls wait for the entire second, such updates start queuing up, and you notice more and more slow requests in your app server.
The solution (while it might sound silly) is to query for the same data before you make the update. This effectively moves the 'load data into memory' (0.75 sec) part outside the global lock, which greatly reduces your lock percentage.
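A minimal sketch of this pre-touch pattern, reusing the _locked/findAndModify example from the log lines above:

// 1) Touch the document first so it gets paged into memory outside the global lock
//    (the result is thrown away; we only care about the side effect of loading it):
db.UserInfoShared.findOne({ _id: "750837091142" });

// 2) Then run the actual findAndModify; it now operates on in-memory data:
db.UserInfoShared.findAndModify({
    query: { _id: "750837091142", $or: [ { _locked: { $exists: false } }, { _locked: { $lte: 1363370603.297361 } } ] },
    update: { $set: { _locked: 1363370623.297361 } },
    fields: { _id: 1 },
    upsert: true,
    new: true
});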
2) The other main cause of the global lock is MongoDB's data flush to disk. Basically, every 60 seconds (or less) MongoDB (or the OS) writes the data to disk, and a global lock is held during this process. (This kind of explains the random slow queries.) In your MMS stats, look at the graph for background flush avg... if it's high, that means you need faster disks.
In our case, we moved to a new EBS-optimized instance in EC2 and also bumped our provisioned IOPS from 100 to 500, which almost halved the background flush avg; the servers are much happier now.
Is there a way that I can protect my app against slow queries in MongoDB?
My application has tons of possibilities of filters and I'm monitoring all these queries but at the same time I don't want to compromise performance because of a missing index definition.
The 'notablescan' option, as @ghik mentioned, will prevent you from running queries that are slow due to not using an index. However, that option is global to the server, and it is not appropriate for use in a production environment. It also won't protect you from any other source of slow queries besides table scans.
Unfortunately, I don't think there is a way to directly do what you want right now. There is a JIRA ticket proposing the addition of a $maxTime or $maxScan query parameter, which sounds like it would help you, so please vote for it: https://jira.mongodb.org/browse/SERVER-2212.
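For completeness, this is how notablescan is typically enabled in a development or test environment (again, don't do this on a production server):

// At startup:
//   mongod --notablescan
// Or at runtime, from the shell:
db.adminCommand({ setParameter: 1, notablescan: 1 });
// Any query that would require a collection scan now fails with an error instead of running slowly.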
There are options available on the client side (maxTimeMS starting in 2.6 release).
On the server side, there is no appealing global option, because it would impact all databases and all operations, even ones that the system needs to keep long-running for internal operation (for example, tailing the oplog for replication). In addition, it may be okay for some of your queries to be long running by design.
The correct way to solve this would be to monitor currently running queries via a script and kill the ones that are long running and user/client initiated - you can then build in exceptions for queries that are long running by design, or have different thresholds for different queries/collections/etc.
You can use the db.currentOp() method (in the shell) to see all currently running operations. The field "secs_running" indicates how long the operation has been running. Be careful not to kill any long-running operations that are not initiated by your application/client - they may be necessary system operations, like chunk migration in a sharded cluster (as just one example).
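A minimal shell sketch of that idea; the 30-second threshold and the "myapp." namespace filter are just illustrative, and you would add your own exceptions for queries that are long running by design:

db.currentOp().inprog.forEach(function (op) {
    // only consider client-issued queries against our application database
    if (op.op === "query" && op.secs_running > 30 && op.ns && op.ns.indexOf("myapp.") === 0) {
        print("killing opid " + op.opid + " running for " + op.secs_running + "s on " + op.ns);
        db.killOp(op.opid);
    }
});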
Right now with version 2.6 this is possible. In their press release you can see the following:
with MaxTimeMS operators and developers can specify auto-cancellation
of queries, providing better control of resource utilization;
Therefore, with MaxTimeMS you can specify how much time you allow your query to run. For example, I do not want a specific query to run for more than 200 ms:
db.collection.find({
// my query
}).maxTimeMS(200)
What is cool about this is that you can specify different timeouts for different operations.
To answer the OP's question in the comment: there is no global setting for this. One reason is that different queries can have different maximum tolerable times. For example, you can have a query that finds userInfo by its ID. This is a very common operation and should run super fast (otherwise we are doing something wrong), so we cannot tolerate it running longer than 200 ms.
But we also have an aggregation query which we run once a day. For this operation it is OK to run for 4 seconds, but we cannot tolerate anything longer than 10 seconds, so we can put 10000 as the maxTimeMS.
I guess there is currently no support for killing a query by passing a time argument. However, on the development side you can set the profiler level to 2. It will log every query that has been issued, and from there you can see how long each query takes. I know it's not exactly what you wanted, but it helps in getting insight into which queries are fat, and then in your app logic you can find a way to gracefully handle the cases where those queries originate. I usually go with this approach and it helps.
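For reference, a quick sketch of that profiling workflow in the shell (the 100 ms threshold is just an example):

// Log every operation (level 2); level 1 with a slowms threshold logs only slow ones.
db.setProfilingLevel(2);

// Later, inspect the slowest recent operations recorded in the profile collection:
db.system.profile.find({ millis: { $gt: 100 } }).sort({ millis: -1 }).limit(10);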
Just putting this here since I was struggling with the same for a while:
Here is how you can do it in Python 3.
Tested on mongo version 4.0 and pymongo version 3.11.4
import logging

import pymongo

client = pymongo.MongoClient("mongodb://mongodb0.example.com:27017")
admin_db = client.get_database("admin")

milliseconds_running = 10000

# Find active client queries against the listed namespaces that have been
# running for longer than the threshold.
query = [
    {"$currentOp": {"allUsers": True, "idleSessions": True}},
    {
        "$match": {
            "active": True,
            "microsecs_running": {
                "$gte": milliseconds_running * 1000
            },
            "ns": {"$in": ["mydb.collection1", "mydb.collection2"]},
            "op": {"$in": ["query"]},
        }
    },
]

ops = admin_db.aggregate(query)

count = 0
for op in ops:
    admin_db.command({"killOp": 1, "op": op["opid"]})
    count += 1

logging.info("ops found: %d" % count)
I wrote a more robust and configurable script for it here.
It also has a Dockerfile in case anyone wants to use this as a container. I am currently using it as a periodically running cleanup task.