MongoDB server-side vs client-side processing

I have a shell script that creates a cursor on a collection and then updates each document with data from another collection.
When I run it against a local db it finishes in about 15 seconds, but against a hosted database it runs for over 45 minutes.
db.col1.find().forEach(function(doc) {
    db.col2.findAndModify(
        {
            query: { attr1: doc.attr1 },
            update: { $push: { attr2: doc.attr2 } },
            upsert: true
        });
});
So there is obviously network overhead between the client and the server in order to process this script. Is there a way to keep the processing all server-side? I have looked at server-side JavaScript, but from what I read here, it is not a recommended practice.

Locally, you have almost no network overhead. No interference, no routers, no switches, no limited bandwidth. Plus, in most situations your mass storage, be it SSD or HDD, more or less idles around (unless you tend to play games while developing). So when an operation requiring a lot of IO capacity kicks in, it is available.
When you run your script from your local shell against a server, here is what happens.
db.col1.find().forEach: the whole collection is going to be read from an unknown medium (most likely HDDs whose available IO may be shared among many instances). The documents will then be transferred to your local shell. Compared to a connection to localhost, each document retrieval is routed over dozens of hops, each adding a tiny amount of latency. Over what is presumably quite a lot of documents, this adds up. Don't forget that the complete document is sent over the network, since you did not use projection to limit the fields returned to attr1 and attr2. External bandwidth, of course, is slower than a connection to localhost.
db.col2.findAndModify: for each document, a query is done. Again, the shared IO might well kill performance.
{ query: { attr1: doc.attr1 }, update: { $push: { attr2: doc.attr2 } }, upsert: true }: are you sure attr1 is indexed, by the way? And even if it is, it is not certain whether the index is currently in RAM. We are talking about a shared instance, right? And it may well be that your write operations have to wait before they are even processed by mongod: with the default write concern, the data has to be successfully applied to the in-memory data set before it is acknowledged, and if gazillions of operations are being sent to the shared instance, yours might well be number one bazillion and one in the queue. Network latency is then added a second time, since the values transferred to your local shell need to be sent back to the server.
What you can do
First of all, make sure that you
limit the returned values to those you need using projection:
db.col1.find({},{ "_id":0, "attr1":1, "attr2":1 })
Make sure you have attr1 indexed
db.col2.ensureIndex( { "attr1": 1 } ) // createIndex() on current MongoDB versions
Use bulk operations. They are executed much faster, at the expense of reduced feedback in case of problems.
// We can use an unordered bulk operation here, because the operations
// each apply to only a single document.
var bulk = db.col2.initializeUnorderedBulkOp();

// A counter we will use for intermediate commits.
// We do the intermediate commits in order to keep RAM usage low.
var counter = 0;

// We limit the result to the values we need.
db.col1.find({}, { "_id": 0, "attr1": 1, "attr2": 1 }).forEach(
    function(doc) {
        // Find the matching document, update exactly that one,
        // and if it does not exist, create it.
        bulk
            .find({ "attr1": doc.attr1 })
            .upsert()
            .updateOne({ $push: { "attr2": doc.attr2 } });
        counter++;
        // We have queued 1k operations and can commit them.
        // MongoDB would split the bulk ops into batches of 1k operations anyway.
        if (counter % 1000 == 0) {
            bulk.execute();
            print("Operations committed: " + counter);
            // Initialize a new batch of operations.
            bulk = db.col2.initializeUnorderedBulkOp();
        }
    }
);

// Execute the remaining operations not committed yet
// (guard against calling execute() on an empty bulk).
if (counter % 1000 != 0) {
    bulk.execute();
}
print("Operations committed: " + counter);


Store many documents to mongoose fast

I need to insert 10K documents as fast as possible but it's taking a long time.
I am currently using Model.create([<huge array here>]) to do this.
Would it help to use multiple connections to the database? For example have 10 connections saving 1K each?
You can use Model.insertMany(docs, options).
A few things to note below.
Connection Pool
10 connections is usually sufficient, but it greatly depends on your hardware. Opening up more connections may slow down your server.
In some cases, the number of connections between the applications and
the database can overwhelm the ability of the server to handle
requests.
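For reference, the pool size is something you set when connecting rather than by opening extra connections yourself. A minimal sketch, assuming a recent Mongoose/driver where the option is named maxPoolSize (older releases call it poolSize):

// Sketch: cap the driver's connection pool at connect time.
// maxPoolSize is the Mongoose 6+/driver 4+ name; older versions use poolSize.
const mongoose = require("mongoose");

mongoose.connect("mongodb://localhost:27017/test", {
    maxPoolSize: 10   // usually plenty; more connections can slow the server down
});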
Options
There are a couple of options for insertMany that can speed up insertion.
[options.lean «Boolean» = false] if true, skips hydrating and
validating the documents. This option is useful if you need the extra
performance, but Mongoose won't validate the documents before
inserting.
[options.limit «Number» = null] this limits the number of documents
being processed (validation/casting) by mongoose in parallel, this
does NOT send the documents in batches to MongoDB. Use this option if
you're processing a large number of documents and your app is running
out of memory.
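Put together, a rough sketch of what that could look like; Model and docs below are placeholders for your compiled Mongoose model and the array of 10K plain objects:

// Sketch: bulk insert with Mongoose, trading validation for speed.
// `Model` and `docs` are assumptions; adapt to your schema and data.
async function bulkInsert(Model, docs) {
    const result = await Model.insertMany(docs, {
        lean: true,      // skip hydrating/validating each document for extra throughput
        limit: 1000,     // cast at most 1000 docs in parallel to keep memory bounded
        ordered: false   // keep inserting past individual failures instead of stopping
    });
    console.log("Inserted " + result.length + " documents");
}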
Write concern
Setting options on writeConcern in options can also affect performance.
If applications specify write concerns that include the j option,
mongod will decrease the duration between journal writes, which can
increase the overall write load.
Use db.collection.insertMany([])
insertMany accepts an array of documents and is used to perform bulk inserts.
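For example, in the mongo shell (the collection and documents below are made up):

// Sketch: one round trip for many documents instead of one insert per document.
db.col.insertMany(
    [
        { "attr1": "a", "attr2": 1 },
        { "attr1": "b", "attr2": 2 },
        { "attr1": "c", "attr2": 3 }
    ],
    { ordered: false }   // unordered lets the server continue past individual errors
);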

MongoDB: reduce read size and RAM needed with project?

I am designing a MongoDB database that looks something like this:
registry:{
id:1,
duration:123,
score:3,
text:"aaaaaaaaaaaaaaaaaaaaaaaaaaaa"
}
The text field is very big compared to the rest. I sometimes need to perform analytics queries that average the duration or the score, but never use the text.
I also have more specific queries that retrieve all the information about a single document. For those queries I could afford to spend more time and make two queries to retrieve all the data.
My question is, if I make a query like this:
db.registries.aggregate( [
{
$group: {
_id: null,
averageDuration: { $avg: "$duration" },
}
}
] )
Would it need to read the data from the text field? That would make the query much slower and it would take a lot of RAM. If that is the case, would it be better to split the records in two and have something like this?:
registry:{
id:1,
duration:123,
score:3,
}
registry_text:{
id:1,
text:"aaaaaaaaaaaaaaaaaaaaaaaaaaaa"
}
Thanks a lot!
I don't know how the server works in this case but I expect that, for caching reasons, the server will load complete documents into memory when it reads them from disk. Disk reads are very slow (i.e. expensive in time taken) and I expect the server will aggressively use memory if it can to avoid reads.
An important note here is that the documents are stored on disk as lists of key-value pairs comprising their contents. To not load a field from disk the server would have to rebuild the document in question as part of reading it since there are length fields involved. I don't see this happening in practice.
So, once the documents are in memory I assume they are there with all of their fields and I don't expect you can tune this.
When you are querying, the server may or may not drop individual fields but this would only change the memory requirements for the particular query. Generally these memory requirements are dwarfed by the overall database cache size and aggregation pipelines. So I don't think it really matters at what point a large field is dropped from a document during query processing (assuming you project it out in the query).
I think this isn't a worthwhile matter to try to ponder/optimize. If you have a real system with real workloads, you'll be much more pressed to optimize something else.
If you are concerned with memory usage when the amount of available memory is consumer-sized (say, under 16 GB), just get more memory - it's insanely cheap given how much time you'd spend working around the lack of it (whether we are talking about provisioning bigger AWS instances or buying more sticks of RAM).
You should be able to use $project to limit the fields read.
As general advice, don't try to normalize the data with MongoDB as you would with SQL. Also, it's often more performant to read documents plain from the DB and do the processing on your application server.
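For example, something along these lines (a sketch based on the aggregation from the question; as the quote below notes, this mainly saves bandwidth and pipeline size rather than server-side reads):

// Sketch: drop the large text field before grouping.
db.registries.aggregate([
    { $project: { "_id": 0, "duration": 1, "score": 1 } },   // keep only what the $group needs
    { $group: { _id: null, averageDuration: { $avg: "$duration" } } }
]);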
I have found this answer, which seems to indicate that with a projection the server still needs to fetch the whole document; it only reduces bandwidth:
When using projection to remove unused fields, the MongoDB server will
have to fetch each full document into memory (if it isn't already
there) and filter the results to return. This use of projection
doesn't reduce the memory usage or working set on the MongoDB server,
but can save significant network bandwidth for query results depending
on your data model and the fields projected.
https://dba.stackexchange.com/questions/198444/how-mongodb-projection-affects-performance

MongoDB concurrency reduces performance

I understand that MongoDB does locking on read and write operations.
My Use case:
Only read operations. No write operations.
I have a collection of about 10 million documents. The storage engine is WiredTiger.
The Mongo version is 3.4.
I made a request which should return 30k documents; it took 650 ms on average.
When I made the same request concurrently, 100 times, handling all the requests took anywhere from a few seconds up to 2 minutes.
I have a single node serving the data.
How do I access the data:
Each document contains 25 to 40 fields. I have indexed a few fields, and I query based on one indexed field.
The API returns all the matching documents in JSON form.
Other information: the API is written using Spring Boot.
Concurrency was tested with a JMeter shell script run from the command line on a remote machine.
So,
My question:
Am I missing any optimizations? [storage engine level, version]
Can't I get all read requests served in less than a second?
If so, what SLA can I keep for this use case?
Any suggestions?
Edit:
I enabled the database profiler in MongoDB at level 2.
My single query is internally converted into 4 queries:
Initial read
getMore
getMore
getMore
These are the queries found through profiler.
In total, it takes less than 100 ms. Is that really true?
My concurrent queries:
Now, when I send 100 requests, nearly 150 operations take more than 100 ms, 100 operations take more than 200 ms, and 90 operations take more than 300 ms.
As per my single-query analysis, 100 requests will be converted to 400 queries internally. It is a fixed pattern, which I verified by checking the query tag in the profiler output.
I suspect this is what affects my request performance.
My single query is internally converted into 4 queries:
Initial read
getMore
getMore
getMore
It's the way mongo cursors work. The documents are transferred from the db to the app in batches. IIRC the first batch is around 100 documents + cursor Id, then consecutive getMore calls retrieve next batches by cursor Id.
You can define the batch size (the number of documents per batch) from the application. A batch cannot exceed 16MB, e.g. if you set the batch size to 30,000, it will fit into a single batch only if the document size is less than roughly 500 bytes.
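In the shell this can be set directly on the cursor (drivers expose an equivalent option); the collection and filter below are placeholders:

// Sketch: request larger batches so fewer getMore round trips are needed.
// Each batch is still capped at 16MB, so very large documents limit the effective size.
db.myCollection.find({ "indexedField": "value" }).batchSize(10000);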
Your investigation clearly shows performance degradation under load. There are too many possible factors, and I believe locking is not one of them. WiredTiger takes exclusive locks at the document level for regular write operations, and you are doing only reads during your tests, aren't you? If in any doubt, you can compare the results of db.serverStatus().locks before and after the tests to see how many write locks were acquired. You can also run db.serverStatus().globalLock during the tests to check the queue. More details about locking and concurrency are here: https://docs.mongodb.com/manual/faq/concurrency/#for-wiredtiger
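For instance, a quick way to eyeball this from the shell:

// Sketch: snapshot lock counters before and after the load test and compare.
var before = db.serverStatus().locks;
// ... run the JMeter test here ...
var after = db.serverStatus().locks;
printjson({ before: before, after: after });

// During the test, check whether readers or writers are queueing:
printjson(db.serverStatus().globalLock.currentQueue);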
The bottleneck is likely somewhere else. There are a few generic things to check:
Query optimisation. Ensure you use indexes. The profiler output should have no "COLLSCAN" stage in the execStats field (see the explain() sketch after this list).
System load. If your database shares system resources with the application, it may affect the performance of the database. E.g. BSON to JSON conversion in your API is quite CPU hungry and may affect the performance of the queries. Check the system's load average with top or htop on *nix systems.
MongoDB resources. Use mongostat and mongotop to check whether the server has enough RAM, IO capacity, file descriptors, connections, etc.
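To check the first point for a single query without the profiler, explain() works as well (collection and filter are placeholders):

// Sketch: confirm the query is served by an index.
// Look for IXSCAN rather than COLLSCAN in the winning plan.
db.myCollection.find({ "indexedField": "value" }).explain("executionStats");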
If you cannot spot anything obvious, I'd recommend seeking professional help. I find the simplest way to get it is by exporting the data to Atlas and running your tests against the cluster. Then you can ask the support team whether they can advise any improvements to the queries.

Mongodb: keeping a frequently written collection in RAM

I am collecting data from a streaming API and I want to create a real-time analytics dashboard. This dashboard will display a simple timeseries plotting the number of documents per hour. I am wondering if my current approach is optimal.
In the following example, on_data is fired for each new document in the stream.
# Mongo collections.
records = db.records
stats = db.records.statistics

def on_data(self, data):
    # Create a JSON document from data.
    document = simplejson.loads(data)
    # Insert the new document into records.
    records.insert(document)
    # Update a counter in records.statistics for the hour this document belongs to.
    stats.update({'hour': document['hour']}, {'$inc': {document['hour']: 1}}, upsert=True)
The above works. I get a beautiful graph which plots the number of documents per hour. My question is about whether this approach is optimal or not. I am making two Mongo requests per document. The first inserts the document, the second updates a counter. The stream sends approximately 10 new documents a second.
Is there, for example, any way to tell Mongo to keep db.records.statistics in RAM? I imagine this would greatly reduce disk access on my server.
MongoDB uses memory-mapped files to handle its I/O, so it essentially treats all data as if it were already in RAM and lets the OS figure out the details. In short, you cannot force your collection to be in memory, but if the operating system handles things well, the stuff that matters will be. Check out this link to the docs for more info on Mongo's memory model and how to optimize your OS configuration to best fit your use case: http://docs.mongodb.org/manual/faq/storage/
But to answer your issue specifically: you should be fine. Your 10 or 20 writes per second should not be a disk bottleneck in any case (assuming you are running on not-ancient hardware). The one thing I would suggest is to build an index over "hour" in stats, if you are not already doing that, to make your updates find documents much faster.
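Creating that index is a one-liner; for instance from the mongo shell (createIndex on current versions, ensureIndex on older ones):

// Sketch: index the field used by the upserted counter updates,
// so each $inc can locate its document without scanning the collection.
db.getCollection("records.statistics").createIndex({ "hour": 1 });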

EC2 - MongoDB - PyMongo - Debugging Query Latency

I have 2 EC2 instances, one as a MongoDB server and the other as a Python web app (same availability zone). The Python server connects to the Mongo server using PyMongo and everything works fine.
The problem is, when I profile execution time in Python, some calls (less than 5%) take up to a couple of seconds to return. I was able to narrow down the problem, and the time delay was actually in the db calls to the Mongo server.
The two causes that I thought of were:
1. The Mongo server is slow/over-loaded
2. Network latency
So, I tried upgrading the Mongo server to a 4X faster instance, but the issue still happens (some calls take up to 3 seconds to return). I assumed that since both servers are on EC2, network latency should not be a problem... but maybe I was wrong.
How can I confirm whether the issue is actually the network itself? If so, what is the best way to solve it? Is there any other possible cause?
Any help is appreciated...
Thanks,
UPDATE: The entities that I am fetching are very small (and indexed), and usually the calls take only 0.01-0.02 secs to finish.
UPDATE:
As suggested by "James Wahlin", I enabled profiling on my mongo server and got some interesting logs,
Fri Mar 15 18:05:22 [conn88635] query db.UserInfoShared query: { $or:
[ { _locked: { $exists: false } }, { _locked: { $lte:
1363370603.297361 } } ], _id: "750837091142" } ntoreturn:1 nscanned:1 nreturned:1 reslen:47 2614ms
Fri Mar 15 18:05:22 [conn88635] command db.$cmd command: {
findAndModify: "UserInfoShared", fields: { _id: 1 }, upsert: true,
query: { $or: [ { _locked: { $exists: false } }, { _locked: { $lte:
1363370603.297361 } } ], _id: "750837091142" }, update: { $set: { _locked: 1363370623.297361 } }, new: true } ntoreturn:1 reslen:153 2614ms
You can see these two calls took more than 2 secs to finish. The field _id is unique-indexed and finding it should not have taken this much time. Maybe I should post a new question about it, but can the MongoDB GLOBAL LOCK be the cause?
@James Wahlin, thanks a lot for helping me out.
As it turned out, the main cause of the latency was the MongoDB GLOBAL LOCK itself. We had a lock percentage which averaged around 5% and sometimes peaked at 30-50%, and that resulted in the slow queries.
If you are facing this issue, the first thing you have to do is enable the MongoDB MMS service (mms.10gen.com), which will give you a lot of insight into what exactly is happening in your db server.
In our case the LOCK PERCENTAGE was really high and there were multiple reasons for it. The first thing you have to do to figure it out is to read the MongoDB documentation on concurrency:
http://docs.mongodb.org/manual/faq/concurrency/
The reason for the lock can be at the application level, in MongoDB, or in the hardware.
1) Our app was doing a lot of updates (more than 100 ops/sec), and each update holds the global lock in MongoDB. The issue was that when an update happens for an entry which is not in memory, Mongo has to load the data into memory first and then update it (in memory), and the whole process happens while holding the global lock. If, say, the whole thing takes 1 sec to complete (0.75 sec to load the data from disk and 0.25 sec to update it in memory), all the other update calls wait (for the entire 1 sec) and such updates start queuing up... and you will notice more and more slow requests in your app server.
The solution for it (while it might sound silly) is to query for the same data before you make the update. What that effectively does is move the 'load data into memory' (0.75 sec) part outside the global lock, which greatly reduces your lock percentage.
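In shell terms the idea looks roughly like this, reusing the names from the profiler log above (the exact values are just for illustration):

// Sketch: touch the document first so its pages are pulled into memory;
// the subsequent update then spends far less time holding the (pre-WiredTiger) global lock.
var id = "750837091142";                                       // example _id from the log above
db.UserInfoShared.findOne({ "_id": id });                      // warm-up read, result discarded
db.UserInfoShared.update({ "_id": id }, { $set: { "_locked": 1363370623.297361 } });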
2) The other main cause of the global lock is MongoDB's data flush to disk. Basically, every 60 sec (or less) MongoDB (or the OS) writes the data to disk, and a global lock is held during this process. (This kind of explains the random slow queries.) In your MMS stats, see the graph for background flush avg... if it's high, that means you need faster disks.
In our case, we moved to a new EBS optimized instance in EC2 and also bumped our provisioned IOPS from 100 to 500 which almost halved the background flush avg and the servers are much happier now.