Extremely slow deserialization for large documents with the MongoDB C# driver

I am using 10 threads to do find-and-modify on large documents (one document at a time on each thread). The documents are about 600 kB each and each contains a 10,000-element array. The client and the MongoDB server are on the same LAN in a datacenter and are connected via a very fast connection. The client runs on a small machine (a fairly slow CPU, 768 MB of RAM).
The problem is that every find-and-modify operation is extremely slow.
For example, here is the timeline of one operation:
11:56:04: Calling FindAndModify on the client.
12:05:59: The operation is logged on the server ("responseLength" : 682598, "millis" : 7), so supposedly the server returns almost immediately.
12:38:39: FindAndModify returns on the client.
The CPU usage on the machine is only 10-20%, but memory seems to be running low. The Available Bytes performance counter is around 40 MB, and the Private Bytes of the process calling MongoDB is 800 MB (which is more than the physical RAM on the machine). Also, the Pages/sec performance counter is hovering around 4,000.
My theory is that it took 33 minutes for the driver to deserialize the document, and that could be because the OS is swapping due to high memory usage caused by the deserialization of the large documents into CLR objects. Still, this does not really explain why it took 10 minutes between the call to FindAndModify and the server-side execution.
How likely is it that this is the case? Do you have another theory? How can I verify this is the case?
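One way to test the deserialization/paging theory is to ask the server to return only a couple of fields from the modified document instead of the whole 600 kB payload; if that call is fast while the full-document call stays slow, the time is being spent on the client materializing the large document rather than on the server. The C# driver has a comparable fields/projection option for FindAndModify; the sketch below runs the same experiment with pymongo purely for illustration, and the connection string, collection, and field names are all assumptions.

# Illustrative only: pymongo as a stand-in for the C# driver; all names are hypothetical.
import time
from pymongo import MongoClient, ReturnDocument

client = MongoClient("mongodb://db-host:27017")   # assumed connection string
coll = client["testdb"]["bigdocs"]                # assumed database/collection

def timed_find_and_modify(projection=None):
    start = time.perf_counter()
    doc = coll.find_one_and_update(
        {"_id": 42},                              # hypothetical document id
        {"$set": {"lastTouched": time.time()}},
        projection=projection,
        return_document=ReturnDocument.AFTER,
    )
    elapsed = time.perf_counter() - start
    print(f"projection={projection}: {elapsed:.3f}s, {len(doc or {})} fields returned")

# Full document: the 600 kB payload is transferred and deserialized client-side.
timed_find_and_modify()

# Projected document: only two fields come back, so almost no client-side work.
timed_find_and_modify(projection={"_id": 1, "lastTouched": 1})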

Related

mongodb max number of parallel find() requests from single instance

What is the maximum theoretical number of parallel requests that we can squeeze from a single mongodb instance before deciding to shard?
Consider that the database and indexes fit in memory, and all requests are find() queries fetching a single document based on an indexed field. The hosting OS is Ubuntu, the data partition is an SSD, and ulimits are set to max.
On my laptop, a simple test against a single instance reaches nearly 40k/sec; after that the average execution times start to increase significantly. But I am wondering what the upper theoretical limit could be.
It depends. If your active dataset fits in memory - that is, if most requests don't need to perform any disk I/O - then you can achieve 24k+ requests pretty easily. If not on a (bigger) single machine, then at least use a replica set cluster with multiple secondaries.
If the active dataset is much larger than the available RAM, then you have the same problem as with any other database. The advantage of MongoDB's newer WiredTiger engine (available since v3.0) is transparent compression - it can reduce the amount of data and I/O and thus improve performance, even though compression adds CPU load.
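As a concrete illustration of the compression point, WiredTiger lets you pick the block compressor per collection at creation time. The snippet below is a minimal pymongo sketch; the connection string, database, and collection names are placeholders, snappy is the server default, and zlib (or zstd on newer servers) trades extra CPU for a smaller on-disk footprint and less I/O.

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")   # assumed connection string
db = client["testdb"]

# Create a collection whose blocks are compressed with zlib instead of the default snappy.
db.create_collection(
    "events_zlib",
    storageEngine={"wiredTiger": {"configString": "block_compressor=zlib"}},
)

# collStats reports both the logical data size and the compressed storage size,
# which shows how much disk I/O the compression is actually saving.
stats = db.command("collStats", "events_zlib")
print(stats["size"], stats["storageSize"])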
For more performance, it really helps:
- if the most accessed documents are small, so it takes less time to load them, transfer them, and deserialize them in your app;
- if you use projections in find(), for the same reasons;
- if you use bulk operations to reduce network I/O and context switches (see the sketch after this answer).
Even MongoDB itself has an option (net.maxIncomingConnections) to limit the maximum number of incoming connections; it defaults to 64k.
For more information you can refer to this link.
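To make the projection and bulk-operation points concrete, here is a minimal pymongo sketch; the connection string, database, collection, and field names are all assumptions.

from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://db-host:27017")
coll = client["testdb"]["users"]

# Projection: ask the server for just the fields you need instead of whole documents.
for doc in coll.find({"status": "active"}, {"_id": 1, "email": 1}).limit(100):
    print(doc.get("email"))

# Bulk write: one round trip for a thousand updates instead of one request per document.
ops = [UpdateOne({"_id": i}, {"$set": {"lastSeen": "2024-01-01"}}) for i in range(1000)]
result = coll.bulk_write(ops, ordered=False)
print(result.modified_count, "documents modified")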

MongoDB degrading write performance over time

I am importing a lot of data (18 GB, 3 million documents) over time. Almost all the data is indexed, so there is a lot of indexing going on. The system consists of a single client (a single process on a separate machine) establishing a single connection (using pymongo) and doing insertMany in batches of 1,000 docs.
MongoDB setup:
single instance,
journaling enabled,
WiredTiger with default cache,
RHEL 7,
version 4.2.1
192GB RAM, 16 CPUs
1.5 TB SSD,
cloud machine.
When I start the server (after a full reboot) and insert the collection, it takes 1.5 hours. If the server has been running for a while inserting some other data (from a single client), and once that finishes I delete the collection and insert the same data again, it takes 6 hours (there is still sufficient disk space, more than 60%, and nothing else is making connections to the db). It feels like the server's performance degrades over time, maybe something OS-specific. Any similar experience or ideas?
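For reference, a minimal pymongo sketch of the batched insert pattern described above; the connection string, database, collection, and document shape are assumptions.

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")
coll = client["importdb"]["documents"]

BATCH_SIZE = 1000

def generate_docs():
    # Stand-in for the real data source; yields dicts shaped like the imported documents.
    for i in range(3_000_000):
        yield {"_id": i, "payload": "x" * 1024}

batch = []
for doc in generate_docs():
    batch.append(doc)
    if len(batch) == BATCH_SIZE:
        # ordered=False lets the server apply the batch without stopping at the first error.
        coll.insert_many(batch, ordered=False)
        batch.clear()
if batch:
    coll.insert_many(batch, ordered=False)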
I had faced a similar issue; the problem was RAM.
After a full restart the server had all its RAM free, but after the insertions the RAM was full. Deleting the collection and inserting the same data again can take longer because some RAM is still utilised and less is free for mongo.
Try freeing up RAM and cache after you drop the collection, and check if the same behaviour persists.
As you haven't provided any specific details, I would recommend enabling profiling; this will allow you to examine performance bottlenecks. In the mongo shell, run:
db.setProfilingLevel(2)
Then run:
db.system.profile.find( { "millis": { "$gt": 10 } }, { "millis": 1, "command": 1 }) // find operations over 10 milliseconds
Once done, reset the profiling level:
db.setProfilingLevel(0)
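Since the import client in this question already uses pymongo, the same profiling steps can also be driven from Python. A small sketch, assuming a direct connection to the mongod and a database name of "importdb" (both placeholders):

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")
db = client["importdb"]

db.command("profile", 2)   # profile all operations

# ... run the slow import here ...

# List operations that took longer than 10 ms.
for op in db["system.profile"].find({"millis": {"$gt": 10}}, {"millis": 1, "command": 1}):
    print(op)

db.command("profile", 0)   # turn profiling back off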

MongoDB multiple database vs single database performance

Overview:
We are comparing performance of create/read/write/rw over two different architectures: Single Database vs Multiple Databases (15k-25k).
We prefer to use the Multi-DB architecture because that makes it easier to separate customers (customer = 1 company). However, due to performance degradation we fear this may not be a good solution.
Server Specification:
Single-instance MongoDB server; 64 GB RAM; 16 cores; SSD
Test results:
Both test scenarios have the same total number of documents (and documents are roughly the same size). The variables are number of databases, collections per database and documents per collection.
All tests are conducted in parallel using 50 client threads (separate machine), with the exception of Read/Write, which uses 100 (50R/50W). 'directoryPerDB' is enabled.
(All times are in milliseconds per doc operation)
Test | Creation | Read | Write | Read/Write | Notes
25000 DB, 4 Coll, 250 Doc | 23ms | 1-10ms | 1-4ms | 2-10ms | Max 1400% CPU, noticeable "pauses" (CPU drops to 100%)
15000 DB, 4 Coll, 420 Doc | 23ms | 0.7-4ms | 0.9-4ms | 2-9ms | Max 1400% CPU, noticeable "pauses" (CPU drops to 100%)
1 DB, 4 Coll, 125000 Doc | 0.8ms | 0.6ms | 0.8ms | 1.2-1.6ms | Max 600% CPU, no pauses
Conclusion:
There seems to be noticeable performance degradation on a regular interval when the DB count is very high. It may be due to the sheer number of files (25000 DBs * 4 Colls * 2 files = 200k files) or some other bottleneck.
In the Single-DB test, the CPU stays around 600% and maintains that until completion. In the multi DB tests, the CPU (at peak performance) is somewhere between 800-1400%, but every so often the CPU drops to 100% and all operations are paused. This can be verified by watching the mongo log, as well as the logs from the test clients that are issuing R/W commands.
If it weren't for these pauses, the Multi-DB architecture would be ~2x faster than Single-DB; however, it appears there is some global contention that cannot be avoided.
I'm hoping someone might know what this global contention is and (if possible) how to solve it.
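For readers unfamiliar with the two layouts being compared, here is a hedged sketch of what they look like from the client side (all names are hypothetical; this is not the benchmark code from the question):

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")

# Multi-DB layout: one database per customer, each with its own collections.
def insert_multi_db(customer_id, doc):
    client[f"customer_{customer_id}"]["orders"].insert_one(doc)

# Single-DB layout: one shared database, with the customer stored as an indexed field.
shared = client["app"]["orders"]
shared.create_index("customerId")

def insert_single_db(customer_id, doc):
    shared.insert_one({**doc, "customerId": customer_id})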

MongoDB consumes all memory (RAM)

I use MongoDB 3.2 with the WiredTiger engine. I am doing batch inserts of 10K records at a time; each record is about a kB in size. All is great, but after 60-70 million such records the memory runs out. Mongo's cache is limited to 3 GB, but memory is also consumed by the memory-mapped files of collections and indexes. After some time Mongo's CPU load sits at 100% and it stops receiving data. OS: Windows 7. What am I doing wrong? :)
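A quick way to see where the memory is actually going is to compare the configured WiredTiger cache limit with the server's resident and virtual memory. A minimal pymongo sketch follows; the serverStatus field names match recent server versions and should be treated as assumptions, and the connection string is a placeholder.

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")   # assumed connection string
status = client.admin.command("serverStatus")

cache = status["wiredTiger"]["cache"]
print("cache max bytes:   ", cache["maximum bytes configured"])
print("cache bytes in use:", cache["bytes currently in the cache"])
print("resident MB:       ", status["mem"]["resident"])
print("virtual MB:        ", status["mem"]["virtual"])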

MongoDB Stops Responding During Background Flush

MongoDB background flushing blocks all requests:
Server: Windows server 2008 R2
CPU Usage: 10 %
Memory: 64G, Used 7%, 250MB for Mongod
Disk % Read/Write Time: less than 5% (According to Perfmon)
Mongodb Version: 2.4.6
Mongostat Normally:
insert:509 query:608 update:331 delete:*0 command:852|0 flushes:0 mapped:63.1g vsize:127g faults:6449 locked db:Radius:12.0%
Mongostat Before (maybe while) Flushing:
insert:1 query:4 update:3 delete:*0 command:7|0 flushes:0 mapped:63.1g vsize:127g faults:313 locked db:local:0.0%
And Mongostat After Flushing:
insert:1572 query:1849 update:1028 delete:*0 command:2673|0 flushes:1 mapped:63.1g vsize:127g faults:21065 locked db:.:99.0%
As you can see, when a flush is happening the lock is at 99%, and just at this point mongod stops responding to any read/write operations (mongotop and mongostat also stop). The flush takes about 7 to 8 seconds to complete and does not increase disk load by more than 10%.
Are there any suggestions?
Under Windows server 2008 R2 (and other versions of Windows I would suspect, although I don't know for sure), MongoDB's (2.4 and older) background flush process imposes a global lock, doing substantial blocking of reads and writes, and the length of the flush time tends to be proportional to the amount of memory MongoDB is using (both resident and system cache for memory-mapped files), even if very little actual write activity is going on. This is a phenomenon we ran into at our shop.
In one replica set where we were using MongoDB version 2.2.2, on a host with some 128 GBs of RAM, when most of the RAM was in use either as resident memory or as standby system cache, the flush time was reliably between 10 and 15 seconds under almost no load and could go as high as 30 to 40 seconds under load. This could cause Mongo to go into long pauses of unresponsiveness every minute. Our storage did not show signs of being stressed.
The basic problem, it seems, is that Windows handles flushing to memory-mapped files differently than Linux. Apparently, the process is synchronous under Windows and this has a number of side effects, although I don't understand the technical details well enough to comment.
MongoDB, Inc. is aware of this issue and is working on optimizations to address it. The problem is documented in a couple of tickets:
https://jira.mongodb.org/browse/SERVER-13444
https://jira.mongodb.org/browse/SERVER-12401
What to do?
The phenomenon is tied, to some degree, to the minimum latency of the disk subsystem as measured under low stress, so you might try experimenting with faster disks, if you can. Some improvements have been reported with this approach.
A strategy that worked for us in some limited degree is avoiding provisioning too much RAM. It happened that we really didn't need 128 GBs of RAM, so by dialing back on the RAM, we were able to reduce the flush time. Naturally, that wouldn't work for everyone.
The latest versions of MongoDB (2.6.0 and later) seem to handle the situation better in that writes are still blocked during the long flush but reads are able to proceed.
If you are working with a sharded cluster, you could try dividing the RAM by putting multiple shards on the same host. We didn't try this ourselves, but it seems like it might have worked. On the other hand, careful design and testing would be highly recommended in any such scenario to avoid compromising performance and/or high availability.
We tried playing with syncdelay. Reducing it didn't help (the long flush times just happened more frequently). Increasing it helped a little (there was more time between flushes to get work done), but increasing it too much can exacerbate the problem severely. We boosted the syncdelay to five minutes (300 seconds), at one point, and were rewarded with a background flush of 20 minutes.
Some optimizations are in the works at MongoDB, Inc. These may be available soon.
In our case, to relieve the pressure on the primary host, we periodically rebooted one of the secondaries (clearing all its memory) and then failed over to it. Naturally, there is some performance hit due to re-caching, and I think this only worked for us because our workload is write-heavy. Moreover, this technique is not in any sense a solution. But if high flush times are causing serious disruption, it may be one way to "reduce the fever", so to speak.
Consider running on Linux... :-)
Background flush by default does not block reads/writes. mongod flushes every 60s, unless otherwise specified with the --syncdelay parameter. syncDelay uses the fsync() operation, which can be set to block writes while in-memory pages are flushed to disk. A blocked write can potentially block reads as well. Read more: http://docs.mongodb.org/manual/reference/command/fsync/
However, normally a flush should not take more than 1000ms (1 second). If it does, it is likely that the amount of data being flushed to disk is too large for your disk to handle.
Solution: upgrade to a faster disk such as an SSD, or decrease the flush interval (try 30s rather than the default 60s).
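For what it's worth, the flush interval does not have to be fixed at startup: syncdelay is a runtime-settable server parameter. A minimal pymongo sketch, assuming admin privileges (the connection string is a placeholder):

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")
admin = client.admin

# Read the current flush interval (the default is 60 seconds).
print(admin.command("getParameter", 1, syncdelay=1))

# Lower it to 30 seconds so each flush has less dirty data to write out.
admin.command("setParameter", 1, syncdelay=30)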