Mongo insert and update queries perform slowly. Why? - mongodb

I have a server which constantly inserts and updates data in MongoDB collections.
I have 3 collections: handTab, video, handHistory.
When the number of documents in those collections reaches 40,000, 40,000, and 80,000 respectively, the performance of insert, update, and findAndModify commands degrades.
In MongoDB Compass I can see that these queries are the slowest operations.
Also, the CPU utilization on the production server goes up to 100%.
This leads to connection timeouts in my game-server API.
The database is on one machine and the game server runs on another. Why does the CPU utilization of the database machine reach 100% once the collections grow, when I am running the same number of Mongo operations now as before?
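For reference, one quick check for this kind of degradation is whether the filters used by the slow update/findAndModify calls are backed by indexes, since a collection scan gets slower as the collection grows. A minimal pymongo sketch; the connection string, database name, field name, and filter value are placeholders, only the handHistory collection name comes from the question:

from pymongo import MongoClient

# Explain the filter that a slow update/findAndModify would use and check
# whether the winning plan is an IXSCAN (index scan) or a COLLSCAN.
client = MongoClient("mongodb://localhost:27017")
db = client["game"]  # placeholder database name

plan = db["handHistory"].find({"handId": 12345}).explain()  # placeholder filter
print(plan["queryPlanner"]["winningPlan"])  # a COLLSCAN here means no index is used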

Related

MongoDB degrading write performance over time

I am importing a lot of data (18 GB, 3 million documents) over time; almost all of the data is indexed, so there is a lot of indexing going on. The system consists of a single client (a single process on a separate machine) establishing a single connection (using pymongo) and doing insertMany in batches of 1000 docs.
MongoDB setup:
single instance,
journaling enabled,
WiredTiger with default cache,
RHEL 7,
version 4.2.1,
192 GB RAM, 16 CPUs,
1.5 TB SSD,
cloud machine.
When I start the server (after a full reboot) and insert the collection, it takes 1.5 hours. If the server has been running for a while inserting some other data (from a single client), and once that finishes I delete the collection and insert the same data again, it takes 6 hours (there is still sufficient disk space, more than 60% free, and nothing else is making connections to the db). It feels like the server performance degrades over time, maybe something OS specific. Any similar experience, ideas?
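For reference, the import loop is roughly of this shape; a simplified sketch where only the batch size of 1000 and the use of insertMany come from the description above, and everything else (names, connection string, document contents) is a placeholder:

from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")  # placeholder connection string
coll = client["imports"]["documents"]            # placeholder database/collection

def generate_documents(n=3_000_000):
    # Placeholder document source; the real import reads ~3 million docs.
    for i in range(n):
        yield {"seq": i, "payload": "x" * 500}

batch = []
for doc in generate_documents():
    batch.append(doc)
    if len(batch) == 1000:
        coll.insert_many(batch, ordered=False)  # unordered insert of one batch
        batch = []
if batch:
    coll.insert_many(batch, ordered=False)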
I had faced a similar issue; the problem was RAM.
After a full restart the server had all of its RAM free, but after the insertions the RAM was full. Deleting the collection and inserting the same data again can take longer because some RAM is still utilised and less is free for mongo.
Try freeing up RAM and cache after you drop the collection, and check if the same behaviour persists.
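One way to see what mongod itself is holding, rather than what the OS reports, is serverStatus; a small pymongo sketch, assuming a local instance:

from pymongo import MongoClient

# Compare resident memory with the WiredTiger cache usage reported by mongod.
client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

print("resident MB:       ", status["mem"]["resident"])
cache = status["wiredTiger"]["cache"]
print("cache bytes in use:", cache["bytes currently in the cache"])
print("cache bytes max:   ", cache["maximum bytes configured"])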
As you haven't provided any specific details, I would recommend enabling profiling; this will allow you to examine performance bottlenecks. In the mongo shell run:
db.setProfilingLevel(2)
Then run:
db.system.profile.find( { "millis": { "$gt": 10 } }, { "millis": 1, "command": 1 }) // find operations over 10 milliseconds
Once done, reset the profiling level:
db.setProfilingLevel(0)
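If you prefer to do this from the importing client instead of the shell, the same steps work through pymongo (a sketch; the connection string and database name are placeholders):

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder database name

db.command("profile", 2)  # profile all operations
slow_ops = db["system.profile"].find(
    {"millis": {"$gt": 10}},        # operations slower than 10 ms
    {"millis": 1, "command": 1},
)
for op in slow_ops:
    print(op)
db.command("profile", 0)  # disable profiling when done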

MongoDB cluster sudden crashes with read/write concern queries

We have a MongoDB cluster with 5 PSA replica sets and one sharded collection: about 3.5 TB of data and 2 billion docs on the primaries. Average insert rate: 300 rps. Average select rate: 1000 rps. MongoDB version 4.0.6. The collection has only one extra unique index, and all read queries use one of the indexes (no long-running queries).
PROBLEM. Sometimes (4 times in the last 2 months) one of the nodes stops responding to queries that specify a read concern or write concern. The same query without a read/write concern executes successfully, whether run locally or through mongos. The affected queries never complete: no errors, no timeouts, even when restarting the mongos that initiated the query. There are no errors in the mongod logs and none in the system logs. Restarting the node fixes the problem. MongoDB sees such a broken node as normal; rs.status() shows that everything is ok.
I have no idea how to reproduce this problem; much more intense load testing produces no such results.
We would appreciate any help and suggestions.
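To make the symptom concrete, the difference is between reads like the two below; a pymongo sketch where the connection string, database/collection names, and filter are placeholders:

from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

coll = MongoClient("mongodb://mongos-host:27017")["shop"]["orders"]  # placeholders

# Plain read: completes normally even on the affected node.
doc = coll.find_one({"orderId": 1})

# Same read with an explicit read concern: this is the kind of query that
# hangs on the broken node until that node is restarted.
doc_rc = coll.with_options(read_concern=ReadConcern("majority")).find_one({"orderId": 1})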

MongoDB consumes all memory (RAM)

I use MongoDB 3.2 with the WiredTiger engine. I am doing batch inserts of 10K records at a time, with a record size of about a kB. All goes well, but after 60-70 million such records the memory runs out. Mongo's cache is limited to 3 GB, but memory is also consumed by the memory-mapped files of collections and indexes. After some time Mongo's CPU load sits at 100% and it stops receiving data. OS: Windows 7. What am I doing wrong? :)

MongoDB: blocked queries during write operation in a replica set

I am using MongoDB (3.0) with a replica set of 3 servers. I have been experiencing very slow queries for a week and have tried to find out what is wrong on my servers.
By using the db.currentOp() command I can see that queries are sometimes blocked on the secondaries when a "replication worker" is running. All the queries are waiting for a lock ("waitingForLock" : true), and it seems that the replication worker has taken this lock and has been running for several minutes (which seems pretty long).
To be more specific about my use case, I have multiple databases in the replica set; all of these databases contain the same collections but not the same amount of data (I use one database per client).
I use WiredTiger as the storage engine, which normally (as the docs claim) does not use global locks. So I was expecting queries on a specific collection to be slow if that collection is being updated, but I was not expecting all queries to be slow or blocked.
Has anyone experienced the same issue? Is there some limitation in MongoDB when reads are performed while processes are writing to the database?
Furthermore, is there a way to tell MongoDB that I don't care about consistency for read operations (in order to avoid locks)?
Thanks.
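On the last point, the usual knob in the drivers is the read preference, which can be set per operation; a minimal pymongo sketch (connection string and names are placeholders). Note that this only routes the read to a secondary that may serve stale data; it does not remove the lock taken by the replication worker on that secondary:

from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0")  # placeholder URI
coll = client["clientdb"]["items"]                                   # placeholder names

# Allow this read to be served by a secondary (possibly stale data).
stale_ok = coll.with_options(read_preference=ReadPreference.SECONDARY_PREFERRED)
doc = stale_ok.find_one({"status": "active"})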
Update:
After restarting the servers the problems disappeared. It seems that memory and CPU usage were growing (though still very low), and that this led to a slow replication process which held a lock and prevented queries from executing.
I still don't understand why we have this problem on this database. Maybe version 3.0.9 has a bug (I will upgrade to 3.0.12). Still, it takes about one month for the database to become very slow, and only a restart of all the servers solves the problem. Our workload is mainly writes (with findAndModify). Does anyone know of a bug in Mongo where intensive writes lead to performance decreasing over time?

Extremely slow deserialization for large documents with the MongoDB C# driver

I am using 10 threads to do find-and-modify operations on large documents (one document at a time on each thread). The documents are about 600 kB each and each contains a 10,000-element array. The client and the MongoDB server are on the same LAN in a datacenter and are connected via a very fast connection. The client runs on a small machine (the CPU is fairly slow, 768 MB of RAM).
The problem is that every find-and-modify operation is extremely slow.
For example here is the timeline of an operation:
11:56:04: Calling FindAndModify on the client.
12:05:59: The operation is logged on the server ("responseLength" : 682598, "millis" : 7), so supposedly the server returns almost immediately.
12:38:39: FindAndModify returns on the client.
The CPU usage on the machine is only 10-20%, but memory seems to be running low. The Available Bytes performance counter is around 40 MB, and the Private Bytes of the process that is calling MongoDB is at 800 MB (which is more than the physical RAM on the machine). Also, the Pages/sec performance counter is hovering around 4,000.
My theory is that it took 33 minutes for the driver to deserialize the document, and that this could be because the OS is swapping due to the high memory usage caused by deserializing the large documents into CLR objects. Still, this does not really explain why it took 10 minutes between the call to FindAndModify and the server-side execution.
How likely is it that this is the case? Do you have another theory? How can I verify this is the case?
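One way to check the deserialization theory is to time a raw fetch of the document (bytes only) against a fully deserialized fetch; sketched here with pymongo's RawBSONDocument purely as an analogy, since the question uses the C# driver (connection string, names, and filter are placeholders, and both fetches include the network transfer, so the difference only approximates the decoding cost):

import time

from bson.codec_options import CodecOptions
from bson.raw_bson import RawBSONDocument
from pymongo import MongoClient

coll = MongoClient("mongodb://db-host:27017")["app"]["bigdocs"]  # placeholder names

# Same collection, but documents are returned as undecoded raw BSON bytes.
raw_coll = coll.with_options(codec_options=CodecOptions(document_class=RawBSONDocument))

t0 = time.perf_counter()
raw_doc = raw_coll.find_one({"_id": 1})   # network + server time, no object building
t1 = time.perf_counter()
full_doc = coll.find_one({"_id": 1})      # network + server time + full decode
t2 = time.perf_counter()

print(f"raw fetch: {t1 - t0:.3f}s, decoded fetch: {t2 - t1:.3f}s")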