I have a MongoDB collection with a custom _id and 500M+ documents. The _id index is ≈25 GB and the whole collection is ≈125 GB. The server has 96 GB of RAM. Read activity consists only of range queries by _id, and explain() shows that the queries use the index. Mongo works rather fast for some time after the load tests start and then slows down. I can see a lot of entries like this in the log:
[conn116] getmore csdb.archive query: { _id: { $gt: 2812719756651008, $lt: 2812720361451008 } } cursorid:444942282445272280 ntoreturn:0 keyUpdates:0 numYields: 748 locks(micros) r:7885031 nreturned:40302 reslen:1047872 10329ms
A piece of db.currentOp():
"waitingForLock" : false,
"numYields" : 193,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(869051),
"w" : NumberLong(0)
},
"timeAcquiringMicros" : {
"r" : NumberLong(1369404),
"w" : NumberLong(0)
}
}
What is locks(micros) r? What can I do to cut it down?
What is locks(micros) r?
The amount of time that read locks were held (in microseconds).
R - Global read lock
W - Global write lock
r - Database specific read lock
w - Database specific write lock
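These figures also show up per operation in db.currentOp(), as in the excerpt above. A minimal sketch in the mongo shell (field names as in that excerpt; the output format varies by MongoDB version):
// List in-progress operations together with their lock statistics
// (the same lockStats block shown in the currentOp() excerpt above).
db.currentOp().inprog.forEach(function (op) {
    if (op.lockStats) {
        print("opid " + op.opid + " on " + op.ns);
        printjson(op.lockStats);
    }
});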
What can I do to cut it down?
How does sharding affect concurrency?
Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances.
Diagnosing Performance Issues (Locks)
MongoDB uses a locking system to ensure data set consistency. However, if certain operations are long-running, or a queue forms, performance will slow as requests and operations wait for the lock. Lock-related slowdowns can be intermittent. To see if the lock has been affecting your performance, look to the data in the globalLock section of the serverStatus output. If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock. This indicates a possible concurrency issue that may be affecting performance.
If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time. If globalLock.ratio is also high, MongoDB has likely been processing a large number of long running queries. Long queries are often the result of a number of factors: ineffective use of indexes, non-optimal schema design, poor query structure, system architecture issues, or insufficient RAM resulting in page faults and disk reads.
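For example, a quick way to check the queue from the mongo shell (a sketch; the globalLock field names are as documented for serverStatus):
// Check whether operations are queueing behind the global lock.
var status = db.serverStatus();
printjson({
    queuedReaders: status.globalLock.currentQueue.readers,
    queuedWriters: status.globalLock.currentQueue.writers,
    queuedTotal:   status.globalLock.currentQueue.total,
    activeClients: status.globalLock.activeClients.total
});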
How We Scale MongoDB (Vertically)
Sadly, MongoDB itself will usually become a bottleneck before the capacity of a server is exhausted. Write lock is almost always the biggest problem (though there are practical limits to how much IO capacity a single MongoDB process can take advantage of).
Related
I am trying to improve the oplog coverage of my MongoDB server, because right now it covers fewer hours than I would like (I am not planning to increase the oplog file size for now). What I found is that there are many no-op records in the oplog collection ({ "op": "n" } plus the whole document in "o"), and they can take up about 20-30% of the physical oplog size.
How could I find the reason for this? It doesn't seem right.
We are using MongoDB 3.6 + NodeJS 10 + Mongoose
P.S. It appears for many different collections and use cases, so it's hard to understand what application logic is behind all these entries.
No-op writes are expected in a MongoDB 3.4+ replica set in order to support the Max Staleness specification that helps applications avoid reading from stale secondaries and provides a more accurate measure of replication lag. These no-op writes only happen when the primary is idle. The idle write interval is not currently configurable (as at MongoDB 4.2).
The Max Staleness specification includes an example scenario and more detailed rationale for why the Primary must write periodic no-ops as well as other design decisions.
A relevant excerpt from the design rationale:
An idle primary must execute a no-op every 10 seconds (idleWritePeriodMS) to keep secondaries' lastWriteDate values close to the primary's clock. The no-op also keeps opTimes close to the primary's, which helps mongos choose an up-to-date secondary to read from in a CSRS.
Monitoring software like MongoDB Cloud Manager that charts replication lag will also benefit when spurious lag spikes are solved.
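If you want to confirm how much of the oplog these entries occupy, a rough sketch in the mongo shell (it scans the whole oplog, so it can take a while on a large one):
// Group oplog entries by operation type and sum their BSON sizes.
var oplog = db.getSiblingDB("local").oplog.rs;
var bytesByOp = {};
oplog.find().forEach(function (entry) {
    bytesByOp[entry.op] = (bytesByOp[entry.op] || 0) + Object.bsonsize(entry);
});
printjson(bytesByOp);   // e.g. { "n" : ..., "i" : ..., "u" : ..., "d" : ... }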
I am importing a lot of data (18 GB, 3 million documents) over time; almost all of the data is indexed, so there is a lot of indexing going on. The system consists of a single client (a single process on a separate machine) establishing a single connection (using pymongo) and doing insertMany in batches of 1,000 docs.
MongoDB setup:
single instance,
journaling enabled,
WiredTiger with default cache,
RHEL 7,
version 4.2.1,
192 GB RAM, 16 CPUs,
1.5 TB SSD,
cloud machine.
When I start the server (after a full reboot) and insert the collection, it takes 1.5 hours. If the server has been running for a while inserting some other data (from a single client), and after it finishes I delete the collection and insert the same data again, it takes 6 hours (more than 60% of the disk is still free, and nothing else is making connections to the db). It feels like the server performance degrades over time, maybe something OS specific. Any similar experience or ideas?
I faced a similar issue; the problem was RAM.
After a full restart the server had all its RAM free, but after the insertions the RAM was full. Deleting the collection and inserting the same data again might take longer because some RAM was still in use and less was free for mongo.
Try freeing up RAM and cache after you drop the collection, and check whether the same behaviour persists.
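Before clearing anything, it may help to check how much memory MongoDB itself is holding in its WiredTiger cache. A minimal sketch in the mongo shell, assuming the standard serverStatus field names:
// Compare WiredTiger cache usage against its configured maximum.
var cache = db.serverStatus().wiredTiger.cache;
printjson({
    bytesCurrentlyInCache:  cache["bytes currently in the cache"],
    maximumBytesConfigured: cache["maximum bytes configured"],
    trackedDirtyBytes:      cache["tracked dirty bytes in the cache"]
});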
As you haven't provided any specific details, I would recommend you enable profiling; this will allow you to examine performance bottlenecks. At the mongo shell run:
db.setProfilingLevel(2)
Then run:
db.system.profile.find( { "millis": { "$gt": 10 } }, { "millis": 1, "command": 1 }) // find operations over 10 milliseconds
Once done, reset the profiling level:
db.setProfilingLevel(0)
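To get an overview rather than scrolling through individual documents, you can also summarise the profile collection, for example (a sketch using standard system.profile fields):
// Summarise profiled operations by namespace and operation type.
db.system.profile.aggregate([
    { $group: {
        _id: { ns: "$ns", op: "$op" },
        count: { $sum: 1 },
        avgMillis: { $avg: "$millis" },
        maxMillis: { $max: "$millis" }
    } },
    { $sort: { avgMillis: -1 } }
])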
I have a server which constantly inserts and updates data in MongoDB collections.
I have 3 collections: handTab, video, handHistory.
When the number of documents in those collections reaches 40,000, 40,000, and 80,000 respectively, the performance of the insert, update, and findAndModify commands degrades.
In MongoDB Compass I can see that these queries are the slowest operations.
Also, the CPU utilization on the production server goes up to 100%.
This leads to connection timeouts in my game-server API.
The database is on one machine and the game server runs on another. Why does the CPU utilization of the database machine reach 100% once the collections grow larger, when I am running the same number of mongo operations now as before?
I've got 3 MongoDB (v3.4.10) servers (256 GB RAM, 1 TB HDD, 12 CPUs each) in a replica set. The servers are under decent load and the HDD is eaten up quite rapidly. I'm considering sharding the big collections, but I'm not there yet.
In the meantime, the typical scenario I face:
Morning: I see an alert that the database HDD is 92% used.
Midday: I delete a bunch of redundant data from big collections (1M-4M entries) on the master. I either update the collection like this:
update({}, {'$unset' : {'key_1' : true, 'key_2' : true, 'key_3' : true}}, {"multi" : 1})
or create a new collection, insert only the needed data there, and drop the old one.
Evening: (about 4-5 hours after the deletion, usually the peak of the load) Mongo response time increases dramatically from 3-4 ms to 500 ms. This period lasts for a while, during which my application is almost down. It only returns to normal performance after I stop my application completely for 10-20 minutes and then start it again.
On days I do not delete data, the database performs normally.
I have read a bit about the oplog and the nuances of deleting data on replicated servers. However, in my case the lag between the deletion and the performance drop is several hours.
Is there any internal Mongo process that happens hours after a massive update/insert? How should I bulk update/insert to avoid this?
I am running a fairly standard MongoDB (3.0.5) replica set with 1 primary and 2 secondaries. My PHP application's read preference is primary, so no reads take place on the secondaries - they are only for failover. I am running a load test on my application, which creates around 600 queries / updates per second. The operations are all run against a collection that has ~500,000 documents. The queries are optimized and supported by indexes; no query takes longer than 40 ms.
My problem is that I am getting quite a high CPU load on all 3 nodes (200% - 300%) - sometimes the load on the secondaries is even higher than on the primary. Disk IO and RAM usage seem to be okay - at least they are not hitting any limits.
The primary's log file contains a huge number of getmore oplog queries - I would guess that every operation on the primary triggers an oplog query. It seems to me that this is too much replication overhead, but I don't have any prior experience with MongoDB under load and no baseline figures to compare against.
As the setup will have to tolerate even more load in production, my question is whether the replication overhead is to be expected and whether it's normal that the CPU load goes up that high, even on the secondaries or is there something I'm missing?
Think about it this way. Whatever data-changing operation happens on the primary, it also needs to happen on every secondary. If there are many such operations and they create high CPU load on the primary, well, then the same situation will repeat itself on the secondaries.
Of course, in your case you'd expect the primary's CPU to be more stressed, because in addition to the writes it also handles all the reads. Probably, in your scenario, reads are relatively light and there aren't many of them when compared to the amount of writes. This would explain why the load on the primary is roughly the same as on the secondaries.
my question is whether the replication overhead is to be expected
What you call replication overhead I see as the nature of replication. A primary stressed by writes results in all secondaries being stressed by writes as well.
and whether it's normal that the CPU load goes up that high, even on the secondaries
You have 600 write queries per second and your RAM and disk are not stressed; to me this signifies that you've set up your indexes properly. A high CPU load is expected with this number of write operations per second, because the indexes are being used intensively.
Please keep in mind that once you have gathered more data, the indexes and the memory-mapped data may not fit into memory anymore, and then both the RAM and the disk will be stressed, while CPU is unlikely to be under high load anymore. In this situation, you will probably want to either add more RAM or look into sharding.
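One way to keep an eye on that tipping point is to compare the data and index sizes with the host's RAM, for example (a sketch in the mongo shell, run against the application database):
// Report data and index sizes, scaled to megabytes; compare with available RAM.
var stats = db.stats(1024 * 1024);
print("data size   : " + stats.dataSize + " MB");
print("index size  : " + stats.indexSize + " MB");
print("storage size: " + stats.storageSize + " MB");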