I am using MongoDB (3.0) with a replica set of 3 servers. I have been experiencing very slow queries for about a week and have been trying to find out what is wrong on my servers.
Using the db.currentOp() command I can see that queries are sometimes blocked on the secondaries while a "replication worker" is running. All the queries are waiting for a lock ("waitingForLock" : true), and it seems that the replication worker has taken this lock and has been running for several minutes (which seems pretty long).
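For reference, this is roughly how I am inspecting it from the mongo shell (the fields in the filter are the ones reported in the 3.0 currentOp output):

    // operations currently blocked waiting for a lock
    db.currentOp({ waitingForLock: true });

    // include system operations too, and pick out anything long-running,
    // e.g. the replication worker that has been holding the lock
    db.currentOp(true).inprog.filter(function (op) {
        return op.secs_running && op.secs_running > 60;
    });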
To be more specific about my use case: I have multiple databases in the replica set, and all of these databases contain the same collections but not the same amount of data (I use one database per client).
I use WiredTiger as the storage engine, which normally (as the docs claim) does not use global locks. So I was expecting queries on a specific collection to be slow while that collection is being updated, but I was not expecting all queries to be slow or blocked.
Has anyone experienced the same issue? Is there some limitation in MongoDB when reads are performed while other processes are writing to the database?
Furthermore, is there a way to tell MongoDB that I don't care about consistency for read operations (in order to avoid locks)?
Thanks.
Update:
After restarting the servers, the problem disappeared. It seems that memory and CPU usage had been growing (though they were still very low) and that this led to a slow replication process which held a lock and prevented queries from executing.
I still don't understand why we have this problem on this database. Maybe version 3.0.9 has a bug (I will upgrade to 3.0.12). Still, it takes about a month for the database to become very slow, and only a restart of all the servers solves the problem. Our workload is mainly writes (with findAndModify). Does anyone know of a bug in Mongo where intensive writes lead to performance degrading over time?
I am trying to improve the oplog coverage of my MongoDB server, because right now it covers fewer hours than I would like (I am not planning to increase the oplog file size for now). What I found is that there are many no-op records in the oplog collection ({ "op": "n" } plus the whole document in "o"), and they can take up roughly 20-30% of the physical oplog size.
How can I find the reason for this? It doesn't seem right.
We are using MongoDB 3.6 + NodeJS 10 + Mongoose
P.S. This appears for many different collections and use cases, so it's hard to understand what application logic is behind all these entries.
No-op writes are expected in a MongoDB 3.4+ replica set in order to support the Max Staleness specification that helps applications avoid reading from stale secondaries and provides a more accurate measure of replication lag. These no-op writes only happen when the primary is idle. The idle write interval is not currently configurable (as at MongoDB 4.2).
The Max Staleness specification includes an example scenario and more detailed rationale for why the Primary must write periodic no-ops as well as other design decisions.
A relevant excerpt from the design rationale:
An idle primary must execute a no-op every 10 seconds (idleWritePeriodMS) to keep secondaries' lastWriteDate values close to the primary's clock. The no-op also keeps opTimes close to the primary's, which helps mongos choose an up-to-date secondary to read from in a CSRS.
Monitoring software like MongoDB Cloud Manager that charts replication lag will also benefit when spurious lag spikes are solved.
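If you want to confirm how much of your oplog these no-ops actually account for, a rough way to check from the mongo shell is sketched below (iterating the whole oplog can be slow on a large deployment, and the 20-30% figure will of course vary):

    // count the no-op entries in the oplog
    var oplog = db.getSiblingDB("local").oplog.rs;
    var total = oplog.count();
    var noops = oplog.count({ op: "n" });
    print("no-op entries: " + noops + " of " + total);

    // approximate the bytes they consume (slow: scans every no-op document)
    var bytes = 0;
    oplog.find({ op: "n" }).forEach(function (doc) {
        bytes += Object.bsonsize(doc);
    });
    print("approx. no-op size: " + (bytes / 1024 / 1024).toFixed(1) + " MB");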
I am using Scala, ReactiveMongo 0.10.5 and Mongo 2.6.4 running on Ubuntu. I have tested on a few machine configurations, but right now I am working with 15 GB of memory, 2 cores and 60 GB of SSD storage (AWS).
I have just set up a test mongo instance and have been using it to benchmark a few things, however I am seeing some inconsistency that I can't explain.
I am writing a consistent amount of data using 10 separate threads to a single collection. Each write consists of a document containing an array of 1000 elements. Each element is a complex document consisting of several fields and nested fields. I have tested with arrays of 1000, 10000 and 100 elements and have seen the same behavior with all. Each write is unique (i.e. I never write to the same document twice).
The write speed tends to be around 100-200ms per write with the current hardware I am using. I would like better but that isn't my main issue.
My main issue is that sometimes the write times will spike. When they do, it can take a single write several seconds to complete. They do eventually complete but it takes a while. I have timeouts built into the app doing the writing (10 seconds) and when the spikes happen it will frequently hit that timeout. I have increased the timeout and verified that the write does eventually complete but it can take a long time (30+ seconds).
I have worked with Mongo before using the Mongo Java Driver in Scala and have not noticed this problem. However it is unclear whether the issue is a result of the driver, or my Mongo setup.
I have looked at the logs and while they report when the query is taking longer, they don't actually provide any information about why it is taking longer. I have done the same with profiling and again they report a long query but don't say why it is long.
I have run mongostat while this is happening, and it seems that when the writes start taking a long time there is a similar slowdown in mongostat, i.e. mongostat will pause for several seconds before continuing.
The mongo machine itself is bored while this is happening. Load averages are minimal as are CPU and memory usage. It does not appear to be going into swap.
I suspect I just have something configured incorrectly in Mongo, but I haven't been able to find anything that indicates what.
Has anyone seen this behavior before? Is it something in my configuration or perhaps something with the Reactive Mongo driver?
UPDATE:
Using iostat I was able to determine that the normal write throughput is around 1 MB/second. However, during the slow periods it spikes to 6-7 MB/second.
I also found the following in the mongo logs.
[DataFileSync] flushing mmaps took 15621ms for 35 files
[DataFileSync] flushing mmaps took 14816ms for 22 files
In at least one case this log statement corresponds exactly with one of the slow downs.
This definitely seems to be a disk flush problem based on these observations.
Does this imply that I am pushing more data than the current Mongo configuration can handle? Or is there some other configuration that can be done to reduce the impact of those flushes?
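One knob that seems relevant, assuming the periodic MMAPv1 flush ("flushing mmaps") really is the bottleneck, is the flush interval, syncdelay (60 seconds by default). A sketch of checking and lowering it from the shell:

    // current flush interval (seconds)
    db.adminCommand({ getParameter: 1, syncdelay: 1 });

    // flush more often so each flush has less dirty data to write; this may
    // smooth out the multi-second pauses at the cost of more frequent I/O
    db.adminCommand({ setParameter: 1, syncdelay: 30 });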
It appears that in this case the problem may actually have been related to thread locking within the application itself. Once I resolved the issues with thread locking these other issues seemed to go away.
To be honest I don't know why thread locking would result in the observed behavior in Mongo, but if the problem is gone I am not going to complain.
We're using MongoDB 2.2.0 at work. The DB contains about 51 GB of data (at the moment) and I'd like to do some analytics on the user data that we've collected so far. The problem is that it's the live machine and we can't afford another slave at the moment. I know MongoDB has a read lock which may affect any writes that happen, especially with complex queries. Is there a way to tell MongoDB to treat my (particular) query with the lowest priority?
In MongoDB, reads and writes do affect each other. Read locks are shared, but read locks block write locks from being acquired, and of course no other reads or writes happen while a write lock is held. MongoDB operations yield periodically to keep threads that are waiting for locks from starving. You can read more about the details of that here.
What does that mean for your use case? Because there is no way to tell MongoDB to access the data without a read lock, nor is there a way to prioritize the requests (at least not yet), whether the reads significantly affect the performance of your writes depends on how much "headroom" you have available while write activity is going on.
One suggestion I can make: when figuring out how to run analytics, rather than scanning the entire data set (i.e. doing an aggregation query over all historical data), try running smaller aggregation queries on short time slices (see the sketch after this list). This will accomplish two things:
read jobs will be shorter-lived and therefore will finish quicker; this will give you a chance to assess what impact the queries have on your "live" performance.
you won't be pulling all the old data into RAM at once; by spacing out these analytical queries over time you will minimize the impact they have on current write performance.
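Here is the sketch mentioned above: a small aggregation over a single day rather than the full history. The collection and field names ("events", "created", "userId") are made up for illustration, not taken from your schema:

    var start = ISODate("2012-10-01T00:00:00Z");
    var end   = ISODate("2012-10-02T00:00:00Z");

    // aggregate one day of data instead of the whole collection
    db.events.aggregate([
        { $match: { created: { $gte: start, $lt: end } } },
        { $group: { _id: "$userId", count: { $sum: 1 } } }
    ]);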
Depending on what it is you can't afford about getting another server, you might consider getting a short-lived AWS instance, which may not be very powerful but would be available to run a long analytical query against a copy of your data set. Just be careful when making the copy of your data: doing a full sync off of the production system will place a heavy load on it (a more effective way would be to seed it from a recent backup/file snapshot).
Such operations are best left to the slaves of a replica set. For one thing, read locks can be shared to allow many reads at once, but write locks will block reads. And while you can't prioritize queries, MongoDB does yield long-running read/write queries. Their concurrency docs should help.
If you can't afford another server, you can set up a slave on the same machine, provided you have some spare RAM/disk headroom and you use the slave lightly/occasionally. You must be careful though: your disk I/O will increase significantly.
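Once the slave is up, a minimal sketch of pointing the analytical reads at it, with the mongo shell connected directly to the slave/secondary (collection and field names are again placeholders):

    rs.slaveOk();   // allow reads on this secondary

    // heavy analytical reads now use the secondary's data and locks,
    // at the price of possibly slightly stale results
    db.events.count({ created: { $gte: ISODate("2012-10-01T00:00:00Z") } });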
We are using MongoDB in our Java EE based application.
As per the architecture of our application, MongoDB gets continuously updated through different threads (keeping MongoDB always busy).
So, as a result, MongoDB is continuously busy with reads, writes and updates.
We have a concern. What we have observed is that when MongoDB is at its peak utilization (as seen via the top command in Linux), reads from MongoDB fetch wrong values.
Sometimes this is reproducible and sometimes not; it depends on the load.
Please let us know how to address this issue.
Because replication is asynchronous in MongoDB, when you use slaveOk: true you also read from the secondaries, but the data is not necessarily present there yet. Especially when you have a big load, replication over the network can slow down; this is called replication lag, and you can check it with db.printSlaveReplicationInfo(): http://docs.mongodb.org/manual/reference/method/db.printSlaveReplicationInfo/#db.printSlaveReplicationInfo
If you turn off reading from secondaries with slaveOk: false, your app will read only from the primaries; to decrease the load in that case, use sharding. And have no worries about the primary going down: if you have an appropriate replication structure, one secondary will take over.
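A quick way to check that lag from the mongo shell:

    db.printSlaveReplicationInfo();   // how many seconds each secondary is behind
    rs.status();                      // per-member state and optime details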
I am currently evaluating MongoDB for our write-heavy application...
Currently MongoDB uses a single thread for write operations and also takes a global lock whenever it is doing a write... Is it possible to exploit multiple CPUs on a multi-CPU server to get better write performance? What are your workarounds for the global write lock?
No, it is still recommended to use sharding to utilize multiple CPU cores.
As stated in the FAQ
Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances.
Each mongod instance is independent of the others in the shard cluster and uses the MongoDB readers-writer lock. The operations on one mongod instance do not block the operations on any others.
Sharding on a single box has its issues, as one user stated in the mongodb-user mailing list
After some significant experimentation, I've found a single MongoDB shard daemon CANNOT use more than one CPU. On a 24 CPU box, performance scales up until we hit about 8 shards, then another limit kicks in.
So right now, the easy solution is to shard.
Yes, normally sharding is done across servers. However, it is completely possible to shard on a single box. You simply fire up the shards on different ports and provide them with different folders. Here's a sample configuration of 2 shards on one box.
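As a rough, hypothetical sketch of the idea (the ports, paths and database/collection names below are made up, not the original sample): start two mongod shard processes, a config server and a mongos, then register the shards from a mongo shell connected to the mongos.

    // started beforehand from the OS shell, e.g.:
    //   mongod --shardsvr --port 27018 --dbpath /data/shard1
    //   mongod --shardsvr --port 27019 --dbpath /data/shard2
    //   mongod --configsvr --port 27020 --dbpath /data/config
    //   mongos --configdb localhost:27020 --port 27017

    // then, connected to the mongos:
    sh.addShard("localhost:27018");
    sh.addShard("localhost:27019");
    sh.enableSharding("mydb");
    sh.shardCollection("mydb.events", { userId: 1 });   // pick a key that spreads writes well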
The MongoDB team recognizes that this is kind of sub-par, and I know from talking to them that they're looking at better ways to do this.
Obviously once you get multiple shards on one box and increase your write threads, you will have to be wary of disk IO. In my experience, I've been able to saturate disks with a single write thread. If your inserts/updates are relatively simple, you may find that extra write threads don't do anything. (Map-Reduces are the exception here, sharding definitely helps there)