Query uses the correct index but still takes a long time to execute - MongoDB

I am working on optimizing a MongoDB query. One of the queries is taking too long to execute even though it uses an index. Sharing the snippet below.
Here is the command copied from Atlas:
"command": {
"getMore": 5992505034453534,
"collection": "data",
"$db": "prod",
"$clusterTime": {
"clusterTime": {
"$timestamp": {
"t": 1670439680,
"i": 1071
}
},
"originatingCommand": {
"find": "data",
"filter": {
"accountId": "QQQAAQAQAQAQA",
"custId": "62a7b11fy883bhedge73",
"state": {
"$in": [
"INITIALIZING",
"RUNNING"
]
},
"startTime": {
"$lte": {
"$date": "2022-12-07T17:39:28.573Z"
}
}
},
"maxTimeMS": 300000,
....
"planSummary": [
{
"IXSCAN": {
"accountId": 1,
"custId": 1,
"state": 1,
"startTime": 1
}
}
],
"cursorid": 5992505034144062000,
"keysExamined": 2520,
"docsExamined": 2519,
"cursorExhausted": 1,
"numYields": 130,
"nreturned": 2519,
"reslen": 4898837,
I have the below index in MongoDB:
Index Name: accountId_custId_state_startTime
accountId:1 custId:1 state:1 startTime:1
Atlas Stats:
Index Size: 776.5MB
Usage: 73.58/min
I do not understand why the execution time is so high. Why is it taking 1672 ms to run this query?

From an indexing perspective, the operation is perfectly efficient:
"keysExamined": 2520,
"docsExamined": 2519,
"nreturned": 2519,
It only scanned the relevant portion of the index, pulling only documents that were sent back to the client as part of the result set. There is nothing that can be improved from an indexing perspective here. Therefore any observed slowness is likely being caused by "something else".
In general it shouldn't take the database 1.6 seconds to process 2,519 documents (~5MB). But without knowing more about your environment we can't really say anything more specific. Is there a meaningful amount of concurrent workload that may be competing for resources here? Is the cluster itself undersized for the workload? It is notable that the ratio of yields to documents returned seems higher than usual, which could be an indicator of problems like these.
I would recommend looking at the overall health of the cluster and at the other operations that are running at the same time. My impression is that running this operation in isolation would probably result in it executing faster, further suggesting that the problem (and therefore the resolution as well) is somewhere other than the index used by this operation.
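If you want to double-check this outside of Atlas, here is a minimal sketch (reusing the collection and filter values from the question; adjust names as needed) that reproduces the plan with execution statistics and lists other operations that might be competing for resources:
// Re-run the originating find with execution stats to confirm the
// keysExamined / docsExamined / nReturned ratio stays close to 1:1:1.
db.data.find({
  accountId: "QQQAAQAQAQAQA",
  custId: "62a7b11fy883bhedge73",
  state: { $in: ["INITIALIZING", "RUNNING"] },
  startTime: { $lte: ISODate("2022-12-07T17:39:28.573Z") }
}).explain("executionStats")

// While the query is running slowly, check for concurrent operations
// that have been active for a while (requires suitable privileges).
db.currentOp({ active: true, secs_running: { $gte: 1 } })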

Related

Performance drop in upsert after delete with replica set

I need your help understanding a performance problem.
We have a system where we store sets of documents (1k-4k docs) in batches. Documents have this structure: {_id: ObjectId(), RepositoryId: UUID(), data...}
where RepositoryId is the same for all instances in the set. We also have unique indexes on: {_id: 1, RepositoryId: 1} and {RepositoryId: 1, ...}.
The use case is: delete all documents with the same RepositoryId:
db.collection.deleteMany(
  { RepositoryId: UUID("SomeGUID") },
  { writeConcern: { w: "majority", j: true } }
)
And then re-upsert the batches (300 items per batch) with the same RepositoryId we deleted before:
db.collection.insertMany(
  [ { RepositoryId: UUID(), data... }, ... ],
  {
    writeConcern: { w: 1, j: false },
    ordered: false
  }
)
The issue is that the upsert of the first few (3-5) batches takes much more time than the rest (first batch: 10 s, 8th batch: 0.1 s). There is also this entry in the log file:
{
  "t": {
    "$date": "2023-01-19T15:49:02.258+01:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn64",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "####.$cmd",
    "command": {
      "update": "########",
      "ordered": false,
      "writeConcern": {
        "w": 1,
        "fsync": false,
        "j": false
      },
      "txnNumber": 16,
      "$db": "#####",
      "lsid": {
        "id": {
          "$uuid": "6ffb319a-6003-4221-9925-710e9e2aa315"
        }
      },
      "$clusterTime": {
        "clusterTime": {
          "$timestamp": {
            "t": 1674139729,
            "i": 5
          }
        }
      }
    },
    "numYields": 0,
    "reslen": 11550,
    "locks": {
      "ParallelBatchWriterMode": {
        "acquireCount": {
          "r": 600
        }
      },
      "ReplicationStateTransition": {
        "acquireCount": {
          "w": 601
        }
      },
      "Global": {
        "acquireCount": {
          "w": 600
        }
      },
      "Database": {
        "acquireCount": {
          "w": 600
        }
      },
      "Collection": {
        "acquireCount": {
          "w": 600
        }
      },
      "Mutex": {
        "acquireCount": {
          "r": 600
        }
      }
    },
    "flowControl": {
      "acquireCount": 300,
      "timeAcquiringMicros": 379
    },
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "writeConcern": {
      "w": 1,
      "j": false,
      "wtimeout": 0,
      "provenance": "clientSupplied"
    },
    "storage": {},
    "remote": "127.0.0.1:52800",
    "protocol": "op_msg",
    "durationMillis": 13043
  }
}
Is there some background process that runs after the delete and affects the upsert performance of the first batches? This was not a problem until we switched from a standalone server to a single-instance replica set, which we did because another part of the app needs transaction support. This use case does not require transactions, but we cannot host two MongoDB instances with different setups. The DB is exclusive to this operation; no other operation runs on the DB (it runs in an isolated test environment). How can we fix it?
The issue is reproducible. It seems that when there is a time gap between test runs (a few minutes), the problem is not there for the first run, but the following runs are problematic.
Running on a machine with a Ryzen 7 PRO 4750U, 32 GB RAM and a Samsung 970 EVO M.2 SSD. MongoDB version 5.0.5.
In that log entry, timeAcquiringMicros indicates that this operation waited while attempting to acquire a lock.
flowControl is a throttling mechanism that delays writes on the primary node when the secondary nodes are lagging, with the intent of letting them catch up before they get so far behind that consistency is lost.
Waiting on the flowControl lock would suggest that there was a backlog of operations still being replicated to the secondaries, and that they were a bit behind, so the new writes were being slowed.
See Replication Lag and Flow Control for more detail.
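To check whether flow control is actually engaging and how far the secondary is behind, a quick sketch from the shell (field names as reported by serverStatus; they may vary slightly between versions):
// Flow control counters on the primary: a growing timeAcquiringMicros or
// isLaggedCount suggests writes are being throttled because of replication lag.
db.serverStatus().flowControl

// Show how far each secondary currently is behind the primary.
rs.printSecondaryReplicationInfo()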

Poor write performance with MongoDB 5.0.8 in a PSA (Primary-Secondary-Arbiter) setup

I am having write performance problems with MongoDB 5.0.8 in a PSA (Primary-Secondary-Arbiter) deployment when one data-bearing member goes down.
I am aware of the "Mitigate Performance Issues with PSA Replica Set" page and the procedure to temporarily work around this issue.
However, in my opinion, the manual intervention described there should not be necessary during operation. So what can I do to ensure that the system continues to run efficiently even if a node fails? In other words, as in MongoDB 4.x with the option "enableMajorityReadConcern=false".
As I understand it, the problem has something to do with the defaultRWConcern. When configuring a PSA replica set in MongoDB you are forced to set the DefaultRWConcern. Otherwise the following message will appear when rs.addArb is called:
MongoServerError: Reconfig attempted to install a config that would
change the implicit default write concern. Use the setDefaultRWConcern
command to set a cluster-wide write concern and try the reconfig
again.
So I did
db.adminCommand({
  "setDefaultRWConcern": 1,
  "defaultWriteConcern": {
    "w": 1
  },
  "defaultReadConcern": {
    "level": "local"
  }
})
I would expect this configuration to cause no lag when reading/writing to a PSA system with only one data-bearing node available.
But I observe "slow query" messages in the mongod log like this one:
{
  "t": {
    "$date": "2022-05-13T10:21:41.297+02:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn149",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "<db>.<col>",
    "command": {
      "insert": "<col>",
      "ordered": true,
      "txnNumber": 4889253,
      "$db": "<db>",
      "$clusterTime": {
        "clusterTime": {
          "$timestamp": {
            "t": 1652430100,
            "i": 86
          }
        },
        "signature": {
          "hash": {
            "$binary": {
              "base64": "bEs41U6TJk/EDoSQwfzzerjx2E0=",
              "subType": "0"
            }
          },
          "keyId": 7096095617276968965
        }
      },
      "lsid": {
        "id": {
          "$uuid": "25659dc5-a50a-4f9d-a197-73b3c9e6e556"
        }
      }
    },
    "ninserted": 1,
    "keysInserted": 3,
    "numYields": 0,
    "reslen": 230,
    "locks": {
      "ParallelBatchWriterMode": {
        "acquireCount": {
          "r": 2
        }
      },
      "ReplicationStateTransition": {
        "acquireCount": {
          "w": 3
        }
      },
      "Global": {
        "acquireCount": {
          "w": 2
        }
      },
      "Database": {
        "acquireCount": {
          "w": 2
        }
      },
      "Collection": {
        "acquireCount": {
          "w": 2
        }
      },
      "Mutex": {
        "acquireCount": {
          "r": 2
        }
      }
    },
    "flowControl": {
      "acquireCount": 1,
      "acquireWaitCount": 1,
      "timeAcquiringMicros": 982988
    },
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "writeConcern": {
      "w": 1,
      "wtimeout": 0,
      "provenance": "customDefault"
    },
    "storage": {},
    "remote": "10.10.7.12:34258",
    "protocol": "op_msg",
    "durationMillis": 983
  }
}
The collection involved here is under considerable load, with about 1000 reads and 1000 writes per second from different (concurrent) clients.
MongoDB 4.x with "enableMajorityReadConcern=false" performed normally here and I did not notice any loss of performance in my application. MongoDB 5.x doesn't manage that, and in my application data is piling up that I can't get written away in a performant way.
So my question is whether I can get the MongoDB 4.x behaviour back. A write guarantee from the single data-bearing node that is available in the failure scenario would be OK for me. But in a failure scenario, having to manually reconfigure the faulty node should really be avoided.
Thanks for any advice!
In the end we changed the setup to a PSS layout.
This was also recommended in the MongoDB Community Forum.
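For reference, the temporary mitigation from the "Mitigate Performance Issues with PSA Replica Set" page that the question mentions essentially removes the unavailable data-bearing member's vote so the majority commit point can advance again. A rough sketch (the member index is an assumption, not taken from the question):
// Sketch only: members[1] is assumed to be the data-bearing node that is down.
// With its vote and priority set to 0, the primary no longer waits for it and
// flow control stops throttling writes. Revert once the member is healthy again.
cfg = rs.conf()
cfg.members[1].votes = 0
cfg.members[1].priority = 0
rs.reconfig(cfg)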

MongoDB Query doesn't return with a sort

I have the query:
db.changes.find(
  {
    $or: [
      { _id: ObjectId("60b1e8dc9d0359001bb80441") },
      { _oid: ObjectId("60b1e8dc9d0359001bb80441") },
    ],
  },
  {
    _id: 1,
  }
);
which returns almost instantly.
But the moment I add a sort, the query doesn't return. The query just runs. The longest I could tolerate the query running was over 30 minutes, so I'm not entirely sure whether it does eventually return.
db.changes
  .find(
    {
      $or: [
        { _id: ObjectId("60b1e8dc9d0359001bb80441") },
        { _oid: ObjectId("60b1e8dc9d0359001bb80441") },
      ],
    },
    {
      _id: 1,
    }
  )
  .sort({ _id: -1 });
I have the following indexes:
[
  {
    "_oid": 1
  },
  {
    "_id": 1
  }
]
and this is what db.currentOp() returns:
{
  "host": "xxxx:27017",
  "desc": "conn387",
  "connectionId": 387,
  "client": "xxxx:55802",
  "appName": "MongoDB Shell",
  "clientMetadata": {
    "application": {
      "name": "MongoDB Shell"
    },
    "driver": {
      "name": "MongoDB Internal Client",
      "version": "4.0.5-18-g7e327a9017"
    },
    "os": {
      "type": "Linux",
      "name": "Ubuntu",
      "architecture": "x86_64",
      "version": "20.04"
    }
  },
  "active": true,
  "currentOpTime": "2021-09-24T15:26:54.286+0200",
  "opid": 71111,
  "secs_running": NumberLong(23),
  "microsecs_running": NumberLong(23860504),
  "op": "query",
  "ns": "myDB.changes",
  "command": {
    "find": "changes",
    "filter": {
      "$or": [
        {
          "_id": ObjectId("60b1e8dc9d0359001bb80441")
        },
        {
          "_oid": ObjectId("60b1e8dc9d0359001bb80441")
        }
      ]
    },
    "sort": {
      "_id": -1.0
    },
    "projection": {
      "_id": 1.0
    },
    "lsid": {
      "id": UUID("38c4c09b-d740-4e44-a5a5-b17e0e04f776")
    },
    "$readPreference": {
      "mode": "secondaryPreferred"
    },
    "$db": "myDB"
  },
  "numYields": 1346,
  "locks": {
    "Global": "r",
    "Database": "r",
    "Collection": "r"
  },
  "waitingForLock": false,
  "lockStats": {
    "Global": {
      "acquireCount": {
        "r": NumberLong(2694)
      }
    },
    "Database": {
      "acquireCount": {
        "r": NumberLong(1347)
      }
    },
    "Collection": {
      "acquireCount": {
        "r": NumberLong(1347)
      }
    }
  }
}
This wasn't always a problem; it only started recently. I've also rebuilt the indexes, and nothing seems to work. I've tried using .explain(), and that also doesn't return.
Any suggestions would be welcome. For my situation, it's going to be much easier to make changes to the DB than it is to change the query.
This is happening due to the way Mongo chooses what's called a "winning plan". I recommend you read more on this in my other answer, which explains this behavior. However, it will be interesting to see whether the Mongo team considers this specific behavior a feature or a bug.
Basically the $or operator has some special qualities, as specified:
When evaluating the clauses in the $or expression, MongoDB either performs a collection scan or, if all the clauses are supported by indexes, MongoDB performs index scans. That is, for MongoDB to use indexes to evaluate an $or expression, all the clauses in the $or expression must be supported by indexes. Otherwise, MongoDB will perform a collection scan.
It seems that the addition of the sort is disrupting the use of this quality, meaning you're suddenly running a collection scan.
What I recommend is using the aggregation pipeline instead of the query language. I personally find it has more stable behavior, and it might work there. If not, maybe just do the sorting in code.
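A sketch of that aggregation-pipeline variant, using the same filter, sort and projection as the original query (whether the planner actually behaves better here is worth verifying with explain()):
db.changes.aggregate([
  // same $or filter as the find()
  { $match: { $or: [
      { _id: ObjectId("60b1e8dc9d0359001bb80441") },
      { _oid: ObjectId("60b1e8dc9d0359001bb80441") }
  ] } },
  // sort and project as in the original query
  { $sort: { _id: -1 } },
  { $project: { _id: 1 } }
])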
The server can use a separate index for each branch of the $or, but in order to avoid doing an in-memory sort, the indexes used would have to find the documents in the sort order so a merge sort can be used instead.
For this query, an index on {_id: 1} would find documents matching the first branch and return them in the proper order. For the second branch, an index on {_oid: 1, _id: 1} would do the same.
If you have both of those indexes, the server should be able to find the matching documents quickly and return them without needing to perform an explicit sort.
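A minimal sketch of adding that second index and re-checking the plan (collection and field names taken from the question):
// Compound index so the _oid branch of the $or can also return documents
// already ordered by _id, allowing a merge sort instead of an in-memory sort.
db.changes.createIndex({ _oid: 1, _id: 1 })

// Confirm the winning plan no longer contains a blocking SORT stage.
db.changes.find(
  { $or: [
      { _id: ObjectId("60b1e8dc9d0359001bb80441") },
      { _oid: ObjectId("60b1e8dc9d0359001bb80441") }
  ] },
  { _id: 1 }
).sort({ _id: -1 }).explain("executionStats")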

Embed or reference in MongoDB

I am developing a small app which will store information on users, accounts and transactions. Users will have many accounts (probably fewer than 10) and accounts will have many transactions (perhaps thousands). Reading the docs seems to suggest that embedding, as follows, is the way to go...
{
  "username": "joe",
  "accounts": [
    {
      "name": "account1",
      "transactions": [
        {
          "date": "2013-08-06",
          "desc": "transaction1",
          "amount": "123.45"
        },
        {
          "date": "2013-08-07",
          "desc": "transaction2",
          "amount": "123.45"
        },
        {
          "date": "2013-08-08",
          "desc": "transaction3",
          "amount": "123.45"
        }
      ]
    },
    {
      "name": "account2",
      "transactions": [
        {
          "date": "2013-08-06",
          "desc": "transaction1",
          "amount": "123.45"
        },
        {
          "date": "2013-08-07",
          "desc": "transaction2",
          "amount": "123.45"
        },
        {
          "date": "2013-08-08",
          "desc": "transaction3",
          "amount": "123.45"
        }
      ]
    }
  ]
}
My question is: since the list of transactions will grow to perhaps thousands within the document, will the data become fragmented and slow down performance? Would I be better off having a document to store the users and the accounts, which will not grow as big, and then a separate collection to store transactions that reference the accounts? Or is there a better way?
This is not the way to go. You have a lot of transactions, and you don't know how many you will get. Instead of this, you should store them like:
{
  "username": "joe",
  "name": "account1",
  "date": "2013-08-06",
  "desc": "transaction1",
  "amount": "123.45"
},
{
  "username": "joe",
  "name": "account1",
  "date": "2013-08-07",
  "desc": "transaction2",
  "amount": "123.45"
},
{
  "username": "joe",
  "name": "account1",
  "date": "2013-08-08",
  "desc": "transaction3",
  "amount": "123.45"
},
{
  "username": "joe",
  "name": "account2",
  "date": "2013-08-06",
  "desc": "transaction1",
  "amount": "123.45"
},
{
  "username": "joe",
  "name": "account2",
  "date": "2013-08-07",
  "desc": "transaction2",
  "amount": "123.45"
},
{
  "username": "joe",
  "name": "account2",
  "date": "2013-08-08",
  "desc": "transaction3",
  "amount": "123.45"
}
In a NoSQL database like MongoDB you shouldn't be afraid to denormalise. As you noticed, I haven't even bothered with a separate collection for users. If your users have more information that you will have to show with each transaction, you might want to consider including that information as well.
If you need to search on, or select by, any of those fields, then don't forget to create indexes, for example:
// look up all transactions for an account
db.transactions.ensureIndex( { username: 1, name: 1 } );
and:
// look up all transactions for "2013-08-06"
db.transactions.ensureIndex( { date: 1 } );
etc.
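For example, the lookups those indexes are meant to support would look roughly like this (values taken from the sample documents above):
// all transactions for one account (uses the { username, name } index)
db.transactions.find({ username: "joe", name: "account1" });

// all transactions on a given date (uses the { date } index)
db.transactions.find({ date: "2013-08-06" });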
There are a lot of advantages to duplicating data. With a schema like the above, you can have as many transactions as you like and you will never get any fragmentation, as documents never change - you only add new ones. This also increases write performance and makes it a lot easier to do other queries.
Alternative
An alternative might be to store username/name in a separate collection and only use its ID with the transactions:
Accounts:
{
  "username": "joe",
  "name": "account1",
  "account_id": 42
}
Transactions:
{
  "account_id": 42,
  "date": "2013-08-06",
  "desc": "transaction1",
  "amount": "123.45"
}
This creates smaller transaction documents, but it does mean you have to do two queries to also get user information.
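Those two queries would look something like this (a sketch; the "accounts" collection name is an assumption):
// 1) resolve the account_id from the accounts collection
var account = db.accounts.findOne({ username: "joe", name: "account1" });

// 2) fetch that account's transactions
db.transactions.find({ account_id: account.account_id });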
Since the list of transactions will grow to perhaps 1000's within the document will the data become fragmented and slow the performance.
Almost certainly. In fact, I would be surprised if, over a period of years, transactions only reached into the thousands rather than tens of thousands for a single account.
Add to that the level of fragmentation you will witness from the consistently growing document over time, and you could end up with serious problems, if not running out of root document space (the limit being 16 MB). In fact, given that you store all accounts for a person under one document, I would say you run a high risk of filling up a document within about 2 years.
I would reference this relationship.
I would separate the transactions into a different collection. It seems like the data and update patterns for users and transactions are quite different. If transactions are constantly added to the user document and cause it to grow all the time, it will be moved around a lot in the data files. So yes, it has a performance impact (fragmentation, more I/O, more work for Mongo).
Also, array operation performance sometimes degrades on big arrays in documents, so holding thousands of objects in an array might not be a good idea (depending on what you do with it).
You should consider creating indexes, using the ensureIndex() function; that should reduce the risk of performance issues.
The earlier you add these, the better you'll understand how the collection should be structured.
I haven't been using Mongo for too long, but I haven't come across any issues (not yet anyway) of data being fragmented.
Edit: If you intend to use this for multi-object commits, Mongo doesn't support rollbacks. You need to use the 64-bit version to allow journaling and make transactions durable.

ElasticSearch with MongoDB doesn't index big objects

I created an ES index (with the MongoDB river plugin) with the following configuration:
{
  "type": "mongodb",
  "mongodb": {
    "db": "mydatabase",
    "collection": "Users"
  },
  "index": {
    "name": "users",
    "type": "user"
  }
}
When I insert a simple object like:
{
  "name": "Joe",
  "surname": "Black"
}
Everything works without a problem (I can see the data using the ES Head web interface).
But when I insert a bigger object, it doesn't get indexed:
{
  "object": {
    "text": "Let's do it again!",
    "boolTest": false
  },
  "type": "coolType",
  "tags": [
    ""
  ],
  "subObject1": {
    "count": 0,
    "last3": [],
    "array": []
  },
  "subObject2": {
    "count": 0,
    "last3": [],
    "array": []
  },
  "subObject3": {
    "count": 0,
    "last3": [],
    "array": []
  },
  "usrID": "5141a5a4d8f3a79c09000001",
  "created": Date(1363527664000),
  "lastUpdate": Date(1363527664000)
}
Where could the problem be, please?
Thank you for your help!
EDIT: This is the error from the ES console:
org.elasticsearch.index.mapper.MapperParsingException: object mapping for [stream] tried to parse as object, but got EOF, has a concrete value been provided to it?
    at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:457)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:486)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:430)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:318)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:157)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:431)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-03-20 10:35:05,697][WARN ][org.elasticsearch.river.mongodb.MongoDBRiver$Indexer] failed to execute
failure in bulk execution:
[0]: index [stream], type [stream], id [514982c9b7f3bfbdb488ca81], message [MapperParsingException[object mapping for [stream] tried to parse as object, but got EOF, has a concrete value been provided to it?]]
[2013-03-20 10:35:05,698][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver$Indexer] Indexed 1 documents, 1 insertions 0, updates, 0 deletions, 0 documents per second
Which version of the MongoDB river are you using?
Please look at issue #26 [1]. It contains examples of indexing large JSON documents without issues.
If you can still reproduce the issue, please provide more details: river settings, MongoDB (version, specific settings), Elasticsearch (version, specific settings).
https://github.com/richardwilly98/elasticsearch-river-mongodb/issues/26