I'm trying to optimize a MongoDB query. I have an index on from_account_id, to_account_id, and created_at, but the following query does a full collection scan.
{
"ts": {
"$date": "2012-03-18T20:29:27.038Z"
},
"op": "query",
"ns": "heroku_app2281692.transactions",
"query": {
"$query": {
"$or": [
{
"from_account_id": {
"$oid": "4f55968921fcaf0001000005"
}
},
{
"to_account_id": {
"$oid": "4f55968921fcaf0001000005"
}
}
]
},
"$orderby": {
"created_at": -1
}
},
"ntoreturn": 25,
"nscanned": 2643718,
"responseLength": 20,
"millis": 10499,
"client": "10.64.141.77",
"user": "heroku_app2281692"
}
If I drop the $or and query only from_account_id or only to_account_id with the same sort, it's fast.
What's the best way to get the desired effect? Should I keep the account ids (both from and to) in a single array field? Or perhaps there is a better way. Thanks!
Unfortunately, as you have discovered, an $or clause can make life difficult for the optimizer.
So, to work around this you have a couple of options (both sketched below). Among them:
Divide your query into two and manually merge the results.
Change your data model to allow efficient querying. For example, you might add a "referenced_accounts" field that is an array of all the accounts referenced in the transaction.
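For instance, a minimal sketch of both options in the shell, using the collection and field names from the profile output above (the merge logic and the referenced_accounts index are illustrative assumptions, not tested code):
// Option 1 (sketch): run the two halves separately so each can use its own
// index, then merge, de-duplicate and re-sort the top 25 on the client.
var accountId = ObjectId("4f55968921fcaf0001000005");

var fromTx = db.transactions.find({ from_account_id: accountId })
                            .sort({ created_at: -1 }).limit(25).toArray();
var toTx   = db.transactions.find({ to_account_id: accountId })
                            .sort({ created_at: -1 }).limit(25).toArray();

var seen = {};
var merged = fromTx.concat(toTx).filter(function (tx) {
    if (seen[tx._id]) return false;   // drop duplicates by _id
    seen[tx._id] = true;
    return true;
}).sort(function (a, b) {
    return b.created_at - a.created_at;   // newest first
}).slice(0, 25);

// Option 2 (sketch): with a "referenced_accounts" array field and an index on
// { referenced_accounts: 1, created_at: -1 }, a single indexed query suffices:
// db.transactions.find({ referenced_accounts: accountId })
//                .sort({ created_at: -1 }).limit(25);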
I have a collection in MongoDB containing search history of a user where each document is stored like:
"_id": "user1"
searchHistory: {
"product1": [
{
"timestamp": 1623482432,
"query": {
"query": "chocolate",
"qty": 2
}
},
{
"timestamp": 1623481234,
"query": {
"query": "lindor",
"qty": 4
}
},
],
"product2": [
{
"timestamp": 1623473622,
"query": {
"query": "table",
"qty": 1
}
},
{
"timestamp": 1623438232,
"query": {
"query": "ike",
"qty": 1
}
},
]
    }
}
Here the _id of the document acts like a foreign key to the user document in another collection.
I have a backend running on Node.js, and this function is used to store a new search history entry in the record.
exports.updateUserSearchCount = function (userId, productId, searchDetails) {
    // Build the dynamic key "searchHistory.<productId>" for the $addToSet update
    let addToSetData = {}
    let key = `searchHistory.${productId}`
    addToSetData[key] = { "timestamp": new Date().getTime(), "query": searchDetails }

    // Upsert the user's document, adding the new search entry under the product key
    return client.db("mydb").collection("userSearchHistory")
        .updateOne({ "_id": userId }, { "$addToSet": addToSetData }, { upsert: true })
}
Now, I want to get the search history of a user based on the query text alone, using db.find().
I want something like this:
db.find({"_id": "user1", "searchHistory.somewildcard.query": "some query"})
I need a wildcard to replace ".somewildcard." so the search runs across all products searched.
I saw a suggestion that we should store document like:
"_id": "user1"
searchHistory: [
{
"key": "product1",
"value": [
{
"timestamp": 1623482432,
"query": {
"query": "chocolate",
"qty": 2
}
}
]
}
    ]
}
However, if I store the document like this, then adding search history to an existing document becomes a tedious and confusing task.
What should I do?
It's generally a bad idea to store values as keys, for exactly the reason you're facing: it heavily limits how you can query that field. The obvious trade-off is that it makes updates much easier.
I personally recommend you do not save these searches in nested form at all; this will cause scaling issues quite quickly. Even assuming these fields are indexed, you will start seeing performance issues once the arrays get too large (a few hundred searches).
So my personal recommendation is for you to save it in a new collection like so:
{
"user_id": "1",
"key": "product1",
"timestamp": 1623482432,
"query": {
"query": "chocolate",
"qty": 2
}
}
Now querying a specific user, a specific product, or even a query substring is easily supported by creating some basic indexes. An "update" in this case is just an insert of a new document, which is also much faster.
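As a rough sketch of what that could look like (the collection name "userSearches" and the index choices here are assumptions, not something from your code):
// Indexes supporting the common lookups (sketch, adjust to your access patterns)
db.userSearches.createIndex({ "user_id": 1, "timestamp": -1 })
db.userSearches.createIndex({ "user_id": 1, "key": 1, "timestamp": -1 })

// An "update" is now just an insert of a new search document
db.userSearches.insertOne({
    "user_id": "1",
    "key": "product1",
    "timestamp": new Date().getTime(),
    "query": { "query": "chocolate", "qty": 2 }
})

// All searches by a user for a given query text, newest first
db.userSearches.find({ "user_id": "1", "query.query": "chocolate" }).sort({ "timestamp": -1 })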
If you still prefer to keep the nested structure, then I recommend you switch to the array-of-{key, value} structure you posted. As you mentioned, updates become slightly more tedious, but you can still do them quite easily, using arrayFilters to update a specific element or just $push to add a new search.
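A rough sketch of both update styles against that structure (untested; the query values and product key are just examples):
// Append a new search to an existing product entry using the positional operator
db.userSearchHistory.updateOne(
    { "_id": "user1", "searchHistory.key": "product1" },
    { "$push": { "searchHistory.$.value": {
        "timestamp": new Date().getTime(),
        "query": { "query": "truffles", "qty": 1 }
    } } }
)

// Or target the product entry with arrayFilters instead
db.userSearchHistory.updateOne(
    { "_id": "user1" },
    { "$push": { "searchHistory.$[p].value": {
        "timestamp": new Date().getTime(),
        "query": { "query": "truffles", "qty": 1 }
    } } },
    { "arrayFilters": [ { "p.key": "product1" } ] }
)

// And the "wildcard" lookup becomes an ordinary multikey query
db.userSearchHistory.find({ "_id": "user1", "searchHistory.value.query.query": "chocolate" })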
I have a collection containing objects with the following structure
{
"dep_id": "some_id",
"departament": "dep name",
"employees": [{
"name": "emp1",
"age": 31
},{
"name": "emp2",
"age": 35
}]
}
I would like to sort and save the array of employees for the object with id "some_id", by employees.age, descending. The best outcome would be to do this atomically using mongodb's query language. Is this possible?
If not, how can I rearrange the subdocuments without affecting the parent's other data or the data of the subdocuments? In case I have to download the data from the database and save back the sorted array of children, what would happen if something else performs an update to one of the children or children are added or removed in the meantime?
In the end, the data should be persisted to the database like this:
{
"dep_id": "some_id",
"departament": "dep name",
"employees": [{
"name": "emp2",
"age": 35
},{
"name": "emp1",
"age": 31
}]
}
The best way to do this is to apply the $sort modifier as you add items to the array. As you say in your comment, "My actual objects have a 'rank' and 'created_at'", which really should have been in the question itself rather than a contrived example.
So for "sorting" by multiple properties, the following reference would adjust like this:
db.collection.update(
{ },
{ "$push": { "employees": { "$each": [], "$sort": { "rank": -1, "created_at": -1 } } } },
{ "multi": true }
)
But to update all the data you presently have, as shown in the question, you would sort on "age" with:
db.collection.update(
{ },
{ "$push": { "employees": { "$each": [], "$sort": { "age": -1 } } } },
{ "multi": true }
)
This oddly uses $push to "modify" an array without adding anything: the empty $each list means nothing new is appended, yet the $sort modifier is still applied to the array in place, re-ordering it.
Of course this would then explain how "new" updates to the array should be written in order to apply that $sort and ensure that the "largest age" is always "first" in the array:
db.collection.update(
{ "dep_id": "some_id" },
{ "$push": {
"employees": {
"$each": [{ "name": "emp": 3, "age": 32 }],
"$sort": { "age": -1 }
}
}}
)
So what happens here is as you add the new entry to the array on update, the $sort modifier is applied and re-positions the new element between the two existing ones since that is where it would sort to.
This is a common pattern with MongoDB and is typically used in combination with the $slice modifier in order to keep arrays at a "maximum" length as new items are added, yet retain "ordered" results. And quite often "ranking" is the exact usage.
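A minimal sketch of that combined pattern, capping the array at an arbitrary 10 entries (the new employee here is made up):
db.collection.update(
    { "dep_id": "some_id" },
    { "$push": {
        "employees": {
            "$each": [{ "name": "emp4", "age": 40 }],
            "$sort": { "age": -1 },
            "$slice": 10    // keep only the first 10 after sorting (highest ages)
        }
    }}
)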
So overall, you can actually "update" your existing data and re-order it with "one simple atomic statement". No looping or collection renaming required. Furthermore, you now have a simple atomic method to "update" the data and maintain that order as you add new array items, or remove them.
In order to get what you want you can use the following query:
db.collection.aggregate({
$unwind: "$employees" // flatten employees array
}, {
$sort: {
"employees.name": -1 // sort all documents by employee name (descending)
}
}, {
$group: { // restore the previous structure
_id: "$_id",
"dep_id": {
$first: "$dep_id"
},
"departament": {
$first: "$departament"
},
"employees": {
$push: "$employees"
},
}
}, {
$out: "output" // write everything out to a separate collection
})
After this step you would want to drop your source collection and rename the "output" collection to match the source collection's name.
This solution will, however, not deal with the concurrency issue. So you should remove write access from the collection first so nobody modifies it during the process and then restore it once you're done with the migration.
You could alternatively query all the data first, sort the employees array on the client side, and then use either single update queries or, faster but more complicated, a bulk write operation with all the individual update calls to update the existing documents. Here, you could use the entire document that you initially read as the filter for the update operation. If an individual update does not modify any document, you know straight away that some other change must have modified the document you read before; those cases you would need to retry later (or straight away, until the update does modify a document).
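A rough sketch of that last approach in the shell (untested; the retry handling is only hinted at):
// Read, sort on the client, and write back using the document as originally
// read as the filter, so a concurrent modification makes the update match nothing.
var ops = [];
db.collection.find().forEach(function (doc) {
    var sorted = doc.employees.slice().sort(function (a, b) {
        return b.age - a.age;                    // age descending
    });
    ops.push({
        updateOne: {
            filter: doc,                         // full document as read
            update: { "$set": { "employees": sorted } }
        }
    });
});

var result = db.collection.bulkWrite(ops);
// If result.matchedCount < ops.length, some documents were changed concurrently;
// re-read those and retry their updates.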
I'm trying to get MongoDB to aggregate for me over an array with different key-value pairs, without knowing the keys (just a simple sum would be OK).
Example docs:
{data: [{a: 3}, {b: 7}]}
{data: [{a: 5}, {c: 12}, {f: 25}]}
{data: [{f: 1}]}
{data: []}
So basically each doc (or its array, really) can have zero or many entries, and I don't know the keys for those objects, but I want to sum and average the values across those keys.
Right now I'm just loading a bunch of docs and doing it myself in Node, but I'd like to offload that work to MongoDB.
I know I can unwind those first, but how do I proceed from there? How do I sum/avg/min/max the values if I don't know the keys?
If you do not know the keys or cannot make a reasonable educated guess, then you are basically stuck going any further with the aggregation framework. You could supply "all of the keys" for consideration, but I suspect your actual data looks more like this:
{ "data": [{ "film": 10 }, { "televsion": 5 },{ "boardGames": 1 }] }
So there would be little point here in finding out all the "key names" and then throwing them at an aggregation statement.
For the record though, "this is why you do not structure your data storage like this". Information like "film" here should not be used as a "key" name, because it is useful "data" that could be searched upon and most importantly "indexed" in a database system.
So your data should really look like this:
{
"data": [
{ "type": "film", "value": 10 },
{ "type": "televsion", "valule": 5 },
{ "type": "boardGames", "value": 1 }
]
}
Then the aggregation statement is simple, as are many other things:
db.collection.aggregate([
{ "$unwind": "$data" },
{ "$group": {
"_id": null,
"sum": { "$sum": "$data.value" },
"avg": { "$avg": "$data.value" }
}}
])
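And because the type is now ordinary data rather than a key name, it can also be indexed and queried directly, for instance:
db.collection.createIndex({ "data.type": 1 })
db.collection.find({ "data.type": "film" })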
But since the key names are constantly changing between documents and do not have a uniform structure, you need JavaScript processing on the server to traverse the keys, and that means mapReduce:
db.collection.mapReduce(
function() {
this.data.forEach(function(data) {
Object.keys(data).forEach(function(key) {
emit(null,data[key]); // emit the value regardless of key name
});
});
},
function(key,values) {
return Array.sum(values); // Just summing for example
},
{ "out": { "inline": 1 } }
)
And of course the JavaScript execution here will work much more slowly than the native coded operators available to the aggregation framework.
So this should be an object lesson in why you do not use "data" as "key names" when storing data in a database. The aggregation framework works with standard structures and is fast; falling back to JavaScript processing is more flexible, but the cost is mostly in speed and other features.
I'm trying to sort this in MongoDB with mongojs on a find():
{
"songs": {
"bNppHOYIgRE": {
"id": "bNppHOYIgRE",
"title": "Kygo - ID (Ultra Music Festival Anthem)",
"votes": 1,
"added": 1428514707,
"guids": [
"MzM3NTUx"
]
},
"izJzdDPH9yw": {
"id": "izJzdDPH9yw",
"title": "Benjamin Francis Leftwich - Atlas Hands (Samuraii Edit)",
"votes": 1,
"added": 1428514740,
"guids": [
"MzM3NTUx"
]
},
"Yifz3X_i-F8": {
"id": "Yifz3X_i-F8",
"title": "M83 - Wait (Kygo Remix)",
"votes": 0,
"added": 1428494338,
"guids": []
},
"nDopn_p2wk4": {
"id": "nDopn_p2wk4",
"title": "Syn Cole - Miami 82 (Kygo Remix)",
"votes": 0,
"added": 1428494993,
"guids": []
}
}
}
and I want to sort the entries in songs by votes ascending and added descending.
I have tried
db.collection(coll).find().sort({votes:1}, function(err, docs) {});
but that doesn't work.
If this is an operation that you're going to be doing often, I would strongly consider changing your schema. If you make songs an array instead of a map, then you can perform this query using aggregation.
db.coll.aggregate([{ "$unwind": "$songs" }, { "$sort": { "songs.votes": 1, "songs.added": -1 }}]);
And if you put each of these songs in a separate songs collection, then you could perform the query with a simple find() and sort().
db.songs.find().sort({ "votes": 1, "added": -1 });
With your current schema, however, all of this logic would need to live in your application, and it would get messy. A possible solution is to fetch the documents and, while iterating through the cursor, iterate through the keys of each document, adding the subdocuments to an array. Once you have all of the subdocuments in the array, sort it by votes and added.
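A rough sketch of that client-side approach with mongojs (untested; the collection name is whatever coll holds in your code):
db.collection(coll).find({}, function (err, docs) {
    if (err) throw err;
    docs.forEach(function (doc) {
        // Flatten the songs map into an array of its values...
        var songs = Object.keys(doc.songs).map(function (key) {
            return doc.songs[key];
        });
        // ...then sort by votes ascending and, within equal votes, added descending.
        songs.sort(function (a, b) {
            return (a.votes - b.votes) || (b.added - a.added);
        });
        // songs is now the ordered list for this document
    });
});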
It is possible, but unnecessarily complex. And, of course, you wouldn't be able to take advantage of indexes, which would have an impact on your performance.
You already include the key inside the subdocument, so I would really recommend you reconsider your schema.
I have a collection with per-country values stored like this. I want to sum the values across the countries.
{
"_id": ObjectId("54cd5e7804f3b06c3c247428"),
"country_json": {
"AE": NumberLong("13"),
"RU": NumberLong("16"),
"BA": NumberLong("10"),
...
}
},
{
"_id": ObjectId("54cd5e7804f3b06c3c247429"),
"country_json": {
"RU": NumberLong("12"),
"ES": NumberLong("28"),
"DE": NumberLong("16"),
"AU": NumberLong("44"),
...
}
}
How to sum the values of countries to get a result like this?
{
"AE": 13,
"RU": 28,
..
}
This can simply be done using aggregation
> db.test.aggregate([
{$project: {
RU: "$country_json.RU",
AE: "$country_json.AE",
BA: "$country_json.BA"
}},
{$group: {
_id: null,
RU: {$sum: "$RU"},
AE: {$sum: "$AE"},
BA: {$sum: "$BA"}
}}
])
Output:
{
"_id" : null,
"RU" : NumberLong(28),
"AE" : NumberLong(13),
"BA" : NumberLong(10)
}
This isn't a very good document structure if you intend to aggregate statistics across the "keys" like this. Not really a fan of "data as key names" anyway, but the main point is it does not "play well" with many MongoDB query forms due to the key names being different everywhere.
Particularly with the aggregation framework, a better form to store the data is within an actual array, like so:
{
    "_id": ObjectId("54cd5e7804f3b06c3c247428"),
    "countries": [
        { "key": "AE", "value": NumberLong("13") },
        { "key": "RU", "value": NumberLong("16") },
        { "key": "BA", "value": NumberLong("10") }
    ]
}
With that you can simply use the aggregation operations:
db.collection.aggregate([
{ "$unwind": "$countries" },
{ "$group": {
"_id": "$countries.key",
"value": { "$sum": "$countries.value" }
}}
])
Which would give you results like:
{ "_id": "AE", "value": NumberLong(13) },
{ "_id": "RU", "value": NumberLong(28) }
That kind of structure does "play well" with the aggregation framework and other MongoDB query patterns because it really is how it's "expected" to be done when you want to use the data in this way.
Without changing the structure of the document you are forced to use JavaScript evaluation methods in order to traverse the keys of your documents because that is the only way to do it with MongoDB:
db.collection.mapReduce(
function() {
var country = this.country_json;
Object.keys(country).forEach(function(key) {
emit( key, country[key] );
});
},
function(key,values) {
return values.reduce(function(p,v) { return NumberLong(p+v) });
},
{ "out": { "inline": 1 } }
)
And that would produce exactly the same result as shown from the aggregation example output, but working with the current document structure. Of course, the use of JavaScript evaluation is not as efficient as the native methods used by the aggregation framework so it's not going to perform as well.
Also note the possible problems here with "large values" in your NumberLong fields: the main reason they are represented that way to JavaScript is that JavaScript itself has limitations on the size of value that can be represented. Likely your values are trivial and simply "cast" that way, but for large enough numbers, the math will simply fail.
So it's generally a good idea to consider changing how you structure this data to make things easier. As a final note, the sort of output you were expecting, with all the keys in a single document, is similarly counter-intuitive, as again it requires traversing the keys of a "hash/map" rather than using the natural iterators of arrays or cursors.