Add field (boolean) to returned objects, when a specified value is in array, without including the array itself - mongodb

I have a mongoose Schema that looks like this:
var AnswerSchema = new Schema({
    author: {type: Schema.Types.ObjectId, ref: 'User'},
    likes: [{type: Schema.Types.ObjectId, ref: 'User'}],
    text: String,
    ....
});
and I have an API endpoint that allows getting the answers posted by a specific user (which excludes the likes array). What I want to do is add a field (with a "true/false" value, for example) to the answer(s) returned by the mongoose query, indicating whether a specific user_id is (or is not) in the likes array of an answer. This way, I can show the user requesting the answers whether he has already liked an answer or not.
How could I achieve this in an optimised way? I would like to avoid fetching the likes array, looking through it myself in my JavaScript code to check whether the specified userId is present, and then removing it before sending the answers back to the client... because it sounds wrong to fetch all this data from MongoDB into my Node app for nothing. I'm sure there is a better way using aggregation, but I have never used it and am a bit confused about how to do it right.
The database might grow very large so it must be quick and optimised.

One approach you could take is via the aggregation framework, which allows you to add/modify fields via the $project pipeline, applying a host of logical operators that work in concert to achieve the desired end result. For instance, in your case this would translate to:
Answer.aggregate()
    .project({
        "author": 1,
        "matched": {
            "$eq": [
                {
                    "$size": {
                        "$ifNull": [
                            { "$setIntersection": [ "$likes", [userId] ] },
                            []
                        ]
                    }
                },
                1
            ]
        }
    })
    .exec(function (err, docs){
        console.log(docs);
    })
As an example to test in the mongo shell, let's insert a few test documents into a test collection:
db.test.insert([
    { "likes": [1, 2, 3] },
    { "likes": [3, 2] },
    { "likes": null },
    { "another": "foo" }
])
Running the above aggregation pipeline on the test collection to get the boolean field for userId = 2:
var userId = 2;
db.test.aggregate([
    {
        "$project": {
            "matched": {
                "$eq": [
                    {
                        "$size": {
                            "$ifNull": [
                                { "$setIntersection": [ "$likes", [userId] ] },
                                []
                            ]
                        }
                    },
                    1
                ]
            }
        }
    }
])
gives the following output:
{
    "result" : [
        { "_id" : ObjectId("564f487c7d3c273d063cd21e"), "matched" : true },
        { "_id" : ObjectId("564f487c7d3c273d063cd21f"), "matched" : true },
        { "_id" : ObjectId("564f487c7d3c273d063cd220"), "matched" : false },
        { "_id" : ObjectId("564f487c7d3c273d063cd221"), "matched" : false }
    ],
    "ok" : 1
}
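To tie this back to your endpoint, you would typically prepend a $match stage so that only the requested author's answers are scanned, and cast the incoming string ids to ObjectId so that $setIntersection can actually match them. A minimal mongoose sketch (the author filter, the projected field list and the ObjectId casts are assumptions based on the question, not part of the answer above):
var mongoose = require('mongoose');

Answer.aggregate()
    .match({ "author": new mongoose.Types.ObjectId(authorId) })
    .project({
        "author": 1,
        "text": 1,
        // "likes" is simply omitted here, so the array itself is never returned
        "liked": {
            "$eq": [
                {
                    "$size": {
                        "$ifNull": [
                            { "$setIntersection": [ "$likes", [ new mongoose.Types.ObjectId(userId) ] ] },
                            []
                        ]
                    }
                },
                1
            ]
        }
    })
    .exec(function (err, docs) {
        console.log(docs);
    });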

Related

Conditionally update/upsert embedded array with findOneAndUpdate in MongoDB

I have a collection in the following format:
[
{
"postId": ObjectId("62dffd0acb17483cf015375f"),
"userId": ObjectId("62dff9584f5b702d61c81c3c"),
"state": [
{
"id": ObjectId("62dffc49cb17483cf0153220"),
"notes": "these are my custom notes!",
"lvl": 3,
},
{
"id": ObjectId("62dffc49cb17483cf0153221"),
"notes": "hello again",
"lvl": 0,
},
]
},
]
My goal is to be able to update and add an element in this array in the following situation:
If the ID of the new element is not in the state array, push the new element in the array
If the ID of the new element is in the state array and its lvl field is 0, update that element with the new information
If the ID of the new element exists in the array, and its lvl field is not 0, then nothing should happen. I will throw an error by seeing that no documents were matched.
Basically, to accomplish this I was thinking about using findOneAndUpdate with upsert, but I am not sure how to tell the query to update the state if lvl is 0, or to do nothing if it is greater than 0, when the match is found.
For solving (1) this is what I was able to come up with:
db.collection.findOneAndUpdate(
{
"postId": ObjectId("62dffd0acb17483cf015375f"),
"userId": ObjectId("62dff9584f5b702d61c81c3c"),
"state.id": {
"$ne": ObjectId("62dffc49cb17483cf0153222"),
},
},
{
"$push": {"state": {"id": ObjectId("62dffc49cb17483cf0153222"), "lvl": 1}}
},
{
"new": true,
"upsert": true,
}
)
What is the correct way to approach this issue? Should I just split the query into multiple ones?
Edit: as of now I have done this in more than one query (one to fetch the document, then I iterate over its state array to check if the ID exists in it, and then I perform (1), (2) and (3) in a normal if-else clause)
If the ID of the new element exists in the array, and its lvl field is not 0, then nothing should happen. I will throw an error by seeing that no documents were matched.
First, an FYI about upsert:
upsert is not possible within a nested array
upsert will not add new elements to the array
upsert can only add a whole new document containing the new element (illustrated below)
if you want to throw an error when the record is not present, then you don't need upsert
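To make that third point concrete, here is a rough shell illustration of the pitfall (ids reused from your example, not something you would actually want to run): if the filter below does not match because the id is already present, upsert builds a brand-new document from the equality parts of the filter (postId, userId) and applies the $push to it, instead of appending to the existing document's array.
db.collection.findOneAndUpdate(
    {
        postId: ObjectId("62dffd0acb17483cf015375f"),
        userId: ObjectId("62dff9584f5b702d61c81c3c"),
        "state.id": { $ne: ObjectId("62dffc49cb17483cf0153220") } // id already present -> no match
    },
    { $push: { state: { id: ObjectId("62dffc49cb17483cf0153220"), lvl: 1 } } },
    { upsert: true }
)
// result: a second document for the same postId/userId, with a one-element state array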
Second, you can achieve this in one query by using an update with an aggregation pipeline, available since MongoDB 4.2.
Note: this query will return the updated document, but there will be no flag or other clue telling you which of the three situations it handled; you have to check that in your client-side code from the query response.
match conditions on the postId and userId fields only
update the state field inside a $set stage
check whether the provided id is already present in state's id values:
if true, $map iterates over the state array
for each element, check whether the id matches and lvl is 0:
if true, $mergeObjects merges the current element with the new information
if false, the element is left unchanged
if false, append the new element to the state array with the $concatArrays operator
db.collection.findOneAndUpdate(
{
postId: ObjectId("62dffd0acb17483cf015375f"),
userId: ObjectId("62dff9584f5b702d61c81c3c")
},
[{
$set: {
state: {
$cond: [
{ $in: [ObjectId("62dffc49cb17483cf0153221"), "$state.id"] },
{
$map: {
input: "$state",
in: {
$cond: [
{
$and: [
{ $eq: ["$$this.id", ObjectId("62dffc49cb17483cf0153221")] },
{ $eq: ["$$this.lvl", 0] }
]
},
{
$mergeObjects: [
"$$this",
{
// update your new fields here
"notes": "new note"
}
]
},
"$$this"
]
}
}
},
{
$concatArrays: [
"$state",
[
// add new element
{
"id": ObjectId("62dffc49cb17483cf0153221"),
"lvl": 1
}
]
]
}
]
}
}
}],
{ returnNewDocument: true }
)
Third, you can execute 2 update queries:
The first query handles the case where the element is not present: it pushes a new element into state.
let response = db.collection.findOneAndUpdate({
postId: ObjectId("62dffd0acb17483cf015375f"),
userId: ObjectId("62dff9584f5b702d61c81c3c"),
"state.id": { $ne: ObjectId("62dffc49cb17483cf0153221") }
},
{
$push: {
state: {
id: ObjectId("62dffc49cb17483cf0153221"),
lvl: 1
}
}
},
{
returnNewDocument: true
})
The second query executes only if the response of the above query is null.
It checks the state id and lvl: 0 conditions; if they are fulfilled it applies the field updates, and it returns null if no matching document is found.
You can throw an error if this returns null, otherwise work with the response data and return success.
if (response == null) {
response = db.collection.findOneAndUpdate({
postId: ObjectId("62dffd0acb17483cf015375f"),
userId: ObjectId("62dff9584f5b702d61c81c3c"),
state: {
$elemMatch: {
id: ObjectId("62dffc49cb17483cf0153221"),
lvl: 0
}
}
},
{
$set: {
// add your update fields
"state.$.notes": "new note"
}
},
{
returnNewDocument: true
});
// not found and throw an error
if (response == null) {
return {
// throw error;
};
}
}
// do stuff with "response" data and return result
return {
// success;
};
Note: Of the options above, I would recommend the one explained in the third part: executing 2 update queries.
What you're trying to do became possible with the introduction of pipelined updates. Here is how I would do it, using $concatArrays to concatenate the existing state array with the new input, and $ifNull to initialise an empty array in case of an upsert, like so:
const inputObj = {
"id": ObjectId("62dffc49cb17483cf0153222"),
"lvl": 1
};
db.collection.findOneAndUpdate({
"postId": ObjectId("62dffd0acb17483cf015375f"),
"userId": ObjectId("62dff9584f5b702d61c81c3c")
},
[
{
$set: {
state: {
$ifNull: [
"$state",
[]
]
},
}
},
{
$set: {
state: {
$concatArrays: [
{
$map: {
input: "$state",
in: {
$mergeObjects: [
{
$cond: [
{
$and: [
{
$in: [
inputObj.id,
"$state.id"
]
},
{
$eq: [
inputObj.lvl,
0
]
}
]
},
inputObj,
{},
]
},
"$$this"
]
}
}
},
{
$cond: [
{
$not: {
$in: [
inputObj.id,
"$state.id"
]
}
},
[
// the id is not in the array yet: append the new element
inputObj
],
[]
]
}
]
}
}
}
],
{
"new": true,
"upsert": true
})
Prior to version 4.2 and the introduction of this feature, what you're trying to do was not possible with the plain update syntax. If you are using an older version, you'd have to split this into 2 separate calls: first a findOne to see whether the document exists, and only then an update based on that. Obviously this can cause consistency issues if you have a high update volume.
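For completeness, a rough sketch of that pre-4.2 fallback with the plain Node.js driver (the collection name "posts" and the variable names are assumptions based on the question; the read-then-write gap is exactly where the concurrency problems come from):
const existing = await db.collection("posts").findOne({
    postId: postId,
    userId: userId,
    "state.id": inputObj.id
});

if (!existing) {
    // case 1: the id is not in the array yet, push it
    await db.collection("posts").updateOne(
        { postId: postId, userId: userId },
        { $push: { state: inputObj } },
        { upsert: true }
    );
} else {
    // case 2/3: the id exists; only update it when the stored lvl is 0
    const res = await db.collection("posts").updateOne(
        { postId: postId, userId: userId, state: { $elemMatch: { id: inputObj.id, lvl: 0 } } },
        { $set: { "state.$.notes": inputObj.notes } }
    );
    if (res.matchedCount === 0) {
        throw new Error("element already exists with lvl != 0");
    }
}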

How to update a document with a reference to its previous state?

Is it possible to reference the root document during an update operation such that a document like this:
{"name":"foo","value":1}
can be updated with new values and have the full (previous) document pushed into a new field (creating an update history):
{"name":"bar","value":2,"previous":[{"name:"foo","value":1}]}
And so on..
{"name":"baz","value":3,"previous":[{"name:"foo","value":1},{"name:"bar","value":2}]}
I figure I'll have to use the new aggregate set operator in Mongo 4.2, but how can I achieve this?
Ideally I don't want to have to reference each field explicitly. I'd prefer to push the root document (minus the _id and previous fields) without knowing what the other fields are.
In addition to the new $set operator, what makes your use case much easier with Mongo 4.2 is the fact that db.collection.update() now accepts an aggregation pipeline, finally allowing the update of a field based on its current value:
// { name: "foo", value: 1 }
db.collection.update(
{},
[{ $set: {
previous: {
$ifNull: [
{ $concatArrays: [ "$previous", [{ name: "$name", value: "$value" }] ] },
[ { name: "$name", value: "$value" } ]
]
},
name: "bar",
value: 2
}}],
{ multi: true }
)
// { name: "bar", value: 2, previous: [{ name: "foo", value: 1 }] }
// and if applied again:
// { name: "baz", value: 3, previous: [{ name: "foo", value: 1 }, { name: "bar", value: 2 } ] }
The first part {} is the match query, filtering which documents to update (in our case probably all documents).
The second part [{ $set: { previous: { $ifNull: [ ... ] } } }] is the update aggregation pipeline (note the square brackets signifying the use of an aggregation pipeline):
$set is a new aggregation operator and an alias of $addFields. It's used to add/replace a new field (in our case "previous") with values from the current document.
Using an $ifNull check, we can determine whether "previous" already exists in the document or not (this is not the case for the first update).
If "previous" doesn't exist (is null), then we have to create it and set it with an array of one element: the current document: [ { name: "$name", value: "$value" } ].
If "previous" already exist, then we concatenate ($concatArrays) the existing array with the current document.
Don't forget { multi: true }, otherwise only the first matching document will be updated.
As you mentioned "root" in your question and if your schema is not the same for all documents (if you can't tell which fields should be used and pushed in the "previous" array), then you can use the $$ROOT variable which represents the current document and filter out the "previous" array. In this case, replace both { name: "$name", value: "$value" } from the previous query with:
{ $arrayToObject: { $filter: {
input: { $objectToArray: "$$ROOT" },
as: "root",
cond: { $ne: [ "$$root.k", "previous" ] }
}}}
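Putting both pieces together, a sketch of the full update (shell syntax; the helper variable just avoids repeating the expression, and you could add "_id" next to "previous" in the filter condition if you also want it kept out of the history):
var snapshot = { $arrayToObject: { $filter: {
    input: { $objectToArray: "$$ROOT" },
    as: "root",
    cond: { $ne: [ "$$root.k", "previous" ] }
}}};

db.collection.update(
    {},
    [{ $set: {
        previous: {
            $ifNull: [
                { $concatArrays: [ "$previous", [ snapshot ] ] },
                [ snapshot ]
            ]
        },
        name: "bar",
        value: 2
    }}],
    { multi: true }
)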
Imho, you are making your life far more complex than necessary with such a complicated data model.
Think of what you really want to achieve. You want to correlate different values in one or more interconnected series which are written to the collection consecutively.
Storing this in one document comes with some strings attached. While it seems to be reasonable in the beginning, let me name a few:
How do you get the most current document if you do not know its value for name?
How do you deal with very large series, which make the document hit the 16MB limit?
What is the benefit of the added complexity?
Simplify first
So, let's assume you have only one series for a moment. It gets as simple as
[{
"_id":"foo",
"ts": ISODate("2019-07-03T17:40:00.000Z"),
"value":1
},{
"_id":"bar",
"ts": ISODate("2019-07-03T17:45:00.000"),
"value":2
},{
"_id":"baz",
"ts": ISODate("2019-07-03T17:50:00.000"),
"value":3
}]
Assuming the name is unique, we can use it as _id, potentially saving an index.
You can actually get the semantic equivalent by simply doing a
> db.seriesa.find().sort({ts:-1})
{ "_id" : "baz", "ts" : ISODate("2019-07-03T17:50:00Z"), "value" : 3 }
{ "_id" : "bar", "ts" : ISODate("2019-07-03T17:45:00Z"), "value" : 2 }
{ "_id" : "foo", "ts" : ISODate("2019-07-03T17:40:00Z"), "value" : 1 }
Say you only want the two latest values; you can use limit():
> db.seriesa.find().sort({ts:-1}).limit(2)
{ "_id" : "baz", "ts" : ISODate("2019-07-03T17:50:00Z"), "value" : 3 }
{ "_id" : "bar", "ts" : ISODate("2019-07-03T17:45:00Z"), "value" : 2 }
Should you really need to have the older values in a queue-ish array
db.seriesa.aggregate([{
$group: {
_id: "queue",
name: {
$last: "$_id"
},
value: {
$last: "$value"
},
previous: {
$push: {
name: "$_id",
value: "$value"
}
}
}
}, {
$project: {
name: 1,
value: 1,
previous: {
$slice: ["$previous", {
$subtract: [{
$size: "$previous"
}, 1]
}]
}
}
}])
Nail it
Now, let us say you have more than one series of data. Basically, there are two ways of dealing with it: put different series in different collections or put all the series in one collection and make a distinction by a field, which for obvious reasons should be indexed.
So, when to use what? It boils down to whether you want to do aggregations over all series (maybe later down the road) or not. If you do, you should put all series into one collection. Of course, we have to slightly modify our data model:
[{
"name":"foo",
"series": "a",
"ts": ISODate("2019-07-03T17:40:00.000Z"),
"value":1
},{
"name":"bar",
"series": "a",
"ts": ISODate("2019-07-03T17:45:00.000"),
"value":2
},{
"name":"baz",
"series": "a",
"ts": ISODate("2019-07-03T17:50:00.000"),
"value":3
},{
"name":"foo",
"series": "b",
"ts": ISODate("2019-07-03T17:40:00.000Z"),
"value":1
},{
"name":"bar",
"series": "b",
"ts": ISODate("2019-07-03T17:45:00.000"),
"value":2
},{
"name":"baz",
"series": "b",
"ts": ISODate("2019-07-03T17:50:00.000"),
"value":3
}]
Note that for demonstration purposes, I fell back to the default ObjectId value for _id.
Next, we create an index over series and ts, as we are going to need it for our query:
> db.series.ensureIndex({series:1,ts:-1})
And now our simple query looks like this
> db.series.find({"series":"b"},{_id:0}).sort({ts:-1})
{ "name" : "baz", "series" : "b", "ts" : ISODate("2019-07-03T17:50:00Z"), "value" : 3 }
{ "name" : "bar", "series" : "b", "ts" : ISODate("2019-07-03T17:45:00Z"), "value" : 2 }
{ "name" : "foo", "series" : "b", "ts" : ISODate("2019-07-03T17:40:00Z"), "value" : 1 }
In order to generate the queue-ish document, we need to add a $match stage
> db.series.aggregate([{
$match: {
"series": "b"
}
},
// other stages omitted for brevity
])
Note that the index we created earlier will be utilized here.
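If you want to verify that, a quick check in the shell (the exact explain output varies by server version) should show an IXSCAN on { series: 1, ts: -1 } for the $match stage:
db.series.explain().aggregate([
    { $match: { "series": "b" } }
    // other stages omitted for brevity
])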
Or, we can generate a document like this for every series by simply using series as the _id in the $group stage and replace _id with name where appropriate
db.series.aggregate([{
$group: {
_id: "$series",
name: {
$last: "$name"
},
value: {
$last: "$value"
},
previous: {
$push: {
name: "$name",
value: "$value"
}
}
}
}, {
$project: {
name: 1,
value: 1,
previous: {
$slice: ["$previous", {
$subtract: [{
$size: "$previous"
}, 1]
}]
}
}
}])
Conclusion
Stop Being Clever when it comes to data models in MongoDB. Most of the problems with data models I saw in the wild and the vast majority I see on SO come from the fact that someone tried to be Smart (by premature optimization) ™.
Unless we are talking of ginormous series (which cannot be the case here, since your own approach is bound by the 16MB document limit anyway), the data model and queries above are highly efficient without adding unneeded complexity.
addMultipleData: (req, res, next) => {
let name = req.body.name ? req.body.name : res.json({ message: "Please enter Name" });
let value = req.body.value ? req.body.value : res.json({ message: "Please Enter Value" });
if (!req.body.name || !req.body.value) { return; }
//Step 1
models.dynamic.findOne({}, function (findError, findResponse) {
if (findResponse == null) {
let insertedValue = {
name: name,
value: value
}
//Step 2
models.dynamic.create(insertedValue, function (error, response) {
res.json({
message: "succesfully inserted"
})
})
}
else {
let pushedValue = {
name: findResponse.name,
value: findResponse.value
}
let updateWith = {
$set: { name: name, value: value },
$push: { previous: pushedValue }
}
let options = { upsert: true }
//Step 3
models.dynamic.updateOne({}, updateWith, options, function (error, updatedResponse) {
if (updatedResponse.nModified == 1) {
res.json({
message: "succesfully inserted"
})
}
})
}
})
}
//This is the schema
var multipleAddSchema = mongoose.Schema({
"name":String,
"value":Number,
"previous":[]
})

MongoDB query for values contained in array and results contain only those values

Let's say I have the following DB:
pizzas = [{
name: "pizza1",
toppings: ['mushrooms', 'pepperoni', 'sausage']
},
{
name: "pizza2",
toppings: ['mushrooms', 'pepperoni']
},
{
name: "pizza3",
toppings: ['mushrooms', 'onions']
},
{
name: "pizza4",
toppings: ['mushrooms']
}]
Now I want to fetch the pizzas that have 'mushrooms', 'pepperoni', or 'onions' and any combination of those. Then the query could be:
pizzas.find({toppings: ['mushrooms', 'pepperoni', 'onions']})
This would return all four pizzas in my db. But here's the problem. What if I wanted pizzas with any combination of only those three toppings, i.e. a pizza cannot contain a different topping like 'sausage'? For this query, I only want "pizza2", "pizza3", and "pizza4" to be returned. I could make a query like:
pizzas.find({$and: [{toppings: ['mushrooms', 'pepperoni', 'onions']}, {$not: {toppings: ['sausage']}}]})
The problem is that this requires me to know all of the possible toppings to exclude. Is there a better way to construct this query?
You basically need to find the "set difference" between the stored array and the desired list, and see whether any stored items are not one of the desired ingredients. So if the size of that difference is greater than 0, the pizza contains an ingredient outside your list.
If you have at least MongoDB 2.6, there is a $setDifference operator you can use in a $redact statement:
db.pizzas.aggregate([
{ "$match": {
"toppings": { "$in": [ "mushrooms", "pepperoni", "onions" ] }
}},
{ "$redact": {
"$cond": {
"if": {
"$eq": [
{ "$size": {
"$setDifference": [
"$toppings",
[ "mushrooms", "pepperoni", "onions" ]
]
}},
0
]
},
"then": "$$KEEP",
"else": "$$PRUNE"
}
}}
])
If your MongoDB is older than that, then you can implement the same logic in JavaScript using $where:
db.pizzas.find({
"toppings": { "$in": [ "mushrooms", "pepperoni", "onions" ] },
"$where": function() {
return this.toppings.filter(function(topping) {
return [ "mushrooms", "pepperoni", "onions" ].indexOf(topping) == -1;
}).length == 0;
}
})
Both exclude "pizza1" from results by the same comparison, with the native operators in .aggregate() being faster:
{
"_id" : ObjectId("564d44a59f28c6e0feabceea"),
"name" : "pizza2",
"toppings" : [
"mushrooms",
"pepperoni"
]
}
{
"_id" : ObjectId("564d44a59f28c6e0feabceeb"),
"name" : "pizza3",
"toppings" : [
"mushrooms",
"onions"
]
}
{
"_id" : ObjectId("564d44a59f28c6e0feabceec"),
"name" : "pizza4",
"toppings" : [
"mushrooms"
]
}
Note that it is still wise to use $in to filter first, as it at least narrows down the possible results and avoids a brute-force match of the whole collection. Use it rather than a "raw array" as in your question, since that form would only match documents whose array equals the given one exactly, with the elements in the same order.
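As a side note, on MongoDB 3.6 and newer the same "no extra toppings" check can also be written as a plain query, using $expr with $setIsSubset instead of $redact:
db.pizzas.find({
    "toppings": { "$in": [ "mushrooms", "pepperoni", "onions" ] },   // narrow with the index first
    "$expr": { "$setIsSubset": [ "$toppings", [ "mushrooms", "pepperoni", "onions" ] ] }
})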

How to remove an array value from item in a nested document array

I want to remove "tag4" only for "user3":
{
_id: "doc"
some: "value",
users: [
{
_id: "user3",
someOther: "value",
tags: [
"tag4",
"tag2"
]
}, {
_id: "user1",
someOther: "value",
tags: [
"tag3",
"tag4"
]
}
]
},
{
...
}
Note: This collection holds items referencing many users. Users are stored in a different collection. Unique tags for each user are also stored in the users collection. If a user removes a tag (or several) from his account, it should be deleted from all items.
I tried this query, but it removes "tag4" for all users:
db.test.update({
"users._id": "user3",
"users.tags": {
$in: ["tag4"]
}
}, {
$pullAll: {
"users.$.tags": ["tag4"]
}
}, {
multi: 1
})
I tried $elemMatch (and $and) in the selector, but either ended up with the same result, or it only worked on the first matching document, or strange things happened (sometimes all tags of other users were deleted).
Any ideas how to solve this? Is there a way to "back reference" in $pull conditions?
You need to use $elemMatch in your query object so that it will only match if both the _id and tags parts match for the same element:
db.test.update({
users: {$elemMatch: {_id: "user3", tags: {$in: ["tag4"]}}}
}, {
$pullAll: {
"users.$.tags": ["tag4"]
}
}, {
multi: 1
})
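If you are on MongoDB 3.6 or newer, a possible alternative (a sketch, not part of the accepted answer) is an arrayFilters-based update: the filtered positional operator $[u] targets every users element whose _id matches, rather than only the first element matched by the query:
db.test.updateMany(
    { "users": { $elemMatch: { _id: "user3", tags: "tag4" } } },
    { $pullAll: { "users.$[u].tags": ["tag4"] } },
    { arrayFilters: [ { "u._id": "user3" } ] }
)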

way to update multiple documents with different values

I have the following documents:
[{
"_id":1,
"name":"john",
"position":1
},
{"_id":2,
"name":"bob",
"position":2
},
{"_id":3,
"name":"tom",
"position":3
}]
In the UI a user can change the position of items (e.g. moving Bob to the first position, John gets position 2, Tom gets position 3).
Is there any way to update all positions in all documents at once?
You cannot update two documents with different values at once with a single MongoDB update query; you will always have to do that in two queries. You can of course set a field to the same value, or increment it by the same number, but you cannot apply two distinct updates in MongoDB with the same query.
You can use db.collection.bulkWrite() to perform multiple operations in bulk. It has been available since 3.2.
It is possible to perform operations out of order to increase performance.
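For example, a minimal shell sketch (the collection name "people" is just a placeholder) that swaps two positions in a single round trip:
db.people.bulkWrite([
    { updateOne: { filter: { _id: 1 }, update: { $set: { position: 2 } } } },
    { updateOne: { filter: { _id: 2 }, update: { $set: { position: 1 } } } }
], { ordered: false })  // unordered: the server may apply the operations in any order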
From MongoDB 4.2 you can also do this using a pipeline in the update, with the $set operator.
There are many ways possible now thanks to the operators available in the aggregation pipeline, though I am providing one of them:
exports.updateDisplayOrder = async keyValPairArr => {
try {
let data = await ContestModel.collection.update(
{ _id: { $in: keyValPairArr.map(o => o.id) } },
[{
$set: {
displayOrder: {
$let: {
vars: { obj: { $arrayElemAt: [{ $filter: { input: keyValPairArr, as: "kvpa", cond: { $eq: ["$$kvpa.id", "$_id"] } } }, 0] } },
in:"$$obj.displayOrder"
}
}
}
}],
{ runValidators: true, multi: true }
)
return data;
} catch (error) {
throw error;
}
}
example key val pair is: [{"id":"5e7643d436963c21f14582ee","displayOrder":9}, {"id":"5e7643e736963c21f14582ef","displayOrder":4}]
Since MongoDB 4.2, update can accept an aggregation pipeline as its second argument, allowing modification of multiple documents based on their own data.
See https://docs.mongodb.com/manual/reference/method/db.collection.update/#modify-a-field-using-the-values-of-the-other-fields-in-the-document
Excerpt from documentation:
Modify a Field Using the Values of the Other Fields in the Document
Create a members collection with the following documents:
db.members.insertMany([
{ "_id" : 1, "member" : "abc123", "status" : "A", "points" : 2, "misc1" : "note to self: confirm status", "misc2" : "Need to activate", "lastUpdate" : ISODate("2019-01-01T00:00:00Z") },
{ "_id" : 2, "member" : "xyz123", "status" : "A", "points" : 60, "misc1" : "reminder: ping me at 100pts", "misc2" : "Some random comment", "lastUpdate" : ISODate("2019-01-01T00:00:00Z") }
])
Assume that instead of separate misc1 and misc2 fields, you want to gather these into a new comments field. The following update operation uses an aggregation pipeline to:
add the new comments field and set the lastUpdate field.
remove the misc1 and misc2 fields for all documents in the collection.
db.members.update(
{ },
[
{ $set: { status: "Modified", comments: [ "$misc1", "$misc2" ], lastUpdate: "$$NOW" } },
{ $unset: [ "misc1", "misc2" ] }
],
{ multi: true }
)
Suppose that after updating the positions your array looks like
const objectToUpdate = [{
"_id":1,
"name":"john",
"position":2
},
{
"_id":2,
"name":"bob",
"position":1
},
{
"_id":3,
"name":"tom",
"position":3
}].map( eachObj => {
return {
updateOne: {
filter: { _id: eachObj._id },
update: { $set: { name: eachObj.name, position: eachObj.position } }
}
}
})
YourModelName.bulkWrite(objectToUpdate,
{ ordered: false }
).then((result) => {
console.log(result);
}).catch(err=>{
console.log(err.result.result.writeErrors[0].err.op.q);
})
It will update all the positions with their different values.
Note: I have used ordered: false here for better performance.