I am using skip() and limit() with Mongoose in MongoDB, but they return documents in a seemingly random order, and sometimes the same documents appear more than once. I want to fetch a limited number of documents, then skip past them and fetch the next batch.
I guess you are trying to implement pagination here. In both SQL and NoSQL databases, the data must be sorted in a specific order to paginate without getting jumbled results on each db call.
For example:

await Event.find({}).sort({ createdDate: -1 }).skip(10).limit(10);

This fetches documents 11-20, sorted by createdDate (note that the sort direction must be 1 for ascending or -1 for descending; 0 is not a valid value). Because the order is fixed, the data won't be shuffled between fetches unless you insert or delete documents in the meantime.
I hope this answers your question.
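One caveat worth adding: if createdDate is not unique, documents sharing the same date can still swap places between queries. A minimal sketch (with a hypothetical Event model) that appends the unique _id as a secondary sort key, so the page order is fully deterministic:

```javascript
// Offset for a 1-based page number.
function skipFor(page, pageSize) {
  return (page - 1) * pageSize;
}

// Deterministic page fetch: _id breaks ties between equal createdDate values,
// so consecutive pages never overlap or skip documents.
async function getPage(Event, page, pageSize) {
  return Event.find({})
    .sort({ createdDate: -1, _id: -1 }) // newest first; _id is the tie-breaker
    .skip(skipFor(page, pageSize))
    .limit(pageSize);
}
```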
I am trying to get some random records from my small MongoDB database, and I sometimes want to see duplicates.
The $sample aggregation stage doesn't really work for me, because it may return duplicates only under specific conditions that my query and database won't meet.
Is there any reasonable way of getting some random records while allowing duplicates, without downloading the entire collection?
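One sketch, assuming skip() is acceptable because the collection is small: draw n offsets uniformly at random with replacement, then fetch one document per offset. Model is a placeholder Mongoose model; this is O(offset) per fetch, so it does not scale to large collections.

```javascript
// Draw n offsets in [0, count) *with replacement*, so duplicates are possible.
function randomOffsets(count, n, rng = Math.random) {
  return Array.from({ length: n }, () => Math.floor(rng() * count));
}

// Fetch one document per offset; the same document can be returned twice.
async function randomDocsWithDuplicates(Model, n) {
  const count = await Model.countDocuments();
  return Promise.all(
    randomOffsets(count, n).map(offset => Model.findOne().skip(offset))
  );
}
```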
I would like to understand which of the queries below would be faster when doing updates in MongoDB. I want to update a few thousand records in one stretch.
Accumulating the object ids of those records and firing the update using $in, or using a bulk update?
Using one or two fields in the collection that are common to those few thousand records - akin to a WHERE clause in SQL - and firing an update using those fields. These fields might or might not be indexed.
I know the query will be much smaller in the second case, since every single _id (oid) does not have to be accumulated. Does accumulating _ids and using those to update documents offer any practical performance advantage?
Does accumulating _ids and using those to update documents offer any practical performance advantages?
Yes, because MongoDB will certainly use the _id index (the IDHACK fast path).
With the second method - as you observed - you can't tell whether an index will be used for a given field.
So the answer is: it depends.
If your collection has millions of documents or more, and/or the number of search fields is quite large, you should prefer the first method, especially if the id list is not small and/or the id values are adjacent.
If your collection is pretty small and you can tolerate a full scan, you may prefer the second approach.
In any case, you should verify both methods using explain().
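A sketch of how that comparison might look: explain("executionStats") on the query shows whether the _id index (IDHACK) or a collection scan is chosen. The chunk helper below also keeps each $in list at a manageable size; the Thing model and field names are illustrative, not from the original question.

```javascript
// Split a large id list into batches so each $in update stays small.
function chunk(ids, size) {
  const batches = [];
  for (let i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size));
  }
  return batches;
}

// Hypothetical usage against a Mongoose model:
// for (const batch of chunk(allIds, 1000)) {
//   await Thing.updateMany({ _id: { $in: batch } }, { $set: { done: true } });
// }
// Inspect the plan for either strategy before committing to it:
// await Thing.find({ _id: { $in: allIds } }).explain("executionStats");
// await Thing.find({ batch: "2024-01" }).explain("executionStats");
```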
Can we save new records in descending order in MongoDB, so that the first saved document is returned last by a find query? I do not want to use $sort, so the data should be pre-saved in descending order.
Is it possible?
Based on the description above, if you do not want to use $sort, an alternative solution is to create a capped collection, which maintains the insertion order of documents in MongoDB.
For a more detailed description of capped collections in MongoDB, please refer to the documentation at the following URL:
https://docs.mongodb.org/manual/core/capped-collections/
But please note that capped collections are fixed-size collections, so MongoDB will automatically remove the oldest documents once the collection exceeds its allocated size.
The order of the records is not guaranteed by MongoDB unless you add a $sort operator. Even if the records happen to be ordered on disk, there is no guarantee that MongoDB will always return the records in the same order. MongoDB does quite a bit of work under the hood and as your data grows in size, the query optimiser may pick a different execution plan and return the data in a different order.
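If a sort on the always-indexed _id field is acceptable, one cheap option: an auto-generated ObjectId begins with a 4-byte creation timestamp, so sort({ _id: -1 }) approximates "newest inserted first" without any extra field or index. This assumes default ObjectIds; custom _id values break it. A sketch of reading that timestamp back out:

```javascript
// The first 8 hex characters of a default ObjectId encode seconds since
// the Unix epoch; multiply by 1000 to get a JS millisecond timestamp.
function objectIdTimestamp(hexId) {
  return parseInt(hexId.slice(0, 8), 16) * 1000;
}

// Hypothetical usage with a Mongoose model:
// const newestFirst = await Model.find({}).sort({ _id: -1 });
```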
I've heard that using MongoDB's skip() to batch query results is a bad idea, because it can leave the server I/O bound as it has to 'walk through' all the skipped results. I want to return a maximum of 200 documents at a time, and then the user will be able to fetch the next 200 if they want (assuming they haven't limited it to fewer).
Initially I read up on paginating results, and most sources said the easiest way in MongoDB is to modify the query criteria to simulate skipping.
For example, if a field called accNumber on the last document is 28022004, then the next query should include "accNumber > 28022004" in its criteria. But what if there are no unique fields included in the projection? What if the user wants to sort the records by a non-unique field?
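One common workaround for the non-unique case (a sketch, not the only design): always append the unique _id to the sort, and build the "after" filter over the (sortField, _id) pair. Here last is the final document of the previous page, and the model and field names are illustrative.

```javascript
// Keyset-pagination filter over a possibly non-unique sort field:
// either the sort field is strictly greater, or it ties and _id is greater.
function nextPageFilter(sortField, last) {
  return {
    $or: [
      { [sortField]: { $gt: last[sortField] } },
      { [sortField]: last[sortField], _id: { $gt: last._id } },
    ],
  };
}

// Hypothetical usage:
// Model.find(nextPageFilter("accNumber", lastDoc))
//      .sort({ accNumber: 1, _id: 1 })
//      .limit(200);
```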
I have been scouring the internet for a few hours looking for an efficient way to do bulk upserts into Meteor.js's smart collections.
Scenario:
I hit an API every 12 hours to asynchronously get updated info for 200 properties. For each property I get an array of about 300 JSON objects on average. About 70% of the objects might not have been updated, but the remaining 30% need to be updated in the database. Since there is no way to identify that 30% without matching them against the documents already in the database, I decided to upsert all documents.
My options:
Option 1: Run a loop over the objects array and upsert each document in the database.
Option 2: Remove all documents from the collection and bulk insert the new objects.
For option 1, looping and upserting 60K documents (a number that will grow over time) takes a lot of time, but at the moment it seems like the only plausible option.
For option 2, Meteor.js does not allow bulk inserts into its smart collections, and even then we would have to loop over the array of objects.
Is there another option where I can achieve this efficiently?
MongoDB supports inserting an array of documents, so you can insert all your documents in one call via Meteor's rawCollection:
MyCollection.remove({}); // empty the collection first
var theRaw = MyCollection.rawCollection();
var mongoInsertSync = Meteor.wrapAsync(theRaw.insert, theRaw);
var result = mongoInsertSync(theArrayOfDocs);
In production code you should wrap this in a try/catch to get hold of the error if the insert fails; result is only meaningful if inserting the array of documents succeeds.
The above rawCollection solution does insert, but it appears not to support the ordered: false option for continuing past a failed document; the raw collection exits on the first error, which is unfortunate.
"If false, perform an unordered insert, and if an error occurs with one of documents, continue processing the remaining documents in the array."