Is it possible to go through mongodb results ordered by constantly changing field with pagination? - mongodb

I have a collection with the following schema:
{
    content: string,
    score: Decimal
}
The goal is to page through the whole collection ordered by score, but the score can change every minute. So it is possible that a score on the first page becomes equal to or less than a score on the second page, and the same object (from the first page) is returned again on the second page. Is it possible to iterate over the whole collection without duplicates?
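A small simulation (plain JavaScript, no database; the documents and field names are illustrative) shows why paginating on a mutable score can repeat documents, and how keyset pagination on an immutable field such as _id avoids duplicates, at the cost of the pages not following the live score order:

```javascript
// Four documents sorted by a mutable "score" field.
const docs = [
  { _id: 1, score: 90 },
  { _id: 2, score: 80 },
  { _id: 3, score: 70 },
  { _id: 4, score: 60 },
];

// Page 1 ordered by score (descending), page size 2.
const byScore = (a, b) => b.score - a.score;
const page1 = [...docs].sort(byScore).slice(0, 2); // _id 1 and 2

// Between requests, document 1's score drops below the page boundary.
docs[0].score = 65;

// Page 2: "everything with a score below the last score seen on page 1".
const lastScore = page1[page1.length - 1].score; // 80
const page2 = [...docs].sort(byScore)
  .filter(d => d.score < lastScore).slice(0, 2);
// Document 1 now appears on BOTH pages.
const seenTwice = page2.some(d => page1.some(p => p._id === d._id));

// Keyset pagination on the immutable _id never repeats a document.
const byId = (a, b) => a._id - b._id;
const idPage1 = [...docs].sort(byId).slice(0, 2);
const lastId = idPage1[idPage1.length - 1]._id;
const idPage2 = [...docs].sort(byId)
  .filter(d => d._id > lastId).slice(0, 2);
const noDup = !idPage2.some(d => idPage1.some(p => p._id === d._id));
```

The trade-off is inherent: an ordering on a field that mutates between requests cannot be both stable and current.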

Related

Displaying Sum of Each Element in Firestore (Flutter)

I have a collection in Firestore where each document has a 'count' property, which is an int of 1. I want a function that adds that count of 1 to a total property somewhere in Firebase; I presume it would be another collection that calculates the sum as documents are added one by one. I'm at a bit of a loss about how to go about doing that.
The collection is displayed in a regular Stream and each element is in a ListTile. So what I'm after is a function, called somewhere, that would add that '1' to, let's say, a collection called 'Total', recalculating the sum whenever the function is executed. Also, how would I display that 'Total' collection somewhere else, on another page, as a string perhaps?
I attached a general image of how my Firestore looks just to give you a general picture.
I don't think code is necessary here since, as I said, it is just a regular ListTile and Stream of the 'Names' collection, but let me know if you need more context. Thank you!
In Cloud Firestore, in order to increase an integer property you would first need to iterate through the documents, reading the stored data, and then perform an update to change the value of this specific field in the corresponding "Total" collection.
To display the collection using Flutter, you can take a look at StreamBuilder, which can listen to document changes and display them appropriately; I also found this Medium article, which might help.
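A minimal sketch of the summing step described above, as a plain-JavaScript simulation with no Firebase SDK (the document shapes are illustrative). In real Firestore you could also update the total atomically with FieldValue.increment(1) instead of recomputing it client-side:

```javascript
// Each document in the 'Names' collection carries count: 1.
const names = [
  { name: "Alice", count: 1 },
  { name: "Bob", count: 1 },
  { name: "Carol", count: 1 },
];

// Iterate over the stored documents and sum their count fields,
// the way the answer describes; this value would then be written
// to the 'Total' collection.
const total = names.reduce((sum, doc) => sum + doc.count, 0); // 3

// Adding one more document bumps the total by that document's count.
names.push({ name: "Dave", count: 1 });
const newTotal = total + 1; // 4
```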

MongoDb get position of an item in the collection with sorting

I use a table with sorting and pagination.
To get the data for the current page of the table, I use find with sort, skip, and limit.
Question: when I add a new item to the collection, I need to open the table on the page that contains it.
That is, I need to get the position of the item in the collection, taking the sorting into account.
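One common way to do this (a sketch, not from the original post; field names are illustrative) is to count how many documents sort strictly before the new item, then divide by the page size. In MongoDB the count could come from something like db.items.countDocuments({score: {$gt: newDoc.score}}) for a descending sort on score, with a tie-breaker for equal values. The arithmetic, simulated in plain JavaScript:

```javascript
// In-memory stand-in for the collection, sorted descending by score.
const items = [
  { _id: "a", score: 50 },
  { _id: "b", score: 40 },
  { _id: "c", score: 30 },
  { _id: "d", score: 20 },
];
const pageSize = 2;

// Position under a descending sort = number of documents that
// sort before this one (strictly greater score).
function positionOf(doc, coll) {
  return coll.filter(d => d.score > doc.score).length;
}

const newDoc = { _id: "e", score: 35 };
const pos = positionOf(newDoc, items);   // 2 docs (50, 40) come first
const page = Math.floor(pos / pageSize); // zero-based page index: 1
```

The count query can use the same index as the table's sort, so it stays cheap even when the item lands deep in the collection.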

how to efficient paging in mongodb [duplicate]

This question already has answers here:
Implementing pagination in mongodb
(2 answers)
Closed 5 years ago.
I want to sort all docs by field x (multiple docs can have the same value for this field). Then every time I press "next", it loads 10 more docs.
If multiple docs have the same value, they can be displayed in any order among themselves; it doesn't matter.
Since skip() is inefficient on large datasets, how do I do this efficiently? No page numbers are needed, only infinite scroll.
If you don't require page numbers, then you can just utilise a monotonically increasing unique id value, such as _id with ObjectId().
Using your example:
/* _id of the last record delivered; leave it unset for the first scroll
   and update it after every iteration. */
var current_id;
var scroll_size = 10;
db.collection.find(current_id ? {_id: {$lt: current_id}} : {})
    .sort({_id: -1}) // newest first; _id is unique, so no tie-breaker is needed
    .limit(scroll_size);
// After each batch, set current_id to the _id of the last document returned.
The example above will give you the most recent records. Depending on your use case, you will have to decide what to do with newly inserted documents.
If you are scrolling on a field other than _id, make sure you add appropriate indexes on that field.
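The scroll loop above can be simulated in plain JavaScript (numeric ids stand in for ObjectId values) to check that every document is visited exactly once without skip():

```javascript
// 25 documents with monotonically increasing ids.
const collection = Array.from({ length: 25 }, (_, i) => ({ _id: i + 1 }));
const scrollSize = 10;

// Each "next" request filters on _id below the last id already
// delivered, instead of skipping over previous pages.
function nextPage(currentId) {
  return collection
    .filter(d => currentId === undefined || d._id < currentId)
    .sort((a, b) => b._id - a._id) // _id: -1, most recent first
    .slice(0, scrollSize);
}

const page1 = nextPage(undefined);                   // ids 25..16
const page2 = nextPage(page1[page1.length - 1]._id); // ids 15..6
const page3 = nextPage(page2[page2.length - 1]._id); // ids 5..1
const totalSeen = page1.length + page2.length + page3.length; // 25
```

Unlike skip(n), each request is an indexed range scan whose cost does not grow with how far the user has scrolled.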

Solr: How to Search by Time *AND* Distance

We are working on an app using Solr to search by distance. A Solr consultant wrote the original code but is no longer available, so I, a Solr newbie, am trying to take it over. The current index insertion code looks like this:
{add:
  {doc:
    {id: <my_id>,
     category: <my_type>,
     resourcename: <private_flag>,
     store: <my_latlng>
    },
   overwrite: true,
   commitWithin: <commit_time>
  }
}
And the query below properly returns all the documents near (mylat, mylng):
localhost:8983/solr/select?wt=json&q=category:"<mytype>"&fl=id&fq=
{!bbox}&sfield=store&pt=<mylat>,<mylng>&d=<my_distance>&rows=200
All was well in paradise. Now we need to add a time dimension, meaning that instead of just retrieving nearby docs, we need to retrieve nearby docs within a specific time range (e.g. the last 2 days, or the last 2 months). This means adding an "origin_time" field to each document, and then modifying the query to search by time plus distance.
Can anyone suggest how I should add this time field to the index and how to adjust the query to search by distance and time?
Thanks!
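One possible approach (a sketch under assumptions, not from the original consultant's code: the field name origin_time, the date value, and the exact parameter layout are illustrative): declare origin_time as a date-typed field in the schema, send it with each document as an ISO-8601 UTC timestamp, and add a second fq clause using Solr's date-math range syntax. Multiple fq parameters are intersected, so the spatial filter keeps working unchanged:

```
{add:
  {doc:
    {id: <my_id>,
     category: <my_type>,
     resourcename: <private_flag>,
     store: <my_latlng>,
     origin_time: "2020-06-01T12:00:00Z"
    },
   overwrite: true,
   commitWithin: <commit_time>
  }
}

localhost:8983/solr/select?wt=json&q=category:"<mytype>"&fl=id&fq=
{!bbox}&sfield=store&pt=<mylat>,<mylng>&d=<my_distance>
&fq=origin_time:[NOW-2DAYS TO NOW]&rows=200
```

For "2 months ago" the range becomes origin_time:[NOW-2MONTHS TO NOW]; date math like NOW-2DAYS is resolved by Solr at query time.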

Get top 50 records for a certain value w/ mongo and meteor

In my Meteor project, I have a leaderboard of sorts, which shows players of every level in the game on a chart. For simplicity's sake, let's say there are levels 1-100. Currently, to avoid overloading Meteor, I just tell the server to send me every record newer than two weeks old, but that's not sufficient for an accurate leaderboard.
What I'm trying to do is show 50 records for each level. So, if there are 100 records at level 1, 85 at level 2, 65 at level 3, and 45 at level 4, I want to show the latest 50 records from each level, giving me [50, 50, 50, 45] records, respectively.
The data looks something like this:
{
    snapshotDate: new Date(),
    level: 1,
    hp: 5000,
    str: 100
}
I think this requires some MongoDB aggregation, but I couldn't quite figure out how to do it in one query. It would be trivial to do in two, though: select all records, group by level, sort each level by date, then take the last 50 records from each level. However, I would prefer to do it in one operation, if I could. Is it currently possible to do something like this?
Currently there is no way to pick the top n records of a group in the aggregation pipeline. There is an unresolved open ticket regarding this: https://jira.mongodb.org/browse/SERVER-9377.
There are two solutions to this:
Keep your document structure as it is now and aggregate, but grab the top n records and slice off the remaining records for each group on the client side.
Code:
var top_records = [];
db.collection.aggregate([
    // The sort needs to come before the $group, because once the
    // records are grouped by level there is only one document per group.
    {$sort: {"snapshotDate": -1}},
    // Maintain all the records of each level in an array, in sorted order.
    {$group: {"_id": "$level", "recs": {$push: "$$ROOT"}}}
], {allowDiskUse: true}).forEach(function(level) {
    level.recs.splice(50); // Keep only the top 50 records.
    top_records.push(level);
});
Remember that this loads all the documents for each level into memory and removes the unwanted records on the client side.
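The group-then-trim shape of approach 1 can be simulated in plain JavaScript (small illustrative data; the cap is 2 here instead of 50 for clarity):

```javascript
const TOP_N = 2; // 50 in the original question
const records = [
  { level: 1, snapshotDate: 3, hp: 5200 },
  { level: 1, snapshotDate: 1, hp: 5000 },
  { level: 1, snapshotDate: 2, hp: 5100 },
  { level: 2, snapshotDate: 1, hp: 7000 },
];

// $sort then $group with $push: "$$ROOT", done in memory: sort all
// records by snapshotDate descending, then bucket them by level.
const groups = {};
const sorted = [...records].sort((a, b) => b.snapshotDate - a.snapshotDate);
for (const rec of sorted) {
  if (!groups[rec.level]) groups[rec.level] = [];
  groups[rec.level].push(rec);
}

// Client-side trim, mirroring level.recs.splice(50).
const topRecords = Object.keys(groups).map(level => {
  const recs = groups[level];
  recs.splice(TOP_N); // keep only the newest TOP_N per level
  return { _id: Number(level), recs };
});
```

As in the real pipeline, every record still has to be grouped before the trim, which is why the server-side $group can need allowDiskUse for large collections.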
Alter your document structure to accomplish what you really need. If you only ever need the top n records, keep them in sorted order in the root document. This is accomplished using a sorted, capped array.
Your document would look like this:
{
    level: 1,
    records: [{snapshotDate: 2, hp: 5000, str: 100},
              {snapshotDate: 1, hp: 5001, str: 101}]
}
where records is a capped array of size n whose sub-documents are always sorted in descending order of their snapshotDate.
To make the records array work that way, we always perform an update operation whenever we need to insert documents into it for any level.
db.collection.update({"level": 1},
    {$push: {
        records: {
            $each: [{snapshotDate: 1, hp: 5000, str: 100},
                    {snapshotDate: 2, hp: 5001, str: 101}],
            $sort: {"snapshotDate": -1},
            $slice: 50 // Always trim the array to 50 elements.
        }
    }}, {upsert: true})
What this does is always keep the size of the records array at 50, re-sorting the array whenever new sub-documents are inserted at a level.
A simple find, db.collection.find({"level": {$in: [1, 2, ..]}}), would then give you the top 50 records, in order, for each selected level.
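The $push / $each / $sort / $slice behaviour above can be checked with an in-memory sketch (cap of 3 instead of 50 for clarity): new sub-documents are appended, the array is re-sorted by snapshotDate descending, then trimmed to the cap.

```javascript
const CAP = 3; // 50 in the original answer

// Merge new sub-documents in, re-sort, and trim, mirroring
// $push: {$each: ..., $sort: {snapshotDate: -1}, $slice: CAP}.
function pushCapped(records, newDocs, cap) {
  const merged = records.concat(newDocs);              // $each
  merged.sort((a, b) => b.snapshotDate - a.snapshotDate); // $sort
  return merged.slice(0, cap);                         // $slice
}

let doc = { level: 1, records: [] };
doc.records = pushCapped(doc.records,
  [{ snapshotDate: 1, hp: 5000 }, { snapshotDate: 2, hp: 5001 }], CAP);
doc.records = pushCapped(doc.records,
  [{ snapshotDate: 4, hp: 5003 }, { snapshotDate: 3, hp: 5002 }], CAP);
// records now holds only the three newest snapshots: dates 4, 3, 2.
```

The oldest snapshot (date 1) is silently dropped once the cap is exceeded, which is exactly the trade-off of this structure: reads are a single find, but history beyond the top n is gone.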