Pagination and Filtering Best Practice - REST

I have data that I need to paginate in a table, with a lot of filters to apply to it. For the best user experience, fetching data from the server every time a filter is applied is bad practice: it can cause timeouts, and even when it doesn't, it can take more than 5 seconds. So instead of having only one level of pagination on the backend, I am thinking of doing multi-level pagination:
Filter the already existing data on the front end.
Fetch the data for the second and third pages of the pagination in the background.
Don't filter everything on the back end.
Is that a good practice, or is there a better one you can think of?
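For what it's worth, here is a minimal sketch of the idea, assuming a hypothetical /api/items?page=N endpoint; applyFilters and render are illustrative placeholders, so treat this as one possible shape, not a definitive implementation:

const cache = new Map();

function loadPage(page) {
  // Cache the promise so concurrent calls for the same page share one request.
  if (!cache.has(page)) {
    cache.set(page, fetch('/api/items?page=' + page).then(res => res.json()));
  }
  return cache.get(page);
}

function applyFilters(rows, filters) {
  // Filter rows that are already on the client; no round trip per filter change.
  return rows.filter(row => filters.every(predicate => predicate(row)));
}

function render(rows) {
  // Placeholder: swap in your real table-rendering code.
  console.table(rows);
}

async function showPage(page, filters) {
  const rows = await loadPage(page);
  render(applyFilters(rows, filters));
  loadPage(page + 1); // prefetch the next two pages in the background
  loadPage(page + 2);
}

The trade-off: client-side filtering only sees the pages already fetched, so a filter that matches rows on unfetched pages will still need a backend query.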

Related

NoSQL: how to look up an id in a collection

NoSQL noob here. I'm building an app using Firestore NoSQL. I'm looping through items, where every item has an owner id (the creator's user id).
I want to display the owner's name on the listing page. In traditional SQL I have a foreign key, so I can just make a reference to, say, Item.Owner.FirstName.
What's the best practice in NoSQL? Should I be saving the owner name as a field at the time of saving the item? Or do a lookup of each owner id to get the user object while I'm looping through items?
The second option sounds expensive, so I'm assuming the first is the way to go. Unless there's a better, more accepted way?
Both will work. You either reference the data in the other document in whatever way you see fit, or you duplicate information into the document that you intend to query to build the display. You just have to decide which problem you want to deal with:
If you duplicate data among documents (known as "denormalization"), then you'll have to put effort into keeping them all up to date with each other, if that's what you require. So, writing one document might actually turn into writing multiple documents.
If you normalize your data with no duplication, then each of your queries will require more queries to get the related data from other documents. This could result in a drop in performance and an increase in cost for apps with heavy read loads.
Since we don't know the performance requirements and usage behavior of your app, there is no way to give specific advice. You will have to think carefully about which problem you want to have, perhaps based on complexity, performance, and overall cost.
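For illustration, a minimal sketch of the denormalized write in the Firestore web SDK (v8-style namespaced API; the db handle, collection names, and field names are assumptions):

// Inside an async function: copy the owner's name into the item at write time,
// so the listing page needs only a single query per item.
const ownerSnap = await db.collection('users').doc(ownerId).get();
await db.collection('items').add({
  ownerId: ownerId,
  ownerName: ownerSnap.data().firstName, // duplicated; must be updated if the user renames
  title: 'My item'
});

If you go this route, remember that a rename means fanning the change out to every item that duplicates the name, e.g. with a batch update or a Cloud Function.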

Firestore pagination of multiple queries

In my case, there are 10 fields and all of them need to be searched with "or"; that is why I'm using multiple queries and filtering the common items on the client side after collecting the results with Promise.all().
The problem is that I would like to implement pagination. I don't want to get all the results of each query, which has too much "read" cost. But I can't use .limit() on each query, because what I want is to limit the final result.
For example, I would like to get the first 50 common results across the 10 queries' results; if I apply limit(50) to each query, the final result might contain fewer than 50.
Does anyone have ideas about pagination for multiple queries?
I believe that the best way for you to achieve that is using query cursors, so you can better manage the data that you retrieve from your searches.
I recommend taking a look at the links below to find more information, including a question answered by the community that seems similar to your case.
Paginate data with query cursors
Multi query and pagination with Firestore
Let me know if the information helped you!
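For reference, a minimal cursor sketch (v8-style web SDK; the db handle and the products collection are illustrative, and the code is assumed to run inside an async function):

// Page 1: order by a field and take a fixed page size.
const first = await db.collection('products').orderBy('name').limit(50).get();

// Page 2: start after the last document of the previous page.
const last = first.docs[first.docs.length - 1];
const next = await db.collection('products')
  .orderBy('name')
  .startAfter(last)
  .limit(50)
  .get();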
Not sure if it's relevant, but I think I'm having a similar problem and have come up with 4 approaches that might work around it.
1. Instead of making 10 queries, fetch all the products matching a single selection filter, e.g. category (in my case a customer can only set a single category field), and do all the filtering on the client side. With this approach the app still reads lots of documents at once, but it can at least reuse them for the rest of the session and filter with more flexibility than Firestore's strict query rules allow.
2. Run multiple queries in a server environment, such as Cloud Functions with Node.js, and return only the first 50 documents that match all the filters. With this approach the client receives only the data it needs, but the server still reads a lot.
3. This is actually your approach combined with the accepted answer.
4. Create automated index documents in Firebase with the help of Cloud Functions, e.g. Colors: {red: [product1ID, product2ID, ...], ...}, storing just the document IDs. Depending on the filters, fetch the corresponding arrays on the server side with Cloud Functions, create a cross product of the matching arrays (AND logic), and push the first 50 elements of it to the client. Knowing which products to display, the client then handles fetching them with the client-side library; a sketch of this idea follows below.
Hope these help. Here is my original post: Firestore multiple `in` queries with `and` logic, query structure
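A sketch of approach 4, with hypothetical names throughout (the index-document shapes are assumptions, and the code would run server-side, e.g. in a Cloud Function):

function intersect(arrays) {
  // AND logic: keep only the IDs present in every selected filter array.
  // Assumes at least one filter array was selected.
  return arrays.reduce((acc, ids) => acc.filter(id => ids.includes(id)));
}

// e.g. intersect([colors.red, sizes.large]).slice(0, 50) gives the client
// the first 50 matching product IDs; the client then fetches those
// documents itself with the client-side library.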

How do I efficiently page a large collection of query results with Sails.js / Waterline?

I'm working with a large dataset behind the Waterline ORM. In several use cases I need to do some processing on many or most of the records: tens of thousands of them.
So far I've been working with .find(), but that executes and returns the entire result set. Is there a Sails/Waterline approach to iterating over a query result that preserves the storage-agnostic aspect of the ORM?
You can use paginate, something like: Model.find().paginate({page: xx, limit: xx});
More info here: http://sailsjs.org/documentation/concepts/models-and-orm/query-language
Search for pagination :)
If you want to keep the storage-agnostic Waterline trait, you will have to take a look at your actual schema implementation (even if you're coding storage-agnostic).
You can:
Use pagination as in #holzanic's answer; however, this might come with critical performance issues in some storage technologies, since page/skip pagination can force the database to scan and discard all the skipped rows.
Use streams.
If you will be listing whole objects from a Model, you can paginate by id: take the first n elements in a query, then ask for the next page of records whose id attribute is greater than the last id received in the previous page, as in the sketch below.
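A sketch of that paginate-by-id (keyset) approach, assuming a promise-capable Waterline version and an auto-incrementing numeric id; processRecord is a placeholder for your per-record work:

async function processAll(Model, batchSize) {
  let lastId = 0;
  while (true) {
    const batch = await Model.find({
      where: { id: { '>': lastId } },
      limit: batchSize,
      sort: 'id ASC'
    });
    if (batch.length === 0) { break; } // no more records
    batch.forEach(record => processRecord(record));
    lastId = batch[batch.length - 1].id; // cursor for the next batch
  }
}

Unlike page/skip pagination, each batch here is an indexed range query, so the database never has to scan and discard the rows of earlier pages.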

Pagination solution for Salat/Casbah

I am interested in a pagination solution for documents stored in MongoDB. I use Salat/Casbah in order to work with this data. As far as I can tell, there is nothing readily available in open source to paginate data using those two solutions. Is there a solution I'm currently overlooking in order to paginate the data that I'm displaying in an HTTP API using those as my drivers?
Despite its cheesy attempts at humor, this post on MongoDB paging is pretty good and focuses on range queries and associated techniques to paginate your data. How you actually do it depends on the amount of data and the nature of your application.
Please be careful with pagination! In MongoDB, pagination very often results in iteration over the entire collection, and exactly because of this Casbah doesn't have a good pagination solution. You can try to use range filtering instead of offset pagination: for example, when the result is ordered by a relevance field, select results where relevance > some value.
There is a lot of information about how to do efficient paging in MongoDB, e.g.: MongoDB - paging
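A mongo shell sketch of that range-query style, assuming an items collection ordered by a relevance field (descending); for clean paging, relevance should be unique or paired with a tiebreaker field:

var lastRelevance = 0.75; // lowest relevance seen on the previous page (example value)
var nextPage = db.items.find({ relevance: { $lt: lastRelevance } })
                       .sort({ relevance: -1 })
                       .limit(50);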

MongoDB. Use cursor as value for $in in next query

Is there a way to use the cursor returned by the previous query as a value for $in in the next query? For example, something like this:
var users = db.user.find({state:1})
var offers = db.offer.find({user:{$in:users}})
I think this can reduce the traffic between MongoDB and the client in case the client doesn't need the user information at all, just the offers. Am I wrong?
Basically you want to do a join between two collections, which Mongo doesn't support. You can reduce the amount of data being transferred from the server by limiting the fields returned from the first query to only the unique user information you need to query the offers collection (i.e. the _id).
If you really just want to make one query, then you should store more information in the offers collection. For example, if you're trying to find offers for active users, you would store the active state of the user in the offers collection.
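A mongo shell sketch of that projection approach (collection and field names taken from the question):

// Transfer only the _id values from the first query, then use them in $in.
var userIds = db.user.find({ state: 1 }, { _id: 1 })
                     .map(function (u) { return u._id; });
var offers = db.offer.find({ user: { $in: userIds } });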
To work from your comment:
Yes, that's why I used the tag 'join' in the question. The idea is that I can make the first query more complex, using a bunch of fields and regexes, without storing user data in other collections except as references. In these cases I always have to perform two consecutive queries, but transferring the results of the first query is necessary neither for me nor for mongodb itself. I just want to understand whether it can be done now, whether it will be possible in the future, or whether it cannot be implemented for some technical reason.
As far as I understand it, there is no immediate hurry to make this possible. Also, the way it is coded at the moment would make this quite a big change to how cursors work and are defined, big enough to possibly break implementations for other people. It is a similar situation to whether to make safe the default for inserts and updates in all future drivers: it is recognised that safe should be the default, but changing it would break things for people who expect it the other way around.
It is also rather inefficient if you don't require the results of the first query at all; however, since most networks are provisioned with high traffic in mind and traffic is cheap, there hasn't been demand to support chained queries server-side in the cursor.
That said, subselects (which this basically is: selecting a set of rows based upon a sub-selection of previous rows) have come up on mongodb-user a couple of times, and there might even be a JIRA ticket for it somewhere; if not, it might be useful to create one.
As for doing it right now: there is no way.