We know that the RESTHeart API for MongoDB provides a facility to get data by pages, and that the maximum pagesize is 1000.
Is there a way, in RESTHeart or outside it, to get all of the data in a single call?
I am just trying to avoid making a separate REST call to RESTHeart for every page.
It is not possible, just as it is not possible to retrieve an entire collection with a single MongoDB driver call.
The limit of 1000 is imposed to bound the size of the HTTP response. With documents that can reach 10 megabytes or more of JSON, even a single page could result in a huge payload!
You can, however, make concurrent requests for different pages to speed up the data retrieval...
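For illustration, here is a minimal sketch of fetching several pages in parallel with Java 11's java.net.http client; the host, database, and collection names are assumptions:

```java
// Minimal sketch: fetch RESTHeart pages 1..5 concurrently and collect the bodies.
// The host and the "db"/"coll" names are placeholders for your setup.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ConcurrentPages {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // Fire requests for pages 1..5 in parallel, each at the 1000-document cap.
        List<CompletableFuture<String>> pages = IntStream.rangeClosed(1, 5)
                .mapToObj(page -> HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8080/db/coll?page=" + page
                                + "&pagesize=1000"))
                        .GET()
                        .build())
                .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());

        // join() blocks until each page's JSON body has arrived.
        pages.forEach(f -> System.out.println(f.join().length() + " bytes received"));
    }
}
```

Each future resolves to the JSON body of one page; how you merge the pages afterwards is up to your client.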
Related
I have a document store on MongoDB Atlas where each document is roughly 1 MB. I am using a REST API and a GET request to access data from this database. This GET request filters the database and can return a number of documents, depending on the filter.
My question is: is there a recommended limit on the payload size, in terms of the amount of data we can transmit? For example, if my filter matches 100 documents, I would be transmitting approximately 100 MB of data.
Are there better ways to do this?
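One thing worth considering, sketched below under the assumption of a generic HTTPS endpoint (the URL and filter parameter are placeholders, not a real Atlas API): if a response can reach ~100 MB, stream it to disk rather than buffering it in memory.

```java
// Hypothetical sketch: stream a large filtered result to a file as it arrives,
// so the ~100 MB payload never has to fit in the JVM heap at once.
// The endpoint URL and the "filter" parameter are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class StreamDownload {
    public static void main(String[] args) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/documents?filter=status:open"))
                .GET()
                .build();

        // BodyHandlers.ofFile writes the body to disk incrementally instead of
        // materializing the whole response as one in-memory string.
        HttpResponse<Path> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofFile(Path.of("result.json")));
        System.out.println("saved to " + resp.body());
    }
}
```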
I want to do pagination in Firestore using the REST APIs (from Spring Boot). Is there anything similar to Spring Data?
Currently I am using the "Paginate data with query cursors" approach, but it requires the last element (last token/last count) or the last snapshot to get the next batch.
Is there any other way to paginate in Firestore using just a page number and a page size?
Unfortunately this is not possible; Firestore's pagination was designed with a query-cursor approach in mind.
Even if there were a library that did this under the hood and gave you the illusion of a page/page-size approach (and I don't think there is one), it would not matter: since you are using the REST API to connect to Firestore, you should use the parameters Firestore was designed to accept, which are, as you mentioned, the last element (last token/last count) or the last snapshot to get the next batch.
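For illustration, a minimal sketch of fetching one cursor-based page through the REST API's documents:runQuery endpoint; PROJECT_ID, the collection, the field, and the cursor value are placeholders, authentication is omitted, and the text block needs Java 15+:

```java
// Sketch of cursor-based paging against Firestore's REST runQuery endpoint.
// PROJECT_ID, "cities", "name" and "Paris" are placeholders; a real call also
// needs an OAuth bearer token or API key, omitted here.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FirestorePage {
    public static void main(String[] args) throws Exception {
        // Order by a field and start after the last value seen on the previous
        // page: "before": false makes the startAt cursor exclusive.
        String query = """
            {
              "structuredQuery": {
                "from": [{"collectionId": "cities"}],
                "orderBy": [{"field": {"fieldPath": "name"}, "direction": "ASCENDING"}],
                "startAt": {"values": [{"stringValue": "Paris"}], "before": false},
                "limit": 20
              }
            }""";

        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://firestore.googleapis.com/v1/projects/PROJECT_ID/"
                        + "databases/(default)/documents:runQuery"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();

        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
    }
}
```

Note how the next page is defined by the last value of the previous one, not by a page number; this is exactly the cursor approach described above.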
I'm getting over 100 MB of results from an API, which I need to paginate because it takes over 10 minutes to sort. All of the results are regular JSON objects.
How could I paginate this amount of data using Wicket?
100 MB is an amount of data that you had better not keep in memory! Better to store it (temporarily) in some document NoSQL database (like Couchbase, MongoDB, or similar), then use the database's query language to read one page at a time.
Wicket offers components that work with an IDataProvider, an interface that supports data paging.
https://ci.apache.org/projects/wicket/guide/8.x/single.html#_pageable_repeaters
You'll probably have to cache your 100 MB result somewhere, since you don't want to reload the data on every page change. You should not store it inside a component, though, or it will be serialized along with the containing page.
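For illustration, a minimal sketch of an IDataProvider that reads each page from the backing store on demand; Document and DocumentDao are hypothetical stand-ins for your entity and data-access layer:

```java
// Sketch of a Wicket IDataProvider that fetches one page at a time from a
// backing store instead of holding the 100 MB result in memory.
import java.io.Serializable;
import java.util.Iterator;
import java.util.List;
import org.apache.wicket.markup.repeater.data.IDataProvider;
import org.apache.wicket.model.IModel;
import org.apache.wicket.model.Model;

// Hypothetical entity and DAO; replace with your real types.
// Document must be Serializable so Model.of(...) accepts it.
class Document implements Serializable { }
interface DocumentDao {
    List<Document> findPage(long first, long count);
    long countAll();
}

public class DocumentDataProvider implements IDataProvider<Document> {

    private final DocumentDao dao;

    public DocumentDataProvider(DocumentDao dao) {
        this.dao = dao;
    }

    @Override
    public Iterator<? extends Document> iterator(long first, long count) {
        // Translate Wicket's window into a skip/limit query, e.g. in MongoDB:
        // collection.find().skip(first).limit(count)
        return dao.findPage(first, count).iterator();
    }

    @Override
    public long size() {
        return dao.countAll(); // total row count, used to render the pager
    }

    @Override
    public IModel<Document> model(Document object) {
        return Model.of(object);
    }

    @Override
    public void detach() {
        // nothing to detach; the provider keeps no page data between requests
    }
}
```

DataView, DataTable and the other pageable repeaters accept such a provider and only ever request the window of rows they currently display.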
I have configured RESTHeart to get data from MongoDB. Requests that involve the String, Object, or Number types work well and return the same results as when I query MongoDB with a client (RoboMongo, MongoDB Compass...). However, requests that involve the Date type take much longer than with MongoDB clients, and Nginx closes the connection after 60 s (the same query in a client takes 0.163 s).
## Query in a MongoDB client

    db.getCollection('collection').find({"DATE_A_FMT": {'$gte':ISODate('2017-02-21T05:00:00.000Z')}})

## Request with RESTHeart

    https://IP/DB/collection/?filter={'DATE_A_FMT': {'$gte':{'$date':'2017-02-20T05:00:00.000Z'}}}
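As an aside, the curly braces in the filter are not valid raw characters in a URL query string and strictly have to be percent-encoded on the wire (many HTTP clients do this automatically). A minimal sketch of building the encoded form of the request above, with the host kept as the placeholder from the example:

```java
// Sketch: percent-encode the RESTHeart filter value so braces, quotes and
// the rest of the JSON survive intact through proxies such as Nginx.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class FilterUrl {
    public static void main(String[] args) {
        String filter = "{'DATE_A_FMT': {'$gte':{'$date':'2017-02-20T05:00:00.000Z'}}}";
        String url = "https://IP/DB/collection/?filter="
                + URLEncoder.encode(filter, StandardCharsets.UTF_8);
        System.out.println(url); // the form the server actually receives
    }
}
```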
The collection has an index on the DATE_A_FMT field, and that index is used when the query is executed from a client. I have also tried adding sorting, but I get the same result.
The RESTHeart configuration is the same as the default configuration in the documentation, except for the MongoDB connection: in this case I use a cluster with 3 instances (1 primary and 2 secondaries). Furthermore, the RESTHeart log file shows all the requests that are executed except these ones, so I can't see what happens to them.
Any suggestions on how to find out what the issue with these queries is, and where it lies? Thanks in advance.
Just for testing purposes, I would like to get 100, 500, 1000, 5000, 10000, 20000, ... records from a collection. At the moment the largest pagesize is 1000. How can I increase it to whatever I would like, just for testing?
RESTHeart has a pagesize limit of 1000 documents per request, and that limit is hardcoded in the class org.restheart.handlers.injectors.RequestContextInjectorHandler.
If, for any reason, you want to increase that limit, then you have to change the source code and build your own jar.
However, RESTHeart speeds up the execution of GET requests to collection resources via its db cursor pre-allocation engine. This applies when several documents need to be read from a big collection, and it moderates the effect of MongoDB's cursor.skip() method, which slows down linearly with the number of skipped documents. So RESTHeart already optimizes the navigation of large MongoDB collections, if that is what you are looking for.
Please have a look at the "Speedup Requests with Cursor Pools" and "Performances" pages in the official documentation for more information.
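For illustration, a minimal sketch of the sequential, page-by-page access pattern the cursor pool is designed to speed up; the host, database, collection name and page count are assumptions:

```java
// Sketch: walk a large collection page by page at the 1000-document maximum.
// Sequential access lets RESTHeart serve later pages from pre-allocated
// MongoDB cursors instead of re-running cursor.skip() from scratch.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WalkCollection {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (int page = 1; page <= 20; page++) {
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/db/coll?page=" + page
                            + "&pagesize=1000")) // 1000 is the hardcoded maximum
                    .GET()
                    .build();
            String body = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("page " + page + ": " + body.length() + " bytes");
        }
    }
}
```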