Spring WebFlux - reactive Couchbase - pagination response with total records, page number, page size, etc. - spring-data-jpa

I'm using Spring WebFlux and Reactor with Java 11, Spring Boot 2.4.5, Spring 5.3.6, and spring-boot-starter-data-couchbase-reactive 2.4.5 for this reactive application.
Use case:
The website wants data in a paginated format (displaying the total number of pages, with the ability for the user to jump to any page).
But reactive Spring Data doesn't wrap the response in a Page object.
Is there a way to make reactive Spring Data return a Page object in the response?
My current code:
Flux<Entity> findByEventId(Integer eventId, Pageable pageable);
One way to work around this is to call a count method and get the total record count:
Mono<Long> countByEventId(Integer eventId);
Is there a better solution to this at the API layer? (Also, any suggestions for handling this use case on the UI side are welcome.)
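Since the reactive repositories return only Flux/Mono, a common workaround is to run the content query and the count query together and combine them into your own page DTO. Below is a minimal plain-Java sketch of such a DTO (PageResponse is a made-up name, not a Spring class) showing the pagination math the UI needs; in the service layer you would populate it by zipping the two repository calls, e.g. Mono.zip(repo.findByEventId(id, pageable).collectList(), repo.countByEventId(id)).

```java
import java.util.List;

// Hypothetical response DTO carrying the page metadata the UI needs.
// In the WebFlux service you would fill it by zipping the results of
// findByEventId(...) (collected to a list) and countByEventId(...).
final class PageResponse<T> {
    private final List<T> content;
    private final int page;        // zero-based page index
    private final int pageSize;
    private final long totalElements;

    PageResponse(List<T> content, int page, int pageSize, long totalElements) {
        this.content = content;
        this.page = page;
        this.pageSize = pageSize;
        this.totalElements = totalElements;
    }

    int getTotalPages() {
        // ceiling division: e.g. 101 records with pageSize 20 -> 6 pages
        return pageSize == 0 ? 0 : (int) ((totalElements + pageSize - 1) / pageSize);
    }

    boolean hasNext() {
        return page + 1 < getTotalPages();
    }

    List<T> getContent()    { return content; }
    int getPage()           { return page; }
    int getPageSize()       { return pageSize; }
    long getTotalElements() { return totalElements; }
}
```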

Related

REST API for processing data stored in HBase

I have a lot of records (millions) in an HBase store, like this:
key = user_id:service_id:usage_timestamp, value = some_int
That means a user used service_id for some_int at usage_timestamp.
Now I want to provide a REST API for aggregating that data, for example "find the sum of all values for the requested user" or "find the max of them", and so on. I'm looking for best practices here; a simple Java application doesn't meet my performance expectations.
My current approach aggregates the data via an Apache Spark application. It looks good enough, but there are issues using it behind a Java REST API, since Spark doesn't support a request-response model (I have also looked at spark-jobserver, which seems raw and unstable).
Any ideas? Thanks.
I would suggest HBase + Solr if you are using Cloudera (i.e., Cloudera Search): use the SolrJ API for aggregating the data (instead of Spark) and for interacting with your REST services.
Solr solution (in Cloudera this is Cloudera Search):
Create a collection (similar to an HBase table) in Solr.
Indexing: use the NRT Lily indexer or a custom MapReduce Solr document creator to load the data as Solr documents. If you don't like the NRT Lily indexer, you can use a Spark or MapReduce job with SolrJ to do the indexing. For example, Spark-Solr provides tools for reading data from Solr as a Spark RDD and indexing objects from Spark into Solr using SolrJ.
Data retrieval: use SolrJ to get the Solr documents from your web service call. In SolrJ:
FieldStatsInfo gives you Sum, Max, etc.
Facets and facet pivots let you group data.
Pagination is supported for REST API calls.
You can integrate the Solr results with Jersey or another web service framework; we have already implemented it this way.
/**
 * Returns the records for the specified rows from the Solr server,
 * which you can integrate with any REST framework such as Jersey.
 */
public SolrDocumentList getData(int start, int pageSize, SolrQuery query) throws SolrServerException {
    query.setStart(start);   // first row of your page
    query.setRows(pageSize); // number of rows per page
    LOG.info(ClientUtils.toQueryString(query, true));
    // POST is important if you are querying a huge result set; GET will fail for huge results
    final QueryResponse queryResponse = solrCore.query(query, METHOD.POST);
    final SolrDocumentList solrDocumentList = queryResponse.getResults();
    if (isResultEmpty(solrDocumentList)) { // check whether the list is empty
        LOG.info("No records found for this query");
    }
    return solrDocumentList;
}
Also look at my answer to "Create indexes in solr on top of HBase" and
https://community.hortonworks.com/articles/7892/spark-dataframe-to-solr-cloud-runs-on-sandbox-232.html
Note: I think the same can be achieved with Elasticsearch as well, but from my own experience I'm confident with Solr + SolrJ.
I see two possibilities:
Livy REST Server, a new REST server created by Cloudera. You can submit Spark jobs in a REST fashion. Since it is developed by Cloudera, one of the biggest Big Data / Spark companies, it is likely to keep being developed rather than abandoned.
You can run the Spark Thrift Server and connect to it like a normal database via JDBC (see its documentation). Workflow: read the data, preprocess it, then share it via the Spark Thrift Server.
If you want to isolate third-party apps from Spark, you can create a simple application that exposes a user-friendly endpoint and translates each query it receives into Livy Spark jobs or into SQL for the Spark Thrift Server.
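To make the last idea concrete, here is a plain-Java sketch of translating an endpoint's aggregation request into SQL; the table and column names (usage, user_id, value) are assumptions, not from the question. The resulting string would be executed against the Spark Thrift Server over JDBC (e.g. jdbc:hive2://host:10000 with the Hive JDBC driver).

```java
import java.util.Map;

// Hypothetical translator from a REST aggregation request to Thrift Server SQL.
// Table/column names are assumed; adapt them to your schema.
final class AggregationQueryBuilder {
    private static final Map<String, String> OPS = Map.of(
            "sum", "SUM(value)",
            "max", "MAX(value)",
            "min", "MIN(value)",
            "avg", "AVG(value)");

    static String build(String op, long userId) {
        String fn = OPS.get(op.toLowerCase());
        if (fn == null) {
            throw new IllegalArgumentException("Unsupported aggregation: " + op);
        }
        // user_id is numeric here, so no quoting/escaping is needed
        return "SELECT " + fn + " FROM usage WHERE user_id = " + userId;
    }
}
```

For string-typed filters you would use a PreparedStatement rather than concatenating values into the SQL.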

Filtering Records on the basis of complex queryParams

I need to perform complex filtering depending on the query params.
I'm using SelectableEntityFiltering in Jersey 2.17.
In this post and everywhere else I can only find filtering for the select operation.
What if I need to do more complex fetching, like
api/employees?query=empId>10 AND empId<20 AND firstName LIKE "abc*"
This kind of feature is possible in the ADF REST Framework ("Filtering a Resource Collection with a Query Parameter").
I need to know whether this kind of approach is possible via Jersey.
A million thanks in advance.
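As far as I know, SelectableEntityFiltering only controls which fields are serialized, not which rows are returned, so in plain Jersey you would parse the query parameter yourself (or use a query-grammar library such as RSQL) and translate it into predicates. A minimal hand-rolled sketch, assuming conditions joined by AND with a simple field-operator-value shape (the grammar and class below are illustrative, not a Jersey API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: parse "empId>10 AND firstName LIKE \"abc*\"" into conditions.
// A real service would validate field names against a whitelist and bind values
// as query parameters, never concatenate them into SQL.
final class QueryParamParser {
    static final class Condition {
        final String field, op, value;
        Condition(String field, String op, String value) {
            this.field = field;
            this.op = op;
            this.value = value;
        }
    }

    private static final Pattern CONDITION = Pattern.compile(
            "(\\w+)\\s*(<=|>=|<|>|=|LIKE)\\s*\"?([^\"]+?)\"?\\s*$");

    static List<Condition> parse(String query) {
        List<Condition> out = new ArrayList<>();
        for (String part : query.split("\\s+AND\\s+")) {
            Matcher m = CONDITION.matcher(part.trim());
            if (!m.matches()) {
                throw new IllegalArgumentException("Cannot parse: " + part);
            }
            out.add(new Condition(m.group(1), m.group(2), m.group(3)));
        }
        return out;
    }
}
```

Each Condition can then be mapped to a JPA Criteria predicate or a parameterized WHERE clause in the resource method.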

How to implement pagination with display tag and get records from database for each page

I am new to Spring MVC, and there is a requirement to display records in a JSP using pagination. The database contains more than 100k rows, so I can't fetch all the rows in a single query and cache them. I have seen examples that get all the records in a single query and cache them, but in my case, when the user clicks each page link, I want to query the database and fetch that page's records every time. I googled it; many mention the PaginatedList interface of the display tag, but I have no idea how to implement it.
How do I work with the display tag using the PaginatedList interface, or is there an alternative?
I am using Spring MVC 4.1, PostgreSQL 9.4, JdbcTemplate, and JSP.
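The display tag's "external" paging (partialList mode) fits this: on each page click you query only that page with LIMIT/OFFSET and tell the tag the total row count separately (via SELECT COUNT(*)). A minimal plain-Java sketch of the per-page query construction, with table/column names as assumptions; the query itself would be run through JdbcTemplate:

```java
// Sketch of per-page fetching for the display tag's "external"/partialList paging.
// The SQL and table name are assumptions; run the query via JdbcTemplate and
// pass the page list plus the total row count to the JSP.
final class PageQuery {
    static int offsetFor(int pageNumber, int pageSize) {
        // display tag pages are 1-based; PostgreSQL OFFSET is 0-based
        return (pageNumber - 1) * pageSize;
    }

    static String pageSql(String table, String orderBy, int pageNumber, int pageSize) {
        // ORDER BY is required for stable pages across requests
        return "SELECT * FROM " + table
                + " ORDER BY " + orderBy
                + " LIMIT " + pageSize
                + " OFFSET " + offsetFor(pageNumber, pageSize);
    }
}
```

In the controller, something like jdbcTemplate.queryForList(PageQuery.pageSql("employees", "id", page, 25)) fetches the page, and a separate COUNT(*) query supplies the size attribute the tag needs.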

totalResults and pagination with Grails RestfulController

I am trying to build a frontend on top of a Grails RestfulController-based backend. My plain GET requests get me the list of all objects, and I am able to pass in the regular GORM params like max and sort.
My question is: how do I get the total count of all the objects? I need this to correctly implement pagination on the frontend.

Spring batch Item reader to iterate over a rest api call

I have a Spring Batch job that needs to fetch details from a REST API call and process that data on my side. The REST API call has mainly these parameters:
StartingIdNumber (offset)
PageSize (limit)
PS: StartingIdNumber serves the same purpose as a row number or "offset" in this particular API. The API response results are sorted by IdNumber, so by specifying a StartingIdNumber, the API in turn performs a "where IdNumber >= StartingIdNumber order by IdNumber limit pageSize" in its DB query.
It returns the given number of user details, and I need to iterate through all the IDs by changing the StartingIdNumber parameter on each request.
I have looked at the current ItemReader implementations in the Spring Batch framework, which read from a database, XML, etc., but I didn't come across a reader that helps in my case. Please suggest a way to iterate through the user details as described above.
Note: if I write my own custom ItemReader, I have to take care of preserving state (the last processed StartingIdNumber), which is proving challenging for me.
Does implementing ItemStream serve my purpose? Or is there a better way?
Implementing the ItemStream interface and writing my own custom reader served my purpose. The reader is now stateful, as I required. Thanks.
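For anyone landing here, the pattern that makes such a reader restartable is: implement ItemStreamReader, keep the last processed id as a field, save it to the ExecutionContext in update(), and restore it in open(). The sketch below mirrors that contract in plain, dependency-free Java (a Map stands in for ExecutionContext and a function for the REST call); the method names match the ItemStream lifecycle, but the class itself is illustrative, not a Spring Batch type.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.ToLongFunction;

// Plain-Java sketch of a restartable paging reader. In Spring Batch this would
// implement ItemStreamReader<T>, with the Map replaced by ExecutionContext.
final class PagingRestReader<T> {
    private static final String LAST_ID_KEY = "reader.lastId";

    private final BiFunction<Long, Integer, List<T>> fetchPage; // (startingId, pageSize) -> page
    private final ToLongFunction<T> idOf;
    private final int pageSize;
    private final Deque<T> buffer = new ArrayDeque<>();
    private long nextStartingId;
    private boolean exhausted;

    PagingRestReader(BiFunction<Long, Integer, List<T>> fetchPage,
                     ToLongFunction<T> idOf, int pageSize) {
        this.fetchPage = fetchPage;
        this.idOf = idOf;
        this.pageSize = pageSize;
    }

    // open(): restore state on restart (Spring Batch calls this via ItemStream.open)
    void open(Map<String, Object> context) {
        Object last = context.get(LAST_ID_KEY);
        nextStartingId = last == null ? 0L : (Long) last + 1;
    }

    // update(): persist state so a restart resumes after the last processed id
    void update(Map<String, Object> context) {
        context.put(LAST_ID_KEY, nextStartingId - 1);
    }

    // read(): return the next item, fetching a new page when the buffer is empty
    T read() {
        if (buffer.isEmpty() && !exhausted) {
            List<T> page = fetchPage.apply(nextStartingId, pageSize);
            if (page.isEmpty()) {
                exhausted = true;
            } else {
                buffer.addAll(page);
            }
        }
        T item = buffer.poll();
        if (item != null) {
            nextStartingId = idOf.applyAsLong(item) + 1;
        }
        return item; // null signals end of data, as in Spring Batch
    }
}
```

Registering the real reader as a step-scoped bean lets Spring Batch call open/update around each chunk, so the last processed id survives a job restart.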