Implementing pagination and sorting on a ReactiveMongoRepository with a dynamic query - reactive-programming

I know pagination is somewhat against reactive principles, but due to requirements I have to make it work somehow. I'm using Spring Data 2.1.6 and I can't upgrade, so ReactiveQuerydslSpecification for the dynamic query is out of the question. I figured I could use ReactiveMongoTemplate, so I came up with this:
public interface IPersonRepository extends ReactiveMongoRepository<Person, String>, IPersonFilterRepository {
    Flux<Person> findAllByCarId(String carId);
}

public interface IPersonFilterRepository {
    Flux<Person> findAllByCarIdAndCreatedDateBetween(String carId, PersonStatus status,
                                                     OffsetDateTime from, OffsetDateTime to,
                                                     Pageable pageable);
}
@Repository
public class PersonFilterRepository implements IPersonFilterRepository {

    @Autowired
    private ReactiveMongoTemplate reactiveMongoTemplate;

    @Override
    public Flux<Person> findAllByCarIdAndCreatedDateBetween(String carId, PersonStatus status,
                                                            OffsetDateTime from, OffsetDateTime to,
                                                            Pageable pageable) {
        Query query = new Query(Criteria.where("carId").is(carId));
        if (status != null) {
            query.addCriteria(Criteria.where("status").is(status));
        }
        OffsetDateTime maxLimit = OffsetDateTime.now(ZoneOffset.UTC).minusMonths(3).withDayOfMonth(1); // beginning of month
        if (from == null || from.isBefore(maxLimit)) {
            from = maxLimit;
        }
        if (to == null) {
            to = OffsetDateTime.now(ZoneOffset.UTC);
        }
        // single chained criterion: addCriteria cannot be called twice for the same field
        query.addCriteria(Criteria.where("createdDateTime").gte(from).lte(to));
        // problem is trying to come up with a decent page-ish behavior compatible with Flux
        /*return reactiveMongoTemplate.count(query, Person.class)
                .flatMap(count -> reactiveMongoTemplate.find(query, Person.class)
                        .flatMap(p -> new PageImpl<Person>(p, pageable, count))
                        .collectList()
                        .map());*/
        /*return reactiveMongoTemplate.find(query, Person.class)
                .buffer(pageable.getPageSize(), pageable.getPageNumber() + 1)
                //.elementAt(pageable.getPageNumber(), new ArrayList<>())
                .thenMany(Flux::from);*/
    }
}
I've tried returning a Page<Person> (assuming this single method could be non-reactive, for once) and it fails while running the tests: the Spring context does not load due to InvalidDataAccessApiUsageException: 'IDocumentFilterRepository.findAllByCustomerIdAndCreatedDateBetween' must not use sliced or paged execution. Please use Flux.buffer(size, skip). I've also tried returning Mono<Page<Person>>, which fails with "Method has to use a either multi-item reactive wrapper return type or a wrapped Page/Slice type. Offending method: 'IDocumentFilterRepository.findAllByCustomerIdAndCreatedDateBetween'", so I guess my only option is returning a Flux, according to Example 133, snippet 3.

Turns out you can just add the following to the query object:
query.with(pageable);
reactiveMongoTemplate.find(query, Person.class);
Return Flux<T> and it will work out of the box.
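Putting it together, the custom repository method can stay a Flux and simply apply the Pageable to the Query. A minimal sketch, reusing the class and field names from the question (null handling for from/to omitted):
@Override
public Flux<Person> findAllByCarIdAndCreatedDateBetween(String carId, PersonStatus status,
                                                        OffsetDateTime from, OffsetDateTime to,
                                                        Pageable pageable) {
    Query query = new Query(Criteria.where("carId").is(carId));
    if (status != null) {
        query.addCriteria(Criteria.where("status").is(status));
    }
    query.addCriteria(Criteria.where("createdDateTime").gte(from).lte(to));
    query.with(pageable); // applies skip, limit and the sort carried by the Pageable
    return reactiveMongoTemplate.find(query, Person.class);
}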

Related

Solr custom query component does not return correct facet counts

I have a simple Solr query component as follows:
public class QueryPreprocessingComponent extends QueryComponent implements PluginInfoInitialized {

    private static final Logger LOG = LoggerFactory.getLogger( QueryPreprocessingComponent.class );

    private ExactMatchQueryProcessor exactMatchQueryProcessor;

    public void init( PluginInfo info ) {
        initializeProcessors(info);
    }

    private void initializeProcessors(PluginInfo info) {
        List<PluginInfo> queryPreProcessors = info.getChildren("queryPreProcessors")
                .get(0).getChildren("queryPreProcessor");
        for (PluginInfo queryProcessor : queryPreProcessors) {
            initializeProcessor(queryProcessor);
        }
    }

    private void initializeProcessor(PluginInfo queryProcessor) {
        QueryProcessorParam processorName = QueryProcessorParam.valueOf(queryProcessor.name);
        switch(processorName) {
            case ExactMatchQueryProcessor:
                exactMatchQueryProcessor = new ExactMatchQueryProcessor(queryProcessor.initArgs);
                LOG.info("ExactMatchQueryProcessor initialized...");
                break;
            default: throw new AssertionError();
        }
    }

    @Override
    public void prepare( ResponseBuilder rb ) throws IOException {
        if (exactMatchQueryProcessor != null) {
            exactMatchQueryProcessor.modifyForExactMatch(rb);
        }
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // do nothing - needed so we don't execute the query here.
        return;
    }
}
This works as expected functionally, except when I use it in a distributed request: there is an issue with the facet counts returned. It doubles the facet counts.
Note that I am not doing anything related to faceting in the plugin. exactMatchQueryProcessor.modifyForExactMatch(rb); does very minimal processing if the query is quoted; otherwise it does nothing. Even if the incoming query is not quoted, the facet count issue is there. Even if I comment out everything inside the prepare function, the issue persists.
Note that this component is declared as first-components in solrconfig.xml.
I resolved this issue by extending SearchComponent instead of QueryComponent. It seems that SearchComponent sits at a higher level of abstraction than QueryComponent and is useful when you want to work at a layer above shards.
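For illustration, a minimal sketch of the reworked component, keeping the same processor initialization as above (the getDescription() stub is included because SearchComponent declares it; adjust to your Solr version):
public class QueryPreprocessingComponent extends SearchComponent implements PluginInfoInitialized {

    private ExactMatchQueryProcessor exactMatchQueryProcessor;

    public void init( PluginInfo info ) {
        initializeProcessors(info); // same initialization as in the original component
    }

    @Override
    public void prepare( ResponseBuilder rb ) throws IOException {
        if (exactMatchQueryProcessor != null) {
            exactMatchQueryProcessor.modifyForExactMatch(rb);
        }
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // no-op: the standard QueryComponent still executes the query and faceting
    }

    @Override
    public String getDescription() {
        return "Query preprocessing component";
    }
}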

AspNet Boilerplate Parallel DB Access through Entity Framework from an AppService

We are using ASP.NET Zero and are running into issues with parallel processing from an AppService. We know requests must be transactional, but unfortunately we need to break out to slow running APIs for numerous calls, so we have to do parallel processing.
As expected, we are running into a DbContext concurrency issue on the second database call we make:
System.InvalidOperationException: A second operation started on this context
before a previous operation completed. This is usually caused by different
threads using the same instance of DbContext, however instance members are
not guaranteed to be thread safe. This could also be caused by a nested query
being evaluated on the client, if this is the case rewrite the query avoiding
nested invocations.
We read that a new UOW is required, so we tried using both the method attribute and the explicit UowManager, but neither worked.
We also tried creating instances of the referenced AppServices using the IocResolver, but we are still not able to get a unique DbContext per thread (please see below).
public List<InvoiceDto> CreateInvoices(List<InvoiceTemplateLineItemDto> templateLineItems)
{
List<InvoiceDto> invoices = new InvoiceDto[templateLineItems.Count].ToList();
ConcurrentQueue<Exception> exceptions = new ConcurrentQueue<Exception>();
Parallel.ForEach(templateLineItems, async (templateLineItem) =>
{
try
{
XAppService xAppService = _iocResolver.Resolve<XAppService>();
InvoiceDto invoice = await xAppService
.CreateInvoiceInvoiceItem();
invoices.Insert(templateLineItems.IndexOf(templateLineItem), invoice);
}
catch (Exception e)
{
exceptions.Enqueue(e);
}
});
if (exceptions.Count > 0) throw new AggregateException(exceptions);
return invoices;
}
How can we ensure that a new DbContext is available per thread?
I was able to replicate and resolve the problem with a generic version of ABP. I'm still experiencing the problem in my original solution, which is far more complex. I'll have to do some more digging to determine why it is failing there.
For others that come across this problem, which is exactly the same issue as referenced here, you can simply disable the UnitOfWork through an attribute, as illustrated in the code below.
public class InvoiceAppService : ApplicationService
{
private readonly InvoiceItemAppService _invoiceItemAppService;
public InvoiceAppService(InvoiceItemAppService invoiceItemAppService)
{
_invoiceItemAppService = invoiceItemAppService;
}
// Just add this attribute
[UnitOfWork(IsDisabled = true)]
public InvoiceDto GetInvoice(List<int> invoiceItemIds)
{
_invoiceItemAppService.Initialize();
ConcurrentQueue<InvoiceItemDto> invoiceItems =
new ConcurrentQueue<InvoiceItemDto>();
ConcurrentQueue<Exception> exceptions = new ConcurrentQueue<Exception>();
Parallel.ForEach(invoiceItemIds, (invoiceItemId) =>
{
try
{
InvoiceItemDto invoiceItemDto =
_invoiceItemAppService.CreateAsync(invoiceItemId).Result;
invoiceItems.Enqueue(invoiceItemDto);
}
catch (Exception e)
{
exceptions.Enqueue(e);
}
});
if (exceptions.Count > 0) {
AggregateException ex = new AggregateException(exceptions);
Logger.Error("Unable to get invoice", ex);
throw ex;
}
return new InvoiceDto {
Date = DateTime.Now,
InvoiceItems = invoiceItems.ToArray()
};
}
}
public class InvoiceItemAppService : ApplicationService
{
private readonly IRepository<InvoiceItem> _invoiceItemRepository;
private readonly IRepository<Token> _tokenRepository;
private readonly IRepository<Credential> _credentialRepository;
private Token _token;
private Credential _credential;
public InvoiceItemAppService(IRepository<InvoiceItem> invoiceItemRepository,
IRepository<Token> tokenRepository,
IRepository<Credential> credentialRepository)
{
_invoiceItemRepository = invoiceItemRepository;
_tokenRepository = tokenRepository;
_credentialRepository = credentialRepository;
}
public void Initialize()
{
_token = _tokenRepository.FirstOrDefault(x => x.Id == 1);
_credential = _credentialRepository.FirstOrDefault(x => x.Id == 1);
}
// Create an invoice item using info from an external API and some db records
public async Task<InvoiceItemDto> CreateAsync(int id)
{
// Get db record
InvoiceItem invoiceItem = await _invoiceItemRepository.GetAsync(id);
// Get price
decimal price = await GetPriceAsync(invoiceItem.Description);
return new InvoiceItemDto {
Id = id,
Description = invoiceItem.Description,
Amount = price
};
}
private async Task<decimal> GetPriceAsync(string description)
{
// Simulate a slow API call to get price using description
// We use the token and credentials here in the real deal
await Task.Delay(5000);
return 100.00M;
}
}

Unable to cast the type 'System.Int32' to type 'System.Object' during EF Code First OrderBy Function

I'm using the Specification pattern in EF Code First. When I do an order-by operation, VS throws an exception.
The Specification pattern is copied from eShopOnWeb.
I just changed it a little bit; here is my changed code:
public class Specification<T> : ISpecification<T>
{
public Expression<Func<T, object>> OrderBy { get; private set; }
public Specification(Expression<Func<T, bool>> criteria)
{
Criteria = criteria;
}
public Specification<T> OrderByFunc(Expression<Func<T, object>> orderByExpression)
{
OrderBy = orderByExpression;
return this;
}
}
Here is my invoking code; it's pretty simple:
static void TestSpec()
{
var spec = new Specification<ExcelData>(x => x.RowIndex == 5)
.OrderByFunc(x => x.ColumnIndex);
using (var dbContext = new TechDbContext())
{
var top10Data = dbContext.ExcelData.Take(10).ToList();
var listExcel = dbContext.ApplySpecification(spec).ToList();
Console.WriteLine();
}
}
If I comment out OrderByFunc, then everything is fine; no error is thrown.
I have tried searching for the error message on Google many times, but none of the answers match my case.
So I have to ask a question here.
When I debugged the OrderBy property in SpecificationEvaluator.cs, I found there is a Convert wrapping the expression.
So I know the error is about the cast, but how do I fix this cast type error?
Please help me!
The solution is to create a new lambda expression with the cast (Convert) removed, and then use it to call the Queryable OrderBy / OrderByDescending method, either dynamically (using DLR dispatch or reflection) or by emitting an Expression.Call to it.
For the first part, add the following helper method to the SpecificationEvaluator class:
static LambdaExpression RemoveConvert(LambdaExpression source)
{
var body = source.Body;
while (body.NodeType == ExpressionType.Convert)
body = ((UnaryExpression)body).Operand;
return Expression.Lambda(body, source.Parameters);
}
Then replace the code
query = query.OrderBy(specification.OrderBy);
with either
query = Queryable.OrderBy((dynamic)query, (dynamic)RemoveConvert(specification.OrderBy));
or
var keySelector = RemoveConvert(specification.OrderBy);
query = query.Provider.CreateQuery<T>(Expression.Call(
typeof(Queryable), nameof(Queryable.OrderBy),
new[] { typeof(T), keySelector.ReturnType },
query.Expression, keySelector));
Do the same for specification.OrderByDescending.

How to apply pagination in reactive Spring Data?

In Spring Data, we have PagingAndSortingRepository which inherits from CrudRepository. In reactive Spring Data, we only have
ReactiveSortingRepository which inherits from ReactiveCrudRepository.
How could we do pagination in a reactive way?
Will we be able to do this in the future with a ReactivePagingAndSortingRepository, for instance?
Reactive Spring Data MongoDB repositories do not provide paging in the sense of how paging is designed for imperative repositories. Imperative paging requires additional details while fetching a page. In particular:
The number of returned records for a paging query
Optionally, the total count of records the query yields (if the number of returned records is zero or matches the page size), to calculate the overall number of pages
Neither aspect fits the notion of efficient, non-blocking resource usage. Waiting until all records are received (to determine the first chunk of paging details) would remove a huge part of the benefits you get from reactive data access. Additionally, executing a count query is rather expensive and increases the lag until you're able to process data.
You can still fetch chunks of data yourself by passing a Pageable (PageRequest) to repository query methods:
interface ReactivePersonRepository extends Repository<Person, Long> {
    Flux<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable);
}
Spring Data will apply pagination to the query by translating Pageable to LIMIT and OFFSET.
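For example, a caller could fetch the second page of 20 records like this (a minimal sketch; the repository instance and the firstname value are just placeholders):
Pageable secondPage = PageRequest.of(1, 20); // zero-based page index 1, page size 20
Flux<Person> people = repository.findByFirstnameOrderByLastname("Jane", secondPage);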
References:
Reference documentation: Reactive repository usage
import com.thepracticaldeveloper.reactiveweb.domain.Quote;
import org.springframework.data.domain.Pageable;
import org.springframework.data.mongodb.repository.Query;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

public interface QuoteMongoReactiveRepository extends ReactiveCrudRepository<Quote, String> {

    @Query("{ id: { $exists: true }}")
    Flux<Quote> retrieveAllQuotesPaged(final Pageable page);
}
For more details, you could check here.
I created a service with this method for anyone that may still be looking for a solution:
@Resource
private UserRepository userRepository; // extends ReactiveSortingRepository<User, String>

public Mono<Page<User>> findAllUsersPaged(Pageable pageable) {
    return this.userRepository.count()
            .flatMap(userCount -> {
                return this.userRepository.findAll(pageable.getSort())
                        .buffer(pageable.getPageSize(), (pageable.getPageNumber() + 1))
                        .elementAt(pageable.getPageNumber(), new ArrayList<>())
                        .map(users -> new PageImpl<User>(users, pageable, userCount));
            });
}
I have created another approach using @kn3l's solution (without using @Query):
fun findByIdNotNull(page: Pageable): Flux<Quote>
It creates the same query without using the @Query annotation.
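For reference, a plain Java equivalent of that derived query might look like this (a sketch, assuming the same Quote document as above):
public interface QuoteMongoReactiveRepository extends ReactiveCrudRepository<Quote, String> {
    // "IdNotNull" matches every document; paging and sorting come from the Pageable
    Flux<Quote> findByIdNotNull(Pageable page);
}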
public Mono<Page<ChatUser>> findByChannelIdPageable(String channelId, Integer page, Integer size) {
    Pageable pageable = PageRequest.of(page, size, Sort.by(Sort.Direction.DESC, "chatChannels.joinedTime"));
    Criteria criteria = new Criteria("chatChannels.chatChannelId").is(channelId);
    Query query = new Query().with(pageable);
    query.addCriteria(criteria);
    Flux<ChatUser> chatUserFlux = reactiveMongoTemplate.find(query, ChatUser.class, "chatUser");
    Mono<Long> countMono = reactiveMongoTemplate.count(Query.of(query).limit(-1).skip(-1), ChatUser.class);
    return Mono.zip(chatUserFlux.collectList(), countMono).map(tuple2 -> {
        return PageableExecutionUtils.getPage(
                tuple2.getT1(),
                pageable,
                () -> tuple2.getT2());
    });
}
I've faced the same issue and ended up with a similar approach to the one above, but changed the code slightly since I use Querydsl; the following example may help if someone needs it.
@Repository
public interface PersonRepository extends ReactiveMongoRepository<Person, String>, ReactiveQuerydslPredicateExecutor<Person> {

    default Flux<Person> applyPagination(Flux<Person> persons, Pageable pageable) {
        return persons.buffer(pageable.getPageSize(), (pageable.getPageNumber() + 1))
                .elementAt(pageable.getPageNumber(), new ArrayList<>())
                .flatMapMany(Flux::fromIterable);
    }
}

public Flux<Person> findAll(Pageable pageable, Predicate predicate) {
    return personRepository.applyPagination(personRepository.findAll(predicate), pageable);
}
While searching for ideas for reactive pageable repositories, I saw solutions that would result in horrible boilerplate code, so I ended up with this (I have not tried it in real life yet, but it should work fine, or maybe it can be an inspiration for your solution).
So … let's create a brand new toolbox class with a method like this:
public static <R extends PageableForReactiveMongo<S, K>, S, T, K> Mono<Page<T>> pageableForReactiveMongo(
        Pageable pageable, R repository, Class<T> clazzTo) {
    return repository.count()
            .flatMap(c ->
                    repository.findOderByLimitedTo(pageable.getSort(), pageable.getPageNumber() + 1)
                            .buffer(pageable.getPageSize(), (pageable.getPageNumber() + 1))
                            .elementAt(pageable.getPageNumber(), new ArrayList<>())
                            .map(r -> mapToPage(pageable, c, r, clazzTo))
            );
}
and it will also need something like this:
private static <S, T> Page<T> mapToPage(Pageable pageable, Long userCount, Collection<S> collection, Class<T> clazzTo) {
    return new PageImpl<>(
            collection.stream()
                    .map(r -> mapper.map(r, clazzTo))
                    .collect(Collectors.toList()),
            pageable, userCount);
}
and then we also need an abstraction layer wrapping the reactive repositories:
public interface PageableForReactiveMongo<D, K> extends ReactiveMongoRepository<D, K> {
    Flux<D> findOderByLimitedTo(Sort sort, int i);
}
so that it can be instantiated by Spring:
@Repository
interface ControllerRepository extends PageableForReactiveMongo<ControllerDocument, String> {
}
And finally, use it as many times as you like, like this:
public Mono<Page<Controller>> findAllControllers(Pageable pageable) {
    return pageableForReactiveMongo(pageable, controllerRepository, Controller.class);
}
This is how your code can look :) Please tell me if it is fine, or if it helped you out with something.

Junit with new Date()

What would the JUnit test be when I have the following method:
@Override
public void saveLastSuccesfullLogin(final User user) {
    gebruiker.setLastLogin(new Date());
    storeUser(user);
}
The storeUser sub-method:
@Override
public void storeUser(final User user) {
    EntityManager em = emf.createEntityManager();
    em.getTransaction().begin();
    em.merge(user);
    em.getTransaction().commit();
    em.close();
}
The problem I have is the date being set on the user entity and then stored. I'm using JUnit and EasyMock.
Try pulling the new Date() call into a method with a default access modifier, like below:
@Override
public void saveLastSuccesfullLogin(final User user) {
    gebruiker.setLastLogin(getDate());
    storeUser(user);
}

Date getDate() {
    return new Date();
}
In your test class, override the method as below, using a mock or stubbed date:
<ClassUnderTest> classUnderTest = new <ClassUnderTest>() {
    @Override
    Date getDate() {
        return mockDate;
    }
};
In this way you can assert the date value easily as it is going to be stubbed out.
What's the problem with the Date? That you don't know what it is to assert later? A few alternatives:
Pass the date into the method
Create a factory to get the current date/time so you can mock it out
Assert the date within a threshold of correctness (see the sketch after this list)
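For the threshold option, a minimal sketch; service, user and the getLastLogin() accessor are assumptions, not taken from the question:
Date before = new Date();
service.saveLastSuccesfullLogin(user);   // method under test
Date after = new Date();

Date stored = user.getLastLogin();       // assumed accessor for the value that was set
assertTrue("lastLogin should fall between 'before' and 'after'",
        !stored.before(before) && !stored.after(after));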
There is also a more "enterprise" approach that may be used where Dependency Injection is available (like in EJB, Spring etc.).
You can define an interface, for example TimeService, and add a method that returns the current date.
public interface TimeService {
    Date getCurrentDate();
}
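A production implementation can simply delegate to the system clock; a minimal sketch (the class name is just an illustration):
public class SystemTimeService implements TimeService {
    @Override
    public Date getCurrentDate() {
        return new Date(); // real clock; tests supply a mock TimeService instead
    }
}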
You can implement it to return new Date(), as in the sketch above, and use it like this:
gebruiker.setLastLogin(timeService.getCurrentDate());
This will obviously be very easy to test because you can mock the TimeService. Using EasyMock (just an example), this might be:
Date relevantDateForTest = ...
expect(timeService.getCurrentDate()).andReturn(relevantDateForTest);
replay(timeService);
Using the TimeService throughout the entire code and never using new Date() is a pretty good practice and has other advantages as well. I found it helpful on a number of occasions, including manual functional testing of features that would activate in the future. Going even further, the system time may be retrieved from an external system, thus making it consistent across clusters etc.
You can also create a getDate method, and a date static var:
private static Date thisDate = null;

@Override
public void saveLastSuccesfullLogin(final User user) {
    gebruiker.setLastLogin(getDate());
    storeUser(user);
}

public Date getDate() {
    if (thisDate != null) return thisDate;
    return new Date();
}

public void setDate(Date newDate) {
    thisDate = newDate;
}
Then in your test method, you can go ahead and call setDate to control what date you will get.
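With that in place, a test can pin the date and assert against a known value; a minimal sketch (classUnderTest, user and the getLastLogin() accessor are assumptions):
Date fixedDate = new Date(1234567890000L); // any known instant
classUnderTest.setDate(fixedDate);

classUnderTest.saveLastSuccesfullLogin(user);

// the method stored exactly fixedDate, so the assertion no longer races the clock
assertEquals(fixedDate, user.getLastLogin());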