MyBatis Generator - Generate implementation for Mapper interface

MyBatis Generator generates only the Mapper interface. It has no provision for generating an implementation of the Mapper interface, on the assumption that MyBatis provides a default MapperProxy to back the interface and that, in normal cases, one would not require a custom implementation.
Use-Case:
Need to override a particular Mapper method's implementation (say, adding bulk insert support).
Question:
Is the solution to hand-craft the implementation for all methods in the Mapper, or is there a smarter way to achieve this?
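One common way to cover this use case without hand-writing an implementation class is a hand-written interface that extends the generated Mapper and adds annotated methods; MapperProxy still backs everything at runtime. A rough sketch, where UserMapper, User, and the table/column names are hypothetical stand-ins for the generated artifacts:

import java.util.List;

import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;

// UserMapper is assumed to be the generated interface; User the matching table POCO.
// Keeping the extra method here means regeneration won't overwrite it.
public interface ExtendedUserMapper extends UserMapper {

    // Bulk insert via MyBatis' <script>/<foreach> support; MapperProxy still
    // provides the runtime implementation, so no hand-written class is needed.
    @Insert("<script>"
          + "INSERT INTO users (id, name) VALUES "
          + "<foreach collection='list' item='u' separator=','>"
          + "(#{u.id}, #{u.name})"
          + "</foreach>"
          + "</script>")
    int insertBulk(@Param("list") List<User> users);
}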

Related

Approach for MapStruct Mappers with method overloading and InheritedConfiguration

We are using MapStruct with Spring Data to convert between JPA entities and DTO classes. All the mappers follow the same pattern, with methods beanToDTO() and dtoToBean(). After a learning curve, we have all of this working. Now we are trying to use Spring injection to replace the JPA entity, DTO, and Mapper implementations. We have the JPA entity and DTO replacement working, so now we are trying to have Spring inject an alternative Mapper implementation.
For our problem, we can subclass the mapper interface, and now it has two beanToDTO() methods and two dtoToBean() methods: one pair for the base JPA entity and DTO, and one pair for the subclassed JPA entity and DTO. This works fine for straightforward examples.
For mappers that require some customization, we use the @Mapping annotation and @InheritInverseConfiguration on the base mapper. For the subclassed mapper we try the same thing, but the @InheritInverseConfiguration in the subclass mapper gives the error "Several matching inverse methods exist: beanToDTO(), beanToDTO(). Specify a name explicitly."
Both methods have the same name, so we have no way to identify the implementation we want to reference. I realize that the problem is due to our implementation approach, but it simplifies our code to:
- getBean()
- getMapper().beanToDTO()
and it lets us replace the JPA entity, Mapper, and DTO via Spring injection.
Are there other MapStruct tricks that will help us with this problem?
Thanks
Have you looked at @MapperConfig? Check out our unit test. I would advise putting your base/prototype methods in a @MapperConfig-annotated shared configuration interface that you can reference from @Mapper.
See this unit test for more info, or check out the user guide.
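For illustration, a minimal sketch of that suggestion, loosely following the shared-configuration example in the MapStruct documentation; Car, CarDto, and the property names are hypothetical:

import org.mapstruct.InheritInverseConfiguration;
import org.mapstruct.Mapper;
import org.mapstruct.MapperConfig;
import org.mapstruct.Mapping;
import org.mapstruct.MappingInheritanceStrategy;

// Shared prototype methods; concrete mappers inherit matching configuration.
@MapperConfig(mappingInheritanceStrategy = MappingInheritanceStrategy.AUTO_INHERIT_FROM_CONFIG)
public interface CentralConfig {

    @Mapping(target = "numberOfSeats", source = "seatCount")
    CarDto beanToDTO(Car car);

    // The name attribute resolves the "several matching inverse methods" ambiguity.
    @InheritInverseConfiguration(name = "beanToDTO")
    Car dtoToBean(CarDto dto);
}

// In a separate file: the concrete mapper references the shared configuration.
@Mapper(config = CentralConfig.class)
public interface CarMapper {
    CarDto beanToDTO(Car car);
    Car dtoToBean(CarDto dto);
}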

Best way to re-use ServiceStack web api interface

I have many models that define all my DB tables; I'm wondering what the best way is to create a single CRUD ServiceStack interface for all these models without writing the same code for each one.
I'd like to keep it DRY to ease future maintenance.
Thank you.
Check out AutoQuery, which lets you expose a rich, queryable API for each table just by declaring its Request DTO:
[Route("/movies")]
public class FindMovies : QueryBase<Movie> {}
You want a typed Request DTO for each Service, but other than that you can use a base class, shared extension or utility methods to execute common logic as you would in normal C#. The built-in Auto Mapping also reduces the boilerplate for populating a Table POCO from a request DTO.

Combining Spring Data query builder with Spring Data JPA Specifications?

Spring Data allows you to declare methods like findByLastname() in your repository interface and it generates the queries from the method name automatically for you.
Is it possible to somehow have these automatically-generated queries also accept a Specification, so that additional restrictions can be made on the data before it's returned?
That way, I could for example call findByLastname("Ted", isGovernmentWorker()), which would find all users that have the last name Ted AND who satisfy the isGovernmentWorker() specification.
I need this because I'd like the automated query creation provided by Spring Data and because I still need to be able to apply arbitrary specifications at runtime.
There is no such feature. Specifications can only be applied to JpaSpecificationExecutor operations.
Update
The data access operations are generated by a proxy. Thus, if we want to combine the two mechanisms (as in findByName + Criteria) in a single SELECT call, the proxy must understand and support this kind of usage, which it does not.
The intended usage, when employing the Specification API, would look like this for your case:
findAll(Specifications.where(hasLastName("Ted")).and(isGovernmentWorker()))
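For completeness, a sketch of what those specification factory methods could look like, assuming a hypothetical User entity with lastname and governmentWorker properties:

import org.springframework.data.jpa.domain.Specification;

public final class UserSpecifications {

    // Restricts to users with the given last name; "lastname" must be a property of User.
    public static Specification<User> hasLastName(String lastName) {
        return (root, query, cb) -> cb.equal(root.get("lastname"), lastName);
    }

    // Restricts to users flagged as government workers.
    public static Specification<User> isGovernmentWorker() {
        return (root, query, cb) -> cb.equal(root.get("governmentWorker"), true);
    }
}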
Spring Data allows you to implement a custom repository and use Specifications or QueryDSL.
Please see this article.
So in the end you will have a YourCustomRepository interface and a corresponding YourRepositoryImpl implementation, where you put your findByLastname("Ted", isGovernmentWorker()) method.
YourRepository should then extend the YourCustomRepository interface.
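A minimal sketch of that layout, assuming a hypothetical User entity and the (older) Spring Data convention of naming the implementation after the main repository interface plus Impl:

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;

// Custom fragment interface holding the combined operation.
interface YourCustomRepository {
    List<User> findByLastname(String lastname, Specification<User> spec);
}

// Named after the main repository + "Impl" so Spring Data wires it in automatically.
class YourRepositoryImpl implements YourCustomRepository {

    @PersistenceContext
    private EntityManager em;

    @Override
    public List<User> findByLastname(String lastname, Specification<User> spec) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<User> query = cb.createQuery(User.class);
        Root<User> root = query.from(User.class);
        // Combine the fixed lastname restriction with the caller-supplied specification.
        Predicate byLastname = cb.equal(root.get("lastname"), lastname);
        query.where(cb.and(byLastname, spec.toPredicate(root, query, cb)));
        return em.createQuery(query).getResultList();
    }
}

// The main repository mixes derived queries with the custom fragment.
interface YourRepository extends JpaRepository<User, Long>, YourCustomRepository {
}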

How to see the repository implementation generated by Spring Data MongoDB?

When is the implementation for repositories generated by Spring Data? At compile time or runtime? Can I see the repository implementation generated by Spring Data?
tl;dr
No, for a very simple reason: there's no code generation going on. The implementation is based on proxies and a method interceptor delegating the call executions to the right places.
Details
Effectively, a method execution can be backed by 3 types of code:
The store-specific implementation of CrudRepository. Have a look for types named Simple(Jpa|Mongo|Neo4j|…)Repository (see the JPA-specific one here). They have "real" implementations for all of the methods in CrudRepository and PagingAndSortingRepository.
Query methods are effectively executed by QueryExecutorMethodInterceptor.doInvoke(…) (see here). It's basically a three-step process to find the delegation target and invoke it. The actual execution is done in classes named (Jpa|Mongo|Neo4j|…)QueryExecution (see this one, for example).
Custom implementation code is called directly, also from QueryExecutorMethodInterceptor.
The only thing left is the query derivation, which consists of two major parts: method name parsing and query creation. For the former, have a look at PartTree. It takes a method name and a base type and will return you a parsed AST-like structure or throw an exception if it fails to resolve properties or the like.
The latter is implemented in classes named PartTree(Jpa|Mongo|Neo4j|…)Query and delegates to additional components for actually creating the store specific query. E.g. for JPA the interesting bits are probably in JpaQueryCreator.PredicateBuilder.build() (see here).
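If you want to observe the method-name parsing in isolation, you can feed a method name to PartTree yourself. A small sketch, with Person as a hypothetical domain type:

import org.springframework.data.repository.query.parser.PartTree;

public class PartTreeDemo {

    // Hypothetical domain type with the properties referenced in the method name.
    static class Person {
        String lastname;
        int age;
    }

    public static void main(String[] args) {
        // Parses the method name against the domain type; this throws if a
        // referenced property cannot be resolved on Person.
        PartTree tree = new PartTree("findByLastnameAndAgeGreaterThan", Person.class);

        // Each part corresponds to one property predicate of the derived query.
        tree.getParts().forEach(part ->
                System.out.println(part.getProperty() + " -> " + part.getType()));
    }
}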

Integration tests in Scala when using companions with Play 2? -> Cake pattern?

I'm working on my first Scala application, where we use an ActiveRecord style to retrieve data from MongoDB.
I have models like User and Category, which all have a companion object that extends the following class:
class MongoModel[T <: IdentifiableModel with CaseClass] extends ModelCompanion[T, ObjectId]
ModelCompanion is a Salat class which provides common MongoDB CRUD operations.
This makes it possible to retrieve data like this:
User.profile(userId)
I have never had any experience with this ActiveRecord query style, but I know Rails people use it, and I think I saw it in the Play documentation (version 1.2?) for dealing with JPA.
For now it works fine, but I want to be able to run integration tests on my MongoDB.
I can run an "embedded" MongoDB with a library. The problem is that my host/port configuration is effectively hardcoded in the MongoModel class, which is extended by all the model companions.
I want to be able to specify a different host/port when I run integration tests (or any other "profile" I could create in the future).
I understand dependency injection well, having used Spring for many years in Java, and I know the drawbacks of all this static stuff in my application. I saw that there is now a Scala-friendly way to configure a Spring application, but I'm not sure using Spring is appropriate in Scala.
I have read some material about the Cake pattern, and it seems to do what I want, being a kind of typesafe, compile-time-checked Spring context.
Should I definitely go to the Cake pattern, or is there any other elegant alternative in Scala?
Can I keep using an ActiveRecord style or is it a total anti-pattern for testability?
Thanks
No static references needed: with the Cake pattern you get two different classes for the two environments, each overriding the host/port resource on its own. Create a trait containing your resources, extend it twice (providing the actual host/port information for each environment), and mix the appropriate one into the companion objects (one for production, one for test). Inside MongoModel, add a self type referring to your new trait, and refactor all host/port references in MongoModel to go through that self type.
I'd definitely go with the Cake Pattern.
You can read the following article, which shows an example of how to use the Cake Pattern in a Play 2 application:
http://julien.richard-foy.fr/blog/2011/11/26/dependency-injection-in-scala-with-play-2-it-s-free/