Unable to create Hazelcast cache with MyBatis annotation mappers

I have designed a MyBatis mapper (data access object) to read read-only data from a MySQL database.
@Mapper
public interface XYZMapper {

    @Select("SELECT TYPES FROM abc WHERE STORE_ID = #{storeId} and CUSTOMER_ID = #{customerId}")
    String getDisabledSubscriptions(@Param("storeId") int storeId, @Param("customerId") int customerId);

    @Select("SELECT TYPES as messageTypes, NAME as eventName FROM abc WHERE STORE_ID = #{storeId}")
    EventSubscription getEventAllSubscriptions(@Param("storeId") int storeId);
}
http://mybatis.org/hazelcast-cache/
The link above gives a solution for caching with Hazelcast, but it applies when the mapper is configured as an XML file. How can each of the above queries be cached in an L2 cache when using annotation mappers?

I solved the issue with the following steps:
a) Add @EnableCaching to the @SpringBootApplication class
b) Add @Cacheable("abc") to every database method defined in the mapper that you want to cache (sketched below)
c) Define hazelcast.yml in the resources folder with:
hazelcast:
  network:
    join:
      multicast:
        enabled: true
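
For illustration, a minimal sketch of steps a) and b); the cache name "abc" and the mapper come from the question, while the Hazelcast-backed CacheManager is assumed to be auto-configured by Spring Boot when hazelcast-spring and spring-boot-starter-cache are on the classpath:

@EnableCaching
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Mapper
public interface XYZMapper {

    // repeat calls with the same (storeId, customerId) are served from the "abc" cache
    @Cacheable("abc")
    @Select("SELECT TYPES FROM abc WHERE STORE_ID = #{storeId} and CUSTOMER_ID = #{customerId}")
    String getDisabledSubscriptions(@Param("storeId") int storeId, @Param("customerId") int customerId);
}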


Java JPA write only ID for nested entity

How can I avoid unnecessary queries to the DB?
I have a LoadEntity with two nested entities - CarrierEntity and DriverEntity. Java class:
@Entity
public class LoadEntity {
    ...

    @ManyToOne
    @JoinColumn(name="carrier_id", nullable=false)
    private CarrierEntity carrierEntity;

    @ManyToOne
    @JoinColumn(name="driver_id", nullable=false)
    private DriverEntity driverEntity;
}
But the API sends me carrierId and driverId, so I do this:
DriverEntity driverEntity = driverService.getDriverEntityById(request.getDriverId());
loadEntity.setDriverEntity(driverEntity);
loadRepository.save(loadEntity);
How can I write only driverId with JPA?
With Spring Data JPA you can always fall back to plain SQL.
Of course, this sidesteps all the great/annoying logic JPA gives you.
This means you won't get any events, and the entities in memory might be out of sync with the database.
For the same reason you might also want to increment the version column if you are using optimistic locking.
That said, you could update a single field like this:
interface LoadRepository extends CrudRepository<LoadEntity, Long> {

    @Modifying
    @Query(value = "update load_entity set driver_id = :driverId where carrier_id = :carrierId", nativeQuery = true)
    void updateDriverId(@Param("carrierId") Long carrierId, @Param("driverId") Long driverId);
}
If you just want to avoid loading the DriverEntity, you may also use JpaRepository.getById, which returns a reference without querying the database.
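
A minimal sketch of that approach, assuming a DriverRepository that extends JpaRepository is available:

// getById returns a lazy proxy; no SELECT is issued for the driver
DriverEntity driverRef = driverRepository.getById(request.getDriverId());
loadEntity.setDriverEntity(driverRef);
// saving only writes load_entity, with driver_id taken from the proxy's id
loadRepository.save(loadEntity);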

Spring Data JPA: Work with Pageable but with a specific set of fields of the entity

I am working with Spring Data 2.0.6.RELEASE.
I am working on pagination for performance and presentation purposes.
By performance I mean that when we have a lot of records, it is better to show them through pages.
I have the following and it works fine:
interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {
}
The @Controller works fine with:
@GetMapping(produces = MediaType.TEXT_HTML_VALUE)
public String findAll(Pageable pageable, Model model) {
Through Thymeleaf I am able to apply pagination, so up to this point the goal has been accomplished.
Note: The Persona class is annotated with JPA (@Entity, @Id, etc.)
Now I am concerned about the following: even though pagination in Spring Data limits the number of records, what about the content of each record?
I mean: let's assume the Persona class contains 20 fields (consider any entity you want for your app), while an HTML view for a report only uses 4 of them (id, firstname, lastname, date); then we have 16 unnecessary fields in memory for each entity.
I have tried the following:
interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {

    @Query("SELECT p.id, p.nombre, p.apellido, p.fecha FROM Persona p")
    @Override
    Page<Persona> findAll(Pageable pageable);
}
If I do a simple print in the @Controller it fails with the following:
java.lang.ClassCastException:
[Ljava.lang.Object; cannot be cast to com.manuel.jordan.domain.Persona
If I avoid that, the view fails with:
Caused by:
org.springframework.expression.spel.SpelEvaluationException:
EL1008E:
Property or field 'id' cannot be found on object of type
'java.lang.Object[]' - maybe not public or not valid?
I have read many posts on SO, such as:
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to
I understand the answer, and I agree that the Object[] return type appears because I am working with a specific set of fields.
Is it mandatory to work with the complete set of fields for each entity? Should I simply accept the memory cost of the 16 fields that are never used, for each record retrieved?
Is there a way to work with a specific set of fields, or Object[], with the current Spring Data API?
Have a look at Spring Data projections. For example, interface-based projections can be used to expose certain attributes through specific getter methods.
Interface:
interface PersonaSubset {
    long getId();
    String getNombre();
    String getApellido();
    String getFecha();
}
Repository method (a derived query name such as findAllProjectedBy avoids clashing with the inherited findAll, whose Page<Persona> return type is incompatible):
Page<PersonaSubset> findAllProjectedBy(Pageable pageable);
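
A hypothetical usage from the question's controller (the repository field, view name and model attribute are assumptions):

@GetMapping(produces = MediaType.TEXT_HTML_VALUE)
public String findAll(Pageable pageable, Model model) {
    // each element carries only the four projected fields
    Page<PersonaSubset> page = repository.findAllProjectedBy(pageable);
    model.addAttribute("personas", page);
    return "personas/list";
}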
If you only want to read a specific set of columns you don't need to fetch the whole entity. Create a class containing the requested columns - for example:
public class PersonBasicData {

    private String firstName;
    private String lastName;

    public PersonBasicData(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // getters and setters if needed
}
Then you can specify the query using the @Query annotation on the repository method, with a constructor expression like this:
@Query("SELECT NEW some.package.PersonBasicData(p.firstName, p.lastName) FROM Person AS p")
You could also use the Criteria API to get it done programmatically:
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<PersonBasicData> query = cb.createQuery(PersonBasicData.class);
Root<Person> person = query.from(Person.class);
query.multiselect(person.get("firstName"), person.get("lastName"));
List<PersonBasicData> results = entityManager.createQuery(query).getResultList();
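
Since the question is about Pageable, the requested page can also be applied to the criteria query (a sketch; pageable is the controller's Pageable):

List<PersonBasicData> pageContent = entityManager.createQuery(query)
        .setFirstResult((int) pageable.getOffset()) // where the page starts
        .setMaxResults(pageable.getPageSize())      // how many rows to fetch
        .getResultList();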
Be aware that instances of PersonBasicData are created just for reading - you won't be able to make changes to them and persist those back to your database, as the class is not marked as an entity and thus your JPA provider will not work with it.

Insert & Update from single Spring Batch ItemReader

My process transforms the data into an SCD2 pattern. Thus, any update in the source data culminates in updating the end_date & active_ind in the dimension table and inserting a new record.
I have configured the SQL in an ItemReader implementation which identifies the records that changed in the source data.
I need help/suggestions on how to route the data to 2 writers, one each for update & insert.
There is a general pattern in Spring for this type of use case, not specific to Spring Batch: the Classifier interface.
You can use the BackToBackPatternClassifier implementation of this interface.
Additionally, you need to use the Spring Batch provided ClassifierCompositeItemWriter.
Here is a summary of steps:
The POJO/Java bean that is passed on to the writer should have some kind of String field that can identify the target ItemWriter for that POJO.
Then you write a Classifier that returns that String for each POJO, like this:
public class UpdateOrInsertClassifier {

    @Classifier
    public String classify(WrittenMasterBean writtenBean) {
        return writtenBean.getType();
    }
}
and
@Bean
public UpdateOrInsertClassifier router() {
    return new UpdateOrInsertClassifier();
}
I assume that WrittenMasterBean is the POJO you send to either of the writers and that it has a private String type; field. This Classifier is your router.
Then you set up the BackToBackPatternClassifier like this -
@Bean
public Classifier classifier() {
    BackToBackPatternClassifier classifier = new BackToBackPatternClassifier();
    classifier.setRouterDelegate(router());
    Map<String, ItemWriter<WrittenMasterBean>> writerMap = new HashMap<>();
    writerMap.put("Writer1", writer1());
    writerMap.put("Writer2", writer2());
    classifier.setMatcherMap(writerMap);
    return classifier;
}
i.e. I assume that the keys Writer1 and Writer2, matched against the bean's type field, will identify your writers for that particular bean.
writer1() and writer2() return the actual ItemWriter beans; a sketch of one follows.
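
A hedged sketch of one such writer; the JDBC approach, table name and business-key column are assumptions (end_date and active_ind come from the question), and WrittenMasterBean is assumed to expose endDate and businessKey properties:

@Bean
public JdbcBatchItemWriter<WrittenMasterBean> writer1() {
    // the SCD2 "update" leg: close out the currently active dimension row
    JdbcBatchItemWriter<WrittenMasterBean> writer = new JdbcBatchItemWriter<>();
    writer.setDataSource(dataSource);
    writer.setSql("UPDATE dim_table SET end_date = :endDate, active_ind = 'N' "
            + "WHERE business_key = :businessKey AND active_ind = 'Y'");
    // binds the named parameters to WrittenMasterBean properties
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    return writer;
}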
BackToBackPatternClassifier needs two fields - a router classifier and a matcher map.
The restriction is that the keys in this classifier are Strings; you can't use any other type of key.
Then pass the BackToBackPatternClassifier on to the Spring Batch provided ClassifierCompositeItemWriter:
@Bean
public ItemWriter<WrittenMasterBean> classifierWriter() {
    ClassifierCompositeItemWriter<WrittenMasterBean> writer = new ClassifierCompositeItemWriter<>();
    writer.setClassifier(classifier());
    return writer;
}
You configure this classifierWriter() in your Step, and then you are good to go.
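
For completeness, a hypothetical step wiring (the reader and chunk size are assumptions):

@Bean
public Step scd2Step(StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory.get("scd2Step")
            .<WrittenMasterBean, WrittenMasterBean>chunk(100)
            .reader(reader())
            .writer(classifierWriter()) // routes each item to the update or insert writer
            .build();
}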

A strange phenomenon when using Dozer in a JPA project: why doesn't the @Mapping annotation work on lazily loaded objects?

I met a very strange phenomenon when using Dozer in a JPA project.
I have a UserSupplier object and a Supplier object.
UserSupplier:
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "supplier_id", nullable = false)
private Supplier supplier;
In my code I first query a UserSupplier list, then convert it to a Supplier list.
List<Supplier> supplierList = new ArrayList<>(usList.size());
usList.forEach(us -> supplierList.add(us.getSupplier()));
Then I convert the Supplier list to a SupplierView list and return it to the caller.
BeanMapper.mapList(supplierList, SupplierView.class);
My Dozer configuration for these objects looks like this.
Supplier:
@Id
@GeneratedValue
@Mapping("supplierId")
private int id;
SupplierView:
private int supplierId;
Strangely, supplierId in SupplierView is always 0 (the default int value), but the other fields convert successfully; only the id field fails. I don't know why this is - why can't the id field be converted to supplierId when all the other fields can?
For the above problem, there are the following solutions.
1. Change the field name (supplierId to id):
Supplier:
// @Mapping("supplierId")
private int id;
SupplierView:
private int id;
but then the caller (the front end) has to change its code.
2. Change the fetch type to eager (@ManyToOne defaults to FetchType.EAGER when no fetch attribute is given):
UserSupplier:
@ManyToOne
private Supplier supplier;
After reading the Dozer documentation and experimenting, I found another solution: add a dozer.properties file to the classpath with this content:
org.dozer.util.DozerProxyResolver=org.dozer.util.HibernateProxyResolver
For more detail, see
http://dozer.sourceforge.net/documentation/proxyhandling.html
This is probably because JPA uses proxy objects for lazy loading of single entity references. A proxy object is effectively a subclass of your entity class. I guess that Dozer can find the @Mapping annotation only on fields declared in the class of the given object, and not on fields defined in parent classes. The Dozer project states that annotation mapping is experimental, so it is possible that it does not cover mapped class hierarchies well.
I suggest trying to configure the mapping of supplierId by other means (XML, the Dozer mapping API) and seeing if that works. If all else fails, you could write a custom MapperAware converter between Supplier and SupplierView (a sketch follows). You would map the source object to the target object using the supplied mapper, and finalize it by copying the value of id to supplierId.
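
A hedged sketch of such a converter (class and setter names are assumptions; the remaining fields are copied inside the converter rather than via mapper.map on the same Supplier/SupplierView pair, which would re-enter this converter):

public class SupplierConverter extends DozerConverter<Supplier, SupplierView> implements MapperAware {

    private Mapper mapper;

    public SupplierConverter() {
        super(Supplier.class, SupplierView.class);
    }

    @Override
    public SupplierView convertTo(Supplier source, SupplierView destination) {
        SupplierView view = destination != null ? destination : new SupplierView();
        view.setSupplierId(source.getId()); // the value the @Mapping annotation failed to carry over
        // copy the remaining fields by hand, or delegate nested objects to this.mapper
        return view;
    }

    @Override
    public Supplier convertFrom(SupplierView source, Supplier destination) {
        throw new UnsupportedOperationException("view mapping is one-way");
    }

    @Override
    public void setMapper(Mapper mapper) {
        this.mapper = mapper;
    }
}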

Override findAll() Spring Data Gemfire Repo Queries

I have millions of objects populated in GemFire regions. I don't want the default findAll() SDR query to be executed, retrieving millions of objects in one shot. I am trying to figure out whether there is a way to override the default findAll query and provide a LIMIT param to restrict the number of objects retrieved from the GemFire regions. Here is an example of what I want to do:
@NoRepositoryBean
public interface CrudRepository<T, ID extends Serializable> extends Repository<T, ID> {

    /**
     * Returns all instances of the type.
     *
     * @return all entities
     */
    Iterable<T> findAll();
}
public interface MyRepository extends CrudRepository<MyRepoObject, String> {

    @Query("SELECT * FROM MyRegion LIMIT $1")
    Iterable<CellTower> findAll(@Param("limit") String limit);
}
Currently, I am on Spring Data GemFire 1.4.0.BUILD-SNAPSHOT and Spring Data REST 2.0.0.BUILD-SNAPSHOT.
This handy getting started guide on Accessing GemFire Data with REST (https://spring.io/guides/gs/accessing-gemfire-data-rest/) was written not long ago and may help with your particular use case.
The following worked for me. Try using an Integer instead of a String as the parameter to findAll:
@Query("SELECT * FROM /Customer LIMIT $1")
List<Customer> findAll(@Param("limit") Integer max);
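
A quick usage sketch under the same assumption (the injected repository instance is hypothetical):

// fetches at most 100 entries instead of the entire region
List<Customer> firstHundred = customerRepository.findAll(100);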