Why don't exceptions in spring-data-commons extend DataAccessException? - spring-data

I'm trying to handle the Spring DAO exceptions (http://docs.spring.io/spring/docs/4.0.3.RELEASE/spring-framework-reference/htmlsingle/#dao-exceptions) in the service layer of my application, only to discover that the exceptions in the spring-data-commons module don't extend org.springframework.dao.DataAccessException.
Example: PropertyReferenceException.
As far as I can tell, all exceptions in this module, and maybe in the other sub-modules of the Spring Data projects, should extend DataAccessException.
Is there anything obvious that I'm not seeing here?

There is no need for Spring Data to use Spring Core's DataAccessException hierarchy, because one can use the PersistenceExceptionTranslator. It translates exceptions thrown by, for example, Spring Data repositories into a DataAccessException subtype.
The PersistenceExceptionTranslator kicks in automatically when you mark your repository with the @Repository annotation. The service (using the annotated repository) can then catch DataAccessException if needed.
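A minimal sketch of how that looks from the service layer (Customer, CustomerRepository and CustomerService are illustrative names, not from the question; Customer is an assumed JPA entity):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

// Marking the repository with @Repository activates exception translation for it.
@Repository
public interface CustomerRepository extends CrudRepository<Customer, Long> {
}

@Service
public class CustomerService {

    @Autowired
    private CustomerRepository customerRepository;

    public Customer load(Long id) {
        try {
            return customerRepository.findOne(id);
        } catch (DataAccessException e) {
            // store-specific exceptions arrive here already translated
            throw new IllegalStateException("Could not load customer " + id, e);
        }
    }
}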

Related

CDI with JEE: how to handle dependency injection in the backend without EJB

I'm working on a REST API in Java, using JAX-RS, EJB, JPA and JasperReports. Basically, the API calls an Oracle function that returns an id; with that id I run a select and generate a report with JasperReports, then I send the report as the response. It works fine.
But I have doubts about whether I should use EJB, because I don't see why I need it in this case: the Oracle function has a commit inside it, so if something goes wrong, the rollback triggered by the EJB won't do anything, right? Also, the select that generates the report is simple (just one table), and I've seen articles saying that if you only run a select there is no need for EJB to control the transaction.
Also, how should I use CDI in this case? @Named on the classes and @Inject on the fields? Some coworkers say that @Named should only be used with JSF, but I'm a junior seeking the truth about this. After researching a lot I still don't know how to handle it, so I appreciate any help.
Thanks!
Do I need EJBs for transactions?
If you are using Java EE 7+, you can use @Transactional on your CDI beans instead of EJBs with @Stateless, @TransactionManagement and @TransactionAttribute. @Transactional provides the same attributes as @TransactionAttribute and makes any CDI bean transactional without the need for an EJB container. All of these approaches require JPA, which for a simple single query may be overkill.
https://docs.oracle.com/javaee/7/api/javax/transaction/Transactional.html
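For illustration, a minimal transactional CDI bean could look like this (ReportRequest and ReportRequestService are invented names, not from the question):

import javax.enterprise.context.ApplicationScoped;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

@Entity
class ReportRequest {
    @Id
    @GeneratedValue
    Long id;
    String requestedBy;
}

@ApplicationScoped
public class ReportRequestService {

    @PersistenceContext
    private EntityManager em;

    // Starts (or joins) a JTA transaction without any EJB involved.
    @Transactional
    public void saveRequest(ReportRequest request) {
        em.persist(request);
    }
}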
What can I use instead of EJBs and @Transactional?
If you don't need/want to use an EntityManager, then just use plain JDBC.
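A sketch of the plain-JDBC route, using a container-managed DataSource (the JNDI name, the table and the ReportDao class are assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.sql.DataSource;

@ApplicationScoped
public class ReportDao {

    @Resource(lookup = "java:jboss/datasources/ReportsDS") // assumed JNDI name
    private DataSource dataSource;

    public String findReportTitle(long reportId) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("select title from report where id = ?")) {
            ps.setLong(1, reportId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("title") : null;
            }
        }
    }
}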
What does @Named do?
@Named makes CDI beans accessible to Java EL via their defined name or, if none is defined, via their simple class name. You can also use @Named to distinguish between implementations, but I think CDI qualifiers are more suitable for that. So, if you don't need it, don't annotate it.
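To illustrate the qualifier alternative (all names below are invented):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.inject.Inject;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
@interface Pdf {
}

interface ReportRenderer {
    byte[] render(long reportId);
}

@Pdf
class PdfReportRenderer implements ReportRenderer {
    public byte[] render(long reportId) {
        return new byte[0]; // real rendering elided
    }
}

class ReportResource {
    @Inject
    @Pdf // selects the PdfReportRenderer implementation without relying on a string name
    ReportRenderer renderer;
}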
How to provide CDI Beans to other CDI Beans?
In my opinion, CDI beans should be injected via fields rather than constructor arguments. Constructor injection is mostly done for testability, so that you can test your beans without a CDI container, but these days that is not hard to achieve anyway.
https://deltaspike.apache.org/documentation/test-control.html

Custom data store module for Spring Data REST

I like Spring Data REST, but I need to explore the possibility of using it without directly exposing our JPA entities. Furthermore, our access layer is currently built using GenericDAO instead of repositories (or Spring Data JPA).
So the idea is to build a custom Spring Data module that, by implementing the Repository interfaces, lets me benefit from all the other functionality of Spring Data REST. The entities managed by our custom Spring Data module would in the end be DTOs of the original JPA entities.
I'm currently looking at spring-data-ldap as the implementation to mimic, since it looks like the simplest data module available, but I'm facing the following issue: we get an exception at application startup saying
Parameter 0 of constructor in org.springframework.data.dspace.repository.support.DSpaceRepositoryFactoryBean required a bean of type 'java.lang.Class' that could not be found.
Action:
Consider defining a bean of type 'java.lang.Class' in your configuration.
DSpaceRepositoryFactoryBean is our implementation; it just returns a very simple DSpaceFactoryBean that always returns a trivial implementation of CrudRepository (all methods return null):
@Override
protected Class<?> getRepositoryBaseClass(RepositoryMetadata metadata) {
    return SimpleDSpaceRepository.class;
}
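For context, a Spring Data factory bean normally extends RepositoryFactoryBeanSupport and exposes a constructor that takes the repository interface; that constructor is meant to be called by the Spring Data repository configuration rather than by plain constructor autowiring, and one common cause of the 'bean of type java.lang.Class' error is that the factory bean gets picked up by ordinary component scanning instead. A rough sketch (only the DSpace* names come from the question; generic bounds may differ slightly between Spring Data versions):

import java.io.Serializable;
import org.springframework.data.repository.Repository;
import org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport;
import org.springframework.data.repository.core.support.RepositoryFactorySupport;

public class DSpaceRepositoryFactoryBean<T extends Repository<S, ID>, S, ID extends Serializable>
        extends RepositoryFactoryBeanSupport<T, S, ID> {

    // The repository configuration supplies the repository interface here; if the class is
    // instantiated as a regular bean instead, Spring tries to resolve the Class argument
    // itself and fails with the error shown above.
    public DSpaceRepositoryFactoryBean(Class<? extends T> repositoryInterface) {
        super(repositoryInterface);
    }

    @Override
    protected RepositoryFactorySupport createRepositoryFactory() {
        return new DSpaceFactoryBean(); // the trivial factory described above
    }
}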
Does this approach make sense? Or would it be better to give up on Spring Data REST and just use Spring MVC + Spring HATEOAS?

Can Hibernate OGM Persistence Provider be used with Spring-data-jpa?

I really like the simplicity of Spring Data repositories, but I need to use Hibernate as the persistence provider for consistency and a few other reasons. (I am using MongoDB, but not MongoTemplate.) A few things I noticed:
The HibernateJpaVendorAdapter uses org.springframework.orm.jpa.vendor.SpringHibernateEjbPersistenceProvider.
The provider configured in the persistence unit (org.hibernate.ogm.jpa.HibernateOgmPersistence) is not considered when constructing the EntityManagerFactory through an org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean bean.
If there are multiple persistence units configured for the project, there is no apparent way to associate a persistence unit with a repository.
Questions:
Is there a way to use the configured persistence provider instead of the default one? The default provider does not work with MongoDB.
Is there a way to associate a repository with a specific persistence unit?
A partial solution was to:
implement org.springframework.orm.jpa.vendor.AbstractJpaVendorAdapter and return an instance of org.hibernate.ogm.jpa.HibernateOgmPersistence from the getPersistenceProvider() method (a sketch follows below)
set the jpaVendorAdapter property of the entityManagerFactory bean in the Spring config
However, it still doesn't work well wherever there is a reference to Pageable. Some design changes can work around that issue.
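A sketch of the adapter described above, assuming the classes named in the answer (the adapter class name itself is invented):

import javax.persistence.spi.PersistenceProvider;
import org.hibernate.ogm.jpa.HibernateOgmPersistence;
import org.springframework.orm.jpa.vendor.AbstractJpaVendorAdapter;

public class HibernateOgmJpaVendorAdapter extends AbstractJpaVendorAdapter {

    private final PersistenceProvider persistenceProvider = new HibernateOgmPersistence();

    // Makes LocalContainerEntityManagerFactoryBean bootstrap Hibernate OGM
    // instead of the default ORM provider.
    @Override
    public PersistenceProvider getPersistenceProvider() {
        return persistenceProvider;
    }
}

The adapter is then wired in via the jpaVendorAdapter property of the entityManagerFactory bean definition, as mentioned above.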

Proper way to inject dependencies into clustered persistent Akka actors?

I'm using Akka Persistence with Cluster Sharding. What is the proper way to provide dependencies into such PersistentActor-s?
As far as I understand, passing them as constructor arguments is not possible, as Cluster Sharding is creating these actors.
Using Spring/Guice/etc. is not idiomatic Scala (and possibly has other issues (?)).
Using an object to implement a singleton makes for cumbersome testing and seems bad style.
What is the proper way?
P.S. If you plan to suggest the Cake pattern, please provide sample code in this specific Akka Persistence Cluster Sharding context.
UPDATED VERSION:
The solution I offered earlier did not allow mocking the actor's services in unit test cases.
Instead, I am now using a solution described in this article, http://letitcrash.com/post/55958814293/akka-dependency-injection, called "aspect weaving", which consists of injecting the dependencies into the actor using aspect-oriented programming.
This solution can be used to inject Spring dependencies into any object not controlled by the Spring container (potentially useful for legacy code, too).
A full example is provided by the above article: https://github.com/huntc/akka-spring/blob/f137c98b621517301f636e6ea03519388fcd5fff/src/main/scala/org/typesafe/Akkaspring.scala
To enable aspect weaving in a Spring-based application, check the Spring documentation.
In my case, on a Jetty application server, it consists of adding the Spring agent to the JVM arguments.
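The standard Spring building block behind this kind of weaving is @Configurable from spring-aspects. The sketch below only illustrates the idea and is not taken from the linked article; the actor, the AccountService bean and the weaving setup are all assumptions:

import akka.persistence.UntypedPersistentActor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

interface AccountService {
    void handle(Object command); // assumed Spring-managed service
}

// With AspectJ load-time weaving enabled (the Spring agent on the JVM command line plus
// @EnableSpringConfigured or <context:spring-configured/>), Spring injects the field below
// even though Cluster Sharding instantiates the actor itself.
@Configurable
public class AccountActor extends UntypedPersistentActor {

    @Autowired
    private AccountService accountService;

    // setter kept so unit tests can plug in a mock without any weaving
    public void setAccountService(AccountService accountService) {
        this.accountService = accountService;
    }

    @Override
    public String persistenceId() {
        return "account-" + getSelf().path().name();
    }

    @Override
    public void onReceiveRecover(Object event) {
        // rebuild actor state from persisted events here
    }

    @Override
    public void onReceiveCommand(Object command) {
        accountService.handle(command); // business logic delegated to the injected service
    }
}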
As far as tests are concerned, I:
created setters for the injected services
created a basic configuration for my actors with null beans referenced for my dependencies
instantiated the actor in my test case
replaced the actor's services with mocks
ran the actor's inner methods and checked the results, the actor's state, or the calls to dependencies
ORIGINAL:
I am using Akka in a Spring application to enable clustering. At first it raised the following issue: you cannot inject Spring-managed dependencies into the actor constructor, as you said (it tries to serialize the application context and fails).
So I created a class that holds the application context and provides a static method to retrieve the beans I need. I retrieve a bean only when I need it, this way:
public void onReceive(Object message) {
    if (message instanceof HandledMessage) {
        MyService myService = (MyService) SpringApplicationContext.getBean("myService");
        // ...
    }
}
It's not conventional, but it does the job. What do you think? I hope it might otherwise help someone else.
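The holder class is not shown above; a rough sketch of what it could look like:

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Registered as a Spring bean so the container hands it the ApplicationContext,
// which is then exposed statically to code Spring does not instantiate (the actors).
@Component
public class SpringApplicationContext implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }

    public static Object getBean(String name) {
        return context.getBean(name);
    }
}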

In a system using EJB 3.1 and JPA 2.0, where should one cache CriteriaQuery objects?

The JPA 2.0 specification mentions in section 6.9 that CriteriaQuery objects are serializable, and hence may outlive any open EntityManagers or EntityManagerFactory instances:
CriteriaQuery objects must be serializable. A persistence vendor is required to support the subsequent deserialization of a CriteriaQuery object into a separate JVM instance of that vendor's runtime, where both runtime instances have access to any required vendor implementation classes.
The EJB 3.1 specification says in section 21.2.2:
An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances, except if it is a Singleton session bean with bean-managed concurrency.
If I have a stateless session bean that wishes to pre-build a bunch of CriteriaQuery objects using a CriteriaBuilder obtained from an injected @PersistenceContext, where should I stash the results?
I can think of the following possibilities but am concerned that all but one run afoul of the "no synchronization primitives" clause above:
In a Map that is stored as the value of one of my bean's instance fields, understanding that I'll have to synchronize access to the map. My take: section 21.2.2 violation.
In a ConcurrentMap that is stored as the value of one of my bean's instance fields. My take: still a section 21.2.2 violation, as I'm sure the ConcurrentMap implementation synchronizes somewhere.
In a @Singleton EJB's instance field somewhere, where the @Singleton exists only to serve as this kind of cache; with bean-managed concurrency this should be legal, but now all my stateless session beans that want to make use of this CriteriaQuery cache have to inject the singleton into themselves...seems like a lot of overhead.
So it sounds like strictly speaking the last option is the only specification-compliant one. Am I correct?
I would consider putting them in a simple static context, accessible from anywhere. The problem lies in initializing them, since you need an EntityManager instance to do that. Perhaps use a singleton EJB for initializing things, as described in Call method in EJB on JBoss startup. The singleton could initialize your criteria query cache, which could then serve criteria queries to your DAOs through the static context.
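A sketch of that singleton-plus-static-cache idea (the Customer entity and the query name are invented; whether the concurrent map inside counts as a "synchronization primitive" is exactly what the question is debating, so take this as wiring only):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.annotation.PostConstruct;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN) // bean-managed, so synchronization is our responsibility
public class CriteriaQueryCache {

    // static so other beans can read the cache without injecting the singleton
    private static final Map<String, CriteriaQuery<?>> QUERIES = new ConcurrentHashMap<>();

    @PersistenceContext
    private EntityManager em;

    @PostConstruct
    void buildQueries() {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Customer> allCustomers = cb.createQuery(Customer.class); // Customer is an assumed entity
        allCustomers.select(allCustomers.from(Customer.class));
        QUERIES.put("allCustomers", allCustomers);
    }

    @SuppressWarnings("unchecked")
    public static <T> CriteriaQuery<T> get(String name) {
        return (CriteriaQuery<T>) QUERIES.get(name);
    }
}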
Another option would be to use JPQL, which has built-in support for precompiled (named) queries. Of course you'd lose some advantages of the Criteria API, though I think the main issue (type safety etc.) might be OK, since precompiled queries should throw an exception at deploy time rather than at runtime if they are invalid.
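For comparison, the named-query route would look roughly like this (the entity and query name are invented):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(name = "Customer.findByName",
            query = "select c from Customer c where c.name = :name") // validated when the unit is deployed
public class Customer {

    @Id
    private Long id;

    private String name;
}

// usage from a stateless bean:
// em.createNamedQuery("Customer.findByName", Customer.class)
//   .setParameter("name", name)
//   .getResultList();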