Can I use Spring Data JPA @Entity and Spring Data GemFire @Region together on the same POJO?

When I use the same POJO for both Spring Data JPA and Spring Data GemFire, the repository always accesses the database. I want the repository to access data from GemFire instead, even though I added the @EnableGemfireRepositories and @EnableEntityDefinedRegions annotations.
I think this is because I added @Entity and @Region together on the same POJO class.
Can this be fixed, or do I need to separate it into 2 POJO classes, one for the database and one for GemFire?
Thanks

No, you do not need 2 separate POJOs. However, you do need 2 separate Repository interface definitions, 1 for JPA and a 2nd for GemFire. I have an example of such an implementation here, in the repository-example.
In the contacts-core module, I have an example.app.model.Contact class that is annotated with both JPA's @Entity annotation and SDG's @Region annotation, in addition to other annotations (e.g. Jackson).
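For illustration, a minimal sketch of such a dual-annotated class might look like the following; the field names and the "Contacts" region name are assumptions here, not the exact contents of the example project:

import javax.persistence.Entity;
import org.springframework.data.gemfire.mapping.annotation.Region; // o.s.d.gemfire.mapping.Region in older SDG versions

@Entity
@Region("Contacts")
public class Contact {

    @javax.persistence.Id                      // identifier used by JPA
    @org.springframework.data.annotation.Id    // identifier used by Spring Data (GemFire)
    private Long id;

    private String name;

    // constructors, getters and setters omitted for brevity
}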
I then create 2 Repository interfaces in the repository-example module, 1 for JPA, which extends o.s.d.jpa.repository.JpaRepository, and another for GemFire, which extends o.s.d.gemfire.repository.GemfireRepository. Notice too that these Repositories are separated by package (i.e. example.app.repo.jpa vs. example.app.repo.gemfire) in my example.
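In sketch form, with the package and type names mirroring the description above:

// file: example/app/repo/jpa/ContactRepository.java
package example.app.repo.jpa;

import org.springframework.data.jpa.repository.JpaRepository;
import example.app.model.Contact;

public interface ContactRepository extends JpaRepository<Contact, Long> { }

// file: example/app/repo/gemfire/ContactRepository.java
package example.app.repo.gemfire;

import org.springframework.data.gemfire.repository.GemfireRepository;
import example.app.model.Contact;

public interface ContactRepository extends GemfireRepository<Contact, Long> { }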
Keep in mind, Spring Data enforces a strict policy mode which prevents ambiguity if the application Repository definition (e.g. ContactRepository) is generic, meaning the interface extends one of the common Spring Data interfaces (o.s.d.repository.Repository, o.s.d.repository.CrudRepository or perhaps o.s.d.repository.PagingAndSortingRepository) and resides in a package covered by the "scan" for both JPA and GemFire. The same holds for any Spring Data module that supports the Repository abstraction, including, but not limited to, MongoDB and Redis.
You must be very explicit in your declarations and intent. While it is generally not a requirement to extend store-specific Repository interface definitions (e.g. o.s.d.gemfire.repository.GemfireRepository) rather than a common interface (e.g. o.s.d.repository.CrudRepository), it is definitely advisable to put your different, per-store Repository definitions in separate packages and configure the scans accordingly. It is good practice to limit the scope of the scan in the first place.
Some users are tempted to want a single, "reusable" Repository interface definition per application domain model type (e.g. Contact) for all the stores they persist the POJO to. For example, a single ContactRepository for both JPA and GemFire. This is ill-advised.
This stems from the fact that, while most stores support basic CRUD and simple queries (e.g. findById(..)), not all of them do (so be careful), and stores are not equal in their query capabilities (e.g. JOINs) or features (e.g. paging). For example, SDG does not, as of yet, support paging.
So the point is, use 1 domain model type, but define a Repository per store clearly separated by package. Then you can configure the Spring Data Repository infrastructure accordingly. For instance, for JPA I have a configuration which points to the JPA-based ContactRepository using the ContactRepository class (which is type-safe and better than specifying the package by name using the basePackages attribute). Then, I do the same for the GemFire-based ContactRepository here.
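A sketch of that configuration, assuming the packages and types above; basePackageClasses uses a type from each package as a type-safe pointer to the package to scan:

import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@Configuration
@EnableJpaRepositories(basePackageClasses = example.app.repo.jpa.ContactRepository.class)
class JpaRepositoryConfig { }

@Configuration
@EnableGemfireRepositories(basePackageClasses = example.app.repo.gemfire.ContactRepository.class)
class GemFireRepositoryConfig { }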
By following this recipe, all is well, and you can inject the appropriate Repository (by type) into the service class that requires it. If you have a service class that requires both Repositories, then you must inject them appropriately, for example:
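A sketch, assuming the two ContactRepository interfaces above; since they share a simple name, one is referenced by its fully-qualified name:

import org.springframework.stereotype.Service;
import example.app.repo.jpa.ContactRepository;

@Service
class ContactService {

    private final ContactRepository jpaContactRepository;
    private final example.app.repo.gemfire.ContactRepository gemfireContactRepository;

    // constructor injection disambiguates the two Repositories by type
    ContactService(ContactRepository jpaContactRepository,
            example.app.repo.gemfire.ContactRepository gemfireContactRepository) {
        this.jpaContactRepository = jpaContactRepository;
        this.gemfireContactRepository = gemfireContactRepository;
    }
}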
Hope this helps!

Related

Spring data repository and DAO Java Generics

Reading about using Java generics in the DAO layer, I'm unsure how this applies to Spring Data repositories. I mean, with Spring Data repositories, you have something like this:
public interface OrderRepository extends CrudRepository<Order, OrderPK> {
}
But if I have 10 other entities, I have to create 10 interfaces like the one above just to get CRUD operations and so on, and I don't think that is very scalable. The Java Generics and DAO approach is about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface for each entity, so ...
You didn't really state a question, so I'll just add one:
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data Repositories offer various features for which Spring Data needs to know what entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name into a JPA-based query. So you have to pass that information to Spring Data at some point, and you also have to indicate which entities should be considered Aggregate Roots. The way you do that is by specifying the interface.
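For example, a derived query method such as the following only works because Spring Data learns the entity type and its properties from the interface declaration (the customerName property of Order is an assumption here):

import java.util.List;
import org.springframework.data.repository.CrudRepository;

public interface OrderRepository extends CrudRepository<Order, OrderPK> {

    // parsed by Spring Data into a query on Order's customerName property
    List<Order> findByCustomerName(String customerName);
}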
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with JPA. But if you want query methods, pagination, simple native queries and much more, Spring Data is a nice way to avoid lots of boilerplate code.
(Please keep in mind that I'm biased)

Trying to use the Crate.io NoSQL database with an existing Spring Data / MySQL project

I'm attempting to add Crate.IO capability to an existing Spring Data/EclipseLink/MySQL web application. For this specific use case, we want to persist data to both MySQL AND Crate (for evaluation purposes) in the most painless way possible. I'm using the Spring-Data-Crate project in order to be able to use Spring Data Repositories with Crate.
I've been able to set up a separate Crate-specific entity manager with a filter so that only repos implementing CrateRepository use it. The problem I'm having is determining how to use the existing Spring Data/MySQL entity classes with Crate (or derive from them).
1) If I annotate an existing Spring Data @Entity class with the Spring-Data-Crate @Table annotation, the mapping to the Crate DB fails because EclipseLink/JPA adds hidden persistence fields to entity objects that start with an underscore, which is apparently not allowed by the spring-data-crate adapter.
2) I tried to use entity inheritance, with a base class that both the MySQL and Crate entities extend, with only the MySQL entity having the Spring Data @Entity annotation. Unfortunately, this causes Spring Data to lose visibility of the base class fields unless the base class is annotated with @MappedSuperclass. But adding this annotation reintroduces the hidden "_"-prefixed persistence properties to the derived Crate entity.
3) I could use separate entities entirely and have them implement a common interface, but I can't assign the interface as the type of the Spring Data Crate repository.
... Not sure where to go from here
Spring Data Crate adapter project - https://github.com/KPTechnologyLab/spring-data-crate
Spring Data Crate Tutorial - https://crate.io/a/using-sprint-data-crate-with-your-java-rest-application/
I'm Johannes from Crate.
We haven't tested using Spring Data Crate in that manner, so we can't say whether this should or shouldn't work.
Sorry, Johannes

What are the differences between GemfireRepository and CrudRepository in Spring Data GemFire?

GemfireRepository is a GemFire-specific extension of CrudRepository, but the Spring Data GemFire reference guide says that if we use GemfireRepository, we need to have our domain classes correctly mapped to configured Regions, as the bootstrap process will fail otherwise. Does that mean we need the @Region annotation on the domain classes? And if we use CrudRepository, is the @Region annotation not required because CrudRepository does not depend on a Region?
So I am using GemfireRepository, and I have a CacheLoader configured as a plug-in to a Region, and the CacheLoader depends on the GemfireRepository to fetch data from an RDBMS. According to the reference documentation, if GemfireRepository is internally dependent on the Region, does that create a circular dependency?
The SDG GemfireRepository interface extends SDC's CrudRepository interface and adds a couple of methods (findAll(:Sort) and an overloaded save(:Wrapper):T method). See...
http://docs.spring.io/spring-data-gemfire/docs/current/api/org/springframework/data/gemfire/repository/GemfireRepository.html
The GemfireRepository interface is "backed" by the SimpleGemfireRepository class.
Whether your application-specific Repository interface extends GemfireRepository, CrudRepository, or even just org.springframework.data.repository.Repository does not really matter. The Repository interface you extend only determines which methods will be exposed by the backing implementation through the "Proxy" created by the framework.
E.g. if you wanted to create a read-only Repo, you would directly extend org.springframework.data.repository.Repository and copy only the "read-only" methods from the CrudRepository interface into your application-specific Repository interface (e.g. findOne(:ID), findAll(), exists(:ID)), i.e. no data store mutating methods, such as save(:S):S. For instance:
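A sketch of such a read-only Repository; the Contact domain type is an assumption, and the method signatures follow the older Spring Data API referenced in this answer:

import org.springframework.data.repository.Repository;

public interface ReadOnlyContactRepository extends Repository<Contact, Long> {

    Contact findOne(Long id);       // read-only lookup by identifier

    Iterable<Contact> findAll();    // read-only scan

    boolean exists(Long id);        // existence check; no mutating methods exposed
}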
But, by using the namespace element in your Spring config, you are instructing the framework to use SDG's Repository infrastructure to handle persistence operations for your application domain objects in GemFire, and specifically Regions. Therefore, either the application domain object must be annotated with @Region, or, as SDG now allows, the application Repository interface can be annotated with @Region, in cases where your application domain object needs to be stored in multiple Regions of GemFire. See section 8.1, Entity Mapping, in the SDG Reference Guide for further details:
http://docs.spring.io/spring-data-gemfire/docs/1.4.0.RELEASE/reference/html/mapping.html#mapping.entities
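For illustration, annotating the Repository interface rather than the domain object might look like this sketch; the "Contacts" region name and Contact type are assumptions:

import org.springframework.data.gemfire.mapping.Region; // o.s.d.gemfire.mapping.annotation.Region in newer SDG versions
import org.springframework.data.gemfire.repository.GemfireRepository;

@Region("Contacts")
public interface ContactRepository extends GemfireRepository<Contact, Long> { }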
Regarding the "circular dependency"... yes, it creates a circular dependency...
Region A -> CacheLoader A -> ARepository -> Region A.
And will lead to a...
org.springframework.beans.factory.BeanCurrentlyInCreationException
You need to break the cycle.
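One possible way to break it (a sketch using a plain Spring technique, not something prescribed by this answer) is to inject the Repository into the CacheLoader lazily, so the Region can finish initializing before the Repository proxy is actually resolved. ContactRepository, Contact and findRdbmsContactById are hypothetical names, and the imports assume the Apache Geode API (the package is com.gemstone.gemfire.cache in older GemFire releases):

import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.LoaderHelper;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Component;

@Component
class ContactCacheLoader implements CacheLoader<Long, Contact> {

    private final ContactRepository contactRepository;

    // @Lazy injects a proxy, deferring resolution of the Repository bean and
    // breaking the Region -> CacheLoader -> Repository -> Region creation cycle
    ContactCacheLoader(@Lazy ContactRepository contactRepository) {
        this.contactRepository = contactRepository;
    }

    @Override
    public Contact load(LoaderHelper<Long, Contact> helper) {
        return contactRepository.findRdbmsContactById(helper.getKey()); // hypothetical query method
    }

    @Override
    public void close() { }
}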

How does Spring Data know which store to back a repository with if multiple modules are used?

In a Spring Data project, if I am using multiple types of repositories, i.e. a JPA repository and a Mongo repository, and I am extending CrudRepository, how does Spring Data know which store to use for that repository? It could use JPA or Mongo. Is it based on the @Document or @Entity annotation added to every persistent entity?
The decision about which store the proxy created for a Spring Data repository interface is backed by is made solely based on your configuration setup. Assume you have the following config:
@Configuration
@EnableJpaRepositories("com.acme.foo")
@EnableMongoRepositories("com.acme.foo")
class Config { }
This is going to blow up at some point, as the interfaces in package com.acme.foo are detected by both the MongoDB and JPA infrastructure. To resolve this, both the JavaConfig and XML support allow you to define include and exclude filters so that you can use naming conventions, additional annotations or the like:
@Configuration
@EnableJpaRepositories(basePackages = "com.acme.foo",
    includeFilters = @Filter(JpaRepo.class))
@EnableMongoRepositories(basePackages = "com.acme.foo",
    includeFilters = @Filter(MongoRepo.class))
class Config { }
In this case, the two annotations @JpaRepo and @MongoRepo (to be created by you) would be used to selectively trigger detection by annotating the relevant repository interfaces with them. For example:
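A sketch of the two marker annotations; these are plain user-defined annotations, so the names are whatever you choose:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface JpaRepo { }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface MongoRepo { }

You would then annotate each repository interface with the marker matching its intended store, e.g. @JpaRepo on the JPA-backed interfaces.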
Real auto-detection is more or less impossible, as it's hard to tell which store you're targeting solely from the repository interface declaration, and at the point in time when the bean definitions are created we don't yet know about any further infrastructure (an EntityManager or the like).

Accessing JPA Class Mapping

I found an article on the SpringSource forum which describes how to manipulate the schema name at runtime:
http://forum.springsource.org/showthread.php?18715-changing-hibernate-schemas-at-runtime
We're using pure JPA, however, where we're using a LocalContainerEntityManagerFactory and don't have access to Session or Configuration instances.
Can anyone provide insight on how to access the metadata at runtime (via the EntityManager) to allow modifying the schema?
Thanks
Changing metadata at runtime is JPA-provider specific. JPA allows you to pass a Map of provider-specific properties when creating an EntityManagerFactory or EntityManager. JPA also allows you to unwrap() an EntityManager to a provider-specific implementation.
If you are using EclipseLink you can set the schema using the setTableQualifier() API on the Session's login.
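A minimal sketch of that EclipseLink-specific route, assuming EclipseLink supports unwrapping the EntityManager to its native Session (setTableQualifier changes the default schema prefix for subsequent operations on that session):

import javax.persistence.EntityManager;
import org.eclipse.persistence.sessions.Session;

class SchemaSwitcher {

    // unwrap the EntityManager to EclipseLink's native Session and
    // change the table qualifier (schema) on its login
    static void setSchema(EntityManager entityManager, String schema) {
        Session session = entityManager.unwrap(Session.class);
        session.getLogin().setTableQualifier(schema);
    }
}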
You can't with standard JPA (which is your requirement, going by your question); it doesn't allow you to dynamically define metadata, only to view (a limited amount of) specified metadata via its metamodel API. You'd have to delve into implementation specifics to get further, but your portability goes down the toilet at that point, which isn't a good thing.
JDO, on the other hand, does allow you to dynamically define metadata (and hence schema) using standardised APIs.