What are the differences between GemfireRepository and CrudRepository in Spring Data GemFire?

GemfireRepository is a GemFire-specific extension of CrudRepository, but the Spring Data GemFire reference guide says that if we use GemfireRepository, then we need to have our domain classes correctly mapped to configured Regions, as the bootstrap process will fail otherwise. Does that mean we need to have the @Region annotation on the domain classes? If we use CrudRepository instead, is the @Region annotation not required, because CrudRepository is not dependent on a Region?
So I am using GemfireRepository, and I have a CacheLoader configured as a plug-in to a Region, and the CacheLoader depends on the GemfireRepository to fetch data from an RDBMS. So, according to the reference documentation, if GemfireRepository is internally dependent on the Region, does that create a circular dependency?

The SDG GemfireRepository interface extends SDC's CrudRepository interface and adds a couple of methods (findAll(:Sort) and an overloaded save(:Wrapper):T method). See...
http://docs.spring.io/spring-data-gemfire/docs/current/api/org/springframework/data/gemfire/repository/GemfireRepository.html
The GemfireRepository interface is "backed" by the SimpleGemfireRepository class.
Whether your application-specific Repository interface extends GemfireRepository, CrudRepository, or even just org.springframework.data.repository.Repository does not really matter. Extending a Repository interface provided by the framework only determines which methods will be exposed in the backing implementation by the proxy the framework creates.
E.g., if you wanted to create a read-only Repo, you would directly extend org.springframework.data.repository.Repository and copy only the "read-only" methods from the CrudRepository interface into your application-specific Repository interface (e.g. findOne(:ID), findAll(), exists(:ID)), i.e. no data-store-mutating methods, such as save(:S):S.
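As a plain-Java sketch of that idea (the ReadOnlyRepository name and the Optional-based signatures are my own invention for illustration, not Spring Data's actual generics):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// A hypothetical read-only Repository contract: only the non-mutating
// methods copied over from CrudRepository; no save or delete.
interface ReadOnlyRepository<T, ID> {
    Optional<T> findOne(ID id);   // read a single entity by id
    List<T> findAll();            // read all entities
    boolean exists(ID id);        // check presence without loading
}

// A trivial in-memory implementation to show the contract in action.
class InMemoryUserRepository implements ReadOnlyRepository<String, Long> {
    private final Map<Long, String> store = Map.of(1L, "alice", 2L, "bob");

    public Optional<String> findOne(Long id) { return Optional.ofNullable(store.get(id)); }
    public List<String> findAll() { return List.copyOf(store.values()); }
    public boolean exists(Long id) { return store.containsKey(id); }
}
```

Because the interface never declares a mutating method, no proxy built from it can expose one to callers.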
But, by using the namespace element in your Spring config, you are instructing the framework to use SDG's Repository infrastructure to handle persistence operations of your application domain objects into GemFire, and specifically Regions. Therefore, either the application domain object must be annotated with @Region, or, now, SDG allows an application Repository interface to be annotated with @Region, in cases where your application domain object needs to be stored in multiple GemFire Regions. See section 8.1, Entity Mapping, in the SDG Reference Guide for further details...
http://docs.spring.io/spring-data-gemfire/docs/1.4.0.RELEASE/reference/html/mapping.html#mapping.entities
Regarding the "circular dependency"... yes, it creates a circular dependency...
Region A -> CacheLoader A -> ARepository -> Region A.
And will lead to a...
org.springframework.beans.factory.BeanCurrentlyInCreationException
You need to break the cycle.
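One common way to break such a cycle, sketched below in plain Java rather than SDG specifics, is to defer the Repository lookup until the CacheLoader is first invoked (in Spring you might reach for @Lazy or an ObjectProvider; here a plain Supplier stands in for that deferred lookup, and all class names are made up):

```java
import java.util.function.Supplier;

// Hypothetical stand-in for the application Repository backed by the RDBMS.
class RepositoryA {
    String findById(Long id) { return "row-" + id; }
}

// Sketch: the CacheLoader holds a Supplier instead of a direct reference,
// so the Repository (and hence the Region) need not exist yet when the
// CacheLoader bean is constructed. The cycle is broken at creation time.
class CacheLoaderA {
    private final Supplier<RepositoryA> repositorySupplier;

    CacheLoaderA(Supplier<RepositoryA> repositorySupplier) {
        this.repositorySupplier = repositorySupplier;
    }

    String load(Long key) {
        // The Repository is resolved lazily, on first use, not at wiring time.
        return repositorySupplier.get().findById(key);
    }
}
```

The same shape applies in Spring: the container finishes creating the Region and Repository beans first, and the loader only resolves its dependency when a cache miss actually occurs.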

Related

Can I use Spring Data JPA @Entity and Spring Data GemFire @Region together on the same POJO?

When I try to use the same POJO for Spring Data JPA integration with Spring Data GemFire, the repository always accesses the database with the POJO. But I want the repository to access data from GemFire, even though I added the @EnableGemfireRepositories and @EnableEntityDefinedRegions annotations.
I think it is because I added @Entity and @Region together on the same POJO class.
Please help me fix this. Do I need to separate it into 2 POJO classes, one for the database and one for GemFire?
Thanks
No, you do not need 2 separate POJOs. However, you do need 2 separate Repository interface definitions, 1 for JPA and a 2nd for GemFire. I have an example of such an implementation here, in the repository-example.
In the contacts-core module, I have an example.app.model.Contact class that is annotated with both JPA's @Entity annotation and SDG's @Region annotation, in addition to other annotations (e.g. Jackson's).
I then create 2 Repository interfaces in the repository-example module, 1 for JPA, which extends o.s.d.jpa.repository.JpaRepository, and another for GemFire, which extends o.s.d.gemfire.repository.GemfireRepository. Notice too that these Repositories are separated by package (i.e. example.app.repo.jpa vs. example.app.repo.gemfire) in my example.
Keep in mind, Spring Data enforces a strict policy mode which prevents ambiguity if the application Repository definition (e.g. ContactRepository) is generic, meaning that the interface extends 1 of the common Spring Data interfaces: o.s.d.repository.Repository, o.s.d.repository.CrudRepository or perhaps o.s.d.repository.PagingAndSortingRepository, and the interface resides in the same package as the "scan" for both JPA and GemFire. This is the same for any Spring Data module that supports the Repository abstraction, including, but not limited to, MongoDB and Redis.
You must be very explicit in your declarations and intent. While it is generally not a requirement to extend store-specific Repository interface definitions (e.g. o.s.d.gemfire.repository.GemfireRepository), and rather extend a common interface (e.g. o.s.d.repository.CrudRepository), it is definitely advisable to put your different, per store Repository definitions in a separate package and configure the scan accordingly. This is good practice to limit the scan in the first place.
Some users are tempted to want a single, "reusable" Repository interface definition per application domain model type (e.g. Contact) for all the stores they persist the POJO to. For example, a single ContactRepository for both JPA and GemFire. This is ill-advised.
This stems from the fact that while most stores (though not all, so be careful) support basic CRUD and simple queries (e.g. findById(..)), not all stores are equal in their query capabilities (e.g. JOINs) or features (e.g. paging). For example, SDG does not yet support paging.
So the point is, use 1 domain model type, but define a Repository per store clearly separated by package. Then you can configure the Spring Data Repository infrastructure accordingly. For instance, for JPA I have a configuration which points to the JPA-based ContactRepository using the ContactRepository class (which is type-safe and better than specifying the package by name using the basePackages attribute). Then, I do the same for the GemFire-based ContactRepository here.
By following this recipe, all is well, and you can inject the appropriate Repository (by type) into the service class that requires it. If you have a service class that requires both Repositories, then you must inject each of them appropriately.
Hope this helps!

What are the DAO, DTO and Service layers in Spring Framework?

I am writing RESTful services using Spring and Hibernate. I have read many resources on the internet, but they did not clarify my doubts. Please explain to me in detail: what are the DAO, DTO, and Service layers in the Spring Framework? And why is usage of these layers required in Spring to develop RESTful API services?
First off, these concepts are Platform Agnostic and are not exclusive to Spring Framework or any other framework, for that matter.
Data Transfer Object
DTO is an object that carries data between processes. When you're working with a remote interface, each call is expensive. As a result, you need to reduce the number of calls. The solution is to create a Data Transfer Object that can hold all the data for the call. It needs to be serializable to go across the connection. Usually an assembler is used on the server side to transfer data between the DTO and any domain objects. It's often little more than a bunch of fields and the getters and setters for them.
Data Access Object
A Data Access Object abstracts and encapsulates all access to the data source. The DAO manages the connection with the data source to obtain and store data, and implements the access mechanism required to work with it. The data source could be a persistent store like an RDBMS, or a business service accessed via REST or SOAP. The DAO abstracts the underlying data access implementation for the Service objects to enable transparent access to the data source. The Service also delegates data load and store operations to the DAO.
Service
Service objects do the work that the application needs to do for the domain you're working with. This involves calculations based on inputs and stored data, validation of any data that comes in from the presentation, and figuring out exactly which data source logic to dispatch, depending on commands received from the presentation.
A Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
Recommended References
Martin Fowler has a great book on common application architecture patterns named Patterns of Enterprise Application Architecture. There is also Core J2EE Patterns, which is worth looking at.
DAO - Data Access Object:
An object that provides a common interface to perform all database operations like persistence mechanism.
public interface GenericDao<T> {
    public T find(Class<T> entityClass, Object id);
    public void save(T entity);
    public T update(T entity);
    public void delete(T entity);
    public List<T> findAll(Class<T> entityClass);
}
See this example : Spring – DAO and Service layer
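For illustration, a minimal in-memory implementation of such a generic DAO might look like the following (a sketch with simplified signatures: the map-based "persistence" and the generated-id convention are assumptions for the example, not part of the interface above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of an in-memory generic DAO; a real implementation would delegate
// to JDBC, Hibernate, or JPA instead of a Map.
class InMemoryGenericDao<T> {
    private final Map<Object, T> store = new HashMap<>();
    private long sequence = 0;

    public T find(Object id) { return store.get(id); }

    // Returns the generated id so callers can look the entity up again.
    public Object save(T entity) {
        Object id = ++sequence;
        store.put(id, entity);
        return id;
    }

    public T update(Object id, T entity) { store.put(id, entity); return entity; }

    public void delete(Object id) { store.remove(id); }

    public List<T> findAll() { return new ArrayList<>(store.values()); }
}
```

The point of the pattern is that the service layer only talks to the interface; swapping the Map for a database changes nothing above the DAO.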
DTO - Data Transfer Object:
An object that carries data between processes in order to reduce the number of method calls; this means you may combine more than one POJO entity in the service layer.
For example, a GET request to /rest/customer/101/orders retrieves all the orders for customer id 101 along with the customer details, hence you need to combine the Customer entity and the Orders entity with their details.
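Such a combined response could be modeled as a DTO like the following (a sketch; the class and field names are invented for the example):

```java
import java.util.List;

// Plain entity-like classes as they might come from the persistence layer.
class Customer {
    final long id;
    final String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

class Order {
    final long orderId;
    final double total;
    Order(long orderId, double total) { this.orderId = orderId; this.total = total; }
}

// The DTO: one object combining both entities, so the client needs a
// single call instead of one call per entity.
class CustomerOrdersDto {
    final long customerId;
    final String customerName;
    final List<Order> orders;

    CustomerOrdersDto(Customer customer, List<Order> orders) {
        this.customerId = customer.id;
        this.customerName = customer.name;
        this.orders = orders;
    }
}
```

The service layer would assemble this DTO from the two entities and hand it to the controller for serialization.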
Enterprise applications are divided into tiers for easy maintenance and development. Tiers are dedicated to particular types of tasks, like:
presentation layer (UI)
business layer
data access layer (DAO, DTO)
Why this design:
Let's pick an example: you have an application which reads data from a DB, performs some business logic on it, and then presents it to the user. Now, suppose you want to change your DB: say the application was running on Oracle and now you want to use MySQL. If you didn't develop it in tiers, you will be making changes everywhere in the application. But if you implement a DAO in the application, then this can be done easily.
DAO: Data Access Object is a design pattern. It just provides an interface for accessing data to the service layer, and provides different implementations for different data sources (databases, file systems).
Example code:
public interface DaoService {
    public boolean create(Object record);
    public CustomerTemp findTmp(String id);
    public Customer find(String id);
    public List<CustomerTemp> getAllTmp();
    public List<Customer> getAll();
    public boolean update(Object record);
    public boolean delete(Object record);
    public User getUser(String email);
    public boolean addUser(User user);
}
Service layer using the DAO:
@Service("checkerService")
public class CheckerServiceImpl implements CheckerService {

    @Autowired
    @Qualifier("customerService")
    private DaoService daoService;
}
Now I can provide any implementation of the DaoService interface.
Service and DTO are also used for separation of concerns.
DTO: The data object that we pass between different processes or within the same process. It can be a wrapper around the actual entity object. Using the entity object as-is for a DTO is not safe and is not recommended. The design of this object is based on various factors, like simplicity of representation, security of exposing IDs, consumer requirements, and so on.
In Spring, DTO can be formed with a simple model/pojo object.
DAO: The object responsible for CRUD operations.
In Spring, this can be an object that implements the JpaRepository interface, or any bean that connects with the database and does CRUD for us. Please remember your journey from JDBC to Hibernate to Spring Data JPA. :)
Service: The core bean for business logic implementation. This object may have a DAO object as its dependency. The core business logic for the particular use case goes here.
In Spring, the Service object/bean can be created either by annotating the bean with the @Service or @Component annotation, or simply by declaring the object as a bean using Java configuration. Make sure you inject all the required dependencies into the service bean for its heavy-lifting job.
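As a plain-Java sketch of that wiring (constructor injection stands in for @Autowired here, and the ContactDao/ContactService names are invented for the example):

```java
// Hypothetical DAO contract the service depends on.
interface ContactDao {
    String findNameById(long id);
}

// The service: business logic lives here; persistence is delegated to the DAO.
class ContactService {
    private final ContactDao dao;

    // In Spring this dependency would be injected (e.g. @Autowired on the
    // constructor); here we pass it in by hand.
    ContactService(ContactDao dao) { this.dao = dao; }

    String greet(long id) {
        // "Business logic": decide what to do with the loaded data.
        String name = dao.findNameById(id);
        return name == null ? "unknown contact" : "Hello, " + name;
    }
}
```

Because the service depends only on the interface, tests can substitute a stub DAO (a lambda below) without touching a database.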
SERVICE LAYER:
It receives the request from the controller layer and processes the request to the persistence layer.
@Controller: the annotation which initializes the whole controller layer.
@Service: the annotation which initializes the whole service layer.
@Repository: the annotation which initializes the whole persistence layer.
DTO:
It is a Data Transfer Object, used to pass properties from the service layer to the persistence layer.
DAO:
It is a Data Access Object; it is also known as the persistence layer. In the DAO, we receive property values from the service layer in a DTO object. Here we write the persistence logic to the DB.

How does Spring Data know which store to back a repository with if multiple modules are used?

In a Spring Data project, if I am using multiple types of repositories, i.e. a JPA repository and a Mongo repository, and I am extending CrudRepository, then how does Spring Data know which store to choose for that repository? It could use JPA or Mongo. Is it based on the @Document or @Entity annotation added on every persisted entity?
The decision about which store a proxy created for a Spring Data repository interface is backed by is made solely based on your configuration setup. Assume you have the following config:
@Configuration
@EnableJpaRepositories("com.acme.foo")
@EnableMongoRepositories("com.acme.foo")
class Config { }
This is going to blow up at some point, as the interfaces in package com.acme.foo are detected by both the MongoDB and the JPA infrastructure. To resolve this, both the JavaConfig and the XML support allow you to define include and exclude filters, so that you can use naming conventions, additional annotations, or the like:
@Configuration
@EnableJpaRepositories(basePackages = "com.acme.foo",
    includeFilters = @Filter(JpaRepo.class))
@EnableMongoRepositories(basePackages = "com.acme.foo",
    includeFilters = @Filter(MongoRepo.class))
class Config { }
In this case, the two annotations @JpaRepo and @MongoRepo (to be created by you) would be used to selectively trigger the detection, by annotating the relevant repository interfaces with them.
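The marker annotations themselves are trivial to create; a sketch (the retention and target values are what such a scan-filter annotation would typically need, and PersonRepository is a made-up example interface):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation for repositories that the JPA scan should pick up.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface JpaRepo { }

// Marker annotation for repositories that the MongoDB scan should pick up.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface MongoRepo { }

// Example: a repository interface flagged for the JPA infrastructure only.
@JpaRepo
interface PersonRepository { }
```

The annotations carry no behavior; they exist only so the include filters can tell the two scans apart.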
A real auto-detection is sort of impossible as it's hard to tell which store you're targeting solely from the repository interface declaration and at the point in time when the bean definitions are created we don't even know about any further infrastructure (an EntityManager or the like) yet.

Accessing JPA Class Mapping

Found an article in springsource which describes how to manipulate the schema name at runtime.
http://forum.springsource.org/showthread.php?18715-changing-hibernate-schemas-at-runtime
We're using pure JPA, however, where we're using a LocalContainerEntityManagerFactoryBean and don't have access to Session or Configuration instances.
Can anyone provide insight on how to access the metadata at runtime (via the EntityManager) to allow modifying the schema?
Thanks
Changing meta-data at runtime is JPA provider specific. JPA allows you to pass a Map of provider specific properties when creating an EntityManagerFactory or EntityManager. JPA also allows you to unwrap() an EntityManager to a provider specific implementation.
If you are using EclipseLink you can set the schema using the setTableQualifier() API on the Session's login.
You can't with standard JPA (which is your requirement, going by your question); it doesn't allow you to dynamically define metadata, only to view (a limited amount of) specified metadata via its metamodel API. You'd have to delve into implementation specifics to get further, but then your portability goes down the toilet at that point, which isn't a good thing.
JDO, on the other hand, does allow you to dynamically define metadata (and hence schema) using standardised APIs.

Domain Driven Design issue regarding repository

I am trying to implement the DDD so I have created the following classes
- User [the domain model]
- UserRepository [a central factory to manage the object(s)]
- UserMapper + UserDbTable [A Mapper to map application functionality and provide the CRUD implementation]
My first question is: when a model needs to communicate with the persistence layer, should it contact the Repository or the Mapper? Personally, I am thinking that it should ask the Repository, which will contact the Mapper and provide the required functionality.
Now, my second concern is that there should be only one repository for all the objects of the same class, which means I will be creating a singleton. But if my application has lots of domain models (let's say 20), then there will be 20 singletons, and that doesn't feel right. The other option is to use DI (dependency injection), but the framework I am using (Zend Framework 1.11) has no support for a DI container.
My third
UserRepository [a central factory to manage the object(s)]
In DDD, a Repository is not a Factory. The Repository is responsible for the middle and end of life of the domain object; the Factory is responsible for the beginning. Conceptually, persisting and restoring happen to the domain object in its middle life.
UserMapper + UserDbTable [A Mapper to map application functionality and provide the CRUD implementation]
These classes do not belong to the domain layer; this is data access. They would all be encapsulated by the Repository implementation (or would not exist at all if you use an ORM).
My first question is: when a model needs to communicate with the persistence layer, should it contact the Repository or the Mapper? Personally, I am thinking that it should ask the Repository, which will contact the Mapper and provide the required functionality.
The model does not need to communicate with the persistence layer. In fact, you should try to make your model as persistence-agnostic as possible. From the perspective of your domain model, the Repository is just an interface. The implementation of this interface belongs to a different layer: data access. The implementation is injected later, somewhere in your Application layer. The Application layer is aware of persistence and transactions. This is where you can implement the Unit of Work pattern (which also does not belong to the domain layer).
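A minimal sketch of that separation, in Java for consistency with the rest of this document (all names are invented; the domain layer sees only the interface, while the data-access implementation is chosen and injected from outside):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Domain layer: just an interface, no persistence details leak in.
interface UserRepository {
    Optional<String> findByName(String name);
    void add(String name);
}

// Data-access layer: one possible implementation, injected by the
// application layer; the domain model never references this class.
class InMemoryUserRepositoryImpl implements UserRepository {
    private final Map<String, String> users = new HashMap<>();

    public Optional<String> findByName(String name) {
        return Optional.ofNullable(users.get(name));
    }

    public void add(String name) { users.put(name, name); }
}
```

Swapping the in-memory implementation for a database-backed one changes nothing in the domain layer, which is exactly the persistence-agnosticism the answer describes.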
Now my second concern is that there should be only one repository for all the objects of same class, so that means I will be creating a singleton. But if my application has lots of domain models (lets say 20), then there will be 20 singletons.
First of all, you can have more than one Repository for a given domain object. This is what happens most of the time anyway, because you want to avoid 'method explosion' on your Repository interface. Secondly, a singleton Repository is a bad idea, because it would couple all consumers to a single implementation, which, among other things, would make unit testing hard. Thirdly, there is nothing wrong with having 20 or more Repositories; in fact, the more focused your classes are, the better. See SRP.
UPDATE:
I think that you are confusing the regular Factory pattern and the DDD Factory. In DDD terms, when an object is restored from the database, it already exists conceptually (even though it is a new object in memory). So it is the responsibility of the Repository to persist and restore it. The DDD Factory comes into play when a complex domain object begins its life, whether it will be a long-lived object (saved in the DB) or not.
Answering your second question: the ZF1 way would be to create a singleton per object class. You could have a factory/registry that creates these for you and returns the previously created instance when you ask for one that already exists. Alternatively, if you are using PHP 5.3, use a DI container such as Pimple or Zend\Di.