Accessing JPA Class Mapping - jpa

I found an article on the SpringSource forum which describes how to manipulate the schema name at runtime:
http://forum.springsource.org/showthread.php?18715-changing-hibernate-schemas-at-runtime
We're using pure JPA, however, with a LocalContainerEntityManagerFactoryBean, and don't have access to Session or Configuration instances.
Can anyone provide insight on how to access the metadata at runtime (via the EntityManager) to allow modifying the schema?
Thanks

Changing metadata at runtime is JPA provider-specific. JPA allows you to pass a Map of provider-specific properties when creating an EntityManagerFactory or EntityManager. JPA also allows you to unwrap() an EntityManager to a provider-specific implementation.
If you are using EclipseLink, you can set the schema using the setTableQualifier() API on the Session's login.
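For illustration, a minimal sketch of the EclipseLink route (the helper class name is made up; unwrap() is standard JPA 2.0, setTableQualifier() is EclipseLink-specific):

import javax.persistence.EntityManager;
import org.eclipse.persistence.sessions.DatabaseLogin;
import org.eclipse.persistence.sessions.Session;

public class SchemaSwitcher {
    // Unwraps the EntityManager to the EclipseLink Session and sets the
    // table qualifier (schema) on its login.
    public static void setSchema(EntityManager em, String schema) {
        Session session = em.unwrap(Session.class);
        DatabaseLogin login = session.getLogin();
        login.setTableQualifier(schema);
    }
}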

You can't with standard JPA (which, going by your question, is your requirement); it doesn't allow you to dynamically define metadata, only to view (a limited amount of) the specified metadata via its metamodel API. You'd have to delve into implementation specifics to get further, but your portability goes down the toilet at that point, which isn't a good thing.
JDO, on the other hand, does allow you to dynamically define metadata (and hence schema) using standardised APIs.
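For contrast, a minimal sketch of the read-only view standard JPA does give you, via the Metamodel API:

import javax.persistence.EntityManager;
import javax.persistence.metamodel.Attribute;
import javax.persistence.metamodel.EntityType;

public class MetamodelDump {
    // Prints every managed entity type and its attributes. This is
    // inspection only; there is no way to modify the mapping (e.g. the
    // schema) through this API.
    public static void dump(EntityManager em) {
        for (EntityType<?> type : em.getMetamodel().getEntities()) {
            System.out.println("Entity: " + type.getName());
            for (Attribute<?, ?> attr : type.getAttributes()) {
                System.out.println("  " + attr.getName() + " : "
                        + attr.getJavaType().getSimpleName());
            }
        }
    }
}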

Related

Can I use Spring Data JPA @Entity and Spring Data GemFire @Region together on the same POJO?

When I try to use the same POJO for Spring Data JPA integration with Spring Data GemFire, the repository always accesses the database for the POJO, but I want the repository to access data from GemFire, even though I added the annotations @EnableGemfireRepositories and @EnableEntityDefinedRegions.
I think it is because I added @Entity and @Region together on the same POJO class.
Can this be fixed, or do I need to separate it into 2 POJO classes, one for the database and one for GemFire?
Thanks
No, you do not need 2 separate POJOs. However, you do need 2 separate Repository interface definitions, 1 for JPA and a 2nd for GemFire. I have an example of such an implementation here, in the repository-example.
In the contacts-core module, I have an example.app.model.Contact class that is annotated with both JPA's @Entity annotation as well as SDG's @Region annotation, in addition to other annotations (e.g. Jackson).
I then create 2 Repository interfaces in the repository-example module, 1 for JPA, which extends o.s.d.jpa.repository.JpaRepository, and another for GemFire, which extends o.s.d.gemfire.repository.GemfireRepository. Notice too that these Repositories are separated by package (i.e. example.app.repo.jpa vs. example.app.repo.gemfire) in my example.
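A minimal sketch of the two interfaces (Contact and the package names follow the example described above; the Long ID type is an assumption):

// file: example/app/repo/jpa/ContactRepository.java
package example.app.repo.jpa;

import org.springframework.data.jpa.repository.JpaRepository;
import example.app.model.Contact;

public interface ContactRepository extends JpaRepository<Contact, Long> {
}

// file: example/app/repo/gemfire/ContactRepository.java
package example.app.repo.gemfire;

import org.springframework.data.gemfire.repository.GemfireRepository;
import example.app.model.Contact;

public interface ContactRepository extends GemfireRepository<Contact, Long> {
}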
Keep in mind that Spring Data enforces a strict policy mode which prevents ambiguity if the application Repository definition (e.g. ContactRepository) is generic, meaning the interface extends 1 of the common Spring Data interfaces (o.s.d.repository.Repository, o.s.d.repository.CrudRepository or perhaps o.s.d.repository.PagingAndSortingRepository) and resides in a package scanned by both JPA and GemFire. This is the same for any Spring Data module that supports the Repository abstraction, including, but not limited to, MongoDB and Redis.
You must be very explicit in your declarations and intent. While it is generally not a requirement to extend store-specific Repository interface definitions (e.g. o.s.d.gemfire.repository.GemfireRepository), and you may rather extend a common interface (e.g. o.s.d.repository.CrudRepository), it is definitely advisable to put your different, per-store Repository definitions in separate packages and configure the scan accordingly. It is good practice to limit the scan in the first place.
Some users are tempted to want a single, "reusable" Repository interface definition per application domain model type (e.g. Contact) for all the stores they persist the POJO to. For example, a single ContactRepository for both JPA and GemFire. This is ill-advised.
This stems from the fact that not all stores are equal: while most (though not all, so be careful) support basic CRUD operations and simple queries (e.g. findById(..)), they differ in query capabilities (e.g. JOINs) and features (e.g. paging). For example, SDG does not, as of yet, support paging.
So the point is: use 1 domain model type, but define a Repository per store, clearly separated by package. Then you can configure the Spring Data Repository infrastructure accordingly. For instance, for JPA I have a configuration which points to the JPA-based ContactRepository using the ContactRepository class itself (which is type-safe and better than specifying the package by name using the basePackages attribute). Then, I do the same for the GemFire-based ContactRepository here.
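A sketch of that configuration, using the type-safe basePackageClasses attribute (the configuration class name is made up):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@Configuration
// Fully-qualified class literals are used because both interfaces share
// the simple name ContactRepository; each scan is limited to its package.
@EnableJpaRepositories(basePackageClasses = example.app.repo.jpa.ContactRepository.class)
@EnableGemfireRepositories(basePackageClasses = example.app.repo.gemfire.ContactRepository.class)
class RepositoryConfig { }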
By following this recipe, all is well, and you can inject the appropriate Repository (by type) into the service class that requires it. If you have a service class that requires both Repositories, then you must inject each of them appropriately.
Hope this helps!

How does Spring Data know which store to back a repository with if multiple modules are used?

In a Spring Data project, if I am using multiple types of repositories, i.e. a JPA repository and a Mongo repository, and I am extending CrudRepository, how does Spring Data know which store to choose for that repository? It could use JPA or Mongo. Is it based on the annotation (@Document or @Entity) added to every persistent entity?
The decision about which store to back a proxy created for a Spring Data repository interface with is made solely based on your configuration setup. Assume you have the following config:
@Configuration
@EnableJpaRepositories("com.acme.foo")
@EnableMongoRepositories("com.acme.foo")
class Config { }
This is going to blow up at some point, as the interfaces in package com.acme.foo are detected by both the MongoDB and the JPA infrastructure. To resolve this, both the JavaConfig and the XML support allow you to define include and exclude filters, so that you can use naming conventions, additional annotations or the like:
@Configuration
@EnableJpaRepositories(basePackages = "com.acme.foo",
    includeFilters = @Filter(JpaRepo.class))
@EnableMongoRepositories(basePackages = "com.acme.foo",
    includeFilters = @Filter(MongoRepo.class))
class Config { }
In this case, the two annotations @JpaRepo and @MongoRepo (to be created by you) would be used to selectively trigger the detection by annotating the relevant repository interfaces with them.
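A sketch of what one of those marker annotations and its use might look like (JpaRepo is from the snippet above; the Customer domain type and repository are made up):

// file: JpaRepo.java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface JpaRepo { }

// file: CustomerRepository.java (in the scanned package com.acme.foo)
import org.springframework.data.repository.CrudRepository;

// Only the JPA scan (includeFilters = @Filter(JpaRepo.class)) picks this up.
@JpaRepo
public interface CustomerRepository extends CrudRepository<Customer, Long> {
}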
A real auto-detection is sort of impossible, as it's hard to tell which store you're targeting solely from the repository interface declaration, and at the point in time when the bean definitions are created, we don't even know about any further infrastructure (an EntityManager or the like) yet.

Auditing with Spring Data JPA

I am using Spring Data JPA in an application in which all entity objects need auditing. I know that I can have each entity either implement Auditable or extend AbstractAuditable, but my problem comes with the overall auditing implementation.
The example on the Spring Data JPA reference pages seems to indicate that you need an AuditorAware bean for each entity. Is there any way to avoid this extra code and handle it in one place or through one configuration?
The generic parameter of AuditorAware is not the entity you want to capture the auditing information for but rather the creating/modifying one. So it will typically be the user currently logged in or the like.
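In other words, a single bean suffices for the whole application. A minimal sketch, assuming Spring Security is used to resolve the logged-in user (the class name is made up; in older Spring Data versions getCurrentAuditor() returns the value directly rather than an Optional):

import java.util.Optional;
import org.springframework.data.domain.AuditorAware;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class SpringSecurityAuditorAware implements AuditorAware<String> {
    // Returns the current auditor (the user making the change), not the
    // entity being audited.
    @Override
    public Optional<String> getCurrentAuditor() {
        Authentication authentication =
                SecurityContextHolder.getContext().getAuthentication();
        return Optional.ofNullable(authentication).map(Authentication::getName);
    }
}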

How to read the schema used by a JPA implementation

My EntityManager is using a persistence unit that uses a data source provided by our WebSphere configuration. The DS configuration includes an environment-specific DB to use.
The EM successfully uses this schema, but I can't figure out a way to log or display the schema being used. I was thinking something like em.getCurrentSchema would be available...
Any help would be great, thanks.
There is no API to do this in JPA. You could do it via JDBC, using DatabaseMetaData.
JPA is there to provide an object view of the data and to ease persistence of those objects, not to present datastore specifics to the user.
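A sketch of the JDBC route, assuming you can obtain the configured DataSource (e.g. via JNDI in WebSphere); Connection.getSchema() requires JDBC 4.1 or later:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class SchemaLogger {
    public static void logSchemas(DataSource dataSource) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            // The schema this connection is currently set to:
            System.out.println("Current schema: " + con.getSchema());
            // All schemas visible through this connection:
            DatabaseMetaData metaData = con.getMetaData();
            try (ResultSet rs = metaData.getSchemas()) {
                while (rs.next()) {
                    System.out.println("Schema: " + rs.getString("TABLE_SCHEM"));
                }
            }
        }
    }
}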

How can I leverage JPA when code is generated?

I have classes for entities like Customer, InternalCustomer, ExternalCustomer (with the appropriate inheritance) generated from an XML schema. I would like to use JPA (suggest a specific implementation in your answer if relevant) to persist objects of these classes, but I can't annotate them since they are generated: when I change the schema and regenerate, the annotations will be wiped. Can this be done without using annotations or even a persistence.xml file?
Also, is there a tool to which I can provide the classes (or schema) as input and have it give me the SQL statements to create the DB (or even create it for me)? It would seem that, since I have a schema, all the information needed to create the DB should be in there. I am not talking about creating indexes or any tuning of the DB, just creating the right tables etc.
thanks in advance
You can certainly use JDO in such a situation: dynamically generating the classes, the metadata, and any byte-code enhancement, and then performing runtime persistence, making use of the class loader in which your classes have been generated and enhanced. See:
http://www.jpox.org/servlet/wiki/pages/viewpage.action?pageId=6619188
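A sketch of the JDO runtime metadata API (JDO 3.0+) that makes this possible; the package, class and field names are made up for illustration:

import javax.jdo.PersistenceManagerFactory;
import javax.jdo.annotations.PersistenceModifier;
import javax.jdo.metadata.ClassMetadata;
import javax.jdo.metadata.FieldMetadata;
import javax.jdo.metadata.JDOMetadata;
import javax.jdo.metadata.PackageMetadata;

public class DynamicMetadata {
    // Defines persistence metadata for a generated Customer class at
    // runtime, with no annotations and no packaged XML metadata.
    public static void register(PersistenceManagerFactory pmf) {
        JDOMetadata jdoMetadata = pmf.newMetadata();
        PackageMetadata pkg = jdoMetadata.newPackageMetadata("com.acme.model");
        ClassMetadata cls = pkg.newClassMetadata("Customer");
        cls.setTable("CUSTOMER");
        FieldMetadata name = cls.newFieldMetadata("name");
        name.setPersistenceModifier(PersistenceModifier.PERSISTENT);
        pmf.registerMetadata(jdoMetadata);
    }
}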
JPA doesn't have such a metadata API unfortunately.
--Andy (DataNucleus)