Is there a REST-based JPA provider?

A common requirement is to expose a JPA data source via REST. I want the opposite: a JPA provider that works by sending HTTP requests to a RESTful persistence service. The benefit would be that any application written against the JPA API could easily switch between a traditional JPA provider (e.g. Hibernate) and the REST-based JPA provider, with no code changes required.
So my question is whether there is an existing REST-based JPA provider, and if not, would such a thing even be feasible?

DataNucleus has a JPA implementation over a RESTful JSON API. However, your REST API must adhere to their conventions: http://www.datanucleus.org/products/accessplatform_3_0/json/support.html
Their S3 and Google Storage stores extend the JSON API.
EDIT: I had put a link to the wrong product in my original answer.
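For illustration, here is a minimal sketch of what bootstrapping such a setup might look like in code. It assumes the DataNucleus JSON store and the connection-URL convention from its documentation linked above; the persistence-unit name, entity class, and endpoint URL are made up for the example.

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Persistence;

    // Hypothetical entity used only for this sketch.
    @Entity
    class Note {
        @Id @GeneratedValue Long id;
        String text;
        Note() { }
        Note(String text) { this.text = text; }
    }

    public class RestJpaExample {
        public static void main(String[] args) {
            // Point DataNucleus at a RESTful JSON endpoint instead of a JDBC URL.
            // The property name and "json:" URL scheme follow the DataNucleus JSON
            // store documentation linked above; the endpoint itself is made up.
            Map<String, String> props = new HashMap<>();
            props.put("datanucleus.ConnectionURL", "json:http://localhost:8080/persistence/");

            // "rest-unit" is a hypothetical persistence unit declared in persistence.xml.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("rest-unit", props);
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                em.persist(new Note("hello"));   // the store plugin turns this into HTTP calls
                em.getTransaction().commit();
            } finally {
                em.close();
                emf.close();
            }
        }
    }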

First of all, JPA is really designed for relational databases...
Second, there is no standard for RESTful persistence so a JPA-REST provider would be specific to that REST persistence application.
You could implement something using EclipseLink's EIS (JCA) support; you would just have to create the REST JCA adapter implementation yourself.
If you mean one of the NoSQL databases when you say "RESTful persistence service", then maybe. Some of these NoSQL DBs provide a REST-based interface, and some JPA providers are starting to support NoSQL DBs. See http://wiki.eclipse.org/EclipseLink/FAQ/NoSQL.
Honestly you'd be better off just implementing the DAO pattern and abstracting your CRUD(L) operations. This is exactly what DAOs are for.
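To make the DAO suggestion concrete, here is a minimal sketch; the entity and names are illustrative, not from the question. The application codes against the interface, and a JPA-backed implementation can later be swapped for a REST-backed one without touching callers.

    import java.util.List;
    import java.util.Optional;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;

    // Hypothetical entity used only for this sketch.
    @Entity
    class Customer {
        @Id Long id;
        String name;
    }

    // The abstraction the rest of the application depends on.
    interface CustomerDao {
        Customer save(Customer customer);       // Create / Update
        Optional<Customer> findById(long id);   // Read
        List<Customer> findAll();               // List
        void delete(Customer customer);         // Delete
    }

    // JPA-backed implementation; a REST-backed CustomerDao could implement the
    // same interface using an HTTP client instead of an EntityManager.
    class JpaCustomerDao implements CustomerDao {
        private final EntityManager em;

        JpaCustomerDao(EntityManager em) {
            this.em = em;
        }

        public Customer save(Customer customer) {
            return em.merge(customer);
        }

        public Optional<Customer> findById(long id) {
            return Optional.ofNullable(em.find(Customer.class, id));
        }

        public List<Customer> findAll() {
            return em.createQuery("SELECT c FROM Customer c", Customer.class).getResultList();
        }

        public void delete(Customer customer) {
            em.remove(em.contains(customer) ? customer : em.merge(customer));
        }
    }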

There are several alternatives out there. For example, take a look at "JEST":
https://www.ibm.com/developerworks/mydeveloperworks/blogs/pinaki/entry/rest_and_jpa_working_together71?lang=en
REST is not an API (Application Programming Interface). It is an
architectural style that prescribes not to have an API to access the
facilities of a service.
...
On the opposite end of the stateless spectrum lies the principle of
JEE Application Servers -- where the server maintains state of
everything and there exists one (or multiple) API for everything. Such
server-centric, stateful, API-oriented principles of JEE led to
several roadblocks.
...
I found REST principles concise and elegant. I also find Java
Persistence API (JPA) providers have done a great job in standardizing
and rationalizing the classic object-relational impedance mismatch.
JPA is often misconstrued as a mere replacement of JDBC -- but it is
much more than JDBC and even more than Object-Relational Mapping
(ORM). JPA is a robust way to view and update relational data as an
object graph. Also, core JPA notions such as detached transactions,
customizable closure, and persistent identity seem to align neatly
with REST principles.
Further links:
http://openjpa.apache.org/jest.html
http://www.ibm.com/developerworks/java/library/j-jest/index.html?ca=drs-

Related

Does JpaTokenStore have any downsides compared to JdbcTokenStore for Spring Security OAuth?

I currently use JPA via Hibernate in my application. Since Spring Security OAuth2 provides JdbcTokenStore, I started using it. The problem with that is that I cannot use the cache which all my entities in the application currently share.
It hits the database in a separate flow.
I am thinking of implementing a JpaTokenStore that is backed by JPA and leverages the cache advantages that come with it.
Has anyone tried implementing this, or does anyone see downsides to this approach?
In one project I've implemented org.springframework.security.oauth2.client.token.ClientTokenServices with JPA and didn't notice any problems. I was able to use all the standard features of JPA, including @Transactional on JPAClientTokenServices#saveAccessToken.
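As a rough illustration of the approach described in this answer, here is a minimal sketch of a JPA-backed ClientTokenServices. The ClientTokenServices interface is the Spring Security OAuth2 one mentioned above; the StoredClientToken entity, its fields, and the key scheme are assumptions made up for this sketch.

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    import org.springframework.security.core.Authentication;
    import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;
    import org.springframework.security.oauth2.client.token.ClientTokenServices;
    import org.springframework.security.oauth2.common.DefaultOAuth2AccessToken;
    import org.springframework.security.oauth2.common.OAuth2AccessToken;
    import org.springframework.transaction.annotation.Transactional;

    // Hypothetical entity for storing tokens, keyed by resource id + user name.
    @Entity
    class StoredClientToken {
        @Id String key;
        String tokenValue;
        protected StoredClientToken() { }
        StoredClientToken(String key, String tokenValue) {
            this.key = key;
            this.tokenValue = tokenValue;
        }
    }

    public class JpaClientTokenServices implements ClientTokenServices {

        @PersistenceContext
        private EntityManager em;

        // Key scheme is an assumption; use whatever uniquely identifies a token for you.
        private String key(OAuth2ProtectedResourceDetails resource, Authentication authentication) {
            return resource.getId() + ":" + (authentication == null ? "" : authentication.getName());
        }

        @Override
        public OAuth2AccessToken getAccessToken(OAuth2ProtectedResourceDetails resource,
                                                Authentication authentication) {
            StoredClientToken stored = em.find(StoredClientToken.class, key(resource, authentication));
            return stored == null ? null : new DefaultOAuth2AccessToken(stored.tokenValue);
        }

        @Override
        @Transactional
        public void saveAccessToken(OAuth2ProtectedResourceDetails resource,
                                    Authentication authentication,
                                    OAuth2AccessToken accessToken) {
            em.merge(new StoredClientToken(key(resource, authentication), accessToken.getValue()));
        }

        @Override
        @Transactional
        public void removeAccessToken(OAuth2ProtectedResourceDetails resource,
                                      Authentication authentication) {
            StoredClientToken stored = em.find(StoredClientToken.class, key(resource, authentication));
            if (stored != null) {
                em.remove(stored);
            }
        }
    }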
There is nothing stopping you from doing it, and plenty of people do use JPA for all sorts of things, but IMO JPA is not ideal for handling storage of identity data. JPA is designed and optimized for caching data for the duration of a JDBC connection (basically a transaction), while identity data typically has a different and much longer lifetime. If you store long-lived data using JPA, you have to deal with the consequences of what happens when you access it outside its normal lifetime, e.g. by using DTOs, which to some extent negates the benefits of using it in the first place.

How to manually hook up a JayData model to a (non-OData) RESTful service

I like all the features that JayData provides. I am wondering whether, for the occasional non-OData RESTful service, there is a way to manually hook up CRUD operations to my existing JayData entity definitions so that I can take advantage of all the KendoUI/Knockout goodness that comes with it.
Is there any example where a JayData entity definition is manually hooked up to a RESTful service URL, kind of like the jQuery method?
Thanks
Our webapi provider is what you are after. Do not worry about its name: WebAPI is a Microsoft framework for REST APIs, hence the name, but it should work with other RESTful endpoints too (PHP, Java, Ruby, etc.). Of course, it is only good for CRUD, since filtering, paging, ordering, and projection are standardized only in OData. Also, paging needs length(), so that must be implemented on the server side, too.
Give it a try and tell us about your experience, good or bad; we're here to help you.
Or consider using OData. JayData can act as an OData endpoint on the server side, and we also have a hosted OData service.

How can I set up OData and EF without coupling to my database structure?

I really like OData (WCF Data Services). In past projects I have coded up so many web services just to allow different ways of reading my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that how we are doing OData is little more than giving the client application a connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) data model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will be coupled to the database now. (If a table or column is refactored then the clients will have to change too)
EF offers a bit of flexibility in how your data is presented and could be used to hide some minor database changes that don't affect the client apps, but I have found it to be quite limited. (See this post for an example.) I have found that the POCO templates (while nice for allowing separation of the model and the entities) also do not offer very much flexibility.
So, the question: what do I tell my co-worker? How do I set up my WCF Data Services so that they use business-oriented contracts (as they would if every read operation used a standard WCF SOAP-based service)?
Just to be clear, let me ask this a different way: how can I decouple EF from WCF Data Services? I am fine with making up my own contracts and using AutoMapper to convert between them, but I would like not to go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also change the provider used by WCF Data Services. It means you will no longer pass the EF context as the generic parameter to the DataService base class. This will be OK if you have read-only services, but once you expect any data modifications from clients you will have a lot of work to do.
Data services based on an EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context class".
Data services are a technology for quickly creating services that expose your data. They are coupled to their context, and it is the responsibility of the context to provide abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can make some abstractions in the EDMX and you can make projections (DefiningQuery, QueryView), etc., but all these features have limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference: a connection to the database only ensures access and execution permissions, but it does not ensure data security. WCF Data Services offer data security because you can create interceptors that add filters to queries so that a user retrieves only the data he is allowed to see, or that check whether he is allowed to modify the data. That is the difference you can point out to your colleague.
As for abstraction: do you want a quick and easy solution or not? You can inject an abstraction layer between the service and the ORM, but then you need to implement the methods mentioned above, and you have to test it all.
The simplest approach:
DO NOT PUBLISH YOUR TABLES ;)
Make a separate schema.
Add views to it.
Put those views into EF and publish them.
The views are decoupled from the tables and thus can be simplified and refactored separately.
This is the standard approach, also used for reporting.
Apart from enabling more granular data authorisation (based on certain field values, etc.), OData also makes your data accessible via open standards like JSON/XML over HTTP using OAuth. This is very useful for web/mobile applications. You could create a web service to expose your data, but that would warrant a change every time your clients' data requirements change (e.g. extra fields needed), whereas OData handles this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, as it only allows text-based (HTTP) calls, which can be inspected and verified for security threats by network firewalls.
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with Simple.Data microORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
UPDATE: My recommendations are about the choice of client, while your question is about setting up your server side, so of course they are not what you are asking. I will leave my answer, however, so you are aware of the client alternatives.

Do Domain Classes usually get JPA or JAXB Annotations or both?

I have a Java enterprise application that provides a web service, has a domain layer, and a hibernate persistence layer. In this particular case, there is not a huge difference (at the moment) between the objects I send over the wire, the domain objects, and the persistence objects.
Currently the application uses DTOs on the persistence side and annotates the domain classes with JAXB annotations. However, the more I read and think about it, the more this seems backwards! (Not to mention there is a lot of code to support the mindless back and forth between the DTOs and the domain objects.) It seems like most architects suggest putting JPA annotations on the domain model and creating DTOs for sending objects over the wire.
In my case, could I put both the JAXB and JPA (Hibernate) annotations on my domain classes?
Keeping my web service facade, my domain, and my persistence all tightly bundled together seems easy to maintain, but it does concern me, as these may need to change over time. Would it be smarter to create a set of DTO classes for the web service side and skip the DTOs on the persistence side?
There's no functional reason not to annotate the same class with both JPA and JAXB annotations; I've done it myself on occasion. It does become a bit hard to read, though, and sometimes you want different class design trade-offs for JAXB and JPA. In my experience, these trade-offs usually mean you end up with two class models.
I agree that using the same model classes is the right approach. If you are concerned about annotation clutter, you could use a JAXB implementation (such as EclipseLink JAXB) that provides a mechanism for externalizing the metadata:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
Also, since you are using a JPA model, EclipseLink JAXB (MOXy) has extensions that make this easier:
http://bdoughan.blogspot.com/2010/07/jpa-entities-to-xml-bidirectional.html
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/JPA
Here is an example of using one model with JAXB & JPA to create a RESTful service:
Part 1 - The database
Part 2 - JPA entities
Part 3 - Mapping entities to XML using JAXB
Part 4 - The RESTful service
Part 5 - The client
There is no problem in using both annotations on the same class. I even tend to encourage this, because then you don't have to copy-paste when changes occur.
In some cases, properties differ in behaviour; for example, an auto-generated ID might not need to be marshalled, so @XmlTransient and @Transient are then combined. It does become a bit hard to read, but not too hard, because it is obvious what all the annotations mean.
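As a small illustration of the dual-annotation approach discussed in these answers, here is a sketch of a domain class carrying both JPA and JAXB annotations; the class and its fields are made up for the example.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Version;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlRootElement;
    import javax.xml.bind.annotation.XmlTransient;

    @Entity                                 // JPA: persisted to a table
    @XmlRootElement                         // JAXB: marshalled as the root XML element
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Customer {

        @Id
        @GeneratedValue
        private Long id;                    // exposed both in the database and on the wire

        private String name;                // plain field: mapped by both JPA and JAXB

        @Version
        @XmlTransient                       // persisted for optimistic locking, hidden from clients
        private Long version;

        // getters and setters omitted for brevity
    }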
Is anyone else tempted to put Atom link objects in their persisted domain because they have committed to defining their web service XML structure there? It seems strange to me to do this. HATEOAS links seem like a good idea, but the persisted domain and the service implementation (not the web service) have no interest in Atom links. Then again, using XML annotations and having Jersey serialize my domain for me certainly is convenient. Another downside of this approach, though, is that it is too easy to impact your web service consumers at runtime with refactoring of the persistence domain "layer".
I know this question is a bit old, but I thought I'd weigh in anyway, since this is an issue I've recently come across. My recommendation would be to leave your JAXB-annotated classes alone, since any schema change will require regenerating those classes, meaning you would have to re-enter any Hibernate annotations, etc. manually. This may be a slightly outdated solution, but I think it would be perfectly reasonable to create a Hibernate mapping file (.hbm.xml) to house the mappings externally. This is a little more flexible, less cluttered, and just as useful in my opinion.

Difference between JPA and JDO?

I want to develop my project on Google App Engine and use Google BigTable as the database. For persistence I have two options: JPA and JDO. Can you suggest which one to use? Both are new to me and I will need to learn one, so I would like to focus on it after your replies.
Since you're using DataNucleus, see their FAQ on JDO vs JPA: http://www.datanucleus.org/products/accessplatform_3_0/jdo_jpa_faq.html
DataNucleus AccessPlatform supports both JDO and JPA specifications of Java persistence. As such it has no "vested interest" in either technology, believing that it is for users to choose which they like best. There has been much FUD on the web about JDO and JPA, largely perpetrated by RDBMS vendors. This FAQ corrects many of these points
A key difference is that JDO supports a rich domain model (logic together with data): all persistent classes can have a reference to the current PersistenceManager, issue queries, and, I believe, have fields that are not persistent by default.
JPA does not support such a software design. An entity does not have a reference to its EntityManager; to get one you have to resort to ThreadLocal variables, which is not a very elegant or robust solution.
Since GAE BigTable is not an RDBMS, JDO is the better choice. There are some detailed comparison articles from Apache JDO; they were helpful for me.
JPA persists Java objects to relational data via ORM, while JDO is a more general specification for Java object persistence. So using JDO will give you more freedom in storage implementation options for your objects.
JPA is the leading Java standard for persistence, so I'd say use JPA if you are using an RDBMS and require ORM.
Hibernate is generally used as the JPA implementation. If you need some extra features you can use Hibernate-specific annotations.
This question has already been discussed here: JDO vs JPA for Java on Google App Engine