GWT non-Java backend application structure

Does anyone know of a good example of a GWT application with a non-Java backend? Something like the "Contacts" application on the official pages of GWT. I'm interested in the following topics specifically (this is how I see the "back" part of the application, but it may be different, of course):
DTO serialization.
Communication layer (the very back one). Should it use generics and work with any abstract DTO? Is there any other proven approach?
Service layer, which uses the communication layer with specific DTOs.
Requests caching. Should it be implemented on the service or the communication level?
Good abstraction. So we can easily substitute any parts for testing and other purposes, like using different serializers (e.g. XML, JSON) or different server behaviors when managing user sessions (the URL might change once the user is logged in).
I know there are many similar topics here, but I haven't found one which is focused on the structure of the client part.

Use REST on both sides. PHP/Python REST server-side. GWT client-side.
Don't use GWT-RPC.
The following gives a guideline for Java-to-Java REST, but there is leeway to move away from server-side Java. It also explains why REST is appropriate:
http://h2g2java.blessedgeek.com/2011/11/gwt-with-jax-rs-aka-rpcrest-part-0.html
REST is an industry-established pattern (Google, Yahoo, in fact every stable-minded establishment deploys REST services).
REST is abstracted as an HTTP-level data structure, and Java, PHP, and Python all have established libraries that comply with it to ensure DTO integrity.
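For a concrete picture, here is a minimal sketch of the GWT side of such a REST call, using the stock RequestBuilder and JSON types that ship with GWT (the /api/contacts/42 URL and the "name" field are invented for the example; any PHP/Python server emitting JSON works the same way):

    import com.google.gwt.http.client.*;
    import com.google.gwt.json.client.*;

    // Lives in some client-side class of your GWT module.
    void loadContact() {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, "/api/contacts/42");
        builder.setHeader("Accept", "application/json");
        try {
            builder.sendRequest(null, new RequestCallback() {
                public void onResponseReceived(Request request, Response response) {
                    if (response.getStatusCode() == 200) {
                        // Hand-rolled DTO deserialization (overlay types or AutoBeans also work).
                        JSONObject json = JSONParser.parseStrict(response.getText()).isObject();
                        String name = json.get("name").isString().stringValue();
                    }
                }
                public void onError(Request request, Throwable exception) {
                    // Network-level failure; surface it to the service layer.
                }
            });
        } catch (RequestException e) {
            // Thrown when the request cannot even be dispatched.
        }
    }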
Communication layer (the very back one). Should it use generics ???
I don't understand the question or why it exists. Just use the REST pattern to preserve integrity between the non-homogeneous languages on server and client and across the HTTP request/response.
If you are using Java at the back end, there is no escape from using generics. Generics save code. But using generics extensively requires the programmer to have an equally extensive visual capacity to visualize them. If your back end is on PHP or Python, are there generics for PHP? Python generics? Might as well stay in Java land or C# land and forget about a Java-free service provider.
Did you mean DTO polymorphism? Don't try polymorphism or decide on it until after you have established your service. Then adaptively and with agility introduce polymorphism into your DTOs if you really see the need. But try to avoid it because with JSON data interchange, it gets rather confusing between server and client. Especially if they don't speak the same programming language.
If you are asking about HTTP-level generics: I don't know of any framework, not SOAP, not REST, where generics could be carried by the XML or JSON. Is there one?
Service layer? REST.
Requests caching? Cache at every appropriate opportunity. Have the service provider cache query results for items common and static to all sessions, like menus, drop-down box choices, labels, etc. Cache your history and places.
On the GWT side, cache records so that the forward/backward button does not trigger an inadvertent query; a sketch follows below. Use the MVP pattern and history to manage history traversal that might trigger a redisplay of info.
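To illustrate the client-side caching idea, a hand-rolled sketch (the ResponseCache class and keying on the request URL are my own assumptions, not a GWT facility):

    import java.util.HashMap;
    import java.util.Map;
    import com.google.gwt.http.client.*;
    import com.google.gwt.user.client.rpc.AsyncCallback;

    // Naive in-memory cache keyed by URL: back/forward navigation re-renders
    // from memory instead of re-querying the server.
    public class ResponseCache {
        private final Map<String, String> cache = new HashMap<String, String>();

        public void fetch(final String url, final AsyncCallback<String> callback) {
            String cached = cache.get(url);
            if (cached != null) {
                callback.onSuccess(cached); // no HTTP round trip
                return;
            }
            RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, url);
            try {
                builder.sendRequest(null, new RequestCallback() {
                    public void onResponseReceived(Request request, Response response) {
                        cache.put(url, response.getText());
                        callback.onSuccess(response.getText());
                    }
                    public void onError(Request request, Throwable e) {
                        callback.onFailure(e);
                    }
                });
            } catch (RequestException e) {
                callback.onFailure(e);
            }
        }
    }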
If you are talking about unified info abstraction, you should start your project with JAX-RS to define/test the API and perform data abstraction, without performing any business logic.
Then, once your HTTP-level APIs and DTOs are defined, convert the server side to the language of your choice and proceed to write more complex code.
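A minimal JAX-RS sketch of that API-first phase might look like this (the /contacts path and the DTO fields are invented; the stubs pin down URIs and JSON shapes before any business logic exists):

    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;

    @Path("/contacts")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class ContactResource {

        @GET @Path("/{id}")
        public ContactDto get(@PathParam("id") long id) {
            return new ContactDto(id, "Alice"); // canned data, no business logic yet
        }

        @POST
        public ContactDto create(ContactDto dto) {
            return dto; // echo back; the JSON shape is the real contract
        }
    }

    // Plain DTO; once its JSON shape is agreed, the PHP/Python rewrite must preserve it.
    class ContactDto {
        public long id;
        public String name;
        ContactDto() {}
        ContactDto(long id, String name) { this.id = id; this.name = name; }
    }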
BTW, I don't dig your terminology "Backend".
We normally use the terminology client-side for the service consumer, server-side for the service provider, backend for data repository/persistence access, and mid-tier or middleware for the intervening/auxiliary software required to provide mathematical/scientific/graphical analysis/synthesis.
If our terminologies did not coincide, I probably answered this question wrong.

Related

Conceptual doubts regarding REST and SOAP: Backends for Frontend

I have two years in the IT industry. I love to read a lot, but when I go deep into some subjects I see a lot of contradictions in articles and forums, or terms that are used interchangeably.
I understand the difference between Soap and Rest.
When we want to communicate between backends, we can use either of these 2 approaches, each with its advantages and disadvantages.
Situation:
If I have an application, which may or may not be monolithic, where I have a backend and only one frontend that consumes it: usually we create a REST API so that our frontend can consume it, but we would never think about exposing our backend with SOAP (for lots of reasons).
Questions:
1 - Is it okay to say that REST, in addition to allowing us to exchange information between applications (backend to backend), is also useful for exposing services to our frontend, while SOAP is only useful for server-to-server communication?
2 - And finally, if I expose a backend only for one frontend, is it okay to say that we expose a web service, or do we conceptually say that it is a backend for frontend?
Question 1: No, the first question rests on a wrong assumption. We can say that in SOAP, XML is the only means of communication, while in REST the most commonly accepted means is JSON, though other formats like XML, PDF, and HTML work too. And of course, XML can be converted by the server back into the UI language, and an XML request can be deciphered at the server for a response. So it is not okay to say that SOAP is only for server-to-server communication.
2. No. When you have exposed a backend only for consumption by a frontend, you can typically say that it is a backend for numerous frontend client requests. But IMO, a backend for a frontend is a monolithic web application, both bundled in one WAR; in that sense, any UI request can request a response from the backend web service. Hope I am able to clear up your understanding of web services.
I see that in your question there are actually 4 embedded independent topics. And probably because they are always used in conjunction, they are sometimes tough to tell apart.
I will give a short answer first:
REST and SOAP can both be used for client-server and server-server integrations. But the choice will depend on questions like whether you want a server-side or client-side UI technology, whether it is a single-page application or a portal technology, etc.
If you expose a single backend for a single frontend, it's technically a BFF, although the term BFF is used only in the case where you have separate backends for each type of frontend application, e.g. one for mobile, one for web, one for IoT devices, etc.
The long answer is to clarify the 4 principles. Let me give it a try by separating the topics into the four headings below:
1. Backend(Business Layer) vs Backend for frontend(BFF)
In classical 3-tier architecture (UI-Business Layer-Database) world, the middle-layer that consists of the business and integration logic is mostly referred to as backend/business layer.
This layer can be separated from the UI/frontend using several different options like APIs (REST/SOAP), RPC, servlet technology, etc. The limitation of this 3-tier architecture is that it is still tightly coupled to the type of users, and the use cases are limited to web/browser-based ones. It is not a good choice when you want to reuse the business layer for both web and mobile, as mobile applications are required to be lightweight by principle.
That's where we lean on multi-tier architecture with Backend For Frontend (BFF) as a savior. It's just a methodology to segregate the business layers based on their consumers.
2. Monolith vs SOA vs Microservices(Optimized SOA)
In a monolith world, all the code components, mostly the UI and business layer, sit together in a tightly coupled fashion. The simplest example would be a JavaServer Pages (JSP) application with Java as the business layer. These are typically server-side UI technologies.
In Service-Oriented Architecture (SOA), the use cases revolve around leveraging reusable business-layer functions, aka services. Here one has to deal with UI-server, inter-service, and server-server integration scenarios. It is heavily service-dependent, meaning it's like a spider web of dependent applications.
Microservices are an extension of SOA, but the approach is to keep a resource in focus instead of services, to reduce the spider-web dependencies. Hence, self-sufficient and standalone service clusters are the base of a microservices architecture.
3. SOAP vs REST Webservices
SOAP stands for Simple Object Access Protocol, typically used by the business layer to provide user-defined methods/services to manipulate an object. For example, look at the names of the services for accessing a book collection:
To get a book: getABook()
Get the whole list of books: listAllBooks()
Find a book by name: searchABook(String name)
Update a book's details: updateABookDetails()
On the other hand, REST stands for Representational State Transfer, which transfers the state of a web resource to the client using the underlying HTTP methods. The above services for accessing a book collection would look like this (a JAX-RS sketch follows the list):
To get a book: /book (HTTP GET)
Get the whole list of books: /books (HTTP GET)
Find a book by name: /book?name={search} (HTTP GET)
Update a book's details: /book/{bookId} (HTTP PATCH/PUT)
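In JAX-RS terms, the REST version of the book collection sketches out roughly like this (the class, the in-memory map, and the Book fields are mine; only the URI/verb mapping comes from the list above):

    import java.util.*;
    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;

    @Path("/")
    @Produces(MediaType.APPLICATION_JSON)
    public class BookResource {
        private final Map<Long, Book> books = new HashMap<Long, Book>(); // stand-in for a real store

        @GET @Path("books")
        public Collection<Book> listAllBooks() {                   // GET /books
            return books.values();
        }

        @GET @Path("book")
        public Book searchABook(@QueryParam("name") String name) { // GET /book?name={search}
            for (Book b : books.values()) {
                if (b.name.equals(name)) return b;
            }
            throw new NotFoundException();                         // 404 when nothing matches
        }

        @PUT @Path("book/{bookId}")
        public Book updateABook(@PathParam("bookId") long id, Book changes) { // PUT /book/{bookId}
            books.put(id, changes);
            return changes;
        }
    }

    class Book { public long id; public String name; } // bare DTO for the example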
4. How to make a correct choice of architecture?
Spot the diversity of the application user groups and usages: This will help to understand the platform(web/mobile/IoT/etc), nature of the application and session-management.
Determine the estimated/required throughput: This will help you to understand the scalability requirements.
How frequently and who will be maintaining the application: This will help to gauge the application and technology complexity, deployment cycles, deployment strategy, appetite for downtime, etc.
In conclusion, always follow the divine rule of KISS: Keep it simple, stupid.
1.)
A web application is for H2M (human-to-machine) communication; a web service is for M2M (machine-to-machine) communication through the web. The interface of the service is more standardized and more structured, so machines can easily use it and parse the messages.
I don't think it matters where your service consumer is; it can run in a browser or it can run on a server. As long as it can communicate with the service over a relatively safe channel, it is ok.
You usually design a service to decouple it from multiple different consumers, so you don't have to deal with the consumer implementations. This usually makes sense when you have potentially unknown consumers, programmed by 3rd-party programmers you don't even know or want to know about. You version the service, or at least the messages, to stay compatible with old consumers.
If you have only a single consumer developed by you, then it might be too much extra effort to maintain a service with a quasi-standard interface. You can easily change the code of the consumer when you change the interface of the service, so thinking about interface design, standardization, backward compatibility, etc. does not make much sense. You can still use REST or SOAP ad hoc without much design, though. In this case, having a RESTish CRUD API without hypermedia is the better choice, I think.
2.)
I think both are good, I would say backend in your scenario.

CQ(R)S using RPC-style API instead of REST

I'm working on a PHP/JS-based project where I would like to introduce Domain-Driven Design in the backend. I find that commands and queries are a better way to express my public domain than CRUD, so I would like to build my HTTP-based API following the CQS principle. It is not quite CQRS, since I want to use the same model for the command and query sides; however, many principles are the same. For API documentation I use Swagger.
I found an article which exposes CQRS through REST resources (https://www.infoq.com/articles/rest-api-on-cqrs). They use 5LMT to distinguish commands, which Swagger does not support. Moreover, don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API? I didn't find any articles or products which expose commands and queries directly through an HTTP-based backend.
So my question is: is it a good idea to expose commands and queries directly through the API? It would look something like this:
POST /api/module1/command1
GET /api/module1/query1
...
It wouldn't be REST, but I don't see what REST brings to the table here. Maintaining REST resources would introduce yet another model. Moreover, having the commands and queries in the URL allows the use of features like routing frameworks and access logs.
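(For illustration, a rough JAX-RS sketch of what such routing could look like; my backend is PHP, so this is only to make the shape concrete. Command1Payload and Query1Result are invented stand-ins.)

    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;

    @Path("/api/module1")
    @Produces(MediaType.APPLICATION_JSON)
    public class Module1Endpoints {

        // POST /api/module1/command1 -- a command changes state and returns nothing
        @POST @Path("command1")
        @Consumes(MediaType.APPLICATION_JSON)
        public void command1(Command1Payload payload) {
            // dispatch to the command handler in the domain model
        }

        // GET /api/module1/query1 -- a query returns data and changes nothing
        @GET @Path("query1")
        public Query1Result query1(@QueryParam("arg") String arg) {
            return new Query1Result(); // the intention-revealing name lives in the URL
        }
    }

    class Command1Payload { public String arg; } // invented stand-ins
    class Query1Result { }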
The commands and queries are an implementation detail. This is apparent from the fact that they wouldn't exist at all if you had chosen an alternative style.
A RESTful API usually (if done right) follows the conceptual domain model. The conceptual domain model is not an implementation detail, because it is in your users' heads and is a source of requirements for your system.
Thus, a RESTful API is usually much easier to understand, because the clients (developers) have to understand the conceptual domain model anyway, and RESTful interfaces follow the concepts of such a model. The same is not true for a command- and query-based API.
So we have a trade-off.
You already identified the drawbacks of building a RESTful API around the commands and queries, and I pointed out the drawbacks of your suggestion. My advice would be the following:
If you're building an API that other teams or even customers consume, then go the RESTful way. It will be much easier for the clients to understand your API.
If, on the other hand, the API is only an internal one that is e.g. used by a JS front-end that your team builds, and you have no external clients on the API, then your suggestion of exposing the commands and queries directly can be a shortcut that is worth the (above-mentioned) drawbacks.
If you take the shortcut, be honest with yourself and acknowledge it as such. This means that as soon as your requirements change and you do have external clients, you should probably build a RESTful API.
don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API?
Intention revealing for whom? A client side programmer? A server side programmer? In the same team/org that maintains the domain model? Outside of that team/org? Someone on the internet who would access your API naively by just probing a starting URI with an OPTIONS request? Someone who would have access to the full API documentation with URIs and payloads structure?
REST is orthogonal to CQRS. In the end, no matter how you expose your resources on the web, the domain notions will be reflected somewhere, whether in the URIs, the payloads, or the media types. I don't think using DDD or CQRS should influence the way you design your API that much.

RESTful API runtime discoverability / HATEOAS client design

For a SaaS startup I'm involved in, I am building both a RESTful web API and a couple of client apps on different platforms that consume it. I think I've got the API figured out, but now I'm turning to the clients. As I've been reading about REST, I see that a key part of REST is discovery, but there seems to be a lot of debate between two different interpretations of what discovery really means:
Developer discovery: the developer hard-codes copious amounts of API details into the client, such as resource URIs, query parameters, supported HTTP methods, and other details discovered by browsing the docs and experimenting with the API's responses. This type of discovery IMHO necessitates cool URIs, raises the API versioning question, and leads to hard coupling of the client code to the API. It seems not much better than using a well-documented collection of RPCs.
Runtime discovery: the client app itself is able to figure out everything it needs with little or no out-of-band information (presumably, only a knowledge of the media types the API deals with). Links can be hot. But to make the API very efficient, a lot of link templating for query parameters seems to be needed, which makes out-of-band info creep back in. There are possibly other difficulties I haven't thought of yet, since I haven't gotten to that point in development. But I do like the idea of loose coupling.
Runtime discovery seems to be the holy grail of REST, but I'm seeing precious little discussion about how to implement such a client. Almost all REST sources I've found seem to assume Developer discovery. Anyone know of some Runtime discovery resources? Best practices? Examples or libraries with real code? I'm working in PHP (Zend Framework) for one client. Objective-C (iOS) for the other.
Is runtime discovery a realistic goal, given the present set of tools and knowledge in the developer community? I can write my client to treat all of the URIs in an opaque manner, but how to do this most efficiently is a question, especially over low-bandwidth connections. Anyway, URIs are only part of the equation. What about link templating in the runtime context? How about communicating which methods are supported, aside from making a lot of OPTIONS requests?
This is definitely a tough nut to crack. At Google, we've implemented our Discovery Service that all our new APIs are built against. The TL;DR version is we generate a JSON Schema-like spec that our clients can parse - many of them dynamically.
That means easier SDK upgrades for the developer and easier/better maintenance for us.
By no means the perfect solution, but many of our devs seem to like it.
See the link for more details (and make sure to watch the vid).
Fascinating. What you are describing is basically the HATEOAS principle. What is HATEOAS you ask? Read this: http://en.wikipedia.org/wiki/HATEOAS
In layman's terms, HATEOAS means link following. This approach decouples your client from specific URLs and gives you the flexibility to change your API without breaking anyone.
You did your homework and you got to the heart of it: runtime discovery is the holy grail. Don't chase it.
UDDI tells a poignant story of runtime discovery: http://en.wikipedia.org/wiki/Universal_Description_Discovery_and_Integration
One of the requirements that should be satisfied before you can call an API 'RESTful' is that it should be possible to write a generic client application on top of that API. With the generic client, a user should be able to access all the API's functionality. A generic client is a client application that does not assume that any resource has a specific structure beyond the structure that is defined by the media type. For example, a web browser is a generic client that knows how to interpret HTML, including HTML forms etc.
Now, suppose we have an HTTP/JSON API for a web shop and we want to build an HTML/CSS/JavaScript client that gives our customers an excellent user experience. Would it be a realistic option to let that client be a generic client application? No. We want to provide a specific look-and-feel for every specific data element and every specific application state. We don't want to include all knowledge about these presentation specifics in the API; on the contrary, the client should define the look and feel and the API should only carry the data. This implies that the client has hard-coded coupling of specific resource elements to specific layouts and user interactions.
Is this the end of HATEOAS and thus the end of REST? Yes and no.
Yes, because if we hard-code knowledge about the API into the client, we lose the benefit of HATEOAS: server-side changes may break the client.
No, for two reasons:
Being "RESTful" is a property of the API, not of the client. As long as it is possible, in theory, to build a generic client that offers all capabilities of the API, the API can be called RESTful. The fact that clients don't obey the rules, is not the API's fault. The fact that a generic client would have a lousy user experience is not an issue. Why is it important to know that it is possible to have a generic client, if we don't actually have that generic client? This brings me to the second reason:
A RESTful API offers clients the option to choose how generic they want to be, i.e. how resilient to server-side changes they want to be. Clients which need to provide a great user experience may still be resilient to URI changes, to changes in default values and more. Clients doing batch jobs without user interaction may be resilient to other kinds of changes.
If you are interested in practical examples, check out my JAREST paper. The last section is about HATEOAS. You will see that with JAREST, even highly interactive and visually attractive clients can be quite resilient to server-side changes, though not 100%.
I think the important point about HATEOAS is not that it is some client-side holy grail, but that it isolates the client from URI changes. It assumes you are using known (or developer-discovered custom) link relations that allow the system to know which link for an object is, say, the editable form. The important point is to use a media type that is hypermedia-aware (e.g. HTML, XHTML, etc.).
You write:
To make the API very efficient, a lot of link templating for query parameters seems to be needed, which makes out-of-band info creep back in.
If that link template is supplied in a previous request, then there is no out-of-band information. For example, an HTML search form uses link templating (/search?q=%#) to generate a URL (/search?q=hateoas), but nothing is known by the client (the web browser) other than how to use HTML forms and GET.
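A small sketch of what "no out-of-band information" means in client code, assuming a JSON response that carries a templated link and a URI-template library such as handy-uri-templates for expansion (the _links shape and the getLink accessor are hypothetical):

    import com.damnhandy.uri.template.UriTemplate;

    // A previous response might carry:
    //   { "_links": { "search": { "href": "/search{?q}", "templated": true } } }
    String template = response.getLink("search");   // hypothetical accessor, returns "/search{?q}"
    String uri = UriTemplate.fromTemplate(template)
                            .set("q", "hateoas")
                            .expand();              // "/search?q=hateoas"
    // Only the rel name "search" and the template syntax are hard-coded,
    // never the URI itself, so the server can move /search without breaking the client.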

When should I use RequestFactory vs GWT-RPC?

I am trying to figure out if I should migrate my GWT-RPC calls to the new GWT 2.1 RequestFactory calls.
Google documentation vaguely mentions that RequestFactory is a better client-server communication method for "data-oriented services"
What I can distill from the documentation is that there is a new Proxy class that simplifies the communication (you don't pass the actual entity back and forth, just the proxy, so it is lighter weight and easier to manage).
Is that the whole point or am I missing something else in the big picture?
The big difference between GWT RPC and RequestFactory is that the RPC system is "RPC-by-concrete-type" while RequestFactory is "RPC-by-interface".
RPC is more convenient to get started with, because you write fewer lines of code and use the same class on both the client and the server. You might create a Person class with a bunch of getters and setters and maybe some simple business logic for further slicing and dicing of the data in the Person object. This works quite well until you wind up wanting server-specific, non-GWT-compatible code inside your class. Because the RPC system is based on having the same concrete type on both the client and the server, you can hit a complexity wall based on the capabilities of your GWT client.
To get around the use of incompatible code, many users wind up creating a peer PersonDTO that shadows the real Person object used on the server. The PersonDTO just has a subset of the getters and setters of the server-side, "domain", Person object. Now you have to write code that marshalls data between the Person and PersonDTO object and all other object types that you want to pass to the client.
RequestFactory starts off by assuming that your domain objects aren't going to be GWT-compatible. You simply declare the properties that should be read and written by the client code in a Proxy interface, and the RequestFactory server components take care of marshaling the data and invoking your service methods. For applications that have a well-defined concept of "Entities" or "Objects with identity and version", the EntityProxy type is used to expose the persistent identity semantics of your data to the client code. Simple objects are mapped using the ValueProxy type.
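For a concrete feel, a minimal sketch of the interfaces involved (the Person domain class, PersonService, and findPerson are invented names; the annotations and base types are the real RequestFactory API):

    import com.google.web.bindery.requestfactory.shared.*;

    // Client-visible mirror of the server-side Person; only the declared
    // properties cross the wire.
    @ProxyFor(Person.class)
    interface PersonProxy extends EntityProxy {
        String getName();
        void setName(String name);
    }

    // Declares which server-side service methods the client may invoke.
    @Service(PersonService.class)
    interface PersonRequest extends RequestContext {
        Request<PersonProxy> findPerson(Long id);
    }

    // The factory ties the request contexts together; obtained via GWT.create().
    interface MyRequestFactory extends RequestFactory {
        PersonRequest personRequest();
    }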
With RequestFactory, you pay an up-front startup cost to accommodate more complicated systems than GWT RPC easily supports. RequestFactory's ServiceLayer provides significantly more hooks to customize its behavior by adding ServiceLayerDecorator instances.
I went through a transition from RPC to RF. First I have to say my experience is limited in that I used exactly zero EntityProxies.
Advantages of GWT RPC:
It's very easy to set up, understand, and LEARN!
Same class-based objects are used on the client and on the server (see the sketch after this list).
This approach saves tons of code.
Ideal when the same model objects (and POJOs) are used on both client and server; POJOs == MODEL OBJECTS == DTOs.
Easy to move stuff from the server to client.
Easy to share implementation of common logic between client and server (this can turn out as a critical disadvantage when you need a different logic).
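For reference, the canonical GWT-RPC shape those advantages come from, as a minimal sketch (ContactService, Contact, and getContact are invented names; Contact must be serializable and shared by both sides):

    import com.google.gwt.user.client.rpc.*;
    import com.google.gwt.user.server.rpc.RemoteServiceServlet;

    // Shared between client and server: the very same Contact class travels both ways.
    @RemoteServiceRelativePath("contacts")
    interface ContactService extends RemoteService {
        Contact getContact(Long id);
    }

    // Async twin used by the client.
    interface ContactServiceAsync {
        void getContact(Long id, AsyncCallback<Contact> callback);
    }

    // Server side: one servlet implements the same interface.
    class ContactServiceImpl extends RemoteServiceServlet implements ContactService {
        public Contact getContact(Long id) {
            return new Contact(id, "Alice"); // same class the client deserializes
        }
    }

    // Must be GWT-serializable (Serializable/IsSerializable plus a no-arg constructor).
    class Contact implements java.io.Serializable {
        long id; String name;
        Contact() {}
        Contact(long id, String name) { this.id = id; this.name = name; }
    }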
Disadvantages of GWT RPC:
Impossible to have different implementations of some methods for server and client; e.g. you might need to use a different logging framework on client and server, or a different equals method.
REALLY BAD implementation that is not further extensible: most of the server functionality is implemented as static methods on an RPC class. THAT REALLY SUCKS.
E.g. it is impossible to add server-side error obfuscation.
Some security XSS concerns that are not quite elegantly solvable, see docs (I am not sure whether this is more elegant for RequestFactory)
Disadvantages of RequestFactory:
REALLY HARD to understand from the official docs what the merit of it is! It starts right at the completely misleading term PROXIES - these are actually the DTOs of RF, and they are created by RF automatically. Proxies are defined by interfaces, e.g. @ProxyFor(Journal.class). The IDE checks whether corresponding methods exist on Journal. So much for the mapping.
RF will not do much for you in terms of commonalities of client and server, because
on the client you need to convert "PROXIES" to your client domain objects and vice versa. This is completely ridiculous. It could be done declaratively in a few lines of code, but there is NO SUPPORT FOR THAT! If only we could map our domain objects to proxies more elegantly; something like the JavaScript method JSON.stringify(...) is MISSING from the RF toolbox.
Don't forget you are also responsible for setting the transferable properties of your domain objects onto the proxies, and so on, recursively; roughly as sketched below.
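Roughly what that hand-written conversion looks like in practice (Journal follows the @ProxyFor example above; the copy methods and field names are mine):

    // Client-side boilerplate RF makes you write: proxy <-> domain object,
    // per type, per field, recursively for nested objects.
    public static Journal fromProxy(JournalProxy p) {
        Journal j = new Journal();
        j.setTitle(p.getTitle());
        j.setIssn(p.getIssn());
        // ...and so on for every transferable property
        return j;
    }

    public static void toProxy(Journal j, JournalProxy p) {
        p.setTitle(j.getTitle());
        p.setIssn(j.getIssn());
    }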
POOR ERROR HANDLING on the server: stack traces are omitted by default on the server, and you're getting empty, useless exceptions on the client. Even when I set a custom error handler, I was not able to get to the low-level stack traces! Terrible.
Some minor bugs in IDE support and elsewhere. I filed two bug requests that were accepted. No Einstein was needed to figure out that those were actually bugs.
DOCUMENTATION SUCKS. As I mentioned, proxies should be better explained; the term is MISLEADING. For the basic, common problems I was solving, the DOCS ARE USELESS. Another example of misunderstanding from the docs is the connection of JPA annotations to RF. The succinct docs make it look like they kind of play together, and yes, there is a corresponding question on StackOverflow. I recommend forgetting any JPA 'connection' before understanding RF.
Advantages of RequestFactory:
Excellent forum support.
IDE support is pretty good (but it is not an advantage in contrast with RPC).
Flexibility of your client and server implementation (loose coupling)
Fancy stuff connected to EntityProxies, beyond simple DTOs: caching, partial updates, very useful for mobile.
You can use ValueProxies as the simplest replacement for DTOs (but you have to do all not so fancy conversions yourself).
Support for Bean Validation (JSR-303).
Considering other disadvantages of GWT in general:
Impossible to run integration tests (GWT client code + remote server) with the provided JUnit support <= all JSNI has to be mocked (e.g. localStorage); SOP is an issue.
No support for a testing setup of headless browser + remote server <= no simple headless testing for GWT; SOP again.
Yes, it is possible to run selenium integration tests (but that's not what I want)
JSNI is very powerful, but at those shiny talks they give at conferences, they do not mention that writing JSNI code has some rules of its own. Again, figuring out how to write a simple callback was a task worthy of a true researcher.
In summary, the transition from GWT RPC to RequestFactory is far from a WIN-WIN situation when RPC mostly fits your needs. You end up writing tons of conversions from client domain objects to proxies and vice versa. But you get some flexibility and robustness in your solution. And the support on the forum is excellent, on Saturdays as well!
Considering all the advantages and disadvantages I just mentioned, it pays really well to think in advance whether either of these approaches actually brings an improvement to your solution and to your development setup, without big trade-offs.
I find the idea of creating Proxy classes for all my entities quite annoying. My Hibernate/JPA pojos are auto-generated from the database model. Why do I now need to create a second mirror of those for RPC? We have a nice "estivation" framework that takes care of "de-hibernating" the pojos.
Also, the idea of defining service interfaces that don't quite implement the server-side service as a Java contract, but do implement the methods, sounds very J2EE 1.x/2.x to me.
Unlike RequestFactory, which has poor error handling and testing capabilities (since it processes most of the stuff under the hood of GWT), RPC allows you to use a more service-oriented approach. RequestFactory implements a more modern, dependency-injection-styled approach that can be useful if you need to invoke complex polymorphic data structures. When using RPC your data structures will need to be flatter, as this allows your marshaling utilities to translate between your JSON/XML and Java models. Using RPC also allows you to implement a more robust architecture, as quoted from the GWT dev section on Google's website:
"Simple Client/Server Deployment
The first and most straightforward way to think of service definitions is to treat them as your application's entire back end. From this perspective, client-side code is your "front end" and all service code that runs on the server is "back end." If you take this approach, your service implementations would tend to be more general-purpose APIs that are not tightly coupled to one specific application. Your service definitions would likely directly access databases through JDBC or Hibernate or even files in the server's file system. For many applications, this view is appropriate, and it can be very efficient because it reduces the number of tiers.
Multi-Tier Deployment
In more complex, multi-tiered architectures, your GWT service definitions could simply be lightweight gateways that call through to back-end server environments such as J2EE servers. From this perspective, your services can be viewed as the "server half" of your application's user interface. Instead of being general-purpose, services are created for the specific needs of your user interface. Your services become the "front end" to the "back end" classes that are written by stitching together calls to a more general-purpose back-end layer of services, implemented, for example, as a cluster of J2EE servers. This kind of architecture is appropriate if you require your back-end services to run on a physically separate computer from your HTTP server."
Also note that setting up a single RequestFactory service requires creating around 6 or so Java classes, whereas RPC only requires 3. More code == more errors and complexity, in my book.
RequestFactory also has a little more overhead during request processing, as it has to marshal between the data proxies and the actual Java models. This added layer adds extra processing cycles, which can really add up in an enterprise or production environment.
I also do not believe that RequestFactory services are serializable like RPC services.
All in all, after using both for some time now, I always go with RPC as it is more lightweight, easier to test and debug, and faster than RequestFactory. Although RequestFactory might be more elegant and extensible than its RPC counterpart, the added complexity does not necessarily make it a better tool.
My opinion is that the best architecture is to use two web apps: one client and one server. The server is a simple, lightweight, generic Java web app that uses the servlet.jar library. The client is GWT. You make RESTful requests via GWT-RPC into the server side of the client web application. The server side of the client is just a pass-through to Apache HttpClient, which uses a persistent tunnel into the request handler you have running as a single servlet in your server web application. The servlet web application should contain your database application layer (Hibernate, Cayenne, SQL, etc.). This allows you to fully divorce the database object models from the actual client, providing a much more extensible and robust way to develop and unit-test your application. Granted, it requires a bit more initial setup time, but in the end it allows you to create a dynamic request factory sitting outside of GWT. This lets you leverage the best of both worlds, not to mention being able to test and make changes to your server side without having the GWT client compiled or built.
I think it's really helpful if you have heavy POJOs on the client side, for example if you use Hibernate or JPA entities.
We adopted another solution, using a Django style persistence framework with very light entities.
The only caveat I would add is that RequestFactory uses a binary data transport (deRPC maybe?) and not the normal GWT-RPC.
This only matters if you are doing heavy testing with SyncProxy, JMeter, Fiddler, or any similar tool that can read/evaluate the contents of the HTTP request/response (as with GWT-RPC); it would be more challenging with deRPC or RequestFactory.
We have a very large implementation of GWT-RPC in our project.
Actually, we have 50 service interfaces with many methods each, and we have problems with the size of the TypeSerializers generated by the compiler, which makes our JS code huge.
So we are analyzing a move towards RequestFactory.
I have been reading for a couple of days, digging into the web and trying to find out what other people are doing.
The most important drawback I saw, and maybe I could be wrong, is that with RequestFactory you are no longer in control of the communication between your server domain objects and your client-side ones.
What we need is to apply the load/save pattern in a controlled way. I mean, for example: the client receives the whole object graph belonging to a specific transaction, does its updates, and then sends the whole thing back to the server. The server is responsible for doing validation, comparing old with new values, and doing the persistence. If 2 users from different sites get the same transaction and do some updates, the resulting transaction shouldn't be the merged one; one of the updates should fail in my scenario.
I don't see that RequestFactory helps support this kind of processing.
Is it fair to say that, when considering a limited MIS application, say with 10-20 CRUD'able business objects, each with ~1-10 properties, it really comes down to personal preference which route to go?
If so, then perhaps projecting how your application is going to scale could be the key in choosing between GWT RPC and RequestFactory:
My application is expected to stay with that relatively limited number of entities but will massively increase in terms of their record counts, e.g. 10-20 objects * 100,000 records.
My application is going to increase significantly in the breadth of entities, but the relative numbers involved of each will remain low, e.g. 5,000 objects * 100 records.
My application is expected to stay with that relatively limited number of entities AND will stay at relatively low record counts, e.g. 10-20 objects * 100 records.
In my case, I'm at the very starting point of trying to make this decision. It is further complicated by having to change the client-side UI architecture as well as making the transport choice. My previous, significantly large-scale GWT UI used the Hmvc4Gwt library, which has been superseded by the GWT MVP facilities.

What is the reason for using WADL?

To describe RESTful architecture we can say that every resource has its own URI. Using HTTP GET, POST, PUT, and DELETE, we can operate on these resources. All resources are representational. Whoever wants to use our resources can do so via a browser or a REST client.
That's the main idea of a RESTful architecture. This architecture allows services on the internet. So why does this architecture need WADL? What does WADL offer that standard HTTP does not? Why does WADL need to exist?
The purpose of WADL is to define a contract. A contract specifies how one party can call another.
When you create a web application from scratch, you don't need a contract and WADL.
When you integrate your system with another system and you can communicate clearly with their development team, you don't need a contract and WADL either (because you can make a phone call to clear things up).
However, when you integrate a complex enterprise system with several other complex enterprise systems maintained by several different companies (or federal institutions), then believe me, you want a communication contract defined as strictly as possible. Then you need WADL or an Open Specification. You need it badly.
People with a weak enterprise background tend to see all of IT as a collection of separate web applications developed independently. But enterprise reality is sometimes tough. Sometimes you can't even call or write to the people developing the application you have to integrate with. Sometimes you communicate with a legacy application that is no longer maintained; it just runs, and you need to figure out how to communicate with it properly. In such conditions you need a contract, because it saves your ass.
Actually, client generation is a minor feature of contract definition. It's just a toy. The contract forces bad communicators to state the integration rules clearly. This is the main reason to use WADL or an Open Specification or whatever.
Using WADL implies that you just might be gracious enough to actually define the data/documents you are passing back and forth. Say you are passing some XML fragments; they might actually be part of a defined schema.
Whether or not you use the DL to generate code is not very important to me. What matters, in my subjective opinion, is that it is important to have a formal agreement on interfaces between business partners. Even if what is passed is obvious, it helps to identify who has to fix what later if somebody changes the previous interface.
Data format is just as much a part of an interface as verb names.
WADL appeals to people coming from the SOAP world where it is common to use a code generator to create client side code based on the WSDL. I don't think that mechanism is useful in REST as it creates client code that is coupled to server endpoints.
I believe that if you properly define your media-types and use hypermedia within those media-types, then it is not necessary to have WADL. The description of the available end-points is contained within the media-type definitions themselves. And if you are now saying to yourself, but application/xml doesn't contain any information about available hyperlinks, then I say BINGO. That's why I don't think application/xml and application/json are appropriate media-types for REST. I'm not saying don't use XML or JSON, just don't use the generic media type name.
The other appeal of WADL is for the purpose of documenting REST services. Unfortunately, it leads developers down the wrong path, as WADL attempts to document server-side endpoints. Documenting REST services should focus primarily on the media types. A client developer should be able to write a REST client without knowing any URL other than the root URL.
WADL allows you to generate code, tests, and documentation. Actually, there are a few very useful tools utilizing WADL; you can see some examples here. The problem with "pure" REST, as described in Fielding's dissertation, is writing clients that support hypermedia (imagine writing a Java Swing-based client application, for example). With WADL this task is completely automated, which is a huge advantage in my view. Testing becomes way easier too.
Before I give my explanation, let me say that most pure REST extremists will deride it to the ends of the earth. I don't agree with them, as I'd rather get something done, but just so you know.
WADL is a description of a web service API, a little like WSDL is for SOAP type web services, that is designed to be more in tune with RESTful interfaces (something WSDL is poor at).
Its primary usage, in my experience, is to allow you to generate client code that can call the service (handy if it's a very large API, where it literally saves hours of work). It also serves the purpose of documenting a REST-like interface.
REST specifies nothing about WADL.
When you want to expose REST services, the best way is to generate a WADL and share it with consumers (similar to WSDL in SOAP-based web services). WADL is used to describe the service all in one place.
WADL is not necessary to use. But if you are working with a complex existing application and you want to implement a REST service call by replacing an EJB/SOAP service call, then it is very safe and good practice to use WADL. By using client-side Java stubs generated from the WADL, you stay in sync with the service.
You can generate client-side Java stubs from a WADL file with the help of the wadl2java Maven plugin.