Micro-Service Applications in Java - REST

When the UI is supplied by an Angular web application, the responsibility of the other microservice applications is limited to supplying JSON. But they still need to expose REST paths and cannot become one monolithic giant; that is, we can't just package them as JARs and bind them as dependencies of some WAR application. Is there a better way to do this than keeping the JSON-only applications as UI-less WARs, just to make sure they are not mixed up?

Related

How to add REST services to a RAP application

I am working on a web application which uses RAP. In the application there is one bundle which contains the model which is backed by a database. I would like to create bundles which provide REST services which will make use of the model bundle.
I looked at Application#addEntryPoint, but that is just for UI contributions - not for services as such.
I also read Frank Appel's post http://www.codeaffine.com/2011/08/26/raprwt-osgi-integration/ and wonder if RWT and Felix might be the way to go. It looks promising, but Felix is new to me.
Is it possible to add these REST bundles to the RAP application and set them up to handle /rest/* URLs? Or would it be more sensible to keep the 2 parts completely separate and to share the model bundle in a different way?
When using RAP, any active bundle may contribute to the usual "org.eclipse.equinox.http.registry.servlets" and "org.eclipse.equinox.http.registry.resources" extension points. You just need to make sure that the name of your RAP application's entry point(s) and the paths of your resources and servlets don't overlap.
So in practice, you can just develop your REST services as if there was no RAP component. The two will happily live side-by-side within the same servlet context.
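For illustration, here is a minimal sketch (all names hypothetical) of a servlet that could serve such a /rest/* path; the bundle's plugin.xml would map it via the servlets extension point to an alias like /rest/status, away from any RAP entry point:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Registered through org.eclipse.equinox.http.registry.servlets in plugin.xml
// (alias "/rest/status" in this sketch), so it coexists with the RAP UI.
public class StatusServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json");
        resp.getWriter().write("{\"status\":\"ok\"}");
    }
}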

Identifying the set of Java EE specs for given requirements - a case for EJB, JPA and REST?

I am trying to identify the correct set of technologies to develop an application that supports the following:
Provide web service capabilities (preferably REST)
Be able to handle updates to multiple data resources in a single transaction
Have some form of persistence capability.
Based on these basic requirements, my current plan is to build a REST based service using JAX-RS and JPA to handle persistence and use EJB to be able to handle multiple updates to different resources in a single transaction.
Are these the correct set of technologies, or am I making my application bulkier?
Thanks for any suggestions. Finally, the application will be deployed on WebSphere Application Server v8.5.
Yes, those sound like reasonable technology choices for your project. They are all part of Java EE, which provides a ton of other nice features too, so it gives your application some room to "grow" without being bogged down by huge numbers of libraries from different vendors. In my opinion, using Java EE is not by itself a cause for worry about "bulkiness".
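As a rough sketch of how the three fit together (names and the Account entity are hypothetical): a JAX-RS resource implemented as a stateless session bean, so each request method runs in a container-managed transaction, with JPA doing the persistence.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Assumes a JPA @Entity Account with withdraw/deposit methods (not shown).
@Stateless
@Path("/transfers")
public class TransferResource {

    @PersistenceContext
    private EntityManager em;

    // Both updates happen in one container-managed transaction: if either
    // fails, the container rolls back the whole request.
    @POST
    @Path("/{from}/{to}/{amount}")
    public void transfer(@PathParam("from") long fromId,
                         @PathParam("to") long toId,
                         @PathParam("amount") long amount) {
        Account from = em.find(Account.class, fromId);
        Account to = em.find(Account.class, toId);
        from.withdraw(amount);
        to.deposit(amount);
    }
}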
Since the application is going to be deployed onto WebSphere Application Server V8.5, there's no need to constrain yourself: the technologies are already part of the runtime environment.
Having said that, I thought about the developers who will build the application and may suffer from the time it takes to do frequent restarts, redeployments and the like. It's not that WAS 8.5 can't handle them, but since it's a full-blown application server meant for production environments, it might be too much for your development environment. If so, read on.
There's a lighter profile of WAS available - the WebSphere Application Server 8.5 Liberty Profile. It's based on WebSphere AS's codebase, has a small footprint, and is aimed precisely at customers who know the deployment platform is WAS but need a lighter solution on their development laptops.
It might be very well tailored to your needs, and if Eclipse is your IDE you may be pleasantly surprised by how much slimmer the development environment becomes with the Liberty Profile.

What is proper design for system with external database and RESTful web gui and service?

Basically I started to design my project like this:
Play! Framework for the web GUI (consuming the RESTful service)
Spray Framework for the RESTful service: connects to the database, processes incoming data, serves data to the web GUI
Database. Only the service has rights to access it
Now I'm wondering if that's really the best possible design.
In fact, with Play! I could easily host both the web GUI and the service at once.
That would probably be much easier to test and deploy in simple cases.
In complicated cases, when high performance is needed, I could still run one instance purely for the GUI and a few more to work as services (even if each of them could still serve the full functionality).
On the other hand, I'm not sure it wouldn't hurt performance too much (the services will be processing a lot of data, not only from the web GUI). Also, isn't it mixing things that I should keep separate?
If I decide to keep them separate, should I allow database access only through the RESTful service? How do I resolve the problem of the service and the web GUI trying to use different versions of the database? Should I use a versioned REST protocol in that case?
----------------- EDIT------------------
My current system structure looks like this:
But I'm wondering if it wouldn't make sense to simplify it by putting the RESTful service directly inside the Play! GUI web server.
----------------- EDIT 2------------------
Here is the diagram which illustrates my main question.
To put it in other words: would it be bad to connect my service and web GUI and share the model? And why?
There are also a few advantages:
less configuration needed between the service and the GUI
no data transfer needed
no need to create a separate access layer (though maybe that's a disadvantage - but in what case?)
no inconsistencies between the GUI and service models (for example, because of different protocol versions)
easier to test and deploy
no code duplication (normally we would need to duplicate a big part of the model)
That said, here is the diagram:
Why do you need the RESTful service to connect to the database? Most Play! applications access the database directly from the controllers. The Play! philosophy considers accessing your models through a service layer to be an anti-pattern. The service layer could be handy if you intend to share that data with other (non-Play!) applications or external systems outside your control, but otherwise it's better to keep things simple. But you could also simply expose the RESTful interface from the Play! application itself for the other systems.
Play! is about keeping things simple and avoiding the over-engineered nonsense that has plagued Java development in the past.
Well, after a few more hours of thinking about this, I think I have found a solution that will satisfy my needs. The goals I want fulfilled are:
The web GUI cannot make direct calls to the database; it needs to use a proper model, which will in turn use some objects repository
It must be possible to test and deploy the whole thing as one package with minimum configuration (at least for the development phase; it should then be possible to easily switch to a more flexible solution)
There should be no code duplication (i.e. the same code in the service and the web GUI model)
If one approach turns out to be wrong, I need to be able to switch to the other
What I forgot to say before is that my service will have an embedded cache used to aggregate and process the data, and then commit it to the database in bigger chunks. It's also present on the diagram.
My class structure will look like this:
|- models
|  |- IElementsRepository.scala
|  |- ElementsRepositoryJSON.scala
|  |- ElementsRepositoryDB.scala
|  |- Element.scala
|  |- Service
|  |  |- Element.scala
|  |- Web
|  |  |- Element.scala
|- controllers
|  |- Element.scala
|- views
|  |- Element
|  |  |- index.scala.html
So it's like a normal MVC web app, except that there are separate model classes for the service and the web GUI, inheriting from the main one.
In Element.scala I will have an IElementsRepository object injected using DI (probably with Guice).
IElementsRepository has two concrete implementations:
ElementsRepositoryJSON, which retrieves data from the service through JSON
ElementsRepositoryDB, which retrieves data from the local cache and the DB
This means that, depending on the active DI configuration, both the service and the web GUI can get their data either from another service or from local/external storage.
So for early development I can keep everything in one Play! instance with direct cache and DB access (through ElementsRepositoryDB), and later reconfigure the web GUI to use JSON (through ElementsRepositoryJSON). This also allows me to run the GUI and the service as separate instances if I want. I can even configure the service to use other services as data providers (though for now I have no such need).
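A rough Java sketch of that repository switch (the actual project is Scala; the names mirror the class structure above and are illustrative): the caller depends only on the IElementsRepository abstraction, and a Guice module decides whether data comes from the local DB/cache or the remote JSON service.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import java.util.Arrays;
import java.util.List;

interface IElementsRepository {
    List<String> findAll(); // element type simplified to String for the sketch
}

class ElementsRepositoryDB implements IElementsRepository {
    public List<String> findAll() {
        return Arrays.asList("from-cache-or-db"); // would query the cache/DB here
    }
}

class ElementsRepositoryJSON implements IElementsRepository {
    public List<String> findAll() {
        return Arrays.asList("from-service"); // would call the REST service here
    }
}

// Switching the GUI from in-process access to the remote service is a
// one-line binding change:
class DevelopmentModule extends AbstractModule {
    protected void configure() {
        bind(IElementsRepository.class).to(ElementsRepositoryDB.class);
    }
}

class RepositoryDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new DevelopmentModule());
        IElementsRepository repo = injector.getInstance(IElementsRepository.class);
        System.out.println(repo.findAll());
    }
}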
More or less it will look like that:
Well, I think there's no objectively right or wrong answer here, but I'll offer my opinion: I think the diagram you've provided is exactly right. Your RESTful service is the single point of access for all clients including your web front-end, and I'd say that's the way it should be.
Without saying anything about Play!, Spray or any other web frameworks (or, for that matter, any database servers, HTML templating libraries, JSON parsers or whatever), the obvious rule of thumb is to maintain a strict separation of concerns by keeping implementation details from leaking into your interfaces. Now, you raised two concerns:
Performance: The process of marshalling and unmarshalling objects into JSON representations and serving them over HTTP is plenty fast (compared to JAXB, for example) and well supported by Scala libraries and web frameworks. When you inevitably find performance bottlenecks in a particular component, you can deal with those bottlenecks in isolation.
Testing and Deployment: The fact that the Play! framework shuns servlets does complicate things a bit. Normally I'd suggest for testing/staging, that you just take both the WAR for your front-end stuff and the WAR for your web service, and put them side-by-side in the same servlet container. I've done this in the past using the Maven Cargo plugin, for example. This isn't so straightforward with Play!, but one module I found (and have never used) is the play-cargo module... Point being, do whatever you need to do to keep the layers decoupled and then glue the bits together for testing however you want.
Hope this is useful...

Architecture Question: GWT or Vaadin to create Desktop Application?

We're planning on creating a feed reader as a Windows desktop and iPad application. As we want to be able to show websites AND to run (our own) JavaScript in this application, we thought about delivering the application as HTML/CSS/JavaScript, just wrapped by some .NET control or a Cocoa Touch web browser component. So the task at hand is to find out which framework to use to create the HTML/CSS/JS files to embed in the application.
For the development of the HTML/CSS/JavaScript we would be happy to use Vaadin, GWT, or some other framework, as we're a lot better with Java than with JS. We favor Vaadin after a short brainstorming, as the UI components are very nice, but I fear that most of the heavy lifting would happen on the server and not in the client (and that wouldn't be too nice). We would also like GWT, but the Java-to-JS compilation takes a lot of time and is an extra step, and it slowed down development in the past when we used it.
The question is: which development framework would you choose (given you wanted to implement this project and you mostly did Java so far) and why? If there are better framework options (List of Rich Client Frameworks), please let me know.
Edit: The application will need to talk to our server from time to time (to sync what has been read, for example), but should mainly fetch the XML feeds itself. Therefore I hope that most of the generated code can be embedded in the application and there doesn't need to be heavy traffic to our server.
Edit 2: We (realistically, even if you doubt it) expect at least 10,000 users.
Based on my experience with Vaadin, I'd go for that, but your requirements somewhat favor pure GWT instead.
Vaadin needs the server and a server connection. If you are building a mostly offline desktop application, this can be solved with an embedded Jetty, for example (synchronize with an online service only when needed), but on the iPad you would need to connect online right away to start the Vaadin application.
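As a minimal sketch of the embedded-Jetty idea (all names hypothetical; in reality the servlet would be the Vaadin application servlet rather than this placeholder):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class LocalUiServer {
    // Placeholder for the real application servlet (Vaadin's, in this case).
    static class AppServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.getWriter().write("local UI would be served here");
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080); // local-only server inside the desktop app
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new AppServlet()), "/*");
        server.setHandler(context);
        server.start();  // the wrapping .NET/Cocoa browser control then loads
        server.join();   // http://localhost:8080/
    }
}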
GWT runs completely at the client-side and you can build a JavaScript browser application that only connects when needed.
Because Vaadin is much quicker to develop with, you could build a small Vaadin version first and see whether that is actually a problem on the iPad.
On the other hand, if you can assume being online right away, you can skip the local server installation altogether: just run the application online and implement the desktop version using the operating system's default browser control (i.e. the .NET control you suggested). Then Vaadin is easier.
Vaadin is a framework based on GWT, but it has two very important features:
you don't need to run the GWT compiler - it is pure Java (unless you add add-ons, in which case the GWT compiler must run)
you don't need to write communication code, so you don't have to solve DTO problems or non-serializable object mappings, and you don't need to write servlets
I have used Vaadin at work for a year and we haven't had performance problems yet (a desktop-like application with ~500 users). IMO a very good solution is to use Vaadin just for the UI, move the logic into independent beans, and connect these two elements using Spring or Guice.
In this case you should use the MVP pattern and domain-driven design.
Business beans are the domain objects and logic, which use view interfaces to send responses.
Custom Vaadin components (which can extend the standard components) implement the view interfaces.
This approach is also good if you later decide that Vaadin is not for you and want to change the UI engine: just rewrite the Guice/Spring mappings and write new implementations of the view interfaces.
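A bare-bones sketch of that split (all names hypothetical): the business bean knows only the view interface, and a Vaadin component is just one possible implementation of it.

import java.util.Arrays;
import java.util.List;

// View interface: the only thing the business logic knows about the UI.
interface ElementView {
    void showElements(List<String> elements);
    void showError(String message);
}

// Business bean / presenter: free of Vaadin imports, unit-testable, and
// wirable to a concrete view via Spring or Guice.
class ElementPresenter {
    private final ElementView view;

    ElementPresenter(ElementView view) {
        this.view = view;
    }

    void loadElements() {
        try {
            view.showElements(Arrays.asList("first", "second"));
        } catch (RuntimeException e) {
            view.showError(e.getMessage());
        }
    }
}

// A Vaadin component (e.g. a Table subclass) would implement ElementView;
// swapping UI engines means writing another ElementView implementation.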
My 3 cents:
If you decide to use Vaadin, you will benefit from components that already look good. Since you don't want to write (a lot of) CSS, Vaadin looks good out of the box. However, Vaadin is a server-side framework: user interface interactions hit the back end even if they don't involve getting any data (e.g. moving from one tab to another).
If you decide to use GWT, you will at least have to style the application (this is not hard). There is also the problem of long compilation times (but you can test and debug in hosted mode, which allows you to run the application without compiling; you then compile only when deploying). The main advantage of GWT is that you control what gets sent over the wire: UI interactions that don't require data from the back end work purely on the client side, and you, the developer, determine when to send a request to the back end. (Doing RPC requests in GWT is very easy.)
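A hypothetical sketch of GWT RPC as mentioned above: a synchronous service interface plus its async counterpart; the client uses only the async form, so you decide exactly when a request goes over the wire (the names and the read-state use case here are illustrative, echoing the sync requirement in the question).

import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

@RemoteServiceRelativePath("sync")
interface SyncService extends RemoteService {
    String markAsRead(String feedItemId); // server-side sync of read state
}

interface SyncServiceAsync {
    void markAsRead(String feedItemId, AsyncCallback<String> callback);
}

class SyncClient {
    private final SyncServiceAsync service = GWT.create(SyncService.class);

    // Nothing is sent to the server until the client code decides to.
    void markRead(String id) {
        service.markAsRead(id, new AsyncCallback<String>() {
            public void onSuccess(String result) { /* update local UI state */ }
            public void onFailure(Throwable caught) { /* retry or queue offline */ }
        });
    }
}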
Hope this will help you make the decision.

Why should I prefer OSGi Services over exported packages?

I am trying to get my head around OSGi Services. The main question I keep asking myself is: What's the benefit of using services instead of working with bundles and their exported packages?
As far as I know, it seems the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same: a bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me.
Another aspect seems to be that services are more flexible: there could be many implementations for one specific interface. On the other hand, there can be a lot of different implementations for a specific exported package too.
In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services do. The author wrote that if you remove one bundle from the dependency graph, other dependencies will no longer be met, possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies.
So far I could not find a blog post, book or presentation that could clearly describe why services are better than just exposing functionality by exporting and importing packages.
To sum my questions up:
What are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
Addition
I have tried to gather further information about this issue and have come up with a comparison between plain export/import of packages and services. Maybe this will help us find a satisfying answer.
Start/Stop/Update
Both bundles (and hence packages) and services can be started and stopped, and both can in a sense be updated. Services are also tied to the bundle life cycle itself. But here I just mean whether you can start and stop services or bundles (so that the exported packages "disappear").
Tracking of changes
ServiceTracker and BundleTracker make it possible to track and react to changes in the availability of bundles and services.
Specific dependencies on other bundles or services
If you want to use an exported package you have to import it.
Import-Package: net.jens.helloworld
If net.jens.helloworld provided a service, I would still need to import the package in order to get the interface.
So in both cases there would be some sort of "tight coupling" to a more or less specific package.
Ability to have more than one implementation
Specific packages can be exported by more than one bundle. There could be a package net.jens.twitterclient which is exported by bundle A and bundle B. The same applies to services. The interface net.jens.twitterclient.TwitterService could be published by bundle A and B.
To sum this up, here is a short comparison (exported packages / services):
Start/Stop/Update: YES / YES
Tracking of changes: YES / YES
Specific dependencies: YES / YES
More than one implementation: YES / YES
So there is no difference.
Additionally it seems that services add more complexity and introduce another layer of dependencies (see image below).
(diagram: http://img688.imageshack.us/img688/4421/bundleservicecomparison.png)
So if there is no real difference between exported packages and services what is the benefit of using services?
My explanation:
The use of services seems more complex, but services themselves seem to be more lightweight. There should be a difference (in terms of performance and resources) between starting/stopping a whole bundle and just starting/stopping a specific service.
From an architectural standpoint, I also guess that bundles could be viewed as the foundation of the application. A foundation shouldn't change often in terms of starting and stopping bundles. The functionality is provided by services of these bundles, in a kind of dynamic layer above the "bundle layer". This "service layer" could be subject to frequent changes. For example, the service for querying a database is unregistered if the database goes offline.
What's your opinion? Am I starting to get the whole point of services or am I still thinking the wrong way? Are there things I am missing that would make services far more attractive over exported packages?
It's quite simple:
Bundles just provide classes you can use. Using imports/exports you can shield visibility and avoid (for example) versioning conflicts.
Services are instances of classes that satisfy a certain contract (interfaces).
So, when using services you don't have to care about the origin of an implementation, nor about its implementation details. They may even change while you are using a certain service.
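A minimal sketch of that contract-vs-instance distinction using the raw OSGi API (the Greeter contract is hypothetical; in practice you'd use Declarative Services or a DI adapter, as noted below):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// The contract lives in an exported API package; implementations stay hidden.
interface Greeter {
    String greet(String name);
}

class GreeterImpl implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Provider bundle: publishes an instance under the interface name.
class ProviderActivator implements BundleActivator {
    public void start(BundleContext context) {
        context.registerService(Greeter.class.getName(), new GreeterImpl(), null);
    }
    public void stop(BundleContext context) {
        // registrations are cleaned up automatically when the bundle stops
    }
}

// Consumer bundle: imports only the Greeter package, never the implementation.
class ConsumerActivator implements BundleActivator {
    public void start(BundleContext context) {
        ServiceReference ref = context.getServiceReference(Greeter.class.getName());
        if (ref != null) {
            Greeter greeter = (Greeter) context.getService(ref);
            System.out.println(greeter.greet("OSGi"));
            context.ungetService(ref);
        }
    }
    public void stop(BundleContext context) {
    }
}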
When you rely only on the bundle layer of OSGi, you easily introduce crosscutting dependencies on concrete implementations, which you usually never want (read below about DI).
This is not only an OSGi thing - it's just good practice.
In non-OSGi worlds you may use dependency injection (DI) frameworks like Guice, Spring or similar. OSGi has the service layer built into the framework and lets higher-level frameworks (Spring, Guice) use this layer - so in the end you usually don't use the OSGi service API directly, but DI adapters from user-friendly frameworks (Spring --> Spring DM, Guice --> Peaberry, etc.).
HTH,
Toni
I'd recommend purchasing this book. It does an excellent job explaining services and walking through the construction of a non-trivial application that makes use of OSGi Services.
http://equinoxosgi.org/
My company routinely builds 100+ bundle applications using services. The primary benefits we gain from using services are:
1) Loose coupling of producer/consumer implementation
2) Hot swappable service providers
3) Cleaner application architecture
When you start with OSGi, it is always easier to begin with an export-package approach; it certainly feels more Java-like. But when your application starts growing and you need a bit of dynamicity, services are the way to go.
Export-Package resolution happens only at startup, whereas service resolution is ongoing (which you may or may not want). From a support point of view, having services can be very scary (Is it deterministic? How can I replicate problems?), but it is also very powerful.
Peter Kriens explains why he thinks that services are a paradigm shift in the same way OO was in its time; see µServices and Duct Tape.
In all my OSGi experience I haven't yet had occasion to implement complex services (i.e. more than one layer), and annotations certainly seem the way to go. You can also use Spring Dynamic Modules to ease the pain of dealing with service trackers (among many other options, like iPOJO and Blueprint).
Let's consider the following two scenarios:
Bundle A offers a service performing arithmetic addition: add(x, y) returns x + y. To achieve this, it exports a mathOpe package with an IAddition interface and registers a service in the service registry. Bundles B, C, D, ... consume this service.
Bundle A exports the mathOpe package, in which we find a class Addition exposing the operation add(x, y) = x + y. Bundles B, C, D, ... import the package mathOpe and instantiate Addition themselves.
Comparison of scenario 1 vs. scenario 2:
Just one implementation instance vs. many instances (Feel free to make it static!)
Dynamic service management (start, stop, update) vs. no management - the consumer owns the implementation (the "service")
Flexible (we can imagine a remote service over a network) vs. not flexible
... among others.
PS: I am not an OSGi expert nor a Java one; this answer only reflects my understanding of the phenomenon :)
I think this excellent article could answer a lot of your questions: OSGi, and How It Got That Way.
The main advantage of using a service instead of the implementation class is that the bundle offering the service does the initialization of the class itself.
The bundle that uses the service does not need to know anything about how the service is initialized.
If you do not use a service, you will always have to call some kind of factory to create the service instance. This factory will leak details of the service that should remain private.
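To illustrate, here is a self-contained sketch of that factory problem, reusing the hypothetical Greeter contract idea from the earlier sketch: the factory's signature already exposes how the implementation is initialized.

interface Greeter {
    String greet(String name);
}

class ConfiguredGreeter implements Greeter {
    private final String configPath;

    ConfiguredGreeter(String configPath) {
        this.configPath = configPath; // would load settings from the path
    }

    public String greet(String name) {
        return "Hello, " + name;
    }
}

public final class GreeterFactory {
    private GreeterFactory() {
    }

    // Every consumer must now supply (and therefore know about) the config
    // location - a detail a service-providing bundle would keep to itself.
    public static Greeter create(String configPath) {
        return new ConfiguredGreeter(configPath);
    }
}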