We are working on some spikes using Fuse ESB (Camel, OSGi, blueprint) to deliver some components. We have an architecture imposed by our EAs, which is: a REST controller uses a route to call a CXF WS. This calls a local Java class as a service to, for example, perform CRUD actions. These use JPA-enabled DAOs/entities. It all seems a bit academic in design rather than real world, but that's another story.
The question is about testing. Normally I would test this service tier using H2 to provide the DB, wiring the DAO, entityManager etc. together with Spring (I know some wouldn't do this, but I do, bear with me). But we will use blueprint for Fuse. How can I unit test this tier? Getting my tests to subclass CamelBlueprintTestSupport doesn't work; it expects a route. I can't use SpringJUnit4ClassRunner (though I do have it working with this currently) as it wires with Spring, whereas when running in the container we will wire with blueprint.
So how do we unit test this? How do I instantiate this set of classes within a blueprint-based unit test? Can we?
One approach you may try is to use Pax Exam. It allows you to run tests in a full OSGi environment, so you can install your real bundle and test it in a black-box fashion.
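To make that concrete, here is a rough sketch of a Pax Exam test (assuming the JUnit 4 driver from Pax Exam 3.x or later; the CrudService interface and the Maven coordinates are made up for illustration):

    import javax.inject.Inject;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.PaxExam;

    import static org.junit.Assert.assertNotNull;
    import static org.ops4j.pax.exam.CoreOptions.junitBundles;
    import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
    import static org.ops4j.pax.exam.CoreOptions.options;

    @RunWith(PaxExam.class)
    public class CrudServiceIT {

        // injected from the OSGi service registry once the bundle has started
        @Inject
        private CrudService crudService;

        @Configuration
        public Option[] config() {
            return options(
                    // your real bundle with the blueprint wiring (coordinates assumed)
                    mavenBundle("com.example", "crud-service", "1.0.0"),
                    junitBundles());
        }

        @Test
        public void serviceIsPublished() {
            assertNotNull(crudService);
        }
    }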
You can use pojosr, which is what camel-test-blueprint uses: https://code.google.com/p/pojosr/
Though pojosr is not a full OSGi environment, so there will be some limitations on what you can do.
For camel-test-blueprint you may be able to override the method isUseRouteBuilder and return false; then it ought not to expect a route.
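For example, a minimal sketch (the blueprint descriptor path and the CustomerDao interface are assumptions, not part of the question):

    import org.apache.camel.test.blueprint.CamelBlueprintTestSupport;
    import org.junit.Test;

    public class ServiceTierTest extends CamelBlueprintTestSupport {

        // point the test at the bundle's blueprint XML (path assumed)
        @Override
        protected String getBlueprintDescriptor() {
            return "OSGI-INF/blueprint/service-tier.xml";
        }

        // no Camel route is needed to test the service tier
        @Override
        public boolean isUseRouteBuilder() {
            return false;
        }

        @Test
        public void daoIsWired() throws Exception {
            // look up a bean/service published by the blueprint container
            CustomerDao dao = getOsgiService(CustomerDao.class);
            assertNotNull(dao);
        }
    }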
I wonder if there is any option to configure Tomcat when using the http4s server API. The Tomcat builder allows you to change some basic options, but beyond those there is not much that can be set. Could I somehow provide a server.xml file? Or get access to the Tomcat instance?
Tomcat, AFAIR, is a server running applications defined as WARs. That is, your app is not a server; your app is logic bound to functionality provided by this external layer.
The Http4s server talks to the external world directly; it manages the request lifecycle through FS2 streams. If you use it, then you probably talk to the DB through e.g. Doobie or some other Cats library, which also manages its own thread pools and transactions instead of relying on ThreadLocals and other JavaEE-like and JPA-like conventions.
Long story short: these models are incompatible.
You would have to rip out 90% of Http4s to leave something that could be agnostic to the implementation, and then wire it to Tomcat, but that would leave only, IDK, DSLs for building requests? And for that you'd have better chances defining things with e.g. Endpoints4s or Tapir and implementing an interpreter which binds things to Tomcat.
But at that point there would probably be zero benefit from using Tomcat or any other servlet container: in Java EE it is usually easy to monitor things because you have 1 thread per ongoing request, and all DB connections and the like are put into ThreadLocals as request-scoped things. Meanwhile, virtually all IO monads use thread pools (separate for different boundaries), so your operation can span several threads. All the conventions that containers rely on to monitor things (which are quite inflexible performance-wise) go to hell.
Meanwhile, Http4s has its own ways of tracking things (through middleware), and similarly you can add instrumentation to DB queries. So the things provided by a servlet container are also available there, though it requires a bit of effort to configure them.
The bottom line is: if you are using Http4s, you don't need a servlet container, and if you are using a servlet container, then you don't have nice integration with anything that uses an IO monad (scala.concurrent.Future, cats.effect.IO, monix.execution.Task, zio.ZIO, etc.), as they aren't guaranteed to run the whole operation on the same thread (invalidating a lot of assumptions made by certain Java frameworks).
I want to exchange data between two JEE6/JSF2.0 applications, and I'm looking for the best solution. I thought of the solutions below:
by using a JSON file.
by using XML file.
by using GSON file.
by using a Remote interface (EJB 3.0).
For you, what's the best solution to use?
Edit: these two applications will always be running on the same network (but cannot be on the same JVM).
I want to provide an alternative to David's answer, as I feel that there are some drawbacks to RMI that he underplayed.
It is a Java-specific technology. If a third server needs to be introduced, and it is, for example, a Microsoft Reporting Services server, then it cannot talk the same language.
RMI is an OLD technology and doesn't look particularly good on a CV. Web services are the future. Experienced RMI developers are rarer than experienced web service developers.
It is a cumbersome and heavyweight framework.
A better solution in my opinion would be to use SOAP XML based web services. Here are some advantages to this approach:
Universal acceptance in nearly any development framework. No matter the technology, nearly all have helpful libraries for interacting with web services.
Java has good support for object serialization into XML. This means objects can be quickly serialized into a SOAP XML request, sent to the other server, and deserialized back into a Java object by the other application server for processing (see the sketch at the end of this answer).
A service layer can give you the decoupling interface between the two applications just as RMI can.
I hope you reconsider the use of SOAP XML based web services in your application.
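To illustrate the serialization point above, here is a minimal sketch using JAX-WS, which is part of Java EE 6 (the CustomerService class is hypothetical): a single annotated class is enough to expose a SOAP endpoint and have the XML (de)serialization handled for you.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class CustomerService {

        @WebMethod
        public String findCustomerName(long id) {
            return "customer-" + id; // placeholder lookup
        }

        public static void main(String[] args) {
            // publish a standalone endpoint for quick testing
            Endpoint.publish("http://localhost:8080/customers", new CustomerService());
        }
    }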
There are really two options, as you yourself stated:
using RMI to connect to an EJB, or using a web service and communicating via JSON/XML etc.
From my experience, RMI can be favourable if your applications are on the same network; if not, you might get problems with firewalls etc. and be forced to tunnel the RMI over HTTPS, which pretty much makes the RMI calls web service calls.
If you're on two different machines, then web services are nice as they don't cause as much trouble with firewalls. Also, as they use the HTTP protocol, you don't have to worry about how the data is transferred.
These examples are kinda generalised but should give you some insight.
GSON vs XML vs JSON is a completely different subject... None is superior to the others, and all are fairly easily read by the human eye.
UPDATE
From what I've understood you won't have to worry about firewalls and such, so I would recommend using RMI. It usually results in cleaner code and somewhat better performance.
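As a hedged sketch of what the RMI/EJB option looks like on the client side (the OrderService interface and the JNDI name are made up, and the exact JNDI name is vendor-specific):

    import javax.ejb.Remote;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    @Remote
    interface OrderService {
        int countOpenOrders();
    }

    public class OrderClient {
        public static void main(String[] args) throws Exception {
            // jndi.properties (or system properties) supply the provider URL
            Context ctx = new InitialContext();
            OrderService orders = (OrderService) ctx.lookup("ejb/OrderServiceRemote");
            System.out.println(orders.countOpenOrders());
        }
    }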
Since I have seen both in action, I can make a comparison between the two technologies, EJB and web services. I can confirm that EJB is far more efficient and has support for transactions (including distributed transactions, if that is your requirement), exception handling, and binary streaming out of the box. In terms of performance, EJB may exceed SOAP by a factor of about 5 in speed, and REST by about 3.
However, EJB is not an integration technology. In fact, it was never intended to be one. The biggest flaw of EJB is that it is tightly coupled to the Java platform: both endpoints must be written in Java and should use the same Java EE version.
Another problem is that EJB is not a protocol per se, so the implementations from two containers/vendors are probably different. If you need to access a remote EJB hosted on JBoss AS from an Oracle WebLogic server, you must bring the JBoss EJB client implementation with you.
Another big problem related to integration with EJB is the lack of a data exchange format: since it uses Java serialized objects for communication, the data types must be shared on both ends. If you create a new exception type on the server that is classified as an application exception, and a client who consumes this service triggers that exception, the client's code will break. Note that in this case the remote API was not violated; another, unknown type was simply introduced.
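A tiny sketch of that trap (QuotaExceededException is hypothetical): the class below must be present, in a compatible version, on every consumer's classpath, or deserializing the remote failure breaks the client.

    import javax.ejb.ApplicationException;

    // must be shared, byte-for-byte compatible, with every remote client
    @ApplicationException(rollback = true)
    public class QuotaExceededException extends Exception {
        public QuotaExceededException(String message) {
            super(message);
        }
    }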
And, of course, by depending solely on the class type as an exchange format, you are giving programmers the opportunity to do very stupid things. If you have many different teams in large projects using EJB as an integration technology with different versions of Java EE, prepare yourself to experience the utmost pain.

I've seen a programmer include a JPA entity on the client, annotated with named queries, the table it was mapped to, its columns, etc., essentially giving away the entire database layout to the service consumer. But it can get even worse. I've seen a programmer return a data structure that belonged to a dependency, namely EclipseLink 1.0. However, if you access this from a JBoss server, EclipseLink is also a JPA implementation, which conflicts with JBoss's Hibernate. So now you have to include the EclipseLink jar in your JBoss app classpath and configure the container not to load the JPA-related packages, which would otherwise break your application completely.

Even so, it can get WORSE: some other service you need to connect to had the bright idea of using the same data structure, but now from EclipseLink 1.1.1, which has a different implementation but the same class signature. Now you are in a very bad situation.
The bottom line: NEVER, EVER use EJB as an integration technology. Use SOAP with a contract-first approach, where you define a canonical data model for the application, mapping Java data structures to an XML exchange format that can be used by any client, written in any language or using a different stack. Or use REST, implementing a resource-based API following HATEOAS principles. These days I rarely see a reason for using EJB at all, since CDI is now on the market; it supports many of the features that EJB does and does not include any RPC-related technology.
So I managed to create a GWT-SpringMVC setup. Wasn't easy (not too many resources), but possible. I even autowired and stuff. It even works :)
However, I can't figure out how to make the GwtTestCase run. Obviously it needs the "server" to be up, and because I use Spring, it needs to pass through the dispatcher servlet (no?). But I can't figure out how to connect the two. In production or hosted mode, I have the web.xml and the spring-servlet.xml to configure these things. What can I do for tests?
I thought of ignoring the web part and testing the service directly, but this would deny me the option of automatically testing that everything is "transferable".
(if you have an idea on how to do that, I might ditch the GWTTestCase altogether).
An alternative to GWTTestCase could be the gwt-test-utils framework, which provides a simple integration with Spring (see here for details).
I am trying to get my head around OSGi Services. The main question I keep asking myself is: What's the benefit of using services instead of working with bundles and their exported packages?
As far as I know, it seems the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty much fixed, I guess. But with services it seems to be almost the same. A bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me.
Another aspect to this seems to be that services are more flexible. There could be many implementations for one specific Interface. On the other hand there can be a lot of different implementations for a specific exported package too.
In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services do. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, thus possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies.
So far I could not find a blog post, book or presentation that could clearly describe why services are better than just exposing functionality by exporting and importing packages.
To sum my questions up:
What are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
Addition
I have tried to gather further information about this issue and came up with some kind of comparison between plain export/import of packages and services. Maybe this will help us find a satisfying answer.
Start/Stop/Update
Both bundles (hence packages) and services can be started and stopped. In addition, they can be, in a sense, updated. Services are also tied to the bundle life cycle itself. But here I just mean whether you can start and stop services or bundles (so that the exported packages "disappear").
Tracking of changes
ServiceTracker and BundleTracker make it possible to track and react to changes in the availability of bundles and services.
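For example, a minimal ServiceTracker sketch (using the TwitterService interface from the example further down; the tweet method is made up):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    public class TrackerActivator implements BundleActivator {

        private ServiceTracker tracker;

        public void start(BundleContext context) {
            // transparently follows TwitterService instances as they come and go
            tracker = new ServiceTracker(context, TwitterService.class.getName(), null);
            tracker.open();

            TwitterService service = (TwitterService) tracker.getService();
            if (service != null) { // null if no provider is currently registered
                service.tweet("hello");
            }
        }

        public void stop(BundleContext context) {
            tracker.close();
        }
    }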
Specific dependencies to other bundles or services
If you want to use an exported package you have to import it.
Import-Package: net.jens.helloworld
If net.jens.helloworld provided a service, I would also need to import the package in order to get the interface.
So in both cases there would be some sort of "tight coupling" to a more or less specific package.
Ability to have more than one implementation
Specific packages can be exported by more than one bundle. There could be a package net.jens.twitterclient which is exported by bundle A and bundle B. The same applies to services. The interface net.jens.twitterclient.TwitterService could be published by bundle A and B.
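A sketch of what bundle A's side could look like (the activator and implementation class are made up; bundle B would do the same with its own implementation, distinguishable via service properties):

    import java.util.Hashtable;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class ActivatorA implements BundleActivator {

        public void start(BundleContext context) {
            Hashtable props = new Hashtable();
            props.put("provider", "bundleA"); // lets consumers pick an implementation
            context.registerService("net.jens.twitterclient.TwitterService",
                    new TwitterServiceImplA(), props);
        }

        public void stop(BundleContext context) {
            // the registration is cleaned up automatically when the bundle stops
        }
    }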
To sum this up, here is a short comparison (exported packages / services):
Start/stop/update: YES / YES
Tracking of changes: YES / YES
Specific dependencies: YES / YES
More than one implementation: YES / YES
So there is no difference.
Additionally it seems that services add more complexity and introduce another layer of dependencies (see image below).
(Image: comparison of bundle dependencies vs. the extra service dependency layer: http://img688.imageshack.us/img688/4421/bundleservicecomparison.png)
So if there is no real difference between exported packages and services what is the benefit of using services?
My explanation:
The use of services seems more complex. But services themselves seem to be more lightweight. There should be a difference (in terms of performance and resources) between starting/stopping a whole bundle and just starting and stopping a specific service.
From an architectural standpoint I also guess that bundles could be viewed as the foundation of the application. A foundation shouldn't change often in terms of starting and stopping bundles. The functionality is provided by services of these bundles in some kind of dynamic layer above the "bundle layer". This "service layer" could be subject to frequent changes. For example, the service for querying a database is unregistered if the database goes offline.
What's your opinion? Am I starting to get the whole point of services or am I still thinking the wrong way? Are there things I am missing that would make services far more attractive over exported packages?
It's quite simple:
Bundles just provide classes you can use. Using imports/exports you can shield visibility and avoid (for example) versioning conflicts.
Services are instances of classes that satisfy a certain contract (interfaces).
So, when using services you don't have to care about the origin of an implementation, nor about implementation details. Implementations may even change while you are using a certain service.
When you rely only on the bundle layer of OSGi, you easily introduce cross-cutting dependencies on concrete implementations, which you usually never want (read below about DI).
This is not an OSGi thing only - just good practice.
In non-OSGi worlds you may use Dependency Injection (DI) frameworks like Guice, Spring or similar. OSGi has the service layer built into the framework and lets higher-level frameworks (Spring, Guice) use this layer. So in the end you usually don't use the OSGi service API directly but DI adapters from user-friendly frameworks (Spring --> Spring DM, Guice --> Peaberry, etc.).
HTH,
Toni
I'd recommend purchasing this book. It does an excellent job explaining services and walking through the construction of a non-trivial application that makes use of OSGi Services.
http://equinoxosgi.org/
My company routinely builds 100+ bundle applications using services. The primary benefits we gain from using services are:
1) Loose coupling of producer/consumer implementation
2) Hot swappable service providers
3) Cleaner application architecture
When you start with OSGi, it is always easier to begin with an export-package approach; it certainly feels more Java-like. But when your application starts growing and you need a bit of dynamicity, services are the way to go.
Export-package only does the resolution on startup, whereas services give you ongoing resolution (which you may or may not want). From a support point of view, having services can be very scary (Is it deterministic? How can I replicate problems?), but it is also very powerful.
Peter Kriens explains why he thinks that services are a paradigm shift in the same way OO was in its time; see µServices and Duct Tape.
In all my OSGi experience I haven't yet had the occasion to implement complex services (i.e. more than one layer), and certainly annotations seem the way to go. You can also use Spring Dynamic Modules to ease the pain of dealing with service trackers (and many other options like iPOJO and Blueprint).
Let's consider the two following scenarios:
Bundle A offers a service which is an arithmetic addition: add(x, y) returns x+y. To achieve this, it exports a mathOpe package with an IAddition interface, and registers a service within the service registry. Bundles B, C, D, ... consume this service.
Bundle A exports the mathOpe package, where we find a class Addition exposing an operation add(x, y) that returns x+y. Bundles B, C, D, ... import the package mathOpe.
Comparison of scenario 1 vs. scenario 2:
Just one implementation instance vs. many instances (feel free to make it static!)
Dynamic service management (start, stop, update) vs. no management; the consumer owns the implementation (the "service")
Flexible (we can imagine a remote service over a network) vs. not flexible
... among others.
PS: I am not an OSGi expert nor a Java one; this answer only shows my understanding of the phenomenon :)
I think this excellent article could answer a lot of your questions: OSGi, and How It Got That Way.
The main advantage of using a service instead of the implementation class is that the bundle offering the service will do the initialization of the class itself.
The bundle that uses the service does not need to know anything about how the service is initialized.
If you do not use a service, you will always have to call some kind of factory to create the service instance. This factory will leak details of the service that should remain private.
I have server code that's written in Python, and I have client code that's written with GWT. Now I want to run automated testing on the GWT client against the data from the Python server.
From what I searched, people recommend using Selenium, but I prefer to have a GWT test that has more visibility into the client code. That way I can verify the local database and any data that is not exposed to the UI.
Also at this point I'm not too worried about the DOM aspect, layout, and the other UI stuff.
Is there any way to make the GWTTest work with an external server?
I've tried to search for the solution, or people with similar problem, but I couldn't find one. If this question has been asked previously, I apologize.
Thanks, KOkon.
You can use the GWTTest framework to test some GWT components that call the server. But the tests won't be able to communicate directly with the server. If you need your tests to set up state on the server, I'm afraid you'll need to write special "for testing purposes only" RPC services or servlets or similar to do it.
Having said that, I would (presumably like those who suggested Selenium) recommend three types of tests:
Unit tests for server components, and unit GWTTests for client components,
Integration tests for testing server code interaction with database, etc.
Selenium acceptance tests, which are "black box" - they don't have access to the innards of the GWT components.
What you could do is create a proxy servlet that gets started in the GWTTestCase embedded Jetty instance. That proxy could forward all calls to your real services in Python.
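A rough sketch of such a proxy servlet (the target URL is an assumption; error handling and most headers are trimmed for brevity):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PythonProxyServlet extends HttpServlet {

        private static final String TARGET = "http://localhost:8000"; // assumed

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // relay the call to the real Python server
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(TARGET + req.getRequestURI()).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", req.getContentType());

            copy(req.getInputStream(), conn.getOutputStream());

            // stream the Python server's answer back to the GWT test
            resp.setStatus(conn.getResponseCode());
            resp.setContentType(conn.getContentType());
            copy(conn.getInputStream(), resp.getOutputStream());
        }

        private static void copy(InputStream in, OutputStream out) throws IOException {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            out.flush();
        }
    }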