Why should I prefer OSGi Services over exported packages?

I am trying to get my head around OSGi Services. The main question I keep asking myself is: What's the benefit of using services instead of working with bundles and their exported packages?
As far as I know, it seems the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same: a bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me.
Another aspect to this seems to be that services are more flexible. There could be many implementations for one specific Interface. On the other hand there can be a lot of different implementations for a specific exported package too.
In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies.
So far I could not find a blog post, book or presentation that could clearly describe why services are better than just exposing functionality by exporting and importing packages.
To sum my questions up:
What are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
Addition
I have tried to gather further information about this issue and come up with some kind of comparison between plain export/import of packages and services. Maybe this will help us to find a satisfying answer.
Start/Stop/Update
Both bundles (and hence packages) and services can be started and stopped. In addition, both can be updated, in a sense. Services are also tied to the bundle life cycle itself. But here I just mean whether you can start and stop services or bundles (so that the exported packages "disappear").
Tracking of changes
ServiceTracker and BundleTracker make it possible to track and react to changes in the availability of bundles and services.
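For illustration, here is a minimal Java sketch of a consumer bundle using a ServiceTracker; the TwitterService interface is borrowed from the twitter-client example further down and is purely hypothetical:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;

// Hypothetical consumer bundle that reacts as a TwitterService implementation comes and goes.
public class TwitterClientActivator implements BundleActivator {

    private ServiceTracker<TwitterService, TwitterService> tracker;

    @Override
    public void start(BundleContext context) {
        tracker = new ServiceTracker<TwitterService, TwitterService>(context, TwitterService.class, null) {
            @Override
            public TwitterService addingService(ServiceReference<TwitterService> ref) {
                TwitterService service = super.addingService(ref);
                System.out.println("TwitterService appeared: " + service);
                return service;
            }

            @Override
            public void removedService(ServiceReference<TwitterService> ref, TwitterService service) {
                System.out.println("TwitterService disappeared");
                super.removedService(ref, service);
            }
        };
        tracker.open();   // from now on the callbacks above fire whenever the service comes or goes
    }

    @Override
    public void stop(BundleContext context) {
        tracker.close();
    }
}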
Specific dependencies on other bundles or services
If you want to use an exported package you have to import it.
Import-Package: net.jens.helloworld
If net.jens.helloworld provided a service, I would still need to import the package in order to get the interface.
So in both cases there would be some sort of "tight coupling" to a more or less specific package.
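To see what that coupling looks like in practice, here is a rough Java sketch; the Greeter interface and its sayHello() method are assumptions for illustration. Only the interface's package is imported, while the implementation stays hidden behind the registry:

// Hypothetical consumer: the MANIFEST.MF still contains
//   Import-Package: net.jens.helloworld
// but only for the interface; the implementation is looked up in the registry.
import net.jens.helloworld.Greeter;            // assumed interface living in the exported package
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class GreeterClient {

    public String greet(BundleContext context) {
        ServiceReference<Greeter> ref = context.getServiceReference(Greeter.class);
        if (ref == null) {
            return "no Greeter available right now";   // services may come and go
        }
        try {
            return context.getService(ref).sayHello(); // sayHello() is an assumed method
        } finally {
            context.ungetService(ref);                 // release the service when done
        }
    }
}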
Ability to have more than one implementation
Specific packages can be exported by more than one bundle. There could be a package net.jens.twitterclient which is exported by bundle A and bundle B. The same applies to services. The interface net.jens.twitterclient.TwitterService could be published by bundle A and B.
To sum this up, here is a short comparison (exported packages / services):
Start/Stop/Update: YES / YES
Tracking of changes: YES / YES
Specific dependencies: YES / YES
More than one implementation: YES / YES
So there is no difference.
Additionally it seems that services add more complexity and introduce another layer of dependencies (see image below).
(image: comparison of bundle and service dependencies: http://img688.imageshack.us/img688/4421/bundleservicecomparison.png)
So if there is no real difference between exported packages and services what is the benefit of using services?
My explanation:
The use of services seems more complex. But services themselves seem to be more lightweight: it should make a difference (in terms of performance and resources) whether you start/stop a whole bundle or just start and stop a specific service.
From an architectural standpoint I also guess that bundles could be viewed as the foundation of the application. A foundation shouldn't change often in terms of starting and stopping bundles. The functionality is provided by services from these packages in some kind of dynamic layer above the "bundle layer". This "service layer" could be subject to frequent changes. For example, the service for querying a database is unregistered if the database goes offline.
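A minimal sketch of that last example, assuming a hypothetical QueryService interface: the bundle stays installed and started, only its service registration comes and goes with the database.

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical provider: the bundle itself stays started (the stable "bundle layer"),
// while the QueryService it publishes comes and goes with the database.
public class DatabaseQueryProvider {

    private volatile ServiceRegistration<QueryService> registration;

    public void onDatabaseUp(BundleContext context, QueryService impl) {
        registration = context.registerService(QueryService.class, impl, null);
    }

    public void onDatabaseDown() {
        ServiceRegistration<QueryService> current = registration;
        if (current != null) {
            current.unregister();   // consumers see the service disappear; the bundle keeps running
            registration = null;
        }
    }
}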
What's your opinion? Am I starting to get the whole point of services, or am I still thinking the wrong way? Are there things I am missing that would make services far more attractive than exported packages?

It's quite simple:
Bundles are just providing classes you can use. Using Imports/Exports you can shield visibility and avoid (for example) versioning conflicts.
Services are instances of classes that satisfy a certain contract (interfaces).
So, when using services you don't have to care about the origin of an implementation, nor about implementation details. The implementation may even change while you are using a certain service.
When you just want to rely on the bundle layer of OSGi, you easily introduce cross-cutting dependencies on concrete implementations, which you usually never want (read about DI below).
This is not an OSGi thing only - just good practice.
In non-OSGi worlds you may use Dependency Injection (DI) frameworks like Guice, Spring or similar. OSGi has the service layer built into the framework and lets higher-level frameworks (Spring, Guice) use this layer. So in the end you usually don't use the OSGi service API directly but DI adapters from user-friendly frameworks (Spring --> Spring DM, Guice --> Peaberry, etc.).
HTH,
Toni

I'd recommend purchasing this book. It does an excellent job explaining services and walking through the construction of a non-trivial application that makes use of OSGi Services.
http://equinoxosgi.org/
My company routinely builds 100+ bundle applications using services. The primary benefits we gain from using services are:
1) Loose coupling of producer/consumer implementation
2) Hot swappable service providers
3) Cleaner application architecture

When you start with OSGi, it is always easier to begin with an export-package approach; it certainly feels more Java-like. But when your application starts growing and you need a bit of dynamicity, services are the way to go.
Export-Package only does the resolution on startup, whereas services are resolved on an ongoing basis (which you may or may not want). From a support point of view, having services can be very scary (Is it deterministic? How can I replicate problems?), but it is also very powerful.
Peter Kriens explains why he thinks that Services are a paradigm shift in the same way OO was in its time. see µServices and Duct Tape.
In all my OSGi experience I haven't yet had the occasion to implement complex services (i.e. more than one layer), and certainly annotations seem the way to go. You can also use Spring Dynamic Modules to ease the pain of dealing with service trackers (and there are many other options, like iPOJO and Blueprint).
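For reference, a Declarative Services component using the standard annotations could look roughly like this; TwitterService and its latestTweets() method are hypothetical:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical Declarative Services component: the framework instantiates it, injects the
// TwitterService, and only activates it while that service is available.
@Component
public class TimelineViewer {

    @Reference
    private TwitterService twitter;   // mandatory reference; latestTweets() is an assumed method

    @Activate
    void activate() {
        System.out.println("Latest tweets: " + twitter.latestTweets());
    }
}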

Let's consider the following two scenarios:
Scenario 1: Bundle A offers a service that performs an arithmetic addition, add(x, y) returning x + y. To achieve this, it exports the package mathOpe containing the interface IAddition, and registers a service in the service registry. Bundles B, C, D, ... consume this service.
Scenario 2: Bundle A exports the package mathOpe, where we find a class Addition exposing an operation add(x, y) returning x + y. Bundles B, C, D, ... import the package mathOpe.
Comparison of scenario 1 vs. scenario 2:
Just one implementation instance vs. many instances (Feel free to make it static!)
Dynamic service management (start, stop, update) vs. no management; the consumer owns the implementation (the "service")
Flexible (we can imagine a remote service over a network) vs. not flexible
... among others.
PS: I am not an OSGi expert nor a Java one; this answer reflects only my understanding of the phenomenon :)

I think this excellent article could answer a lot of your questions: OSGi, and How It Got That Way.

The main advantage of using a service instead of the implementation class is that the bundle offering the service will do the initialization of the class itself.
The bundle that uses the service does not need to know anything about how the service is initialized.
If you do not use a service, you will always have to call some kind of factory to create the service instance. This factory will leak details of the service that should remain private.
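A small sketch of that idea, with hypothetical QueryService, CachingQueryService and ConnectionPool types: all the wiring stays inside the providing bundle's activator, and consumers only ever ask the registry for QueryService.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical provider bundle: all initialization detail (connection pool, wiring, caching)
// stays private to this bundle; consumers only ever ask the registry for QueryService.
public class PersistenceActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        ConnectionPool pool = new ConnectionPool("jdbc:h2:mem:demo", 10);  // private setup detail
        QueryService service = new CachingQueryService(pool);             // private wiring detail
        context.registerService(QueryService.class, service, null);
    }

    @Override
    public void stop(BundleContext context) {
        // the registration is removed automatically; close the pool here if needed
    }
}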

Related

Creating new services in Service Fabric will cause duplicated code

The Visual Studio project templates for Service Fabric services contain code that can be reused across multiple other projects, for example ServiceEventSource.cs or ActorEventSource.cs.
My programmer instinct wants to move this code to a shared library so I don't have duplicate code. But maybe this isn't the way to go with microservices, since you want to have small, independent services. Introducing a library will make them more dependent. But they are already dependent on the EventSource class.
My solution will be to move some reusable code to a base class in a shared project and inherit that class in my services. Is this the best approach?
I'm guessing all your services are going to be doing lots of different jobs, so once you pad out your EventSource classes they'll be completely different from each other, except for one method which would be "service started"?
As with any logging, there are many different approaches. One of the main ones I like is using AOP or interceptor proxies via IoC containers; this keeps your classes clean while still allowing re-use of the ETW code and a decent amount of logging to help you debug later down the line.
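The question is about .NET/ETW, but the interceptor-proxy idea itself is language-agnostic; here is a minimal sketch using a plain Java dynamic proxy (OrderService and OrderServiceImpl are hypothetical names):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Wrap any interface in a logging proxy so the business classes stay clean and the
// logging code lives in one reusable place.
public final class LoggingProxy {

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("enter " + iface.getSimpleName() + "." + method.getName());
            try {
                return method.invoke(target, args);
            } finally {
                System.out.println("exit  " + iface.getSimpleName() + "." + method.getName());
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

// Usage (OrderService and OrderServiceImpl are hypothetical):
//   OrderService svc = LoggingProxy.wrap(OrderService.class, new OrderServiceImpl());
//   svc.processOrder(42);   // logged on entry and exit, no logging code inside the implementation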
I moved a lot of duplicate code to my own NuGet libraries, which is working quite well. It is an extra dependency, but always better than duplicated code. Now I'm planning to make my own SF templates in Visual Studio, so I don't have to remove and adjust some files.
I found a nice library (EventSourceProxy) which helps me manage the EventSource code for ETW: https://github.com/jonwagner/EventSourceProxy

Is MEF a Service locator?

I'm trying to design the architecture for a new LOB MVVM project utilising Caliburn Micro and nHibernate, and am now at the point of looking into DI and IoC.
A lot of the examples for bootstrapping Caliburn Micro use MEF as the DI/IoC mechanism.
What I'm struggling with is that MEF seems to be reasonably popular, but the idea of the MEF [Import] attributes smells to me like another flavour of Service Locator.
Am I missing something about MEF whereby almost all the examples I've seen are not using it correctly, or have I completely misunderstood something about how it's used whereby it sidesteps the whole Service Locator issue?
MEF is not a Service Locator on its own. It can be used to implement a Service Locator (CompositionInitializer in the Silverlight version is effectively a Service Locator built into MEF), but it can also do dependency injection directly.
While the attributes may "smell" to you, they alone don't cause this to be a service locator, as you can use [ImportingConstructor] to inject data at creation time.
Note that the attributes are actually not the only way to use MEF - it can also work via direct registration or convention based registration instead (which is supported in the CodePlex drops and .NET 4.5).
I suppose if you were to just new up parts that had property imports and try to use them, then you could run into some of the same problems described here: Service Locator is an Anti-Pattern
But in practice you get your parts from the container, and if you use [Import] without the additional AllowDefault property, then the part is required and the container will blow up on you if you ask for the part doing the import. It will blow up at runtime, yes, but unlike a run-of-the-mill service locator, it's fairly straightforward to do static analysis of your MEF container using a test framework. I've written about it a couple of times here and here.
It hasn't been a problem for me in practice, for a couple of reasons:
I get my parts from the container.
I use composition tests.
I'm writing applications, not framework code.

What is a proper design for a system with an external database and a RESTful web GUI and service?

Basically I started to design my project like this:
Play! Framework for the web GUI (consuming the RESTful service)
Spray Framework for the RESTful service; it connects to the database, processes incoming data, and serves data to the web GUI
Database. Only the service has rights to access it
Now I'm wondering if it's really the best possible design.
In fact, with Play! I could easily host both the web GUI and the service at once.
That would probably be much easier to test and deploy in simple cases.
In complicated cases, when high performance is needed, I can still run one instance purely for the GUI and a few more just to work as services (even though each of them could still serve the full functionality).
On the other side, I'm not sure it won't hit performance too hard (the services will be processing a lot of data, not only from the web GUI). Also, isn't it mixing things that I should keep separate?
If I decide to keep them separate, should I allow database access only through the RESTful service? How do I resolve the problem of the service and the web GUI trying to use different versions of the database? Should I use a versioned REST protocol in that case?
----------------- EDIT------------------
My current system structure looks like this:
But I'm wondering if it wouldn't make sense to simplify it by putting the RESTful service directly inside the Play! GUI web server.
----------------- EDIT 2------------------
Here is the diagram which illustrates my main question.
To put it another way: would it be bad to connect my service and web GUI and share the model? And why?
There are also a few advantages:
less configuration needed between the service and the GUI
no data transfer needed
no need to create a separate access layer (that could be a disadvantage maybe, but in which cases?)
no inconsistencies between the GUI/service model (for example because of different protocol versions)
easier to test and deploy
no code duplication (normally we need to duplicate a big part of the model)
That said, here is the diagram:
Why do you need the RESTful service to connect to the database? Most Play! applications access the database directly from the controllers. The Play! philosophy considers accessing your models through a service layer to be an anti-pattern. The service layer could be handy if you intend to share that data with other (non-Play!) applications or external systems outside your control, but otherwise it's better to keep things simple. But you could also simply expose the RESTful interface from the Play! application itself for the other systems.
Play! is about keeping things simple and avoiding the over-engineered nonsense that has plagued Java development in the past.
Well, after a few more hours of thinking about this, I think I found a solution that will satisfy my needs. The goals I want fulfilled are:
The web GUI cannot make direct calls to the database; it needs to use a proper model which will in turn use some object repository
It must be possible to test and deploy the whole thing as one package with minimum configuration (at least for the development phase; later it should be possible to easily switch to a more flexible solution)
There should be no code duplication (i.e. the same code in the service and the web GUI model)
If one approach turns out to be wrong, I need to be able to switch to the other
What I forgot to say before is that my service will have an embedded cache used to aggregate and process the data, and then commit it to the database in bigger chunks. It's also shown on the diagram.
My class structure will look like this:
|
|- models
|  |- IElementsRepository.scala
|  |- ElementsRepositoryJSON.scala
|  |- ElementsRepositoryDB.scala
|  |- Element.scala
|  |- Service
|  |  |- Element.scala
|  |- Web
|     |- Element.scala
|- controllers
|  |- Element.scala
|- views
   |- Element
      |- index.scala.html
So it's like a normal MVC web app, except that there are separate model classes for the service and the web GUI, inheriting from the main one.
In Element.scala I will have an IElementsRepository object injected using DI (probably Guice).
IElementsRepository has two concrete implementations:
ElementsRepositoryJSON allows retrieving data from the service through JSON
ElementsRepositoryDB allows retrieving data from the local cache and DB
This means that, depending on the active DI configuration, both the service and the web GUI can get their data either from the other service or from local/external storage.
So for early development I can keep everything in one Play! instance and use direct cache and DB access (through ElementsRepositoryDB), and later reconfigure the web GUI to use JSON (through ElementsRepositoryJSON). This also allows me to run the GUI and the service as separate instances if I want. I can even configure the service to use other services as data providers (though for now I don't have such a need).
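A minimal sketch of what that DI switch could look like with Guice modules (shown in Java syntax for brevity; the module names are made up, the repository types are the ones from the structure above):

import com.google.inject.AbstractModule;

// Two Guice modules that switch the repository implementation without touching the model code.
public class DevelopmentModule extends AbstractModule {
    @Override
    protected void configure() {
        // early development: one Play! instance with direct cache/DB access
        bind(IElementsRepository.class).to(ElementsRepositoryDB.class);
    }
}

class WebGuiOverJsonModule extends AbstractModule {
    @Override
    protected void configure() {
        // later: the web GUI fetches its data from the separate service over JSON
        bind(IElementsRepository.class).to(ElementsRepositoryJSON.class);
    }
}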
More or less it will look like that:
Well, I think there's no objectively right or wrong answer here, but I'll offer my opinion: I think the diagram you've provided is exactly right. Your RESTful service is the single point of access for all clients including your web front-end, and I'd say that's the way it should be.
Without saying anything about Play!, Spray or any other web frameworks (or, for that matter, any database servers, HTML templating libraries, JSON parsers or whatever), the obvious rule of thumb is to maintain a strict separation of concerns by keeping implementation details from leaking into your interfaces. Now, you raised two concerns:
Performance: The process of marshalling and unmarshalling objects into JSON representations and serving them over HTTP is plenty fast (compared to JAXB, for example) and well supported by Scala libraries and web frameworks. When you inevitably find performance bottlenecks in a particular component, you can deal with those bottlenecks in isolation.
Testing and Deployment: The fact that the Play! framework shuns servlets does complicate things a bit. Normally I'd suggest for testing/staging, that you just take both the WAR for your front-end stuff and the WAR for your web service, and put them side-by-side in the same servlet container. I've done this in the past using the Maven Cargo plugin, for example. This isn't so straightforward with Play!, but one module I found (and have never used) is the play-cargo module... Point being, do whatever you need to do to keep the layers decoupled and then glue the bits together for testing however you want.
Hope this is useful...

IoC container configuration

How should the configuration for an IoC container be organized? I know that registering by code should be placed at the highest level in an application, but what if an application had hundreds of dependencies that need to be registered? Same with XML configurations. I know that you can split up the XML configurations into multiple files, but that would seem like it would become a maintenance hassle if someone had to dig through multiple XML files.
Are there any best practices for organizing the registration of dependencies? From all the videos and tutorials that I've seen, the code used in the demo were simple enough to place in a single location. I have yet to come across a sample application that utilizes a large number of dependencies.
Autofac and others (e.g. Ninject) employ a module concept for this exact purpose. http://code.google.com/p/autofac/wiki/StructuringWithModules may be what you're looking for.
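Those links are .NET-specific, but the module idea is the same across containers; here is a minimal sketch of it using Guice modules in Java (all application types here are hypothetical):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

// Hypothetical application types, only here so the sketch is complete.
interface OrderRepository { }
class SqlOrderRepository implements OrderRepository { }

// Same "module" idea as Autofac/Ninject modules: each module groups the registrations
// for one area of the application.
class PersistenceModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(OrderRepository.class).to(SqlOrderRepository.class);
    }
}

public class CompositionRoot {
    public static void main(String[] args) {
        // the top level of the application only composes modules; it never has to know every class
        Injector injector = Guice.createInjector(new PersistenceModule() /*, new WebModule(), ... */);
        OrderRepository repo = injector.getInstance(OrderRepository.class);
        System.out.println("resolved " + repo.getClass().getSimpleName());
    }
}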
HTH,
Nick
It would help a little if we knew if you were talking about any particular IoC Container.
Windsor, for example, allows you to define dependencies across a wide range of XML files (organised however you want) and simply include them in the configuration. The structure should be in a format that makes sense: have a file/folder for Controllers, Facilities, etc. A hierarchy of related items.
With something more code-oriented, such as Autofac, you could easily create a host of container configuration providers to power your configuration. With Hiro, you don't really need to do much configuration at all.
Regardless of the container used, they all provide facilities for convention-over-configuration based registration, so that should be your first stop in cleaning up registrations. A great example would be to register all classes whose name ends in 'Controller' in an MVC application.

Impacts of configuring IoC Container from code

I currently have a home-made IoC container that I will soon replace with a new one. My home-made IoC container is configured using config files. From what I have read on the net, the ability to "configure from code" seems to be a very popular feature.
I don’t like the idea of having a class that knows every other class in the system in order to set up the IoC container. Such a class would have to live in an assembly that depends on the 80 other assemblies of my project.
Are there best practices on how to organize the code that configures the container?
I have read this post. Using conventions and auto-wiring is good when there are patterns in the types to be registered. But I have hundreds of types that are in different assemblies and that don’t have anything in common. How should I organize the code for those?
Regards,
Update: I chose an approach where the code that configures the container is decentralized. Each assembly in my system is given a chance to configure the container. The methods at the entry points of my system (many .exe apps, the web app, the web services app, and the unit test fixtures are all entry points) are responsible for calling each assembly to let it set up the container. I'm currently implementing that; I'm not sure yet whether it's going to be satisfactory. I will post another update soon.
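Your project is .NET, but to illustrate the shape of that decentralized idea in Java/Guice terms (the ServiceLoader-based discovery is just one possible way to let each assembly contribute its module; everything here is an assumption, not your actual setup):

import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Module;

// Each jar ("assembly") ships its own Guice Module and advertises it through the standard
// ServiceLoader mechanism, so the entry point only knows how to collect modules and never
// references the hundreds of concrete types itself.
public class EntryPoint {
    public static void main(String[] args) {
        List<Module> modules = new ArrayList<>();
        for (Module contributed : ServiceLoader.load(Module.class)) {
            modules.add(contributed);   // every jar lists its Module in META-INF/services
        }
        Injector injector = Guice.createInjector(modules);
        // ...resolve the application root from the injector and run it
    }
}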
Depending on your programming language (I use C#) you might want to look at something like Autofac modules: http://code.google.com/p/autofac/wiki/StructuringWithModules