Is MEF a Service Locator?

I'm trying to design the architecture for a new LOB MVVM project utilising Caliburn Micro and NHibernate, and am now at the point of looking into DI and IoC.
A lot of the examples for bootstrapping Caliburn Micro use MEF as the DI/IoC mechanism.
What I'm struggling with is that MEF seems to be reasonably popular, but the idea of the MEF [Import] attributes smells to me like another flavour of Service Locator.
Am I missing something about MEF? Are almost all the examples I've seen using it incorrectly, or have I misunderstood how it's meant to be used in a way that sidesteps the whole service locator issue?

MEF is not a Service Locator on its own. It can be used to implement a Service Locator (CompositionInitializer in the Silverlight version is effectively a Service Locator built into MEF), but it can also do dependency injection directly.
While the attributes may "smell" to you, they alone don't make this a service locator, as you can use [ImportingConstructor] to inject dependencies at creation time.
Note that the attributes are actually not the only way to use MEF - it can also work via direct registration or convention-based registration instead (which is supported in the CodePlex drops and .NET 4.5).
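For illustration, here is a minimal sketch of attribute-based constructor injection with MEF (the type names are made up):

using System.ComponentModel.Composition;

public interface IMessageService
{
    void Show(string text);
}

[Export(typeof(IMessageService))]
public class MessageBoxService : IMessageService
{
    public void Show(string text) { /* display the message */ }
}

[Export]
public class ShellViewModel
{
    private readonly IMessageService _messages;

    // The container supplies the dependency when it creates the part,
    // exactly like constructor injection in any other DI container.
    [ImportingConstructor]
    public ShellViewModel(IMessageService messages)
    {
        _messages = messages;
    }
}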

I suppose if you were to just new up parts that had property imports and try to use them, then you could run into some of the same problems described here: Service Locator is an Anti-Pattern
But in practice you get your parts from the container, and if you use [Import] without the optional AllowDefault property, then the part is required and the container will blow up on you when you ask for the part doing the import. It will blow up at runtime, yes, but unlike a run-of-the-mill service locator, it's fairly straightforward to do static analysis of your MEF container using a test framework. I've written about it a couple of times here and here.
It hasn't been a problem for me in practice, for a couple of reasons:
I get my parts from the container.
I use composition tests (see the sketch below).
I'm writing applications, not framework code.
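As a rough illustration of such a composition test (assuming NUnit and a hypothetical IShell root export living in the application assembly):

using System.ComponentModel.Composition.Hosting;
using NUnit.Framework;

[TestFixture]
public class CompositionTests
{
    [Test]
    public void Container_can_compose_the_application_root()
    {
        // Build the same catalog the application uses.
        var catalog = new AssemblyCatalog(typeof(IShell).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            // Throws CompositionException if any required import
            // anywhere in the graph cannot be satisfied.
            Assert.NotNull(container.GetExportedValue<IShell>());
        }
    }
}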

Related

Unity container injection in my view models

I am a newcomer to C# / .Net / Prism / WPF / DevExpress and started to work on a big project in my company. As I am starting a bit late in the project, a lot of code was already produced and I stumble upon code like this very often:
public class AboutWindowViewModel : BindableBase
{
    public AboutWindowViewModel(IUnityContainer container)
    {
        // ...
    }
}
What I find "special" here is the dependency of the view model on the container. In the codebase I am currently looking at, it seems to be the "pattern". Every class takes a dependency on IUnityContainer, and the dependencies are then resolved manually with e.g.
container.ResolveEx<...>(...);
I am used to working with DI frameworks in other languages, so that example just made no sense to me, because it goes against the definition of a DI framework and the problems it is meant to solve. For example, I tried to write some code to test one of those view models and it turned out to be a nightmare.
Now, I raised that concern with the developers at my company and they answered me the following:
Prism recommendation is to resolve services using containers as they are needed.
Therefore, they resolve them all the time with the IUnityContainer.ResolveEx method.
My question is: is that really the recommended way to build up software with Prism??? If not, do you know where I can find documentation that clearly addresses that point with examples?
is that really the recommended way to build up software with Prism???
Not at all, in fact it's an anti-pattern.
Prism recommendation is to resolve services using containers as they are needed.
Not really just Prism, but more generally, the recommendation is to let the container resolve the dependencies. Of course, this can be done by calling Resolve manually, but in doing that one defeats all the benefits expected from dependency injection.
Rather, you want to list the dependencies as constructor parameters and let the container fill them. Your code does not need to depend on the container, except for the very entry-point ("resolution root"). The application should only ever contain a single Resolve statement that resolves the first instance.
Here's where Prism comes into play: in your PrismApplication.RegisterTypes and IModule.RegisterTypes, you configure your container. You tell it which type should implement which interface. Later on, when view models are needed, Prism (the ViewModelLocator, to be precise) uses the container to resolve the view models, thereby resolving all their dependencies. There's no reason to make any class depend on the container; in fact, Prism doesn't register anything for IContainerRegistry in the first place.
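As a hedged sketch of what that looks like (IVersionService and VersionService are made-up names):

public interface IVersionService { string GetVersion(); }
public class VersionService : IVersionService
{
    public string GetVersion() => "1.0";
}

// The view model declares its dependencies; it never sees the container.
public class AboutWindowViewModel : BindableBase
{
    private readonly IVersionService _versions;

    public AboutWindowViewModel(IVersionService versions)
    {
        _versions = versions;
    }
}

// In your PrismApplication subclass (or a module's RegisterTypes):
protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
    containerRegistry.Register<IVersionService, VersionService>();
}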
Keep in mind that the container can not only inject singleton services, but also transient instances or factories. And if you need parametrized factories, you can always manually create and register complex factories.
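For instance, a hand-rolled parametrized factory might look like this (all names here are hypothetical):

public interface IReport { }
public class PdfReport : IReport
{
    public PdfReport(string title) { /* runtime parameter */ }
}

public interface IReportFactory
{
    IReport Create(string title);
}

public class PdfReportFactory : IReportFactory
{
    // The factory itself is resolved through the container, but it
    // accepts runtime parameters the container cannot know in advance.
    public IReport Create(string title) => new PdfReport(title);
}

// Registered like any other service:
containerRegistry.RegisterSingleton<IReportFactory, PdfReportFactory>();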
On a side note, I personally would be hesitant to work for a client with a code base like this, as it's obvious proof that their developers have no idea what they're doing.

Where to use the dependency injection container in Zend/Zend 2

This relates to DI as much as it relates to the Zend framework. My question is about where to use the DI container. Should it only ever be used during bootstrap for initialization, leaving the rest of the application ignorant of its existence? Or is it good practice to pass it to controllers, models, helpers, etc. to be used there if needed? What about Zend 2?
As it relates to dependency injection in general, that is something you should be practicing if you are attempting to write SOLID code. I have two articles I've written on the subject of Dependency Injection as it relates to the background knowledge (I think) developers should have before jumping directly into code that uses a DiC:
http://ralphschindler.com/2011/05/18/learning-about-dependency-injection-and-php
I've also compiled some examples of how to use Zend\Di, the DiC component in the ZF2 codebase:
https://github.com/ralphschindler/Zend_DI-Examples/
Another point I'd like to make: once you start passing the DiC as a dependency into controllers, models, etc., your DiC actually becomes a Service Locator at that point. This is perfectly acceptable, but you need to be aware up front that using a Service Locator would/should have been part of your design goals.
The next beta cycle of ZF2 will probably better address how Di and Service Locators are used through modules, controllers and how dependencies are pushed into things like helpers and models. So keep an eye out for that.
Hope that gets you started.
I have been reading some of the answers. First, as far as I know, the Zend framework before version 2 has nothing built in to have the Dependency Injection Container do its work in the "composition root".
Hence, your best bet would be a service locator as mentioned already here. I have come up with a Zend framework application setup to do just that. Check it out over here.
In a nutshell, what it does is:
1) Bootstrap Symfony Dependency Injection in the Zend Application Bootstrap class
2) Get the container from step 1 in a Zend controller, where you can use it to retrieve your services

Should CSLA be used with a dependency injection framework?

My development team is evaluating the various frameworks available for .NET to simplify our programming, one of which is CSLA. I have to admit to being a bit confused as to whether or not CSLA would benefit from being used in conjunction with a dependency injection framework, such as Spring.net or Windsor. If we combined one of those two DI frameworks with, say, the Entity Framework to handle ORM duties, does that negate the need or benefit of using CSLA altogether?
I have various levels of understanding of all these frameworks, and I'm trying to get a big picture of what will best benefit our enterprise architecture and object design.
Thank you!
CSLA is a framework for creating business entities, so it has separate concerns from an IoC container or ORM. In an enterprise application you should consider the benefits of all three.
In particular, you should consider CSLA if you want data binding built into your models, dirty checking, N-level undo, validation and business rules, as well as the data portal implementation, which allows easy configuration for n-tier deployments.
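To give a flavour of that, here is a tiny, illustrative CSLA 4-style business object (the names and the single rule are just examples):

using System;
using Csla;
using Csla.Rules.CommonRules;

[Serializable]
public class Customer : BusinessBase<Customer>
{
    public static readonly PropertyInfo<string> NameProperty =
        RegisterProperty<string>(c => c.Name);

    // GetProperty/SetProperty hook data binding, dirty checking
    // and n-level undo into this ordinary-looking property.
    public string Name
    {
        get { return GetProperty(NameProperty); }
        set { SetProperty(NameProperty, value); }
    }

    protected override void AddBusinessRules()
    {
        base.AddBusinessRules();
        // Declarative validation/business rules per property.
        BusinessRules.AddRule(new Required(NameProperty));
    }
}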
Short answer: Yes.
Long answer: It requires a bit of grunt work and some experimentation to set up, but it can be done without fundamentally breaking CSLA. I put together a working prototype using StructureMap and the repository pattern, and used StructureMap's BuildUp method to perform setter injection within CSLA. I used a method similar to the one found here to ensure that my business objects are re-injected when they are deserialized.
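Roughly, the re-injection hook looks like this (a sketch against StructureMap 2.x and CSLA's OnDeserialized override; ICustomerRepository is a placeholder, and StructureMap must be configured with a setter policy such as SetAllProperties for BuildUp to fill the property):

using System;
using System.Runtime.Serialization;
using Csla;
using StructureMap;

public interface ICustomerRepository { /* data access contract */ }

[Serializable]
public class CustomerEdit : BusinessBase<CustomerEdit>
{
    [NonSerialized]
    private ICustomerRepository _repository;

    // Setter-injection target, filled by ObjectFactory.BuildUp.
    public ICustomerRepository Repository
    {
        get { return _repository; }
        set { _repository = value; }
    }

    // Re-inject after the data portal deserializes the object.
    protected override void OnDeserialized(StreamingContext context)
    {
        base.OnDeserialized(context);
        ObjectFactory.BuildUp(this);
    }
}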
I also use StructureMap's registry base class to separate my configuration into presentation, CSLA client, CSLA server, and CSLA global settings (sketched below). This way I can use the linked-file feature of Visual Studio to include the CSLA server and CSLA global configuration files within the server-side Data Portal, and the configuration will always be the same in both places. This ensures I can still change the Data Portal configuration settings in CSLA from 2-tier to 3-tier without breaking anything.
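Illustratively, one of those registries might look like this (all type names are placeholders; the server and global registries follow the same shape and are combined at each entry point with AddRegistry):

using StructureMap.Configuration.DSL;

public interface IStatusReporter { }
public class WpfStatusReporter : IStatusReporter { }

public class CslaClientRegistry : Registry
{
    public CslaClientRegistry()
    {
        // Client-side services only.
        For<IStatusReporter>().Use<WpfStatusReporter>();
    }
}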
Anyway, I am still weighing the potential benefits with the drawbacks to using DI, but so far I am leaning in the direction of using it because testing will be much easier although I am skeptical of trying to use any of the advanced features of DI such as interception. I recommend reading the book Dependency Injection in .NET by Mark Seemann to understand the right and wrong way to use DI because there is a lot of misinformation on the Internet.

Why should I prefer OSGi Services over exported packages?

I am trying to get my head around OSGi Services. The main question I keep asking myself is: What's the benefit of using services instead of working with bundles and their exported packages?
As far as I know, the concept of late binding seems to have something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same: a bundle starts and registers services or binds to services. Of course services can come and go whenever they want and you have to keep track of these changes. But the core idea doesn't seem that different to me.
Another aspect to this seems to be that services are more flexible. There could be many implementations for one specific Interface. On the other hand there can be a lot of different implementations for a specific exported package too.
In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies.
So far I could not find a blog post, book or presentation that could clearly describe why services are better than just exposing functionality by exporting and importing packages.
To sum my questions up:
What are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
Addition
I have tried to gather further information about this issue and come up with some kind of comparison between plain export/import of packages and services. Maybe this will help us to find a satisfying answer.
Start/Stop/Update
Both bundles (hence packages) and services can be started and stopped. In addition, both can be updated, in a sense. Services are also tied to the bundle life cycle itself, but here I just mean whether you can start and stop services or bundles (so that the exported packages "disappear").
Tracking of changes
ServiceTracker and BundleTracker make it possible to track and react to changes in the availability of bundles and services.
Specific dependencies to other bundles or services.
If you want to use an exported package you have to import it.
Import-Package: net.jens.helloworld
If net.jens.helloworld provided a service, I would still need to import the package in order to get the interface.
So in both cases there would be some sort of "tight coupling" to a more or less specific package.
Ability to have more than one implementation
Specific packages can be exported by more than one bundle. There could be a package net.jens.twitterclient which is exported by bundle A and bundle B. The same applies to services. The interface net.jens.twitterclient.TwitterService could be published by bundle A and B.
To sum this up, here is a short comparison (exported packages / services):
Start/Stop/Update: YES / YES
Tracking of changes: YES / YES
Specific dependencies: YES / YES
More than one implementation: YES / YES
So there is no difference.
Additionally it seems that services add more complexity and introduce another layer of dependencies (see image below).
[Image: bundle vs. service dependency comparison - http://img688.imageshack.us/img688/4421/bundleservicecomparison.png]
So if there is no real difference between exported packages and services what is the benefit of using services?
My explanation:
The use of services seems more complex, but services themselves seem to be more lightweight. It should make a difference (in terms of performance and resources) whether you start/stop a whole bundle or just start and stop a specific service.
From an architectural standpoint, I also guess that bundles could be viewed as the foundation of the application. A foundation shouldn't change often in terms of starting and stopping bundles. The functionality is provided by the services of these bundles, in some kind of dynamic layer above the "bundle layer". This "service layer" could be subject to frequent changes. For example, the service for querying a database is unregistered if the database goes offline.
What's your opinion? Am I starting to get the whole point of services or am I still thinking the wrong way? Are there things I am missing that would make services far more attractive over exported packages?
It's quite simple:
Bundles are just providing classes you can use. Using imports/exports you can shield visibility and avoid (for example) versioning conflicts.
Services are instances of classes that satisfy a certain contract (interfaces).
So, when using services you don't have to care about the origin of an implementation, nor about implementation details. They may even change while you are using a certain service.
When you rely only on the bundle layer of OSGi, you easily introduce crosscutting dependencies to concrete implementations, which you usually never want (read below about DI).
This is not an OSGi thing only - just good practice.
In non-OSGi worlds you may use Dependency Injection (DI) frameworks like Guice, Spring or similar. OSGi has the service layer built into the framework and lets higher-level frameworks (Spring, Guice) use this layer - so in the end you usually don't use the OSGi service API directly but DI adapters from user-friendly frameworks (Spring → Spring DM, Guice → Peaberry, etc.).
HTH,
Toni
I'd recommend purchasing this book. It does an excellent job explaining services and walking through the construction of a non-trivial application that makes use of OSGi Services.
http://equinoxosgi.org/
My company routinely builds 100+ bundle applications using services. The primary benefits we gain from using services are:
1) Loose coupling of producer/consumer implementation
2) Hot swappable service providers
3) Cleaner application architecture
When you start with OSGi, it is always easier to start with an export-package approach; it certainly feels more Java-like. But when your application starts growing and you need a bit of dynamicity, services are the way to go.
Export-Package only does its resolution on startup, whereas services are resolved on an ongoing basis (which you may want or not). From a support point of view, having services can be very scary (Is it deterministic? How can I replicate problems?), but it is also very powerful.
Peter Kriens explains why he thinks that Services are a paradigm shift in the same way OO was in its time. see µServices and Duct Tape.
In all my OSGi experience I haven't yet had the occasion to implement complex services (i.e. more than one layer), and certainly annotations seem the way to go. You can also use Spring Dynamic Modules to ease the pain of dealing with service trackers (and many other options like iPOJO and Blueprint).
Let's consider the two following scenarios:
Bundle A offers a service which is an arithmetic addition: add(x, y) returns x + y. To achieve this, it exports the package mathOpe with the interface IAddition, and registers a service within the service registry. Bundles B, C, D, ... consume this service.
Bundle A exports the package mathOpe, in which we find a class Addition exposing the same operation add(x, y). Bundles B, C, D, ... import the package mathOpe and instantiate Addition themselves.
Comparison of scenario 1 vs. scenario 2:
Just one implementation instance vs. many instances (feel free to make it static!)
Dynamic service management (start, stop, update) vs. no management; the consumer owns the implementation (the "service")
Flexible (we can imagine a remote service over a network) vs. not flexible
... among others.
PS: I am not an OSGi expert nor a Java one; this answer only shows my understanding of the phenomenon :)
I think this excellent article could answer a lot of your questions: OSGi, and How It Got That Way.
The main advantage of using a service instead of the implementation class is that the bundle offering the service will do the initialization of the class itself.
The bundle that uses the service does not need to know anything about how the service is initialized.
If you do not use a service, you will always have to call some kind of factory to create the service instance. This factory will leak details of the service that should remain private.

Impacts of configuring IoC Container from code

I currently have a home-made IoC container that I will soon replace with a new one. My home-made IoC container is configured using config files. From what I have read on the net, the ability to "configure from code" seems to be a very popular feature.
I don’t like the idea of having a class that knows every other class in the system in order to set up the IoC container. Such a class would have to live in an assembly that depends on the 80 other assemblies of my project.
Are there best practices on how to organize the code that configures the container?
I have read this post. Using conventions and auto-wiring is good when there are patterns in the types to be registered. But I have hundreds of types that are in different assemblies and that don’t have anything in common. How should I organize the code for those?
Regards,
Update: I chose an approach where the code that configures the container is decentralized. Each assembly in my system is given a chance to configure the container. The methods at the entry points of my system (many .exe apps, the web app, the web services app, and the unit test fixtures are all entry points) are responsible for calling into each assembly to let it set up the container. I'm currently implementing that; I'm not sure yet if it will be satisfactory. I will post another update soon.
Depending on your programming language (I use C#) you might want to look at something like Autofac modules: http://code.google.com/p/autofac/wiki/StructuringWithModules
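For what it's worth, a minimal Autofac module, one per assembly, might look like this (the repository type names are illustrative):

using Autofac;

public interface IOrderRepository { }
public class NHibernateOrderRepository : IOrderRepository { }

public class PersistenceModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Each assembly registers only its own types.
        builder.RegisterType<NHibernateOrderRepository>()
               .As<IOrderRepository>()
               .InstancePerLifetimeScope();
    }
}

// The entry point composes modules without knowing their contents:
var builder = new ContainerBuilder();
builder.RegisterModule(new PersistenceModule());
var container = builder.Build();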