Inversion of control container - inversion-of-control

What are the most important features an IoC container should have? You can easily create containers in 15 lines of code, but what should one include to be "useful" in a project?

This is a pretty wide-open topic, and subject to a lot of opinion, but I will try to answer from a very pragmatic point of view. Given the projects that I have worked on, and my experience with IoC, I would say that there are at least three biggies to look for in terms of usefulness.
Configuration - Any IoC that you use needs to have some central location that allows you to configure the behavior of that container. Whether that be a config file or a nice set of API calls that can be wrapped up in a global class somewhere, if the container isn't easily configurable then it is going to be a headache.
Lifetime Management - You really want a container that has the ability to allow for varied object lifetimes. You might want a certain object to always get a new IPersonCreator, but you only want one IPersonService in existence at any given time.
Automatic Dependency Injection - OK, so dependency injection is the concept that IoC is built on top of, but you don't want to have to manage this yourself. The idea here is that if you ask for an IPersonCreator for the first time, the container should resolve all its dependencies, and their dependencies, and so on automatically.
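To make the last two points concrete, here is a minimal sketch using Castle Windsor's fluent API (assuming Windsor 3+; PersonService and PersonCreator are hypothetical implementations of the interfaces mentioned above). Most containers offer equivalent calls:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

// One shared IPersonService for the whole application, a fresh IPersonCreator per resolve.
var container = new WindsorContainer();
container.Register(
    Component.For<IPersonService>().ImplementedBy<PersonService>().LifestyleSingleton(),
    Component.For<IPersonCreator>().ImplementedBy<PersonCreator>().LifestyleTransient());

// Resolving IPersonCreator also resolves whatever its constructor asks for, and so on down the graph.
var creator = container.Resolve<IPersonCreator>();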
Overall what you need depends on the project, but there are several containers out there that will suit your needs just fine.

In descending order of importance:
Allow at least setter and constructor injection (see the sketch below),
Separate configuration from code,
Allow different styles of configuration (XML or annotations).
These will require more than 15 lines of code, but those seem key to me.
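For the first point, a minimal sketch in plain C# of what the two injection styles look like (INotifier, IAuditLog and OrderProcessor are invented names for illustration):

public interface INotifier { void Send(string message); }
public interface IAuditLog { void Record(string entry); }

public class OrderProcessor
{
    private readonly INotifier notifier;

    // Constructor injection: the required dependency is supplied up front.
    public OrderProcessor(INotifier notifier)
    {
        this.notifier = notifier;
    }

    // Setter (property) injection: an optional dependency the container can set after construction.
    public IAuditLog AuditLog { get; set; }

    public void Process()
    {
        notifier.Send("order processed");
        if (AuditLog != null) AuditLog.Record("order processed");
    }
}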

Related

Castle Windsor and Dynamic Wiring

I've been a heavy Windsor user for the last several years. Prior to the Fluent Registration API, I would toggle between XML registration and huge piles of AddComponent() code. We've been happily using the Fluent Registration API, and Installers specifically, for quite some time now. I've gotten the impression from various writings like this:
http://docs.castleproject.org/Windsor.XML-Registration-Reference.ashx
That the Xml Registration approach has fallen out of favor and it wouldn't surprise me if it were marked for deprecation at some point in the near future.
Now, for my question: The Fluent Registration API and Installers work swimmingly for auto-wiring scenarios (i.e. when I want Windsor to just figure out how to construct my object graphs). Auto-wiring is the vast majority of IoC use cases out there, but what about when auto-wiring isn't possible? In other words I have multiple implementations of a service and I need to tell Windsor how to construct parts of my object graph. I've done this many times with the Xml Registration approach, but is there a more preferred approach these days? I'm hesitant to go the Xml Registration approach as its future seems uncertain, but I don't know how else to accomplish this with Windsor.
My use cases are:
System needs to be able to swap implementations at QA-test (i.e. credit checks and fraud detection processing where we want to test without a dependency on a credit bureau API).
Provider patterns in our system where we need to conditionally turn on and off different implementations at deploy-time.
This all seems very well suited for IoC, and we have all the building blocks in place, but I want to make sure I'm taking the most future-proof approach with Windsor.
UPDATE:
While I like the feature toggle approach, I recently discovered a Windsor feature that is very useful on this front - Fallback Components. I'm leaving this edit here for anyone who might stumble across this later.
Configuring your DI container completely through XML is error prone, verbose, and just too painful. The XML configuration possibilities are always a subset of what you can do with code based configuration; code is always more expressive.
Sometimes, though, your DI configuration depends on deploy-time configuration, but since the number of knobs you need is often fairly small, using a configuration flag is often a much better approach than polluting your configuration file with fully qualified type names.
Or let me put it differently: when you have large amounts of your DI configuration placed in your configuration file because you might want to change it at deploy time, please think again. Many of those changes need testing (by a developer) anyway, so there is no way you want someone from your operations team to fiddle around with that. And when you need a developer to look at it and verify it, what's the advantage of not having to recompile the project? Is this actually any quicker? A developer would still have to start the application anyway.
It is a false sense of flexibility and, in fact, poor interface design (XML is the interface for your maintenance and operations department). By the way, are you the person who needs to document how the configuration file should be changed?
Instead of describing the list of fully qualified type names that are valid somewhere in the middle of the XML file, wouldn't it be much easier if all you have to write is "place 'false' in this field to disable ..."?
Here is an example of how to use a configuration switch:
bool detectFraud =
    ConfigurationManager.AppSettings["DetectFraud"] != "false";

container.Register(
    Component.For(typeof(IFraudDetector))
        .ImplementedBy(detectFraud ? typeof(RealDetector) : typeof(FakeDetector)));
See how the configuration switch is now simply a boolean flag. This makes the configuration file much more maintainable, since the configuration is now a simple boolean switch instead of a complete type name (that can be misspelled).
Of course doing the ["DetectFraud"] != "false" isn't that nice by itself, but this can simply be solved by creating a strongly-typed configuration helper.
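A minimal sketch of such a helper (the AppConfig class name and the idea of centralizing the AppSettings reads are just illustrative):

using System;
using System.Configuration;

// Wraps the raw AppSettings lookups so the rest of the code deals with
// typed properties instead of string keys and string comparisons.
public static class AppConfig
{
    public static bool DetectFraud =>
        !string.Equals(
            ConfigurationManager.AppSettings["DetectFraud"],
            "false",
            StringComparison.OrdinalIgnoreCase);
}

The registration above then just reads detectFraud from AppConfig.DetectFraud instead of touching ConfigurationManager directly.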
This answer might help as well. It allows you to dynamically, at runtime, provide an implementation. Though it sounds like you don't need it to be that dynamic, and it's a little less obvious what's going on.
There are no plans to obsolete or remove the XML config support in Windsor.
Yes, you are right, it isn't a preferred approach due to its numerous drawbacks.
Anything you can do in XML can be done in code (note that inverse is not true).
Also keep in mind XML is not all-or-nothing. There are many ways to achieve the scenarios you gave as examples without resorting to registration in XML.
Feature toggles
Conditional compilation
if/else in your installer based on an appSettings flag (sketched below)
others...
I've used each of them in different projects in the past.
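For the appSettings-flag option, a rough sketch of what such an installer could look like (the installer name is hypothetical, and it reuses the IFraudDetector types from the example above):

using System.Configuration;
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public class FraudDetectionInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // Deploy-time switch: a simple flag decides which implementation gets wired in.
        bool detectFraud =
            ConfigurationManager.AppSettings["DetectFraud"] != "false";

        if (detectFraud)
            container.Register(Component.For<IFraudDetector>().ImplementedBy<RealDetector>());
        else
            container.Register(Component.For<IFraudDetector>().ImplementedBy<FakeDetector>());
    }
}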

Wicket Application Structure Best Practice

I am working with an application that has some Wicket pages, divided into several Applications. We are expanding the Wicket development to replace other legacy content. Right now, there is no clear path as to whether to write new Wicket Applications for each workflow, or whether we should have one big Application with many URL mappings. I did not find any information about this either.
As far as we've gotten, we see the following issues:
Many Wicket Applications pattern:
Each Application (Workflow) can be easily mounted without much of a hassle.
Even if it's not more time consuming, you end up writing more Java Classes (at least for each Application you need at least some basic structure).
Each Application's default URL gets accessed by its homepage, so no further config is necessary.
One big Application pattern:
Each Workflow needs a Page, which has to be mapped in the Application class. As far as I've seen, there is no configuration in XML files to achieve this, but it should be possible to develop some schema that allows structuring this in some XML file. Disadvantage: more time consuming the first time.
For further additions, it should be somewhat easier than with the Application pattern, but not by enough to matter, considering that the workflow development is always far bigger than the initial config.
Each Workflow's default URL can be accessed via the URL mapping and can be changed easily; this seems a little easier than with the Application approach, but doesn't make a big difference either.
Now, what I'm looking for:
Opinions based on experience, maybe arguments for deciding one way or the other.
Is there any documentation from Apache or some source for this? If so, some reference would be a great advice.
As I understand it, you would still deploy all of your Wicket Applications within one single Web Archive.
Doing that, in my opinion you lose the only real advantage of separating your code into different Wicket Applications. If you separate your code into multiple Wicket Application classes
you have to think of configuring each Wicket Application the same way and not forget a single one (include it in the web.xml, call the same settings in the init()-method, ...)
you are writing more boilerplate code as you already said yourself
The configuration and code would be more complex than with the "single application" approach. With a single application
you only have to mount the start page of each workflow in your single application class...which is one line of code compared to a new class and some lines of web.xml config with the multiple-applications approach
So, if you don't want to deploy your workflows separately, I'd go with a single application. It makes it so much easier. Especially when you have accumulated more than a couple workflows the single application approach will probably be much easier to maintain.
How much shared code are you likely to have?
Are there different performance/load tolerance/availability requirements for the different workflows?
These are the questions I use in general to decide whether two things should go in one application or not, and that's pretty much independent from Wicket.
Obviously much shared code points towards a single application. Of course you can still use separate applications with all of them depending on a set of shared modules but in practice you'll spend a lot of time trying to keep your modules in sync.
Similarly, wildly different availability requirements might steer you in the direction of separate applications as you'd probably want to deploy them separately.
The most difficult scenario is if you have much shared code AND you still want to deploy them separately, in that case a multi-tiered approach (multiple frontends connecting to a common backend) might be worth considering.

IoC Container accessibility

I'm wondering if the IoC container should be referenced only by the class that instantiates and configures it, or if it can be injected into other classes, VMs and the VML for example. I'm asking because I saw many people pass it through the ViewModelLocator's constructor and use it from there.
Is this approach acceptable or to be avoided?
Thank you very much.
You are correct - passing the container around is Doing It Wrong, since it goes against the whole Inversion of Control idea. Here are a few links for you:
Here's how I use IoC containers (and part 2)
I also recommend checking out Windsor's documentation, especially Concepts section which is quite universal (and will be useful to you even if you're not using Windsor).
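To make the distinction concrete, here is a minimal sketch (IMessageService, PersonViewModel and CompositionRoot are invented names): the class declares its dependencies, and only the composition root ever touches the container.

using Castle.Windsor;

public interface IMessageService { void Show(string text); }

// Inversion of control: the view model only states what it needs; it never sees the container.
public class PersonViewModel
{
    private readonly IMessageService messages;

    public PersonViewModel(IMessageService messages)
    {
        this.messages = messages;
    }
}

public static class CompositionRoot
{
    // The only place that touches the container (e.g. App startup or the ViewModelLocator).
    public static PersonViewModel CreatePersonViewModel(IWindsorContainer container)
    {
        return container.Resolve<PersonViewModel>();
    }
}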

MEF vs. any IoC

Looking at Microsoft's Managed Extensibility Framework (MEF) and various IoC containers (such as Unity), I am failing to see when to use one type of solution over the other. More specifically, it seems like MEF handles most IoC type patterns and that an IoC container like Unity would not be as necessary.
Ideally, I would like to see a good use case where an IoC container would be used instead of, or in addition to, MEF.
When boiled down, the main difference is that IoC containers are generally most useful with static dependencies (known at compile-time), and MEF is generally most useful with dynamic dependencies (known only at run-time).
As such, they are both composition engines, but the emphasis is very different for each pattern. Design decisions thus vary wildly, as MEF is optimized around discovery of unknown parts, rather than registrations of known parts.
Think about it this way: if you are developing your entire application, an IoC container is probably best. If you are writing for extensibility, such that 3rd-party developers will be extending your system, MEF is probably best.
Also, the article in #Pavel Nikolov's answer provides some great direction (it is written by Glenn Block, MEF's program manager).
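As a rough sketch of the "discovery of unknown parts" side (the IFraudCheck interface and the plugins folder name are hypothetical), MEF composes whatever happens to be deployed next to the application at run time, rather than a fixed registration list:

using System.Collections.Generic;
using System.ComponentModel.Composition.Hosting;
using System.Linq;

public interface IFraudCheck { bool IsSuspicious(object request); }

public static class PluginLoader
{
    public static IList<IFraudCheck> LoadChecks()
    {
        // Whatever assemblies sit in the plugins folder at run time get composed in.
        var catalog = new DirectoryCatalog("plugins");
        var container = new CompositionContainer(catalog);
        return container.GetExportedValues<IFraudCheck>().ToList();
    }
}

// A third party would export an implementation like this from their own assembly:
// [Export(typeof(IFraudCheck))]
// public class VelocityCheck : IFraudCheck { ... }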
I've been using MEF for a while and the key factor for when we use it instead of IOC products is that we regularly have 3-5 implementations of a given interface sitting in our plugins directory at a given time. Which one of those implementations should be used is actually something that can only be decided at runtime.
MEF is good at letting you do just that. Typically, IoC is geared toward making sure you could swap out, for a canonical example, an IUserRepository based on ORM Product 1 for ORM Product 2 at some point in the future. However, most IoC solutions assume that there will only be one IUserRepository in effect at a given time.
If, however, you need to choose one based on the input data for a given page request, IOC containers are typically at a loss.
As an example, we do our permission checking and our validation via MEF plugins for a big web app I've been working on for a while. Using MEF, we can look at the record's CreatedOn date, go digging for the validation plugin that was actually in effect when the record was created, run the record BOTH through that plugin AND the validator that's currently in effect, and compare the record's validity over time.
This kind of power also lets us define fallthrough overrides for plugins. The apps I'm working on are actually the same codebase deployed for 30+ implementations. So we typically go looking for plugins by asking for:
An interface implementation that is specific to the current site and the specific record type in question.
An interface implementation that is specific to the current site, but works with any kind of record.
An interface that works for any site and any record.
That lets us bundle a set of default plugins that will kick in, but only if that specific implementation doesn't override it with customer specific rules.
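A rough sketch of that fallthrough lookup using MEF export metadata; the IValidator interface, the Site/RecordType keys and the "*" wildcard convention are invented to illustrate the idea, not the poster's actual code:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;

public interface IValidator { bool Validate(object record); }

// Every export is assumed to carry both metadata keys; "*" means "any".
public interface IValidatorMetadata
{
    string Site { get; }
    string RecordType { get; }
}

[Export(typeof(IValidator))]
[ExportMetadata("Site", "*")]
[ExportMetadata("RecordType", "*")]
public class DefaultValidator : IValidator
{
    public bool Validate(object record) { return true; }
}

public static class ValidatorResolver
{
    // Most specific wins: site + record type, then site only, then the bundled default.
    public static IValidator Resolve(CompositionContainer container, string site, string recordType)
    {
        var exports = container.GetExports<IValidator, IValidatorMetadata>().ToList();
        var match =
            exports.FirstOrDefault(e => e.Metadata.Site == site && e.Metadata.RecordType == recordType)
            ?? exports.FirstOrDefault(e => e.Metadata.Site == site && e.Metadata.RecordType == "*")
            ?? exports.First(e => e.Metadata.Site == "*" && e.Metadata.RecordType == "*");
        return match.Value;
    }
}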
IOC is a great technology, but really seems to be more about making it easy to code to interfaces instead of concrete implementations. However, swapping those implementations out is more of a project shift kind of event in IOC. In MEF, you take the flexibility of interfaces and concrete implementations and make it a runtime decision between many available options.
I am apologizing for being off-topic. I simply wanted to say that there are 2 flaws that render MEF an unnecessary complication:
it is attribute-based, which doesn't do any good in helping you figure out why things work the way they do. There's no way to get to the details buried in the internals of the framework to see what exactly is going on there. There is no way to get a tracing log or hook into the resolving mechanisms and handle unresolved situations manually
it doesn't have any troubleshooting mechanism to figure out the reasons for why some parts get rejected. Despite pointing at a failing part it doesn't tell you why that part has failed.
So I am very disappointed with it. I spent too much time fighting windmills trying to bootstrap a few classes instead of working on the real problems. I'm convinced there is nothing better than the old-school dependency injection technique, where you have full control over what is created and when, and can trace anything in the VS debugger. I wish somebody who advocates MEF presented a bunch of good reasons why I would choose it over plain DI.
I agree that MEF can be a fully capable IoC framework. In fact I'm writing an application right now based on using MEF for both extensibility and IoC. I took the generic parts of it and made it into a "framework" and open sourced it as its own framework called SoapBox Core in case people want to see how it works.
In particular, take a look at how the Host works if you want to see MEF in action.

Should I still code to the interface even if I am ONLY EVER going to have ONE implementation?

I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at the boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - You Ain't Gonna Need It from Wikipedia
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer for some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
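As a small illustration of the test-driven point above (IClock, InvoiceService and FakeClock are invented names), the interface often exists only so a test can substitute a fake:

using System;

public interface IClock { DateTime UtcNow { get; } }

public class InvoiceService
{
    private readonly IClock clock;

    public InvoiceService(IClock clock) { this.clock = clock; }

    public bool IsOverdue(DateTime dueDateUtc) { return clock.UtcNow > dueDateUtc; }
}

// In the test fixture, a hand-rolled fake stands in for the single "real" implementation.
public class FakeClock : IClock
{
    public DateTime UtcNow { get; set; }
}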
One thing which no one has mentioned yet is that sometimes it is necessary in order to avoid dependency issues. You can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you thought this through, you would already have created the interface and be on your way without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
It holds some Value
It helps Persist some entity
It performs some Service
The majority of services should have interfaces. It creates a boundary, hides implementation, and you already have a second client; all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.