Avoiding the service locator with inversion of control while dynamically creating objects - MVVM

I have a WPF application based on MVVM with Caliburn.Micro and Ninject. I have a root viewmodel called ShellViewModel. It has a couple of dependencies (injected via constructor) which are configured in Caliburn's Bootstrapper. So far so good.
Somewhere down the line, there is a MenuViewModel with a couple of buttons, that in turn open other viewmodels with their own dependencies. These viewmodels are not created during creation of the root object, but I still want to inject dependencies into them from my IoC container.
I've read this question on service locator vs dependency injection and I understand the points being made.
I'm under the impression, however, that my MenuViewModel needs access to my IoC container in order to properly inject the viewmodels that are created dynamically, which is something I'm trying to avoid. Is there another way?

Yes, I believe you can do something a bit better.
Consider that if there were no on-demand requirement, you could obviously make those viewmodels dependencies of MenuViewModel, and so on up the chain until you reach the root of the object graph (the ShellViewModel), and the container would wire everything up.
You can put a "firewall" in the object graph by substituting something that can construct the dependencies of MenuViewModel for the dependencies themselves. The container is the obvious choice for this job, and IMHO from a practical standpoint this is a good enough solution even if it's not as pure.
But you can also substitute a special-purpose factory for the container; this factory would take a dependency on the container and expose read-only properties for the real dependencies of MenuViewModel. Accessing a property would have the container resolve the object and return it (accessor methods would also work instead of properties; which is more appropriate is another discussion entirely, so just use whatever you think is better).
It may look like you haven't really changed the status quo, but the situation is not the same as it would be if MenuViewModel took a direct dependency on the container. In that case you would have no idea what the real dependencies of MenuViewModel are by looking at its public interface, while now you can see that there's a dependency on something like
interface IMenuViewModelDependencyFactory
{
    RealDependencyA DependencyA { get; }
    RealDependencyB DependencyB { get; }
}
which is much more informative. And if you look at the public interface of the concrete MenuViewModelDependencyFactory things are also much better:
class MenuViewModelDependencyFactory : IMenuViewModelDependencyFactory
{
    private Container container;

    public MenuViewModelDependencyFactory(Container container) { ... }

    public RealDependencyA DependencyA { get { ... } }
    public RealDependencyB DependencyB { get { ... } }
}
There should be no confusion over what MenuViewModelDependencyFactory intends to do with the container here because it's so very highly specialized.
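For completeness, here is a minimal sketch of the consuming side; the property names (DependencyA, DependencyB) and the on-demand usage inside a button handler are my own illustration, not part of the original question:

class MenuViewModel
{
    private readonly IMenuViewModelDependencyFactory factory;

    public MenuViewModel(IMenuViewModelDependencyFactory factory)
    {
        this.factory = factory;
    }

    public void OpenA()
    {
        // Resolution happens only when the button is clicked, but
        // MenuViewModel itself never sees the container.
        var dependencyA = factory.DependencyA;
        // ...use dependencyA to create/activate the child viewmodel...
    }
}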

Related

Dealing with state in factory implementations

What pattern would you use if you have multiple factory implementations, each of which requires different state information to create new objects?
Example:
IModelParameters: contains all the inputs and outputs to a complex calculation
IModelParameterFactory: has methods for getting and saving IModelParameter objects.
The issue is that one factory implementation might be getting your parameters from a database, with some state needed for retrieval (i.e. a UserID), while another might be getting your inputs from a file, in which case you don't have a UserID, but you do need a file name.
Is there another pattern that works better in this case? I've looked at some dependency injection tools/libraries and haven't seen anything that seems to address the situation.
Have you tried putting the requirements in a class?
Every factory implementation has its own requirements, but all requirements classes derive from a base requirements class (or implement a requirements interface). This allows you to have the same interface for all factory implementations; you just have to cast to the correct requirements class in each factory implementation.
Yes, casts are ugly and error-prone, but this method provides a uniform and extensible interface for your factory.
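A minimal sketch of that idea; the requirement class names and members are hypothetical, drawn only from the UserID/file-name example in the question:

public interface IModelParameterRequirements { }

public class DatabaseRequirements : IModelParameterRequirements
{
    public int UserId { get; set; }
}

public class FileRequirements : IModelParameterRequirements
{
    public string FileName { get; set; }
}

public interface IModelParameterFactory
{
    IModelParameters Get(IModelParameterRequirements requirements);
}

public class DatabaseModelParameterFactory : IModelParameterFactory
{
    public IModelParameters Get(IModelParameterRequirements requirements)
    {
        // The cast mentioned above: each implementation downcasts to the
        // requirements type it understands.
        var dbRequirements = (DatabaseRequirements)requirements;
        // ...load the parameters for dbRequirements.UserId...
        throw new System.NotImplementedException();
    }
}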
It's hard to say without seeing some code, but you may want to look into implementing a Repository Pattern. The Repository implementation would be responsible for retrieving the data that the factory then used to build its object(s). You could inject the repository interface into your factory:
public class ModelParameterFactory : IModelParameterFactory
{
    private readonly IModelParameterRepository Repository;

    public ModelParameterFactory(IModelParameterRepository repository)
    {
        Repository = repository;
    }

    // ...interface methods use the injected repository...
}
Then you would have, say, a DatabaseModelParameterRepository and a FileModelParameterRepository. But I'm guessing you also have logic around which of those you would need to inject, so that calls for another factory:
public class ModelParameterRepositoryFactory : IModelParameterRepositoryFactory
{
    public ModelParameterRepositoryFactory(...inputs needed to determine which repository to use...)
    {
        // ...assign...
    }

    // ...determine which repository is required and return it...
}
At this point, it might make more sense to inject IModelParameterRepositoryFactory into the ModelParameterFactory, rather than inject the IModelParameterRepository.
public class ModelParameterFactory : IModelParameterFactory
{
    private readonly IModelParameterRepositoryFactory RepositoryFactory;

    public ModelParameterFactory(IModelParameterRepositoryFactory repositoryFactory)
    {
        RepositoryFactory = repositoryFactory;
    }

    // ...interface methods get the repository from the factory...
}
Whether you use a DI container or not, all logic regarding which repository to use and which factory to use are now moved into the relevant factory implementations, as opposed to the calling code or DI configuration.
While not terribly complex, this design nonetheless does give me pause to wonder whether your ModelParameterFactory and ModelParameters are too generic. You might benefit from teasing them into separate, more specific classes. The result would be a simpler and more expressive design. The above should work for you if that is not the case, however.
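To make the repository-selection step concrete, here is one possible sketch of the repository factory; the decision rule (UserID present vs. file name present) is an assumption drawn from the question, not from the answer above:

public class ModelParameterRepositoryFactory : IModelParameterRepositoryFactory
{
    // Assumed inputs; substitute whatever state actually drives the choice.
    private readonly int? userId;
    private readonly string fileName;

    public ModelParameterRepositoryFactory(int? userId, string fileName)
    {
        this.userId = userId;
        this.fileName = fileName;
    }

    public IModelParameterRepository GetRepository()
    {
        // Pick the repository based on whichever piece of state is available.
        if (userId.HasValue)
            return new DatabaseModelParameterRepository(userId.Value);

        return new FileModelParameterRepository(fileName);
    }
}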
From my point of view, state is something that you store in memory, such as a static object, a global variable, a cache, or a session. Usually in DI such state is not maintained but passed as a parameter. Example:
public IEnumerable<Records> GetRecordByUserId(string userId){ /*code*/ }
The userId is passed in instead of being maintained in the repository.
However, when you want to treat it as configuration instead of passing it in every query, I think you can inject it as a wrapper class; see my question for more info. I don't recommend this design at the repository level, but I do recommend it at the service level.
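A minimal sketch of that wrapper-class approach at the service level; the QueryContext and IRecordRepository names are hypothetical:

using System.Collections.Generic;

// Hypothetical wrapper holding the configuration-like state.
public class QueryContext
{
    public string UserId { get; set; }
}

public class RecordService
{
    private readonly IRecordRepository repository;
    private readonly QueryContext context;

    public RecordService(IRecordRepository repository, QueryContext context)
    {
        this.repository = repository;
        this.context = context;
    }

    public IEnumerable<Records> GetMyRecords()
    {
        // The service supplies the state to the repository call,
        // so the repository itself stays stateless.
        return repository.GetRecordByUserId(context.UserId);
    }
}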

OSGi services - best practice

I start loving OSGi services more and more and want to realize a lot more of my components as services. Now I'm looking for best-practice, especially for UI components.
For listener relations I use the whiteboard pattern, which IMHO is the best approach. However, if I want more than just notifications, I can think of three possible solutions.
Imagine the following scenario:
interface IDatabaseService {
EntityManager getEntityManager();
}
[1] Whiteboard Pattern - with self-setting service
I would create a new service interface:
interface IDatabaseServiceConsumer {
setDatabaseService(IDatabaseService service);
}
and create a declarative IDatabaseService component with a bindConsumer method like this
protected void bindConsumer(IDatabaseServiceConsumer consumer) {
consumer.setDatabaseService(this);
}
protected void unbindConsumer(IDatabaseServiceConsumer consumer) {
consumer.setDatabaseService(null);
}
This approach assumes that there's only one IDatabaseService.
[Update] Usage would look like this:
class MyUIClass ... {
    private IDatabaseService dbService;

    IDatabaseServiceConsumer c = new IDatabaseServiceConsumer() {
        public void setDatabaseService(IDatabaseService service) {
            dbService = service;
        }
    };

    Activator.registerService(IDatabaseServiceConsumer.class, c, null);
    ...
}
[2] Make my class a service
Imagine a class like
public class DatabaseEntryViewer extends TableViewer
Now I just add bind/unbind methods for my IDatabaseService, add a component.xml, and register my DatabaseEntryViewer. This approach assumes that there is a no-argument constructor and that I create the UI components via an OSGi service factory.
[3] Classic way: ServiceTracker
The classic way: register a static ServiceTracker in my Activator and access it. The class that uses the tracker must handle the dynamics itself.
Currently I'm favoring the first one, as this approach doesn't complicate object creation and saves the Activator from endless static ServiceTrackers.
I have to agree with @Neil Bartlett: your option 1 is backward. You are in effect using an Observer/Observable pattern.
Number 2 is not going to work, since the way UI object lifecycles are managed in RCP won't allow you to do what you want. The widget will have to be created as part of the initialization of some sort of view container (ViewPart, Dialog, ...). This view part is typically configured and managed via the Workbench/plugin mechanism. You should work with this, not against it.
Number 3 would be a simple option, not necessarily the best, but simple.
If you use Spring DM, then you can easily accomplish number 2. It provides a means to inject your service beans into your UI views, pages, etc. You use a Spring factory to create your views (as defined in your plugin.xml); the factory is driven by a Spring configuration that can inject your services into the bean.
You may also be able to combine the technique used by the SpringExtensionFactory class along with DI to accomplish the same thing, without introducing another piece of technology. I haven't tried it myself so I cannot comment on the difficulty, although it is what I would try to do to bridge the gap between RCP and OSGi if I wasn't already using Spring DM.

Autofac and Quartz.Net Integration

Does anyone have any experience integrating Autofac and Quartz.Net? If so, where is it best to control lifetime management: the IJobFactory, within the Execute of the IJob, or through event listeners?
Right now, I'm using a custom Autofac IJobFactory to create the IJob instances, but I don't have an easy way to plug into an ILifetimeScope in the IJobFactory to ensure any expensive resources injected into the IJob are cleaned up. The job factory just creates an instance of a job and returns it. Here are my current ideas (hopefully there are better ones...):
It looks like most Autofac integrations somehow wrap an ILifetimeScope around the unit of work they create. The obvious brute-force way seems to be to pass an ILifetimeScope into the IJob and have the Execute method create a child ILifetimeScope and instantiate any dependencies there. This seems a little too close to a service locator pattern, which in turn seems to go against the spirit of Autofac, but it might be the most obvious way to ensure proper handling of a scope.
I could plug into some of the Quartz events to handle the different phases of the Job execution stack, and handle lifetime management there. That would probably be a lot more work, but possibly worth it if it gets cleaner separation of concerns.
Ensure that an IJob is a simple wrapper around an IServiceComponent type, which would do all the work, and request it as Owned<T> or Func<Owned<T>>. I like how this seems to fit better with Autofac, but I don't like that it's not strictly enforceable for all implementors of IJob.
Without knowing too much about Quartz.Net and IJobs, I'll still venture a suggestion.
Consider the following Job wrapper:
public class JobWrapper<T> : IJob where T : IJob
{
    private readonly Func<Owned<T>> _jobFactory;

    public JobWrapper(Func<Owned<T>> jobFactory)
    {
        _jobFactory = jobFactory;
    }

    // Quartz.NET's IJob.Execute takes an execution context; the exact
    // signature shown here assumes the Quartz.NET 2.x synchronous API.
    void IJob.Execute(IJobExecutionContext context)
    {
        // Owned<T> carries its own lifetime scope; disposing it releases
        // everything created for this execution.
        using (var ownedJob = _jobFactory())
        {
            var theJob = ownedJob.Value;
            theJob.Execute(context);
        }
    }
}
Given the following registrations:
builder.RegisterGeneric(typeof(JobWrapper<>));
builder.RegisterType<SomeJob>();
A job factory could now resolve this wrapper:
var job = _container.Resolve<JobWrapper<SomeJob>>();
Note: a lifetime scope will be created as part of the ownedJob instance, which in this case is of type Owned<SomeJob>. Any dependencies required by SomeJob that are InstancePerLifetimeScope or InstancePerDependency will be created and destroyed along with the Owned instance.
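To illustrate the disposal behaviour, here is a minimal sketch of what SomeJob might look like; the IExpensiveResource dependency and the Quartz 2.x-style Execute signature are assumptions, not part of the original answer:

// Hypothetical job with a per-execution, disposable dependency.
public class SomeJob : IJob
{
    private readonly IExpensiveResource _resource; // assumed dependency

    public SomeJob(IExpensiveResource resource)
    {
        _resource = resource;
    }

    public void Execute(IJobExecutionContext context)
    {
        // ...do the actual work with _resource...
    }
}

// Registration: the resource lives in the Owned<SomeJob> lifetime scope,
// so it is disposed when JobWrapper disposes the owned instance.
builder.RegisterType<ExpensiveResource>()
       .As<IExpensiveResource>()
       .InstancePerLifetimeScope();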
Take a look at https://github.com/alphacloud/Autofac.Extras.Quartz. It is also available as a NuGet package: https://www.nuget.org/packages/Autofac.Extras.Quartz/
I know it's a bit late :)

Library assembly IoC setup

I am working on a project that has two main parts: a class library assembly and the main application. Both use Castle Windsor for IoC, and both manually set up their list of components in code (to aid refactoring and avoid the need for a config file). Currently the main application has code like this:
public static void Main()
{
    // Perform library IoC setup
    LibraryComponent.Init();

    // Perform application IoC setup
    IoC.Register<IXyz, Abc>("abc");
    // etc, etc, ...

    // Start the application code ...
}
However, the call to initialise the library doesn't seem like a good solution. What is the best way to set up a class library that uses an IoC container to decouple its internal components?
Edit:
Lusid proposed using a static method on each public component in the library that would in turn make the call to initialise. One possible way to make this a bit nicer would be to use something like PostSharp to do this in an aspect-oriented way. However, I was hoping for something a bit more elegant ;-)
Lusid also proposed using the AppDomain.AssemblyLoad event to perform custom steps at load time; however, I am really after a way to avoid requiring any setup code in the client assembly.
Thanks!
I'm not sure if I'm understanding exactly the problem you are trying to solve, but my first guess is that you are looking for a way to decouple the need to call the Init method from your main application.
One method I've used in the past is a static constructor on a static class in the class library:
public static class LibraryComponent {
    static LibraryComponent() {
        Init();
    }
}
If you have multiple class libraries, and would like a quick and dirty way of evaluating all of them as they are loaded, here's a (kinda hairy) way:
[STAThread]
static void Main()
{
    AppDomain.CurrentDomain.AssemblyLoad += new AssemblyLoadEventHandler(CurrentDomain_AssemblyLoad);
}

static void CurrentDomain_AssemblyLoad(object sender, AssemblyLoadEventArgs args)
{
    IEnumerable<Type> types = args.LoadedAssembly.GetTypes()
        .Where(t => typeof(IMyModuleInterface).IsAssignableFrom(t));

    foreach (Type t in types)
    {
        doSomethingWithType(t);
    }
}
The Where clause could be anything you want, of course. The code above finds any type implementing IMyModuleInterface in each assembly that gets loaded into the current AppDomain, and then I can do something with it, whether that's registering dependencies, maintaining an internal list, or whatever.
Might not be exactly what you are looking for, but hopefully it helps in some way.
You could have a registration module. Basically, the LibraryComponent.Init() function takes an IRegistrar to wire everything up.
The IRegistrar could have a function Register(Type interface, Type implementation). The implementor would map that function back to their IoC container.
The downside is that you can't rely on anything specific to the container you're using.
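A minimal sketch of that idea; the IRegistrar shape and the Windsor adapter using Component.For are my own illustration of the suggestion, not code from the answer:

public interface IRegistrar
{
    void Register(Type service, Type implementation);
}

// Library side: the library only ever sees IRegistrar.
public static class LibraryComponent
{
    public static void Init(IRegistrar registrar)
    {
        registrar.Register(typeof(IXyz), typeof(Abc));
        // ...register the rest of the library's internal components...
    }
}

// Application side: a thin adapter mapping IRegistrar onto Windsor.
public class WindsorRegistrar : IRegistrar
{
    private readonly IWindsorContainer container;

    public WindsorRegistrar(IWindsorContainer container)
    {
        this.container = container;
    }

    public void Register(Type service, Type implementation)
    {
        container.Register(Component.For(service).ImplementedBy(implementation));
    }
}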
Castle Windsor actually has a concept called facilities that are basically just ways of wrapping standardised pieces of configuration. In this model, you would simply add the two facilities to the container and they would do the work.
Of course, this wouldn't really be better than calling a library routine to do the work unless you configured the facilities in a configuration file (consider Binsor). If you are really allergic to configuration files, your current solution is probably the best.

Is there any reason to not use my IoC as a general Settings Repository?

Suppose that the ApplicationSettings class is a general repository of settings that apply to my application such as TimeoutPeriod, DefaultUnitOfMeasure, HistoryWindowSize, etc... And let's say MyClass makes use of one of those settings - DefaultUnitOfMeasure.
My reading of proper use of Inversion of Control Containers - and please correct me if I'm wrong on this - is that you define the dependencies of a class in its constructor:
public class MyClass {
    public MyClass(IDataSource ds, UnitOfMeasure default_uom) {...}
}
and then instantiate your class with something like
var mc = IoC.Container.Resolve<MyClass>();
where IDataSource has been assigned a concrete implementation and default_uom has been wired up to resolve from the ApplicationSettings.DefaultUnitOfMeasure property. I've got to wonder, however, if all these hoops are really necessary to jump through. What trouble am I setting myself up for should I do
public class MyClass {
    public MyClass(IDataSource ds) {
        UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
    }
}
Yes, many of my classes end up with a dependency on IoC.Container but that is a dependency that most of my classes will have anyways. It seems like I maybe should make full use of it as long as the classes are coupled. Please Agile gurus, tell me where I'm wrong.
IoC.Container.Resolve<UnitOfMeasure>("default_uom");
I see this as a classic anti-pattern, where you are using the IoC container as a service locator. The key issues that result are:
Your application no longer fails fast if your container is misconfigured (you'll only know about it the first time it tries to resolve that particular service in code, which might not occur except under a specific set of logic/circumstances).
Harder to test: not impossible, of course, but you either have to create a real (and semi-configured) instance of the Windsor container for your tests or swap the singleton out for a mock of IWindsorContainer; this adds a lot of friction to testing compared to just being able to pass mock/stub services directly into your class under test via constructors/properties.
Harder to maintain this kind of application (configuration isn't centralized in one location).
Violates a number of other software development principles (DRY, SoC, etc.).
The concerning part of your original statement is the implication that most of your classes will have a dependency on your IoC singleton. If they're getting all their services injected via constructors/properties, then tight coupling to the IoC container should be the exception rather than the rule. In general, the only time I take a dependency on the container is when I'm doing something tricky, i.e. trying to avoid circular dependency problems or wanting to create components at run-time for some reason; even then I can often avoid taking a dependency on anything more than a generic IServiceProvider interface, allowing me to swap in a home-baked IoC or service locator implementation if I need to reuse the components in an environment outside of the original project.
I usually don't have many classes depending on my IoC container. I try to wrap the IoC stuff in a facade object that I inject into other classes, and most of my IoC injection is done only in the higher layers of my application anyway.
If you do things your way, you can't test MyClass without creating an IoC configuration for your tests. This will make your tests harder to maintain.
Another problem is that you're going to have power users of your software who want to change the configuration by editing your IoC config files. This is something I'd want to avoid. You could split your configuration into a normal config file and the IoC-specific stuff, but then you could just as well use the normal .NET config functionality to read the configuration.
Yes, many of my classes end up with a dependency on IoC.Container but that is a dependency that most of my classes will have anyways.
I think this is the crux of the issue. If in fact most of your classes are coupled to the IoC container itself, chances are you need to rethink your design.
Generally speaking, your app should refer to the container class directly only once, during bootstrapping. After you have that first hook into the container, the rest of the object graph should be entirely managed by the container, and all of those objects should be oblivious to the fact that they were created by an IoC container.
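As a rough sketch of that single bootstrapping call (the Windsor registration style is just one option, and SqlDataSource plus the static ApplicationSettings access are assumptions for illustration):

public static class Program
{
    public static void Main()
    {
        // The composition root: the only place that touches the container.
        var container = new WindsorContainer();

        container.Register(
            Component.For<IDataSource>().ImplementedBy<SqlDataSource>(), // assumed implementation
            Component.For<UnitOfMeasure>()
                     .UsingFactoryMethod(() => ApplicationSettings.DefaultUnitOfMeasure), // assumed static settings access
            Component.For<MyClass>());

        // Single resolve at the root; everything below is wired by the container.
        var mc = container.Resolve<MyClass>();
        // ...run the application using mc...
    }
}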
To comment on your specific example:
public class MyClass {
    public MyClass(IDataSource ds) {
        UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
    }
}
This makes it harder to reuse your class. More specifically, it makes it harder to instantiate your class outside of the narrow usage pattern you are confining it to. One of the most common places this will manifest itself is when trying to test your class. It's much easier to test that class if the UnitOfMeasure can be passed to the constructor directly.
Also, your choice of name for the UOM instance ("default_uom") implies that the value could be overridden, depending on the usage of the class. In that case, you would not want to "hard-code" the value in the constructor like that.
Using the constructor injection pattern does not make your class dependent on the IoC container; quite the opposite: it gives clients the option to use the IoC container or not.
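For instance, a small hedged sketch of a unit test against the constructor-injected version (the stub data source and the UnitOfMeasure constructor are hypothetical):

[Test]
public void Uses_the_supplied_default_unit_of_measure()
{
    // No container needed: the dependencies are passed in directly.
    IDataSource stubDataSource = new StubDataSource();   // hypothetical stub
    var defaultUom = new UnitOfMeasure("kg");             // assumed constructor

    var mc = new MyClass(stubDataSource, defaultUom);

    // ...assert on mc's behaviour...
}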