I feel like using Dependency Injection is changing the way I write my object-oriented code. For instance, below is what I would do without DI:
Interface: DataAdapter
SqliteDataAdapter implements DataAdapter
XMLDataAdapter implements DataAdapter
OracleDataAdapter implements DataAdapter
// Initialization
DataAdapter adapter = new SqliteDataAdapter();
DataAdapter adapter = new XMLDataAdapter();
DataAdapter adapter = new OracleDataAdapter();
but using DI my code structure would be:
Interface: DataAdapter
SqliteDataAdapter implements ISqliteDataAdapter, DataAdapter
XMLDataAdapter implements IXMLDataAdapter, DataAdapter
OracleDataAdapter implements IOracleDataAdapter, DataAdapter
// Initialization
ISqliteDataAdapter adapter = new SqliteDataAdapter();
IXMLDataAdapter adapter = new XMLDataAdapter();
IOracleDataAdapter adapter = new OracleDataAdapter();
The reason for this change is that in my module I can bind 1 interface to 1 class.
Is this a bad practice? If yes what is the correct solution?
Doesn't DI change the whole purpose of using interfaces?
EDIT:
The following is my binding for DI container
bind(ISqliteDataAdapter.class).to(SqliteDataAdapter.class);
bind(IXMLDataAdapter.class).to(XMLDataAdapter.class);
bind(IOracleDataAdapter.class).to(OracleDataAdapter.class);
If I do as suggested, how would I be able to use multiple adapters? What if I need to use both XMLDataAdapter and SqliteDataAdapter in my application?
bind(DataAdapter.class).to(SqliteDataAdapter.class);
EDIT:
Here is the current call to get an instance:
@Inject protected ISqliteDataAdapter dataAdapter;
Here is how I should do it with having 1 interface only:
@Inject protected DataAdapter dataAdapter;
// In this case I don't have a choice of which type of data adapter it's going to create.
// It's already defined in my module file and it's pointing to one of the 3 DataAdapters.
So what I'm trying to understand is: how can I structure my code in a way that I have control over the type of object that gets injected, without having an interface for every type of DataAdapter?
I would say that DI is a natural consequence of having interfaces. The reason we have interfaces is so that we can write code that doesn't depend on a particular class but can instead work with a multitude of classes without any changes. Even if we expect there to be only one class which does what we need, this may change in the future. By programming to an interface, we can even account for changes we can't imagine.
DI just uses the above notions in a particular way, to maximize the testability and adaptability of code. So I would say that DI is a really good practice.
That said, I think there is one warning to be issued for anyone involved with these kinds of systems. There's the danger that we write full-fledged classes like your SqliteDataAdapter, and then push an "extract interface" button to produce the ISqliteDataAdapter interface. This practice is very bad, particularly if it happens a lot. Instead, ISqliteDataAdapter should usually be designed first, and then a sensible implementation can be written.
Dependency injection is about controlling what is instantiated and placing that logic in a single place. The code you write should not change the base types (interfaces) that you use.
In both cases you should have DataAdapter as the type of the object you use. It's you who decides the type of the instance that you get from the dependency injection container, not the container.
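To make that concrete: most containers let you keep the single DataAdapter interface and pick the implementation at the binding or injection site. Here is a minimal Guice-flavoured sketch, assuming Guice's binding annotations (ReportService and the binding names are made up for illustration):
import com.google.inject.AbstractModule;
import com.google.inject.Inject;
import com.google.inject.name.Named;
import com.google.inject.name.Names;

public class AdapterModule extends AbstractModule {
    @Override
    protected void configure() {
        // One interface, several implementations, distinguished by name.
        bind(DataAdapter.class).annotatedWith(Names.named("sqlite")).to(SqliteDataAdapter.class);
        bind(DataAdapter.class).annotatedWith(Names.named("xml")).to(XMLDataAdapter.class);
        bind(DataAdapter.class).annotatedWith(Names.named("oracle")).to(OracleDataAdapter.class);
    }
}

class ReportService {
    // Each injection site chooses which implementation it wants,
    // while the declared type stays DataAdapter everywhere.
    @Inject @Named("sqlite") DataAdapter sqliteAdapter;
    @Inject @Named("xml") DataAdapter xmlAdapter;
}
This also answers the "what if I need both XMLDataAdapter and SqliteDataAdapter" edit: both can be injected side by side without inventing ISqliteDataAdapter-style marker interfaces.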
I'd like to know what the best practices are for Eclipse 4 dependency injection.
After reading about this subject on the internet, I came up with the following strategy.
Requirements
Share the data model of the application (e.g. company, employee, customer, ...) so that framework objects (view parts, handlers, listeners, ...) can access it with as little coupling as possible.
Proposed strategy
I've used the lifeCycleURI plugin property to register a handler which is triggered at application startup. That handler creates an "empty" top-level data model container object and places it into the EclipseContext. It is also discarded when the application stops.
All Eclipse framework classes (view parts, handlers) use classic DI to get that data model object injected.
Button listeners created with the class constructor can't have the data model object injected into them. So I thought they could be created with ContextInjectionFactory.make() to have injection performed. This would couple the class which creates the listener to CIF, but the great advantage is that injection works out of the box.
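A minimal sketch of that approach (ModelAwareListener and DataModel are placeholder names; the API calls are the standard ones from org.eclipse.e4.core.contexts):
import javax.inject.Inject;
import org.eclipse.e4.core.contexts.ContextInjectionFactory;
import org.eclipse.e4.core.contexts.IEclipseContext;

class DataModel { /* hypothetical top-level data model container */ }

class ModelAwareListener {
    @Inject DataModel model; // populated by E4 DI, not by the constructor

    void widgetSelected() {
        // ... react to the click using the injected model ...
    }
}

class SomePart {
    // Coupled to CIF, but the listener gets its fields injected
    // exactly as framework-created objects do.
    ModelAwareListener createListener(IEclipseContext context) {
        return ContextInjectionFactory.make(ModelAwareListener.class, context);
    }
}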
This is the best solution I've found yet to leverage E4 DI with as little coupling as possible. The weak spot is, in my opinion, this coupling with CIF. My question would be whether any strategy exists to remove even this coupling, or whether alternate solutions exist for the same requirements.
You can create a service class in your project, let's say ModelService.
Add the @Creatable and @Singleton annotations to that class:
@Creatable
@Singleton
public class ModelService {
}
And let DI do its job using the following syntax in your parts/handlers/etc.:
@Inject ModelService modelService;
Then you can implement methods in your service like createBaseModel(), updateModel() and so on.
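A slightly fuller sketch of what that service might look like (DataModel and the method bodies are illustrative only):
import javax.inject.Singleton;
import org.eclipse.e4.core.di.annotations.Creatable;

class DataModel { /* hypothetical top-level data model container */ }

@Creatable
@Singleton
public class ModelService {
    private DataModel model; // the shared state all parts/handlers see

    public DataModel createBaseModel() {
        model = new DataModel();
        return model;
    }

    public DataModel getModel() {
        return model;
    }

    public void updateModel(DataModel updated) {
        model = updated;
    }
}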
This creates a low-coupling solution: you can also implement ModelService in a separate plugin and define it as an OSGi service.
For that solution, you can read this Lars Vogel article.
Are static classes pretty much always frowned upon, or is there ever a good time to use them?
For example, would it make sense to implement something ubiquitous in your application, like security, in a static class? You could still use property injection on the static class to change out the implementation, and if you were to use something like MEF to inject the implementation, then I would think it wouldn't get in the way of your tests.
I use static classes mainly for stateless helper classes and when I want to create extension methods. I try to avoid static classes that have state because, as you mention, it can get in the way of the tests.
Let's say you decide to add state to a static class. To test the methods of this class that depend on its state you will have to find a way to change this state during the tests. This means that you have to:
Prepare the state before each test.
Clear the state after each test.
This means that the class will need to offer a way (by means of internal methods or internal property setters) to alter its state, which can be dangerous. If you want to create immutable classes or classes that completely encapsulate their implementation details, then you will not be able to test them easily (if at all), and your tests might break more often from changes to the implementation. Even with MEF it will not be easy to do this.
Of course static classes sometimes offer attractive solutions for problems like logging and, as mentioned in your question, security. In these cases I would go for a static class that delegates all calls to a private readonly field. This way the class of this field can be unit tested normally. You can then test the static class in your integration tests.
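To illustrate the delegation idea (sketched here in Java for consistency with the earlier examples; all names are hypothetical): the static class only forwards calls, and the instance behind it is what gets unit tested.
interface SecurityService {
    boolean isAuthorized(String user, String action);
}

final class DefaultSecurityService implements SecurityService {
    public boolean isAuthorized(String user, String action) {
        return true; // placeholder policy for the sketch
    }
}

final class Security {
    // The delegate holding the real logic; unit test DefaultSecurityService
    // directly, and cover this static facade in integration tests.
    private static final SecurityService service = new DefaultSecurityService();

    private Security() {}

    static boolean isAuthorized(String user, String action) {
        return service.isAuthorized(user, action);
    }
}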
By the way, have a look at .NET's design guidelines for static classes. It doesn't include anything relevant to your question, but it does include valuable advice.
Recently I was looking at some source code provided by community leaders in their open source implementations. One of these projects made use of IoC. Here is sample hypothetical code:
public class Class1
{
    private readonly ISomeInterface _someObject;

    public Class1(ISomeInterface someObject)
    {
        _someObject = someObject;
    }

    public void SomeMethod()
    {
        // some more code and then
        var someOtherObject = new SomeOtherObject();
    }
}
My question is not about what IoC containers are for and how to use them in technical terms, but rather about the guidelines regarding object creation. All that effort, and then this line uses the "new" operator; I don't quite understand. Which objects should be created by the IoC container, and which ones is it permissible to create via the new operator?
As a general rule of thumb, if something is providing a service which may want to be replaced either for testing or to use a different implementation (e.g. different authentication services) then inject the dependency. If it's something like a collection, or a simple data object which isn't providing behaviour which you'd ever want to vary, then it's fine to instantiate it within the class.
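For instance (a small Java illustration of that rule of thumb; the types are invented):
import java.util.ArrayList;
import java.util.List;

interface AuthenticationService {
    boolean authenticate(String user, String password);
}

class RegistrationHandler {
    private final AuthenticationService auth;                   // behaviour you may swap or mock: inject it
    private final List<String> auditTrail = new ArrayList<>();  // plain data holder: fine to instantiate

    RegistrationHandler(AuthenticationService auth) {
        this.auth = auth;
    }
}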
Usually you use IoC because:
A dependency that can change in the future
To code against interfaces, not concrete types
To enable mocking these dependencies in Unit Testing scenarios
You could avoid using IoC in the case where you don't control the dependency. For example, a StringBuilder is always going to be a StringBuilder and have a defined behavior, and you usually don't really need to mock that; while you might want to mock an HttpRequestBase, because it's an external dependency on having an internet connection, for example, which is a problem during unit tests (longer execution times, and it's something out of your control).
The same happens for database access repositories and so on.
I have a question, and I'm going to tag this subjective since that's what I think it evolves into: more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question, but you need to know the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable requirements on interfaces and concrete types to be future-proof, whose responsibility are objects injected as part of constructor calls?
Edit: I accepted the answer by @Chris Ballard since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
where T: class
{
T Instance { get; }
Boolean Success { get; }
String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T>, is IDisposable and should always be disposed of. It's not up to the programmer who resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part: whether the service instance should be disposed or not is knowledge baked into the object implementing IService<T>, so if it's a factory-scoped service (i.e. each call to Resolve ends up with a new service instance), then the service instance will be disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want minimum 2 service instances, maximum 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects, or construct a new one. And then, when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
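For what it's worth, the shape of that handle can be sketched in Java, with AutoCloseable standing in for IDisposable (this is an illustration of the pattern described above, not the actual container code):
// The handle, not the caller, knows whether closing it should also
// release the underlying service instance.
final class ServiceHandle<T> implements AutoCloseable {
    private final T instance;
    private final Runnable releaseAction; // no-op for singletons; dispose or return-to-pool otherwise

    ServiceHandle(T instance, Runnable releaseAction) {
        this.instance = instance;
        this.releaseAction = releaseAction;
    }

    T instance() {
        return instance;
    }

    @Override
    public void close() {
        releaseAction.run();
    }
}
Calling code then always has the same shape, e.g. try (var handle = ...) { handle.instance().doSomething(); }, regardless of the scope of the service behind it.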
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
But let's assume that we have classes that provide data and that, right now, don't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement runtime checking that verifies that if a concrete type being registered implements IDisposable, the interface it is exposed through also implements IDisposable. This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete types can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check if the object implements IDisposable and "do the right thing"?
Are there others?
Also, what about injecting objects into constructors? Our container, and some of the other containers we've looked into, is capable of injecting a fresh object into a parameter of a constructor of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and we enforce IDispose-"ability" on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.
One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, e.g.:
IBinaryDataProviderFactory factory =
ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();
using(IBinaryDataProvider provider = factory.CreateProvider())
{
...
}
Downside is added complexity, but it does mean that the container never creates anything which the developer is supposed to dispose of - it is always explicit code which does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
(Disclaimer: I'm answering this based on java stuff. Although I program C# I haven't proxied anything in C# but I know it's possible. Sorry about the java terminology)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple.
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how that could be done in a nice manner.
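Since the answer is framed in Java terms, here is a rough sketch with java.lang.reflect.Proxy, using AutoCloseable as the Java stand-in for IDisposable (the wrapper class and its names are hypothetical):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

final class DisposableWrapping {
    // Returns a proxy that implements both the service interface and
    // AutoCloseable; close() is forwarded only if the real object
    // actually supports it.
    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> serviceInterface) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("close".equals(method.getName()) && method.getParameterCount() == 0) {
                if (target instanceof AutoCloseable) {
                    ((AutoCloseable) target).close();
                }
                return null;
            }
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface, AutoCloseable.class },
                handler);
    }
}
The client can then unconditionally cast the resolved object to AutoCloseable and use try-with-resources, which is one way to communicate the contract.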
You actually came up with a very dirty solution: your IService contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. The lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is totally the responsibility of the calling code.
Suppose that the ApplicationSettings class is a general repository of settings that apply to my application such as TimeoutPeriod, DefaultUnitOfMeasure, HistoryWindowSize, etc... And let's say MyClass makes use of one of those settings - DefaultUnitOfMeasure.
My reading of proper use of Inversion of Control Containers - and please correct me if I'm wrong on this - is that you define the dependencies of a class in its constructor:
public class MyClass {
public MyClass(IDataSource ds, UnitOfMeasure default_uom) {...}
}
and then instantiate your class with something like
var mc = IoC.Container.Resolve<MyClass>();
Where IDataSource has been assigned a concrete implementation and default_uom has been wired up to instantiate from the ApplicationSettings.DefaultUnitOfMeasure property. I've got to wonder, however, if all these hoops are really necessary to jump through. What trouble am I setting myself up for should I do:
public class MyClass {
public MyClass(IDataSource ds) {
UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
}
}
Yes, many of my classes end up with a dependency on IoC.Container, but that is a dependency that most of my classes will have anyway. It seems like maybe I should make full use of it as long as the classes are coupled. Please, Agile gurus, tell me where I'm wrong.
IoC.Container.Resolve<UnitOfMeasure>("default_uom");
I see this as a classic anti-pattern, where you are using the IoC container as a service locator; the key issues that result are:
Your application no longer fails-fast if your container is misconfigured (you'll only know about it the first time it tries to resolve that particular service in code, which might not occur except for a specific set of logic/circumstances).
Harder to test - not impossible of course, but you either have to create a real (and semi-configured) instance of the Windsor container for your tests or inject the singleton with a mock of IWindsorContainer - this adds a lot of friction to testing, compared to just being able to pass the mock/stub services directly into your class under test via constructors/properties.
Harder to maintain this kind of application (configuration isn't centralized in one location)
Violates a number of other software development principles (DRY, SoC, etc.)
The concerning part of your original statement is the implication that most of your classes will have a dependency on your IoC singleton. If they're getting all their services injected via constructors/properties, then tight coupling to IoC should be the exception to the rule. In general, the only time I take a dependency on the container is when I'm doing something tricky, i.e. trying to avoid circular dependency problems, or wanting to create components at run-time for some reason; and even then I can often avoid taking a dependency on anything more than a generic IServiceProvider interface, allowing me to swap in a home-baked IoC or service locator implementation if I need to reuse the components in an environment outside of the original project.
I usually don't have many classes depending on my IoC container. I try to wrap the IoC stuff in a facade object that I inject into other classes; most of my IoC injection is done only in the higher layers of my application, though.
If you do things your way, you can't test MyClass without creating an IoC configuration for your tests. This will make your tests harder to maintain.
Another problem is that you're going to have power users of your software who want to change the configuration by editing your IoC config files. This is something I'd want to avoid. You could split your IoC config into a normal config file and the IoC-specific stuff, but then you could just as well use the normal .NET config functionality to read the configuration.
Yes, many of my classes end up with a dependency on IoC.Container, but that is a dependency that most of my classes will have anyway.
I think this is the crux of the issue. If in fact most of your classes are coupled to the IoC container itself chances are you need to rethink your design.
Generally speaking, your app should only refer to the container class directly once, during bootstrapping. After you have that first hook into the container, the rest of the object graph should be entirely managed by the container, and all of those objects should be oblivious to the fact that they were created by an IoC container.
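In Guice-flavoured Java, for example, that single hook might look like this (AppModule and Application are placeholders for your own module and entry object):
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

interface Application { void run(); }

class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        // all bindings live here, in one place
    }
}

public final class Main {
    public static void main(String[] args) {
        // The composition root: the only code that touches the container.
        Injector injector = Guice.createInjector(new AppModule());
        Application app = injector.getInstance(Application.class);
        app.run(); // everything from here on is container-oblivious
    }
}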
To comment on your specific example:
public class MyClass {
public MyClass(IDataSource ds) {
UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
}
}
This makes it harder to re-use your class. More specifically it makes it harder to instantiate your class outside of the narrow usage pattern you are confining it to. One of the most common places this will manifest itself is when trying to test your class. It's much easier to test that class if the UnitOfMeasure can be passed to the constructor directly.
Also, your choice of name for the UOM instance ("default_uom") implies that the value could be overridden, depending on the usage of the class. In that case, you would not want to "hard-code" the value in the constructor like that.
Using the constructor injection pattern does not make your class dependent on the IoC container; just the opposite: it gives clients the option to use an IoC container or not.