ServiceContainer, IoC, and disposable objects - inversion-of-control

I have a question, and I'm going to tag this subjective since that's what I think it evolves into, more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question but you need to know the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable-requirements on interfaces and concrete types to be future-proof, whose responsibility is objects injected as part of constructor calls?
Edit: I accepted the answer by @Chris Ballard since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
    where T : class
{
    T Instance { get; }
    Boolean Success { get; }
    String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T>, is IDisposable and should always be disposed of. It's not up to the programmer who resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part: whether the service instance should be disposed is knowledge baked into the object implementing IService<T>. If it's a factory-scoped service (i.e. each call to Resolve yields a new service instance), then the service instance is disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want a minimum of 2 service instances, a maximum of 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects or construct a new one. And when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
    service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
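For illustration, here is a minimal sketch of what a factory-scoped wrapper implementing IService<T> might look like (the class name and constructor are hypothetical, not our actual implementation; a pooled variant would return the instance to its pool in Dispose instead of disposing it):
public sealed class FactoryScopedService<T> : IService<T> where T : class
{
    private readonly T _instance;

    // Hypothetical constructor: the container passes in the instance it created.
    public FactoryScopedService(T instance, bool success, string failureMessage)
    {
        _instance = instance;
        Success = success;
        FailureMessage = failureMessage;
    }

    public T Instance { get { return _instance; } }
    public Boolean Success { get; private set; }
    public String FailureMessage { get; private set; }

    public void Dispose()
    {
        // The disposal knowledge lives here, not in the calling code:
        // factory-scoped means we own the instance, so we dispose it if needed.
        IDisposable disposable = _instance as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}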
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
    ...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
    ...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
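For concreteness, such a provider might look roughly like this (a sketch only; the path-based constructor and ReadAll method are made up for the example):
using System;
using System.IO;

public class BinaryDataProvider : IDisposable
{
    private readonly Stream _stream;

    public BinaryDataProvider(string path)
    {
        // The provider owns a limited resource, so it must be disposed.
        _stream = new FileStream(path, FileMode.Open, FileAccess.Read);
    }

    public byte[] ReadAll()
    {
        using (MemoryStream buffer = new MemoryStream())
        {
            _stream.CopyTo(buffer);
            return buffer.ToArray();
        }
    }

    public void Dispose()
    {
        _stream.Dispose();
    }
}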
But let's assume that we have classes that provide data and that, right now, don't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
    ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement a runtime check: if a concrete type being registered implements IDisposable, require that the interface it is exposed through also implements IDisposable (a sketch of such a check follows below). This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete types can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check whether the object implements IDisposable and "do the right thing"?
Are there others?
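For what it's worth, the registration-time check from the first option could be as simple as this (a sketch against a hypothetical ServiceRegistry.Register method, not any particular container's API):
using System;

public class ServiceRegistry
{
    public void Register(Type serviceInterface, Type concreteType)
    {
        // Option 1 as a guard clause: a disposable concrete type may only be
        // exposed through an interface that is itself disposable.
        if (typeof(IDisposable).IsAssignableFrom(concreteType) &&
            !typeof(IDisposable).IsAssignableFrom(serviceInterface))
        {
            throw new InvalidOperationException(
                concreteType.Name + " is IDisposable, but " + serviceInterface.Name +
                " is not; callers would have no way of knowing they must dispose it.");
        }

        // ... actual registration would go here ...
    }
}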
Also, what about injecting objects into constructors? Our container, like some of the other containers we've looked into, is capable of injecting a fresh object into a constructor parameter of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and we enforce IDisposable-ness on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.

One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, e.g.
IBinaryDataProviderFactory factory =
    ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();
using (IBinaryDataProvider provider = factory.CreateProvider())
{
    ...
}
Downside is added complexity, but it does mean that the container never creates anything which the developer is supposed to dispose of - it is always explicit code which does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
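A sketch of what that factory might look like, reusing the names from the example above (the concrete implementation shown here is hypothetical):
public interface IBinaryDataProviderFactory
{
    IBinaryDataProvider CreateProvider();
}

// The factory itself holds no limited resources, so the container can create
// and cache it freely; only the providers it hands out need disposing, and
// that is done explicitly by the calling code.
public class BinaryDataProviderFactory : IBinaryDataProviderFactory
{
    public IBinaryDataProvider CreateProvider()
    {
        return new BinaryDataProvider();
    }
}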

(Disclaimer: I'm answering this based on Java experience. Although I program in C#, I haven't proxied anything in C#, but I know it's possible. Sorry about the Java terminology.)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple.
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how that would be done in a nice manner.
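To make the idea concrete in C#, one way this could be done is with a library such as Castle DynamicProxy (a rough sketch; the container integration and the "communicate to the developer" part are left out):
using System;
using Castle.DynamicProxy;

// Interceptor: handles Dispose() itself (disposing the real object only if it
// happens to be IDisposable) and forwards every other call to the target.
public class DisposableAwareInterceptor : IInterceptor
{
    private readonly object _target;

    public DisposableAwareInterceptor(object target)
    {
        _target = target;
    }

    public void Intercept(IInvocation invocation)
    {
        if (invocation.Method.DeclaringType == typeof(IDisposable))
        {
            IDisposable disposable = _target as IDisposable;
            if (disposable != null)
            {
                disposable.Dispose();
            }
            return; // never Proceed: the IDisposable part may have no target behind it
        }

        invocation.Proceed();
    }
}

public static class DisposableProxyFactory
{
    private static readonly ProxyGenerator Generator = new ProxyGenerator();

    // Returns an object that implements both TService and IDisposable,
    // regardless of whether the concrete type behind it is disposable.
    public static TService Wrap<TService>(TService target) where TService : class
    {
        return (TService)Generator.CreateInterfaceProxyWithTarget(
            typeof(TService),
            new[] { typeof(IDisposable) },
            target,
            new DisposableAwareInterceptor(target));
    }
}
The calling code could then always do using ((IDisposable)service) { ... }, whether or not the concrete type needs disposing.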

You actually came up with a very dirty solution: your IService<T> contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. Lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is totally the responsibility of the calling code.
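A container that works this way might track its "singleton" instances roughly like this (a simplified sketch, not any particular container's implementation):
using System;
using System.Collections.Generic;

public class SimpleContainer : IDisposable
{
    private readonly List<object> _singletons = new List<object>();

    // Container-managed lifetime: remember the instance so we can clean it up.
    public T RegisterSingleton<T>(T instance)
    {
        _singletons.Add(instance);
        return instance;
    }

    // On shutdown, dispose the singletons that happen to be IDisposable,
    // in reverse creation order. Prototypes are never tracked here; their
    // lifetime belongs to the calling code.
    public void Dispose()
    {
        for (int i = _singletons.Count - 1; i >= 0; i--)
        {
            IDisposable disposable = _singletons[i] as IDisposable;
            if (disposable != null)
            {
                disposable.Dispose();
            }
        }
        _singletons.Clear();
    }
}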

Related

Scala: Making singleton to process a client request

Is it good practice to use a singleton object to process client requests?
Since the object instance is a singleton, if it is in the middle of processing a client request and another request arrives, the same instance is invoked with the new client's data. Won't that make things messy?
When using singleton objects, we must ensure that everything inside them, as well as everything they call, is thread-safe. For example, javax.crypto.Cipher does not seem to be thread-safe, so it should probably not be called from a singleton. Consider how Guice uses @Singleton to state the thread-safety intent:
@Singleton
public class InMemoryTransactionLog implements TransactionLog {
    /* everything here should be threadsafe! */
}
Also consider the example of the Play Framework, which, starting with version 2.4, began moving away from singleton controllers and toward class-based controllers.
It depends on whether the object holds any mutable data or not.
If the object is just a holder for pure functions and immutable state, it does not matter how many threads are using it at the same time because they can't affect each other via shared state.
If the object has mutable state then things can definitely go wrong if you access it from multiple threads without some kind of locking, either inside the object or externally.
So it is good practice as long as there is no mutable state. It is a good way of collecting related methods under the same namespace, or for creating a global function (by defining an apply method).
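The question is about Scala objects, but the distinction is language-agnostic; in C# terms (the language used elsewhere in this thread, with hypothetical types) it comes down to this:
// Safe to share across threads: no mutable state, only pure computation.
public sealed class PriceCalculator
{
    public decimal ApplyVat(decimal net, decimal rate)
    {
        return net * (1 + rate);
    }
}

// Not safe to share without synchronization: every caller touches the same
// mutable counter, so concurrent access must be serialized with a lock.
public sealed class RequestCounter
{
    private readonly object _sync = new object();
    private int _count;

    public int Increment()
    {
        lock (_sync)
        {
            return ++_count;
        }
    }
}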

IOC vs New guidelines

Recently I was looking at some source code provided by community leaders in their open source implementations. One of these projects made use of IoC. Here is sample hypothetical code:
public class Class1
{
    private ISomeInterface _someObject;

    public Class1(ISomeInterface someObject)
    {
        _someObject = someObject;
    }

    // some more code, and then:
    public void DoWork()
    {
        var someOtherObject = new SomeOtherObject();
    }
}
My question is not about what IoC containers are for or how to use them in technical terms, but rather about the guidelines regarding object creation. All that effort, and then this line uses the new operator. I don't quite understand. Which objects should be created by the IoC container, and which ones is it permissible to create via the new operator?
As a general rule of thumb, if something is providing a service which may want to be replaced either for testing or to use a different implementation (e.g. different authentication services) then inject the dependency. If it's something like a collection, or a simple data object which isn't providing behaviour which you'd ever want to vary, then it's fine to instantiate it within the class.
Usually you use IoC because:
A dependency that can change in the future
To code against interfaces, not concrete types
To enable mocking these dependencies in Unit Testing scenarios
You could avoid using IoC in cases where you don't control the dependency. For example, a StringBuilder is always going to be a StringBuilder and has well-defined behavior, and you usually don't need to mock that; whereas you might want to mock an HttpRequestBase, because it's an external dependency (it relies on an internet connection, for example), which is a problem during unit tests (longer execution times, and it's something out of your control).
The same happens for database access repositories and so on.
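A small sketch of where that line is usually drawn (the repository interface and report types are hypothetical, for illustration only):
using System.Collections.Generic;
using System.Text;

public interface IReportRepository
{
    IEnumerable<string> GetLines(int reportId);
}

public class ReportGenerator
{
    // Injected: an external, swappable, mockable dependency.
    private readonly IReportRepository _repository;

    public ReportGenerator(IReportRepository repository)
    {
        _repository = repository;
    }

    public string Generate(int reportId)
    {
        // Fine to new up: StringBuilder has fixed, well-known behavior
        // you would never want to replace or mock.
        StringBuilder builder = new StringBuilder();
        foreach (string line in _repository.GetLines(reportId))
        {
            builder.AppendLine(line);
        }
        return builder.ToString();
    }
}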

IoC (StructureMap) Best Practice

By my (likely meager) understanding, doing this in the middle of a method in your controller / presenter is considered bad practice, since it creates a dependency between StructureMap and your presenter:
void Override() {
    ICommentForOverrideGetter comm =
        StructureMap.ObjectFactory.GetInstance<ICommentForOverrideGetter>();
since this dependency should be injected into the presenter via the constructor, with your IoC container wiring it up. In this case, though, my code needs a fresh copy of ICommentForOverrideGetter every time this method runs. Is this an exception to the above best practice, or a case where I should rethink my architecture?
It is said that there is no problem in computer science which cannot be solved by one more level of indirection:
If you just don't want the dependency in your presenter, inject a factory interface; the real implementation could do new CommentForOverrideGetter() or whatever.
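For instance (hypothetical names, following the question's ICommentForOverrideGetter):
// The presenter depends on this abstraction instead of on StructureMap.
public interface ICommentForOverrideGetterFactory
{
    ICommentForOverrideGetter Create();
}

// The real implementation is registered with the container and is the only
// place that knows how fresh instances are produced.
public class CommentForOverrideGetterFactory : ICommentForOverrideGetterFactory
{
    public ICommentForOverrideGetter Create()
    {
        return new CommentForOverrideGetter();
    }
}
The presenter then takes ICommentForOverrideGetterFactory in its constructor and calls Create() inside Override(), getting a fresh instance per call without ever referencing the container.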
Edit:
"I have no problem ignoring best practices when I think the complexity/benefit ratio is too high": Neither do I, but as I said in the comments, I don't like hard dependencies on IoC containers in code I want to unit test and presenters are such a case.
Depending on what your ICommentForOverrideGetter does, you could also use a simple CommentForOverrideGetter.CreateNew() but as you require a fresh instance per call, I'd suspect at least some kind of logic associated with the creation? Or is it a stateful "service"?
If you insist on doing service location in your method, you should at least inject the container into your controller, so that you can eliminate the static method call. Add a constructor parameter of type StructureMap.IContainer and store it in a field variable. StructureMap will inject the proper container. You can then call GetInstance() on that container, instead of ObjectFactory.
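That might look something like this (a sketch; only IContainer.GetInstance<T>() is assumed from StructureMap's API, and the presenter type is hypothetical):
using StructureMap;

public class SomePresenter
{
    private readonly IContainer _container;

    // StructureMap injects the proper container instance here.
    public SomePresenter(IContainer container)
    {
        _container = container;
    }

    public void Override()
    {
        // Still service location, but no static ObjectFactory call,
        // and the container can be swapped or faked in tests.
        ICommentForOverrideGetter comm =
            _container.GetInstance<ICommentForOverrideGetter>();
        // ... use comm ...
    }
}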

Single Responsibility Principle: do all public methods in a class have to use all class dependencies?

Say I have a class that looks like the following:
internal class SomeClass
{
    IDependency _someDependency;
    ...

    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }
}
And then I want to add functionality that is related but makes use of a different dependency to achieve its purpose. Perhaps something like the following:
internal class SomeClass
{
    IDependency _someDependency;
    IDependency2 _someDependency2;
    ...

    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }

    internal string OtherFunctionality_MakesUseOfIDependency2()
    {
        ...
    }
}
When I write unit tests for this new functionality (or update the unit tests that I have for the existing functionality), I find myself creating a new instance of SomeClass (the SUT) whilst passing in null for the dependency that I don't need for the particular bit of functionality that I'm looking to test.
This seems like a bad smell to me but the very reason why I find myself going down this path is because I found myself creating new classes for each piece of new functionality that I was introducing. This seemed like a bad thing as well and so I started attempting to group similar functionality together.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
When every instance method touches every instance variable, the class is maximally cohesive. When no instance method shares an instance variable with any other, the class is minimally cohesive. While it is true that we like cohesion to be high, it's also true that the 80-20 rule applies: getting that last little increase in cohesion may require a mammoth effort.
In general if you have methods that don't use some variables, it is a smell. But a small odor is not sufficient to completely refactor the class. It's something to be concerned about, and to keep an eye on, but I don't recommend immediate action.
Does SomeClass maintain an internal state, or is it just "assembling" various pieces of functionality? Can you rewrite it that way:
internal class SomeClass
{
    ...

    internal string SomeFunctionality(IDependency someDependency)
    {
        ...
    }

    internal string OtherFunctionality(IDependency2 someDependency2)
    {
        ...
    }
}
In this case, you may not be breaking SRP if SomeFunctionality and OtherFunctionality are somehow (functionally) related, which is not apparent from the placeholders.
And you have the added value of being able to select the dependency to use from the client, not at creation/DI time. Maybe some tests defining use cases for those methods would help clarifying the situation: If you can write a meaningful test case where both methods are called on same object, then you don't break SRP.
As for the Facade pattern, I have seen it go wild too many times to like it; you know, when you end up with a 50+ method class... The question is: why do you need it? For efficiency reasons à la old-timer EJB?
I usually group methods into classes if they use a shared piece of state that can be encapsulated in the class. Having dependencies that aren't used by all methods in a class can be a code smell but not a very strong one. I usually only split up methods from classes when the class gets too big, the class has too many dependencies or the methods don't have shared state.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
It is a hint, indicating that your class may be a little incoherent ("doing more than just one thing"), but like you say, if you take this too far, you end up with a new class for every piece of new functionality. So you would want to introduce facade objects to pull them together again (it seems that a facade object is exactly the opposite of this particular design rule).
You have to find a good balance that works for you (and the rest of your team).
Looks like overloading to me.
You're trying to do something, and there are two ways to do it. At the SomeClass level, I'd have one dependency to do the work, then have that single dependent class support the two (or more) ways to do the same thing, most likely with mutually exclusive input parameters.
In other words, I'd have the same code you have for SomeClass, but define it as SomeWork instead, and not include any other unrelated code.
HTH
A Facade is used when you want to hide complexity (like an interface to a legacy system) or you want to consolidate functionality while being backwards compatible from an interface perspective.
The key in your case is why you have the two different methods in the same class. Is the intent to have a class that groups together similar types of behavior even if it is implemented through unrelated code, as in aggregation? Or are you attempting to support the same behavior with alternative implementations depending on the specifics, which would hint at an inheritance/overloading type of solution?
The problem will be whether this class continues to grow, and in what direction. Two methods won't make a difference, but if this repeats with more than three, you will need to decide whether you want to declare it a facade/adapter or whether you need to create child classes for the variations.
Your suspicions are correct but the smell is just the wisp of smoke from a burning ember. You need to keep an eye on it in case it flares up and then you need to make a decision as how you want to quench the fire before it burns out of control.

Is there any reason to not use my IoC as a general Settings Repository?

Suppose that the ApplicationSettings class is a general repository of settings that apply to my application such as TimeoutPeriod, DefaultUnitOfMeasure, HistoryWindowSize, etc... And let's say MyClass makes use of one of those settings - DefaultUnitOfMeasure.
My reading of proper use of Inversion of Control Containers - and please correct me if I'm wrong on this - is that you define the dependencies of a class in its constructor:
public class MyClass {
    public MyClass(IDataSource ds, UnitOfMeasure default_uom) {...}
}
and then instantiate your class with something like
var mc = IoC.Container.Resolve<MyClass>();
Where IDataSource has been assigned a concrete implementation and default_uom has been wired up to come from the ApplicationSettings.DefaultUnitOfMeasure property. I've got to wonder, however, whether all these hoops are really necessary to jump through. What trouble am I setting myself up for should I do this instead:
public class MyClass {
    public MyClass(IDataSource ds) {
        UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
    }
}
Yes, many of my classes end up with a dependency on IoC.Container but that is a dependency that most of my classes will have anyways. It seems like I maybe should make full use of it as long as the classes are coupled. Please Agile gurus, tell me where I'm wrong.
IoC.Container.Resolve<UnitOfMeasure>("default_uom");
I see this as a classic anti-pattern, where you are using the IoC container as a service locator; the key issues that result are:
Your application no longer fails fast if your container is misconfigured (you'll only find out the first time it tries to resolve that particular service in code, which might not occur except under a specific set of logic/circumstances).
Harder to test: not impossible, of course, but you either have to create a real (and semi-configured) instance of the Windsor container for your tests or inject the singleton with a mock of IWindsorContainer. This adds a lot of friction to testing, compared to just being able to pass the mock/stub services directly into your class under test via constructors/properties.
Harder to maintain this kind of application (configuration isn't centralized in one location).
Violates a number of other software development principles (DRY, SoC, etc.).
The concerning part of your original statement is the implication that most of your classes will have a dependency on your IoC singleton. If they're getting all their services injected via constructors/properties, then tight coupling to the IoC container should be the exception to the rule. In general, the only time I take a dependency on the container is when I'm doing something tricky, i.e. trying to avoid circular dependency problems, or wishing to create components at run-time for some reason; and even then I can often avoid taking a dependency on anything more than a generic IServiceProvider interface, allowing me to swap in a home-baked IoC or service locator implementation if I need to reuse the components in an environment outside of the original project.
I usually don't have many classes depending on my IoC container. I try to wrap the IoC stuff in a facade object that I inject into other classes, and most of my IoC injection is done only in the higher layers of my application anyway.
If you do things your way, you can't test MyClass without creating an IoC configuration for your tests. This will make your tests harder to maintain.
Another problem is that you're going to have power users of your software who want to change the configuration by editing your IoC config files. This is something I'd want to avoid. You could split your IoC config into a normal config file and the IoC-specific parts, but then you could just as well use the normal .NET config functionality to read the configuration.
Yes, many of my classes end up with a dependency on IoC.Container but that is a dependency that most of my classes will have anyways.
I think this is the crux of the issue. If in fact most of your classes are coupled to the IoC container itself chances are you need to rethink your design.
Generally speaking your app should only refer to the container class directly once during the bootstrapping. After you have that first hook into the container the rest of the object graph should be entirely managed by the container and all of those objects should be oblivious to the fact that they were created by an IoC container.
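In other words, the only direct reference to the container would sit in something like a composition root (a sketch reusing the question's IoC.Container syntax; ApplicationRoot is a hypothetical top-level type):
public static class Program
{
    public static void Main()
    {
        // Bootstrapping: registrations happen here (details are container-specific).

        // The single hook into the container; everything behind ApplicationRoot
        // is wired by the container, and none of those objects reference it.
        ApplicationRoot app = IoC.Container.Resolve<ApplicationRoot>();
        app.Run();
    }
}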
To comment on your specific example:
public class MyClass {
    public MyClass(IDataSource ds) {
        UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
    }
}
This makes it harder to re-use your class. More specifically it makes it harder to instantiate your class outside of the narrow usage pattern you are confining it to. One of the most common places this will manifest itself is when trying to test your class. It's much easier to test that class if the UnitOfMeasure can be passed to the constructor directly.
Also, your choice of name for the UOM instance ("default_uom") implies that the value could be overridden, depending on the usage of the class. In that case, you would not want to "hard-code" the value in the constructor like that.
Using the constructor injection pattern does not make your class dependent on the IoC container; just the opposite, it gives clients the option of using an IoC container or not.
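Concretely, the class from the original question keeps its plain constructor, and only the registration (or a test) decides where the values come from (a sketch; FakeDataSource and the test value are made up for illustration):
public class MyClass
{
    private readonly IDataSource _ds;
    private readonly UnitOfMeasure _defaultUom;

    // The class neither knows nor cares whether a container supplied these.
    public MyClass(IDataSource ds, UnitOfMeasure defaultUom)
    {
        _ds = ds;
        _defaultUom = defaultUom;
    }
}

// In a test, no container is needed at all:
// var mc = new MyClass(new FakeDataSource(), defaultUomFromTestSettings);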