Understanding dependency injection - Swift

I've been reading about the benefits of implementing DI on iOS, but I have one doubt: where should I put the code that actually creates instances of objects? Is the factory pattern the recommended way to go about this? Should all instances be created in the same factory? I've been checking out some frameworks like Typhoon, but they all seem over-engineered to me for something as simple as dependency injection. I don't want to overwhelm my app by using any of these frameworks, so I'm trying to keep it simple, at least for now.

Related

Unity container injection in my view models

I am a newcomer to C# / .Net / Prism / WPF / DevExpress and started to work on a big project in my company. As I am starting a bit late in the project, a lot of code has already been produced, and I stumble upon code like this very often:
public class AboutWindowViewModel : BindableBase
{
    public AboutWindowViewModel(IUnityContainer container)
    {
        ...
What I find "special" here is the dependency of the view model on the container. In the codebase I am currently looking at, this seems to be the "pattern". Every class takes a dependency on IUnityContainer, and the dependencies are then resolved manually with, e.g.,
container.ResolveEx<...>(...);
I am used to working with DI frameworks in other languages, so that example was just nonsense to me, because it goes against the definition of a DI framework and the problems it is meant to solve. For example, I tried to write some code to test one of those view models, and it turned out to be a nightmare.
Now, I raised that concern with the developers at my company, and they answered me the following:
Prism recommendation is to resolve services using containers as they are needed.
Therefore, they resolve them all the time with the IUnityContainer.ResolveEx method.
My question is: is that really the recommended way to build up software with Prism??? If not, do you know where I can find documentation that clearly addresses that point with examples?
is that really the recommended way to build up software with Prism???
Not at all; in fact, it's an anti-pattern.
Prism recommendation is to resolve services using containers as they are needed.
Not really just Prism; more generally, the recommendation is to let the container resolve the dependencies. Of course, this can be done by calling Resolve manually, but doing that defeats all the benefits expected from dependency injection.
Rather, you want to list the dependencies as constructor parameters and let the container fill them. Your code does not need to depend on the container, except for the very entry-point ("resolution root"). The application should only ever contain a single Resolve statement that resolves the first instance.
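To make the difference concrete, here is a minimal sketch (IMessageService is a made-up example service): instead of receiving IUnityContainer and calling Resolve inside the constructor, the view model simply declares what it needs:

// Hypothetical service, for illustration only.
public interface IMessageService
{
    string Greeting { get; }
}

// The view model declares its dependency; it never sees the container.
public class AboutWindowViewModel : BindableBase
{
    private readonly IMessageService _messages;

    public AboutWindowViewModel(IMessageService messages)
    {
        _messages = messages;
    }
}

A test can now construct the view model directly with a fake IMessageService - no container required.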
Here's where Prism comes into play: in your PrismApplication.RegisterTypes and IModule.RegisterTypes, you configure your container. You tell it which type should implement which interface. Later on, when view models are needed, Prism (the ViewModelLocator, to be precise) uses the container to resolve the view models, thereby resolving all their dependencies. There's no reason to make any class depend on the container; in fact, Prism doesn't register anything for IContainerRegistry in the first place.
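For example, the wiring might look roughly like this (MessageService and MainWindow are assumed names; the exact overrides depend on your Prism version):

public partial class App : PrismApplication
{
    // The resolution root: the one place the container is asked for an instance.
    protected override Window CreateShell()
    {
        return Container.Resolve<MainWindow>();
    }

    protected override void RegisterTypes(IContainerRegistry containerRegistry)
    {
        // Map each interface to an implementation once, at startup.
        containerRegistry.RegisterSingleton<IMessageService, MessageService>();
    }
}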
Keep in mind that the container can not only inject singleton services, but also transient instances or factories. And if you need parametrized factories, you can always manually create and register complex factories.
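A sketch of such a parametrized factory (IReport and Report are invented for the example):

// In RegisterTypes: register a factory delegate that takes a runtime parameter.
containerRegistry.RegisterInstance<Func<string, IReport>>(
    title => new Report(title));

// Consumers declare the factory like any other dependency.
public class ReportsViewModel
{
    private readonly Func<string, IReport> _createReport;

    public ReportsViewModel(Func<string, IReport> createReport)
    {
        _createReport = createReport;
    }

    public void PrintAnnualReport()
    {
        var report = _createReport("Annual Report");
        // ... hand the report to a printing service ...
    }
}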
On a side note, I personally would be hesitant to work for a client with a code base like this, as it's obvious proof that their developers have no idea what they're doing.

Play framework SOLID principles

I'm learning Scala/Play2.1.3 from a C#/.Net/ASP.NET MVC background.
I wonder why there is no dependency injection support by default.
In the Play samples, all data-access methods are static in the domain model classes. They use factories instead of injection. What if I want to mock some data-access methods for unit testing?
There is no ready-to-use high-level ORM there. Actually, they discourage me from using ORMs! Regarding SQL databases, I can't believe that I have to write joins again; I don't remember the last time I wrote a join clause. Isn't that a step backward?
I've learned to follow SOLID principles, which IMO are not (completely) observed in the Play framework.
Am I wrong, or should I consider using another framework?
You are right: the majority of the samples do not use dependency injection. But since version 2.1, it is possible to inject the controllers and their dependencies.
For dependency injection, check the docs, and also how to unit test (last paragraph).
But since there are many static calls, you could end up with a static reference somewhere, and then you won't be able to unit test your code.
Still, I think Play is a great framework; the team is modularizing the framework more and more, so it will get better and better with regard to SOLID principles.

Is MEF a Service Locator?

I'm trying to design the architecture for a new LOB MVVM project utilising Caliburn Micro and NHibernate, and I am now at the point of looking into DI and IoC.
A lot of the examples for bootstrapping Caliburn Micro use MEF as the DI/IoC mechanism.
What I'm struggling with is that MEF seems to be reasonably popular, but the idea of the MEF [Import] attributes smells to me like another flavour of Service Locator.
Am I missing something about MEF whereby almost all the examples I've seen are not using it correctly, or have I completely misunderstood something about how it's used whereby it sidesteps the whole service-locator issue?
MEF is not a Service Locator on its own. It can be used to implement a Service Locator (CompositionInitializer in the Silverlight version is effectively a Service Locator built into MEF), but it can also do dependency injection directly.
While the attributes may "smell" to you, they alone don't cause this to be a service locator, as you can use [ImportingConstructor] to inject data at creation time.
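For instance (IGreetingService and ShellViewModel are invented names), constructor injection with MEF looks like this; note that the class never touches the container:

using System.ComponentModel.Composition;

public interface IGreetingService
{
    string Greet();
}

[Export(typeof(IGreetingService))]
public class GreetingService : IGreetingService
{
    public string Greet() { return "hello"; }
}

[Export]
public class ShellViewModel
{
    private readonly IGreetingService _greetings;

    // The dependency is injected at creation time - no service location.
    [ImportingConstructor]
    public ShellViewModel(IGreetingService greetings)
    {
        _greetings = greetings;
    }
}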
Note that the attributes are actually not the only way to use MEF - it can also work via direct registration or convention based registration instead (which is supported in the CodePlex drops and .NET 4.5).
I suppose if you were to just new up parts that had property imports and try to use them, then you could run into some of the same problems described here: Service Locator is an Anti-Pattern
But in practice, you get your parts from the container, and if you use [Import] without the additional AllowDefault property, then the part is required, and the container will blow up on you when you ask for the part doing the import. It will blow up at runtime, yes, but unlike a run-of-the-mill service locator, it's fairly straightforward to do static analysis of your MEF container using a test framework. I've written about it a couple of times here and here.
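A composition test can be as simple as resolving the root part inside a unit test, so a missing export fails a test run rather than the running application. A sketch, reusing the hypothetical ShellViewModel from above (NUnit-style):

using System.ComponentModel.Composition.Hosting;
using NUnit.Framework;

[TestFixture]
public class CompositionTests
{
    [Test]
    public void All_imports_can_be_satisfied()
    {
        var catalog = new AssemblyCatalog(typeof(ShellViewModel).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            // Throws CompositionException at test time if any [Import]
            // in the object graph lacks a matching [Export].
            Assert.IsNotNull(container.GetExportedValue<ShellViewModel>());
        }
    }
}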
It hasn't been a problem for me in practice, for a few reasons:
I get my parts from the container.
I use composition tests.
I'm writing applications, not framework code.

Should CSLA be used with a dependency injection framework?

My development team is evaluating the various frameworks available for .NET to simplify our programming, one of which is CSLA. I have to admit to being a bit confused as to whether or not CSLA would benefit from being used in conjunction with a dependency injection framework, such as Spring.net or Windsor. If we combined one of those two DI frameworks with, say, the Entity Framework to handle ORM duties, does that negate the need or benefit of using CSLA altogether?
I have various levels of understanding of all these frameworks, and I'm trying to get a big picture of what will best benefit our enterprise architecture and object design.
Thank you!
CSLA is a framework for creating business entities, so it has different concerns from an IoC container or an ORM. In an enterprise application, you should consider the benefits of all three.
In particular, you should consider CSLA if you want data binding built into your models, dirty checking, N-level undo, validation and business rules, as well as the data portal implementation, which allows easy configuration for n-tier deployments.
Short answer: Yes.
Long answer: It requires a bit of grunt work and some experimentation to set up, but it can be done without fundamentally breaking CSLA. I put together a working prototype using StructureMap and the repository pattern, and I used StructureMap's BuildUp method for setter injection to inject dependencies within CSLA. I used a method similar to the one found here to ensure that my business objects are re-injected when the objects are serialized.
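A rough sketch of that approach (CustomerEdit, ICustomerRepository, and the exact injection point are illustrative, not the actual prototype):

// A CSLA business object with a setter-injected dependency.
public class CustomerEdit : BusinessBase<CustomerEdit>
{
    [SetterProperty] // StructureMap fills this property during BuildUp
    public ICustomerRepository Repository { get; set; }
}

// After the data portal hands back a deserialized instance,
// re-inject its dependencies into the existing object:
container.BuildUp(existingCustomerEdit);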
I also use StructureMap's Registry base class to separate my configuration into presentation, CSLA client, CSLA server, and CSLA global settings. This way, I can use the linked-file feature of Visual Studio to include the CSLA server and CSLA global configuration files within the server-side Data Portal, and the configuration will always be the same in both places. This ensures I can still change the Data Portal configuration in CSLA from 2-tier to 3-tier without breaking anything.
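The registry split might look like this (registry names and mappings are invented for the example):

public class CslaServerRegistry : Registry
{
    public CslaServerRegistry()
    {
        For<ICustomerRepository>().Use<SqlCustomerRepository>();
    }
}

public class CslaGlobalRegistry : Registry
{
    public CslaGlobalRegistry()
    {
        For<IAuditLogger>().Use<DatabaseAuditLogger>();
    }
}

// Composed at startup; the server-side Data Portal pulls in only the
// server and global registries, so both tiers stay consistent.
var container = new Container(x =>
{
    x.AddRegistry<CslaServerRegistry>();
    x.AddRegistry<CslaGlobalRegistry>();
});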
Anyway, I am still weighing the potential benefits against the drawbacks of using DI, but so far I am leaning towards using it, because testing will be much easier, although I am skeptical of the more advanced features of DI, such as interception. I recommend reading the book Dependency Injection in .NET by Mark Seemann to understand the right and wrong ways to use DI, because there is a lot of misinformation on the Internet.

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (IMHO) a much clearer way if the components were properly designed (using IoC patterns, loose coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions of life-cycle management, but I don't necessarily understand how this is any better/faster than doing it yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios, you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns, to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
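A small illustration of lifetime management plus auto-wiring (StructureMap syntax; all type names are invented):

using System;
using StructureMap;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

public class OrderService
{
    public OrderService(IOrderRepository orders, IClock clock) { }
}

public static class CompositionRoot
{
    public static OrderService Create()
    {
        var container = new Container(x =>
        {
            // Lifetimes declared once, instead of hand-rolled caching:
            x.ForSingletonOf<IClock>().Use<SystemClock>();        // shared
            x.For<IOrderRepository>().Use<SqlOrderRepository>();  // per resolve
        });

        // Auto-wiring: the container reads OrderService's constructor and
        // supplies the registered implementations - no names, no casting.
        return container.GetInstance<OrderService>();
    }
}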
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
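In code, the example might look like this (StructureMap shown since it was mentioned; all types are the hypothetical ones from the paragraph above):

using System.Collections.Generic;
using System.Linq;
using StructureMap;

public class Foo
{
    public string Name { get; set; }
}

public interface IFooRepository
{
    IEnumerable<Foo> GetAll();
}

// One possible implementation; it could equally read from an XML file
// or an external service - callers never know.
public class SqlFooRepository : IFooRepository
{
    public IEnumerable<Foo> GetAll()
    {
        yield break; // placeholder for the real database query
    }
}

// Consuming code depends only on the interface.
public class FooService
{
    private readonly IFooRepository _repository;

    public FooService(IFooRepository repository)
    {
        _repository = repository;
    }

    public int CountFoos()
    {
        return _repository.GetAll().Count();
    }
}

public static class CompositionRoot
{
    public static FooService Create()
    {
        // The single place that picks the implementation.
        var container = new Container(x =>
        {
            x.For<IFooRepository>().Use<SqlFooRepository>();
        });
        return container.GetInstance<FooService>(); // auto-wired
    }
}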
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code-generating tool at a previous job which spat out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was configuring it to emit specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. That generated class was ugly, but the rest of my application didn't care, because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating dependent objects is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and greatly simplifies decoupling. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. Now, you can argue that this can be done in code using various interface design patterns. But an IoC framework makes it a walk in the park. So even as a newbie, you become expert in leveraging various interface design patterns like bridge, factory, etc.
Clean code - As most object-creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate repetitive code. So you have cleaner, smaller, and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how different components interact; see the sketch after this list.
Property configurators - Almost all applications have some properties file where they store application-specific static properties. Now, to access those properties, developers need to write wrappers which read and parse the properties file and store the properties in a format the application can access. All IoC frameworks provide a way of injecting static properties/values into specific classes. So this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.
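As a quick illustration of the unit-testing point (C#/NUnit here, but the idea is identical in Spring; FooService, IFooRepository, and Foo are the hypothetical types from the previous answer):

// A hand-written stub standing in for the real repository.
public class StubFooRepository : IFooRepository
{
    public IEnumerable<Foo> GetAll()
    {
        yield return new Foo { Name = "canned test data" };
    }
}

[Test]
public void Service_counts_whatever_its_repository_returns()
{
    // No container needed in a unit test: inject the stub directly.
    var service = new FooService(new StubFooRepository());
    Assert.AreEqual(1, service.CountFoos());
}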