IoC and events

I am having a really hard time reconciling IoC, interfaces, and events. Let's see if I can explain this without writing a book.
I'm just getting started with IoC and I'm playing with Spring. We have a simple data layer that was built long before EF or the others. One of the classes is a DBProcedure which has some methods and events.
I created an IDBProcedure interface that the 'real' DBProcedure class implements. In TDD fashion I'd like to be able to swap out the 'real' DBProcedure class for another that implements the same interface for testing. To me, this means that the IDBProcedure interface should be defined in a different namespace/project than my data layer, right?
But a DBProcedure can raise some events, and those events deliver custom EventArgs-derived classes. Does that mean the EventArgs-derived classes need to be defined outside the data layer too? It seems like they do for the interface to work, but that also seems bad because it spreads data-layer concerns around.
On the other hand maybe I have the wrong idea - is it ok to include the data layer namespace when I'm testing to get interface and event definitions even though I'm not using any of the 'real' classes?

Yes, you need to move the interfaces and all the types they depend on somewhere else, because you do not want the interfaces module to depend on the implementations.
The typical choice for this is one of two alternatives:
Impl ----> Api <---- Client

(The implementation depends on the API and the client depends on the API; everything lives in the API module.)

Impl ----> Api <---- Client
   \        |        /
    \       v       /
     `---> Model <-'
Here everyone also depends on a common "model" module, and this contains the enums and such. The advantage of this version is that multiple API modules can share the same common enums and other artifacts. (Because you really don't want APIs to depend on other API modules, usually.)
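To make that concrete for the question above, here is a minimal sketch of what the shared contracts ("Api") project might look like in C#. Only IDBProcedure comes from the question; the project names, the Completed event and DBProcedureCompletedEventArgs are invented for illustration:

    // Project: MyApp.Data.Contracts -- the "Api" module; no reference to the data layer.
    using System;

    namespace MyApp.Data.Contracts
    {
        // The custom EventArgs live next to the interface that exposes them,
        // so test doubles can raise the same events without referencing the
        // real data layer.
        public class DBProcedureCompletedEventArgs : EventArgs
        {
            public int RowsAffected { get; }
            public DBProcedureCompletedEventArgs(int rowsAffected) => RowsAffected = rowsAffected;
        }

        public interface IDBProcedure
        {
            event EventHandler<DBProcedureCompletedEventArgs> Completed;
            void Execute();
        }
    }

The 'real' DBProcedure (in the data layer) and any fake used for testing then both reference only this contracts project, which is exactly the Impl ----> Api <---- Client shape above.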

Dependency Injection, EF Core + web api 2 architecture

My layout
project.web (.net core 2.1 web api)
Some binding models (for POST/PUT requests) and resource models for GET requests.
Controllers.
I only call interfaces from (x.api) which are resolved to x.core services.
No validation or anything here; that happens inside the core layer.
I've set up a few things like AutoMapper and Swagger that are not relevant to my question.
project.api (class lib)
Only contains interfaces for the .core and .store projects (services, repositories and domain models).
project.core (class lib)
Two kinds of services:
1) Services that call the repository services (through their interfaces), but validate the data before calling the repo service.
2) Services that execute long-running work (e.g. scanning folders, handling file information, ...). I actually created HostedServices for these, as a folder could easily contain thousands of files.
project.store (class lib)
Wrapper services for my storage (Only contains helper methods so I don't have to write the same queries a hundred times.)
Problem / question
At this time I have registered all of my services and repositories as singletons in public void ConfigureServices(IServiceCollection services),
because I was using different storage (NoSQL, LiteDB) before refactoring the code to EF (SQLite).
Now the problem is that I want to register my DbContext as scoped (the default).
But my repositories (singletons) depend on the DbContext, which means I would have to make them scoped as well. I'm OK with this, as they are only wrapper services that save me from writing the same queries all the time.
But some other services that need access to my data are singletons, and I cannot register those as scoped: they contain some data that needs to be the same for every request, plus some collections and long-running jobs.
I can think of two solutions
The first solution is to take a dependency on IServiceScopeFactory in my repository and use something like using (var scope = serviceScopeFactory.CreateScope()) { scope.ServiceProvider.GetService(typeof(MyDbContext))... }
This way I can remove the scoped dependency from my repository wrapper, but it doesn't sound clean to me.
The other solution is to register all of my services that only handle database stuff as scoped (e.g. the CustomerService in core only does validation and calls the CustomerRepository), and remove those dependencies from my remaining singleton services.
In those singletons, instead of depending on the CustomerService, I could use a REST call with RestSharp or something similar, just like how I would consume them from my Windows client applications and web client apps.
I don't actually like either. But perhaps someone can give me some advice or thoughts?
Well, the two options you laid out are in fact your only two options. The first is the service locator antipattern, which, as the name implies, is something you should avoid. However, when you are dealing with singleton-scoped objects needing access to objects in other scopes, there is no other way.
The only other option is to reduce the scope of your services from singletons, such that you can then inject the context directly. Not everything necessarily needs to be a singleton. Generally, if you need to utilize something like DbContext, there's a strong argument to be made that your object should not be singleton-scope in the first place. If you need it to be singleton-scoped, that's most likely an indication that the class is either doing too much or is otherwise brittle.
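For reference, a minimal sketch of what that first option looks like inside a singleton, using the standard Microsoft.Extensions.DependencyInjection API (CacheRefresher is an invented name, and MyDbContext is stubbed out here in place of the real EF Core context):

    using Microsoft.Extensions.DependencyInjection;

    // Stub standing in for the EF Core context, which would normally be
    // registered as scoped via services.AddDbContext<MyDbContext>().
    public class MyDbContext { }

    // A singleton cannot take the scoped MyDbContext in its constructor,
    // so it creates a scope per unit of work instead.
    public class CacheRefresher
    {
        private readonly IServiceScopeFactory _scopeFactory;

        public CacheRefresher(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;

        public void Refresh()
        {
            using (var scope = _scopeFactory.CreateScope())
            {
                // Resolved from the scope, the context is created and disposed
                // per call instead of living as long as the singleton.
                var db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
                // ... query db and refresh the singleton's in-memory state ...
            }
        }
    }

It works, but as noted above it is service location, so keep it at the composition boundary rather than scattering it through the codebase.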

Avoid duplication in API development with Play Framework

I developed a REST API with Play 2.2.0. Some controllers expose GET methods, others expose POST methods with authentication, etc.
I developed the client using Play as well, but I have a problem. How can I avoid duplicating the model layer between both applications?
In the server application, I have a Model Country(code, name).
In the client I am able to list countries and create new ones.
Currently, I have a Country class on both sides. When I get countries, I deserialize them. The problem is that if I add a field to Country on the server, I have to update the client as well.
How can I share the Country entity between the applications?
PS: I don't want to create a dependency between the API and the client, as the client could have been developed in another language or framework.
Thanks
This is not very specific to Play Framework; it's more of a general question. One option is to create reusable representations of the data in your protocol (the actual data structures you send between your nodes) and accept tight coupling in representation and language. Many projects do it like this, since they know they will have the same platform throughout their architecture.
The other option is to duplicate all of, or only the parts of, the parsing/generating code that each part of the architecture needs; this way you get looser coupling and can use any language in the different parts.
There are also data protocols/tools that define the representation in a protocol-specific way and can then generate representations in various programming languages (Protocol Buffers and Thrift, for example).
So as you see, it's all about pros and cons; neither solution is "the right way (tm)". You will have to think about your specific system/architecture, which pros are most valuable, and which cons are most costly to you.
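To make the duplication option concrete, here is a minimal sketch in C# with System.Text.Json (the question's stack is Play, but the idea is language-neutral; the Country shape comes from the question, everything else is invented):

    using System;
    using System.Text.Json;

    // Each application owns its own copy of this class; the only shared
    // contract is the JSON on the wire.
    public record Country(string Code, string Name);

    public static class Demo
    {
        public static void Main()
        {
            // Server side: serialize the server's Country model.
            var json = JsonSerializer.Serialize(new Country("FR", "France"));

            // Client side: deserialize into the client's duplicated Country
            // class. Adding a field on the server means updating the client
            // by hand -- that is the price of the looser coupling.
            var clientCopy = JsonSerializer.Deserialize<Country>(json);
            Console.WriteLine(clientCopy);
        }
    }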
Well, I suggest sending the client a template of what it should display, and taking advantage of JS template frameworks on the client side, so you can tell the client how to show the data dynamically. If clients want to override the templates, that's more work on their side.
We could call this approach REST component oriented...
Just a suggestion :)
It should work!

Does Onion Architecture contradict IoC

Jeffrey Palermo pioneered the Onion Architecture, which I have found to be a good approach.
http://www.headspring.com/jeffrey/onion-architecture-part-4-after-four-years/
However, his statement "Inner layers define interfaces. Outer layers implement interfaces" seems to contradict IoC, if my understanding is correct, which says that the consumer defines the interface and providers implement it, i.e. the control lies with the consumer, not the provider.
This principle makes sense to me since, if you imagine you are writing a UI, it means you can get on with creating your UI without knowing anything about the services you are going to call, because you are in charge of defining the interface that exposes all the functionality you will need.
So to that end, Jeffrey's statement seems a contradiction and confuses me about where to put contracts (interface definitions), because it seems to imply:
Domain Layer
    MyEntity
    IMyService
Service Layer
    MyEntityService : IMyService
Since there is no layer beneath Domain, where do I put IMyEntity? Also, it means I cannot create a Presentation project until Domain exists and has defined IMyService.
As a side note, where do I place IMyEntityRepository and MyEntityRepository, given that the service relies on IMyEntityRepository and MyEntityRepository relies on IMyEntity?
So, where to begin? :-)
Let's start with the real role of IoC. According to Wikipedia, "Inversion of control is a programming technique in which object coupling is bound at run time by an assembler object and is typically not known at compile time."
In your case, your UI will manipulate services interfaces without knowing the service implementation that will be bound at run time. It’s not up to the consumer to define those service interfaces; they will be defined in the application core of your Onion architecture, but we’ll see that later.
"Inner layers define interfaces. Outer layers implement interfaces", that’s how the Onion Architecture is designed, but do not forget that the outermost layer is the IOC! It’s up to the IOC to bind interfaces with the right implementations at run time!
You’re right saying that your UI won’t work without having at least one implementation available for the interfaces you will manipulate. But in this case, if for any reason you need to build your UI first, consider using a mocking framework!
Your last question is about where you need to place your IMyEntityRepository and MyEntityRepository classes. Well, that's the easy part ;-) IMyEntityRepository definitely needs to be placed within your application core. All your entities, service interfaces, repository interfaces and whatever other interfaces need to be in the same place. The MyEntityRepository implementation should be placed somewhere in your infrastructure layer, as its role will mainly be to deal with getting data out of the DB.
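In code, that split might look like the following minimal sketch (MyEntity, IMyService, IMyEntityRepository and MyEntityRepository come from the question; the project names and members are invented):

    // Project: MyApp.Core -- the application core: entities and all interfaces.
    public class MyEntity
    {
        public int Id { get; set; }
    }

    public interface IMyEntityRepository
    {
        MyEntity GetById(int id);
    }

    public interface IMyService
    {
        MyEntity Load(int id);
    }

    // Project: MyApp.Infrastructure -- outer layer: data access implementation.
    public class MyEntityRepository : IMyEntityRepository
    {
        public MyEntity GetById(int id)
        {
            // ... get the data out of the DB (EF, NHibernate, ADO.NET, ...) ...
            return new MyEntity { Id = id };
        }
    }

    // Project: MyApp.Services -- outer layer: service implementation.
    public class MyEntityService : IMyService
    {
        private readonly IMyEntityRepository _repository;
        public MyEntityService(IMyEntityRepository repository) => _repository = repository;
        public MyEntity Load(int id) => _repository.GetById(id);
    }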
Hope that helps!
I have worked with Jeffrey for many years, and I would say that IoC is integral to making Onion Architecture possible. Interfaces for external dependencies are defined in projects with few (if any) dependencies (in other words, projects at the "center" of the onion). Classes that implement those interfaces that depend on external dependencies are located in projects at the edge / on the surface of the onion. IoC containers, then, are needed to "hook up" the class implementations on the edge of the onion to interfaces at the core of the onion at run-time.
We've implemented Onion on my project and it's conceptually pretty simple.
Create a project that contains only interfaces and POCOs; we'll call this Contract for now
Create one or more projects that contain the implementations of your interfaces and all your 3rd-party things like NHibernate mappings; we'll call this Implementation(s)
Add a direct reference to Contract from projects that need to use this functionality, but don't add a reference to Implementation from these projects
In your composition root (application entry point) projects, do two things: (1) as part of the build, copy the latest version of Implementation to a configured location (we use AppSettings for the configuration, but there are a lot of options here); (2) have your container scan the configured location for your Implementation DLL(s)
This approach allows you to rely only on Contract, and the idea is that you can switch Implementation: if you want to move to Entity Framework or something else in the future, you only have to reimplement Implementation using that framework.
We also copy the NHibernate DLLs to the configured scanning location, which makes the architecture defensive: it is difficult not to follow it, because NHibernate is only available where it should be used.
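A rough sketch of that scanning step using nothing but reflection (the folder path and the IRepository contract are invented; in practice you would more likely use your container's own assembly-scanning support):

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public static class ImplementationLoader
    {
        // Scan a configured folder for DLLs and find a concrete type that
        // implements a given Contract interface, without the entry point
        // ever referencing the Implementation project at compile time.
        public static Type FindImplementation(string pluginFolder, Type contractType)
        {
            return Directory.GetFiles(pluginFolder, "*.dll")
                .Select(Assembly.LoadFrom)
                .SelectMany(assembly => assembly.GetTypes())
                .First(type => contractType.IsAssignableFrom(type)
                               && !type.IsAbstract && !type.IsInterface);
        }
    }

    // Usage in the composition root (IRepository is a stand-in Contract type):
    // var implType = ImplementationLoader.FindImplementation(@"C:\app\plugins", typeof(IRepository));
    // var repository = (IRepository)Activator.CreateInstance(implType);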
The interfaces in the onion architecture are the ones that the layer depends on (i.e. consumes), whose implementation indeed is provided by the outer layer.
More specifically, the architecture itself does not say you have to abstract the business logic behind interfaces (which you should probably do anyway, in keeping with the dependency inversion principle, but that's another story). What it says is that the dependencies of a layer should be modeled as interfaces so implementations can be provided by the outer layer.
The best example is infrastructure code, and particularly data access. Your business logic needs to load and store data, so it defines an interface that it'll consume. The outer layer will provide an implementation using NHibernate or EF or whatever.
Actually, the low-level layers (from the DIP; i.e. the data access and other commodities) are on the outermost layers of the onion, while the high-level layers (i.e. the business logic) are closer to the center.
See also http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html, which replaces the domain, business logic and more business logic terms with entities, use cases and interface adapters, which IMO are easier to reason about. The farther you go from the center, the more specific you are to what the user sees and uses; the closer to the center, the more generic. At the center are the things that are transverse to your enterprise; then come the use cases, which are specific to your app but don't dictate how they'll be used or in which environment (which DB, which UI, etc.); then you make adapters between your use cases and your technical framework (ASP.NET MVC, WCF, WPF in a non-web scenario, EF, NHibernate, etc.).

Implementing Chain of Responsibility with Services

I'm thinking about a platform-neutral (i.e. not .NET MEF) technique for implementing the chain-of-responsibility pattern using web services as the handlers. I want to be able to add more CoR handlers by deploying new services and not compiling new CoR code, just changing configuration info. It seems the challenge will be managing the metadata about available handlers and ensuring the handlers conform to the interface.
My question: any ideas on how I can safely ensure:
1. The web services are implementing the interface
2. The web services are implementing the base class behavior, like calling the successor
Because, in compiled code, I can have type-safety and therefore know that any handlers have derived from the abstract base class that ensures the interface and behavior I want. That seems to be missing in the world of services.
This seems like a valid question, but a rather simple one.
You still get the protection of the typing system, even if you are loading code at runtime that the original code has never seen before.
I would think the preferred approach here would be to have something like a properties file with a list of implementers (your chain). Then, in code, you need a way to instantiate an instance of each handler at runtime to construct the chain. When you construct the instance, you have to check its type. In Java, for instance, that would take the form of instanceof (an abomination ordinarily, but you get a pass in loading scenarios) or isAssignableFrom; in Objective-C, it's conformsToProtocol.
If a handler doesn't conform, it can't be used, and you can spit an error out to the console.
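In C# terms, the same runtime conformance check might look like this sketch (IHandler, SetSuccessor and the configured type names are all invented for illustration):

    using System;
    using System.Collections.Generic;

    public interface IHandler
    {
        void Handle(string request);
        void SetSuccessor(IHandler successor);
    }

    public static class ChainBuilder
    {
        // Build the chain from configured type names, verifying that each
        // type actually implements IHandler before trusting it.
        public static IHandler Build(IEnumerable<string> typeNames)
        {
            IHandler first = null, previous = null;
            foreach (var name in typeNames)
            {
                var type = Type.GetType(name, throwOnError: true);
                if (!typeof(IHandler).IsAssignableFrom(type))
                {
                    // The compile-time safety net is gone, so fail loudly.
                    Console.Error.WriteLine($"{name} does not implement IHandler; skipping.");
                    continue;
                }
                var handler = (IHandler)Activator.CreateInstance(type);
                previous?.SetSuccessor(handler);
                first = first ?? handler;
                previous = handler;
            }
            return first;
        }
    }

Note that this only verifies the interface (question 1); whether each handler actually calls its successor (question 2) cannot be proven by a type check, which is exactly the gap the question points at.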

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose-coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions to life-cycle management, but I don't necessarily understand how this is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
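To illustrate what "natively understand those patterns" means, here is a minimal auto-wiring sketch with Microsoft.Extensions.DependencyInjection (the service names are invented):

    using Microsoft.Extensions.DependencyInjection;

    public interface IMessageStore { void Save(string message); }

    public class SqlMessageStore : IMessageStore
    {
        public void Save(string message) { /* write to the database */ }
    }

    public class NotificationService
    {
        private readonly IMessageStore _store;

        // The container reads this constructor's static type information and
        // supplies the mapped implementation -- no lookup by string name, no
        // manual casting.
        public NotificationService(IMessageStore store) => _store = store;

        public void Notify(string text) => _store.Save(text);
    }

    public static class CompositionRoot
    {
        public static void Main()
        {
            var services = new ServiceCollection();
            services.AddSingleton<IMessageStore, SqlMessageStore>(); // one shared instance
            services.AddTransient<NotificationService>();            // new instance per resolve
            using (var provider = services.BuildServiceProvider())
            {
                provider.GetRequiredService<NotificationService>().Notify("hello");
            }
        }
    }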
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
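For instance, a container-free sketch of "supply, don't instantiate" (IClock and ReportGenerator are invented names):

    using System;

    public interface IClock { DateTime Now { get; } }

    public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

    // The consumer declares what it needs instead of newing it up internally.
    public class ReportGenerator
    {
        private readonly IClock _clock;
        public ReportGenerator(IClock clock) => _clock = clock;
        public string Stamp() => $"Generated at {_clock.Now}";
    }

    public static class Program
    {
        public static void Main()
        {
            // The entry point wires things up by hand: inversion of control,
            // no container framework involved.
            var generator = new ReportGenerator(new SystemClock());
            Console.WriteLine(generator.Stamp());
        }
    }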
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
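Sketched in code (Foo and IFooRepository come from the paragraph above; the two implementations are invented):

    using System.Collections.Generic;

    public class Foo { public int Id { get; set; } }

    public interface IFooRepository
    {
        IEnumerable<Foo> GetAll();
    }

    // Production implementation: hits the database (details elided).
    public class SqlFooRepository : IFooRepository
    {
        public IEnumerable<Foo> GetAll() { /* ... query the DB ... */ yield break; }
    }

    // Alternative implementation: swapped in through the container's
    // configuration, without touching any code that consumes IFooRepository.
    public class InMemoryFooRepository : IFooRepository
    {
        public IEnumerable<Foo> GetAll() => new[] { new Foo { Id = 1 } };
    }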
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility for creating the dependent object is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and makes decoupling very easy. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need is to swap the old class for the new one in the config file, without a single line of code change. Now you can argue that this can be done in code using various interface design patterns. But an IoC framework makes it a walk in the park. So even as a newbie you become expert in leveraging various interface design patterns like bridge, factory, etc.
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate repetitive code. So you have cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies into your test cases and see how different components interact.
Property configurators - Almost all applications have some properties file where they store application-specific static properties. Now, to access those properties, developers need to write wrappers which read and parse the properties file and store the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into a specific class. So this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.