Does Onion Architecture contradict IoC - inversion-of-control

Jeffrey Palermo pioneered the Onion Architecture, which I have found to be a good approach.
http://www.headspring.com/jeffrey/onion-architecture-part-4-after-four-years/
However, his statement "Inner layers define interfaces. Outer layers implement interfaces" seems to contradict IoC, which (if my understanding is correct) states that consumers define interfaces and providers implement them, i.e. the control lies with the consumer, not the provider.
This principle makes sense to me: imagine you are writing a UI. It means you can get on with creating the UI without knowing anything about the services you are going to call, since you are in charge of defining the interface that exposes all the functionality you will need.
So to that end Jeffrey's statement seems a contradiction and confuses me about where to put contracts (interface definitions), because it seems to imply this layout:
Domain Layer
    MyEntity
    IMyService
Service
    MyEntityService : IMyService
Since there is no layer beneath Domain, where do I put IMyEntity? It also means I cannot create a Presentation project until Domain exists and has defined IMyService.
As a side note, where do I place IMyEntityRepository and MyEntityRepository, given that the service relies on IMyEntityRepository and MyEntityRepository relies on IMyEntity?

So, where to begin? :-)
Let's start with the real role of an IoC container. According to Wikipedia:
Inversion of control is a programming technique in which object coupling is bound at run time by an assembler object and is typically not known at compile time.
In your case, your UI will manipulate services interfaces without knowing the service implementation that will be bound at run time. It’s not up to the consumer to define those service interfaces; they will be defined in the application core of your Onion architecture, but we’ll see that later.
"Inner layers define interfaces. Outer layers implement interfaces", that’s how the Onion Architecture is designed, but do not forget that the outermost layer is the IOC! It’s up to the IOC to bind interfaces with the right implementations at run time!
You’re right saying that your UI won’t work without having at least one implementation available for the interfaces you will manipulate. But in this case, if for any reason you need to build your UI first, consider using a mocking framework!
Your last question is about where you need to place IMyEntityRepository and MyEntityRepository. Well, that's the easy part ;-) IMyEntityRepository definitely needs to be placed within your application core. All your entities, service interfaces, repository interfaces and any other interfaces need to live in the same place. The MyEntityRepository implementation should be placed somewhere in your infrastructure layer, as its role is mainly to deal with getting data out of the DB.
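To make the placement concrete, here is a minimal sketch (project and type names are only illustrative, extrapolated from your question): entities, IMyService and IMyEntityRepository all live in the core, while MyEntityRepository lives in the infrastructure layer.

// --- Application core (innermost layer): entities + all interfaces ---
namespace MyApp.Core
{
    public class MyEntity
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IMyEntityRepository
    {
        MyEntity GetById(int id);
        void Save(MyEntity entity);
    }

    public interface IMyService
    {
        MyEntity Process(int id);
    }

    // The service implementation can live in the core too; it only talks to interfaces.
    public class MyEntityService : IMyService
    {
        private readonly IMyEntityRepository _repository;

        public MyEntityService(IMyEntityRepository repository)
        {
            _repository = repository;
        }

        public MyEntity Process(int id) => _repository.GetById(id);
    }
}

// --- Infrastructure (outer layer): implements the core's repository interface ---
namespace MyApp.Infrastructure
{
    using MyApp.Core;

    public class MyEntityRepository : IMyEntityRepository
    {
        public MyEntity GetById(int id)
        {
            // Talk to the database here (EF, NHibernate, raw ADO.NET, ...).
            throw new System.NotImplementedException();
        }

        public void Save(MyEntity entity)
        {
            throw new System.NotImplementedException();
        }
    }
}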
Hope that helps!

I have worked with Jeffrey for many years, and I would say that IoC is integral to making Onion Architecture possible. Interfaces for external dependencies are defined in projects with few (if any) dependencies (in other words, projects at the "center" of the onion). Classes that implement those interfaces (and therefore depend on external dependencies) are located in projects at the edge of / on the surface of the onion. IoC containers, then, are needed to "hook up" the class implementations on the edge of the onion to interfaces at the core of the onion at run time.
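As a rough illustration of that hook-up (using Microsoft.Extensions.DependencyInjection purely as an example container, with the hypothetical types from the sketch above), the composition root is the only place that references both ends:

using Microsoft.Extensions.DependencyInjection;
using MyApp.Core;
using MyApp.Infrastructure;

// The composition root is the only place that knows about both the core
// interfaces and the edge implementations.
var services = new ServiceCollection();
services.AddScoped<IMyEntityRepository, MyEntityRepository>();  // edge class -> core interface
services.AddScoped<IMyService, MyEntityService>();

using var provider = services.BuildServiceProvider();

// Consumers (e.g. the UI) only ever ask for the interface.
var service = provider.GetRequiredService<IMyService>();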

We've implemented Onion on my project and it's conceptually pretty simple.
Create a project that contains only interfaces and POCO's, we'll call this Contract for now
Create one or more projects that contains the implementations of your interfaces and all your 3rd party things like NHibernate mappings, we'll call this Implementation(s)
Add a direct reference to Contract from projects that need to use this functionality but don't add a reference to Implementation from these projects
In your composition root (application entry point) project, do two things: (1) as part of the build, copy the latest version of Implementation to a configured location (we use AppSettings for the configuration, but there are a lot of options here); (2) have your container scan the configured location for your Implementation DLL(s)
This approach allows you to only rely on Contract and the idea is that you can switch Implementation so if you want to move to Entity Framework or something else in the future you only have to reimplement Implementation using that framework.
We also copy the NHibernate DLLs to the configured scanning location, and this allows us to make the architecture defensive: it is difficult not to follow it, because NHibernate is only available where it should be used.
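For illustration, here is a rough sketch of what the scanning step could look like using plain reflection rather than any particular container's API; the folder path, class and interface names are assumptions, not our actual code:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public static class ImplementationScanner
{
    // Scan a configured folder for Implementation DLLs and return the first type
    // that implements the given Contract interface. A real container would
    // register such mappings automatically; this only shows the lookup.
    public static Type FindImplementation(string pluginFolder, Type contractInterface)
    {
        foreach (var dllPath in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(dllPath);
            var match = assembly.GetTypes()
                .FirstOrDefault(t => t.IsClass && !t.IsAbstract
                                  && contractInterface.IsAssignableFrom(t));
            if (match != null)
                return match;
        }

        throw new InvalidOperationException(
            "No implementation of " + contractInterface.Name + " found in " + pluginFolder);
    }
}

// Usage (the folder comes from configuration, e.g. AppSettings):
// var repoType = ImplementationScanner.FindImplementation(@"C:\MyApp\Implementations", typeof(IMyEntityRepository));
// var repo = (IMyEntityRepository)Activator.CreateInstance(repoType);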

The interfaces in the Onion Architecture are the ones a layer depends on (i.e. consumes); their implementations are indeed provided by the outer layer.
More specifically, the architecture itself does not say you have to abstract the business logic behind interfaces (which you should probably do anyway, in keeping with the dependency inversion principle, but that's another story). What it says is that the dependencies of a layer should be modeled as interfaces so implementations can be provided by the outer layer.
The best example is infrastructure code, and particularly data access. Your business logic needs to load and store data, so it defines an interface that it'll consume. The outer layer will provide an implementation using NHibernate or EF or whatever.
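A minimal sketch of that idea (all names are made up): the business logic owns the interface and the business rule, and knows nothing about the storage technology behind it.

// Core / business logic: defines the dependency it will consume.
public interface IOrderStore
{
    Order Load(int orderId);
    void Store(Order order);
}

public class Order
{
    public int Id { get; set; }
    public bool Shipped { get; set; }
}

public class OrderProcessor
{
    private readonly IOrderStore _store;

    public OrderProcessor(IOrderStore store) => _store = store;

    public void MarkShipped(int orderId)
    {
        var order = _store.Load(orderId);
        order.Shipped = true;   // the business rule lives here
        _store.Store(order);
    }
}

// Outer layer: an NHibernate- or EF-backed IOrderStore would live out there and be
// bound to the interface by the container at run time.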
Actually, the low-level layers (from the DIP; i.e. the data access and other commodities) are on the outermost layers of the onion, while the high-level layers (i.e. the business logic) are closer to the center.
See also http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html, which replaces the domain, business logic and more business logic terms with entities, use cases and interface adapters, which IMO are easier to reason about. The farther you go from the center, the more specific you are to what the user sees and uses; the closer to the center, the more generic. At the center are the things that are transverse to your enterprise; then come the use cases, which are specific to your app but don't dictate how they'll be used or in which environment (which DB, which UI, etc.); then you'll make adapters between your use cases and your technical framework (ASP.NET MVC, WCF, WPF in a non-web scenario, EF, NHibernate, etc.).


Opinion on ASP.NET MVC Onion-based architecture [closed]

What is your opinion on the following 'generic' code-first Onion-inspired ASP.NET MVC architecture:
The layers, explained:
Core - contains the Domain model, i.e. the business objects and their relationships. I am using Entity Framework to visually design the entities and the relations between them. It lets me generate a script for a database. I get automatically-generated POCO-like models, which I can freely refer to in the next layer (Persistence), since they are simple (i.e. they are not database-specific).
Persistence - Repository interface and implementations. Basically CRUD operations on the Domain model.
BusinessServices - A business layer around the repository. All the business logic should be here (e.g. GetLargestTeam(), etc). Uses CRUD operations to compose return objects or get/filter/store data. Should contain all business rules and validations.
Web (or any other UI) - In this particular case it's an MVC application, but the idea behind this project is to provide UI, driven by what the Business services offer. The UI project consumes the Business layer and has no direct access to the Repository. The MVC project has its own View models, which are specific to each View situation. I am not trying to force-feed it Domain Models.
So the references go like this:
UI -> Business Services -> Repository -> Core objects
What I like about it:
I can design my objects, rather than code them manually. I am getting
code-generated Model objects.
UI is driven/enforced by the Business
layer. Different UI applications can be coded against the same
Business model.
Mixed feelings about:
Fine, we have a pluggable repository implementation, but how often do you really have different implementations of the same persistence interface?
Same goes for the UI - we have the technical ability to implement different UI apps against the same business rules, but why would we do that, when we can simply render different views (mobile, desktop, etc)?
I am not sure if the UI should only communicate with the Business Layer via View models, or should I use Domain Models to transfer data, as I do now. For display, I am using view models, but for data transfer I am using Domain models. Wrong?
What I don't like:
The Core project is now referenced in every other project - because I want/have to access the Domain models. In classic Onion architecture, the core is referenced only by the next layer.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is. I actually want to use the .EDMX for the visual model design, but I feel like the DbContext belongs to the Persistence layer, somewhere within the database-specific repository implementation.
As a final question - what is a good architecture which is not over-engineered (such as a full-blown Onion, where we have injections, service locators, etc) but at the same time provides some reasonable flexibility, in places where you would realistically need it?
Thanks
Wow, there’s a lot to say here! ;-)
First of all, let’s talk about the overall architecture.
What I can see here is that it's not really an Onion architecture. You forgot the outermost layer, the "Dependency Resolution" layer. In an Onion architecture, it's up to this layer to wire up Core interfaces to Infrastructure implementations (which is where your Persistence project should reside).
Here's a brief description of what you should find in an Onion application. What goes in the Core layer is everything unique to the business: the domain model, business workflows... This layer defines all technical implementation needs as interfaces (e.g. repository interfaces, logging interfaces, session interfaces...). The Core layer cannot reference any external libraries and has no technology-specific code. The second layer is the Infrastructure layer. This layer provides implementations for the non-business Core interfaces; this is where you call your DB, your web services... You can reference any external libraries you need to provide implementations and deploy as many NuGet packages as you want :-). The third layer is your UI; well, you know what to put in there ;-) And the last layer is the Dependency Resolution layer I talked about above.
Direction of dependency between layers is toward the center.
Here's how it could look:
The question now is: how to fit what you’ve already coded in an Onion architecture.
Core: contain the Domain model
Yes, this is the right place!
Persistence - Repository interface and implementations
Well, you'll need to separate interfaces from implementations. The interfaces need to be moved into Core, and the implementations need to be moved into the Infrastructure project (you can call this project Persistence).
BusinessServices - A business layer around the repository. All the business logic should be here
This needs to be moved into Core, but you shouldn't use repository implementations here; just manipulate interfaces!
Web (or any other UI) - In this particular case it's an MVC application
cool :-)
You will need to add a “Bootstrapper“ project, just have a look here to see how to proceed.
About your mixed feelings:
I won't discuss the need for repositories here; you'll find plenty of answers on Stack Overflow.
In my ViewModel project I have a folder called "Builder". It's up to my builders to talk to my business service interfaces in order to get data. A builder receives lists of Core.Domain objects and maps them into the right ViewModel.
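Here is a hypothetical example of such a builder (ITeamService, Team and the view model are illustrative names, not your actual types):

using System.Collections.Generic;

// Flow: builder -> business service interface -> domain objects -> view model.
public interface ITeamService
{
    Team GetLargestTeam();
}

public class Team
{
    public string Name { get; set; }
    public List<string> Members { get; set; } = new List<string>();
}

public class TeamSummaryViewModel
{
    public string TeamName { get; set; }
    public int MemberCount { get; set; }
}

public class TeamSummaryBuilder
{
    private readonly ITeamService _teamService;   // interface defined in Core

    public TeamSummaryBuilder(ITeamService teamService) => _teamService = teamService;

    public TeamSummaryViewModel Build()
    {
        var largestTeam = _teamService.GetLargestTeam();   // Core.Domain object
        return new TeamSummaryViewModel
        {
            TeamName = largestTeam.Name,
            MemberCount = largestTeam.Members.Count
        };
    }
}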
About what you don’t like:
In classic Onion architecture, the core is referenced only by the next layer.
False! :-) Every layer needs to reference Core in order to have access to all the interfaces defined there.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is
Once again, it's not a problem, since it's really easy to edit the T4 template associated with your EDMX. You just need to change the path of the generated files; then you can have the EDMX in the Infrastructure layer and the POCOs in your Core.Domain project.
Hope this helps!
I inject my services into my controllers. The services return DTO's which reside in Core.
The model you have looks good. I don't use the repository pattern, but many people do. It is difficult to work with EF in this type of architecture, which is why I chose to use NHibernate.
A possible answer to your final question.
CORE
DOMAIN
DI
INFRASTRUCTURE
PRESENTATION
SERVICES
In my opinion:
"In classic Onion architecture, the core is referenced only by the next layer."
That is not true; Core should be referenced by every layer... remember that the direction of dependency between layers is toward the center (Core).
"the layers above can use any layer beneath them" By Jeffrey Palermo http://jeffreypalermo.com/blog/the-onion-architecture-part-3/
About your EF: it is in the wrong place... it should be in the Infrastructure layer, not in the Core layer. Use the POCO generator to create the entities (POCO classes) in Core/Model, or use an auto-mapper so you can map the Core model (business objects) to the entity model (EF entities).
What you've done looks pretty good and is basically one of two standard architectures that I see a lot.
Mixed feelings about:
Fine, we have a pluggable repository implementation, but how often do you really have different implementations of the same persistence interface?
Pluggable is often touted as being good design but I've never once seen a team swap out a major implementation of something for something else. They just modify the existing thing. IMHO "pluggability" is only useful for being able to mock components for automated unit testing.
I am not sure if the UI should only communicate with the Business Layer via View models, or should I use Domain Models to transfer data, as I do now. For display, I am using view models, but for data transfer I am using Domain models. Wrong?
I reckon view models are a UI (MVC web) concern; if you added a different type of UI, for example, it might not require view models or might need something different. So I think the business layer should return domain entities and allow them to be mapped to view models in the UI layer.
What I don't like:
The Core project is now referenced in every other project - because I want/have to access the Domain models. In classic Onion architecture, the core is referenced only by the next layer.
As others have said this is quite normal. Usually everything ends up having a dependency on the Domain.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is. I actually want to use the .EDMX for the visual model design, but I feel like the DbContext belongs to the Persistence layer, somewhere within the database-specific repository implementation.
I think this is a consequence of Entity Framework. If you used it in "Code First" mode you actually can - and usually do - have the context and repository in the Persistence layer, with the Domain (represented as POCO classes) in what you've called Core.
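A minimal Code First sketch of that split, assuming EF 6 and made-up type names:

using System.Data.Entity;   // EF 6 "Code First"

// Core: plain POCO, no reference to Entity Framework.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Persistence: the context and repository implementation live here.
public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public class CustomerRepository   // would implement an ICustomerRepository defined in Core
{
    private readonly AppDbContext _context;

    public CustomerRepository(AppDbContext context) => _context = context;

    public Customer GetById(int id) => _context.Customers.Find(id);
}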
As a final question - what is a good architecture which is not over-engineered (such as a full-blown Onion, where we have injections, service locators, etc) but at the same time provides some reasonable flexibility, in places where you would realistically need it?
As I touched on above I wouldn't worry about the need to swap things out except to allow for automated unit tests. Unless there is a specific requirement you know about that will make this very likely.
Good luck!

What specific issue does the repository pattern solve?

(Note: My question has very similar concerns as the person who asked this question three months ago, but it was never answered.)
I recently started working with MVC3 + Entity Framework and I keep reading that the best practice is to use the repository pattern to centralize access to the DAL. This is also accompanied with explanations that you want to keep the DAL separate from the domain and especially the view layer. But in the examples I've seen the repository is (or appears to be) simply returning DAL entities, i.e. in my case the repository would return EF entities.
So my question is, what good is the repository if it only returns DAL entities? Doesn't this add a layer of complexity that doesn't eliminate the problem of passing DAL entities around between layers? If the repository pattern creates a "single point of entry into the DAL", how is that different from the context object? If the repository provides a mechanism to retrieve and persist DAL objects, how is that different from the context object?
Also, I read in at least one place that the Unit of Work pattern centralizes repository access in order to manage the data context object(s), but I don't grok why this is important either.
I'm 98.8% sure I'm missing something here, but from my readings I didn't see it. Of course I may just not be reading the right sources... :\
I think the term "repository" is commonly thought of in the way the "repository pattern" is described by the book Patterns of Enterprise Application Architecture by Martin Fowler.
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.
On the surface, Entity Framework accomplishes all of this, and can be used as a simple form of a repository. However, there can be more to a repository than simply a data layer abstraction.
According to the book Domain Driven Design by Eric Evans, a repository has these advantages:
They present clients with a simple model for obtaining persistent objects and managing their life cycle
They decouple application and domain design from persistence technology, multiple database strategies, or even multiple data sources
They communicate design decisions about object access
They allow easy substitution of a dummy implementation, for unit testing (typically using an in-memory collection).
The first point roughly equates to the paragraph above, and it's easy to see that Entity Framework itself easily accomplishes it.
Some would argue that EF accomplishes the second point as well. But commonly EF is used simply to turn each database table into an EF entity, and pass it through to UI. It may be abstracting the mechanism of data access, but it's hardly abstracting away the relational data structure behind the scenes.
In simpler applications that are mostly data-oriented, this might not seem an important point. But as the application's domain rules / business logic become more complex, you may want to be more object-oriented. It's not uncommon that the relational structure of the data contains idiosyncrasies that aren't important to the business domain, but are side-effects of the data storage. In such cases, it's not enough to abstract the persistence mechanism; you also need to abstract the nature of the data structure itself. EF alone generally won't help you do that, but a repository layer will.
As for the third advantage, EF will do nothing (from a DDD perspective) to help. Typically DDD uses the repository not just to abstract the mechanism of data persistence, but also to provide constraints around how certain data can be accessed:
We also need no query access for persistent objects that are more convenient to find by traversal. For example, the address of a person could be requested from the Person object. And most important, any object internal to an AGGREGATE is prohibited from access except by traversal from the root.
In other words, you would not have an 'AddressRepository' just because you have an Address table in your database. If your design chooses to manage how the Address objects are accessed in this way, the PersonRepository is where you would define and enforce the design choice.
Also, a DDD repository would typically be where certain business concepts relating to sets of domain data are encapsulated. An OrderRepository may have a method called OutstandingOrdersForAccount which returns a specific subset of Orders. Or a Customer repository may contain a PreferredCustomerByPostalCode method.
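A small sketch of what such repository contracts might look like (the members are illustrative, not from either book):

using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
    public int AccountId { get; set; }
    public bool Fulfilled { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int orderId);
    void Add(Order order);

    // A business concept over a set of domain data, defined and enforced in one place.
    IReadOnlyList<Order> OutstandingOrdersForAccount(int accountId);
}

public class Customer
{
    public int Id { get; set; }
    public string PostalCode { get; set; }
}

public interface ICustomerRepository
{
    IReadOnlyList<Customer> PreferredCustomerByPostalCode(string postalCode);
}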
Entity Framework's DataContext classes don't lend themselves well to such functionality without the added repository abstraction layer. They do work well for what DDD calls Specifications, which can be simple boolean expressions sent in to a simple method that will evaluate the data against the expression and return a match.
As for the fourth advantage, while I'm sure there are certain strategies that might let one substitute for the datacontext, wrapping it in a repository makes it dead simple.
Regarding 'Unit of Work', here's what the DDD book has to say:
Leave transaction control to the client. Although the REPOSITORY will insert into and delete from the database, it will ordinarily not commit anything. It is tempting to commit after saving, for example, but the client presumably has the context to correctly initiate and commit units of work. Transaction management will be simpler if the REPOSITORY keeps its hands off.
Entity Framework's DbContext basically resembles a Repository (and a Unit of Work as well). You don't necessarily have to abstract it away in simple scenarios.
The main advantage of the repository is that your domain can be ignorant and independent of the persistence mechanism. In a layer based architecture, the dependencies point from the UI layer down through the domain (or usually called business logic layer) to the data access layer. This means the UI depends on the BLL, which itself depends on the DAL.
In a more modern architecture (as propagated by domain-driven design and other object-oriented approaches) the domain should have no outward-pointing dependencies. This means the UI, the persistence mechanism and everything else should depend on the domain, and not the other way around.
A repository will then be represented through its interface inside the domain but have its concrete implementation outside the domain, in the persistence module. This way the domain depends only on the abstract interface, not the concrete implementation.
That basically is object-orientation versus procedural programming on an architectural level.
See also the Ports and Adapters a.k.a. Hexagonal Architecture.
Another advantage of the repository is that you can create similar access mechanisms to various data sources. Not only to databases but to cloud-based stores, external APIs, third-party applications, etc.
You're right: in those simple cases the repository is just another name for a DAO, and it brings only one benefit: the fact that you can switch EF for another data access technique. Today you're using MSSQL; tomorrow you'll want cloud storage, or a micro-ORM instead of EF, or to switch from MSSQL to MySQL.
In all those cases it's good that you use a repository, as the rest of the app won't care about what storage you're using now.
There's also the limited case where you get information from multiple sources (DB + file system); a repo will act as the facade, but it's still just another name for a DAO.
A 'real' repository is valid only when you're dealing with domain/business objects; for data-centric apps which won't change storage, the ORM alone is enough.
It would be useful in situations where you have multiple data sources, and want to access them using a consistent coding strategy.
For example, you may have multiple EF data models, and some data accessed using traditional ADO.NET with stored procs, and some data accessed using a 3rd party API, and some accessed from an Access database living on a Windows NT4 server sitting under a blanket of dust in your broom closet.
You may not want your business or front-end layers to care about where the data is coming from, so you build a generic repository pattern to access "data", rather than to access "Entity Framework data".
In this scenario, your actual repository implementations will be different from each other, but the code that calls them wouldn't know the difference.
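For example, a sketch of that setup might look like this (all names are made up); the calling code only ever sees IProductRepository:

using System.Collections.Generic;

// One contract, several backing stores; callers never know the difference.
public interface IProductRepository
{
    Product GetById(int id);
    IEnumerable<Product> GetAll();
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class EfProductRepository : IProductRepository
{
    // Backed by an Entity Framework model.
    public Product GetById(int id) { throw new System.NotImplementedException(); }
    public IEnumerable<Product> GetAll() { throw new System.NotImplementedException(); }
}

public class LegacyApiProductRepository : IProductRepository
{
    // Backed by a 3rd-party API, or that dusty Access database in the broom closet.
    public Product GetById(int id) { throw new System.NotImplementedException(); }
    public IEnumerable<Product> GetAll() { throw new System.NotImplementedException(); }
}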
Given your scenario, I would simply opt for a set of interfaces that represent what data structures (your Domain Models) need to be returned from your data layer. Your implementation can then be a mixture of EF, Raw ADO.Net or any other type of Data Store/Provider. The key strategy here is that the implementation is abstracted away from the immediate consumer - your Domain layer. This is useful when you want to unit test your domain objects and, in less common situations - change your data provider / database platform altogether.
You should, if you haven't already, consider using an IoC container, as they make loose coupling of your solution very easy by way of Dependency Injection. There are many available; personally I prefer Ninject.
The domain layer should encapsulate all of your business logic - the rules and requirements of the problem domain, and can be consumed directly by your MVC3 web application. In certain situations it makes sense to introduce a services layer that sits above the domain layer, but this is not always necessary, and can be overkill for straightforward web applications.
Another thing to consider is that even when you know you will be working with a single data store, it still might make sense to create a repository abstraction. The reason is that there might be a function your application needs that your ORM du jour either does badly (performance), does not do at all, or that you just don't know how to make the ORM bend to your needs.
If you are wrapping your ORM behind a well thought out repository interface, you can easily switch between different technologies as you see fit. It's not uncommon in my repositories to see some methods use EF for their work and others to use something like PetaPoco, or (gasp) ADO.net code. The repository abstraction enables you to use exactly the right tool for the job at hand without leaking these complexities into the client code.
I think there is a big misunderstanding of what many articles call "repository." And that's why there are doubts about what real value those abstractions bring.
In my opinion the repository in its pure form is IEnumerable, while you and many articles are talking about a "data access service."
I've blogged about it here.

Considering the following architectural changes and need some advice (Domain Entities, DTO, Aggregates)

About a year ago I set up a solution consisting of an ASP.NET MVC 3 (now) presentation layer, an application layer, a domain layer and an infrastructure layer (cross-cutting stuff and data). I decided to keep the domain model in a separate project from the domain logic and to use a relaxed approach to the presentation layer by passing the domain entities instead of DTOs, since we really only have one front end right now.
We are going to be servicing a distributed layer soon, in addition to our main website and I will use DTO's there, but I am considering using DTO's in the main website also. I am also wondering if I should bother to break out the framework code in the domain layer (IRepository, IUnitOfWork, Entity/Value object supertypes etc). Well here, let me list out the questions I need feedback on:
1) I was pretty diligent about not having an anemic domain model and also watched out for behavior that was specific to the presentation concerns. Most of the business calculations that are needed are on the domain entities, is it ok for the presentation layer to call this behavior directly or should it instead call an application service that then calls the domain entities? This would suggest to me that there is no reason to have the presentation layer know about the domain entities and instead could use DTO's. Alternatively, I could have the DTO's expose these behaviors, but then I feel like I am robbing the domain entities. So I guess that is 3 options (Rich domain objects called directly, service layer or dto with behavior) which is best?
2) Right now I have a domain project, which has domain services, specifications and logic and is orchestrated by the application layer and separate project for the domain model (used by presentation layer and application layer). I also have framework interfaces for generic repository and unit of work pattern here. Should I break the framework stuff out into a separate project and combine the rest into one project?
3) I want to reorganize my domain layer into aggregates, right now all of the domain model is organized by modules, basically all the types for each module are in one namespace. Would it be better to organize the entities, value objects, services and other stuff by the aggregates?
4) Should I use the Separated Interface pattern for infrastructure services that are basically .net framework helper library types? For example configuration objects or validation runners? What is the benefit there in doing so?
5) Lastly, not many examples I have seen have used interfaces for domain entities. For almost every object I have, I prefer to pass around interfaces for dependency reasons, and it makes testing much easier. Is it valid to use interfaces instead of concretes? I should mention that we use EF 4.3.1 (soon to upgrade to the latest version) and I seem to remember that EF had a problem with using interfaces or something. Should I be exposing interfaces instead of the domain entities?
Thank you very much in advance.
Project Structure:
Presentation.Web -> Application, Domain.Model
Application -> Domain, Domain.Model
(Infrastructure.Data, Infrastructure.Core, Infrastructure.Security)
Explanation:
Presentation.Web (MVC3 Web Project)
Application
-- Service layer that orchestrates the domain layer and responds to requests from the presentation layer (get this, update that). This is organized by module; for example, if I had a customer module I would have Application.Customer, and in that would be all of the application services.
Domain
-- Contains domain services, specifications, calculations and other domain logic that is not exposed as behavior on domain entities. For example a calculation that involves several domain entities exposed as a domain service for the application layer to call.
-- Also contains framework code for a specification framework and the main interfaces for a generic repository and unit of work pattern.
Domain.Model
-- Contains the domain entities and enumerations. Organized by module. For example, I might have a customer module which has a customer entity, a customer order entity, etc. This is broken out from the domain project so that the objects can be used by the application and presentation layers.
Infrastructure.Security
-- Security infrastructure for authentication and authorization
Infrastructure.Core
-- Cross-cutting stuff used by multiple layers (validators, logging, configuration, extensions, IoC, email etc..). Most of the projects depend on interfaces in this project (except domain.model) for infrastructure services.
Infrastructure.Data
-- Repository Implementations via LINQ and EF 4.3.1, mapping layer, Unit of Work implementation. Interfaces are in Domain project (separated interfaces pattern)
1) First, determine whether your main website really needs to use the application layer. IMHO, if your application services and your main website are on the same web server, then you should evaluate whether the potential performance loss is worth having your main website call app server methods when it could call the domain objects directly. However, if your application server is definitely on another server, then yes, you should have the application server call your domain objects and pass only DTOs back and forth between it and any presentation layers you may have, including your main website.
2) This is really a question on preference of organization. Both are valid. You choose.
3) Another question on preference of organization. I, personally, organize my code by bounded context first. Then, I have entities and aggregate roots directly under them. Then, I have folders for Enumerations, Repositories (interfaces), Services (interfaces), Specifications, and Values. The namespaces do not reflect this organizational structure past the last bounded context folder. But, again, you should do this in the way that best suits the way you look at the code.
4) This is an implementation concern. I, personally, only break out implementation concerns into interfaces if I think there is a good possibility that I will need to swap out the implementations in the future. That being said, I usually organize my helper libraries into specific infrastructure contexts (eg. MainContext.Web.MVC.Helpers or MainContext.Web.WebForms.Helpers.) These rarely change and I have yet to come across an instance where I needed to swap out implementations entirely.
5) From my understanding, it is perfectly valid to use interfaces instead of concretes for your domain entities. That being said, I have yet to run into a case where I needed different implementations for my domain entities. The only reason I can even think of would be if you needed to change your business logic for one application but leave an older application using the original business logic. If your business objects are good models for the domain, I can't fathom you actually running into this problem, but I have seen examples where people do this just for the sake of the abstraction. IMHO, that is not worth the extra coding effort, but if it makes you feel good inside or you get some actual benefit (e.g. making testing easier), there isn't any reason why you can't abstract out your domain entities. That being said, domain services and repositories should definitely have contracts that allow you to swap out their implementations.
Answer 5 is derived from the idea that the application is the one who chooses the implementations. If you are trying to achieve onion architecture, then your application is going to be choosing the concrete implementations for everything (repositories, domain services, and other abstracted implementation concerns). I see no reason why it can't just use domain aggregates directly since they are the concrete representation of your domain model. (Note: All entities should be encapsulated into aggregates. The application should never be able to hold a reference to an entity that is not an aggregate under the context)

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose-coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions to life-cycle management, but I don't necessarily understand how this is any better/faster than do-it-yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of doing lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
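A tiny sketch of "supply, don't instantiate", with no container involved at all (names are made up):

public interface IMessageSender
{
    void Send(string message);
}

public class SmtpSender : IMessageSender
{
    public void Send(string message) { /* send an e-mail */ }
}

// Without inversion of control: the class news up its own dependency and is welded to it.
public class TightNotifier
{
    private readonly SmtpSender _sender = new SmtpSender();
    public void Notify(string text) => _sender.Send(text);
}

// With inversion of control: the dependency is supplied; any IMessageSender will do.
public class Notifier
{
    private readonly IMessageSender _sender;
    public Notifier(IMessageSender sender) => _sender = sender;
    public void Notify(string text) => _sender.Send(text);
}

// "Poor man's DI": wiring done by hand at the entry point, no container framework needed.
// var notifier = new Notifier(new SmtpSender());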
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
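Staying with the IFooRepository example, a minimal sketch of swapping implementations might look like this (the members are assumptions):

using System.Collections.Generic;

public class Foo
{
    public int Id { get; set; }
}

public interface IFooRepository
{
    Foo Get(int id);
}

public class DatabaseFooRepository : IFooRepository
{
    public Foo Get(int id) { /* read from the database */ throw new System.NotImplementedException(); }
}

public class InMemoryFooRepository : IFooRepository
{
    private readonly Dictionary<int, Foo> _store = new Dictionary<int, Foo>();
    public void Add(Foo foo) => _store[foo.Id] = foo;
    public Foo Get(int id) => _store[id];
}

// In production the container maps IFooRepository -> DatabaseFooRepository;
// in a test you hand the consumer an InMemoryFooRepository instead.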
For example, I once used a code generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was to configure it to spit it all out in specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. For that generated class, it was ugly. But the rest of my application didn't care because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating the dependent object is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and makes decoupling very easy. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. Now you can argue that this can be done in code using various interface design patterns, but an IoC framework makes it a walk in the park. So even as a newbie you become expert at leveraging various interface design patterns like bridge, factory, etc.
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate, repetitive code. So you have cleaner, smaller and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies in your test cases and see how different components interact.
Property configurators - Almost all applications have some properties file where they store application-specific static properties. Normally, to access those properties, developers need to write wrappers which read and parse the properties file and store the properties in a format the application can access. All IoC frameworks provide a way of injecting static properties/values into specific classes, so this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.

IoC and events

I am having a really hard time reconciling IoC, interfaces, and events. Let's see if I can explain this without writing a book.
I'm just getting started with IoC and I'm playing with Spring. We have a simple data layer that was built long before EF or the others. One of the classes is a DBProcedure which has some methods and events.
I created an IDBProcedure interface that the 'real' DBProcedure class implements. In TDD fashion I'd like to be able to swap out the 'real' DBProcedure class for another that implements the same interface for testing. To me, this means that the IDBProcedure interface should be defined in a different namespace/project than my data layer, right?
But a DBProcedure can raise some events and those events deliver custom EventArgs-derived classes. Does that mean that the EventArgs-classes need to be defined outside the data layer too? Seems like it to make the interface work, but that seems bad because it spreads data-layerness around?
On the other hand maybe I have the wrong idea - is it ok to include the data layer namespace when I'm testing to get interface and event definitions even though I'm not using any of the 'real' classes?
Yes, you need to move the interfaces and all the types it depends on somewhere, because you do not want the interfaces module to depend on the implementations.
The typical choice for this is one of two alternatives
Impl ----> Api <---- client
(Implementation depends on api, client depends on api, everything in api module)
Impl ----> Api <----- client
  \         |         /
   \        v        /
    -----> Model <-----
Here everyone depends on a common "model" module, and this contains the enums and such. The advantage of this version is that you can have multiple API modules share the same common enums and other artifacts. (Because you really don't want API's to depend on other API modules usually)
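Applied to your DBProcedure case, a sketch of such an api/contract module could look like this (the namespace and members are made up; only IDBProcedure and the EventArgs idea come from your question):

using System;

// "Api"/contract module: the interface AND the event payload types it exposes
// live together here, so neither clients nor tests need the data layer itself.
namespace DataContracts
{
    public class ProcedureCompletedEventArgs : EventArgs
    {
        public int RowsAffected { get; set; }
    }

    public interface IDBProcedure
    {
        event EventHandler<ProcedureCompletedEventArgs> Completed;
        void Execute();
    }
}

// The 'real' DBProcedure in the data layer implements DataContracts.IDBProcedure;
// a test double implements the same interface without touching the database.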