How to update grandchildren in an aggregate root - entity-framework

I use EF Code First and lazy loading.
My problem relates to how to efficiently update an entity within a grandchild collection. First of all, I fear this makes a lot of calls to the database that are not really needed. But if my domain class is not to care about persistence, I can't see another way to do this.
Here are the classes:
public class Supplier
{
public int Id {get;set;}
//...Supplier properties
public virtual ICollection<Contract> Contracts {get;set;}
//supplier methods
}
public class Contract
{
public int Id {get;set;}
public int SupplierId{get;set;}
//---Contract properties
[ForeignKey("SupplierId")]
public virtual Supplier Supplier {get;set;}
public virtual ICollection<DeliveryContract> DeliveryContracts {get;set;}
}
public class DeliveryContract
{
public int Id {get;set;}
public bool DeliveryOnMonday{get;set;}
public bool DeliveryOnTuesday{get;set;}
//...30 different Delivery terms properties
public Department Department {get;set;}
public int ContractId {get;set;}
[ForeignKey("ContractId")
public virtual Contract Contract {get;set;}
}
The Supplier is the aggregate root. So I have a method on the Supplier called ChangeDeliveryContract, which corresponds to what would happen in the real world.
public class Supplier
{
//properties
public void ChangeDeliveryContract(DeliveryContract changedDc)
{
//So from the supplier I have to find the delivery contract to change
var dcToUpdate = Contracts
.SingleOrDefault(c => c.Id == changedDc.ContractId)
.DeliveryContracts
.SingleOrDefault(dc => dc.Id == changedDc.Id);
//So... what do I do now? Map all 30 properties from changedDc to dcToUpdate?
//Some business rules are also applied here, i.e. there can only be one
//DeliveryContract between a Supplier and a Department
}
}
I use MVC so the program would look something like:
public ActionResult Update (DeliveryContract changedDc, int supplierId)
{
var supplier = supplierRepository.GetById(supplierId);
supplier.ChangeDeliveryContract(changedDc);
supplierRepository.Save();
//More code...
}
First of all, the problem lies in ChangeDeliveryContract: I've not been able to get it to work. Also, I feel that querying through collections like I do might be inefficient. Third, mapping 30+ properties also feels a bit wrong.
How do you guys do it, and is there a best practice here?

In applying DDD, the selection of aggregate roots can vary depending on various characteristics of the model, including considerations such as the number of children. In this case, while Supplier is an AR, that does not mean DeliveryContract can't also be an AR. It may seem like Supplier is the sole AR and that all operations regarding suppliers should stem from the Supplier class, but this can become unruly with respect to database calls, as you've come to realize.
One role of an AR is the protection of invariants, and there is nothing on the Supplier class that is used to protect invariants, which is a possible indication that Supplier is not the most appropriate AR to implement the required business rules. Therefore, it seems to me that in this case you can make DeliveryContract an AR, with its own repository and a method for applying changes. Or you can make Contract an AR, depending on whether a contract must enforce any invariants regarding delivery contracts, and also on a practical consideration of the number of expected delivery contracts per contract: if the number is very high, it would be impractical to have a collection of delivery contracts on the Contract class.
Overall, I would opt for smaller ARs, though invariants and consistency rules must be considered.
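To make the smaller-AR option concrete, here is a minimal sketch of DeliveryContract treated as its own AR with its own repository. The repository interface, the service class, and all member names are illustrative assumptions, not an established API:
// Sketch only: IDeliveryContractRepository and DeliveryContractService are illustrative names
public interface IDeliveryContractRepository
{
    DeliveryContract GetById(int id);
    void Save();
}

public class DeliveryContractService
{
    private readonly IDeliveryContractRepository _repository;

    public DeliveryContractService(IDeliveryContractRepository repository)
    {
        _repository = repository;
    }

    public void Change(DeliveryContract changedDc)
    {
        // Load only the delivery contract, not the whole Supplier graph
        var dcToUpdate = _repository.GetById(changedDc.Id);
        // Apply the changes and enforce invariants local to the delivery contract here,
        // e.g. only one DeliveryContract between a Supplier and a Department
        _repository.Save();
    }
}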
Take a look at a great series of articles by Vaughn Vernon for an in-depth treatment of this topic: Effective Aggregate Design.

OK, this is a bit confusing, and I blame it on the mixing of the Domain and Persistence models (yeah, those EF tutorials have done a GREAT job of confusing everyone). One should not influence the other; that's why you have the repository pattern. And yes, the domain should not care about persistence.
Now that the Supplier doesn't know about EF anymore, let's see... If I understand correctly, you pretty much need to redesign the Supplier (and probably the child aggregates as well), because you need to take the business rules into account.
It's pretty hard for me to reverse engineer the requirements from that code, but I have a feeling that a supplier has delivery contracts with different departments. When you change a delivery contract, the supplier should enforce the business rules valid in that context (that's important if there are multiple contexts valid for the same entity).
I think the delivery contract needs a bit more clarification, though, because I can't believe it's only a dumb object that holds 30 properties. Perhaps some business rules are tied to some of those properties? So, we need more details.
As an aside, if you really need to map 30 properties because that's the way it is, you can use AutoMapper for that.

About the property mapping in ChangeDeliveryContract: you feel like mapping 30 properties is a bit wrong. In itself there's nothing wrong with mapping 30 properties; if it has to be done, it has to be done. You could use AutoMapper to ease the task.
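For example, a minimal sketch using AutoMapper's MapperConfiguration API, assuming the DeliveryContract class from the question (the self-map and ignoring the key are my assumptions):
using AutoMapper;

// One-time configuration, e.g. at application startup
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<DeliveryContract, DeliveryContract>()
       .ForMember(dc => dc.Id, opt => opt.Ignore())); // keep the tracked entity's key
var mapper = config.CreateMapper();

// Inside ChangeDeliveryContract: copy the 30-odd properties in one call
mapper.Map(changedDc, dcToUpdate);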
I think the 'feel' of the code can be changed if you add methods like Supplier.MakeDeliveryOnMonday() and Supplier.DontMakeDeliveryOnTuesday(). You can probably guess what these methods do (check business rules and set a boolean to true or false). That way you don't have to use 'big' methods like ChangeDeliveryContract.
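A minimal sketch of what such intention-revealing methods could look like on the question's Supplier class; the private finder method is an illustrative helper:
public class Supplier
{
    public virtual ICollection<Contract> Contracts { get; set; }

    public void MakeDeliveryOnMonday(int contractId, int deliveryContractId)
    {
        var dc = FindDeliveryContract(contractId, deliveryContractId);
        // business rule checks go here
        dc.DeliveryOnMonday = true;
    }

    public void DontMakeDeliveryOnTuesday(int contractId, int deliveryContractId)
    {
        var dc = FindDeliveryContract(contractId, deliveryContractId);
        dc.DeliveryOnTuesday = false;
    }

    private DeliveryContract FindDeliveryContract(int contractId, int deliveryContractId)
    {
        return Contracts
            .Single(c => c.Id == contractId)
            .DeliveryContracts
            .Single(dc => dc.Id == deliveryContractId);
    }
}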

Related

How do I handle persistence and unit of work in DDD using Entity Framework?

I'm a little overwhelmed with all of the information on DDD, unit of work, domain services, app services, etc. I'm trying to figure out how a persistence-ignorant domain model ultimately gets persisted, specifically in the context of unit-of-work and Entity Framework. Let's say I have an Order aggregate root, which I am attempting to keep in my persistence-ignorant domain model (the core of my architectural onion):
public class Order : EntityBase
{
public int Id { get; private set; }
public int MarketplaceId { get; private set; }
public int CustomerId {get; set;}
public List<OrderItem> Items { get; private set; }
public List<OrderComment> Comments { get; private set; }
public void AddItem(OrderItem item) { /**add item**/ }
public void AddComment(OrderComment comment) { /**add comment**/ }
public override bool Validate() { /**validate**/ }
public void Cancel() { /**cancel**/ }
}
Let's say I have a process that updates a property on the Order entity, for example it changes the CustomerId associated with the order.
I have an IOrderRepository in my domain layer, which would have an implementation (in an outer layer) with a function like this:
Order GetOrder(int orderId)
{
//get entity framework order, items, etc.
//map to domain-layer order and return domain-layer order
}
void UpdateOrder(Order order)
{
//get ENTITY FRAMEWORK order, order items, order comments, etc.
//take DOMAIN order (passed in to this function), and update EF items fetched above
//use a single EF unit of work to commit these changes
}
There's something wrong with my approach. The UpdateOrder function seems heavy for a small change; but it also seems I have to do that if my repository isn't aware of which items on the persistence-ignorant domain model have changed. Should I be handling every type of update in a separate repository function? UpdateMarketplace(int marketplaceId), UpdateCustomer(int customerId)?
As I'm typing this, I'm also wondering...maybe the way I have it above is not too heavy? If I change one property, even though I'm doing all of the above, perhaps Entity Framework will recognize that the values being assigned are the same and will only send the one db column update to SQL?
How can I take my Order domain model (fetching is straightforward enough), perform some operation or operations on it that may be limited in scope, and then persist the model using Entity Framework?
You need to look into the Unit of Work pattern. Your UoW keeps track of the changes, so when you get your order from your repository and modify it, you call UnitOfWork.SaveChanges() which should persist all the changes.
Using Entity Framework, your DbContext is basically the Unit of Work, but I would create a simpler interface around it so you can abstract it away for easier use in your higher layers.
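As a minimal sketch of such an interface (IUnitOfWork and EfUnitOfWork are illustrative names, not from any particular framework):
public interface IUnitOfWork
{
    void SaveChanges();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public EfUnitOfWork(DbContext context)
    {
        _context = context;
    }

    // The DbContext already tracks changes to the entities it has loaded,
    // so committing the unit of work is a single call
    public void SaveChanges()
    {
        _context.SaveChanges();
    }
}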
Regarding EF, I would recommend mapping your domain entities directly using the code-first approach. I would also turn off lazy loading and all the magic stuff, so you have full control and fewer "surprises".
Unfortunately I'm not allowed to share our code, but we have all this working pretty effectively with the new EF6 Alpha 3. I would recommend taking a look at Microsoft Spain's nlayerapp for some implementation examples. I don't agree with many of their design decisions (also, see this review), but I think you can draw some inspiration from the Entity Framework parts. Take a look at their Unit of Work implementation, especially how they have abstracted it away for easier usage in the higher layers, and how they use it in their application services.
I would also recommend looking into creating a generic repository to avoid duplicating lots of logic in your aggregate-specific repositories. MS Spain has one here, but you should also take a look at this thread.
Please have a look at this SO question where I gave an example of how I've implemented UoW & Repositories.
As @Tommy Jakobsen told you, your domain entities should be your EF entities; it avoids adding a useless mapping layer.
Hope that helps!
You may check ASP.NET Boilerplate's Unit Of Work implementation: http://www.aspnetboilerplate.com/Pages/Documents/Unit-Of-Work
It's an open source project, so you can check the code. You can also use it directly.

ViewModel Redundancy Clarification

I was recently reading about ViewModels and their advantages. I am able to understand why they are needed; however, the question I have is: if I have two classes for the same object (i.e. a Person class), doesn't that make the code redundant? Also, doesn't it make future changes a little more difficult, since you need to make sure the base model class and the view model class have the same properties? For instance, let's say I have a table called Person which has:
ID
Name
Color
I am creating an hbm file for the NHibernate mapping. I have the following model class:
public class Person {
public int ID {get;set;}
public string Name {get;set;}
public string color {get;set;}
}
If I am correct, the view model class should look like:
public class PersonViewModel {
[DisplayName("Full Name")]
public string Name {get;set;}
[DisplayName("Favourite Color")]
public string color {get;set;}
}
First, I have two classes referring to the same object in the database. Even though one class is used for DB purposes and the other is used for view purposes, we still have two classes with exactly the same metadata. Secondly, if I introduce a new field in the database, I would need to add it in three places: the base model class, the view model class, and the HBM file.
Please correct me if I am wrong: how can this be termed code optimization or a best practice?
It depends on the approach you wish to take. You could expose the model directly as a property of your view model to avoid violating the DRY principle; however, this would violate the Law of Demeter, so you would have to balance the two, as your views would now be more tightly coupled with your domain model.
Also, in terms of validation, if you expose the model directly then you need to be careful that any property exposed could be set by an end user, even if you don't use the property directly in your view. You are also more likely to have different validation requirements per view, in which case validation would be the concern of the view model.
That's why the general best practice is not to expose your domain models directly to the view. You can use frameworks such as AutoMapper to reduce the data transfer plumbing code between the layers.
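For instance, a minimal sketch mapping the Person model from the question to its PersonViewModel with AutoMapper (configuration shown inline for brevity):
using AutoMapper;

var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonViewModel>());
var mapper = config.CreateMapper();

// Name and color are copied by convention; adding a property to both classes
// requires no extra mapping code
PersonViewModel vm = mapper.Map<PersonViewModel>(person);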

How to expose only the properties that are needed with Web API?

I'm new to ASP.NET Web API.
I saw examples of how you can get and return POCOs in a RESTful web application.
I wonder how, in a real-world application, you can pass only some of the properties of your POCO (for security and/or message-size reasons).
I found that I can use the [ScriptIgnore] attribute, but I'm looking for a way to customize which properties to pass according to the requesting controller, for example.
Is there a nice, out-of-the-box way to do so?
Thanks
Probably the easiest approach is to decorate your POCO with System.Runtime.Serialization.DataContractAttribute and the members you want to include with System.Runtime.Serialization.DataMemberAttribute, i.e.:
[DataContract]
public class MyType
{
[DataMember]
public string Property1 {get; set;}
public string Property2 {get; set;}
public string Property3 {get; set;}
}
In this case only Property1 will be serialized. It's worth noting that both XmlMediaTypeFormatter and JsonMediaTypeFormatter respect DataContract, so you don't need any XML/JSON-specific attributes.
Now, this will work in simpler solutions; for a real, well-rounded approach you'd probably need to resort to DTOs instead of exposing your models to the client.
You could use AutoMapper for that and project your models to DTOs; there is a good introductory article here: http://www.mono-software.com/blog/post/Mono/120/Using-AutoMapper-to-handle-DTOs/. Also, with AutoMapper you can have different types of DTOs created from the same base model, which, I understand, is something you are interested in.
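A minimal sketch of that projection approach; MyTypeDto is a hypothetical DTO exposing only Property1, and dbContext.MyTypes stands in for your queryable source:
using AutoMapper;
using AutoMapper.QueryableExtensions;

// Hypothetical DTO exposing only what the client should see
public class MyTypeDto
{
    public string Property1 { get; set; }
}

// Configuration, e.g. at startup
var config = new MapperConfiguration(cfg => cfg.CreateMap<MyType, MyTypeDto>());

// In the controller: project straight off the queryable source,
// so only the DTO's members are selected and serialized
var dtos = dbContext.MyTypes.ProjectTo<MyTypeDto>(config);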
If you are trying to return multiple manifestations of the same model from different controllers, that is a harmful design (in my opinion). If you still want to do it, you can set the unwanted properties to null and return the model.
To make the serializer ignore null properties, configure your formatters somewhere at startup (e.g. right in global.asax):
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.NullValueHandling = NullValueHandling.Ignore;

EF 4.2 Code First and DDD Design Concerns

I have several concerns when trying to do DDD development with EF 4.2 (or EF 4.1) code first. I've done some extensive research but haven't come up with concrete answers for my specific concerns. Here are my concerns:
The domain cannot know about the persistence layer; in other words, the domain is completely separate from EF. However, to persist data to the database, each entity must be attached to or added to the EF context. I know you are supposed to use factories to create instances of the aggregate roots, so the factory could potentially register the created entity with the EF context. This appears to violate DDD rules, since the factory is part of the domain and not part of the persistence layer. How should I go about creating and registering entities so that they persist correctly to the database when needed?
Should an aggregate entity be the one to create its child entities? What I mean is, if I have an Organization and that Organization has a collection of Employee entities, should Organization have a method such as CreateEmployee or AddEmployee? If not, where does creating an Employee entity come in, keeping in mind that the Organization aggregate root 'owns' every Employee entity?
When working with EF Code First, the IDs (in the form of identity columns in the database) of each entity are handled automatically and should generally never be changed by user code. Since DDD states that the domain should be persistence-ignorant, exposing the IDs in the domain seems odd, because it implies that the domain should handle assigning unique IDs to newly created entities. Should I be concerned about exposing the ID properties of entities?
I realize these are kind of open ended design questions, but I am trying to do my best to stick to DDD design patterns while using EF as my persistence layer.
Thanks in advance!
On 1: I'm not all that familiar with EF, but using the code-first/convention-based mapping approach, I'd assume it's not too hard to map POCOs with getters and setters (even keeping that "DbContext with DbSet properties" class in another project shouldn't be that hard). I would not consider the POCOs to be the Aggregate Root. Rather, they represent "the state inside an aggregate you want to persist". An example below:
// This is what gets persisted
public class TrainStationState {
public Guid Id { get; set; }
public string FullName { get; set; }
public double Latitude { get; set; }
public double Longitude { get; set; }
// ... more state here
}
// This is what you work with
public class TrainStation : IExpose<TrainStationState> {
TrainStationState _state;
public TrainStation(TrainStationState state) {
_state = state;
//You can also copy into member variables
//the state that's required to make this
//object work (think memento pattern).
//Alternatively you could have a parameter-less
//constructor and an explicit method
//to restore/install state.
}
TrainStationState IExpose<TrainStationState>.GetState() {
return _state;
//Again, nothing stopping you from
//assembling this "state object"
//manually.
}
public void IncludeInRoute(TrainRoute route) {
route.AddStation(_state.Id, _state.Latitude, _state.Longitude);
}
}
Now, with regard to the aggregate life-cycle, there are two main scenarios:
Creating a new aggregate: You could use a factory, factory method, builder, constructor, ... whatever fits your needs. When you need to persist the aggregate, query for its state and persist it (typically this code doesn't reside inside your domain and is pretty generic).
Retrieving an existing aggregate: You could use a repository, a DAO, ... whatever fits your needs. It's important to understand that what you are retrieving from persistent storage is a state POCO, which you need to inject into a pristine aggregate (or use it to populate its private members). This all happens behind the repository/DAO facade; don't muddle your call sites with this generic behavior (a sketch follows this list).
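As a sketch of the retrieval scenario, here is what such a repository facade might look like for the TrainStation example above (names and the EF usage are illustrative assumptions):
public class TrainStationRepository
{
    private readonly DbContext _context;

    public TrainStationRepository(DbContext context)
    {
        _context = context;
    }

    public TrainStation GetById(Guid id)
    {
        // What comes out of EF is the state POCO...
        var state = _context.Set<TrainStationState>().Find(id);
        // ...which is injected into a pristine aggregate
        return new TrainStation(state);
    }
}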
On 2: Several things come to mind. Here's a list:
Aggregate Roots are consistency boundaries. What consistency requirements do you see between an Organization and an Employee?
Organization COULD act as a factory of Employee, without mutating the state of Organization (see the sketch after this list).
"Ownership" is not what aggregates are about.
Aggregate Roots generally have methods that create entities within the aggregate. This makes sense because the roots are responsible for enforcing consistency within the aggregate.
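A minimal sketch of the factory idea from the list above; the method and parameter names are illustrative:
public class Organization
{
    public int Id { get; private set; }

    // Organization acts as a factory for Employee without mutating its own state;
    // consistency rules for hiring would be enforced here
    public Employee CreateEmployee(string name)
    {
        return new Employee(name, organizationId: Id);
    }
}

public class Employee
{
    public Employee(string name, int organizationId)
    {
        Name = name;
        OrganizationId = organizationId;
    }

    public string Name { get; private set; }
    public int OrganizationId { get; private set; }
}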
On 3: Assign identifiers from the outside, get over it, move on. That does not imply exposing them, though (only in the state POCO).
The main problem with EF/DDD compatibility seems to be how to persist private properties. The solution proposed by Yves seems to be a workaround for the lack of EF power in some cases. For example, you can't really do DDD with the Fluent API, which requires the state properties to be public.
I've found that only mapping with .edmx files allows you to leave domain entities pure. It doesn't force you to make things public or add any EF-dependent attributes.
Entities should always be created by some aggregate root. See a great post of Udi Dahan: http://www.udidahan.com/2009/06/29/dont-create-aggregate-roots/
Always loading some aggregate and creating entities from there also solves the problem of attaching an entity to the EF context. You don't need to attach anything manually in that case; it will get attached automatically, because the aggregate loaded from the repository is already attached and has a reference to the new entity. While the repository interface belongs to the domain, the repository implementation belongs to the infrastructure and is aware of EF, contexts, attaching, etc.
I tend to treat autogenerated IDs as an implementation detail of the persistent store that has to be considered by the domain entity but shouldn't be exposed. So I have a private ID property that is mapped to the autogenerated column, and another, public ID that is meaningful for the domain, like an Identity Card ID or a Passport Number for a Person class. If there is no such meaningful data, then I use the Guid type, which has the great feature of creating (almost) unique identifiers without the need for database calls.
So in this pattern I use those Guids/meaningful IDs to load aggregates from a repository, while the autogenerated IDs are used internally by the database to make joins a bit faster (Guid is not good for that).
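A minimal sketch of that ID pattern (property names are illustrative; note that mapping the private autogenerated ID is exactly the kind of thing that may require the .edmx workaround mentioned earlier):
public class Person
{
    // Autogenerated identity column: a persistence detail, never exposed to the domain
    private int DatabaseId { get; set; }

    // Domain-meaningful public identifier; a Guid can be generated
    // client-side without a database round trip
    public Guid PublicId { get; private set; }

    public Person()
    {
        PublicId = Guid.NewGuid();
    }
}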

EF entities as domain-models, when decoupling them from views with view-models?

I'm trying to understand the best architecture for my MVC2 site.
As I have been experimenting with getting data in and out of a database with Entity Framework, I am beginning to realize that the simple domain models I have constructed so far do not map to all the needs of my planned views. So I am considering following the accepted answer to this question: Why Two Classes, View Model and Domain Model?.
But there seems to be redundancy with little payoff that I can perceive between the domain models and the EF models, and I can hardly understand the conceptual difference. I do NOT have as a requirement the need to switch data sources down the road, and I do not foresee the need to switch my ORM solution either.
QUESTION:
If I follow this pattern then, since I am using Entity Framework, shouldn't I just use my EF entities to serve directly as the domain models? (note: I haven't thought through the "how" of that, but answers there are welcome too.) Or am I still advised to manage a separate set of domain-models?
It seems you've got some redundancy here. Reading your paragraph:
"But there seems to be redundancy with little payoff that I can perceive between the domain models and the EF models, and I can hardly understand the conceptual difference."
I would argue that there is no real difference between the EF Model and your Domain Model. In the projects I create, my EF Model is my Domain model.
However, my Domain model classes are not the same as my ViewModels. The Domain model class might contain data that is not interesting for the View, or maybe the view needs information that is calculated/evaluated based on information in the model. A simple example might be:
public class Session // Domain model (and EF model)
{
public int Id {get; set; }
public DateTime Start {get; set; }
public int DurationInMinutes {get; set; }
}
public class SessionViewModel // The viewmodel :p
{
public DateTime Start {get; set; }
public int DurationInMinutes {get; set;}
public DateTime End
{
get
{
return Start.Add(TimeSpan.FromMinutes(DurationInMinutes));
}
}
}
In this example I'm interested in displaying the actual End time in my View, but I have no interest in storing it in the database, as that might lead to data discrepancies (DurationInMinutes + Start might not equal End if data is corrupted upon saving).
When I first started coding this way, I ended up doing a lot of manual work mapping my Domain models to ViewModels and back. AutoMapper changed all that :) Google it, or NuGet it, and it will make your life a whole lot easier :)
Hope this helps a little. Please comment if I'm totally missing the point :)
Update to address the comment
DataAnnotations would then be applied to the ViewModel, because normally DataAnnotations denote how the data should be displayed and validated in the View.
For instance, you would put the [Required] attribute on public DateTime Start {get; set;} so that the Html helper extensions automatically validate your markup according to your data annotations.
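A minimal sketch of the annotated view model (the End property from the earlier example is omitted for brevity):
using System;
using System.ComponentModel.DataAnnotations;

public class SessionViewModel
{
    [Required] // picked up by model binding and the Html validation helpers
    public DateTime Start { get; set; }

    public int DurationInMinutes { get; set; }
}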
By definition (by some, anyway) the Domain Model should not contain any code or logic related to your business logic. The Domain Model is simply responsible for containing the data, pretty raw, according to your datastore. Personally I like to put some sort of Service layer in between that is responsible for fetching the data and returning ViewModels, and also for doing the reverse.
The ultimate goal is to avoid referencing your domain model directly from your controllers.
Of course, all these points have to be weighed against the size of the project. It's certainly overkill to do all this just to mock up a test site, but in any other project where you'll actually be deploying something that might scale, expand, or otherwise change, it's a good practice to get used to, as it seriously increases your ability to do so.
Another key point of this approach is that you are forced to abstract your operations into smaller and more manageable units, enabling better and more precise unit tests.