Managing an Entity and its Snapshot with an ORM - entity-framework

I would like to use one of the ideas that Jimmy Nilsson mentions in his book Applying Domain-Driven Design and Patterns: if I have an entity such as Product, I would like to take a snapshot of that entity for historical purposes, something like ProductSnapshot, but I wonder how I might implement this with an ORM (I am currently using Entity Framework). The main problem I am facing is that if another entity such as OrderLine receives the Product via its constructor, Entity Framework requires a public property of the type you wish to persist, which forces me to have something like this:
class OrderLine {
    public Product OriginalProduct { get; set; }
    public ProductSnapshot Snapshot { get; set; }
}
That seems awkward and unintuitive, and I don't know how to handle it properly when it comes to data binding (which property should I bind to?). Finally, I think that Product is an entity while ProductSnapshot is a value object, and the snapshot is only taken when the OrderLine is accepted; after that the Product is no longer needed.

When doing DDD, forget that the database exists. This means the ORM doesn't exist either. Now, because you don't have to care about persistence and ORM limits, you can model the ProductSnapshot according to the domain needs.
Create a ProductSnapshot class with all the required members. This class would probably be the result of something like SnapshotService.GetSnapshot(Product p). Once you have the ProductSnapshot, just hand it to a repository: SnapshotsRepository.Save(snapshot). Being a snapshot, it will probably be more of a data structure, a 'dumb' object. It should also be immutable, 'frozen'.
The repository will use EF to actually save the data. You decide what the EF entities and relations are. ProductSnapshot is considered a business object by the persistence layer (it doesn't matter if in reality it's just a simple DTO), and the EF entities may look very different (for example, I store business objects in serialized form in a key-value table) depending on your querying needs.
Once you define the EF entities, you need to map the ProductSnapshot to them. It's quite possible that ProductSnapshot itself can be used as an EF entity, in which case no mapping is needed.
The point is that taking a snapshot seems to be domain behavior. You deal with EF only after you have the snapshot, and you do exactly as you would with any other business object.
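As a rough illustration of the above (the snapshot members, the SnapshotService, and the ISnapshotsRepository interface are assumptions made up for this sketch, not a prescribed API):
using System;

public class ProductSnapshot
{
    // Immutable, 'frozen' data structure describing the product at a point in time.
    public ProductSnapshot(Guid productId, string name, decimal price, DateTime takenAtUtc)
    {
        ProductId = productId;
        Name = name;
        Price = price;
        TakenAtUtc = takenAtUtc;
    }

    public Guid ProductId { get; private set; }
    public string Name { get; private set; }
    public decimal Price { get; private set; }
    public DateTime TakenAtUtc { get; private set; }
}

public class SnapshotService
{
    // Copies the current state of the Product into the snapshot
    // (assumes Product exposes Id, Name and Price; adjust to your actual entity).
    public ProductSnapshot GetSnapshot(Product p)
    {
        return new ProductSnapshot(p.Id, p.Name, p.Price, DateTime.UtcNow);
    }
}

public interface ISnapshotsRepository
{
    // The EF-based implementation decides how the snapshot is actually stored.
    void Save(ProductSnapshot snapshot);
}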

Why does OrderLine have to have a ProductSnapshot property? I suppose you can either have a link to the ProductSnapshot from the Product class if you need that historical information, or, if you just want to save the Product state under certain conditions, implement a SaveSnapshot method in a Product partial class, or have an extension method for it (see the sketch below).
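For the extension-method option, a minimal sketch reusing the hypothetical SnapshotService and ISnapshotsRepository from the previous answer:
public static class ProductSnapshotExtensions
{
    // Takes a snapshot of the product and persists it; purely illustrative.
    public static ProductSnapshot SaveSnapshot(this Product product, ISnapshotsRepository repository)
    {
        var snapshot = new SnapshotService().GetSnapshot(product);
        repository.Save(snapshot);
        return snapshot;
    }
}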

Related

Foreign key between aggregate roots

I understand the concept of aggregate root and I know that one aggregate root must reference another by identity ( http://dddcommunity.org/wp-content/uploads/files/pdf_articles/Vernon_2011_2.pdf ), so what I don't get is how I can force Entity Framework to add a foreign key constraint between two aggregates.
Let's suppose I have a simplified domain:
public class AggregateOne {
    [Key]
    public Guid AggregateOneID { get; private set; }
    public Guid AggregateTwoFK { get; private set; }
    /* Other properties and methods */
}
public class AggregateTwo {
    [Key]
    public Guid AggregateTwoID { get; private set; }
    /* Other properties and methods */
}
With this domain design, Entity Framework doesn't know that there is a relationship between AggregateOne and AggregateTwo and consequently there is no foreign key at the generated database.
In DDD, EF doesn't exist. Domain relationships are not the same as database relationships. Don't try to mix EF with domain modeling, they don't work together. So in a nutshell, what you have there is not DDD, just plain old relational db masquerading as DDD. EF would be used by the Repositories and would care about persisting one Aggregate Root (AR).
Two ARs can work together, but you need to model the process according to the domain. EF is there to act as a db for the app; it's concerned with persistence issues and shouldn't care about the Domain. Persistence is all about storage and not about reflecting domain relationships (the EF entity is not the domain entity, although they can have the same name and look similar; the important detail is that they belong to different layers and handle different issues). The domain repositories only care about persisting the AR in a way that lets it be easily restored when it changes. If more ARs need to be persisted together, embrace eventual consistency and learn how to use a service bus and sagas. It will greatly simplify your life (consider it a kind of implementation of the unit of work pattern).
For querying, the most clean and elegant way is to generate/update a read model suitable for the querying use cases and this is usually done after a domain event tells the 'world' that something changed in the Domain.
Doing DDD right is not straightforward, and it's very easy to fall into the trap of believing that you're applying DDD when in fact you're just CRUDing away using DDD terminology. Also, IMO, CQRS is a must with DDD if you like an easy life.
Understand the domain without rushing it or being superficial, identify the bounded contexts, model the domain concepts and their use cases (very important!!!), define repository interfaces as you need them, and implement the repositories only when there's nothing else left to do (the real repos, that is; in the meantime you can use fake ones like in-memory repos, which are very fast to implement, and your app being decoupled means it shouldn't care about how persistence is implemented, right? See the sketch after the next paragraph). I know it sounds weird, but this is how you know you have a maintainable DDD app.
The point of implementing the repositories last is to really decouple the app from the persistence details and also to have defined the expectations (repository methods) the app has of persistence. Once they are defined, you can write tests :D and then implement the repositories. The bonus is that you get to focus on the repo implementation in isolation, and when all the tests pass, you know everything works as it should.
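As an illustration of the fake/in-memory repositories mentioned above, a minimal sketch; the IOrderRepository interface and the Order type (with an Id property) are assumptions for the example:
using System;
using System.Collections.Generic;

// Hypothetical domain-defined repository interface.
public interface IOrderRepository
{
    Order Get(Guid id);
    void Save(Order order);
}

// Throwaway implementation used while the real EF-backed repository doesn't exist yet.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<Guid, Order> _store = new Dictionary<Guid, Order>();

    public Order Get(Guid id)
    {
        Order order;
        return _store.TryGetValue(id, out order) ? order : null;
    }

    public void Save(Order order)
    {
        _store[order.Id] = order;
    }
}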
Why should you have two completely different objects? Why not just expose your entities as domain objects through a domain interface?
In this case there's no issue with having your entities also act as domain objects with their implementation details neatly hidden behind the interface.
Another point: a neat way to represent aggregate roots with EF is to make sure the foreign key column is also part of the primary key of the dependent entity. In your case that would mean AggregateOneID and AggregateTwoFK together form the composite primary key of AggregateOne. This ensures that you don't need to go through a repository to remove instances of AggregateOne: as long as an instance is removed from AggregateTwo's collection, it will be properly marked for deletion from the database. (Without such a key you would also have to remove it from the AggregateOne set, because EF would otherwise throw an exception, not understanding the developer's intent that the AggregateOne should be deleted.)
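A sketch of that composite-key mapping with the EF fluent API (assuming EF 5/6 and configuration inside your DbContext's OnModelCreating):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // AggregateOne is keyed by its own id plus the FK to AggregateTwo, so removing it
    // from AggregateTwo's collection is enough for EF to mark it for deletion.
    modelBuilder.Entity<AggregateOne>()
        .HasKey(a => new { a.AggregateOneID, a.AggregateTwoFK });

    base.OnModelCreating(modelBuilder);
}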

DbContext with dynamic DbSet

Is it possible to have a DbContext that has only one accessor of the generic type IDbSet<T> rather than a collection of concrete sets, e.g. DbSet<Product>?
More specifically, I want to create only one generic DbSet whose actual type will be determined dynamically, e.g.
public new IDbSet<T> Set<T>() where T : class
{
    return context.Set<T>();
}
I don't want to create multiple DbSets, e.g.
DbSet<Product> Products { get; set; }
...
Actually I tried to use that generic DbSet, but there seems to be one problem: the DbContext doesn't create the corresponding tables in the database. So although I can work with the in-memory entity graph, when the time comes to store the entities in the DB an exception is thrown (Invalid object name 'dbo.Product').
Is there any way to force EF to create tables that correspond to dynamically created DbSets?
Yes, you can do this.
Register your entity configurations in OnModelCreating via modelBuilder.Configurations.Add(...); the DbSet entries will be derived from the model.
If you plan to use POCOs and just build the model this way, that's fine: you save yourself the manual DbSet<> declarations. A minimal sketch follows below.
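A minimal sketch of that idea, assuming EF 5/6: no DbSet<> properties are declared, the model is assembled in OnModelCreating, and data is accessed through the generic Set<T>(). The Product entity and its configuration class are placeholders for this sketch.
using System;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// Placeholder POCO.
public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Configuration registered instead of declaring DbSet<Product>.
public class ProductConfiguration : EntityTypeConfiguration<Product>
{
    public ProductConfiguration()
    {
        HasKey(p => p.Id);
    }
}

public class DynamicContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // The model (and therefore the tables created by the database initializer)
        // is driven entirely by what gets registered here.
        modelBuilder.Configurations.Add(new ProductConfiguration());

        // EF 6.1+ also allows registering a CLR type directly:
        // modelBuilder.RegisterEntityType(typeof(Product));

        base.OnModelCreating(modelBuilder);
    }
}

// Usage: the generic Set<T>() reaches any type that is part of the model, e.g.
// var products = new DynamicContext().Set<Product>().ToList();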
But if you plan on being more dynamic, without POCOs...
Before you go down this route, there are a number of things to consider:
Have you selected the right ORM?
Do you plan on having POCOs?
Why is DbSet<Product> Products { get; set; } so bad?
You get a lot of action for that one line of code.
What data access approach do you plan to use without typed DbSets?
Do you plan to use LINQ to Entities statements?
Do you plan on creating expression trees for the dynamic data access required, since the types aren't known at compile time?
Do you plan to use the DB model cache?
How will the cache be managed, especially in ASP.NET web environments?
There are most likely other issues I didn't think of off the top of my head.
Constructing the model yourself is a big task. LINQ access is compromised when compile-time types/POCOs are NOT used, and the model cache and performance become critical management tasks.
The practical side of this task is not to be underestimated.
Start here: DbContext.OnModelCreating
Typically, this method is called only once when the first instance of a derived context is created. The model for that context is then cached and is used for all further instances of the context in the app domain. This caching can be disabled by setting the ModelCaching property on the given ModelBuilder, but this can seriously degrade performance. More control over caching is provided through use of the DbModelBuilder and DbContext classes directly.
The DbModelBuilder class
Good Luck

EF 4.2 Code First and DDD Design Concerns

I have several concerns when trying to do DDD development with EF 4.2 (or EF 4.1) code first. I've done some extensive research but haven't come up with concrete answers for my specific concerns. Here are my concerns:
The domain cannot know about the persistence layer, or in other words the domain is completely separate from EF. However, to persist data to the database each entity must be attached to or added to the EF context. I know you are supposed to use factories to create instances of the aggregate roots, so the factory could potentially register the created entity with the EF context. This appears to violate DDD rules since the factory is part of the domain and not part of the persistence layer. How should I go about creating and registering entities so that they correctly persist to the database when needed?
Should an aggregate entity be the one to create its child entities? What I mean is, if I have an Organization and that Organization has a collection of Employee entities, should Organization have a method such as CreateEmployee or AddEmployee? If not, where does creating an Employee entity come in, keeping in mind that the Organization aggregate root 'owns' every Employee entity?
When working with EF Code First, the IDs (in the form of identity columns in the database) of each entity are handled automatically and should generally never be changed by user code. Since DDD states that the domain should be persistence-ignorant, exposing the IDs in the domain seems odd, because it implies that the domain should handle assigning unique IDs to newly created entities. Should I be concerned about exposing the ID properties of entities?
I realize these are kind of open ended design questions, but I am trying to do my best to stick to DDD design patterns while using EF as my persistence layer.
Thanks in advance!
On 1: I'm not all that familiar with EF but using the code-first/convention based mapping approach, I'd assume it's not too hard to map POCOs with getters and setters (even keeping that "DbContext with DbSet properties" class in another project shouldn't be that hard). I would not consider the POCOs to be the Aggregate Root. Rather they represent "the state inside an aggregate you want to persist". An example below:
// This is what gets persisted
public class TrainStationState {
    public Guid Id { get; set; }
    public string FullName { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    // ... more state here
}

// This is what you work with
public class TrainStation : IExpose<TrainStationState> {
    TrainStationState _state;

    public TrainStation(TrainStationState state) {
        _state = state;
        // You can also copy into member variables
        // the state that's required to make this
        // object work (think memento pattern).
        // Alternatively you could have a parameter-less
        // constructor and an explicit method
        // to restore/install state.
    }

    TrainStationState IExpose<TrainStationState>.GetState() {
        return _state;
        // Again, nothing stopping you from
        // assembling this "state object"
        // manually.
    }

    public void IncludeInRoute(TrainRoute route) {
        route.AddStation(_state.Id, _state.Latitude, _state.Longitude);
    }
}
Now, with regard to aggregate life-cycle, there are two main scenarios:
Creating a new aggregate: You could use a factory, factory method, builder, constructor, ... whatever fits your needs. When you need to persist the aggregate, query for its state and persist it (typically this code doesn't reside inside your domain and is pretty generic).
Retrieving an existing aggregate: You could use a repository, a DAO, ... whatever fits your needs. It's important to understand that what you are retrieving from persistent storage is a state POCO, which you need to inject into a pristine aggregate (or use it to populate its private members). This all happens behind the repository/DAO facade; don't muddle your call sites with this generic behavior. A sketch follows below.
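A sketch of what that repository facade might look like for the TrainStation example above (the IExpose<T> contract is the one assumed by the earlier snippet; the EF-backed context and the Get/Add shape are illustrative assumptions):
using System;
using System.Data.Entity;

public class TrainStationRepository
{
    private readonly DbContext _context;

    public TrainStationRepository(DbContext context)
    {
        _context = context;
    }

    // Loads the state POCO and reconstitutes a pristine aggregate around it.
    public TrainStation Get(Guid id)
    {
        var state = _context.Set<TrainStationState>().Find(id);
        return state == null ? null : new TrainStation(state);
    }

    // Asks the aggregate for its state and hands it to EF (new aggregates only;
    // update handling is deliberately left out of this sketch).
    public void Add(TrainStation station)
    {
        var state = ((IExpose<TrainStationState>)station).GetState();
        _context.Set<TrainStationState>().Add(state);
        _context.SaveChanges();
    }
}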
On 2: Several things come to mind. Here's a list:
Aggregate Roots are consistency boundaries. What consistency requirements do you see between an Organization and an Employee?
Organization COULD act as a factory of Employee, without mutating the state of Organization.
"Ownership" is not what aggregates are about.
Aggregate Roots generally have methods that create entities within the aggregate. This makes sense because the roots are responsible for enforcing consistency within the aggregate.
On 3: Assign identifiers from the outside, get over it, move on. That does not imply exposing them, though (only in the state POCO).
The main problem with EF-DDD compatibility seems to be how to persist private properties. The solution proposed by Yves seems to be a workaround for EF's lack of power in some cases. For example, you can't really do DDD with the fluent API, which requires the state properties to be public.
I've found that only mapping with .edmx files allows you to leave domain entities pure. It doesn't force you to make things public or add any EF-dependent attributes.
Entities should always be created by some aggregate root. See a great post of Udi Dahan: http://www.udidahan.com/2009/06/29/dont-create-aggregate-roots/
Always loading some aggregate and creating entities from there also solves the problem of attaching an entity to the EF context. You don't need to attach anything manually in that case. It will get attached automatically, because the aggregate loaded from the repository is already attached and has a reference to the new entity. While the repository interface belongs to the domain, the repository implementation belongs to the infrastructure and is aware of EF, contexts, attaching, etc.
I tend to treat autogenerated IDs as an implementation detail of the persistent store that has to be considered by the domain entity but shouldn't be exposed. So I have a private ID property that is mapped to the autogenerated column, and another, public ID which is meaningful for the domain, like an identity card ID or passport number for a Person class. If there is no such meaningful data, I use the Guid type, which can create (almost certainly) unique identifiers without a database call.
So in this pattern I use the Guid/meaningful ID to load aggregates from a repository, while the autogenerated IDs are used internally by the database for slightly faster joins (Guid is not good for that).
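A rough sketch of that dual-identifier idea (the Person shape is made up; how the private surrogate key actually gets mapped depends on your mapping approach, e.g. the .edmx mapping mentioned above):
using System;

public class Person
{
    // Autogenerated identity column; an implementation detail of the store,
    // kept private so the domain never depends on it.
    private int DatabaseId { get; set; }

    // Meaningful, domain-visible identifier (a passport number would also do).
    public Guid PersonId { get; private set; }

    public string Name { get; private set; }

    private Person() { } // for the ORM

    public Person(string name)
    {
        PersonId = Guid.NewGuid(); // (almost) unique, no database round-trip needed
        Name = name;
    }
}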

What is the overhead of Entity Framework tracking?

I've just been talking with a colleague about Entity Framework change tracking. We eventually figured out that my context interface should have
IDbSet<MyPoco> MyThings { get; }
rather than
IQueryable<MyPoco> MyThings { get; }
and that my POCO should also have all its properties declared as virtual.
Using the debugger we could then see the tracking objects and also that the results contained proxies to my actual POCOs.
If I don't have my POCO properties as virtual and have my context interface using IQueryable<> instead of IDbSet<> I don't get any of that.
In this instance I am only querying the database, but in the future will want to update the database via Entity Framework.
So, to make my life easier in the future when I come to look at this code as a reference, is there any performance penalty in having the tracking info/proxies there when I will never make use of them?
There is a performance penalty for tracking entities in EF. When you query using Entity Framework, EF keeps a copy of the values loaded from the database. A single context instance also keeps track of only a single instance of an entity, so EF has to check whether it already has a copy of the entity before it creates an instance (i.e. there is a lot of comparison going on behind the scenes).
So avoid it if you don't need it. You can do so as follows.
IQueryable<MyPoco> MyThings { get { return db.MyThings.AsNoTracking(); } }
MSDN page on Stages of Query Execution details the cost associated with each step of query execution.
Edit:
You should not expose IDbSet<MyPoco> MyThings, because that tells the consumer of your API that your entities can be added, updated and deleted, when in fact you intend only to query the data.
Navigation properties in the model classes are declared as virtual to enable lazy loading, which means a navigation property is only loaded when it is actually needed. As far as the entity objects are concerned, their main aim is to load the specific table records from the database into the DbSet that comes from the DbContext. You can't use IQueryable for that; it doesn't make sense with the context, since IQueryable is an altogether different interface.
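To illustrate the proxy point with a placeholder model: EF can only generate its lazy-loading/change-tracking proxies for members declared virtual (for change-tracking proxies every mapped property must be virtual; for lazy loading the navigation properties are enough).
using System.Collections.Generic;

public class MyPoco
{
    public int Id { get; set; }

    // For a change-tracking proxy, scalar properties must be virtual too.
    public virtual string Name { get; set; }

    // A virtual navigation property is what enables lazy loading.
    public virtual ICollection<OtherPoco> Others { get; set; }
}

public class OtherPoco
{
    public int Id { get; set; }
}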

Should I use partial classes as the business layer when using Entity Framework?

I am working on a project using Entity Framework. Is it okay to use partial classes of the EF-generated classes as the business layer? I am beginning to think that this is how EF is intended to be used.
I have attempted to use a DTO pattern and soon realized that I am just creating a bunch of mapping classes, which duplicates my effort and is also a cause of more maintenance work and an additional layer.
I want to use self-tracking entities and pass the EF entities to all the layers. Please share your thoughts and ideas. Thanks
I had a look at using partial classes and found that exposing the database model up towards the UI layer would be restrictive.
For a few reasons:
The entity model created includes a deep relational object model which, depending on your schema, would get exposed to the UI layer (say the presenter of MVP or the ViewModel in MVVM).
The business logic layer typically exposes operations that you can code against. If you see a Save method on the BLL, look at the parameters needed to do the save, and see a model that requires the construction of other entities (because of the relational nature of the entity model) just to do the save, then the operation is not being kept simple.
If you have a bunch of web services then the extra data will need to be sent across for no apparent gain.
You can create immutable DTOs for your operations' parameters rather than encountering side effects because the same instance was modified in some other part of the application.
If you do TDD and follow YAGNI, you will tend to have a structure specifically designed for the operation you are writing, which is easier to construct tests against (you don't have to create other objects unrelated to the test just because they are on the model). In this case you might have...
public class Order
{
    ...
    public Guid CustomerID { get; set; }
    ...
}
Instead of using the entity model generated by EF, which has its references exposed...
public class Order
{
    ...
    public Customer Customer { get; set; }
    ...
}
This way the id of the customer is only needed for an operation that takes an order. Why would you need to construct a Customer (and potentially other objects as well) for an operation that is concerned with taking orders?
If you are worried about the duplication and mapping, have a look at AutoMapper; a small sketch follows below.
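For example, a minimal AutoMapper sketch using the static API of the older AutoMapper versions current at the time (OrderDto and the mapping are made up for illustration; by convention, properties are matched by name):
using System;
using AutoMapper;

public class OrderDto
{
    public Guid CustomerID { get; set; }
}

public static class OrderMapping
{
    public static OrderDto ToDto(Order order)
    {
        // Normally configured once at application start-up.
        Mapper.CreateMap<Order, OrderDto>();
        return Mapper.Map<OrderDto>(order);
    }
}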
I would not do that, for the following reasons:
You lose the clear distinction between the data layer and the business layer
It makes the business layer more difficult to test
However, if you have some data-model-specific code, place it in a partial class to avoid it being lost when you regenerate the model.
I think partial classes are a good idea. If the model is regenerated, you will not lose the business logic in the partial classes.
As an alternative you can also look into EF4 Code only so that you don't need to generate your model from the database.
I would use partial classes. There is no such thing as data layer in DDD-ish code. There is a data tier and it resides on SQL Server. The application code should only contain business layer and some mappings which allow persisting business objects in the mentioned data tier.
Entity Framework is your data access code, so you shouldn't build your own. In most cases the database schema is modified because the model has changed, not the other way around.
That being said, I would discourage you from sharing your entities across all the layers. I value separation of the UI and the domain layer. I would use DTOs to transfer data in and out of the domain. If I had the necessary freedom, I would even use the CQRS pattern to get rid of mapping entities to DTOs -- I would simply create a second EF data access project meant only for reading data for the UI. It would be built on top of the same database. You read data through a read (anemic, without business logic) model, but you modify it by issuing commands that are executed against the real model implemented using EF and partial methods.
Does this answer your question?
I wouldn't do that. Try to keep the layers as independent as possible, so that a tiny change in your database schema doesn't affect all your layers.
Entities could be used beyond the data layer, but they should not be.
If anything, provide interfaces and let your entities implement them (in the partial file); the BL should not know about the entities, only the interfaces. A minimal sketch follows below.
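A minimal sketch of that interface-on-the-partial-class idea (the ICustomer interface, the Customer entity, and its Orders collection are assumptions for the example):
using System;

// Interface the business layer depends on; it knows nothing about EF.
public interface ICustomer
{
    Guid Id { get; }
    string Name { get; }
    bool IsEligibleForDiscount();
}

// Partial class next to the generated Customer entity; the generated half is assumed
// to declare Id, Name and the Orders collection, this half adds the interface.
public partial class Customer : ICustomer
{
    public bool IsEligibleForDiscount()
    {
        // Made-up business rule, for illustration only.
        return Orders != null && Orders.Count > 10;
    }
}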