I'm using the Prism framework with EF in a WPF application.
ViewModel:
keeps service references (injected by the Unity container).
Services:
provide "high level" operations on the data
keep a reference to a Repository, which provides basic CRUD operations against the database (single table per repository).
Repository:
every method in the repository uses the "using" pattern, working with a short-lived object context (roughly as sketched below).
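For illustration, a repository method in this setup looks roughly like this (PersonRepository and AppDbContext are placeholder names, not my actual classes):

public class PersonRepository
{
    public Person GetById(int id)
    {
        // Short-lived context: opened and disposed inside the method.
        using (var ctx = new AppDbContext())
        {
            return ctx.People.Find(id);
        }
    }
}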
This is where I got stuck: after the object context is disposed, I can no longer work with the mapped navigation properties. My database model is complex (many related tables), and all the .Include() calls needed when retrieving data keep the code dirty.
After reading several threads, I discovered that the "unit of work" pattern is probably what I need.
Here are my questions:
Who keeps the reference to the unit of work (and therefore to the context)?
If I choose the context-per-view approach, the ViewModel should hold the context reference.
How can I inject the unit of work into my services then? Or should I create a new instance of the Service in the ViewModel and pass the context as a constructor parameter?
We're using a similar architecture in a project:
Each ViewModel gets its own Service object, which is injected in the constructor (at least the top-level ones that directly correspond to a View; some hierarchical ViewModels may reuse their parent's Service, but let's keep it simple here).
By default, each Service operation creates a new context, but the Services also have BeginContext and EndContext methods which the ViewModels can call to keep the context open over multiple operations.
This worked pretty well for us. Most of the time we call BeginContext when a View is opened and EndContext when it is closed.
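In code, the idea looks roughly like this (a minimal sketch; PersonService, AppDbContext and the method names are placeholders, not our actual implementation):

public class PersonService
{
    private AppDbContext _sharedContext; // non-null between BeginContext/EndContext

    public void BeginContext()
    {
        if (_sharedContext == null)
            _sharedContext = new AppDbContext();
    }

    public void EndContext()
    {
        if (_sharedContext != null)
        {
            _sharedContext.Dispose();
            _sharedContext = null;
        }
    }

    public Person GetPerson(int id)
    {
        // Use the long-lived context if one is open, otherwise a short-lived one.
        if (_sharedContext != null)
            return _sharedContext.People.Find(id);

        using (var ctx = new AppDbContext())
            return ctx.People.Find(id);
    }
}

The ViewModel calls BeginContext() when its View opens and EndContext() when it closes, so navigation properties stay usable for the View's lifetime.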
I'm refactoring my ASP.NET MVC 4 WebAPI project for performance optimization reasons.
Within my controller code, I'm searching for entities in a context (DbContext, EF6). There are a few thousand such entities; new ones are added on an hourly basis (i.e. "slowly"), they are rarely deleted (and I don't care if deleted entities are still found in the context's cache!), and they are never modified.
After reading the answers to this question, to this one and a few more discussions, I'm still not sure it's a bad idea to use a single static DbContext for the purpose described above - a DbContext which never updates the database.
Performance-wise, I'm not worried about the instantiation cost, but rather about losing the benefit of cached entities if the DbContext is created anew for each request. I'm also using a second-level cache, which makes the question of keeping the context alive even more acute.
My questions are:
1. Regardless of the specific implementation, is a "static" DbContext a valid solution in my case?
2. If so, what would be the most appropriate way of implementing such a DbContext?
3. Should I periodically "flush" the context to clear the cache, in order to prevent it from growing too big?
DbContext caches entity instances when you get/query data. It ensures that different queries returning the same data map to the same entity instance (based on type and id). Otherwise, if you modified the same entity through different object instances, the context would not know which one has the correct data. Because of this, a static DbContext would keep accumulating entities over time until the process crashes.
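You can see this identity-map behavior with a small experiment (BlogContext and Posts are hypothetical names, but the behavior is general for tracked queries):

using (var ctx = new BlogContext())
{
    var a = ctx.Posts.Find(1);
    var b = ctx.Posts.Single(p => p.Id == 1);
    Console.WriteLine(ReferenceEquals(a, b)); // True: one tracked instance per key
}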
DbContexts should be short-lived. Request.Properties is a good place to save one in Web API (it maps to HttpContext.Items when hosted in IIS).
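A sketch of that approach (assuming Web API on .NET 4.5; AppDbContext and the key name are placeholders):

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ContextPerRequestHandler : DelegatingHandler
{
    public const string ContextKey = "AppDbContext"; // hypothetical key name

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        using (var ctx = new AppDbContext())
        {
            // Stash the context so downstream code can reuse it for this request.
            request.Properties[ContextKey] = ctx;
            return await base.SendAsync(request, cancellationToken);
        } // the context is disposed when the request completes
    }
}

A controller can then fetch the request-scoped context with (AppDbContext)Request.Properties[ContextPerRequestHandler.ContextKey].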
We have kind of an interesting situation here. We are using the repository pattern with Entity Framework, so each database table has its own Repository class that accepts an instance of a DbContext in its constructor. We are also using Ninject for dependency injection. We have configured a single context to be instantiated per request, so that when multiple repositories ask for a DbContext, the same instance is used throughout. This lets us follow a Unit of Work pattern: multiple operations can occur across multiple repositories, and a single commit persists all changes to the database.
Here is the issue: we are also using SQL Azure Federations, which splits our client data across multiple databases (sharding). We need to be able to jump from one federation member (database) to another within the same request, but we want to keep using the same service/repository methods that depend on the injected context.

Our first thought was to execute the USE FEDERATION SQL command on the existing context to move to the next database, but it seems to work sometimes and not others. After executing the statement we can see that the underlying connection on the context really is pointing to the new server, yet for some reason queries executed on that context return results from the previous connection. I presume using the same context instance against multiple databases is not something that is natively supported, as you would normally spin up a new context when connecting to a new database.

Unfortunately, we have a bunch of repositories that are created dynamically by Ninject and that all take in the same context instance. So even if we do spin up a new context for the new database, we have no way of making all the existing repositories suddenly depend on the new context instead of the one created at the start of the request.
Here are a few solutions we can think of, but we are not sure how to get any of them to work:
Change the database used by an existing context (as explained above, but it didn't seem to work)
Swap out the injected context for all repositories with a new one, so that all existing repositories now depend on a different context (see the sketch after this list)
Somehow request from Ninject a new instance of all the repositories, passing in the parameters needed for the context when it is requested.
Again, the bottom line is that we have a set of repositories and services that depend on a single context, and we want to be able to reuse those services and repositories while swapping out or changing the context on which they all depend, so that it points to a new server.
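One way to make the second option concrete is to have the repositories depend on a small provider abstraction instead of the context itself, so the context behind it can be swapped. A sketch only (IContextProvider and all names below are hypothetical, and this does not address the Ninject wiring):

public interface IContextProvider
{
    AppDbContext Current { get; }
    void SwitchTo(string connectionString);
}

public class ContextProvider : IContextProvider
{
    private AppDbContext _current = new AppDbContext();

    public AppDbContext Current { get { return _current; } }

    public void SwitchTo(string connectionString)
    {
        _current.Dispose();
        // Assumes AppDbContext exposes a connection-string constructor.
        _current = new AppDbContext(connectionString); // fresh context for the new shard
    }
}

public class OrderRepository
{
    private readonly IContextProvider _provider;

    public OrderRepository(IContextProvider provider) { _provider = provider; }

    public Order Find(int id)
    {
        return _provider.Current.Orders.Find(id); // always resolves the live context
    }
}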
The solution was to drop Ninject from the scenario completely. Not the best solution, but the tools we are using were not really designed to do what we wanted in the environment we are working in.
I am relatively new to Entity Framework. All the documents and books I can find talk about how to use the framework or which model should be used, but fall short of explaining how the framework works in depth.
For instance, when I load entities from the database via a LINQ query or framework methods, are those entities thread-safe? In other words, can they be shared with other threads? If so, how does EF control consistency?
When control goes out of the context's scope, are those entities gone or still in memory? After .SaveChanges(), are those entities gone? What is the life cycle?
Can an expert in EF explain the above points in detail, please?
Thanks in advance.
The life cycle of loaded entities is more-or-less tied to that of the Entity Context which loaded them. Hence in many examples you will see:
using (var ctx = new Context())
{
    // ... do work
} // The context gets disposed here.
Once the context is disposed (e.g. at the end of the using statement), you should no longer treat entities that were loaded by that context as if you could load additional information from them. For example, don't try accessing navigation properties on them. To avoid problems, I usually find it best to create a DTO that has only the exact data that I expect consumers to use, and have that be the only value that leaves the using statement.
using (var ctx = new Context())
{
    var q = from p in ctx.People
            select new PersonSummary { Name = p.Name, Email = p.Email };
    return q.ToList(); // This fully evaluates the query, leaving you
                       // with plain PersonSummary objects.
}
Entity Contexts are not thread-safe, so you shouldn't be trying to load navigation properties and such from multiple threads for objects tied to the same context, even within the context's lifecycle.
For instance, when I load entities from the database via a LINQ query or framework methods, are those entities thread-safe? In other words, can they be shared with other threads? If so, how does EF control consistency?
The ObjectContext class is not thread-safe. You must have one object context per thread or create your own thread synchronization. This way, consistency is managed by the ObjectContext, since it tracks the state of all the objects.
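For example, if you want to parallelize work, give each worker its own context. A minimal sketch (ShopContext and Customers are hypothetical names):

static void TouchCustomers(IEnumerable<int> customerIds)
{
    Parallel.ForEach(customerIds, id =>
    {
        using (var ctx = new ShopContext()) // one context per worker
        {
            var customer = ctx.Customers.Find(id);
            // ... read or modify the customer within this worker's own context
            ctx.SaveChanges();
        }
    });
}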
When control goes out of the context's scope, are those entities gone or still in memory? After .SaveChanges(), are those entities gone? What is the life cycle?
The ObjectContext class implements the IDisposable interface, so you can, and should, wrap it in a using statement when working with Entity Framework. This way, the context is gone once you leave the using block. If you do NOT dispose the context, the entities keep being tracked; only their states change. Disposing ObjectContext instances also makes sure that the database connection is properly released and you are not leaking database connections.
So, the big question is:
Where and when should EF live?
These ORM contexts should be treated as a Unit of Work; that is, the context object should live until the business task is done.
In my specific scenarios I use an IoC container like Windsor that does the heavy lifting for me. In an ASP.NET MVC app, for example, Windsor can create one context per web request. With this you don't have to scatter using statements throughout your code. You can read more about it here:
Windsor Tutorial - Part Seven - Lifestyles
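With Windsor, that registration looks roughly like this (a sketch, assuming Windsor 3.x; AppDbContext is a placeholder):

// using Castle.MicroKernel.Registration;
container.Register(
    Component.For<AppDbContext>()
             .LifestylePerWebRequest()); // one context per HTTP request

Note that Windsor's per-web-request lifestyle also requires its PerWebRequestLifestyleModule HTTP module to be registered in web.config.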
Here's a link that explains it in more detail, directly from someone who helps build the framework at Microsoft:
Entity Framework Object Context Life Cycle compared to Linq to Sql Data Context Life Cycle
You can write a test application to observe the behavior of the context's change tracker.
If you retrieve an entity from a context, dispose of that context, then create a new instance of the context, load the same entity there, and attempt to attach the instance you retrieved earlier in order to save a change, it will complain that it is already tracking an entity with that ID.
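A sketch of such an experiment (PeopleContext is a hypothetical context type; the exact exception message varies by EF version):

Person detached;
using (var ctx = new PeopleContext())
{
    detached = ctx.People.Find(1);  // tracked until the context is disposed
}

using (var ctx = new PeopleContext())
{
    var copy = ctx.People.Find(1);  // the new context tracks its own copy
    ctx.People.Attach(detached);    // throws InvalidOperationException: an entity
                                    // with the same key is already being tracked
}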
I am trying to implement DDD, so I have created the following classes:
- User [the domain model]
- UserRepository [a central factory to manage the object(s)]
- UserMapper + UserDbTable [A Mapper to map application functionality and provide the CRUD implementation]
My first question is: when the model needs to communicate with the persistence layer, should it contact the Repository or the Mapper? Personally, I think it should ask the Repository, which will contact the Mapper and provide the required functionality.
Now my second concern is that there should be only one repository for all objects of the same class, which means I will be creating a singleton. But if my application has lots of domain models (let's say 20), then there will be 20 singletons, and that doesn't feel right. The other option is to use DI (dependency injection), but the framework I am using (Zend Framework 1.11) has no built-in DI container.
My third
UserRepository [a central factory to manage the object(s)]
In DDD, a Repository is not a Factory. The Repository is responsible for the middle and end of life of a domain object; a Factory is responsible for the beginning. Conceptually, persisting and restoring happen to the domain object in its middle life.
UserMapper + UserDbTable [A Mapper to map application functionality and provide the CRUD implementation]
These classes do not belong to the domain layer; this is data access. They would all be encapsulated by the repository implementation (or would not exist at all if you use an ORM).
My first question is: when the model needs to communicate with the persistence layer, should it contact the Repository or the Mapper? Personally, I think it should ask the Repository, which will contact the Mapper and provide the required functionality.
The model does not need to communicate with the persistence layer. In fact, you should try to make your model as persistence-agnostic as possible. From the perspective of your domain model, the Repository is just an interface. The implementation of this interface belongs to a different layer: data access. The implementation is injected later, somewhere in your Application layer. The Application layer is aware of persistence and transactions; this is where you can implement the Unit of Work pattern (which also does not belong to the domain layer).
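A C# sketch of that separation (the idea is language-agnostic, and all names are illustrative):

// Domain layer: only the interface lives here.
public interface IUserRepository
{
    User FindById(Guid id);
    void Add(User user);
}

// Data access layer: an EF-backed implementation, injected where needed.
public class EfUserRepository : IUserRepository
{
    private readonly AppDbContext _ctx;

    public EfUserRepository(AppDbContext ctx) { _ctx = ctx; }

    public User FindById(Guid id) { return _ctx.Users.Find(id); }
    public void Add(User user)    { _ctx.Users.Add(user); }
}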
Now my second concern is that there should be only one repository for all objects of the same class, which means I will be creating a singleton. But if my application has lots of domain models (let's say 20), then there will be 20 singletons.
First of all, you can have more than one Repository for a given domain object. This happens most of the time anyway, because you want to avoid a 'method explosion' on your Repository interface. Secondly, a singleton Repository is a bad idea because it would couple all consumers to a single implementation which, among other things, would make unit testing hard. Thirdly, there is nothing wrong with having 20 or more Repositories; in fact, the more focused your classes are, the better (see SRP).
UPDATE:
I think you are confusing the regular Factory pattern with the DDD Factory. In DDD terms, when an object is restored from the database it already exists conceptually (even though it is a new object in memory), so it is the responsibility of the Repository to persist and restore it. The DDD Factory comes into play when a complex domain object begins its life, whether it will be a long-lived object (saved in the db) or not.
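Continuing the C# sketch above, the split looks like this (all names are illustrative):

public class User
{
    public Guid Id { get; private set; }
    public string Email { get; private set; }

    private User() { } // restored by the Repository/ORM, not created directly

    // DDD Factory: the beginning of life for a brand-new User.
    public static User Register(string email)
    {
        return new User { Id = Guid.NewGuid(), Email = email };
    }
}

The Repository then only deals with Add, Find and Remove, i.e. the middle and end of the object's life.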
Answering your second question: the ZF1 way would be to create a singleton per object class. You could have a factory/registry that creates these for you and returns the previously created instance when you ask for one that already exists. Alternatively, if you are using PHP 5.3, use a DI container such as Pimple or Zend\Di.
My business logic and core entities are tightly coupled.
An object called Session, for example, is a database entity, but in the literal sense of the word it is a real-life session during which events are recorded.
This Session object also has [NotMapped] objects and handles to unmanaged resources.
The Session object also implements IDisposable.
A good chunk of entities in my project have the above characteristics.
This sounds like a disaster down the line. The question is what approach to take here.
I am expecting answers to point to design patterns or architecture, but please include a very short code example to illustrate your point rather than just the name of the proposed solution.
What I have thought of so far is to derive a business object from each entity and use code generation to convert from one type to the other. Since this is a client/server application, I want to be able to use the entity relationship set as-is in my desktop app, albeit a derived one.
Not sure how to achieve this in a sustainable way.
This is not about a design pattern but about ownership of the disposable entities. Who owns the entity? The owner is responsible for disposal, and that is something defined directly by your code/design.
The EF context itself is disposable. You could override its Dispose operation and force it to dispose all attached entities, but that is most probably something you don't want to do, because the context is most probably not the owner of the entities. The code requesting entities from the context, or requesting persistence of entities, should be considered the owner responsible for disposal.
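A sketch of explicit ownership (RecorderContext and sessionId are hypothetical): the code that pulls the entity out of the context owns it and disposes it when the business task completes.

using (var ctx = new RecorderContext())
using (Session session = ctx.Sessions.Find(sessionId)) // this code owns the entity
{
    // ... perform the business task with the session
}   // the owner disposes the entity; the context is disposed separately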