Some background: we sell an online product where each customer gets their own database, but all customers are served by a shared service.
I would like to use EF6 instead of plain ADO.NET, but as far as I know it's not possible to change the database after the DbContext has been created, and I fear that creating a new DbContext for each query is too expensive.
Caching 1,000+ DbContexts also sounds like a very bad solution.
Connection pooling will not work well with 1,000+ connection strings: there would be one pool per database, resulting in an enormous number of connections.
I recommend connecting to a dummy database first and then using DbConnection.ChangeDatabase to switch to the right database. EF does not notice the switch and works just fine.
You don't need to cache DbContext instances; they are lightweight.
This is actually pretty easy to do:
public class MyContext : DbContext
{
    public MyContext(string connectionStringName) : base(connectionStringName) { }
}
or
public class MyContext : DbContext
{
    public MyContext(DbConnection connection) : base(connection, contextOwnsConnection: true) { }
}
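A minimal sketch of the ChangeDatabase approach described above, assuming SQL Server; the TenantContextFactory name and the "SharedPool" connection string are illustrative, and MyContext is the connection-based context from the second snippet:

using System.Configuration;
using System.Data.Common;
using System.Data.SqlClient;

public static class TenantContextFactory
{
    public static MyContext Create(string tenantDatabaseName)
    {
        // "SharedPool" points at a dummy database; because every tenant uses the
        // same connection string, they all share a single connection pool.
        DbConnection connection = new SqlConnection(
            ConfigurationManager.ConnectionStrings["SharedPool"].ConnectionString);
        connection.Open();

        // Switch the pooled connection to the tenant's database.
        connection.ChangeDatabase(tenantDatabaseName);

        // The context was constructed with contextOwnsConnection: true,
        // so disposing it also closes the connection.
        return new MyContext(connection);
    }
}

Dispose the returned context per operation; the underlying connection goes back to the one shared pool.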
You should not be too concerned about the cost of constructing a DbContext. There are other factors to consider when deciding the lifetime of your DbContext/ObjectContext, which you can find here.
It's not a good idea to share your DbContext instance, primarily because of memory usage and thread safety, as mentioned in the link above.
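In practice that usually means a short-lived context per operation or per request. A minimal sketch, where MyContext, the Orders set, and the IsActive flag are placeholders for your own model:

using System.Collections.Generic;
using System.Linq;

public class OrderService
{
    public IList<Order> GetActiveOrders()
    {
        // Create, use, and dispose the context per logical operation
        // instead of sharing one instance across threads or requests.
        using (var context = new MyContext("name=TenantDb"))
        {
            return context.Orders
                .Where(o => o.IsActive)
                .ToList();
        }
    }
}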
You can do the following:
public class BaseContext<TContext> : DbContext where TContext : DbContext
{
    protected BaseContext() : base("name=DbName")
    {
    }
}
And then use it like this:
public class StatusContext : BaseContext<StatusContext>
{
    ....
}
All contexts that inherit from BaseContext will use the same database.
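For example, a second context (UserContext and the User entity are just illustrative) picks up the same "DbName" connection string through the shared base constructor:

using System.Data.Entity;

// Connects to the same "DbName" database as StatusContext
// without repeating the connection string.
public class UserContext : BaseContext<UserContext>
{
    public DbSet<User> Users { get; set; }
}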
I am working on an application with multiple modules. I created the tables for the different modules under different schemas in the same database, and all user-related tables in the default schema. I feel even more confused after reading up on DbContext, unit of work, and the repository pattern. I started by creating one DbContext, then realized that a logged-in user needs only a few tables, yet by calling the constructor it brings everything into memory. Later I thought of creating multiple DbContexts, but then I have to include the user-related tables in every one of them.
As a third option, I started working with the unit of work and repository patterns. Many articles say this is just another abstraction on top of EF's DbContext and DbSet. I continued anyway and realized that I will end up with hundreds of repositories, and once I add all of them to the unit of work and call its constructor, again everything will be loaded into memory. I am totally confused about which approach best suits my needs. Each controller needs only the repositories for its own tables plus the user-table repositories for CRUD operations, but with the steps above, will this cause performance issues?
My unit of work is as below:
using DemoApp.Core;
using DemoApp.Core.Repositories;
using DemoApp.Persistence.Repositories;

namespace DemoApp.Persistence
{
    public class UnitOfWork : IUnitOfWork
    {
        private readonly DemoAppContext _context;

        public UnitOfWork(DemoAppContext context)
        {
            _context = context;
            Ones = new OneRepository(_context);
            Twos = new TwoRepository(_context);
        }

        public IOneRepository Ones { get; private set; }
        public ITwoRepository Twos { get; private set; }

        public int Complete()
        {
            return _context.SaveChanges();
        }

        public void Dispose()
        {
            _context.Dispose();
        }
    }
}
And the controller:
using DemoApp.Core;
using DemoApp.Core.Domain;
using DemoApp.Persistence;
using System;
using System.Linq;
using System.Web.Mvc;

namespace DemoApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IUnitOfWork _unitOfWork;

        public HomeController(IUnitOfWork unitOfWork)
        {
            _unitOfWork = unitOfWork;
        }

        public ActionResult Index()
        {
            var result = _unitOfWork.Ones.GetAll();
            return View(result);
        }
    }
}
Having a single, large context will not load entities into memory on construction, but it will resolve the entity mappings, which can take a few seconds for the first query in very large contexts. Bounded contexts suit very large systems where you can split off related entities, or keep "heavy" or time-sensitive entities separate from the main context.
I use a pattern with bounded contexts in which an attribute typed to the context in question marks which entity type configurations apply to which context. This accommodates smaller, read-only entity definitions, but I'd only really recommend it for very large entity sets. If you rely on deferred execution and .Select() projections to pull just the data you need when you need it, multiple bounded entity declarations typically aren't necessary.
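The attribute-driven registration could look something like the sketch below; the attribute name, ReportingContext, and OrderSummary are illustrative, not the author's actual implementation (if you don't need filtering, EF6's modelBuilder.Configurations.AddFromAssembly covers the simple case):

using System;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;
using System.Linq;
using System.Reflection;

// Hypothetical marker attribute: names the context type(s) a configuration belongs to.
[AttributeUsage(AttributeTargets.Class)]
public class BoundedContextAttribute : Attribute
{
    public Type[] ContextTypes { get; private set; }

    public BoundedContextAttribute(params Type[] contextTypes)
    {
        ContextTypes = contextTypes;
    }
}

// A slim, read-only style configuration registered only for ReportingContext.
[BoundedContext(typeof(ReportingContext))]
public class OrderSummaryConfiguration : EntityTypeConfiguration<OrderSummary>
{
    public OrderSummaryConfiguration()
    {
        HasKey(o => o.OrderId);
        ToTable("Orders");
    }
}

public class ReportingContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Register only the configurations whose attribute lists this context type.
        var configurationTypes = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => t.GetCustomAttributes(typeof(BoundedContextAttribute), false)
                .Cast<BoundedContextAttribute>()
                .Any(a => a.ContextTypes.Contains(GetType())));

        foreach (var type in configurationTypes)
        {
            modelBuilder.Configurations.Add((dynamic)Activator.CreateInstance(type));
        }
    }
}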
The argument for using the Unit of Work and Repository patterns is primarily about enabling unit testing. I do not recommend generic repositories (a repository per entity); instead, use the Repository pattern much the way you would use a controller: each repository serves an area of the application with methods to retrieve, create, and delete the entities relevant to that area. Tying a repository to a single entity leads to a lot of boilerplate code, or to generic operations that don't apply to all entities equally; it makes your code less flexible and ultimately harder to read. In most cases you should be using the relationships mapped between entities, so a single repository can retrieve and act upon all the entities relevant to a particular screen, for example, rather than shifting between lots of different repositories to load related data.
I use Mehdime's DbContextScope UoW pattern because it facilitates both read/write and read-only scopes across repositories/helpers and removes the need to inject a DbContext/UoW wrapper into the repositories. This allows multiple UoW scopes per request, instead of scoping a DbContext/UoW instance to the request or manually juggling lifetime scopes if your container supports that. In any case it's worth a look as an option. Mehdime's implementation targets EF 6.x; forks are available for EF Core.
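To make the area-repository idea and the scope usage concrete, here is a hedged sketch; the entities (Order, Lines, ShippedOn) and the repository/service names are illustrative, and the DbContextScope types used (IDbContextScopeFactory, IAmbientDbContextLocator) come from the library's README, so verify them against the version you reference:

using System;
using System.Data.Entity;
using System.Linq;
using Mehdime.Entity; // DbContextScope package for EF 6.x

// An "area" repository: it serves a feature/screen rather than a single entity,
// and resolves the ambient context instead of having a DbContext injected.
public class OrderingRepository
{
    private readonly IAmbientDbContextLocator _contextLocator;

    public OrderingRepository(IAmbientDbContextLocator contextLocator)
    {
        _contextLocator = contextLocator;
    }

    private DemoAppContext Context
    {
        get { return _contextLocator.Get<DemoAppContext>(); }
    }

    public Order GetOrderWithLines(int orderId)
    {
        // Use the mapped relationships rather than a second repository.
        return Context.Orders
            .Include(o => o.Lines)
            .Single(o => o.Id == orderId);
    }
}

public class OrderingService
{
    private readonly IDbContextScopeFactory _scopeFactory;
    private readonly OrderingRepository _repository;

    public OrderingService(IDbContextScopeFactory scopeFactory, OrderingRepository repository)
    {
        _scopeFactory = scopeFactory;
        _repository = repository;
    }

    public void ShipOrder(int orderId)
    {
        // The scope is the unit of work; any repository called inside it
        // joins the same ambient DbContext and the same SaveChanges.
        using (var scope = _scopeFactory.Create())
        {
            var order = _repository.GetOrderWithLines(orderId);
            order.ShippedOn = DateTime.UtcNow;
            scope.SaveChanges();
        }
    }
}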
Suppose I have an EJB, CrudService, with methods to perform CRUD on a single entity (so not collections). CrudService has an EntityManager injected into it, and it cannot be modified.
CrudService looks something like this:
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CrudService {

    @PersistenceContext(name = "TxTestPU")
    private EntityManager em;

    public Integer createPost(Post post) {
        post = em.merge(post);
        return post.getId();
    }

    public void updatePost(Post post) {
        em.merge(post);
    }

    public Post readPost(Integer id) {
        return em.find(Post.class, id);
    }

    public void deletePost(Post post) {
        em.remove(post);
    }
}
I would like to be able to create/update a collection of Post entities in parallel, in a single transaction. The following approach does not work, because the container creates a new transaction for each thread in the pool:
import java.util.Collection;
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class BusinessBean {

    @Inject
    private CrudService crudService;

    public void savePosts(Collection<Post> posts) {
        posts.parallelStream().forEach(post -> crudService.createPost(post));
    }
}
Is there a way to do it?
The code runs on WildFly, with a Hibernate persistence unit and a PostgreSQL database.
The straight "here is an answer" answer.
Not generically. Have a look at the answers to this question: Is it discouraged using Java 8 parallel streams inside a Java EE container?
The annoying "XY problem" answer.
How would you expect this to work? Most databases don't support multiple parallel transactions on a single database connection, and I don't believe PostgreSQL supports it: https://stackoverflow.com/a/289057/924597
So something or somebody (the JEE container, JDBC, the driver, etc.) would have to open multiple DB connections to achieve this, which I think is what you're saying is happening. If you do this across many different business actions, it will likely exhaust your connection pool pretty quickly.
In the spirit of this being an "XY problem" answer: what problem are you actually trying to solve?
If it's just a raw throughput problem, consider batching your inserts.
If it's a bulk-insert problem, consider making an end run around your container and using a different tool; JEE containers usually aren't meant for, or good at, this kind of thing.
I have a legacy EF 4 library with a database-first generated ObjectContext and EntityObjects. I'd like to slowly migrate to DbContext and am in need of some guidance.
One of the overloads of the DbContext constructor takes an existing ObjectContext. I thought this would let me wrap my existing ObjectContext in a DbContext and expose my existing EntityObjects through IDbSet properties. Unfortunately, when the DbContext is created, the IDbSet properties are not created; instead an exception is thrown with the message: "Verify that the type was defined as a class, is not primitive, nested or generic, and does not inherit from EntityObject."
Is there no way to use a DbContext with IDbSet properties over an existing ObjectContext and EntityObjects? It seems strange that I can create a DbContext from an ObjectContext but not expose the entities themselves.
Here's some sample code:
public class MyDbContext : DbContext
{
    public MyDbContext(MyObjectContext objectContext, bool dbContextOwnsObjectContext)
        : base(objectContext, dbContextOwnsObjectContext)
    {
    }

    public IDbSet<Person> People { get; set; }
}
The exception is caused by the DbContext trying to create the IDbSet. Person is an EntityObject from my existing EDMX.
As you noted, you cannot use the existing entity objects that the EDMX created.
Have you considered using the Reverse Engineer Code First power tools?
I've used them on several projects; it's as simple as pointing the tool at your database, letting it generate the model in the same project, and then removing the EDMX file.
I have a hierarchical DbContext structure, in which a specialized DbContext with its own DbSets should inherit the DbSets of a BaseDbContext.
When accessing the underlying ObjectContext with ((IObjectContextAdapter)this).ObjectContext, it takes too long (several minutes) to get the ObjectContext back.
Is there an issue with DbContext in CTP5 that makes getting an ObjectContext from a derived DbContext this slow?
The structure is: DbContext (EF4) -> myBaseDbContext -> mySpecializedDbContext.
Does anyone have an idea of what's going on in this scenario?
It's just POCO (Code First) with TPC and a little inheritance.
I didn't have performance issues with the following, and you don't have that many DbSets:
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Objects;

public class MyContext : DbContext
{
    // your DbSet<> properties and other members

    public ObjectContext ObjectContext()
    {
        return (this as IObjectContextAdapter).ObjectContext;
    }
}
I'm trying to implement the repository pattern with EF4 CTP5. I came up with something, but I'm no EF expert, so I want to know if what I did is any good.
This is my DbContext:
public class Db : DbContext
{
    public DbSet<User> Users { get; set; }
    public DbSet<Role> Roles { get; set; }
}
and the repository (simplified):
public class Repo<T> : IRepo<T> where T : Entity, new()
{
    private readonly DbContext context;

    public Repo()
    {
        context = new Db();
    }

    public IEnumerable<T> GetAll()
    {
        return context.Set<T>().AsEnumerable();
    }

    public long Insert(T o)
    {
        context.Set<T>().Add(o);
        context.SaveChanges();
        return o.Id;
    }
}
You need to step back and think about what the repository should be doing. A repository is used for retrieving, adding, and updating records. The repository you created barely handles the first case, handles the second case but not efficiently, and doesn't handle the third case at all.
Most generic repositories have an interface along the lines of:
public interface IRepository<T> where T : class
{
    IQueryable<T> Get();
    void Add(T item);
    void Delete(T item);
    void CommitChanges();
}
For retrieving records, you can't just return the whole set with AsEnumerable(), because that loads every row of the table into memory. If you only want users with the username "username1", you don't need to download every user from the database; that would be a large performance hit on the database, and on the client, for no benefit at all.
Instead, as you can see from the interface above, you want to return an IQueryable<T>. An IQueryable lets whatever calls the repository use LINQ to add filters to the database query, and when the IQueryable is executed it runs entirely on the database, retrieving only the records you want. The database is much better at sorting and filtering data than your application, so it's best to do as much on the DB as you can.
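For example, filtering through the IQueryable; the User entity is from the question's context, while the Username property is an assumption:

using System.Linq;

public class UserQueries
{
    private readonly IRepository<User> _users;

    public UserQueries(IRepository<User> users)
    {
        _users = users;
    }

    public User FindByUsername(string username)
    {
        // The filter composes into the SQL sent to the database, so only the
        // matching row is materialized, not the whole Users table.
        return _users.Get()
            .SingleOrDefault(u => u.Username == username);
    }
}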
Now, in regards to inserting data, you have the right idea, but you don't want to call SaveChanges() immediately. It's best to call SaveChanges() after all your DB operations have been queued. For example, if you want to create a user and his profile in one action, you can't with your method, because each Insert call pushes the data to the database right away.
Instead, separate the SaveChanges() call out into the CommitChanges method shown above.
This is also needed for updating data in your database. To change an entity's data, Entity Framework keeps track of all records it has retrieved and watches them for changes. However, you still have to tell Entity Framework to send all changed data to the database, which happens with the context.SaveChanges() call. You therefore need this to be a separate call so you can actually persist edited data, which your current implementation does not handle.
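To make the user-and-profile example concrete, here is a sketch; the Profile entity is hypothetical, and it assumes both repositories share the same context instance (see the edit below):

public class RegistrationService
{
    private readonly IRepository<User> _users;
    private readonly IRepository<Profile> _profiles;

    public RegistrationService(IRepository<User> users, IRepository<Profile> profiles)
    {
        _users = users;
        _profiles = profiles;
    }

    public void Register(User user, Profile profile)
    {
        // Both inserts are only queued here...
        _users.Add(user);
        _profiles.Add(profile);

        // ...and sent to the database together, in one SaveChanges call.
        _users.CommitChanges();
    }
}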
Edit:
Your comment made me realize another issue: you are creating the data context inside the repository, and that isn't good. All (or most) of your repositories should share the same instance of the data context.
Entity Framework keeps track of which context an entity is tracked by, and it will throw an exception if you attempt to update an entity from one context in another. This can happen in your situation once you start editing entities related to one another. It also means your SaveChanges() call is not transactional: each entity is updated/added/deleted in its own transaction, which can get messy.
My solution in my repositories is to pass the DbContext into the repository through its constructor.
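Putting those pieces together, a sketch of the generic repository with the context injected, implementing the interface above:

using System.Data.Entity;
using System.Linq;

public class Repository<T> : IRepository<T> where T : class
{
    private readonly DbContext context;

    // The context is created outside the repository (for example, one instance
    // per web request) so that all repositories share the same unit of work.
    public Repository(DbContext context)
    {
        this.context = context;
    }

    public IQueryable<T> Get()
    {
        return context.Set<T>();
    }

    public void Add(T item)
    {
        context.Set<T>().Add(item);
    }

    public void Delete(T item)
    {
        context.Set<T>().Remove(item);
    }

    public void CommitChanges()
    {
        context.SaveChanges();
    }
}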
I may get voted down for this, but the DbContext already is a repository. When you expose your domain models as collection properties of your concrete DbContext, EF CTP5 creates a repository for you. It presents a collection-like interface for accessing domain models while allowing you to pass queries (as LINQ, or spec objects) to filter the results.
If you need an interface, CTP5 doesn't provide one for you. I've wrapped my own around the DbContext and simply exposed the publicly available members of the object. It's an adapter for testability and DI.
I'll comment for clarification if what I said isn't apparently obvious.
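A rough sketch of what such a wrapper can look like, reusing the Db, User, and Role types from the question; the interface shape here is illustrative, not a CTP5 API:

using System.Linq;

// Adapter interface: exposes only the members the application needs,
// so the concrete DbContext can be swapped or mocked in tests.
public interface IDb
{
    IQueryable<User> Users { get; }
    IQueryable<Role> Roles { get; }
    void Add<T>(T entity) where T : class;
    int SaveChanges();
}

public class DbAdapter : IDb
{
    private readonly Db _db;

    public DbAdapter(Db db)
    {
        _db = db;
    }

    public IQueryable<User> Users
    {
        get { return _db.Users; }
    }

    public IQueryable<Role> Roles
    {
        get { return _db.Roles; }
    }

    public void Add<T>(T entity) where T : class
    {
        _db.Set<T>().Add(entity);
    }

    public int SaveChanges()
    {
        return _db.SaveChanges();
    }
}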