Our system contains multiple databases with an identical schema. Their connection strings are stored in some storage, so we create a DbContext instance each time we need to connect to one of those databases. Something like this:
public ICdsDataContext CreateContext(long storageId)
{
    // get options to connect to a database (connection strings, etc)
    var storageDbOptions = _storageDbOptionsFactory.Create(storageId);
    // create an EF context
    return CreateContext(storageDbOptions);
}
We would like to improve performance, so we want to use PooledDbContextFactory<T>. Our CreateContext method looks like this:
private CdsDataContext CreateContext(IStorageDbOptions storageDbOptions)
{
    // get context factory with pooling support
    var pooledDbContextFactory = GetPooledDbContextFactory(storageDbOptions);
    // create and return a new context from the pool
    return pooledDbContextFactory.CreateDbContext();
}
We have a lot of databases, so we want to cache each PooledDbContextFactory for a while. If a connection to a database is not used for some time, the factory for that connection string is evicted from the cache. We create the factories manually because the databases differ. The lifetime of a PooledDbContextFactory can therefore be much shorter than the application lifetime.
private PooledDbContextFactory<CdsDataContext> GetPooledDbContextFactory(IStorageDbOptions storageDbOptions)
{
    // here go some checks
    ...
    // fetch the pooling factory from the cache, based on params from storageDbOptions
    var cacheKey = $"cds_context_factory_{connectionString}_need_logging_{needLogging}_timeout_{commandTimeout}";
    return _memoryCache.GetOrCreate(cacheKey, entry =>
    {
        // if no factory was found in the cache, create a new one
        // set TTL
        entry.SlidingExpiration = CacheTtl;
        // create options
        var optionsBuilder = new DbContextOptionsBuilder<CdsDataContext>();
        optionsBuilder
            .UseNpgsql(connectionString)
            .UseLazyLoadingProxies();
        // return a new factory
        return new PooledDbContextFactory<CdsDataContext>(optionsBuilder.Options);
    });
}
Of course, the context is wrapped in a using block.
using (var cdsContext = _cdsDataContextFactory.CreateContext(projectInfoData.CdsStorageId))
{
    // some logic goes here
}
But PooledDbContextFactory has no Dispose method, and IDbContextPool doesn't declare one either. However, DbContextPool, which implements IDbContextPool, also implements IDisposable and IAsyncDisposable.
Do we need to dispose the PooledDbContextFactory and its internal pool manually, or is there no risk of memory and connection leaks?
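One defensive option we considered is registering a post-eviction callback on the cache entry. This is only a sketch: today the cast is a no-op precisely because PooledDbContextFactory does not implement IDisposable, but it would pick up a Dispose method added in a future EF Core version:

return _memoryCache.GetOrCreate(cacheKey, entry =>
{
    entry.SlidingExpiration = CacheTtl;
    // dispose the evicted factory if it is ever disposable; currently a no-op
    entry.RegisterPostEvictionCallback((key, value, reason, state) =>
        (value as IDisposable)?.Dispose());
    var optionsBuilder = new DbContextOptionsBuilder<CdsDataContext>();
    optionsBuilder
        .UseNpgsql(connectionString)
        .UseLazyLoadingProxies();
    return new PooledDbContextFactory<CdsDataContext>(optionsBuilder.Options);
});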
I have a .NET Core 3.1 application which consists of an AWS API Gateway, a set of Lambda functions, and an RDS Aurora PostgreSQL database for reading and writing. This application uses Entity Framework Core for the database connection and query execution. I am trying to understand which of the two available connection methods is the correct one to use for performance, efficiency, and best use of resources within the context of AWS Lambda containers / serverless.
Option 1)
Maintain a single DbContext connection, opened outside the function handler(s) of my Lambda functions, and passed around in and out of all the methods requiring said connection. E.g.
public class Functions
{
    private readonly loyaltyContext loyaltyContext;

    public Functions()
    {
        loyaltyContext = new loyaltyContext();
    }
}
Then within each handler:
Refer to it:
var testEfCore = (from c in loyaltyContext.ContactInformation
                  select c.ContactInternalId).Take(1);
Pass it into other methods:
var duplicateError = await BusinessRules.DuplicateCustomerCheck(customer, loyaltyContext);
Option 2)
Create, use and dispose the DbContext at each point it is required, within the handler, and the methods and functions it calls, e.g.
public static bool IsCardExist(string cardNumber, string cardState)
{
    using (var loyalty = new loyaltyContext())
    {
        var existingCard = (from c in loyalty.ExternalCards
                            where c.CardNumber == cardNumber
                               && c.CardStatus == cardState
                            select c.CardNumber).Any();
        return existingCard;
    }
}
public static void SomeOtherFunctionCalledByHandler()
{
    using (var loyalty = new loyaltyContext())
    {
    }
}
And so on.
I am building a Sails/Waterline adapter for a REST-like datasource. In order to return instances to Waterline I need to transform the result to handle things like dates and null. To do this I need access to the attribute definitions on the model, but I can't figure out how to get access to them.
sails-rest appears to somehow store a definition object on the connection and then uses it later to format results. This is pretty much what I need, but I do not see how this definition object is derived in the first place.
How can a Waterline adapter get access to the attributes defined on the model?
found it!
The registerConnection method gets the collections argument.
That object contains all the models and their definitions. Store it on the connection so you can reference it later in the other adapter methods.
registerConnection: function(connection, collections, cb) {
  if (!connection.identity) return cb(new Error('Connection is missing an identity.'));
  if (connections[connection.identity]) return cb(new Error('Connection is already registered.'));

  // Add in logic here to initialize connection
  // e.g. connections[connection.identity] = new Database(connection, collections);
  var dbConnection = '... create connection here ...';
  connections[connection.identity] = {
    dbConnection: dbConnection,
    collections: collections // <-- store collections
  };
  cb();
}
...later in the other functions where you need the model definition
create: function(connection, collection, values, cb) {
  // database connection
  var dbConnection = connections[connection].dbConnection;
  // model definition
  var definition = connections[connection].collections[collection].definition;
  // do the rest of the stuff
}
In the following case where two DbContexts are nested due to method calls:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B();
        //...do some more work here
    }
}

public void Method_B() {
    using (var db = new SomeDbContext()) {
        //...do some work
    }
}
Question:
Will this nesting cause any issues? (and will the correct DbContext be disposed at the correct time?)
Is this nesting considered bad practice? Should Method_A be refactored into:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
    }

    Method_B();

    using (var db = new SomeDbContext()) {
        //...do some more work here
    }
}
Thanks.
Your DbContext derived class is actually managing at least three things for you here:
the metadata that describes your database and your entity model,
the underlying database connection, and
a client side "cache" of entities loaded using the context, for change tracking, relationship fixup, etc. (Note that although I term this a "cache" for want of a better word, this is generally short lived and is just to support EFs functionality. It's not a substitute for proper caching in your application if applicable.)
Entity Framework generally caches the metadata (item 1) so that it is shared by all context instances (or, at least, all instances that use the same connection string). So here that gives you no cause for concern.
As mentioned in other comments, your code results in using two database connections. This may or may not be a problem for you.
You also end up with two client caches (item 3). If you happen to load an entity from the outer context, then again from the inner context, you will have two copies of it in memory. This would definitely be confusing, and could lead to subtle bugs. This means that, if you don't want to use shared context objects, then your option 2 would probably be better than option 1.
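A quick illustration of those two client caches (Customers here is an illustrative entity set, standing in for whatever your context exposes):

using (var outer = new SomeDbContext())
using (var inner = new SomeDbContext())
{
    var a = outer.Customers.First(c => c.Id == 1);
    var b = inner.Customers.First(c => c.Id == 1);

    // Two distinct objects tracking the same row: changes made through one
    // context are invisible to the other until saved and reloaded.
    Console.WriteLine(ReferenceEquals(a, b)); // false
}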
If you are using transactions, there are further considerations. Having multiple database connections is likely to result in transactions being promoted to distributed transactions, which is probably not what you want. Since you didn't make any mention of db transactions, I won't go into this further here.
So, where does this leave you?
If you are using this pattern simply to avoid passing DbContext objects around in your code, then you would probably be better off refactoring Method_B to receive the context as a parameter. The question of how long-lived context objects should be comes up repeatedly. As a rule of thumb, create a new context for a single database operation or for a series of related database operations. (See, for example, this blog post and this question.)
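A minimal sketch of that refactoring, reusing the names from the question:

public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B(db); // reuse the same context and connection
        //...do some more work here
    }
}

// Method_B no longer owns the context, so it must not dispose it.
public void Method_B(SomeDbContext db) {
    //...do some work with db
}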
(As an alternative, you could add a constructor to your DbContext derived class that receives an existing connection. Then you could share the same connection between multiple contexts.)
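In EF6, for example, DbContext has a constructor overload for exactly this; a sketch, assuming the caller manages the connection's lifetime:

public class SomeDbContext : DbContext
{
    // contextOwnsConnection: false means disposing the context will not
    // close or dispose the shared connection; the caller keeps ownership.
    public SomeDbContext(DbConnection existingConnection)
        : base(existingConnection, contextOwnsConnection: false)
    {
    }
}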
One useful pattern is to write your own class that creates a context object and stores it as a private field or property. Then you make your class implement IDisposable and its Dispose() method disposes the context object. Your calling code news up an instance of your class, and doesn't have to worry about contexts or connections at all.
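For example (a sketch; the class, entity set, and method names are illustrative):

public class CustomerRepository : IDisposable
{
    private readonly SomeDbContext _db = new SomeDbContext();

    public bool CustomerExists(int id)
    {
        return _db.Customers.Any(c => c.Id == id);
    }

    // Disposing the repository disposes the context (and its connection).
    public void Dispose()
    {
        _db.Dispose();
    }
}

// Calling code:
using (var repository = new CustomerRepository())
{
    bool exists = repository.CustomerExists(1);
}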
When might you need to have multiple contexts active at the same time?
This can be useful when you need to write code that is multi-threaded. A database connection is not thread-safe, so you must only ever access a connection (and therefore an EF context) from one thread at a time. If that is too restrictive, you need multiple connections (and contexts), one per thread. You might find this interesting.
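A minimal sketch of the one-context-per-thread rule (again with an illustrative Customers set):

Parallel.For(0, 4, i =>
{
    // Each worker creates its own context, and therefore its own
    // connection; sharing one context across these lambdas is not safe.
    using (var db = new SomeDbContext())
    {
        Console.WriteLine("worker {0}: {1} rows", i, db.Customers.Count());
    }
});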
You can alter your code by passing the context to Method_B. If you do so, creating the second SomeDbContext will not be necessary.
There is a question and answer on Stack Overflow at this link:
Proper use of "Using" statement for datacontext
This is a bit of a late answer, but people may still be looking, so here is another way.
Create a class that takes care of disposing for you. In some scenarios, a function may be called from different places in the solution. This way you avoid creating multiple instances of DbContext, and you can nest calls as deeply as you like.
Here is a simple example.
public class SomeContext : SomeDbContext
{
    protected int UsingCount = 0;

    public static SomeContext GetContext(SomeContext context)
    {
        if (context != null)
        {
            context.UsingCount++;
        }
        else
        {
            context = new SomeContext();
        }
        return context;
    }

    private SomeContext()
    {
    }

    protected bool MyDisposing = true;

    protected override void Dispose(bool disposing)
    {
        if (UsingCount == 0)
        {
            base.Dispose(MyDisposing);
            MyDisposing = false;
        }
        else
        {
            UsingCount--;
        }
    }

    public override int SaveChanges()
    {
        if (UsingCount == 0)
        {
            return base.SaveChanges();
        }
        else
        {
            return 0;
        }
    }
}
Example of usage
public class ExampleNesting
{
    public void MethodA()
    {
        using (var context = SomeContext.GetContext(null))
        {
            // manipulate, save it, just do not call Dispose on context in using
            MethodB(context);
        }
        MethodB();
    }

    public void MethodB(SomeContext someContext = null)
    {
        using (var context = SomeContext.GetContext(someContext))
        {
            // manipulate, save it, just do not call Dispose on context in using
            // Even more nested functions if you'd like
        }
    }
}
Simple and easy to use.
If you consider the number of connections to the database, and the cost of the times a new connection must be opened, to be unimportant, and nothing limits your application from running at its best performance, then everything is OK.
Your code works well. Merely creating a DbContext has a low performance impact: the metadata is cached after the first load, and a connection to the database is only opened when the code needs to execute a query. With a little performance consideration and code design, I suggest making a context factory that holds just one instance of each DbContext type per instance of your application.
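A minimal sketch of that factory idea (the names are illustrative; note that a shared DbContext instance is not thread-safe, so this only suits single-threaded use):

public static class ContextFactory
{
    // One lazily created instance per context type for the whole
    // application lifetime, as suggested above.
    private static readonly Lazy<SomeDbContext> _context =
        new Lazy<SomeDbContext>(() => new SomeDbContext());

    public static SomeDbContext Context
    {
        get { return _context.Value; }
    }
}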
You can take a look at this link for more performance considerations
http://msdn.microsoft.com/en-us/data/hh949853
I'm having trouble with one of my queries because of EF's change tracking and lazy loading features. The thing is that after I get the result of the query, I use AutoMapper to map the domain objects into my business model, but it keeps throwing an exception because the context has been disposed.
The ObjectContext instance has been disposed and can no longer be used
for operations that require a connection.
When I look at the resulting collection in the debugger, I see that it is a list of DynamicProxy objects and not the actual entities. I tried to stop change tracking, but that did not help. Here's my code:
public List<ContentTypeColumn> GetContentTypeColumns(Int64 contentTypeId)
{
    List<ContentTypeColumn> result = new List<ContentTypeColumn>();
    using (SCGREDbContext context = new SCGREDbContext())
    {
        ContentType contentType = context.ContentTypes.Include("Parent")
            .AsNoTracking()
            .FirstOrDefault(x => x.Id.Equals(contentTypeId));
        result.AddRange(contentType.ContentTypeColumns.ToList());
        while (contentType.Parent != null)
        {
            result.AddRange(contentType.Parent.ContentTypeColumns.ToList());
            contentType = contentType.Parent;
        }
    }
    return result.ToList();
}
Note: If you need to look into my domain model involved in this operation you can refer to this question.
If you need to stop lazy loading and dynamic proxy creation, you can simply turn them off:
using (SCGREDbContext context = new SCGREDbContext())
{
    context.Configuration.ProxyCreationEnabled = false;
    ...
}
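Applied to the query from the question, the start would look something like this; note that with proxies disabled nothing loads lazily, so any navigation you want to walk (such as the Parent chain) has to be eagerly loaded with Include or loaded explicitly:

using (SCGREDbContext context = new SCGREDbContext())
{
    context.Configuration.ProxyCreationEnabled = false;

    // Eager-load the navigations that were previously loaded lazily.
    ContentType contentType = context.ContentTypes
        .Include("Parent")
        .Include("ContentTypeColumns")
        .FirstOrDefault(x => x.Id.Equals(contentTypeId));
}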
I'm currently working on a project which is using EF Code First with POCOs. I have 5 POCOs that so far depend on the POCO "User".
The POCO "User" should refer to my already existing Membership table "aspnet_Users" (which I map it to in the OnModelCreating method of the DbContext).
The problem is that I want to take advantage of the "Recreate Database If Model Changes" feature that Scott Gu shows at http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx - what the feature basically does is recreate the database as soon as it sees any changes in my POCOs. What I want it to do is to recreate the database but somehow NOT delete the whole database, so that aspnet_Users is still alive. However, it seems impossible, as it either makes a whole new database or replaces the current one with..
So my question is: am I doomed to define my database tables by hand, or can I somehow merge my POCOs into my current database and still make use of the feature without wiping it all?
As of EF Code First in CTP5, this is not possible. Code First will either drop and create your database or not touch it at all. I think in your case, you should manually create your full database and then try to come up with an object model that matches the DB.
That said, the EF team is actively working on the feature that you are looking for: altering the database instead of recreating it:
Code First Database Evolution (aka Migrations)
I was just able to do this in EF 4.1 with the following considerations:
CodeFirst
DropCreateDatabaseAlways
keeping the same connection string and database name
The database is still deleted and recreated - it has to be for the schema to reflect your model changes - but your data remains intact.
Here's how: you read your database into your in-memory POCO objects, and then, after the POCO objects have successfully made it into memory, you let EF drop and recreate the database. Here is an example:
public class NorthwindDbContextInitializer : DropCreateDatabaseAlways<NorthwindDbContext> {

    /// <summary>
    /// Connection from which to read the data, to insert into the new database.
    /// Not the same connection instance as the DbContext, but may have the same connection string.
    /// </summary>
    DbConnection connection;
    Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> map;

    public NorthwindDbContextInitializer(DbConnection connection, Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> map = null) {
        this.connection = connection;
        this.map = map ?? ReadDataIntoMemory();
    }

    // read data into memory BEFORE the database is dropped
    Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> ReadDataIntoMemory() {
        Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable> map = new Dictionary<Tuple<PropertyInfo, Type>, System.Collections.IEnumerable>();
        switch (connection.State) {
            case System.Data.ConnectionState.Closed:
                connection.Open();
                break;
        }
        using (this.connection) {
            var metaquery = from p in typeof(NorthwindDbContext).GetProperties().Where(p => p.PropertyType.IsGenericType)
                            let elementType = p.PropertyType.GetGenericArguments()[0]
                            let dbsetType = typeof(DbSet<>).MakeGenericType(elementType)
                            where dbsetType.IsAssignableFrom(p.PropertyType)
                            select new Tuple<PropertyInfo, Type>(p, elementType);
            foreach (var tuple in metaquery) {
                map.Add(tuple, ExecuteReader(tuple));
            }
            this.connection.Close();
            // Call Delete explicitly; if you let the framework do this implicitly,
            // it will complain that the connection is in use.
            Database.Delete(this.connection);
        }
        return map;
    }

    protected override void Seed(NorthwindDbContext context) {
        foreach (var keyvalue in this.map) {
            foreach (var obj in (System.Collections.IEnumerable)keyvalue.Value) {
                PropertyInfo p = keyvalue.Key.Item1;
                dynamic dbset = p.GetValue(context, null);
                dbset.Add(((dynamic)obj));
            }
        }
        context.SaveChanges();
        base.Seed(context);
    }

    System.Collections.IEnumerable ExecuteReader(Tuple<PropertyInfo, Type> tuple) {
        DbCommand cmd = this.connection.CreateCommand();
        cmd.CommandText = string.Format("select * from [dbo].[{0}]", tuple.Item2.Name);
        DbDataReader reader = cmd.ExecuteReader();
        using (reader) {
            ConstructorInfo ctor = typeof(Test.ObjectReader<>).MakeGenericType(tuple.Item2)
                                                              .GetConstructors()[0];
            ParameterExpression p = Expression.Parameter(typeof(DbDataReader));
            LambdaExpression newlambda = Expression.Lambda(Expression.New(ctor, p), p);
            System.Collections.IEnumerable objreader = (System.Collections.IEnumerable)newlambda.Compile().DynamicInvoke(reader);
            MethodCallExpression toArray = Expression.Call(typeof(Enumerable),
                "ToArray",
                new Type[] { tuple.Item2 },
                Expression.Constant(objreader));
            LambdaExpression lambda = Expression.Lambda(toArray, Expression.Parameter(typeof(IEnumerable<>).MakeGenericType(tuple.Item2)));
            var array = (System.Collections.IEnumerable)lambda.Compile().DynamicInvoke(new object[] { objreader });
            return array;
        }
    }
}
This example relies on an ObjectReader class, which you can find here if you need it.
I wouldn't bother with the blog articles; read the documentation.
Finally, I would still suggest you always back up your database before running the initialization. (e.g. if the Seed method throws an exception, all your data is in memory, so you risk your data being lost once the program terminates.) A model change isn't exactly an afterthought action anyway, so be sure to back your data up.
One thing you might consider is to use a 'disconnected' foreign key. You can leave the ASPNETDB alone and just reference the user in your DB using the User key (guid). You can access the logged in user as follows:
MembershipUser currentUser = Membership.GetUser(User.Identity.Name, true /* userIsOnline */);
And then use the User's key as a FK in your DB:
Guid UserId = (Guid)currentUser.ProviderUserKey;
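In a Code First POCO this might look like the following (the entity and property names are illustrative):

public class Order
{
    public int Id { get; set; }

    // "Disconnected" foreign key: stores the membership user's key with
    // no navigation property and no database-level referential constraint.
    public Guid UserId { get; set; }
}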
This approach decouples your DB from the ASPNETDB and the associated provider architecturally. However, operationally the data will of course be loosely connected, since the IDs will be in each DB. Note also that there will be no referential constraints, which may or may not be an issue for you.