I have a problem using Entity Framework and COM+. The architecture is this:
A Windows service calls four methods of a COM+ component every n minutes.
The COM+ component then calls the corresponding methods on a Business Layer.
The Business Layer calls the DAL, and the DAL returns a list to the Business Layer. This is done by calling .ToList().
When I try running the service, the DAL methods throw a timeout inner exception. When I try to view the table from Enterprise Manager, it times out as well! From what I've seen, the SELECT statements block the other connection instances.
Has anyone else experienced similar problems?
P.S. I cannot post any code yet because I am not at my work... Will do so tomorrow.
Well, as it turns out, Entity Framework had nothing to do with any of the above.
The problem was within COM+. I should have ended each COM+ method with ContextUtil.SetComplete(). Apparently I didn't do that, so the transaction stayed active and, after the first few calls, it locked my DB.
e.g.
Using MyEntity As New EntityObject
    MyEntity.Connection.Open()
    Dim rows = From tbl In MyEntity.MyTable _
               Select tbl
    list = rows.ToList()
End Using
ContextUtil.SetComplete()
Please note that if an exception occurs, you should call ContextUtil.SetAbort() instead. I should also note that the code above is mixed: part of it belongs to my DAL and part to my COM+ component. I put it together like that to make the example clearer...
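To make the commit/abort pairing concrete, here is a hedged C# sketch of the pattern (the class and method names are illustrative placeholders, not the poster's actual code):

```csharp
// Sketch only: assumes a ServicedComponent-derived COM+ class.
// CustomerComponent and GetCustomers are illustrative names.
using System.EnterpriseServices;

public class CustomerComponent : ServicedComponent
{
    public void GetCustomers()
    {
        try
        {
            // ... call the business layer / DAL here ...

            ContextUtil.SetComplete(); // vote to commit; the COM+ transaction can end
        }
        catch
        {
            ContextUtil.SetAbort();    // vote to roll back on any failure
            throw;
        }
    }
}
```

Wrapping every COM+ method body this way guarantees the transaction always gets a commit or abort vote, so it cannot be left open to hold locks on the database.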
Related
I have the next setup: WCF Web Services hosted in IIS. Entity Framework 6 used to retrieve data from the DB. Web Services are initialized in the Global.asax.cs, which inherits from NinjectHttpApplication (so we use ninject for dependency injection). In this NinjectHttpApplication, on the CreateKernel method we bind the EF DbContext as follows:
protected override IKernel CreateKernel()
{
    var kernel = new StandardKernel();
    kernel.Bind<DbContext>().To<MyCustomContext>().InTransientScope();
    return kernel;
}
Then, every time a service is called, the context is obtained as follows in its constructor:
_context = kernel.Get<DbContext>();
Then, the service retrieves data from the DB as follows:
data = _context.Set<TEntity>().Where(<whatever filter>);
Having said that, my problem is this: I have a service which is called many times (with a complex and long query with multiple joins), and every time it is called, EF takes ages to produce the SQL to send to the DB as a result of the LINQ to Entities that I've coded. The execution of the query in the DB is nothing (600 milliseconds), but EF takes ages to produce the SQL every single time this service is called. I suspect this is because kernel.Bind<DbContext>().To<MyContext>().InTransientScope() forces EF to create a new instance of the DbContext every time there is a call.
I've made a few tests with unit tests and the behavior is totally different: if you instantiate the service multiple times from the same unit test method and call it, EF takes a long time to produce the query only the first time; it then takes no time to produce SQL for the subsequent calls (same query, but with different parameters to filter the data to retrieve). From the unit test, CreateKernel() is of course only called once, in the Initialize() method (like in the web service's Global.asax.cs), so I don't know what is provoking this huge delay. I suspect EF is able to keep/cache the pre-compiled query with the unit test approach but not in the real web application. Any clue why?
Please note that the LINQ to Entities query is parameterized (strings and a date are the parameters).
Any help very appreciated.
I see you bind your DbContext in InTransientScope, which means every time you get a DbContext from Ninject, it will create a new DbContext for you.
You could consider using InThreadScope() instead of InTransientScope(), which means Ninject will return the same instance as long as you are on the same thread.
There is also singleton scope, which means always returning the same instance, but this will make the DbContext grow too big.
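As a hedged sketch of that suggestion (reusing the CreateKernel shape and MyCustomContext name from the question), the binding change would look like this:

```csharp
// Sketch: same CreateKernel as in the question, with only the scope changed.
protected override IKernel CreateKernel()
{
    var kernel = new StandardKernel();
    // One DbContext per thread instead of one per resolution.
    kernel.Bind<DbContext>().To<MyCustomContext>().InThreadScope();
    return kernel;
}
```

One caveat worth weighing: in an IIS-hosted application, threads come from a pool and outlive individual requests, so a thread-scoped DbContext can survive longer than you expect; a per-request scope (Ninject's InRequestScope(), from the Ninject.Web.Common extension) is often the safer middle ground between transient and singleton.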
I am using EclipseLink 2.3.0. I have a method that I am calling from a unit test (hence outside of a container, no JTA) that looks like this:
EntityManager em = /* get an entity manager */;
em.getTransaction().begin();
// make some changes
em.getTransaction().commit();
The changes were NOT being persisted to the database. I looked at this for a long time and finally realized that EntityManager.getTransaction() is actually returning a NEW EntityTransaction, rather than the same one in both calls. The effect is that the first call creates a new transaction and begins it, and the second call creates ANOTHER transaction and commits it. Because the first transaction was never committed, the changes are not saved. We verified this like so:
log.info(em.getTransaction().toString());
log.info(em.getTransaction().toString());
Which resulted in these log messages:
INFO: org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl#1e34f445
INFO: org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl#706a4d1a
The two different object IDs verify that these are two different instances. Changing the code to this:
EntityManager em = /* get an entity manager */;
EntityTransaction tx = em.getTransaction();
tx.begin();
// make some changes
tx.commit();
... remedied the problem. Now when I run the code, I see the SQL statements generated to do the database work, and looking in the database, the data has been changed.
I was a bit surprised by this outcome, since I have seen numerous code examples online (for JPA generally and for EclipseLink specifically) that recommend the code we used for managing transactions. I have searched far and wide for information specifically about this but have not found anything. So what's going on?
I looked in the JPA spec for something that specifies exactly what getTransaction() does, and it is not specific about whether the returned transaction is new or the same one. Is there a setting in persistence.xml that controls this? Is the behavior specific to each implementation of the JPA spec?
Thanks so much for any information or guidance.
Using getTransaction() does work in JPA and in EclipseLink (this is how our own tests work).
My guess is you are doing something else very odd.
Are you using Spring, or another layer?
Please include the entire code and persistence.xml for your test. Ensure that you are not using JTA in your persistence.xml.
The JPA spec (see paragraph 7.5.4) has explicit examples showing the use of getTransaction() to begin and commit the transaction. So your code should be fine.
Your test shows that you get two different objects, but that doesn't mean the same transaction is not used. Maybe the returned object is just some proxy to a single, real, transaction object.
Or maybe the transaction is committed or rolled back inside the code hidden under // make some changes.
Have you tried calling persist before the commit?
Employee employee = new Employee("Samuel", "Joseph", "Wurzelbacher");
em.getTransaction().begin();
em.persist(employee);
em.getTransaction().commit();
We have a lot of legacy code that uses our own data object. We are slowly trying to introduce EF. We need the ability to enlist EF into a transaction we already started using System.Data.SqlClient.SQLTransaction. EF of course uses System.Transaction.Transaction. Is this possible?
To make things more clear. We have code all over the place that does the following:
Public Sub DeleteEntity()
    Dim InTransaction = ado.InTransaction
    If Not InTransaction Then ado.BeginTran()
    ...
    ' <-- want to use EF here
    ...
    If Not InTransaction Then ado.CommitTran()
End Sub
The DeleteEntity routine is not simple; it has a lot of logic. I want to use EF for just one thing in the middle of the code, so I need to enlist it in the active transaction. I can't just use a TransactionScope because of how the code is designed: DeleteEntity is called in lots of places and I don't want to visit every place that calls the routine. This has more to do with System.Transactions.Transaction and SqlTransaction than it does with EF itself.
Update: I tried:
context.connection.EnlistTransaction(Transaction.Current)
That doesn't work.
I may be wrong, but have you considered using ObjectQuery instead of LINQ to Entities for the delete operation? Here is what I found on MSDN:
Promotion of a transaction to a DTC may occur when a connection is closed and reopened within a single transaction. Because the Entity Framework opens and closes the connection automatically, you should consider manually opening and closing the connection to avoid transaction promotion. For more information, see How to: Manually Open the Connection from the Object Context.
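A hedged sketch of that MSDN advice (EF 4-era ObjectContext API; MyEntities is a placeholder context name):

```csharp
// Sketch: keeping the connection open for the context's lifetime
// prevents EF from closing and reopening it mid-transaction, which
// is what can promote the transaction to the DTC.
using (var context = new MyEntities())
{
    context.Connection.Open(); // open manually, once, up front

    // ... perform the EF work inside the ambient transaction ...

    context.SaveChanges();
}   // disposing the context closes the connection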
I need to copy data from one database to another with EF. E.g. I have the following table relations: Forms->FormVersions->FormLayouts... We have different forms in both databases and we want to collect them into one DB. Basically I want to load a Form object recursively from one DB and save it to another DB with all its references. I also need to change the IDs of the object and its related objects if objects with the same ID already exist in the second database.
So far I have the following code:
Form form = null;
using (var context = new FormEntities())
{
    form = (from f in context.Forms
            join fv in context.FormVersions on f.ID equals fv.FormID
            where f.ID == 56
            select f).First();
}
var context1 = new FormEntities("name=FormEntities1");
context1.AddObject("Forms", form);
context1.SaveChanges();
I'm receiving the error: "The EntityKey property can only be set when the current value of the property is null."
Can you help with implementation?
The simplest solution would be to create a copy of your Form (a new object) and add that new object. Otherwise you can try:
Call context.Detach(form)
Set form's EntityKey to null
Call context1.AddObject(form)
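Put together, those steps look roughly like this (a hedged sketch against the ObjectContext-era API; form, context, and context1 are from the question):

```csharp
// Sketch: detach the entity from the source context, clear its key,
// then add it to the target context as a new object.
context.Detach(form);        // stop the source context tracking it
form.EntityKey = null;       // let the target context assign a new key
context1.AddObject("Forms", form);
context1.SaveChanges();      // inserts the form with a fresh ID
```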
I would first second E.J.'s answer. Assuming though that you are going to use Entity Framework, one of the main problem areas that you will face is relationship management. Your code should use the Include method to ensure that related objects are included in the results of a select operation. The join that you have will not have this effect.
http://msdn.microsoft.com/en-us/library/bb738708.aspx
Further, detaching an object will not automatically detach the related objects. You can detach them in the same way however the problem here is that as each object is detached, the relationships that it held to other objects within the context are broken.
Manually restoring the relationships may be an option for you however it may be worthwhile looking at EntityGraph. This framework allows you to define object graphs and then perform operations such as detach upon them. The entire graph is detached in a single operation with its relationships intact.
My experience with this framework has been in relation to RIA Services and Silverlight however I believe that these operations are also supported in .Net.
http://riaservicescontrib.codeplex.com/wikipage?title=EntityGraphs
Edit 1: I just checked the EntityGraph docs and see that DetachEntityGraph is in the RIA-specific layer, which unfortunately rules it out as an option for you.
Edit 2: Alex James's answer to the following question is a solution to your problem: don't load the objects into the context to begin with - use the NoTracking option. That way you don't need to detach them, which is what causes the problem.
Entity Framework - Detach and keep related object graph
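A hedged sketch of that NoTracking approach (ObjectContext-era API; form, FormEntities, and context1 are from the question):

```csharp
// Sketch: read the form without attaching it to the source context,
// so no Detach call is needed before adding it to the target context.
using (var context = new FormEntities())
{
    context.Forms.MergeOption = MergeOption.NoTracking;
    form = context.Forms.First(f => f.ID == 56);
}
context1.AddObject("Forms", form);
context1.SaveChanges();
```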
If you are only doing a few records, Ladislav's suggestion will probably work, but if you are moving lots of data, you should/could consider doing this move in a stored procedure. The entire operation can be done at the server, with no need to move objects from the db server, to your front end and then back again. A single SP call would do it all.
The performance will be a lot better, which may or may not matter in your case.
This problem is not readily reproducible in a simple example here, but I was wondering if anyone has experience and tips. Here is the issue:
using Entity Framework
have many points in application where (1) data is written to some entity table e.g. Customer, (2) data is written to history table
both of these actions use Entity Framework, HOWEVER, they use different contexts
these actions need to be both in one transaction: i.e. if one fails to write, the other should not write, etc.
I can wrap them in a TransactionScope, like this:
using (TransactionScope txScope = new TransactionScope())
{
    ...
}
but this gives me:
Microsoft Distributed Transaction Coordinator (MSDTC) is disabled for network transactions.
Our database admin has told me that MSDTC is disabled by choice and cannot be installed.
Hence I am trying to create my own EntityConnection with a MetadataWorkspace, with the idea that each context will use the same EntityConnection. However, this is proving nearly impossible to get working; e.g., currently I still get the above error even though, in theory, both contexts are using the same EntityConnection. It's difficult to understand where and why Entity Framework requires the MSDTC, for example.
Has anyone gone down this road before, have experience or code examples to share?
Well, the explanation is quite simple.
If you are using SQL Server 2008 you should not have that problem, because it supports promotable transactions: as long as .NET knows you are using the same persistence store (the database), it won't promote the transaction to the DTC and will commit it as a local transaction. Look into promotable transactions with SQL Server 2008.
As far as I know, Oracle is working on supporting promotable transactions in its driver, but I do not know the state of that work; the MS Oracle driver does not support it.
http://www.oracle.com/technology/tech/windows/odpnet/col/odp.net_11.1.0.7.20_twp.pdf
If you are using a driver that does not support promotable transactions, it is impossible for .NET to run a local transaction over two connections. You should change your architecture or convince the database admin to install MSDTC.
I had a similar problem with SQL 2008, Entity Framework.
I had two frameworks defined (EF1, and EF2) but using identical connection strings to a sql 2008 database.
I got the MSDTC error above when using nested "usings" across both.
E.g. the code was like this:
using (TransactionScope dbContext = new TransactionScope())
{
    using (EF1 context = new EF1())
    {
        // do some EF1 db call
        using (EF2 context2 = new EF2())
        {
            // do some EF2 db call
        }
    }
    dbContext.Complete();
}
It wasn't as simple as this, because the code was split across several methods, but this was the basic structure of the "usings".
The fix was to have only one using open at a time. No MSDTC error, and no need to enable distributed transactions on the db.
using (TransactionScope dbContext = new TransactionScope())
{
    using (EF1 context = new EF1())
    {
        // do some EF1 db call
    }
    using (EF2 context2 = new EF2())
    {
        // do some EF2 db call
    }
    dbContext.Complete();
}
I think what you need to do is force your contexts to share a single database connection. You will then be able to perform these two operations against two different contexts in a single transaction. You can achieve this by passing one EntityConnection object to both of your contexts' constructors. Of course, this approach requires you to pass this object to the methods which update the DB.
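A hedged sketch of that idea (the context type names are placeholders; ObjectContext has a constructor overload that accepts an EntityConnection, and since the EntityConnection embeds the model metadata, this assumes both contexts are built from the same model):

```csharp
// Sketch: both contexts share one EntityConnection and one local
// transaction, so nothing needs to be promoted to the DTC.
using (var connection = new EntityConnection("name=MyEntities"))
{
    connection.Open();
    using (var tx = connection.BeginTransaction())
    {
        using (var entityContext = new EntityModelContext(connection))
        using (var historyContext = new HistoryModelContext(connection))
        {
            // ... write the entity and its matching history row ...
            entityContext.SaveChanges();
            historyContext.SaveChanges();
        }
        tx.Commit(); // both writes commit or roll back together
    }
}
```

Because the connection is opened manually and stays open, EF uses the active local transaction on it instead of asking System.Transactions for a distributed one.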
I have recently blogged about creating database context scope which will make using multiple EF contexts and transactions easier.