My code is pretty simple:
Context.AddObject("EntitiesSetName", newObjectName);
Context.SaveChanges();
It worked fine, but only once – the first time. That time I stopped my program with Shift+F5 right after SaveChanges() had been traced. It was a debugging session, so I manually removed the newly created record from the DB and ran the program again in debug mode. But it no longer works – it "hangs" when SaveChanges() is called.
Another strange thing I see:
If, before AddObject() and SaveChanges() are called, I write something like:
var tempResult = (from mydbRecord in Context.EntitiesSetName
                  where mydbRecord.myKey == 123
                  select mydbRecord.myKey).Count();
// 123 is the key value of the record that should be created before the program hangs.
then tempResult ends up with the value 1.
So it seems the record was created (when the program hung) and still exists, but when I check the DB manually with other tools, it does not!
What am I doing wrong? Is it some kind of caching issue, or something else?
EDIT:
I've found the source of the problem.
It was not an EF problem at all, but a problem with the tool I use to inspect the database manually (Benthic).
My program runs into some kind of deadlock with that tool (when I call SaveChanges()) while the tool is connected to the same DB.
So the problem is in the synchronization area, IMHO, and my question can be marked as solved.
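A hedged sketch, in case it helps anyone hitting the same symptom: since the "hang" was really SaveChanges() waiting on a lock held by the tool's open transaction, giving the ObjectContext a short CommandTimeout makes the blocked INSERT fail quickly with a timeout error instead of appearing to freeze. The context and variable names below are placeholders, not my actual code.

// MyEntities and newObjectName are placeholders matching the snippet above.
using (var context = new MyEntities())
{
    context.CommandTimeout = 5; // seconds; by default SQL Server waits 30 seconds
    context.AddObject("EntitiesSetName", newObjectName);
    context.SaveChanges();      // now surfaces a timeout exception instead of blocking for a long time
}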
First, I'd like to apologize if I couldn't find an existing answer that really solves my problem; that doesn't mean I searched the whole site, although I have spent a lot of time on it (days). I'm also new here (in the sense that I've never written or replied to SO users), and I'm sorry for any English mistakes.
I should say that I am new to Java EE.
I am working on WildFly 14, using MySQL.
I am now focusing on a JPA problem.
I have a uniqueness constraint. While running the uniqueness-violation test, the data source throws a MySQLIntegrityConstraintViolationException, which is expected. My problem is that the persist() method does not let me catch the exception (I even put Throwable in the catch clause, but nothing). I strictly need to catch it in order to run a crucial procedure in my code (one that indirectly calls .remove()).
By the way, when I try to write that exception, the IDE does not show the usual window of suggested classes/annotations/etc.; it only suggests creating a new class called "MySQLIntegrityConstraintViolationException". Isn't working on WildFly with MySQL enough for the suggestion to appear?
Not finding a solution, I changed approach: instead of persist(), I used .createNativeQuery() with an INSERT statement as the parameter. It seems to work: it signals the uniqueness violation (ok!), does not execute the rest of the try block (ok!) and goes into the catch block (ok!). But, again, the exception/error is not clear.
Also, when the code reaches the part that handles the catch and executes what's inside (which includes a .remove()), it raises the exception:
"Transaction is required to perform this operation (either use a transaction or extended persistence context)" – this refers to my entityManager.remove() call.
Now I can't understand: shouldn't JPA/JTA manage transactions automatically?
Moreover, when I later tried to add entityManager.getTransaction().begin() (and commit()), it complained that I was trying to manage transactions manually when I'm not allowed to. It seems like an endless loop.
[edit]: I am working in a CMT context, so I am only supposed to work with EntityManager and EntityManagerFactory. I tried entityManager.getTransaction().begin() and entityManager.getTransaction().commit() and it didn't work.
[edit']: getTransaction() (which returns an EntityTransaction) cannot be used in a CMT context; that's why it didn't work.
[edit'']: I have solved the transaction issue by using the transaction management suited to a CMT context: JTA + CMT requires a try-catch-finally block whose try body contains the operation we want to perform on the database and whose finally body closes the EntityManager (em.close()). However, as explained above, I have used em.createNativeQuery(), which, when it fails, throws exceptions that my app can catch; the use of .createNativeQuery() is only temporary, so I really need to roll back to the .persist() method in my work code, which means I need to know what to do in order to catch that MySQLIntegrityConstraintViolationException.
Thanks so much!
It SEEMS I have solved the problem.
Going back to .persist() (and discarding createNativeQuery()), putting em.flush() JUST AFTER em.persist(my_entity_object) helped: once the uniqueness constraint is violated (see above), the raised exception is now catchable, and with a catchable exception I can do what I described at the beginning of the post.
WARNING: keep in mind that I am new to Java EE/JPA/JTA. I was "lucky": given my lack of knowledge, I added that em.flush() call on a guess (I don't know how I thought of it). Hence I cannot explain the behaviour; I would appreciate any explanation of what may have happened, of how and when flush() should be used, and so on.
Thanks!
Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The test assembly (a .dll) is what gets executed, and a lot of these tests call an API.
In the problematic test method there are two API calls with an await on them: one writes a record to an external interface, and the other extracts all records and reads the last one, both via the API. The test simply checks that writing the last record succeeded in an end-to-end context; that's why there is both a write and a read.
If we execute the test in Visual Studio, everything works as expected. I also ran it manually with vstest.console.exe from the command line, and the expected results always come out as well.
However, with the VS Test task in VSTS it fails for some reason. We've been trying to figure it out, and eventually we got to the point of printing the list from the 'read' part. It turns out the record we just inserted isn't in the data we pulled, but if we check the external interface via a different method, we can confirm the write actually happened. What gives? Why is VSTest getting what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that pass, none of the Console.WriteLine output appears in the logs; it only shows up for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the test method, the logs report the failure BEFORE the lines are printed! And even then, the printing should happen after reading the list of records, yet when the prints do appear we're still missing the record we just wrote.
Is there some bottom-to-top behaviour we're missing here? It really seems like the VSTS vstest run is executing the assert before the actual code. The test methods do run in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th from last), and we need them to run in order because some of the later tests depend on the earlier ones succeeding.
Anything else we're missing here? I'd post source code, but there's a bunch of things I'd need to scrub first.
Turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() instead for the culprit and will also go back through the other tests to check their quality.
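For what it's worth, here is a minimal MSTest-style sketch of the misunderstanding (all names and the stand-in round trip are made up; this is not our actual test code). If the task for the async work is never awaited or waited on, the assert can run before the API calls have finished; blocking with .Result/.Wait(), or declaring the test as async Task so the runner awaits it, guarantees the round trip completes first.

using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RecordRoundTripTests
{
    // The anti-pattern that bit us: the task is started but never awaited or waited on,
    // so nothing guarantees the write/read round trip finished before the assert runs
    // (this assert will typically observe an unfinished task).
    [TestMethod]
    public void WriteThenRead_FireAndForget_IsARace()
    {
        Task<int> pending = WriteThenReadLastAsync(123); // not awaited
        Assert.IsTrue(pending.IsCompleted, "the round trip may still be in flight here");
    }

    // What our fix amounts to: block until the asynchronous work is done.
    [TestMethod]
    public void WriteThenRead_Blocking()
    {
        int last = WriteThenReadLastAsync(123).Result; // blocks until complete
        Assert.AreEqual(123, last);
    }

    // Or, more idiomatically, return async Task and let the test runner await it.
    [TestMethod]
    public async Task WriteThenRead_AsyncTask()
    {
        int last = await WriteThenReadLastAsync(123);
        Assert.AreEqual(123, last);
    }

    // Stand-in for the two real API calls (write a record, then read the last one back).
    private static async Task<int> WriteThenReadLastAsync(int value)
    {
        await Task.Delay(10); // simulated external write
        await Task.Delay(10); // simulated external read
        return value;
    }
}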
I have a class inheriting from DbContext that implements a code-first Entity Framework model.
I would like a unit test that exercises the model builder in this DbContext – this would be useful for detecting errors like 'key not defined on entity'.
But I may not have an actual database available at unit-test time. Is there a way to exercise this code without the context actually attempting a database connection? I tried something innocuous like:
var ctx = new MyDbContext("Data Source=(local);Initial Catalog=Dummy;<.......>");
var foo = ctx.GetValidationErrors(); //triggers DBModelBuilder, should throw if my DBModel is goofed up
This does technically work. However, it takes a very long time to run – if I pause and inspect the call stack, it is sitting in a call to System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin.
Eventually it times out, swallows the connection error and finishes my test.
Is there any way to do this without the connection attempt?
The solution here is to just kind of go with the problem and use the lowest possible connection timeout. I'm using this connection string:
Server=localhost; Database=tempdb; Integrated Security=true;Connection Timeout=1;ConnectRetryCount=0
It still triggers the problem, but with a one-second timeout and no automatic retries (for only this one test in the system), it's totally acceptable.
And if the developer happens to have a local database installed, it will connect and run even faster.
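As a concrete illustration, here is a minimal MSTest-style sketch of the test under those assumptions (MyDbContext stands in for your own code-first context; any model problem surfaces as an exception from the call below):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DataModelTests
{
    // Connection string from above: fail the login fast and never retry.
    private const string DummyConnectionString =
        "Server=localhost; Database=tempdb; Integrated Security=true;" +
        "Connection Timeout=1;ConnectRetryCount=0";

    [TestMethod]
    public void ModelBuilder_Configuration_IsValid()
    {
        using (var ctx = new MyDbContext(DummyConnectionString))
        {
            // Forces DbModelBuilder to run; throws (e.g. ModelValidationException)
            // if a key or relationship is misconfigured. The swallowed connection
            // failure costs roughly one second thanks to the timeout above.
            ctx.GetValidationErrors();
        }
    }
}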
I am using Neo4j embedded in my Scala project. I have been including
ShutdownHookThread {
  shutdown(ds)
}
the above piece of code in each and every function, before the beginning of the transaction. Do I need to include it in every function? What happens if I don't include it?
ShutdownHookThread registers a piece of code to be executed when your application is about to exit. You need to use it only once – somewhere in your app's bootstrap code – because there is no sense in shutting the database down more than once.
I've set up mvc-mini-profiler against my Entity Framework-powered MVC 3 site. Everything is duly configured: starting profiling in Application_Start, ending it in Application_End, and so on. The profiling part works just fine.
However, when I swap my data model object generation over to providing profilable versions, performance slows to a crawl. Not every SQL query is affected, but some queries take up about 5x the entire page load time. (The very first page load after firing up IIS Express takes a bit longer, but this is sustained behaviour.)
Negligible time (~2ms tops) is spent querying, executing and "data reading" the SQL, while this line:
var person = dataContext.People.FirstOrDefault(p => p.PersonID == id);
...when wrapped in using(profiler.Step()) is recorded as taking 300-400 ms. I profiled with dotTrace, which confirmed that the time is actually spent in EF as usual (the profilable components only make very brief appearances), it just takes much longer.
This leads me to believe that the connection, or some of its constituent parts, is missing sufficient data, making EF perform far worse.
This is what I'm using to make the context object (my edmx model's class is called DataContext):
var conn = ProfiledDbConnection.Get(
/* returns an SqlConnection */CreateConnection());
return CreateObjectContext<DataContext>(conn);
I originally used the ObjectContextUtils.CreateObjectContext method provided by mvc-mini-profiler. I dove into it and noticed that it sets a wildcard metadata workspace path string. Since I have the database layer isolated in one project, with several MVC sites as other projects using that code, those paths have changed and I'd rather be more specific. I also thought this was the cause of the performance issue. So I duplicated the CreateObjectContext functionality in my own project to provide this, as follows:
public static T CreateObjectContext<T>(DbConnection connection) where T : System.Data.Objects.ObjectContext {
    var workspace = new System.Data.Metadata.Edm.MetadataWorkspace(
        GetMetadataPathsString().Split('|'),
        // ^-- returns
        // "res://*/Redacted.csdl|res://*/Redacted.ssdl|res://*/Redacted.msl"
        new Assembly[] { typeof(T).Assembly });

    // The remainder of the method is copied straight from the original,
    // and I carried over a duplicate CtorCache too to make this work.
    var factory = DbProviderServices.GetProviderFactory(connection);
    var itemCollection = workspace.GetItemCollection(System.Data.Metadata.Edm.DataSpace.SSpace);
    itemCollection.GetType().GetField("_providerFactory", // <==== big fat ugly hack
        BindingFlags.NonPublic | BindingFlags.Instance).SetValue(itemCollection, factory);
    var ec = new System.Data.EntityClient.EntityConnection(workspace, connection);
    return CtorCache<T, System.Data.EntityClient.EntityConnection>.Ctor(ec);
}
...but it doesn't seem to make much of a difference. The problem exists whether I use the above hacked version, which is more specific about the metadata workspace paths, or the version provided by mvc-mini-profiler. I just thought I'd mention that I've tried this too.
Having exhausted all this, I'm at my wits' end. Once again: when I provide my data context as usual, no performance is lost. When I provide a "profilable" data context, performance plummets for certain queries (and I don't know what determines which ones). What could mvc-mini-profiler be doing wrong? Am I still feeding it the wrong data?
I think this is the same problem as this person ran into.
I just resolved this issue today.
see: http://code.google.com/p/mvc-mini-profiler/issues/detail?id=43
It happened because some of our fancy hacks were not cached well enough. In particular:
var workspace = new System.Data.Metadata.Edm.MetadataWorkspace(
new string[] { "res://*/" },
new Assembly[] { typeof(T).Assembly });
is a very expensive call, so we need to cache the workspace.
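A rough sketch of the idea (not the actual mvc-mini-profiler patch; see the linked issue for that): build the expensive MetadataWorkspace once per context type and reuse it, instead of constructing a new one for every profiled connection.

using System;
using System.Collections.Concurrent;
using System.Data.Metadata.Edm;
using System.Reflection;

internal static class WorkspaceCache
{
    private static readonly ConcurrentDictionary<Type, MetadataWorkspace> Cache =
        new ConcurrentDictionary<Type, MetadataWorkspace>();

    public static MetadataWorkspace Get<T>() where T : System.Data.Objects.ObjectContext
    {
        // Built once per context type; subsequent calls are just a dictionary lookup.
        return Cache.GetOrAdd(typeof(T), _ => new MetadataWorkspace(
            new[] { "res://*/" },           // same wildcard metadata paths as above
            new[] { typeof(T).Assembly })); // assembly containing the model resources
    }
}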
Profiling, by definition, will affect the performance of the application being profiled. The profiler needs to insert its own method calls throughout the application, intercept low-level system calls, and record all that data somewhere (meaning writes to disk). All of those tasks take up precious CPU cycles, memory and disk access.