NullReferenceException in Entity Framework ObjectStateManager.DetectConflicts

I've written a WCF web service that takes XML files and stores them in the database. Everything worked fine under low load, but under high load I'm getting some unexpected behavior, and thus far I haven't been able to pinpoint exactly what the problem is. Does anybody have a suggestion?
This is the exception I'm seeing in the logs 'sometimes' - like 25 times out of 10 000:
Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at System.Data.Objects.ObjectStateManager.DetectConflicts(IList`1 entries)
at System.Data.Objects.ObjectStateManager.DetectChanges()
at System.Data.Entity.Internal.InternalContext.GetStateEntry(Object entity)
at System.Data.Entity.DbContext.Entry(Object entity)
... rest of my stacktrace
I see this happen every once in a while, and I'm currently looking into whether it has to do with concurrency (some other thread perhaps working on the same entity). Can someone give me some pointers as to where to look?

A NullReferenceException occurs when you try to use a reference variable whose value is Nothing/null. When the value is Nothing/null, the variable is not actually holding a reference to an instance of any object on the heap.
I can't tell exactly what the problem is, but I believe it involves threading. Since the service works fine for a small number of users, it is probably handling the increased load on multiple threads, and when those threads execute concurrently against shared objects there is a much greater chance of this issue. The solution I can offer is to control the threading explicitly and synchronise access to the shared objects; that will probably resolve the issue.

How to initialize EclipseLink connection

Sorry if my question is quite simple, but I really couldn't find an answer by googling. I have a project using JPA 2.0 (EclipseLink); it's working fine, but I want to ask if there's a way to initialize the database connection up front.
Right now it happens whenever the user tries to access any module that requires a query, which is quite annoying because the connection can take a few seconds and the app freezes for a moment while it's connecting.
I could make some random query in the main method to "turn it on", but that's an unnecessary query and not the solution I want to use.
Thanks beforehand!
The problem will be that the deployment process is lazy. This avoids the cost of initializing and connecting to unused/unneeded persistence units, but it means everything associated with a persistence unit is processed the very first time it is accessed.
This can be configured on a persistence unit by using the "eclipselink.deploy-on-startup" property:
http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/p_deploy_on_startup.htm
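As a minimal sketch, the property can also be passed when creating the EntityManagerFactory so that deployment and the initial connection happen up front; the unit name "myPU" below is a placeholder for whatever is in your persistence.xml, and the property can just as well be set in persistence.xml itself:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class WarmUp {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<String, String>();
        // Tell EclipseLink to deploy and connect the persistence unit immediately,
        // instead of waiting for the first query.
        props.put("eclipselink.deploy-on-startup", "true");
        // "myPU" is a placeholder persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU", props);
        // At this point the unit is deployed; keep the factory around for the app's lifetime.
    }
}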
Not sure if this is what you are looking for, but I also found the property eclipselink.jdbc.exclusive-connection.is-lazy, which defaults to true.
According to the Javadoc, this "property specifies when write connection is acquired lazily".

[Drools] Fact Objects Mistakenly Updated During Multi-Stage Rule Firing or with a Long Fact Object List

If the subject is confusing, that is because the problem itself is just as confusing to us. Here is the thing.
We have an application that leverages the Drools rule engine to help us evaluate Java beans - fact objects in Drools' terms - on their field values and update a particular flag field within each bean to "true" or "false" according to the evaluation result. All the evaluations and update operations are defined in the rule template.
The way it invokes Drools is like this. First it creates a stateful session before the first use. When we have a list of beans, we insert them one by one into the session and call fireAllRules. After firing the rules, we keep the session for later use. And once we have another batch of beans, we do the same again, and again, and again...
This seemed to make sense. But later, during testing, we found that although the rule engine worked fine for the first batch, the following batches didn't: some beans were mistakenly updated, that is, even though none of their fields matched any rule, the flag was set to true.
Then we thought maybe we should not reuse the session, so we put all the beans from all batches into one big list. But soon we found that the problematic beans still got the wrong update. What's weirder, if we run the test on different machines, the problematic beans can be different. And if we test any of the problematic beans on its own in a unit test, everything works fine.
I hope I have explained the problem. We are new to Drools; maybe we did something wrong somewhere that we don't know about. Could anyone give us some direction on this problem? It would be a very big favor!
It sounds to me as though you're not clearing out working memory after each 'fireAllRules'.
If you use a stateful session, then every fact which you insert will remain in working memory until you explicitly retract it. Therefore every time you fire rules, you are re-evaluating the original set of facts, plus the new ones.
It might be useful to add a little debugging to your code. Using session.getObjects(), you will be able to see what facts are in working memory before and after execution of the rules. This should indicate what is not being retracted between evaluations.
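For illustration, here is a rough sketch of that pattern, assuming Drools 5.x (StatefulKnowledgeSession) and a hypothetical MyBean fact class: keep the FactHandle returned by each insert so the batch can be retracted after firing.

import java.util.ArrayList;
import java.util.List;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.rule.FactHandle;

public class BatchEvaluator {
    private final StatefulKnowledgeSession session;

    public BatchEvaluator(StatefulKnowledgeSession session) {
        this.session = session;
    }

    public void evaluateBatch(List<MyBean> beans) {
        List<FactHandle> handles = new ArrayList<FactHandle>();
        for (MyBean bean : beans) {
            handles.add(session.insert(bean)); // remember each fact's handle
        }
        session.fireAllRules();

        // Debugging aid: what is currently sitting in working memory?
        System.out.println("Facts in working memory: " + session.getObjects().size());

        // Retract this batch so the next one is evaluated on its own.
        for (FactHandle handle : handles) {
            session.retract(handle);
        }
    }
}

In newer Drools versions (6+) the equivalent calls are KieSession.insert/delete, but the idea is the same: whatever you insert into a stateful session stays in working memory until you explicitly remove it.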

NEventStore CommonDomain: Why does EventStoreRepository.GetById open a new stream when an aggregate doesn't exist?

Please explain the reasoning behind making EventStoreRepository.GetById create a new stream when an aggregate doesn't exist. This behavior appears to give GetById a dual responsibility, which in my case could have undesirable consequences. For example, my aggregates always implement a factory method for their creation so that their first event results in the setting of the aggregate's Id. If a command were handled prior to the existence of the aggregate, it might succeed even though the aggregate doesn't have its Id set or other state initialized (crashing and burning with a null reference exception is equally likely).
I would much rather have GetById throw an exception if an aggregate doesn't exist to prevent premature creation and command handling.
That was a dumb design decision on my part. I went back and forth on this one. To be honest, I'm still back and forth on it.
The issue is that exceptions should really be used to indicate exceptional or unexpected scenarios. The problem is that a stream not being found can be a common operation and even an expected operation in some regards. I tinkered with the idea of throwing an exception, returning null, or returning an empty stream.
The way to determine if the aggregate is empty is to check the Revision property to see if it's zero.

What coredata errors should I prepare for?

I have an app getting close to its release date, but it occurred to me that wherever I have Core Data save and/or fetch requests, I'm not really handling the errors other than checking whether they exist and @throw-ing them. I'm sure that will seem almost like nails on a chalkboard to more experienced programmers, and surely there's some kind of disaster waiting to happen.
So to be specific, what kinds of errors can I expect from A) Fetches, and B) Saves, and also C) in general terms, how should I deal with these?
You can see the Core Data Constants Reference to get an idea about what kind of errors you can expect to see in general.
For fetches, the most common issue is that the fetch returns an empty array. Make sure that your view controllers, data sources and delegates can handle an empty fetch. If you dynamically construct complex predicates, make sure to catch exceptions from an invalid predicate.
Most save errors result from validation errors. You should have error recovery for every validation you supply. One common and somewhat hidden validation error is not providing a required relationship.
One thing that trips people up with Objective-C is that errors and exceptions are slightly different critters than they are in other languages. In Objective-C, an error is something that the programmer should anticipate and plan for in the normal operation of the application, e.g. a missing file. By contrast, an exception is something exceptional that the programmer wouldn't expect the app to have to routinely handle, e.g. a corrupted file.
Therefore, in Core Data a validation failure would be a common, expected and unexceptional error, whereas a corrupted persistent store would be a rare, unexpected and highly exceptional exception.
See the Exceptions Programming Guide and the Error Handling Programming Guide for details.

How can I clear Class::DBI's internal cache?

I'm currently working on a large implementation of Class::DBI for an existing database structure, and am running into a problem with clearing Class::DBI's cache. This is a mod_perl implementation, so an instance of a class can be quite old between accesses.
From the man pages I found two options:
Music::DBI->clear_object_index();
And:
Music::Artist->purge_object_index_every(2000);
Now, when I add clear_object_index() to the DESTROY method, it seems to run, but doesn't actually empty the cache. I am able to manually change the database, re-run the request, and it is still the old version.
purge_object_index_every says that it clears the index every n requests. Setting this to "1" or "0" seems to clear the index... sometimes. I'd expect one of those two settings to work, but for some reason it doesn't do it every time - more like 1 time in 5.
Any suggestions for clearing this out?
The "common problems" page on the Class::DBI wiki has a section on this subject. The simplest solution is to disable the live object index entirely using:
$Class::DBI::Weaken_Is_Available = 0;
$obj->dbi_commit(); may be what you are looking for if you have uncompleted transactions. However, that is unlikely to be the case here, as Class::DBI tends to complete any lingering transactions automatically on destruction.
When you do this:
Music::Artist->purge_object_index_every(2000);
You are telling it to examine the object cache every 2000 object loads and remove any dead references to conserve memory use. I don't think that is what you want at all.
Furthermore,
Music::DBI->clear_object_index();
Removes all objects from the live object index. I don't know how this would help at all; it's not really flushing them to disk.
It sounds like what you are trying to do should work just fine the way you have it, but there may be a problem with your SQL or elsewhere that is preventing the INSERT or UPDATE from working. Are you doing error checking for each database query as the perldoc suggests? Perhaps you can begin there or in your database error logs, watching the queries to see why they aren't being completed or if they ever arrive.
Hope this helps!
I've used remove_from_object_index successfully in the past, so that when a page that modifies the database is called, it always explicitly resets that object in the cache as part of the confirmation page.
I should note that Class::DBI is deprecated and you should port your code to DBIx::Class instead.