Entity Framework Code First - Model change breaks Seed

We've been using Entity Framework Code First 5 for a little while now, without major issue.
I've recently discovered that ANY change I make to my model (such as adding or removing a field) means that the Seed method no longer runs, leaving my database in an invalid state.
If I reverse the change, the Seed method runs fine.
I have tried making changes to varying parts of my model, so it's not the specific change which is relevant.
Does anyone know how I can (a) debug what the specific issue is, or (b) has anyone come across this themselves and found a fix?
UPDATE: After the model change, no matter how many times I query the database, the Seed method doesn't run. However, I have found that if I manually run IISRESET and then re-execute the web service that executes the query, the Seed does then run! Does anyone know why this would be the case, and why I suddenly need to reset IIS between the database initialization and the Seed executing?
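For reference, here is a minimal sketch of the kind of setup involved, to make the behaviour easier to reproduce (the context, entity, and Configuration class names are placeholders, not our real code):

using System.Data.Entity;
using System.Data.Entity.Migrations;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Placeholder context standing in for our real DbContext.
public class SampleContext : DbContext
{
    public DbSet<User> Users { get; set; }
}

internal sealed class Configuration : DbMigrationsConfiguration<SampleContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    protected override void Seed(SampleContext context)
    {
        // A breakpoint (or Debugger.Launch()) here shows whether Seed ever runs.
        context.Users.AddOrUpdate(u => u.Name, new User { Name = "admin" });
    }
}

public static class Startup
{
    public static void InitializeDatabase()
    {
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<SampleContext, Configuration>());

        // Forcing initialization eagerly, instead of waiting for the first
        // query, makes it obvious whether Seed fires after a model change.
        using (var context = new SampleContext())
        {
            context.Database.Initialize(force: true);
        }
    }
}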
Many thanks, Steve

Related

Anylogic model stop without message

I have created a model to generate a product that will be cycled through a list of machines. Technically the product list is for a single-day run, but I run the model for long durations to stabilise the model output.
The model runs properly until around 20 simulated months, then suddenly stops without any error message, as shown in the screenshot. I do not know how to debug this since I do not know where the error comes from.
Does anyone have a similar encounter and could advise on how to approach this issue? Could it be an issue of memory overload?
Without more details, it's hard to pinpoint the exact reason, but this generally happens when the run is stuck in an infinite while loop or something similar. Check all the loops where such a scenario is possible; it's likely that one (or more) of them is causing the issue.

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs in case a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I've seen DBLINK, and it seems to be the only thing close to an alternative, but I have found some problems:
I need to avoid the connection string, because the database host/port/name changes between environments. Is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this, so that I only have to call it from the Java side? Maybe that way I can somehow pass the connection data as a parameter, in case avoiding it entirely is not possible.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
That is, a call that, without connection arguments, targets the same database where it is being executed.
The problem is that this needs to be done without specifying any connection data. The call will live inside a function on the executing database, in the same schema. That function will be promoted from one environment to the next, and the code needs to stay the same, so any hard-coded name/user/password must be avoided, since they change per environment. And since everything happens in the same database and schema, technically they can be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to get some information first.

Exporting Importing Breeze Model and hasTempKey issues

When you create a new entity, Breeze sets id:-1, state:'Added', hasTempKey:true. After an export and re-import, Breeze doesn't merge the imported -1 entity with the current -1 entity in memory; it adds a new one. This is explained in the docs, but the question in my case is how to overcome the problem. So I tried calling setUnchanged() on the created entity. Now the export/import cycle runs as expected, but the created entity has lost its hasTempKey:true property, so a newly created entity can conflict with a current one. Some advice on how to resolve these issues would really be appreciated. Thanks.
I assume that this question relates to your approach to implementing a disconnected app as described in this SO question.
As I said there, I think you're spending too much time trying to trick Breeze into doing what it should not do. Here, for example, you want EntityManager.saveChanges to not actually save to the remote data store. But the whole point of "saveChanges" is that it persists permanently. "Saving locally" is not really saving. No one but you knows about these saved changes. You don't know if they would pass the business validation rules on your server or if they would collide with a different user's changes. If your laptop dies or is stolen, your locally saved data are gone.
I think Breeze can be a huge help in crafting occasionally connected applications. But I think it is critical to properly differentiate between stashing changes locally with the intent to save them later and actually saving them remotely.
Outline of a Disconnected App
I urge you to take a different tack.
Your app could easily initiate a sequence of distinct editing sessions. For example, one session could be a travel reservation for client 'A', another session for client 'B', and a third session is about something else entirely ... maybe the client 'C' profile.
When your app can't reach the server, it preserves each session as a WIP ("Work In Progress"). Each WIP session is its own serialized bundle, identified by a WIP key.
Aside: you'll see this pattern in John Papa's "Building Apps with Angular and Breeze Part 2" when that comes out later this year.
The Breeze EntityManager.exportEntities(list_of_entities) serializes everything about the changed entities of that session including their change-state, original values, and temp keys. Remember that the list_of_entities can be anything including an object graph. You can save that bundle to browser local storage under the WIP key and restore it later.
I'd keep a directory of WIP sessions that includes information about the state of each session as a whole (e.g., what kind of editing session it is and whether it is ready to be persisted remotely). Your app presents WIP sessions to the user while offline. When it gets a connection, it goes through a "synchronization" phase during which it tries to persist the changes. With luck it succeeds. If not, you can rehydrate the session in the UI and help the user reconcile the conflicts.
These are broad strokes. The devil is in the details.
The critical thing in this context is that you do not mess with entity state or the temp keys. You don't care what the keys are or if they change. The serialized session will hold that state information for you. The serialized bundles will move in and out of local storage without complaint. You are using Breeze as intended, while offline or online.
The current behavior is deliberate. Generally we assume that temp keys are effectively not comparable. However, I do understand your use case. So, one approach would be to:
1) import your "exported entities" into a temporary EntityManager and check for temp key collisions between this entityManager and your "destination" EntityManager.
2) Then remove any "dups" from the destination EntityManager
3) Import your original "exported entities" into your destination EntityManager
You can actually skip step 1 if you know that all tempKeys are dups.
Another approach would be to use GUIDs for your keys. This completely bypasses the temp key issue, because GUIDs never need to be "temporary".

breezejs update cache with changes from server

I am using breezejs in an offline-first manner, executing queries initially against a server and stashing the entities in local storage, where I query the entity manager cache.
When data changes on the server (by means of another app changing it using breeze) the client app synchronizes by just getting a new copy of the entities from the server.
This works great, but I am wondering if there is a way I can get only the changes from the server. I was thinking of setting a revision GUID or timestamp on each record and then checking the metadata to see if it needs updating, but I really have no idea how to proceed.
So my question is can breeze be tweaked to allow for such a use case?
And is there maybe a better way that I am overlooking?
I think your direction is correct. If you had a DateTime column in every table, e.g. "LastModified", and that column got updated on every record update, then you could add a filter to every Breeze query after the first one requiring that date to be later than the last time you did this "rebase" query (or the initial load). So it's not supported out of the box, but you can get it to work yourself. A GUID per version would not really be a good idea, as you would have to send all these GUIDs on every request and then check all of them; a timestamp makes more sense.

Issue with Entity Framework 4.2 Code First taking a long time to add rows to a database

I am currently using Entity Framework 4.2 with Code First. I currently have a Windows 2008 application server and a database server running on Amazon EC2. The application server has a Windows Service installed that runs once per day. The service executes the following code:
// returns between 2000-4000 records
var users = userRepository.GetSomeUsers();

// do some work
foreach (var user in users)
{
    var userProcessed = new UserProcessed { User = user };
    userProcessedRepository.Add(userProcessed);
}

// Calls SaveChanges() on DbContext
unitOfWork.Commit();
This code takes a few minutes to run. It also maxes out the CPU on the application server. I have tried the following measures:
Removed the unitOfWork.Commit() call to see if the time is network-related when the application server talks to the database. This did not change the outcome.
Changed my application server from a medium instance to a high CPU instance on Amazon to see if it is resource related. This caused the server not to max out the CPU anymore and the execution time improved slightly. However, the execution time was still a few minutes.
As a test, I modified the above code to run three times to see what the execution time for the second and third loops would be when using the same DbContext. Every consecutive loop took longer to run than the previous one, but that could be related to using the same DbContext.
Am I missing something? Is it really possible that something as simple as this takes minutes to run? Even if I don't commit to the database after each loop? Is there a way to speed this up?
Entity Framework (as it stands) isn't really well suited to this kind of bulk operation. Are you able to use one of the bulk insert methods with EC2? Otherwise, you might find that hand-coding the T-SQL INSERT statements is significantly faster. If performance is important then that probably outweighs the benefits of using EF.
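To illustrate the hand-coded route, here is a rough sketch using SqlBulkCopy; the table name, column name, and direct access to a connection string are assumptions for the example, not details from the question:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class UserProcessedBulkWriter
{
    // Bulk-inserts one row per user instead of adding 2000-4000 tracked
    // entities to the DbContext and paying the change-tracking cost.
    public static void BulkInsert(string connectionString, IEnumerable<int> userIds)
    {
        // In-memory table matching the (assumed) destination schema.
        var table = new DataTable();
        table.Columns.Add("UserId", typeof(int));
        foreach (var id in userIds)
        {
            table.Rows.Add(id);
        }

        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.UserProcessed"; // assumed table name
            bulkCopy.ColumnMappings.Add("UserId", "UserId");
            bulkCopy.WriteToServer(table);
        }
    }
}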
My guess is that your ObjectContext is accumulating a lot of entity instances. SaveChanges seems to have a phase whose running time is linear in the number of entities loaded, which is likely why each run takes longer and longer.
A way to resolve this is to use multiple, smaller ObjectContexts to get rid of old entity instances.
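Here is a minimal sketch of that batching idea, assuming a plain DbContext; the context, set, and entity shapes below are invented for illustration, and the question's repository and unit-of-work layers are omitted:

using System.Data.Entity;
using System.Linq;

public class UserProcessed
{
    public int Id { get; set; }
    public int UserId { get; set; }
}

public class ProcessingContext : DbContext
{
    public DbSet<UserProcessed> UserProcessedRecords { get; set; }
}

public static class BatchProcessor
{
    public static void ProcessInBatches(int[] userIds)
    {
        const int batchSize = 500;
        for (var i = 0; i < userIds.Length; i += batchSize)
        {
            // A fresh, short-lived context per batch keeps the change
            // tracker from accumulating thousands of entity instances.
            using (var context = new ProcessingContext())
            {
                // Optional: skip the per-Add DetectChanges scan; SaveChanges
                // still runs change detection once for the whole batch.
                context.Configuration.AutoDetectChangesEnabled = false;

                foreach (var id in userIds.Skip(i).Take(batchSize))
                {
                    context.UserProcessedRecords.Add(new UserProcessed { UserId = id });
                }
                context.SaveChanges();
            }
        }
    }
}

The batch size is just a tuning knob; the point is that no single context ever tracks more than a few hundred entities.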