Our product has to interface with multiple client/partner systems. For example, when a person is added or updated, we have to notify a 3rd-party system of the change, e.g. by calling a web service or creating an XML file in a folder.
We need a "hook" after SaveChanges has successfully persisted changes in the database.
Lots of information can be found about how to execute business logic when saving changes (before changes are persisted in the database), but less about executing logic after changes are persisted.
After investigating, I'm considering the following:
// Persist data without accepting the changes
ctx.SaveChanges(false);
// TODO: execute business logic that can read the change information
// Accept the changes and mark the entities as Unchanged
ctx.AcceptAllChanges();
Does anyone have a better solution for this scenario?
I know this question is a bit old, but I figured I would add this for anyone else searching on this topic.
I would recommend checking out EFHooks. The official version is a bit stale (e.g. .NET 4 only), but I have forked the project and published a new NuGet package, VisoftInc.EFHooks.
You can read more about both packages in this post: http://blogs.visoftinc.com/2013/05/27/hooking-into-ef-with-efhooks/
Basically, EFHooks will let you hook into EF (e.g. PostInsert or PostUpdate) and run some code. The hooks are Pre/Post for Insert/Update/Delete.
One thing to note is that this is based on the DbContext, so if you are still using the ObjectContext for EF, this solution won't work for you.
You can override SaveChanges, then update the 3rd-party systems at the same time as you save the data to the database.
see: http://thedatafarm.com/blog/data-access/objectcontext-savechanges-is-now-virtual-overridable-in-ef4/
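Whichever hook point you choose, the shape of the solution is the same: persist first, and notify external systems only after the save succeeds. Here is a minimal, framework-free sketch of that pattern (class and method names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal post-save hook: callbacks run only after persistence succeeds.
class NotifyingRepository<T> {
    private final List<Consumer<T>> postSaveHooks = new ArrayList<>();
    private final List<T> store = new ArrayList<>(); // stand-in for the database

    void onSaved(Consumer<T> hook) {
        postSaveHooks.add(hook);
    }

    void save(T entity) {
        store.add(entity); // persist; if this throws, the hooks never run
        for (Consumer<T> hook : postSaveHooks) {
            hook.accept(entity); // e.g. call a web service or write an XML file
        }
    }
}
```

In EF terms, the body of save would be the call to base.SaveChanges(), with the hooks fired only when it returns without throwing.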
Currently, I'm working on a Java EE project with some non-trivial requirements regarding persistence management. Changes to entities by users first need to be applied to some working copy before being validated, after which they are applied to the "live data". Any changes on that live data also need to have some record of them, to allow auditing.
The entities are managed via JPA, and Hibernate will be used as provider. That is a given, so we don't shy away from Hibernate-specific stuff. For the first requirement, two persistence units are used. One maps the entities to the "live data" tables, the other to the "working copy" tables. For the second requirement, we're going to use Hibernate Envers, a good fit for our use-case.
So far so good. Now, when users view the data on the (web-based) front-end, it would be very useful to be able to indicate which fields were changed in the working copy compared to the live data. A different colour would suffice. For this, we need some way of knowing which properties were altered. My question is, what would be a good way to go about this?
Using the JavaBeans API, a PropertyChangeListener could suffice to be notified of any changes in an entity of the working copy and keep a set of them. But the set would also need to be persisted, since the application could be restarted and changes can be long-lived before they're validated and applied to the live data. And applying the changes on the live data to obtain the working copy every time it is needed isn't feasible (hence the two persistence units).
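For the PropertyChangeListener route, the standard java.beans support classes are enough to collect the names of changed properties. A small sketch (the entity and its properties are invented for illustration; as noted, the collected set would still have to be persisted with the working copy):

```java
import java.beans.PropertyChangeSupport;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical working-copy entity that records which properties were touched.
class PersonDraft {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private final Set<String> dirtyProperties = new TreeSet<>();
    private String name;
    private String email;

    PersonDraft() {
        // Collect the name of every property that actually changed.
        pcs.addPropertyChangeListener(evt -> dirtyProperties.add(evt.getPropertyName()));
    }

    void setName(String name) {
        // fires only if the old and new values differ
        pcs.firePropertyChange("name", this.name, name);
        this.name = name;
    }

    void setEmail(String email) {
        pcs.firePropertyChange("email", this.email, email);
        this.email = email;
    }

    Set<String> getDirtyProperties() {
        return dirtyProperties;
    }
}
```

A nice property of PropertyChangeSupport is that setting a property to its current value fires no event, so the set really does contain only changed fields.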
We could also compare the working copy to the live data and find fields that are different. Some introspection and reflection code would suffice, but again that seems rather processing-intensive, not to mention the live data would need to be fetched.
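As a rough idea of that comparison approach, a naive reflection-based diff could look like the following (the Person entity is invented for illustration; real entities would also need to handle inherited fields, associations, and lazy proxies):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Naive field-by-field diff between the live entity and its working copy.
final class EntityDiff {
    static List<String> changedFields(Object live, Object workingCopy) {
        List<String> changed = new ArrayList<>();
        for (Field f : live.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                if (!Objects.equals(f.get(live), f.get(workingCopy))) {
                    changed.add(f.getName());
                }
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return changed;
    }
}

// Hypothetical entity used to demonstrate the diff.
class Person {
    String name;
    String email;

    Person(String name, String email) {
        this.name = name;
        this.email = email;
    }
}
```

The front-end could then colour exactly the fields whose names appear in the returned list, though as noted this still means fetching the live entity for every comparison.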
Maybe I'm missing something simple, or someone knows of a wonderful JPA/Hibernate feature I can use. Even if I can't avoid making one or more separate database tables for storing such information until it is applied to the live data, some best practices or real-life experience with this scenario would be very useful.
I realize it's a semi-open question but surely other people must have encountered a requirement like this. Any good suggestion is appreciated, and any pointer to a ready-made solution would be a good candidate as accepted answer.
Maybe you can use Hibernate's flush-entity event listener. The dirty properties are calculated before the flush, so you can store them somewhere in your database.
Sample code that uses Hibernate's dirty-properties feature may give you an idea.
I'm implementing an application that need to keep track of the data changes, so we're going to version the entities.
One of the requirements is that when the user starts an edit, a "local" copy is created which the user will be editing. This copy actually has to be stored on the server (so not really local) so that if the user pauses the work she can retrieve it later, even from a different machine.
When happy with the changes she saves and the changes become visible to everyone. A new version number is assigned to the edited entity and to the entities referencing it (they need now to refer to the latest version).
A further step is to publish the data. In this case the version is frozen: no more edits allowed. Any other change would be against a new version of the data set.
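The lifecycle described above (a server-side working copy, then a saved version visible to everyone, then a frozen published version) can be sketched as a small state machine; all names are invented for illustration:

```java
// DRAFT -> SAVED (new version number, visible to all) -> PUBLISHED (frozen).
class VersionedEntity {
    enum State { DRAFT, SAVED, PUBLISHED }

    private State state = State.DRAFT;
    private int version = 0;

    void save() {
        if (state == State.PUBLISHED) {
            throw new IllegalStateException("published versions are frozen");
        }
        version++; // the new version becomes visible to everyone
        state = State.SAVED;
    }

    void publish() {
        if (state != State.SAVED) {
            throw new IllegalStateException("only saved data can be published");
        }
        state = State.PUBLISHED; // no more edits against this version
    }

    void edit() {
        if (state == State.PUBLISHED) {
            throw new IllegalStateException("start a new version instead");
        }
        state = State.DRAFT; // back to a server-side working copy
    }

    int version() {
        return version;
    }
}
```

Propagating the new version number to referencing entities, as the requirements demand, would be an extra step on top of this; the sketch only captures the freeze semantics.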
I initially thought of using a VCS (e.g. Git) but the "2-step" commit complicates things a bit (not that they were that easy before). Also, I don't know of any VCS that stores the "work-in-progress".
Is there any tool, framework or library that implements this or something similar?
I don't understand the concept of automatic migration.
Having set AutomaticMigrationsEnabled = true; in the Migrations.Configuration class I can't find the place where migration steps are stored.
How will Entity Framework recognize the current state of a production database and update it accordingly when, e.g., my console application is run at the customers' office?
Any information on this is very appreciated.
To answer your first question: they aren't stored anywhere. Automatic migrations simply mean that the migration takes place without you having to do anything about it; a migration file is only generated when you do a manual migration. The only trace an automatic migration leaves is a new record in the __MigrationHistory table of your database, which is just a serialized version of the new model, not a record of what your changes were.
To answer your second question: You shouldn't have to. Once you're in production, your client shouldn't be able to adjust the database themselves. That's just a terrible idea.
We have recently begun using Entity Framework for accessing all the various databases we touch on a regular basis. We've established a collection of library projects, one for each of these. For many of them, we're accessing established databases that do not change, and using DB first works just great.
For some projects, though, we're developing evolving databases that are having new fields and tables added periodically. Because these libraries are used by multiple projects (at the moment, just two, but eventually many more), running a migration on the production database necessitates a republish of both/all sites that use that particular DB's library. Failure to update the library on any other site of course produces the error that the model backing the context has changed.
How can we effectively manage the deployment/update of the Code-First libraries to all of the sites that use them each time a change to the database is made?
A year later, here's what we came up with and have been using.
We now include the following line in the Application_Start() method:
Database.SetInitializer<EFLib.MyHousing.MyHousingMVCContext>(null);
This causes it not to throw a fit if the current database model doesn't exactly match what's in the code. While there is still potential for problems if non-backward-compatible changes are made, this allows for new functionality to be added without the need to re-deploy every site that uses these libraries when the affecting changes are not relevant to that particular site.
I have an application I'm building in Play! Framework with a bunch of data that I would like to track changes to. In an enterprise solution, I would likely use database triggers to copy the changes to a historical table. I am not familiar with a similar paradigm in Play!/JPA, but maybe I'm missing something. Is there a decent way to do this other than creating a copy of all of my entities and manually copying the data from the old/unchanged record to the historical one, then saving the changes to the original model?
If keeping every data change is critical, I would stick with the triggers. Because the database itself performs the updates, there is no possible clock skew across the cluster running the web app, and if a non-JPA client accesses the database, its updates are captured as well.
However, if you are not so concerned about those issues, then I would suggest the JPA entity lifecycle callbacks (via EntityListeners), such as:
@PrePersist
@PreUpdate
@PreRemove
@PostPersist
@PostUpdate
@PostRemove
Here you can find examples of how to use an EntityListener.
If you use EclipseLink JPA, you can enable history support.
See:
http://wiki.eclipse.org/EclipseLink/Examples/JPA/History