I have an application I'm building in Play! Framework with a bunch of data that I would like to track changes to. In an enterprise solution, I would likely use database triggers to copy the changes to a historical table. I am not familiar with a similar paradigm in Play!/JPA, but maybe I'm missing something. Is there a decent way to do this other than creating a copy of all of my entities and manually copying the data from the old/unchanged record to the historical one before saving the changes to the original model?
If the data is critical and you must capture every change, I would stick with triggers. Because the database itself performs the updates, there is no possible clock skew across the cluster running the web app, and changes made by non-JPA clients are captured as well.
However, if you are less worried about those concerns, I would suggest JPA's lifecycle callbacks via EntityListeners, such as:
@PrePersist
@PreUpdate
@PreRemove
@PostPersist
@PostUpdate
@PostRemove
Examples of how to use an EntityListener are easy to find online.
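For instance, here is a minimal sketch of an audit listener using the post-event callbacks; the entity and table wording in the comments are assumptions, not a Play!-specific API:

import javax.persistence.PostPersist;
import javax.persistence.PostRemove;
import javax.persistence.PostUpdate;

public class AuditListener {

    @PostPersist
    public void created(Object entity) { log("CREATE", entity); }

    @PostUpdate
    public void updated(Object entity) { log("UPDATE", entity); }

    @PostRemove
    public void removed(Object entity) { log("DELETE", entity); }

    private void log(String action, Object entity) {
        // Write a row to your history table here (e.g. via JDBC or a
        // separate EntityManager). Persisting through the same
        // EntityManager from inside a callback is generally unsafe.
        System.out.printf("%s %s%n", action, entity);
    }
}

// Attach it to an entity with:
// @Entity
// @EntityListeners(AuditListener.class)
// public class Person { ... }

The Post* callbacks fire after the corresponding SQL has been issued, which is usually what you want for history rows.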
If you use EclipseLink JPA, you can enable history support.
See http://wiki.eclipse.org/EclipseLink/Examples/JPA/History
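Following the pattern on that wiki page, history support is enabled per entity with a descriptor customizer; a minimal sketch (the PERSON_HIST table and column names are assumptions):

import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.history.HistoryPolicy;

public class PersonHistoryCustomizer implements DescriptorCustomizer {
    public void customize(ClassDescriptor descriptor) {
        HistoryPolicy policy = new HistoryPolicy();
        // Mirror table holding one row per version of the entity.
        policy.addHistoryTableName("PERSON_HIST");
        // Extra columns recording each row's validity interval.
        policy.addStartFieldName("START_DATE");
        policy.addEndFieldName("END_DATE");
        descriptor.setHistoryPolicy(policy);
    }
}

// Registered on the entity with EclipseLink's
// @Customizer(PersonHistoryCustomizer.class) annotation.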
Related
Currently, I'm working on a Java EE project with some non-trivial requirements regarding persistence management. Changes to entities by users first need to be applied to some working copy before being validated, after which they are applied to the "live data". Any changes on that live data also need to have some record of them, to allow auditing.
The entities are managed via JPA, and Hibernate will be used as provider. That is a given, so we don't shy away from Hibernate-specific stuff. For the first requirement, two persistence units are used. One maps the entities to the "live data" tables, the other to the "working copy" tables. For the second requirement, we're going to use Hibernate Envers, a good fit for our use-case.
So far so good. Now, when users view the data on the (web-based) front-end, it would be very useful to be able to indicate which fields were changed in the working copy compared to the live data. A different colour would suffice. For this, we need some way of knowing which properties were altered. My question is, what would be a good way to go about this?
Using the JavaBeans API, a PropertyChangeListener could suffice to be notified of any changes in an entity of the working copy and keep a set of them. But the set would also need to be persisted, since the application could be restarted and changes can be long-lived before they're validated and applied to the live data. And applying the changes on the live data to obtain the working copy every time it is needed isn't feasible (hence the two persistence units).
We could also compare the working copy to the live data and find fields that are different. Some introspection and reflection code would suffice, but again that seems rather processing-intensive, not to mention the live data would need to be fetched.
Maybe I'm missing something simple, or someone knows of a wonderful JPA/Hibernate feature I can use. Even if I can't avoid creating a separate database table (or tables) for storing such information until it is applied to the live data, some best practices or real-life experience with this scenario would be very useful.
I realize it's a semi-open question but surely other people must have encountered a requirement like this. Any good suggestion is appreciated, and any pointer to a ready-made solution would be a good candidate as accepted answer.
Maybe you can use the Hibernate flush entity event listener. The dirty properties are calculated before the flush. You can store them somewhere in your database.
Here is some sample code using the dirty-properties feature of Hibernate, which may give you an idea.
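As an illustration, here is a minimal sketch using a Hibernate Interceptor instead; its onFlushDirty callback is a closely related mechanism that hands you the old and new state of each dirty entity at flush time (where you store the property names is up to you):

import java.io.Serializable;
import java.util.Objects;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class DirtyPropertyInterceptor extends EmptyInterceptor {

    @Override
    public boolean onFlushDirty(Object entity, Serializable id,
                                Object[] currentState, Object[] previousState,
                                String[] propertyNames, Type[] types) {
        if (previousState == null) {
            return false; // no snapshot available (e.g. detached entity)
        }
        for (int i = 0; i < propertyNames.length; i++) {
            if (!Objects.equals(currentState[i], previousState[i])) {
                // Record the entity and propertyNames[i] in a tracking table here.
                System.out.printf("%s.%s changed%n",
                        entity.getClass().getSimpleName(), propertyNames[i]);
            }
        }
        return false; // we did not modify the state
    }
}

// Registered via Configuration.setInterceptor(new DirtyPropertyInterceptor())
// when building the SessionFactory.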
I have a concern that I would like some input on to see if there is a solution. We have an ecosystem of about 30 web applications that all connect to the same database. We are going to be updating the applications and with that we are going to be moving to a new database schema. I have a process that will be pulling all the old data into the new schema using EF Code-First for the new schema.
Today I ran into an issue where two branches of the solution I wrote to migrate the data had slightly different schemas (one branch put a MaxLength on one field and an index on another). The database was out of sync with one branch but up to date with the other, and the out-of-sync branch would not run until I ran Update-Database.
I was thinking about putting the code-first poco classes into a library along with the DataContext to be able to be used by the various web apps. I would then make this library available to the team using an internal NuGet server.
My concern is that if the schema changes (doesn't happen very often, but it does happen) what happens to all of the web applications that are relying on this library for data connectivity? Would they all break (my assumption is that they would)? If so, all of the production web apps would be down. Is there a way to get around this?
We have recently begun using Entity Framework for accessing all the various databases we touch on a regular basis. We've established a collection of library projects, one for each of these. For many of them, we're accessing established databases that do not change, and using DB first works just great.
For some projects, though, we're developing evolving databases that are having new fields and tables added periodically. Because these libraries are used by multiple projects (at the moment, just two, but eventually many more), running a migration on the production database necessitates a republish of both/all sites that use that particular DB's library. Failure to update the library on any other site of course produces the error that the model backing the context has changed.
How can we effectively manage the deployment/update of the Code-First libraries to all of the sites that use them each time a change to the database is made?
A year later, here's what we came up with and have been using.
We now include the following line in the Application_Start() method:
Database.SetInitializer<EFLib.MyHousing.MyHousingMVCContext>(null);
This causes it not to throw a fit if the current database model doesn't exactly match what's in the code. While there is still potential for problems if non-backward-compatible changes are made, this allows new functionality to be added without re-deploying every site that uses these libraries when the changes are not relevant to that particular site.
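For context, a minimal sketch of where that call sits, assuming a standard ASP.NET Global.asax (the context type is the one from the answer above):

using System.Data.Entity;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // A null initializer disables database creation and the
        // model-compatibility check for this context, so schema
        // drift no longer throws at startup.
        Database.SetInitializer<EFLib.MyHousing.MyHousingMVCContext>(null);
    }
}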
Our product has to interface with multiple client/partner systems. For example, when a person is added/updated we have to notify a 3rd-party system of the change, for example by calling a web service or creating an XML file in a folder.
We need a "hook" after SaveChanges has successfully persisted changes in the database.
Lots of information can be found about how to execute business logic when saving changes (before changes are persisted in the database), but less about executing logic after changes are persisted.
After investigating, I am thinking of using the following:
// Persist data without accepting the changes
ctx.SaveChanges(false);
// TODO: execute business logic that reads the data changes
// Accept the changes and mark entities as Unchanged
ctx.AcceptAllChanges();
Does anyone have a better solution for this scenario?
I know this question is a bit old, but figure that I would add this for anyone else searching on this topic.
I would recommend checking out EFHooks. The official version is a bit stale (it targets .NET 4 only), but I have forked the project and published a new NuGet package, VisoftInc.EFHooks.
You can read more about both packages in this post: http://blogs.visoftinc.com/2013/05/27/hooking-into-ef-with-efhooks/
Basically, EFHooks will let you hook into EF (e.g. PostInsert or PostUpdate) and run some code. The hooks are Pre/Post for Insert/Update/Delete.
One thing to note is that this is based on the DbContext, so if you are still using the ObjectContext for EF, this solution won't work for you.
You can override SaveChanges, then update the 3rd-party systems at the same time as you save the data to the database.
see: http://thedatafarm.com/blog/data-access/objectcontext-savechanges-is-now-virtual-overridable-in-ef4/
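A minimal sketch of that approach against a DbContext (the linked post covers the ObjectContext equivalent); NotifyThirdParty is a hypothetical helper standing in for your web-service call or XML export:

using System.Data.Entity;
using System.Linq;

public class MyContext : DbContext
{
    public override int SaveChanges()
    {
        // Snapshot added/modified entities before saving, since
        // their tracked state is reset once the save completes.
        var changed = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified)
            .Select(e => e.Entity)
            .ToList();

        var result = base.SaveChanges();

        // Notify only after the save has succeeded.
        foreach (var entity in changed)
            NotifyThirdParty(entity);

        return result;
    }

    private void NotifyThirdParty(object entity)
    {
        // Call the web service / write the XML file here.
    }
}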
I have a system where there are two identical databases. One is for back-of-house work, where data is imported, edited, and generally worked on. Once the data in the first database is as required, it is copied to the second database, which drives a public-facing (read-only) site.
So, once a month or so, I will need to push data from one database to the other. I'd like to drive all this with EF. Is that reasonable? Can EF do this kind of thing, or will I get stuck part way down the line?
It's probably doable, but frankly, EF (or any other ORM) is not really suited for this kind of task. If you do decide to implement your synchronization tool with EF, at least make sure to turn off change tracking.
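For example, a minimal sketch of a no-tracking copy, assuming a hypothetical SiteContext with a People set and that the target tables are cleared beforehand:

using (var source = new SiteContext("BackOfHouseDb"))
using (var target = new SiteContext("PublicSiteDb"))
{
    // Skip change-detection overhead while bulk-adding.
    target.Configuration.AutoDetectChangesEnabled = false;

    // AsNoTracking avoids filling the source context's change tracker.
    foreach (var person in source.People.AsNoTracking())
    {
        target.People.Add(person);
    }
    target.SaveChanges();
}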
I wouldn't dismiss Yuri's suggestion (simply using a scheduled backup/restore), if the databases are really identical. It's certainly the easiest to implement!
Another solution would be to use a data synchronization tool, like SQL Server Integration Services.