I'm working on a Seam project and have the following problem: there is an AJAX edit form, and many interactions with this form affect (mutate) the underlying entity, with the changes written to the database immediately. However, I want those changes persisted to the database only when the user presses the "Save" button. I'm thinking about detaching the entity to accomplish this but wonder how (I'm also open to smarter solutions).
The changes that you are making to a managed entity are immediately reflected, keeping it synchronized with the database. To detach an entity, you can use entityManager.detach(object); entityManager.clear() also works, but it detaches all managed entities.
The EntityManager's flush mode is FlushModeType.AUTO by default; try FlushModeType.COMMIT instead, in which changes are flushed only on an explicit commit or flush. Then call entityManager.flush() when you want to synchronize the persistence context with the underlying database.
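To make the difference between the two flush modes concrete, here is a minimal plain-Java simulation. Note this is not real JPA: the `FakeContext` class and its methods are invented purely for illustration of write-through (AUTO-like) versus deferred (COMMIT-like) behavior.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a persistence context: AUTO writes through on every
// change, COMMIT buffers changes locally until flush() is called explicitly.
class FakeContext {
    enum FlushMode { AUTO, COMMIT }

    private final Map<String, String> database;                // simulated DB table
    private final Map<String, String> pending = new HashMap<>();
    private final FlushMode mode;

    FakeContext(Map<String, String> database, FlushMode mode) {
        this.database = database;
        this.mode = mode;
    }

    void update(String key, String value) {
        if (mode == FlushMode.AUTO) {
            database.put(key, value);   // synchronized with the DB immediately
        } else {
            pending.put(key, value);    // deferred until flush()
        }
    }

    void flush() {
        database.putAll(pending);       // push buffered changes to the DB
        pending.clear();
    }
}
```

With COMMIT-style buffering, pressing "Save" maps naturally onto a single `flush()` call, while a cancel simply discards the pending changes.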
How does something such as Entity Framework track changes to its data when changes could originate from other sources? For example: when there is a cluster of the same ASP.NET Core app running, if one instance updates a record that is being tracked on a different instance, and that second instance receives a GET request, wouldn't it send out-of-date data?
Basically, how do ORMs preserve ACIDity if they perform local change tracking?
It helps to think of EF contexts, and their local caching especially, as short-lived. When you read an entity, that entity's "lifespan" should be thought of as matching the lifespan of the DbContext that originated it. Beyond that lifespan, the object is effectively just another potentially stale copy of the data. Even within that lifespan it does not synchronize with the underlying data source, so the point of truth is when SaveChanges is called. The caching EF provides is more for this scenario: "I'm going to load some entities, and those entities reference other entities. As the code iterates over the entities and comes across a reference to something else, EF checks whether that something else has already been loaded and serves it before going to the DB." In that sense, a long-lived DbContext is a bad thing, because some of that cached data could be quite old and stale, and as the DbContext loads more data, sifting through these tracked entities gets slower and the context consumes more memory.
In web applications, the DbContext is typically scoped to a single request, or shorter than that (a unit of work). This means that edits on concurrently handled requests aren't notified of each other's changes, and neither request sees changes made by other sources between the time those request contexts loaded their data and prepared to save. EF can be told what to check for concurrent changes, normally a row-version timestamp, and can block an update where this check fails. Beyond that, it is the developer who has to determine what action to take. This often means catching a concurrency fault and then handing off to an appropriate handler to log the details and notify the user. That could be:

a first-in-wins scenario, where the user is notified that their changes failed and should try again (with the refreshed data provided);

a last-in-wins scenario, where the user is warned that there have been changes but can overwrite (hopefully with the event logged in case there are disputes or questions);

a merge, where the system inspects the changes, presents any conflicts, and lets the user review and adjust, accept, or cancel their update.
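The row-version check described above can be sketched in a few lines of plain Java. This is a simulation of the general optimistic-concurrency idea, not EF's actual implementation; the class and method names are invented:

```java
// Simulates optimistic concurrency: an update only succeeds if the row
// version the client originally read still matches the stored version.
class VersionedRow {
    String value;
    int version;

    VersionedRow(String value, int version) {
        this.value = value;
        this.version = version;
    }

    // Returns true if the update applied, false on a concurrency conflict.
    boolean tryUpdate(String newValue, int expectedVersion) {
        if (version != expectedVersion) {
            return false;               // someone else saved first
        }
        value = newValue;
        version++;                      // bump the row version on every write
        return true;
    }
}
```

If two sessions read version 1 and both try to save, the first save wins and bumps the version; the second sees a stale version and is rejected, which is the point where the application decides between first-in-wins, last-in-wins, or merge.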
EF can help detect this, but ultimately the developer has to code for what to do about it.
In terms of detecting concurrent edits as they happen, that requires deliberate coding to do things like communicating changes between sessions (publish/subscribe) where each session listens for updates to entities it's actively working on, and broadcasting changes to entities as it updates them. To detect possible other changes to data by other sources means another process to listen for DB updates (beyond changes it already knows about made by the system) and broadcasting those change notifications to any active sessions. Certainly a very cool thing to see working in action, but the cost & complexity that it introduces has to be justified beyond just handling concurrency issues on save. :)
I'm developing a Java EE application based on JSF 2 and GlassFish 3.1. The application has to provide CRUD operations on a database table. The table data are shown using a PrimeFaces dataTable with in-cell editing behaviour.
I would like to let the user:
modify, add, and remove elements in the table,
commit only when the user is sure of the changes (by pressing a command button), and
roll back when the user wants to discard the changes.
The table is backed by an entity, and a stateless EJB handles the interaction with the EntityManager to access the database.
The problem is that every time I remove a row in the table, the transaction gets committed without any control left to the user.
How can I implement this kind of user control over commits/rollbacks?
The problem is that every time I remove a row in the table, the transaction gets committed without any control left to the user.
This sounds strange. If you're editing the data in the data table and posting back those edits (via AJAX or directly), AND the data originates from a stateless bean, then it can't be anything other than that you're working on detached entities.
Changes to those entities, or to the list that contains them, are not automatically reflected in the database. There is no concept of a transaction that gets automatically committed in that case. I strongly disagree with the answer given by Nayan above: yes, user transactions let you control the commit, but that doesn't seem to be your problem at all.
Although you should probably show some code, my guess is that you're simply calling a delete method of some sort on your EJB service after every remove action from the user, and then expecting the same transaction and persistence context to still be there. But in a stateless bean those are gone as soon as you exit the method that gave you the entities in the first place.
Your best strategy would be to let the user's actions operate only on the data that is cached by the @ViewScoped backing bean. Then, if and only if the user confirms the update action, you call your EJB service with all the changed items in one go. If there is a parent entity that references a list containing, among others, all the items the user deleted, you only have to pass this parent entity and make sure cascade remove is set on the relation.
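The first strategy can be sketched in plain Java. The class and method names here are invented for illustration (this is not a PrimeFaces or EJB API): the backing bean stages removals locally and only hands the whole batch to the service when the user confirms.

```java
import java.util.ArrayList;
import java.util.List;

// Backing-bean-style holder: removals are only staged locally until save().
class EditSession<T> {
    private final List<T> rows;                         // rows shown in the table
    private final List<T> pendingRemovals = new ArrayList<>();

    EditSession(List<T> rows) {
        this.rows = new ArrayList<>(rows);
    }

    void remove(T row) {                                // user clicks "remove"
        rows.remove(row);
        pendingRemovals.add(row);
    }

    List<T> save() {                                    // user confirms: return the
        List<T> batch = new ArrayList<>(pendingRemovals); // batch for the EJB service
        pendingRemovals.clear();
        return batch;
    }

    void cancel() {                                     // user discards: restore rows
        rows.addAll(pendingRemovals);
        pendingRemovals.clear();
    }

    List<T> rows() { return rows; }
}
```

The view always renders `rows()`, so the user sees the removal immediately, while the database is only touched with the list returned by `save()`.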
That said, there IS support for the pattern you seem to think you were already getting. This pattern involves using a @Stateful session bean and an extended persistence context. In that case the session bean's persistence context will cache all your changes until you associate it with a transaction again. If you do your delete actions in a non-transactional method of the session bean, implement your cancel method also as non-transactional with a @Remove annotation and an entityManager.clear(), and implement your save method as a transactional @Remove method (it doesn't need to do anything in its body), then you'll get this effect.
Unless you have a firm grasp of EJB and transactions you'd best go with the first strategy though.
The user operations are being reflected in the database frequently, leading to unnecessary database calls and increased traffic. Instead, you should commit only after the final confirmation from the user.
The changes should be made on a copy (clone) of the object; let the user operate on that. When the user wants to save the changes, you persist the modified copy. If the changes are to be discarded, the initial state is still available in the original object.
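A minimal sketch of that copy-and-apply pattern in plain Java (the `Profile` class and its methods are invented for illustration):

```java
// Edit a detached copy; the original only changes when the user saves.
class Profile {
    String name;

    Profile(String name) { this.name = name; }

    Profile copy() { return new Profile(name); }   // working copy for the UI

    void applyFrom(Profile edited) {               // "save": copy changes back
        this.name = edited.name;
    }
}
```

Discarding is free: simply drop the working copy and the original is untouched. For real entities you would deep-copy every editable field, or use a mapper, before handing the copy to the view.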
For your current approach :
The problem is that every time I remove a row in the table, the transaction gets committed without any control left to the user.
It's because the default flush mode is FlushModeType.AUTO; you can change it to FlushModeType.COMMIT.
How can I implement this kind of user control over commits/rollbacks?
With bean-managed transactions using the UserTransaction interface, you can control transaction begin, commit, rollback, etc.
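The begin/commit/rollback contract can be illustrated with a plain-Java simulation over an in-memory map. This is not JTA and `ManualTx` is an invented class; it only mirrors the shape of explicit transaction boundaries:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java illustration of explicit transaction boundaries, loosely
// mirroring UserTransaction's begin()/commit()/rollback() contract.
class ManualTx {
    private final Map<String, String> store;   // simulated table
    private Map<String, String> snapshot;      // state captured at begin()

    ManualTx(Map<String, String> store) { this.store = store; }

    void begin()    { snapshot = new HashMap<>(store); }
    void commit()   { snapshot = null; }       // keep the changes
    void rollback() {                          // restore the begin() snapshot
        store.clear();
        store.putAll(snapshot);
        snapshot = null;
    }
}
```

A real UserTransaction delegates this work to the transaction manager and the database, but the control flow your code sees, begin, do work, then either commit or rollback, is the same.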
Edit: According to JSR-317, whether a managed entity gets persisted to the database immediately or deferred until later is implementation-specific:
If a transaction is active, a compliant implementation of this specification is permitted to write to the database immediately (i.e., whenever a managed entity is updated, created, and/or removed), however, the configuration of an implementation to require such non-deferred database writes is outside the scope of this specification.
I have this application that is actually two applications: a web application and a console application. The console application is used as a scheduled task on the Windows machine and is executed three times a day to do some recurring work. Both applications use the same model and repository, which are placed in a separate project (class library). The problem is that when the console application needs to make some changes to the database, it updates the model entity and saves the changes, but the context in the web application is unaware of this, so the object context is not refreshed with the new/updated data and the user of the application cannot see the changes.
My question is: is there a way to tell the ObjectContext to always load data from the database, either for the whole ObjectContext or for a specific query?
I don't think you should have this problem in a web application. The ObjectContext in a web application should be created per request, so only the requests being processed during the update should be affected.
Anyway, there are a few methods which can force the ObjectContext to reload data. Queries and load functions allow passing a MergeOption, which should be able to overwrite current data. But the most interesting should be the Refresh method, especially for this application.
By using a DbSet you can also make use of the .AsNoTracking() method.
Whenever you run something like
context.Entities.FirstOrDefault()
or any other query against the context, the data is actually fetched from the database, so you shouldn't be having a problem.
What is your ObjectContext's lifetime in the web app? The ObjectContext is a unit of work, so it should be created only to fetch/write/update data and disposed of quickly afterwards.
You can find a similar question here:
Refresh ObjectContext or recreate it to reflect changes made to the database?
FWIW, creating a new (anonymous) object in the query also forces a round trip to the database:
' queries from memory
context.Entities.FirstOrDefault()
' queries from db
context.Entities.Select(Function(x) New With {x.ID, x.Name}).FirstOrDefault()
Please forgive the VB, it's my native language :)
I'm using Spring MVC 3.0 with EclipseLink and JPA, and I'm experiencing the following issue. I have this field in my app:
@OneToMany(mappedBy = "stadium")
private Set<FootballMatch> footballMatches = new HashSet<FootballMatch>();
When some action is triggered that adds new items into the set via Set.add(), the changes don't show up in the browser, although there are new rows in the database. When I clear the browser cache, nothing happens. I have discovered that one way to force loading the new values is to redeploy the app, even without changing a single line. So the app is returning old values, caching them somewhere. How do I force it to always load updated values?
I was thinking of something like getLastModified(), but I don't know where I should implement it.
Since it is a bidirectional relationship, ensure that you are setting both sides of the relationship.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Relationships#Object_corruption.2C_one_side_of_the_relationship_is_not_updated_after_updating_the_other_side
Also ensure you are changing, merging, committing the correct objects.
Try using EntityManager.refresh() to refresh the object; do you get the updated values? If you do not, then your database does not have the data.
Are you caching the EntityManager in your application? If you have an old EntityManager around it will not see new changes. Normally a new EntityManager should be created for each request/transaction.
EclipseLink maintains an object cache by default; this can be disabled using the persistence.xml property:
"eclipselink.caching.shared"="false"
See,
http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
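For reference, the property is set in persistence.xml inside your persistence unit, roughly like this (the unit name is a placeholder):

```xml
<persistence-unit name="myUnit">
  <properties>
    <!-- Disable EclipseLink's shared (second-level) object cache -->
    <property name="eclipselink.caching.shared" value="false"/>
  </properties>
</persistence-unit>
```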
I am trying to build an MVC application with Entity Framework and the Repository pattern.
In this application, an end user may modify different entities' data through multiple HTTP requests during their session (kind of like wizard pages).
However, they do not commit these modifications until a final commit button is clicked.
They also have the option to leave, in which case their work should be rolled back.
I am wondering what would happen if two users are doing the same thing and one of them clicks the commit button.
I guess changes made by both users are committed!
I guess I need to create an object context per user connection or per session.
Your comments are very much welcome.
The context should be used only once for fetching the data initially and once for persisting.
(No long-lived 'multi-http-request' contexts).
So what you do is this:
1. Create context, fetch data, dispose of context.
2. Manage user changes to the data across multiple requests in whatever way you like (without using the context), e.g. Session, hidden fields, etc.
3. Create context, persist modified entities, dispose of context.
Regarding step 2: I recommend using specific objects (ViewModels) rather than EntityObjects in the views for user interaction.