How do I implement a lock system for the ABAP persistency service?

I read somewhere that there is no lock system included yet. That information was from 2009, six years ago. Has this been implemented in the meantime? If not, how can I implement a lock system myself?
How can I ensure that nobody else makes changes between my select and my update? I do not want to acquire a lock after every getter call, since that would block the entire system.

Application-level locking is still not included (and for a good reason - you usually need to lock complex business objects and not the individual entities they consist of). Just use the enqueue objects like in any other ABAP application (and preferably encapsulate them).

Related

Best way to keep data in sync between two different applications

I have two closed-source applications that must share the same data at some point. Both expose REST APIs.
A concrete example is helpdesk tickets: they can be created in both applications, and I need to update the data in one application when a user adds or closes a ticket in the other, and vice versa.
Since they are closed-source, I can't really modify the code.
I was thinking I could create a third application that, every 5 minutes or so, lists both applications' tickets, compares them with the result of the previous call, and if the data differs, updates the other application accordingly.
Is there a better way of doing this?
With closed-source applications it's nearly impossible to get something out of them, unless they have some plugin-based setup that you can hook into.
The most efficient way in terms of cost would be to have the first application publish a message on a queue, or call a webhook that you set up, whenever the event is triggered. But as I mentioned, the application needs to support that.
So yeah, your solution is pretty much everything you can do for now, but keep in mind the challenges that you may encounter over time:
What if the results of both APIs are too large to be compared directly? Maybe you need to think about paging the results.
What if your app crashes and you lose the previous state? You need to back it up somehow in an external store.
How often should you poll the APIs to make sure you're getting the updates you need, while keeping good performance for the existing traffic?
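To make the polling approach concrete, here is a minimal Java sketch of such a third application. The TicketClient interface, the Ticket record and the 5-minute interval are invented placeholders for whatever REST clients and schedule you actually end up with; persisting the previous snapshot somewhere external (file or database) would also address the crash scenario mentioned above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical REST client wrapping one of the two closed-source applications.
interface TicketClient {
    Map<String, Ticket> listTickets();    // key = ticket id
    void upsertTicket(Ticket ticket);     // create or update the ticket on this side
}

record Ticket(String id, String status, long lastModified) {}

public class TicketSyncJob {
    private final TicketClient appA;
    private final TicketClient appB;
    // Previous snapshots; in a real system persist these externally so a crash
    // does not lose the last known state.
    private Map<String, Ticket> lastA = new HashMap<>();
    private Map<String, Ticket> lastB = new HashMap<>();

    public TicketSyncJob(TicketClient appA, TicketClient appB) {
        this.appA = appA;
        this.appB = appB;
    }

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::syncOnce, 0, 5, TimeUnit.MINUTES);
    }

    void syncOnce() {
        Map<String, Ticket> currentA = appA.listTickets();
        Map<String, Ticket> currentB = appB.listTickets();
        pushChanges(currentA, lastA, appB);   // A's new/changed tickets -> B
        pushChanges(currentB, lastB, appA);   // B's new/changed tickets -> A
        lastA = currentA;
        lastB = currentB;
    }

    private void pushChanges(Map<String, Ticket> current, Map<String, Ticket> previous,
                             TicketClient target) {
        for (Ticket ticket : current.values()) {
            Ticket old = previous.get(ticket.id());
            if (old == null || old.lastModified() != ticket.lastModified()) {
                target.upsertTicket(ticket);  // new or changed since the last poll
            }
        }
    }
}
```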

Managing Concurrent Access

I have an application that manages a list of employees. Users (admins) in the application can create and edit those employees. I want to lock edit access to an employee while another user is editing it.
I found that I can use optimistic concurrency, so when a second user tries to edit it, his change is not accepted. The disadvantage of this solution is that the user can waste time editing the employee (especially if there are many parameters to edit), and only when he clicks the button does he get the new version edited by the user before him.
So I am searching for a way to manage concurrent access in the code and not in JPA. For example, if a user wants to open the edit page of employee X, he should receive a message that user ADMIN2 is editing this employee right now, and he should not be able to edit the employee while ADMIN2 is still editing it.
Are there any standards for managing this kind of concurrent access? If not, how do you think I can manage it?
There is no built-in way to do this in JPA. JPA 2 does support pessimistic locking, but that is a concept tied to transactions, and therefore not what you need.
Also, you don't actually want to do this. If you were around 10+ years ago when (some) source control systems used pessimistic locking (good old SourceSafe), you will know why this is a bad idea compared to modern-day Git.
What you really need is a way to merge the concurrent changes, just like resolving a Git merge conflict. Instead of throwing away the user's changes (when the optimistic lock check fails on save), you send his modified version and the current version from the database back to the UI, let the user merge the two versions, and save again.
You could also go all out on history/auditing: both EclipseLink and Hibernate have a way of storing multiple versions of the same entity (basically like Git does), so you can track changes. If you know your way around JPA and have a good UX designer, it is possible to build a system that works much better than any pessimistic locking will - and even if you try to build pessimistic locking using an 'editing' column, you will still need optimistic locking in case two users click the edit button for the same resource concurrently. ;-)
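To illustrate the optimistic-locking-plus-merge idea in JPA terms, here is a minimal sketch. The Employee entity and the service method are invented for this example (and the import prefix may be javax.persistence rather than jakarta.persistence depending on your JPA version); the essential parts are the @Version field and catching OptimisticLockException so the UI can offer a merge instead of silently discarding the edit.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;
import jakarta.persistence.OptimisticLockException;
import jakarta.persistence.Version;

@Entity
class Employee {
    @Id
    Long id;
    String name;
    String department;

    @Version       // JPA increments this on every successful update and
    long version;  // rejects stale writes with an OptimisticLockException
}

class EmployeeService {

    /**
     * Tries to save the user's edited copy. Returns null on success; on a
     * conflict it returns the current database version so the UI can show
     * both copies and let the user merge them instead of losing his edit.
     */
    Employee saveOrReturnConflict(EntityManager em, Employee edited) {
        try {
            em.getTransaction().begin();
            em.merge(edited);
            em.flush();                    // forces the UPDATE, so a stale version fails here
            em.getTransaction().commit();
            return null;                   // saved cleanly
        } catch (OptimisticLockException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            em.clear();                                   // drop the stale state
            return em.find(Employee.class, edited.id);    // hand back for merging in the UI
        }
    }
}
```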
If you really want to implement what you describe, I would add an editing column to the corresponding table where you mark that editing has started. Then, before a user starts editing, you check this boolean and show your message. In other words: such locking at the DB level is hardly possible across multiple transactions and would be dangerous (I assume that editing the employee is a long process, far from being handled inside a single transaction), so you best implement it yourself (and do not forget to clear the editing flag after saving or after cancelling).
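If you do go down the 'editing' column road, a sketch of the claim/release logic could look like the following. The entity, column and method names are invented for illustration; note that the @Version field is still present, precisely because two admins might click the edit button at the same moment.

```java
import java.time.Instant;
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;
import jakarta.persistence.OptimisticLockException;
import jakarta.persistence.Version;

@Entity
class EditableEmployee {
    @Id
    Long id;
    String name;

    String editingBy;        // e.g. "ADMIN2" while that admin holds the record
    Instant editingSince;    // lets you expire stale claims (crashes, closed tabs)

    @Version
    long version;            // protects the claim itself against a concurrent claim
}

class EditLockService {

    /**
     * Marks the record as "being edited" by the given admin. Returns the name
     * of the current editor if someone else already holds it, or null if the
     * claim succeeded. Optimistic locking catches the case where two admins
     * click "edit" at exactly the same time.
     */
    String claimEdit(EntityManager em, Long employeeId, String admin) {
        try {
            em.getTransaction().begin();
            EditableEmployee e = em.find(EditableEmployee.class, employeeId);
            if (e.editingBy != null && !e.editingBy.equals(admin)) {
                em.getTransaction().rollback();
                return e.editingBy;              // show "ADMIN2 is editing this employee"
            }
            e.editingBy = admin;
            e.editingSince = Instant.now();
            em.flush();
            em.getTransaction().commit();
            return null;                          // claim succeeded
        } catch (OptimisticLockException ex) {
            em.getTransaction().rollback();
            return "another user";                // lost the race for the claim
        }
    }

    /** Call this after save or cancel so the record does not stay locked forever. */
    void releaseEdit(EntityManager em, Long employeeId) {
        em.getTransaction().begin();
        EditableEmployee e = em.find(EditableEmployee.class, employeeId);
        e.editingBy = null;
        e.editingSince = null;
        em.getTransaction().commit();
    }
}
```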

How can I tell if my Entity Framework application is multi-threaded?

Ok, so I came to this company that recalled its software from an offshore, no-longer-extant entity. We all know the drill.
In looking at the nuts and bolts, I come across the 'lock' keyword. Googling, I find that Entity Framework does not support multi-threading.
My question is: How can I be 100% certain that the application is attempting to run in multiple threads? Is the existence of the 'lock' keyword enough?
Thanks.
If this is an ASP.NET/MVC web app and you see the lock keyword, that is probably because the app is hosted in IIS, and IIS dispatches different user requests on different threads, which makes the web app multi-threaded.
In the case of MVC, a controller is created per request and each request is processed on a different thread. That leads to the need to lock shared state if two users are going to access it at the same time.
If this is a desktop app and the lock sits where data access happens, it might serve a similar purpose.
The lock keyword alone is not enough evidence; they could be using it incorrectly, after all. lock just prevents more than one thread from entering the protected section at any one time. What is being protected by the lock? Data stored in a static variable is shared by all users (threads) of the app and so should have controlled access.
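The question is about C#, but the point in the answer above is language-neutral. As a rough analogue, here is a small Java sketch (the cache is invented) of why static, mutable state shared by all request threads needs a lock - synchronized here playing the role of C#'s lock - while per-request objects do not.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a static, mutable cache that every request thread can
// touch. Without the synchronized block, two threads could corrupt the map
// or observe a half-initialised entry.
public class SharedLookupCache {
    private static final Map<String, String> CACHE = new HashMap<>();
    private static final Object LOCK = new Object();

    public static String get(String key) {
        synchronized (LOCK) {                 // same role as C#'s lock (...) { ... }
            return CACHE.computeIfAbsent(key, SharedLookupCache::loadFromDatabase);
        }
    }

    private static String loadFromDatabase(String key) {
        // placeholder for the expensive lookup the cache exists to avoid
        return "value-for-" + key;
    }
}
```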

Short-lived DbContext with desktop applications and a local database

This question was inspired by an earlier question I asked here; I learned from it that DbContext instances should be short-lived dependencies. Given that I develop LOB desktop applications with local databases using SQL CE, I have a few questions:
In my case (local DB, single user, desktop app), should the DbContext really live for a short period of time?
If I disposed of my DbContext after every operation, would that make me lose all the change-tracking information gathered throughout its life cycle?
If the answer to 2 is yes (trouble!), how do I go about doing it the right way? Should I develop a UnitOfWork that keeps change-tracking information, or what?
Old question, but maybe it can help someone.
As described in this article, the lifetime of the DbContext object depends on whether it is used in a web or a desktop app.
Web Application
It is now a common best practice that for web applications, the context is used per request.
In web applications, we deal with requests that are very short but hold the whole server transaction; they are therefore the proper duration for the context to live in.
Desktop Applications
For desktop applications, like WinForms/WPF, etc., the context is used per form/dialog/page.
Since we don't want to have the context as a singleton for our application, we dispose of it when we move from one form to another.
In this way, we gain a lot of the context's abilities and won't suffer from the implications of long-running contexts.
Basically, the context should be a short-lived object, but always with the right balance.
1) Yes, short is good. But creating a new context for every single user input/interaction is the other extreme.
2) Clearly yes. But for a logical unit of work driven by a client interaction, the pattern of discarding the context afterwards fits in well. E.g. change an order: perhaps the header, items and customer are loaded, a new address is added to the customer, the order header is changed, and SaveChanges is called; a new logical interaction then starts on the client. Don't forget you can have several smaller contexts; indeed, bounded contexts are key to performance. Perhaps you have one long-running context with system config and other such settings that are non-volatile, few in number but accessed very often - I would keep such a context around for longer.
3) Not sure exactly what the question is, but a LUW (logical unit of work) type of class that has a Commit method and then disposes of the context is one such pattern (sketched after this answer).
Don't forget to generate views for your DbContexts if they are reloaded often.
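The answers above are about EF's DbContext; as a rough cross-language analogue of the LUW idea from point 3, here is what a short-lived, per-operation unit of work can look like with JPA's EntityManager. The persistence unit name "app" and the class itself are invented for illustration.

```java
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;
import java.util.function.Consumer;

// Rough analogue of the "LUW class with a Commit method" idea: the context
// (EntityManager here, DbContext in EF) is created for one logical unit of
// work and disposed at the end, so change tracking only ever spans that one
// operation.
public class UnitOfWork implements AutoCloseable {
    private static final EntityManagerFactory FACTORY =
            Persistence.createEntityManagerFactory("app");   // name is illustrative

    private final EntityManager em = FACTORY.createEntityManager();

    public EntityManager context() {
        return em;
    }

    public void commit() {
        em.getTransaction().commit();
    }

    @Override
    public void close() {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();   // nothing committed -> discard changes
        }
        em.close();                           // tracking info for this LUW is gone, by design
    }

    /** Convenience wrapper: one operation = one context = one transaction. */
    public static void run(Consumer<EntityManager> work) {
        try (UnitOfWork uow = new UnitOfWork()) {
            uow.context().getTransaction().begin();
            work.accept(uow.context());
            uow.commit();
        }
    }
}
```

A caller then wraps each logical operation, e.g. UnitOfWork.run(em -> { /* load the order, change the header, add the address */ }); the tracking information deliberately does not outlive that one operation.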

Core Data with Web Services recommended pattern?

I am writing an app for iOS that uses data provided by a web service. I am using Core Data for local storage and persistence of the data, so that some core set of the data is available to the user if the network is not reachable.
In building this app, I've been reading lots of posts about core data. While there seems to be lots out there on the mechanics of doing this, I've seen less on the general principles/patterns for this.
I am wondering if there are some good references out there for a recommended interaction model.
For example, the user will be able to create new objects in the app. Let's say the user creates a new employee object: the user will typically create it, update it, and then save it. I've seen recommendations that send each of these steps to the server - when the user creates it, and when the user changes the fields - and, if the user cancels at the end, a delete is sent to the server. A different recommendation for the same operation is to keep everything local, and only send the complete update to the server when the user saves.
This example aside, I am curious whether there are some general recommendations/patterns on how to handle CRUD operations and ensure they stay in sync between the web server and Core Data.
Thanks much.
I think the best approach in the case you mention is to store data only locally until the point the user commits the adding of the new record. Sending every field edit to the server is somewhat excessive.
A general idiom of iPhone apps is that there isn't such a thing as "Save". The user generally will expect things to be committed at some sensible point, but it isn't presented to the user as saving per se.
So, for example, imagine you have a UI that lets the user edit some sort of record that will be saved to local Core Data and also sent to the server. At the point the user exits the UI for creating a new record, they will perhaps hit a button called "Done" (N.B. not usually called "Save"). At the point they hit "Done", you'll want to kick off a Core Data write and also start a push to the remote server. The server push won't necessarily hog the UI or make them wait until it completes - it's nicer to allow them to continue using the app - but it is happening. If the update push to the server fails, you might want to signal it to the user or do something appropriate.
A good question to ask yourself when planning the granularity of writes to Core Data and/or a remote server is: what would happen if the app crashed, or the phone ran out of power, at any particular spot in the app? How much data loss could possibly occur? Good apps lower the risk of data loss and can re-launch in a state very similar to the one they were in before being exited, for whatever reason.
Be prepared to tear your hair out quite a bit. I've been working on this, and the problem is that the Core Data samples are quite simple. The minute you move to a complex model and you try to use the NSFetchedResultsController and its delegate, you bump into all sorts of problems with using multiple contexts.
I use one context to populate data from the web service in a background block, and a second for the table view to use - you'll most likely end up using a table view for a master list and a detail view.
Brush up on using blocks in Cocoa if you want to keep your app responsive whilst receiving or sending data to/from a server.
You might want to read about 'transactions' - which are basically the grouping of multiple actions/changes into a single atomic action/change. This helps avoid partial saves that might leave inconsistent data on the server.
Ultimately, this is a very big topic - especially if server data is shared across multiple clients. At the simplest, you would want to decide on basic policies. Does the last save win? Is there some notion of remotely held locks on objects in the server data store? How is a conflict resolved when two clients are, say, editing the same property of the same object?
With respect to how things are done on the iPhone, I would agree with occulus that "Done" provides a natural point for persisting changes to the server (in a separate thread).