Why am I seeing ConcurrentUpdateException during login? - atg

Ours is a large ecommerce application built on ATG. In many sessions I'm seeing a ConcurrentUpdateException during login itself, even when the user had no recent activity. This then causes other exceptions during subsequent activities.

Please check your transaction handling carefully.
The following are the best-practice steps for updating an ATG order.
When you update the order, follow these steps:
1. Acquire the transaction-level lock.
2. Begin a transaction.
3. Synchronize on the order.
4. Make your changes to the order.
5. Update the order.
6. End the synchronization.
7. End the transaction.
8. Release the lock.
Note: Avoid updating the order in commerce pipeline chains; it can lead to nested transactions.
Also check for any modifications to the OrderManager component, in particular its mergeOrders method.
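For reference, here is a minimal sketch of that sequence in Java, assuming the standard ATG APIs (atg.dtm.TransactionDemarcation, atg.service.lockmanager.ClientLockManager) and that getOrderManager(), getTransactionManager(), and getLocalLockManager() are accessors for the usual Nucleus components; treat it as a pattern, not drop-in code:

```java
import javax.transaction.TransactionManager;

import atg.commerce.order.Order;
import atg.dtm.TransactionDemarcation;
import atg.service.lockmanager.ClientLockManager;

public void updateOrderSafely(Order order, String profileId) throws Exception {
    ClientLockManager lockManager = getLocalLockManager();  // e.g. /atg/commerce/order/LocalLockManager
    TransactionDemarcation td = new TransactionDemarcation();

    lockManager.acquireWriteLock(profileId);                // 1. acquire the lock
    try {
        td.begin(getTransactionManager(),                   // 2. begin the transaction
                 TransactionDemarcation.REQUIRED);
        boolean shouldRollback = false;
        try {
            synchronized (order) {                          // 3. synchronize on the order
                // 4. make changes to the order here
                getOrderManager().updateOrder(order);       // 5. update the order
            }                                               // 6. end synchronization
        } catch (Exception e) {
            shouldRollback = true;
            throw e;
        } finally {
            td.end(shouldRollback);                         // 7. end the transaction
        }
    } finally {
        lockManager.releaseWriteLock(profileId);            // 8. release the lock
    }
}
```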


Tracking an in-progress Cadence workflow from the client

Let's say we need to generate the order after the user finalizes his/her cart.
These are our steps to generate the order:
1. generate an order in pending state (order microservice)
2. authorize the user's credit (accounting microservice)
3. set the status of the cart to closed (cart microservice)
4. approve the order (order microservice)
To do this we simply create a Cadence workflow that calls an activity for each step.
Problem 1: how can the client detect that order creation is in progress for that cart if the user opens the cart again or refreshes the page?
(Note: assume our workflow has not been executed by the worker yet.)
My solution for problem 1: create the order and change its status to pending before running the workflow, so that on subsequent requests the client knows the order is in pending status. But what happens if order creation succeeds and then starting the workflow fails? The two operations (order creation and workflow start) are not transactional.
If this is your solution as well and you accept its risk, please tell me. I'm new to Cadence.
Problem 2: how can we inform the client once the workflow is done and the order is ready?
Any solution or article or help, please?
Problem 1: There are multiple solutions to consider:
1.1 Add a step in the workflow that changes the order to the pending state before calling the order microservice, instead of doing it outside the workflow. This saves you from the consistency issue; you can add retries in the workflow to make sure the state is eventually consistent.
1.2 Add a query method to expose the workflow state; the user/UI can make a queryWorkflow call to retrieve it and see the order status (see the sketch below).
1.3 Put the state into a SearchAttribute of the workflow; the user/UI can make a describeWorkflow call to get the state.
1.4 After https://github.com/uber/cadence/issues/3729 is implemented, you can use a memo instead of a SearchAttribute as in 1.3.
Comparison: 1.1 is the choice if you consider order storage the source of truth for order state; 1.2-1.4 make the workflow the source of truth. It really depends on how you want to design the system architecture.
1.2 may not be a good choice if the user/UI expects low latency, because a query call may take a few seconds to return.
1.3-1.4 are much more performant: they only require the Cadence server to make a DB call to fetch the workflow state.
1.3 has another benefit if you have advanced visibility enabled with the ElasticSearch+Kafka setup: you can search/filter/count workflows by order state. The limitation of 1.3 is that you should only put very small data in it, like a string or an integer, not a blob of state.
The benefit of 1.4 is that you could put in a blob of state.
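As a concrete illustration of option 1.2, a minimal sketch with the Cadence Java client; the interface, method names, and status values are illustrative, not part of the original answer:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.workflow.QueryMethod;
import com.uber.cadence.workflow.WorkflowMethod;

public interface OrderWorkflow {
    @WorkflowMethod
    void checkout(String cartId);

    @QueryMethod
    String getOrderStatus();   // implementation returns a field the workflow updates at each step
}

// Client/UI side: query a workflow (running or closed) by its workflowID.
String fetchStatus(WorkflowClient workflowClient, String workflowId) {
    OrderWorkflow stub = workflowClient.newWorkflowStub(OrderWorkflow.class, workflowId);
    return stub.getOrderStatus();   // translates into a QueryWorkflow call to the server
}
```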
To prevent a user from finalizing a cart multiple times:
When starting the workflow, use a stable workflowID associated with the cart, so that you can call describeWorkflowExecution before allowing them to finalize/check out the cart. The workflow is persisted once the StartWorkflow request is accepted.
If there is an active workflow (not failed/completed/timed out), it means the cart is pending. For example, if a cart has a UUID, you can use that UUID plus a prefix as the workflowID. Cadence guarantees workflowID uniqueness, so there is no race condition where two workflows start for the same cart. If a checkout fails, the user can submit the checkout workflow again (see the sketch below).
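A hedged sketch of that dedup-by-workflowID approach with the Cadence Java client; the task list, timeout, and exception handling are assumptions:

```java
import java.time.Duration;
import com.uber.cadence.client.DuplicateWorkflowException;
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;

void startCheckout(WorkflowClient workflowClient, String cartUuid) {
    WorkflowOptions options = new WorkflowOptions.Builder()
            .setWorkflowId("cart-checkout-" + cartUuid)             // stable ID per cart
            .setTaskList("order-task-list")
            .setExecutionStartToCloseTimeout(Duration.ofMinutes(30))
            .build();
    OrderWorkflow wf = workflowClient.newWorkflowStub(OrderWorkflow.class, options);
    try {
        WorkflowClient.start(wf::checkout, cartUuid);               // asynchronous start
    } catch (DuplicateWorkflowException e) {
        // a checkout for this cart is already in flight: report "pending" to the UI
    }
}
```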
Problem 2: It depends on what you mean by "inform". The term sounds like an asynchronous notification; if that's the case, you can add another activity that sends the notification to another microservice, or send a signal to another workflow that needs the notification (sketched below).
If you mean a synchronous manner, like showing the result in a web UI, it can be solved the same way as the solutions I mentioned for problem 1.
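For the asynchronous case, a small sketch of a signal with the Cadence Java client; the interface and names are assumptions:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.workflow.SignalMethod;
import com.uber.cadence.workflow.WorkflowMethod;

public interface NotificationWorkflow {
    @WorkflowMethod
    void run();

    @SignalMethod
    void orderReady(String orderId);   // delivered asynchronously to the running workflow
}

// Caller side: look up the waiting workflow by ID and send it the signal.
void notifyOrderReady(WorkflowClient workflowClient, String notificationWorkflowId, String orderId) {
    NotificationWorkflow target =
            workflowClient.newWorkflowStub(NotificationWorkflow.class, notificationWorkflowId);
    target.orderReady(orderId);        // translates into a SignalWorkflowExecution call
}
```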

Optimistic concurrency for wallet application development

I am working on a wallet system where multiple users update balances concurrently. The problem is that when one balance update is in progress and another transaction happens at the same time, the balance is not updated properly. If I use optimistic concurrency on the wallet balance column in the database with Entity Framework Core, will the problem be solved or not?
I think this just screams for EF's concurrency token.
Configure the balance column for concurrency checking, either with the [ConcurrencyCheck] data annotation or through the Fluent API.
I assume you are asking whether an optimistic-lock column solves the simultaneous balance update problem. The answer is no, it does not solve it by itself; what it guarantees is that conflicting simultaneous updates won't both be applied.
Optimistic locking only tells you that there were simultaneous attempts to change the balance, and it rejects the update that arrives later (only the first of the simultaneous changes succeeds).
In order for the balance to update, you need to write code that handles the concurrency conflict, as described in the EF documentation on handling concurrency conflicts. The basic idea is to catch the conflict error and decide what to do: either ask the user, or resolve it automatically in code, for example by reloading the row and retrying the update.
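The question is about EF Core, but since the examples elsewhere in this document are Java, here is the same retry idea sketched with JPA's @Version column, the Java analogue of EF's concurrency token; the entity shape and the retry count of 3 are illustrative:

```java
import java.math.BigDecimal;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;
import javax.persistence.Id;
import javax.persistence.OptimisticLockException;
import javax.persistence.RollbackException;
import javax.persistence.Version;

@Entity
public class Wallet {
    @Id Long id;
    BigDecimal balance;
    @Version long version;   // the optimistic-concurrency token, like EF's [ConcurrencyCheck]
}

// Retry loop: reload the row and reapply the change when a conflict is detected.
void credit(EntityManager em, long walletId, BigDecimal amount) {
    for (int attempt = 0; attempt < 3; attempt++) {
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();
            Wallet w = em.find(Wallet.class, walletId);   // fresh read on each attempt
            w.balance = w.balance.add(amount);
            tx.commit();                                  // version check happens here
            return;
        } catch (OptimisticLockException | RollbackException e) {
            if (tx.isActive()) tx.rollback();             // another writer won the race
            em.clear();                                   // evict stale state, then retry
        }
    }
    throw new IllegalStateException("balance update failed after 3 attempts");
}
```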

Transactions across REST microservices?

Let's say we have User and Wallet REST microservices and an API gateway that glues things together. When Bob registers on our website, our API gateway needs to create a user through the User microservice and a wallet through the Wallet microservice.
Now here are a few scenarios where things could go wrong:
User Bob creation fails: that's OK, we just return an error message to Bob. We're using SQL transactions, so no one ever saw Bob in the system. Everything's good :)
User Bob is created but before our Wallet can be created, our API gateway hard crashes. We now have a User with no wallet (inconsistent data).
User Bob is created and as we are creating the Wallet, the HTTP connection drops. The wallet creation might have succeeded or it might have not.
What solutions are available to prevent this kind of data inconsistency from happening? Are there patterns that allow transactions to span multiple REST requests? I've read the Wikipedia page on two-phase commit, which seems to touch on this issue, but I'm not sure how to apply it in practice. The paper "Atomic Distributed Transactions: a RESTful design" also seems interesting, although I haven't read it yet.
Alternatively, I know REST might just not be suited for this use case. Would perhaps the correct way to handle this situation be to drop REST entirely and use a different communication protocol, like a message queue system? Or should I enforce consistency in my application code (for example, by having a background job that detects inconsistencies and fixes them, or by having a "state" attribute on my User model with "creating", "created" values, etc.)?
What doesn't make sense:
Distributed transactions with REST services. REST services are by definition stateless, so they should not be participants in a transactional boundary that spans more than one service. Your user-registration use case makes sense, but the design with separate REST microservices to create the User and Wallet data is not good.
What will give you headaches:
EJBs with distributed transactions. It's one of those things that work in theory but not in practice. Right now I'm trying to make a distributed transaction work for remote EJBs across JBoss EAP 6.3 instances. We've been talking to RedHat support for weeks, and it still doesn't work.
Two-phase commit solutions in general. I think the 2PC protocol is a great algorithm (many years ago I implemented it in C with RPC). It requires comprehensive failure-recovery mechanisms, with retries, a state repository, etc. All the complexity is hidden within the transaction framework (e.g., JBoss Arjuna). However, 2PC is not fail-proof. There are situations in which the transaction simply can't complete. Then you need to identify and fix database inconsistencies manually. It may happen once in a million transactions if you're lucky, but it may happen once in every 100 transactions depending on your platform and scenario.
Sagas (compensating transactions). There's the implementation overhead of creating the compensating operations, and the coordination mechanism to activate compensation at the end. But compensation is not fail-proof either; you may still end up with inconsistencies (= some headache).
What's probably the best alternative:
Eventual consistency. Neither ACID-like distributed transactions nor compensating transactions are fail-proof, and both may lead to inconsistencies. Eventual consistency is often better than "occasional inconsistency". There are different design solutions, such as:
You may create a more robust solution using asynchronous communication. In your scenario, when Bob registers, the API gateway could send a message to a NewUser queue and right away reply to the user: "You'll receive an email to confirm the account creation." A queue-consumer service could then process the message, perform the database changes in a single transaction, and send the email to Bob to notify him of the account creation (see the sketch below).
The User microservice creates the user record and a wallet record in the same database. In this case, the wallet store in the User microservice is a replica of the master wallet store that is only visible to the Wallet microservice. There's a data synchronization mechanism that is trigger-based or kicks in periodically, sending data changes (e.g., new wallets) from the replica to the master, and vice versa.
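A minimal sketch of the first (queue-based) design using plain JMS 2.0; the NewUser queue name comes from the scenario above, while the method shape and payload are illustrative:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

// Gateway side: accept the registration, enqueue it, and reply immediately.
void register(ConnectionFactory cf, String registrationJson) {
    try (JMSContext ctx = cf.createContext()) {
        Queue newUsers = ctx.createQueue("NewUser");
        ctx.createProducer().send(newUsers, registrationJson);
    }
    // ... then reply: "You'll receive an email to confirm the account creation."
}
```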
But what if you need synchronous responses?
Remodel the microservices. If the solution with the queue doesn't work because the service consumer needs a response right away, then I'd rather remodel the User and Wallet functionality to be collocated in the same service (or at least in the same VM to avoid distributed transactions). Yes, it's a step farther from microservices and closer to a monolith, but it will save you from some headache.
This is a classic question I was asked during a recent interview: how do you call multiple web services and still preserve some kind of error handling in the middle of the task? Today, in high-performance computing, we avoid two-phase commits. I read a paper many years ago about what was called the "Starbucks model" for transactions: think about the process of ordering, paying for, preparing, and receiving the coffee you order at Starbucks... I oversimplify, but a two-phase-commit model would suggest that the whole process is a single wrapping transaction for all the steps involved until you receive your coffee. However, with this model, all employees would wait and stop working until you got your coffee. You see the picture?
Instead, the "Starbucks model" is more productive: it follows a "best effort" model and compensates for errors in the process. First, they make sure that you pay! Then, there are message queues with your order attached to the cup. If something goes wrong in the process (you did not get your coffee, it is not what you ordered, etc.), you enter the compensation process: they make sure you get what you want or refund you. This is the most efficient model for increased productivity.
Sometimes Starbucks wastes a coffee, but the overall process is efficient. There are other tricks to consider when you build your web services, like designing them so that they can be called any number of times and still produce the same end result. So, my recommendation is:
Don't make your web services too fine-grained (I am not convinced by the microservice hype happening these days: too many risks of going too far);
Async increases performance, so prefer being async and send notifications by email whenever possible;
Build more intelligent services that are "recallable" any number of times, processing with a uid or taskid that follows the order from bottom to top until the end, validating business rules at each step;
Use message queues (JMS or others) and divert to error-handling processors that "roll back" by applying opposite operations (by the way, working with async ordering requires some sort of queue to validate the current state of the process, so consider that);
As a last resort (since it may not happen often), put the failure in a queue for manual error processing.
Let's go back to the initial problem that was posted: create an account, create a wallet, and make sure everything was done.
Let's say a web service is called to orchestrate the whole operation.
Pseudocode of the web service would look like this:
1. Call the Account creation microservice, passing it some information and some unique task id.
1.1 The Account creation microservice first checks whether that account was already created (a task id is associated with the account's record). It detects that the account does not exist, so it creates it and stores the task id. NOTE: this service can be called 2000 times and it will always produce the same result. The service answers with a "receipt that contains the minimal information needed to perform an undo operation if required".
2. Call Wallet creation, giving it the account ID and the task id. Let's say a condition is not valid and the wallet creation cannot be performed. The call returns with an error, but nothing was created.
3. The orchestrator is informed of the error. It knows it needs to abort the account creation, but it will not do it itself. It asks the Account service to do it by passing the "minimal undo receipt" received at the end of step 1.
4. The Account service reads the undo receipt and knows how to undo the operation; the undo receipt may even include information about another microservice it could itself call to do part of the job. In our situation, to simplify things, the undo receipt contains the account ID and possibly some extra information required to perform the opposite operation; let's say it simply deletes the account using its account id.
5. Now, let's say the web service never received confirmation of the success or failure (in this case) of the account creation's undo. It will simply call the Account's undo service again. And this service should normally never fail, because its goal is for the account to no longer exist: it checks whether the account exists, sees there is nothing left to undo, and returns that the operation is a success.
6. The web service returns to the user that the account could not be created.
This is a synchronous example. We could have managed it differently and put the case into a message queue targeted at the help desk if we don't want the system to completely recover from the error. I've seen this done in a company where not enough hooks could be provided to the back-end system to correct situations: the help desk received messages containing what was performed successfully and had enough information to fix things, just as our undo receipt could be used in a fully automated way.
I did a search, and the Microsoft web site has a pattern description for this approach. It is called the compensating transaction pattern:
Compensating transaction pattern
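To connect the walk-through to code, here is a minimal Java sketch of the orchestrator and the undo receipt; every name in it (accountService, walletService, UndoReceipt, ServiceException, retryUntilSuccess, ...) is hypothetical:

```java
// Orchestrator sketch: create the account, attempt the wallet, compensate on failure.
UndoReceipt receipt = null;
try {
    receipt = accountService.create(taskId, registration);   // idempotent per task id
    walletService.create(taskId, receipt.getAccountId());
} catch (ServiceException e) {
    if (receipt != null) {
        // the undo is idempotent too, so it is safe to retry it until it succeeds
        final UndoReceipt toUndo = receipt;
        retryUntilSuccess(() -> accountService.undo(toUndo));
    }
    throw new RegistrationFailedException("account could not be created", e);
}
```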
All distributed systems have trouble with transactional consistency. The best way to do this is, like you said, to have a two-phase commit: have the wallet and the user created in a pending state, and after they are created, make a separate call to activate the user.
This last call should be safely repeatable (in case your connection drops).
This will necessitate that the last call knows about both tables (so that it can be done in a single JDBC transaction); a sketch follows below.
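A compact JDBC sketch of that single-transaction activation step; the table and column names are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Activation step: flips both pending rows in one local JDBC transaction,
// so it is safe to repeat if the client's connection dropped.
void activateUser(DataSource ds, long userId) throws SQLException {
    try (Connection c = ds.getConnection()) {
        c.setAutoCommit(false);
        try (PreparedStatement u = c.prepareStatement(
                 "UPDATE users SET status = 'ACTIVE' WHERE id = ?");
             PreparedStatement w = c.prepareStatement(
                 "UPDATE wallets SET status = 'ACTIVE' WHERE user_id = ?")) {
            u.setLong(1, userId);
            u.executeUpdate();
            w.setLong(1, userId);
            w.executeUpdate();
            c.commit();                 // both rows activate, or neither does
        } catch (SQLException e) {
            c.rollback();
            throw e;
        }
    }
}
```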
Alternatively, you might want to think about why you are so worried about a user without a wallet. Do you believe this will cause a problem? If so, maybe having those as separate REST calls is a bad idea. If a user shouldn't exist without a wallet, then you should probably add the wallet to the user (in the original POST call to create the user).
IMHO one of the key aspects of a microservices architecture is that the transaction is confined to the individual microservice (single responsibility principle).
In the current example, the User creation would be its own transaction. User creation would push a USER_CREATED event onto an event queue. The Wallet service would subscribe to the USER_CREATED event and create the wallet (see the listener sketch below).
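A small sketch of the subscribing side with a JMS MessageListener; the WalletRepository type and the event payload format are assumptions:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Wallet service side: consumes USER_CREATED events. Wiring the listener to the
// queue/topic is left out as a deployment detail.
public class UserCreatedListener implements MessageListener {
    private final WalletRepository walletRepository;

    public UserCreatedListener(WalletRepository walletRepository) {
        this.walletRepository = walletRepository;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String userId = ((TextMessage) message).getText();
            walletRepository.createWalletFor(userId);  // should be idempotent: redelivery happens
        } catch (JMSException e) {
            throw new RuntimeException(e);             // provokes redelivery / DLQ handling
        }
    }
}
```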
If my wallet were just another bunch of records in the same SQL database as the user, then I would probably place the user and wallet creation code in the same service and handle it using the normal database transaction facilities.
It sounds to me like you are asking about what happens when the wallet creation code requires you to touch another system or systems. I'd say it all depends on how complex and/or risky the creation process is.
If it's just a matter of touching another reliable datastore (say, one that can't participate in your SQL transactions), then depending on the overall system parameters I might be willing to risk the vanishingly small chance that the second write won't happen. I might do nothing but raise an exception, and deal with the inconsistent data via a compensating transaction or even some ad-hoc method. As I always tell my developers: "if this sort of thing is happening in the app, it won't go unnoticed."
As the complexity and risk of wallet creation increase, you must take steps to mitigate the risks involved. Let's say some of the steps require calling multiple partner APIs.
At this point you might introduce a message queue along with the notion of partially constructed users and/or wallets.
A simple and effective strategy for making sure your entities eventually get constructed properly is to have the jobs retry until they succeed, but a lot depends on the use cases for your application.
I would also think long and hard about why I had a failure-prone step in my provisioning process.
One simple solution is to create the user via the User Service and use a messaging bus: the User Service emits its events, and the Wallet Service registers on the messaging bus, listens for the User Created event, and creates a wallet for the user. In the meantime, if the user goes to the wallet UI, check whether the user was just created and show "your wallet creation is in progress, please check back in some time".
What solutions are available to prevent this kind of data inconsistency from happening?
Traditionally, distributed transaction managers are used. A few years ago, in the Java EE world, you might have created these services as EJBs deployed to different nodes, and your API gateway would have made remote calls to those EJBs. The application server (if configured correctly) automatically ensures, using two-phase commit, that the transaction is either committed or rolled back on each node, so that consistency is guaranteed. But that requires all the services to be deployed on the same type of application server (so that they are compatible), and in reality this only ever worked with services deployed by a single company.
Are there patterns that allow transactions to span multiple REST requests?
For SOAP (OK, not REST), there is the WS-AT specification, but no service that I have ever had to integrate supported it. For REST, JBoss has something in the pipeline. Otherwise, the "pattern" is to either find a product you can plug into your architecture or build your own solution (not recommended).
I have published such a product for Java EE: https://github.com/maxant/genericconnector
According to the paper you reference, there is also the Try-Cancel/Confirm pattern and an associated product from Atomikos.
BPEL Engines handle consistency between remotely deployed services using compensation.
Alternatively, I know REST might just not be suited for this use case. Would perhaps the correct way to handle this situation to drop REST entirely and use a different communication protocol like a message queue system?
There are many ways of "binding" non-transactional resources into a transaction:
As you suggest, you could use a transactional message queue, but it will be asynchronous, so if you depend on the response it becomes messy.
You could record the fact that you need to call the back-end services in your database, and then call them from a batch job. Again asynchronous, so it can get messy.
You could use a business process engine as your API gateway to orchestrate the back end microservices.
You could use remote EJB, as mentioned at the start, since that supports distributed transactions out of the box.
Or should I enforce consistency in my application code (for example, by having a background job that detects inconsistencies and fixes them or by having a "state" attribute on my User model with "creating", "created" values, etc.)?
Playing devil's advocate: why build something like that when there are products that do it for you (see above), and probably do it better than you can, because they are tried and tested?
In the microservices world, communication between services should happen either through a REST client or through a message queue. There are two ways to handle transactions across services, depending on how you communicate between them. I personally prefer a message-driven architecture, so that a long transaction is a non-blocking operation for the user.
Let's take your example to explain it:
Create user BOB with event CREATE USER and push the message to a message bus.
The Wallet service, subscribed to this event, can create a wallet corresponding to the user.
The one thing you have to take care of is to select a robust, reliable message backbone that can persist state in case of failure. You can use Kafka or RabbitMQ as the messaging backbone. There will be a delay in execution because of eventual consistency, but the UI can easily be updated through socket notifications: a notification service/task-manager framework can push transaction-state updates through an asynchronous mechanism like sockets and help the UI show the proper progress.
Personally I like the idea of microservices, modules defined by use cases, but as your question mentions, they have adoption problems for classical businesses like banks, insurance, telecom, etc.
Distributed transactions, as many have mentioned, are not a good choice; people are now going more for eventually consistent systems, but I am not sure this will work for banks, insurance, etc.
I wrote a blog post about my proposed solution; maybe it can help you:
https://mehmetsalgar.wordpress.com/2016/11/05/micro-services-fan-out-transaction-problems-and-solutions-with-spring-bootjboss-and-netflix-eureka/
Eventual consistency is the key here.
One of the services is chosen to become the primary handler of the event.
This service handles the original event with a single commit.
The primary handler takes responsibility for asynchronously communicating the secondary effects to other services.
The primary handler does the orchestration of the other service calls.
The commander is in charge of the distributed transaction and takes control. It knows the instructions to be executed and will coordinate executing them. In most scenarios there will be just two instructions, but it can handle multiple instructions.
The commander takes responsibility for guaranteeing the execution of all instructions, and that means retries.
When the commander tries to effect the remote update and doesn't get a response, it has no way of knowing whether the update was applied, so it must retry.
This way the system can be configured to be less prone to failure, and it heals itself.
Since we have retries, we need idempotence.
Idempotence is the property of being able to do something twice in such a way that the end result is the same as if it had been done only once.
We need idempotence at the remote service or data source so that, in the case where it receives the instruction more than once, it only processes it once (see the sketch below).
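A tiny sketch of idempotence at the receiver, assuming each instruction carries a unique ID; the in-memory dedup map is for illustration only (a production version would persist the IDs in the same transaction as the effect itself):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Remembers processed instruction IDs so the commander's retries become no-ops.
public class IdempotentReceiver {
    private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();

    public void apply(String instructionId, Runnable effect) {
        if (processed.putIfAbsent(instructionId, Boolean.TRUE) != null) {
            return;          // already handled: this is a retry
        }
        effect.run();        // first delivery: perform the update
    }
}
```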
Eventual consistency
This solves most distributed transaction challenges; however, we need to consider a couple of points here.
Every failed transaction will be followed by a retry; the number of attempted retries depends on the context.
Consistency is eventual, i.e. the system is out of a consistent state while a retry is pending. For example, suppose a customer has ordered a book and made a payment, and then the stock quantity is updated: if the stock update fails and that was the last copy available, the book will still appear available until the retry of the stock update succeeds. After the retry succeeds, your system will be consistent.
Why not use an API Management (APIM) platform that supports scripting/programming? You would be able to build a composite service in the APIM without disturbing the microservices. I have designed a solution using Apigee for this purpose.

Rally REST API transactions

Is there any way to achieve an atomic transaction using the Rally WSAPI? I know a transaction implies state across consecutive requests, and REST is obviously a stateless protocol, so that might be an issue.
I need to be able to pull a portfolioitem/feature and then immediately write it back if I have the most recent version of it. I have a custom field on portfolioitem/feature that WILL be edited by multiple people simultaneously, and I need to make sure that each update happens in the correct order.
Since I don't have access to Rally's server-side code, I must do all this client side, and I can't figure out how. I will be doing this with the Rally SDK as well.
I don't think the WS API supports atomic transactions. A scenario where updates occur as one atomic transaction so that, for example, if one of the updates fails they are all rolled back, is not supported. In the example you mentioned, each update will be a distinct transaction, and in case of a mid-air collision, when the same artifact is updated by different users, one of the users will receive a concurrency error.
I am in the same boat as the OP, the only difference being that hours may pass between the read and the subsequent write. Interestingly, I only seem to get concurrency errors when I attempt to update a record while another transaction of mine is in flight. I don't see any exception raised when I update a record using a stale version of it, i.e. one that someone else has changed from under me.
I will be attempting to fix this soon, as it's becoming an issue. The chosen approach is to forcibly chain a GET before every POST, and throw an exception if the VersionID of the record I GET doesn't match the one I have stored in memory. In case of a mismatch, the app will refresh the local record (and thus the view) and prompt the user to resubmit their changes (see the sketch below). Yes, this is inconvenient for the user, but in my app most changes are a single click away, so it's reasonable.
I too would like to know if there is a better approach to this problem. One would assume that with every record having a VersionID, it would be handled server side, with proper support from WsapiProxy on the client end. Maybe I'm missing something obvious, like explicitly fetching the VersionID?
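A hedged sketch of that GET-before-POST guard in Java; fetchFeature, updateFeature, refreshView, and the surrounding types are hypothetical wrappers around the WSAPI calls, not Rally SDK API:

```java
void saveFeature(String featureId, Feature local, Changes localChanges) {
    Feature fresh = fetchFeature(featureId);   // GET the latest copy first
    if (!fresh.getVersionId().equals(local.getVersionId())) {
        refreshView(fresh);                    // someone changed it under us
        throw new StaleRecordException("Record changed since it was loaded; please resubmit");
    }
    updateFeature(featureId, localChanges);    // safe to POST the update now
}
```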

SQL Transactions for E-Commerce? (Using .Net 3.5 for app)

I have an application I am working on that will need the use of transactions. I am familiar with how they work, but need some suggestions on the current implementation. Here is the scenario...
A user is working on a project in our system. In order to publish the project, they must first pay some fees. When the user clicks publish, they are taken to a shopping cart with two line items representing the costs associated with publishing. When the payment goes through, I run my stored procedures to update the project and insert the order with its details. How I have it currently is not in a transaction; I have three separate SPs to handle the flow. First, the project data is updated; then an order is inserted; then I use the ID generated by the order insert to insert the order details. I currently loop through the shopping cart and perform each detail insert separately.
I began to create a stored procedure that would execute all the stored procedures, but I got stumped with what to do about the order line items or details.
What is a good solution to bundle all of these tasks into one transaction?
Thanks
Daniel
System.Transactions is the way to go: http://msdn.microsoft.com/en-us/library/ms973865.aspx
You can wrap your operations in a using statement; then, if anything goes wrong, you just don't commit the transaction. Very, very simple.
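Since the rest of this document's examples are Java, here is the same bundle-everything-in-one-transaction shape sketched with JDBC callable statements (in .NET, the TransactionScope/using block from the link above plays this role); the procedure names and parameters are illustrative:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import javax.sql.DataSource;

// Project update, order insert, and all detail inserts commit together or not at all.
void publishProject(DataSource ds, long projectId, Iterable<String> detailSkus) throws SQLException {
    try (Connection c = ds.getConnection()) {
        c.setAutoCommit(false);
        try {
            try (CallableStatement up = c.prepareCall("{call UpdateProject(?)}")) {
                up.setLong(1, projectId);
                up.execute();
            }
            long orderId;
            try (CallableStatement ins = c.prepareCall("{? = call InsertOrder(?)}")) {
                ins.registerOutParameter(1, Types.BIGINT);
                ins.setLong(2, projectId);
                ins.execute();
                orderId = ins.getLong(1);          // ID generated by the order insert
            }
            for (String sku : detailSkus) {        // one insert per cart line, same transaction
                try (CallableStatement det = c.prepareCall("{call InsertOrderDetail(?, ?)}")) {
                    det.setLong(1, orderId);
                    det.setString(2, sku);
                    det.execute();
                }
            }
            c.commit();                            // everything succeeds together...
        } catch (SQLException e) {
            c.rollback();                          // ...or nothing happens at all
            throw e;
        }
    }
}
```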