How can I add a branch to a parallel activity dynamically? - workflow

I have a state machine implemented using WF (Windows Workflow Foundation) which is going to handle an indent flow.
Imagine that we have an indent registered by a user and we want to send enquiries to different vendors.
The number of enquiries varies and depends on the buyer's decision about how many vendors he wants to enquire with. So my state machine comes to a state named Waiting for Enquiry, and I need a fork here with different states inside the fork (I think I have to add a nested state machine). I'm adding a Parallel activity, but the branches of this Parallel activity depend on the selected vendors, as mentioned above. When I try to add branches dynamically, I receive an exception because the workflow does not match the persisted one.
Is there any solution?

Sounds like you want to use a ParallelForEach activity instead of a Parallel with multiple branches: it takes the collection to iterate over (your selected vendors) as data at runtime, so the workflow definition itself stays fixed and keeps matching the persisted instance.

Related

Tracking an in-progress Cadence workflow from the client

Let's say we need to generate an order after the user finalizes his/her cart.
These are our steps to generate the order:
1. generate an order in pending state (order microservice)
2. authorize the user's credit (accounting microservice)
3. set the status of the cart to closed (cart microservice)
4. approve the order (order microservice)
To do this we simply create a Cadence workflow that calls an activity for each step.
Problem 1: how can the client detect that order creation is in progress for that cart if the user opens the cart again or refreshes the page?
(Note: assume our workflow has not been executed by the worker yet.)
My solution for problem 1: create the order and change its status to pending before running the workflow, so on subsequent requests the client knows the order is in pending status. But what happens if order creation succeeds and then starting the workflow fails? It's not transactional (order creation plus workflow start).
If this is your solution too and you accept its risk, please tell me. I'm new to Cadence.
Problem 2: how do we inform the client after the workflow is done and the order is ready?
Any solution or article or help, please?
Problem 1: there are multiple solutions to consider:
1.1 Add a step in the workflow that changes the order to pending state, before calling the order microservice, instead of doing it outside the workflow. This saves you from the consistency issue; you can add retries in the workflow to make sure the state becomes eventually consistent.
1.2 Add a query method to expose the workflow state; the user/UI can make a queryWorkflow call to retrieve it and see the order status.
1.3 Put the state into a SearchAttribute of the workflow; the user/UI can make a describeWorkflow call to get the state. (A sketch of 1.2 and 1.3 follows this list.)
1.4 After https://github.com/uber/cadence/issues/3729 is implemented, you can use a memo instead of a SearchAttribute, as in 1.3.
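For 1.2 and 1.3, here is a minimal sketch with the Cadence Java client (cadence-java-client). The workflow shape and status values are my own invention; @QueryMethod, Workflow.upsertSearchAttributes, and the default CustomStringField search-attribute key are real client features:

```java
import java.util.HashMap;
import java.util.Map;
import com.uber.cadence.workflow.QueryMethod;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

public interface OrderWorkflow {
    @WorkflowMethod
    void processOrder(String cartId);

    @QueryMethod        // 1.2: the UI reads this via a queryWorkflow call
    String getOrderStatus();
}

// Implementation (a separate file in practice).
class OrderWorkflowImpl implements OrderWorkflow {
    private String status = "PENDING";

    @Override
    public void processOrder(String cartId) {
        setStatus("PENDING");
        // ... call the order/accounting/cart activities here,
        // invoking setStatus(...) after each step ...
        setStatus("APPROVED");
    }

    private void setStatus(String newStatus) {
        status = newStatus;
        // 1.3: mirror the state into a search attribute so describeWorkflow
        // (and, with advanced visibility, search/filter/count) can see it
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("CustomStringField", newStatus);
        Workflow.upsertSearchAttributes(attrs);
    }

    @Override
    public String getOrderStatus() {
        return status;
    }
}
```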
Comparison: 1.1 is the choice if you think the order storage is the source of truth for order state; 1.2~1.4 make the workflow the source of truth. It really depends on how you want to design the system architecture.
1.2 may not be a good choice if the user/UI expects low latency, because a query call may take a few seconds to return.
1.3~1.4 will be much more performant: they only require the Cadence server to make a DB call to get the workflow state.
1.3 has another benefit if you have advanced visibility enabled (the ElasticSearch+Kafka setup): you can search/filter/count workflows by order state. The limitation of 1.3 is that you should only put very small data into it, like a string or integer, not a blob of state.
The benefit of 1.4 is that you could put a blob of state.
To prevent the user from finalizing a cart multiple times:
When starting the workflow, use a stable workflowID associated with the cart, so that you can call describeWorkflow before allowing them to finalize/checkout a cart. The workflow is persisted once the StartWorkflow request is accepted.
If there is an active workflow (not failed/completed/timed out), it means the cart is pending. For example, if a cart uses a UUID, you can use that UUID plus a prefix as the workflowID. Cadence guarantees workflowID uniqueness, so there is no race condition when starting two workflows for the same cart. If a checkout failed, the user can submit the checkout workflow again.
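A sketch of that idempotent start with the Cadence Java client, reusing the OrderWorkflow interface from the sketch above; the ID prefix, task list name, and timeout are placeholders, and I believe a duplicate start surfaces as DuplicateWorkflowException in this client:

```java
import java.time.Duration;
import com.uber.cadence.client.DuplicateWorkflowException;
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;

void startCheckout(WorkflowClient client, String cartId) {
    WorkflowOptions options = new WorkflowOptions.Builder()
            .setWorkflowId("checkout-" + cartId)            // stable per-cart workflowID
            .setTaskList("order-task-list")
            .setExecutionStartToCloseTimeout(Duration.ofHours(1))
            .build();
    OrderWorkflow wf = client.newWorkflowStub(OrderWorkflow.class, options);
    try {
        WorkflowClient.start(wf::processOrder, cartId);     // asynchronous start
    } catch (DuplicateWorkflowException e) {
        // an active checkout workflow already exists for this cart:
        // treat the cart as pending instead of starting a second one
    }
}
```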
Problem 2: it depends on what you mean by "inform". The term sounds like an asynchronous notification. If that's the case, you can have another activity send the notification to another microservice, or send a signal to another workflow that needs the notification.
If you mean something synchronous, like showing the result in a web UI, then it can be solved the same way as the solutions I mentioned for problem 1.

How to pass an argument from one re-occurring workflow to another

I'm using a third-party solution that effectively offers a limited MWF interface when it comes to workflow development.
The challenge I have is that I am trying to utilise the workflow scheduling interface, which allows me to have workflows recur at prescribed times/dates, etc. No problem here. However, due to the underlying data, I need to store a key to this data in the initiating workflow, which should then be accessible to all subsequent "child" workflows. I've tried setting an Argument field (In/Out and In) in the workflow, but it is not preserved in subsequent iterations. Any solutions? Thanks

Transactions across REST microservices?

Let's say we have User and Wallet REST microservices, and an API gateway that glues things together. When Bob registers on our website, our API gateway needs to create a user through the User microservice and a wallet through the Wallet microservice.
Now here are a few scenarios where things could go wrong:
User Bob creation fails: that's OK, we just return an error message to Bob. We're using SQL transactions so no one ever saw Bob in the system. Everything's good :)
User Bob is created but before our Wallet can be created, our API gateway hard crashes. We now have a User with no wallet (inconsistent data).
User Bob is created and as we are creating the Wallet, the HTTP connection drops. The wallet creation might have succeeded or it might have not.
What solutions are available to prevent this kind of data inconsistency from happening? Are there patterns that allow transactions to span multiple REST requests? I've read the Wikipedia page on Two-phase commit which seems to touch on this issue but I'm not sure how to apply it in practice. This Atomic Distributed Transactions: a RESTful design paper also seems interesting although I haven't read it yet.
Alternatively, I know REST might just not be suited for this use case. Would perhaps the correct way to handle this situation be to drop REST entirely and use a different communication protocol like a message queue system? Or should I enforce consistency in my application code (for example, by having a background job that detects inconsistencies and fixes them, or by having a "state" attribute on my User model with "creating", "created" values, etc.)?
What doesn't make sense:
distributed transactions with REST services. REST services by definition are stateless, so they should not be participants in a transactional boundary that spans more than one service. Your user registration use case scenario makes sense, but the design with REST microservices to create User and Wallet data is not good.
What will give you headaches:
EJBs with distributed transactions. It's one of those things that work in theory but not in practice. Right now I'm trying to make a distributed transaction work for remote EJBs across JBoss EAP 6.3 instances. We've been talking to RedHat support for weeks, and it didn't work yet.
Two-phase commit solutions in general. I think the 2PC protocol is a great algorithm (many years ago I implemented it in C with RPC). It requires comprehensive failure-recovery mechanisms, with retries, a state repository, etc. All the complexity is hidden within the transaction framework (e.g., JBoss Arjuna). However, 2PC is not fail-proof. There are situations where the transaction simply can't complete. Then you need to identify and fix database inconsistencies manually. It may happen once in a million transactions if you're lucky, but it may happen once in every 100 transactions depending on your platform and scenario.
Sagas (Compensating transactions). There's the implementation overhead of creating the compensating operations, and the coordination mechanism to activate compensation at the end. But compensation is not fail proof either. You may still end up with inconsistencies (= some headache).
What's probably the best alternative:
Eventual consistency. Neither ACID-like distributed transactions nor compensating transactions are fail proof, and both may lead to inconsistencies. Eventual consistency is often better than "occasional inconsistency". There are different design solutions, such as:
You may create a more robust solution using asynchronous communication. In your scenario, when Bob registers, the API gateway could send a message to a NewUser queue and reply right away to the user saying "You'll receive an email to confirm the account creation." A queue consumer service could process the message, perform the database changes in a single transaction, and send the email to Bob to notify him of the account creation. (A sketch of such a consumer follows this list.)
The User microservice creates the user record and a wallet record in the same database. In this case, the wallet store in the User microservice is a replica of the master wallet store (which is only visible to the Wallet microservice). There's a data synchronization mechanism, either trigger-based or periodic, that sends data changes (e.g., new wallets) from the replica to the master, and vice versa.
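As a concrete shape for the first (queue-based) option, a hedged sketch using Spring JMS; the queue name, message type, repositories, and mailer are all assumptions, not an existing API:

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class NewUserConsumer {

    private final UserRepository users;      // hypothetical repositories
    private final WalletRepository wallets;
    private final Mailer mailer;             // hypothetical mail sender

    public NewUserConsumer(UserRepository users, WalletRepository wallets, Mailer mailer) {
        this.users = users;
        this.wallets = wallets;
        this.mailer = mailer;
    }

    // The user and wallet rows are written in one local transaction; a failure
    // rolls back both, and the broker redelivers the message for a retry.
    @JmsListener(destination = "NewUser")
    @Transactional
    public void onNewUser(NewUserMessage msg) {
        User user = users.save(new User(msg.getName(), msg.getEmail()));
        wallets.save(new Wallet(user.getId()));
        // The email is a side effect, not transactional; in a real system you
        // would send it after the commit (e.g., via a transaction synchronization).
        mailer.sendAccountCreated(user.getEmail());
    }
}
```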
But what if you need synchronous responses?
Remodel the microservices. If the solution with the queue doesn't work because the service consumer needs a response right away, then I'd rather remodel the User and Wallet functionality to be collocated in the same service (or at least in the same VM to avoid distributed transactions). Yes, it's a step away from microservices and closer to a monolith, but it will save you from some headache.
This is a classic question; I was asked it during an interview recently: how do you call multiple web services and still preserve some kind of error handling in the middle of the task? Today, in high-performance computing, we avoid two-phase commits. I read a paper many years ago about what was called the "Starbucks model" for transactions: think about the process of ordering, paying for, preparing and receiving the coffee you order at Starbucks... I am oversimplifying, but a two-phase-commit model would suggest that the whole process be a single wrapping transaction for all the steps involved until you receive your coffee. However, with this model, all employees would wait and stop working until you got your coffee. You see the picture?
Instead, the "Starbucks model" is more productive by following a "best effort" model and compensating for errors in the process. First, they make sure that you pay! Then, there are message queues, with your order attached to the cup. If something goes wrong in the process - you did not get your coffee, it is not what you ordered, etc. - we enter the compensation process and make sure you get what you want or refund you. This is the most efficient model for increased productivity.
Sometimes Starbucks wastes a coffee, but the overall process is efficient. There are other tricks to consider when you build your web services, like designing them so that they can be called any number of times and still provide the same end result. So, my recommendations are:
Don't be too fine-grained when defining your web services (I am not convinced about the microservice hype happening these days: too many risks of going too far);
Async increases performance, so prefer being async; send notifications by email whenever possible.
Build more intelligent services to make them "recallable" any number of times, processing with a UID or task ID that follows the order from bottom to top until the end, validating business rules at each step;
Use message queues (JMS or others) and divert to error-handling processors that apply "rollback" by applying the opposite operations; by the way, working with async ordering will require some sort of queue to validate the current state of the process, so consider that;
As a last resort (since it may not happen often), put it in a queue for manual processing of errors.
Let's go back to the initial problem that was posted: create an account, create a wallet, and make sure everything was done.
Let's say a web service is called to orchestrate the whole operation.
Pseudo-code of the web service would look like this:
1. Call the account creation microservice, passing it some information and a unique task ID.
1.1 The account creation microservice first checks whether that account was already created (a task ID is associated with the account's record). The microservice detects that the account does not exist, so it creates it and stores the task ID. NOTE: this service can be called 2000 times and will always produce the same result. The service answers with a "receipt" that contains the minimal information needed to perform an undo operation if required.
2. Call wallet creation, giving it the account ID and the task ID. Let's say a condition is not valid and the wallet creation cannot be performed. The call returns with an error, but nothing was created.
3. The orchestrator is informed of the error. It knows it needs to abort the account creation, but it will not do so itself. It asks the account service to do it, passing the "minimal undo receipt" received at the end of step 1.
4. The account service reads the undo receipt and knows how to undo the operation; the undo receipt may even include information about another microservice it could call itself to do part of the job. In our case, to simplify things, let's say it simply deletes the account using its account ID.
5. Now, let's say the orchestrating web service never receives confirmation of the success or failure (in this case) of the account creation's undo. It simply calls the account's undo service again, and this service should normally never fail, because its goal is for the account to no longer exist. So it checks whether the account exists, sees nothing needs to be done, and returns that the operation is a success.
6. The web service returns to the user that the account could not be created.
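Put together as code, the orchestration above might look like the sketch below. Every type here (AccountClient, WalletClient, UndoReceipt, ...) is a hypothetical stand-in for the microservice calls, not a real API:

```java
import java.util.UUID;

public class RegistrationOrchestrator {

    interface AccountClient {
        UndoReceipt create(String userInfo, String taskId); // idempotent (step 1)
        void undo(UndoReceipt receipt);                     // idempotent (step 4)
    }

    interface WalletClient {
        void create(String accountId, String taskId);       // may fail (step 2)
    }

    record UndoReceipt(String accountId) {}

    private final AccountClient accounts;
    private final WalletClient wallets;

    public RegistrationOrchestrator(AccountClient accounts, WalletClient wallets) {
        this.accounts = accounts;
        this.wallets = wallets;
    }

    public String register(String userInfo) {
        String taskId = UUID.randomUUID().toString();
        UndoReceipt receipt = accounts.create(userInfo, taskId);   // step 1
        try {
            wallets.create(receipt.accountId(), taskId);           // step 2
            return "account created";
        } catch (RuntimeException walletFailure) {                 // step 3
            for (int attempt = 1; ; attempt++) {
                try {
                    accounts.undo(receipt);                        // steps 4-5: safe to repeat
                    break;
                } catch (RuntimeException undoFailure) {
                    if (attempt == 10) throw undoFailure;          // last resort: manual queue
                }
            }
            return "account could not be created";                 // step 6
        }
    }
}
```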
This is a synchronous example. We could have managed it in a different way and put the case into a message queue targeted at the help desk, if we don't want the system to recover the error completely automatically. I've seen this done in a company where not enough hooks could be provided to the back-end system to correct situations: the help desk received messages containing what had been performed successfully, plus enough information to fix things, just as our undo receipt could be used in a fully automated way.
I performed a search, and the Microsoft web site has a pattern description for this approach. It is called the compensating transaction pattern:
Compensating transaction pattern
All distributed systems have trouble with transactional consistency. The best way to do this is, like you said, to have a two-phase commit: have the wallet and the user created in a pending state, and after both are created, make a separate call to activate the user.
This last call should be safely repeatable (in case your connection drops).
This necessitates that the last call knows about both tables (so that it can be done in a single JDBC transaction).
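A sketch of that repeatable activation call in plain JDBC; the table and column names are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Flips both pending rows in one transaction; running it again is a no-op
// because the WHERE clauses only match rows still in PENDING state.
void activateUser(DataSource dataSource, long userId) throws SQLException {
    try (Connection conn = dataSource.getConnection()) {
        conn.setAutoCommit(false);
        try (PreparedStatement u = conn.prepareStatement(
                 "UPDATE users SET status = 'ACTIVE' WHERE id = ? AND status = 'PENDING'");
             PreparedStatement w = conn.prepareStatement(
                 "UPDATE wallets SET status = 'ACTIVE' WHERE user_id = ? AND status = 'PENDING'")) {
            u.setLong(1, userId);
            u.executeUpdate();
            w.setLong(1, userId);
            w.executeUpdate();
            conn.commit();   // user and wallet become active together
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```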
Alternatively, you might want to think about why you are so worried about a user without a wallet. Do you believe this will cause a problem? If so, maybe having those as separate REST calls is a bad idea. If a user shouldn't exist without a wallet, then you should probably add the wallet to the user (in the original POST call to create the user).
IMHO, one of the key aspects of microservice architecture is that the transaction is confined to the individual microservice (single responsibility principle).
In the current example, the user creation would be its own transaction. User creation would push a USER_CREATED event onto an event queue. The wallet service would subscribe to the USER_CREATED event and do the wallet creation.
If my wallet were just another bunch of records in the same SQL database as the user, then I would probably place the user and wallet creation code in the same service and handle it using the normal database transaction facilities.
It sounds to me like you are asking what happens when the wallet creation code requires you to touch another system or systems. I'd say it all depends on how complex and/or risky the creation process is.
If it's just a matter of touching another reliable datastore (say, one that can't participate in your SQL transactions), then depending on the overall system parameters, I might be willing to risk the vanishingly small chance that the second write won't happen. I might do nothing but raise an exception and deal with the inconsistent data via a compensating transaction, or even some ad-hoc method. As I always tell my developers: "if this sort of thing is happening in the app, it won't go unnoticed".
As the complexity and risk of wallet creation increase, you must take steps to mitigate the risks involved. Let's say some of the steps require calling multiple partner APIs.
At this point you might introduce a message queue, along with the notion of partially constructed users and/or wallets.
A simple and effective strategy for making sure your entities eventually get constructed properly is to have the jobs retry until they succeed, but a lot depends on the use cases for your application.
I would also think long and hard about why I had a failure-prone step in my provisioning process.
One simple solution: create the user with the user service and use a messaging bus on which the user service emits its events. The wallet service registers on the messaging bus, listens for the User Created event, and creates a wallet for the user. In the meantime, if the user goes to the wallet UI to see his wallet, check whether the user was just created and show "your wallet creation is in progress, please check back in some time".
What solutions are available to prevent this kind of data inconsistency from happening?
Traditionally, distributed transaction managers are used. A few years ago in the Java EE world you might have created these services as EJBs which were deployed to different nodes and your API gateway would have made remote calls to those EJBs. The application server (if configured correctly) automatically ensures, using two phase commit, that the transaction is either committed or rolled back on each node, so that consistency is guaranteed. But that requires that all the services be deployed on the same type of application server (so that they are compatible) and in reality only ever worked with services deployed by a single company.
Are there patterns that allow transactions to span multiple REST requests?
For SOAP (OK, not REST), there is the WS-AT specification, but no service that I have ever had to integrate has supported it. For REST, JBoss has something in the pipeline. Otherwise, the "pattern" is to either find a product that you can plug into your architecture, or build your own solution (not recommended).
I have published such a product for Java EE: https://github.com/maxant/genericconnector
According to the paper you reference, there is also the Try-Cancel/Confirm pattern and an associated product from Atomikos.
BPEL Engines handle consistency between remotely deployed services using compensation.
Alternatively, I know REST might just not be suited for this use case. Would perhaps the correct way to handle this situation to drop REST entirely and use a different communication protocol like a message queue system?
There are many ways of "binding" non-transactional resources into a transaction:
As you suggest, you could use a transactional message queue, but it will be asynchronous, so if you depend on the response, it becomes messy.
You could write the fact that you need to call the back-end services into your database, and then call the back-end services using a batch job. Again asynchronous, so it can get messy.
You could use a business process engine as your API gateway to orchestrate the back end microservices.
You could use remote EJB, as mentioned at the start, since that supports distributed transactions out of the box.
Or should I enforce consistency in my application code (for example, by having a background job that detects inconsistencies and fixes them or by having a "state" attribute on my User model with "creating", "created" values, etc.)?
Playing devil's advocate: why build something like that when there are products that do it for you (see above), and probably do it better than you can, because they are tried and tested?
In the microservices world, the communication between services should be either through a REST client or a messaging queue. There are two ways to handle transactions across services, depending on how you communicate between them. I personally prefer a message-driven architecture, so that a long transaction is a non-blocking operation for the user.
Let's take your example to explain it:
1. Create user BOB with the event CREATE USER and push the message to a message bus.
2. The wallet service, subscribed to this event, creates a wallet corresponding to the user.
The one thing you have to take care of is to select a robust, reliable message backbone that can persist state in case of failure. You can use Kafka or RabbitMQ as the messaging backbone; a sketch with Kafka follows. There will be a delay in execution because of eventual consistency, but the client can easily be updated through a socket notification: a notification service/task manager can push the state of the transaction through an asynchronous mechanism like sockets and help the UI show proper progress.
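A hedged sketch of the two steps with the Apache Kafka Java client; the topic name, event payload, and createWalletFor are illustrative:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class UserEvents {

    // User service: publish USER_CREATED after its local transaction commits.
    static void publishUserCreated(String userId) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("user-events", userId, "USER_CREATED"));
        }
    }

    // Wallet service: consume the event and create the wallet. Delivery is
    // at-least-once, so wallet creation must be idempotent.
    static void consumeUserCreated() {
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "wallet-service");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    if ("USER_CREATED".equals(rec.value())) {
                        createWalletFor(rec.key());      // hypothetical, idempotent
                    }
                }
            }
        }
    }

    static void createWalletFor(String userId) { /* wallet service logic */ }
}
```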
Personally, I like the idea of microservices, modules defined by use cases, but as your question mentions, they are hard to adapt to classical businesses like banks, insurance, telecom, etc.
Distributed transactions, as many have mentioned, are not a good choice. People are now going more for eventually consistent systems, but I am not sure this will work for banks, insurance, etc.
I wrote a blog post about my proposed solution; maybe this can help you:
https://mehmetsalgar.wordpress.com/2016/11/05/micro-services-fan-out-transaction-problems-and-solutions-with-spring-bootjboss-and-netflix-eureka/
Eventual consistency is the key here.
One of the services is chosen to become the primary handler of the event.
This service will handle the original event with a single commit.
The primary handler takes responsibility for asynchronously communicating the secondary effects to other services.
The primary handler does the orchestration of the calls to the other services.
The commander is in charge of the distributed transaction and takes control. It knows the instructions to be executed and will coordinate executing them. In most scenarios there will be just two instructions, but it can handle multiple instructions.
The commander takes responsibility for guaranteeing the execution of all instructions, and that means retries.
When the commander tries to effect the remote update and doesn't get a response, it has no way of knowing whether the update happened, so it retries.
This way the system can be configured to be less prone to failure, and it heals itself.
Since we have retries, we have idempotence.
Idempotence is the property of being able to do something twice in such a way that the end result is the same as if it had been done only once.
We need idempotence at the remote service or data source so that, in the case where it receives the instruction more than once, it only processes it once.
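One common way to get that idempotence is a processed-instructions table whose unique key makes a duplicate delivery a no-op. A JDBC sketch with illustrative table names follows; note that some drivers report a unique-key violation as a plain SQLException with SQLState 23505 rather than the subclass caught here:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

// processed_instructions(id) has a unique/primary key on id.
void handleInstruction(Connection conn, String instructionId, long accountId)
        throws SQLException {
    conn.setAutoCommit(false);
    try {
        // A second delivery of the same instruction fails right here...
        try (PreparedStatement mark = conn.prepareStatement(
                "INSERT INTO processed_instructions (id) VALUES (?)")) {
            mark.setString(1, instructionId);
            mark.executeUpdate();
        }
        // ...so the side effect below runs at most once per instruction.
        try (PreparedStatement effect = conn.prepareStatement(
                "UPDATE accounts SET balance = balance - 10 WHERE id = ?")) {
            effect.setLong(1, accountId);
            effect.executeUpdate();
        }
        conn.commit();
    } catch (SQLIntegrityConstraintViolationException duplicate) {
        conn.rollback();   // already processed: swallow the retry
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}
```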
Eventual consistency
This solves most of the distributed transaction challenges; however, we need to consider a couple of points here.
Every failed transaction will be followed by a retry; the number of attempted retries depends on the context.
Consistency is eventual, i.e., the system is out of a consistent state while a retry is pending. For example, suppose a customer orders a book, makes the payment, and then the stock quantity is updated. If the stock update operation fails, and assuming that was the last copy available, the book will still show as available until the retry of the stock update succeeds. After the retry succeeds, your system is consistent.
Why not use an API Management (APIM) platform that supports scripting/programming? You would then be able to build a composite service in the APIM without disturbing the microservices. I have designed solutions using Apigee for this purpose.

Occasionally connected CQRS systems - Client and Server Commands - Task based screens

Premise:
It is recommended that CQRS+DDD+ES style applications use task-based screens; these screens guide the user and capture intent.
These task screens can also be referred to as an Inductive User Interface. Some examples of UI design guidelines that can help you create modern, user-friendly apps:
Microsoft Inductive User Interface Guidelines and,
Index of UX guidelines
The way I understand it, the tasks, generally speaking, should line up with Commands or Functions waiting on the Server.
For example, if the User makes a change to the Customer's [first name], generally speaking this should be an isolated task where a pop-up window or the like provides a mechanism for this event, and this event only.
Questions:
Part-1:
Consider the situation where the User is not just making a change to a Customer's [first name] but actually creating a new Customer. Surely the User will not go from [first name] => [last name] => [address] => [email], etc., in a wizard-like style where each wizard screen maps to a Command.
a) How are the screens laid out when it's just not practical to isolate a single task? For example when creating a new Customer or Inventory Item.
b) What does the code and/or logic flow related to the Commands look like on the Client and Server in this situation, keeping in mind the obvious pull to stay consistent with the "normal" task based flow of the rest of the system? After all, these all just translate to Activities or Events in the Event Source.
Part-2:
What if the User is not just making a change to a Customer's [first name], but to their [last name], [address], and [phone number] -- all the while the User is off-line?
I think, ultimately, the User should still be able to do real work on multiple tasks in different areas of the application while off-line, and perform robust conflict resolution when coming back online.
a) What are the code and/or logic flow and/or artifacts related to the Commands on the Client side while the User is off-line, handling these events locally (IndexedDB, queues, etc.)? and
b) What does the connection look like and how does it act when off-line (retries)?
c) What is the code and/or logic flow and/or artifacts related to the Commands on the Client and Server side, when the User comes back on-line?
d) What does the connection look like and how does it act when coming back on-line (reestablish of connection, if it is determined that the Client side ViewModel is stale, WebSockets, etc.)?
(Reference diagram omitted.)
The way I understand it, the tasks, generally speaking, should line up with Commands or Functions waiting on the Server.
Or sometimes events, but the basic idea is right.
Surely the User will not go from [first name] => to [last name] => to [address] => to [email], etc. -- in a wizard like style, where each wizard screen maps to a Command.
No, we usually want a coarser grain than that. Some tasks do require only a single property, but several properties is the common case.
How are the screens laid out when it's just not practical to isolate a single task?
By grouping together cohesive units; consider the Amazon order workflow -- there are actually several different sets of data collected (the order itself, the selection of payment, specifying new methods of payment, specifying the delivery address, specifying the shipping priority....).
all the while the User is off-line.
See "CQRS, not just for server systems"; but in broad strokes: treat the data collected from the user as events (FormSubmitted) rather than commands. The offline device is the authority for tracking what the user did while off-line, but the (temporarily unreachable) server is still the authority for the consequences of those events. So the server is responsible for the merge when the client reconnects.
The precise details might vary from one domain to another -- for instance, in a warehousing system, where the offline device has been collecting information about inventory, you might handle the inconsistencies that the server observes during the merge by raising exception reports (the device registered this package leaving the warehouse, but we have no record of it entering the warehouse).
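In code, the merge might have roughly this shape; every type here is hypothetical, and the point is only the division of authority (the device's events are facts, the server decides the consequences and raises exception reports):

```java
import java.util.List;

class OfflineMerge {
    record FormSubmitted(String deviceId, String payload) {}
    static class InconsistentState extends Exception {}

    interface Inventory { void apply(FormSubmitted e) throws InconsistentState; }
    interface Reports   { void raise(FormSubmitted e, InconsistentState cause); }

    // Replay the device's offline events; inconsistencies become reports,
    // not rejections of the user's recorded work.
    static void merge(List<FormSubmitted> eventsFromDevice,
                      Inventory inventory, Reports reports) {
        for (FormSubmitted event : eventsFromDevice) {
            try {
                inventory.apply(event);           // server decides the consequences
            } catch (InconsistentState cause) {
                reports.raise(event, cause);      // e.g., package left, never entered
            }
        }
    }
}
```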

Hosting a Workflow in an MVVM application

I'm designing an MVVM application that does not use WPF or Silverlight. It will simply present web pages in HTML5, styled with CSS3.
The domain is a perfect case for using WF because it involves a number of activities in a long-running process. Specifically, I am tracking the progress of interactions with a customer over a 30 day period and that involves filling out various forms at points along the way, getting approvals from a supervisor at certain times, and making certain that the designated order of activities is followed and is executed correctly.
Each activity will normally be represented by a form on a view designed to capture the desired information at that step. Stated differently, the view that a user sees will be determined by where she is in the workflow at that moment.
My research so far has turned up examples where the workflow is used to execute business logic in accordance with the flowchart that defines it.
In my situation, I need a user to log in and then pick up where she left off in the workflow (for example, some new external event has occurred and she needs to fill out the form for that, or move forward in the workflow to that step).
And I need to support the case where the supervisor logs in and can basically be presented with activities that need approval at that time.
So... it seems to me that a WF solution might be appropriate, but maybe the way I want to use it is inverted - like the cart pulling the horse so to speak.
I'd appreciate any insight that anyone here can offer.
Thanks - Steve
I have designed an app similar to yours, actually based on WPF, where the screens shown by the application are driven by workflows.
I use a task-based approach. I have some custom activities that create user tasks in a DB. There is a different task type for every form type that the application supports. When the workflow reaches one of these special activities, the task is saved to the DB and the WF goes idle (a bookmark).
Once the user submits the form, the WF is resumed up to the point where another user task is reached, and so on.
Tasks can be assigned to different users along the way (end user, supervisor, ...), and each user has a pending-task list from which they can resume previous WF instances, etc. The sketch below shows the general shape of this pattern.
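A framework-agnostic sketch of that task/bookmark pattern (the original uses WF's bookmarks; every type below is a stand-in, not WF API):

```java
import java.util.Map;
import java.util.UUID;

class TaskDrivenWorkflows {
    record UserTask(String id, String workflowId, String formType, String assignee) {}

    interface TaskStore {
        void save(UserTask task);
        UserTask load(String taskId);
    }

    interface WorkflowRuntime {
        void suspend(String workflowId);                      // the "bookmark": go idle
        void resume(String workflowId, Map<String, String> formData);
    }

    private final TaskStore tasks;
    private final WorkflowRuntime runtime;

    TaskDrivenWorkflows(TaskStore tasks, WorkflowRuntime runtime) {
        this.tasks = tasks;
        this.runtime = runtime;
    }

    // The workflow reaches a user-task activity: persist the task, then idle.
    void onUserTaskReached(String workflowId, String formType, String assignee) {
        tasks.save(new UserTask(UUID.randomUUID().toString(), workflowId, formType, assignee));
        runtime.suspend(workflowId);
    }

    // The user submits the form: resume the instance up to the next user task.
    void onFormSubmitted(String taskId, Map<String, String> formData) {
        UserTask task = tasks.load(taskId);
        runtime.resume(task.workflowId(), formData);
    }
}
```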
Then, to generate the user views (HTML5 forms in your case), you read the pending task and translate it into the corresponding form.
Hope you find it useful