What should I consider when using Cadence/Temporal to design a new project? - cadence-workflow

I am new to Cadence/Temporal and was wondering what the design review process is like. My team is ready to send out a formal design review, but I was wondering if there is a template available to capture Cadence/Temporal-specific information?

This is something I like to call "workflow-oriented architecture". I would suggest thinking through the following aspects:
Which parts of the process in the design can be modeled as a workflow, and what the alternative options are. Based on that:
What will the workflowID be, and with which IDReusePolicy? It's usually recommended to use a business ID to guarantee uniqueness, so that only one workflow executes per business entity (see the Go sketch after this list).
How is the workflow started, and with what information as input parameters?
Which Cadence/Temporal concepts are you planning to use, and how does the workflow interact with other systems?
A regular/local/long-running activity performs an action against an external system.
A durable timer (workflow.Sleep or Workflow.Await) waits for a certain amount of time and then wakes up. Unlike a sleep in a native language, a durable timer is reliable: host restarts won't affect its firing.
A signal receives an event from an external system.
A query lets an external system read some workflow state.
Search attributes do two things: a) let the application search for workflows matching certain conditions using the ListWorkflowExecutions API, and b) let the application get basic status via the DescribeWorkflowExecution API.
How do you handle failure, especially using Cadence/Temporal concepts: activityRetry, workflowRetry, and reset?
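To make these concepts concrete, here is a minimal Go sketch using the Temporal Go SDK (the Cadence client is analogous); the workflow, activity, signal, and task-queue names are made up for illustration:

    package main

    import (
        "context"
        "time"

        enumspb "go.temporal.io/api/enums/v1"
        "go.temporal.io/sdk/client"
        "go.temporal.io/sdk/temporal"
        "go.temporal.io/sdk/workflow"
    )

    // Hypothetical workflow modeling the lifecycle of one business entity.
    func OrderWorkflow(ctx workflow.Context, orderID string) error {
        // Failure handling: retry the activity automatically.
        ao := workflow.ActivityOptions{
            StartToCloseTimeout: time.Minute,
            RetryPolicy: &temporal.RetryPolicy{
                InitialInterval: time.Second,
                MaximumAttempts: 5,
            },
        }
        ctx = workflow.WithActivityOptions(ctx, ao)

        // Regular activity: an action against an external system.
        if err := workflow.ExecuteActivity(ctx, ChargeCustomer, orderID).Get(ctx, nil); err != nil {
            return err
        }

        // Durable timer: survives worker/host restarts.
        if err := workflow.Sleep(ctx, 24*time.Hour); err != nil {
            return err
        }

        // Signal: wait for an event from an external system.
        var approved bool
        workflow.GetSignalChannel(ctx, "approval").Receive(ctx, &approved)
        return nil
    }

    func ChargeCustomer(ctx context.Context, orderID string) error {
        return nil // call the external payment system here
    }

    // Start the workflow with a business ID and an ID-reuse policy, so at
    // most one execution runs per business entity.
    func start(c client.Client, orderID string) error {
        _, err := c.ExecuteWorkflow(context.Background(), client.StartWorkflowOptions{
            ID:                    "order-" + orderID,
            TaskQueue:             "orders",
            WorkflowIDReusePolicy: enumspb.WORKFLOW_ID_REUSE_POLICY_REJECT_DUPLICATE,
        }, OrderWorkflow, orderID)
        return err
    }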

Related

What is the BPMN User Task equivalent in temporal.io and how to implement it?

I'm evaluating temporal.io as a modern workflow-as-code alternative to BPMN-based solutions such as Camunda.
In my scenario the workflow orchestrates activity workers, which call external microservices for business transactions. Business transactions may encounter business exceptions or require human action for the flow to proceed, which raises the required user tasks. The workflow should block at certain points until there are no blocking tasks for that specific activity.
Should the blocking-task logic reside inside activities and services, keeping the workflow definition more abstract and deterministic? I presume an activity should simply throw a runtime exception when there is a blocking task, is that right? Then, how do I continue the workflow when the task is completed?
Or should I use workflow signals to mimic BPMN user tasks, and if so, how do I send a signal from an external service to a specific workflow instance?
Probably the easiest way is to have an activity that notifies your external system responsible for interacting with human actors, and then use signals to notify the workflow of the completion of the human decision.
With Temporal you can write workflow code that waits for multiple signals in case there are multiple actors/decisions involved.
Other options could be to store a list of tasks in an external system and notify your workflow from that system directly (again via signals); or to have a workflow per "approver" that holds a list of assigned tasks whose state you could query; or to have those workflows send notifications when all tasks have been performed.
how do I send a signal from an external service to a specific workflow instance?
You would use the Temporal SDK client API to send a signal to a workflow execution that's uniquely identified by its namespace and workflow ID (plus, optionally, a run ID). Not sure which programming language you are using, but maybe this Go sample can help.
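For illustration, here is a minimal Go sketch of the sending side (the signal name, namespace, and workflow ID here are hypothetical):

    package main

    import (
        "context"

        "go.temporal.io/sdk/client"
    )

    // Called by the external task system when a human completes a task.
    func notifyTaskCompleted(workflowID, decision string) error {
        c, err := client.Dial(client.Options{Namespace: "default"})
        if err != nil {
            return err
        }
        defer c.Close()
        // An empty run ID targets the current execution with this workflow ID.
        return c.SignalWorkflow(context.Background(), workflowID, "", "task-completed", decision)
    }

On the workflow side, workflow.GetSignalChannel(ctx, "task-completed") would then receive the decision.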

What is the exact use-case for ContinueAsNew

Team,
What is the exact use case to use continueAsNew?
Since we have CronSchedule support for periodic activities, I don't see the scenario where this is needed.
Is this kept for backward compatibility?
There are many scenarios besides cron that require always-running workflows. For example, a workflow that listens for external events and keeps some aggregated state. Such a workflow will eventually hit the history size limit. To support processing an unlimited number of events, it has to call continue-as-new periodically.
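A common shape for this in the Go SDK looks roughly like the sketch below (the event and state types are made up):

    package app

    import "go.temporal.io/sdk/workflow"

    type Event struct{ Kind string }
    type State struct{ Counts map[string]int }

    // A long-lived workflow that aggregates events and calls
    // continue-as-new before its history grows too large.
    func EventAggregatorWorkflow(ctx workflow.Context, state State) error {
        if state.Counts == nil {
            state.Counts = map[string]int{}
        }
        ch := workflow.GetSignalChannel(ctx, "event")
        for i := 0; i < 1000; i++ { // bound the history per run
            var ev Event
            ch.Receive(ctx, &ev)
            state.Counts[ev.Kind]++
        }
        // Start a fresh run under the same workflow ID, carrying the
        // aggregated state forward as the new run's input.
        return workflow.NewContinueAsNewError(ctx, EventAggregatorWorkflow, state)
    }

In practice you would also drain any signals still buffered in the channel before returning the continue-as-new error, so no events are lost across runs.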

Using Commands, Events or Services

When designing an application's back-end you will often need to abstract the systems that request that things be done from the systems that actually do them.
There are elements of this in the CQRS and PubSub design patterns.
By way of example:
A new user submits a registration form
Your application receives that data and pushes out a message saying “hey, I have some new user data, please do something with this”
A listener / handler / service grabs the data and processes it
(please let me know if that makes no sense)
In my applications I would usually:
Fire a new Event that a Listener is set up to process
Event::fire('user.new', $data)
Create a new Command with the data, which is bound to a CommandHandler
new NewUserCommand($data)
Call a method in a Service and pass in the data
UserService::newUser($data)
While these are nearly exactly the same, I am just wondering - how do you go about deciding which one to use when you are creating the architecture of your applications?
Fire a new Event that a Listener is set up to process
Event::fire('user.new', $data)
The event pattern implies that there could be many handlers subscribing to the same event, and those handlers are disconnected from the sender. Also, event handlers usually do not return information to the sender (because there can be many handlers, and it would be unclear whose information to return).
So, this is not your case.
Create a new Command with the data, which is bound to a CommandHandler
new NewUserCommand($data)
Commands are an extended way to perform some operation. They can be dispatched, pipelined, queued, etc. If you don't need all those capabilities, why complicate things?
Call a method in a Service and pass in the data
UserService::newUser($data)
Well, this is the most suitable thing for your case, isn't it?
While these are nearly exactly the same, I am just wondering - how do you go about deciding which one to use when you are creating the architecture of your applications?
Easy. From the many solutions, choose only those which are:
metaphorically suitable (don't use events where your logic does not look like an event)
the simplest (don't go too deep into the depths of programming theories and methods; always choose the solution that lowers your project's development complexity)
When to use command over event?
Command: when I have a single isolated action with few dependencies that must be called from different parts of the application. The closest analogue is an editor command that is accessible from both the toolbar and the menu.
Event: when I have several (at least potentially) dependent actions that may be called before/after some other action executes. For example, if you have a number of services, you can use events to perform cache invalidation for them: the service that changes a particular object emits an "IChangedObject" event, and the other services subscribe to such events and respond by invalidating their caches.
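To make the contrast concrete, here is a small Go sketch (all names invented): an event fans out to any number of handlers that are disconnected from the sender and return nothing, while a service call targets one well-known handler and returns a result.

    package main

    import "fmt"

    // Event pattern: many disconnected handlers, nothing returned to the sender.
    type EventBus struct {
        handlers map[string][]func(data any)
    }

    func (b *EventBus) Subscribe(name string, h func(data any)) {
        if b.handlers == nil {
            b.handlers = map[string][]func(data any){}
        }
        b.handlers[name] = append(b.handlers[name], h)
    }

    func (b *EventBus) Fire(name string, data any) {
        for _, h := range b.handlers[name] {
            h(data) // the sender never learns what each handler did
        }
    }

    // Service pattern: one well-known handler, direct return value.
    type UserService struct{}

    func (UserService) NewUser(name string) (int, error) { return 42, nil }

    func main() {
        var bus EventBus
        bus.Subscribe("user.new", func(d any) { fmt.Println("welcome email to", d) })
        bus.Subscribe("user.new", func(d any) { fmt.Println("invalidate cache for", d) })

        id, _ := UserService{}.NewUser("alice") // direct call returns the new ID
        fmt.Println("created user", id)
        bus.Fire("user.new", "alice") // fan-out side effects
    }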

Best way of implementing a batch like operation in a RESTful architecture?

We have a predominantly RESTful architecture for our product. This enables us to nicely implement almost all of the required functionality, except for this new requirement that's come in.
I need to implement a page which lets the user run large-scale DB operations synchronously. They can stop the operation midway if they realize they made a mistake (rather than waiting for it to complete and then doing an undo operation).
I was wondering if someone could give some pointers as to what would be the best way to implement such functionality?
Cheers!
Nirav
How about a resource that encapsulates a set of batch operations? Creating the resource means kicking off the operations (the data indicating what the operations should do is submitted via POST). Updating the resource allows stopping or modifying it while it is processing.
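Sketched in Go (hypothetical endpoints and types; assumes Go 1.22+ for the method-and-path route patterns), the resource could look like this:

    package main

    import (
        "net/http"
        "sync/atomic"
    )

    // A "batch job" resource: POST creates and starts it; DELETE (or a
    // status update via PATCH) flips a cancel flag the worker checks.
    type Job struct{ cancelled atomic.Bool }

    var jobs = map[string]*Job{} // demo only; a real service needs locking

    func main() {
        http.HandleFunc("POST /batch-jobs", func(w http.ResponseWriter, r *http.Request) {
            job := &Job{}
            jobs["1"] = job
            go run(job) // kick off the long-running DB work
            w.Header().Set("Location", "/batch-jobs/1")
            w.WriteHeader(http.StatusAccepted)
        })
        http.HandleFunc("DELETE /batch-jobs/{id}", func(w http.ResponseWriter, r *http.Request) {
            if job := jobs[r.PathValue("id")]; job != nil {
                job.cancelled.Store(true) // stop mid-operation
            }
            w.WriteHeader(http.StatusNoContent)
        })
        http.ListenAndServe(":8080", nil)
    }

    func run(job *Job) {
        for i := 0; i < 1_000_000; i++ {
            if job.cancelled.Load() {
                return // the user changed their mind
            }
            _ = i // perform one unit of the DB operation here
        }
    }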
I would kick off the large operation in a separate thread. Show the user a constantly updated status of the thread, along with a Cancel button. If the user clicks the Cancel button, you kill the thread.
This is the way I've implemented similar things in the past.
The idea is to give the user control back immediately, but not let them do anything else except cancel until the thread is complete.
In generic terms, you need a "job queue" and a way to manage the queue.
You need to integrate a batch manager, or implement your own. There are several products that can help you. As an example, read this article
http://www.ibm.com/developerworks/websphere/techjournal/0801_vignola/0801_vignola.html

Windows Workflow: Persistence and Polling

I'm currently learning the WF framework, so bear with me; mostly I'm looking for where to start looking, not necessarily a direct answer. I just can't seem to figure out how to begin researching what I'd like in The Google.
Let's say I have a simple one-step workflow (much more complicated than that, but for simplicity's sake). This workflow needs to watch a certain record in the database to see when it changes. I don't have the capability to "push" via a trigger from the database when the row changes, so I need to poll for it every so often.
This workflow needs to be persisted to the database to be durable against restarts and whatnot, as this is a long-running workflow. I'm trying to figure out the best way to get it to check every 3 minutes or so and also persist to the database. Do the persistence capabilities of the framework allow for that? They seem to be time-based. And since the workflow won't be reawakened by an external event, how does it reload from the database and check the same step it did previously again? Does it attempt the last unfulfilled activity automatically upon reloading?
Do "while" activities with a delay attached to them work at all, or can this be handled solely through the persistence services?
I'm not sure what you mean by "handled solely through persistence services"? Persistence refers only to the storing of an idle workflow.
You could have a Delay and a Code activity in a Sequence inside a While loop. While in the Delay, the workflow will go idle and may be persisted if necessary. However, depending on how much state needs to be persisted and/or how many such workflows are running at any one time, a leaner approach may be necessary.
A leaner approach would be to externalize the DB watching and have some "DB watching" workflow service raise an event when the desired change has occurred. This service would be added to the workflow runtime.
To that end you need a service contract, which is defined by an interface with the [ExternalDataExchange] attribute. This interface in turn defines an event that the service will raise when the desired DB change is detected. It also defines a method that a workflow can call to specify what change this service should be looking for. The method should accept an instance GUID so that the requesting instance can be found when the DB change is detected.
In the workflow you use a CallExternalMethodActivity to call this service's method. You then flow to a HandleExternalEventActivity which listens for the event. At this point the workflow will go idle and can be persisted. It will remain there until the service raises the event.
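WF specifics aside, the shape of that externalized watcher is the same in any stack. Here is a rough, language-neutral sketch in Go (all names hypothetical): a poller checks the row every 3 minutes and fires a callback keyed by the requesting instance's GUID, which is the moral equivalent of raising the event that the HandleExternalEventActivity is listening for.

    package watcher

    import (
        "database/sql"
        "time"
    )

    // Externalized "DB watcher": workflow instances register interest,
    // the watcher polls, and it raises an event when the row changes.
    type Watcher struct {
        db       *sql.DB
        waiting  map[string]int64        // instance GUID -> last seen version
        onChange func(instanceID string) // the "raise event" hook
    }

    func (w *Watcher) Watch(instanceID string, lastSeen int64) {
        w.waiting[instanceID] = lastSeen
    }

    func (w *Watcher) Run() {
        for range time.Tick(3 * time.Minute) { // poll every 3 minutes
            for id, seen := range w.waiting {
                var v int64
                err := w.db.QueryRow(
                    "SELECT version FROM records WHERE instance_id = ?", id).Scan(&v)
                if err == nil && v != seen {
                    delete(w.waiting, id)
                    w.onChange(id) // wake the idle, persisted workflow instance
                }
            }
        }
    }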