Does Cadence have the concept of Workflow Evolution? - cadence-workflow

"Does cadence have a concept of ""workflow evolution""?
In other words, I have a ""stateful actor"" that models a customer. Initially, the customer has two fields with some signal methods that modify them, some query methods that fetch state, and some main workflow on that actor. Suppose I have 10 of these instances and they are long-lived.
Later I want to add a third field and maybe another signal method. What can I use?

Versioning with Cadence can help here. Here's the documentation.
From the documentation, as an example, a line like the one below
err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
becomes
var err error
v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
}
If you have more than two versions, it will look like this:
v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 2)
if v == workflow.DefaultVersion {
err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else if v == 1 {
err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}
and so on. You can refer to the documentation for more details.
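Applied to the customer workflow from the question, a hypothetical change could be gated the same way (the activity and variable names below are made up for illustration, not taken from any sample):
v := workflow.GetVersion(ctx, "AddThirdField", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
    // Original path: histories recorded before the change keep replaying this branch.
    err = workflow.ExecuteActivity(ctx, SaveCustomerActivity, customer).Get(ctx, nil)
} else {
    // New path: executions that reach this point after the deploy also persist
    // the newly added third field.
    err = workflow.ExecuteActivity(ctx, SaveCustomerWithThirdFieldActivity, customer).Get(ctx, nil)
}
The same pattern applies to any change that affects the sequence of decisions (activities, timers, child workflows) a running workflow produces.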

Yes, Cadence and Temporal support evolving already-running workflows. See the Versioning documentation for more details.

Related

postgres does not save all data after committing

In my Golang project, which uses GORM as the ORM and PostgreSQL as the database, in some situations when I begin a transaction that changes three tables and then commit, only one of the tables changes; the data in the other two tables does not change.
Any idea how this might happen?
You can see an example below:
o := *gorm.DB
tx := o.Begin()
invoice.Number = 1
err := tx.Save(&invoice)
if err != nil {
err2 := tx.RollBack().Error()
return err
}
receipt.Ref = "1331"
err = tx.Save(&receipt)
if err != nil {
err2 := tx.RollBack().Error()
return err
}
payment.status = "succeed"
err = tx.Save(&payment)
if err != nil {
err2 := tx.RollBack().Error()
return err
}
err = tx.Commit()
if err != nil {
err2 := tx.Rollback()
return err
}
Only the payment data changed, and I'm not getting any error.
Apparently you are mistakenly using save points! In PostgreSQL we can have nested transactions, in the sense that defining save points splits the transaction into parts. I am not a Golang programmer and my primary language is not Go, but my guess is that the problem is tx.Save, which creates a save point rather than saving the data to the database. Each save point starts a new nested part of the transaction, and thus only the last table's change gets committed.
If you are familiar with Node.js, then you know any async function's callback receives an error as its first argument. In Go, we follow the same norm: check the returned error.
https://medium.com/rungo/error-handling-in-go-f0125de052f0
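For what it's worth, here is a sketch of the same transaction with the errors actually checked. GORM's methods return a *gorm.DB, not an error, so comparing that return value against nil never detects a failure; the real error lives in its .Error field. (The db variable and field names such as payment.Status are assumptions about the question's models.)
tx := db.Begin()
if tx.Error != nil {
    return tx.Error
}

invoice.Number = 1
if err := tx.Save(&invoice).Error; err != nil {
    tx.Rollback()
    return err
}

receipt.Ref = "1331"
if err := tx.Save(&receipt).Error; err != nil {
    tx.Rollback()
    return err
}

payment.Status = "succeed"
if err := tx.Save(&payment).Error; err != nil {
    tx.Rollback()
    return err
}

return tx.Commit().Error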

When to use selector.AddReceive, selector.Select

I would appreciate some clarification on when I should use selector.AddReceive and selector.Select. This might not be a Cadence problem; perhaps I'm missing some knowledge of Golang.
For selector.Select, I think the basic idea is that we wait for the next output from a channel. I'm not entirely sure what selector.AddReceive does.
For example, take the local_activity sample from the Cadence examples (linked and pasted below):
func signalHandlingWorkflow(ctx workflow.Context) error {
logger := workflow.GetLogger(ctx)
ch := workflow.GetSignalChannel(ctx, SignalName)
for {
var signal string
if more := ch.Receive(ctx, &signal); !more {
logger.Info("Signal channel closed")
return cadence.NewCustomError("signal_channel_closed")
}
logger.Info("Signal received.", zap.String("signal", signal))
if signal == "exit" {
break
}
cwo := workflow.ChildWorkflowOptions{
ExecutionStartToCloseTimeout: time.Minute,
// TaskStartToCloseTimeout must be larger than all local activity execution time, because DecisionTask won't
// return until all local activities completed.
TaskStartToCloseTimeout: time.Second * 30,
}
childCtx := workflow.WithChildOptions(ctx, cwo)
var processResult string
err := workflow.ExecuteChildWorkflow(childCtx, processingWorkflow, signal).Get(childCtx, &processResult)
if err != nil {
return err
}
logger.Sugar().Infof("Processed signal: %v, result: %v", signal, processResult)
}
return nil
}
We don't use selector.AddReceive at all there.
But in this example, which also uses signal channels: Changing the uber cadence sleeptime based on external input
I'll also paste the code here:
func SampleTimerWorkflow(ctx workflow.Context, timerDelay time.Duration) error {
    logger := workflow.GetLogger(ctx)
    resetCh := workflow.GetSignalChannel(ctx, "reset")
    timerFired := false
    delay := timerDelay
    for !timerFired {
        selector := workflow.NewSelector(ctx)
        logger.Sugar().Infof("Setting up a timer to fire after: %v", delay)
        timerCancelCtx, cancelTimerHandler := workflow.WithCancel(ctx)
        timerFuture := workflow.NewTimer(timerCancelCtx, delay)
        selector.AddFuture(timerFuture, func(f workflow.Future) {
            logger.Info("Timer Fired.")
            timerFired = true
        })
        selector.AddReceive(resetCh, func(c workflow.Channel, more bool) {
            logger.Info("Reset signal received.")
            logger.Info("Cancel outstanding timer.")
            cancelTimerHandler()
            var t int
            c.Receive(ctx, &t)
            logger.Sugar().Infof("Reset delay: %v seconds", t)
            delay = time.Second * time.Duration(t)
        })
        logger.Info("Waiting for timer to fire.")
        selector.Select(ctx)
    }
    workflow.GetLogger(ctx).Info("Workflow completed.")
    return nil
}
You can see there is a selector.AddReceive; I'm not entirely sure what its purpose is or when I should use it.
I am trying to send a signal to my workflow that allows me to extend an expiration time. Meaning, it would delay the call of an ExpirationActivity.
And when following this example (combined with my current code), as soon as I send the reset signal, it seems that timerFired gets set to true immediately.
My current code is below (I've taken out some irrelevant if statements). Previously I was using only one instance of selector.Select, but somewhere my code wasn't behaving properly.
func Workflow(ctx workflow.Context) (string, error) {
// local state per bonus workflow
bonusAcceptanceState := pending
logger := workflow.GetLogger(ctx).Sugar()
logger.Info("Bonus workflow started")
timerCreated := false
timerFired := false
delay := timerDelay
// To query state in Cadence GUI
err := workflow.SetQueryHandler(ctx, "bonusAcceptanceState", func(input []byte) (string, error) {
return bonusAcceptanceState, nil
})
if err != nil {
logger.Info("SetQueryHandler failed: " + err.Error())
return "", err
}
info := workflow.GetInfo(ctx)
executionTimeout := time.Duration(info.ExecutionStartToCloseTimeoutSeconds) * time.Second
// decisionTimeout := time.Duration(info.TaskStartToCloseTimeoutSeconds) * time.Second
decisionTimeout := time.Duration(info.ExecutionStartToCloseTimeoutSeconds) * time.Second
maxRetryTime := executionTimeout // retry for the entire time
retryPolicy := &cadence.RetryPolicy{
InitialInterval: time.Second,
BackoffCoefficient: 2,
MaximumInterval: executionTimeout,
ExpirationInterval: maxRetryTime,
MaximumAttempts: 0, // unlimited, bound by maxRetryTime
NonRetriableErrorReasons: []string{},
}
ao := workflow.ActivityOptions{
TaskList: taskList,
ScheduleToStartTimeout: executionTimeout, // time until a task has to be picked up by a worker
ScheduleToCloseTimeout: executionTimeout, // total execution timeout
StartToCloseTimeout: decisionTimeout, // time that a worker can take to process a task
RetryPolicy: retryPolicy,
}
ctx = workflow.WithActivityOptions(ctx, ao)
selector := workflow.NewSelector(ctx)
timerCancelCtx, cancelTimerHandler := workflow.WithCancel(ctx)
var signal *singalType
for {
signalChan := workflow.GetSignalChannel(ctx, signalName)
// resetCh := workflow.GetSignalChannel(ctx, "reset")
selector.AddReceive(signalChan, func(c workflow.Channel, more bool) {
c.Receive(ctx, &signal)
})
selector.Select(ctx)
if signal.Type == "exit" {
return "", nil
}
// We can check the age and return an appropriate response
if signal.Type == "ACCEPT" {
if bonusAcceptanceState == pending {
logger.Info("Bonus Accepted")
bonusAcceptanceState = accepted
var status string
future := workflow.ExecuteActivity(ctx, AcceptActivity)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
}
// Start expiration timer
if !timerCreated {
timerCreated = true
timerFuture := workflow.NewTimer(timerCancelCtx, delay)
selector.AddFuture(timerFuture, func(f workflow.Future) {
logger.Info("Timer Fired.")
timerFired = true
})
}
}
}
if signal.Type == "ROLLOVER_1X" && bonusAcceptanceState == accepted {
var status string
future := workflow.ExecuteActivity(ctx, Rollover1x)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
}
selector.Select(ctx)
}
if signal.Type == "ROLLOVER_COMPLETE" && bonusAcceptanceState == accepted {
var status string
future := workflow.ExecuteActivity(ctx, RolloverComplete)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
return "", err
}
// Workflow is terminated on return result
return status, nil
}
for !timerFired && bonusAcceptanceState == accepted && signal.Type == "RESET" {
cancelTimerHandler()
i, err := strconv.Atoi(signal.Value)
if err != nil {
logger.Infow("error in converting")
}
logger.Infof("Reset delay: %v seconds", i)
delay = time.Minute * time.Duration(i)
timerFuture := workflow.NewTimer(timerCancelCtx, delay)
selector.AddFuture(timerFuture, func(f workflow.Future) {
logger.Info("Timer Fired.")
timerFired = true
})
selector.Select(ctx)
}
if timerFired {
var status string
future := workflow.ExecuteActivity(ctx, ExpirationActivity)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
}
return status, nil
}
}
}
TL;DR:
You only need selector.AddReceive when you want a selector to listen on a channel, as in your second code snippet. If you only need to process signals from a channel directly, without a selector, then you don't need it.
selector.Select lets the code wait for one of several events to happen, without busy-looping.
More details on when to use them
Essentially, this is exactly the same concept as Golang's select statement. Golang's select allows you to wait for timers and channels. Golang has no selector.Select() simply because select is baked into the language itself, whereas Cadence is a library.
So, just as in Golang, you don't have to use a select statement to use a timer or a channel. You only need it when you have to listen on multiple sources of events.
For example, suppose you have two channels and want to run some common logic whenever either of them receives something, e.g. increment a counter. The counter doesn't belong to either channel; it's shared. Then using a selector looks nice:
chA := workflow.GetSignalChannel(ctx, SignalNameA)
chB := workflow.GetSignalChannel(ctx, SignalNameB)
counter := 0
selector := workflow.NewSelector(ctx)
selector.AddReceive(chA, func(c workflow.Channel, more bool) { c.Receive(ctx, nil) })
selector.AddReceive(chB, func(c workflow.Channel, more bool) { c.Receive(ctx, nil) })
for {
    selector.Select(ctx)
    counter += 1
}
The workflow code with a selector looks very similar to this in plain Golang:
counter := 0
for {
    select {
    case <-chA:
        counter += 1
    case <-chB:
        counter += 1
    }
}
Otherwise you would have to use two goroutines, one listening on each channel, and do the counting there. The Golang code looks like this:
counter := 0
go func() {
    for {
        <-chA
        counter += 1
    }
}()
go func() {
    for {
        <-chB
        counter += 1
    }
}()
This version has a potential data race on the counter, unless the counter is implemented in a thread-safe way.
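(In plain Go you would typically make that shared counter safe with a mutex or sync/atomic; continuing the snippet above, a minimal sketch:)
var counter int64 // shared between the two goroutines

go func() {
    for {
        <-chA
        atomic.AddInt64(&counter, 1) // atomic increment avoids the data race
    }
}()
go func() {
    for {
        <-chB
        atomic.AddInt64(&counter, 1)
    }
}()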
And in Cadence workflow code, the equivalent is something like this:
chA := workflow.GetSignalChannel(ctx, SignalNameA)
chB := workflow.GetSignalChannel(ctx, SignalNameB)
counter := 0
workflow.Go(ctx, func(ctx workflow.Context) {
    for {
        chA.Receive(ctx, nil)
        counter += 1
    }
})
workflow.Go(ctx, func(ctx workflow.Context) {
    for {
        chB.Receive(ctx, nil)
        counter += 1
    }
})
However, there is no such race condition in Cadence, because Cadence's coroutines (started by workflow.Go()) do not actually run in parallel. Both workflow versions above work perfectly.
But Cadence still provides the selector, same as Golang, mostly because the first version is more natural to write.
Check the future's result: cancelling the timer still completes its future (with an error), so the AddFuture callback runs either way; only treat the timer as fired when the error is nil.
selector.AddFuture(timerFuture, func(f workflow.Future) {
    err := f.Get(ctx, nil)
    if err == nil {
        logger.Info("Timer Fired.")
        timerFired = true
    }
})
ref: https://github.com/uber-go/cadence-client/blob/0256258b905b677f2f38fcacfbda43398d236309/workflow/deterministic_wrappers.go#L128-L129

Mocking MongoDB response in Go

I'm fetching a document from MongoDB and passing it into the function transform, e.g.
var doc map[string]interface{}
err := collection.FindOne(context.TODO(), filter).Decode(&doc)
result := transform(doc)
I want to write unit tests for transform, but I'm not sure how to mock a response from MongoDB. Ideally I want to set something like this up:
func TestTransform(t *testing.T) {
byt := []byte(`
{"hello": "world",
"message": "apple"}
`)
var doc map[string]interface{}
>>> Some method here to Decode byt into doc like the code above <<<
out := transform(doc)
expected := ...
if diff := deep.Equal(expected, out); diff != nil {
t.Error(diff)
}
}
One way would be to json.Unmarshal into doc, but this sometimes gives different results. For example, if the document in MongoDB has an array in it, then that array is decoded into doc as a bson.A value rather than a []interface{} value.
A member of my team recently found a hidden gem inside the official MongoDB driver for Go: https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.9.1/mongo/integration/mtest. Although the package is experimental and there is no backward-compatibility guarantee for it, it can help you perform unit testing, at least with this version of the driver.
You can check this cool article with plenty of examples of how to use it: https://medium.com/@victor.neuret/mocking-the-official-mongo-golang-driver-5aad5b226a78. Additionally, here is the repository with the code samples for that article: https://github.com/victorneuret/mongo-go-driver-mock.
So, based on your example and the samples from the article, I think you could try something like the following (of course, you might need to tweak and experiment with this):
func TestTransform(t *testing.T) {
    mt := mtest.New(t, mtest.NewOptions().ClientType(mtest.Mock))
    defer mt.Close()

    mt.Run("find & transform", func(mt *mtest.T) {
        myCollection = mt.Coll // the collection your find function queries
        expected := myStructure{...}

        mt.AddMockResponses(mtest.CreateCursorResponse(1, "foo.bar", mtest.FirstBatch, bson.D{
            {"_id", expected.ID},
            {"field-1", expected.Field1},
            {"field-2", expected.Field2},
        }))

        response, err := myFindFunction(expected.ID)
        if err != nil {
            t.Error(err)
        }

        out := transform(response)
        if diff := deep.Equal(expected, out); diff != nil {
            t.Error(diff)
        }
    })
}
Alternatively, you can do more realistic testing in an automated way via integration tests with Docker containers. There are a few good packages that can help you with this:
https://github.com/ory/dockertest
https://github.com/testcontainers/testcontainers-go
I have followed this approach with the dockertest library to automate a full integration-testing environment that can be set up and torn down via the go test -v -run Integration command. See a full example here: https://github.com/AnhellO/learn-dockertest/tree/master/mongo. A rough sketch of that approach is shown below.
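For orientation only, here is roughly what a dockertest-based setup can look like; it is not taken from the linked repository, and the helper name, image tag and connection code are my own choices:
package mypkg_test

import (
    "context"
    "fmt"
    "testing"

    "github.com/ory/dockertest/v3"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// startMongo starts a throwaway MongoDB container and returns a connected
// client plus a cleanup function that removes the container.
func startMongo(t *testing.T) (*mongo.Client, func()) {
    pool, err := dockertest.NewPool("") // talks to the local Docker daemon
    if err != nil {
        t.Fatalf("could not connect to docker: %v", err)
    }
    resource, err := pool.Run("mongo", "5.0", nil) // roughly: docker run mongo:5.0
    if err != nil {
        t.Fatalf("could not start container: %v", err)
    }

    uri := fmt.Sprintf("mongodb://localhost:%s", resource.GetPort("27017/tcp"))
    var client *mongo.Client
    // Retry until mongod inside the container is ready to accept connections.
    if err := pool.Retry(func() error {
        var err error
        client, err = mongo.Connect(context.Background(), options.Client().ApplyURI(uri))
        if err != nil {
            return err
        }
        return client.Ping(context.Background(), nil)
    }); err != nil {
        t.Fatalf("could not connect to mongo in docker: %v", err)
    }

    return client, func() { _ = pool.Purge(resource) }
}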
Hope this helps.
The best solution for writing testable code would be to extract your data access into a DAO or data repository. You define an interface that returns what you need; this way, you can just use a mocked version for testing.
// repository.go
type ISomeRepository interface {
    Get(string) (*SomeModel, error)
}

type SomeRepository struct { ... }

func (r *SomeRepository) Get(id string) (*SomeModel, error) {
    // Handle the real repository access and return your object
}
When you need to mock it, just create a mock struct and implement the interface:
// repository_test.go
type SomeMockRepository struct { ... }

func (r *SomeMockRepository) Get(id string) (*SomeModel, error) {
    return &SomeModel{...}, nil
}

func TestSomething(t *testing.T) {
    // You can use your mock as an ISomeRepository
    var repo ISomeRepository = &SomeMockRepository{}
    someModel, err := repo.Get("123")
    // assert on someModel and err ...
}
This works best with some kind of dependency injection, i.e. passing the repository into the function as an ISomeRepository, as sketched below.
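For illustration (the function and field names here are made up, building on the snippets above), the consuming code only sees the interface, so tests can hand it the mock:
// DescribeModel depends only on ISomeRepository, so production code can pass
// a *SomeRepository and tests can pass a *SomeMockRepository.
func DescribeModel(repo ISomeRepository, id string) (string, error) {
    m, err := repo.Get(id)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("model %s: %+v", id, m), nil
}

func TestDescribeModel(t *testing.T) {
    out, err := DescribeModel(&SomeMockRepository{}, "123")
    if err != nil {
        t.Fatal(err)
    }
    if out == "" {
        t.Error("expected a non-empty description")
    }
}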
You can also use the monkey patching library to hook functions of the mongo driver.
For example:
func insert(collection *mongo.Collection) (int, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    u := User{
        Name: "kevin",
        Age:  20,
    }
    res, err := collection.InsertOne(ctx, u)
    if err != nil {
        log.Printf("error: %v", err)
        return 0, err
    }
    id := res.InsertedID.(int)
    return id, nil
}

func TestInsert(t *testing.T) {
    var c *mongo.Collection
    var guard *monkey.PatchGuard
    guard = monkey.PatchInstanceMethod(reflect.TypeOf(c), "InsertOne",
        func(c *mongo.Collection, ctx context.Context, document interface{}, opts ...*options.InsertOneOptions) (*mongo.InsertOneResult, error) {
            guard.Unpatch()
            defer guard.Restore()
            log.Printf("record: %+v, collection: %s, database: %s", document, c.Name(), c.Database().Name())
            res := &mongo.InsertOneResult{
                InsertedID: 100,
            }
            return res, nil
        })

    collection := client.Database("db").Collection("person")
    id, err := insert(collection)

    require.NoError(t, err)
    assert.Equal(t, id, 100)
}

Watch for MongoDB Change Streams

We want our Go application to listen to the data changes on a collection, so, googling for a solution, we came across MongoDB's Change Streams. That link also exhibits implementation snippets for a bunch of languages such as Python, Java, Node.js etc., yet there is no piece of code for Go.
We are using mgo as the driver but could not find explicit statements about change streams.
Does anyone have any idea how to watch Change Streams using mgo or any other MongoDB driver for Go?
The popular mgo driver (github.com/go-mgo/mgo) developed by Gustavo Niemeyer has gone dark (it is unmaintained), and it has no support for change streams.
The community-supported fork github.com/globalsign/mgo is in much better shape, and has already added support for change streams (see details here).
To watch changes on a collection, simply use the Collection.Watch() method, which returns a *mgo.ChangeStream. Here's a simple example using it:
coll := ... // Obtain collection

pipeline := []bson.M{}
changeStream, err := coll.Watch(pipeline, mgo.ChangeStreamOptions{})
if err != nil {
    return err
}

var changeDoc bson.M
for changeStream.Next(&changeDoc) {
    fmt.Printf("Change: %v\n", changeDoc)
}

if err := changeStream.Close(); err != nil {
    return err
}
Also note that there is an official MongoDB Go driver under development; it was announced here: Considering the Community Effects of Introducing an Official MongoDB Go Driver
It is currently in alpha (!!) phase, so take that into consideration. It is available here: github.com/mongodb/mongo-go-driver. It also already has support for change streams, similarly via the Collection.Watch() method (this is a different mongo.Collection type; it has nothing to do with mgo.Collection). It returns a mongo.Cursor, which you may use like this:
var coll *mongo.Collection = ... // Obtain collection
ctx := context.Background()

var pipeline interface{} // set up pipeline

cur, err := coll.Watch(ctx, pipeline)
if err != nil {
    // Handle err
    return
}
defer cur.Close(ctx)

for cur.Next(ctx) {
    elem := bson.NewDocument()
    if err := cur.Decode(elem); err != nil {
        log.Fatal(err)
    }
    // do something with elem....
}

if err := cur.Err(); err != nil {
    log.Fatal(err)
}
This example uses the official MongoDB-supported driver for Go with a stream pipeline (filtering only documents having field1=1 and field2=false):
ctx := context.TODO()

clientOptions := options.Client().ApplyURI(mongoURI)
client, err := mongo.Connect(ctx, clientOptions)
if err != nil {
    log.Fatal(err)
}
err = client.Ping(ctx, nil)
if err != nil {
    log.Fatal(err)
}
fmt.Println("Connected!")

collection := client.Database("test").Collection("test")

pipeline := mongo.Pipeline{bson.D{
    {"$match",
        bson.D{
            {"fullDocument.field1", 1},
            {"fullDocument.field2", false},
        },
    },
}}
streamOptions := options.ChangeStream().SetFullDocument(options.UpdateLookup)
stream, err := collection.Watch(ctx, pipeline, streamOptions)
if err != nil {
    log.Fatal(err)
}

log.Print("waiting for changes")
var changeDoc map[string]interface{}
for stream.Next(ctx) {
    if e := stream.Decode(&changeDoc); e != nil {
        log.Printf("error decoding: %s", e)
    }
    log.Printf("change: %+v", changeDoc)
}

Dynamic casting in Golang

So... I'm creating a RESTful API for my idea using the Gin framework, and I've come across the following problem:
Let's say that I've got the following endpoints:
/a/:id/*action
/b/:id/*action
/c/:id/*action
So, obviously, when no action is given, I want to return the data for the given ID. That means I'm doing nothing but querying some data and returning it, so the functionality is basically the same and only the returned data is different.
Here's an example of my code:
func GetBusiness(c *gin.Context) {
businessID, err := strconv.Atoi(c.Param("id"))
if businessID == 0 || err != nil {
c.JSON(http.StatusBadRequest, gin.H{"success": false, "errorMessage": "Missing ID"})
}
business := &Business{}
business, err = business.Get(businessID)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"success": false, "errorMessage": "Business not found"})
}
c.JSON(http.StatusOK, business)
}
So, obviously, business can become user or anything else. After this long exposition, my question to you Gophers is: how can I prevent code duplication in this kind of situation? I've tried using an interface, but I'm still struggling with the OO nature of Go, so I would really appreciate any help.
Thanks in advance!
There are a few things you can do to reduce code duplication, but unfortunately you will always be writing some boilerplate in Go, because of its explicit error handling and lack of OOP-ness (which is not necessarily a bad thing!).
So my only suggestion at the moment is to put common functionality into middleware handlers and restructure your code a little, for example:
parseIdMiddleware := func(c *gin.Context) {
    id, err := strconv.Atoi(c.Param("id"))
    if id == 0 || err != nil {
        c.AbortWithError(http.StatusBadRequest, errors.New("Missing ID"))
        return
    }
    c.Set("id", id)
}
...
gin.Use(gin.ErrorLogger(), parseIdMiddleware)
and rewrite your handlers to
func GetBusiness(c *gin.Context) {
    id := c.MustGet("id").(int)

    business, err := store.GetBusiness(id)
    if err != nil {
        c.AbortWithError(http.StatusBadRequest, err)
        return // don't forget this!
    }

    c.JSON(http.StatusOK, business)
}
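If the handlers really only differ in which store call they make, you could go one step further and build them from a small factory. This is just a sketch on top of the hypothetical store package above (store.GetUser is assumed to exist alongside store.GetBusiness); the wrapper closures adapt the concrete return types to interface{}:
// getByID builds a gin handler around any "fetch by id" function.
func getByID(fetch func(id int) (interface{}, error)) gin.HandlerFunc {
    return func(c *gin.Context) {
        id := c.MustGet("id").(int)

        obj, err := fetch(id)
        if err != nil {
            c.AbortWithError(http.StatusBadRequest, err)
            return
        }

        c.JSON(http.StatusOK, obj)
    }
}

// Wiring it up:
// router.GET("/business/:id", getByID(func(id int) (interface{}, error) { return store.GetBusiness(id) }))
// router.GET("/user/:id", getByID(func(id int) (interface{}, error) { return store.GetUser(id) }))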
And as always, read other people's code! I recommend https://github.com/drone/drone. That should give you a pretty good overview of how to structure your code.