How can a workflow B be triggered from a workflow A after it completes? - argo-workflows

I have two workflows A and B. How can workflow A trigger workflow B after it completes, without having to introduce a third workflow that chains A and B (e.g. using steps or a dag)?

Presumably you can use an exit handler for this?
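As a rough illustration, here is a minimal sketch of that approach (all names are illustrative assumptions, and it assumes workflow B has been published as a WorkflowTemplate named workflow-b-template): workflow A declares an onExit handler whose template creates workflow B as a new Workflow resource.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-a-
spec:
  entrypoint: main
  onExit: trigger-b            # runs after the workflow completes, success or failure
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [echo, "the work of A"]
    - name: trigger-b
      resource:                # a resource template can create any Kubernetes object,
        action: create         # including another Workflow
        manifest: |
          apiVersion: argoproj.io/v1alpha1
          kind: Workflow
          metadata:
            generateName: workflow-b-
          spec:
            workflowTemplateRef:
              name: workflow-b-template

If B should run only when A succeeds, the exit handler can additionally branch on {{workflow.status}}.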

Related

Passing values to onExit template in an argo workflow

I've stumbled upon a problem when using some more complex argo workflows with initialization and clean-up logic. We run some initialization in one of the first steps of the workflow (e.g. creation of some resources), and we'd like to perform a clean-up regardless of the status of the workflow. An onExit template seems to be the ideal solution (clean-up is even mentioned in the argo documentation as a typical use for onExit templates).
However, I haven't found a way yet to pass some values to it. For example - let's say that in the initialization phase we created some resource with id some-random-unique-id and we'd like to let the onExit container know what resources it needs to clean up.
We tried the outputs of some steps, but it seems that the steps context is not available inside the onExit template.
Is there a built-in argo mechanism to pass this kind of data? We'd like to avoid some external services (like key-value storage service that would hold the context).
You can mark output parameters as global using the globalName field. A global output parameter, assuming it has been set, can be accessed from anywhere in the Workflow, including in an exit handler.
The example file for writing and consuming global output parameters should contain all the information you need to use global output parameters in an exit handler.
https://github.com/argoproj/argo-workflows/blob/master/examples/global-outputs.yaml
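As a rough sketch of that pattern applied to the clean-up scenario above (the image and resource id are illustrative assumptions):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: global-outputs-
spec:
  entrypoint: main
  onExit: cleanup
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo -n some-random-unique-id > /tmp/resource-id"]
      outputs:
        parameters:
          - name: resource-id
            globalName: resource-id      # exported to workflow scope
            valueFrom:
              path: /tmp/resource-id
    - name: cleanup
      container:
        image: alpine:3.19
        command: [sh, -c]
        # global outputs are addressable from anywhere, including exit handlers
        args: ["echo cleaning up {{workflow.outputs.parameters.resource-id}}"]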
Please share if you found a cleaner solution.
There is a workflow example for an exit handler with parameters that may resolve your issue: Exit Handler with Parameters
One method I found was to add steps to the template I want to run the exit logic on, and call the exit template as the last step, so that the steps variables are still accessible. I changed the original template to an inline template so the structure does not change too much; it just involves adding a final step that calls the exit template. Unfortunately, this does not use the actual onExit mechanism, so the final step will not run if an earlier step fails (a sketch follows the links below).
Inline steps template example
1. https://sourcegraph.com/github.com/argoproj/argo-workflows/-/blob/examples/exit-handler-with-param.yaml?L21
2. https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/steps-inline-workflow.yaml
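For concreteness, here is a condensed sketch of that workaround (the resource names are illustrative assumptions; named templates are used here for brevity, while the second link shows the inline form). The cleanup step can read the init step's outputs because it runs inside the same steps template:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: inline-cleanup-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: init                  # creates the resource and emits its id
            template: create-resource
        - - name: cleanup               # "exit" logic as the final step,
            template: delete-resource   # so steps.init outputs are in scope
            arguments:
              parameters:
                - name: resource-id
                  value: "{{steps.init.outputs.parameters.resource-id}}"
    - name: create-resource
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo -n some-random-unique-id > /tmp/id"]
      outputs:
        parameters:
          - name: resource-id
            valueFrom:
              path: /tmp/id
    - name: delete-resource
      inputs:
        parameters:
          - name: resource-id
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo cleaning up {{inputs.parameters.resource-id}}"]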

How can I restart a parent workflow from a child workflow, or re-trigger it within the parent workflow itself? - Cadence/Temporal

I have a workflow with multiple activities 1, 2, 3 ... 6. If the workflow fails after activity 3 with one particular exception, I plan to start a child workflow which will eventually fix the cause of the exception. After that I want to retry the parent workflow to finish the complete flow.
What can I use in the child workflow to achieve this?
I looked into the Workflow interface, which has ContinueAsNew, but that creates a new workflow run and performs all the activities again.
I would recommend not failing the workflow, but implementing the compensation and retry logic as part of it. You can write something like:
activities.a1(...);
activities.a2(...);
try {
    activities.a3(...);
} catch (MyParticularException e) {
    // run the compensating child workflow, then continue with the main flow
    childWorkflow.fixExceptionIn3(...);
}
activities.a4(...);
activities.a5(...);
activities.a6(...);
activities.a7(...);
The idea of retrying the whole workflow comes from the synchronous request-reply world and almost never makes sense for workflows.

How to trigger a DevOps release pipeline based on a build pipeline's path filter

I have an Azure Function App that includes 3 functions:
FunctionApp
  FunctionA
  FunctionB
  FunctionC
I have a DevOps pipeline configured to build the function app whenever any of the contents of the FunctionApp change.
I'd like now to set up a release pipeline that invokes whichever functions were updated. For example, if, in my pull request, only FunctionA was modified, I'd like to invoke only FunctionA.
How can I do this with DevOps pipelines?
I don't want to say "impossible", but what you're asking for is difficult-verging-on-impossible.
Without building a full dependency graph of the application code and tying it back to changed source files, you can't tell which function changed. Here's an example:
Let's say Function A and Function B depend on Library C. You update code in Library C. This means that, from a functional perspective, both Function A and Function B have changed. How do you determine this at build time?
It gets even trickier when you consider not only build but deployment, because you'd really be invoking the functions after deployment, not after build. Here's a scenario: Function A and Function B both change, but something goes wrong, the deployment is only partially successful, and only Function A has been invoked. Now you run the deployment again. How does it know to invoke only Function B, since A has already been run?
You haven't said anything about what the functions actually do, but I suspect that a better solution to this problem is to ensure that your functions are idempotent and can safely be run multiple times. If you do that, the entire premise of the question is moot: you can just deploy and invoke all of the functions every time.

How can I pass a variable between builds in VSTS?

For example, I have two builds A and B. When A builds successfully, build B should be triggered and receive the output variable from A.
How can I implement this?
For now, there is no way to persist variables between two builds.
There is a UserVoice suggestion, Make it possible to change the value of a variable in a variable group, which requests a similar feature; you can vote for it and follow up.
A workaround to trigger build B after build A succeeds (variables still cannot be passed) is to add a task such as the Queue Build(s) Task at the end of build definition A to queue build B.

How to yield control to a calling method

Say I have a Task object with an Execute method. This method has one to several steps, each of which requires the user to click a 'Continue' button. For example, when Execute is invoked, the Task tells its container (a Windows form in this case) to display an introductory message and wait for a button click before continuing with step 2, where it notifies the user of what is taking place and performs some work.
I don't want the controller to have to be aware of the steps in the task, either implicitly, through e.g. calling Execute(Steps.ShowIntro), Execute(Steps.PerformTask) etc. or explicitly, with more than one Execute method, e.g. ExecuteIntro(), ExecuteTask(), etc.
Currently I'm using a Phase enumeration to determine which action to carry out when the Continue button is clicked:
show phase 1 intro.
set current_phase = PhaseOne.

on continue_button click:
    switch current_phase:
        case PhaseOne:
            show phase 1 'Now doing:' message.
            execute phase 1 task.
            show phase 2 intro.
            set current_phase = PhaseTwo.
        case PhaseTwo:
            show phase 2 'Now doing:' message.
            execute phase 2 task.
            show phase 3 intro.
            set current_phase = PhaseThree.
Why not implement one class with an Execute method per step and put instances of those classes in a queue?
Each press of "Continue" then takes the next instance from the queue and calls its Execute method.
class Task
    method execute()
        foreach task in queue: execute task
    method addSubTask( task )
        add task to queue

class ShowIntroSubTask extends Task
class ExecuteIntroSubTask extends Task
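For concreteness, here is a minimal runnable sketch of this queue idea in Java (all class and method names are illustrative assumptions, not an established API):

import java.util.ArrayDeque;
import java.util.Queue;

interface SubTask {
    void execute();
}

class ShowIntroStep implements SubTask {
    public void execute() { System.out.println("Showing intro..."); }
}

class PerformWorkStep implements SubTask {
    public void execute() { System.out.println("Performing work..."); }
}

class TaskRunner {
    private final Queue<SubTask> steps = new ArrayDeque<>();

    void addSubTask(SubTask step) {
        steps.add(step);
    }

    // Wire this to the Continue button's click handler: each click runs
    // exactly one step, so the form never needs to know which step is next.
    void onContinueClicked() {
        SubTask next = steps.poll();   // null once all steps have run
        if (next != null) {
            next.execute();
        }
    }

    public static void main(String[] args) {
        TaskRunner runner = new TaskRunner();
        runner.addSubTask(new ShowIntroStep());
        runner.addSubTask(new PerformWorkStep());
        runner.onContinueClicked();    // "click" 1: shows the intro
        runner.onContinueClicked();    // "click" 2: performs the work
    }
}

The container only holds a TaskRunner and forwards button clicks to onContinueClicked, so it stays ignorant of the task's internal steps.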
Mykola's answer sounds good, but if you'd like an alternative, consider passing in a ConfirmContinuation callback, which Execute can invoke as needed (e.g. on step transitions). If you want to keep things abstract, just call it something like NextStep and leave the semantics up to the container.
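A minimal sketch of that callback alternative, again in Java (the ConfirmContinuation name comes from the answer; the console host standing in for the Windows form, and all other names, are illustrative assumptions):

import java.util.Scanner;

interface ConfirmContinuation {
    void await(String message);   // block until the user agrees to continue
}

class SteppedTask {
    void execute(ConfirmContinuation confirm) {
        System.out.println("Intro: this task has two steps.");
        confirm.await("Continue to step 1?");
        System.out.println("Step 1: doing some work...");
        confirm.await("Continue to step 2?");
        System.out.println("Step 2: doing more work...");
    }
}

class ConsoleHost {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        // The container supplies the meaning of "continue": a Windows Forms
        // host would block on the Continue button instead of stdin.
        new SteppedTask().execute(message -> {
            System.out.println(message + " (press Enter)");
            in.nextLine();
        });
    }
}

The task keeps full control of its step sequence, while the container decides how confirmation happens, so neither side needs to know the other's internals.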