I want to create a conditional cyclic DAG in Argo Workflow.
My use case: I have a final human-approval step. If it is rejected, I want the workflow to go back to a specific node and rerun from there.
Why a rerun makes sense in my case: there are some human inputs in the workflow, and if the final result is rejected, the users might want to change some parameters and rerun.
Why not rerun the whole workflow: some steps are expensive.
I know a cycle is, by definition, not valid in a DAG. But I haven't come up with a proper way to handle my use case in Argo Workflow. Any suggestions?
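One idea I have been toying with, though I am not sure it is idiomatic: an Argo template can invoke itself, so the loop might be expressible as conditional recursion rather than a back-edge in the DAG. A rough sketch of what I mean (all template names and images are placeholders, and the human approve/reject decision is stubbed out):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: approval-loop-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: expensive-step          # runs only once
            template: echo
            arguments:
              parameters: [{name: msg, value: "expensive work"}]
        - - name: rerunnable-part         # the part to repeat on rejection
            template: rerunnable-part
    - name: rerunnable-part
      steps:
        - - name: work
            template: echo
            arguments:
              parameters: [{name: msg, value: "work driven by human inputs"}]
        - - name: approval                # placeholder; a real gate could be a suspend template
            template: decide
        - - name: loop                    # recurse from this node when rejected
            template: rerunnable-part
            when: "{{steps.approval.outputs.result}} == rejected"
    - name: echo
      inputs:
        parameters: [{name: msg}]
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo {{inputs.parameters.msg}}"]
    - name: decide
      script:
        image: alpine:3.19
        command: [sh]
        source: echo approved             # stand-in for the human approve/reject decision

The recursion would stand in for the back-edge: the template re-invokes itself only while the approval result is rejected, and the expensive step stays outside the loop. Is something like this the right direction, or is there a better pattern?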
My pipeline activities need the date of the run as a parameter. Currently I get the current date inside the pipeline from the utcnow() function. Ideally this would be something I could set dynamically in the trigger, so that I could rerun a failed day and the parameter would be set correctly; as it is, a rerun executes my pipeline again but with today's date rather than the date of the failed run.
I am used to Airflow, where such things are pretty easy to do, including scheduling reruns. I am probably thinking too much in Airflow terms, but I can't wrap my head around a better solution.
In ADF, there is no direct support for passing the trigger date of the failed run back into the trigger.
You can get the trigger time using @pipeline().TriggerTime.
This system variable gives the time at which the trigger started the pipeline run.
You can store this trigger value for every pipeline run and use it as a parameter when you rerun the pipeline that failed.
Reference: Microsoft documentation on system variables in ADF
To resolve my problem I had to create a nested structure of pipelines: the top pipeline sets a variable for the date and then calls other pipelines, passing that variable along.
With this I still can't rerun the top pipeline, but rerunning Execute Pipeline1/2/3 reruns them with the right variable set. It is still not perfect, since the top pipeline run stays in an error state and it is difficult to keep track of what needs to be rerun, but it is a partial solution.
We have 100+ services/apps in a repository in Azure Devops. We have defined a single CI/CD YAML multistage pipeline for each (build and deployment). This limits blast radius and allows for auditability of each release of each project. We rely on templates for all the real pipeline work so this is easy to maintain; just a small root azure-pipelines.yml file for each project that includes the needed templates.
Now, we'd like to start using PR validation builds. And, as best as I can tell, we have two options:
Create a separate PR build for every project and use the UI/API for policies to create 100+ policies
Create a single PR build that has stages for all 100+ projects.
I'm not a fan of the 1st option as now we'll have 200+ builds. The 2nd option is possible, but to avoid a 3 hour PR build, we'd need a way to only run needed stages (aka project builds).
Is there a 3rd option I'm missing? If the 2nd option is our best bet, how do we turn off stages for projects not changed in that PR (i.e. what condition would we use)?
(FYI, our policy is to change only one project per PR, but there are, on occasion, exceptions to that.)
As a personal suggestion, I also recommend the second method. The build script becomes very large in one configuration file, but that is much better than having hundreds of build configuration files.
The difficulty is that these 100+ apps are all in one repository. This means the usual methods will not work for you, including using the Build.Repository.Name value as the stage condition. There is also no predefined variable that tells you which source file paths changed in the commit.
So, I suggest you and your team put the project name into your commit messages. Then, in the build pipeline, you can use the variable Build.SourceVersionMessage to read the commit message. Since this is an environment variable that only works at step level (not at stage or job level), you need to add a task as the first step and put the condition on it.
The idea is to add one step as the first step in every stage, used only for this conditional check: depending on whether Build.SourceVersionMessage matches the prefix or key words for that project, the stage's jobs either do their work or exit early.
If you use a condition like this:
condition: startsWith(variables['Build.SourceVersionMessage'], '[maven-plugin]')
This requires that your commit messages follow a strict format, starting with the specified project name.
Another condition you can consider is:
condition: contains(variables['Build.SourceVersionMessage'], 'maven-plugin')
This does not require the strict format, but it still needs the project name to appear somewhere in the commit message. It can then be evaluated in the step condition as shown above.
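Putting this together, a stage might look roughly like the sketch below. It is only an illustration: the stage name, the build commands, and the exact prefix are placeholders, not taken from your pipeline.

stages:
  - stage: MavenPlugin
    jobs:
      - job: Build
        steps:
          # Build.SourceVersionMessage is only resolved at step level, so the
          # commit-message check sits on each step of this stage.
          - script: echo "commit message mentions maven-plugin - running its build"
            condition: startsWith(variables['Build.SourceVersionMessage'], '[maven-plugin]')
          - script: ./build.sh maven-plugin
            condition: startsWith(variables['Build.SourceVersionMessage'], '[maven-plugin]')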
Hope it could give you some help.
I am running a pipeline where I am looping through all the tables in INFORMATION_SCHEMA.TABLES and copying them onto Azure Data Lake Store. My question is: how do I rerun this pipeline for only the failed tables if any of the tables fail to copy?
Best approach I’ve found is to code your process to:
0. Yes, root cause the failure and identify if it is something wrong with the pipeline or if it is a “feature” of your dependency you have to code around.
1. Be idempotent. If your process ensures a clean state as the very first step, similar to Command Design pattern’s undo (but more naive), then your process can re-execute.
* with #1, you can safely use “retry” in your pipeline activities, along with sufficient time between retries.
* this is an ADFv1 or v2 compatible approach
2. If ADFv2, then you have more options and can have more complex logic to handle errors:
* for the activity that is failing, wrap this in an until-success loop, and be sure to include a bound on execution.
* you can add more activities in the loop to handle failure and log, notify, or resolve known failure conditions due to externalities out of your control.
3. You can also communicate asynchronously with future process executions by saving success markers to a central store. A later execution then checks whether it was already successful and, if so, stops processing before the activity.
* this is powerful for more generalized pipelines, since you can choose where to begin
4. The last resort I know of (and I would love to learn new ways to handle this) is manual re-execution of failed activities.
Hope this helps,
J
I'm working on a project with Quartz, and there has been a problem with dependencies between jobs.
We have a setup where A and B aren't dependent on each other, though C is:
A and B can run at the same time, but C can only run when both A and B are complete.
Is there a way to set this kind of scenario up in Quartz, so that C will only trigger when A and B finish?
Not directly, AFAIK, but it should not be too hard to use a TriggerListener to implement such functionality (a TriggerListener runs at both the start and the end of jobs, and you can set one up for individual triggers or trigger groups).
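For example, a rough sketch of such a listener (the job names, the "both completed" bookkeeping, and how job C is registered are assumptions, not details from your setup):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.Trigger.CompletedExecutionInstruction;
import org.quartz.listeners.TriggerListenerSupport;

// Fires job C once the triggers for jobs A and B have both completed.
public class RunCAfterAAndB extends TriggerListenerSupport {

    private final Set<String> completed = ConcurrentHashMap.newKeySet();
    private final Scheduler scheduler;

    public RunCAfterAAndB(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    @Override
    public String getName() {
        return "runCAfterAAndB";
    }

    @Override
    public void triggerComplete(Trigger trigger, JobExecutionContext context,
                                CompletedExecutionInstruction instruction) {
        completed.add(context.getJobDetail().getKey().getName());
        if (completed.contains("jobA") && completed.contains("jobB")) {
            completed.clear();
            try {
                // jobC must already be stored in the scheduler (e.g. as a durable job)
                scheduler.triggerJob(new JobKey("jobC"));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
}

You would register it with something like scheduler.getListenerManager().addTriggerListener(new RunCAfterAAndB(scheduler));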
EDIT: there is even a specific FAQ Topic about this problem:
There currently is no "direct" or "free" way to chain triggers with
Quartz. However there are several ways you can accomplish it without
much effort. Below is an outline of a couple approaches:
One way is to use a listener (i.e. a TriggerListener, JobListener or
SchedulerListener) that can notice the completion of a job/trigger and
then immediately schedule a new trigger to fire. This approach can get
a bit involved, since you'll have to inform the listener which job
follows which - and you may need to worry about persistence of this
information. See the listener
org.quartz.listeners.JobChainingJobListener which ships with Quartz -
as it already has some of this functionality.
Another way is to build a Job that contains within its JobDataMap the
name of the next job to fire, and as the job completes (the last step
in its execute() method) have the job schedule the next job. Several
people are doing this and have had good luck. Most have made a base
(abstract) class that is a Job that knows how to get the job name and
group out of the JobDataMap using pre-defined keys (constants) and
contains code to schedule the identified job. This abstract Job's
implementation of execute() delegates to an abstract template method
such as "doWork()" (where the extending Job class's real work goes)
and then it contains the code for scheduling the follow-up job. Then
they simply make extensions of this class that included the work the
job should do. The usage of 'durable' jobs, or the overloaded
addJob(JobDetail, boolean, boolean) method (added in Quartz 2.2) helps
the application define all the jobs at once with their proper data,
without yet creating triggers to fire them (other than one trigger to
fire the first job in the chain).
In the future, Quartz will provide a much cleaner way to do this, but
until then, you'll have to use one of the above approaches, or think
of yet another that works better for you.
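The second approach the FAQ describes might look roughly like this. It is only a sketch; the JobDataMap key name and the reliance on durable jobs are my assumptions layered on top of the FAQ's description:

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;

// Base class for jobs that know how to kick off a follow-up job named in their JobDataMap.
public abstract class ChainableJob implements Job {

    public static final String NEXT_JOB_KEY = "nextJobName";   // pre-defined JobDataMap key

    @Override
    public final void execute(JobExecutionContext context) throws JobExecutionException {
        doWork(context);                                        // the extending class's real work

        String nextJob = context.getMergedJobDataMap().getString(NEXT_JOB_KEY);
        if (nextJob != null) {
            try {
                // The follow-up job must already be registered (e.g. added as a durable job),
                // so triggering it by key is enough to start it.
                context.getScheduler().triggerJob(new JobKey(nextJob));
            } catch (Exception e) {
                throw new JobExecutionException(e);
            }
        }
    }

    // Subclasses put their actual work here, as the FAQ's "doWork()" template method.
    protected abstract void doWork(JobExecutionContext context) throws JobExecutionException;
}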
The result of SqlWorkflowInstanceStore.WaitForEvents does not tell me what type of workflow is runnable. The constructor of WorkflowApplication takes a workflow definition, and at a minimum, I need to be able to store a workflow ID in the store and query it, so that I can determine which workflow definition to load for the WorkflowApplication.
I also don't want to create a SqlWorkflowInstanceStore for each custom workflow type, since there may be thousands of different workflows.
I thought about trying to use WorkflowServiceHost, but not every workflow has a Receive activity and I don't think it is feasible to have thousands of WorkflowServiceHosts running, each supporting a different workflow type.
Ideally, I just want to query the database for a runnable workflow, determine its workflow definition ID, load the appropriate XAML from a workflow definition table, instantiate WorkflowApplication with the workflow definition, and call LoadRunnableInstance().
I would like to have a way to correlate which workflow is related to a given HasRunnableWorkflowEvent raised by the SqlWorkflowInstanceStore (along with the custom workflow definition ID), or have an alternate way of supporting potentially thousands of different custom workflow types created at runtime. I must also load balance the execution of workflows across multiple application servers.
There's a free product from Microsoft that does pretty much everything you say there, and then some. Oh, and it's excellent too.
Windows Server AppFabric. No, not Azure.
http://www.microsoft.com/windowsserver2008/en/us/app-main.aspx
-Oisin