What is the behavior of ADF schedule triggers when a schedule starts before the previous one ends? - azure-data-factory

I am trying to contrast the exact behavior of ADF schedule triggers and ADF tumbling window triggers when a trigger fires before the pipeline run started by the previous trigger has finished.
For example, say we have an every-5-minutes schedule but the pipeline takes one hour to finish. What happens to all the 5-minute triggers that fire before the pipeline is finished?
Are the behaviors of tumbling window and schedule triggers the same in this scenario?

When using an ADF schedule trigger whose recurrence interval is shorter than the pipeline's execution time, the pipeline starts executing (enters the In progress state) every time the trigger fires, irrespective of the status of the previous pipeline run.
Using a trigger with a 3-minute recurrence interval produced the following output.
The tumbling window trigger reacts in the same way as the schedule trigger: the pipeline starts executing irrespective of the status of the previous run. The trigger used here has a 5-minute recurrence interval.
So, in this scenario both types of triggers behave in the same way. But tumbling window triggers have a self-dependency property which is not available with schedule triggers. If consecutive pipeline runs depend on each other, the self-dependency property can be used. Other significant differences between these triggers, including the self-dependency property, are described in the following Microsoft Q&A link.
https://learn.microsoft.com/en-us/answers/questions/207405/when-to-use-tumbling-window-type-trigger.html
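As a rough illustration, a tumbling window trigger definition with a self-dependency might look like the following sketch. The trigger and pipeline names are placeholders; the dependency type and property names follow the format documented for tumbling window triggers. With maxConcurrency set to 1 and a self-dependency on the previous window, each window waits for the prior one to complete rather than running concurrently:

```json
{
    "name": "SelfDependentTrigger",
    "properties": {
        "type": "TumblingWindowTrigger",
        "runtimeState": "Started",
        "pipeline": {
            "pipelineReference": {
                "referenceName": "MyPipeline",
                "type": "PipelineReference"
            }
        },
        "typeProperties": {
            "frequency": "Minute",
            "interval": 5,
            "startTime": "2022-05-01T00:00:00Z",
            "maxConcurrency": 1,
            "dependsOn": [
                {
                    "type": "SelfDependencyTumblingWindowTriggerReference",
                    "offset": "-00:05:00",
                    "size": "00:05:00"
                }
            ]
        }
    }
}
```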

Related

What is the difference between scheduled and tumbling trigger?

I'm exploring the Azure Data Factory schedule and tumbling window triggers.
In both cases, suppose I choose a 1-hour recurrence interval; the pipeline then runs every hour.
When should we use a schedule trigger vs. a tumbling window trigger?
Tumbling window triggers have a self-dependency property which is not available with schedule triggers. If consecutive pipeline runs depend on each other, the self-dependency property can be used. Other significant differences between these triggers, including the self-dependency property, are mentioned in the following Microsoft Q&A link.
https://learn.microsoft.com/en-us/answers/questions/207405/when-to-use-tumbling-window-type-trigger.html
Similar thread:
https://stackoverflow.com/questions/72194846/what-is-the-behavior-of-adf-schedule-triggers-when-a-schedule-starts-before-the

Why does a Data Factory pipeline scheduled trigger sometimes get offset?

I have some scheduled pipelines that run daily.
Sometimes the scheduled trigger start time [Run start time] gets offset by a few seconds [1s to 5s].
This is happening to all the pipelines that share the same trigger.
Some of the pipelines use an integration runtime (IR) and some don't.
I'm trying to understand why this happens.
Ex. Pipeline trigger offset by 1 second
Ashwin, yes, this happens sometimes, likely due to a lag/latency between the trigger firing and the actual start of the pipeline, much like the delay between clicking a button and the event behind that button actually occurring.
But this does not hamper the execution of your pipeline.

Stop running Azure Data Factory Pipeline when it is still running

I have an Azure Data Factory pipeline. My trigger is set to fire every 5 minutes.
Sometimes my pipeline takes more than 5 minutes to finish its job. In this case, the trigger runs again and creates another instance of my pipeline, and two instances of the same pipeline cause problems in my ETL.
How can I make sure that only one instance of my pipeline runs at a time?
As you can see, there are several instances of my pipeline running.
A few options I could think of:
OPT 1
Specify a 5-minute timeout on your pipeline activities:
https://learn.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities
https://learn.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities#activity-policy
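For reference, the timeout lives in each activity's policy object. A minimal sketch, assuming a placeholder Copy activity with typeProperties omitted for brevity (the timeout format is d.hh:mm:ss):

```json
{
    "name": "CopySomething",
    "type": "Copy",
    "policy": {
        "timeout": "0.00:05:00",
        "retry": 0,
        "retryIntervalInSeconds": 30
    }
}
```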
OPT 2
1) Create a one-row, one-column SQL RunStatus table: 1 will be our "completed" status, 0 our "running" status (see the sketch below).
2) At the end of your pipeline, add a stored procedure activity that sets the bit to 1.
3) At the start of your pipeline, add a lookup activity to read that bit.
4) The output of this lookup is then used in an If Condition activity:
if 1 - start the pipeline's job, but before that add another stored procedure activity to set our status bit to 0.
if 0 - depending on the details of your project: do nothing, add a wait activity, send an email, etc.
To make full use of this option, you can turn the table into a log, where a new line with start and end times is added after each successful run (before initiating a new run, you can check whether the previous run has an end time). Having this log might help you gather data on how long your pipeline takes to run, and perhaps prompt you to either add more resources or increase the interval between runs.
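A minimal T-SQL sketch of the table and procedures described above; all object names are illustrative:

```sql
-- One-row status table: 1 = completed, 0 = running
CREATE TABLE dbo.RunStatus (Status BIT NOT NULL);
INSERT INTO dbo.RunStatus (Status) VALUES (1);
GO

-- Called by a stored procedure activity just before the pipeline's job starts
CREATE PROCEDURE dbo.SetRunningStatus AS
    UPDATE dbo.RunStatus SET Status = 0;
GO

-- Called by a stored procedure activity at the end of the pipeline
CREATE PROCEDURE dbo.SetCompletedStatus AS
    UPDATE dbo.RunStatus SET Status = 1;
GO
```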
OPT 3
Monitor the pipeline run with the SDKs (I have not tried this, so it is just a possible direction):
https://learn.microsoft.com/en-us/azure/data-factory/monitor-programmatically
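For example, with the Python SDK (azure-mgmt-datafactory), a run's status can be polled roughly like this; all IDs and names below are placeholders:

```python
from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder credentials and names -- substitute your own.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Look up a single pipeline run by its run ID and inspect its status.
run = adf_client.pipeline_runs.get("<resource-group>", "<factory-name>", "<run-id>")
print(run.status)  # e.g. "InProgress", "Succeeded", "Failed"
```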
Hopefully you can use at least one of these options.
It sounds like you're trying to run a process more or less constantly, which is a good fit for tumbling window triggers. You can create a dependency such that the trigger is dependent on itself - so it won't run until the previous run has completed.
Start by creating a trigger that runs a pipeline on a tumbling window, then create a tumbling window trigger dependency. The section at the bottom of that article discusses "tumbling window self-dependency properties", which shows you what the code should look like once you've successfully set this up.
Try changing the concurrency of the pipeline to 1.
Link: https://www.datastackpros.com/2020/05/prevent-azure-data-factory-from-running.html
My first thought is that the recurrence is too frequent under these circumstances. If the graph you shared is all for the same pipeline, then most runs take close to 5 minutes, but some take 30, 40, even 60 minutes. Situations like this are when a simple recurrence trigger probably isn't sufficient. What is supposed to happen while the 60-minute run is in progress? There will be 10-12 runs that wouldn't start: do they still need to run, or can they be ignored?
To make sure all the pipelines run, and to manage concurrency, you're going to need to build a queue manager of some kind. ADF cannot handle this itself, so I have built such a system internally and rely on it extensively. I use a combination of Logic Apps, Stored Procedures (Azure SQL), and Azure Functions to queue, execute, and monitor pipeline executions. Here is a high-level breakdown of what you probably need:
Logic App 1: runs every 5 minutes and queues an ADF job in the SQL database.
Logic App 2: runs every 2-3 minutes and checks the queue to see if (a) there is not a job currently running (status = 'InProgress') and (b) there is a job in the queue waiting to run (I do this with a stored procedure). If this state is met: execute the next ADF pipeline and update its status to 'InProgress'.
I use an Azure Function to submit jobs instead of the built-in Logic App activity because I have better control over variable parameters. Also, it can return the newly created ADF RunId, which I rely on in Logic App 3.
Logic App 3: runs every minute and updates the status of any 'InProgress' jobs.
I use an Azure Function to check the status of the ADF pipeline based on RunId.
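A rough sketch of the submit step with the Python SDK, assuming placeholder names throughout; the create_run call returns the new run's ID, which the status-check function can later pass to pipeline_runs.get:

```python
from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder credentials and names -- substitute your own.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Kick off the pipeline and capture the RunId for the queue table.
run_response = adf_client.pipelines.create_run(
    "<resource-group>", "<factory-name>", "<pipeline-name>", parameters={}
)
print(run_response.run_id)
```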

Azure Data Factory ADFV2 Trigger Overlap

I have an ADFV2 trigger that runs every 2 minutes. The pipeline it calls usually takes just over a minute to run, but sometimes it takes over 2 minutes. When that happens, the trigger kicks in again and runs regardless of whether the previous run is still in progress. Is there any way to stop this overlap?
The trigger needs to run every 2 minutes.
Thanks.
There is a concurrency setting in the pipeline definition. Set it to 1. The trigger will still create an event, but the run will be held in the Queued state until the previous run completes.
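In the pipeline JSON, this corresponds to the top-level concurrency property; a minimal sketch with a placeholder pipeline name and an empty activities list:

```json
{
    "name": "MyPipeline",
    "properties": {
        "concurrency": 1,
        "activities": []
    }
}
```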

Quartz.net scheduler and IStatefulJob

I am wondering if I am understanding this right.
http://quartznet.sourceforge.net/apidoc/
IStatefulJob instances follow slightly different rules from regular IJob instances. The key difference is that their associated JobDataMap is re-persisted after every execution of the job, thus preserving state for the next execution. The other difference is that stateful jobs are not allowed to Execute concurrently, which means new triggers that occur before the completion of the IJob.Execute method will be delayed.
Does this mean all triggers will be delayed until another trigger is done? If so, how can I make it so that only the same trigger will not fire until its previous run is done?
Say I have trigger A that fires every minute, but for some reason it is slow and takes a minute and a half to execute. If I just use a plain IJob, the next one would fire anyway, and I don't want this. I want to stop trigger A from firing again until it is done.
At the same time, however, I have trigger B that also fires every minute. It runs at normal speed and finishes on time every minute. I don't want trigger B to be held up because of trigger A.
From my understanding, this is what would happen if I use IStatefulJob.
In short: this behavior comes from the job's side. Regardless of how many triggers you have, only a single instance of a given IStatefulJob (the job name and job group dictate the instance identity) runs at a time. So there might be two instances of the same job type, but never two same-named jobs (name, group) running concurrently if the job implements IStatefulJob.
If a trigger misses its fire time because of this, the misfire instructions come into play. A trigger that misses its next fire time because the earlier invocation is still running decides what to do based on its misfire instruction (see the API docs and tutorial).
With a plain IJob you have no guarantees about how many instances will be running at the same time if you have multiple triggers for it and/or misfires are happening. IJob is just the contract interface for invoking the job. Quartz.NET 2.0 will split IStatefulJob's combined behavior into two separate attributes: DisallowConcurrentExecution and PersistJobDataAfterExecution.
So you could combine the same job type (with IStatefulJob behavior) under two definitions (different job names) and triggers with applicable misfire instructions.
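A rough Quartz.NET sketch using the 2.x fluent API (job and trigger names are illustrative): DisallowConcurrentExecution serializes runs of the same job key only, so a separately named job for trigger B would be unaffected, and the misfire instruction decides what happens to fire times trigger A misses.

```csharp
using Quartz;
using Quartz.Impl;

// Only one instance of a given job key may run at a time;
// other job keys (e.g. trigger B's job) are unaffected.
[DisallowConcurrentExecution]
public class SlowJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // trigger A's slow work goes here
    }
}

public static class SchedulerSetup
{
    public static void Run()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail jobA = JobBuilder.Create<SlowJob>()
            .WithIdentity("jobA", "group1")
            .Build();

        ITrigger triggerA = TriggerBuilder.Create()
            .WithIdentity("triggerA", "group1")
            .WithSimpleSchedule(x => x
                .WithIntervalInMinutes(1)
                .RepeatForever()
                // Fire times missed while jobA is still running are
                // skipped rather than fired immediately afterwards.
                .WithMisfireHandlingInstructionNextWithRemainingCount())
            .Build();

        scheduler.ScheduleJob(jobA, triggerA);
    }
}
```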