How to implement the one_done trigger rule for Airflow?

Airflow provides multiple trigger rules, but not a one_done trigger.
Is there any way we can implement a one_done trigger in Airflow?
I need the one_done trigger rule for the case below.
Suppose I have the following task dependencies:
A >> [B, C, D, E] >> F
Task F needs to be triggered as soon as any one of tasks B, C, D, or E completes, irrespective of whether it failed or succeeded.

Starting with Airflow 2.5.0, a new trigger rule, one_done, was added (see PR, feature request).
This new trigger rule handles your use case without any workarounds.
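For example, on Airflow 2.5.0+ the rule can be set directly on F. A minimal sketch (the dag_id, start date, and EmptyOperator stand-ins for the real tasks are illustrative):

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.trigger_rule import TriggerRule
import pendulum

with DAG(
    dag_id="one_done_example",  # illustrative name
    start_date=pendulum.datetime(2023, 1, 1),
    schedule=None,
) as dag:
    a = EmptyOperator(task_id="A")
    b, c, d, e = [EmptyOperator(task_id=t) for t in "BCDE"]
    # F starts as soon as any single upstream task finishes,
    # whether it succeeded or failed
    f = EmptyOperator(task_id="F", trigger_rule=TriggerRule.ONE_DONE)

    a >> [b, c, d, e] >> f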

I think you can work around it with a trick: add two dummy tasks before F, one with the trigger rule one_success and the other with one_failed; for F itself, use one_success:
A >> [B, C, D, E]
[B, C, D, E] >> Dummy1
[B, C, D, E] >> Dummy2
[Dummy1, Dummy2] >> F
When a task finishes:
if its state is success, Dummy1 will be executed and marked as success
if its state is failed, Dummy2 will be executed and marked as success
In both cases, F will be executed, as in the sketch below.
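A minimal sketch of this workaround (EmptyOperator stands in for the dummy tasks; on Airflow versions before 2.4, use DummyOperator instead):

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.trigger_rule import TriggerRule
import pendulum

with DAG(
    dag_id="one_done_workaround",  # illustrative name
    start_date=pendulum.datetime(2023, 1, 1),
    schedule=None,
) as dag:
    a = EmptyOperator(task_id="A")
    b, c, d, e = [EmptyOperator(task_id=t) for t in "BCDE"]
    # runs as soon as any upstream task succeeds
    dummy1 = EmptyOperator(task_id="Dummy1", trigger_rule=TriggerRule.ONE_SUCCESS)
    # runs as soon as any upstream task fails
    dummy2 = EmptyOperator(task_id="Dummy2", trigger_rule=TriggerRule.ONE_FAILED)
    f = EmptyOperator(task_id="F", trigger_rule=TriggerRule.ONE_SUCCESS)

    a >> [b, c, d, e]
    [b, c, d, e] >> dummy1
    [b, c, d, e] >> dummy2
    [dummy1, dummy2] >> f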

Related

Use current Celery task in chord?

I'd like to run tasks in parallel that have a data dependency at the beginning of the first task. It seems that I should be able to start a chord with the current task in the header group that's used as the args for the body callback. I don't see a way to reference the signature of the current task in the documentation, but is there a way to do this?
I was thinking it would be something like this, with get_signature() being the missing piece:
@app.task(bind=True)
def chord_test(self, id_) -> int:
    data, next_id = get_data(id_)
    # get_signature() is the missing piece: a way to reference the
    # signature of the currently executing task
    chord([self.get_signature(), chord_test.s(next_id)])(handle_results.s())
    return expensive_processing(data)
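For context, a standard chord runs the header group in parallel and passes the list of header results to the body callback; add and tsum here are the placeholder tasks used in the Celery documentation:

from celery import chord

# the header tasks run in parallel; the body receives their results as a list
result = chord([add.s(2, 2), add.s(4, 4)])(tsum.s())
print(result.get())  # 12, once both header tasks have finished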

Why does Drools re-evaluate and re-trigger rule actions after a persistent session is reloaded?

I have a simple rule like the one below:
package rules
dialect "mvel"
declare MyEvent
    @role( event )
    @expires( 2d )
    id : String
    value : Double
end
rule "My Rule"
when
    MyEvent($value : value)
then
    System.out.println("My event with value: " + $value);
end
I create a persistent session and call fireAllRules() on it. Then, I insert a MyEvent fact, and as expected the rule is evaluated, matched and the action is executed. If I call fireAllRules() again the rule is not matched, as expected because it has already matched for the same fact. At this point everything is fine.
Then I kill the JVM and run the app again. At startup the app loads the session like this:
kieSession = kieServices.getStoreServices().loadKieSession(KIE_SESSION_ID, kieBase, kieSessionConfiguration, kieEnvironment);
The session gets loaded successfully, and then fireAllRules() is called again. Since the rule has already matched for the inserted event, I expect it not to match again. However, I can see that the message in the rule action is printed again. Why does Drools match the rule for the same event again? To me it looks like the session state is not properly saved to the database. I mean, the event is saved, but Drools cannot recognize that it has already matched the rule. When I load a persistent session I expect to recover exactly the state the session had in the previous running instance. Is my assumption wrong, or am I doing something wrong to get the expected behaviour?
Running:
Java SE 11
Spring Boot 2.3
Drools 7.53.0

How to add a value to the duration attribute in Drools

Whenever I write duration(0s) it works, but as soon as I change it to duration(1s) or duration(5s), the rule doesn't fire...
This is the rule I want to fire.
rule "ContainsChecking"
agenda-group "town4"
duration(0s)
when
Town(owner matches "[N || n][a-z]+")
then
System.out.println("Rule Fired ContainsChecking");
end
Do we need to import something for the duration attribute to work? I am not finding it anywhere. Thanks in advance.
You need to run the session using
kieSession.fireUntilHalt();
If you just use fireAllRules(), the agenda is still empty at the moment of the call (the activation is only scheduled to fire once the duration has elapsed), so the call returns immediately.
Note that duration is a CEP feature; it need not, and should not, be used in a simple production rule environment.
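A minimal sketch of running the session this way (kieSession and the inserted fact are assumed to exist elsewhere; fireUntilHalt() blocks, so it is typically started on its own thread):

// fireUntilHalt() blocks until halt() is called, so run it on a separate thread
Thread engine = new Thread(kieSession::fireUntilHalt);
engine.start();

// a fact inserted now triggers its delayed activation once the duration elapses
kieSession.insert(town);

// later, when shutting down:
kieSession.halt();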

celery_tasktree does not support DAG workflow in general. Is there an alternative?

celery_tasktree (https://pypi.python.org/pypi/celery-tasktree) provides a cleaner workflow canvas compared to the Celery workflow scheduler (http://docs.celeryproject.org/en/latest/userguide/canvas.html). However, it only supports tree-like workflow structures, not general DAG-like workflows. The Celery canvas does have a chord primitive, but it seems cumbersome to use.
Is there any other Celery-based library similar to celery_tasktree that works with general DAG workflows?
Here are a couple of libraries that support DAG-based job scheduling:
https://github.com/thieman/dagobah
https://github.com/apache/incubator-airflow
They are not based on Celery. However, you can create your own primitives in Celery to forward results, which can be used to build a DAG job scheduler.
@app.task(bind=True)
def forward(self, result, sig):
    # convert the JSON-serialized signature back to a Signature
    sig = self.app.signature(sig)
    # run the provided task signature and get its return value
    result2 = sig()
    # the next task will receive the tuple (result_A, result_B)
    return (result, result2)

@app.task
def C(Ares_Bres):
    Ares, Bres = Ares_Bres
    return Ares + Bres
workflow = (A.s() | forward.s(B.s()) | C.s())
See here for a more detailed discussion.
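For completeness, a minimal runnable setup around the snippet above (the app name, broker URL, and the bodies of A and B are placeholder assumptions):

from celery import Celery

app = Celery("dag_demo", broker="redis://localhost:6379/0")

@app.task
def A():
    return 1

@app.task
def B():
    return 2

# A runs first; forward runs B and pairs the results; C sums them
workflow = (A.s() | forward.s(B.s()) | C.s())
result = workflow.delay()  # C receives (1, 2); result.get() == 3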

How to control jobs of different families in Eclipse

How can I control jobs from different families?
For example, when I perform the following actions in Eclipse:
From the "Project" menu, select "Clean". When the dialog appears, I click the "OK" button.
Then the "Cleaning all projects" operation begins. In the middle of the operation, I try to delete a file from my workspace.
The dialog "User operation is waiting" appears while the first operation, "Cleaning all projects", continues to make progress. The second operation, "Delete", is blocked, showing a lock symbol with the message "Blocked: the user operation is waiting for cleaning all projects to complete". Only after the first operation completes does the "Delete" operation dialog appear.
What I need:
I am trying to achieve a similar situation in my project.
I have created one job family for my project, following the Eclipse tutorial "On the Job".
I schedule the job to perform some operation in the background.
While the operation is in progress, I try to delete a file. As soon as I select "Delete", the Delete dialog appears. However, what I need is to block this Delete operation until the first operation completes, just as in the example above.
How can this be done using Eclipse jobs? I tried job.join(), job.setPriority(), and so on.
If you have any ideas, please share.
You use 'scheduling rules' to define which jobs can run at the same time. A scheduling rule is a class which implements the ISchedulingRule interface.
A simple rule would be:
public class MutexRule implements ISchedulingRule
{
    @Override
    public boolean isConflicting(ISchedulingRule rule)
    {
        return rule == this;
    }

    @Override
    public boolean contains(ISchedulingRule rule)
    {
        return rule == this;
    }
}
which will only allow one job with this rule to run at a time. Use it like this:
ISchedulingRule rule = new MutexRule();
Job job1 = new ....
Job job2 = new ....
job1.setRule(rule);
job2.setRule(rule);
job1.schedule();
job2.schedule();
Note that IResource extends ISchedulingRule and implements a rule that stops two jobs from accessing the same resource at the same time.
So to have only one job modifying the workspace at a time you can use:
ISchedulingRule rule = ResourcesPlugin.getWorkspace().getRoot();
(since IWorkspaceRoot extends IResource). Eclipse uses this rule for many jobs.
You can also use IResourceRuleFactory to create rules for controlling access to resources.
IResourceRuleFactory factory = ResourcesPlugin.getWorkspace().getRuleFactory();
There is also a MultiRule class which allows you to combine several scheduling rules.
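For example, a sketch that combines two factory rules so a job runs only when it can safely modify two projects (project1, project2, and job are assumed to exist):

IResourceRuleFactory factory = ResourcesPlugin.getWorkspace().getRuleFactory();
// the job may run only when it can safely modify both projects
ISchedulingRule combined = MultiRule.combine(
        factory.modifyRule(project1),
        factory.modifyRule(project2));
job.setRule(combined);
job.schedule();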