How to make a flow sleep until a condition is satisfied in MuleSoft - mule-studio

I have 5 batch processes in my flow, and they run asynchronously. I need to wait until all the batch processes are finished, but once the Batch Execute component runs, the payload moves on to the next component, which requires a result from the batch and therefore fails. How can I make the flow wait until all the batch processes have executed?

@Satheesh,
Use a session variable (sessionVars) or a static variable as a flag, or increment it as a counter. Then use an expression-filter to check whether all the batch processes have already been processed.
https://docs.mulesoft.com/mule-user-guide/v/3.6/filters
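A minimal Mule 3 XML sketch of that idea, assuming each batch job's on-complete phase increments a session-variable counter. The variable name and the threshold of 5 are illustrative, and whether sessionVars are visible in your on-complete phase depends on how the jobs are invoked, which is why the answer also mentions a static variable:

<!-- in each batch job's on-complete phase: count one more finished batch -->
<set-session-variable variableName="completedBatches"
    value="#[(sessionVars.completedBatches == null ? 0 : sessionVars.completedBatches) + 1]"/>

<!-- in the flow that needs the batch results: proceed only when all 5 report done -->
<expression-filter expression="#[sessionVars.completedBatches == 5]"/>

Note that an expression-filter silently drops non-matching events rather than blocking, so the downstream flow typically has to be re-triggered (for example from a poller) until the counter reaches the threshold.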

Related

Scheduler Processing using Spring Batch

We have a requirement to process millions of records using Spring Batch. We plan to read the database using JdbcPagingItemReaderBuilder, process the records in chunks, and write them to a Kafka queue. The active consumers of the queue will process the chunks of data and update the database.
The consumer's task is to iterate over every item in the chunk and invoke external APIs.
If the external system is down or does not respond with a success response, there should be at least 3 retries. Considering that each task in the chunk has to do this, what would be the ideal approach?
Another use case to consider: what happens when the system goes down mid-job, say after the job has already processed 10,000 records while the remaining records are yet to be processed? After the restart, how do we make sure execution does not restart the entire process from the beginning, but resumes from the point of failure?
Spring Batch creates the following tables. You can use them to check the status of your job and customize your scheduler to behave in whatever way you see fit.
I'd use the step execution ID in BATCH_STEP_EXECUTION to check the status that was set and then retry based on that status, or something along those lines.
BATCH_JOB_EXECUTION
BATCH_JOB_EXECUTION_CONTEXT
BATCH_JOB_EXECUTION_PARAMS
BATCH_JOB_INSTANCE
BATCH_STEP_EXECUTION
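As an illustration of that idea, a query along these lines would show the status of a job's latest executions and of each of their steps. The tables and columns are standard Spring Batch metadata; the job name is a placeholder:

-- Latest executions of a (placeholder) job, with per-step status and counts
SELECT je.JOB_EXECUTION_ID,
       je.STATUS        AS JOB_STATUS,
       se.STEP_NAME,
       se.STATUS        AS STEP_STATUS,
       se.READ_COUNT,
       se.WRITE_COUNT,
       se.EXIT_MESSAGE
FROM   BATCH_JOB_INSTANCE ji
JOIN   BATCH_JOB_EXECUTION je  ON je.JOB_INSTANCE_ID = ji.JOB_INSTANCE_ID
JOIN   BATCH_STEP_EXECUTION se ON se.JOB_EXECUTION_ID = je.JOB_EXECUTION_ID
WHERE  ji.JOB_NAME = 'recordExportJob'   -- placeholder job name
ORDER  BY je.JOB_EXECUTION_ID DESC;

For the restart use case, note that Spring Batch's built-in restart support (JobOperator.restart(executionId)) consults this same metadata, so a restarted job resumes from the last committed chunk rather than from the beginning.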

Quartz.NET - Abort/Stop Current Execution of Job & Pause All the Triggers

In my project I am using the Quartz.NET scheduler (3.0.7). There are some automated verification processes that read the DB, process the data, and generate output based on a few conditions. (Take the example of an email-sending mechanism that reads emails from the DB and sends them to the respective addresses.) Assume there are 300 requests to be processed and each takes a long time to complete. I need a feature that pauses the current execution of the job: if 25 of the 300 requests are completed and number 26 is currently running, the job should complete the 26th execution but stop the rest of the requests.
What I have tried is to implement the Pause and Interrupt methods of Quartz.NET, i.e.
await scheduler.PauseJob(jobKey);
await scheduler.Interrupt(jobKey);
which can pause the upcoming executions. If I could get an event or token inside the job execution class, I could achieve what I want, but
IInterruptableJob has been removed from Quartz.NET.
Can anyone help me with this?
From the migration guide:
IInterruptableJob interface has been removed. You need to check for IJobExecutionContext’s CancellationToken.IsCancellationRequested to determine whether job interruption has been requested.
So combining the pause and observing the token should work.
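A minimal C# sketch of that combination under Quartz.NET 3.x; the job class and its two helper methods are placeholders standing in for your own DB read and send logic:

using System.Collections.Generic;
using System.Threading.Tasks;
using Quartz;

public class EmailJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        foreach (var request in LoadPendingRequests())
        {
            // Let the request already in flight finish, but start no new one
            // once scheduler.Interrupt(jobKey) has been called.
            if (context.CancellationToken.IsCancellationRequested)
                break;

            await ProcessAsync(request);
        }
    }

    // Placeholder stubs for the real data access and processing.
    private static IEnumerable<string> LoadPendingRequests() => new[] { "req-1", "req-2" };
    private static Task ProcessAsync(string request) => Task.CompletedTask;
}

The caller then issues the two calls from the question: await scheduler.PauseJob(jobKey); to stop future triggers, and await scheduler.Interrupt(jobKey); to signal the cancellation token of the running execution.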

Asynchronous SQL procedure execution set and wait for completion

Say I have a large set of calls to a procedure to run, with varying parameters, that are independent of one another, so I want to make parallel/async calls. I use Service Broker to fire them all off, but the problem is that I want a neat way of knowing when they have all completed (or errored).
Is there a way to do this? I believe I could just loop with waits, checking the result table for completion, but that isn't very "event triggered". I'm hoping for a nicer way to do it.
I have used Service Broker with queue code and processing based on this other answer: Remus' service broker queuing example
Good day Shiv,
As always, there are several ways you can implement this requirement. One of them uses the following logic:
(1) Create two queues: one will be the trigger to execute the main SP that you want to run asynchronously, and the other will be the trigger to execute whatever you want to run after all the executions have ended.
(2) When you create the message in the first queue, you should also create a message in the second queue, which only tells us which executions have not ended yet (the first queue tells us which execution has started, since once we START the execution we consume the message and remove it from the queue).
(3) Inside the SP that you execute using the first queue (this part runs synchronously):
(3.1) execute the queries you need,
(3.2) clear the equivalent message from the second queue (meaning that this message is removed only after the queries have ended),
(3.3) check whether there are messages left in the second queue. If there are none, then all the tasks have ended and you can execute your final step (see the sketch below).
** Theoretically, instead of using the second queue you could store the data in a table, but using a second queue should give better performance than updating a table each time an execution ends. Anyhow, you can test the table option as well.
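A hedged T-SQL sketch of steps (3.2) and (3.3), assuming the second queue is called dbo.PendingQueue and dbo.FinalStep is the final-step procedure. Both names are illustrative, and in practice you would RECEIVE the message that matches the execution that just ended (e.g. filtering by conversation handle) rather than just the next one:

-- (3.2) remove this execution's tracking message from the second queue
DECLARE @handle UNIQUEIDENTIFIER;

RECEIVE TOP (1) @handle = conversation_handle
FROM dbo.PendingQueue;            -- placeholder queue name

IF @handle IS NOT NULL
    END CONVERSATION @handle;

-- (3.3) if no tracking messages remain, every execution has finished
IF NOT EXISTS (SELECT 1 FROM dbo.PendingQueue)
    EXEC dbo.FinalStep;           -- placeholder final-step procedure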

How to do batch sequencing in Mulesoft

I have multiple batch steps, but records currently flow through all of them concurrently. I need batch_step2 to start only after batch_step1 has executed completely.
How do I identify that batch_step1 has finished processing, so I can then start batch_step2, and so on?
I am not sure what you want to achieve with the above logic. Batch processing is meant to process individual records through each batch step, and every record passes through the batch steps sequentially: batch step 2 for a given record is executed only after batch step 1 for that record has completed. However, as the MuleSoft docs put it: "Note that a batch job instance does not wait for all its queued records to finish processing in one batch step before pushing any of them to the next batch step".
Alternatively, you can define separate batch jobs with only one batch step each. This ensures that batch step 1 completes for all records first, and the next batch step is then executed in the next batch job, as in the sketch below.
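A minimal Mule 3 XML sketch of that arrangement, in which the first job's on-complete phase (which runs once all records have finished) triggers the second job. All job, flow, and step names here are illustrative:

<batch:job name="stepOneJob">
    <batch:process-records>
        <batch:step name="batch_step1">
            <!-- process each record for step 1 -->
        </batch:step>
    </batch:process-records>
    <batch:on-complete>
        <!-- runs once, after ALL records have finished batch_step1 -->
        <flow-ref name="startStepTwo"/>
    </batch:on-complete>
</batch:job>

<flow name="startStepTwo">
    <!-- load/set the record collection for the second job here -->
    <batch:execute name="stepTwoJob"/>
</flow>

<batch:job name="stepTwoJob">
    <batch:process-records>
        <batch:step name="batch_step2">
            <!-- process each record for step 2 -->
        </batch:step>
    </batch:process-records>
</batch:job>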

End Celery worker task on time limit, job stage, or instruction from client

I'm new to Celery and I would appreciate a little help with a design pattern (or example code) for a worker I have yet to write.
Below is a description of the desired characteristics of the worker.
The worker will run a task that collects data from an endless source, a generator.
The worker task will run forever feeding from the generator unless it is directed to stop.
The worker task should stop gracefully on the occurrence of any one of the following triggers.
It exceeds an execution time limit in seconds.
It exceeds a number of iterations of the endless generator loop.
The client sends a message instructing the worker task to finish immediately.
Below is some pseudo-code for how I believe I need to handle trigger scenarios 1 and 2.
What I don't know is how to send the 'finish immediately' signal from the client, and how it is received and acted on in the worker task.
Any advice or sample code would be appreciated.
from celery.task import task
from celery.exceptions import SoftTimeLimitExceeded

COUNTLIMIT = ...  # some value sent to the worker task by the client

@task()
def getData():
    try:
        for count, data in enumerate(endlessGeneratorThing()):
            # process data here
            if count > COUNTLIMIT:  # Handle trigger scenario 2
                clean_up_task_nicely()
                break
    except SoftTimeLimitExceeded:  # Handle trigger scenario 1
        clean_up_task_nicely()
My understanding of revoke is that it only revokes a task prior to its execution. For (3), I think what you want to do is use an AbortableTask, which provides a cooperative way to end a task:
http://docs.celeryproject.org/en/latest/reference/celery.contrib.abortable.html
On the client end you call abort() on the task's result; on the task end you poll is_aborted().
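A minimal sketch of the abortable pattern combined with the question's two other triggers. It assumes a Celery app object named app, and reuses the question's hypothetical endlessGeneratorThing() and clean_up_task_nicely():

from celery.contrib.abortable import AbortableTask
from celery.exceptions import SoftTimeLimitExceeded

@app.task(bind=True, base=AbortableTask)
def get_data(self, count_limit):
    try:
        for count, data in enumerate(endlessGeneratorThing()):
            if self.is_aborted():     # trigger 3: client requested abort
                break
            if count > count_limit:   # trigger 2: iteration limit
                break
            # process data here
    except SoftTimeLimitExceeded:     # trigger 1: soft time limit
        pass
    clean_up_task_nicely()

# Client side:
#   result = get_data.delay(10000)   # result is an AbortableAsyncResult
#   result.abort()                   # the task will see is_aborted() == True

Note that the abort flag travels through the result backend, so a backend must be configured for is_aborted() to see the client's abort().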