Repast Simphony scheduling: method priority and agent priority

I have a (I hope) simple question for those with experience with Repast Simphony.
Annotation-based scheduling allows priorities to be set. If I use ScheduleParameters.FIRST_PRIORITY and ScheduleParameters.LAST_PRIORITY for this, how does the overall scheduler interpret them if every agent executes these methods at every tick?
Option 1: First, all agents execute the method with ScheduleParameters.FIRST_PRIORITY, and only after that do all agents execute the method with ScheduleParameters.LAST_PRIORITY.
Option 2: For every agent, first the method with ScheduleParameters.FIRST_PRIORITY is executed and then the same agent executes the method with ScheduleParameters.LAST_PRIORITY, so every agent executes both methods before the next agent has its turn.

Option 1 is correct: all actions scheduled with FIRST_PRIORITY will be executed, followed by all actions with LAST_PRIORITY. The scheduler actually has no notion of an agent, only of actions (i.e., scheduled methods).
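To make the ordering concrete, here is a minimal plain-Java sketch (not Repast code; all class and method names here are made up, and FIRST_PRIORITY/LAST_PRIORITY are merely modeled as positive/negative infinity) of how a priority-ordered scheduler fires a flat list of actions with no grouping by agent:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Plain-Java sketch (NOT Repast code) of a priority-ordered scheduler:
// one flat list of actions, sorted by priority, no notion of "agent".
public class PrioritySchedulerSketch {
    record Action(String agent, String method, double priority) {}

    // FIRST_PRIORITY / LAST_PRIORITY modeled as +/- infinity.
    static final double FIRST_PRIORITY = Double.POSITIVE_INFINITY;
    static final double LAST_PRIORITY = Double.NEGATIVE_INFINITY;

    static List<String> runTick(List<Action> actions) {
        List<String> executed = new ArrayList<>();
        actions.stream()
               // higher priority fires first; the sort is stable
               .sorted(Comparator.comparingDouble(Action::priority).reversed())
               .forEach(a -> executed.add(a.agent() + "." + a.method()));
        return executed;
    }

    public static void main(String[] args) {
        List<Action> actions = List.of(
            new Action("agent1", "stepFirst", FIRST_PRIORITY),
            new Action("agent1", "stepLast", LAST_PRIORITY),
            new Action("agent2", "stepFirst", FIRST_PRIORITY),
            new Action("agent2", "stepLast", LAST_PRIORITY));
        // All FIRST_PRIORITY actions fire before any LAST_PRIORITY action,
        // regardless of which agent they belong to:
        System.out.println(runTick(actions));
        // prints [agent1.stepFirst, agent2.stepFirst, agent1.stepLast, agent2.stepLast]
    }
}
```

This is exactly the Option 1 behavior: the sort key is the action's priority, so methods are interleaved across agents rather than grouped per agent.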


How to manually increase the task’s priority by code, using the On task suspended action of the service block

I want to do what the title says. I'm referring to the part of AnyLogic's Service block help page that says:
“If this order does not suit your needs, manually increase the task’s priority by code, using the On task suspended action of the block.”
I couldn't find how to do it. In the Seize block I did find a function that recalculates the priorities of the agents in the queue, but this function does not exist for the Service block.
Here is a link to the help page:
https://anylogic.help/library-reference-guides/process-modeling-library/service.html
I will add that my goal is to use downtime for a service with the “seize any unit” policy, but AnyLogic's default is to put the suspended agent at the back of the queue; I need it to be the first to seize the next available resource.
Thank you very much!

What exactly is a Cadence decision task?

Activity tasks are pretty easy to understand, since they execute an activity... but what is a decision task? Does the worker run through the workflow from the beginning (using records of completed activities) until it hits the next "meaningful" thing it needs to do, while making a "decision" on what needs to be done next?
My Opinions
Ideally, users shouldn't need to understand it!
However, the decision/workflow task is a leaked technical detail of the Cadence/Temporal API.
Unfortunately, you won't be able to use Cadence/Temporal well if you don't fully understand it.
Fortunately, using iWF will keep you away from this leakage. iWF provides a nice abstraction on top of Cadence/Temporal but keeps the same power.
TL;DR
“Decision” is short for “workflow decision.”
A decision is a movement from one state to another in a workflow state machine. Essentially, your workflow code defines a state machine. This state machine must be deterministic so it can be replayed, which is why workflow code must be deterministic.
A decision task is a task for a worker to execute workflow code and generate decisions.
Note: in Temporal, a decision is called a “command,” and the workflow decision task is called a “workflow task,” which generates the commands.
Example
Let's say we have this workflow code:
public String sampleWorkflowMethod(...) {
    var result = activityStubs.activityA(...);
    if (result.startsWith("x")) {
        Workflow.sleep(...);
    } else {
        result = activityStubs.activityB(...);
    }
    return result;
}
From Cadence/Temporal SDK's point of view, the code is a state machine.
Assume an execution in which the result of activityA is xyz, so the execution takes the sleep branch.
The workflow execution flow then looks like this graph.
The workflow code defines the state machine, and the code is static.
The workflow execution decides how to move from one state to another at run time, based on the inputs, the results, and the code logic.
A decision is an abstraction internal to Cadence. During the workflow execution, when it changes from one state to another, the decision is the result of that movement.
The abstraction basically defines what needs to be done when execution moves from one state to another: schedule an activity, a timer, a child workflow, etc.
Decisions must be deterministic: with the same inputs and results, the workflow code must make the same decision, e.g. whether it schedules activityA or activityB must be the same.
Timeline in the example
What happens during the above workflow execution:
1. The Cadence service schedules the very first decision task and dispatches it to a workflow worker.
2. The worker executes the first decision task and returns the decision result, scheduling activityA, to the Cadence service. The workflow then waits.
3. As a result of scheduling activityA, an activity task is generated by the Cadence service and dispatched to an activity worker.
4. The activity worker executes the activity and returns the result xyz to the Cadence service.
5. On receiving the activity result, the Cadence service schedules the second decision task and dispatches it to a workflow worker.
6. The workflow worker executes the second decision task and responds with the decision result, scheduling a timer, to the Cadence service.
7. On receiving the decision task response, the Cadence service schedules a timer.
8. When the timer fires, the Cadence service schedules the third decision task and dispatches it to a workflow worker again.
9. The workflow worker executes the third decision task and responds with the result: complete the workflow execution successfully with the result xyz.
Some more facts about decisions
A workflow decision orchestrates the other entities: activities, child workflows, timers, etc.
A decision (workflow) task communicates with the Cadence service, telling it what to do next: for example, start/cancel some activities, or complete/fail/continueAsNew a workflow.
There is always at most one outstanding (running/pending) decision task per workflow execution. It's impossible to start one while another has started but not yet finished.
The nature of the decision task leads to some non-determinism issues when writing Cadence workflows. For more details you can refer to the article.
On each decision task, the Cadence client SDK can start from the very beginning and “replay” the code, for example executing activityA. However, this replay mode won't generate the decision to schedule activityA again, because the client knows that activityA has already been scheduled.
However, a worker doesn't have to run the code from the very beginning. The Cadence SDK is smart enough to keep the state in memory and wake up later to continue from the previous state. This is called the “workflow sticky cache,” because a workflow sticks to a worker host for a period.
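The replay behavior can be sketched in a few lines of toy Java (this is not the Cadence SDK; all names are illustrative). The "workflow code" below mirrors sampleWorkflowMethod above: on the first decision task the history is empty, so a ScheduleActivity command is emitted; on the second, activityA's recorded result xyz is consumed from history and only a timer command is produced — never a duplicate ScheduleActivity:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy model of decision generation under replay: commands for work that is
// already in the recorded history are consumed, not re-issued.
public class ReplaySketch {
    private final Map<String, String> recordedResults; // activity -> result
    private final List<String> commands = new ArrayList<>();

    public ReplaySketch(Map<String, String> recordedResults) {
        this.recordedResults = recordedResults;
    }

    // Stand-in for an activity stub call inside workflow code.
    private String activity(String name) {
        if (recordedResults.containsKey(name)) {
            return recordedResults.get(name);     // replay: reuse, no command
        }
        commands.add("ScheduleActivity:" + name); // new decision
        return null;                              // real code would block here
    }

    // Mirrors sampleWorkflowMethod; returns the commands of one decision task.
    public List<String> runDecisionTask() {
        String result = activity("activityA");
        if (result == null) {
            return commands;                      // blocked waiting for activityA
        }
        if (result.startsWith("x")) {
            commands.add("StartTimer");           // the Workflow.sleep branch
        } else {
            activity("activityB");
        }
        return commands;
    }

    public static void main(String[] args) {
        // First decision task: empty history.
        System.out.println(new ReplaySketch(Map.of()).runDecisionTask());
        // prints [ScheduleActivity:activityA]
        // Second decision task: activityA already completed with "xyz".
        System.out.println(new ReplaySketch(Map.of("activityA", "xyz")).runDecisionTask());
        // prints [StartTimer]
    }
}
```

Determinism is what makes this safe: because the code takes the same branch given the same recorded result, the replayed run reaches the same point as the original run before producing anything new.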
History events of the example:
1. WorkflowStarted
2. DecisionTaskScheduled
3. DecisionTaskStarted
4. DecisionTaskCompleted
5. ActivityTaskScheduled <this schedules activityA>
6. ActivityTaskStarted
7. ActivityTaskCompleted <this records the results of activityA>
8. DecisionTaskScheduled
9. DecisionTaskStarted
10. DecisionTaskCompleted
11. TimerStarted <this schedules the timer>
12. TimerFired
13. DecisionTaskScheduled
14. DecisionTaskStarted
15. DecisionTaskCompleted
16. WorkflowCompleted
TL;DR: When a new external event is received, a workflow task is responsible for determining which commands to execute next.
Temporal/Cadence workflows are executed by an external worker, so the only way to learn which next steps a workflow has to take is to ask it every time new information is available. The only way to dispatch such a request to a worker is to put a workflow task into a task queue. The workflow worker picks it up, gets the workflow out of its cache, and applies the new events to it. After the new events are applied, the workflow executes, producing a new set of commands. Once the workflow code is blocked and cannot make any forward progress, the workflow task is reported back to the service as completed. The list of commands to execute is included in the completion request.
Does the worker run through the workflow from beginning (using records of completed activities) until it hits the next "meaningful" thing it needs to do while making a "decision" on what needs to be done next?
This depends on whether the worker has the workflow object in its LRU cache. If the workflow is in the cache, no recovery is needed and only the new events are included in the workflow task. If the object is not cached, the whole event history is shipped and the worker has to execute the workflow code from the beginning to bring it to its current state. All commands produced while replaying past events are duplicates of previously produced commands and are ignored.
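As a toy illustration of that cache check (plain Java, not the Temporal/Cadence SDK; all names are made up), a worker can remember how many history events it has already applied per workflow: a cached workflow sees only the events appended since its last task, while an evicted workflow replays the full history.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of the worker-side sticky cache: workflowId -> number of
// history events already applied on this worker.
public class StickyCacheSketch {
    private final Map<String, Integer> applied = new HashMap<>();

    // Returns the events this workflow task actually has to process.
    public List<String> handleWorkflowTask(String workflowId, List<String> fullHistory) {
        int done = applied.getOrDefault(workflowId, 0); // 0 if not cached
        List<String> toProcess =
            new ArrayList<>(fullHistory.subList(done, fullHistory.size()));
        applied.put(workflowId, fullHistory.size());
        return toProcess;
    }

    // Simulates LRU eviction: the next task must replay from the beginning.
    public void evict(String workflowId) {
        applied.remove(workflowId);
    }
}
```

With the workflow cached, a second task for the same workflow processes only the new events; after eviction, the whole history is replayed, and (as noted above) commands produced during replay are discarded as duplicates.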
The above means that, during the lifetime of a workflow, multiple workflow tasks have to be executed. For example, for a workflow that calls two activities in sequence:
a();
b();
The tasks will be executed for every state transition:
-> workflow task at the beginning: command is ScheduleActivity "a"
a();
-> workflow task when "a" completes: command is ScheduleActivity "b"
b();
-> workflow task when "b" completes: command is CompleteWorkflowExecution
In the answer, I used terminology adopted by temporal.io fork of Cadence. Here is how the Cadence concepts map to the Temporal ones:
decision task -> workflow task
decision -> command, but it can also mean workflow task in some contexts
task list -> task queue

Run scheduler to execute jobs at an interval from the completion of the previous job

I need to create schedulers to execute jobs (class files) at specified intervals. For now, I'm using the Quartz Scheduler, which triggers the jobs at defined intervals from the time they are triggered.
For example, suppose I give it a cron expression to run every hour starting at 9 in the morning. My first run will be at 9, my second run at 10, and so on.
If my job takes 20 minutes to execute, this approach does not work well for me.
What I need is to schedule a job to run one hour from the completion time of the previously run job.
For example, suppose my hourly job is triggered at 9 and the first run takes 20 minutes. The next run should then trigger only at 10:20 instead of 10 (i.e., one hour from the completion of the previous run).
I need to know whether there are any methods in Quartz scheduling to achieve this, or any other logic I need to implement.
If anyone could help me out on this, it would be very helpful.
You can easily achieve this by job-chaining your job executions. There are various approaches you can choose from:
(1) Implement a Quartz JobListener and, in its jobWasExecuted method, which Quartz invokes whenever a job finishes executing, re-fire your job.
(2) Look at the Quartz JobChainingJobListener that you can use to implement simple job chaining scenarios. Please note that the functionality of this listener is quite limited: it does not allow you to insert delays between job executions, there is no support for conditions that must be met before target jobs are executed, etc. But you can use it as a good starting point for implementing (1).
(3) Use QuartzDesk (our commercial product) or any other product that allows you to create job chains while externalizing and managing all job dependencies outside of your application. A job chain can have multiple target jobs that can be executed immediately, with a fixed delay, or at an arbitrary time in the future produced by a JavaScript expression. It also allows you to implement somewhat more sophisticated workflows, such as firing a target job when multiple source jobs complete their execution. I am attaching screenshots showing what a simple job chain that re-executes Job1 with a 1-minute delay upon Job1's completion (with any job execution status) looks like:
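As a point of comparison outside Quartz: the JDK's own ScheduledExecutorService.scheduleWithFixedDelay provides exactly the asked-for semantics, measuring the delay from the completion of one run to the start of the next. A small self-contained sketch (millisecond-scale timings are chosen only for the demo; the method and parameter names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayExample {
    // Runs a simulated job repeatedly, waiting `delayMs` after each
    // COMPLETION before the next start, and returns the recorded start times.
    public static List<Long> run(long taskMs, long delayMs, long totalMs)
            throws InterruptedException {
        List<Long> starts = Collections.synchronizedList(new ArrayList<>());
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        // scheduleWithFixedDelay measures the delay from the end of one run
        // to the start of the next -- a 20-minute job with a 1-hour delay
        // would start again 1 hour after it finishes, i.e. at 10:20.
        scheduler.scheduleWithFixedDelay(() -> {
            starts.add(System.nanoTime());
            try {
                Thread.sleep(taskMs);           // simulate a long-running job
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, 0, delayMs, TimeUnit.MILLISECONDS);
        Thread.sleep(totalMs);
        scheduler.shutdownNow();
        return starts;
    }

    public static void main(String[] args) throws InterruptedException {
        // A 50 ms "job" with a 50 ms delay: successive starts are at least
        // ~100 ms apart, i.e. the delay is counted from completion.
        List<Long> starts = run(50, 50, 360);
        for (int i = 1; i < starts.size(); i++) {
            System.out.println(
                (starts.get(i) - starts.get(i - 1)) / 1_000_000 + " ms gap");
        }
    }
}
```

Within Quartz itself, the job-chaining approaches above remain the way to go (a bare executor has no persistence, clustering, or misfire handling); this sketch only demonstrates the run-to-run delay semantics the question asks for.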

FreeRTOS task never gets swapped

According to the FreeRTOS task scheduling documentation, the kernel can swap a task out even if the task is currently executing and hasn't called any blocking function. So once the kernel gets the clock tick interrupt and executes its ISR, it can schedule another task to run afterwards.
On my system with FreeRTOS, I launch 5 tasks, each of which is programmed to delay itself at some point, and therefore I can see all tasks being swapped in and out, with each task executing at some point. But if I enter an infinite loop inside a task, that task never gets swapped out.
How is that possible?
First, you need to ensure that configUSE_TIME_SLICING is set. This enables round-robin scheduling between ready tasks of equal priority, which is what allows the scheduler to do what you are expecting.
Also, the scheduler will only switch away from the looping task to another task of equal or higher priority; lower-priority tasks will never preempt it.

Quartz.net scheduler and IStatefulJob

I am wondering if I am understanding this right.
http://quartznet.sourceforge.net/apidoc/
IStatefulJob instances follow slightly different rules from regular IJob instances. The key difference is that their associated JobDataMap is re-persisted after every execution of the job, thus preserving state for the next execution. The other difference is that stateful jobs are not allowed to Execute concurrently, which means new triggers that occur before the completion of the IJob.Execute method will be delayed.
Does this mean all triggers will be delayed until another trigger is done? If so, how can I make it so that only the same trigger will not fire until its previous firing is done?
Say I have trigger A that fires every minute, but for some reason it is slow and takes a minute and a half to execute. If I just use a plain IJob, the next one would fire anyway, and I don't want that. I want to stop trigger A from firing again until it is done.
At the same time, I have trigger B that also fires every minute. It runs at normal speed and finishes on time every minute. I don't want trigger B to be held up because of trigger A.
From my understanding, this is what would happen if I use IStatefulJob.
In short: this behavior is on the job's side. Regardless of how many triggers you may have, only a single instance of a given IStatefulJob (the job name and job group dictate the instance identity) runs at a time. So there might be two instances of the same job type, but no two same-named jobs (name, group) running concurrently if the job implements IStatefulJob.
If a trigger misses its fire time because of this, the misfire instructions come into play. A trigger that misses its next fire because the earlier invocation is still running decides what to do based on its misfire instruction (see the API docs and tutorial).
With a plain IJob you have no guarantees about how many jobs will be running at the same time if you have multiple triggers for it and/or misfires are happening. IJob is just the contract interface for invoking the job. Quartz.NET 2.0 will split IStatefulJob's combined behavior into two separate attributes: DisallowConcurrentExecution and PersistJobDataAfterExecution.
So you could define the same job type (as IStatefulJob) under two different job names, each with its own triggers and applicable misfire instructions.
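The per-job-name serialization described above can be sketched in plain Java (this is illustrative, not Quartz.NET code; the class and key names are made up): one lock per job key serializes runs of the same job, while differently-keyed jobs, like trigger B's, are unaffected.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Toy sketch of DisallowConcurrentExecution / IStatefulJob semantics:
// runs for the SAME job key are serialized; different keys run in parallel.
public class PerJobSerializer {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void execute(String jobKey, Runnable job) {
        ReentrantLock lock = locks.computeIfAbsent(jobKey, k -> new ReentrantLock());
        lock.lock();           // a second trigger for the same key waits here
        try {
            job.run();
        } finally {
            lock.unlock();
        }
    }
}
```

Here trigger A's overlapping firings would queue up on the "jobA" lock while "jobB" keeps firing on schedule; Quartz additionally applies the trigger's misfire instruction to the delayed firing, which this sketch does not model.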