Change priority rule and reorder queued agents at runtime using Reinforcement Learning - service

I am developing a model composed of m consecutive machines, through which n agents must be processed, each following a random sequence of machines. I want an intelligent agent (Reinforcement Learning) that, at each action, sets the priority rule used to rank the queued agents at each machine.
The problem I have is that I am not sure if I am correctly changing the queueing order of agents in each queue, whenever the ranking rule is changed.
After some googling, I found this post, which seems to be what I want:
Change priority rule of a Queue block at runtime in Anylogic
In that post, user Stuart Rossiter posted an interesting solution (case 2 - using a Service block), which consists of sorting the agents in the service's embedded queue using self.queue.sortAgents().
However, AnyLogic does not recognize this expression; when I try to use it, I get the error "queue cannot be resolved or is not a field". After some more googling, I found that the embedded queue of a service can be accessed through service.seize.queue; however, even this way, the method sortAgents() cannot be used, as I get an error saying that the method is undefined.
So, my question is: how can I reorder the agents in the embedded queue of a service after changing the ranking rule at runtime?
Obviously, I am assuming that playing with the task priority of the service would not be enough, as that would only rank agents that arrive at the queue after the ranking rule is set, i.e., it does not update the order of jobs queued before the ranking rule is changed (this is also clearly explained by the same user Stuart Rossiter).
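For concreteness, this is the kind of action function I am trying to write (a sketch; useDepartureDate and the block name service are placeholder names from my model):

void doAction(int action) {
    useDepartureDate = (action == 0); // variable read by the queue's priority expression
    // then re-sort the agents already waiting in the service's embedded queue:
    service.seize.queue.sortAgents(); // <- this is the call that fails to compile
}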
Thank you.


How to configure channels and AMQ for spring-batch-integration where all steps are run as slaves on another cluster member

Followup to Configuration of MessageChannelPartitionHandler for assortment of remote steps
Even though the first question was answered (well, I think), I'm confused enough that I'm not able to ask the right questions. Please direct me if you can.
Here is a sketch of the architecture I am attempting to build. Today, we have a job that runs a step across the cluster that works. We want to extend the architecture to run n (unbounded and different) jobs with n (unbounded and different) remote steps across the cluster.
I am not confusing job executions and job instances with jobs. We already run multiple job instances across the cluster. We need to be able to run other processes that are scalable in the same way as the one we have defined already.
The source data all comes from databases which are known to the steps. The partitioner defines the range of data for the "where" clause in the source database and puts that in the stepContext. All of the actual work happens in the stepContext; the jobContext simply serves to spawn steps, wait for completion, and provide the control API.
There will be 0 to n jobs running concurrently, with 0 to n steps from however many jobs running on the slave VMs concurrently.
Does each master job (or step?) require its own request and reply channel, and by extension its own OutboundChannelAdapter? Or are the request and reply channels shared?
Does each master job (or step?) require its own aggregator? By implication, this means each job (or step) will have its own partition handler (which may be supported by the existing codebase).
The StepLocator on the slave appears to require a single shared replyChannel across all steps, but it appears to me that the MessageChannelPartitionHandler requires a separate reply channel per step.
What I think is unclear (but I can't tell since it's unclear) is how the single reply channel is picked up by the aggregatedReplyChannel and then returned to the correct step. Of course I could be so lost I'm asking the wrong questions.
Thank you in advance
Does each master job (or step?) require its own request and reply channel, and by extension its own OutboundChannelAdapter? Or are the request and reply channels shared?
No, there is no need for that. StepExecutionRequests are identified with a correlation ID that makes it possible to distinguish them.
Does each master job (or step?) require its own aggregator? By implication, this means each job (or step) will have its own partition handler (which may be supported by the existing codebase).
That should not be the case, as requests are uniquely identified with a correlation ID (similar to the previous point).
The StepLocator on the slave appears to require a single shared replyChannel across all steps, but it appears to me that the MessageChannelPartitionHandler requires a separate reply channel per step.
The MessageChannelPartitionHandler should be step or job scoped, as mentioned in the Javadoc (see the recommendation in the last note). As a side note, there was an issue with messages crossing in a previous version, due to the reply channel being instance based, but it was fixed here.
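As an illustration of that scoping recommendation, here is a minimal sketch (not your actual configuration; the bean wiring, step name, and grid size are assumptions) of a step-scoped MessageChannelPartitionHandler sending requests over one shared channel, with correlation IDs keeping the replies apart:

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.integration.partition.MessageChannelPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.core.MessagingTemplate;

@Bean
@StepScope
public MessageChannelPartitionHandler partitionHandler(MessagingTemplate messagingTemplate) {
    // messagingTemplate is assumed to wrap the single shared request channel
    MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
    handler.setMessagingOperations(messagingTemplate);
    handler.setStepName("workerStep"); // remote step to run on the slaves
    handler.setGridSize(4);            // partitions per execution
    return handler;
}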

AnyLogic: two customer classes having different priorities

I know the basics of AnyLogic/Process Modeling Library and am about to teach simulation of basic queues with AnyLogic, transitioning from Simul8, which I've used for many years.
I have agents of two types, 1 and 2, sent to respective queues 1 and 2, which then feed a single "service" point, such that type 1 takes higher priority (that is, whenever the service is ready to pull work, it pulls from queue 1 if it is non-empty, regardless of the size of queue 2). How can I capture this as simply as possible?
Having seen the reference pages for the Queue object, my preliminary (unworked) idea is to use a single queue and control agent priority via the Queue.QUEUING_PRIORITY ("Priority-based") option.
For comparison, a solution in the Simul8 software is: set "service" routing-in discipline to "priority"; and assign different priorities to the two queues.
Yes, you are right: you can't use two queues, as the pull from the queues will be done in a round-robin fashion. See the screenshot below from the AnyLogic training textbook.
You should queue the agents in a single queue, and you can have either a single parameterised source or two.
See example below
I have 2 sources, and at each of them I set the priority via a local variable inside my agent: the variable of agents from source 1 is set to 1, and that of agents from source 2 is set to 0.
Then, inside the queue, I set the priority so that the agents from source 1 are always in front.
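A minimal sketch of that setup (the int field priority on the agent type is an assumed name):

// In each Source's "On exit" action:
agent.priority = 1; // source 1; use agent.priority = 0 in source 2

// In the Queue block, choose "Priority-based" queueing and use this
// "Agent priority" expression (higher value = closer to the front):
agent.priority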

Change priority rule of a Queue block at runtime in Anylogic

I am trying to implement reinforcement learning in AnyLogic using the Pathmind library. The RL agent can take either of two actions, each of which changes the priority rule of a Queue block.
I have a Queue block where I'm using priority-based queueing. I have two priority rules: the agent's departure date and the agent's wait time. I want to switch to either of these rules at runtime using another function called doAction(action). A value of 0 or 1 will be passed to this function. The function body would look like this:
void doAction(int action) {
    if (action == 0) {
        // set departure_date as the priority rule of the Queue block
    } else {
        // set wait_time as the priority rule of the Queue block
    }
}
The expression of my Queue block is given here.
RL parameters are mentioned here.
What should be the code to set priority rule dynamically from the doAction(action) function?
I would rather suggest making the priority rule dynamic inside the queue.
I assume you have some agent with a field for departureTime as well as for waitingTime.
Then you can do something like the following:
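A sketch of the Queue's "Agent priority" expression (the field names departureTime and waitingTime and the boolean variable useDepartureTime are assumed):

// "Agent priority" expression of the Queue block:
useDepartureTime ? -agent.departureTime : agent.waitingTime
// earlier departure date -> higher priority; longer wait -> higher priority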
This way, each agent gets a different priority level when the priority option changes.
Here I am using a boolean useDepartureTime, but you can make it as complex as you need, and even have a function in the "Agent priority" field that returns the priority level.
Just remember that you need to call queue.sortAgents() whenever you change the rule: only newly arriving agents are ranked with the new rule; the agents already waiting in the queue are not re-sorted automatically, since doing that continuously would be too resource-intensive.
To use priorities, you specify an expression to determine the priority of the agent in the Queue's "Agent priority" or "Agent 1 is preferred to agent 2" property (depending what priority scheme you're using).
So have that expression call a function (defined within the agent type in question) which returns either the departure-date or the wait-time alternative.
Also, you didn't say whether this is a global setting --- i.e., use either departure- or wait-time-based priorities for the whole run --- or something that can change dynamically. If you want the latter, you potentially need to call the sortAgents function of the Queue block (which might be inside a Service or Seize block, depending on what you're doing) at the appropriate times (i.e., when your prioritisation scheme changes) to recalculate the priorities of all agents currently waiting in the queue.
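A sketch of that function-based variant (assuming the scheme flag useDepartureTime lives on Main, and the agent type has departureTime and waitingTime fields):

// Function defined inside the agent type; the Queue's "Agent priority"
// expression would then simply be agent.getPriority():
double getPriority() {
    return main.useDepartureTime ? -departureTime : waitingTime;
}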
EDIT: I see from your other comment that you're trying to use reinforcement learning, presumably learning how to make a decision on how to prioritise the agents. (You should put that in an edit to your question since it's pretty important and relevant!)
So if you view the queue as the 'learning agent', you need to separate the learning action (which will set up / decide which prioritisation scheme you're using) from then using that scheme in the prioritisation.
This depends on whether you're using a Queue on its own (with priority based or agent comparison queueing), or you're doing this within a Service or Seize block. It matters because the on-enter action of the latter runs before the priority calculation expression but, with a plain Queue, it runs after the priority calculation.
Case 1: Using Service or Seize block
Have the on-enter action be the RL action, which would then, say, set some variable recording which prioritisation scheme it has chosen, and then call sortAgents on its embedded queue (self.queue) to recalculate all the priorities. Then have switches in the priority calculation expression, as above, to do the calculation for the incoming agent using the required scheme.
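A sketch of that on-enter action (useDepartureTime and the RL call are assumed names; self.queue is the access described here, though depending on your AnyLogic version the embedded queue may have to be reached differently, e.g. via the block's seize element, as discussed in the follow-up question above):

// "On enter" action of the Service/Seize block:
useDepartureTime = decideSchemeViaRL(); // hypothetical call into the RL policy
self.queue.sortAgents();                // re-rank the agents already waiting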
Case 2: Using a plain Queue block
As above, but do the prioritisation scheme decision in the on-at-exit actions of all immediately preceding blocks (i.e., so that this is run just before the agent arrives at the Queue block and has its prioritisation allocated).
You can always use 2 queue blocks and send agents to only one using a SelectOutput block in front of them.
Each agent decides which queue to use based on your conditions.

Is there a way of assigning an int number to different instances of stateless services?

I'm building a solution where we'll have a (service-fabric) stateless service deployed to K instances. This service is tasked with some workload (like querying) and I want to split the workload between them as evenly as I can - and I want to make this a dynamic solution, which means if I decide to go from K instances to N instances tomorrow, I want the workload splitting to happen in a way that it will automatically distribute the load across N instances now. I don't have any partitions specified for this service.
As an example -
Let's say I'd like to query a database to retrieve a particular chunk of the records. I have 5 nodes, and I want each of these 5 nodes to retrieve a different 1/5th of the set of records. This can be achieved through some query logic like (row_id % N == K), where N is the total number of instances and K is the unique instance_number.
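A sketch of that split in code (how to obtain k and n deterministically is exactly what this question is asking; here they are assumed to come from configuration):

// Instance k of n handles a row iff row_id % n == k; the same predicate
// can live in the query itself, e.g. WHERE MOD(row_id, :n) = :k
boolean isMine(long rowId, int k, int n) {
    return rowId % n == k;
}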
I was hoping to leverage FabricRuntime.GetNodeContext().NodeId - but this returns a guid which is not overly useful.
I'm looking for a way to deterministically say "this is instance number M out of N" (I need to be able to number the instances 1..N), so I can set my querying logic accordingly. One of the requirements is that if an instance goes down / crashes, etc., when SF automatically restarts it, it should still identify as the same instance ID, so that 2 or more nodes don't query the same set of results.
What is the best way of solving this problem? Is there a solution which involves pure configuration through ApplicationManifest.xml or ServiceManifest.xml?
There is no out-of-the-box solution for your problem, but it can easily be done in many different ways.
The simplest way is using the Queue-Based Load Leveling pattern in conjunction with Competing Consumers pattern.
It consists of creating a queue and adding the work to it; each instance gets one message and processes that work. If one instance goes down and its message is not processed, the message goes back to the queue and another instance picks it up.
This way you don't have to worry about the number of instances running, failures and so on.
Regarding the work being put in the queue, it will depend on whether you want to do batch processing or process item by item.
Item by item, you put one message in the queue for each item to be processed; this is a simple way to handle the work, and each instance processes one message at a time, or multiple messages in parallel.
In batch, you put a message that represents a list of items to be processed, and each instance processes that batch until completion; this is a bit trickier, because you might have to track the progress of the work being done so that, in case of failure, the next attempt can continue from where it stopped.
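A minimal competing-consumers sketch of the item-by-item case, using an in-memory BlockingQueue as a stand-in for a durable queue (Azure Storage Queue, Service Bus, ...); with a real queue, the visibility timeout is what puts unprocessed messages back:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class Worker implements Runnable {
    private final BlockingQueue<String> queue; // shared by all instances

    Worker(BlockingQueue<String> queue) { this.queue = queue; }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // each competing instance pulls the next available item
                String item = queue.poll(5, TimeUnit.SECONDS);
                if (item != null) process(item);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void process(String item) { /* the actual work */ }
}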
The queue approach is a reactive design: work needs to be put in the queue to trigger the processing. If you want a proactive approach and need to keep track of which work goes to whom, you would probably be better off with some other approach, like a leasing mechanism, where each instance acquires a lease that belongs to it until it releases the lease; this is more suitable when you work with partitioned data or another mechanism where you can easily split the load.
Regarding the issue with the ID, an option would be the InstanceId of the replica you are on, which you can reach via StatelessService.Context.InstanceId; it is not a sequential ID, but a random number. It is better than using the node ID, because you might have multiple partitions on the same node, and the IDs would conflict with each other.
If you decide to use named partitions, you could encode the order in the partition name instead, so that each partition has a sequential name.
Worth mentioning that Service Fabric has a limitation that doesn't allow a service to have multiple replicas on the same node; because of this limitation, you might have to design your services with it in mind, otherwise you won't be able to scale out once the limit is reached. Also, the same thread has some discussion about approaches to processing multiple distributed items that might give you some ideas.

Request entity from AnyLogic Process Block and wait until it's available if there is not currently one

I'm trying to emulate what QUEST does when a buffer is queried for a certain Part: if the part is not in the buffer, the request is left pending, and when a Part arrives at the buffer, it is released to the machine requesting it. I have also seen this behavior in SimPy, which is another DES engine.
I can't seem to find a simple way to do this in AL. The queue block has the following methods:
release(agent): will return false and forget about the request if there is no agent like the one specified
remove(agent): will return null if there is no such agent in the queue
So those methods won't do what I want...
It gets a little more complicated as the queue contains agents with parameters and I want to request a specific set of parameters (let's say the agents have a number parameter that can go from 1 to 3 and I'm only interested in agents in the queue if this parameter has the value 2).
Also, there is a series of agents pulling these agents from the queue simultaneously, and I'd like a priority to be set (let's say FIFO).
So there are a couple of things I've tried that have led me nowhere:
Using a Seize block instead of a Queue and adding the agents to the embedded queue in the Seize block -> I can't find the proper method to seize from the buffer in a different way than from a buffer block (so I moved to option 2), but Seize does have a promising "customize resource choice" option that could help with the parameter down-selection.
Using a Seize block and storing the agents in a pool as resources -> issues with dynamic creation of resources, seizing the appropriate one, etc.
Creating a queue of requests that have returned null from the queue -> this sounds like overkill, but I'll look into it.
All of these appear to be a bit complex for something that is so simple in other simulation software, so I'm wondering if I'm missing something, or if someone has come across this issue before.
Suggestion 1: it may help you to store the agents in the queue in a collection (or in different collections, according to the parameter settings), maintained in the "on enter" and "on exit" events.
Suggestion 2: maybe the Wait block helps you here?
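Combining the two suggestions, a sketch (all names assumed: a Wait block wait, a collection waiting maintained as agents enter and leave the block, and a Part agent type with an int field number):

// Called by a machine that requests a part with the given number;
// returns null if no matching part is waiting, so the caller retries
// later, e.g. from the Wait block's "on enter" action when parts arrive.
Part requestPart(int wanted) {
    for (Part p : waiting) {        // FIFO: the collection keeps entry order
        if (p.number == wanted) {
            wait.free(p);           // releases the agent to the next block
            return p;
        }
    }
    return null;
}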