How can I define queuing discipline for a service in AnyLogic?

Is there a way to define queuing discipline (FIFO etc) for a service?

The default for a Service is FIFO; the other option is to use priorities, which works the same as priority-based queueing in a Queue block.
To get LIFO you can also use priorities, setting the priority to the time at which the agent arrived at the queue: the later the arrival time, the higher the priority.
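For example (a minimal sketch, assuming the priority expression is evaluated when the agent enters the block and that a larger value means higher priority):

    // "Task priority" of the Service (or "Agent priority" of a Queue), LIFO sketch:
    time()   // current model time at entry, so later arrivals get higher priority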
Agent comparison can likewise be handled using only priorities.
Conclusion: even though there is no explicit way to define the queuing discipline as such, you can still achieve it by accepting the default FIFO or by using priorities, depending on your needs.

Related

Change priority rule and reorder queued agents in runtime using Reinforcement Learning

I am developing a model composed of m consecutive machines in which n agents must be processed in random sequences of machines. I want an intelligent agent (reinforcement learning) that, at each action, sets the priority rule used to rank queued agents at each machine.
The problem I have is that I am not sure whether I am correctly changing the queueing order of agents in each queue whenever the ranking rule is changed.
After some googling, I found this post, which seems to be what I want:
Change priority rule of a Queue block at runtime in Anylogic
In this post, user Stuart Rossiter posted an interesting solution (case 2 - using a service block), which consists of sorting the agents queued in the embedded service's queue using self.queue.sortAgents().
However, AnyLogic does not recognize this expression; when I try to use it, I get the error "queue cannot be resolved or is not a field". After some more googling, I found that the embedded queue of a service can be accessed through service.seize.queue; however, even this way the sortAgents() method cannot be used, as I get an error saying that the method is undefined.
So, I am asking: how can I reorder the agents in the embedded queue of a service after changing the ranking rule at runtime?
Obviously, I am assuming that playing with the task priority of the service would not be enough, as that would only rank the agents that arrive at the queue after the ranking rule is set, i.e., it does not update the order of jobs queued before the ranking rule was changed (this is also clearly explained by the same user, Stuart Rossiter).
Thank you.

Change priority rule of a Queue block at runtime in Anylogic

I am trying to implement reinforcement learning in AnyLogic using the Pathmind library; the RL agent can take either of two actions, each of which changes the priority rule of a Queue block.
I have a Queue block where I'm using priority-based queueing. I have two priority rules: the agent's departure date and the agent's wait time. I want to switch between these rules at runtime using another function called doAction(action). A value of 0 or 1 will be passed to this function. The function body would be like this:
void doAction(int action) {
    if (action == 0) {
        // set departure_date as the priority rule of the Queue block
    } else {
        // set wait_time as the priority rule of the Queue block
    }
}
The expression of my queue block is given here.
RL parameters are mentioned here.
What should be the code to set priority rule dynamically from the doAction(action) function?
I would suggest rather making the priority rule dynamic inside the queue.
I assume you have some agent with a field for departureTime as well as for waitingTime.
Then you can do something like the following:
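A minimal sketch of the Queue's "Agent priority" expression, assuming the agent type has departureTime and waitingTime fields and there is a main-level boolean useDepartureTime (flip the sign if a smaller value should win):

    // "Agent priority" expression (larger value = higher priority):
    useDepartureTime ? agent.departureTime : agent.waitingTime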
You simply have a different priority level for each agent if the priority option changes.
Here I am using a boolean useDepartureTime, but you can make it as complex as you need, and you can even have a function in the "Agent priority" field that returns the priority level.
Just remember that you need to call queue.sortAgents() whenever you change the rule: only newly arriving agents are placed according to the new rule, and the agents already waiting in the queue are not re-sorted automatically, since doing that on every change would be too resource-intensive.
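A sketch of what doAction() could then look like, assuming the Queue block in the flowchart is named queue and that your AnyLogic version exposes Queue.sortAgents():

    void doAction(int action) {
        useDepartureTime = (action == 0);  // 0 = departure-date rule, 1 = wait-time rule
        queue.sortAgents();                // re-sort the agents already waiting in the queue
    }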
To use priorities, you specify an expression to determine the priority of the agent in the Queue's "Agent priority" or "Agent 1 is preferred to agent 2" property (depending on what priority scheme you're using).
So have that expression call a function (defined within the agent type in question) which returns either the departure-date or wait-time alternative.
Also, you didn't say whether this is a global setting --- i.e., use either departure- or wait-time-based priorities for the whole run --- or could change dynamically; if you want the latter, you potentially need to call the sortAgents function of the Queue block (which might be inside a Service or Seize block, depending on what you're doing) at the appropriate times (i.e., when your prioritisation scheme changes) to re-calculate all the priorities for agents currently waiting in the queue.
EDIT: I see from your other comment that you're trying to use reinforcement learning, presumably learning how to make a decision on how to prioritise the agents. (You should put that in an edit to your question since it's pretty important and relevant!)
So if you view the queue as the 'learning agent', you need to separate the learning action (which will set up / decide which prioritisation scheme you're using) from then using that scheme in the prioritisation.
This depends on whether you're using a Queue on its own (with priority based or agent comparison queueing), or you're doing this within a Service or Seize block. It matters because the on-enter action of the latter runs before the priority calculation expression but, with a plain Queue, it runs after the priority calculation.
Case 1: Using Service or Seize block
Have the on-enter action be the RL action which would then, say, set some variable to say which prioritisation scheme it had chosen and then call sortAgents on its embedded queue (self.queue) to recalc all the priorities. Then have switches in the priority calculation expression as above to do the calculation for the incoming agent using the required scheme.
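A rough sketch of what that on-enter action might look like, assuming a main-level flag for the chosen scheme (useDepartureTime, hypothetical) and that your AnyLogic version exposes the embedded queue's sortAgents() as described:

    // "On enter" action of the Service block (sketch):
    useDepartureTime = decidePrioritisationScheme();  // hypothetical call to the RL policy
    self.queue.sortAgents();                          // re-calculate priorities of agents already waiting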
Case 2: Using a plain Queue block
As above, but do the prioritisation scheme decision in the on-at-exit actions of all immediately preceding blocks (i.e., so that this is run just before the agent arrives at the Queue block and has its prioritisation allocated).
You can always use two Queue blocks and send each agent to only one of them using a SelectOutput block placed in front of them.
Each agent decides which queue to use based on your conditions.
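For instance (a sketch, assuming a main-level boolean useDepartureTime decides which rule is active, and the true exit leads to the departure-date Queue):

    // SelectOutput "Condition" expression:
    useDepartureTime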

Accessing Queue Priorities in a Seize Block

According to AnyLogic's documentation, a Seize block embeds a Queue block, and "The rich interface of Queue (ability to use priorities, timeouts, remove agents, etc.) is fully exposed by Seize."
I want to access the queue portion of a seize block in order to make agent prioritization, which can be found under the first "Advanced" tab of the Queue block properties. However, I cannot see this in the properties of a Seize block.
Is there anything I have to do in order for this property to appear in the Seize block? Or do I have to set the queue capacity of the Seize block to 0 and add a separate Queue block in front? I want the model to be as readable as possible for my case organization, thus I want to use as few blocks as possible.
In the Seize block, the conceptual difference is that instead of a "queue priority" you have a "task priority".
You can basically do everything related to priority using only that. If you do nothing, you get FIFO; if you want priority-based queueing, it's exactly the same as in the Queue; if you want LIFO, you can use agent.getBlockEnterTime() as your priority value; and if you want to compare agents, it's the same as using priority-based queueing.
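For instance, a LIFO sketch for the Seize block's "Task priority" field, using the expression suggested above (larger value = higher priority):

    // "Task priority" expression of the Seize block:
    agent.getBlockEnterTime()   // later entry time means higher priority, i.e. LIFO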
So no, you don't need to add another Queue block.

High Scalability Question: How to sync data across multiple microservices

I have the following use case:
Assume you have two microservices, AccountManagement and ActivityReporting, that process event U.
When a user registers, an event U containing the user information will be published to a broker for the two microservices to process.
The AccountManagement and ActivityReporting microservices are each replicated across two instances for performance and scalability reasons.
Each microservice instance has a consumer listening on the broker topic. The choice of a topic is so that both AccountManagement and ActivityReporting can process U concurrently.
However, I want only one instance of AccountManagement to process event U, and one instance of ActivityReporting to process event U.
Please share your experience implementing a consume-once-per-application-group broker system, as this would effectively solve this problem.
If all your consumer listeners, even those from different instances, have the same group.id property, then only one of them will receive each message. You need to set this property when you initialise the consumer. So in your case you will need one group.id for AccountManagement and another for ActivityReporting.
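A minimal sketch, assuming a Kafka broker and the plain Java consumer API (the broker address, topic and group names are made up):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class AccountManagementConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            // All AccountManagement instances share one group.id, so each event U
            // is delivered to only one of them; ActivityReporting instances would
            // use a different group.id such as "activity-reporting".
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "account-management");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("user-registered"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // process event U here
                        System.out.println("Processing user event: " + record.value());
                    }
                }
            }
        }
    }

Note that within one group the messages are split by partition, so the topic needs at least as many partitions as instances if you want both instances of a service to share the load.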
I would recommend Cadence Workflow, which is a much more powerful solution for microservice orchestration.
It offers a lot of advantages over using queues for your use case.
Built-in exponential retries with an unlimited expiration interval
Failure handling. For example, it allows executing a task that notifies another service if both updates couldn't succeed within a configured interval.
Support for long-running heartbeating operations
Ability to implement complex task dependencies. For example, to implement chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Gives complete visibility into the current state of the update. For example, when using queues all you know is whether there are some messages in a queue, and you need an additional DB to track the overall progress. With Cadence every event is recorded.
Ability to cancel an update in flight.
See the presentation that goes over the Cadence programming model.

Akka: Adding a delay to a durable mailbox

I am wondering if there is some way to delay an Akka message from processing?
My use case: for every request I have, there is a small amount of work that I need to do immediately, and then I need to do additional work two hours later.
Is there any easy way to delay the processing of a message in Akka? I know I could probably set up an external distributed queue such as ActiveMQ or RabbitMQ, which probably has this feature, but I'd rather not.
I know I would need to make the mailbox durable so it can survive restarts or crashes. We already have Mongo set up, so I would probably be using the MongoBasedMailbox for durability.
Temporal Workflow is capable of supporting your use case with minimal effort. You can think about it as a durable actor platform, where actor state, including threads and local variables, is preserved across process restarts.
Temporal offers a lot of other features for task processing.
Built-in exponential retries with an unlimited expiration interval
Failure handling. For example, it allows executing a task that notifies another service if both updates couldn't succeed during a configured interval.
Support for long running heartbeating operations
Ability to implement complex task dependencies. For example, to implement chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Gives complete visibility into the current state of the update. For example, when using queues all you know is whether there are some messages in a queue, and you need an additional DB to track the overall progress. With Temporal every event is recorded.
Ability to cancel an update in flight.
Throttling of requests
See the presentation that goes over the Temporal programming model. It talks about Cadence, which is the predecessor of Temporal.
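For the delayed-work pattern in the question above, a workflow might look roughly like this (a sketch using the Temporal Java SDK; the interface, activity and parameter names are made up):

    import java.time.Duration;
    import io.temporal.activity.ActivityInterface;
    import io.temporal.activity.ActivityOptions;
    import io.temporal.workflow.Workflow;
    import io.temporal.workflow.WorkflowInterface;
    import io.temporal.workflow.WorkflowMethod;

    @ActivityInterface
    interface RequestActivities {
        void doInitialWork(String requestId);
        void doFollowUpWork(String requestId);
    }

    @WorkflowInterface
    interface RequestWorkflow {
        @WorkflowMethod
        void handleRequest(String requestId);
    }

    class RequestWorkflowImpl implements RequestWorkflow {
        private final RequestActivities activities = Workflow.newActivityStub(
                RequestActivities.class,
                ActivityOptions.newBuilder()
                        .setStartToCloseTimeout(Duration.ofMinutes(5))
                        .build());

        @Override
        public void handleRequest(String requestId) {
            activities.doInitialWork(requestId);   // the small amount of work done now
            Workflow.sleep(Duration.ofHours(2));   // durable timer, survives worker restarts
            activities.doFollowUpWork(requestId);  // the additional work two hours later
        }
    }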
It's not ideal, but the Akka Camel Quartz scheduler would do the trick. It's more heavyweight than the built-in ActorSystem scheduler, and be aware that Quartz has its own issues.
You could still use the normal Akka scheduler; you will just have to keep state in actor persistence to avoid losing the job if the server restarts.
I have recently used PersistentFsmActor, which keeps the state of the actor persisted.
I'm not sure that in your case you have to use an FSM (Finite State Machine), so you could basically just use a PersistentActor to save the time the job was inserted and start a scheduler for that time. This way, even if you restart the server, the actor will start, use the persisted data to calculate the time left, and create a new scheduled job to run it.
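A rough sketch of that idea, assuming a recent classic-Akka version (Java API); the message and event names are made up. The actor persists when the follow-up is due, so after a restart it can recompute the remaining delay and re-schedule:

    import java.time.Duration;
    import akka.actor.ActorRef;
    import akka.persistence.AbstractPersistentActor;

    public class DelayedWorkActor extends AbstractPersistentActor {

        // event persisted once the initial work is done
        static final class FollowUpScheduled implements java.io.Serializable {
            final long dueAtMillis;
            FollowUpScheduled(long dueAtMillis) { this.dueAtMillis = dueAtMillis; }
        }
        static final class DoFollowUp {}

        @Override
        public String persistenceId() { return "delayed-work-" + getSelf().path().name(); }

        @Override
        public Receive createReceiveRecover() {
            return receiveBuilder()
                .match(FollowUpScheduled.class, evt -> scheduleFor(evt.dueAtMillis))
                .build();
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .matchEquals("request", msg -> {
                    // ... do the small amount of work now ...
                    long dueAt = System.currentTimeMillis() + Duration.ofHours(2).toMillis();
                    persist(new FollowUpScheduled(dueAt), evt -> scheduleFor(evt.dueAtMillis));
                })
                .match(DoFollowUp.class, msg -> {
                    // ... do the additional work two hours later ...
                })
                .build();
        }

        private void scheduleFor(long dueAtMillis) {
            // schedule for the time that is left, which may be less than two hours after a restart
            long remaining = Math.max(0, dueAtMillis - System.currentTimeMillis());
            getContext().getSystem().scheduler().scheduleOnce(
                Duration.ofMillis(remaining), getSelf(), new DoFollowUp(),
                getContext().getDispatcher(), ActorRef.noSender());
        }
    }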