Agent can't be in several flowcharts at the time. At least two flowchart blocks are in conflict

Suppose I have the following supply chain model (see model1).
Agents communicate with each other through a defined network and exchange messages through ports. For example, demand is generated at customers through their ports and sent as "orders" upstream to facilities; upstream facilities send "shipments" to downstream facilities, and stats are collected at each node.
The model seems to work for 2 echelons, but when one facility is connected to two facilities downstream, as desired, I get the following error: "Agent can't be in several flowcharts at the time. At least two flowchart blocks are in conflict" (see error). Based on the description, it seems the agent "shipment" is sent to two facilities at the same time.
My question is how could I avoid this conflict?
More information about each node:
Agent "orders" enter through each node's port and are captured via Enter.take(msg), follow a flowchart, and exit as agent "shipments" to each destination. Each "order" agent has a double amount and a port destination (see facility node).
Any suggestions, please?

Correct: you must make sure that you do not send agents into a flowchart while they are still in another flowchart. That is bad model design.
One way to debug and find the root issue: before sending any message agent, check whether currentBlock() != null; if so, traceln the agent and the block, and also pause the model.
You can then see where you want to (re)send that agent that is already in some other flowchart block.
You probably send message agents out that are still somewhere else.
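A minimal sketch of that check (the message agent msg and the Enter block destinationEnter are illustrative names, not taken from the original model):

    // before injecting the message agent into the next facility's flowchart
    if (msg.currentBlock() != null) {
        // the agent is still sitting in some other flowchart block
        traceln("Conflict: " + msg + " is still in block " + msg.currentBlock());
        getEngine().pause(); // pause so the model state can be inspected
    } else {
        destinationEnter.take(msg); // safe to send it into the next flowchart
    }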
PS: For messages, you probably do not want to use flowcharts at all, but normal message passing. This avoids these pains, as you can easily send the same message to several agents. Check how message passing is done in the example agent-based models.
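As a rough sketch of plain message passing (the Order class, its fields, and the facility references are assumptions for illustration; send() and the connections' "On message received" action are standard AnyLogic features):

    // sender side, e.g. in the customer agent:
    send(new Order(amount, this), upstreamFacility); // delivers the message over the network

    // receiver side, in the facility's connections "On message received" action:
    if (msg instanceof Order) {
        Order order = (Order) msg;
        traceln("Received order for amount " + order.amount);
    }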

How to configure channels and AMQ for spring-batch-integration where all steps are run as slaves on another cluster member

Follow-up to Configuration of MessageChannelPartitionHandler for assortment of remote steps
Even though the first question was answered (I think well), I think I'm confused enough that I'm not able to ask the right questions. Please direct me if you can.
Here is a sketch of the architecture I am attempting to build. Today, we have a job that runs a step across the cluster that works. We want to extend the architecture to run n (unbounded and different) jobs with n (unbounded and different) remote steps across the cluster.
I am not confusing job executions and job instances with jobs. We already run multiple job instances across the cluster. We need to be able to run other processes that are scalable in the same way as the one we have defined already.
The source data all comes from databases which are known to the steps. The partitioner defines the range of data for the "where" clause in the source database and puts that in the stepContext. All of the actual work happens in the stepContext. The jobContext simply serves to spawn steps, wait for completion, and provide the control API.
There will be 0 to n jobs running concurrently, with 0 to n steps from however many jobs running on the slave VM's concurrently.
Does each master job (or step?) require its own request and reply channel, and by extension its own OutboundChannelAdapter? Or are the request and reply channels shared?
Does each master job (or step?) require its own aggregator? By implication this means each job (or step) will have its own partition handler (which may be supported by the existing codebase)
The StepLocator on the slave appears to require a single shared replyChannel across all steps, but it appears to me that the MessageChannelPartitionHandler requires a separate reply channel per step.
What I think is unclear (but I can't tell since it's unclear) is how the single reply channel is picked up by the aggregatedReplyChannel and then returned to the correct step. Of course I could be so lost I'm asking the wrong questions.
Thank you in advance
Does each master job (or step?) require its own request and reply channel, and by extension its own OutboundChannelAdapter? Or are the request and reply channels shared?
No, there is no need for that. StepExecutionRequests are identified with a correlation Id that makes it possible to distinguish them.
Does each master job (or step?) require its own aggregator? By implication this means each job (or step) will have its own partition handler (which may be supported by the existing codebase)
That should not be the case, as requests are uniquely identified with a correlation ID (similar to the previous point).
The StepLocator on the slave appears to require a single shared replyChannel across all steps, but it appears to me that the MessageChannelPartitionHandler requires a separate reply channel per step.
The MessageChannelPartitionHandler should be step or job scoped, as mentioned in the Javadoc (see the recommendation in the last note). As a side note, there was an issue with messages crossing in a previous version due to the reply channel being instance based, but it was fixed here.
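For illustration, a step-scoped MessageChannelPartitionHandler along the lines of that Javadoc recommendation might look like the sketch below; the bean wiring, the worker step name and the grid size are assumptions, not taken from the question:

    import org.springframework.batch.core.configuration.annotation.StepScope;
    import org.springframework.batch.integration.partition.MessageChannelPartitionHandler;
    import org.springframework.context.annotation.Bean;
    import org.springframework.integration.channel.QueueChannel;
    import org.springframework.integration.core.MessagingTemplate;

    @Bean
    @StepScope
    public MessageChannelPartitionHandler partitionHandler(MessagingTemplate messagingTemplate) {
        MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
        handler.setStepName("workerStep");                  // name of the remote (slave) step
        handler.setGridSize(4);                             // number of partitions to send out
        handler.setMessagingOperations(messagingTemplate);  // template bound to the shared request channel
        handler.setReplyChannel(new QueueChannel());        // instance-local reply channel per step execution
        return handler;
    }

Because the handler is step scoped, each running step execution gets its own handler instance and reply channel, while the outbound request channel stays shared.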

How to use a population of Queues

I am trying to work out how to use the "Population of agents" radio button functionality in the Advanced section of the "Queue" block from the process library.
I am able to successfully select the "Population of agents" option and specify the number of queues to be in the population; however, I am then unable to direct agents to any of the queues in the population. Ultimately, I need to send agents to specific queues in the collection (population), but I can't seem to work out how to do that.
The screenshot shows a bit more of what I am trying to achieve:
Even though you have the option to create a population of queues, there's no possible use for it, since you can't add an agent to a queue without a flow (for instance, an Enter block).
In order to make a useful population of queues, you need to create a population of agents called, for instance, QueueAgent.
In this QueueAgent you will have the flowchart
enter -> queue -> exit
Then you can choose which queue to send your agent to by using queueAgents.get(index).enter.take(agent), where index is the index of the queue to which your agent needs to be sent.
The exit block will send your agent back to an Enter block that is connected to specialtyProcesses.
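A rough sketch of that routing (the population name queueAgents, the Enter block names and the index rule are illustrative, assuming the population lives on Main):

    // in the main flowchart, when deciding where to queue the agent:
    int index = 0; // replace with your own rule for picking the queue
    queueAgents.get(index).enter.take(agent); // inject the agent into that QueueAgent's flowchart

    // in the QueueAgent's exit block, "On exit" action, hand the agent back:
    ((Main) getOwner()).enterSpecialty.take(agent); // assumed Enter block connected to specialtyProcesses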
The only blocks in which it makes sense to create a population are the blocks that create agents such as source or enter blocks.

Change priority rule and reorder queued agents in runtime using Reinforcement Learning

I am developing a model comprised of m consecutive machines in which n agents must be processed in random sequences of machines. I want to have an intelligent agent (Reinforcement Learning) to, in each action, set the priority rule to rank queued agents in each machine.
The problem I have is that I am not sure if I am correctly changing the queueing order of agents in each queue, whenever the ranking rule is changed.
After some googling, I found this post, which seems to be what I want:
Change priority rule of a Queue block at runtime in Anylogic
In this post, user Stuart Rossiter posted an interesting solution (case 2, using a Service block), which consists of sorting the agents queued in the service's embedded queue using self.queue.sortAgents().
However, AnyLogic does not recognize this expression, as when I try to use it, I get the error "queue cannot be resolved or is not a field". After some more googling, I was able to find that the embedded queue of services can be accessed through service.seize.queue; however, even through this way, the method sortAgents() cannot be used, as I get an error saying that the method is undefined.
So, I am asking how can I reorder the agents in the embedded queue of a service after changing the ranking rule in runtime?
Obviously, I am assuming that playing with the task priority of the service would not be enough, as that would only be used to rank the order of agents that arrive at the queue after the ranking rule is set, i.e., it does not update the order of jobs queued before the ranking rule is changed (this is also clearly explained by the same user Stuart Rossiter).
Thank you.

AnyLogic: two customer classes having different priorities

I know the basics of AnyLogic/Process Modeling Library and am about to teach simulation of basic queues with AnyLogic, transitioning from Simul8, which I've used for many years.
I have agents of two types, 1 and 2, sent to respective queues 1 and 2, which then feed a single "service" point, so that type 1 takes higher priority (that is, whenever service is ready to pull work, it pulls from queue 1 if it is non-empty, regardless of the size of queue 2). How to capture this as simply as possible?
Having seen the reference pages for a Queue object, my preliminary (unworked) idea is to use a single queue and control agent priority via the Queue's QUEUING_PRIORITY ("Priority-based") option.
For comparison, a solution in the Simul8 software is: set "service" routing-in discipline to "priority"; and assign different priorities to the two queues.
Yes, you are right: you can't use two queues, as the pull from the queues will be done in a round-robin fashion. See the screenshot below from the AnyLogic training textbook.
You should use a single queue, and you can have either a single parameterised source or two.
See example below
I have 2 sources, and at each of them I set the priority in a local variable inside my agent. For agents from source 1 the variable is set to 1, and for agents from source 2 it is set to 0.
Then inside the queue, I set the priority so that the agents from source 1 are always in front.
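A minimal sketch of that setup, assuming the agent type has an int variable named priority (the variable name and block wiring are illustrative):

    // Source 1, "On exit" action: high-priority customers
    agent.priority = 1;

    // Source 2, "On exit" action: low-priority customers
    agent.priority = 0;

    // Queue block: set "Queuing" to "Priority-based" and use this as the priority expression
    // (agents with higher values are placed closer to the front):
    agent.priority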

Request entity from AnyLogic Process Block and wait until it's available if there is not currently one

I'm trying to emulate what QUEST does when a buffer is queried for a certain Part: if the part is not in the buffer, the request is left pending, and if a Part arrives at the buffer, it's released to the machine requesting it. I have also seen this behavior in SimPy, which is another DES engine.
I can't seem to find a simple way to do this in AL. The queue block has the following methods:
release(agent): will return false and forget about the request if there is no agent in the queue like the one specified
remove(agent): Will return null if there's no agent in the queue
So those methods won't do what I want...
It gets a little more complicated as the queue contains agents with parameters and I want to request a specific set of parameters (let's say the agents have a number parameter that can go from 1 to 3 and I'm only interested in agents in the queue if this parameter has the value 2).
Also, there's a series of agents pulling these agents from the queue simultaneously, and I'd like a priority to be set (let's say FIFO).
So there are a couple of things that I've tried that have led me nowhere:
Using a Seize block instead of a Queue and adding the agents to the embedded queue in the Seize block -> I can't find the proper method to seize from the buffer in a different way than from a buffer block (so I moved to option 2), but Seize does have a promising "customize resource choice" option that could help with the parameter down-selection.
Using a Seize block and storing the agents in a pool as resources -> issues with dynamic creation of resources, seizing the appropriate one, etc.
Creating a queue of requests that have returned null from a queue. This sounds like overkill, but I'll look into it.
All of those appear to be a bit complex for something that is so simple in other simulation software, so I'm wondering if I'm missing something or if someone has come across this issue before.
Suggestion 1: it may help you to store the agents in the queue in a collection (or different collections, according to the parameter settings), using the queue's "On enter" and "On exit" actions.
Suggestion 2: maybe the Wait block helps you here?
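A rough sketch combining both suggestions (the Wait block partBuffer, the collections waitingParts and pendingRequests, and the Part type with its number parameter are all illustrative assumptions):

    // Wait block "partBuffer", "On enter" action:
    waitingParts.add(agent);
    if (pendingRequests.remove((Object) agent.number)) { // a machine was already waiting for this type
        waitingParts.remove(agent);
        partBuffer.free(agent); // release the part to the requester right away
    }

    // function called by a machine that needs a part with a given number, e.g. requestPart(2):
    Part requestPart(int number) {
        for (Iterator<Part> it = waitingParts.iterator(); it.hasNext();) {
            Part p = it.next();
            if (p.number == number) { // a matching part is already buffered
                it.remove();
                partBuffer.free(p); // release it from the Wait block
                return p;
            }
        }
        pendingRequests.add(number); // no match yet: leave the request pending
        return null;
    }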