Agent Inheritance and Population Grouping - AnyLogic

I am digging deeper into Agent Inheritance and I am still at the exploration level so my question will not be specific to an example but rather conceptual.
My objective is to create a model with an Agent Type called Machine. However, there will be different types of Machines and some may have different statecharts or different parameters. So, initially I thought it would be a good idea to create an Agent Type called Machine and then, using Agent inheritance, create Agent Types that extend from it (e.g. Machine 1, Machine 2, etc.).
The result is that if I have one machine of each type, the Machine Agent Type population will be empty, while Machine 1 and Machine 2 each will have a population of 1. I understand AnyLogic is designed that way, but ideally, I would like the Machine Agent Type population to contain 2 agents, one of type Machine 1 and the other of type Machine 2.
Agent inheritance might not be the answer, but I was hoping I could find a solution to this problem where I can have one main population with different sub-types.
You may ask why that would be needed. The answer is that all machines should have similar behavior. Comparing this to DES, it's like having different Resources: all will have similar behavior (e.g. they can be seized, released, attached, etc.), but each can be unique.
Your thoughts/suggestions would be appreciated.
Thanks!

If you want to use agent inheritance, then you would need to have two different populations. A population of type Machine will have agents of type Machine, not their child agent types. I typically deal with this by having the populations for the child agents and then storing all agents in a single list (e.g., an ArrayList called allMachines).
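For example, a minimal sketch of that list approach (the population names machines1/machines2 and the collection allMachines are assumptions for illustration), e.g. in Main's "On startup" action:

// collect all machines, whatever their concrete type, into one list
List<Machine> allMachines = new ArrayList<Machine>();
for (Machine1 m : machines1) allMachines.add(m); // population of type Machine1 extends Machine
for (Machine2 m : machines2) allMachines.add(m); // population of type Machine2 extends Machine
// now every machine can be treated uniformly through the base type
for (Machine m : allMachines) traceln(m.getName());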
You mention different statecharts, which is a good reason to use inheritance. Many people will try to inherit because one delay block takes X minutes while another takes Y. In those cases, just parameterize your one agent type. If the logical differences between these machines are small, I would consider just one class, with a few extra decision/branch blocks to get the behavior you want. It can sometimes be tricky in AnyLogic to have process blocks/visual elements in a parent connect to areas in a child - not impossible, but not as easy as pure Java code, where you can override a method and call super.function().
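As a rough illustration of that pure-Java override pattern (class and method names are made up for the example):

class Machine {
    double processingTimeMinutes() { return 5.0; }
    void onStartProcessing() { /* behavior shared by all machines */ }
}

class Machine1 extends Machine {
    @Override
    double processingTimeMinutes() { return 8.0; } // only the parameter differs
    @Override
    void onStartProcessing() {
        super.onStartProcessing(); // keep the parent behavior...
        // ...then add Machine1-specific logic here
    }
}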

Related

Modelling agent behaviour, where single agent operates under two conditions simultaneously

Any tips on how to model agent behaviour in an environment where two sets of rules apply simultaneously?
Specifically, what I am looking to simulate is a situation where an Agent operates under a specific set of rules, such as an employee-employer relationship, but at the same time, operates on perhaps different "informal" rules, such as an employee-employee relationship. Effectively there are two network structures in place, but the agent operates in both structures.
Any example models out there that I could take a look at?
(This is a model design question, not programming, so it probably belongs on the NetLogo user group instead of here.)
My colleague and I wrote a book on modeling decisions that are tradeoffs between competing objectives in ABMs. Its focus is on ecology, but the concepts could be useful for you.
The basic idea is to come up with an objective function that includes both "sets of rules" as you call them. Perhaps something like maximizing your relations with fellow employees without getting fired by the employer. Then build very simple models of how decisions affect the mechanisms that drive co-worker relations, probability of getting fired, etc. It's not simple, but it's very powerful and flexible. You won't find a simple approach that's very general.
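As a very rough sketch (plain Java, with invented names) of what such a combined objective might look like:

class Objective {
    // score a candidate action against both "sets of rules": higher co-worker
    // relations are good, a higher chance of being fired is bad; weightFired
    // tunes the trade-off between the two
    static double score(double coworkerRelations, double probFired, double weightFired) {
        return coworkerRelations - weightFired * probFired;
    }
    // the agent would then choose the candidate action with the highest score
}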
The fun part is trying different variations and comparing how they work.
https://press.princeton.edu/books/paperback/9780691195285/modeling-populations-of-adaptive-individuals
Maybe I'm under-thinking this, but... I would encode both (all) sets of rules and have the agent execute those rules as appropriate.
So, how to choose and execute rules?
Per social interaction:
1. Execute one behavior randomly based on which relationships are present.
2. Choose one or more behaviors, as in #1, and execute all of them in a specific order.
3. As above, but execute the behaviors in random order.
4. For all possible behaviors, assign a probability based on whatever factors apply (number of role-members present, utility, consequence, etc.). Choose and execute one behavior randomly based on that probability (roulette-wheel selection).
5. Choose more than one, and execute them in fixed or random order.
6. Proportionate to the value of a "social-competence" property, select a number of possible behaviors as in #4, then randomly select one of those to execute.
Here’s a code-segment example of #6
;; list of possible reactions
;; these are variables or reporters that report
;; an anonymous command that executes the behavior
let action-list (list
  boss-action
  employee-action
  coworker-action
  peer-action
)
;; measure social situation
;; list of values from reporters
;; these are reporters that return a value
;; based on, for example, how many of that type of
;; relationship are present.
let choice-list (list
  (probability-of-doing-boss-behavior)
  (probability-of-employee-behavior)
  (probability-of-coworker-behavior)
  (probability-of-peers-behavior)
)
;; think about situation, choose possible actions,
;; as many times as social-competence allows.
;; roulette-wheel should return the index of the selected action
let reaction-list (n-values social-competence
  [ -> roulette-wheel choice-list ])
;; choose an action from those options
let action-index one-of reaction-list
;; execute that action
run item action-index action-list
An approach with much the same effect as a single combined objective function might be a physics-type model where the result of any single set of rules is a vector of some strength pushing ("nudging"?) the agent in some direction. Then you could have as many sets of rules as you want, each contributing its own force vector, and the final result would be a combined net force and the subsequent Newtonian F = m*a, or, rearranging, acceleration = (vector sum of forces)/mass.
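A minimal sketch of that force-vector idea (plain Java; the class and names are invented for illustration):

class Vec2 {
    double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }

    // acceleration = (vector sum of the per-rule forces) / mass;
    // each set of rules contributes one force vector to the list
    static Vec2 acceleration(java.util.List<Vec2> ruleForces, double mass) {
        double fx = 0, fy = 0;
        for (Vec2 f : ruleForces) { fx += f.x; fy += f.y; }
        return new Vec2(fx / mass, fy / mass);
    }
}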
I'm trying to imagine how I respond when the expectations of different groups I belong to clash, and whether a linear sum vector model describes it. I recall in college when I couldn't resolve Catholic support for the Vietnam War and "Kill for Christ" was a tongue-in-cheek slogan. I think in that case the "forces" didn't cancel out -- they resulted in ABANDONING membership in one of the groups to reduce cognitive dissonance. So, not a linear sum of zero in that case.
Another unstable human approach might be to keep going back and forth when two forces attempting to dominate behavior conflict -- first going with one a few steps then feeling guilty and going the other way a few steps. So which one dominates at any given step might depend on one's recent path and history and which one you "owed" something to. Or imagine being married to two people and trying to keep both of them happy. Maybe you partition space and Monday-Tuesday-Wednesday you keep one happy and Thursday-Friday-Saturday you keep the other happy and Sunday you go fishing.

How can I use a static population of agents with a pedestrian model?

I am trying to make a model that simulates the contagion of COVID in public spaces using a mix of SEIR and pedestrian models, and I got stuck when using my population of agents with the Pedestrian Library.
Looking in the documentation about pedSource, I was able to make it add the agents it creates to the population. However, I want the agent not to be deleted when it leaves the simulation space, so that the same agent can later reappear through the entrance.
For this reason, I am using a pedEntry and a pedExit to send the agents to another space, where they wait in queues until they return to the main space of the simulation.
Is there any documentation that talks about using a static population with the pedestrian library?
You can convert your peds into normal agents if you like. You should do that as soon as you do not need the pedestrian capabilities, as they eat a lot of processing power.
Simply create a normal agent type and duplicate the parameters that you need.
Then, in the pedSink, create such an agent from the ped's characteristics before the ped is destroyed.
Pro tip: You can even use a parent agent type that your PedAgent type and your normal AgentType inherit from. It can hold all the characteristics, so no need to duplicate elements ;)
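For example, a minimal sketch of that conversion in the pedSink's action, while the ped is still available (the Person type, the persons population, and the copied "infected" parameter are assumptions for illustration):

Person p = add_persons();           // add_<populationName>() is generated by AnyLogic
p.infected = ped.infected;          // copy whatever characteristics you need
p.setXY(ped.getX(), ped.getY());    // start the new agent where the ped left off
// the ped itself is then destroyed by the pedSink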

Define agent population which is dispersed in specific areas or divided in groups?

I have one agent type (a population of 100) in Main. Is it possible to divide this population into several groups, separated from each other but still connected?
There was a suggested solution where we define this population inside another agent (containing square fields), but we would rather not have this dependency on another agent. I hope there is a solution.
Thank you for help.
Sure. There are many different options; it depends on what you want to do with them.
The easiest is to use a parameter within your agent type, maybe "gender". Create an option list "availableGenders" with entries like "FEMALE" and "MALE".
Then, when creating your agents, you can assign each a gender via that parameter.
Now, you have 1 population of humans but you can easily filter them by gender (using findAll(myPopulation, a -> a.gender.equals(MALE)) or similar)
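A minimal sketch of that filtering, using the names from the answer above ("people" is an assumed population of an agent type Human whose gender parameter uses that option list):

List<Human> males = findAll(people, h -> h.gender == availableGenders.MALE);
List<Human> females = findAll(people, h -> h.gender == availableGenders.FEMALE);
// both groups live in the same single population, so they stay connected
// and can still be iterated, linked, or visualized together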
Lots of example models use this approach, please check them to understand how it is done.

Storing parameters for rules

I am using Red Hat Decision Manager 7.1 (Drools) to create a rule for assigning a case to a department. The rule itself is quite simple; however, it requires quite a lot of parameters (~12), like the agent type, working area, case type, customer seniority, and more. The resulting "action" is the department to which the case is assigned.
I tried to place the parameters in a decision table, but the table quickly bloated to over 15,000 rows and will probably get even larger than that. I did, however, notice that in many cases the difference between two rows is only one or two parameters (e.g. the same row where the only difference is agent type "Local" vs. "Regional"), resulting in a different assignment.
I am thinking of replacing the table with something else, like a tree structure, so I can group similar rows under the same node and then navigate over the tree to make the decision. To do this I plan to prioritize the parameters and give parameters with higher priority a higher place in the tree.
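As a rough sketch (plain Java, with invented names) of the tree idea described above, where each level corresponds to one parameter, ordered by priority, and the leaves hold the department to assign:

import java.util.HashMap;
import java.util.Map;

class DecisionNode {
    Map<String, DecisionNode> children = new HashMap<>(); // one child per parameter value; "*" acts as a wildcard
    String department;                                     // set only on leaf nodes

    String decide(String[] parameterValues, int level) {
        if (department != null) return department;          // reached a leaf
        DecisionNode next = children.get(parameterValues[level]);
        if (next == null) next = children.get("*");          // group "similar rows" under a wildcard branch
        return next == null ? null : next.decide(parameterValues, level + 1);
    }
}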
Does anyone have experience with such a problem? I looked at decision trees, but they focus more on ML and probabilities, so I'm not sure this is what I need.
Is there any other method to deal with bloated tables that become unmanageable? I cannot go to our customer and ask them to maintain a 15,000-row Excel sheet. They'll shoot me there and then.
Thanks
Alon.

Moving agents with other agents using Pickup/Dropoff from PML in Anylogic without duplicate code

Info: The question was updated with more explanation
I want to transport an agent (e.g. bananas) with a moving agent (e.g. a truck) from place A to place B, where, for example, place A is where the bananas were picked and place B is some storage for the bananas. So the bananas are simply being transported by the truck. In particular, the agents to be moved (the bananas) are not resources (in the sense of the AnyLogic PML) and have no upper limit on their amount.
There are various ways to solve this problem, but most of them either require some element in the model that I don't need or want (for example, a rack/pallet system in the case of the 'Rack Store' block) or require the agents to be AnyLogic resources.
As described in this answer, it kinda makes sense to use pickup and dropoff for this task. The problem is that the agent to be moved is not being transported, so that answer does not solve my question. To explain further, when the agent to be moved (the bananas) are being dropped off at the target location (place B), they simply re-appear at their original location (place A), even though the truck which picked them up via the pickup block has moved to place B.
I made a minimal example of this here.
As I described, the 'transportation' only works if I add the separate 'moveTo1' block for the dropped off agents.
Is there any simple and obvious way to handle this rather simple transportation task in AnyLogic without having multiple move blocks or other workarounds? I know there is 'ResourceAttach', but that requires the agents to be moved to be resources, and there is 'RackStore', which requires a rack/pallet system that I don't need or want in my model.
What I want to know is what the 'standard' Anylogic way would be to do this.
Thanks a lot in advance!
Now I understand what your problem is...
When you use dropoff, the block that comes after it needs to define the new location of the agents; otherwise they stay in the same place. You can use the moveTo block with a jump so the agents are teleported to the location you want them to be:
In almost all the blocks of the PML you can define the agent location in the properties, and this is a case where using that property is necessary.
You can set the position of the bananas to the position of the truck.
For example, use agent.setXY(container.getX(), container.getY()) in the "On dropoff" field.
It seems to work for a simple test model.