I am trying to build an agent-based model where a population of people avoids getting close to a single agent, a randomly moving VIP.
I have tried to use if (distanceTo(main.vip) < restrictedArea) moveTo(uniform(500), uniform(500));
The agent will, most of the time, move to its new random destination straight through the restricted area, which I want to avoid.
Either you use the Material Handling Library (where the transporters have built-in collision avoidance).
Or you model it yourself. For that, you need a cyclic event in your agent that constantly checks the distance to whatever other agent you are interested in. If it is below some threshold, you tell the agent to move elsewhere.
Note: the first option can be quite slow. The second is not trivial to implement, less because of coding skills and more because intelligent collision-avoidance algorithms are not trivial.
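A minimal sketch of the second option, reusing the names from the question (main.vip and restrictedArea) and the standard distanceTo()/moveTo() functions; the event would sit inside the person agent type with a cyclic trigger:

// cyclic event, e.g. every 0.5 model time units: steer away from the VIP
if (distanceTo(main.vip) < restrictedArea) {
    double x, y;
    // pick a random destination that is currently far enough from the VIP
    do {
        x = uniform(500);
        y = uniform(500);
    } while (main.vip.distanceTo(x, y) < restrictedArea);
    moveTo(x, y);
}

Note this only rejects destinations that are too close to the VIP right now; it does not plan a path around the restricted area, which is exactly the non-trivial part mentioned above.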
I am trying to simulate a finite calling population model in AnyLogic. My population consists of 10 agents and I want them to come back to the Source node after they have been served.
I thought about adding a condition with a SelectOutput block, but the Source block does not have an input port. The best thing I came up with is to just limit the number of customer arrivals to 10. However, in that case the model stops running after 10 arrivals, which is not an appropriate result.
What can I do to simulate this type of model in AnyLogic?
EDIT: I thought that making agents come back to the Source node could be a solution for building the finite calling population model. The main purpose of my question is to understand how I can build such a model in AnyLogic. Here is the description of the concept of the model.
You cannot send them back to a Source element, as it only acts to create agents.
However, you can send them back to blocks that come after the source as below:
Here, all agents created by the Source block will infinitely loop through the Queue and Delay blocks.
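A rough sketch of that flowchart (block names source, queue, and delay are placeholders; the original answer showed a screenshot):

source -> queue -> delay
             ^        |
             +--------+

The Delay's out port is connected back to the Queue's in port instead of to a Sink, so served agents re-enter the queue.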
Is there a way to make attractor choice both free & random at the same time?
The problem I have with either on its own is:
When the choice is set to Free, agents use attractors in a very predictable order based on attractor creation.
When the choice is set to Random, more than one agent uses an attractor at the same time, which I don't want.
A solution I found, but don't know how to implement correctly, is in the following thread:
AnyLogic Attractor weird behavior
I tried to create an agent type (instead of a class) 'myAttractor' with a boolean variable inside (occupied or not occupied attractor), but I don't know how to assign that agent type to actual attractors within a node - if that is even possible?
Maybe there are other solutions to customise the attractor choice to achieve complete randomness with only one agent per attractor?
Many thanks in advance,
Peter
That is a good question and often a problem on the animation side of things.
One option is to create a collection, a simple ArrayList will do, of all the attractors.
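For example (a sketch with placeholder attractor names; attractorsAvailable would be a collection of type ArrayList<Attractor> on Main, filled on startup):

// On startup of Main: register all attractors of the node as available
attractorsAvailable.add(attractor1);
attractorsAvailable.add(attractor2);
attractorsAvailable.add(attractor3);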
Then, in the Process Modeling Library (PML) block where you set up the attractor, you call a function that returns an Attractor. I supply the agent here so that we can keep track of which agent is sent to which attractor, and put the attractor back into the available pile once the agent leaves the attractor location.
Here is the getAttractor function
It gets a random available attractor and then also saves the agent that is taking it to a map
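A sketch of what getAttractor could look like (my reconstruction, not the original screenshot; it assumes the attractorsAvailable collection and the mapAgentPerAttractor map described here):

Attractor getAttractor(Agent agent) {
    // assumes at least one attractor is free, e.g. because the block's capacity equals the number of attractors
    Attractor a = attractorsAvailable.remove(uniform_discr(0, attractorsAvailable.size() - 1));
    // remember which agent took this attractor so it can be freed later
    mapAgentPerAttractor.put(agent, a);
    return a;
}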
Here is the setup for the map mapAgentPerAttractor
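In plain Java it could be declared like this (in AnyLogic you would typically use a Collection element with collection class LinkedHashMap, key type Agent, and value type Attractor):

LinkedHashMap<Agent, Attractor> mapAgentPerAttractor = new LinkedHashMap<Agent, Attractor>();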
If you want to free the attractor, you can simply call this at any point where the attractor is freed:
attractorsAvailable.add(mapAgentPerAttractor.get(agent));
mapAgentPerAttractor.remove(agent);
Here is the final result, as well as a comparison where we are replicating the problem you described.
One can see that in the bottom node only 8 dots are visible, as some agents are on the same attractor...
My understanding is that attractors within nodes should have a capacity of 1, in the sense that in a 3D animation, there should only be one agent per attractor.
When I run the model, I am seeing two agent shapes on the same attractor while other attractors are empty.
Is this normal behavior? Is there a way to prevent this from happening?
Note that this is not happening all the time, but as the model is running, sometimes agents go to empty attractors, while other times they go to one that already has an agent.
This is totally possible... and how to avoid it depends on your model in particular.
The attractors ONLY define where your agents are going to be inside a node, and there's no rule that states that you can't have many agents in the same attractor.
AnyLogic sends the agents to the attractors in the order in which they are created. If you have 10 attractors and 10 agents initially went to them, and agent 3 then leaves its attractor, the next arriving agent will not go to attractor 3; instead it will go to attractor 1, the one after that to attractor 2, and so on, following the same order...
If you want to avoid this in general, you should specify explicitly which attractor your agent should go to.
What I do is create a class, associate it with the attractor, and set up a variable that defines whether the attractor is occupied or not...
Then you need to create additional logic to send the agents somewhere else if all the attractors are occupied.
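A minimal sketch of that idea (all names here are placeholders, not from the original model), using AnyLogic's uniform_discr() to pick a random free entry:

// Pairs an attractor with an occupancy flag
class AttractorSlot {
    Attractor attractor;
    boolean occupied = false;
    AttractorSlot(Attractor attractor) { this.attractor = attractor; }
}

// Returns a random free slot and marks it occupied, or null if all are taken
AttractorSlot takeFreeSlot(List<AttractorSlot> slots) {
    List<AttractorSlot> free = new ArrayList<AttractorSlot>();
    for (AttractorSlot s : slots)
        if (!s.occupied) free.add(s);
    if (free.isEmpty()) return null; // all occupied: send the agent somewhere else
    AttractorSlot chosen = free.get(uniform_discr(0, free.size() - 1));
    chosen.occupied = true;
    return chosen;
}

When the agent leaves, set occupied back to false on its slot.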
I have an AnyLogic simulation model using trucks and forklifts as agents (Transporter type), and among other things I would like to identify each time one of them comes within a certain distance of another one (for example, within 5 meters). I will record this count as a variable within the main space of the model. I am using path-guided navigation.
I have seen a method "agentsInRange" which will probably be able to do the trick, but am not sure where to call this from. I assume I should be able to use the AL functionality of "Min distance to obstacle" (TransporterFleet) and "Collision detection timeout" (TransporterControl)?
Thanks in advance!
Since there don't seem to be pre-built functions for this, afaik, the easiest way is to:
add an int variable counter to your transporter agent type
add an event checkCollision to your transporter type that triggers every second or so
in the event, loop across the entire population of transporters and count the number that are closer than X meters (use distanceTo(otherTransporter) and write your own custom code; a sketch follows below)
add that number to counter
Note that this might be very inefficient computationally as it is quite brute-force. But might be good enough :)
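A sketch of what the checkCollision event action could look like (assuming the transporters live in a population called transporters on Main, reachable as main.transporters, and a threshold of 5 in your model's distance units; adjust names to your own model):

// action of the cyclic event "checkCollision" inside the transporter agent type
for (Transporter other : main.transporters) { // "Transporter" and "transporters" are assumed names
    if (other == this) continue;              // don't compare a transporter with itself
    if (distanceTo(other) < 5)                // 5 = proximity threshold in model distance units
        counter++;
}

Note that this counts once per check for as long as two transporters stay in range; counting only new encounters would need extra bookkeeping.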
I am thinking of implementing a learning strategy for different types of agents in my model. To be honest, I still do not know what kind of questions I should ask first or where to start.
I have two types of agents that I want to learn by experience; they have a pool of actions, each of which has a different reward based on specific situations that might happen.
I am new to reinforcement learning methods, so any suggestions on what kind of questions I should ask myself are welcome :)
Here is how I am going about formulating my problem:
Agents have a lifetime and they keep track of a few things that matter to them; these indicators are different for different agents, for example, one agent wants to increase A, another wants B more than A.
States are points in an agent's lifetime where it has more than one option (I do not have a clear definition for states, as they might happen a few times or not happen at all, because agents move around and might never face a given situation).
The reward is an increase or decrease in an indicator that agents can get from an action in a specific state, and an agent does not know what the gain would have been had it chosen another action.
The gain is not constant, the states are not well defined, and there is no formal transition from one state to another.
For example, an agent can decide to share with one of the co-located agents (Action 1) or with all of the agents at the same location (Action 2). If certain conditions hold true, Action 1 will be more rewarding for that agent, while in other conditions Action 2 will have the higher reward. My problem is that I have not seen any example with unknown rewards, since sharing in this scenario also depends on the other agents' characteristics (which affect the conditions of the reward system), and in different states it will be different.
In my model there is no relationship between the action and the following state, and that makes me wonder if it's OK to think about RL in this situation at all.
What I am looking to optimize here is the ability of my agents to reason about the current situation in a better way, and not only respond to their needs, which are triggered by their internal states. They have a few personalities which can define their long-term goals and can affect their decision making in different situations, but I want them to remember which action in a situation helped them advance their preferred long-term goal.
In my model there is no relationship between the action and the following state, and that makes me wonder if it's OK to think about RL in this situation at all.
This seems strange. What do actions do if not change state? Note that agents don't necessarily have to know how their actions will change their state. Similarly, actions could change the state imperfectly (a robot's treads could skid so it doesn't actually move when it tries to). In fact, some algorithms are specifically designed for this uncertainty.
In any case, even if the agents are moving around the state space without having any control, it can still learn the rewards for the different states. Indeed, many RL algorithms involve moving around the state space semi-randomly to figure out what the rewards are.
I do not have a clear definition for states, as they might happen a few times or not happen at all, because agents move around and might never face a given situation.
You might consider expanding what goes into what you consider to be a "state". For instance, the position seems like it should definitely go into the variables identifying a state. Not all states need to have rewards (although good RL algorithms typically infer a measure of goodness of neutral states).
I would recommend clearly defining the variables that determine an agent's state. For instance, the state space could be current-patch X internal-variable-value X other-agents-present. In the simplest case, the agent can observe all of the variables that make up their state. However, there are algorithms that don't require this. An agent should always be in a state, even if the state has no reward value.
Now, concerning unknown reward. That's actually totally okay. Reward can be a random variable. In that case, a simple way to apply standard RL algorithms would be to use the expected value of the variable when making decisions. If the distribution is unknown, then the algorithm could just use the mean of the rewards observed so far.
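For illustration only (plain Java, all names hypothetical), a sample-average estimate of the expected reward per state-action pair could look like this; the agent keeps a running mean of the rewards it has observed and picks the action with the highest estimate:

import java.util.HashMap;
import java.util.Map;

// Sample-average action values: Q(s,a) = mean of rewards observed so far for that state-action pair
class MeanRewardLearner {
    private final Map<String, Double> qValue = new HashMap<>(); // key "state|action" -> mean reward
    private final Map<String, Integer> count = new HashMap<>(); // key -> number of observations

    void observe(String state, String action, double reward) {
        String key = state + "|" + action;
        int n = count.getOrDefault(key, 0) + 1;
        double mean = qValue.getOrDefault(key, 0.0);
        qValue.put(key, mean + (reward - mean) / n); // incremental mean update
        count.put(key, n);
    }

    // Picks the action with the highest estimated mean reward in this state (unseen actions count as 0)
    String bestAction(String state, String[] actions) {
        String best = actions[0];
        double bestQ = qValue.getOrDefault(state + "|" + best, 0.0);
        for (String a : actions) {
            double q = qValue.getOrDefault(state + "|" + a, 0.0);
            if (q > bestQ) { bestQ = q; best = a; }
        }
        return best;
    }
}

In practice you would also add some exploration, e.g. picking a random action with a small probability, so the estimates are not frozen on the first action that happened to pay off.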
Alternatively, you could include the variables that determine the reward in the definition of the state. That way, if the reward changes, then the agent is literally in a different state. For example, suppose a robot is on top of a building. It needs to get to the top of the building in front of it. If it just moves forward, it falls to the ground. Thus, that state has a very low reward. However, if it first places a plank that goes from one building to the other, and then moves forward, the reward changes. To represent this, we could include plank-in-place as a variable, so that putting the plank in place actually changes the robot's current state and the state that would result from moving forward. Thus, the reward itself has not changed; it's just associated with a different state.
Hopefully this helps!
UPDATE 2/7/2018: A recent upvote reminded me of the existence of this question. In the years since it was asked, I've actually dived into RL in NetLogo to a much greater extent. In particular, I've made a Python extension for NetLogo, primarily to make it easier to integrate machine learning algorithms with the model. One of the demos of the extension trains a collection of agents using deep Q-learning as the model runs.