Is it possible to copy the current state of the statechart from one agent to another? - AnyLogic

I am trying to make a model that simulates the spread of COVID in public spaces using a mix of SEIR and pedestrian models.
In another question I asked how to use a static population. The suggestion was to save a copy of each agent in a list before deleting it, and once the first X agents have been generated, have the pedSource take the next agent from that list instead of creating a new one.
Currently I take a random agent from the list and, if it is infected, send a message to the newly generated agent so that it enters the infected state. But doing that resets the recovery timeout every time an agent enters the zone I am modeling.
This is the code that currently runs in the pedSource's "on exit" action:
if (personasEnCasa.size() + personasEnSuper.size() > poblacionMaxima) {
    // pick a random agent that is currently outside the zone
    Persona p = randomFrom(personasEnCasa);
    // if that agent is infectious, tell the newly generated ped to become infected
    if (p.statechart.getState() == Persona.Infeccioso) {
        send("Contagiado", ped);
    }
    personasEnCasa.remove(p);
}
personasEnSuper is my population of Persona, personasEnCasa is my list of agents outside the zone, and poblacionMaxima is the maximum number of agents in the list and the population combined.
I would like to copy the current statechart state of the agent in the list to the agent that my pedSource generates, or to use something similar to pedSource.inject() but inserting an agent from the list instead of a new one. I do not know how to do either.
Is there any way to do this?

Your ped already exists and you don't need to copy it; you can just move it back into the flow, with pedWait being any pedestrian block that you want. So instead of send("Contagiado", ped); you would do enter.take(ped); (where enter is an Enter block feeding that pedestrian block).
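One way to read this suggestion, as a rough sketch only (assuming enter is such an Enter block and that you want to reuse the stored agent p rather than the freshly generated ped), is to change the "on exit" code to:

if (personasEnCasa.size() + personasEnSuper.size() > poblacionMaxima) {
    Persona p = randomFrom(personasEnCasa);
    personasEnCasa.remove(p);
    enter.take(p);   // p keeps its statechart state and its running timeouts
}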
But if you insist on using send, then you can use branches in your statechart to define where this ped goes:
in that case, before the send, set ped.infectious = true;, and the condition in the branch would be infectious == true to move to the infectious state.
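Putting that together, a minimal sketch of the pedSource "on exit" code (assuming Persona has a boolean parameter infectious that the statechart branch checks) might look like:

if (personasEnCasa.size() + personasEnSuper.size() > poblacionMaxima) {
    Persona p = randomFrom(personasEnCasa);
    if (p.statechart.getState().equals(Persona.Infeccioso)) {
        ped.infectious = true;       // flag checked by the branch condition
        send("Contagiado", ped);     // message that triggers the transition
    }
    personasEnCasa.remove(p);
}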
As a side note, instead of p.statechart.getState() == Persona.Infeccioso you should use p.statechart.getState().equals(Persona.Infeccioso).
Use == only with primitives such as boolean, int, and double; otherwise you are susceptible to errors that are very difficult to discover.

Related

AnyLogic: How do you make a random agent move from one state to another?

I want to randomly select a person (agent) that is in state1 and instruct this random agent to move to state2. I also want to change the random agent's var1 (variable) value to true.
I think that I should use randomWhere(population,condition) to select the random agent, but I do not know how to code it.
Assume you have an agent type MyAgentType with a statechart statechart and a message-based transition between state1 and state2 that triggers on the String "change", and that the agents live in a population myPopulation. Then you can do:
MyAgentType agentInState1 = randomWhere(myPopulation, p -> p.statechart.isStateActive(MyAgentType.state1)); // pick a random agent currently in state1
agentInState1.statechart.fireTransition("change"); // trigger the message-based transition to state2
agentInState1.var1 = true; // update the variable on that same agent
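Note that randomWhere returns null if no agent currently satisfies the condition, so if that can happen in your model, a small guard (sketch) avoids a NullPointerException:

MyAgentType agentInState1 = randomWhere(myPopulation, p -> p.statechart.isStateActive(MyAgentType.state1));
if (agentInState1 != null) {
    agentInState1.statechart.fireTransition("change");
    agentInState1.var1 = true;
}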

AnyLogic: detect that an agent stopped on a conveyor

Is there any easy way to detect that an agent stopped on a roller conveyor because there are other agents ahead? I tried using a dynamic variable and a conditional event, but it consumes too much performance. I don't want to check the condition dynamically (with a cyclic event) because I have had bad experiences with that.
Thanks!
Try this:
Create a LinkedList<YourAgentType> agentsOnConveyor.
Whenever an agent enters the conveyor, call agentsOnConveyor.addLast(agent). When it leaves, call agentsOnConveyor.removeFirst().
Now you have a linked representation of the agents on the conveyor.
Last, use the "on stopped" field of the conveyor and pass the message "upstream" through the linked list, i.e. use a for loop from the first to the last entry and send each of them a message such as "conveyor stopped".
You may still need to check whether the individual agent is moving itself or has also stopped, but it should put you on a working path. A sketch of this bookkeeping follows below.
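A minimal sketch, assuming an agent type Box travelling on the conveyor, a collection agentsOnConveyor of type LinkedList<Box> in Main, and that Box reacts to the message "conveyor stopped" (all names are illustrative):

// "On enter" action of the conveyor: the new agent joins the back of the line
agentsOnConveyor.addLast(agent);

// "On exit" action of the conveyor: the leading agent leaves first
agentsOnConveyor.removeFirst();

// "On stopped" action of the conveyor: notify every agent currently on it
for (Box b : agentsOnConveyor) {
    send("conveyor stopped", b);   // each Box then checks whether it had already stopped
}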

AnyLogic Message Animation

I am trying to force agents of a population to exchange messages in AnyLogic. I would like, each time agent A sends a message to B, the icon of the message to move from A to B. How can I implement this?
The code Emile sent you works to move an agent from one place to another. I understand you don't want to move your two agents; instead you want to move only a "message icon" from one to the other. For that you can create an agent type (let's call it Message), create an instance, locate it at agentA, and tell it (as Emile said) to move to agentB with messageAB.moveTo(agentB.getPosition()); this way you'll get the effect you want.
You could also:
use a timer to move it from one place to another, or
use an event and change the position of the icon dynamically depending on how much time remains on that event, or
use a source/delay/sink for the same purpose as in point 2.
There are basically two ways to move an agent:
Jump to agent B: it instantly appears near agent B
Move to agent B at a certain speed
For each one the code is respectively as follows:
agentA.jumpTo( agentB.getXYZ() );
agentA.moveTo( agentB );
Where agentA and agentB refer to the agents which you might call differently depending where you are in the model.
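A minimal sketch combining these ideas, assuming a population messages of an agent type Message living in Main (the population name and the message text are illustrative):

// when agentA sends a message to agentB:
Message icon = add_messages();      // create the visual "message" agent in the assumed population
icon.jumpTo(agentA.getXYZ());       // start the icon at the sender's position
icon.moveTo(agentB);                // animate it towards the receiver
send("hello", agentB);              // the actual model message is delivered instantly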

Setting up OpenAI Gym

I've been given a task to set up an OpenAI Gym toy environment which can only be solved by an agent with memory. I've been given an example with two doors: at time t = 0 I'm shown either 1 or -1. At t = 1 I can move to the correct door and open it.
Does anyone know how I would go about starting out? I want to show that A2C or PPO can solve this using an LSTM policy. How do I go about setting up the environment, etc.?
To create a new environment in gym format, it should have the 5 functions mentioned in the gym.core file.
https://github.com/openai/gym/blob/e689f93a425d97489e590bba0a7d4518de0dcc03/gym/core.py#L11-L35
To lay this down in steps:
Define observation space and action space for your environment, preferably using gym.spaces module.
Write the step function, which performs the agent's action and returns a 4-tuple containing: the next observation from the environment, the reward, done (a boolean indicating whether the episode is over), and some extra info if you want.
Write a reset function for the environment to reinitialise the episode to a random start state; it returns the initial observation.
These functions are enough to be able to run an RL agent on your environment.
You can skip the render, seed and close functions if you want.
For the task you have defined, you can model both the observation space and the action space with Discrete(2): 0 for the first door and 1 for the second door.
reset would return, in its observation, which door has the reward.
The agent would then choose one of the doors, 0 or 1.
Then perform an environment step by calling step(action), which will return the agent's reward and a done flag set to true, signifying that the episode is over. A sketch of such an environment follows below.
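A minimal sketch of such an environment using the classic (pre-Gymnasium) gym API, where all names and the exact observation encoding are illustrative:

import random

import gym
from gym import spaces


class TwoDoorsEnv(gym.Env):
    """Toy memory task: the observation returned by reset reveals the rewarded door."""

    def __init__(self):
        self.observation_space = spaces.Discrete(2)  # hint: which door pays off (0 or 1)
        self.action_space = spaces.Discrete(2)       # open door 0 or door 1
        self.correct_door = 0

    def reset(self):
        self.correct_door = random.randint(0, 1)
        return self.correct_door                     # the hint shown at t = 0

    def step(self, action):
        reward = 1.0 if action == self.correct_door else 0.0
        return 0, reward, True, {}                   # observation, reward, done, info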
Frankly, the problem you describe seems too simple for any reinforcement learning algorithm to struggle with, but I assume you have provided it only as an example.
Remembering over longer horizons is usually harder.
You can read their documentation and toy environments to understand how to create one.

Difficult to find the current location of agents in an AnyLogic simulation

I built a simple model of pedestrian movement from a start line towards a target line. I want to find the number of moving agents in some area using the XY coordinates (from X = 150 to X = 350; Y is the same).
The action of the event is to get the count of agents in that area and set the value of the variable crowd1:
crowd1 = count(agents(), p -> p.getX() > 150 && p.getX() < 350);
The problem is that it's always 0, even though the agents are moving in the simulation.
There are no agents in your environment because you haven't created any agent type... For your code to work you need a population of pedestrians registered in your environment (meaning that you have to create the agent type and add it to Main as a population), and then you have to add the agents created by pedSource to a custom population...
Otherwise, you can use this code:
count(pedGoTo.getPeds(), p -> p.getX() > 150 && p.getX() < 350)
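If you do go the custom-population route instead, a sketch (assuming pedSource is configured to add its new pedestrians to a custom population peds of your pedestrian agent type) would make the event action:

// count the pedestrians from the assumed custom population "peds"
// that are currently inside the band 150 < X < 350
crowd1 = count(peds, p -> p.getX() > 150 && p.getX() < 350);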