I have two agent populations: agentA (evStations), whose initial locations and count are loaded from a database, and agentB (eVs), which is initially empty, with the number of agents specified by the user.
At model startup, I want to place the agentB agents at the locations of the agentA agents (exact latitude and longitude). How can I do that, given that the number of agentB is much larger than agentA?
What I have tried (based on an existing AnyLogic example), in Main > Agent actions > On startup:
for (EV ev : eVs) {
    ev.set_lat(
        selectFrom(evstations)
            .where(evstations.id.eq(ev.getIndex()))
            .firstResult(evstations.latitude)
    );
    ev.set_lon(
        selectFrom(evstations)
            .where(evstations.id.eq(ev.getIndex()))
            .firstResult(evstations.logtitude)
    );
    ev.setLocation(ev.lat, ev.lon);
}
I am not sure how to do this correctly; I think it only works if both populations have the same size.
Please advise?
Thanks
First, define all the evStations as agents; you can read them from the database. Then:
for (EV ev : eVs) {
    EvStation evs = evStations.random();
    ev.setLocation(evs);
}
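The key point is that with far more EVs than stations, each EV simply picks any one station, so several EVs end up sharing a station. A plain-Python sketch of that assignment, with made-up station data in place of the AnyLogic populations:

```python
import random

# Hypothetical stand-ins for the AnyLogic populations: each station is
# (name, latitude, longitude); there are far more EVs than stations.
stations = [("S1", 31.95, 35.93), ("S2", 31.90, 35.20), ("S3", 32.05, 35.85)]
evs = ["EV%d" % i for i in range(10)]

# Each EV independently picks a random station (like evStations.random()),
# so duplicate assignments are expected and fine.
placement = {ev: random.choice(stations) for ev in evs}

for ev, (name, lat, lon) in placement.items():
    print(ev, "->", name, lat, lon)
```

Because the choice is independent per EV, no index matching between the two populations is needed, which is exactly why the population sizes don't have to agree.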
I am trying to dynamically change a Source's arrival rate using a variable "arrivalRate" linked to a slider (see image).
However, during the simulation the initial rate remains the same, even when I change arrivalRate. I know that the arrivalRate variable is changing successfully (it is an int), but this has no effect on the Source's rate during the simulation.
Anyone have an idea what the issue is - or how to fix it?
Whenever you see the = sign before a field, it means the field is not dynamic: it is evaluated only once, at model startup or at the element's creation, and will not change throughout the simulation run unless you force it. In other words, the variable arrivalRate is read only once to assign the Source's arrival rate, and that's it.
Now if you want to change it dynamically, in the slider's Action field, write the following:
source.set_rate( arrivalRate );
I am trying to force agents of a population to exchange messages in AnyLogic. Each time agent A sends a message to B, I would like the icon of the message to move from A to B. How can I implement this?
The code Emile sent you works to move an agent from one place to another. I understand you don't want to move your two agents; instead, you want to move only a "message icon" from one to the other. For that you can create an agent type (let's call it agent "Message"), create an instance of it located at agentA, and tell it (as Emile said) to move to agentB:
messageAB.moveTo( agentB.getPosition() );
This way you'll get the effect you want.
You could also:
use a timer to move it from one place to another, or
use an event and change the position of the icon dynamically depending on how much time remains on that event, or
use a source/delay/sink combination for the same effect as in point 2
There are basically two ways to move an agent:
Jump to agent B: Instantly appears near agent B
Move to agent B: agent A moves there at a certain speed
For each one the code is respectively as follows:
agentA.jumpTo( agentB.getXYZ() );
agentA.moveTo( agentB );
Here agentA and agentB refer to the agents, which you might name differently depending on where you are in the model.
I've been given a task to set up an OpenAI Gym toy environment that can only be solved by an agent with memory. I've been given an example with two doors: at time t = 0 I'm shown either 1 or -1, and at t = 1 I can move to the correct door and open it.
Does anyone know how I would go about starting out? I want to show that A2C or PPO can solve this using an LSTM policy. How do I go about setting up the environment, etc.?
To create a new environment in the gym format, it should have the 5 functions mentioned in the gym.core file:
https://github.com/openai/gym/blob/e689f93a425d97489e590bba0a7d4518de0dcc03/gym/core.py#L11-L35
To lay this down in steps:
Define the observation space and action space for your environment, preferably using the gym.spaces module.
Write the step function, which performs the agent's action and returns a 4-tuple containing: the next observation from the environment, the reward, done (a boolean indicating whether the episode is over), and some extra info if you want.
Write a reset function for the environment that reinitialises the episode to a random start state and returns the initial observation.
These functions are enough to run an RL agent on your environment.
You can skip the render, seed and close functions if you want.
For the task you have defined, you can model the observation and action space using Discrete(2): 0 for the first door and 1 for the second door.
reset would return, in its observation, which door has the reward.
The agent would then choose either door, 0 or 1.
Then perform an environment step by calling step(action), which will return the agent's reward and the done flag set to true, signifying that the episode is over.
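The steps above can be sketched as a minimal environment. This is only an illustration of the classic gym interface (reset/step, Discrete(2) spaces); the class name and reward values are made up, and for real training you would subclass gym.Env and declare the spaces with gym.spaces.Discrete(2):

```python
import random

class TwoDoorsEnv:
    """Minimal memory task: at t=0 the agent observes a cue (0 or 1)
    telling it which door hides the reward; it must then open a door,
    so solving the task requires remembering the cue."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.correct_door = None

    def reset(self):
        # Reinitialise the episode to a random start state and
        # return the initial observation (the cue).
        self.correct_door = self.rng.randrange(2)
        return self.correct_door

    def step(self, action):
        # One decision ends the episode; the post-cue observation
        # is uninformative, so the policy needs memory of the cue.
        obs = 0
        reward = 1.0 if action == self.correct_door else 0.0
        done = True
        return obs, reward, done, {}
```

An agent that acts on the remembered cue gets reward 1.0 every episode; a memoryless agent can do no better than 0.5 on average.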
Frankly, the problem you describe seems too simple for any reinforcement learning algorithm to struggle with, but I assume you have provided it as an example.
Remembering for longer horizons is usually harder.
You can read their documentation and toy environments to understand how to create one.
I am very new to AnyLogic. I have a simple model using the Fluid Library: two tanks and a valve between them. The valve has to open at a rate, say X, only when the amount in the first tank, say tank_1, is twice the amount in the second tank, say tank_2.
Could you please help me with that?
Regards
You probably have more conditions on what to do with the valve depending on different things. But it's a bad idea to trigger something only when tank_1 is exactly 2 times larger than tank_2... Instead, create a boolean variable that tells you whether tank_1 is currently above or below 2*tank_2. Let's call it "belowTank2". I will assume tank_1 is below 2*tank_2 at the beginning of the simulation, so belowTank2 is true.
Then create an event that runs cyclically every second (or even more often if you want) and use the following code:
if (belowTank2) {
    if (tank_1.amount() > 2*tank_2.amount()) {
        valve.set_openRate(0.1);
        belowTank2 = false;
    }
} else {
    if (tank_1.amount() < 2*tank_2.amount()) {
        valve.set_openRate(0.3);
        belowTank2 = true;
    }
}
So whenever tank_1 surpasses 2*tank_2, the rate change on the valve is triggered, and another change is triggered when it drops back below 2*tank_2.
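Stripped of the AnyLogic API, the toggle pattern in the event above is a simple hysteresis check. A plain-Python sketch (the function name and the two rates are just placeholders for illustration):

```python
def update_valve(tank1_amount, tank2_amount, below_tank2):
    """Return (new_open_rate_or_None, new_below_tank2).

    Mirrors the cyclic event: the rate changes only at the moment
    tank_1 crosses the 2*tank_2 threshold, in either direction.
    """
    if below_tank2:
        if tank1_amount > 2 * tank2_amount:
            return 0.1, False  # crossed upward: change the open rate
    else:
        if tank1_amount < 2 * tank2_amount:
            return 0.3, True   # crossed back downward: change it again
    return None, below_tank2   # no crossing: leave the valve alone
```

Because the boolean flag is carried between calls, repeated checks while the condition holds do not keep re-triggering the rate change; only the crossing itself does.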
I am trying to simulate a self-sustaining population of cows, but when I dynamically add new agents to the main agent's population, the newly created agent does not appear. How can I fix it? I create agents from a mother agent with:
this.main.add_cows();
First, go to the agent where the animation exists (which is also where your agent population is defined). In most cases this is Main, so you probably want to go to Main and click on the cow population to see the "Initial location" field in its properties.
When a new agent is created, its default position is the agent animation location, which is probably somewhere outside the visible area, since we generally position the defined agents off the canvas; in your case it's the same position for all cows.
Now you have other options. You can, for instance, select a random coordinate in the space (assuming you have a 600x600 pixel square), or you can select a node (as long as a node is defined in your animation canvas).
So, to summarize, when you create a new agent in your population, you have to tell AnyLogic where you want to locate it... Otherwise, how can the software know what you want?
Ensure you set the initial location correctly; otherwise the agent might appear at a default location you do not expect. Something like:
Cow myNewCow = this.main.add_cows();
myNewCow.setXY(uniform(0, 600), uniform(0, 400));