I'm trying to simulate, in MATLAB, the traffic of a network with 14 nodes and 21 links. Among other functions, I have one called "New_Connection" and another called "Close_connection".
For now I have implemented the traffic using a 'for' loop. In each iteration, "New_Connection" is called; it randomly chooses a source node, a destination node and a random duration (currently an integer value). The connection may or may not be established (it can be blocked).
Then the "Close_connection" function is called, which checks all connection times (stored in an array) and closes every connection whose time has reached 0.
Finally, before the end of the loop, one time unit is subtracted from all established connections except the last one.
What I would like is to run this simulation over a continuous time span (e.g. 1 minute), during which any node can establish a new connection at any instant. For example:
t=0.000134 s ---- Node1 to Node8
t=0.003024 s ---- Node12 to Node11
t=0.003799 s ---- Node6 to Node3
.
.
.
t=59.341432 s ---- Node1 to Node4
And the "Close_Connection" function considers these time to close connections.
I have searched for information on Simulink, SimEvents, Parallel computing, Discrete event simulation... but I can not really understand the functioning.
Thank you very much in advance and apologies for my English.
You don't need to use a complex framework like SimEvents. For simple tasks you can write your own event queue. The following code implements a simple scenario: create a new connection every T = Uniform(0,10) seconds and delete each connection after 10 s.
%max duration
SIMTIME=60;
T=0;
NODES=[1:20];
%Constructor for new events. 'Command' is a string, 'data' gives the parameters
MAKEEVENT=@(t,c,d)(struct('time',t,'command',c,'data',{d}));
%create event to end simulation
QUEUE(1)=MAKEEVENT(SIMTIME,'ENDSIM',[]);
%create initial event to create the first connection
QUEUE(end+1)=MAKEEVENT(0,'PRODUCECONNECTION',[]);
RUN=true;
while RUN
    %pop the event with the smallest timestamp from the queue
    [nT,cevent]=min([QUEUE.time]);
    assert(nT>=T,'event was created for the past')
    T=nT;
    EVENT=QUEUE(cevent);
    QUEUE(cevent)=[];
    fprintf('T=%f\n',T)
    switch (EVENT.command)
        case 'ENDSIM'
            %maybe collect data here
            RUN=false;
        case 'PRODUCECONNECTION'
            %standard producer pattern:
            %create a connection between two random nodes every Uniform(0,10) seconds
            next=rand*10;
            QUEUE(end+1)=MAKEEVENT(T+next,'PRODUCECONNECTION',[]);
            R=randperm(size(NODES,2));
            first=NODES(R(1));
            second=NODES(R(2));
            fprintf('CONNECT NODE %d and %d\n',first,second)
            %connection will last for 10 s
            QUEUE(end+1)=MAKEEVENT(T+10,'RELEASECONNECTION',{first,second});
        case 'RELEASECONNECTION'
            first=EVENT.data{1};
            second=EVENT.data{2};
            fprintf('DISCONNECT NODE %d and %d\n',first,second)
    end
end
I'm trying to use pytransitions to implement retransmit logic from an initialization state. The summary is that, while in the init state, if the other party hasn't responded after 1 second, the packet is resent. This is very similar to what I see here: https://github.com/pytransitions/transitions/pull/461
I tried this patch, and even though I see the timeouts/failures happening, my callback is only called the first time. This is true with before/after and on_enter/exit. No matter what I've tried, I can't get the retransmit to occur again. Any ideas?
Even though this question is a bit dated, I'd like to post an answer since Retry states have been added to transitions in release 0.9.
Retry itself only counts how often a state has been re-entered, meaning that the counter increases when a transition's source and destination are equal and resets otherwise. It is entirely passive and needs another means to trigger events. The Timeout state extension is commonly used together with Retry to achieve this. In the example below, a state machine is decorated with the Retry and Timeout state extensions, which allows a couple of extra keywords in state definitions:
timeout - time in seconds before a timeout is triggered after a state has been entered
on_timeout - the callback(s) called when the timeout is triggered
retries - the number of retries before failure callbacks are called when a state is re-entered
on_failure - the callback(s) called when the re-entrance counter reaches retries
The example will re-enter 'pinging' unless a randomly generated number between 0 and 1 is larger than 0.8. This can be interpreted as a server that answers roughly only every fifth request. When you execute the example, the number of retries required to reach 'initialized' can vary, and initialization can even fail when the retry limit is reached.
from transitions import Machine
from transitions.extensions.states import add_state_features, Retry, Timeout
import random
import time
# create a custom machine with state extension features and also
# add enter callbacks for the states 'pinging', 'initialized' and 'init_failed'
@add_state_features(Retry, Timeout)
class RetryMachine(Machine):

    def on_enter_pinging(self):
        print("pinging server...")
        if random.random() > 0.8:
            self.to_initialized()

    def on_enter_initialized(self):
        print("server answered")

    def on_enter_init_failed(self):
        print("server did not answer!")


states = ["init",
          {"name": "pinging",
           "timeout": 0.5,  # after 0.5s we assume the "server" won't answer
           "on_timeout": "to_pinging",  # when the timeout hits, enter 'pinging' again
           "retries": 3,  # three pinging attempts will be conducted
           "on_failure": "to_init_failed"},
          "initialized",
          "init_failed"]

# we don't pass a model to the machine which will result in the machine
# itself acting as a model; if we add another model, the 'on_enter_<state>'
# methods must be defined on the model and not on the machine
m = RetryMachine(states=states, initial="init")
assert m.is_init()
m.to_pinging()
while m.is_pinging():
    time.sleep(0.2)
I have the following closed loop process model consisting of trucks moving between three stations (the three service blocks):
The three resource pools have a capacity of 1 and...:
"shovel" is assigned to the "cLoad" service block
"loadingPersonnel" is assigned to the "bLoad_unload" service block
"dcPeronnel" is assigned to "dcUnload"
Twelve trucks are generated, and they operate in the closed loop until the simulation clock finishes. The truck agent class has a "loadingCap" parameter with a value of 17 (data type double) and a "ballast" variable with an initial value of 0 (data type double, access public):
In the "on enter" action field of the "cornfieldBakery" moveTo block from the first image, the following Java code is inserted to increment the ballast variable by the loadingCap parameter:
agent.ballast += agent.loadingCap;
traceln("the current ballast of the leaving truck is: " + agent.ballast);
When I run the simulation I get the following in the console after the first truck enters the moveTo block:
"the current ballast of the leaving truck is: 17.0"
However, when I inspect the 1st agent instance during the simulation run, the variable ballast still remains at 0:
Is there any explanation for why the traceln() value and the "ballast" variable of the 1st instance differ from each other?
I have a model with a queue and two machines, one of which is used only when the queue in front of these resources becomes overcrowded.
My model has a simple Queue and a Delay block, and I tried to change the Delay capacity based on the preceding queue length using a function like this (written in the Delay block's capacity field):
if (queue.size() > 5)
    return 2;
else
    return 1;
But it doesn't seem to work... is it possible to change the number of resources dynamically based on a condition?
The capacity value in the Delay block is only evaluated at the beginning of the simulation, so it can only be treated as the initial value.
To change the capacity later, you can put some code in the "on enter" and "on exit" actions of the Queue block:
delay.set_capacity(queue.size() > 5 ? 2 : 1);
Something like that.
I've been given a task to set up an OpenAI Gym toy environment which can only be solved by an agent with memory. I've been given an example with two doors: at time t = 0 I'm shown either 1 or -1, and at t = 1 I can move to the correct door and open it.
Does anyone know how I would go about starting out? I want to show that A2C or PPO can solve this using an LSTM policy. How do I go about setting up the environment, etc.?
To create a new environment in the Gym format, it should have the 5 functions mentioned in the gym.core file.
https://github.com/openai/gym/blob/e689f93a425d97489e590bba0a7d4518de0dcc03/gym/core.py#L11-L35
To lay this down in steps:
Define observation space and action space for your environment, preferably using gym.spaces module.
Write the step function, which performs the agent's action and returns a 4-tuple containing: the next observation from the environment, the reward, done (a boolean indicating whether the episode is over), and some extra info if you want.
Write a reset function for the environment to reinitialise the episode to a random start state and return the initial observation.
These functions are enough to be able to run an RL agent on your environment.
You can skip the render, seed and close functions if you want.
For the task you have defined, you can model both the observation space and the action space using Discrete(2): 0 for the first door and 1 for the second door.
reset would return, as its observation, which door has the reward.
The agent would then choose one of the doors, 0 or 1.
Then perform an environment step by calling step(action), which will return the agent's reward and the done flag set to true, signifying that the episode is over.
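Below is a minimal sketch of such an environment, assuming the classic Gym API (step returning a 4-tuple); the class name TwoDoorMemoryEnv and the reward values are illustrative choices, not something from the question.
import random

import gym
from gym import spaces


class TwoDoorMemoryEnv(gym.Env):
    """Two-door task: reset() reveals which door hides the reward,
    and the agent is rewarded for opening that door in step()."""

    def __init__(self):
        self.observation_space = spaces.Discrete(2)  # cue: door 0 or door 1
        self.action_space = spaces.Discrete(2)       # action: open door 0 or door 1
        self.correct_door = 0

    def reset(self):
        # Start a new episode: pick the rewarded door at random and
        # return it as the initial observation (the cue to remember).
        self.correct_door = random.randint(0, 1)
        return self.correct_door

    def step(self, action):
        # Single-step episode: reward 1 for opening the correct door, 0 otherwise.
        reward = 1.0 if action == self.correct_door else 0.0
        done = True
        info = {}
        return self.correct_door, reward, done, info


# Quick sanity check of the environment loop.
env = TwoDoorMemoryEnv()
obs = env.reset()                        # e.g. 1 -> the reward is behind door 1
obs, reward, done, info = env.step(obs)  # opening the cued door yields reward 1.0
Note that in this one-step form the cue is still visible when the agent acts; to actually require memory, as in the two-door example with t = 0 and t = 1, you would hide the cue in a second step and only then let the agent open a door.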
Frankly, the problem you describe seems too simple for any reinforcement learning algorithm to have trouble with, but I assume you have provided it only as an example.
Remembering for longer horizons is usually harder.
You can read their documentation and toy environments to understand how to create one.
When an out-of-memory error is raised in a parfor, is there any way to kill only one Matlab slave to free some memory instead of having the entire script terminate?
Here is what happens by default when an out-of-memory error occurs in a parfor: the script terminates, as shown in the screenshot below.
I wish there were a way to kill just one slave (i.e. remove a worker from the parpool), or to stop using it, so as to release as much memory as possible from it:
If you get an out-of-memory error in the master process, there is no way to fix this. For an out-of-memory error on a slave, this should do it:
The simple idea of the code: restart the parfor again and again with the missing data until you have all the results. If one iteration fails, a flag (file) is written which lets all other iterations throw an error as soon as the first error has occurred. This way we get "out of the loop" without wasting time producing further out-of-memory errors.
%Your intended iterator
iterator=1:10;
%flags which indicate what succeeded
succeeded=false(size(iterator));
%result array
result=nan(size(iterator));
FLAG='ANY_WORKER_CRASHED';
while ~all(succeeded)
    fprintf('Another try\n')
    %determine which iterations should be done
    todo=iterator(~succeeded);
    %initialize array for the remaining results
    partresult=nan(size(todo));
    %initialize flags which indicate which iterations succeeded (we can not
    %throw errors, that would throw away the results)
    partsucceeded=false(size(todo));
    %flag file indicates that some worker crashed. Have to use a file-based
    %solution, don't know a better one.
    if exist(FLAG,'file')
        delete(FLAG);
    end
    try
        parfor falseindex=1:sum(~succeeded)
            realindex=todo(falseindex);
            try
                % The flag is used to let all other workers jump out of the
                % loop as soon as one calculation has crashed.
                if exist(FLAG,'file')
                    error('some other worker crashed');
                end
                % insert your code here
                %dummy code which randomly throws an exception
                if rand<.5
                    error('hit out of memory')
                end
                partresult(falseindex)=realindex*2;
                % End of user code
                partsucceeded(falseindex)=true;
                fprintf('trying to run %d and succeeded\n',realindex)
            catch ME
                % catch errors within workers to preserve the work already done
                partresult(falseindex)=nan;
                partsucceeded(falseindex)=false;
                fprintf('trying to run %d but it failed\n',realindex)
                %write the flag file so all other workers bail out early
                fclose(fopen(FLAG,'w'));
            end
        end
    catch
        %reduce the pool size by 1
        newsize = matlabpool('size')-1;
        matlabpool close
        matlabpool(newsize)
    end
    %put the results of the current pass into the full result
    result(~succeeded)=partresult;
    succeeded(~succeeded)=partsucceeded;
end
After quite a bit of research, and a lot of trial and error, I think I may have a decent, compact answer. What you're going to do is:
Declare some max memory value. You can set it dynamically using the MATLAB function memory, but I like to set it directly.
Call memory inside your parfor loop, which returns the memory information for that particular worker.
If the memory used by the worker exceeds the threshold, cancel the task that worker was working on. Now, here it gets a bit tricky. Depending on the way you're using parfor, you'll need to delete or cancel either the task or the worker. I've verified that it works with the code below when there is one task per worker, on a remote cluster.
Insert the following code at the beginning of your parfor contents. Tweak as necessary.
memLimit = 280000000; % This doesn't have to be in the parfor. Everything else does.
memData = memory;
if memData.MemUsedMATLAB > memLimit
    task = getCurrentTask();
    cancel(task);
end
Enjoy! (Fun question, by the way.)
One other option to consider: since R2013b, you can open a parallel pool with 'SpmdEnabled' set to false - this allows MATLAB worker processes to die without the whole pool being shut down - see the doc here: http://www.mathworks.co.uk/help/distcomp/parpool.html. Of course, you still need to arrange somehow to shut down the workers.