Netlogo BehaviorSpace mean value after every repetition - netlogo

In BehaviorSpace I have ["concetration" 0 0.2 1] and for each value of concetration I need to run 1000 repetitions, but I want to report the mean value of ticks over those repetitions, not any other value. Does anyone know how to do this? What I want as a result is:
concetration = 0.0 <mean value of ticks after 1000 repetitions>
concetration = 0.2 <mean value of ticks after 1000 repetitions>
concetration = 0.4 <mean value of ticks after 1000 repetitions>

ticks is the number of time steps the simulation ran for. Does your simulation end naturally (like an epidemic running out), or do you stop it after a certain number of ticks? If you stop it after a fixed number of ticks, then the mean of ticks doesn't make sense.
Assuming that what you are asking is how to calculate the average number of ticks the simulation runs for before ending naturally: (1) have BehaviorSpace record only the end of each run, not every step; (2) look at the BehaviorSpace output: the variable [step] is the last step the simulation did, which is also ticks if you have the normal setup / go structure.
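BehaviorSpace itself only writes the raw output file, so the averaging happens afterwards. As a hedged sketch, here is one way to post-process the "table" output in Python, assuming the metadata lines above the column header have been stripped, that "Measure runs at every step" was off (so each run contributes one row), and that the columns are named "concetration" and "[step]" as in the experiment above:

```python
import csv
from collections import defaultdict
from io import StringIO

def mean_final_ticks(table_csv):
    """Return {concentration value: mean final tick} from BehaviorSpace table output.

    Assumes one row per run, with the final tick in the "[step]" column.
    """
    totals = defaultdict(lambda: [0.0, 0])  # key -> [sum of final ticks, run count]
    for row in csv.DictReader(StringIO(table_csv)):
        key = float(row["concetration"])
        totals[key][0] += float(row["[step]"])
        totals[key][1] += 1
    return {k: s / n for k, (s, n) in totals.items()}
```

For example, two runs at concetration 0 ending at ticks 10 and 20 would average to 15.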

Related

Netlogo: create a stopping rule considering multiple conditions

I am modelling a network of agents who interact with each other. Each agent is randomly connected to 8 others in the network. Each agent has an initial float-value between 0 and 1. If two agents' values are close enough (determined by the threshold "x" e.g. x = 0.4, v1 = 0.1, v2 = 0.3, so distance of 0.2 < x), one agent influences the other so that the values go even closer together.
However, if the difference is greater than the other threshold "y", e.g. y = 0.8, one agent influences the other in the opposite way, so that the values drift further apart. If the difference is between x and y, no influence takes place.
Now I want to create a stopping rule for my network that triggers once an equilibrium is found.
I started with: if the total number of links (connections between agents) equals the sum of (1) the number of links where the value difference is below 0.001 (so the two agents have almost the same value) and (2) the number of links with a difference > 0.9, the iterations should stop. But that is almost never the case. Is there a way to make it stop neither too late nor too early, and that works for whatever values x and y have? Thank you very much!
Here is my initial code:
if count links = (count links with [value_difference < 0.001] + count links with [value_difference > 0.9])
  [ print number-of-clusters-conti
    stop ]
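One common alternative to classifying every link as converged or polarized is to stop when no agent's value moved more than a small epsilon during the last iteration, which works regardless of x and y. A minimal Python sketch of that test (the function name and epsilon default are illustrative assumptions, not NetLogo code):

```python
def reached_equilibrium(old_values, new_values, epsilon=1e-4):
    """True if no agent's opinion value changed by more than epsilon
    between two consecutive iterations of the model."""
    return all(abs(a - b) < epsilon for a, b in zip(old_values, new_values))
```

In NetLogo terms this corresponds to remembering each agent's value at the start of go and stopping once the largest per-tick change falls below the epsilon.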

Can we code to calculate the difference of tick values in netlogo

Actually I need to calculate the density increase/decrease rate of the human population for my model; the model is the same as in my question "unable to make non-stationary turtles change their direction if an obstacle a patch ahead" (a specific area with a building in it that people randomly visit and leave). I thought I would need to save the tick values for the initial population value and, after some time difference, the updated population value. Below is the procedure I want to plot a graph for.
to density-increase-rate
  ;; store population-density at some initial time (ticks)
  ;; store updated-population-density at some later time (ticks)
  ;; calculate density-increase-rate:
  ;; ( ( ( updated-pd - previous-pd ) / ( updated-tick - previous-tick ) ) * 100 ) / 10
end
I am calculating population-density in my code as
set total-density-inside-boundary count people with [inside-boundary?]
I am very thankful for any suggestion or code help.
If you just want to plot this change, there is no need to store it because the plot will update each tick.
globals [ total-density-inside-boundary density-increase-rate ]

to calc-plot-vars
  let old-density total-density-inside-boundary
  set total-density-inside-boundary count people with [inside-boundary?]
  set density-increase-rate (total-density-inside-boundary - old-density) / 100
end
Then have a plot on the interface with plot total-density-inside-boundary and plot density-increase-rate. You may need to do some rescaling to have them both on the same plot.
If you want to have the rate based on total time, then create a variable to hold the initial value and calculate it at the specific time you think initial means (such as the end of the setup or at a specific tick).
globals [ total-density-inside-boundary initial-density ]

to setup
  ... ; commands that create your people
  set initial-density count people with [inside-boundary?]
  ...
end

to go
  ...
  if ticks = 1 [ set initial-density count people with [inside-boundary?] ]
  ...
end
Then have the rate plot in the interface have plot (total-density-inside-boundary - initial-density) / 100
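For clarity, the rate formula from the question's pseudocode can be written out directly. A hedged Python sketch (not NetLogo), keeping the * 100 / 10 scaling exactly as the question wrote it, with parameter names mirroring the pseudocode:

```python
def density_increase_rate(previous_pd, previous_tick, updated_pd, updated_tick):
    """Rate of population-density change between two snapshots,
    per the question's formula: change per tick, scaled by 100 / 10."""
    return (((updated_pd - previous_pd) / (updated_tick - previous_tick)) * 100) / 10
```

For example, going from a density of 100 at tick 0 to 120 at tick 10 gives a rate of 20.0.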

Interpretations of probabilities and percentages

Thank you very much for your help coding my model.
If you don't mind, I would like to ask about some interpretations in the code. I am sorry, I am not an expert in mathematics.
to move
  ask turtles with [gender = "male"]
    [ if (random-float 1) <= 0.025 ]
Why is it <=, and what is the interpretation of this code?
And for the percentage:
ask turtles
  [ if random 100 <= 50
    [ become-fat ] ]
The same question: why <=? If we always say 50% of the group will be fat, why do we put this sign?
And what is the difference between random and random-float?
Sorry for the disturbance.
The difference between the two primitives is that:
random gives you only integer numbers, e.g.: 0, 1, 2, 3, etc.
random-float gives you floating point numbers, e.g.: 0.0, 0.125, 0.528476587245, 3.66, etc.
Both can be used to make things happen probabilistically in NetLogo. I'll start with the use of random, which is slightly easier to understand.
Using random
As stated in the documentation, if you pass a positive number to random, it will give you a number that is greater or equal to 0, but strictly less than that number.
For example, random 2 will always give you either 0 or 1. You could use that to simulate the flipping of a coin:
ifelse random 2 = 0 [ print "heads" ] [ print "tail" ]
That will print "heads" 50% of the time (when random 2 gives you 0), and "tail" 50% of the time (when random 2 gives you 1).
Now it's easy to generalize this to probabilities expressed in percentages by using random 100 instead of random 2. I'll use an example with 50%, but it could very well be 25%, 80% or even 1% or 100%.
Now since random 100 gives you a number between 0 and 99 inclusively, it means that the first 50 numbers it can give you are: 0, 1, 2, 3... all the way to 49. And the next 50 are: 50, 51, 52, 53... all the way to 99. You can imagine a 100-sided dice labeled from 0 to 99 if you wish.
If you want your turtles to "become fat" 50% of the time, you can do:
ask turtles [ if random 100 < 50 [become-fat] ]
Notice that I used the < (strictly less) sign instead of the <= (less or equal) sign. This is because I only want the turtles to become fat if the "dice" lands on one of the first 50 faces, from 0 to 49.
(If you used random 100 <= 50, like in the code you posted above, they would actually have a 51% probability of becoming fat and a 49% probability to not become fat. You should now also be able to figure out why something like if random 100 = 50 doesn't make sense: it would only be true if the "dice" lands exactly on 50, which happens only 1% of the time.)
If you wanted your turtles to become fat only 20% of the time, you'd want to use the first 20 faces of the dice, from 0 to 19:
ask turtles [ if random 100 < 20 [become-fat] ]
It is often enough to use random 100 when dealing with probabilities in NetLogo.
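Because random 100 has exactly 100 equally likely outcomes (0 through 99), the dice argument above can be checked exhaustively rather than by sampling. A small Python illustration (Python here only because the counting is language-agnostic):

```python
# Count exactly how many of the 100 equally likely outcomes of `random 100`
# satisfy each condition from the discussion above.
outcomes = range(100)
p_strict = sum(1 for r in outcomes if r < 50) / 100   # < 50  -> 0.50 (50%)
p_leq    = sum(1 for r in outcomes if r <= 50) / 100  # <= 50 -> 0.51 (51%)
p_eq     = sum(1 for r in outcomes if r == 50) / 100  # == 50 -> 0.01 (1%)
```

This confirms the point made earlier: <= 50 quietly adds one extra winning face to the dice.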
Using random-float
Sometimes, however, you need a little more precision. And mathematically oriented work often expresses probabilities as numbers between 0.0 (for 0%) and 1.0 (for 100%). In those cases, random-float 1 comes in handy. Again, as stated in the documentation, random-float will give you a number between 0 (inclusively) and the number you pass to it (exclusively). Thus, random-float 1 gives you a number between 0.0 and 1.0 (but never exactly 1.0).
This expression:
random-float 1 < 0.025
will be true 2.5% of the time.
The dice metaphor doesn't work for random-float, but you can imagine a roulette wheel (or a wheel of fortune). Asking if random-float 1 < 0.025 is like painting a "pie slice" that's 2.5% of the circumference of the wheel, spinning the wheel, and checking if the ball (or the arrow, or whatever) falls in that slice.
Now does it matter if you use <= instead of < with random-float? Not a whole lot. It would only make a difference if the wheel falls exactly on the line that separates your pie slice from the rest of the wheel, and the probability of that happening is very very small.
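The roulette-wheel picture can also be checked numerically. A hedged sketch using Python's random.random() as a stand-in for NetLogo's random-float 1, with a fixed seed so the run is reproducible:

```python
import random

# Spin the "wheel" many times and measure how often the draw lands in the
# 2.5% slice, i.e. how often random-float 1 < 0.025 would be true.
rng = random.Random(42)  # fixed seed: same result on every run
n = 100_000
hits = sum(1 for _ in range(n) if rng.random() < 0.025)
fraction = hits / n  # should be close to 0.025
```

With 100,000 spins the observed fraction lands very close to the 2.5% target.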

Calculating time of simulation

I am doing simulations with
time step = 1.0e-7 and
total number of steps: nsteps = 1.0e8
I have to find the total time of simulation at which nsteps is reached.
Is it OK to multiply the two to get the time of simulation?
time of simulation = time step * total number of steps
time of simulation = 1.0e-7 * 1.0e8
time of simulation = 10
Is this right or wrong?
Thanks in advance.
This is a yes/no question, so:
No (or yes, depending on how accurate you need the answer to be)!
But you are really close... You need to subtract one time_step, so the answer is really:
time_of_simulation = time_step * total_number_of_steps - time_step;
You will see the reason if you consider counting seconds: start at a number and see how far you get counting one second at a time.
1, 2, 3 => three measurements, but only 2 seconds.
However, in your case, I guess you are close enough without the last subtraction, because
time of simulation = 9.9999999 is pretty close to 10
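The fencepost distinction in the answer can be made concrete: taking n steps of size dt advances the clock by n * dt, while recording n states (including the initial one) spans only (n - 1) * dt. A small Python illustration with the question's numbers:

```python
# Fencepost illustration: steps vs. recorded states.
dt = 1.0e-7
n = 10 ** 8
elapsed_counting_steps = n * dt         # approximately 10.0
elapsed_counting_states = (n - 1) * dt  # approximately 9.9999999
```

Which of the two you want depends on whether nsteps counts the steps taken or the states recorded.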

Output Value Of Neural Network Does Not Arrive To Desired Values

I made a neural network that also has backpropagation. It has 5 nodes in the input layer, 6 nodes in the hidden layer, and 1 node in the output layer, with random initial weights, and I use sigmoid as the activation function.
I have two sets of data for input, for example:
13.5 22.27 0 0 0  desired value = 0.02
7 19 4 7 2        desired value = 0.03
Now I train the network for 5000 iterations, or the iteration stops if the error value (desired - calculated output value) is less than or equal to 0.001.
The output value of the first iteration for each input set is about 60, and it decreases with each iteration.
Now the problem is that the second set of inputs (the one with desired value 0.03) causes the iteration to stop because of its calculated output value of 3.001, but the first set of inputs never reaches its desired value (0.02); its output is about 0.03.
EDIT:
I used the LMS algorithm and changed the error threshold to 0.00001 to find the correct error value, but now the output value of the last iteration for both the 0.03 and 0.02 desired values is between 0.023 and 0.027, which is still not correct.
For your error-value stop threshold, you should take the error over one epoch (the sum of the errors across your whole dataset), not just the error on one member of your dataset. You will have to increase the value of your error threshold accordingly, but it will force your neural network to do a good classification on all your examples and not only on some of them.
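As a sketch of that stopping test: sum the per-example errors over the whole dataset (one epoch) and stop only when the total falls below the threshold. The function names and the squared-error choice here are illustrative assumptions, not the asker's code:

```python
def epoch_error(desired, predicted):
    """Sum of squared errors over every example in the dataset,
    i.e. the error for one full epoch rather than one example."""
    return sum((d - p) ** 2 for d, p in zip(desired, predicted))

def should_stop(desired, predicted, threshold=1e-5):
    """Stop training only when the whole-epoch error is small enough."""
    return epoch_error(desired, predicted) <= threshold
```

With this test, one example hitting its target early cannot halt training while the other example is still far from its desired value.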