I am having a problem working with BehaviorSpace. I have three parameters: percentage A, percentage B and percentage C. I want to vary the values of these three in a BehaviorSpace experiment, but their sum must always be 100. For example: percentage A 30%, percentage B 30%, percentage C 40%.
["percentage A" 50]
["percentage B" 25]
["percentage C" 25]
One way to skip unsuitable parameter settings would be to use a stop condition. In the variables section of BehaviorSpace you can vary your parameters automatically with a range definition like:
["percentageA" [0 10 100]]
["percentageB" [0 10 100]]
["percentageC" [0 10 100]]
This would of course generate combinations which do not have a sum of 100.
Next, in the reporters section, you can add a reporter that helps you filter your results later on:
(percentageA + percentageB + percentageC)
In the bottom section of the BehaviorSpace dialog you can then simply add a stop condition like:
(percentageA + percentageB + percentageC != 100)
This condition will skip all invalid variations. You will still have entries in the output file for runs with invalid combinations, but you can easily filter them out: just use the reporter defined above and keep only the rows with a value of 100 in that column.
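For example, if you export the experiment as table output, a short script can drop the invalid rows afterwards. This is only a sketch: the file name, the column header and the number of metadata rows at the top of the file are assumptions you may need to adjust.

import pandas as pd

# BehaviorSpace table output starts with a few metadata rows before the header;
# adjust skiprows if your NetLogo version writes a different number of them.
df = pd.read_csv("experiment-table.csv", skiprows=6)

# Keep only the runs where the sum reporter equals 100 (assumed column name).
valid = df[df["(percentageA + percentageB + percentageC)"] == 100]
valid.to_csv("experiment-table-filtered.csv", index=False)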
Related
I have a crosstab report that has categories as the rows and month/year as the columns. Additionally, I have the average and standard deviation for each row.
For instance:
              2022-01   2022-02   2022-03   Average(myData)   stdDev(myData)
electrical       1         0         2             1                1
mechanical       3         3         3             3                0
admin            1         7         1             3                3.46
Now, I am able to format the cells against a static value. For instance, I can set up a conditional format like this:
CellValue () > 2
This will allow me to highlight any crosstab intersection with a value greater than 2.
But I am at a loss as to how to get this to work when comparing against the average and/or standard deviation. For instance, the following:
CellValue ()>[myQuery].[Average(myData)]
highlights nothing, whereas I would have expected this to highlight any cell above average.
My end goal is to highlight any value that is above 1.645 * standard deviation + average, but I cannot even get a simpler rule to work.
I was able to get something to work, but far from ideal.
I made queries to get the summary stats, then I joined those to the original data.
Then I put the categories and each of the summary stats on the left edge.
I could then reference them as expected in a conditional format, e.g. [myData] > [myAve] + 1.645 * [myStdDev]
It isn't as straightforward as I would like, and it is a bit messy, but it works.
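The same idea, sketched in pandas just to make the "join the summary stats back onto the data" step concrete (hypothetical column names; the actual report and formatting still have to be built in Cognos):

import pandas as pd

# Hypothetical detail data: one row per record, with a category and a value.
df = pd.DataFrame({
    "category": ["electrical", "electrical", "mechanical", "admin", "admin"],
    "myData":   [1, 2, 3, 1, 7],
})

# Per-category summary statistics, joined back onto the detail rows,
# mirroring the "query the stats, then join" workaround described above.
stats = df.groupby("category")["myData"].agg(myAve="mean", myStdDev="std").reset_index()
df = df.merge(stats, on="category")

# Flag any value above average + 1.645 * standard deviation.
df["highlight"] = df["myData"] > df["myAve"] + 1.645 * df["myStdDev"]
print(df)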
I created a model which has two different sliders, namely ratio1 and ratio2. They are located on the interface and their values should add up to 1 (here: labour-unit); they also cannot exceed this value. For now, NetLogo lets me exceed the condition.
I tried this:
to setup
  create-turtles number-of-turtles ;; number of firms to be defined through slider
  set labour-unit ratio1 + ratio2
  set labour-unit 1
end
Therefore, my question is: how do I create a condition in setup so that the two slider values cannot exceed a defined value?
Is there any reason you actually need two sliders if the values always add to 1? Could you just have one slider called "proportion with labor-type x" or whatever you're modelling? Then, you can just have reporters to return the values for the actual proportion you're after- for example:
to-report ratio1
  report proportion-slider
end

to-report ratio2
  report precision ( 1 - proportion-slider ) 2
end
Then on your interface you could have the slider (and monitors if needed).
I am designing a fuzzy controller, and for that I have to define 3 triangular membership function sets. They are:
1 large
2 medium
3 small
But my problem is that I have only the following data:
Maximum input = 3, minimum input = 0.1
Maximum output = 5.5, minimum output = 0.8
How do I define the 3 triangular set ranges based only on this information?
Here is the formula for a triangular membership function:
f = 0              if x <= a
f = (x-a)/(b-a)    if a <= x <= b
f = (c-x)/(c-b)    if b <= x <= c
f = 0              if x > c
where a is the min, c is the max and b is the midpoint.
In your case, take the top situation where the max is 3 and the min is 0.1. The midpoint is (3+0.1)/2=1.55, so you have
f = 0                      if x <= 0.1
f = (x-0.1)/(1.55-0.1)     if 0.1 <= x <= 1.55
f = (3-x)/(3-1.55)         if 1.55 <= x <= 3
f = 0                      if x > 3
You should be able to work out the second case from here, but if not, let me know. Something worth pointing out is that the midpoint may not be the ideal b in your situation. Any point between a and c could serve as your b; just know that it is the point where the membership function equals 1.
It is difficult to tell, but it looks like you have only been given parameters for two of the functions, perhaps for small and large or for medium and large. You may need to use some judgement for the third membership function.
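For reference, here is the triangular membership function as a small Python sketch, evaluated with the input range worked out above (a = 0.1, b = 1.55, c = 3):

def triangular(x, a, b, c):
    # Triangular membership: 0 outside [a, c], rising linearly to 1 at b.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Input set from the question: min = 0.1, max = 3, midpoint b = 1.55
a, b, c = 0.1, 1.55, 3.0
for x in (0.1, 1.0, 1.55, 2.5, 3.0):
    print(x, round(triangular(x, a, b, c), 3))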
I have looked thoroughly on the internet for an answer to this question, but it seems to be too specific for an answer anywhere else. This is my last stop.
To preface, this is not a homework problem, but it is adapted from an online Coursera course, whose quiz has already passed. I got the correct answer, but it was mostly luck. Also, it is a more of a general programming question than anything related to the course, so I know for a fact that it is within my right to ask it on a public forum.
The last thing is that I'm trying to do this in MATLAB; however, if you have an answer in C++ or Python or any other high-level language, that would be wonderful, as I could easily adapt those solutions to MATLAB syntax.
Here it is:
I have two vectors, T and M, each with 600,000 elements/entries/integers.
T is entered as milliseconds from 1 to 600,000 in ascending order, and each element in M represents 'on' or 'off' (entered as 1 or 0 respectively) for each corresponding millisecond entry in T. So there are random 1's and 0's in M that correspond to a particular millisecond from 1 to 600,000 in T.
I need to take, starting with the 150th millisecond of T and in 150-element/millisecond increments from there on (inclusive), the average millisecond value of each group of 150, but ONLY over those milliseconds whose entries are 1 in M ('on'). For example, I need to look at the first 150 milliseconds in T, see which ones have a value of 1 in M, and then average them. Then I need to do it again with entries 151 to 300 in T, then 301 to 450, and so on. These new averages should also be stored in a new vector. The problem is that the number of corresponding 1's in M isn't going to be the same for every group of 150 milliseconds in T. (And yes, we are trying to average the actual millisecond values, so the values being averaged and the order of the entries in T stay the same.)
My attempt:
It turns out there are only 53,583 random 1's in M (out of the 600,000 entries; the rest are 0). I used the 'find' operator to extract the entries of M that are 1 into a new vector K, which holds the corresponding millisecond values from T. So K is a bunch of random numbers in ascending order: a list of all the milliseconds in T that are 'on' (assigned a 1 in M).
So K looks something like [2 5 11 27 39 40 79 ...... 599,698 599,727 etc.] (all of the millisecond values who are a 1 in M).
So I have the vector K, which contains all of the values I need to average in groups of 150, but the problem is that I need to go in groups of 150 based on the vector T (1 to 600,000), which means there won't always be the same number of 1's (or values in K) in every group of 150 milliseconds in T. That in turn means the number I need to divide by to get the average of each group is going to change for each group of 150. I know I need to use a loop to compute the average millisecond value for every 150 entries, but how do I get the divisor (the number of entries in each group of 150 that are assigned a 1, or 'on') to change on each iteration of the loop? Is there a way to bind T and M together so that they only use the requisite values from K whenever there is a 1 in M, and then just use a simple counter to average?
It's not a complicated problem, but it is very hard to explain. Sorry about that! I hope I explained as clearly as I could. Any help would be appreciated, although I'm sure you'll have questions first.
Thank you very much!
I think this should work OK.
sz = length(T);
n = sz / 150;                 % number of 150 ms windows
T = T(:);                     % make sure both are column vectors
M = M(:);
K = T .* M;                   % millisecond value where 'on', zero where 'off'
aver = zeros(n, 1);           % Your result vector: one average per window
t = 1;
for i = 1:150:sz
    idx = i:(i + 149);
    aver(t) = sum(K(idx)) / sum(M(idx));   % average only the 'on' milliseconds (NaN if a window has none)
    t = t + 1;
end
-Rob
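Since the question mentions that a C++ or Python answer would also be welcome, here is the same computation as a NumPy sketch (an illustration only, assuming T and M are 1-D arrays of length 600,000; windows with no 'on' entries come out as NaN):

import numpy as np

def window_averages(T, M, window=150):
    # Average the values in T over each window, counting only entries where M == 1.
    T = np.asarray(T, dtype=float)
    M = np.asarray(M, dtype=float)
    n = len(T) // window
    on_sum   = (T * M)[:n * window].reshape(n, window).sum(axis=1)
    on_count = M[:n * window].reshape(n, window).sum(axis=1)
    with np.errstate(invalid="ignore"):   # 0/0 -> NaN for windows with no 'on' entries
        return on_sum / on_count

# Example with the sizes from the question
T = np.arange(1, 600001)
M = np.random.randint(0, 2, size=600000)
aver = window_averages(T, M)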
I have a Lisp program on roulette wheel selection. I am trying to understand the theory behind it, but I cannot understand anything.
How do you calculate the fitness of the selected string?
For example, if I have the string 01101, how did they get a fitness value of 169?
Is it that the binary coding of 01101 evaluates to 13, so I square the value and get 169?
That sounds lame but somehow I am getting the right answers by doing that.
The fitness function you have is therefore F=X^2.
The roulette wheel calculates the proportion (according to its fitness) of the whole that each individual (string) takes up; this is then used to randomly select a set of strings for the next generation.
Suggest you read this a few times.
The "fitness function" for a given problem is chosen (often) arbitrarily keeping in mind that as the "fitness" metric rises, the solution should approach optimality. For example for a problem in which the objective is to minimize a positive value, the natural choice for F(x) would be 1/x.
For the problem at hand, it seems that the fitness function has been given as F(x) = val(x)*val(x) though one cannot be certain from just a single value pair of (x,F(x)).
Roulette-wheel selection is just a commonly employed method of fitness-based pseudo-random selection. This is easy to understand if you've ever played roulette or watched 'Wheel of Fortune'.
Let us consider the simplest case, where F(x) = val(x).
Suppose we have four values: 1, 2, 3 and 4.
This implies that these "individuals" have fitnesses 1, 2, 3 and 4 respectively. Now the probability of selection of an individual 'x1' is calculated as F(x1)/(sum of all F(x)). That is to say, since the sum of the fitnesses here is 10, the probabilities of selection are, respectively, 0.1, 0.2, 0.3 and 0.4.
Now if we consider these probabilities from a cumulative perspective, the values of x would be mapped to the following ranges of probability:
1 ---> (0.0, 0.1]
2 ---> (0.1, (0.1 + 0.2)] ---> (0.1, 0.3]
3 ---> (0.3, (0.1 + 0.2 + 0.3)] ---> (0.3, 0.6]
4 ---> (0.6, (0.1 + 0.2 + 0.3 + 0.4)] ---> (0.6, 1.0]
That is, an instance of a uniformly distributed random variable, say R, generated in the normalised interval (0, 1], is four times as likely to lie in the interval corresponding to 4 as in the one corresponding to 1.
To put it another way, suppose you were to spin a roulette-wheel-type structure with each x assigned a sector whose area is in proportion to its value of F(x); then the probability that the indicator stops in any given sector is directly proportional to the value of F(x) for that x.
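Here is that selection procedure as a short Python sketch, using the fitnesses 1, 2, 3 and 4 from the example above:

import random

def roulette_select(individuals, fitnesses):
    # Pick one individual with probability proportional to its fitness.
    total = sum(fitnesses)
    r = random.uniform(0, total)       # spin the wheel
    cumulative = 0.0
    for individual, fitness in zip(individuals, fitnesses):
        cumulative += fitness
        if r <= cumulative:
            return individual
    return individuals[-1]             # guard against floating-point edge cases

# Values 1..4 with fitness F(x) = x: selection frequencies approach 0.1, 0.2, 0.3, 0.4
values = [1, 2, 3, 4]
picks = [roulette_select(values, values) for _ in range(10000)]
print({v: picks.count(v) / len(picks) for v in values})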