Genetic algorithm: Minimum Number of Generations? - matlab

I have a Matlab script (actually a function, funModel) which I'm trying to optimise over 7 integer variables via a genetic algorithm:
nvars = 7; %number of variables
Aineq = [1 1 1 1 1 1 1]; Aeq = [];
bineq = [VesMaxCrew]; beq = [];
LowBound = [1 1 1 1 1 4 0];
UpBound = [1 1 VesMaxCrew 1 VesMaxCrew VesMaxCrew VesMaxCrew];
Nonlcon = [];
IntCon = [1:7]; % all 7 variables to be treated as integers
Options = gaoptimset('Display','iter',... %display every iteration
'Generations',70,... %maximum number of generations is 70
'TolFun',1,... %tolerance for optimisation is 1
'TolCon',1,...
'PlotFcns',@gaplotbestf);
OptimisedValue = ga(@funModel,nvars,Aineq,bineq,Aeq,beq,LowBound,UpBound,Nonlcon,IntCon,Options);
The genetic algorithm works fine and finds a good solution, easily within 70 generations (as can be seen with the plot function @gaplotbestf). With the current input, the optimal solution is chosen for every individual after 25 to 30 generations. The algorithm, however, continues to run until 51 generations have been made. This would seem like at least 20 generations too many.
Even if I change the input parameters of funModel, the genetic algorithm still runs at least 51 generations, as if there were some constraint or setting forcing the algorithm to run a minimum of 51 generations. (As can be seen above, only a maximum number of generations has been specified.)
Why doesn't the algorithm stop between 25 and 30 generations (or just after 30 generations)?
And more importantly, does anyone know how to alter this?
(I haven't been able to find anything about a setting (gaoptimset) of minimum generations in the Matlab documentation. Neither have I been able to find somebody with the same problem/question.)

"Stall generations" option has default value of 50. This is actually the point where it stops in your case. This can be considered as a minimum number of generations. For more details please check here.


Fast intersection of several interval ranges?

I have several variables, all of which are numeric ranges: (intervals in rows)
a = [ 1 4; 5 9; 11 15; 20 30];
b = [ 2 6; 12 14; 19 22];
c = [ 15 22; 24 29; 33 35];
d = [ 0 3; 15 17; 23 26];
(The values in my real dataset are not integers, but are represented here as such for clarity).
I would like to find intervals in which at least 3 of the variables intersect. In the above example [20 22] and [24 26] would be two such cases.
One way to approach this is to bin my values and add the bins together, but as my values are continuous, that would create an 'edge effect', and I'd lose time binning the values in the first place. (Binning my dataset at my desired resolution would create hundreds of GB of data.)
Another approach, which doesn't involve binning, would be to compute the pairwise intersections (let's call the result X) between all possible combinations of variables, and then intersect X with all the other variables, which is O(n^3).
What are your thoughts on this? Are there algorithms/libraries which have tools to solve this?
I was thinking of using a sort of geometric approach to solve this: basically, if I consider my intervals as segments in 1D space, then my desired output would be the points where three segments (from three variables) intersect. I'm not sure whether this is algorithmically efficient, though. Advice?
O(N lg N) method:
Convert each interval (t_A, t_B) into a pair of tagged endpoints: ('begin', t_A) and ('end', t_B).
Sort all the endpoints by time; this is the most expensive step.
Do one pass through, tracking the nesting depth (increment when the tag is 'begin', decrement when it is 'end'). This takes linear time.
When the depth changes from 2 to 3, it is the start of an output interval.
When it changes from 3 to 2, it is the end of an output interval.
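A minimal MATLAB sketch of this sweep, assuming the intervals are stored one per row as in the question (ends are sorted before begins at equal times, so intervals that merely touch at a point are not counted as overlapping):
intervals = [a; b; c; d];                                  % stack all intervals, one per row
events = [intervals(:,1),  ones(size(intervals,1),1);     % +1 at every 'begin' endpoint
          intervals(:,2), -ones(size(intervals,1),1)];    % -1 at every 'end' endpoint
events = sortrows(events, [1 2]);                          % sort by time (the O(N lg N) step)
depth  = cumsum(events(:,2));                              % nesting depth after each event
up   = events(depth == 3 & events(:,2) ==  1, 1);          % times where depth goes 2 -> 3
down = events(depth == 2 & events(:,2) == -1, 1);          % times where depth goes 3 -> 2
overlaps3 = [up down]                                      % rows: [start stop] of regions covered by >= 3 variables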

Using bin counts as weights for random number selection

I have a set of data that I wish to approximate via random sampling in a non-parametric manner, e.g.:
eventl = [4; 5; 6; 8; 10; 11; 12; 24; 32];
In order to accomplish this, I initially bin the data up to a certain value:
binsize = 5;
nbins = 20;
[bincounts,ind] = histc(eventl,1:binsize:binsize*nbins);
Then I populate a vector with all possible numbers covered by the bins, from which the approximation can choose:
sizes = transpose(1:binsize*nbins);
To use the bin counts as weights for selection - i.e. the count for bin 1-5 is 2, so the weight for choosing 1, 2, 3, 4 or 5 is 2, whereas the count for bin 16-20 is 0, so 16, 17, 18, 19 or 20 can never be chosen - I simply replicate the bin counts across the bin size:
w = repelem(bincounts,binsize);
To then perform weighted number selection, I use:
[~,R] = histc(rand(1,1),cumsum([0;w(:)./sum(w)]));
R = sizes(R);
For some reason this approach is unable to approximate the data. It was my understanding that, with sufficient sampling depth, the binned version of R would be identical to the binned version of eventl; however, there is significant variation, and data is often found in bins whose weights were 0.
Could anybody suggest a better method to do this or point out the error?
For a better method, I suggest randsample:
values = [1 2 3 4 5 6 7 8]; %# values from which you want to pick
numberOfElements = 1000; %# how many values you want to pick
weights = [2 2 2 2 2 1 1 1]; %# weights given to the values (1-5 are twice as likely as 6-8)
sample = randsample(values, numberOfElements, true, weights);
Note that even with 1000 samples, the distribution does not exactly correspond to the weights, so if you only pick 20 samples, the histogram may look rather different.
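Applied to your binned setup, a possible sketch (reusing sizes, bincounts, binsize and nbins from your question; nSamples is just an assumed sample size, and randsample is part of the Statistics Toolbox):
nSamples = 10000;                              %how many values to draw
w = repelem(bincounts, binsize);               %per-value weights expanded from the bin counts
R = randsample(sizes, nSamples, true, w);      %weighted sampling with replacement
histc(R, 1:binsize:binsize*nbins)              %should roughly reproduce the shape of bincounts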

Why Matlab K-means does not find the best centroids while Excel Solver does?

I have a data set as follows:
Data = [4 12; 5 10; 8 7; 5 3; 5 4; 2 11; 5 4; 3 8; 6 2; 7 4; 10 8; 8 9; 10 9; 10 12]
Then I proceed with:
[idx,ctrs, sumD] = kmeans(Data,3)
It gives me the centroids and sumD (sums of point-to-centroid distances within cluster) like:
ctrs = [5.6000 3.4000; 3.5000 10.2500; 9.2000 9.0000]
sumD = [6.4000; 13.7500; 18.8000]
Whereas according to Excel Solver (from a published article), ctrs and sumD are as follows for k=3:
ctrs = [5.21815716 3.66736761; 3.615385665 10.461533; 9.47841197 8.75055345]
sumD = [5.151897802; 7.285383286; 8.573829765]
(NB: In that article, the authors give an initial (seed) centroid to each cluster such as [4 4; 5 12; 10 6] by visual decision from the plot.)
Apparently, Excel finds more accurate ctrs values and thereby smaller sumD values. I could not achieve this with Matlab. That's why I tried other parameters of the kmeans function: I used the 'Replicates' and 'Options' (MaxIter) parameters, and also the 'Start' parameter - even with a 3D array seed - to no avail. I even fed the same initial seed from the article into Matlab. The following is what I tried, without success:
First:
opts = statset('MaxIter',100);
Seed = [4 4; 5 12; 10 6];
[idx,ctrs] = kmeans(Data,3,'Replicates',50,'options',opts,'start',Seed)
This gives an error: The third dimension of the 'Start' array must match the 'replicates' parameter value.
Second:
I created a 3D array of 50 pages, where the first page is the same initial seed above and the remaining 49 are random. I created the random pages as:
T = rand(3,2,49);
After that, I created the 50 pages 3D array like this:
Seed2 = cat(3,Seed,T);
Then used kmeans:
[idx,ctrs] = kmeans(Data,3,'Replicates',50,'options',opts,'start',Seed2)
However, Matlab gave warnings indicating that all the replicates after the first one were terminated because an empty cluster was created at iteration 1. Also, the idx, ctrs and sumD values obtained were still the same as before - as if I had run my very first call above (i.e. [idx,ctrs, sumD] = kmeans(Data,3)).
I am stuck. I am trying to verify the results of the Excel solver published in the article using Matlab because then I will apply the same algorithm applied on 14 observations from the article to a larger data set of 900+ observations.
What am I doing wrong? What should I correct in my coding to obtain the same or much similar result of the Excel Solver?
The difference appears to be in the choice of the measure of distance used, not in the coding. There is more than one way to define "distance" in this context.
MATLAB uses squared Euclidean distance by default. By hand calculating this with the MATLAB results I can replicate the sumD results you get. However, using squared Euclidean distance measure with the results you give from the paper gives a higher value of sumD.
I get the same results for sumD as the paper if I use plain (not squared) Euclidean distance. Using this measure the MATLAB results return higher values for sumD.
So neither result is wrong as such, they're just measuring "rightness" in different ways.
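You can verify this with a small helper that recomputes both sums for any centroids and assignments (clusterSums is just a hypothetical name; save it as clusterSums.m and call it once with the MATLAB output, and once with the paper's centroids and the corresponding assignments):
function [sqD, euD] = clusterSums(Data, ctrs, idx)
    d   = Data - ctrs(idx,:);                   % offset of every point from its assigned centroid
    sqD = accumarray(idx, sum(d.^2, 2));        % squared Euclidean sums (what MATLAB's sumD reports)
    euD = accumarray(idx, sqrt(sum(d.^2, 2)));  % plain Euclidean sums (what the paper appears to report)
end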
How can you be certain that the Excel values are correct and that MATLAB's kmeans gives you a less accurate result?
With the quick MATLAB script below, I plotted the centroids, and at least visually the result seems correct:
Data = [4 12; 5 10; 8 7; 5 3; 5 4; 2 11; 5 4; 3 8; 6 2; 7 4; 10 8; 8 9; 10 9; 10 12];
plot(Data(:,1), Data(:,2),'ob','markersize', 10);
axis([min(Data(:,1))-2, max(Data(:,1))+2, min(Data(:,2))-2, max(Data(:,2))+2]);
hold on;
[idx,ctrs, sumD] = kmeans(Data,3);
plot(ctrs(:,1), ctrs(:,2), '*r', 'markersize', 10);
If this is not accurate enough, then instead of trying to customize MATLAB's kmeans, we can define our own kmeans function. I implemented kmeans myself some time ago, and it seemed easier than asking MATLAB to fine-tune the parameters.
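In case that route is of interest, a bare-bones sketch of Lloyd's iteration (myKmeans is just a placeholder name, not a substitute for the toolbox function):
function [idx, ctrs] = myKmeans(X, k, ctrs, maxIter)
    % ctrs: k-by-size(X,2) matrix of initial centroids (e.g. [4 4; 5 12; 10 6])
    for it = 1:maxIter
        D = pdist2(X, ctrs);                    % distance from every point to every centroid
        [~, idx] = min(D, [], 2);               % assign each point to its nearest centroid
        newCtrs = ctrs;
        for j = 1:k
            if any(idx == j)
                newCtrs(j,:) = mean(X(idx == j, :), 1);   % recompute centroid as the cluster mean
            end
        end
        if isequal(newCtrs, ctrs), break; end   % stop when the centroids no longer move
        ctrs = newCtrs;
    end
end
For example, [idx, ctrs] = myKmeans(Data, 3, [4 4; 5 12; 10 6], 100) would start from the article's seed.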

Find median value of the largest clump of similar values in an array in the most computationally efficient manner

Sorry for the long title, but that about sums it up.
I am looking to find the median value of the largest clump of similar values in an array in the most computationally efficient manner.
for example:
H = [99,100,101,102,103,180,181,182,5,250,17]
I would be looking for the 101.
The array is not sorted, I just typed it in the above order for easier understanding.
The array is of a constant length and you can always assume there will be at least one clump of similar values.
What I have been doing so far is basically computing the standard deviation with one of the values removed, finding the value whose removal gives the largest reduction in the standard deviation, and repeating that for the number of elements in the array, which is terribly inefficient.
for j = 1:7
    G = double(H);
    for i = 1:7
        G(i) = NaN;              % temporarily remove one value
        T(i) = nanstd(G);        % standard deviation without that value
        G(i) = H(i);             % put it back before testing the next one
    end
    best = find(T==min(T));      % value whose removal reduces the std the most
    H(best) = NaN;               % discard it
end
x = find(H==max(H));
Any thoughts?
This approach bins your data and looks for the bin with the most elements. If your distribution consists of well-separated clusters, this should work reasonably well.
H = [99,100,101,102,103,180,181,182,5,250,17];
nbins = length(H); % <-- set # of bins here
[v, bins] = hist(H,nbins);    % histogram of the data
[vm, im] = max(v);            % find the max in the histogram
bl = bins(2)-bins(1);         % bin size
bm = bins(im);                % position of the bin with the max count
ifb = find(abs(H-bm)<bl/2)    % elements within that bin
median(H(ifb))                % median of those elements in the bin
Output:
ifb = 1 2 3 4 5
H(ifb) = 99 100 101 102 103
median = 101
The more challenging parameters to set are the number of bins and the size of the region to look around the most populated bin. In the example you provided neither of these is so critical, you could set the number of bins to 3 (instead of length(H)) and it still would work. Using length(H) as the number of bins is in fact a little extreme and probably not a good general choice. A better choice is somewhere between that number and the expected number of clusters.
It may help for certain distributions to change bl within the find expression to a value you judge better in advance.
I should also note that there are clustering methods (kmeans) that may work better, but perhaps less efficiently. For instance this is the output of [H' kmeans(H',4) ]:
99 2
100 2
101 2
102 2
103 2
180 3
181 3
182 3
5 4
250 3
17 1
In this case I decided in advance to attempt grouping into 4 clusters.
Using kmeans you can get an answer as follows:
nbin = 4;
km = kmeans(H',nbin);                % cluster the data
[mv, iv] = max(histc(km,1:nbin));    % iv is the label of the most populated cluster
H(km==iv)                            % elements belonging to that cluster
median(H(km==iv))
Notice however that kmeans does not necessarily return the same value every time it is run, so you might need to average over a few iterations.
I timed the two methods and found that kmeans takes ~10 X longer. However, it is more robust since the bin sizes adapt to your problem and do not need to be set beforehand (only the number of bins does).
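As an alternative to averaging by hand, the toolbox kmeans can do the restarts internally and keep the best of several runs, e.g.:
km = kmeans(H', nbin, 'Replicates', 10);   % 10 random restarts; the solution with the lowest total distance is kept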

find peak values in matlab

Suppose that we want to determine the peaks in a vector as follows:
we have a real-valued one-dimensional vector of length m, i.e.
x(1),x(2),.....x(m)
If x(1)>x(2), then clearly for the first point peak(1)=x(1); otherwise we compare x(3) to x(2): if x(3)<x(2), then x(2) is a peak, and so on along the vector.
function [indexes, peaks] = mypeaks(x, m)   % mypeaks is a placeholder function name
c = [];
b = [];
if x(1) > x(2)
    peaks(1) = x(1);
else
    for i = 2:m-1
        if x(i+1) < x(i) && x(i) > x(i-1)
            peaks(i) = x(i);
        end
    end
end
end
Peaks are also determined using the following picture (picture not included): sorry for the second picture - maybe it is not a triangle, just A and C on a straight line - but there the peak is B. So I can't continue my code for the algorithm to find the peak values in my vector; please help me to continue it.
Update: numerical example given.
x=[2 1 3 5 4 7 6 8 9]
Here, because the first point is greater than the second, peak(1)=2. Then we compare 1 to 3; because 3 is greater than 1, we now compare 5 to 3 - it is also greater - and then compare 5 to 4; because 5 is greater than 4, peak(2)=5. If we continue, the next peak is 7, and the final peak would be 9.
In case the first element is less than the second, we compare the second element to the third; if the second is greater than both the third and the first elements at the same time, then the second is a peak, and so on.
You could try something like this:
function [peaks, peak_indices] = find_peaks(row_vector)
    A = [min(row_vector)-1 row_vector min(row_vector)-1];   % pad both ends so border peaks are detected
    j = 1;
    for i = 1:length(A)-2
        temp = A(i:i+2);
        if (max(temp) == temp(2))        % the middle element is the largest of the three
            peaks(j) = row_vector(i);
            peak_indices(j) = i;
            j = j + 1;
        end
    end
end
Save it as find_peaks.m
Now, you can use it as:
>> A = [2 1 3 5 4 7 6 8 9];
>> [peaks, peak_indices] = find_peaks(A)
peaks =
2 5 7 9
peak_indices =
1 4 6 9
This would however give you "plateaus" as well (adjacent and equal "peaks").
You can use diff to do the comparison and add two points in the beginning and end to cover the border cases:
B=[1 diff(A) -1];
peak_indices = find(B(1:end-1)>=0 & B(2:end)<=0);
peaks = A(peak_indices);
It returns
peak_indices =
1 4 6 9
peaks =
2 5 7 9
for your example.
findpeaks does it if you have a recent MATLAB version, but it's also a bit slow.
This proposed solution would be quite slow because of the for loop, and you also risk rounding errors, since you compare the maximal value to the central one instead of comparing the position of the maximum, which is better for your purpose.
You can stack the data so as to have three columns: the first one for the preceding value, the second for the data and the third for the next value, take a max along each row, and your local maxima are the points for which the position of the max along the columns is 2.
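A minimal sketch of that stacking idea, assuming x is the data vector (the ends are padded so that border points can also be peaks):
xp = [min(x)-1; x(:); min(x)-1];                 % pad both ends with a value below the minimum
S  = [xp(1:end-2), xp(2:end-1), xp(3:end)];      % columns: preceding value, value, next value
[~, pos] = max(S, [], 2);                        % position of the maximum within each row
peak_indices = find(pos == 2);                   % local maxima: the middle column holds the max
peaks = x(peak_indices);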
I've coded this as a subroutine of my own peak detection function, which adds a further level of iterative peak detection:
http://www.mathworks.com/matlabcentral/fileexchange/42927-find-peaks-using-scale-space-approach