I'm solving a TSP optimization problem with heuristics, and the following code works for all Euclidean TSPLIB instances except a280, which gives me an "Index exceeds matrix dimensions" error, and I can't figure out why. I put comments in so the code is hopefully easy to follow. The error is raised at the line i = mod(si(selected_edge)-1,n)+1;
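For reference, that line just manually decodes a linear index of the n-by-n savings matrix into row/column subscripts; together with the line after it, it is equivalent to the built-in:

[i, j] = ind2sub([n n], si(selected_edge));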
n = size(a280,1); % n is the number of nodes
distances = dist(a280,a280');
savings = zeros(n);
lengths_obtained = [];
depot = 1; % depot is the central node
%% Test n solutions, where each node in turn is selected as the depot
for depot = 1:n
    %% Compute the matrix of savings for each pair of nodes
    for i = 1:n
        if i == depot
            continue;
        end
        savings(i,(i+1):n) = distances(i,depot)+distances(depot,(i+1):n)-distances(i,(i+1):n);
    end
    %% Initialisation steps
    minParent = 1:n;
    % Rank the savings s(i,j) and list them in descending order of magnitude.
    [~,si] = sort(savings(:),'descend');
    si = si(1:fix(end/2));
    % Set up the cache
    Nodes_depot = zeros(1,n); % binary vector: 1 if node i is disconnected from the depot, 0 otherwise
    Nodes_depot(depot) = 1;
    Nodes_depot_count = n-1;
    degrees = zeros(1,n); % number of edges linking each node
    selected_edge = 1;
    pairs_of_nodes = zeros(n,2); % list of the couples (i,j) of nodes linked together
    currentEdgeCount = 1;
    %% Pair nodes as long as more than 2 nodes are still connected to the depot
    while Nodes_depot_count > 2
        % Start with the edge (i,j) generating the topmost amount of savings.
        i = mod(si(selected_edge)-1,n)+1;
        j = floor((si(selected_edge)-1)/n)+1;
        if Nodes_depot(i) == 0 && Nodes_depot(j) == 0 && (minParent(i) ~= minParent(j)) && i~=j && i~=depot && j~=depot
            degrees(i) = degrees(i)+1;
            degrees(j) = degrees(j)+1;
            pairs_of_nodes(currentEdgeCount,:) = [i,j];
            if minParent(i) < minParent(j)
                minParent(minParent == minParent(j)) = minParent(i);
            else
                minParent(minParent == minParent(i)) = minParent(j);
            end
            currentEdgeCount = currentEdgeCount + 1;
            % Remove i and/or j from the partial tour if they are now
            % paired with two other nodes.
            if degrees(i) == 2
                Nodes_depot(i) = 1;
                Nodes_depot_count = Nodes_depot_count - 1;
            end
            if degrees(j) == 2
                Nodes_depot(j) = 1;
                Nodes_depot_count = Nodes_depot_count - 1;
            end
        end
        selected_edge = selected_edge + 1;
    end
end
If someone could push me in the right direction, it would be greatly appreciated. Thanks!
I'm currently working on code that uses a 2D cellular automaton as an epidemic simulator in MATLAB. The basic rule I'm trying to implement is that if any neighbour within a Moore neighbourhood of 1-cell radius is infected, the cell becomes infected, but I can't get a working implementation.
Basically, for a cell with a one-cell-radius Moore neighbourhood, if any value in that neighbourhood equals 2, then the cell itself should become 2.
I've tried using the forest-fire code on Rosetta Code as a basis for my cell behaviour, but its rules don't really translate to mine, and it doesn't work very well. I've also tried using the mod function and a series of if statements. I'll put in some code of each to give context.
This first example doesn't really function well as an epidemic simulator, to be honest.
clear; clc;
n = 200;
N = n/2;
E = 0.001; % arbitrary fraction of the population exposed to the disease but not infected
p = 1 + (rand(n,n) < E);
%p = ceil(rand(n,n)*2.12) - 1;
% ratio0 = sum(p(:)==0)/n^2;
% ratio1 = sum(p(:)==1)/n^2;
% ratio2 = sum(p(:)==2)/n^2;
% ratio3 = sum(p(:)==3)/n^2;
S = ones(3); S(2,2) = 0;
ff = 0.00000000002;
p(N,N) = 3;
%% Running the simulation for a set number of loops
colormap([1,1,1;1,0,1;1,0,0]); % set the colourmap to white, magenta and red
count = 0;
while count < 365 % run the simulation for a limited number of steps
    count = count + 1;
    image(p); pause(0.1); % draw the current state of the model
    P = (p==1); % carry over the empty cells to the new array
    P = P + (p==2).*((filter2(S,p==3)>0) + (rand(n,n)<ff) + 2); % 2 is a tree: it ignites based on proximity to burning trees or by random chance ff
    P = P + (p==3); % 3 is a burning tree, which becomes 1
    p = P;
end
My second idea basically returns nothing:
clear;clf;clc;
n = 200;
pos = mod((1:n),n) + 1; neg = mod((1:n)-2,n) + 1;
p = (ceil(rand(n,n)*1.0005));
for t = 1:365
    if p(neg,neg) == 2
        p(:,:) = 2;
    end
    if p(:,neg) == 2
        p(:,:) = 2;
    end
    if p(pos,neg) == 2
        p(:,:) = 2;
    end
    if p(neg,:) == 2
        p(:,:) = 2;
    end
    if p(pos,:) == 2
        p(:,:) = 2;
    end
    if p(neg,pos) == 2
        p(:,:) = 2;
    end
    if p(:,pos) == 2
        p(:,:) = 2;
    end
    if p(pos,pos) == 2
        p(:,:) = 2;
    end
    image(p)
    colormap([1,1,1;1,0,1])
end
Third, I tried using logical operators to see if that would work. I don't know if commas would work instead.
clear;clf;clc;
n = 200;
pos = mod((1:n),n) + 1; neg = mod((1:n)-2,n) + 1;
p = (ceil(rand(n,n)*1.0005));
%P = p(neg,neg) + p(:,neg) + p(pos,neg) + p(neg,:) + p(:,:) + p(pos,:) + p(neg,pos) + p(:,pos) + p(pos,pos)
for t = 1:365
    if p(neg,neg) || p(:,neg) || p(pos,neg) || p(neg,:) || p(pos,:) || p(neg,pos) || p(:,pos) || p(pos,pos) == 2
        p(:,:) = 2;
    end
    image(p)
    colormap([1,1,1;1,0,1])
end
I expected the matrix to just gradually become more magenta, but nothing happens in the second attempt, and for the third I get this error:
"Operands to the || and && operators must be convertible to logical scalar values."
I just have no idea what to do!
Cells do not heal
I assume that:
Infected is 2, non-infected is 1;
An infected cell remains infected;
A non-infected cell becomes infected if any neighbour is.
A simple way to achieve this is using 2-D convolution:
n = 200;
p = (ceil(rand(n,n)*1.0005));
neighbourhood = [1 1 1; 1 1 1; 1 1 1]; % Moore neighbourhood plus own cell
for t = 1:365
    p = (conv2(p-1, neighbourhood, 'same')>0) + 1; % update: p-1 maps states to 0/1, so any positive neighbour count (own cell included) means infected
    image(p), axis equal, axis tight, colormap([.4 .4 .5; .8 0 0]), pause(.1) % plot
end
Cells heal after a specified time
To model this, it is better to use 0 for a non-infected cell and a positive integer for an infected cell, indicating how long it has been infected.
A cell heals after it has been infected for a specified number of iterations (but can immediately become infected again...).
The code uses convolution, as the previous one, but now already-infected cells need to be dealt with separately from newly infected cells, so a true Moore neighbourhood (without the centre cell) is used.
n = 200;
p = (ceil(rand(n,n)*1.0005))-1; % 0: non-infected. 1: just infected
T = 20; % time to heal
neighbourhood = [1 1 1; 1 0 1; 1 1 1]; % Moore neighbourhood
for t = 1:365
    already_infected = p>0; % logical index
    p(already_infected) = p(already_infected)+1; % increase time count for infected cells
    newly_infected = conv2(p>0, neighbourhood, 'same')>0; % logical index
    p(newly_infected & ~already_infected) = 1; % these just became infected
    newly_healed = p==T; % logical index
    p(newly_healed) = 0; % these just healed
    image(p>0), axis equal, axis tight, colormap([.4 .4 .5; .8 0 0]), pause(.1) % plot infected / non-infected state
end
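To mimic the question's single initial infection rather than the random seeding, the initialisation of this last snippet could be replaced with something like:

p = zeros(n); % everyone non-infected
p(n/2, n/2) = 1; % a single just-infected cell in the centre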
I have a 2D image (matrix) and have found its local maxima. Now I want to define a boundary around each local maximum that contains all the surrounding pixels whose value is above 85% of that maximum.
Here is my existing code:
function [location] = Mfind_peak_2D(Image, varargin)
    p = inputParser;
    addParamValue(p,'max_n_loc_max',5);
    addParamValue(p,'nb_size',3);
    addParamValue(p,'thre',0);
    addParamValue(p,'drop',0.15);
    parse(p,varargin{:});
    p = p.Results;
    if sum(isnan(Image(:))) > 0
        Image(isnan(Image)) = 0;
    end
    hLocalMax = vision.LocalMaximaFinder;
    hLocalMax.MaximumNumLocalMaxima = p.max_n_loc_max;
    hLocalMax.NeighborhoodSize = [p.nb_size p.nb_size];
end
This should do the job (the code is full of comments and should be pretty self-explanatory but if you have doubts feel free to ask for more details):
% Load the image...
img = imread('peppers.png');
img = rgb2gray(img);
% Find the local maxima...
mask = ones(3);
mask(5) = 0;
img_dil = imdilate(img,mask);
lm = img > img_dil;
% Find the neighboring pixels of the local maxima...
img_size = size(img);
img_h = img_size(1);
img_w = img_size(2);
res = cell(sum(sum(lm)),3);
res_off = 1;
for i = 1:img_h
    for j = 1:img_w
        if (~lm(i,j))
            continue;
        end
        value = img(i,j);
        value_thr = value * 0.85;
        % Retrieve the neighboring column and row offsets...
        c = bsxfun(@plus,j,[-1 0 1 -1 1 -1 0 1]);
        r = bsxfun(@plus,i,[-1 -1 -1 0 0 1 1 1]);
        % Filter out the invalid positions...
        idx = (c > 0) & (c <= img_w) & (r > 0) & (r <= img_h);
        % Transform the valid positions into linear indices...
        idx = (((idx .* c) - 1) .* img_h) + (idx .* r);
        idx = reshape(idx.',1,numel(idx));
        idx = idx(idx > 0);
        % Retrieve the neighbors and filter them based on the threshold...
        neighbors = img(idx);
        neighbors = neighbors(neighbors > value_thr);
        % Update the final result...
        res(res_off,:) = {sub2ind(img_size,i,j) value neighbors};
        res_off = res_off + 1;
    end
end
res = sortrows(res,1);
The variable res will be a cell matrix with three columns: the first one contains the linear indices of the local maxima of the image, the second one contains the values of the local maxima, and the third one a vector with the pixels around each local maximum that fall within the specified threshold.
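For example, to recover the position of the first detected maximum from its stored linear index, something along these lines should work:

peak = res(1,:); % {linear index, value, qualifying neighbors}
[r, c] = ind2sub(img_size, peak{1}); % back to row/column subscripts
fprintf('maximum %d at (%d,%d), %d neighbors above threshold\n', peak{2}, r, c, numel(peak{3}));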
I have a numeric dataset and I want to cluster the data with a non-parametric algorithm; basically, I would like to cluster without specifying the number of clusters as an input. I am using code from the MathWorks File Exchange that implements the Mean Shift algorithm. However, I don't know how to adapt my data to this code, as my dataset has dimensions 516 x 19.
function [clustCent,data2cluster,cluster2dataCell] = MeanShiftCluster(dataPts,bandWidth,plotFlag)
%MEANSHIFTCLUSTER performs MeanShift clustering of data using a flat kernel
%
% ---INPUT---
% dataPts          - input data, (numDim x numPts)
% bandWidth        - bandwidth parameter (scalar)
% plotFlag         - display output if 2 or 3 D (logical)
% ---OUTPUT---
% clustCent        - locations of cluster centers (numDim x numClust)
% data2cluster     - for every data point, which cluster it belongs to (numPts)
% cluster2dataCell - for every cluster, which points are in it (numClust)
%
% Bryan Feldman 02/24/06
% MeanShift first appears in
% K. Fukunaga and L.D. Hostetler, "The Estimation of the Gradient of a
% Density Function, with Applications in Pattern Recognition"

%*** Check input ****
if nargin < 2
    error('no bandwidth specified')
end
if nargin < 3
    plotFlag = true;
    plotFlag = false;
end

%**** Initialize stuff ***
%[numPts,numDim] = size(dataPts);
[numDim,numPts] = size(dataPts);
numClust = 0;
bandSq = bandWidth^2;
initPtInds = 1:numPts;
maxPos = max(dataPts,[],2); % biggest size in each dimension
minPos = min(dataPts,[],2); % smallest size in each dimension
boundBox = maxPos-minPos; % bounding box size
sizeSpace = norm(boundBox); % indicator of the size of the data space
stopThresh = 1e-3*bandWidth; % when the mean has converged
clustCent = []; % centers of the clusters
beenVisitedFlag = zeros(1,numPts,'uint8'); % track whether a point has been seen already
numInitPts = numPts; % number of points to possibly use as initialization points
clusterVotes = zeros(1,numPts,'uint16'); % used to resolve conflicts on cluster membership

while numInitPts
    tempInd = ceil((numInitPts-1e-6)*rand); % pick a random seed point
    stInd = initPtInds(tempInd); % use this point as the start of the mean
    myMean = dataPts(:,stInd); % initialize the mean to this point's location
    myMembers = []; % points that will get added to this cluster
    thisClusterVotes = zeros(1,numPts,'uint16'); % used to resolve conflicts on cluster membership

    while 1 % loop until convergence
        sqDistToAll = sum((repmat(myMean,1,numPts) - dataPts).^2); % squared distance from the mean to all points still active
        inInds = find(sqDistToAll < bandSq); % points within bandWidth
        thisClusterVotes(inInds) = thisClusterVotes(inInds)+1; % add a vote for all the in points belonging to this cluster

        myOldMean = myMean; % save the old mean
        myMean = mean(dataPts(:,inInds),2); % compute the new mean
        myMembers = [myMembers inInds]; % add any point within bandWidth to the cluster
        beenVisitedFlag(myMembers) = 1; % mark these points as visited

        %*** plot stuff ****
        if plotFlag
            figure(12345),clf,hold on
            if numDim == 2
                plot(dataPts(1,:),dataPts(2,:),'.')
                plot(dataPts(1,myMembers),dataPts(2,myMembers),'ys')
                plot(myMean(1),myMean(2),'go')
                plot(myOldMean(1),myOldMean(2),'rd')
                pause
            end
        end

        %**** if the mean doesn't move much, stop this cluster ***
        if norm(myMean-myOldMean) < stopThresh
            % check for merge possibilities
            mergeWith = 0;
            for cN = 1:numClust
                distToOther = norm(myMean-clustCent(:,cN)); % distance from the possible new cluster center to the old cluster centers
                if distToOther < bandWidth/2 % if it is within bandWidth/2, merge new and old
                    mergeWith = cN;
                    break;
                end
            end

            if mergeWith > 0 % something to merge
                clustCent(:,mergeWith) = 0.5*(myMean+clustCent(:,mergeWith)); % record the center as the mean of the two merged (I know, biased towards the new one)
                %clustMembsCell{mergeWith} = unique([clustMembsCell{mergeWith} myMembers]); % record which points are inside
                clusterVotes(mergeWith,:) = clusterVotes(mergeWith,:) + thisClusterVotes; % add these votes to the merged cluster
            else % it's a new cluster
                numClust = numClust+1; % increment the cluster count
                clustCent(:,numClust) = myMean; % record the mean
                %clustMembsCell{numClust} = myMembers; % store my members
                clusterVotes(numClust,:) = thisClusterVotes;
            end
            break;
        end
    end

    initPtInds = find(beenVisitedFlag == 0); % we can initialize with any of the points not yet visited
    numInitPts = length(initPtInds); % number of active points in the set
end

[val,data2cluster] = max(clusterVotes,[],1); % a point belongs to the cluster with the most votes

%*** If they want the cluster2data cell, find it for them
if nargout > 2
    cluster2dataCell = cell(numClust,1);
    for cN = 1:numClust
        myMembers = find(data2cluster == cN);
        cluster2dataCell{cN} = myMembers;
    end
end
This is the test code I am using to try and get the Mean Shift program to work:
clear
profile on
nPtsPerClust = 250;
nClust = 3;
totalNumPts = nPtsPerClust*nClust;
m(:,1) = [1 1];
m(:,2) = [-1 -1];
m(:,3) = [1 -1];
var = .6;
bandwidth = .75;
clustMed = [];
%clustCent;
x = var*randn(2,nPtsPerClust*nClust);
%*** build the point set
for i = 1:nClust
    x(:,1+(i-1)*nPtsPerClust:(i)*nPtsPerClust) = x(:,1+(i-1)*nPtsPerClust:(i)*nPtsPerClust) + repmat(m(:,i),1,nPtsPerClust);
end
tic
[clustCent,point2cluster,clustMembsCell] = MeanShiftCluster(x,bandwidth);
toc
numClust = length(clustMembsCell)
figure(10),clf,hold on
cVec = 'bgrcmykbgrcmykbgrcmykbgrcmyk'; %, cVec = [cVec cVec];
for k = 1:min(numClust,length(cVec))
    myMembers = clustMembsCell{k};
    myClustCen = clustCent(:,k);
    plot(x(1,myMembers),x(2,myMembers),[cVec(k) '.'])
    plot(myClustCen(1),myClustCen(2),'o','MarkerEdgeColor','k','MarkerFaceColor',cVec(k),'MarkerSize',10)
end
title(['no shifting, numClust:' int2str(numClust)])
The test script generates random data x with three clusters. In my case, I want to use my 516 x 19 matrix D, but I am not sure how to adapt it to this function, and the function returns results that do not agree with my understanding of the algorithm. Does anyone know how to do this?
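For reference, the header comments state that dataPts must be numDim x numPts (the test data x is 2 x 750), so a 516 x 19 matrix D holding 516 observations of 19 features would presumably have to be passed transposed, e.g.:

[clustCent, data2cluster] = MeanShiftCluster(D', bandWidth); % D' is 19 x 516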
I have implemented the Clarke-Wright heuristic to solve TSP (based on the pseudo-code here). I have attached my implementation in MATLAB. However, it is not fast enough for me and takes O(n^2) space (because of the pairwise distances). I wonder if there is any theoretical or practical optimization I can apply to reduce the complexity (especially the space complexity).
I would be grateful if anyone could help me.
function [tour, length] = clarke_wright(data)
    n = size(data,1); % number of records
    center = mean(data,1); % mean of the data
    hubIdx = knnsearch(data,center,'k',1); % nearest record to the center
    distances = dist(data,data'); % this requires O(n^2) space :(
    savings = zeros(n); % stores the saving obtained by adding each edge
    % Can this loop be vectorized further?
    for i = 1:n
        if i == hubIdx
            continue;
        end
        savings(i,(i+1):n) = distances(i,hubIdx)+distances(hubIdx,(i+1):n)-distances(i,(i+1):n);
    end
    minParent = 1:n;
    [~,si] = sort(savings(:),'descend');
    si = si(1:(end/2));
    Vh = zeros(1,n);
    Vh(hubIdx) = 1;
    VhCount = n-1;
    degrees = zeros(1,n);
    selectedIdx = 1; % edge to try for insertion
    tour = zeros(n,2);
    curEdgeCount = 1;
    while VhCount > 2
        i = mod(si(selectedIdx)-1,n)+1;
        j = floor((si(selectedIdx)-1)/n)+1;
        if Vh(i)==0 && Vh(j)==0 && (minParent(i)~=minParent(j)) && i~=j && i~=hubIdx && j~=hubIdx % all degrees are always <= 2, so there is no need to check them
            % if (minParent(i)~=minParent(j)) && isempty(find(degrees>2, 1)) && i~=j && i~=hubIdx && j~=hubIdx && Vh(i)==0 && Vh(j)==0
            degrees(i) = degrees(i)+1;
            degrees(j) = degrees(j)+1;
            tour(curEdgeCount,:) = [i,j];
            if minParent(i) < minParent(j)
                minParent(minParent==minParent(j)) = minParent(i);
            else
                minParent(minParent==minParent(i)) = minParent(j);
            end
            curEdgeCount = curEdgeCount + 1;
            if degrees(i)==2
                Vh(i) = 1;
                VhCount = VhCount - 1;
            end
            if degrees(j)==2
                Vh(j) = 1;
                VhCount = VhCount - 1;
            end
        end
        selectedIdx = selectedIdx + 1;
    end
    remain = find(Vh==0);
    n1 = remain(1);
    n2 = remain(2);
    tour(curEdgeCount,:) = [hubIdx n1];
    curEdgeCount = curEdgeCount + 1;
    tour(curEdgeCount,:) = [hubIdx n2];
    tour = stitchTour(tour);
    tour = tour(:,1)';
    length = distances(tour(end),tour(1));
    for i = 1:n-1 % how can I vectorize these lines?
        length = length + distances(tour(i),tour(i+1));
    end
end

function tour = stitchTour(t) % rewrites the tour edges into the chained form [a b; b c; c d; d e; ...]
    n = size(t,1);
    [~,nIdx] = sort(t(:,1));
    t = t(nIdx,:);
    tour(1,:) = t(1,:);
    t(1,:) = -t(1,:);
    lastNodeIdx = tour(1,2);
    for i = 2:n
        nextEdgeIdx = find(t(:,1)==lastNodeIdx,1);
        if ~isempty(nextEdgeIdx)
            tour(i,:) = t(nextEdgeIdx,:);
            t(nextEdgeIdx,:) = -t(nextEdgeIdx,:);
        else
            nextEdgeIdx = find(t(:,2)==lastNodeIdx,1);
            tour(i,:) = t(nextEdgeIdx,[2 1]);
            t(nextEdgeIdx,:) = -t(nextEdgeIdx,:);
        end
        lastNodeIdx = tour(i,2);
    end
end
This is what you can do if space is an issue (it will probably reduce calculation speed a bit). I have not really looked into your code, but judging from the pseudo-code this should do the trick:
For each pair of points, calculate the savings created by connecting them. If this is better than the best savings found so far, update the best savings and remember the two points. After checking all pairs, just implement the best savings.
This way you will barely require any extra space at all; see the sketch below.
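Here is a minimal sketch of that selection scan, assuming the records are stored in an n-by-d coordinate matrix data with hub index hubIdx (names borrowed from your code); distances are recomputed on the fly rather than stored:

bestSaving = -inf;
bestPair = [0 0];
for i = 1:n
    if i == hubIdx, continue; end
    dih = norm(data(i,:) - data(hubIdx,:)); % distance from i to the hub
    for j = (i+1):n
        if j == hubIdx, continue; end
        s = dih + norm(data(j,:) - data(hubIdx,:)) - norm(data(i,:) - data(j,:));
        if s > bestSaving % keep only the running best pair
            bestSaving = s;
            bestPair = [i j];
        end
    end
end
% bestPair now holds the most profitable edge; repeat the scan, skipping
% pairs that are no longer eligible, each time an edge is committed.

This trades the O(n^2) distance matrix for an O(n^2) scan per inserted edge, so it is only worthwhile when memory, not time, is the bottleneck.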
I am a total beginner in MATLAB and am trying to write some machine learning algorithms in it. I would really appreciate it if someone could help me debug this code.
function y = KNNpredict(trX,trY,K,X)
    % trX is NxD, trY is Nx1, K is 1x1 and X is 1xD
    % we return a single value 'y' which is the predicted class
    % TODO: write this function
    % int[] distance = new int[N];
    distances = zeroes(N, 1);
    examples = zeroes(K, D+2);
    i = 0;
    % for(every row in trX) { // taking ONE example
    for row = 1:N,
        examples(row,:) = trX(row,:);
        %sum = 0.0;
        %for(every col in this example) { // taking every feature of this example
        for col = 1:D,
            % diff = compute squared difference between these points - (trX[row][col]-X[col])^2
            diff = (trX(row,col)-X(col))^2;
            sum += diff;
        end % for
        distances(row) = sqrt(sum);
        examples(i:D+1) = distances(row);
        examples(i:D+2) = trY(row:1);
    end % for
    % sort the examples based on their distances thus calculated
    sortrows(examples, D+1);
    % for(int i = 0; i < K; K++) {
    % These are the nearest neighbors
    pos = 0;
    neg = 0;
    res = 0;
    for row = 1:K,
        if(examples(row,D+2 == -1))
            neg = neg + 1;
        else
            pos = pos + 1;
            %disp(distances(row));
        end
    end % for
    if(pos > neg)
        y = 1;
        return;
    else
        y = -1;
        return;
    end
end
end
Thanks so much
When working with matrices in MATLAB, it is usually better to avoid excessive loops and instead use vectorized operations whenever possible. This will usually produce faster and shorter code.
In your case, the k-nearest neighbors algorithm is simple enough and can be well vectorized. Consider the following implementation:
function y = KNNpredict(trX, trY, K, x)
    %# euclidean distance between instance x and every training instance
    dist = sqrt( sum( bsxfun(@minus, trX, x).^2 , 2) );
    %# sorting indices from smaller to larger distances
    [~,ord] = sort(dist, 'ascend');
    %# get the labels of the K nearest neighbors
    kTrY = trY( ord(1:min(K,end)) );
    %# majority class vote
    y = mode(kTrY);
end
Here is an example to test it using the Fisher-Iris dataset:
%# load dataset (data + labels)
load fisheriris
X = meas;
Y = grp2idx(species);
%# partition the data into training/testing
c = cvpartition(Y, 'holdout',1/3);
trX = X(c.training,:);
trY = Y(c.training);
tsX = X(c.test,:);
tsY = Y(c.test);
%# prediction
K = 10;
pred = zeros(c.TestSize,1);
for i=1:c.TestSize
    pred(i) = KNNpredict(trX, trY, K, tsX(i,:));
end
%# validation
C = confusionmat(tsY, pred)
The confusion matrix of the kNN prediction with K=10:
C =
    17     0     0
     0    16     0
     0     1    16
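As a quick sanity check, the overall accuracy is the trace of the confusion matrix divided by the total number of test points:

accuracy = sum(diag(C)) / sum(C(:)) % (17+16+16)/50 = 0.98 here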