I have the following implementation of the GMRES algorithm with Givens rotations:
function[x,error,iter,flag,vetnorm_r]=gmres_givens(A,x,b,restart,maxit,tol)
% input A REAL nonsymmetric positive definite matrix
% x REAL initial guess vector
% b REAL right hand side vector
% M REAL preconditioner matrix
% restart INTEGER number of iterations between restarts
% maxit INTEGER maximum number of iterations
% tol REAL error tolerance
%
% output x REAL solution vector
% error REAL error norm
% iter INTEGER number of iterations performed
% flag INTEGER: 0 = solution found to tolerance
% 1 = no convergence given maxit
iter = 0; % initialization
flag = 0;
bnrm2=norm(b);
if(bnrm2==0), bnrm2=1.0; end
r=b-A*x;
error=norm(r)/bnrm2;
if(error < tol) return, end
[n,n]=size(A); % initialize workspace
m=restart;
V(:,1:m+1)=zeros(n,m+1);
H(1:m+1,1:m)=zeros(m+1,m);
c(1:m)=zeros(m,1);
s(1:m)=zeros(m,1);
e1=zeros(n,1);
e1(1)=1;
vetnorm_r=zeros(m+1,1);
for iter=1:maxit % begin iteration
r=(b-A*x);
V(:,1)=r/norm(r);
g=norm(r)*e1;
vetnorm_r(1)=norm(r);
for i=1:m % construct orthonormal system
w=(A*V(:,i));
for k=1:i
H(k,i)=w'*V(:,k); % basis using Gram-Schmidt
w=w-H(k,i)*V(:,k);
end
H(i+1,i)=norm(w);
V(:,i+1)=w/H(i+1,i);
for k=1:i-1 % apply Givens rotation
temp=c(k)*H(k,i)+s(k)*H(k+1,i);
H(k+1,i)=-s(k)*H(k,i)+c(k)*H(k+1,i);
H(k,i)=temp;
end
[c(i),s(i)]=givens(H(i,i),H(i+1,i)); % form i-th rotation matrix
temp=c(i)*g(i); % approximate residual norm
g(i+1)=-s(i)*g(i);
g(i)=temp;
H(i,i)=c(i)*H(i,i)+s(i)*H(i+1,i);
H(i+1,i)=0;
error=abs(g(i+1))/bnrm2;
vetnorm_r(i+1)=abs(g(i+1));
if (error <= tol) % update approximation
y=H(1:i,1:i)\g(1:i); % and exit
x=x+V(:,1:i)*y;
break;
end
end
if(error <= tol), break, end
y=H(1:m,1:m)\g(1:m); % update approximation
x=x+V(:,1:m)*y;
r=b-A*x; % compute residual
g(i+1)=norm(r);
error=g(i+1)/bnrm2; % check convergence
if(error <= tol), break, end;
end
if (error>tol) flag=1; end;
end
where
function [c,s]=givens(a,b)
if(b==0)
c=1;
s=0;
elseif (abs(b) > abs(a)),
temp=a/b;
s=1/sqrt(1+temp^2);
c=temp*s;
else
temp=b/a;
c=1/sqrt(1+temp^2);
s=temp*c;
end
My problem is to get a vector (or possibly a matrix) vetnorm_r, returned as an output, which contains the norm of the residual at each iteration (and possibly at every restart). I do not know how to build this vector or matrix.
Thanks for any reply
You probably shouldn't copy code you don't understand. The residual norm is computed where the code says % check convergence:
if(error <= tol), break, end
y=H(1:m,1:m)\g(1:m); % update approximation
x=x+V(:,1:m)*y;
r=b-A*x; % compute residual
g(i+1)=norm(r);
error=g(i+1)/bnrm2; % check convergence
if(error <= tol), break, end;
end
I am not sure what you're asking for. You can either change the code so that it doesn't break on tolerance and always performs a fixed number of iterations, so that the vector of residual norms can be preallocated, which is what I would do. Or you can preallocate the residual vector with some length and then extend it when it is about to run out of space.
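For example, here is a minimal sketch of the first option (not the original code; the running counter kk is my addition): preallocate vetnorm_r for the worst case and fill it with a counter that is never reset, so values from earlier restart cycles are not overwritten.
vetnorm_r = zeros(maxit*restart + 1, 1); % worst case: restart inner steps per outer iteration
kk = 1;
vetnorm_r(kk) = norm(b - A*x);           % residual norm of the initial guess
for iter = 1:maxit
    % ... Arnoldi/Givens steps exactly as in the code above; inside the inner loop over i:
    %     kk = kk + 1;
    %     vetnorm_r(kk) = abs(g(i+1));   % residual norm after inner step i
end
vetnorm_r = vetnorm_r(1:kk);             % trim unused entries before returning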
I've got this code:
function [sigma,shrinkage]=covMarket(x,shrink)
% function sigma=covmarket(x)
% x (t*n): t iid observations on n random variables
% sigma (n*n): invertible covariance matrix estimator
%
% This estimator is a weighted average of the sample
% covariance matrix and a "prior" or "shrinkage target".
% Here, the prior is given by a one-factor model.
% The factor is equal to the cross-sectional average
% of all the random variables.
% The notation follows Ledoit and Wolf (2003)
% This version: 04/2014
% de-mean returns
t=size(x,1);
n=size(x,2);
meanx=mean(x);
x=x-meanx(ones(t,1),:);
xmkt=mean(x')';
sample=cov([x xmkt])*(t-1)/t;
covmkt=sample(1:n,n+1);
varmkt=sample(n+1,n+1);
sample(:,n+1)=[];
sample(n+1,:)=[];
prior=covmkt*covmkt'./varmkt;
prior(logical(eye(n)))=diag(sample);
if (nargin < 2 | shrink == -1) % compute shrinkage parameters
c=norm(sample-prior,'fro')^2;
y=x.^2;
p=1/t*sum(sum(y'*y))-sum(sum(sample.^2));
% r is divided into diagonal
% and off-diagonal terms, and the off-diagonal term
% is itself divided into smaller terms
rdiag=1/t*sum(sum(y.^2))-sum(diag(sample).^2);
z=x.*xmkt(:,ones(1,n));
v1=1/t*y'*z-covmkt(:,ones(1,n)).*sample;
roff1=sum(sum(v1.*covmkt(:,ones(1,n))'))/varmkt...
-sum(diag(v1).*covmkt)/varmkt;
v3=1/t*z'*z-varmkt*sample;
roff3=sum(sum(v3.*(covmkt*covmkt')))/varmkt^2 ...
-sum(diag(v3).*covmkt.^2)/varmkt^2;
roff=2*roff1-roff3;
r=rdiag+roff;
% compute shrinkage constant
k=(p-r)/c;
shrinkage=max(0,min(1,k/t))
else % use specified number
shrinkage = shrink;
end
% compute the estimator
sigma=shrinkage*prior+(1-shrinkage)*sample;
end
It's part of the Matlab code from Ledoit/Wolf (2003). I don't understand why the returns are de-meaned before calculating the covariance. Is this Matlab specific? In my opinion there is no need to de-mean the returns before calling the cov function, since cov does that on its own.
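For what it's worth, it is easy to check that cov subtracts the column means on its own; a small example with made-up data:
x  = randn(100, 5);                     % made-up returns: t = 100 observations, n = 5 variables
xd = x - repmat(mean(x), size(x,1), 1); % explicitly de-meaned copy
max(max(abs(cov(x) - cov(xd))))         % essentially zero: cov de-means internally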
Thanks for help in advance!
The most interesting aspect of the problem is that I do not always get the error when running the code (I get it 3 out of 5 times). The exact error message is given below:
" Attempted to access para(2); index out of bounds because numel(para)=1.
Error in sofunc2 (line 6)
r2 = para(2)
Error in sosmc_sch1_c2 (line 55)
obj_value(i)=sofunc2(current_position{:,i}); "
The code is given below
tic
clear all
close all
clc
% Initializing variables
S=10; % no of swarm particles
d=3; % dimension of swarm particles
c1=2; % self confidence parameter
c2=1; % swarm confidence parameter
C=1; % constriction factor
LB=[0 0 0]; %lower bounds of variables
UB=[2 2 2]; %upper bounds of variables
wmax=0.9; % maximum inertia weight
wmin=0.4; % minimum inertia weight
Xmax=1; % maximum position
iter=1; % initial iteration
R1=rand();
R2=rand();
tolerance=1;
Maxiter=40; % maximum number of iteration
maxrun=5;
dt=1;
particle_best=cell(1,S);
particle_best_value=ones(1,S)*1E10;
global_best_value=1E10;
global_best=zeros(1,d);
obj_value=zeros(1,S); % Objective function value of particles
%%%%%%%%
current_position=cell(1,S); % particle position
velocity=cell(1,S); % particle velocity
dummy=zeros(1,length(d)); % dummy list
% Initialize particle position and velocity
for run=1:maxrun
run
for i=1:S
for j=1:d
dummy(j)=round(LB(j)+rand()*(UB(j)-LB(j)));
end
current_position{i}=dummy;
velocity{i}=current_position{i}/dt;
end
while iter <= Maxiter && tolerance > 0.01
for i=1:S
for j=1:d
if current_position{i}<LB(j) %%handling boundary conditions
current_position{i}=LB(j);
elseif current_position{i}>UB(j)
current_position{i}=UB(j);
end
end
end
for i=1:S
obj_value(i)=sofunc2(current_position{:,i});
end
for i=1:S
if obj_value(i) < particle_best_value(i) % finding best local
particle_best{i}= current_position{i};
particle_best_value(i)=obj_value(i);
end
end
[fmin,index]=min(obj_value); % finding out the best particle
ffmin(iter,run)=fmin; % storing best fitness
ffite(run)=iter; % storing iteration count
if min(obj_value)< global_best_value % finding best global
global_best=current_position{obj_value==min(obj_value)}; % updating gbest and best fitness
global_best_value=min(obj_value);
fmin0=global_best_value;
end
for i=1:S
w=wmax-((wmax-wmin)/Maxiter)*iter;
velocity{i}=C*(w*velocity{i}+c1*R1*(particle_best{i}-current_position{i})+c2*R2*(global_best-current_position{i}));
current_position{i}=current_position{i}+velocity{i};
end
% calculating tolerance
if iter>20;
tolerance=abs(ffmin(iter-20,run)-fmin0)
end
% displaying iterative results
if iter==1
disp(fprintf('Iteration Best particle Objective fun'));
end
disp(fprintf('%8g %8g %8.4f',iter,index,fmin0));
iter=iter+1;
subplot(1,2,1);
plot(current_position{i}(1),'rO')
xlim([-5*Xmax 5*Xmax])
ylim([-5*Xmax 5*Xmax])
title('Particles instant location')
grid on
hold on;
hold off;
subplot(1,2,2);
plot(iter,global_best_value,'bO')
grid on
hold on
xlim([0 Maxiter]);
title('Best Particle value histogram');
xlabel('iteration no')
ylabel('global best value')
% pause(0.05)
% disp(['BEST PARTICLE VALUE >> ' num2str(global_best_value)])
end
% pso algorithm-----------------------------------------------------end
global_best;
fvalue=10*(global_best(1)-1)^2+20*(global_best(2)-2)^2+30*(global_best(3)-3)^2;
fff(run)=fvalue;
rgbest(run,:)=global_best;
disp(sprintf('--------------------------------------'));
% disp(['BEST PARTICLE POSITION >> ' num2str(global_best)])
end
% pso main program------------------------------------------------------end
disp(sprintf('\n'));
disp(sprintf('*********************************************************'));
disp(sprintf('Final Results-----------------------------'));
[bestfun,bestrun]=min(fff)
best_variables=rgbest(bestrun,:)
disp(sprintf('*********************************************************'));
toc
% PSO convergence characteristic
plot(ffmin(1:ffite(bestrun),bestrun),'-k');
xlabel('Iteration');
ylabel('Fitness function value');
title('PSO convergence characteristic')
%##########################################################################
function F = sofunc2(para)
% Track the output of optsim to a signal of 1
% Variables a1 and a2 are shared with RUNTRACKLSQ
c2 = para(1)
r2 = para(2)
W2 = para(3)
disp(sprintf('The value of parameters c2= %3.0f, r2= %3.0f, W2= %3.0f', para(1),para(2),para(3)));
% Compute function value
simopt = simset('solver','ode1','SrcWorkspace','current','DstWorkspace','current'); % Initialize sim options
[tout,xout,yout] = sim('sosmc_c2',[0 200],simopt);
% e=yout-1 ; % compute the error
% sys_overshoot=max(yout)-2 % compute the overshoot
if para(1)<0 || para(1)>5
penalty=500;
elseif para(2)<0 || para(2)>5
penalty=500;
elseif para(3)<0 || para(3)>5
penalty=500;
else
penalty=0;
end
A=5; %B=1;C=5;
F=int_abs_error*A+penalty
end
The code has another part which involves a Simulink block named 'sosmc_c2'. However, there is no issue with that block diagram, and it runs smoothly with other instances of the same program. I would like to know what exactly is causing the error in this particular case and how I should solve it.
Here is the error:
for i=1:S
for j=1:d
if current_position{i}<LB(j) %%handling boundary conditions
current_position{i}=LB(j);
elseif current_position{i}>UB(j)
current_position{i}=UB(j);
end
end
end
You are comparing current_position{i}, which is a 1x3 vector, with LB(j), which is a scalar.
I think you might be calling the function with the wrong parameters:
obj_value(i)=sofunc2(current_position{i});
instead of current_position{:,i}.
As far as I can see, current_position is a column vector of cells, not a matrix. I cannot run the code fully because I am missing sim and simset.
Another error, pointed out by 'daren shan':
When checking the boundary conditions you are comparing the whole vector with a scalar; the comparison should be done element by element.
for i=1:S
for j=1:d
if current_position{i}(j)<LB(j)
current_position{i}(j)=LB(j);
elseif current_position{i}(j)>UB(j)
current_position{i}(j)=UB(j);
end
end
end
What was happening before was that if the whole vector current_position{i} was smaller or larger than LB/UB, the entire vector was replaced by a single value.
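Equivalently, the clamping can be done without the inner loop (a sketch, assuming LB and UB are 1-by-d vectors like current_position{i}):
for i=1:S
    current_position{i} = min(max(current_position{i}, LB), UB); % element-wise clamp to [LB(j), UB(j)]
end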
I have code of the following kind in MATLAB:
indices = find([1 2 2 3 3 3 4 5 6 7 7] == 3)
This returns 4,5,6 - the indices of elements in the array equal to 3. Now, my code does this sort of thing with very long vectors. The vectors are always sorted.
Therefore, I would like a function which replaces the O(n) complexity of find with O(log n), at the expense that the array has to be sorted.
I am aware of ismember, but as far as I know it does not return the indices of all matching items, just the last one (I need all of them).
For reasons of portability, I need the solution to be MATLAB-only (no compiled mex files etc.)
Here is a fast implementation using binary search. This file is also available on GitHub.
function [b,c]=findInSorted(x,range)
%findInSorted fast binary search replacement for ismember(A,B) for the
%special case where the first input argument is sorted.
%
% [a,b] = findInSorted(x,s) returns the range which is equal to s.
% r=a:b and r=find(x == s) produce the same result
%
% [a,b] = findInSorted(x,[from,to]) returns the range which is between from and to
% r=a:b and r=find(x >= from & x <= to) return the same result
%
% For any sorted list x you can replace
% [lia] = ismember(x,from:to)
% with
% [a,b] = findInSorted(x,[from,to])
% lia=a:b
%
% Examples:
%
% x = 1:99
% s = 42
% r1 = find(x == s)
% [a,b] = myFind(x,s)
% r2 = a:b
% %r1 and r2 are equal
%
% See also FIND, ISMEMBER.
%
% Author Daniel Roeske <danielroeske.de>
A=range(1);
B=range(end);
a=1;
b=numel(x);
c=1;
d=numel(x);
if A<=x(1)
b=a;
end
if B>=x(end)
c=d;
end
while (a+1<b)
lw=(floor((a+b)/2));
if (x(lw)<A)
a=lw;
else
b=lw;
end
end
while (c+1<d)
lw=(floor((c+d)/2));
if (x(lw)<=B)
c=lw;
else
d=lw;
end
end
end
Daniel's approach is clever and his myFind2 function is definitely fast, but there are errors/bugs that occur near the boundary conditions or in the case that the upper and lower bounds produce a range outside the set passed in.
Additionally, as he noted in his comment on his answer, his implementation had some inefficiencies that could be improved. I implemented an improved version of his code, which runs faster, while also correctly handling boundary conditions. Furthermore, this code includes more comments to explain what is happening. I hope this helps someone the way Daniel's code helped me here!
function [lower_index,upper_index] = myFindDrGar(x,LowerBound,UpperBound)
% fast O(log2(N)) computation of the range of indices of x that satisfy the
% upper and lower bound values using the fact that the x vector is sorted
% from low to high values. Computation is done via a binary search.
%
% Input:
%
% x- A vector of sorted values from low to high.
%
% LowerBound- Lower boundary on the values of x in the search
%
% UpperBound- Upper boundary on the values of x in the search
%
% Output:
%
% lower_index- The smallest index such that
% LowerBound<=x(index)<=UpperBound
%
% upper_index- The largest index such that
% LowerBound<=x(index)<=UpperBound
if LowerBound>x(end) || UpperBound<x(1) || UpperBound<LowerBound
% no indices satisfy the bounding conditions
lower_index = [];
upper_index = [];
return;
end
lower_index_a=1;
lower_index_b=length(x); % x(lower_index_b) will always satisfy lowerbound
upper_index_a=1; % x(upper_index_a) will always satisfy upperbound
upper_index_b=length(x);
%
% The following loop increases _a and decreases _b until they differ
% by at most 1. Because one of these index variables always satisfies the
% appropriate bound, this means the loop will terminate with either
% lower_index_a or lower_index_b having the minimum possible index that
% satisfies the lower bound, and either upper_index_a or upper_index_b
% having the largest possible index that satisfies the upper bound.
%
while (lower_index_a+1<lower_index_b) || (upper_index_a+1<upper_index_b)
lw=floor((lower_index_a+lower_index_b)/2); % split the upper index
if x(lw) >= LowerBound
lower_index_b=lw; % decrease lower_index_b (whose x value remains \geq to lower bound)
else
lower_index_a=lw; % increase lower_index_a (whose x value remains less than lower bound)
if (lw>upper_index_a) && (lw<upper_index_b)
upper_index_a=lw;% increase upper_index_a (whose x value remains less than lower bound and thus upper bound)
end
end
up=ceil((upper_index_a+upper_index_b)/2);% split the lower index
if x(up) <= UpperBound
upper_index_a=up; % increase upper_index_a (whose x value remains \leq to upper bound)
else
upper_index_b=up; % decrease upper_index_b
if (up<lower_index_b) && (up>lower_index_a)
lower_index_b=up;%decrease lower_index_b (whose x value remains greater than upper bound and thus lower bound)
end
end
end
if x(lower_index_a)>=LowerBound
lower_index = lower_index_a;
else
lower_index = lower_index_b;
end
if x(upper_index_b)<=UpperBound
upper_index = upper_index_b;
else
upper_index = upper_index_a;
end
Note that the improved version of Daniel's searchFor function is now simply:
function [lower_index,upper_index] = mySearchForDrGar(x,value)
[lower_index,upper_index] = myFindDrGar(x,value,value);
EDIT many years later: there was an error in the last two if/else blocks, fixed it.
ismember will give you all the indexes if you look at the first output:
>> x = [1 2 2 3 3 3 4 5 6 7 7];
>> [tf,loc]=ismember(x,3);
>> inds = find(tf)
inds =
4 5 6
You just need to use the right order of inputs.
Note that there is a helper function used by ismember that you can call directly:
% ISMEMBC - S must be sorted - Returns logical vector indicating which
% elements of A occur in S
tf = ismembc(x,3);
inds = find(tf);
Using ismembc directly will save computation time, since ismember calls issorted first; note that calling ismembc this way skips the sortedness check.
Note that newer versions of MATLAB have a builtin, called via builtin('_ismemberoneoutput',a,b), with the same functionality.
Since the above applications of ismember, etc. are somewhat backwards (searching for each element of x in the second argument rather than the other way around), the code is much slower than necessary. As the OP points out, it is unfortunate that [~,loc]=ismember(3,x) only provides the location of the first occurrence of 3 in x, rather than all. However, if you have a recent version of MATLAB (R2012b+, I think), you can use yet more undocumented builtin functions to get the first and last indexes! These are ismembc2 and builtin('_ismemberfirst',searchfor,x):
firstInd = builtin('_ismemberfirst',searchfor,x); % find first occurrence
lastInd = ismembc2(searchfor,x); % find last occurrence
% lastInd = ismembc2(searchfor,x(firstInd:end))+firstInd-1; % slower
inds = firstInd:lastInd;
Still slower than Daniel R.'s great MATLAB code, but there it is (rntmX added to randomatlabuser's benchmark) just for fun:
mean([rntm1 rntm2 rntm3 rntmX])
ans =
0.559204323050486 0.263756852283128 0.000017989974213 0.000153682125682
Here are the bits of documentation for these functions inside ismember.m:
% ISMEMBC2 - S must be sorted - Returns a vector of the locations of
% the elements of A occurring in S. If multiple instances occur,
% the last occurrence is returned
% ISMEMBERFIRST(A,B) - B must be sorted - Returns a vector of the
% locations of the elements of A occurring in B. If multiple
% instances occur, the first occurence is returned.
There is actually reference to an ISMEMBERLAST builtin, but it doesn't seem to exist (yet?).
This is not an answer - I am just comparing the running time of the three solutions suggested by chappjc and Daniel R.
N = 5e7; % length of vector
p = 0.99; % probability
KK = 100; % number of instances
rntm1 = zeros(KK, 1); % runtime with ismember
rntm2 = zeros(KK, 1); % runtime with ismembc
rntm3 = zeros(KK, 1); % runtime with Daniel's function
for kk = 1:KK
x = cumsum(rand(N, 1) > p);
searchfor = x(ceil(4*N/5));
tic
[tf,loc]=ismember(x, searchfor);
inds1 = find(tf);
rntm1(kk) = toc;
tic
tf = ismembc(x, searchfor);
inds2 = find(tf);
rntm2(kk) = toc;
tic
a=1;
b=numel(x);
c=1;
d=numel(x);
while (a+1<b||c+1<d)
lw=(floor((a+b)/2));
if (x(lw)<searchfor)
a=lw;
else
b=lw;
end
lw=(floor((c+d)/2));
if (x(lw)<=searchfor)
c=lw;
else
d=lw;
end
end
inds3 = (b:c)';
rntm3(kk) = toc;
end
Daniel's binary search is very fast.
% Mean of running time
mean([rntm1 rntm2 rntm3])
% 0.631132275892504 0.295233981447746 0.000400786666188
% Percentiles of running time
prctile([rntm1 rntm2 rntm3], [0 25 50 75 100])
% 0.410663611685559 0.175298784336465 0.000012828868032
% 0.429120717937665 0.185935198821797 0.000014539383770
% 0.582281366154709 0.268931132925888 0.000019243302048
% 0.775917520641649 0.385297304740352 0.000026940622867
% 1.063753914942895 0.592429428396956 0.037773746662356
I needed a function like this. Thanks for the post @Daniel!
I worked with it a little because I needed to find several indexes in the same array. I wanted to avoid the overhead of arrayfun (or the like) and of calling the function multiple times. So you can pass a bunch of values in range and you will get their indexes in the array.
function idx = findInSorted(x,range)
% Author Dídac Rodríguez Arbonès (May 2018)
% Based on Daniel Roeske's solution:
% Daniel Roeske <danielroeske.de>
% https://github.com/danielroeske/danielsmatlabtools/blob/master/matlab/data/findinsorted.m
range = sort(range);
idx = nan(size(range));
for i=1:numel(range)
idx(i) = aux(x, range(i));
end
end
function b = aux(x, lim)
a=1;
b=numel(x);
if lim<=x(1)
b=a;
end
if lim>=x(end)
a=b;
end
while (a+1<b)
lw=(floor((a+b)/2));
if (x(lw)<lim)
a=lw;
else
b=lw;
end
end
end
I guess you could use a parfor or arrayfun instead. I have not tested at what size of range it pays off, though.
Another possible improvement would be to use the previously found indexes (if range is sorted) to decrease the search space. I am skeptical of its potential to save CPU time, because of the O(log n) runtime.
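A rough sketch of that idea (untested; the name findInSortedSeq is only for illustration; assumes x and range are both sorted ascending and keeps the same edge-case conventions as aux):
function idx = findInSortedSeq(x, range)
% Sketch only: reuse the previous hit as the left edge of the next search,
% so later searches scan a shrinking window instead of the whole vector.
idx = nan(size(range));
a = 1;                                % left edge of the remaining search window
for i = 1:numel(range)
    lim = range(i);
    if lim <= x(a)
        idx(i) = a;                   % current left edge already satisfies x >= lim
        continue
    end
    if lim >= x(end)
        idx(i) = numel(x);            % same edge-case convention as aux
        a = numel(x);
        continue
    end
    b = numel(x);
    while a + 1 < b                   % invariant: x(a) < lim <= x(b)
        lw = floor((a + b) / 2);
        if x(lw) < lim
            a = lw;
        else
            b = lw;
        end
    end
    idx(i) = b;                       % first index with x(b) >= lim
    a = b;                            % warm start: the next value cannot be further left
end
end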
The final findInSorted function above ended up running slightly faster. I used @randomatlabuser's framework for that:
N = 5e6; % length of vector
p = 0.99; % probability
KK = 100; % number of instances
rntm1 = zeros(KK, 1); % runtime with ismember
rntm2 = zeros(KK, 1); % runtime with ismembc
rntm3 = zeros(KK, 1); % runtime with Daniel's function
for kk = 1:KK
x = cumsum(rand(N, 1) > p);
searchfor = x(ceil(4*N/5));
tic
range = sort(searchfor);
idx = nan(size(range));
for i=1:numel(range)
idx(i) = aux(x, range(i));
end
rntm1(kk) = toc;
tic
a=1;
b=numel(x);
c=1;
d=numel(x);
while (a+1<b||c+1<d)
lw=(floor((a+b)/2));
if (x(lw)<searchfor)
a=lw;
else
b=lw;
end
lw=(floor((c+d)/2));
if (x(lw)<=searchfor)
c=lw;
else
d=lw;
end
end
inds3 = (b:c)';
rntm2(kk) = toc;
end
%%
function b = aux(x, lim)
a=1;
b=numel(x);
if lim<=x(1)
b=a;
end
if lim>=x(end)
a=b;
end
while (a+1<b)
lw=(floor((a+b)/2));
if (x(lw)<lim)
a=lw;
else
b=lw;
end
end
end
It is not a big improvement, but it helps because I need to run several thousand searches.
% Mean of running time
mean([rntm1 rntm2])
% 9.9624e-05 5.6303e-05
% Percentiles of running time
prctile([rntm1 rntm2], [0 25 50 75 100])
% 3.0435e-05 1.0524e-05
% 3.4133e-05 1.2231e-05
% 3.7262e-05 1.3369e-05
% 3.9111e-05 1.4507e-05
% 0.0027426 0.0020301
I hope this can help somebody.
EDIT
If there is a significant chance of having exact matches, it pays off to use the very fast built-in ismember before calling the function:
[found, idx] = ismember(range, x);
idx(~found) = arrayfun(@(r) aux(x, r), range(~found));
I've been asked to write a Matlab program to solve LPs using the Revised Simplex Method.
The code I wrote runs without problems on the input data, although I've realised it doesn't solve the problem properly, as it does not update the inverse of the basis B (the core idea of the abovementioned method).
The problem is only related to one part of the code, the one at the bottom of the script, which aims at:
Computing the new inverse basis B^-1 by performing elementary row operations on [B^-1 u] (pivot row index is l_out). The vector u is transformed into a unit vector with u(l_out) = 1 and u(i) = 0 for other i.
Here's the code I wrote:
%% Implementation of the revised Simplex. Solves a linear
% programming problem of the form
%
% min c'*x
% s.t. Ax = b
% x >= 0
%
% The function input parameters are the following:
% A: The constraint matrix
% b: The rhs vector
% c: The vector of cost coefficients
% C: The indices of the basic variables corresponding to an
% initial basic feasible solution
%
% The function returns:
% x_opt: Decision variable values at the optimal solution
% f_opt: Objective function value at the optimal solution
%
% Usage: [x_opt, f_opt] = S12345X(A,b,c,C)
% NOTE: Replace 12345X with your own student number
% and rename the file accordingly
function [x_opt, f_opt] = SXXXXX(A,b,c,C)
%% Initialization phase
% Initialize the vector of decision variables
x = zeros(length(c),1);
% Create the initial Basis matrix, compute its inverse and
% compute the inital basic feasible solution
B=A(:,C);
invB = inv(B);
x(C) = invB*b;
%% Iteration phase
n_max = 10; % At most n_max iterations
for n = 1:n_max % Main loop
% Compute the vector of reduced costs c_r
c_B = c(C); % Basic variable costs
p = (c_B'*invB)'; % Dual variables
c_r = c' - p'*A; % Vector of reduced costs
% Check if the solution is optimal. If optimal, use
% 'return' to break from the function, e.g.
J = find(c_r < 0); % Find indices with negative reduced costs
if (isempty(J))
f_opt = c'*x;
x_opt = x;
return;
end
% Choose the entering variable
j_in = J(1);
% Compute the vector u (i.e., the reverse of the basic directions)
u = invB*A(:,j_in);
I = find(u > 0);
if (isempty(I))
f_opt = -inf; % Optimal objective function cost = -inf
x_opt = []; % Produce empty vector []
return % Break from the function
end
% Compute the optimal step length theta
theta = min(x(C(I))./u(I));
L = find(x(C)./u == theta); % Find all indices with ratio theta
% Select the exiting variable
l_out = L(1);
% Move to the adjacent solution
x(C) = x(C) - theta*u;
% Value of the entering variable is theta
x(j_in) = theta;
% Update the set of basic indices C
C(l_out) = j_in;
% Compute the new inverse basis B^-1 by performing elementary row
% operations on [B^-1 u] (pivot row index is l_out). The vector u is trans-
% formed into a unit vector with u(l_out) = 1 and u(i) = 0 for
% other i.
M=horzcat(invB,u);
[f g]=size(M);
R(l_out,:)=M(l_out,:)/M(l_out,j_in); % Copy row l_out, normalizing M(l_out,j_in) to 1
u(l_out)=1;
for k = 1:f % For all matrix rows
if (k ~= l_out) % Other then l_out
u(k)=0;
R(k,:)=M(k,:)-M(k,j_in)*R(l_out,:); % Set them equal to the original matrix Minus a multiple of normalized row l_out, making R(k,j_in)=0
end
end
invM=horzcat(u,invB);
% Check if too many iterations are performed (increase n_max to
% allow more iterations)
if(n == n_max)
fprintf('Max number of iterations performed!\n\n');
return
end
end % End for (the main iteration loop)
end % End function
%% Example 3.5 from the book (A small test problem)
% Data in standard form:
% A = [1 2 2 1 0 0;
% 2 1 2 0 1 0;
% 2 2 1 0 0 1];
% b = [20 20 20]';
% c = [-10 -12 -12 0 0 0]';
% C = [4 5 6]; % Indices of the basic variables of
% % the initial basic feasible solution
%
% The optimal solution
% x_opt = [4 4 4 0 0 0]' % Optimal decision variable values
% f_opt = -136 % Optimal objective function cost
Ok, after a lot of hours spent on intensive use of printmat and disp to understand what was happening inside the code from a mathematical point of view, I realized it was a problem with the index j_in and with the normalization in the case of division by zero, so I managed to solve the issue as follows.
Now it runs perfectly. Cheers.
%% Implementation of the revised Simplex. Solves a linear
% programming problem of the form
%
% min c'*x
% s.t. Ax = b
% x >= 0
%
% The function input parameters are the following:
% A: The constraint matrix
% b: The rhs vector
% c: The vector of cost coefficients
% C: The indices of the basic variables corresponding to an
% initial basic feasible solution
%
% The function returns:
% x_opt: Decision variable values at the optimal solution
% f_opt: Objective function value at the optimal solution
%
% Usage: [x_opt, f_opt] = S12345X(A,b,c,C)
% NOTE: Replace 12345X with your own student number
% and rename the file accordingly
function [x_opt, f_opt] = S472366(A,b,c,C)
%% Initialization phase
% Initialize the vector of decision variables
x = zeros(length(c),1);
% Create the initial Basis matrix, compute its inverse and
% compute the inital basic feasible solution
B=A(:,C);
invB = inv(B);
x(C) = invB*b;
%% Iteration phase
n_max = 10; % At most n_max iterations
for n = 1:n_max % Main loop
% Compute the vector of reduced costs c_r
c_B = c(C); % Basic variable costs
p = (c_B'*invB)'; % Dual variables
c_r = c' - p'*A; % Vector of reduced costs
% Check if the solution is optimal. If optimal, use
% 'return' to break from the function, e.g.
J = find(c_r < 0); % Find indices with negative reduced costs
if (isempty(J))
f_opt = c'*x;
x_opt = x;
return;
end
% Choose the entering variable
j_in = J(1);
% Compute the vector u (i.e., the reverse of the basic directions)
u = invB*A(:,j_in);
I = find(u > 0);
if (isempty(I))
f_opt = -inf; % Optimal objective function cost = -inf
x_opt = []; % Produce empty vector []
return % Break from the function
end
% Compute the optimal step length theta
theta = min(x(C(I))./u(I));
L = find(x(C)./u == theta); % Find all indices with ratio theta
% Select the exiting variable
l_out = L(1);
% Move to the adjacent solution
x(C) = x(C) - theta*u;
% Value of the entering variable is theta
x(j_in) = theta;
% Update the set of basic indices C
C(l_out) = j_in;
% Compute the new inverse basis B^-1 by performing elementary row
% operations on [B^-1 u] (pivot row index is l_out). The vector u is trans-
% formed into a unit vector with u(l_out) = 1 and u(i) = 0 for
% other i.
M=horzcat(u, invB);
[f g]=size(M);
if (theta~=0)
M(l_out,:)=M(l_out,:)/M(l_out,1); % Copy row l_out, normalizing M(l_out,1) to 1
end
for k = 1:f % For all matrix rows
if (k ~= l_out) % Other then l_out
M(k,:)=M(k,:)-M(k,1)*M(l_out,:); % Set them equal to the original matrix Minus a multiple of normalized row l_out, making R(k,j_in)=0
end
end
invB=M(1:3,2:end);
% Check if too many iterations are performed (increase n_max to
% allow more iterations)
if(n == n_max)
fprintf('Max number of iterations performed!\n\n');
return
end
end % End for (the main iteration loop)
end % End function
%% Example 3.5 from the book (A small test problem)
% Data in standard form:
% A = [1 2 2 1 0 0;
% 2 1 2 0 1 0;
% 2 2 1 0 0 1];
% b = [20 20 20]';
% c = [-10 -12 -12 0 0 0]';
% C = [4 5 6]; % Indices of the basic variables of
% % the initial basic feasible solution
%
% The optimal solution
% x_opt = [4 4 4 0 0 0]' % Optimal decision variable values
% f_opt = -136 % Optimal objective function cost
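One note on the update step: the line invB=M(1:3,2:end); hardcodes the three constraints of the test problem. A dimension-agnostic sketch of the same update (just a sketch, assuming the pivot entry M(l_out,1) is nonzero) would be:
M = horzcat(u, invB);                          % M = [u B^-1], l_out is the pivot row
M(l_out,:) = M(l_out,:) / M(l_out,1);          % scale the pivot row so the pivot entry becomes 1
for k = 1:size(M,1)                            % eliminate column 1 in every other row
    if k ~= l_out
        M(k,:) = M(k,:) - M(k,1)*M(l_out,:);
    end
end
invB = M(:,2:end);                             % new basis inverse, for any number of constraints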
I need to adjust the following algorithm in Matlab because the double loop in step (2) takes too much time when n is large (I should be able to run the algorithm for n=15). Any ideas?
n=3;
% (1) construct A: list of DISPOSITIONS of the elements of the set {0,1} in n
%places (organise 2 elements in n places with possibility of repetitions and the order matters)
A=makeindex(n); %matrix 2^n x n (FAST)
% (2) modify A: it should give the list of COMBINATIONS of the elements of the set
%{0,1} in n-1 places (organise 2 elements in n-1 places with repetitions and the
%order does NOT matter), repeated twice:
% once when the n-th element is 0, the other when the n-th element is 1
Xr=A(:,n);
m=sum(A,2); %vector totalx1
%each element is the total number of ones in the
%corresponding row
drop=false(2^n,1); %logical vector totalx1
for i=1:2^n
parfor j=1:2^n
if j>i
if m(i)==m(j) && Xr(i)==Xr(j) %(VERY SLOW)
drop(j)=true;
end
end
end
end
A(drop,:)=[];
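For reference, the drop rule implemented by this loop (keep only the first row for each pair of row sum and last element) can also be written without the double loop; a sketch, assuming unique with the 'rows' and 'stable' options matches the intended semantics:
key = [m, Xr];                             % (number of ones, last element) for each row
[~, keep] = unique(key, 'rows', 'stable'); % first occurrence of each pair, in original order
A = A(keep, :);                            % same result as marking duplicates and A(drop,:)=[]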
The function makeindex is
function [index]=makeindex(k) %
total=2^k; %
index=zeros(total,k); %
for i=1:k %
ii=1; %
cc=1; %
c=total/(2^i); %
while ii<=total %
if cc <=c %
index(ii,i)=1; %
cc=cc+1; %
ii=ii+1; %
else %
ii=ii+c; %
cc=1; %
end %
end %
end %
%
end
Here is my solution, if that's what you want:
A=zeros(n,2*n);
A(:,1)=1;
for i=2:2:n*2-1
A(:,i-1)=circshift(A(:,i-1),[-1 0]);
A(:,i)=A(:,i-1);
A(end,i)=0;
A(:,i+1)=A(:,i);
end
A(:,end-1)=circshift(A(:,end-1),[-1 0]);
A=A';
For n = 10:
Elapsed time is 0.000976 seconds.
Old code:
Elapsed time is 32.804602 seconds.
n=15:
Elapsed time is 0.000866 seconds.
Old code:
... still calculating... ;)