Issues with Simplex method for linear programming in Matlab (linprog function) - matlab

I am using the linprog function in Matlab to solve a set of large linear programming problems. I have 2601 decision variables, 51 inequality constraints, 71 equality constraints, and lower bounds of 0 for all variables.
The coefficients in the objective function and constraints vary across the problems. I am using the simplex method (when I try active-set and interior-point, the program never stops running for as long as I have waited, which was more than a few hours).
The simplex method converges very quickly for some of the problems, and for some of them (also very quickly) it shows this message:
Exiting: The constraints are overly stringent; no feasible starting point found.
However, even for the ones with that message, it still provides a solution that satisfies the constraints. Can I just ignore that message and use the solutions, or is the message important and the solution probably not optimal?
Update: It turned out that the interior-point method solves some of them, but not the others. So in the code below, I used the interior-point method for the ones that work with it, and the simplex method for the rest.
These are my files and this is my code:
clc; clear;
%distances
t1 = readtable('t.xlsx', 'ReadVariableNames',false);
ti = table2array(t1);
sz = size(ti);
tiv = reshape(ti, [1,sz(1)*sz(2)]);
%crude oil production and attraction
A = readtable('A.xlsx', 'ReadVariableNames',false);
Ai = table2array(A);
P = readtable('P.xlsx', 'ReadVariableNames',false);
Pi = table2array(P);
%others
one1 = readtable('A Matrix.xlsx', 'ReadVariableNames',false);
one = table2array(one1);
two1 = readtable('Aeq Matrix.xlsx', 'ReadVariableNames',false);
two = table2array(two1);
zero = zeros(sz(1), sz(1));
infin = inf(sz(1), sz(1));
zerov = reshape(zero, [1,sz(1)*sz(2)]);
infinv = reshape(infin, [1,sz(1)*sz(2)]);
%OF
f = (tiv).^1;
%linear program
%x = linprog(f,A,b,Aeq,beq,lb,ub)
options1 = optimoptions('linprog','Algorithm','interior-point');
options2 = optimoptions('linprog','Algorithm','simplex');
x1999 = vec2mat(linprog(f,one,Pi(1,1:end),two,Ai(1,1:end),zerov,infinv,zerov,options2),sz(1));
x2000 = vec2mat(linprog(f,one,Pi(2,1:end),two,Ai(2,1:end),zerov,infinv,zerov,options1),sz(1));
x2001 = vec2mat(linprog(f,one,Pi(3,1:end),two,Ai(3,1:end),zerov,infinv,zerov,options1),sz(1));
x2002 = vec2mat(linprog(f,one,Pi(4,1:end),two,Ai(4,1:end),zerov,infinv,zerov,options1),sz(1));
x2003 = vec2mat(linprog(f,one,Pi(5,1:end),two,Ai(5,1:end),zerov,infinv,zerov,options1),sz(1));
x2004 = vec2mat(linprog(f,one,Pi(6,1:end),two,Ai(6,1:end),zerov,infinv,zerov,options1),sz(1));
x2005 = vec2mat(linprog(f,one,Pi(7,1:end),two,Ai(7,1:end),zerov,infinv,zerov,options1),sz(1));
x2006 = vec2mat(linprog(f,one,Pi(8,1:end),two,Ai(8,1:end),zerov,infinv,zerov,options1),sz(1));
x2007 = vec2mat(linprog(f,one,Pi(9,1:end),two,Ai(9,1:end),zerov,infinv,zerov,options2),sz(1));
x2008 = vec2mat(linprog(f,one,Pi(10,1:end),two,Ai(10,1:end),zerov,infinv,zerov,options2),sz(1));
x2009 = vec2mat(linprog(f,one,Pi(11,1:end),two,Ai(11,1:end),zerov,infinv,zerov,options2),sz(1));
x2010 = vec2mat(linprog(f,one,Pi(12,1:end),two,Ai(12,1:end),zerov,infinv,zerov,options2),sz(1));
x2011 = vec2mat(linprog(f,one,Pi(13,1:end),two,Ai(13,1:end),zerov,infinv,zerov,options2),sz(1));
x2012 = vec2mat(linprog(f,one,Pi(14,1:end),two,Ai(14,1:end),zerov,infinv,zerov,options1),sz(1));
x2013 = vec2mat(linprog(f,one,Pi(15,1:end),two,Ai(15,1:end),zerov,infinv,zerov,options2),sz(1));
x2014 = vec2mat(linprog(f,one,Pi(16,1:end),two,Ai(16,1:end),zerov,infinv,zerov,options2),sz(1));
x2015 = vec2mat(linprog(f,one,Pi(17,1:end),two,Ai(17,1:end),zerov,infinv,zerov,options2),sz(1));
x2016 = vec2mat(linprog(f,one,Pi(18,1:end),two,Ai(18,1:end),zerov,infinv,zerov,options1),sz(1));
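A side note: vec2mat ships with the Communications System Toolbox (newer releases recommend reshape instead), so the code above will not run without that toolbox. A sketch of a hypothetical drop-in replacement, relying on vec2mat filling matrices row by row:
% tomat is a hypothetical helper equivalent to vec2mat(v, ncols) when
% numel(v) is an exact multiple of ncols (true here: 2601 = 51*51)
tomat = @(v, ncols) reshape(v, ncols, []).';
x1999 = tomat(linprog(f,one,Pi(1,1:end),two,Ai(1,1:end),zerov,infinv,zerov,options2), sz(1));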

In case somebody wants to know what the problem was: for the programs with the error, there was actually no feasible point, and what the error said was correct. I found this out by running the same linear programs with a vector of zeros for the objective function's coefficients and getting the same error (a method recommended in Matlab's documentation).
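For reference, a minimal sketch of that feasibility check (variable names follow the code above; exitflag -2 is linprog's documented code for "no feasible point found"):
fcheck = zeros(size(f));  % zero objective: any infeasibility exit now reflects the constraints alone
[xf,~,exitflag] = linprog(fcheck,one,Pi(1,1:end),two,Ai(1,1:end),zerov,infinv,[],options1);
if exitflag == -2
    disp('No feasible point exists, so the original message was correct.')
end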

Related

How do I properly "slice" a 4D matrix in Matlab in a parfor loop?

I am trying to make a portion of my code run faster in Matlab, and I'd like to use parfor. When I try to, I get the following error about one of my variables, D_all:
"The PARFOR loop cannot run because of the way D_all is used".
Here is a sample of my code.
M = 161;
N = 24;
P = 161;
parfor n=1:M*N*P
[j,i,k] = ind2sub([N,M,P],n);
r0 = Rw(n,1:3);
R0 = repmat(r0,M*N*P,1);
delta = sqrt(dXnd(i)^2 + dZnd(k)^2);
d = R_prime - R0;
inS = Rw_prime(find(sqrt(sum(d.^2,2))<0.8*delta),:);
if isempty(inS)
D_all(j,i,k,tj) = D_all(j,i,k,tj-1);
else
y0 = r0(2);
inC = inS(find(inS(:,2)==y0),:);
dw = sqrt(sum(d(find(sqrt(sum(d.^2,2))<0.8*delta & d(:,2)==0),:).^2,2));
V_avg = sum(dw.^(-1).*inC(:,4))/sum(dw.^(-1));
D_all(j,i,k,tj) = V_avg;
end
end
I'm not very familiar with parallel computing, and I've looked at the guides online and don't really understand how to apply them to my situation. I guess I need to "slice" D_all but I don't know how to do that.
EDIT: I think I understand that the major problem is that when using D_all I have tj and tj-1.
EDIT 2: I didn't show this above (it probably would have been helpful), but I defined D_all(:,:,:,1) = V_1, where V_1 corresponds to a previous time step. I tried making multiple variables V_2, V_3, etc. for each step and replacing D_all(j,i,k,tj-1) with V_1(j,i,k). This still led to the same error I am seeing with D_all:
"Valid indices for D_all are restricted for PARFOR loops"

Speeding up matlab for loop

I have a system of 5 ODEs with nonlinear terms involved. I am trying to vary 3 parameters over some ranges to see what parameters would produce the necessary behaviour that I am looking for.
The issue is that I have written the code with 3 for loops, and it takes a very long time to get the output.
I am also storing the parameter values within the loops when it meets a parameter set that satisfies an ODE event.
This is how I have implemented it in matlab.
function [m,cVal,x,y]=parameters()
b=5000;
q=0;
r=10^4;
s=0;
n=10^-8;
time=3000;
m=[];
cVal=[];
x=[];
y=[];
val1=0.1:0.01:5;
val2=0.1:0.2:8;
val3=10^-13:10^-14:10^-11;
for i=1:length(val1)
for j=1:length(val2)
for k=1:length(val3)
options = odeset('AbsTol',1e-15,'RelTol',1e-13,'Events',@eventfunction);
[t,sol,te,ye]=ode45(@(t,y)systemFunc(t,y,[val1(i),val2(j),val3(k)]),0:time,[b,q,s,r,n],options); % 'sol' so the solution matrix does not clobber the results vector y
if length(te)==1
m=[m;val1(i)];
cVal=[cVal;val2(j)];
x=[x;val3(k)];
y=[y;ye(1)];
end
end
end
end
end
Is there any other way that I can use to speed up this process?
Profile viewer results
I have written the system of ODEs in a simple form like
function s=systemFunc(t,y,p)
s= zeros(2,1);
s(1)=f*y(1)*(1-(y(1)/k))-p(1)*y(2)*y(1)/(p(2)*y(2)+y(1));
s(2)=p(3)*y(1)-d*y(2);
end
f,d,k are constant parameters.
The equations are more complicated than what's shown here, as it's a system of 5 ODEs with lots of nonlinear terms interacting with each other.
Tommaso is right. Preallocating will save some time.
But I would guess that there is fundamentally not a lot you can do since you are running ode45 in a loop. ode45 itself may be the bottleneck.
I would suggest you profile your code to see where the bottleneck is:
profile on
parameters(... )
profile viewer
I would guess that ode45 is the problem. Probably you will find that you should actually focus your time on optimizing the systemFunc code for performance. But you won't know that until you run the profiler.
EDIT
Based on the profiler output and the additional code, I see some things that will help.
It seems like packing your parameter values into a vector is hurting you. Instead of
@(t,y)systemFunc(t,y,[val1(i),val2(j),val3(k)])
try
@(t,y)systemFunc(t,y,val1(i),val2(j),val3(k))
where your system function is defined as
function s=systemFunc(t,y,p1,p2,p3)
s= zeros(2,1);
s(1)=f*y(1)*(1-(y(1)/k))-p1*y(2)*y(1)/(p2*y(2)+y(1));
s(2)=p3*y(1)-d*y(2);
end
Next, note that you don't have to preallocate space in systemFunc; just combine the two expressions in the output:
function s=systemFunc(t,y,p1,p2,p3)
s = [ f*y(1)*(1-(y(1)/k))-p1*y(2)*y(1)/(p2*y(2)+y(1));
      p3*y(1)-d*y(2) ];
end
Finally, note that ode45 is internally taking about 1/3 of your runtime. There is not much you will be able to do about that. If you can live with it, I would suggest increasing your 'AbsTol' and 'RelTol' to more reasonable numbers. Those values are really small, and are making ode45 run for a really long time. If you can live with it, try increasing them to something like 1e-6 or 1e-8 and see how much the performance increases. Alternatively, depending on how smooth your function is, you might be able to do better with a different integrator (like ode23). But your mileage will vary based on how smooth your problem is.
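For instance, a sketch of the looser-tolerance variant (the values 1e-8 and 1e-6 are illustrative, not tuned):
options = odeset('AbsTol',1e-8,'RelTol',1e-6,'Events',@eventfunction);
% or, if the problem is smooth enough, a lower-order integrator with the same call:
% [t,sol,te,ye] = ode23(@(t,y)systemFunc(t,y,val1(i),val2(j),val3(k)),0:time,[b,q,s,r,n],options);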
I have two suggestions for you.
1. Preallocate the vectors in which you store your results, and use an increasing index to populate them in each iteration.
2. Since the options you use are always the same, instantiate them outside the loop only once.
Final code:
function [m,cVal,x,y] = parameters()
b = 5000;
q = 0;
r = 10^4;
s = 0;
n = 10^-8;
time = 3000;
options = odeset('AbsTol',1e-15,'RelTol',1e-13,'Events',@eventfunction);
val1 = 0.1:0.01:5;
val1_len = numel(val1);
val2 = 0.1:0.2:8;
val2_len = numel(val2);
val3 = 10^-13:10^-14:10^-11;
val3_len = numel(val3);
total_len = val1_len * val2_len * val3_len;
m = NaN(total_len,1);
cVal = NaN(total_len,1);
x = NaN(total_len,1);
y = NaN(total_len,1);
res_offset = 1;
for i = 1:val1_len
for j = 1:val2_len
for k = 1:val3_len
[t,sol,te,ye] = ode45(@(t,y)systemFunc(t,y,[val1(i),val2(j),val3(k)]),0:time,[b,q,s,r,n],options); % 'sol' so the solution does not clobber the preallocated y
if (length(te) == 1)
m(res_offset) = val1(i);
cVal(res_offset) = val2(j);
x(res_offset) = val3(k);
y(res_offset) = ye(1);
end
res_offset = res_offset + 1;
end
end
end
end
If you only want to preserve result values that have been correctly computed, you can remove the rows containing NaNs at the bottom of your function. Indexing on one of the vectors will be enough to clear everything:
rows_ok = ~isnan(y);
m = m(rows_ok);
cVal = cVal(rows_ok);
x = x(rows_ok);
y = y(rows_ok);
In continuation of the other suggestions, I have two more for you:
1. You might want to try a different solver: ode45 is for non-stiff problems, but from the looks of it your problem could be stiff (the parameters span very different orders of magnitude). Try, for instance, the ode23s method.
2. Without knowing which event you are looking for, it may be possible to use a logarithmic search rather than a linear one, e.g. the bisection method, as sketched below. This will severely cut down on the number of times you have to solve the equations.
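A generic sketch of that idea (eventFires is a hypothetical wrapper that runs ode45 for one parameter value and reports whether the event triggered; it assumes the outcome flips exactly once over [lo, hi]):
function v = bisectParam(eventFires, lo, hi, tol)
% Bisection over a single parameter: O(log2((hi-lo)/tol)) ODE solves
% instead of one solve per grid point.
while (hi - lo) > tol
    mid = (lo + hi)/2;
    if eventFires(mid) == eventFires(lo)
        lo = mid;   % threshold lies above mid
    else
        hi = mid;   % threshold lies at or below mid
    end
end
v = (lo + hi)/2;    % approximate parameter value at which the event appears
end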

Using PCA before classification

I am using PCA to reduce the number of features before training a Random Forest. I first used around 70 principal components out of 125, which captured around 99% of the energy (according to the eigenvalues). I got much worse results after training Random Forests with the new transformed features. After that I used all the principal components and I got the same results as when I used 70. This made no sense to me, since it is the same feature space, only in a different basis (the space has only been rotated, so that should not affect the decision boundary).
Does anyone have the idea what may be the problem here?
Here is my code
clc;
clear all;
close all;
load patches_training_256.txt
load patches_testing_256.txt
Xtr = patches_training_256(:,2:end);
Xtr = Xtr';
Ytr = patches_training_256(:,1);
Ytr = Ytr';
Xtest = patches_testing_256(:,2:end);
Xtest = Xtest';
Ytest = patches_testing_256(:,1);
Ytest = Ytest';
data_size = size(Xtr, 2);
feature_size = size(Xtr, 1);
mu = mean(Xtr,2);
sigma = std(Xtr,0,2);
mu_mat = repmat(mu,1,data_size);
sigma_mat = repmat(sigma,1,data_size);
covMat = ((Xtr - mu_mat)./sigma_mat) * ((Xtr - mu_mat)./sigma_mat)' / data_size; % 'covMat' avoids shadowing the built-in cov
[v, d] = eig(covMat);
%[U S V] = svd(((Xtr - mu_mat)./sigma_mat)');
k = 124;
%Ureduce = U(:,1:k);
%XtrReduce = ((Xtr - mu_mat)./sigma_mat) * Ureduce;
XtrReduce = v'*((Xtr - mu_mat)./sigma_mat);
B = TreeBagger(300, XtrReduce', Ytr', 'Prior', 'Empirical', 'NPrint', 1);
data_size_test = size(Xtest, 2);
mu_test = repmat(mu,1,data_size_test);
sigma_test = repmat(sigma,1,data_size_test);
XtestReduce = v' * ((Xtest - mu_test) ./ sigma_test);
Ypredict = predict(B,XtestReduce');
err = sum(Ytest' ~= (double(cell2mat(Ypredict)) - 48)) % '- 48' maps ASCII digit characters back to numeric labels; 'err' avoids shadowing error()
Random forest heavily depends on the choice of basis. It is not a linear model, which would be (up to normalization) rotation invariant; RF completely changes behaviour once you "rotate the space". The reason is that it uses decision trees as base classifiers, which analyze each feature completely independently, so as a result it fails to find any linear combination of features. Once you rotate your space, you change the "meaning" of the features. There is nothing wrong with that; tree-based classifiers are simply a rather bad choice to apply after such transformations. Use feature selection methods instead (methods which select valuable features without creating any linear combinations). In fact, RFs themselves can be used for this task thanks to their internal "feature importance" computation.
There is already a Matlab function, princomp, which does PCA for you (superseded by pca in newer releases). I would suggest using it rather than risking numerical errors in a hand-rolled version. They have done it for us. :)
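For example, a minimal sketch using the built-in routine on the standardized data from the question (pca lives in the Statistics and Machine Learning Toolbox; use princomp on older releases):
Z = ((Xtr - mu_mat)./sigma_mat)';        % rows = observations, as pca expects
[coeff, score] = pca(Z);                 % coeff: loadings; score: projected data
XtrReduce = score(:, 1:k);               % keep the first k principal components
Ztest = ((Xtest - mu_test)./sigma_test)';
XtestReduce = Ztest * coeff(:, 1:k);     % project the test set with the same loadings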

Non linear function parameter estimation - matlab, lsqnonlin, fzero

I'm having difficulty with a fitting problem. From the errors that I get, I imagine that the boundaries are not defined correctly, and I haven't managed to find a solution. Any help would be very much appreciated.
Alternative methods for the solution of the same problem are also accepted.
Description
I have to estimate the parameters of a non-linear function of the type:
A*y(x) + B*EXP(C*y(x)) + g(x,D) = 0
subject to the parameters PAR = [A,B,C,D] being within the range
LB < PAR < UB
Code
To solve the problem I'm using the Matlab functions lsqnonlin and fzero. The simplified code used is reported below.
The problem is divided into four functions:
parameterEstimation - (a wrapper for the lsqnonlin function)
objectiveFunction_lsq - (the objective function for the parameter estimation)
yFun - (the function returning the value of the variable y)
objectiveFunction_zero - (the objective function of the non-linear equation used to calculate y)
Errors
Running the code on the data, I get this warning
Warning: Length of lower bounds is > length(x); ignoring extra bounds
and this error
Failure in initial user-supplied objective function evaluation.
LSQNONLIN cannot continue
This makes me think that the boundaries are not correctly defined or not correctly passed, but maybe the problem is elsewhere.
function Done = parameterEstimation()
%read inputs
Xmeas = xlsread('filepath','worksheet','range');
Ymeas = xlsread('filepath','worksheet','range');
%inital values and boundary conditions
initialGuess = [1,1,1,1]; %model parameters initial guess
LB = [0,0,0,0]; %model parameters lower boundaries
UB = [2,2,2,2]; %model parameters upper boundaries
%parameter estimation
calcParam = lsqnonlin(@objectiveFunction_lsq_2,initialGuess,LB,UB,[],Xmeas,Ymeas);
Done = calcParam;
function diff = objectiveFunction_lsq_2(PAR,Xmeas,Ymeas)
y_calculated = yFun(PAR,Xmeas);
diff = y_calculated-Ymeas;
function result = yFun(PAR,X)
y_0 = 2;
val = fzero(@(y)objfun_y(y,PAR,X),y_0);
result = val;
function result = objfun_y(y,PAR,X)
A = PAR(1);
B = PAR(2);
A = PAR(3);
C = PAR(4);
D = PAR(5);
val = A*y+B*exp(y*C)+g(D,X);
result = val;
I don't have the optimization toolbox, but are you sure you can pass the constants like that?
I would do this instead:
calcParam = lsqnonlin(@(PAR) objectiveFunction_lsq_2(PAR,Xmeas,Ymeas),initialGuess,LB,UB);
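Separately, note that the parameter indexing in objfun_y does not match the four-element initial guess: A is assigned twice and PAR(5) is read, which by itself would make the initial objective evaluation fail. A sketch of a consistent version, assuming PAR = [A,B,C,D] as in the problem statement (g is whatever function the real code defines):
function result = objfun_y(y,PAR,X)
A = PAR(1);
B = PAR(2);
C = PAR(3);
D = PAR(4);
result = A*y + B*exp(C*y) + g(D,X);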

Matlab Code To Approximate The Exponential Function

Does anyone know how to make the following Matlab code approximate the exponential function more accurately when dealing with large, negative real numbers?
For example, when x = 1 the code works well; when x = -100 it returns an answer of 8.7364e+31 when it should be closer to 3.7201e-44.
The code is as follows:
% Truncated Taylor series for exp(x); x is assumed to be defined beforehand
s=1;
a=1;
y=1;
for k=1:40
a=a/k;    % a = 1/k!
y=y*x;    % y = x^k
s=s+a*y;  % s accumulates the sum of x^k/k!
end
s
Any assistance is appreciated, cheers.
EDIT:
Ok so the question is as follows:
Which mathematical function does this code approximate? (I say the exponential function.) Does it work when x = 1? (Yes.) Unfortunately, using this when x = -100 produces the answer s = 8.7364e+31. Your colleague believes that there is a silly bug in the program, and asks for your assistance. Explain the behaviour carefully and give a simple fix which produces a better result. [You must suggest a modification to the above code, or its use. You must also check your simple fix works.]
So I somewhat understand the problem: with large numbers, when there are 16 (or more) orders of magnitude between terms, precision is lost; but the solution eludes me.
Thanks
EDIT:
So in the end I went with this:
s = 1;
x = -100;
a = 1;
y = 1;
x1 = x/10;   % evaluate the series at x/10 (loop-invariant, so set it once)
for k=1:40
a = a/k;
y = y*x1;
s = s + a*y;
end
s = s^10;    % recover exp(x) = exp(x/10)^10
s
Not sure if it's completely correct, but it returns some good approximations.
exp(-100) = 3.720075976020836e-044
s = 3.722053303838800e-044
After further analysis (and unfortunately after submitting the assignment), I realised that increasing the number of iterations, and thus the number of terms, further improves accuracy. In fact, the following was even more accurate:
s = 1;
x = -100;
a = 1;
y = 1;
x1 = x/200;  % evaluate the series at x/200
for k=1:200
a = a/k;
y = y*x1;
s = s + a*y;
end
s = s^200;   % recover exp(x) = exp(x/200)^200
s
Which gives:
exp(-100) = 3.720075976020836e-044
s = 3.720075976020701e-044
As John points out in a comment, you have an error inside the loop: the y = y*k line (in an earlier revision of your code) does not do what you need. Look more carefully at the terms in the series for exp(x).
Anyway, I assume this is why you have been given this homework assignment, to learn that series like this don't converge very well for large values. Instead, you should consider how to do range reduction.
For example, can you use the identity
exp(x+y) = exp(x)*exp(y)
to your advantage? Suppose you store the value of exp(1) = 2.7182818284590452353...
Now, if I were to ask you to compute the value of exp(1.3), how would you use the above information?
exp(1.3) = exp(1)*exp(0.3)
But we KNOW the value of exp(1) already. In fact, with a little thought, this will let you reduce the range for an exponential down to needing the series to converge rapidly only for abs(x) <= 0.5.
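A minimal sketch of that additive range reduction (expApprox is a hypothetical name; it splits x into an integer part n and a remainder r with abs(r) <= 0.5, runs the series only on r, and multiplies by the stored constant raised to n):
function s = expApprox(x)
e1 = 2.718281828459045;   % stored value of exp(1)
n = round(x);             % integer part, so abs(x - n) <= 0.5
r = x - n;                % small remainder: the series converges rapidly here
s = 1; a = 1; y = 1;
for k = 1:20
    a = a/k;
    y = y*r;
    s = s + a*y;
end
s = s * e1^n;             % undo the reduction: exp(x) = exp(r)*exp(1)^n
end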
Edit: There is a second way one can do range reduction using a variation of the same identity.
exp(x) = exp(x/2)*exp(x/2) = exp(x/2)^2
Thus, suppose you wish to compute the exponential of a large number, perhaps 12.8. Getting this to converge acceptably fast will take many terms in the simple series, and there will be a great deal of subtractive cancellation happening, so you won't get good accuracy anyway. However, if we recognize that
12.8 = 2*6.4 = 2*2*3.2 = ... = 16*0.8
then IF you could efficiently compute the exponential of 0.8, then the desired value is easy to recover, perhaps by repeated squaring.
exp(12.8)
ans =
362217.449611248
a = exp(0.8)
a =
2.22554092849247
a = a*a;
a = a*a;
a = a*a;
a = a*a
a =
362217.449611249
exp(0.8)^16
ans =
362217.449611249
Note that WHENEVER you do range reduction using methods like this, while you may incur numerical problems due to the additional computations necessary, you will usually come out way ahead due to the greatly enhanced convergence of your series.
Why do you think that's the wrong answer? Look at the last term of that sequence, and its size, and tell me why you expect you should have an answer that's close to 0.
My original answer stated that roundoff error was the problem. That will be a problem with this basic approach, but why do you think 40 terms are enough for the appropriate mathematical (as opposed to computer floating-point arithmetic) answer?
100^40 / 40! ~= 10^32.
woodchips has the right idea with range reduction. That's the typical approach people use to implement these kinds of functions very quickly. Once you get that all figured out, you deal with the roundoff errors of alternating series by summing adjacent terms within the loop, stepping with k = 1:2:40 (for instance), as sketched below. That doesn't work here until you use woodchips' idea, because for x = -100 the summands grow for a very long time. You need abs(x) < 1 to guarantee the intermediate terms are shrinking, and thus such a rewrite will work.
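A sketch of that adjacent-term pairing, assuming range reduction has already brought abs(x) below 1:
s = 1; term = 1;                   % term holds x^(k-1)/(k-1)! entering iteration k
for k = 1:2:40
    term_k  = term*x/k;            % x^k / k!
    term_k1 = term_k*x/(k+1);      % x^(k+1) / (k+1)!
    s = s + (term_k + term_k1);    % adjacent terms of opposite sign partly cancel
    term = term_k1;
end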