I am very new to IBM CPLEX, and am using it with Matlab. I was wondering how to set up a custom objective function in CPLEX from Matlab. The objective function is as follows:

minimize_{wj}   0.5*||aj - A*wj||^2 + 0.5*beta*||wj||^2 + lambda*||wj||_1
subject to      wj >= 0, wj(j) = 0

Here aj is a column vector of size 36000 x 1 and A is a sparse matrix of size 36000 x 4503. wj is a column vector of size 4503 x 1 of optimization variables. With only the first term and the wj >= 0 constraint this is a simple cplexlsqnonneglin call, but I would also like to include the two other sum terms (with beta and lambda) and the wj(j) = 0 constraint. Any help in recreating this optimization problem in CPLEX would be much appreciated.
Thanks in advance!
When you add the other terms to the objective, your problem becomes a general quadratic program (QP). Since wj >= 0, we have ||wj||_1 = e'*wj, so we can write your problem as:

minimize_{wj}   0.5*(aj - A*wj)'*(aj - A*wj) + 0.5*beta*wj'*wj + lambda*e'*wj
subject to      wj >= 0, wj(j) = 0

Expanding 0.5*(aj - A*wj)'*(aj - A*wj) = 0.5*aj'*aj - aj'*A*wj + 0.5*wj'*A'*A*wj and collecting the quadratic terms gives the following QP:

minimize_{wj}   0.5*aj'*aj - aj'*A*wj + 0.5*wj'*(A'*A + beta*I)*wj + lambda*e'*wj
subject to      wj >= 0, wj(j) = 0
I can't help you with CPLEX, but you can solve this problem with Gurobi in MATLAB using the following code:
m = 36000;
n = 4503;
A = sprand(m, n, .01);
aj = rand(m, 1);
lambda = 0.1;
beta = 0.4;
j = 300;
model.objcon = 0.5*aj'*aj;              % constant part of the objective
model.obj = -aj'*A + lambda*ones(1,n);  % linear part: -aj'*A*wj + lambda*e'*wj
model.A = sparse(1, n);                 % dummy row (Gurobi requires a constraint matrix)
model.sense = '=';
model.rhs = 0;
model.Q = 0.5*(A'*A + beta*speye(n));   % quadratic part: wj'*Q*wj with Q = 0.5*(A'*A + beta*I)
model.vtype = 'C';                      % continuous variables
model.lb = zeros(n, 1);                 % wj >= 0
model.ub = inf(n,1);
model.ub(j) = 0; % set 0 <= wj(j) <= 0, i.e. wj(j) = 0
params.outputflag = 1;
result = gurobi(model, params);
if strcmp(result.status, 'OPTIMAL')
    wj = result.x(1:n);
end
For more details see the documentation on Gurobi's MATLAB interface:
http://www.gurobi.com/documentation/5.6/reference-manual/matlab_gurobi
Note that you may want to create extra variables and constraints to avoid forming A'*A + beta*I explicitly in the objective. For example, you could create a new variable r and a constraint r = A*wj. Then the quadratic term wj'*(A'*A + beta*I)*wj would become r'*r + beta*wj'*wj. This may help with the numerics.
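To make this concrete, here is a rough sketch of how such a reformulated model could be set up with Gurobi's MATLAB interface (the model2 structure below is my own illustration, not part of the answer above; m, n, A, aj, beta, lambda, j and params are as defined earlier):

% Sketch (my assumption): introduce extra variables r = A*wj so the quadratic
% objective only involves r'*r and wj'*wj, avoiding the product A'*A.
% The stacked variable vector is x = [wj; r].
model2.objcon = 0.5*(aj'*aj);
model2.obj    = [lambda*ones(1, n), -aj'];                 % lambda*e'*wj - aj'*r
model2.Q      = blkdiag(0.5*beta*speye(n), 0.5*speye(m));  % 0.5*beta*wj'*wj + 0.5*r'*r
model2.A      = [A, -speye(m)];                            % enforces A*wj - r = 0
model2.rhs    = zeros(m, 1);
model2.sense  = repmat('=', m, 1);
model2.lb     = [zeros(n, 1); -inf(m, 1)];                 % wj >= 0, r free
model2.ub     = [inf(n, 1); inf(m, 1)];
model2.ub(j)  = 0;                                         % wj(j) = 0
result2 = gurobi(model2, params);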
I am currently involved in a group project where we have to conduct portfolio selection and optimisation. The paper being referenced is given here (specifically pages 5 and 6, equations 7-10):
http://faculty.london.edu/avmiguel/DeMiguel-Nogales-OR.pdf
We are having trouble creating the optimisation problem using M-Portfolios, given below:

min_{w,m} (1/T) * sum_{t=1}^{T} rho(w'*r_t - m)

s.t. w'e = 1 (just a condition saying that all weights add to 1)
So far, this is what we have attempted:
function optPortfolio = portfoliofminconM(returns,theta)
% Compute the inputs of the mean-variance model
mu = mean(returns)';
sigma = cov(returns);
% Inputs for the fmincon function
T = 120;
n = length(mu);
w = theta(1:n);
m = theta((n+1):(2*n));
c = 0.01*ones(1,n);
Aeq = ones(1,(2*n));
beq = 1;
lb = zeros(2,n);
ub = ones(2,n);
x0 = ones(n,2) / n; % Start with the equally-weighted portfolio
options = optimset('Algorithm', 'interior-point', ...
'MaxIter', 1E10, 'MaxFunEvals', 1E10);
% Nested function which is used as the objective function
function objValue = objfunction(w,m)
cRp = w'*(returns - (ones(T,1)*m'))';
objValue = 0;
for i = 1:T
if abs(cRp(i)) <= c;
objValue = objValue + (((cRp(i))^2)/2);
else
objValue = objValue + (c*(abs(cRp(i))-(c/2)));
end
end
The problem starts at our definitions for theta being used as a vector of w and m. We don't know how to use fmincon with two variables in the objective function properly. In addition, the value of the objective function is conditional on another value (as shown in the paper), and this needs to be done over a rolling time window of 120 months for a total period of 264 months (hence the for-loop and if-else).
If any more information is required, I will gladly provide it!
If you can additionally provide an example that deals with a similar problem, please link us to it.
Thank you in advance.
The way you minimize a function of two scalars with fmincon is to write your objective function as a function of a single, two-dimensional vector. For example, you would write f(x,y) = x.^2 + 2*x*y + y.^2 as f(x) = x(1)^2 + 2*x(1)*x(2) + x(2)^2.
More generally, you would write a function of two vectors as a function of a single, large vector. In your case, you could rewrite your objfunction or do a quick hack like:
objfunction_for_fmincon = @(x) objfunction(x(1:n), x(n+1:2*n));
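To make this concrete for the portfolio problem above, here is a rough sketch of my own (it assumes objfunction(w,m) is defined as in the question, with returns the T-by-n returns matrix; the bounds and starting point are illustrative choices, not taken from the paper):

n = size(returns, 2);
objfunction_for_fmincon = @(theta) objfunction(theta(1:n), theta(n+1:2*n));
Aeq = [ones(1,n), zeros(1,n)];        % only the weights w must sum to 1
beq = 1;
lb  = [zeros(n,1); -Inf(n,1)];        % w >= 0; m left unbounded
ub  = [ones(n,1);   Inf(n,1)];
theta0 = [ones(n,1)/n; zeros(n,1)];   % equally-weighted start, m = 0
options = optimset('Algorithm', 'interior-point');
thetaOpt = fmincon(objfunction_for_fmincon, theta0, [], [], Aeq, beq, lb, ub, [], options);
w = thetaOpt(1:n);
m = thetaOpt(n+1:2*n);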
Given the function f(k',w') = ((k'-k)^2 + (w'-w)^2)^(1/2), where k and w are known real parameters, the objective is to find a pair (k',w') such that f(k',w') is minimal under the following constraints:
b(v,s,w') < 10s <=> w' < 10s
b(v,s,w') < a(v,s,k')^2 <=> (w'-10s) - (k'-3)^2/s < 0
q(a(v,s,k'), b(v,s,w')) < s*v^(1/2)
where b(v,s,w') = (v/s)*(w'-10s) and a(v,s,k') = (1/s)*(k'-3)*v^(1/2). In addition, v (= vari) > 0 and s (= skew) < 0 are known parameters. Furthermore, q(a,b) is a root of the quartic polynomial:
(48a^2 + 16b)x^4 + (-40a^3 - 168ab)x^3 + (-45a^4 + 225a^2b + 72b^2)x^2 + (27a^3b - 162ab^2)x + 27b^3.
To be more precise, whenever the quartic has four real roots then q is the second greatest root. If the quartic has two real and two complex roots then q is the greatest real root. The problem is that the algebraic expressions for q are quite monstrous. Ideally, I would like an analytical solution for the above non-linear constrained optimization problem. However, I think that might turn out quite ugly. Therefore, I thought it was better to do it numerically with Matlab by using constrained optimizers such as fmincon.
f = @(x) sqrt((x(1)-hyp_skew)^2+(x(2)-kurt)^2);   % x = [w', k']
A = [1,0];
d = 10*skew;                                      % linear constraint: w' <= 10*skew
Aeq = [];
beq = [];
ub = [];
lb = [];
[x,fval,exitflag] = fmincon(f,[w,k],A,d,Aeq,beq,lb,ub,@(x)quarticcondition2(x,skew,vari),options); % options defined elsewhere
where the (non-linear) constraint function is given by
function [c, ceq] = quarticcondition2(x,skew,vari)
% Non-linear constraints for fmincon; x = [w', k']
av = ((x(2)-3)*sqrt(vari))/skew;
bv = (vari/skew)*(x(1)-10*skew);
% Coefficients of the quartic after dividing by the leading coefficient
A = (-40*av^3-168*av*bv)/(48*av^2+16*bv);
B = (-45*av^4+225*av^2*bv+72*bv^2)/(48*av^2+16*bv);
C = (27*av^3*bv-162*av*bv^2)/(48*av^2+16*bv);
D = (27*bv^3)/(48*av^2+16*bv);
roots_quartic = roots([1,A,B,C,D]);
z = imag(roots_quartic);
in = find(z ~= 0);
if isempty(in)
    % Four real roots: q is the second greatest root (roots sorted ascending)
    r = sort(roots_quartic);
    c2 = r(3)-skew*sqrt(vari);
else
    % Two real and two complex roots: q is the greatest real root
    index = find(z == 0);
    c2 = max(roots_quartic(index))-skew*sqrt(vari);
end
c1 = ((x(2)-3)^2/skew)-(x(1)-10*skew);
c = [c1 c2];
ceq = [];
end
My code works for some initial parameter sets [w,k]. However, finding such an initial parameter set turns out to be quite difficult (since the constraints are hard to handle). I need to run the program for quite a few possible scenarios, hence it would be nice to have some logic in choosing my starting values. I know this is a well-known issue when using optimization solvers. However, is there a good/proper way to find start values?
Thanks!
Cheers
I am trying to sum a function and then attempting to find the root of said function. That is, for example, take:
Consider that I have a matrix X and a vector t of values: X is of size (2*n+1) x (n+1) and t is of size (n+1) x 1.
for j = 1:n+1
    sum = 0;
    for i = 1:2*j+1
        f = @(g) exp(-exp(X(i,j)+g)*(t(j+1)-t(j)));
        sum = sum + f;
    end
    fzero(sum,0)
end
That is, I want to evaluate at
j = 1
f = @(g) exp(-exp(X(1,1)+g)*(t(j+1)-t(j)))
fzero(f,0)
j = 2
f = @(g) exp(-exp(X(1,2)+g)*(t(j+1)-t(j))) + exp(-exp(X(2,2)+g)*(t(j+1)-t(j))) + exp(-exp(X(3,2)+g)*(t(j+1)-t(j)))
fzero(f,0)
j = 3
etc...
However, I have no idea how to actually implement this in practice.
Any help is appreciated!
PS - I do not have the symbolic toolbox in Matlab.
I suggest making use of Matlab's array operations:
zerovec = zeros(1,n+1); % preallocate
for k = 1:n+1
    f = @(y) sum(exp(-exp(X(1:2*k+1,k)+y)*(t(k+1)-t(k))));
    zerovec(k) = fzero(f,0);
end
However, note that the sum of exponentials will never be zero unless the exponent is complex, which fzero will never find, so the question is a bit of a moot point.
Another solution is to write a function:
function total = func(j,g,t,X)
% Sum over i = 0..2*j of the terms for column j+1 (j here is zero-based)
total = 0;
for i = 0:2*j
    f = exp(-exp(X(i+1,j+1)+g)*(t(j+2)-t(j+1)));
    total = total + f;
end
end
Then loop your solver:
for j = 0:n
    fun = @(g) func(j,g,t,X);
    fzero(fun,0)
end
I have to optimize an objective using binary integer linear programming; my objective function is:
Maximize f = (c1 * x1) + (c2 * x2) + (c3 * x3) + ... + (c10000 * x10000)
Subject to some constraints
For solving the problem efficiently I want to use some heuristics. According to one of the heuristics, some variables (xi) have a higher chance of being part of the answer (xi = 1), so my goal is to give priority (preference) to such variables in order to solve the problem faster than the usual way. I know the solution may be sub-optimal, but our main concern is time.
So my questions are:
1. How do I prioritize these variables in the LP model?
2. Can we multiply the coefficients of these variables by a constant (C > 1) to increase their priority, or decrease the priority of other variables by multiplying their coefficients by another constant (D < 1)?
3. If we use the approach of question #2, do we have to alter only the objective function coefficients, or should the constraint coefficients of those variables be altered as well?
It should be noted that, in the approach of question #2, after solving the LP model we roll back any changes to the coefficients according to the solution (i.e. which variables are in the solution).
Thanks in advance
If you know that xi will be part of the solution, you should set it to 1 in the initial point x0 you pass to bintprog. Likewise, any xj that is likely not part of the solution should be set to 0. If the initial point is very close to the solution, this will reduce the time needed to find it.
x = bintprog(f,A,b,Aeq,beq,x0);
Another option is to relax the BILP problem to an LP problem by adding two extra conditions
x <= 1
-x <= 0
and then using the rounded LP solution as the initial point for the BILP problem.
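A rough sketch of that warm start (my own illustration, assuming f, A, b, Aeq, beq are the BILP data and that the older linprog/bintprog interfaces are available):

nvars = numel(f);
lb = zeros(nvars, 1);                        % -x <= 0
ub = ones(nvars, 1);                         %  x <= 1
xLP = linprog(f, A, b, Aeq, beq, lb, ub);    % LP relaxation
x0 = round(xLP);                             % round to a binary starting point
x = bintprog(f, A, b, Aeq, beq, x0);         % warm-started BILP solve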
Here the authors state that bintprog performs well only on small problems. As I use Octave instead of Matlab, I tried the GNU Linear Programming Kit (glpk). I tried to solve the BILP problem from the Matlab documentation, and here is the script:
close all; clear all;
f = [25,35,28,20,40,-10,-20,-40,-18,-36,-72,-11,-22,-44,-9,-18,-36,-10,-20]';
A = zeros(14,19);
A(1,1:19) = [25 35 28 20 40 5 10 20 7 14 28 6 12 24 4 8 16 8 16];
A(2,1) = 1; A(2,6) = -1; A(2,7) = -1; A(2,8) = -1;
A(3,2) = 1; A(3,9) = -1; A(3,10) = -1; A(3,11) = -1;
A(4,3) = 1; A(4,12) = -1; A(4,13) = -1; A(4,14) = -1;
A(5,4) = 1; A(5,15) = -1; A(5,16) = -1; A(5,17) = -1;
A(6,5) = 1; A(6,18) = -1; A(6,19) = -1;
A(7,1) = -5; A(7,6) = 1; A(7,7) = 2; A(7,8) = 4;
A(8,2) = -4; A(8,9) = 1; A(8,10) = 2; A(8,11) = 4;
A(9,3) = -5; A(9,12) = 1; A(9,13) = 2; A(9,14) = 4;
A(10,4) = -7; A(10,15) = 1; A(10,16) = 2; A(10,17) = 4;
A(11,5) = -3; A(11,18) = 1; A(11,19) = 2;
A(12,2) = 1; A(12,5) = 1;
A(13,1) = 1; A(13,2) = -1; A(13,3) = -1;
A(14,3) = -1; A(14,4) = -1; A(14,5) = -1;
b = [125 0 0 0 0 0 0 0 0 0 0 1 0 -2]';
lb = zeros(size(f));
ub = ones(size(f));
ctype = repmat("U" , size(b))'; # inequality constraint
sense = 1; # minimization
param.msglev = 0;
vartype = repmat("C" , size(f)); # continuous variables
tic
for i = 1:10000
    [xopt, fmin, errnum, extra] = glpk (f, A, b, lb, ub, ctype, vartype, sense, param);
end
toc
fprintf('Solution %s with value %f\n', mat2str(xopt), fmin)
vartype = repmat("I" , size(f)); # integer variables
tic
for i = 1:10000
    [xopt, fmin, errnum, extra] = glpk (f, A, b, lb, ub, ctype, vartype, sense, param);
end
toc
fprintf('Solution %s with value %f\n', mat2str(xopt), fmin)
These are the solutions found:
Elapsed time is 7.9 seconds.
Solution [0;0.301587301587301;1;1;0;0;0;0;0;0.603174603174603;0;1;1;0.5;1;1;1;0;0] with value -81.158730
Elapsed time is 11.5 seconds.
Solution [0;0;1;1;0;0;0;0;0;0;0;1;0;1;1;1;1;0;0] with value -70.000000
I had to perform 10000 iterations to make the performance difference visible, as the problem is still quite small. The LP solution is faster compared to the BILP solution, and the two solutions are quite close.
According to CPLEX Performance Tuning for Mixed Integer Programs and Issuing priority orders, we can set priority orders to increase or decrease the priority of some variables in CPLEX. This approach is as follows:
options = cplexoptimset('cplex');
options.mip.ordertype=fsl;
[x,fval,exitflag,output] = cplexbilp(f, Aineq, bineq, Aeq, beq,[],options);
fsl is the priority array for the problem variables.
Because CPLEX can generate a priority order automatically, based on problem-data characteristics, we can leave the prioritization decision to CPLEX as follows:
value branching priority order
===== ========================
0 no automatic priority order will be generated (default)
1 decreasing cost coefficients among the variables
2 increasing bound range among the variables
3 increasing cost per matrix coefficient count among the variables
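For example (my illustration, simply reusing the call from above), letting CPLEX build the order automatically from decreasing cost coefficients would look like:

options = cplexoptimset('cplex');
options.mip.ordertype = 1;   % automatic priority order from decreasing cost coefficients
[x,fval,exitflag,output] = cplexbilp(f, Aineq, bineq, Aeq, beq, [], options);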
After using priorities, my problem is solved; the solution is valid, and convergence is faster than before!
I'm following a Numerical Methods course and I made a small MATLAB script to compute integrals using the trapezoidal method. However, my script uses a for loop and my friend told me I'm doing something wrong if I use a for loop in Matlab. Is there a way to convert this script to a more Matlab-friendly one?
%Number of points to use
N = 4;
%Integration interval
a = 0;
b = 0.5;
%Width of the integration segments
h = (b-a) / N;
F = exp(a);
for i = 1:N-1
    F = F + 2*exp(a+i*h);
end
F = F + exp(b);
F = h/2*F
Vectorization is important for speed and clarity, but so is using built-in functions whenever possible. Matlab has a built-in function for trapezoidal numerical integration called trapz. Here is an example:
x = 0:.125:.5
y = exp(x)
F = trapz(x,y)
It is recommended to vectorize your code.
%Number of points to use
N = 4;
%Integration interval
a = 0;
b = 0.5;
%Width of the integration segments
h = (b-a) / N;
x = 1:1:N-1;
F = h/2*(exp(a) + sum(2*exp(a+x*h)) + exp(b));
However, I've read that Matlab is no longer slow at for loops.
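As a quick sanity check (my addition, not part of either answer), both versions can be compared against the exact value of the integral of exp(x) over [0, 0.5]:

exact = exp(0.5) - exp(0);   % = e^0.5 - 1, approximately 0.6487
err = abs(F - exact)         % roughly 8.5e-4 with N = 4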