Quadratically constrained quadratic programming (QCQP) in MATLAB - matlab

Recently I have run into a quadratically constrained quadratic programming (QCQP) problem in my research. I have found something useful in the MATLAB Optimization Toolbox, namely the fmincon function (general nonlinear optimization with nonlinear constraints); its interior-point algorithm solves my problem, which has 8 variables, 1 quadratic equality constraint and 1 quadratic inequality constraint. fmincon, with or without Hessian and gradient information, provides quite good solutions. The only thing I am not satisfied with is efficiency, since I need to call it something like a million times in my main code. I would like to find something more specific to QCQP, so that efficiency might improve. I have found a lot of information on Netlib and Wikipedia, but I have no basis for judging which one I should use, and it would be tedious to try them one by one, so I would really appreciate some suggestions. By the way, I mostly program in MATLAB for this problem, but suitable C/Fortran code would also be useful.
-Yan
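For concreteness, the fmincon setup described above might look like the following minimal sketch; the problem data H, f, Qi, qi, ri, Qe, qe, re and the starting point are placeholders, not the actual research problem:
n = 8;
H = eye(n); f = ones(n,1); % objective: 0.5*x'*H*x + f'*x
Qi = eye(n); qi = zeros(n,1); ri = -2; % inequality: 0.5*x'*Qi*x + qi'*x + ri <= 0
Qe = eye(n); qe = zeros(n,1); re = -1; % equality: 0.5*x'*Qe*x + qe'*x + re = 0
obj = @(x) 0.5*x'*H*x + f'*x;
con = @(x) deal(0.5*x'*Qi*x + qi'*x + ri, ... % c(x) <= 0
                0.5*x'*Qe*x + qe'*x + re);    % ceq(x) = 0
opts = optimoptions('fmincon','Algorithm','interior-point','Display','off');
x = fmincon(obj, 0.5*ones(n,1), [],[],[],[],[],[], con, opts);
On recent MATLAB versions, supplying derivatives through the SpecifyObjectiveGradient and SpecifyConstraintGradient options is usually the first efficiency lever to try before switching solvers.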

An alternative is to use CVX, available here, which works nicely for QCQPs (amongst many other types of problems). Here is a code snippet which solves a QCQP:
close all; clear; clc
n = 10;
H = rand(n); H = H*H'; % make symmetric positive semidefinite
f = -rand(n,1);
Q = rand(n); Q = Q*Q'; % make symmetric positive semidefinite
g = -rand(n,1);
cvx_begin
    variable x(n)
    minimize(0.5*x'*H*x + f'*x)
    subject to
        0.5*x'*Q*x + g'*x <= 0
        x >= 0
cvx_end
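Once cvx_end executes, the declared variable x holds the computed optimal point and cvx_optval the optimal objective value. Note, though, that CVX re-parses the model on every solve, so its per-call overhead is substantial; for a routine called a million times it is mainly useful for prototyping and for validating a faster custom solver.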

Related

Normalization of integrand for numerical integration in Matlab

First off, I'm not sure if this is the best place to post this, but since there isn't a dedicated Matlab community I'm posting this here.
To give a little background, I'm currently prototyping a plasma physics simulation which involves triple integration. The innermost integral can be done analytically, but for the outer two this is just impossible. I always thought it best to work with values close to unity, and thus normalized my innermost integral such that it is unitless and usually takes values close to unity. However, compared to an earlier version of the code where this innermost integral evaluated to values of the order of 1e-50, the numerical double integration, which uses the native Matlab function integral2 with a target relative tolerance of 1e-6, now requires around 1000 times more function evaluations to converge. As a consequence my simulation now takes roughly 12 hours instead of the previous 20 minutes.
Question
So my questions are:
Is it possible that the faster convergence in the older version is simply due to the additional evaluations vanishing as roundoff errors, and that the results thus aren't trustworthy even though they pass the 1e-6 relative tolerance? In the few tests I ran, the results seemed to be the same in both versions though.
What is the best practice concerning the normalization of the integrand for numerical integration?
Is there some way to improve the convergence of numerical integrals, especially if the integrand might have singularities?
I'm thankful for any help or insight, especially since I don't fully understand the inner workings of Matlab's integral2 function and what should be paid attention to when using it.
If I didn't know any better, I would conclude that an integrand of the order of 1e-50 works way better than one of, say, the order of 1e+0, but that doesn't seem to make sense. Is there some numerical reason why this could actually be the case?
TL;DR: when I multiply the function that Matlab's integral2 integrates numerically by a factor of 1e-50, and then multiply the result by 1e+50, the integral gives the same result but converges way faster, and I don't understand why.
edit:
I prepared a short script to illustrate the problem. Here the relative difference between the two results was of the order of 1e-4 and thus below the actual relative tolerance of integral2. In my original problem however the difference was even smaller.
fun = @(x,y,l) l./(sqrt(1-x.*cos(y)).^5).*((1-x).*sin(y));
x = linspace(0,1,101);
y = linspace(0,pi,101).';
figure
surf(x,y,fun(x,y,1));
l = linspace(0,1,101); l=l(2:end);
v1 = zeros(1,100); v2 = v1;
tval = tic;
for i=1:100
fun1 = @(x,y) fun(x,y,l(i));
v1(i) = integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t1 = toc(tval)
tval = tic;
for i=1:100
fun1 = @(x,y) 1e-50*fun(x,y,l(i));
v2(i) = 1e+50*integral2(fun1,0,1,0,pi,'RelTol',1e-6);
end
t2 = toc(tval)
figure
hold all;
plot(l,v1);
plot(l,v2);
plot(l,abs((v2-v1)./v1));
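One numerical point worth checking (an educated guess based on integral2's documented stopping criterion, not a verified diagnosis for this problem): integral2 stops refining once its error estimate drops below max(AbsTol, RelTol*|Q|), and the default AbsTol is 1e-10. After the 1e-50 scaling, the integral itself is of order 1e-50, so the absolute tolerance is satisfied almost immediately and the relative tolerance never becomes binding; the fast-converging scaled result may therefore not actually be accurate to 1e-6 in a relative sense. Scaling AbsTol along with the integrand should make the two runs comparable:
v3 = zeros(1,100);
for i=1:100
fun1 = @(x,y) 1e-50*fun(x,y,l(i));
% Scale the absolute tolerance by the same factor as the integrand so that
% the relative tolerance remains the binding stopping criterion:
v3(i) = 1e+50*integral2(fun1,0,1,0,pi,'RelTol',1e-6,'AbsTol',1e-60);
end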

Matlab equivalent to Mathematica's FindInstance

I do just about everything in Matlab but I have yet to work out a good way to replicate Mathematica's FindInstance function in Matlab. As an example, with Mathematica, I can enter:
FindInstance[x + y == 1 && x > 0 && y > 0, {x, y}]
And it will give me:
{{x -> 1/2, y -> 1/2}}
When no solution exists, it will give me an empty Out. I use this often in my work to check whether or not a solution to a system of inequalities exists -- I don't really care about a particular solution.
It seems like there should be a way to replicate this in Matlab with solve. There are sections in the help file on solving a set of inequalities for a parametrized solution with conditions. There's another section on spitting out just one solution using PrincipalValue, but this seems to just select from a finite solution set, rather than coming up with one that meets the parameters.
Can anybody come up with a way to replicate the FindInstance functionality in Matlab?
Building on what jlandercy said, you can certainly use linprog, which is MATLAB's linear programming solver. A linear program in the MATLAB universe can be formulated like so:
min f'*x subject to A*x <= b, Aeq*x = beq, lb <= x <= ub
You seek a solution x in R^n which minimizes the objective function f'*x subject to a set of inequality constraints, equality constraints, and lower and upper bounds on each component of x. Because you want to find the minimum possible value that satisfies the constraints given, what you're really after is:
min x + y subject to x + y = 1, x > 0, y > 0
Because MATLAB only supports "less than or equal" inequalities, you'll need to take the negative of the first two constraints. In addition, MATLAB doesn't support strict inequalities, so you'll have to approximate x > 0 and y > 0 by requiring each variable to be at least some small positive threshold, say epsilon = 1e-4. Therefore, with the above, your formulation is now:
min x + y subject to -x <= -1e-4, -y <= -1e-4, x + y = 1
Note that we don't need any upper or lower bounds, as those conditions are already captured by the equality and inequality constraints. All you have to do now is plug this problem into linprog. linprog accepts syntax in the following way:
x = linprog(f,A,b,Aeq,beq);
f is a vector of coefficients for the objective function, A is a matrix of coefficients for the inequality constraints, b is a vector for the right-hand side of each inequality constraint, and Aeq, beq are the same but for the equality constraints. x is the solution to the linear programming problem formulated. If we reformulate your problem into this matrix form, we now get:
f = [1; 1], A = [-1 0; 0 -1], b = [-1e-4; -1e-4], Aeq = [1 1], beq = 1
With respect to the linear programming formulation, we can now see what each variable in the MATLAB universe needs to be. Therefore, in MATLAB syntax, each variable becomes:
f = [1; 1];
A = [-1 0; 0 -1];
b = [-1e-4; -1e-4];
Aeq = [1 1];
beq = 1;
As such:
x = linprog(f, A, b, Aeq, beq);
We get:
Optimization terminated.
x =
0.5000
0.5000
If linear programming is not what you're looking for, consider looking at MATLAB's MuPAD interface: http://www.mathworks.com/help/symbolic/mupad_ug/solve-algebraic-equations-and-inequalities.html - This more or less mimics what you see in Mathematica if you're more comfortable with that.
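If a symbolic route is acceptable, note that with the Symbolic Math Toolbox the solve function itself accepts inequalities alongside equations and returns an empty result when nothing satisfies them, which is quite close to FindInstance. A minimal sketch (the exact instance returned may differ from what the comment shows):
syms x y
S = solve([x + y == 1, x > 0, y > 0], [x, y]); % one particular solution, if any
if isempty(S.x)
disp('no solution exists')
else
[S.x, S.y] % a particular instance, e.g. x = 1/2, y = 1/2
end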
Good luck!
Matlab is not a symbolic solver the way Mathematica is (symbolic functionality requires the Symbolic Math Toolbox), so you will generally get numeric approximations rather than exact solutions. Anyway, if you are about to solve a linear program (simplex) such as the one in your example, you should use the linprog function.

What is the fastest method for solving exp(ax)-ax+c=0 for x in matlab

What is the least computational time consuming way to solve in Matlab the equation:
exp(ax)-ax+c=0
where a and c are constants and x is the value I'm trying to find?
Currently I am using the built-in solver function, and I know the solution is single-valued, but it is just taking longer than I would like.
Just wanting something to run more quickly is insufficient for that to happen.
And, sorry, but if fzero is not fast enough then you won't do much better for a general root finding tool.
If you aren't using fzero, then why not? After all, that IS the built-in solver you did not name. (BE EXPLICIT! Otherwise we must guess.) Perhaps you are using solve, from the Symbolic Math Toolbox. It will be slower, since it is a symbolic tool.
Having said the above, I might point out that you might be able to improve by recognizing that this is really a problem with a single parameter, c. That is, transform the problem to solving
exp(y) - y + c = 0
where
y = ax
Once you know the value of y, divide by a to get x.
Of course, this way of looking at the problem makes it obvious that you have made an incorrect statement, that the solution is single-valued. There are TWO solutions for any value of c less than -1. When c = -1, the solution is unique, and for c greater than -1, no solutions exist in real numbers. (If you allow complex results, then there will be solutions there too.)
So if you MUST solve the above problem frequently and fzero was inadequate, then I would consider a spline model, where I had precomputed solutions to the problem for a sufficient number of distinct values of c. Interpolate that spline model to get a predicted value of y for any c.
If I needed more accuracy, I might take a single Newton step from that point.
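A rough sketch of that spline-plus-Newton idea (the grid, the range of c, and the choice of the negative branch are illustrative assumptions):
% Precompute negative-branch solutions y(c) of exp(y) - y + c = 0 on a grid.
cgrid = linspace(-10, -1.01, 200);
ygrid = zeros(size(cgrid));
for k = 1:numel(cgrid)
g = @(y) exp(y) - y + cgrid(k);
ygrid(k) = fzero(g, [-30, 0]); % bracket is valid here: g(-30) > 0 and g(0) = 1 + c < 0
end
pp = spline(cgrid, ygrid); % spline model of y as a function of c
% Fast evaluation for any c in the precomputed range, plus one Newton polish:
c = -2.5;
y = ppval(pp, c);
y = y - (exp(y) - y + c)/(exp(y) - 1); % single Newton step on exp(y) - y + c = 0
% x = y/a; % recover x once the problem's constant a is known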
In the event that you can use the Lambert W function, then solve actually does give us a solution for the general problem. (As you see, I am just guessing what you are trying to solve this with, and what are your goals. Explicit questions help the person trying to help you.)
solve('exp(y) - y + c')
ans =
c - lambertw(0, -exp(c))
The zero first argument to lambertw yields the negative solution. In fact, we can use lambertw to give us both the positive and negative real solutions for any c no larger than -1.
X = @(c) c - lambertw([0 -1],-exp(c));
X(-1.1)
ans =
-0.48318 0.41622
X(-2)
ans =
-1.8414 1.1462
Solving your equation symbolically:
syms a c x;
fx0 = solve(exp(a*x)-a*x+c==0,x)
which results in
fx0 =
(c - lambertw(0, -exp(c)))/a
As @woodchips pointed out, the Lambert W function has two primary branches, W0 and W−1. The solution given is with respect to the upper (or principal) branch, denoted W0; your equation actually has an infinite number of complex solutions, one for each branch Wk (the W0 and W−1 solutions are real if c is in [−∞, −1]). In Matlab, lambertw is only implemented for symbolic inputs and is thus a very slow method of solving your equation if you're interested in numerical (double-precision) solutions.
If you wish to solve such equations numerically in an efficient manner, you might look at Corless, et al. 1996. But as long as your parameter c is in [−∞, −1], i.e., -exp(c) is in [−1/e, 0], and you're interested in the W0 branch, you can use the Matlab code that I wrote to answer a similar question at Math.StackExchange. This code should be much, much more efficient than a naïve approach with fzero.
If your values of c are not in [−∞, −1], or you want the solution corresponding to a different branch, then your solution may be complex-valued and you won't be able to use the simple code I linked to above. In that case, you can implement the function more fully by reading the Corless, et al. 1996 paper, or you can try converting the Lambert W to a Wright ω function: W0(z) = ω(log(z)), W−1(z) = ω(log(z)−2πi). In your case, using Matlab's wrightOmega, the W0 branch corresponds to:
fx0 =
(c - wrightOmega(log(-exp(c))))/a
and the W−1 branch to:
fxm1 =
(c - wrightOmega(log(-exp(c))-2*sym(pi)*1i))/a
If c is real, then the above reduces to
fx0 =
(c - wrightOmega(c+sym(pi)*1i))/a
and
fxm1 =
(c - wrightOmega(c-sym(pi)*1i))/a
Matlab's wrightOmega function is also symbolic only, but I have written a double precision implementation (based on Lawrence, et al. 2012) that you can find on my GitHub here and that is 3+ orders of magnitude faster than evaluating the function symbolically. As your problem is technically in terms of a Lambert W, it may be more efficient, and possibly more numerically accurate, to implement that more complicated function for the regime of interest (this is due to the log transformation and the extra evaluation of a complex log). But feel free to test.
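For example, the real-c expressions above can be evaluated numerically with double() (illustrative constants; evaluation goes through the symbolic engine and is therefore slow):
a = 2; c = -2; % illustrative constants
fx0 = double((c - wrightOmega(sym(c) + sym(pi)*1i))/a) % W0 branch: -1.8414/2 = -0.9207
fxm1 = double((c - wrightOmega(sym(c) - sym(pi)*1i))/a) % W-1 branch: 1.1462/2 = 0.5731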

MATLAB: fast and memory efficient solution of a particular linear program

I have a little programming experience, so I'm pretty sure I didn't code the problem in the optimal way, so I would be happy to hear any hints.
I have two parameters: the dimension of the problem n and an N x N matrix of constraints B, where N = 2n. In my case B is symmetric and has only positive values. I need to solve the following problem:
maximize (1/n)*(x_1 + ... + x_n - x_(n+1) - ... - x_(2n)) subject to x_i - x_j <= B(i,j) for all i ~= j
That is, I need to maximize a certain average of the distances subject to constraints on pairwise distances given by B(i,j).
The way I'm doing it now is an implementation of linprog(-f,A,b) where
f = ones([1,n])/n;
f = [f -f]
and
b = reshape(B',numel(B),[])
and A is defined as follows
A = zeros([N^2,N]);
for i = 1:N
for j = 1:N
if i ~= j
A((i-1)*N + j,i) = 1;
A((i-1)*N + j,j) = -1;
end
end
end
However, when n = 500, even the simple construction of A takes quite some time, not to mention how long the solution of the linear program takes. Any hints are highly appreciated, and please feel free to retag.
First of all, try constructing A like so:
AI = eye(N);
AV = ones(N, 1);
A = kron(AI, AV) - kron(AV, AI);
I think it should run at least an order of magnitude faster than the way you're creating it.
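Since the question also mentions memory, a sparse variant of the same construction (a straightforward adaptation; kron returns a sparse result whenever either input is sparse) avoids storing the N^2-by-N matrix densely:
AI = speye(N); % sparse identity
AV = ones(N, 1);
A = kron(AI, AV) - kron(AV, AI); % sparse N^2-by-N constraint matrix
linprog accepts sparse constraint matrices, so nothing downstream should need to change.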
In addition to creating your problem matrix in a more efficient way, you may want to look into using glpk with the glpkmex interface for MATLAB. I've found that my solution times can decrease substantially. You may see another order of magnitude decrease depending on the problem size.
If you are an academic you can get either CPLEX or Gurobi licenses for free, which should net you further decreases in solution time without a lot of fiddling around with solver parameters. This may be necessary with a problem of the size you describe.

Doing a PCA using an optimization in Matlab

I'd like to find the principal components of a data matrix X in Matlab by solving the optimization problem min||X-XBB'||, where the norm is the Frobenius norm, and B is an orthonormal matrix. I'm wondering if anyone could tell me how to do that. Ideally, I'd like to be able to do this using the optimization toolbox. I know how to find the principal components using other methods. My goal is to understand how to set up and solve an optimization problem which has a matrix as the answer. I'd very much appreciate any suggestions or comments.
Thanks!
MJ
The thing about Optimization is that there are different methods to solve a problem, some of which can require extensive computation.
Your solution, given the constraints for B, is to use fmincon. Start by creating a file for the non-linear constraints:
function [c,ceq] = nonLinCon(x)
c = []; % no nonlinear inequality constraints
ceq = norm(x'*x - eye(size(x)),'fro'); % zero exactly when B is orthonormal
then call the routine:
B = fmincon(@(B) norm(X - X*B*B','fro'),B0,[],[],[],[],[],[],@nonLinCon)
with B0 being a good guess on what the answer will be.
Also, you need to understand that this algorithm tries to find a local minimum, which may not be the solution you ultimately want. For instance:
X = randn(1,2)
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
0.4904 0.8719
0.8708 -0.4909
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
0.9864 -0.1646
0.1646 0.9864
So be careful when using these methods, and try to select a good starting point.
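One easy way to get a starting point that already satisfies the orthonormality constraint (a small addition, assuming X has as many columns as B has rows) is to orthonormalize a random matrix:
B0 = orth(randn(size(X,2))); % random orthonormal matrix, so nonLinCon(B0) is already zero
B = fmincon(@(B) norm(X - X*B*B','fro'), B0, [],[],[],[],[],[], @nonLinCon);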
The Statistics toolbox has a built-in function 'princomp' that does PCA. If you want to learn (in general, without the optimization toolbox) how to create your own code to do PCA, this site is a good resource.
Since you've specifically mentioned wanting to use the Optimization Toolbox and to set this up as an optimization problem, there is a very well-trusted third-party package from Stanford University known as CVX that can solve the optimization problem you are referring to; see this site.
Do you have the optimization toolbox? The documentation is really good, just try one of their examples: http://www.mathworks.com/help/toolbox/optim/ug/brg0p3g-1.html.
But in general an optimization call looks like this:
[OptimizedMatrix, OptimizedObjectiveFunction] = optimize(@(MatrixToOptimize) MyObjectiveFunction(MatrixToOptimize), InitialConditionsMatrix, ...optional constraints and options...);
You must create MyObjectiveFunction() yourself, it must take the Matrix you want to optimize as an input and output a scalar value indicating the cost of the current input Matrix. Most of the optimizers will try to minimise this cost. Note that the cost must be a scalar.
fmincon() is a good place to start; once you are used to the toolbox, choose a more specific optimization algorithm for your problem if you can.
To optimize a matrix rather than a vector, reshape the matrix to a vector, pass this vector to your objective function, and then reshape it back to the matrix within your objective function.
For example say you are trying to optimize the 3 x 3 matrix M. You have defined objective function MyObjectiveFunction(InputVector). Pass M as a vector:
MyObjectiveFunction(M(:));
And within the MyObjectiveFunction you must reshape M (if necessary) to be a matrix again:
function cost = MyObjectiveFunction(InputVector)
InputMatrix = reshape(InputVector, [3 3]);
% Code that performs matrix operations on InputMatrix to produce a scalar cost;
% as a runnable placeholder, use the Frobenius norm:
cost = norm(InputMatrix, 'fro');
end
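Putting the pieces together, a call might look like the following sketch (using fminsearch for brevity, since the placeholder objective is unconstrained; the 3 x 3 size and starting matrix are illustrative):
M = randn(3); % initial guess, as a matrix
vOpt = fminsearch(@MyObjectiveFunction, M(:)); % optimize over the vectorized matrix
OptimizedMatrix = reshape(vOpt, [3 3]); % reshape the solution back into matrix form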