how to enforce positive constraints in cvx? - matlab

I have a simple CVX program, where two variables are:
variable A(n, n)
variable D(n, N)
I learned from here:
https://mycourses.aalto.fi/pluginfile.php/1612433/mod_resource/content/1/lec06.pdf
that adding the constraints A >= 0 (and possibly D >= 0) will not force all elements of A to be positive, but will instead constrain A to be positive semidefinite.
How do I enforce instead for all elements of A to be positive (and similar for D)?
I am using Matlab.
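For what it's worth, in CVX's default mode an inequality between a matrix expression and a scalar is already interpreted elementwise; it is only in SDP mode (cvx_begin sdp) that A >= 0 on a symmetric matrix variable is read as a semidefinite constraint. A minimal sketch of forcing elementwise nonnegativity, assuming the rest of the program (objective, data) is filled in:

```matlab
cvx_begin
    % The "nonnegative" keyword constrains every element, in any mode.
    variable A(n, n) nonnegative
    variable D(n, N) nonnegative
    % ... objective and remaining constraints go here ...
    % Equivalent alternative inside the constraint list:
    % A(:) >= 0;   % vectorized, hence elementwise even in sdp mode
cvx_end
```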

Related

Smallest eigenvalue for large nearly singular matrix

In Matlab I have a real and symmetric n x n matrix A, where n > 6000. Even though A is positive definite, it is close to singular. A goes from being positive definite to singular to indefinite as a particular variable is changed, and I need to determine when A becomes singular. I don't trust the determinants, so I am looking at the eigenvalues instead; but I don't have the memory (or time) to compute all n eigenvalues, and I am only interested in the smallest one - in particular in when it changes sign from positive to negative. I've tried
D = eigs(A,1,'smallestabs')
by which I lose the sign of the eigenvalue, and by
D = eigs(A,1,'smallestreal')
Matlab cannot get the lowest eigenvalue to converge. Then I've tried defining a shift value like
for i = 1:10
    if i == 1
        D(i) = eigs(A,1,0)
    else
        D(i) = eigs(A,1,D(i-1))
    end
end
where I search in the range around the last lowest eigenvalue. However, the eigenvalues seem to behave oddly, and I am not sure I actually find the true lowest one.
So, any ideas on how to either
1) reliably find the smallest eigenvalue with 'eigs', or
2) otherwise determine when A becomes singular (as the variable in A changes)
are highly appreciated!
Solution
I seem to have solved my particular problem. Matlab's chol command can return a second output p, which is zero if the matrix is positive definite. Thus, performing
[~,p] = chol(A)
in my case determines the transition from positive definite to not positive definite (meaning first singular, then indefinite), and it is also computationally very efficient. The documentation for chol even recommends it over eigs for checking positive definiteness. However, there seems to be some confusion about the result when the matrix is only positive semi-definite, so be careful if that is your case.
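To illustrate the approach, here is a minimal sketch of scanning the variable and using the second output of chol to flag the transition. The matrix family B - t*eye(n) and the parameter range are hypothetical stand-ins for the actual parameter dependence:

```matlab
n = 500;
B = gallery('lehmer', n);           % hypothetical symmetric positive definite matrix
tvals = linspace(0, 0.01, 100);     % hypothetical range of the parameter
for k = 1:numel(tvals)
    A = B - tvals(k)*eye(n);        % stand-in for the real A(t)
    [~, p] = chol(A);               % p == 0  <=>  A is positive definite
    if p > 0
        fprintf('A stops being positive definite near t = %g\n', tvals(k));
        break
    end
end
```

The scan only needs one Cholesky attempt per parameter value, which is why this is so much cheaper than computing eigenvalues at each step.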
Alternative solutions
I've come across several possible solutions which I'd like to state:
Determinant:
For a positive definite matrix the determinant is positive, while for an indefinite matrix it may be negative - this could indicate the transition. Generally, though, determinants of large, nearly singular matrices are unreliable and not recommended.
Eigenvalues: For a positive definite matrix all eigenvalues are positive (a real symmetric matrix has only real eigenvalues). If at least one eigenvalue is zero the matrix is singular, and if one is negative while the rest are positive it is indefinite. Detecting the sign change of the lowest eigenvalue therefore indicates the point where the matrix becomes singular. In Matlab the lowest eigenvalue may be found by
D = eigs(A,1,'smallestreal')
However, in my case Matlab couldn't make this converge. Alternatively, you can try searching around zero:
D = eigs(A,1,0)
This, however, only finds the eigenvalue closest to zero, and even with a loop like the one in my original question above, you are not guaranteed to actually find the lowest one. The precision of the eigenvalues of a nearly singular matrix also seems to be low in some cases.
Condition number: Matlab's cond returns the condition number of the matrix via
C = cond(A)
which is the ratio of the largest singular value to the smallest, not of the eigenvalues. Since singular values are always nonnegative, the condition number never changes sign - which explains why this didn't work for me: I got only positive condition numbers even though I had negative eigenvalues. A very large condition number does, however, indicate that the matrix is close to singular.

How to define strict inequality constraints in the MATLAB Optimization Toolbox?

As mentioned, in MATLAB R2016 the Optimization Toolbox constraints have the form A*x ≤ b. How can I define something like A*x < b in the constraints?
The polyhedron {x : A*x < b} is not a closed set, so the max/min of a function over this set may fail to be attained in it; but the supremum/infimum always exists, and for a linear (in fact, any convex) objective function it equals the max/min over {x : A*x ≤ b} - see the Weierstrass extreme value theorem. One option is to set some tolerance t, optimize over A*x ≤ b - t, and use sensitivity analysis to see where the solution goes as t -> 0.
As serge_k said, if you have a strict inequality constraint, you can represent it as A*x <= b - t to force at least t of separation. There are situations where this reasonably comes up (e.g. support vector machines solve a'*x + b >= 1 and a'*x + b <= -1 instead of a'*x + b > 0 and a'*x + b < 0).
That said, the vast majority of the time, strict vs. non-strict inequalities really shouldn't matter. If your constraint is A*x < b and A*x <= b won't do, you may be in the land of pure math rather than numerical computing: floating-point operations aren't that precise!
There aren't many plausible, real-world situations where A*x - b = -10^-99999 is wonderful but A*x - b = 0 is 100% wrong.
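A minimal sketch of the tolerance idea with linprog, using hypothetical data and shrinking t to see how the solution behaves:

```matlab
% Approximate the open constraints x1 < 1, x2 < 1 by A*x <= b - t.
f = [-1; -1];                       % hypothetical objective: maximize x1 + x2
A = [1 0; 0 1];  b = [1; 1];        % hypothetical constraints x1 < 1, x2 < 1
for t = [1e-2, 1e-4, 1e-6]
    x = linprog(f, A, b - t);
    fprintf('t = %g: x = (%g, %g)\n', t, x(1), x(2));
end
```

As t shrinks, the solution approaches the boundary of the closed set {x : A*x ≤ b}, which is exactly the sensitivity-analysis picture described above.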

The right package/software for non-linear optimization with semidefinite constraints

I am struggling to solve an optimization problem, numerically, of the following (generic) form.
minimize F(x)
such that:
___(1): 0 < x < 1
___(2): M(x) >= 0.
where M(x) is a matrix whose elements are quadratic functions of x. The last constraint means that M(x) must be a positive semidefinite matrix. Furthermore, F(x) is a callable function. For the more curious, here is a similar minimum working example.
I have tried a few options, but to no success.
PICOS, CVXPY and CVX -- In the first two cases, I cannot find a way of encoding a minimax problem such as mine. In the third, which is implemented in MATLAB, the matrices involved in a semidefinite constraint must be affine, so my problem falls outside this criterion.
fmincon -- How can we encode a matrix positivity constraint? One way is to compute the eigenvalues of M(x) analytically and constrain each one to be positive, but the analytic expressions for the eigenvalues can be horrendous.
MOSEK -- The objective function must be expressible in a standard form. I cannot find an example of a user-defined objective function.
scipy.optimize -- Along with the objective function and the constraints, it is necessary to provide their derivatives as well. In my case that is fine for the objective function, but expressing the matrix positivity constraint (and its derivative) through analytic eigenvalue expressions can be very tedious.
My apologies for not providing a MWE to illustrate my attempts with each of the above packages/softwares.
Can anyone please suggest a package/software which could be useful to me in solving my optimization problem?
Have a look at a nonlinear optimization package with box constraints, where different types of constraints may be encoded via penalty or barrier techniques.
Look at the following URL
merlin.cs.uoi.gr
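Regarding the fmincon attempt above: rather than deriving the eigenvalues analytically, one workaround is to impose M(x) >= 0 through the numerically computed smallest eigenvalue inside the nonlinear constraint function. A sketch, where F and M are hypothetical stand-ins for the actual objective and matrix map (note that min(eig(·)) is nonsmooth, so derivative-based solvers can struggle near the constraint boundary):

```matlab
F = @(x) (x(1) - 0.3)^2 + (x(2) - 0.7)^2;      % hypothetical objective
M = @(x) [1, x(1); x(1), x(2)];                % hypothetical M(x), elements polynomial in x
% fmincon expects [c, ceq] = nonlcon(x) with c(x) <= 0 at feasible points,
% so c(x) = -min(eig(M(x))) enforces M(x) positive semidefinite.
nonlcon = @(x) deal(-min(eig(M(x))), []);
lb = zeros(2, 1);  ub = ones(2, 1);            % the box 0 <= x <= 1 (closed here)
x0 = [0.5; 0.5];
x = fmincon(F, x0, [], [], [], [], lb, ub, nonlcon);
```

This trades analytic tractability for a numerical eigenvalue computation at each iterate, which is usually cheap for small matrices.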

Matlab equivalent to Mathematica's FindInstance

I do just about everything in Matlab but I have yet to work out a good way to replicate Mathematica's FindInstance function in Matlab. As an example, with Mathematica, I can enter:
FindInstance[x + y == 1 && x > 0 && y > 0, {x, y}]
And it will give me:
{{x -> 1/2, y -> 1/2}}
When no solution exists, it will give me an empty Out. I use this often in my work to check whether or not a solution to a system of inequalities exists -- I don't really care about a particular solution.
It seems like there should be a way to replicate this in Matlab with Solve. There are sections in the help file on solving a set of inequalities for a parametrized solution with conditions. There's another section on spitting out just one solution using PrincipalValue, but this seems to just select from a finite solution set, rather than coming up with one that meets the parameters.
Can anybody come up with a way to replicate the FindInstance functionality in Matlab?
Building on what jlandercy said, you can certainly use MATLAB's linprog function, which is MATLAB's linear programming solver. A linear program in the MATLAB universe can be formulated like so:
You seek a solution x in R^n which minimizes the objective function f^T*x, subject to a set of inequality constraints A*x ≤ b, equality constraints Aeq*x = beq, and lower/upper bounds on each component of x. Because you want to find any point that satisfies the given conditions, what you're really after is:
minimize x + y, subject to x + y = 1, x > 0, y > 0.
Because MATLAB only supports inequalities of the ≤ type, you'll need to take the negative of the first two constraints. In addition, MATLAB doesn't support strict inequalities, so you'll instead have to enforce that each variable is greater than some small positive threshold epsilon, say 1e-4. Therefore, the formulation is now:
minimize x + y, subject to -x ≤ -1e-4, -y ≤ -1e-4, x + y = 1.
Note that we don't have any upper or lower bounds as those conditions are already satisfied in the equality and inequality constraints. All you have to do now is plug this problem into linprog. linprog accepts syntax in the following way:
x = linprog(f,A,b,Aeq,beq);
f is a vector of coefficients for the objective function, A is the matrix of inequality-constraint coefficients, b is the vector of right-hand sides of the inequality constraints, and Aeq, beq are the same but for the equality constraints. x is the solution of the linear programming problem. Writing your problem in this matrix form, each variable in MATLAB syntax becomes:
f = [1; 1];
A = [-1 0; 0 -1];
b = [-1e-4; -1e-4];
Aeq = [1 1];
beq = 1;
As such:
x = linprog(f, A, b, Aeq, beq);
We get:
Optimization terminated.
x =
0.5000
0.5000
If linear programming is not what you're looking for, consider looking at MATLAB's MuPAD interface: http://www.mathworks.com/help/symbolic/mupad_ug/solve-algebraic-equations-and-inequalities.html - This more or less mimics what you see in Mathematica if you're more comfortable with that.
Good luck!
Matlab is not a symbolic solver as Mathematica is, so you will not get exact solutions, only numeric approximations. Anyway, if you are solving a linear program (simplex) such as the one in your example, you should use the linprog function.

Matlab: Fsolve function and all possible roots

I'm using MATLAB's fsolve function to solve systems of nonlinear equations. I have two nonlinear equations with two variables (x,y);
I'm trying to find all possible roots for both variables. I noticed that fsolve gives just one root. How is it possible to get all the roots of the equations?
My code as the following:
function F = fun(guess)
    x = guess(1);
    y = guess(2);
    F = [2*x - y - exp(-x);
         -x + 2*y - exp(-y)];
end
Call the function:
guess = [-5 -5];
fsolve(@fun, guess);
Prove that there is only one root, so you don't need to search further.
From the second equation,
-x + 2·y - exp(-y) = 0
⇒ x = 2·y - exp(-y)
Substitute x into the first equation:
2·x - y - exp(-x) = 0
⇒ 2·(2·y - exp(-y)) - y - exp(-(2·y - exp(-y))) = 0
which is a function of y only. Standard calculus will show that this f(y) is monotonically increasing, negative as y → -∞ and positive as y → +∞. The result is the same when you do the substitution the other way around. This implies there is only one simultaneous root of both equations.
QED.
fsolve is not a global solver. There are global solvers (like genetic algorithms and simulated annealing), but they would have to run for an infinite amount of time to guarantee that the returned solutions comprise all of the minimizers. On the other hand, almost all other optimization solvers are local, meaning that they only guarantee that a local minimizer is returned.
Furthermore, in addition to not knowing whether the returned solution is a global or a local minimizer, there is in general no way of determining how many roots a problem has. So basically, there is no way of doing what you want, except in two well-known cases:
1) If the problem is convex, then there are no local minimizers that are not global, so anything returned by fsolve will be a global minimizer, and this minimizer is almost always unique. The exception is that technically there can exist an infinite number of solutions, but they will all be connected (e.g. lying in a common plane); there cannot exist a finite number of distinct minimizers that are not connected.
2) Polynomials have a fixed number of roots (n for degree n, counting multiplicity) that we can determine exactly, e.g. with MATLAB's roots function.
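As a practical compromise between a local solver and an exhaustive search, a multistart loop over random initial guesses is often used: it cannot prove completeness, but distinct roots, if any exist, tend to show up. A sketch for the system in the question (the number of starts and the search box are arbitrary choices):

```matlab
fun = @(v) [2*v(1) - v(2) - exp(-v(1));
            -v(1) + 2*v(2) - exp(-v(2))];
opts = optimoptions('fsolve', 'Display', 'off');
rng(0);                                        % reproducible starts
found = [];                                    % distinct roots, one per column
for k = 1:50
    guess = 10*(rand(2, 1) - 0.5);             % random start in [-5, 5]^2
    [v, ~, flag] = fsolve(fun, guess, opts);
    if flag > 0 && (isempty(found) || min(vecnorm(found - v)) > 1e-6)
        found(:, end+1) = v;                   %#ok<AGROW>
    end
end
disp(found)   % for this system, the starts collapse to the single root near (0.567, 0.567)
```

This is consistent with the proof above: since the system has exactly one root, every converged run should report (approximately) the same point.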