The right package/software for non-linear optimization with semidefinite constraints - scipy

I am struggling to solve an optimization problem, numerically, of the following (generic) form.
minimize F(x)
such that:
(1): 0 < x < 1,
(2): M(x) >= 0,
where M(x) is a matrix whose elements are quadratic functions of x. Constraint (2) means that M(x) must be a positive semidefinite matrix. Furthermore, F(x) is a callable function. For the more curious, here is a similar minimum working example.
I have tried a few options, but to no success.
PICOS, CVXPY and CVX -- In the first two, I cannot find a way of encoding a minimax problem such as mine. In the third, which is implemented in MATLAB, the matrices involved in a semidefinite constraint must be affine, so my problem falls outside its scope.
fmincon -- How can we encode a matrix positivity constraint? One way is to compute the eigenvalues of M(x) analytically and constrain each one to be nonnegative, but the analytic expressions for the eigenvalues can be horrendous.
MOSEK -- The objective function must be expressible in a standard form. I cannot find an example of a user-defined objective function.
scipy.optimize -- Along with the objective function and the constraints, it is necessary to provide their derivatives as well. That is fine for my objective function, but if I were to express the matrix positivity constraint (as well as its derivative) with an analytic expression of the eigenvalues, that would be very tedious.
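For concreteness, here is the kind of derivative-free encoding I would hope for with scipy.optimize: constrain the smallest eigenvalue of M(x) to be nonnegative and let the solver use finite differences. This is only a sketch; M and F below are toy placeholders, not my real problem, and the smallest-eigenvalue function is nonsmooth where eigenvalues cross.

import numpy as np
from scipy.optimize import minimize, Bounds, NonlinearConstraint

# Toy placeholders: M(x) has entries quadratic in x, F is a callable.
def M(x):
    return np.array([[x[0]**2 + 0.5, x[0]*x[1]],
                     [x[0]*x[1],     x[1]**2 + 0.5]])

def F(x):
    return (x[0] - 0.7)**2 + (x[1] - 0.3)**2

def min_eig(x):
    # eigvalsh returns eigenvalues in ascending order
    return np.linalg.eigvalsh(M(x))[0]

# PSD constraint: lambda_min(M(x)) >= 0, with finite-difference
# derivatives, so no analytic eigenvalue formulas are needed.
psd = NonlinearConstraint(min_eig, 0.0, np.inf, jac='2-point')
bounds = Bounds(0.0, 1.0)   # constraint (1), with closed bounds

res = minimize(F, x0=[0.5, 0.5], method='trust-constr',
               bounds=bounds, constraints=[psd])
print(res.x, res.fun)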
My apologies for not providing an MWE to illustrate my attempts with each of the packages above.
Can anyone please suggest a package which could be useful in solving my optimization problem?

Have a look at a nonlinear optimization package with box constraints, where different types of constraints may be coded via penalty or barrier techniques.
Look at the following URL
merlin.cs.uoi.gr
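For instance, a minimal sketch of the penalty idea in Python/SciPy, using the same toy M and F as the sketch in the question above (the penalty weight mu would typically be increased over several solves; this is an illustration, not a tuned implementation):

import numpy as np
from scipy.optimize import minimize, Bounds

def M(x):
    return np.array([[x[0]**2 + 0.5, x[0]*x[1]],
                     [x[0]*x[1],     x[1]**2 + 0.5]])

def F(x):
    return (x[0] - 0.7)**2 + (x[1] - 0.3)**2

mu = 10.0   # penalty weight

def penalized(x):
    lam_min = np.linalg.eigvalsh(M(x))[0]
    # zero penalty whenever M(x) is PSD, quadratic growth otherwise
    return F(x) + mu * min(lam_min, 0.0)**2

res = minimize(penalized, x0=[0.5, 0.5], method='L-BFGS-B',
               bounds=Bounds(0.0, 1.0))
print(res.x, res.fun)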

Related

scipy minimization: How to code a jacobian/hessian for objective function using max value

I'm using scipy.optimize.minimize with the Newton-CG (Newton conjugate gradient) method, since I have an objective function for which I know the analytical Jacobian and Hessian. However, I need to add a regularization term R = exp(max(s)) based on the maximum value inside the array parameter "s" that is being fit. It isn't entirely obvious to me how to implement derivatives for R. Letting the minimization algorithm do numeric derivatives for the whole objective function isn't an option, by the way, because it is far too complex. Any thoughts, oh wise people of the web?
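A sketch of one standard workaround, assuming a unique argmax (an addition here, not from the thread): where the maximum is attained at a single index k, d max(s)/ds_i is 1 for i = k and 0 otherwise, so the chain rule gives a one-hot gradient and a single-entry Hessian that can be added to the analytic derivatives of the main objective. Note that max is non-differentiable at ties; a log-sum-exp softening is a smooth alternative.

import numpy as np

def R(s):
    return np.exp(np.max(s))

def R_grad(s):
    # (sub)gradient of R: nonzero only at the argmax
    g = np.zeros(s.size)
    g[np.argmax(s)] = R(s)
    return g

def R_hess(s):
    # Hessian of R (where differentiable): a single nonzero entry
    k = np.argmax(s)
    H = np.zeros((s.size, s.size))
    H[k, k] = R(s)
    return H

s = np.array([0.2, 1.3, 0.7])
print(R_grad(s))   # nonzero only at index 1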

Nonlinear optimization with symbolic constraint in Matlab

I need to solve a nonlinear problem in n symbolic variables. The objective function to maximize is just a sum/difference of products of a variable with itself or with another one, so it is basically a quadratic form.
The problem is that I need to impose a symbolic limit on the sum of these variables, as well as limit the values of these variables to be between 0 and 1 inclusive.
So I need an upper bound expressed as a function of the (symbolic) sum and not as a number. Can Matlab solve such kind of problems? If yes, how?
EDIT: Basically I need to perform the optimization of a symbolic problem with a quadratic objective function and linear constraints.
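A numeric workaround sketch (in Python/SciPy for illustration, since a fully symbolic route may not be available): treat the symbolic limit S as a parameter and re-solve for each value of interest. Q, c, and S are hypothetical placeholders, not from the question.

import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

def solve_for_limit(Q, c, S):
    n = len(c)
    obj = lambda x: -(x @ Q @ x + c @ x)   # maximize by minimizing the negation
    sum_cap = LinearConstraint(np.ones((1, n)), -np.inf, S)   # sum(x) <= S
    box = Bounds(0.0, 1.0)   # each variable between 0 and 1 inclusive
    return minimize(obj, x0=np.full(n, 0.5), method='trust-constr',
                    constraints=[sum_cap], bounds=box)

res = solve_for_limit(Q=-np.eye(3), c=np.ones(3), S=2.0)
print(res.x)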

Why am I getting different solutions when feeding constraints to fmincon in two similar ways?

I am using fmincon to solve a problem. The problem has some linear inequality constraints that are written in the matrices A and B.
I can write these constraints in two ways and should get analogous results. But, weirdly, I am getting different solutions. Why is that?
1) In the first way, I can feed the constraints to fmincon as follows:
[Xsqp, FUN, FLAG, Options] = fmincon(@(X)SQP(X,Dat),X,A,B,[],[],lb,ub,@(X)SQPnolcon(X,Dat,A,B),options);
% I comment out the line C=A*X'-B; in the function SQPnolcon and put C=[] instead,
% because A and B are already given to fmincon
2) In the second way, I can write it like this:
[Xsqp, FUN, FLAG, Options] = fmincon(@(X)SQP(X,Dat),X,[],[],[],[],lb,ub,@(X)SQPnolcon(X,Dat,A,B),options);
and also the constraint function as follows:
function [C,Ceq] = SQPnolcon(X,Dat,A,B)
C=A*X'-B;
Ceq = [];
end
In the first, you're supplying A and B both as linear inequality constraints and as nonlinear inequality constraints, but in the second you're only supplying them as nonlinear inequality constraints.
I get why you might expect the two to be equivalent, since they're the same constraints either way. But linear inequality constraints are applied in a different context than nonlinear inequality constraints, and that leads the optimization algorithm to a different solution.
I'm afraid I'm not able to explain exactly how the two types of constraints are applied, and at what points in the algorithm - and in any case, this would vary depending on which algorithm you're asking fmincon to use (active-set, trust-region-reflective, and so on). For that level of detail, you might need to ask MathWorks. But the basic answer is that you're getting different results because you're asking the algorithm to do two different things.

Matlab equivalent to Mathematica's FindInstance

I do just about everything in Matlab but I have yet to work out a good way to replicate Mathematica's FindInstance function in Matlab. As an example, with Mathematica, I can enter:
FindInstance[x + y == 1 && x > 0 && y > 0, {x, y}]
And it will give me:
{{x -> 1/2, y -> 1/2}}
When no solution exists, it will give me an empty Out. I use this often in my work to check whether or not a solution to a system of inequalities exists -- I don't really care about a particular solution.
It seems like there should be a way to replicate this in Matlab with Solve. There are sections in the help file on solving a set of inequalities for a parametrized solution with conditions. There's another section on spitting out just one solution using PrincipalValue, but this seems to just select from a finite solution set, rather than coming up with one that meets the parameters.
Can anybody come up with a way to replicate the FindInstance functionality in Matlab?
Building on what jlandercy said, you can certainly use linprog, which is MATLAB's linear programming solver. A linear program in the MATLAB universe can be formulated like so:
You seek a solution x in R^n which minimizes the objective function f'*x, subject to inequality constraints A*x <= b, equality constraints Aeq*x = beq, and lower and upper bounds on each component of x. Because you want to find the minimum possible value that satisfies the given constraints, what you're really after is:
minimize x + y
subject to: x + y = 1, x > 0, y > 0
Because MATLAB only supports less-than inequalities, you'll need to take the negative of the two positivity constraints. In addition, MATLAB doesn't support strict inequalities, so you'll have to enforce that each variable is at least some small positive number, say a threshold epsilon of 1e-4. Therefore, your formulation is now:
minimize x + y
subject to: x + y = 1, -x <= -1e-4, -y <= -1e-4
Note that we don't have any upper or lower bounds as those conditions are already satisfied in the equality and inequality constraints. All you have to do now is plug this problem into linprog. linprog accepts syntax in the following way:
x = linprog(f,A,b,Aeq,beq);
f is a vector of coefficients for the objective function, A is a matrix of coefficients for the left-hand sides of the inequality constraints, b is a vector of right-hand sides for the inequality constraints, and Aeq and beq play the same roles for the equality constraints. x is the solution to the linear programming problem. Reformulating your problem into this matrix form, we get:
minimize [1 1]*[x; y]
subject to: [-1 0; 0 -1]*[x; y] <= [-1e-4; -1e-4] and [1 1]*[x; y] = 1
With respect to the linear programming formulation, we can now see what each variable in the MATLAB universe needs to be. Therefore, in MATLAB syntax, each variable becomes:
f = [1; 1];
A = [-1 0; 0 -1];
b = [-1e-4; -1e-4];
Aeq = [1 1];
beq = 1;
As such:
x = linprog(f, A, b, Aeq, beq);
We get:
Optimization terminated.
x =
0.5000
0.5000
If linear programming is not what you're looking for, consider looking at MATLAB's MuPAD interface: http://www.mathworks.com/help/symbolic/mupad_ug/solve-algebraic-equations-and-inequalities.html - This more or less mimics what you see in Mathematica if you're more comfortable with that.
Good luck!
MATLAB is not a symbolic solver the way Mathematica is, so you will not get exact solutions, only numeric approximations. Anyway, if you are about to solve a linear program (simplex) such as in your example, you should use the linprog function.
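For reference, the same feasibility check is a few lines with Python's scipy.optimize.linprog (mirroring the MATLAB data above; treating an unsuccessful solve as FindInstance's empty output is this sketch's convention):

import numpy as np
from scipy.optimize import linprog

eps = 1e-4   # stand-in for the strict inequalities x > 0, y > 0
res = linprog(c=[1, 1],                  # any objective works; we only need feasibility
              A_ub=[[-1, 0], [0, -1]],   # -x <= -eps, -y <= -eps
              b_ub=[-eps, -eps],
              A_eq=[[1, 1]], b_eq=[1],   # x + y == 1
              bounds=[(None, None), (None, None)])
print(res.x if res.success else "no instance found")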

Minimizing error of a formula in MATLAB (Least squares?)

I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors - one known and one unknown. This formula is supposed to represent the error, and I need to minimize the error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering which function is best for doing that and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more - am I required to take derivatives?
The problem that you must be trying to solve is
min u'*u = min sum_i u_i^2, where u = y - X*beta; u is the error, y is the vector of dependent variables you are trying to explain, X is a matrix of independent variables, and beta is the vector of coefficients you want to estimate.
Since sum_i u_i^2 is differentiable (and convex), you can find its minimum by setting its derivative equal to zero.
If you do that, you find that beta = inv(X'*X)*X'*y. This may be calculated using the MATLAB function regress http://www.mathworks.com/help/stats/regress.html or by writing this formula in MATLAB. However, you should be careful about how you evaluate the inverse of (X'*X); see Most efficient matrix inversion in MATLAB
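A minimal numeric sketch (in Python/NumPy for illustration, with made-up data; np.linalg.lstsq solves the same problem without forming inv(X'*X) explicitly, which is the safer route the caveat above refers to):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # independent variables (synthetic)
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)   # noisy dependent variable

# Least squares: min ||y - X*beta||^2, no explicit matrix inverse
beta_hat, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # close to beta_true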