I came across an equation in a paper that I want to solve. (I'm not sure whether the paper is publicly readable.) The problem is to minimize equation 9 (see the attached file) subject to the constraint in equation 10.
Each n is a 3-dimensional vector (so the set of n can be expressed as a Px3 array), and c is a vector of length K (a 1xK array). I' (a PxL array) and l (a 3xL array) are both known. I need to find the set of n and c.
The paper stated that:
'We used MATLAB implementation of the trust region reflective quadratic programming for optimization.'
I don't know how this can be done. I am not sure whether this refers to quadprog or to a direct use of fmincon. In either case, I have no idea how to write the objective and constraint equations in the form the function call expects. It would be great if someone could show me how to rewrite the equation and use 'trust region reflective quadratic programming for optimization', or suggest other efficient ways to solve it.
thanks
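In case it helps, here is a generic sketch, assuming equation 9 reduces to a bound-constrained linear least-squares objective ||A*x - b||^2, which is the form quadprog's trust-region-reflective algorithm accepts (quadratic objective, with only bound constraints or only linear equalities). A, b, lb, and ub below are placeholders, not the paper's actual quantities from equations 9-10.

    % Sketch only: stand-in data, not the quantities from the paper.
    A  = randn(20, 5);
    b  = randn(20, 1);
    H  = 2 * (A' * A);              % ||A*x-b||^2 = 0.5*x'*H*x + f'*x + const
    f  = -2 * (A' * b);
    lb = zeros(5, 1);               % stand-in bounds playing the role of Eq. 10
    ub = ones(5, 1);
    x0 = 0.5 * ones(5, 1);          % strictly interior starting point
    opts = optimoptions('quadprog', 'Algorithm', 'trust-region-reflective');
    x = quadprog(H, f, [], [], [], [], lb, ub, x0, opts);

If equation 10 is nonlinear (for example, unit-length normals n), quadprog cannot express it, and fmincon with a nonlinear constraint function is the usual fallback; lsqlin also offers a 'trust-region-reflective' algorithm if you keep the problem in least-squares form.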
I am struggling to solve an optimization problem, numerically, of the following (generic) form.
minimize F(x)
such that:
    (1): 0 < x < 1
    (2): M(x) >= 0.
where M(x) is a matrix whose elements are quadratic functions of x. The last constraint means that M(x) must be a positive semidefinite matrix. Furthermore, F(x) is a callable function. For the more curious, here is a similar minimum working example.
I have tried a few options, but with no success.
PICOS, CVXPY and CVX -- In the first two cases, I cannot find a way of encoding a minimax problem such as mine. In the third, which is implemented in MATLAB, the matrices involved in a semidefinite constraint must be affine, so my problem falls outside this criterion.
fmincon -- How can we encode a matrix positivity constraint? One way is to compute the eigenvalues of the matrix M(x) analytically and constrain each one to be positive, but the analytic expression for the eigenvalues can be horrendous. (A numerical alternative is sketched below.)
MOSEK -- The objective function must be expressible in a standard form. I cannot find an example of a user-defined objective function.
scipy.optimize -- Along with the objective function and the constraints, it is necessary to provide their derivatives as well. That is fine for my objective function, but expressing the matrix positivity constraint (and its derivative) through an analytic expression for the eigenvalues would be very tedious.
My apologies for not providing an MWE to illustrate my attempts with each of the above packages.
Can anyone please suggest a package/software which could be useful to me in solving my optimization problem?
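Regarding the fmincon route, a minimal numerical sketch follows. It assumes you can evaluate M(x) numerically, and enforces positive semidefiniteness through the smallest eigenvalue, computed with eig() rather than analytically. The 2x2 M(x) and the objective F are toy stand-ins; only the nonlinear-constraint pattern is the point.

    % Save as psd_demo.m. Toy problem: with this M(x), the PSD set is
    % the quarter disk x(1)^2 + x(2)^2 <= 1 inside the unit box.
    function psd_demo
        F  = @(x) sum((x - 1).^2);          % toy objective
        x0 = [0.5; 0.5];                    % feasible starting point
        lb = zeros(2, 1);                   % constraint (1): 0 < x < 1,
        ub = ones(2, 1);                    % approximated by closed bounds
        x  = fmincon(F, x0, [], [], [], [], lb, ub, @psdcon)
    end

    function [c, ceq] = psdcon(x)
        % constraint (2): M(x) >= 0  <=>  smallest eigenvalue >= 0,
        % expressed as c(x) <= 0, the form fmincon requires
        M   = [1 - x(1)^2, x(1)*x(2); x(1)*x(2), 1 - x(2)^2];  % toy M(x)
        c   = -min(eig((M + M') / 2));      % symmetrize for numerical safety
        ceq = [];
    end

On this toy problem the minimizer lands near x = [0.707; 0.707], where the eigenvalue constraint is active. One caveat: the smallest eigenvalue is nonsmooth where eigenvalues cross, so fmincon's gradient-based algorithms can stall there; a small margin (c = tol - min(eig(...))) sometimes helps.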
Have a look at a nonlinear optimization package with box constraints, where other types of constraints may be coded via penalty or barrier techniques.
Look at the following URL
merlin.cs.uoi.gr
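A rough sketch of the penalty idea, reusing the toy M(x) and F from the fmincon sketch above: the PSD condition is moved into the objective, where any negative smallest eigenvalue of M(x) is penalized quadratically with an increasing weight mu, while the box constraint is handled natively by the solver (fmincon here, as one concrete box-constrained choice).

    F      = @(x) sum((x - 1).^2);                       % toy objective
    lamMin = @(x) min(eig([1 - x(1)^2, x(1)*x(2); ...
                           x(1)*x(2), 1 - x(2)^2]));     % toy M(x)
    lb = zeros(2, 1);  ub = ones(2, 1);                  % box constraint
    x  = [0.5; 0.5];
    for mu = 10.^(0:4)                                   % tighten penalty gradually
        pen = @(x) F(x) + mu * max(0, -lamMin(x))^2;     % quadratic penalty
        x = fmincon(pen, x, [], [], [], [], lb, ub);     % warm-start each round
    end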
I have a set of 4 PDEs:
du/dt + A(u) * du/dx = Q(u)
where u is a vector of unknowns:
u=[u1;u2;u3;u4]
and A is a 4x4 matrix, Q is 4x1, and both A and Q are functions of u.
But my questions are:
1. How can I solve the above equation in MATLAB?
2. If I solve it with MATLAB's built-in PDE functions, can I then convert the solution into a simple function that does not depend on those built-in functions?
3. Is there any way to calculate A and Q explicitly, i.e., at every time step compute A and Q from the previous time step's data and substitute the new values into the equation, so that the program runs faster?
PDEs require finite differences, finite elements, boundary elements, etc. You can also turn them into ODEs using transforms like Laplace or Fourier, solve those using ODE functions, and then transform back. Neither approach is trivial.
Your equation is a non-linear transient transport equation: with only first-order derivatives in t and x, it is a first-order hyperbolic system.
The equation you posted has the additional difficulty of being non-linear, because both the A matrix and the Q vector are functions of the unknown u. You'll have to start by linearizing your equations: solve for increments in u rather than u itself.
Once you've done that, discretize the du/dx term using finite differences, finite elements, or boundary elements. You should start with a weighted residual integral formulation.
You're almost done: next, integrate with respect to time using the method of your choice.
It's not trivial.
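To make the lagged-coefficient idea from question 3 concrete, here is a rough explicit sketch: a first-order upwind discretization of du/dx with A and Q frozen at the previous time level. Afun and Qfun are toy stand-ins, and the one-sided difference assumes A(u) has positive eigenvalues (information flowing left to right); a real implementation needs proper characteristic-based upwinding and a CFL-limited time step.

    Afun = @(u) eye(4) * (1 + 0.1 * norm(u));   % toy A(u), 4x4
    Qfun = @(u) -0.1 * u;                       % toy Q(u), 4x1
    nx = 100;  dx = 1 / nx;  dt = 0.4 * dx;     % CFL-limited for wave speeds ~1
    u  = zeros(4, nx);  u(:, 1:10) = 1;         % initial condition
    for step = 1:200
        unew = u;
        for j = 2:nx
            Aj = Afun(u(:, j));                 % frozen at the old time level
            unew(:, j) = u(:, j) ...
                - dt / dx * Aj * (u(:, j) - u(:, j-1)) ...  % upwind du/dx
                + dt * Qfun(u(:, j));                       % source term
        end
        u = unew;                               % advance one time step
    end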
Google found this: maybe it will help you.
http://www.mathworks.com/matlabcentral/fileexchange/3710-nonlinear-diffusion-toolbox
I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors, one known and one unknown. The formula represents an error that I need to minimize. I think I'm supposed to use least squares, but I don't know much about it: which function is best for this, and what arguments would represent my equation? My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem you are trying to solve is
min u'u = min \sum_i u_i^2, with u = y - X*beta,
where u is the error vector, y is the vector of dependent variables you are trying to explain, X is the matrix of independent variables, and beta is the vector of coefficients you want to estimate.
Since \sum_i u_i^2 is differentiable (and convex), you can find its minimum by computing the derivative and setting it equal to zero.
If you do that, you find beta = inv(X'*X)*X'*y. This may be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in MATLAB. However, you should be careful about how you evaluate the inverse of X'*X; see 'Most efficient matrix inversion in MATLAB'.
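A small sketch of that calculation on made-up data. The backslash operator is the usual way to compute the estimate: it solves the least-squares problem via QR factorization without forming inv(X'*X) explicitly.

    X = [ones(10, 1), (1:10)'];            % toy design matrix: intercept + slope
    y = 3 + 2 * (1:10)' + 0.1 * randn(10, 1);
    beta_bs = X \ y;                       % preferred: QR-based, numerically stable
    beta_ne = (X' * X) \ (X' * y);         % literal normal equations
    % beta_rg = regress(y, X);             % Statistics Toolbox equivalent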
Does anyone know what functions are available for solving linear systems when the equations are actually congruences mod m? The goal is to solve a linear system Ax = b for values of x such that Ax is congruent to b (mod m).
A discussion of how to perform Gaussian elimination in this situation can be found here, but I was hoping to use MATLAB rather than attempting it by hand.
Have a look at the lincon() method found here:
http://www.mathworks.com/matlabcentral/fileexchange/32856-system-of-linear-congruences/content/lincon.m
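If you'd rather roll your own for the special case of a prime modulus, here is a minimal sketch of Gaussian elimination over Z_m. It assumes m is prime (so every nonzero pivot is invertible, via Fermat's little theorem), A is invertible mod m, and m is small enough that intermediate products fit in a double; lincon() handles the general composite-modulus case.

    function x = modsolve(A, b, m)
        % Gauss-Jordan elimination mod a small prime m; assumes A is
        % invertible mod m. Example: modsolve([2 3; 1 4], [1; 2], 7) -> [1; 2]
        n = size(A, 1);
        M = mod([A, b], m);                          % augmented matrix over Z_m
        for k = 1:n
            p = find(M(k:end, k) ~= 0, 1) + k - 1;   % pivot row
            M([k p], :) = M([p k], :);
            M(k, :) = mod(M(k, :) * powermod(M(k, k), m - 2, m), m);  % pivot -> 1
            for i = [1:k-1, k+1:n]
                M(i, :) = mod(M(i, :) - M(i, k) * M(k, :), m);        % eliminate
            end
        end
        x = M(:, end);
    end

    function r = powermod(a, e, m)
        % square-and-multiply: a^e mod m without overflowing a^e
        r = 1;  a = mod(a, m);
        while e > 0
            if mod(e, 2) == 1, r = mod(r * a, m); end
            a = mod(a * a, m);  e = floor(e / 2);
        end
    end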
I am attempting to use InteriorPointSolver to solve a standard Quadratic Programming problem with linear constraints (per the definition that can be found here). My problem has no linear term (the "c" vector in the definition). I am setting up the "Q" matrix by using SetCoefficient(Int32, Rational, Int32, Int32) across all my variables (passing the "goal" row as the vidRow). Am I correct in assuming that the InteriorPointSolver is minimizing the objective function as defined in the standard definition of the quadratic programming problem?
I ask this because when I calculate x^T * Q * x myself (using the optimal solution for x that I get from the solver), I get a value that is substantially different from what the solver claims the optimal objective function value is (via Statistics.Primal or GetValue(goal)). The only time my calculation and the solver's optimal value agree is when I use an identity matrix for Q. I am guessing that I am setting something up wrong or am not understanding exactly what function is being minimized.
I have consulted all the documentation I can find and cannot find a good explanation of exactly what function the interior point solver is minimizing. Can anyone guide me in the right direction?
As it turns out,
SetCoefficient(goal, 2.0, x, y)
has exactly the same effect as
SetCoefficient(goal, 2.0, y, x)
The effect of both calls is to set the coefficient of the x*y term in your objective function, and the second call simply overwrites the coefficient that you set in the first call. The solver does not treat the xy term as distinct from the yx term, and does not add the coefficients (as I had expected). So, if your goal is to have a 4xy term in your objective function, you must make the following call:
SetCoefficient(goal, 4.0, x, y)
instead of the two calls listed above.
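As a sanity check on the algebra (in MATLAB here, since the convention is solver-independent): a single coefficient of 4 on the x*y term corresponds to the symmetric matrix form x'*Q*x with 2 in each off-diagonal entry. Comparing x'*Q*x against the term-by-term expansion of the matrix you actually encoded is a quick way to spot the overwritten-coefficient problem described above.

    xv = [3; 5];                     % example values for variables x and y
    Q  = [0 2; 2 0];                 % symmetric form of a 4*x*y objective
    obj_matrix = xv' * Q * xv;       % 2*x*y + 2*y*x = 4*x*y = 60
    obj_term   = 4 * xv(1) * xv(2);  % objective built term by term
    assert(obj_matrix == obj_term)   % the two conventions agree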