Fminunc returns indefinite Hessian matrix for a convex objective - matlab

When minimizing a convex objective function, does it follow that the Hessian matrix at the minimizer should be positive semidefinite (PSD)? If fminunc in MATLAB returns a Hessian that is not PSD, what does that mean? Am I using a wrong objective?

I work in environments other than MATLAB, but the following applies generally.
A non-PSD Hessian means you cannot take its Cholesky factorization (a kind of matrix square root), so you cannot use it to compute standard errors, for example.
To get a good Hessian, your objective function has to be really smooth, because taking a second derivative doubly amplifies any noise. If you really need the Hessian, it is best to use analytic derivatives rather than finite differences.
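The Cholesky test mentioned above is easy to demonstrate; here is a minimal NumPy sketch (the matrices are made up for illustration) showing how finite-difference noise can turn a positive definite Hessian into an indefinite one:

```python
import numpy as np

def is_positive_definite(h):
    """np.linalg.cholesky raises LinAlgError exactly when h is not
    positive definite, so it doubles as a definiteness test."""
    try:
        np.linalg.cholesky(h)
        return True
    except np.linalg.LinAlgError:
        return False

# A convex quadratic's Hessian is PSD everywhere; this one is PD:
H_good = np.array([[2.0, 0.5],
                   [0.5, 1.0]])

# Noise in a finite-difference estimate can flip a small eigenvalue
# negative, producing an indefinite matrix that has no Cholesky factor:
H_noisy = H_good + np.array([[0.0, 0.0],
                             [0.0, -1.1]])
```

Note that a singular-but-PSD matrix also fails the Cholesky test, so this really checks strict positive definiteness.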

Related

Is the Hessian matrix reported by MATLAB fmincon, fminunc the average Hessian Matrix?

Both fmincon and fminunc can return a Hessian matrix, but I could not find in the documentation how the Hessian is calculated. Is it the average Hessian over all sample data points?
The returned Hessian typically corresponds to either the last or the next-to-last iterate ("point") before termination; it is not an average. Several details come into play (BFGS approximation? User-supplied Hessian evaluation? etc.), but all of that is nicely summarized in the documentation.
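The same "final-iterate, possibly quasi-Newton" behavior can be seen in scipy's BFGS, which (like fminunc's default quasi-Newton mode) accumulates a Hessian approximation along the iterates rather than averaging over data; the quadratic test function below is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Convex quadratic f(x) = x' A x / 2; the true Hessian is A everywhere.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
g = lambda x: A @ x              # analytic gradient

res = minimize(f, x0=np.array([1.0, -2.0]), jac=g, method="BFGS")

# res.hess_inv is the BFGS approximation to the inverse Hessian,
# built from the sequence of iterates and held at the final point --
# an approximation at one point, not an average over data.
approx_H = np.linalg.inv(res.hess_inv)
```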

Matrix function minimization

Suppose I want to minimize the function f(x,y) = (|x+y| - |x| - |y|)^2, subject to a set of linear constraints, where x and y are real vectors or real arrays.
There are two problems:
- The function is not exactly convex (though certain linear cuts through it are), so algorithms that rely on derivatives may not work.
- The function to minimize does not return a real scalar, and the most common minimization algorithms (at least fmincon and fminsearch in MATLAB) require the objective function to return a scalar.
Is there a way to solve this cleanly, numerically, with whatever tools are available?
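One way around the second problem is to reduce the array-valued expression to a scalar by summing its squared entries. A sketch in Python/SciPy follows; the specific constraint is a hypothetical stand-in for "a bunch of linear constraints", the solver choice is an assumption, and the nonsmoothness caveat from the first problem still applies:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def objective(z):
    """Scalar objective: sum over elements of (|x+y| - |x| - |y|)^2,
    with x and y packed into one decision vector z = [x; y]."""
    n = z.size // 2
    x, y = z[:n], z[n:]
    r = np.abs(x + y) - np.abs(x) - np.abs(y)   # elementwise residual
    return np.sum(r**2)

n = 3
# Hypothetical linear constraint: all entries of [x; y] sum to 1.
A = np.ones((1, 2 * n))
con = LinearConstraint(A, lb=1.0, ub=1.0)

# The objective is nonsmooth where components change sign, so a
# gradient-based solver may stall; multiple restarts are advisable.
res = minimize(objective,
               x0=np.random.default_rng(0).normal(size=2 * n),
               constraints=[con], method="SLSQP")
```

Since |x+y| <= |x| + |y| elementwise, each residual is nonpositive and the objective is a sum of squares, bounded below by zero.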

scipy minimization: How to code a jacobian/hessian for objective function using max value

I'm using scipy.optimize.minimize with the Newton-CG (Newton conjugate gradient) method, since I have an objective function for which I know the analytical Jacobian and Hessian. However, I need to add a regularization term R = exp(max(s)) based on the maximum value of the array parameter s that is being fit. It isn't obvious to me how to implement derivatives for R. Letting the minimization algorithm compute numerical derivatives for the whole objective function isn't an option, because it is far too complex. Any thoughts?
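Where the maximum of s is attained at a unique index i, R = exp(max(s)) is differentiable with gradient exp(s_i) * e_i and Hessian exp(s_i) * e_i e_i^T; at ties it is only subdifferentiable. A sketch with a finite-difference check of the gradient (the test point is made up):

```python
import numpy as np

def R(s):
    return np.exp(np.max(s))

def R_grad(s):
    """Gradient of exp(max(s)): nonzero only at the argmax.
    Valid where the maximum is attained at a unique index."""
    g = np.zeros_like(s)
    i = np.argmax(s)
    g[i] = np.exp(s[i])
    return g

def R_hess(s):
    """Hessian of exp(max(s)): exp(s_i) * e_i e_i^T at the argmax."""
    H = np.zeros((s.size, s.size))
    i = np.argmax(s)
    H[i, i] = np.exp(s[i])
    return H

# Finite-difference check at a point with a clearly unique maximum:
s = np.array([0.3, 1.2, -0.5])
eps = 1e-6
fd = np.array([(R(s + eps * np.eye(3)[j]) - R(s - eps * np.eye(3)[j]))
               / (2 * eps) for j in range(3)])
```

If the nonsmoothness at ties causes trouble for Newton-CG, a common workaround is to replace max(s) with the smooth surrogate logsumexp(s), whose gradient and Hessian exist everywhere.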

How to use symbolic-math of Matlab to obtain Gradient of a complex equation

I am solving a huge optimization problem that takes a long time to converge. The reason is that MATLAB uses finite differences to calculate the gradients of the objective function and the nonlinear constraints, and to construct the Hessian matrix. However, fmincon has options that let you supply the analytic derivatives of the objective and constraints.
For this reason, I would like to know how to calculate the gradient of the function given below, both mathematically and with the Symbolic Math Toolbox. Note that I still want the gradient of the objective in vector form (not by expanding Eq1 into 5 scalar equations).
Let's assume we have these optimization variables:
Pd=[x1 x2 x3 x4]
Now we define these two vectors based on the optimization vector, i.e., Pd:
Pdn=[Pd(1);mo;Pd(2);0;Pd(4)]
Pgn=[Pd(2);Pd(1);m1;Pd(4);Pd(1)]
And this is the expression that I want to take the gradient of:
Eq1=sin(Pdn)+Pdn+Pgn.^2
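The vector-form gradient can be sketched with a symbolic toolbox; here is the analogous computation in Python's SymPy (treating mo and m1 as constants, as the MATLAB snippet suggests), where the Jacobian of the 5-vector Eq1 with respect to Pd plays the role of the gradient:

```python
import sympy as sp

x1, x2, x3, x4, mo, m1 = sp.symbols('x1 x2 x3 x4 mo m1')
Pd = sp.Matrix([x1, x2, x3, x4])

# Mirrors the MATLAB definitions above:
Pdn = sp.Matrix([Pd[0], mo, Pd[1], 0, Pd[3]])
Pgn = sp.Matrix([Pd[1], Pd[0], m1, Pd[3], Pd[0]])

# Eq1 = sin(Pdn) + Pdn + Pgn.^2, applied elementwise:
Eq1 = Pdn.applyfunc(sp.sin) + Pdn + Pgn.applyfunc(lambda t: t**2)

# 5x4 Jacobian of Eq1 w.r.t. the optimization variables, kept in
# matrix form rather than as 5 separate scalar equations:
J = Eq1.jacobian(Pd)
```

An expression like J can then be turned into a fast numeric function with sp.lambdify and passed to the solver as the analytic derivative.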

MATLAB: Plot integral using quad/quadl

I would like to know whether, and how, I can plot an integral calculated using quad/quadl.
I read that setting the trace parameter to a non-zero value makes quad print information about each iteration, but I'm not sure how, or whether, I can use that information to plot the integral.
Thanks.
quad and quadl do not compute an integral function anyway, i.e., the integral as a function of its upper limit. And since tools like these work iteratively, refining an estimate until it satisfies a tolerance on the single global value, they are not easily made to produce the plot you want.
You can get what you want by using a differential equation solver, for example ode45, to generate the integral function as the solution of an ODE.
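A sketch of that ODE-solver approach in Python/SciPy, where solve_ivp plays the role of ode45 (the Gaussian integrand is just an example):

```python
import numpy as np
from scipy.integrate import solve_ivp

# To plot F(x) = integral from a to x of f(t) dt, treat F as the
# solution of the initial value problem F'(x) = f(x), F(a) = 0,
# and let the ODE solver produce F on a dense grid of x values.
f = lambda x: np.exp(-x**2)           # example integrand

a, b = 0.0, 3.0
xs = np.linspace(a, b, 200)
sol = solve_ivp(lambda x, F: f(x), (a, b), y0=[0.0],
                t_eval=xs, rtol=1e-8, atol=1e-10)
F = sol.y[0]                          # F[i] ~ integral from a to xs[i]

# plt.plot(xs, F) would then display the integral function.
```

The solver's adaptive stepping handles the accuracy, and t_eval gives evenly spaced samples for a smooth plot.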