How to compute the gradient and hessian matrix when the equation cannot be solved numerically?
My minimization equation is:
c=c[(x/y/(1-x)^2)^0.6 + (1-(x/y)/(1-y)^2)^0.6 + 6/y^0
I tried the MATLAB function "diff" to compute the gradient and Hessian, but the resulting expressions are far too long to handle. How can I write code to compute the Hessian or gradient?
Why do you say the equation cannot be solved numerically? Do you mean it cannot be solved analytically? There appears to be a typo in your statement of the function c that you wish to optimize. When you refer to your use of Matlab's diff() function, do you mean that you evaluated your function on a grid and then differenced it? Or are you talking about passing a function handle to Matlab's symbolic library?
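As a rough sketch of the symbolic route (assuming the Symbolic Math Toolbox is available; the objective below is only a placeholder, since the posted c contains a typo), you can build the gradient and Hessian once symbolically and then convert them to numeric function handles:

    syms x y
    f = (x/(1-x))^2 + 6/y;                 % placeholder objective, not the posted c

    g = gradient(f, [x, y]);               % 2x1 symbolic gradient
    H = hessian(f, [x, y]);                % 2x2 symbolic Hessian

    % Convert to numeric handles that take a point [x y] as a vector
    gfun = matlabFunction(g, 'Vars', {[x, y]});
    Hfun = matlabFunction(H, 'Vars', {[x, y]});

These handles can then be supplied to an optimizer instead of letting it build finite-difference approximations.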
Related
How can I find the total harmonic distortion of a nonlinear signal, for example the forced Van der Pol oscillator with the code shown below? I have tried the 'thd' function in MATLAB but I guess I'm missing something.
This is the equation
x''(t) - mu*(1 - x(t)^2)*x'(t) + x(t) = P*cos(w*t)
function vdpo()
    t = 0:0.001:10;
    mu = 2;
    x0 = -2;
    v0 = 2;
    p = 10; w = 7;
    [t, x] = ode45(@f, t, [x0, v0]);
    function dxdt = f(t, x)
        dxdt1 = x(2);
        dxdt2 = mu*(1 - x(1)^2)*x(1) + p*cos(w*t);
        dxdt = [dxdt1; dxdt2];
    end
end
Try the code below, in which the nested function f(t,x) defines the ODE system and ode45 (a Runge-Kutta solver) is called to solve it.
function [x] = vdpo()
    t = 0:0.001:10;
    mu = 2;
    x0 = -2;
    v0 = 2;
    p = 10; w = 7;
    [t, x] = ode45(@f, t, [x0, v0]);
    function dxdt = f(t, x)
        dxdt1 = -x(2) - x(1) + (x(1)^3)/3;
        dxdt2 = -x(1) + p*cos(w*t);
        dxdt = [dxdt1; dxdt2];
    end
end
However, this is really a math problem rather than a programming problem. The first thing to do is to transform the equation into a more convenient form by defining y = x' + ((x^3)/3 - x)*mu; that gives two first-order ordinary differential equations, which ode45 can solve. The steps are worked through here (page 2).
By calling
X=vdpo();
x=X(:,1);
thd(x)
we could get the THD estimate for x.
P.S. I am NOT CERTAIN about the THD part.
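A hedged note on the THD part: thd() from the Signal Processing Toolbox can also be given the sample rate, which for t = 0:0.001:10 is 1/0.001 = 1000 Hz. A variant of the call would be:

    X = vdpo();
    x = X(:,1);
    fs = 1000;      % sampling rate implied by the 0.001 s time step
    thd(x, fs)      % with no output argument, thd plots the annotated spectrum and reports THD in dB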
I am solving a huge optimization problem that takes a long time to converge to a solution. This is because MATLAB uses a finite difference method to calculate the gradient of the objective function and the nonlinear constraints, and to construct the Hessian matrix. But there is an option in the fmincon solver that allows you to supply the analytic derivatives of the objective and the constraints.
For this reason I want to know how I can calculate the gradient of the function given below, both mathematically and with the Symbolic Math Toolbox. I should note that I still want the gradient of the objective in vector format (not by expanding Eq1 into 5 separate equations).
Let's assume we have these optimization variables:
Pd=[x1 x2 x3 x4]
Now we define these two vectors based on the optimization vector, i.e., Pd:
Pdn = [Pd(1); mo; Pd(2); 0; Pd(4)]
Pgn = [Pd(2); Pd(1); m1; Pd(4); Pd(1)]
Now this is the expression I want to take the gradient of:
Eq1 = sin(Pdn) + Pdn + Pgn.^2
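Not a definitive answer, but a minimal sketch of the symbolic route: since Eq1 is a 5-by-1 vector and Pd has 4 entries, the object you want in vector format is the 5-by-4 Jacobian, which the Symbolic Math Toolbox can build directly (mo and m1 are treated here as symbolic constants):

    syms x1 x2 x3 x4 mo m1
    Pd  = [x1 x2 x3 x4];

    Pdn = [Pd(1); mo;    Pd(2); 0;     Pd(4)];
    Pgn = [Pd(2); Pd(1); m1;    Pd(4); Pd(1)];

    Eq1 = sin(Pdn) + Pdn + Pgn.^2;     % 5x1 symbolic vector

    J = jacobian(Eq1, Pd);             % 5x4 Jacobian, d(Eq1_i)/d(Pd_j)

    % Numeric handle; mo and m1 are passed separately in case they survive differentiation
    Jfun = matlabFunction(J, 'Vars', {[x1 x2 x3 x4], mo, m1});

The handle Jfun can then be evaluated inside the objective/constraint functions you pass to fmincon with the gradient options enabled.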
How do I solve a quadratic Maximization problem in MATLAB? It seems MATLAB only supports minimization problems, so is there a mathematical concept I can use?
Simply multiply the objective by (-1) before calling the minimization function, and multiply the returned minimum value by (-1) afterwards.
Use the quadprog function in MATLAB.
This function solves quadratic programming problems.
Of course, if you want the maximum instead of the minimum, you can multiply the cost function by -1.
Good Luck.
The above answer by @Drazick does not seem right.
quadprog() in MATLAB requires H to be positive definite. If we simply multiply by (-1), -H is a negative definite matrix, which violates the requirement.
Another optimization function, fmincon(), may help.
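To make the two answers above concrete, here is a hedged sketch (the matrices are made up) of when the sign-flip trick is valid for quadprog: it works when the maximization problem is concave, i.e. H is negative (semi)definite, so that -H is positive (semi)definite.

    H = [-2 0; 0 -4];                       % concave quadratic: maximize 0.5*x'*H*x + f'*x
    f = [1; 1];
    A = []; b = []; Aeq = []; beq = [];
    lb = [-10; -10]; ub = [10; 10];

    [xMax, fvalMin] = quadprog(-H, -f, A, b, Aeq, beq, lb, ub);
    fvalMax = -fvalMin;                     % undo the sign flip to recover the maximum value

If the original H is positive definite (a convex quadratic), the objection above applies: the negated problem is non-convex, and fmincon or an explicit search over the feasible region's vertices is needed instead.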
When minimizing a convex objective function, does that mean the Hessian matrix at the minimizer should be PSD? If fminunc in MATLAB returns a Hessian which is not PSD, what does that mean? Am I using a wrong objective?
I do this in environments other than MATLAB.
Non-PSD means you can't take the Cholesky factorization of it (i.e. the matrix square root), so you can't use it to get standard errors, for example.
To get a good Hessian, your objective function has to be really smooth, because you're taking a second derivative, which doubly amplifies any noise. If possible, it is best to use analytic derivatives rather than finite differences. That is, if you really need the Hessian.
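On the MATLAB side, a minimal sketch (toy objective, purely for illustration) of checking the returned Hessian with a Cholesky factorization:

    % Toy objective just so fminunc has a Hessian to return
    fun = @(x) (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
    [xmin, fval, exitflag, output, grad, H] = fminunc(fun, [-1; 2]);

    Hsym = (H + H')/2;          % symmetrize against round-off
    [R, p] = chol(Hsym);        % p == 0  => numerically positive definite
    if p == 0
        disp('Hessian is PD: Cholesky factor exists, e.g. standard errors are computable.');
    else
        disp('Hessian is not PD: finite-difference noise, or the point is not a minimizer.');
    end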
I would like to know whether, and how, I can plot an integral calculated using quad/quadl.
I read that I can set the trace parameter to be non-zero, which provides information about each iteration, but I'm not sure how, or if, I can use that information to plot the integral.
Thanks.
quad and quadl do not compute an integral function anyway, i.e., the integral as a function of its upper limit. And since tools like these work iteratively, refining their estimate until it satisfies a tolerance on the global value, they are not easily made to produce the plot you desire.
You can do what you desire by using a differential equation solver to generate the solution, ode45 for example.
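A hedged sketch of that ODE route, with an example integrand (not the asker's): to plot F(x) = integral of f(t) dt from a to x, treat it as the initial value problem dF/dx = f(x), F(a) = 0, and let ode45 return the whole trajectory:

    f = @(x) exp(-x.^2);                        % example integrand
    a = 0; b = 3;

    [xs, Fs] = ode45(@(x, F) f(x), [a b], 0);   % F' = f(x), F(a) = 0
    plot(xs, Fs);
    xlabel('x'); ylabel('F(x)');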