I want to differentiate the following function with respect to t in MATLAB:
T(e(x(t),t)/p(t))
My problem is that I only know the derivatives of x numerically (I am inside a kind of odefun).
I want to use diff to make my code generalizable to higher-order derivatives, but the derivatives of x are only available as constants. I would also like all of this to live in an anonymous function, so that I can perform the differentiation and then substitute the time and whichever derivative of x is needed, without writing a separate function for every state of my system.
My code is as follows:
syms q x star;
qd = symfun(90*pi/180 + 30*pi/180*cos(q), [q]);    % qd(q)
p  = symfun(79*pi/180*exp(-1.25*q) + pi/180, [q]); % p(q)
T  = log(-(1 + star)/star);                        % T as an expression in star
e  = symfun(x - qd, [x, q]);                       % e(x, q) = x - qd(q)
and I want to write, for example, a function of the form
@(t,y) ( d^2/dt^2 T(e(x(t),t)/p(t)) + d/dt T(e(x(t),t)/p(t)) + T(e(x(t),t)/p(t)) )
I am not sure of the implementation details, but in general here is one approach you could take. It involves two steps.

1. In the T(.) function, replace x with exp(t). Because exp(t) is its own derivative, it survives every application of diff, so the outer functions are differentiated with respect to x while exp(t) stays in place for the higher-order derivatives. After calling diff you should receive an expression that contains exp(t) (not tested, so hopefully that is the case); at that point each exp(t) stands for the time derivative of x. Now you only need to evaluate this expression at t, replacing exp(t) by the numeric derivative of x. I do not know if this can be done directly; if not, then using a separate variable y in place of exp(t), with the constraint y = exp(t), might do it, but you will need to work out the correct implementation yourself.

2. Substitute the derivative of x at the right t. If you do not have the value at that particular t, then do what I suggested in the comment: pre-calculate it beforehand at many points and interpolate in this step.

This approach relies on swapping x(t) for exp(t). If that does not work, I would do what I suggested in the comment: approximate x(t) by a known function and use that instead of x in your code.
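To make the idea concrete, here is a rough, untested sketch in MATLAB. Assumptions: q plays the role of time; t_val and xdot_val are hypothetical names for the numeric time and state derivative available inside your odefun; and the subs on exp(t) is exactly the step I have not verified:

syms t
qd_t = 90*pi/180 + 30*pi/180*cos(t);       % qd along the trajectory
p_t  = 79*pi/180*exp(-1.25*t) + pi/180;    % p along the trajectory
arg  = (exp(t) - qd_t) / p_t;              % e(x(t), t)/p(t) with x(t) -> exp(t)
Texpr = log(-(1 + arg)/arg);               % T applied to the argument
dT = diff(Texpr, t);                       % exp(t) survives diff, marking x terms
% Evaluate: replace the exp(t) marker by the numeric derivative, then t:
dT_num = subs(dT, exp(t), xdot_val);
dT_num = double(subs(dT_num, t, t_val));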
I would like to measure the goodness of fit to an exponential decay curve. I am using the lsqcurvefit MATLAB function, and someone suggested that I do a chi-square test.
I would like to use the MATLAB function chi2gof, but I am not sure how I would tell it that the data is being fitted to an exponential curve.
The chi2gof function tests the null hypothesis that a set of data, say X, is a random sample drawn from some specified distribution (such as the exponential distribution).
From your description in the question, it sounds like you want to see how well your data X fits an exponential decay function. I really must emphasize that this is completely different from testing whether X is a random sample drawn from the exponential distribution. If you use chi2gof for your stated purpose, you'll get meaningless results.
The usual approach for testing the goodness of fit for some data X to some function f is least squares, or some variant on least squares. Further, a least squares approach can be used to generate test statistics that test goodness-of-fit, many of which are distributed according to the chi-square distribution. I believe this is probably what your friend was referring to.
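(For reference, this is the kind of question chi2gof does answer; a hedged sketch, assuming the Statistics Toolbox, testing whether X was drawn from an exponential distribution whose parameter is estimated from the data:)

muhat = expfit(X);                                 % ML estimate of the mean
[h, p] = chi2gof(X, 'cdf', {@expcdf, muhat}, 'nparams', 1);
% h = 0: fail to reject "X ~ exponential"; p is the test's p-value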
EDIT: I have a few spare minutes, so here's something to get you started. DISCLAIMER: I've never worked specifically on this problem, so what follows may not be correct.

I'm going to assume you have a set of data x_n, n = 1, ..., N, and the corresponding timestamps for the data, t_n, n = 1, ..., N. Now, the exponential decay function is y_n = y_0 * e^{-b * t_n}. Note that taking the natural logarithm of both sides gives ln(y_n) = ln(y_0) - b * t_n. This suggests using OLS to estimate the linear model ln(x_n) = ln(x_0) - b * t_n + e_n. Nice! Because now we can test goodness-of-fit using the standard R^2 measure, which MATLAB will return in the stats structure if you use the regress function to perform the OLS.

Hope this helps. Again, I emphasize that I came up with this off the top of my head in a couple of minutes, so there may be good reasons why what I've suggested is a bad idea. Also, if you know the initial value of the process (i.e. x_0), then you may want to look into constrained least squares, where you bind the parameter ln(x_0) to its known value.
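A minimal sketch of that regression step (assuming x and t are column vectors, every x_n is positive, and the Statistics Toolbox is available for regress):

y = log(x);                               % linearize: ln(x_n) = ln(x_0) - b*t_n + e_n
X = [ones(size(t)), -t];                  % regressors: intercept and -t, so beta(2) = b
[beta, ~, ~, ~, stats] = regress(y, X);
x0_hat = exp(beta(1));                    % estimated initial value x_0
b_hat  = beta(2);                         % estimated decay rate b
R2     = stats(1);                        % R^2 goodness-of-fit measure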
I need to calculate the degenerate hypergeometric function of two variables, given by the integral formula

F(l, h; n; o, p) = Gamma(n)/(Gamma(l)*Gamma(n-l)) * ∫_0^1 x^{l-1} (1-x)^{n-l-1} (1-o*x)^{-h} e^{p*x} dx,

and I used MATLAB to take the numerical integral:
l = 0.067;
h = 0.933;
n = 1.067;
o = 0.2942;
p = 0.633;
func_F = @(x) (x.^(l-1)) .* ((1-x).^(n-l-1)) .* ((1-x.*o).^(-h)) .* exp(x.*p);
hyper = quadl(func_F, 0, 1, 1e-6);   % 1e-6 tolerance, to avoid accuracy warnings
disp(hyper);
The result I got is 54.9085, and I know this value is wrong! So please help me calculate the true value of the above integral, which has a singularity at 0.
I don't see where you have the Gamma functions in your code. Did you forget them, or did the value you were expecting already compensate for the lack of them?
Also, maybe you can state why "this value is wrong." Otherwise we are just guessing.
Edit: one more thing. As per the MATLAB help page on this function, it might be better to use quadgk. See the following quote (near the bottom of the page):
The quadgk function will integrate functions that are singular at finite endpoints if the singularities are not too strong. For example, it will integrate functions that behave at an endpoint c like log|x-c| or |x-c|^p for p >= -1/2. If the function is singular at points inside (a,b), write the integral as a sum of integrals over subintervals with the singular points as endpoints, compute them with quadgk, and add the results.
Bottom line is that the singularities near the endpoints (when your x gets near 0 or 1) might cause some problems.
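If it helps, here is a hedged sketch along those lines. Note that l - 1 = -0.933 here, which is a stronger endpoint singularity than the |x-c|^p, p >= -1/2 case the documentation guarantees, so the sketch also applies the substitution x = u^(1/l), one standard way to remove the singularity at 0 before integrating; the Gamma prefactor is included only in case your formula requires it:

% After x = u^(1/l), the factor x^(l-1) dx becomes (1/l) du, so the
% transformed integrand is bounded on [0, 1]:
func_G = @(u) (1/l) * ((1 - u.^(1/l)).^(n-l-1)) ...
              .* ((1 - o*u.^(1/l)).^(-h)) .* exp(p*u.^(1/l));
hyper = quadgk(func_G, 0, 1, 'AbsTol', 1e-12, 'RelTol', 1e-10);
% With the Beta-function normalization from the integral formula, if needed:
hyper_norm = gamma(n) / (gamma(l)*gamma(n-l)) * hyper;
disp(hyper_norm);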
I am attempting to use InteriorPointSolver to solve a standard Quadratic Programming problem with linear constraints (per the definition that can be found here). My problem has no linear term (the "c" vector in the definition). I am setting up the "Q" matrix by using SetCoefficient(Int32, Rational, Int32, Int32) across all my variables (passing the "goal" row as the vidRow). Am I correct in assuming that the InteriorPointSolver is minimizing the objective function as defined in the standard definition of the quadratic programming problem?
I ask this because when I calculate x^T * Q * x myself (using the optimal solution for x that I get from the solver), I get a value that is substantially different than what the solver claims the optimal objective function value is (via Statistics.Primal or GetValue(goal)). The only time my calculation and the solver's optimal value agree is when I use an identity matrix for Q. I am guessing that I am setting something up wrong or am not understanding exactly what function is being minimized.
I have consulted all the documentation I can find and cannot find a good explanation of exactly what function the interior point solver is minimizing. Can anyone guide me in the right direction?
As it turns out,
SetCoefficient(goal, 2.0, x, y)
has exactly the same effect as
SetCoefficient(goal, 2.0, y, x)
The effect of both calls is to set the coefficient of the x*y term in your objective function, and the second call simply overwrites the coefficient that you set in the first call. The solver does not treat the xy term as distinct from the yx term, and does not add the coefficients (as I had expected). So, if your goal is to have a 4xy term in your objective function, you must make the following call:
SetCoefficient(goal, 4.0, x, y)
instead of the two calls listed above.
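A quick numeric sanity check of that convention (a MATLAB scratch calculation, not Solver Foundation code; it assumes the objective follows the standard x^T*Q*x form from the linked definition, in which one solver coefficient c on the combined x*y term corresponds to off-diagonal entries of c/2 in a symmetric Q):

c = 4;                          % coefficient passed to SetCoefficient
xv = 3; yv = 5;                 % arbitrary test values for x and y
Q = [0, c/2; c/2, 0];           % symmetric Q encoding the single 4*x*y term
v = [xv; yv];
assert(abs(v'*Q*v - c*xv*yv) < 1e-12)   % x'*Q*x reproduces 4*x*y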
I'm trying to do some parameter estimation and want to choose parameter estimates that minimize the square error in a predicted equation over about 30 variables. If the equation were linear, I would just compute the 30 partial derivatives, set them all to zero, and use a linear-equation solver. But unfortunately the equation is nonlinear and so are its derivatives.
If the equation were over a single variable, I would just use Newton's method (also known as Newton-Raphson). The Web is rich in examples and code to implement Newton's method for functions of a single variable.
Given that I have about 30 variables, how can I program a numeric solution to this problem using Newton's method? I have the equation in closed form and can compute the first and second derivatives, but I don't know quite how to proceed from there. I have found a large number of treatments on the web, but they quickly get into heavy matrix notation. I've found something moderately helpful on Wikipedia, but I'm having trouble translating it into code.
Where I'm worried about breaking down is in the matrix algebra and matrix inversions. I can invert a matrix with a linear-equation solver but I'm worried about getting the right rows and columns, avoiding transposition errors, and so on.
To be quite concrete:
I want to work with tables mapping variables to their values. I can write a function of such a table that returns the square error given such a table as argument. I can also create functions that return a partial derivative with respect to any given variable.
I have a reasonable starting estimate for the values in the table, so I'm not worried about convergence.
I'm not sure how to write the loop that uses an estimate (a table with a value for each variable), the function, and a table of partial-derivative functions to produce a new estimate.
That last is what I'd like help with. Any direct help or pointers to good sources will be warmly appreciated.
Edit: Since I have the first and second derivatives in closed form, I would like to take advantage of them and avoid more slowly converging methods like simplex searches.
The Numerical Recipes link was most helpful. I wound up symbolically differentiating my error estimate to produce 30 partial derivatives, then used Newton's method to set them all to zero. Here are the highlights of the code:
__doc.findzero = [[function(functions, partials, point, [epsilon, steps]) returns table, boolean
Where
point is a table mapping variable names to real numbers
(a point in N-dimensional space)
functions is a list of functions, each of which takes a table like
point as an argument
partials is a list of tables; partials[i].x is the partial derivative
of functions[i] with respect to 'x'
epsilon is a number that says how close to zero we're trying to get
steps is max number of steps to take (defaults to infinity)
result is a table like 'point', boolean that says 'converged'
]]
-- See Numerical Recipes in C, Section 9.6 [http://www.nrbook.com/a/bookcpdf.php]
function findzero(functions, partials, point, epsilon, steps)
  epsilon = epsilon or 1.0e-6
  steps = steps or 1/0
  assert(#functions > 0)
  assert(table.numpairs(partials[1]) == #functions,
         'number of functions not equal to number of variables')
  local equations = { }
  repeat
    if Linf(functions, point) <= epsilon then
      return point, true
    end
    -- Linearize each function about the current point,
    -- F_i + sum_x (dF_i/dx) * delta_x = 0, then solve for the Newton step.
    for i = 1, #functions do
      local F = functions[i](point)
      local zero = F
      for x, partial in pairs(partials[i]) do
        zero = zero + lineq.var(x) * partial(point)
      end
      equations[i] = lineq.eqn(zero, 0)
    end
    local delta = table.map(lineq.tonumber, lineq.solve(equations, {}).answers)
    point = table.map(function(v, x) return v + delta[x] end, point)
    steps = steps - 1
  until steps <= 0
  return point, false
end
function Linf(functions, point)
  -- distance using L-infinity norm
  assert(#functions > 0)
  local max = 0
  for i = 1, #functions do
    local z = functions[i](point)
    max = math.max(max, math.abs(z))
  end
  return max
end
You might be able to find what you need at the Numerical Recipes in C web page. There is a free version available online. Here (PDF) is the chapter containing the Newton-Raphson method implemented in C. You may also want to look at what is available at Netlib (LINPACK, et al.).
As an alternative to Newton's method, the simplex method of Nelder and Mead is well suited to this problem and is referenced in Numerical Recipes in C.
Rob
You are asking for a function minimization algorithm. There are two main classes: local and global. Your problem is least squares, so both local and global minimization algorithms should converge to the same unique solution. Local minimization is far more efficient than global, so select that.
There are many local minimization algorithms but one particularly well suited to least squares problems is Levenberg-Marquardt. If you don't have such a solver to hand (e.g. from MINPACK) then you can probably get away with Newton's method:
x <- x - (hessian x)^-1 * grad x
where you compute the inverse matrix multiplied by a vector using a linear solver.
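In code form, that update might look like the following MATLAB sketch (grad_f, hess_f, x0, tol, and maxit are hypothetical names; note the linear solve H \ g in place of forming the inverse explicitly):

x = x0;                              % initial estimate
for k = 1:maxit
    g = grad_f(x);                   % gradient at the current point
    if norm(g, inf) < tol, break, end
    H = hess_f(x);                   % Hessian at the current point
    x = x - H \ g;                   % Newton step: x <- x - H^-1 * g
end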
Since you already have the partial derivatives, how about a general gradient-descent approach?
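For instance, a bare-bones fixed-step sketch (grad_f, x0, alpha, tol, and maxit are hypothetical names; in practice a line search would pick the step size):

x = x0;
for k = 1:maxit
    g = grad_f(x);                   % partial derivatives at x
    if norm(g, inf) < tol, break, end
    x = x - alpha * g;               % move downhill along the negative gradient
end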
Maybe you think you have a good-enough solution, but for me, the easiest way to think about this is to understand it in the 1-variable case first, and then extend it to the matrix case.
In the 1-variable case, if you divide the first derivative by the second derivative, you get the (negative) step size to your next trial point, e.g. -V/A.
In the N-variable case, the first derivative is a vector and the second derivative is a matrix (the Hessian). You multiply the derivative vector by the inverse of the second derivative, and the result is the negative step-vector to your next trial point, e.g. -V*(1/A)
I assume you can get the 2nd-derivative Hessian matrix. You will need a routine to invert it. There are plenty of these around in various linear algebra packages, and they are quite fast.
(For readers who are not familiar with this idea, suppose the two variables are x and y, and the surface is v(x,y). Then the first derivative is the vector:
V = [ dv/dx, dv/dy ]
and the second derivative is the matrix:
A = [ dV/dx ]
    [ dV/dy ]
or:
A = [ d(dv/dx)/dx, d(dv/dy)/dx ]
    [ d(dv/dx)/dy, d(dv/dy)/dy ]
or:
A = [ d^2v/dx^2,  d^2v/dydx ]
    [ d^2v/dxdy,  d^2v/dy^2  ]
which is symmetric.)
If the surface is parabolic (constant 2nd derivative), it will get to the answer in one step. On the other hand, if the 2nd derivative is far from constant, you could encounter oscillation. Cutting each step in half (or by some other fraction) should make it stable.
If N == 1, you'll see that it does the same thing as in the 1-variable case.
Good luck.
Added: You wanted code:
double X[N];
// Set X to initial estimate
while (!done) {
    double V[N];     // 1st derivative "velocity" vector
    double A[N*N];   // 2nd derivative "acceleration" matrix
    double A1[N*N];  // inverse of A
    double S[N];     // step vector
    CalculateFirstDerivative(V, X);
    CalculateSecondDerivative(A, X);
    // A1 = 1/A
    GetMatrixInverse(A, A1);
    // S = V*(1/A)
    VectorTimesMatrix(V, A1, S);
    // if S is small enough, stop
    // X -= S
    VectorMinusVector(X, S, X);
}
My suggestion is to use a stochastic optimizer, e.g., a particle swarm method.
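For what it's worth, a minimal particle swarm sketch in MATLAB (f is a hypothetical objective taking a row vector of the 30 parameters; the inertia and acceleration constants are common textbook defaults, not tuned values):

nP = 30; nD = 30;                       % swarm size, problem dimension
w = 0.7; c1 = 1.5; c2 = 1.5;            % inertia, cognitive, social weights
P = randn(nP, nD);                      % particle positions
V = zeros(nP, nD);                      % particle velocities
pbest = P; pval = inf(nP, 1);           % per-particle best positions/values
for k = 1:200
    for i = 1:nP
        fi = f(P(i,:));                 % evaluate the squared-error objective
        if fi < pval(i), pval(i) = fi; pbest(i,:) = P(i,:); end
    end
    [~, gi] = min(pval);                % index of the global best particle
    G = repmat(pbest(gi,:), nP, 1);     % broadcast the global best
    V = w*V + c1*rand(nP,nD).*(pbest - P) + c2*rand(nP,nD).*(G - P);
    P = P + V;                          % move the swarm
end
best = pbest(gi,:);                     % best parameter estimate found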