Calculate hypergeometric function - MATLAB

I need to calculate the degenerate hypergeometric function of two variables, given by the integral formula:

Phi1(l, h, n; o, p) = Gamma(n)/(Gamma(l)*Gamma(n-l))
* Integral from 0 to 1 of x^(l-1) * (1-x)^(n-l-1) * (1-o*x)^(-h) * exp(p*x) dx

and I used MATLAB to take the numerical integral:
l = 0.067;
h = 0.933;
n = 1.067;
o = 0.2942;
p = 0.633;
func_F = @(x) (x.^(l-1)).*((1-x).^(n-l-1)).*((1-x.*o).^(-h)).*exp(x.*p);
hyper = quadl(func_F, 0, 1, 1e-6); % pass a tolerance to avoid accuracy warnings
disp(hyper);
The result I got is 54.9085, and I know this value is wrong! So please help me to calculate the true value of the above integral, which has a singularity at 0.

I don't see where you have the Gamma functions in your code. Did you forget them, or did the value you were expecting already compensate for the lack of them?
Also, maybe you can state why "this value is wrong." Otherwise we are just guessing.
Edit: one more thing, as per the Matlab help page on this function, it might be better to use quadgk. See the following quote (near the bottom of the page):
The quadgk function will integrate functions that are singular at
finite endpoints if the singularities are not too strong. For example,
it will integrate functions that behave at an endpoint c like log|x-c|
or |x-c|^p for p >= -1/2. If the function is singular at points inside
(a,b), write the integral as a sum of integrals over subintervals with
the singular points as endpoints, compute them with quadgk, and add
the results.
Bottom line is that the singularities near the endpoints (when your x gets near 0 or 1) might cause some problems.
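Putting those two fixes together, here is a sketch (mine, not guaranteed): it restores the Gamma prefactor from the integral representation in the question and removes the strong x^(l-1) endpoint singularity with the substitution x = u^(1/l), under which x^(l-1) dx becomes (1/l) du, before calling quadgk:
l = 0.067; h = 0.933; n = 1.067; o = 0.2942; p = 0.633;
% Substitute x = u^(1/l): x^(l-1)*dx = (1/l)*du, so the transformed
% integrand is bounded at the left endpoint.
g = @(u) (1/l) * ((1 - u.^(1/l)).^(n-l-1)) ...
    .* ((1 - o*u.^(1/l)).^(-h)) .* exp(p*u.^(1/l));
I = quadgk(g, 0, 1, 'AbsTol', 1e-10);
% Restore the Gamma prefactor from the integral representation.
hyper = gamma(n)/(gamma(l)*gamma(n-l)) * I;
disp(hyper);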

Related

Solve trig equation over a bounded interval

Firstly, I'm sure a simple answer exists for this; maybe I'm just not wording it right when searching for an answer online.
I'm trying to solve an equation that looks like this:
a*x*cot(a*x) == b
Where a and b are constants. Using
solve(a*x*cot(a*x) == b, x)
I'm getting a result I know is wrong (with the values I'm using for the constants, I'm getting like -227, and it should be something around +160.) I plotted it up in Mathematica as two separate functions, and they do cross each other right around there, but since the cot part is periodic, they do so many times.
I want to constrain Matlab's search for the solution to a specific interval, such as 0 to 200; how do I do that?
I'm pretty new to Matlab (rather more experienced in Mathematica).
You can specify the bounds on x using fzero, with only two requirements:
1. The function must be in a "residual" form (i.e., r(x) = 0).
2. The residual values at the two bounds must have opposite signs (this guarantees that a root exists within the interval for continuous functions).
So we re-write the function in residual form:
r = @(x) a*x*cot(a*x) - b;
define the interval
% These are just illustrative numbers; the actual bounds should come
% from a graph that ensures r has different signs at xL and xR
xL = 150;
xR = 170;
and solve
x = fzero(r,[xL,xR]);
I see you were trying to use the Symbolic Toolbox for a solution, but since the equation is a non-linear combination of a polynomial and a trigonometric function, there is more than likely no closed-form solution. So I deferred to a non-linear, numeric root-finder.
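Putting the pieces together, a minimal sketch with placeholder constants (a, b, and the bracket here are illustrative, not from the question; take yours from the plot):
a = 0.01; b = -0.1;          % placeholder constants
r = @(x) a*x.*cot(a*x) - b;  % residual form: r(x) = 0
% Bracket chosen so that r changes sign on [xL, xR]
xL = 150; xR = 170;
x = fzero(r, [xL, xR])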
I tried some values and it seems solve returns a numeric solution. This is the documented behaviour if no analytic solution is found.
In this case, you may directly call the numeric solver with a matching start value
vpasolve(a*x*cot(a*x) == b, x,160)
It's not exactly what you asked for, but using your reading from the plot as a start value should do it.
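For example, with the same placeholder constants as in the fzero sketch above:
syms x
a = 0.01; b = -0.1;  % placeholder constants, not from the question
vpasolve(a*x*cot(a*x) == b, x, 160)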

MATLAB complicated integration

I have a function defined by an integral that has no closed-form expression.
Specifically, the function is f(y) = h(y) + integral(@(x) exp(-x-1/x),0,y), where h(y) is a simple function.
Matlab numerically computes f(y) well, but I want to compute the following function.
g(w)=w*integral(1-f(y).^(1/w),0,inf) where w is a real number in [0,1].
The problem for computing g(w) is handling f(y).^(1/w) numerically.
How can I calculate g(w) with MATLAB? Is it impossible?
Expressions containing e^(-1/x) are generally difficult to compute near x = 0. Actually, I am surprised that Matlab computes f(y) well in the first place. I'd suggest trying to compute g(w)=w*integral(1-f(y).^(1/w),epsilon,inf) for epsilon greater than zero, then gradually decreasing epsilon toward 0 to check if you can get numerical convergence at all. Convergence is certainly not guaranteed!
You can calculate g(w) using the functions you have, but you need to add the 'ArrayValued',true name-value pair.
The option marks the integrand as array-valued, so integral evaluates it one scalar point at a time: that lets the nested integral call inside f receive a scalar upper limit (it cannot take a vector of y values), and it even lets you pass a vector-valued w to g.
f = @(y) h(y) + integral(@(x) exp(-x-1./x), 0, y, 'ArrayValued', true);
g = @(w) w .* integral(@(y) 1 - f(y).^(1./w), 0, Inf, 'ArrayValued', true);
At least, that works on my R2014b installation.
Note: While h(y) may be simple, if its integral over the positive real line does not converge, g(w) will more than likely not converge either (I don't think I need to qualify that, but I'll hedge my bets).
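As a usage sketch, here is a self-contained example with a hypothetical h, chosen deliberately so that f(y) -> 1 as y -> Inf; otherwise the outer integrand 1 - f(y).^(1/w) does not decay and g(w) cannot converge:
% C is the value of the inner integral over all of [0, Inf);
% this placeholder h makes f(y) -> 1 as y -> Inf.
C = integral(@(x) exp(-x - 1./x), 0, Inf);
h = @(y) (1 - C) .* (1 - exp(-y));  % hypothetical h, for illustration only
f = @(y) h(y) + integral(@(x) exp(-x - 1./x), 0, y, 'ArrayValued', true);
g = @(w) w .* integral(@(y) 1 - f(y).^(1./w), 0, Inf, 'ArrayValued', true);
g(0.5)  % evaluate at a sample w in [0,1]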

How to have square wave in Matlab symbolic equation

My project requires me to use MATLAB to create a symbolic equation containing a square wave.
I tried to write it like this, but to no avail:
syms t;
a=square(t);
Input arguments must be 'double'.
What can I do to solve this problem? Thanks in advance for any help offered.
Here are a couple of general options using the floor and sign functions:
f=@(A,T,x0,x) A*sign(sin((2*pi*(x-x0))/T));
f=@(A,T,x0,x) A*(-1).^(floor(2*(x-x0)/T));
So for example using the floor function:
syms x
sqr=2*floor(x)-floor(2*x)+1;
ezplot(sqr, [-2, 2])
Here is something to get you started. Recall that we can express a square wave as a Fourier series expansion. I won't bother you with the details, but you can represent any periodic function as a summation of cosines and sines (à la @RTL). Without going into the derivation, the closed-form equation for a square wave of frequency f, with a peak-to-peak amplitude of 2 (i.e. it goes from -1 to 1), is:

x(t) = (4/pi) * sum_{k=1..inf} sin(2*pi*(2k - 1)*f*t) / (2k - 1)

Recall that the frequency is the number of cycles per second. Therefore, f = 1 means that we repeat our square wave every second.
Basically, what you have to do is code up that equation... but how in the world would you do that? Welcome to the world of the Symbolic Math Toolbox. What we will need to do beforehand is declare what our frequency is. Let's assume f = 1 for now. With the Symbolic Math Toolbox, you can define what are considered mathematical variables within MATLAB. After that, MATLAB has a whole suite of tools that you can use to evaluate functions that rely on these variables. A good example would be using this to define a closed-form expression for a function f(x). You can then use diff to differentiate and see what the derivative is. Try it yourself:
syms x;
f = x^4;
df = diff(f);
syms denotes that you are declaring anything coming after the statement to be a mathematical variable. In this case, x is just that. df should now give you 4x^3. Cool eh? In any case, let's get back to our problem at hand. We see that there are in fact two variables in the periodic square function that need to be defined: t and k. Once we do this, we need to create our function that is inside the summation first. We can do this by:
syms t k;
f = 1; %//Define frequency here
funcSum = (sin(2*pi*(2*k - 1)*f*t) / (2*k - 1));
That settles that problem... now how do we encapsulate this into an infinite sum!? The sum command in MATLAB assumes that we have a finite array to sum over. If you want to symbolically sum over a function, we must use the symsum function. We usually call it like this:
funcOut = symsum(func, v, start, finish);
func is the function we wish to sum over. v is the summation variable that we wish to use to index in the sum. In our case, that's k. start is the beginning of the sum, which is 1 in our case, and finish is where we wish to finish up our summation. In our case, that's infinity, and so MATLAB has a special keyword called Inf to denote that. Therefore:
xsquare = (4/pi) * symsum(funcSum, k, 1, Inf);
xsquare now contains your representation of a square wave defined in terms of the Symbolic Math Toolbox. Now, if you want to plot your square wave to see if we have this right, we can do the following. Let's go between -3 <= t <= 3. As such, you would do something like this:
tVector = -3 : 0.01 : 3; %// Choose a step size of 0.01
yout = subs(xsquare, t, tVector);
You will notice though that there will be some values that are NaN. The reason why is because right at a multiple of the period (T = 1, 2, 3, ...), the behaviour is undefined as the derivative right at these points is undefined. As such, we can fill this in using either 1 or -1. Let's just choose 1 for now. Also, because the Fourier Series is generally a complex-valued function, and the square-wave is purely real, the output of this function will actually give you a complex-valued vector. As such, simply chop off the complex parts to get the real parts only:
yout = real(double(yout)); %// To cast back to double.
yout(isnan(yout)) = 1;
plot(tVector, yout);
You'll get a plot of a square wave going between -1 and 1 over -3 <= t <= 3.
You could also do this the ezplot way by doing: ezplot(xsquare). However, you'll see that at the points where the wave repeats itself, we get NaN values and so there is a disconnect between the high peak and low peak.
Note:
Natan's solution is much more elegant. I was still writing this post by the time he put something up. Either way, I wanted to give a more signal processing perspective to how to do this. Go Fourier!
A Fourier series for the square wave of unit amplitude is:
alpha + (2/Pi)*sum(sin(n*Pi*alpha)/n * cos(n*theta), n = 1..infinity)
Here is a handy trick:
cos(n*theta) = Re( exp( I * n * theta))
and
1/n*exp(I*n*theta) = I*anti-derivative(exp(I*n*theta),theta)
Put it all together: pull the anti-derivative ( or integral ) operator out of the sum, and you get a geometric series. Then integrate and finally take the real part.
Result:
squarewave = alpha + (1/Pi)*Re(I*ln((1 - exp(I*(theta + Pi*alpha)))/(1 - exp(I*(theta - Pi*alpha)))))
I tried it in Maple and it works great! (probably not very practical, though)
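If you want to sanity-check that closed form numerically, here is a quick MATLAB sketch (alpha is the duty-cycle parameter from the series; alpha = 0.5 gives the symmetric wave):
alpha = 0.5;  % duty-cycle parameter
theta = linspace(-2*pi, 2*pi, 1000);
sw = alpha + (1/pi) * real(1i*log((1 - exp(1i*(theta + pi*alpha))) ...
    ./ (1 - exp(1i*(theta - pi*alpha)))));
plot(theta, sw);  % square wave stepping between 0 and 1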

How to overcome singularities in numerical integration (in Matlab or Mathematica)

I want to numerically integrate the following:

Integral over x from -Pi to Pi and over y from -Pi to Pi of
Cos[(x+y)/2]^2 * (n[x] - n[y]) / (eps[x] - eps[y])^2

where

eps[x] = 2*(a - b*Cos[x]) and n[x] = 1/(1 + Exp[beta*eps[x]])

and a, b and beta are constants which, for simplicity, can all be set to 1.
Neither Matlab using dblquad, nor Mathematica using NIntegrate can deal with the singularity created by the denominator. Since it's a double integral, I can't specify where the singularity is in Mathematica.
I'm sure that it is not infinite, since this integral arises in perturbation theory, and a closely related version of it (without one of the factors in the integrand) has been found before (just not by me, so I don't know how it's done).
Any ideas?
(1) It would be helpful if you provide the explicit code you use. That way others (read: me) need not code it up separately.
(2) If the integral exists, it has to be zero. This is because swapping x and y negates the n(x)-n(y) factor while keeping the rest of the integrand the same, yet the symmetry of the integration range means the swap amounts to just renaming your variables, so the integral must stay the same. An integral equal to its own negative must be zero.
(3) Here is some code that shows it will be zero, at least if we zero out the singular part and a small band around it.
a = 1;
b = 1;
beta = 1;
eps[x_] := 2*(a-b*Cos[x])
n[x_] := 1/(1+Exp[beta*eps[x]])
delta = .001;
pw[x_,y_] := Piecewise[{{1,Abs[Abs[x]-Abs[y]]>delta}}, 0]
We add 1 to the integrand just to avoid accuracy issues with results that are near zero.
NIntegrate[1+Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2*pw[Cos[x],Cos[y]],
{x,-Pi,Pi}, {y,-Pi,Pi}] / (4*Pi^2)
I get the result below.
NIntegrate::slwcon:
Numerical integration converging too slowly; suspect one of the following:
singularity, value of the integration is 0, highly oscillatory integrand,
or WorkingPrecision too small.
NIntegrate::eincr:
The global error of the strategy GlobalAdaptive has increased more than
2000 times. The global error is expected to decrease monotonically after a
number of integrand evaluations. Suspect one of the following: the
working precision is insufficient for the specified precision goal; the
integrand is highly oscillatory or it is not a (piecewise) smooth
function; or the true value of the integral is 0. Increasing the value of
the GlobalAdaptive option MaxErrorIncreases might lead to a convergent
numerical integration. NIntegrate obtained 39.4791 and 0.459541
for the integral and error estimates.
Out[24]= 1.00002
This is a good indication that the unadulterated result will be zero.
(4) Substituting cx for cos(x) and cy for cos(y), and removing extraneous factors for purposes of convergence assessment, gives the expression below.
((1 + E^(2*(1 - cx)))^(-1) - (1 + E^(2*(1 - cy)))^(-1))/
(2*(1 - cx) - 2*(1 - cy))^2
A series expansion in cy, centered at cx, indicates a pole of order 1. So it does appear to be a singular integral.
Daniel Lichtblau
The integral looks like a Cauchy Principal Value type integral (i.e. it has a strong singularity). That's why you can't apply standard quadrature techniques.
Have you tried PrincipalValue->True in Mathematica's Integrate?
In addition to Daniel's observation about integrating an odd integrand over a symmetric range (so that symmetry indicates the result should be zero), you can also do the following to understand its convergence better (I'll use LaTeX notation; writing this out with pen and paper should make it easier to read; it took a lot longer to write than to do, it's not that complicated):
First, \epsilon(x) - \epsilon(y) \propto \cos(y) - \cos(x) = 2\sin(\xi_+)\sin(\xi_-), where I have defined \xi_\pm = (x \pm y)/2 (so I've rotated the axes by \pi/4). The region of integration is then \xi_+ between -\pi and \pi, and \xi_- between \pm(\pi - |\xi_+|). The integrand then takes the form \frac{1}{\sin^2(\xi_-)\sin^2(\xi_+)} times terms with no divergences. So, evidently, there are second-order poles, and this isn't convergent as presented.
Perhaps you could email the persons who obtained an answer with the cos term and ask what precisely it is they did. Perhaps there's a physical regularisation procedure being employed. Or you could have given more information on the physical origin of this (some sort of second order perturbation theory for some sort of bosonic system?), had that not been off-topic here...
Maybe I am missing something here, but the integrand
f[x,y] = Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2, with n[x] = 1/(1+Exp[beta*eps[x]]) and eps[x] = 2(a-b*Cos[x]), is antisymmetric under exchanging x and y: f[y,x] = -f[x,y], since the factor n[x]-n[y] changes sign while the other factors are unchanged.
Therefore its integral over any exchange-symmetric domain [-u,u]x[-u,u], and in particular over [-Pi,Pi]x[-Pi,Pi], is zero. No numerical integration seems to be needed here. The result is just zero.
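For the MATLAB side of the question, here is a rough analogue of Daniel's numerical check (a sketch, not his code): it masks out a small band around the singular set and, like his version, adds 1 to the integrand, so a result near 1 indicates the remaining integral is near zero.
a = 1; b = 1; beta = 1; delta = 1e-3;
epsf = @(x) 2*(a - b*cos(x));
nf = @(x) 1./(1 + exp(beta*epsf(x)));
mask = @(x,y) abs(abs(cos(x)) - abs(cos(y))) > delta;  % 1 away from the singular band
% Adding ~mask(x,y) to the denominator avoids 0/0 at masked points,
% where the mask factor forces the integrand to 0 anyway.
f = @(x,y) mask(x,y) .* cos((x+y)/2).^2 .* (nf(x) - nf(y)) ...
    ./ ((epsf(x) - epsf(y)).^2 + ~mask(x,y));
integral2(@(x,y) 1 + f(x,y), -pi, pi, -pi, pi) / (4*pi^2)  % expect ~1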

How to find minimum of nonlinear, multivariate function using Newton's method (code not linear algebra)

I'm trying to do some parameter estimation and want to choose parameter estimates that minimize the square error in a predicted equation over about 30 variables. If the equation were linear, I would just compute the 30 partial derivatives, set them all to zero, and use a linear-equation solver. But unfortunately the equation is nonlinear and so are its derivatives.
If the equation were over a single variable, I would just use Newton's method (also known as Newton-Raphson). The Web is rich in examples and code to implement Newton's method for functions of a single variable.
Given that I have about 30 variables, how can I program a numeric solution to this problem using Newton's method? I have the equation in closed form and can compute the first and second derivatives, but I don't know quite how to proceed from there. I have found a large number of treatments on the web, but they quickly get into heavy matrix notation. I've found something moderately helpful on Wikipedia, but I'm having trouble translating it into code.
Where I'm worried about breaking down is in the matrix algebra and matrix inversions. I can invert a matrix with a linear-equation solver but I'm worried about getting the right rows and columns, avoiding transposition errors, and so on.
To be quite concrete:
I want to work with tables mapping variables to their values. I can write a function that, given such a table, returns the square error. I can also create functions that return the partial derivative with respect to any given variable.
I have a reasonable starting estimate for the values in the table, so I'm not worried about convergence.
I'm not sure how to write the loop that uses an estimate (a table of values, one per variable), the function, and a table of partial-derivative functions to produce a new estimate.
That last is what I'd like help with. Any direct help or pointers to good sources will be warmly appreciated.
Edit: Since I have the first and second derivatives in closed form, I would like to take advantage of them and avoid more slowly converging methods like simplex searches.
The Numerical Recipes link was most helpful. I wound up symbolically differentiating my error estimate to produce 30 partial derivatives, then used Newton's method to set them all to zero. Here are the highlights of the code:
__doc.findzero = [[function(functions, partials, point, [epsilon, steps]) returns table, boolean
Where
  point is a table mapping variable names to real numbers
  (a point in N-dimensional space)
  functions is a list of functions, each of which takes a table like
  point as an argument
  partials is a list of tables; partials[i].x is the partial derivative
  of functions[i] with respect to 'x'
  epsilon is a number that says how close to zero we're trying to get
  steps is the max number of steps to take (defaults to infinity)
  result is a table like 'point', plus a boolean that says 'converged'
]]
-- See Numerical Recipes in C, Section 9.6 [http://www.nrbook.com/a/bookcpdf.php]
function findzero(functions, partials, point, epsilon, steps)
  epsilon = epsilon or 1.0e-6
  steps = steps or 1/0
  assert(#functions > 0)
  assert(table.numpairs(partials[1]) == #functions,
         'number of functions not equal to number of variables')
  local equations = { }
  repeat
    if Linf(functions, point) <= epsilon then
      return point, true
    end
    for i = 1, #functions do
      local F = functions[i](point)
      local zero = F
      for x, partial in pairs(partials[i]) do
        zero = zero + lineq.var(x) * partial(point)
      end
      equations[i] = lineq.eqn(zero, 0)
    end
    local delta = table.map(lineq.tonumber, lineq.solve(equations, {}).answers)
    point = table.map(function(v, x) return v + delta[x] end, point)
    steps = steps - 1
  until steps <= 0
  return point, false
end
function Linf(functions, point)
  -- distance using L-infinity norm
  assert(#functions > 0)
  local max = 0
  for i = 1, #functions do
    local z = functions[i](point)
    max = math.max(max, math.abs(z))
  end
  return max
end
You might be able to find what you need at the Numerical Recipes in C web page. There is a free version available online. Here (PDF) is the chapter containing the Newton-Raphson method implemented in C. You may also want to look at what is available at Netlib (LINPACK, et al.).
As an alternative to using Newton's method, the simplex method of Nelder and Mead is well suited to this problem and is referenced in Numerical Recipes in C.
Rob
You are asking for a function minimization algorithm. There are two main classes: local and global. Your problem is least squares, so both local and global minimization algorithms should converge to the same unique solution. Local minimization is far more efficient than global, so select that.
There are many local minimization algorithms but one particularly well suited to least squares problems is Levenberg-Marquardt. If you don't have such a solver to hand (e.g. from MINPACK) then you can probably get away with Newton's method:
x <- x - (hessian x)^-1 * grad x
where you compute the inverse matrix multiplied by a vector using a linear solver.
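For instance, a minimal MATLAB sketch of that iteration (gradfun and hessfun are hypothetical handles you supply for the closed-form gradient and Hessian; x0 is your starting estimate):
x = x0;  % initial estimate (column vector)
for iter = 1:100
    g = gradfun(x);  % gradient, n-by-1 (hypothetical user-supplied handle)
    H = hessfun(x);  % Hessian, n-by-n (hypothetical user-supplied handle)
    s = H \ g;       % solve H*s = g with a linear solver; avoids forming inv(H)
    x = x - s;       % Newton update
    if norm(s, Inf) < 1e-8  % stop when the step is tiny
        break
    end
end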
Since you already have the partial derivatives, how about a general gradient-descent approach?
Maybe you think you have a good-enough solution, but for me, the easiest way to think about this is to understand it in the 1-variable case first, and then extend it to the matrix case.
In the 1-variable case, if you divide the first derivative by the second derivative, you get the (negative) step size to your next trial point, i.e. -V/A.
In the N-variable case, the first derivative is a vector and the second derivative is a matrix (the Hessian). You multiply the derivative vector by the inverse of the second derivative, and the result is the negative step-vector to your next trial point, i.e. -V*(1/A).
I assume you can get the 2nd-derivative Hessian matrix. You will need a routine to invert it. There are plenty of these around in various linear algebra packages, and they are quite fast.
(For readers who are not familiar with this idea, suppose the two variables are x and y, and the surface is v(x,y). Then the first derivative is the vector:
V = [ dv/dx, dv/dy ]
and the second derivative is the matrix:
A = [ dV/dx ]
    [ dV/dy ]
or:
A = [ d(dv/dx)/dx, d(dv/dy)/dx ]
    [ d(dv/dx)/dy, d(dv/dy)/dy ]
or:
A = [ d^2v/dx^2,  d^2v/dydx ]
    [ d^2v/dxdy,  d^2v/dy^2 ]
which is symmetric.)
If the surface is parabolic (constant 2nd derivative) it will get to the answer in one step. On the other hand, if the 2nd derivative is far from constant, you could encounter oscillation. Cutting each step in half (or by some other fraction) should make it stable.
If N == 1, you'll see that it does the same thing as in the 1-variable case.
Good luck.
Added: You wanted code:
double X[N];
// Set X to initial estimate
while (!done) {           // pseudocode: loop until the step S is small enough
    double V[N];          // 1st derivative "velocity" vector
    double A[N*N];        // 2nd derivative "acceleration" matrix
    double A1[N*N];       // inverse of A
    double S[N];          // step vector
    CalculateFirstDerivative(V, X);
    CalculateSecondDerivative(A, X);
    // A1 = 1/A
    GetMatrixInverse(A, A1);
    // S = V*(1/A)
    VectorTimesMatrix(V, A1, S);
    // if S is small enough, stop
    // X -= S
    VectorMinusVector(X, S, X);
}
My suggestion is to use a stochastic optimizer, e.g., a particle swarm method.