Matlab double integral with heavily suppressed exponentials

I have recently been trying to calculate the double integral of the function
fun = @(v,x)(10^4)*0.648*(1+v*0.001).*( exp(-2.83./( 10^-8+(sqrt(1+2*v*0.001)).*(x.^2)) ) -1).*(exp(-(v.^2)*0.33))
over the range (-1000,1000) for v and (0,a) for x, where a is either a very large number or infinity. What I have found is that in the case a = inf the value seems to be decently accurate (the problem reduces to a single integral, which is less cumbersome to evaluate numerically), but if I integrate from 0 to 10^9 and from 10^9 to infinity, the two pieces don't sum up to the correct value, with the latter one being zero. What I am really interested in is the integral from 0 to 10^9, but these results make me wonder whether I can trust it at all.
In what I have done, I also had to use a large prefactor (10^200) in front of the function to "compensate" for the small numbers; otherwise the results were all nonsense. I have tried to use vpa, but with no success. What am I doing wrong?
Rob

Looks like your problem has to do with the different methods Matlab uses for different cases and the big numbers you are handling.
We can look at your function with ezsurf just to get an idea of how it behaves.
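For instance (a quick sketch; the plotting window below is just a guess that happens to show the interesting region):
fun = @(v,x)(10^4)*0.648*(1+v*0.001).*( exp(-2.83./( 10^-8+(sqrt(1+2*v*0.001)).*(x.^2)) ) -1).*(exp(-(v.^2)*0.33));
% surface plot over a modest window; ezsurf accepts a two-variable handle and [vmin vmax xmin xmax]
ezsurf(fun, [-100 100 0 100])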
So hint 1 is that the value is going to be negative; let's integrate over small limits to get an approximation of how large it will be.
integral2(fun,-100,100,0,100)
%ans =
% -5.9050e+04
And assuming that the function tends to zero, we know the final value should be in that neighborhood.
Now hint 2:
integral2(fun,-1000,1000,0,100)
%ans =
% -2.5613e-29
This doesn't make much sense: by increasing the range of the limits, the integral basically became zero. Let's check the documentation of integral2:
'Method' — Integration method
'auto' (default) | 'tiled' | 'iterated'
Integration method, specified as the comma-separated pair consisting of 'Method' and one of the methods described below.
Integration Method / Description:
'auto': For most cases, integral2 uses the 'tiled' method. It uses the 'iterated' method when any of the integration limits are infinite. This is the default method.
'tiled': integral2 transforms the region of integration to a rectangular shape and subdivides it into smaller rectangular regions as needed. The integration limits must be finite.
'iterated': integral2 calls integral to perform an iterated integral. The outer integral is evaluated over xmin ≤ x ≤ xmax. The inner integral is evaluated over ymin(x) ≤ y ≤ ymax(x). The integration limits can be infinite.
OK, so if we don't specify a method, it will use 'tiled' if the limits are finite and 'iterated' if they are infinite.
Could it be that if the range is too big, the tiles created by the 'tiled' method are too big to accurately calculate the integral? If that is the case, then 'iterated' should not have that problem. Let's check:
integral2(fun,-1000,1000,0,100,'Method','iterated')
%ans =
% -5.9050e+04
Interesting, looks like we are onto something. Let's try the original problem:
integral2(fun,-1000,1000,0,inf)
%ans =
% -5.9616e+04
integral2(fun,-1000,1000,0,10^9,'Method','tiled')
%ans =
% -2.1502e-33
integral2(fun,-1000,1000,0,10^9,'Method','iterated')
%ans =
% -5.9616e+04
integral2(fun,-1000,1000,10^9,inf)
%ans =
% 0
That looks better. So it looks like the 'tiled' method is the problem with your function, because of its characteristics and the size of the integration range. As long as you use 'iterated' you should be OK.
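Putting that together for the piece you actually care about (a sketch; given the numbers above, the two sub-ranges should now add up to the a = inf result, but do verify this on your side):
fun = @(v,x)(10^4)*0.648*(1+v*0.001).*( exp(-2.83./( 10^-8+(sqrt(1+2*v*0.001)).*(x.^2)) ) -1).*(exp(-(v.^2)*0.33));
I1 = integral2(fun,-1000,1000,0,10^9,'Method','iterated');   % the piece of interest
I2 = integral2(fun,-1000,1000,10^9,Inf,'Method','iterated'); % the tail
Itot = integral2(fun,-1000,1000,0,Inf);                      % reference; 'auto' already picks 'iterated' for infinite limits
% I1 + I2 and Itot should agree to within the integration tolerances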

Related

MATLAB - negative values go to NaN in a symmetric function

Can someone please explain why the following symmetric function cannot pass a certain limit of negative values?
D = 0.1; l = 4;
c = @(x,v) (v/D).*exp(-v*x/D)./(1-exp(-v*l/D));
v_vec = -25:0.01:25;
figure(2)
hold on
plot(v_vec,c(l,v_vec),'b')
plot(v_vec,c(0,v_vec),'r')
Notice in the figure where the blue line cuts off; this is where I get Inf/NaN values.
It seems that Matlab is trying to compute a result that is too large, outputs +inf, and then operates on that, which yields +/- inf and NaNs.
For instance, at v = -25, part of the function computes exp(-(-25)*4/0.1), which is exp(1000), and that overflows to +Inf (it is larger than the largest representable double-precision float).
You can potentially solve that problem by rewriting your function to avoid operating on such very large (or very small) numbers, say by reorganising the fraction containing the exp() terms.
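A minimal sketch of that idea for the x = l case (the blue curve): multiplying numerator and denominator by exp(v*l/D) gives an algebraically equivalent form that never evaluates exp(1000).
D = 0.1; l = 4;
% c(l,v) = (v/D)*exp(-v*l/D)/(1-exp(-v*l/D)) = (v/D)/(exp(v*l/D)-1)
c_l = @(v) (v/D)./(exp(v*l/D)-1);   % no overflow for large negative v
v_vec = -25:0.01:25;
plot(v_vec, c_l(v_vec), 'b')        % finite over the whole range (apart from the removable point v = 0)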
I encountered the same hurdle using exp() with arguments that trigger overflow. Sometimes it is difficult to trace back numeric imprecision or convergence errors. In principle the exp() in the function definition only creates intermediate issues, since its purpose here is to act as a transition function; the intention, I guess, was to provide a continuous function.
My solution to this problem is to divide the argument into regions and provide an approximation function in each region: in your case, zero for negative x and proportional to x for positive x, with the original function used in between. Care should be taken to match the approximations at the borders of the regions, as well as the number of continuous derivatives, which is important for convergence in loops.
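A minimal sketch of that region-splitting idea, here using the asymptotic limit of the original expression in the region where exp() would overflow (the threshold of 700 is an illustrative choice, just below the point where exp overflows in double precision):
function y = c_piecewise(x, v)
% Evaluate c(x,v) = (v/D)*exp(-v*x/D)/(1-exp(-v*l/D)) without overflow by
% switching to its asymptotic form where the exponent would exceed ~700.
D = 0.1; l = 4;
arg = -v*l/D;                               % exponent that can overflow
y = zeros(size(v));
big = arg > 700;                            % there, 1-exp(arg) is effectively -exp(arg)
y(big)  = -(v(big)/D).*exp(v(big).*(l-x)/D);
y(~big) = (v(~big)/D).*exp(-v(~big)*x/D)./(1-exp(-v(~big)*l/D));
end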

Optimization algorithm in Matlab

I want to calculate the maximum of the Cross-in-Tray function, which is
f(x1,x2) = -0.0001*( abs( sin(x1)*sin(x2)*exp( abs(100 - sqrt(x1^2+x2^2)/pi) ) ) + 1 )^0.1
So I have made this function in Matlab:
function f = CrossInTray2(x)
%the CrossInTray2 objective function
%
f = 0.0001 *(( abs(sin(x(:,1)).* sin(x(:,2)).*exp(abs(100 - sqrt(x(:,1).^2 + x(:,2).^2)/3.14159 )) )+1 ).^0.1);
end
I multiplied the whole formula by (-1) so the function is inverted; looking for the minimum of the inverted formula then actually gives the maximum of the original one.
Then I go to the Optimization Tool, select the GA algorithm, and define the lower and upper bounds as -3 and 3. After about 60 iterations it shows me a result of about 0.13, with a final point of something like [0, 9.34].
How is it possible that the final point is not in the range defined by the bounds? And what is the actual maximum of this function?
The maximum is at (0,0) (actually, wherever either input is 0, and periodically at multiples of pi). After you negate, you're looking for a minimum of a positive quantity. Just looking at the outer absolute value, it obviously can't get lower than 0, and that trivially occurs when either sine factor is 0.
Plugging in, you have f_min = f(0,0) = 0.0001*(0 + 1)^0.1 = 1e-4
This expression is trivial to evaluate and plot over a 2d grid. Do that until you figure out what you're looking at, and what the approximate answer should be, and only then invoke an actual optimizer. GA does not sound like a good candidate for a relatively smooth expression like this. The reason you're getting strange answers is the fact that only one of the input parameters has to be 0. Once the optimizer finds one of those, the other input could be anything.
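For instance, a quick grid evaluation and surface plot of the (already negated) expression from the question:
% evaluate the negated Cross-in-Tray objective on a grid over the stated bounds
[X1, X2] = meshgrid(linspace(-3, 3, 201));
F = 0.0001*(abs(sin(X1).*sin(X2).*exp(abs(100 - sqrt(X1.^2 + X2.^2)/pi))) + 1).^0.1;
surf(X1, X2, F, 'EdgeColor', 'none')
xlabel('x_1'), ylabel('x_2')
% the flat valleys along x_1 = 0 and x_2 = 0 show why the optimizer can wander along them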

MATLAB complicated integration

I have an integral-based function which does not have a closed-form (indefinite integral) expression.
Specifically, the function is f(y) = h(y) + integral(@(x) exp(-x-1./x),0,y), where h(y) is a simple function.
Matlab numerically computes f(y) well, but I want to compute the following function.
g(w)=w*integral(1-f(y).^(1/w),0,inf) where w is a real number in [0,1].
The problem for computing g(w) is handling f(y).^(1/w) numerically.
How can I calculate g(w) with MATLAB? Is it impossible?
Expressions containing e^(-1/x) are generally difficult to compute near x = 0. Actually, I am surprised that Matlab computes f(y) well in the first place. I'd suggest trying to compute g(w)=w*integral(1-f(y).^(1/w),epsilon,inf) for epsilon greater than zero, then gradually decreasing epsilon toward 0 to check if you can get numerical convergence at all. Convergence is certainly not guaranteed!
You can calculate g(w) using the functions you have, but you need to add the 'ArrayValued',true name-value pair.
The option makes integral pass scalar values to the integrand and allows the integrand to return an array: the nested integral call then gets the scalar upper limit y it needs, and w may be a vector.
f = @(y) h(y) + integral(@(x) exp(-x-1./x),0,y,'ArrayValued',true);
g = @(w) w .* integral(@(y) 1 - f(y).^(1./w),0,Inf,'ArrayValued',true);
At least, that works on my R2014b installation.
Note: While h(y) may be simple, if its integral over the positive real line does not converge, g(w) will more than likely not converge either (I don't think I need to qualify that, but I'll hedge my bets).
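As a quick check with a hypothetical h (a placeholder only; substitute your real h), f evaluates cleanly this way:
h = @(y) exp(-y);                                          % placeholder for illustration only
f = @(y) h(y) + integral(@(x) exp(-x-1./x), 0, y, 'ArrayValued', true);
f(2)                                                       % inner function works with a scalar upper limit
% whether g(w) then converges depends on how fast 1 - f(y).^(1./w) decays as y -> Inf (see the note above)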

Calculate hypergeometric function

I need to calculate the degenerate hypergeometric function of two variables, given by the integral formula:
and I used Matlab to compute the integral numerically:
l = 0.067;
h = 0.933;
n = 1.067;
o = 0.2942;
p = 0.633;
func_F=#(x)(x.^(l-1)).*((1-x).^(n-l-1)).*((1-x.*o).^(-h)).*exp(x.*p);
hyper= quadl(func_F,0,1,'AbsTol',1e-6); % i use 'AbsTol' to avoid warnings
disp(hyper);
The result I got is 54.9085, and I know this value is wrong! So please help me calculate the true value of the above integral, which has a singularity at 0.
I don't see where you have the Gamma functions in your code. Did you forget them, or did the value you were expecting already compensate for the lack of them?
Also, maybe you can state why "this value is wrong." Otherwise we are just guessing.
Edit: one more thing, as per the Matlab help page on this function, it might be better to use quadgk. See the following quote (near the bottom of the page):
The quadgk function will integrate functions that are singular at
finite endpoints if the singularities are not too strong. For example,
it will integrate functions that behave at an endpoint c like log|x-c|
or |x-c|^p for p >= -1/2. If the function is singular at points inside
(a,b), write the integral as a sum of integrals over subintervals with
the singular points as endpoints, compute them with quadgk, and add
the results.
Bottom line is that the singularities near the endpoints (when your x gets near 0 or 1) might cause some problems.
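A sketch that addresses both points, assuming the standard integral representation whose prefactor is Gamma(n)/(Gamma(l)*Gamma(n-l)) (check that against the formula you are actually using) and switching to quadgk as the quoted documentation suggests:
l = 0.067; h = 0.933; n = 1.067; o = 0.2942; p = 0.633;
func_F = @(x)(x.^(l-1)).*((1-x).^(n-l-1)).*((1-x.*o).^(-h)).*exp(x.*p);
pref  = gamma(n)/(gamma(l)*gamma(n-l));   % Gamma prefactor (assumed form)
hyper = pref * quadgk(func_F, 0, 1);      % quadgk copes better with the x = 0 endpoint singularity (it may still warn)
disp(hyper);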

How to overcome singularities in numerical integration (in Matlab or Mathematica)

I want to numerically integrate the following double integral over x and y, both running from -pi to pi:
cos((x+y)/2)^2 * (n(x) - n(y)) / (eps(x) - eps(y))^2
where
n(x) = 1/(1 + exp(beta*eps(x))),  eps(x) = 2*(a - b*cos(x)),
and a, b and beta are constants which, for simplicity, can all be set to 1.
Neither Matlab using dblquad, nor Mathematica using NIntegrate can deal with the singularity created by the denominator. Since it's a double integral, I can't specify where the singularity is in Mathematica.
I'm sure that it is not infinite, since this integral is based in perturbation theory and without the [expression missing from the original post] it has been found before (just not by me, so I don't know how it's done).
Any ideas?
(1) It would be helpful if you provide the explicit code you use. That way others (read: me) need not code it up separately.
(2) If the integral exists, it has to be zero. This is because you negate the n(y)-n(x) factor when you swap x and y but keep the rest the same. Yet the integration range symmetry means that amounts to just renaming your variables, hence it must stay the same.
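Spelled out, with F(x,y) denoting the full integrand: renaming the dummy variables gives I = \int_{-\pi}^{\pi}\int_{-\pi}^{\pi} F(x,y)\,dx\,dy = \int_{-\pi}^{\pi}\int_{-\pi}^{\pi} F(y,x)\,dy\,dx, while F(y,x) = -F(x,y) because the swap flips the sign of n(x)-n(y) and leaves \cos^2((x+y)/2) and (\epsilon(x)-\epsilon(y))^2 unchanged; hence I = -I, i.e. I = 0 whenever the integral exists.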
(3) Here is some code that shows it will be zero, at least if we zero out the singular part and a small band around it.
a = 1;
b = 1;
beta = 1;
eps[x_] := 2*(a-b*Cos[x])
n[x_] := 1/(1+Exp[beta*eps[x]])
delta = .001;
pw[x_,y_] := Piecewise[{{1,Abs[Abs[x]-Abs[y]]>delta}}, 0]
We add 1 to the integrand just to avoid accuracy issues with results that are near zero.
NIntegrate[1+Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2*pw[Cos[x],Cos[y]],
{x,-Pi,Pi}, {y,-Pi,Pi}] / (4*Pi^2)
I get the result below.
NIntegrate::slwcon:
Numerical integration converging too slowly; suspect one of the following:
singularity, value of the integration is 0, highly oscillatory integrand,
or WorkingPrecision too small.
NIntegrate::eincr:
The global error of the strategy GlobalAdaptive has increased more than
2000 times. The global error is expected to decrease monotonically after a
number of integrand evaluations. Suspect one of the following: the
working precision is insufficient for the specified precision goal; the
integrand is highly oscillatory or it is not a (piecewise) smooth
function; or the true value of the integral is 0. Increasing the value of
the GlobalAdaptive option MaxErrorIncreases might lead to a convergent
numerical integration. NIntegrate obtained 39.4791 and 0.459541
for the integral and error estimates.
Out[24]= 1.00002
This is a good indication that the unadulterated result will be zero.
(4) Substituting cx for cos(x) and cy for cos(y), and removing extraneous factors for purposes of convergence assessment, gives the expression below.
((1 + E^(2*(1 - cx)))^(-1) - (1 + E^(2*(1 - cy)))^(-1))/
(2*(1 - cx) - 2*(1 - cy))^2
A series expansion in cy, centered at cx, indicates a pole of order 1. So it does appear to be a singular integral.
Daniel Lichtblau
The integral looks like a Cauchy Principal Value type integral (i.e. it has a strong singularity). That's why you can't apply standard quadrature techniques.
Have you tried PrincipalValue->True in Mathematica's Integrate?
In addition to Daniel's observation about integrating an odd integrand over a symmetric range (so that symmetry indicates the result should be zero), you can also do the following to understand its convergence better. (I'll use LaTeX; writing this out with pen and paper should make it easier to read. It took a lot longer to write than to do; it's not that complicated.)
First, \epsilon(x)-\epsilon(y) \propto \cos(y)-\cos(x) = 2\sin(\xi_+)\sin(\xi_-), where I have defined \xi_\pm = (x\pm y)/2 (so I've rotated the axes by \pi/4). The region of integration is then \xi_+ between -\pi and \pi, and \xi_- between \pm(\pi-|\xi_+|). The integrand then takes the form \frac{1}{\sin^2(\xi_-)\sin^2(\xi_+)} times terms with no divergences. So, evidently, there are second-order poles, and this isn't convergent as presented.
Perhaps you could email the persons who obtained an answer with the cos term and ask what precisely it is they did. Perhaps there's a physical regularisation procedure being employed. Or you could have given more information on the physical origin of this (some sort of second order perturbation theory for some sort of bosonic system?), had that not been off-topic here...
Maybe I am missing something here, but the integrand
f[x,y] = Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y]), with n[x] = 1/(1+Exp[beta*eps[x]]) and eps[x] = 2(a-b*Cos[x]), is indeed a symmetric function in x and y: f[x,-y] = f[-x,y] = f[x,y].
Therefore its integral over any domain [-u,u]x[-v,v] is zero. No numerical integration seems to be needed here; the result is just zero.