The Problem:
I have a problem in Matlab with a part of a function that calculates the value of
atan(c*tan(x)).
For some real c and x, where x is an angle. Normally this returns values between -pi/2 and pi/2, but I want it to keep track of how many times the angle has wrapped around: for example, with c=1 we get the same result for x=pi/2 and for x=3*pi/2.
This is what I want to avoid; in this case I would want it to compute the value 3*pi/2 for x=3*pi/2 (I know that for c=1, atan and tan would cancel out, but for arbitrary real c they do not).
In other words, I want to make atan(c*tan(x)) continuous on R in x.
How I tried to fix it:
I simply added a function that keeps an eye on the argument of the tangent:
atan(c*tan(x))+pi.*(floor((x./pi)+(1/2)))
This works for values that are not too near the poles of the tangent function, but once x lies very near a pole it breaks down (jumps of height pi are observed). This is especially a problem for my calculations, as x is near one of the poles practically all the time.
What still seems to be the problem:
Imho the problem is that the tangent has a much higher "resolution" near its poles than the floor function, so the correction term jumps too early or too late relative to the tangent's sign flip.
My question:
Is there another possibility to fix this problem that is not sensitive to x lying near the poles of the tangent ?
There are multiple possible solutions for this problem. A simple one would be to use a special case for x near pi/2 + k*pi (the poles of the tangent), letting atan(c*tan(x)) = +pi/2 or -pi/2 depending on whether x is slightly smaller or slightly larger than the pole, respectively.
function Out = ContTangents(In,RidgeParam)
% Continuous version of atan(RidgeParam*tan(In)); In may be a vector.
InOverPi = In./pi;
Rounded = round(InOverPi);               % index of the nearest multiple of pi, used for unwrapping
CloseRounded = round(InOverPi+1/2)-1/2;  % nearest half-integer, i.e. the nearest pole of tan
ArctanResult = zeros(size(In));
IsCloseVal = abs(InOverPi - CloseRounded) < 0.00001;          % within ~1e-5*pi of a pole
CloseVal = sign(RidgeParam*(CloseRounded-InOverPi)) * pi/2;   % limiting value +-pi/2 at the pole
NotCloseVal = atan(RidgeParam*tan(In));                       % safe to evaluate away from the poles
ArctanResult(IsCloseVal) = CloseVal(IsCloseVal);
ArctanResult(~IsCloseVal) = NotCloseVal(~IsCloseVal);
Out = ArctanResult + pi*Rounded;         % unwrap: add pi for each pole crossed
This solution appears to produce nice-looking curves, and as far as I can see it avoids discontinuities and singularities. At least, it seems nice when I run
figure, plot(ContTangents(-10:.001*pi:10,2))
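A quick numerical continuity check (my own sketch, not part of the original answer; the bound is only indicative):
y = ContTangents(-10:.001*pi:10, 2);
max(abs(diff(y))) % for a continuous curve this is about (max slope)*(step size), nowhere near a jump of pi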
Edit: Some parts of the function are modified. When running ContTangents(pi/2-exp(-36),2) (the problem value from the OP's example), I found this very peculiar behavior. Is this a bug in Matlab?
K>> InOverPi<0.5
ans =
1
K>> round(InOverPi)
ans =
1
This could be worked around by defining your own round function which doesn't fail at the point .5-eps(.25). In fact, due to the entertaining behavior of floating point at the limit, you can't even write that number as a literal; you have to proceed as follows.
format hex
(pi/2-exp(-36))/pi
ans =
3fdfffffffffffff
-eps(.25)+.25+.25
ans =
3fdfffffffffffff
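For illustration, a minimal sketch of such a replacement (SafeRound is a name I'm introducing, not a built-in). The idea is to compare the fractional part against 0.5 rather than computing floor(x+0.5), which is presumably what rounds .5-eps(.25) up to 1 here:
% Safe at .5-eps(.25). Note: this rounds halves up rather than away from
% zero, so it differs from round() for negative inputs, which is fine for
% the positive values used in ContTangents.
SafeRound = @(x) floor(x) + (x - floor(x) >= 0.5);
SafeRound((pi/2-exp(-36))/pi) % returns 0, where round() returned 1 above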
I'm using Matlab 2014b. I've tried:
clear all
syms x real
assumeAlso(x>=5)
This returned:
ans =
[ 5 <= x, in(x, 'real')]
Then I tried:
int(sqrt(x^2-25)/x,x)
But this still returned a complex answer:
(x^2 - 25)^(1/2) - log(((x^2 - 25)^(1/2) + 5*i)/x)*5*i
I tried the simplify command, but it still returned a complex answer. Now, this might be fixed in the latest version of Matlab. If so, can people let me know, or offer a suggestion for getting the real answer?
The hand-calculated answer is sqrt(x^2-25)-5*asec(x/5)+C.
This behavior is present in R2017b, though when converted to floating point the imaginary components are different.
Why does this occur?
This occurs because Matlab's int function returns the full general solution when you ask for the indefinite integral. This solution is valid over the entire domain of real values, including your restricted domain of x>=5.
With a bit of math you can show that the solution is always real for x>=5 (see complex logarithm). Or you can use more symbolic math via the isAlways function to show this:
syms x real
assume(x>=5)
y = int(sqrt(x^2-25)/x, x)
isAlways(imag(y)==0)
This returns true (logical 1). Unfortunately, Matlab's simplification routines appear unable to reduce this expression when assumptions are included. You might also submit this case to The MathWorks as a service request, in case they'd consider improving the simplification for this and similar equations.
How can this be "fixed"?
If you want to get rid of the zero-valued imaginary part of the solution you can use sym/real:
real(y)
which returns 5*atan2(5, (x^2-25)^(1/2)) + (x^2-25)^(1/2).
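As a quick sanity check (my own sketch, assuming the Symbolic Math Toolbox can differentiate atan2, which it should), differentiating the real form recovers the original integrand:
syms x real
assume(x >= 5)
yr = 5*atan2(5, sqrt(x^2-25)) + sqrt(x^2-25); % the real form from above
simplify(diff(yr, x) - sqrt(x^2-25)/x)        % 0 if the antiderivative is correct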
Also, as @SardarUsama points out, when the full solution is converted to floating point (or variable precision) there will sometimes be numeric imprecision when converting from the exact symbolic form. Using the symbolic real form above should avoid this.
The answer is not really complex.
Take a look at this:
clear all; %To clear the conditions of x as real and >=5 (simple clear doesn't clear that)
syms x;
y = int(sqrt(x^2-25)/x, x)
which, as we know, gives:
y =
(x^2 - 25)^(1/2) - log(((x^2 - 25)^(1/2) + 5i)/x)*5i
Now put some real values of x≥5 to check what result it gives:
n = 1004;           %We'll be putting 1000 values of x in y, from 5 to 1004
yk = zeros(1000,1); %Preallocation
for k = 5:n
    yk(k-4) = subs(y,x,k); %Putting the value of x
end
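(As a side note, subs also accepts a vector of values, so the loop can be replaced by a single call; this is my sketch, not part of the original answer.)
k = 5:1004;                 % same 1000 values as the loop above
yk = double(subs(y, x, k)); % substitute all values at once, convert to double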
Now let's check the imaginary part of the result we have:
>> imag(yk)
ans =
1.0e-70 *
0
0
0
0
0.028298997121333
0.028298997121333
0.028298997121333
%and so on...
Notice the multiplier 1e-70.
Let's check the maximum value of imaginary part in yk.
>> max(imag(yk))
ans =
1.131959884853339e-71
This implies that the imaginary part is extremely small, not a considerable amount to be worried about. Ideally it would be zero; it only appears because of imprecise floating-point calculations. Hence, it is safe to call your result real.
I am trying to compute the principal value of an integral (over s) of 1/((s - q02)*(s - q2)) on [Ecut, inf] with q02 < Ecut < q2. Doing the principal value by hand (or in Mathematica) one obtains the general result
ln((q2-Ecut)/(Ecut-q02)) / (q02 -q2)
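(A derivation sketch, added for reference: by partial fractions, with D = q2 - q02,
\frac{1}{(s-q_{02})(s-q_2)} = \frac{1}{D}\left(\frac{1}{s-q_2}-\frac{1}{s-q_{02}}\right),
\qquad
\mathrm{PV}\!\int_{E_\mathrm{cut}}^{\infty}\frac{ds}{(s-q_{02})(s-q_2)}
= \frac{1}{D}\Big[\ln\Big|\frac{s-q_2}{s-q_{02}}\Big|\Big]_{E_\mathrm{cut}}^{\infty}
= \frac{\ln\frac{q_2-E_\mathrm{cut}}{E_\mathrm{cut}-q_{02}}}{q_{02}-q_2},
where the principal value is what makes ln|s - q2| well defined across the pole.)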
In the specific example below this gives the result -1.58637*10^-11. One should also be able to get the same result by splitting the integral in two, integrating up to q2 - eps, then starting from q2 + eps, and adding the two results (the divergences should cancel). By taking eps smaller and smaller one should recover the result above. When I implement this in scipy using quad, however, my result converges to the wrong value 6.04685e-11, as shown in the plot of eps vs. integral result that I include.
Why is quad doing this? Even with eps = 0 it gives me this wrong result, when I would expect it to give an error, since the integrand blows up there...
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
q02 = 485124412.
Ecut = 17909665929.
q2 = 90000000000.
def integrand(s):
    return 1/((s - q02)*(s - q2))

xx = [1., 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001, 0.00000001,
      0.000000001, 0.0000000001, 0.00000000001, 0.]
integral = [0*y for y in xx]
i = 0
for eps in xx:
    ans1, err = quad(integrand, Ecut, q2 - eps)
    ans2, err = quad(integrand, q2 + eps, np.inf)
    integral[i] = ans1 + ans2
    i = i + 1
plt.semilogx(xx, integral, marker='.')
plt.show()
One should also be able to get the same result by splitting the integral in two, integrating up to q2 - eps and then starting from q2 + eps, and then adding the two results
Only if computations were perfectly accurate. In numerical practice, what you described is basically the worst thing one could do. You get two large integrals of opposite signs that very nearly cancel each other when added; what is left has more to do with the errors of integration than with actual value of the integral.
I notice you disregarded the error values err in your script, not even printing them out. Bad idea: they are of size 1e-10, which would already tell you that the final result with "something e-11" is junk.
The Computational Science question Numerical Principal Value Integration - Hilbert like addresses this issue. One of the approaches they indicate is to add the values of the integrand at the points symmetric about the singularity, before trying to integrate it. This requires taking the integral over a symmetric interval centered at the singularity q2 (that is, from Ecut to 2*q2-Ecut), and then adding the contribution of the integral from 2*q2-Ecut to infinity. This split makes sense anyway, because quad treats infinite limits very differently (using Fourier integration), which is yet another thing that will affect the way the singularity cancels out.
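To see why the symmetrized integrand is regular at the singularity, write t = s - q2 and D = q2 - q02 (a short check I'm adding):
\frac{1}{(s-q_{02})(s-q_2)} + \frac{1}{(2q_2-s-q_{02})(q_2-s)}
= \frac{1}{t(D+t)} - \frac{1}{t(D-t)}
= \frac{-2}{D^2-t^2},
which stays finite as t goes to 0, so quad no longer sees a pole at s = q2.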
So, an implementation of this approach would be
ans1, err = quad(lambda s: integrand(s) + integrand(2*q2-s), Ecut, q2)
ans2, err = quad(integrand, 2*q2-Ecut, np.inf)
No eps is needed. However, the result is still off: it's about -2.5e-11. Turns out, the second integral is the culprit. Unfortunately, the Fourier integral approach doesn't seem to be effective here (or I didn't find a way to make it work). It turns out that providing a large, but finite value as the upper limit leads to a better result, especially if the option epsabs is also used, e.g. epsabs=1e-20.
Better yet, read the documentation of quad extra carefully and notice that it directly supports integrals with Cauchy weight 1/(s-q2), choosing an appropriate numerical method for them. This still requires a finite upper limit, and a small value of epsabs, but the result is pretty accurate:
quad(lambda s: 1/(s - q02), Ecut, 1e9*q2, weight='cauchy', wvar=q2, epsabs=1e-20)
returns -1.5863735715967363e-11, compared to exact value -1.5863735704856253e-11. Notice that the factor 1/(s-q2) does not appear in the integrand above, being relegated to the weight options.
I have a Matlab project in which I need to make a GUI that receives two mathematical functions from the user. I then need to find their intersection point, and then plot the two functions.
So, I have several questions:
1. Do you know of any algorithm I can use to find the intersection point? Of course, I'd prefer one for which I can already find Matlab code on the internet. Also, I'd prefer it not be the Newton-Raphson method. I should point out that I'm not allowed to use built-in Matlab functions.
2. I'm having trouble plotting the functions. What I basically did is this:
fun_f = get(handles.Function_f,'string');
fun_g = get(handles.Function_g,'string');
cla % To clear axes when plotting new functions
ezplot(fun_f);
hold on
ezplot(fun_g);
axis ([-20 20 -10 10]);
The problem is that sometimes the axis limits do not let me see the other function. This happens, for example, if one function is log10(x) and the other is y=1: the y=1 line is not shown.
I have already tried all the axis commands, but to no avail. If I set the limits myself, the functions are only drawn over part of the range. I have no idea why.
3. How do I display numbers in a static text? Or better yet, a string with numbers?
I want to display something like x0 = [root1]; x1 = [root2]. The only solution I found was turning the roots into strings, but I'd prefer not to.
As for the equation solver, this is the code I have so far. I know it is very amateurish, but it seemed like the most "intuitive" way. Also keep in mind that it is far from finished (for example, it will show me only two solutions, and I'm not sure how to display multiple roots in one static text since they are strings, hence question 3).
function [Sol] = SolveEquation(handles)
fun_f = get(handles.Function_f,'string');
fun_g = get(handles.Function_g,'string');
f = inline(fun_f);
g = inline(fun_g);
i = 1;
Sol = 0;
for x = -10:0.1:10
    if (g(x) - f(x)) >= 0 && (g(x) - f(x)) < 0.01
        Sol(i) = x;
        i = i + 1;
    end
end
solution1 = num2str(Sol(1));
solution2 = num2str(Sol(2));
set(handles.roots1,'string',solution1);
set(handles.roots2,'string',solution2);
The if condition is there because the subtraction will never give me an absolute zero, and this seems to somewhat solve it, though it's really not perfect: sometimes it gives me more than two very similar solutions (e.g. 1.9 and 2).
The range of x is arbitrary, chosen by me.
I know this is a long question, so I really appreciate your patience.
Thank you very much in advance!
Question 1
I think this is a more robust method for finding the roots, given data at discrete points: look for where the difference between the functions changes sign, which corresponds to them crossing over.
S=sign(g(x)-f(x));
h=find(diff(S)~=0)
Sol=x(h);
If you can evaluate the functions wherever you want, there are more methods you can use, but which is best depends on the size of the domain and the accuracy you want. For example, if you don't need a great deal of accuracy, your f and g functions are simple to calculate, and you can't or don't want to use derivatives, you can get a more accurate root using the same idea as the first code snippet, but applied iteratively:
G=inline('sin(x)');
F=inline('1');
g=vectorize(G);
f=vectorize(F);
tol=1e-9;
tic()
x = -2*pi:.001:pi;
S=sign(g(x)-f(x));
h=find(diff(S)~=0); % Find where the two lines cross over
Sol=zeros(size(h));
Err=zeros(size(h));
if ~isempty(h) % There are some cross-over points
    for i=1:length(h) % For each point, improve the approximation
        xN=x(h(i):h(i)+1);
        err=1;
        while(abs(err)>tol) % Iteratively improve the approximation
            S=sign(g(xN)-f(xN));
            hF=find(diff(S)~=0);
            xN=xN(hF:hF+1);
            [~,I]=min(abs(f(xN)-g(xN)));
            xG=xN(I);
            err=f(xG)-g(xG);
            xN=linspace(xN(1),xN(2),15);
        end
        Sol(i)=xG;
        Err(i)=f(xG)-g(xG);
    end
else % No crossover points - lines could meet at tangents
    [h,I]=findpeaks(-abs(g(x)-f(x)));
    Sol=x(I(abs(f(x(I))-g(x(I)))<1e-5));
    Err=f(Sol)-g(Sol)
end
% We also have to check each endpoint
if abs(f(x(end))-g(x(end)))<tol && abs(Sol(end)-x(end))>1e-12
    Sol=[Sol x(end)];
    Err=[Err g(x(end))-f(x(end))];
end
if abs(f(x(1))-g(x(1)))<tol && abs(Sol(1)-x(1))>1e-12
    Sol=[x(1) Sol];
    Err=[g(x(1))-f(x(1)) Err];
end
toc()
Sol
Err
This will "zoom" in to the region around each suspected root, and iteratively improve the accuracy. You can tweak the parameters to see whether they give better behaviour (the tolerance tol, the 15, number of new points to generate, could be higher probably).
Question 2
You would probably be best off avoiding ezplot, and using plot, which gives you greater control. You can vectorise inline functions so that you can evaluate them like anonymous functions, as I did in the previous code snippet, using
f=inline('x^2')
F=vectorize(f)
F(1:5)
and this should make plotting much easier:
plot(x,f(x),'b',Sol,f(Sol),'ro',x,g(x),'k',Sol,g(Sol),'ro')
Question 3
I'm not sure why you don't want to display your roots as strings, what's wrong with this:
text(xPos,yPos,['x0=' num2str(Sol(1))]);
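If there are several roots, an alternative (my sketch; handles.roots1 is the static text handle from the question, and strjoin needs R2013a or later) is to format the whole vector into one string:
% Build 'x0 = 1.90;  x1 = 2.00;  ...' from the vector of roots
parts = arrayfun(@(k) sprintf('x%d = %.2f', k-1, Sol(k)), 1:numel(Sol), 'UniformOutput', false);
set(handles.roots1, 'string', strjoin(parts, ';  '));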
I'm embarrassed to ask this, as I believe I may be missing something obvious, but I just can't see where I am going wrong. As part of a larger program, I am investigating the application of discretisation methods to approximate the convection equation on a square wave. However, I have noticed that in some cases the bounds of my square wave are being applied incorrectly. It should have initial conditions of 1 when 10.25 <= x <= 10.5 and 0 everywhere else. This is an example of the issue:
L=20; %domain length
dx=0.01; %space step
nx=(L/dx); %number of steps in space
x=0:dx:(nx-1)*dx; %steps along x
sqr=zeros(1,nx); %pre-allocate array space
for j=1:nx
    if ((10.25<=x(j))&&(x(j)<=10.5))
        sqr(j)=1;
    else
        sqr(j)=0;
    end
d=plot(x,sqr,'r','LineWidth',2); axis([10.1 10.6 0 1.1]);
drawnow;
In this case, the wave displays incorrectly, with x=10.5 taking a value of 0 and not 1, like so:
https://dl.dropboxusercontent.com/u/8037738/project/square_wave.png
The weird thing is that if I change the domain length to a different value it sometimes displays correctly. This is when the domain length is set to 30 and it displays correctly:
https://dl.dropboxusercontent.com/u/8037738/project/square_wave_correct.png
I really don't understand because my x array is always discretised in 0.01 intervals, so it will never 'miss' 10.5 when it loops through. I hope I have explained this problem adequately enough and if it is a stupid mistake on my part I apologise in advance.
You forgot an end after you finish with the if:
L=20; %domain length
dx=0.01; %space step
nx=(L/dx); %number of steps in space
x=0:dx:(nx-1)*dx; %steps along x
sqr=zeros(1,nx); %pre-allocate array space
eps = 1.e-8;
for j=1:nx
    if ((10.25-eps<=x(j))&&(x(j)<=10.5+eps))
        sqr(j)=1;
    else
        sqr(j)=0;
    end
end
d=plot(x,sqr,'r','LineWidth',2); axis([10.1 10.6 0 1.1]);
drawnow;
I also recommend using a small tolerance when comparing floating-point values, as in the x(j)<=10.5 test.
If you inspect the difference at that point, Matlab tells you:
x(1051)-10.5
ans =
1.776356839400250e-15
so because of round-off the value of x(j) was slightly larger than 10.5, not equal to it as you were expecting.
Your issue is that the x values do not fall exactly on the space step. For example, if you look at the following:
j=1051;
format long;
x(j)
ans =
10.500000000000002
The answer from sebas fixes this issue by adding the tolerance to each of the bounds. You could also try rounding x(j) to the nearest 100th before performing the logic in the if statement:
roundn(x(j), -2)
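Note that roundn ships with the Mapping Toolbox; if it's unavailable, a base-Matlab sketch of the same operation is:
xr = round(x(j)*100)/100; % round to the nearest 0.01, like roundn(x(j), -2)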
I am calculating the overlap of two bivariate normal distributions using the following function:
function [ oa ] = bivariate_overlap_integral(mu_x1,mu_y1,mu_x2,mu_y2)
%calculating the pdf; using x as a vector because of MATLAB requirements for integration
bpdf_vec1=@(x,y,mu_x,mu_y)(exp(-((x-mu_x).^2)./2-((y-mu_y).^2)./2)./(2*pi));
%calculating the overlap of the two distributions at the point x,y
overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1),bpdf_vec1(x,y,mu_x2,mu_y2));
%calculating the overall overlap area
oa=dblquad(overlap_point,-100,100,-100,100);
You can see that this involves taking a double integral (x: -100 to 100, y: -100 to 100; ideally -inf to inf, but this suffices at the moment) of the function overlap_point, which is the minimum of the two pdfs given by bpdf_vec1 for the two distributions at the point x,y.
Now, a PDF is never 0, so I would expect that the larger the area of the interval, the larger the end result becomes, obviously with a negligible difference after a certain point. However, it appears that sometimes, when I decrease the size of the interval, the result grows. For instance:
>> mu_x1=0;mu_y1=0;mu_x2=5;mu_y2=0;
>> bpdf_vec1=@(x,y,mu_x,mu_y)(exp(-((x-mu_x).^2)./2-((y-mu_y).^2)./2)./(2*pi));
>> overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1),bpdf_vec1(x,y,mu_x2,mu_y2));
>> dblquad(overlap_point,-10,10,-10,10)
ans =
0.0124
>> dblquad(overlap_point,-100,100,-100,100)
ans =
1.4976e-005 -----> strange, as theoretically it cannot be smaller than the first answer
>> dblquad(overlap_point,-3,3,-3,3)
ans =
0.0110 -----> makes sense that the result is less than the first answer as the
interval is decreased
Here we can check that the overlaps are (close to) 0 at the border points of the interval.
>> overlap_point (100,100)
ans =
0
>> overlap_point (-100,100)
ans =
0
>> overlap_point (-100,-100)
ans =
0
>> overlap_point (100,-100)
ans =
0
Does this perhaps have to do with the implementation of dblquad, or am I making a mistake somewhere? I use MATLAB R2011a.
Thanks
CONGRATULATIONS! You win the award for being the 12 millionth person to ask essentially this question. :) What I'm trying to say is this is an issue that everyone stumbles over at first. Honestly, this question gets asked over and over again, so really, this question should be marked as a dup.
What happens with these things is a bivariate normal is essentially a delta function when viewed from far enough away. And you don't need to really spread that region out too far, since the normal density drops off fast. It is essentially zero over most of the domain you are trying to integrate over, at least to within the tolerances employed.
So then if the quadrature happens to hit some sample points near the areas where there is mass, then you may get some realistic estimate of your integral. But if all the tool sees are numbers that are essentially zero over the entire domain, it concludes the integral is zero. Remember, adaptive integration tools are NOT omniscient. They do not know anything about your function. It is a black box to them. These are NOT symbolic tools.
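One practical way to help such a black-box integrator (a sketch of mine, relying on the fact that the densities in the code above have unit variance): integrate over a window that is guaranteed to contain essentially all of the overlap mass, instead of a huge domain that is almost entirely zero.
% Both densities are negligible beyond ~8 sigma from their means, so a
% window around the means captures the overlap to near machine precision.
lox = min(mu_x1,mu_x2) - 8; hix = max(mu_x1,mu_x2) + 8;
loy = min(mu_y1,mu_y2) - 8; hiy = max(mu_y1,mu_y2) + 8;
oa = dblquad(overlap_point, lox, hix, loy, hiy);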
BTW, this is NOT something I'd expect to see consistently different for Octave versus MATLAB. It is only an issue of the adaptive integrator, and where it chooses to set its sample points down.
OK, here are my octave results.
>format long
>z10 = dblquad(overlap_point,-10,10,-10,10)
z10 = 0.0124193303284589
>z100 = dblquad(overlap_point,-100,100,-100,100)
z100 = 0.0124193303245106
>z100 - z10
ans = -3.94835379669001e-012
>z10a = dblquad(overlap_point,-10,10,-10,10,1e-8)
z10a = 0.0124193306514996
>z100a = dblquad(overlap_point,-100,100,-100,100,1e-8)
z100a = 0.0124193306527155
>z100a-z10a
ans = 1.21588676627038e-012
BTW. I've noticed this type of problem before with numerical solutions. Sometimes you make a change that you expect will improve the accuracy of the result (in this case by making your limits closer to the ideal case of the full plane), but instead you get the opposite effect and the result becomes less accurate. What's happening in this case is that by going out "wider", to -100..100, you're shifting the focus away from where the really important action is happening in your function, which is close to the origin. At some point the implementation of dblquad that you're using must start increasing the intersample distance as you increase the limits, and then it starts missing some important stuff close to the origin.
Maybe someone running a later version of Matlab can check this out and see if it's been improved.