MATLAB sin() vs sind()

I noticed that MATLAB has sin() and sind() functions.
I learnt that sin() accepts the angle in radians and sind() accepts the angle in degrees.
The only difference I know is sind(180) gives 0 but sin(pi) doesn't:
>> sin(pi)
ans =
1.2246e-016
>> sind(180)
ans =
0
What puzzles me is whether there are any scenarios or guidelines for choosing between sin() and sind().

From the documentation of sind:
For integers n, sind(n*180) is exactly zero, whereas sin(n*pi)
reflects the accuracy of the floating point value of pi.
So, if you are extremely troubled by the fact that sin(pi) is not precisely zero, go ahead and use sind, but in practice it is just a wrapper around sin, so you add a tiny bit of overhead.
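If you want to see the difference for yourself, a quick check (my own sketch, not from the documentation) shows it only at multiples of 180 degrees:
n = (1:4)';
[sin(n*pi), sind(n*180)]   % left column is tiny floating-point noise, right column is exactly 0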
Personally, I prefer the elegance of radians and use sin.

Related

Is there any numerical-accuracy difference between calculating sin(pi/2-A) and cos(A) in Matlab?

I am reading a Matlab function for calculating great-circle distance written by my senior colleague.
The distance between two points on the earth surface should be calculated using this formula:
d = r * arccos[(sin(lat1) * sin(lat2)) + cos(lat1) * cos(lat2) * cos(long2 - long1)]
However, the script has the code like this:
dist = (acos(cos(pi/180*(90-lat2)).*cos(pi/180*(90-lat1))+sin(pi/180*(90-lat2)).*sin(pi/180*(90-lat1)).*cos(pi/180*(diff_long)))) .* r_local;
(-180 < long1,long2 <= 180, -90 < lat1,lat2 <= 90)
Why are sin(pi/2-A) and cos(pi/2-A) used to replace cos(A) and sin(A)?
Doesn't it introduce more sources of error by using the constant pi?
Since lat1, lat2 might be very close to zero in my work, is this a trick on the numerical accuracy of MATLAB's sin() and cos() function?
I look forward to answers that explain how trigonometric functions in MATLAB work and analyze the error of these functions when the argument is close or equal to 0 or pi/2.
If the purpose is to increase accuracy, this seems a very poor idea. When the angle is small, computing 90-A spoils any accuracy; it even makes tiny angles vanish entirely (90-ε rounds to 90 in floating point).
On the contrary, the sine of a tiny angle is very close to the angle itself (in radians) and is for this reason computed quite accurately, while the cosine is virtually 1, or 1 - A²/2. For top accuracy on tiny angles, you can resort to the versine, versin(A) := 1 - cos(A) = 2 sin²(A/2), and rework the equations in terms of 1 - versin(A) instead of cos(A) (see the sketch below).
If the angle is close to 90°, accuracy is lost anyway, 90°-A will not restore it.
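For reference, one common way to apply that idea is the haversine form of the great-circle formula. The following is only a minimal sketch, not code from the original script, and it assumes lat1, lat2, long1, long2 are already in radians and r_local is the sphere radius:
% Haversine form: accurate when the two points are very close together
dlat  = lat2 - lat1;
dlong = long2 - long1;
h     = sin(dlat/2).^2 + cos(lat1).*cos(lat2).*sin(dlong/2).^2;
h     = min(1, h);                    % guard against rounding pushing h slightly above 1
dist  = 2 * r_local .* asin(sqrt(h));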
I very much doubt this has to do with accuracy. Or at least, I don't think this helps any when it comes to accuracy.
The maximum difference between both sin(pi/2-A) - cos(A) and cos(pi/2-A) - sin(A) is 1.1102e-16, which is very small. This is just basic floating point accuracy, and there's really no way of telling which of the numbers is more correct. Note that cos(pi/2) = 6.1232e-17. So, if theta = 0, your colleague's code cos(pi/2-0) will give an error of 6.1232e-17, while simply doing the obvious sin(0) will be correct.
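You can verify that bound with a quick sweep; this is just a sketch of how one might check it, not code from the question:
A = linspace(0, pi/2, 1e6);
max(abs(sin(pi/2 - A) - cos(A)))   % on the order of 1e-16
max(abs(cos(pi/2 - A) - sin(A)))   % same order of magnitude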
If you need numbers that are more accurate than this then you can try vpa.
I guess this is either because your colleague found another formula and implemented that, or he/she's confused and has attempted to increase the accuracy.
The latter might be the case if he/she tried to avoid the approximations sin(theta) ≈ theta and cos(theta) ≈ 1 for small values of theta. However, this doesn't make sense, since cos(pi/2-theta) ≈ theta and sin(pi/2-theta) ≈ 1 for small values of theta.
Your best chance is to ask the author of the text where you got those expressions directly, if that is possible.
It may be that the original expressions come from navigation formulae written when calculations were done manually: pencil, paper, and ruler, with no computers or calculators.
Tables and graphs were used to speed up the work: pi - x was equivalent to reading the table from the other end, or reading the graph upside down.

Why do the outcomes of converting an angle with deg2rad and by multiplying with pi/180 differ?

I'm using the Matlab function deg2rad to convert angles from degrees to radians (obviously). I use them for angles alpha and beta, which are in turn used to interpolate some tabular data f(alpha, beta), which is with respect to alpha and beta in degrees. For my purposes, I use them in radians, such that I have to convert back and forth every now and then.
Now, I found that, as opposed to using f(pi/180*alpha, pi/180*beta), whenever I interpolate using f(deg2rad(alpha), deg2rad(beta)), the interpolation falls outside the interpolation area at the boundaries. That is, the interpolation boundaries are at alpha = [-4, 4], beta = [-3, 3], and interpolating along those boundaries gives me NaN when using deg2rad. This looks like some kind of round-off or machine-precision error.
Now, as a MWE, suppose I want to check deg2rad(-3) == -3*pi/180, this gives me:
>> deg2rad(-3) == -3*pi/180
ans =
logical
0
My questions are the following:
How do I prevent this behaviour, where using deg2rad is basically not useful for me when operating near the edges of the interpolation area?
What does the deg2rad function do, other than multiplying with pi/180?
Thanks in advance.
P.S. It is even stranger than I thought, because with other angles it does work:
>> deg2rad(2) == 2*pi/180
ans =
logical
1
Your question is answered by typing edit deg2rad into MATLAB. It computes:
angleInRadians = (pi/180) * angleInDegrees;
Of course, the order of operations differs between your version ((a*pi)/180) and MATLAB's (a*(pi/180)). This difference in grouping causes the rounding errors to differ.
If rounding errors cause a problem in your function, your function is unstable and needs to be fixed.
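If you cannot change f itself, a pragmatic workaround (just a sketch, with hypothetical grid and query variables) is to build the grid and the query points with the same conversion, or to clamp the queries to the grid range so a one-ulp overshoot cannot fall outside it:
% Build grid and queries with the same conversion so the endpoints match bit for bit
alpha_grid = deg2rad(-4:4);      % hypothetical interpolation grid in radians
alpha_q    = deg2rad(alpha);     % query angles, converted the same way
% Alternatively, clamp queries to the grid range to absorb one-ulp differences
alpha_q    = min(max(alpha_q, alpha_grid(1)), alpha_grid(end));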

Matlab Bug in Sine function?

Has anyone tried plotting a sine function for large values in MATLAB?
For example:
x = 0:1000:100000;
plot(x,sin(2*pi*x))
I was just wondering why the amplitude changes for this periodic function. As I understand it, sin has a period of 2*pi, so every value of x here should give the same result. Why doesn't it?
Does anyone know? Is there a way to get it right? Also, is this a bug and is it already known?
That's actually not the amplitude changing. That is due to the numerical imprecisions of floating point arithmetic. Bear in mind that you are specifying an integer sequence from 0 to 100000 in steps of 1000. If you recall from trigonometry, sin(n*x*pi) = 0 when x and n are integers, and so theoretically you should be obtaining an output of all zeroes. In your case, n = 2, and x is a number from 0 to 100000 that is a multiple of 1000.
However, when I run the code from your post, I get a plot whose amplitude appears to vary, just as you describe. Take a look at the scale of that graph, though: it's on the order of 10^{-11}. Do you know how small that is? As further evidence, here are the max and min values of that sequence:
>> min(sin(2*pi*x))
ans =
-7.8397e-11
>> max(sin(2*pi*x))
ans =
2.9190e-11
The values are so small that they might as well be zero. What you are visualizing in the graph is due to numerical imprecision. As I mentioned before, sin(n*x*pi) = 0 when n and x are integers, under the assumption that we have all of the decimal places of pi available. However, because we only have 64 bits in total to represent pi numerically, you will certainly not get the result to be exactly zero. In addition, be advised that the sin function very likely uses numerical approximation algorithms (a Taylor / Maclaurin series, for example), so that could also contribute to the result not being exactly 0.
There are, of course, workarounds, such as using the Symbolic Math Toolbox (see yoh.lej's answer). In that case you will get zero, but I won't focus on that here. Your post is questioning the accuracy of the sin function in MATLAB, which works on numeric inputs. Theoretically, since your input to sin is an integer sequence, every value of x should make sin(n*x*pi) = 0.
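A cheaper workaround, if your x values really are whole numbers of periods, is to reduce the argument before multiplying by 2*pi, so the value passed to sin is computed exactly. This is my own sketch, not from the original answer:
x = 0:1000:100000;
max(abs(sin(2*pi*x)))           % about 1e-11: the rounding error of 2*pi*x grows with x
max(abs(sin(2*pi*mod(x, 1))))   % exactly 0: mod(x,1) is 0 for these integer x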
BTW, this article is good reading. This is what every programmer needs to know about floating point arithmetic and functions: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. A more simple overview can be found here: http://floating-point-gui.de/
Because what is the exact value of pi?
This apparent error is due to the limit of floating point accuracy. If you really need/want to get around that you can do symbolic computation with matlab, see the difference between:
>> sin(2*pi*10)
ans =
-2.4493e-15
and
>> sin(sym(2*pi*10))
ans =
0

MatLab returning different trigonometric answers than calculator?

I'm having some difficulty trying to comprehend the answers that Matlab and my calculator are returning from sinusoidal functions.
Firstly, I figured that pi/2 and 90 deg are analogous, but when I pass them into a cosine function I get these two outputs:
Calculator: cos(90) = 0
Calculator: cos(pi/2) = 0.9996242169
Matlab: cos(90) = -0.4481
Matlab: cos(pi/2) = 6.1232e-17
I have been referencing the unit circle and things don't seem to stack up. I am fairly new to maths, so maybe I am doing something wrong. I've been practising with the sine function, and this is a lot closer to my calculator results:
Matlab: sin(90) = 0.8940
Matlab: sin(pi/2) = 1
If you want to provide the angle in degrees, use cosd and sind; if the angle is in radians, use cos and sin.
cos(0) and cosd(0) both produce 1 on my computer. However, cos(pi/2) produces 6.1232e-17, while cosd(90) produces exactly 0.
You can check in the MATLAB documentation how close to 0 a number of that size is; it is effectively zero.
They are only analogous if your calculator, or whatever you are using to calculate, is in the correct mode. For instance, if your calculator is in degrees, then yes, cos(90) will equal 0. So it seems your calculator is in degrees and not radians. I haven't used Matlab in a long time, so I don't know how that setting is set up, but this is the problem you are experiencing. For any trigonometric calculation on any calculator, whether an actual calculator or a program like Matlab, you must know whether it expects the argument in degrees or radians.
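As a quick sanity check of the degrees-versus-radians pairing (my own sketch, with the expected outputs noted in comments):
cos(pi/2)   % 6.1232e-17, zero up to floating-point error
cosd(90)    % exactly 0
sin(pi/2)   % 1
sind(90)    % 1
cos(90)     % -0.4481, because 90 is treated as radians here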

Why is Matlab saving values as symbolic variables with massive fractions instead of decimal approximations?

I'm currently working on a rudimentary optimization algorithm in Matlab, and I'm running into issues with Matlab saving variables at ridiculous precision. Within a few iterations the variables are so massive that it's actually triggering some kind of infinite loop in sym.m.
Here's the line of code that's starting it all:
SLine = (m * (X - P(1))) + P(2);
Where P = [2,2] and m = 1.2595. When I type this line of code into the command line manually, SLine is saved as the symbolic expression (2519*X)/2000 - 519/1000. I'm not sure why it isn't using a decimal approximation, but at least these fractions have the correct value. When this line of code runs in my program, however, it saves SLine as the expression (2836078626493975*X)/2251799813685248 - 584278812808727/1125899906842624, which when divided out isn't even precise to four decimals. These massive fractions are getting carried through my program, growing with each new line of code, and causing it to grind to a halt.
Does anyone have any idea why Matlab is behaving in this way? Is there a way to specify what precision it should use while performing calculations? Thanks for any help you can provide.
You've told us what m and P are, but what is X? X is apparently a symbolic variable. So further computations are all done symbolically.
Welcome to the Joys of Symbolic Computing!
Most Symbolic Algebra systems represent numbers as rationals, $(p,q) = \frac{p}{q}$, and perform rational arithmetic operations (+,-,*,/) on these numbers, which produce rational results. Generally, these results are exact (also called infinite precision).
It is well known that the sizes of the rationals generated by rational operations on rationals grow exponentially. Hence, if you try to solve a realistic problem with any Symbolic Algebra system, you eventually run out of space or time.
Here is the last word on this topic, where Nick Trefethen FRS shows why floating point arithmetic is absolutely vital for solving realistic numeric problems.
http://people.maths.ox.ac.uk/trefethen/publication/PDF/2007_123.pdf
Try this in Matlab:
function xnew = NewtonSym(xstart,niters);
% Symbolic Newton on simple polynomial
% Derek O'Connor 2 Dec 2012. derekroconnor#eircom.net
x = sym(xstart,'f');
for iter = 1:niters
xnew = x - (x^5-2*x^4-3*x^3+3*x^2-2*x-1)/...
(5*x^4-8*x^3-9*x^2+6*x-2);
x = xnew;
end
function xnew = TestNewtonSym(maxits)
% Test the running time of Symbolic Newton
% Derek O'Connor 2 Dec 2012.
time = zeros(maxits, 1);
for niters = 1:maxits
    xstart = 0;
    tic;
    xnew = NewtonSym(xstart, niters);
    time(niters, 1) = toc;
end
semilogy((1:maxits)', time)
So, from the MATLAB reference documentation on symbolic computations, the symbolic representation will always be in exact rational form, as opposed to a decimal approximation of a floating-point number [1]. The reason this is done, apparently, is "to help avoid rounding errors and representation errors" [2].
The exact representation cannot be avoided by just doing symbolic arithmetic. However, you can use Variable-Precision Arithmetic (vpa) in Matlab to evaluate results to a chosen number of significant digits [3].
For example
>> sym(pi)
ans =
pi
>> vpa(sym(pi))
ans =
3.1415926535897932384626433832795
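Applied to the expression from the question (a sketch, assuming X was created with syms X), vpa keeps the coefficients as short decimals instead of huge exact rationals:
syms X
P = [2, 2];  m = 1.2595;
SLine = vpa(m * (X - P(1)) + P(2), 8)   % approximately 1.2595*X - 0.519, to 8 significant digits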
References
[1] http://www.mathworks.com/help/symbolic/create-symbolic-numbers-variables-and-expressions.html
[2] https://en.wikibooks.org/wiki/MATLAB_Programming/Advanced_Topics/Toolboxes_and_Extensions/Symbolic_Toolbox
[3] http://www.mathworks.com/help/symbolic/vpa.html