Negative zeros in Matlab

Basically I wanted to ask two things:
Why does this happen? (negative zero in Matlab)
When does this happen?
I came across this. Octave has some similarities with Matlab, so the usefulness of this feature is clear, but one of the things said there is that negative zero does not appear in the default output, and I just ran into it right now. So, maybe there is a new insight on this?
For the second question, in the answered question I referred to, they just said it could happen in some calculations, and in the following calculation, which I just did, it doesn't seem really necessary to use (or get) that negative zero.
The code where I encountered this is:
xcorr([1 0 1 1], [0 1 1 0 0])
whose output is:
-0.0000 -0.0000 1.0000 1.0000 1.0000 2.0000 1.0000 0.0000 0.0000
xcorr is actually a cross-correlation function, which does only simple operations like sums and multiplications; its exact details can be found here. Anyway, nothing like "complex branch cuts and transformations of the complex plane".
Thanks

These values do not represent zeros. Instead, they are negative values which are very close to zero.
The reason for getting these values and not simply zeros is due to approximations which are performed in the function implementation. According to Matlab documentation: "xcorr estimates the cross-correlation sequence of a random process".
In other words, the values displayed on the screen are just rounded representations of tiny negative numbers.
In order to test this, you can change the display format of Matlab.
code:
format shortE;
xcorr([1 0 1 1], [0 1 1 0 0])
Result:
ans =
Columns 1 through 5
-6.2450e-017 -5.5511e-017 1.0000e+000 1.0000e+000 1.0000e+000
Columns 6 through 9
2.0000e+000 1.0000e+000 1.1102e-016 1.1796e-016
As you can see, the values in coordinates 1 and 2 are actually tiny negative numbers, and those in coordinates 8 and 9 are tiny positive numbers; none of them is an exact zero.
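For comparison, computing the cross-correlation directly from the sum definition keeps everything in exactly-representable integers, so the zeros come out exact. This is only a minimal sketch of the definition, not the actual xcorr implementation (which may use an FFT internally, which is where round-off like the above creeps in):
% Direct cross-correlation: r(m) = sum over n of x(n+m)*y(n),
% with the shorter input zero-padded to the length of the longer one.
x = [1 0 1 1 0];   % original [1 0 1 1], zero-padded to length 5
y = [0 1 1 0 0];
N = numel(x);
r = zeros(1, 2*N-1);
for m = -(N-1):(N-1)
    for n = max(1, 1-m):min(N, N-m)   % keep the index n+m inside 1..N
        r(m+N) = r(m+N) + x(n+m)*y(n);
    end
end
disp(r)   % 0 0 1 1 1 2 1 0 0 -- exact, no -0.0000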

In the specific sample case, the -0.0000 values turned out to actually be tiny non-zero negative numbers. However, in an effort to make printouts human readable, Matlab and Octave try to avoid scientific notation. As a result, when mixed with big numbers, small numbers are displayed as 0.0000 or -0.0000. This can be changed by setting the default display preferences in Matlab or Octave.
But this is NOT the only answer to the asked questions:
Why does this happen? (negative zero in Matlab),
When does this happen?
In fact, in Matlab and Octave, and in any computing environment that works with floating point, -0. really is a thing. This is not a bug in the computing environment, but rather an allowance in the IEEE-754 standard for binary representation of floating point numbers, and it is a product of the floating point unit in the CPU (not of the programming language).
Whereas the binary representation of an integer has no dedicated sign bit, IEEE-754 reserves one bit for the sign, separate from the magnitude. So while the magnitude bits may mean 0, the sign bit is left to mean negative.
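You can observe that sign bit directly by reinterpreting the bits of a double at the MATLAB prompt:
typecast(0, 'uint64')    % ans = 0: all 64 bits clear
typecast(-0, 'uint64')   % ans = 9223372036854775808 = 2^63: only the sign bit set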
It happens whenever your CPU (Intel and AMD alike) computes a product of 0. and any negative number (including -0.). This is in fact required by IEEE-754, not merely a CPU design optimization: the sign of a multiplication result is the exclusive OR of the signs of the operands, even when the result is zero.
Either way, this -0. is a non-issue in that IEEE-754 requires it to behave, in comparisons and arithmetic, exactly the same as 0.. That is:
-0. < 0 --> FALSE
-0. == 0 --> TRUE
1 + -0. == 1 --> TRUE
etc...
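These can be checked at the MATLAB prompt; division is one place where the sign of zero does become observable:
z = -0;       % negative zero
z < 0         % ans = 0 (false)
z == 0        % ans = 1 (true)
1 + z == 1    % ans = 1 (true)
1/z           % ans = -Inf, revealing the sign bit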

Related

Difference between NAN & INF in MATLAB?

In MATLAB, INF means infinity and NAN means Not a Number
Apparently both of these seem similar, but there must be some difference because of which MATLAB represents them separately?
Why is the output Inf in some operations, while in others it is NaN?
Mathematically they are two different concepts. While infinity is not a real number in real analysis, it does represent an unbounded limit and is part of the set of extended real numbers. That makes infinity a symbol that can result from a mathematical operation, i.e. it is a valid answer.
A NaN, on the other hand, represents a result that is not numerical.
I like to initialize floating point arrays to NaN when they are allocated, because it makes certain bugs (e.g. off-by-one errors) obvious.
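A few operations that illustrate the distinction, plus the initialization trick (a small sketch; the array and its size are arbitrary):
1/0             % ans = Inf: an unbounded limit, a valid extended-real answer
log(0)          % ans = -Inf
0/0             % ans = NaN: indeterminate, not a numerical result
Inf - Inf       % ans = NaN
a = nan(1, 5);  % initialize to NaN so unwritten elements stand out
a(2:4) = [10 20 30];
find(isnan(a))  % ans = [1 5]: these elements were never assigned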

Explanation on the particular result of MATLAB determinant function

I observed a simple, yet particular behavior when using the determinant function in MATLAB and I would like to get some explanations as I didn't find anything about it in the function help documentation.
I'm generating a random unitary matrix Q with the following code:
[Q, R] = qr(randn(3));
After that, I evaluate the determinant of Q with the det function:
det(Q)
I would expect the result to be -1.000 or 1.000. However, the format doesn't seem to be constant. So when I do something like this:
detResults = zeros(100,1);
for ii = 1:100
[Q, R] = qr(randn(3));
detResults(ii,1) = det(Q);
end
the detResults vector contains 1.0000 and sometimes 1. Is it just a print format issue or is it caused by something else?
It's related to floating-point precision. Each time you iterate through that loop, even though the Q matrix theoretically has a determinant of 1, its entries are irrational in exact arithmetic, so the only way to get exactly 1 would be to carry infinite precision through the computation. With 64-bit doubles, the computed determinant sometimes rounds to exactly 1 and sometimes lands a few units in the last place away from it.
Also, you're not getting the full picture. The reason why you see both 1.0000 and 1 is related to the print format: the default format shows only four decimal places, but it may be prudent to show more to appreciate the bigger picture.
Here's a small example with using only 10 iterations instead of 100.
Using the default print format:
>> detResults
detResults =
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
Using a format of increased precision (just for display purposes) with format long g:
>> format long g;
>> detResults
detResults =
1
0.999999999999999
1
1
0.999999999999999
1
1
0.999999999999999
1
0.999999999999999
Internally, it really depends on what the Q matrix is and what you get out of the bag when you generate the random matrices. However, as far as precision goes for using these for further calculations, 0.999... is very close to 1 so for all intents and purposes you should consider it equal to 1.
I believe that you are observing the effect of the finite precision of the floating-point number representation. By default, MATLAB uses 64-bit floating point numbers, so only a finite set of numbers, with at most 2^64 unique elements, can be represented exactly by this system. All other numbers arising during intermediate computations are rounded to the nearest representable values. These rounding operations introduce errors, which are negligible for most, but not all, applications.
You can estimate the errors in your results by appending this line to your code:
err = detResults - 1;
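Summarizing that error vector shows just how small the deviations are (the exact magnitudes vary from run to run):
max(abs(err))   % typically around 1e-15, i.e. a few multiples of eps = 2^-52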
A simple example to observe the finite-precision artifact is:
2-(sqrt(2))^2
Obviously, this should be exactly 0. But MATLAB returns a small non-zero number because of rounding errors in the square-root and squaring steps.
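At the prompt, on a machine with IEEE-754 doubles, this gives something like:
>> 2 - (sqrt(2))^2
ans =
  -4.4409e-16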

Matlab Bug in Sine function?

Has anyone tried plotting a sine function for large values in MATLAB?
For e.g.:
x = 0:1000:100000;
plot(x,sin(2*pi*x))
I was just wondering why the amplitude changes for this periodic function. Since the function has a period of 2*pi, I would expect sin(2*pi*x) to be zero for every integer x. Why is it not?
Does anyone know? Is there a way to get it right? Also, is this a bug and is it already known?
That's actually not the amplitude changing. That is due to the numerical imprecisions of floating point arithmetic. Bear in mind that you are specifying an integer sequence from 0 to 100000 in steps of 1000. If you recall from trigonometry, sin(n*x*pi) = 0 when x and n are integers, and so theoretically you should be obtaining an output of all zeroes. In your case, n = 2, and x is a number from 0 to 100000 that is a multiple of 1000.
However, this is what I get when I run the code from your post: a plot that appears to have a varying amplitude, until you notice its scale. Take a look at the scale of that graph: it's 10^-11. Do you know how small that is? As further evidence, here are the max and min values of that sequence:
>> min(sin(2*pi*x))
ans =
-7.8397e-11
>> max(sin(2*pi*x))
ans =
2.9190e-11
The values are so small that they might as well be zero. What you are visualizing in the graph is due to numerical imprecision. As I mentioned before, sin(n*x*pi) = 0 when n and x are integers, under the assumption that we have all of the decimal places of pi available. However, because we only have 64 bits in total to represent pi numerically, you will certainly not get the result to be exactly zero. In addition, be advised that the sin function very likely uses numerical approximation algorithms (a Taylor / Maclaurin series, for example), and so that could also contribute to the result not being exactly 0.
There are, of course, workarounds, such as using the Symbolic Math Toolbox (see yoh.lej's answer below). In that case you will get zero, but I won't focus on that here. Your post questions the accuracy of the sin function in MATLAB, which works on numeric inputs. Theoretically, since your input to sin is an integer sequence, every value of x should make sin(n*x*pi) = 0.
BTW, this article is good reading. This is what every programmer needs to know about floating point arithmetic and functions: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. A more simple overview can be found here: http://floating-point-gui.de/
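Another purely numeric workaround, if your MATLAB is recent enough, is sinpi (added, if I remember correctly, around R2018b), which keeps the factor of pi implicit, so the inexact product 2*pi*x is never formed:
x = 0:1000:100000;
max(abs(sin(2*pi*x)))   % ~1e-11: round-off from the rounded double value of 2*pi
max(abs(sinpi(2*x)))    % exactly 0: integer arguments hit the zeros of sin(pi*t) exactly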
Because what is the exact value of pi?
This apparent error is due to the limit of floating point accuracy. If you really need/want to get around that you can do symbolic computation with matlab, see the difference between:
>> sin(2*pi*10)
ans =
-2.4493e-15
and
>> sin(sym(2*pi*10))
ans =
0

What is -0.0 in matlab?

I have been working on finding a parabola through three points by using the determinant method.
The coefficients that are returned are sometimes
-0.0000
What does this mean? Why is there a negative sign and what does it signify?
Try format long g to see more significant digits. The number is probably very slightly negative.
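A minimal way to reproduce this display behavior (the variable name v is arbitrary): entries of an array share one display scale, so a tiny negative entry prints as -0.0000 under the default format:
>> v = [1 -1e-17]
v =
    1.0000   -0.0000
>> format long g
>> v
v =
     1    -1e-17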

Floating point precision matlab filter2

I'm having round-off errors caused by filter2.
Here is a minimal code example:
format long
x=[ 0 0 0 0 0
64 65 72 74 72
104 111 109 106 112];
h=[ 0 0 0 0 0
0 0.500000000000000 0 0.500000000000000 0
0 0 0 0 0]
y=filter2(h,x, 'valid')
y_= x(2,2)/2 + x(2,4)/2
y__= sum(sum(x .* h))
round(y)
round(y_)
round(y__)
results in
y = 69.499999999999986
y_ = 69.500000000000000
y__ = 69.500000000000000
ans = 69
ans = 70
ans = 70
I'm guessing this is a result of doing the filtering in the FFT domain (or something similar). Unfortunately, this is giving me issues when verifying the test vectors I'm generating against an FPGA implementation.
Any ideas how to fix/avoid this error?
PS I'm using matlab 2007b.
Edit: 2007a to 2007b
Edit2: Added y__ example
It is expected behavior that operations performed with floating-point arithmetic produce approximate results. Floating-point arithmetic uses a fixed number of bits to represent numbers, and the result of each floating-point operation is rounded to the nearest representable number (unless another rounding mode is set).
In particular, performing an FFT requires sines and cosines of many values, and the sines and cosines are not exactly representable, and the arithmetic upon those values and the values in the FFT data produces many intermediate results that are not exactly representable. Consequently, the results of FFTs are expected to be approximate.
Studies of errors in floating-point FFTs have shown the error behavior is generally good. However, you cannot expect that a result will land on the “correct” side of 69.5 to cause the rounding you desire. Essentially, it is an error to expect that rounding the results of an FFT will produce exact results.
Generally, using floating-point formats with greater precision can reduce the magnitudes of errors. Thus, using greater precision could produce FFT results closer to ideal results.
However, consider what is necessary to make rounding work. Any number 69.5 or slightly greater will round to 70. Any number slightly less than 69.5 will round to 69, which you do not want. Therefore, to round as you desire, no error that produces a number less than 69.5 is acceptable, regardless of how small that error is. However, every floating-point format has some error. Therefore, there is no precision that is guaranteed to produce results that can be rounded in the way you desire. (Errors can be controlled somewhat by setting rounding modes, to cause rounding upward or downward as desired. However, the FFT is a complex operation, and getting the desired rounding in the final product would require controlling the rounding in every intermediate operation and is impractical.)
So, a floating-point FFT will not produce the results you desire. Some options available to you are:
Change your expectations; do not expect the filter to produce results identical to exact arithmetic.
Perform the filter using direct arithmetic, not an FFT. (I do not use Matlab and cannot advise on good ways to do this in Matlab.) Note that doing this with floating-point arithmetic will produce exact results only so long as all intermediate values are representable in the floating-point format. This is generally true only for small filters with “simple” values in the filter and in the data. (For this purpose, and supposing a binary floating-point format, “simple” values are those representable with just a few bits in the fraction portion of the floating-point format, i.e., those that are sums of a few integer powers of two that are close to each other, such as .625, which is 2^-1 + 2^-3.) A sketch of this approach for the kernel in question follows this list.
Use exact arithmetic. Some mathematics software, such as Maple and Mathematica, support exact arithmetic. As far as I know, Matlab does not. This generally requires performing the filter with direct arithmetic, not an FFT, since performing an FFT exactly requires greater mathematical capabilities (manipulating sines and cosines exactly).
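Here is the direct-arithmetic sketch promised above, specialized to this particular kernel (it is not a general replacement for filter2). Since 0.5 and all the data values are exactly representable in binary floating point, every step is exact:
x = [  0   0   0   0   0
      64  65  72  74  72
     104 111 109 106 112];
y_direct = 0.5*x(2,2) + 0.5*x(2,4)   % 69.500000000000000, exact
round(y_direct)                      % 70, as desired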
Since you state this is for testing, I suggest that you either:
Allow small errors in the results. Small errors are typical of floating-point arithmetic and generally do not indicate errors in the filter you are testing. The bugs you are looking for in your tests will generally produce large and numerous errors.
Use direct arithmetic, if it is sufficient, or exact arithmetic if necessary. This will consume more processor time. However, testing generally does not need to be high-performance, so it is okay to use more processor time.
The standard way to deal with this is to avoid floating point comparison.
Rather than checking whether two things are equal, check whether the absolute difference is smaller than some epsilon.
So if you want to see whether your two numbers match, you can check:
abs(y - y_) < tol
for some suitably small tolerance tol.
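For example (tol here is a hypothetical tolerance that you would choose based on the scale of your data):
tol = 1e-9;
if abs(y - y_) < tol
    disp('y and y_ match within tolerance')
end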