What is -0.0 in Matlab?

I have been working on fitting a parabola through three points using the determinant method.
The coefficients that are returned are sometimes
-0.0000
What does this mean? Why is there a negative sign and what does it signify?

Try format long g to see more significant digits. The number is probably very slightly negative.
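For example (a minimal sketch; the coefficient values here are hypothetical, chosen so the last one is tiny and negative):
c = [2.1 -3.4 -1.1102230246251565e-16]   % displays as 2.1000 -3.4000 -0.0000 in format short
format long g
c                                        % the tiny negative value is now visible
format short                             % restore the default display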


Question about floating-point arithmetic in Matlab

When I compute the following in Matlab
myeps = abs(3*(4/3-1)-1);
format long e
eps_myeps = [eps ; myeps]
The output is as follows:
eps_myeps =
2.220446049250313e-16
2.220446049250313e-16
Why is myeps not 0? Why does this not hold when the base is 3 instead of 2?
The code 4/3 is 4/3 in math, but 4/3 is not exactly encodable as a floating point number. Finite binary floating point numbers are dyadic rationals (an integer times some power of 2), so a nearby value is used instead.*1 Much as we cannot write 4/3 exactly in decimal, only 1.3333333, stopping after so many digits.
In this case, the subtraction 4/3 - 1, the multiplication by 3, and the final subtraction all happen to be exact. Yet the stored quotient is not 4/3, and so the final result need not be 0.0.
*1 The double nearest 4/3:
decimal: 1.3333333333333332593184650249895639717578887939453125
hex: 0x1.5555555555555p+0
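You can see this directly (this just reruns the OP's computation with more digits shown):
format long
q = 4/3                 % 1.333333333333333: the nearest double, not exactly 4/3
fprintf('%.52f\n', q)   % prints the full stored value, 1.3333333333333332593...
3*(q - 1) - 1           % -2.220446049250313e-16, i.e. -eps (abs() then gives eps)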
I feel the OP is also searching for an answer to "How could I obtain 0 as the answer?", besides
"Why is myeps not 0?". So here it is. You'll need the Symbolic Math Toolbox for MATLAB.
You can create symbolic numbers by using sym. Symbolic numbers are
exact representations, unlike floating-point numbers.
So if you type 3*(sym(4)/sym(3)-1)-1 into MATLAB (once you have the toolbox), the answer will be exactly
0
Just mind the sym(4)/sym(3) part. If you try sym(4/3), MATLAB evaluates 4/3 in floating point first and only then converts the result to symbolic; depending on the conversion heuristic, that can lose precision and fail to produce 0 as the answer.
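A minimal sketch of the difference (requires the Symbolic Math Toolbox; the 'f' flag, if your version supports it, converts the double to its exact binary value rather than guessing a rational):
a = 3*(sym(4)/sym(3) - 1) - 1    % exact symbolic arithmetic: returns 0
b = 3*(sym(4/3, 'f') - 1) - 1    % keeps the exact double for 4/3: returns -1/4503599627370496, i.e. -eps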

Negative zeros in Matlab

Basically I wanted to ask two things:
Why does this happen? (negative zero in Matlab)
When does this happen?
I came across this in an already-answered question. Octave has some similarities with Matlab, so the usefulness of this feature is clear from that discussion, but one of the things said there is that negative zero does not appear in the default output, and I just ran into it right now. So, maybe there is some new insight on this?
As for the second question: in the answered question I referred to, they just said it could happen in some calculations, and in the following calculation, which I just did, it doesn't seem really necessary to use (or get) that negative zero.
The code where I encountered this is:
xcorr([1 0 1 1], [0 1 1 0 0])
whose output is:
-0.0000 -0.0000 1.0000 1.0000 1.0000 2.0000 1.0000 0.0000 0.0000
xcorr is a cross-correlation function, which performs only simple operations like sums and multiplications; its exact definition can be found here. Anyway, nothing like "complex branch cuts and transformations of the complex plane".
Thanks
These values do not represent zeros. Instead, they are negative values which are very close to zero.
The reason for getting these values and not simply zeros is the floating-point rounding performed inside the function implementation. According to the Matlab documentation: "xcorr estimates the cross-correlation sequence of a random process".
In other words, the values displayed as -0.0000 are actually tiny negative numbers.
In order to test this, you can change the display format of Matlab.
code:
format shortE;
xcorr([1 0 1 1], [0 1 1 0 0])
Result:
ans =
Columns 1 through 5
-6.2450e-017 -5.5511e-017 1.0000e+000 1.0000e+000 1.0000e+000
Columns 6 through 9
2.0000e+000 1.0000e+000 1.1102e-016 1.1796e-016
As you can see, the values in coordinates 1,2,8 and 9 are actually negative.
In the specific sample case, the -0.0000 values turned out to be tiny non-zero negative numbers. However, in an effort to keep printouts human-readable, Matlab and Octave try to avoid scientific notation. As a result, when mixed with larger numbers, small numbers are displayed as 0.0000 or -0.0000. This can be changed by setting the default display preferences in Matlab or Octave.
But this is NOT the only answer to the asked questions:
Why does this happen? (negative zero in Matlab),
When does this happen?
In fact, in Matlab and Octave, and in any computing environment that works with floating point, -0. really is a thing. This is not a bug in the computing environment, but rather part of the IEEE-754 standard for binary representation of floating point numbers, and it is a product of the floating point unit in the CPU (not of the programming language).
Whereas the binary representation of an integer does not have a separate sign bit, IEEE-754 reserves a bit for the sign, separate from the magnitude. So while the magnitude bits may encode 0, the sign bit is left to mean negative.
It happens whenever your CPU (Intel or AMD) computes a product of 0. and a negative number (including -0.). This is in fact required by IEEE-754: the sign of a product is the exclusive-or of the signs of its operands, even when the product is zero.
In either case, this -0. is almost a non-issue, in that IEEE-754 requires it to compare equal to 0. and to behave the same in ordinary arithmetic (the sign only becomes observable in a few operations, such as 1/-0., which yields -Inf). That is:
-0. < 0 --> FALSE
-0. == 0 --> TRUE
1+ -0. == 1 --> TRUE
etc...
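A minimal sketch showing both where -0. comes from and the few places its sign is observable:
z = -1 * 0          % negative zero: the sign of a product is the XOR of the operand signs
z == 0              % true, as required by IEEE-754
sprintf('%g', z)    % '-0': the sign bit survives into formatted output
1/z                 % -Inf: division is one operation where the sign of zero matters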

Matlab Bug in Sine function?

Has anyone tried plotting a sine function for large values in MATLAB?
For e.g.:
x = 0:1000:100000;
plot(x,sin(2*pi*x))
I was just wondering why the amplitude is changing for this periodic function. Since the function has a period of 2*pi, I expect the result to be exactly zero for every value of x in this sequence. Why is it not?
Does anyone know? Is there a way to get it right? Also, is this a bug and is it already known?
That's actually not the amplitude changing. It is due to the numerical imprecision of floating point arithmetic. Bear in mind that you are specifying an integer sequence from 0 to 100000 in steps of 1000. If you recall from trigonometry, sin(n*x*pi) = 0 when x and n are integers, and so theoretically you should be obtaining an output of all zeroes. In your case, n = 2, and x is a number from 0 to 100000 that is a multiple of 1000.
However, this is what I get when I use the above code in your post:
[plot omitted: sin(2*pi*x) against x, with a y-axis scale on the order of 10^-11]
Take a look at the scale of that graph: it's 10^-11. Do you know how small that is? As further evidence, here are the max and min values of that sequence:
>> min(sin(2*pi*x))
ans =
-7.8397e-11
>> max(sin(2*pi*x))
ans =
2.9190e-11
The values are so small that they might as well be zero. What you are visualizing in the graph is due to numerical imprecision. As I mentioned before, sin(n*x*pi) = 0 when n and x are integers, under the assumption that we have all of the decimal places of pi available. However, because we only have 64 bits total to represent pi numerically, you will certainly not get the result to be exactly zero. In addition, be advised that the sin function very likely uses numerical approximation algorithms (a Taylor / Maclaurin series, for example), so that could also contribute to the result not being exactly 0.
There are, of course, workarounds, such as using the Symbolic Math Toolbox (see @yoh.lej's answer). In that case you will get zero, but I won't focus on that here. Your post is questioning the accuracy of the sin function in MATLAB, which operates on numeric inputs. Theoretically, as your input into sin is an integer sequence, every value of x should make sin(n*x*pi) = 0.
BTW, this article is good reading; it is what every programmer needs to know about floating point arithmetic and functions: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. A simpler overview can be found here: http://floating-point-gui.de/
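As a purely numeric workaround (no symbolic toolbox needed): newer MATLAB releases also provide sinpi, which computes sin(pi*x) without ever forming the rounded product pi*x, assuming your version has it (it appeared around R2018b):
x = 0:1000:100000;
max(abs(sin(2*pi*x)))    % on the order of 1e-11: rounding error in the argument 2*pi*x
max(abs(sinpi(2*x)))     % exactly 0: the multiple of pi is never rounded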
Because what is the exact value of pi?
This apparent error is due to the limits of floating point accuracy. If you really need/want to get around that, you can do symbolic computation with Matlab. See the difference between:
>> sin(2*pi*10)
ans =
-2.4493e-15
and
>> sin(sym(2*pi*10))
ans =
0

Matlab: reverse of eps? Accuracy on positive weight?

eps returns the distance from 1.0 to the next largest double-precision number, so I can use it to judge the accuracy of digits in the low-order (fractional) positions of a number near 1. But for a very large number, whose significant digits sit in high-order positions, what can I use to judge the accuracy?
I mean that I need some reference against which to discount the computation noise in numbers obtained in Matlab.
Have you read "What Every Computer Scientist Should Know About Floating-Point Arithmetic"?
It discusses rounding error (what you're calling "computation noise"), the IEEE 754 standard for representation of floating-point numbers, and implementations of floating-point math on computers.
I believe that reading this paper would answer your question, or at least give you more insight into exactly how floating point math works.
Some clarifications to aid your understanding - too big to fit in the comments of @Richante's post:
Firstly, the difference between realmin and eps:
realmin is the smallest normalised floating point number. You can represent smaller numbers in denormalised form.
eps(x) is the increment between x and the next larger distinct number; eps on its own means eps(1). The two are related: realmin = eps(realmin) * 2^52.
"Normalised" and "denormalised" floating point numbers are explained in the paper linked above.
Secondly, rounding error is no indicator of how much you can "trust" the nth digit of a number.
Take, for example, this:
>> ((0.1+0.1+0.1)^512)/(0.3^512)
ans =
1.0000
We're dividing 0.3^512 by itself, so the answer should be exactly one, right? We should be able to trust every digit up to eps(1).
The error in this calculation is actually over 400 * eps:
>> ((0.1+0.1+0.1)^512)/(0.3^512) - 1
ans =
9.4591e-014
>> ans / eps(1)
ans =
426
The calculation error, i.e. the extent to which the nth digit is untrustworthy, is far greater than eps, the floating-point roundoff error in the representation of the answer. Note that we only did six floating-point operations here! You can easily rack up millions of FLOPs to produce one result.
I'll say it one more time: eps() is not an indicator of the error in your calculation. Do not attempt to display "My result is 1234.567 +/- eps(1234.567)". That is meaningless and deceptive, because it implies your numbers are more precise than they actually are.
eps, the rounding error in the representation of your answer, is only about one part in 10^16. Your real enemy is the error that accumulates every time you do a floating point operation, and that is what you need to track for a meaningful estimate of the error.
Easier to digest than the paper Li-aung Yip recommends would be the Wikipedia article on machine epsilon. Then read What Every Computer Scientist ...
Your question isn't very well worded, but I think you want something that gives the distance from a number to the next largest or next smallest double-precision number? If this is the case, then you can just use:
x = 100;
x + eps(x) %Next largest double-precision number
x - eps(-x) %Next smallest double-precision number
Double-precision numbers have a single sign bit, so counting up from a negative number is the same as counting down from a positive.
Edit:
According to help eps, "For all X, EPS(X) is equal to EPS(ABS(X))." At first this confused me, given that doubles are not equally spaced; but it is consistent with the single sign bit: the spacing pattern is symmetric about zero, and eps(x) is the gap from |x| up to the next larger magnitude. The only subtlety is at powers of 2, where the gap just below is half the gap just above.
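A small demonstration of how the spacing behaves; the power-of-2 boundary is the only subtlety:
x = 100;
x + eps(x)           % next double above 100
x - eps(x)           % next double below 100 (valid since 100 is not a power of 2)
eps(2) == eps(-2)    % true: the sign bit does not affect spacing
2 - eps(2)/2         % next double below 2: just under a power of 2 the gap halves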

det of a matrix returns 0 in matlab

I have been given a very large matrix (I cannot change its values) and I need to calculate the inverse of this (covariance) matrix.
Sometimes I get a warning saying
Matrix is close to singular or badly scaled.
Results may be inaccurate
In these situations I see that the value of the det returns 0.
Before calculating the inverse (of the covariance matrix) I want to check the value of the det and do something like this:
covarianceFea = cov(fea_class);
covdet = det(covarianceFea);
if covdet == 0
    covdet = covdet + .00001;
    % calculate the covariance using this new det
end
Is there any way to use the new det and then use this to calculate the inverse of the covariance matrix?
Sigh. Computing the determinant to determine singularity is a ridiculous thing to do, utterly so. Especially for a large matrix. Sorry, but it is. Why? Yes, some books tell you to do it. Maybe even your instructor.
Analytical singularity is one thing. But what about numerical determination of singularity? Unless you are using a symbolic tool, MATLAB uses floating point arithmetic. This means it stores numbers as floating point, double precision values. Those numbers cannot be smaller in magnitude than
>> realmin
ans =
2.2251e-308
(Actually, MATLAB goes a bit lower than that, in terms of denormalized numbers, which can go down to approximately 1e-323.) See that when I try to store a number smaller than that, MATLAB thinks it is zero.
>> A = 1e-323
A =
9.8813e-324
>> A = 1e-324
A =
0
What happens with a large matrix? For example, is this matrix singular:
M = eye(1000);
Since M is an identity matrix, it is fairly clearly non-singular. In fact, det does suggest that it is non-singular.
>> det(M)
ans =
1
But, multiply it by some constant. Does that make it singular? NO!!!!!!!!!!!!!!!!!!!!!!!! Of course not. But try it anyway.
>> det(M*0.1)
ans =
0
Hmm. That is odd. MATLAB tells me the determinant is zero. But we know that the determinant is 1e-1000. Oh, yes. Gosh, 1e-1000 is smaller, by a considerable amount, than the smallest number that I just showed you MATLAB can store as a double. So the determinant underflows, even though it is obviously non-zero. Is the matrix singular? Of course not. But does the use of det fail here? Of course it does, and this is completely expected.
Instead, use a good tool for the determination of singularity. Use a tool like cond, or rank. For example, can we fool rank?
>> rank(M)
ans =
1000
>> rank(M*.1)
ans =
1000
See that rank knows this is a full rank matrix, regardless of whether we scale it or not. The same is true of cond, which computes the condition number of M.
>> cond(M)
ans =
1
>> cond(M*.1)
ans =
1
Welcome to the world of floating point arithmetic. And oh, by the way, forget about det as a tool for almost any computation using floating point arithmetic. It is a poor choice almost always.
Woodchips has given you a very good explanation of why you shouldn't use the determinant. This seems to be a common misconception, and your question is closely related to another question on inverting matrices: Is there a fast way to invert a matrix in Matlab?, where the OP decided that because the determinant of his matrix was 1, it was definitely invertible! Here's a snippet from my answer:
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) is the product of the eigenvalues: det(A) = λ_1·λ_2·…·λ_n. So just setting λ_1 = M, λ_n = 1/M, and λ_i = 1 for all other i will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λ_n → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
You can test this in MATLAB with the following simple example:
A = eye(10);
A(1,1) = 1e15;  A(2,2) = 1e-15;   %# λ_1 = M, λ_2 = 1/M with M = 1e15
%# calculate determinant
det(A)
ans =
1
%# calculate condition number
cond(A)
ans =
1.0000e+30
In such a scenario, calculating an inverse is not a very good idea. If you just have to do it, I would suggest using this to increase display precision:
format long;
Another suggestion would be to compute an SVD of the matrix and tinker with the singular values there.
A = U*Σ*V'
inv(A) = V*inv(Σ)*U'
Σ is a diagonal matrix where you will see one of the diagonal entries close to 0. Try playing around with this number if you want some sort of an approximation.
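A hedged sketch of that idea, a truncated-SVD pseudo-inverse; the tolerance mirrors the default that rank uses, but treat it as a starting point rather than a prescription:
[U, S, V] = svd(A);
s = diag(S);                        %# singular values, largest first
tol = max(size(A)) * eps(max(s));   %# drop anything below this threshold
keep = s > tol;
Ainv = V(:, keep) * diag(1 ./ s(keep)) * U(:, keep)';
%# MATLAB's built-in pinv(A) performs essentially this truncation.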