Why do we have "is not a Number" (isNan) functions? - numbers

Many languages have an isNaN() function. I am asking myself: why check for not being a number?
Is the reason purely logical or is it faster to check for not a number instead of is a number?
Note that this is a pure question of understanding. I know that I can negate isNaN() to achieve an isNumber() function for example.
However I am searching for a reason WHY we are checking for not a number?

In computing, NaN (Not a Number) is a value of numeric data type representing an undefined or unrepresentable value, especially in floating-point calculations. (Wiki Article)
Because Not a Number is a special case of an expression.
You can't just use 0 or -1 or something like that because those numbers already have meanings.
Not a Number means something went awry in a calculation and no valid number could be produced from it.
It's on the same line of thinking as having null. Sure, we could assign an arbitrary numerical value to mean null but it would be confusing and we'd hit all sorts of weird errors on corner cases.
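For concreteness, here is a small sketch (in Matlab, but any language with IEEE floating point behaves the same way) of calculations that go awry and therefore yield NaN:
x = 0/0          % NaN: 0/0 has no defined value
y = Inf - Inf    % NaN: the difference of two infinities is undefined
isnan(x)         % returns 1 (true)
isnan(-1)        % returns 0: -1 is a perfectly valid number, so it could never double as an "invalid" marker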

'Not a Number' is the result of specific floating-point calculations. It's not about "hey, is this variable holding 120 or 'abc'?"

isThisCaseExceptional seems more reasonable to me than isEverythingNormal because I'm likely to write
possible_number = some_calculation();
if (isNaN(possible_number)) handle_the_surprise;
// .. keep going
instead of
possible_number = some_calculation();
if (isANumber(possible_number)) {
    // .. keep going
} else {
    // handle the surprise
}

The check is for whether it is not a number, because the assumption is that it is a number. NaN is the exceptional case when you're expecting a numeric value, so it makes sense to me that it is done this way. I'd rather check for isNaN infrequently than check if a value isNum frequently, after all.

It is a way to report an error condition (mathematically undefined or outside of technical limits).

Many languages have special representations for results of a computation that cannot be represented by a number, for example NaN and Inf. So isNaN() checks whether a result is an actual number or the special NaN marker.

NaNs are used for:
Nonreal or undefined values.
Error handling. Initializing a variable with NaN allows you to test for NaN and make sure it has been set with a valid value (a sketch follows this list).
Objects that ignore a member or property. For example, WPF uses NaN to represent the Width and Height of visual elements that do not define their own size.
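A small sketch in Matlab of that error-handling use: pre-fill a result vector with NaN, and anything still NaN afterwards was never assigned a valid value.
results = NaN(1, 10);        % hypothetical result vector, pre-filled with NaN
results(3) = 42;             % only some entries actually get computed
unset = isnan(results);      % logical mask: true wherever no valid value was stored
any(unset)                   % returns 1 here, flagging that some results are missing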

Related

How to retain the decimal part in Redshift when the decimal part is all zeros

In my code we are using a multiplication such as (180.00*60.00).
Since both numbers are of decimal type, I expect the answer to be a decimal as well. But in Redshift I observed that this expression results only in an integer.
Basically, when this multiplication happens I want the answer as 10800.0000 and not 10800.
How do I achieve this in Redshift? I tried putting the cast on the whole expression and also on the individual values, but I still get the answer only as an integer.
Any help here?
If I understand your question correctly, you are asking why Redshift isn't treating the expression 180.00*60.00 as the multiplication of two decimal-typed values, yes? If so, you just need to cast the input values of the multiplication. For example:
select 180.00::decimal(6,2) * 60.00::decimal(6,2) ;
It also looks like you want the result to have four digits after the decimal point, so you may want to cast the result to something like decimal(9,4).
If I've missed the point of your question please clarify.

Matlab function NNZ, numerical zero

I am working on a code in Least Square Non Negative solution recovery context on Matlab, and I need (with no more details because it's not that important for this question) to know the number of non zero elements in my matrices and arrays.
The function nnz in Matlab does exactly what I want, but I need more information about what Matlab considers a "zero element": is it only 0 itself, or also a numerical zero like 1e-16 or smaller?
Does anybody have this information about the nnz function? I couldn't get hold of the original source.
Thanks.
PS : I am not an expert on Matlab, so accept my apologies if it's a really simple task.
I tried "open nnz", on Matlab but I only get a small script of commented code lines...
Since nnz counts everything that isn't an exact zero (i.e. 1e-100 is non-zero), you just have to apply a relational operator to your data first to find how many values exceed some tolerance around zero. For a matrix A:
n = nnz(abs(A) > 1e-16);
Also, this discussion of floating-point comparison might be of interest to you.
You can add in a tolerance by doing something like:
nnz(abs(myarray)>tol);
This will create a binary array that is 1 when abs(myarray)>tol and 0 otherwise and then count the number of non-zero entries.
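To make the difference concrete, a small sketch with made-up values:
A = [1e-20, 0, 2; -3, 1e-17, 0];   % hypothetical matrix with some "numerically zero" entries
nnz(A)                              % 4: counts every entry that is not exactly 0, however tiny
nnz(abs(A) > 1e-16)                 % 2: counts only entries whose magnitude exceeds the tolerance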

Subtracting matrices of unequal dimension

When I try to do the following command I get an error.
err = sqrt(mean((xi256-xc256).^2))
I am aware that the matrix sizes are different.
whos xi256 xc256 gives:
Name       Size       Bytes   Class    Attributes
xc256      27x1         216   double
xi256      513x1       4104   double
I am supposed to find the difference of these two matrices. In fact the command given at the top was in the course notes, and the course has been running for several years! I have tried googling ways to resolve this error and perform the subtraction regardless, but have found no solution. Maybe one of the matrices can be scaled to match the dimensions of the other? However, I have not been able to find any such functions that would let me do this.
I need to find the RMS error in a given set of data. xc256 was calculated through a numerical method, xi256 gives the true value.
Edit: I was able to use another set of results.
First check that xc256 is correctly computed and that you do not have a matrix transposition gone wrong or something like that. Computing the RMS between two vectors of different sizes does not make sense, and padding or replicating will get rid of the error, but is most probably not what you actually want.
There are just two situations that I can think of, I will list them here:
The line is wrong (not likely as it looks pretty normal, but make sure to check the book!)
The input of the line is wrong
Assuming it is in point 2, again there are two possibilities:
xi256 is of incorrect size (the likelihood of this depends on how you got it)
xc256 is of incorrect size
Assuming it is again point 2, there are yet again 2 likely possibilities:
xc256 should be a scalar
xc256 should be a vector with the same size as xi256
From here on there is insufficient information to continue, but check whether you accidentally made it 27 times as long, or 19 times too short somewhere. Just use some breakpoints throughout the code to see how the size develops.
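If it helps, a minimal sketch (using the variable names from the question) that makes the mismatch fail loudly at the point where it matters, instead of inside the RMS formula:
assert(isequal(size(xi256), size(xc256)), ...
    'Size mismatch: xi256 is %dx%d, xc256 is %dx%d', ...
    size(xi256,1), size(xi256,2), size(xc256,1), size(xc256,2));
err = sqrt(mean((xi256 - xc256).^2));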

Test if a floating point number is an integer in Matlab

My Question - part 1: What is the best way to test if a floating point number is an "integer" (in Matlab)?
My current solution for part 1: Obviously, isinteger is out, since this tests the type of an element, rather than the value, so currently, I solve the problem like this:
abs(round(X) - X) <= sqrt(eps(X))
But perhaps there is a more native Matlab method?
My Question - part 2: If my current solution really is the best way, then I was wondering if there is a general tolerance that is recommended? As you can see from above, I use sqrt(eps(X)), but I don't really have any good reason for this. Perhaps I should just use eps(X), or maybe 5 * eps(X)? Any suggestions would be most welcome.
An Example: In Matlab, sqrt(2)^2 == 2 returns False. But in practice, we might want that logical condition to return True. One can achieve this using the method described above, since sqrt(2)^2 actually equals 2 + eps(2) (i.e. well within the tolerance of sqrt(eps(2))). But does this mean I should always use eps(X) as my tolerance, or is there good reason to use a larger tolerance, such as 5 * eps(X), or sqrt(eps(X))?
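To illustrate that example, a quick sketch at the Matlab prompt:
x = sqrt(2)^2;
x == 2                             % false: x is actually 2 + eps(2)
abs(round(x) - x) <= eps(x)        % true: within one ulp of the nearest integer
abs(round(x) - x) <= sqrt(eps(x))  % true: the looser tolerance from the question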
UPDATE (2012-10-31): @FakeDIY pointed out that my question is partially a duplicate of this SO question (apologies, not sure how I missed it in my initial search). Given this I'd like to emphasize the "tolerance" part of the question (which is not covered in that link), ie is eps(X) a sensible tolerance, or should I use something larger, like 5 * eps(X), and if so, why?
UPDATE (2012-11-01): Thanks everyone for the responses. I've +1'ed all three answers as I feel they all contribute meaningfully to various aspects of the question. I'm giving the answer tick to Eric Postpischil as that answer really nailed the tolerance part of the question well (and it has the most upvotes at this point in time).
No, there is no general tolerance that is recommended, and there cannot be.
The difference between a computed result and a mathematically ideal result is a function of the operations that produced the computed result. Because those operations are specific to each application, there is no general rule for testing any property of a computed result.
To design a proper test, you must determine what errors may have occurred during computation, determine bounds on the resulting error in the computed result, and test whether the computed result differs from the ideal result (perhaps the nearest integer) by less than those bounds. You must also decide whether those bounds are sufficiently small to satisfy your application’s requirements. (Using a relaxed test that accepts as an integer something that is not an integer decreases false negatives [incorrect rejections of a result as an integer where the ideal result would be an integer] but increases false positives [incorrect acceptances of a result as an integer where the ideal result would not be an integer].)
(Note that it can even be the case that testing as if the error bounds were zero can produce false negatives: It is possible a computation produces a result that is exactly an integer when the ideal result is not an integer, so any error tolerance, even zero, will falsely report this result is an integer. If this is unacceptable for your application, then, in such a case, the computations must be redesigned.)
It is not only not possible to state, without specific knowledge of the application, a numerical tolerance that may be used, it is impossible to state whether the tolerance should be absolute, should be relative to the computed value or to a target value, should be measured in ULPs (units of least precision), or should be set in some other manner. This is because errors may be introduced into computations in a variety of ways. For example, if there is a small relative error in a and a and b are close in value, then a-b has a large relative error. Additionally, if c is large, then (a-b)*c has a large absolute error.
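A small sketch of why the right tolerance depends on the computation: the same "should be an integer" value can drift by far more than one ulp, depending on how it was produced.
x = 0;
for k = 1:1000
    x = x + 0.1;                   % accumulate rounding error term by term
end
% x should be exactly 100, but on a typical IEEE double system the accumulated error is around 1e-12
abs(x - round(x)) <= eps(x)        % false: a one-ulp tolerance rejects it
abs(x - round(x)) <= sqrt(eps(x))  % true: the looser sqrt(eps) tolerance accepts it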
It's probably not the most efficient method, but I would use mod for this:
a = 15.0000000000;
b = mod(a,1.0)
c = 15.0000000001;
d = mod(c,1.0)
returns b = 0 and d = 1.0000e-010
There are a number of other alternatives suggested here:
How do I test for integers in MATLAB?
I like the idea of comparing (x == floor(x)) too.
1) I have historically used your method with a simple tolerance, eps(X). The mod methods interested me though, so I benchmarked a couple using Steve Eddins' timeit function.
f = @() abs(X - round(X)) <= eps(X);
g = @() X == round(X);
h = @() ~mod(X,1);
For single values, like X=1.0, yours appears to fastest:
timeit(f) = 7.3635e-006
timeit(g) = 9.9677e-006
timeit(h) = 9.9214e-006
For vectors though, like X = 1:0.01:100, the other methods are faster (though round still beats mod):
timeit(f) = 0.00076636
timeit(g) = 0.00028182
timeit(h) = 0.00040539
2) The error bound is really problem dependent. Other answers cover this much better than I am able to.

Problem using the find function in MATLAB

I have two arrays of data that I'm trying to amalgamate. One contains actual latencies from an experiment in the first column (e.g. 0.345, 0.455... never more than 3 decimal places), along with other data from that experiment. The other contains what is effectively a 'look up' list of latencies ranging from 0.001 to 0.500 in 0.001 increments, along with other pieces of data. Both data sets are X-by-Y doubles.
What I'm trying to do is something like...
for i = 1:length(actual_latency)
    row = find(predicted_data(:,1) == actual_latency(i))
    full_set(i,1:4) = [actual_latency(i) other_info(i) predicted_info(row,2) ...
                       predicted_info(row,3)];
end
...in order to find the relevant row in predicted_data where the look up latency corresponds to the actual latency. I then use this to created an amalgamated data set, full_set.
I figured this would be really simple, but the find function keeps failing by throwing up an empty matrix when looking for an actual latency that I know is in predicted_data(:,1) (as I've double-checked during debugging).
Moreover, if I replace find with a for loop to do the same job, I get a similar error. It doesn't appear to be systematic - using different participant data sets throws it up in different places.
Furthermore, during debugging mode, if I use find to try and find a hard-coded value of actual_latency, it doesn't always work. Sometimes yes, sometimes no.
I'm really scratching my head over this, so if anyone has any ideas about what might be going on, I'd be really grateful.
You are likely running into a problem with floating point comparisons when you do the following:
predicted_data(:,1) == actual_latency(i)
Even though your numbers appear to only have three decimal places of precision, they may still differ by very small amounts that are not being displayed, thus giving you an empty matrix since FIND can't get an exact match.
One feature of floating point numbers is that certain numbers can't be represented exactly, since they cannot be written as a finite sum of powers of 2. This occurs with the numbers 0.1 and 0.001. If you repeatedly add or multiply one of these numbers you can see some unexpected behavior. Amro pointed out one example in his comment: 0.3 is not exactly equal to 3*0.1. This can also be illustrated by creating your look-up list of latencies in two different ways. You can use the normal colon syntax:
vec1 = 0.001:0.001:0.5;
Or you can use LINSPACE:
vec2 = linspace(0.001,0.5,500);
You'd think these two vectors would be equal to one another, but think again!:
>> isequal(vec1,vec2)
ans =
0 %# FALSE!
This is because the two methods create the vectors by performing successive additions or multiplications of 0.001 in different ways, giving ever so slightly different values for some entries in the vector. You can take a look at this technical solution for more details.
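A quick sketch to see just how small (but nonzero) the discrepancy is:
max(abs(vec1 - vec2))      % a tiny but nonzero value, on the order of eps
find(vec1 ~= vec2, 3)      % the first few indices where the two vectors disagree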
When comparing floating point numbers, you should therefore do your comparisons using some tolerance. For example, this finds the indices of entries in the look-up list that are within 0.0001 of your actual latency:
tolerance = 0.0001;
for i = 1:length(actual_latency)
row = find(abs(predicted_data(:,1) - actual_latency(i)) < tolerance);
...
The topic of floating point comparison is also covered in this related question.
You may try to do the following:
row = find(abs(predicted_data(:,1) - actual_latency(i)) < eps);
eps is the floating-point relative accuracy in Matlab.
Have you tried using a tolerance rather than == ?
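Another option worth sketching: since the latencies are stated to have at most three decimal places, you can sidestep the floating point comparison entirely by comparing them as scaled integers (the variable names below are the ones from the question):
actual_ms    = round(actual_latency * 1000);       % e.g. 0.345 -> 345
predicted_ms = round(predicted_data(:,1) * 1000);  % look-up latencies as whole milliseconds
for i = 1:length(actual_latency)
    row = find(predicted_ms == actual_ms(i));
    full_set(i,1:4) = [actual_latency(i) other_info(i) predicted_info(row,2) ...
                       predicted_info(row,3)];
end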