Scala Has Infinity but no Infinitesimal. Why?

Open a Scala interpreter.
scala> 1E-200 * 1E-200
res1: Double = 0.0
scala> 1E200 * 1E200
res2: Double = Infinity
A very large product evaluates to Infinity, while a very small one evaluates to zero.
Why not be symmetrical and create something called Infinitesimal?

Basically this has to do with the way floating point numbers work, which has more to do with your processor than with Scala. The small product is so small that the closest representable value is +0 (positive zero), so it underflows to 0.0. The large product overflows past the largest finite representation and is replaced with +inf (positive infinity). Remember that floating point numbers are a fixed-precision approximation. If you want a system that is more exact, you can use scala.math.BigDecimal: http://www.scala-lang.org/api/2.11.8/#scala.math.BigDecimal
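To see both edges of the format, here is a minimal REPL sketch (the res numbering and prompt are assumed; the values themselves are standard IEEE 754 behaviour):
scala> Double.MinPositiveValue
res0: Double = 4.9E-324
scala> Double.MinPositiveValue / 2 // underflow: rounds to +0.0
res1: Double = 0.0
scala> Double.MaxValue * 2 // overflow: no finite Double is close enough
res2: Double = Infinity
scala> BigDecimal("1E-200") * BigDecimal("1E-200") // arbitrary precision keeps the magnitude
res3: scala.math.BigDecimal = 1E-400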

Scala, just like Java, follows the IEEE specification for floating point numbers, which does not define infinitesimals. I'm not sure an Infinitesimal value would make much sense anyway: IEEE 754 approximates the real numbers, and the reals contain no nonzero infinitesimals, so there is no number for such a value to stand for.

Related

Scala Simple calculation error [duplicate]

Hi, I am new to Scala and the following behavior is really weird. Is Scala making a mistake even in this simple calculation, or am I doing something wrong?
Thanks,
$ scala
Welcome to Scala 2.12.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_111).
Type in expressions for evaluation. Or try :help.
scala> val radius:Double = 10.2
radius: Double = 10.2
scala> radius * 10
res0: Double = 102.0
scala> radius * 100
res1: Double = 1019.9999999999999
scala> radius * 1000
res2: Double = 10200.0
A Double in Scala is a 64-bit IEEE-754 floating point number equivalent to Java's double primitive type, as described in Scala's doc.
By the nature of floating point numbers, a Double doesn't necessarily store the exact value you wrote. It boils down to the fact that decimal fractions can't always be represented precisely in binary.
For better precision, you might want to consider using BigDecimal.
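As a hedged sketch of that route (assumed REPL output), construct the BigDecimal from the string literal so the decimal value is stored exactly rather than going through a Double first:
scala> val radius = BigDecimal("10.2")
radius: scala.math.BigDecimal = 10.2
scala> radius * 100
res0: scala.math.BigDecimal = 1020.0
scala> radius * 1000
res1: scala.math.BigDecimal = 10200.0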
Real numbers are represented by Float (32-bit) and Double (64-bit). These are binary formats which only approximately represent the complete range of Real numbers. Real numbers include all the Rationals (countably infinite) and the Irrationals (uncountably infinite). A Double can at best represent only a small, finite subset of the Rationals.
The IEEE-754 double precision floating point encoding uses a 53-bit mantissa (52 bits explicitly stored), an 11-bit exponent, and 1 sign bit. Read about IEEE double precision floating point here.
Every integer with absolute value up to 2^53 can be represented exactly, while the exponent covers magnitudes up to roughly 10^+/-308.
Beyond 2^53 not every integer can be expressed exactly; only those that happen to be multiples of a suitable power of two still are.
You cannot express fractions exactly which are not of the form k/2^n, where k, n are integers (and k, n are within above limits). Thus, you cannot represent 1/3, 1/5, 1/7, 1/11, etc.
Any rational number whose denominator in lowest terms is not a power of two cannot be exactly represented. Equivalently, any fraction k/P, where P is a product of primes other than 2 and k is not a multiple of P, cannot be exactly represented by IEEE-754 floating point.
The behavior you are observing is due to 1/5 (10.2 = 51/5) being only approximately represented, and the conversion from the internal double/float representation to a character representation performs rounding to some precision. All languages which use machine floating point (double/float) exhibit similar behavior, but the routines which convert from floating point to printed text may round these approximate values differently.
As a consequence of Cantor's proof that the Real numbers are uncountable and the Rational numbers are countable, almost all Real numbers are irrational.
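A quick REPL check of the k/2^n rule above (assumed output): sums of dyadic fractions are exact, while 1/10, 1/5 and 3/10 are not:
scala> 0.5 + 0.25 == 0.75 // all of the form k/2^n, so exact
res0: Boolean = true
scala> 0.1 + 0.2 == 0.3 // none of these are exactly representable in binary
res1: Boolean = false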

Conversion and comparison between Long and Double in Scala

I'm playing around with Scala AnyVal types and having trouble understanding the following: I convert Long.MaxValue to Double and back to Long. Since a Long (64 bits) can hold more significant digits than a Double's mantissa (52 explicitly stored bits), I expected this conversion to lose some data, but somehow this is not the case:
Long.MaxValue.equals(Long.MaxValue.toDouble.toLong) // is true
I thought there might be some magic/optimisation in Long.MaxValue.toDouble.toLong such that the conversion never really happens, so I also tried:
Long.MaxValue.equals("9.223372036854776E18".toDouble.toLong) // is true
If I evaluate the expression "9.223372036854776E18".toDouble.toLong, this gives:
9223372036854775808
This really freaks me out, the last 4 digits seem just to pop up from nowhere!
First of all: as usual with questions like this, there is nothing special about Scala; all modern languages (that I know of) use IEEE 754 floating point (at least in practice, even if the language specification doesn't require it) and will behave the same, just with different type and operation names.
Yes, data is lost. If you try e.g. (Long.MaxValue - 1).toDouble.toLong, you'll still get Long.MaxValue back. You can find the next smallest Long you can get from someDouble.toLong as follows:
scala> Math.nextDown(Long.MaxValue.toDouble).toLong
res0: Long = 9223372036854774784
If I evaluate the expression "9.223372036854776E18".toDouble.toLong, this gives:
9223372036854775808
This really freaks me out, the last 4 digits seem just to pop up from nowhere!
You presumably mean 9223372036854775807. 9.223372036854776E18 is of course actually larger than that: it represents 9223372036854776000. But you'll get the same result if you use any other Double larger than Long.MaxValue as well, e.g. 1E30.toLong.
Just a remark: Double and Long values in Scala are equivalent to the primitive types double and long.
The result of Long.MaxValue.toDouble is in reality bigger than Long.MaxValue; the reason is that the value is rounded. The next conversion, i.e. Long.MaxValue.toDouble.toLong, is then clamped ("rounded" back) to the value of Long.MaxValue.
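A small sketch of that data loss (assumed REPL output): two distinct Longs collapse onto the same Double, and converting back saturates at Long.MaxValue:
scala> Long.MaxValue.toDouble == (Long.MaxValue - 1).toDouble // both round to 2^63
res0: Boolean = true
scala> (Long.MaxValue - 1).toDouble.toLong // saturates at Long.MaxValue; the original value is gone
res1: Long = 9223372036854775807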

How to stop Matlab truncating long numbers

These two long numbers are the same except for the last digit.
test = [];
test(1) = 33777100285870080;
test(2) = 33777100285870082;
but the last digit is lost when the numbers are put in the array:
unique(test)
ans = 3.3777e+16
How can I prevent this? The numbers are ID codes and losing the last digit is screwing everything up.
Matlab uses a 64-bit floating point representation by default for numbers. Those have roughly 16 significant decimal digits of precision, and your numbers exceed that.
Use something like uint64 to store your numbers:
> test = [uint64(33777100285870080); uint64(33777100285870082)];
> disp(test(1));
33777100285870080
> disp(test(2));
33777100285870082
This is really a rounding error, not a display error. To get the correct strings for output purposes, use int2str, because, again, num2str uses a 64-bit floating point representation, and that has rounding errors in this case.
To add more explanation to @rubenvb's solution: your values are greater than flintmax for IEEE 754 double precision floating point, i.e., greater than 2^53. Beyond this point not all integers can be exactly represented as doubles. See also this related question.
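The same effect is easy to reproduce outside Matlab; here is a hedged Scala sketch (assumed REPL output) using the ID values from the question, which sit above 2^53 where consecutive doubles are 4 apart:
scala> 33777100285870082L.toDouble.toLong // the trailing 2 cannot survive the round trip
res0: Long = 33777100285870080
scala> 33777100285870080L.toDouble == 33777100285870082L.toDouble // both IDs map to the same double
res1: Boolean = true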

Precise division of doubles representing integers exactly (when they are divisible)

Given that 8-byte doubles can represent all 4-byte ints precisely, I'm wondering whether dividing a double A storing an int, by a double B storing an int (such that the integer B divides A) will always give the exact double corresponding to the integer that is their quotient? So, if B and C are integers, and B*C fits within a 32-bit int, then is it guaranteed that
int B,C = whatever s.t. B*C does not overflow 32-bit int
double(B*C)/double(C) == double((B*C)/C) ?
Does the IEEE754 standard guarantee this?
In my testing, it seems to work for all examples I've tried. In Python:
>>> (321312321.0*3434343.0)/321312321.0 == 3434343.0
True
The reason for asking is that Matlab makes it hard to work with ints, so I often just use the default doubles for integer calculations. When I know the integers are exactly divisible, and if the answer to the present question is yes, then I can avoid casts to ints, idivide(..), etc., which are less readable.
Luis Mendo's comment does answer this question, but to specifically address the use in Matlab there are some handy utilities described here. You can use eps(numberOfInterest) to find the distance to the next largest double-precision floating point number. For example:
eps(1) = 2^(-52)
eps(2^52) = 1
This practically guarantees that mathematical operations on integers held in a double will be exact provided they don't exceed 2^53 (flintmax), which is quite a bit larger than what a 32-bit int type can hold.
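The same check translated to Scala (a hedged sketch; Math.ulp plays the role of Matlab's eps here):
scala> (321312321.0 * 3434343.0) / 321312321.0 == 3434343.0 // product and quotient stay below 2^53, so exact
res0: Boolean = true
scala> Math.ulp(1.0) // spacing of doubles around 1, i.e. 2^-52
res1: Double = 2.220446049250313E-16
scala> Math.ulp(Math.pow(2, 52)) // spacing of doubles around 2^52 is already 1
res2: Double = 1.0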

iPhone/Obj-C: Why does converting a float to int with (int)(float * 100) not work?

In my code, I am using float to do currency calculation but the rounding has yielded undesired results so I am trying to convert it all to int. With as little change to the infrastructure as possible, in my init functions, I did this:
- (id)initWithPrice:(float)p
{
    if ((self = [super init])) {
        [self setPrice:(int)(p * 100)]; // store the price in cents
    }
    return self;
}
I multiply by 100 because in the main section the values are given as .xx, to 2 decimals. The abnormality I notice is that for the float 1.18, the int comes out as 117. Does anyone know why it does that? The float leaves it as 1.18; I expect the int equivalent in cents to be 118.
Thanks.
Floating point is always a little imprecise. With IEEE floating point encoding, powers of two can be represented exactly (like 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, ...), but numbers like 0.1 are always an approximation (just try representing it as a finite sum of powers of 2).
Your (int) cast will truncate whatever comes in, so if p*100 resolves to 117.9999995 due to this imprecision, that will become 117 instead of 118.
A better solution is to use something like roundf on p*100. Even better would be to go upstream and fully convert to fixed-point math, using integers (e.g. cents) throughout the program.
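For consistency with the rest of the page, here is the same effect and fix sketched in Scala (assumed REPL output); 1.18f is stored as a single-precision value just under 1.18, so truncation loses a cent while rounding recovers it:
scala> (1.18f * 100).toInt // truncates 117.99999...
res0: Int = 117
scala> math.round(1.18f * 100) // rounds to the nearest Int
res1: Int = 118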