How do I declare a non-inclusive floating point range in Ada?

Declaring ranges in Ada is always done inclusively.
If I want a type that has all the integers from 0 to 42 (or as a mathematical interval: [0, 42]), I declare it as follows:
type zero_to_42 is range 0 .. 42;
If I want to exclude the zero (the range (0, 42]), this is not an issue for discrete types:
type not_zero_to_42 is range (zero_to_42'First + 1) .. zero_to_42'Last;
but I still have to do this manually; there is no zero_to_42'NextAfterFirst attribute.
For floating point types I have no idea how to do this properly. It's simple for excluding the zero, but excluding anything else seems implementation-defined to me.
type works is digits 6 range 0.0 .. 42.0;
type also_works is new works range (0.0 + works'small) .. 42.0;
type broken is new works range 0.0 .. (42.0 - works'small);
Since floating point values near 42.0 are spaced further apart than values near 0.0, 42.0 - works'small is rounded back to 42.0.
I could of course find a value by hand that works (e.g. 41.9999), but that seems ugly to me and might not work anymore when I change the number of digits that works has.

The 'Succ and 'Pred attributes can be used on floating-point values to return the next or previous machine numbers. If T is a floating-point type,
T'Succ(X)
is the smallest floating-point "machine number" > X, and
T'Pred(X)
is the largest floating-point machine number < X. Thus:
type Works is digits 6 range 0.0 .. 42.0;
subtype Exclusive is Works range 0.0 .. Works'Pred(42.0);
Or (since the range on the type declaration might not be relevant):
type Digits_6 is digits 6;
subtype Exclusive is Digits_6 range 0.0 .. Digits_6'Pred(42.0);
Or
type Exclusive is digits 6 range 0.0 .. Float'Pred(42.0);
assuming you know Float is a 32-bit IEEE float and Exclusive will also be one.
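For comparison (an illustrative Java sketch, not part of the Ada answer; the variable names are mine): Math.nextDown is roughly the analogue of 'Pred, stepping to the previous machine number:
float upper = 42.0f;
float pred = Math.nextDown(upper); // largest float < 42.0, like Works'Pred(42.0)
System.out.println(pred);          // 41.999996
System.out.println(pred < upper);  // true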

What can be used here is 'Adjacent(near_value, towards_value):
type works is digits 6 range 0.0 .. 42.0;
type also_works is new works range (0.0 + works'small) .. 42.0;
type still_works is new works range 0.0 .. works'Adjacent(42.0, 0.0);
'Adjacent returns the machine-representable value closest to near_value in the direction of towards_value.
When printing out still_works'Last and works'Last the results will very likely look the same, but comparing the two shows they differ:
declare
   type works is digits 6 range 0.0 .. 42.0;
   subtype still_works is works range 0.0 .. works'Adjacent(42.0, 0.0);
begin
   Text_IO.Put_Line(works'Image(works'Last));
   Text_IO.Put_Line(still_works'Image(still_works'Last));
   Text_IO.Put_Line(Boolean'Image(works'Last = still_works'Last));
end;
yields, when compiled with GNAT:
4.20000E+01
4.20000E+01
FALSE
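The same print-versus-compare surprise can be reproduced outside Ada; here is an illustrative Java sketch, where Math.nextAfter(start, direction) plays the role of 'Adjacent:
double last = 42.0;
double adjacent = Math.nextAfter(last, 0.0); // machine number closest to 42.0, toward 0.0
System.out.printf("%.5e%n", last);           // 4.20000e+01
System.out.printf("%.5e%n", adjacent);       // 4.20000e+01 -- looks the same
System.out.println(last == adjacent);        // false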

You might be able to use the Ada 2012 dynamic predicate:
type Exclusive is new Float range 0.0 .. 42.0
with Dynamic_Predicate => Exclusive > 0.0 and then Exclusive < 42.0;
but GNAT seems to have trouble with this: GCC 4.8.1 is OK, GNAT GPL 2013 won’t even accept values of 1.0 or 41.0, and GCC 4.9.0-20140119 threw a bug box!

Related

Can't convert division result into float or decimal type

I have a calculation in my T-SQL code that I expect to produce a decimal result (with at least 2 digits after the decimal point).
The fields I am using are of integer type, but the calculation's result should be decimal.
I tried using CAST as float, but it won't work:
(COUNT(ct.[ClientFK]) / ehrprg.AnnualGoalClientsServed) AS [AnnualGoal]
I tried:
CAST((COUNT(ct.[ClientFK]) / ehrprg.AnnualGoalClientsServed) AS float)
AS [AnnualGoal]
I expect to see at least two digits after the decimal point:
2/50 should be 0.04, while now I am getting 0.
Any advice / help would be much appreciated
Try explicitly casting the denominator to float before the quotient is taken:
COUNT(ct.[ClientFK]) / CAST(ehrprg.AnnualGoalClientsServed AS float) AS [AnnualGoal]
In the above approach, because one of the two terms in the quotient is floating point, the other term (in this case, the count) should be promoted to float as well.
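The underlying pitfall is ordinary integer division, not anything SQL-specific. An illustrative Java sketch of the same thing (variable names are made up):
int clients = 2, goal = 50;
System.out.println(clients / goal);          // 0 -- integer division truncates
System.out.println(clients / (double) goal); // 0.04 -- one floating-point operand promotes the division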

Error when converting between Double and Int [duplicate]

I observed something really strange. If you run this code in Swift:
Int(Float(Int.max))
It crashes with the error message:
fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
This is really counter-intuitive, so I expanded the expression into 3 lines and tried to see what happens in each step in a playground:
let a = Int.max
let b = Float(a)
let c = Int(b)
It crashes with the same message. This time, I see that a is 9223372036854775807 and b is 9.223372e+18. It is obvious that a is greater than b by 36854775807. I also understand that floating points are inaccurate, so I expected something less than Int.max, with the last few digits being 0.
I also tried this with Double, it crashes too.
Then I thought, maybe this is just how floating point numbers behave, so I tested the same thing in Java:
long a = Long.MAX_VALUE;
float b = (float)a;
long c = (long)b;
System.out.println(c);
It prints the expected 9223372036854775807!
What is wrong with Swift?
There aren't enough bits in the mantissa of a Double or Float to accurately represent 19 significant digits, so you are getting a rounded result.
If you print the Float using String(format:) you can see a more accurate representation of the value of the Float:
let a = Int.max
print(a) // 9223372036854775807
let b = Float(a)
print(String(format: "%.1f", b)) // 9223372036854775808.0
So the value represented by the Float is 1 larger than Int.max.
Many values will be converted to the same Float value. The question becomes: how much would you have to reduce Int.max before it results in a different Double or Float value?
Starting with Double:
var y = Int.max
while Double(y) == Double(Int.max) {
    y -= 1
}
print(Int.max - y) // 512
So with Double, the last 512 Ints all convert to the same Double.
Float has fewer bits to represent the value, so more values map to the same Float. Stepping by 1000 so that it runs in reasonable time:
var y = Int.max
while Float(y) == Float(Int.max) {
    y -= 1000
}
print(Int.max - y) // 274877907000
So, your expectation that a Float could accurately represent a specific Int was misplaced.
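Both counts are half the gap between adjacent machine numbers just below 2^63. An illustrative Java sketch (hex float literals; not from the original answer) shows the gaps directly:
double d = 0x1p63;                                 // (double) Long.MAX_VALUE rounds up to 2^63
System.out.println(d - Math.nextDown(d));          // 1024.0 -> the last 512 longs collapse onto 2^63
float f = 0x1p63f;                                 // (float) Long.MAX_VALUE also rounds up to 2^63
System.out.println((double) f - Math.nextDown(f)); // 5.49755813888E11 -> half-gap 274877906944, matching ~274877907000 above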
Follow-up question from the comments:
If float does not have enough bits to represent Int.max, how is it able to represent a number one larger than that?
Floating point numbers are represented as two parts: a mantissa and an exponent. The mantissa represents the significant digits (in binary) and the exponent represents the power of 2. As a result, a floating point number can exactly express a power of 2 by having a mantissa of 1 with an exponent that represents the power.
Numbers that are not exact powers of 2 may have a binary pattern that contains more digits than can be represented in the mantissa. This is the case for Int.max (which is 2^63 - 1) because in binary that is 111111111111111111111111111111111111111111111111111111111111111 (63 1's). A Float, which is 32 bits, cannot store a mantissa which is 63 bits, so it has to be rounded or truncated. In the case of Int.max, rounding up by 1 results in the value
1000000000000000000000000000000000000000000000000000000000000000. Starting from the left, there is only 1 significant bit to be represented by the mantissa (the trailing 0's come for free), so this number is a mantissa of 1 and an exponent of 63 (i.e. 1.0 × 2^63).
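You can inspect the rounded value and its bit pattern directly; an illustrative Java sketch using Float.floatToIntBits:
float f = (float) Long.MAX_VALUE;  // 2^63 - 1 rounds up to 2^63
System.out.printf("%.1f%n", f);    // 9223372036854775808.0
System.out.println(Integer.toBinaryString(Float.floatToIntBits(f)));
// 1011111000000000000000000000000 -- sign 0, biased exponent 190 (= 63 + 127), fraction 0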
See @MartinR's answer for an explanation of what Java is doing.
Swift and Java behave differently when converting a "too large" floating point
number to an integer. Java clamps any floating point value larger than
Long.MAX_VALUE = 2^63-1 to Long.MAX_VALUE:
long c = (long)(1.0E+30f);
System.out.println(c);
// 9223372036854775807
Swift expects that the value is in the range of Int, and aborts
with a runtime exception otherwise:
/// Creates a new instance by rounding the given floating-point value toward
/// zero.
///
/// - Parameter other: A floating-point value. When `other` is rounded toward
/// zero, the result must be within the range `Int.min...Int.max`.
public init(_ value: Float)
Example:
let c = Int(Float(1.0E30))
print(c)
// fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
The same happens with your value Float(Int.max), which is the
floating point representable value closest to Int.max and happens
to be larger than Int.max.


Why is 0.29999999999999998 converted to 0.3?

How does it work internally?
How does it decide to convert 0.29999999999999998 to 0.3, even though 0.3 cannot be represented exactly in binary?
Here are some more example:
scala> 0.29999999999999998
res1: Double = 0.3
scala> 0.29999999999999997
res2: Double = 0.3
scala> 0.29999999999999996
res3: Double = 0.29999999999999993
scala> 0.29999999999999995
res4: Double = 0.29999999999999993
There are two conversions involved.
First 0.29999999999999998 is converted to 0.299999999999999988897769753748434595763683319091796875, the nearest representable number.
Next, 0.299999999999999988897769753748434595763683319091796875 is converted to decimal for printing. 0.3 is also one of the numbers that converts to 0.299999999999999988897769753748434595763683319091796875, and it is the one that gets printed because it is so short.
Every finite double number is exactly representable as a decimal fraction. Generally, default output does not attempt to print the exact value, because it can be very long - far longer than the example above. A common choice is to print the shortest decimal fraction that would convert to the double on input. Both conversions are done using non-trivial algorithms. See Algorithm to convert an IEEE 754 double to a string? for some discussion and references to output algorithms.
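Both conversions are easy to observe. An illustrative Java sketch (BigDecimal shows the exact stored value; println prints the shortest round-tripping decimal):
double d = 0.29999999999999998;           // parsed to the nearest double
System.out.println(new java.math.BigDecimal(d));
// 0.299999999999999988897769753748434595763683319091796875
System.out.println(d);                    // 0.3 -- shortest decimal that round-trips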
==============================================================
There has been some discussion in comments on the value 0.30000000000000004. I agree with the comments by Rick Regan and Jesper, but thought it might be useful to add to this answer.
The exact value of the closest double to 0.30000000000000004 is 0.3000000000000000444089209850062616169452667236328125. All decimal numbers in the range [0.3000000000000000166533453693773481063544750213623046875, 0.3000000000000000721644966006351751275360584259033203125] convert to that value, and no numbers even slightly outside that range do so. 0.3000000000000000 is outside the range, so it does not have enough digits. 0.30000000000000004 is inside the range, so there is no need for more digits to correctly identify the double.
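The same check for 0.30000000000000004, again as an illustrative Java sketch:
double d = 0.30000000000000004;
System.out.println(new java.math.BigDecimal(d));
// 0.3000000000000000444089209850062616169452667236328125
System.out.println(d);                    // 0.30000000000000004 -- here 17 digits are needed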
Note that in Scala a Double (see the IEEE 754 standard and IEEE floating-point arithmetic) declared from a literal is rounded to the nearest representable value,
val x = 0.29999999999999998
x: Double = 0.3
"0.29999999999999998".toDouble
Double = 0.3
so much so that even
0.2999999999999999999999999999999999999999999999999999999999998
Double = 0.3
Also, with BigDecimal's arbitrary-precision decimal floating-point representation (see the API), the original value of type Double (the parameter to the constructor) is first rounded, namely
BigDecimal(0.29999999999999998) == 0.3
Boolean = true
BigDecimal(0.29999999999999998)
scala.math.BigDecimal = 0.3
However, a textual declaration of the original value is not interpreted as a Double and hence is not rounded,
BigDecimal("0.29999999999999998") == 0.3
Boolean = false
namely,
BigDecimal("0.29999999999999998")
scala.math.BigDecimal = 0.29999999999999998

arc4random throwing out huge numbers

In a cocos2d game, I use arc4random to generate random numbers like this:
float x = (arc4random()%10 - 5)*delta;
(delta is the time between updates in the scheduled update method)
NSLog(@"x: %f", x);
I have been checking them like that.
Most of the numbers that I get are like this:
2012-12-29 15:37:18.206 Jumpy[1924:907] x: 0.033444
or
2012-12-29 15:37:18.247 Jumpy[1924:907] x: 0.033369
But for some reason I get numbers like this sometimes:
2012-12-29 15:37:18.244 Jumpy[1924:907] x: 71658664.000000
Edit: Delta is almost always:
2012-12-29 17:01:26.612 Jumpy[2059:907] delta: 0.016590
I thought it should return numbers in a range of -5 to 5 (multiplied by some small number). Why am I getting numbers like this?
arc4random returns a u_int32_t. The u_ part tells you that it's unsigned. So all of the operators inside the parentheses use unsigned arithmetic.
If you perform the subtraction 2 - 5 using unsigned 32-bit arithmetic, you get 2^32 + 2 - 5 = 2^32 - 3 = 4294967293 (a “huge number”).
Cast to a signed type before performing the subtraction. Also, prefer arc4random_uniform if your deployment target is iOS 4.3 or later:
float x = ((int)arc4random_uniform(10) - 5) * delta;
If you want the range to include -5 and 5, you need to use 11 instead of 10, because the range [-5,5] (inclusive) contains 11 elements:
float x = ((int)arc4random_uniform(11) - 5) * delta;
arc4random returns a u_int32_t, an unsigned type. The modulus is also performed using unsigned arithmetic, which yields a number between 0 and 9, as expected (by the way, don't ever do this; use arc4random_uniform instead). You then subtract 5, which is interpreted as an unsigned value, yielding a possibly huge positive value due to underflow.
The solution is to explicitly type the 5 by storing it in a variable of signed type or with a suffix (like 5L).
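The wraparound is easy to reproduce. An illustrative Java sketch that models C's unsigned 32-bit subtraction with a mask (the value 2 stands in for an arc4random() % 10 result):
long r = 2;                            // suppose arc4random() % 10 returned 2
long wrapped = (r - 5) & 0xFFFFFFFFL;  // unsigned 32-bit subtraction wraps around
System.out.println(wrapped);           // 4294967293 = 2^32 - 3
System.out.println((int) wrapped);     // -3 once reinterpreted as signed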
Looks like arc4random() % 10 sometimes comes out less than 5, and the subtraction then wraps around instead of producing a negative integer.
What is the value of delta?