What is the min and max value for a double field in MongoDB? - mongodb

I need to find out the min and max value for a double field in MongoDB, including its precision (number of significant digits).
I found this link: http://bsonspec.org/spec.html
Because MongoDB uses BSON, I'm looking at the BSON information. There it says:
double 8 bytes (64-bit IEEE 754-2008 binary floating point)
But how do I calculate the min and max number based on that?
I ended up at this link: https://en.wikipedia.org/wiki/Double-precision_floating-point_format
But couldn't understand how to calculate the min and max values.

The Wikipedia link you included has the precise answer:
min: -2^1023 * (1 + (1 − 2^−52)) (approx: -1.7976931348623157 * 10^308)
max: 2^1023 * (1 + (1 − 2^−52)) (approx: 1.7976931348623157 * 10^308)
As for precision: the 53-bit significand gives 15–17 significant decimal digits.
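These limits are easy to verify in any IEEE 754 environment; a quick Python sketch (the names are mine, not MongoDB's):

```python
import sys

# Largest finite IEEE 754 double: 2^1023 * (1 + (1 - 2^-52)) = (2 - 2^-52) * 2^1023
max_double = (2.0 - 2.0 ** -52) * 2.0 ** 1023
min_double = -max_double

print(max_double)                        # 1.7976931348623157e+308
print(max_double == sys.float_info.max)  # True

# The 53-bit significand guarantees 15 significant decimal digits.
print(sys.float_info.dig)                # 15
```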

Why is the range of the timestamp type 4713 BC to 294276 AD?

PostgreSQL has a timestamp datatype with resolution 1 microsecond and range 4713 BC to 294276 AD that takes up 8 bytes (see https://www.postgresql.org/docs/current/datatype-datetime.html).
I have calculated the total number of microseconds in that range as (294276 + 4713) × 365.25 × 24 × 60 × 60 × 1000000 = 9.435375266×10¹⁸. This is less than 2⁶⁴ = 1.844674407×10¹⁹, but also more than 2⁶³ = 9.223372037×10¹⁸.
I might be off by a few days due to calendar weirdness and leap years, but I don't think it's enough to push the number below 2⁶³.
So, why were the limits chosen like that? Why not use the full range available with 64 bits?
The internal representation of timestamps is in microseconds since 2000-01-01 00:00:00, stored as an 8-byte integer. So the maximum possible year would be something like
SELECT (2::numeric^63 -1) / 365.24219 / 24 / 60 / 60 / 1000000 + 2000;
?column?
═════════════════════════
294277.2726976055146158
(1 row)
which explains the upper limit.
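The same arithmetic can be checked outside the database; a rough Python sketch, ignoring calendar details just as the query above does:

```python
# Microseconds representable in a signed 64-bit integer, counted from the
# epoch 2000-01-01, converted to years of 365.24219 days each:
max_year = (2 ** 63 - 1) / 365.24219 / 24 / 60 / 60 / 1_000_000 + 2000
print(max_year)  # ≈ 294277.27, matching the SQL result
```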
The minimum is explained by a comment in src/include/datatype/timestamp.h:
/*
* Range limits for dates and timestamps.
*
* We have traditionally allowed Julian day zero as a valid datetime value,
* so that is the lower bound for both dates and timestamps.
*
* The upper limit for dates is 5874897-12-31, which is a bit less than what
* the Julian-date code can allow. For timestamps, the upper limit is
* 294276-12-31. The int64 overflow limit would be a few days later; again,
* leaving some slop avoids worries about corner-case overflow, and provides
* a simpler user-visible definition.
*/
So the minimum is taken from the lower limit on Julian dates.

Results to two decimals using ST_Area

I have performed ST_Area on a shapefile but the resulting numbers are VERY long. Need to reduce them to two decimals. This is the code so far:
SELECT mtn_name, ST_Area(geom) / 1000000 AS km2 FROM mountain ORDER BY 2 DESC;
This is what I get:
 mtn_name                                  |        km2
 (character varying)                       | (double precision)
 Monte del Pueblo de Jerez del Marquesado  | 6.9435657067528e-9
 Monte de La Peza                          | 6.113288075418532e-9
I tried ROUND() but it brings KM to 0.00
Since a binary double cannot represent most decimal fractions exactly (the usual floating-point precision problem), you will not get a double value which is exactly 6.94e-9; after rounding it would be something like 6.9400000001e-9.
If the exponent is always the same (in your example it is always e-9), you can round with a fixed multiplier. With double values, this runs into the precision problem described above:
SELECT
round(area * 10e8 * 100) / 100 / 10e8
FROM area_result
To avoid these precision problems, you can use numeric type
SELECT
round(area * 10e8 * 100)::numeric / 100 / 10e8
FROM area_result
If you have different exponents, you have to calculate the multiplier first. You can do:
For double output
SELECT
round(area / mul * 100) * mul / 100
FROM (
SELECT
area,
pow(10, floor(log10(area))) as mul
FROM area_result
) s
For numeric output
SELECT
round((area / mul) * 100)::numeric * mul / 100
FROM (
SELECT
area,
pow(10, floor(log10(area)))::numeric as mul
FROM area_result
) s
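The multiplier trick above (scale the value so its leading digit sits before the decimal point, round to two decimals, scale back) can be sketched outside SQL as well; `round_sig2` is a hypothetical helper mirroring the queries, and the sample value is taken from the question:

```python
import math

def round_sig2(area: float) -> float:
    # mul is the power of ten of the leading digit, e.g. 1e-9 for 6.94e-9.
    mul = 10.0 ** math.floor(math.log10(area))
    # Scale into [1, 10), round to two decimal places, scale back.
    return round(area / mul * 100) * mul / 100

print(round_sig2(6.9435657067528e-9))  # ~6.94e-09
```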
However, the exponential notation is just a presentation of the value, and it can vary from one database tool to another. Internally the values are not stored that way, so when you fetch them you will in fact get a value like 0.00000000694 and not 6.94e-9, which is just a textual representation.
If you want to guarantee exactly this textual representation, you can use the number-formatting function to_char(), which of course returns text, not a number anymore:
SELECT
to_char(area, '9.99EEEE')
FROM area_result
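If you format client-side instead, the same two-decimal exponential text can be produced with ordinary format specifiers; a Python sketch:

```python
area = 6.9435657067528e-9  # the raw double fetched from the database
# '.2e' keeps two digits after the decimal point in scientific notation,
# analogous to to_char(area, '9.99EEEE') in PostgreSQL.
print(format(area, '.2e'))  # 6.94e-09
```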

Floating point hex notation in Swift

I don't understand how floating point numbers are represented in hex notation in Swift. Apple's documentation shows that 0xC.3p0 is equal to 12.1875 in decimal. Can someone walk me through how to do that conversion? I understand that before the decimal point the hex value 0xC = 12. The 3p0 after the decimal point is where I am stumped.
From the documentation:
Floating-Point Literals
...
Hexadecimal floating-point literals consist of a 0x prefix, followed
by an optional hexadecimal fraction, followed by a hexadecimal
exponent. The hexadecimal fraction consists of a decimal point
followed by a sequence of hexadecimal digits. The exponent consists of
an upper- or lowercase p prefix followed by a sequence of decimal
digits that indicates what power of 2 the value preceding the p is
multiplied by. For example, 0xFp2 represents 15 × 2^2, which evaluates
to 60. Similarly, 0xFp-2 represents 15 × 2^-2, which evaluates to 3.75.
In your case
0xC.3p0 = (12 + 3/16) * 2^0 = 12.1875
Another example:
0xAB.CDp4 = (10*16 + 11 + 12/16 + 13/16^2) * 2^4 = 2748.8125
This format is very similar to the %a printf-format (see for example
http://pubs.opengroup.org/onlinepubs/009695399/functions/fprintf.html).
It can be used to specify a floating point number directly in its
binary IEEE 754 representation, see Why does Swift use base 2 for the exponent of hexadecimal floating point values?
for more information.
Interpret 0xC.3p0 using the place value system:
C (or 12) is in the 16^0 place
3 is in the 16^-1 place (and 3/16 == 0.1875)
p says the exponent follows (like the e in 6.022e23 in base 10)
0 is the exponent (in base 10) that is the power of 2 (2^0 == 1)
So putting it all together
0xC.3p0 = (12 + (3/16)) * 2^0 = 12.1875
In order to sum up what I've read, you can see those representations as follows:
0xC.3p0 = (12*16^0 + 3*16^-1) * 2^0 = 12.1875
From Martin R's example above:
0xAB.CDp4 = (10*16^1 + 11*16^0 + 12*16^-1 + 13*16^-2) * 2^4 = 2748.8125
The 0xC is 12, as you said. The fractional part is 3 × (1/16) = 0.1875.
So you take each digit after the point, divide it by the corresponding power of 16, and then multiply the whole value by 2 raised to the power of the number after the p.
Hexadecimal digits run 0-9, A=10, B=11, C=12, D=13, E=14, F=15, and p0 means 2^0.
ex: 0xC = 12 (the 0x prefix indicates hexadecimal)
For the digits after the point, as in 0xC.3p0, we divide by increasing powers of 16.
So here it's 3/16 = 0.1875,
so 0xC.3p0 = (12 + (3/16)) * 2^0
If it were 0xC.43p0, then for the 4 we would use 4/16 and for the 3 we would use 3/(16^2), and so on as the fractional part grows.
ex: 0xC.231p1 = (12 + 2/16 + 3/(16^2) + 1/(16^3)) * 2^1 = 24.27392578125
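The worked examples above can be checked mechanically: Python's float.fromhex accepts the same 0x…p… notation (the C99 hex-float format that Swift's literals follow), so it serves as a convenient stand-in here:

```python
# Hex-float literals: base-16 mantissa, exponent is a power of 2.
print(float.fromhex('0xFp2'))      # 15 * 2^2  = 60.0
print(float.fromhex('0xFp-2'))     # 15 * 2^-2 = 3.75
print(float.fromhex('0xC.3p0'))    # (12 + 3/16) * 2^0 = 12.1875
print(float.fromhex('0xAB.CDp4'))  # 2748.8125
print(float.fromhex('0xC.231p1'))  # 24.27392578125

# The reverse direction: 12.1875 = 1100.0011 in binary = 1.1000011 * 2^3
print((12.1875).hex())             # 0x1.8600000000000p+3
```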

High-precision random numbers on iOS

I have been trying this for a while but thus far haven't had any luck.
What is the easiest way to retrieve a random number between two very precise numbers on iOS?
For example, I want a random number between 41.37783830549337 and 41.377730629131634, how would I accomplish this?
Thank you so much in advance!
Edit: I tried this:
double min = 41.37783830549337;
double max = 41.377730629131634;
double test = ((double)rand() / RAND_MAX) * (max - min) + min;
NSLog(@"Min:%lf, max:%lf, result:%lf", min, max, test);
But the results weren't quite as precise as I was hoping, and ended up like this::
Min:41.377838, max:41.377731, result:41.377838
You can normalise the output of rand to any range you want:
((double)rand() / RAND_MAX) * (max - min) + min
[Note: This is pure C; I'm assuming it works equivalently in Obj-C.]
[Note 2: Replace double with the data-type of your choice as appropriate.]
[Note 3: Replace rand with the random-number source of your choice as appropriate.]
The result in your edit only looks imprecise because %lf prints six decimal places by default; format with something like %.17g to see the full value of the double. Also note that rand() yields at most RAND_MAX + 1 distinct values, so for finer granularity you may want a source such as drand48 or arc4random.
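The same normalisation can be sketched in Python, printing with enough digits to show that full double precision survives (note the question's "min" is actually the larger of the two bounds, so they are reordered here; the variable names are mine):

```python
import random

# Ordered bounds taken from the question.
lo, hi = 41.377730629131634, 41.37783830549337

# Scale a uniform [0, 1) draw into [lo, hi) -- the same idea as
# ((double)rand() / RAND_MAX) * (max - min) + min in C.
value = random.random() * (hi - lo) + lo

# repr shows all significant digits, unlike a six-decimal %lf.
print(repr(value))
```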