How many decimal places can the X and Y values of MySQL's point class have?

I defined a column in my schema named location POINT NOT NULL. The POINT class specifies X- and Y-coordinate values. How many decimal places can these X and Y values have? I cannot find an exact definition for these properties, especially for the fractional part.

MySQL stores the spatial data types in the WKB format.
This format uses double precision to store the X and Y coordinates, which means it can hold 53 bits of mantissa, or roughly 15-16 significant decimal digits.
See the IEEE 754 double-precision floating-point format for more information.
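As a quick sanity check (a sketch; the table name is made up, and ST_X/ST_Y assume MySQL 5.7+), you can round-trip a point and read the coordinates back:
CREATE TABLE places (location POINT NOT NULL);
INSERT INTO places VALUES (ST_GeomFromText('POINT(1.12345678901234567 2.5)'));
-- X comes back as ~1.1234567890123457: only the ~16 digits a double can hold survive
SELECT ST_X(location), ST_Y(location) FROM places;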

Related

Postgres Custom float type that is always truncated to 2 decimals after point

Can I create a custom data type in Postgres so that every time I insert or update a float into it, the value is truncated to 2 decimals after the dot?
create table money(
formatted moneys_type
);
insert into money values (30.122323213);
Select * from money;
Returns
30.12
Update: I didn't use numeric or decimal because they round (1.999 => 2) rather than truncate.
See documentation on Numeric Types / Arbitrary Precision Numbers.
The precision of a numeric is the total count of significant digits in
the whole number, that is, the number of digits to both sides of the
decimal point. The scale of a numeric is the count of decimal digits
in the fractional part, to the right of the decimal point. So the
number 23.5141 has a precision of 6 and a scale of 4. Integers can be
considered to have a scale of zero.
...
To declare a column of type numeric use the syntax:
NUMERIC(precision, scale)
The maximum allowed precision when explicitly specified in the type declaration is 1000.
So you can use
NUMERIC(1000, 2)
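A minimal sketch of that suggestion, reusing the question's table and values (note that numeric rounds rather than truncates, so e.g. 30.128 would become 30.13):
create table money (formatted numeric(1000, 2));
insert into money values (30.122323213);
select * from money;  -- 30.12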

Precision of double values in Spark

I am reading some data from a CSV file, and I have custom code to parse string values into different data types. For numbers, I use:
val format = NumberFormat.getNumberInstance()
which returns a DecimalFormat, and I call the parse function on it to get my numeric value. DecimalFormat has arbitrary precision, so I am not losing any precision there. However, when the data is pushed into a Spark DataFrame, it is stored using DoubleType. At this point I am expecting to see precision issues, however I do not.
I tried entering values 0.1, 0.01, 0.001, ..., 1e-11 in my CSV file, and when I look at the values stored in the Spark DataFrame, they are all accurately represented (i.e. not like 0.099999999). I am surprised by this behavior, since I do not expect a double value to store arbitrary precision. Can anyone help me understand the magic here?
Cheers!
There are probably two issues here: the number of significant digits that a Double can represent in its mantissa, and the range of its exponent.
Roughly, a Double has about 16 (decimal) digits of precision, and the exponent can cover the range from about 10^-308 to 10^+308. (The actual limits are set by the binary representation used by the IEEE 754 format.)
When you try to store a number like 1e-11, it can be accurately approximated within the 53 bits available in the mantissa. Where you'll get accuracy issues is when you subtract two numbers that are so close together that they differ only in a few of their least significant bits (assuming that their mantissas have been shifted so that their exponents match).
For example, if you try (1e20 + 2) - (1e20 + 1), you'd hope to get 1, but actually you'll get zero. This is because a Double does not have enough precision to represent the 20 (decimal) digits needed. However, (1e100 + 2e90) - (1e100 + 1e90) is computed to be almost exactly 1e90, as it should be.
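You can reproduce both cases outside Spark; for example, forcing double-precision arithmetic in a Postgres session (a sketch, but any IEEE 754 double behaves the same way):
-- both additions round to the same double, so the difference collapses to 0
SELECT (1e20::float8 + 2) - (1e20::float8 + 1);
-- the operands differ far above the least significant bits, so the result is ~1e90
SELECT (1e100::float8 + 2e90::float8) - (1e100::float8 + 1e90::float8);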

Numeric overflow in insert query

I am receiving an error of:
Arithmetic overflow or division by zero has occurred. arithmetic
exception, numeric overflow, or string truncation. numeric value is
out of range.
This can be replicated with:
create table testing (avalue numeric(3,2));
and the following insert:
insert into testing values (328);
However, using the following works fine:
insert into testing values (327);
328 seems to be the magic figure at which the error occurs. To me, the numeric(3,2) declaration should allow 000-999 with 2 decimal places, but based on the above that is wrong.
Can someone explain why this is, and what I should declare my domain as if I want to allow 0-999 with 2 decimal places?
Thanks
328 is not a "magic" number :)
The magic number is 32767 (0x7FFF), which is the SMALLINT type limit.
Note: Firebird does not support unsigned integer types.
The limit for a NUMERIC type varies according to its storage type and scale.
The internal storage type is SMALLINT, INTEGER or BIGINT, chosen by precision:
precision 1..4 - SMALLINT
precision 5..9 - INTEGER
precision 10..18 - BIGINT
So NUMERIC(3,2) has the SMALLINT internal type, and its maximum is 32767 / 100 = 327.67.
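You can check that boundary against the question's testing table (a sketch; the second insert should raise the same overflow error):
insert into testing values (327.67);  -- ok: stored as 32767, the SMALLINT maximum
insert into testing values (327.68);  -- fails: 32768 exceeds SMALLINT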
Update
The Firebird 2.5 Language Reference by Paul Vinkenoog, Dmitry Yemanov and Thomas Woinke contains a more comprehensive description of the NUMERIC type than other official Firebird documents.
NUMERIC (precision, scale) is the exact number with the decimal
precision and scale specified by the precision and scale arguments.
Syntax:
NUMERIC [(precision [, scale])]
The scale of NUMERIC is the count of decimal digits in the
fractional part, to the right of the decimal point. The precision of
NUMERIC is the total count of decimal digits in the number.
The precision must be positive, the maximum supported value is 18.
The scale must be zero or positive, up to the specified precision.
If the scale is omitted, then zero value is implied, thus
meaning an integer value of the specified precision, i.e.
NUMERIC (P) is equivalent to NUMERIC (P, 0). If both the precision and
the scale are omitted, then precision of 9 and zero scale are implied,
i.e. NUMERIC is equivalent to NUMERIC (9, 0).
The internal representation of the NUMERIC data type may vary.
Numerics with the precision up to (and including) 4 are always stored
as scaled short integers (SMALLINT). Numerics with the precision up to
(and including) 9 are always stored as scaled regular integers
(INTEGER). Storage of higher precision numerics depends on the SQL
dialect. In Dialect 3, they are stored as scaled large integers
(BIGINT). In Dialect 1, however, large integers are not available,
therefore they are stored as double precision floating-point values
(DOUBLE PRECISION).
The effective precision limit for the given value depends on the
corresponding storage. For example, NUMERIC (5) will be stored as
INTEGER, thus allowing values in the precision range up to (and
including) NUMERIC (9). So beware that the declared precision is not
strictly enforced.
Values outside the range limited by the effective precision are not
allowed. Values with the scale larger than the declared one will be
rounded to the declared scale while performing an assignment.
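The "not strictly enforced" caveat is easy to demonstrate (a sketch with a made-up table name; a NUMERIC(5) column accepts 9-digit values because its storage is INTEGER):
create table t5 (v numeric(5));
insert into t5 values (123456789);   -- accepted: effective precision is NUMERIC(9)
insert into t5 values (9999999999);  -- fails: exceeds the INTEGER storage range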
The declaration numeric(5, 2) gives you numbers from -999.99 to 999.99. The declaration numeric(3,2) gives you numbers from -9.99 to 9.99. These are the standard declarations for numerics in SQL.
The "3" is the precision, which is the total number of digits in the number, not the number to the left of the decimal place.
I'm not sure why 327 is allowed.
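To actually get 0-999 with two decimal places, declare two digits of scale on top of three integer digits (a sketch with a new table name):
create table testing2 (avalue numeric(5,2));
insert into testing2 values (328);     -- fits now
insert into testing2 values (999.99);  -- upper bound of numeric(5,2)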

How to set precision and scale in ALTER TABLE

I have working code with PostgreSQL 9.3:
ALTER TABLE meter_data ALTER COLUMN w3pht TYPE float USING (w3pht::float);
but don't know how to set precision and scale.
The type float does not have precision and scale. Use numeric(precision, scale) instead if you need that.
Per documentation:
The data types real and double precision are inexact, variable-precision numeric types.
For your given example:
ALTER TABLE meter_data ALTER COLUMN w3pht TYPE numeric(15,2)
USING w3pht::numeric(15,2) -- may or may not be required
The manual:
A USING clause must be provided if there is no implicit or assignment cast from old to new type.
Example: if the old data type is text, you need the USING clause. If it's float, you don't.
As per the PostgreSQL documentation, you can select the minimum precision for floating-point numbers using the syntax float(n), where n is the minimum number of binary digits, up to 53.
However, to store exact decimal values, use numeric(precision, scale) or its synonym decimal(precision, scale), but notice that these are hard limits; according to the documentation:
If the scale of a value to be stored is greater than the declared
scale of the column, the system will round the value to the specified
number of fractional digits. Then, if the number of digits to the left
of the decimal point exceeds the declared precision minus the declared
scale, an error is raised.
Thus your alter table could be:
ALTER TABLE meter_data
ALTER COLUMN w3pht TYPE numeric(10, 2)
USING (w3pht::numeric(10, 2));
for 2 digits right of the decimal point and 10 total digits. However, if you do not need to specify limits, plain numeric will allow "up to 131072 digits before the decimal point; up to 16383 digits after".
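The quoted rounding and overflow rules are easy to verify directly (a sketch; the second cast should raise "numeric field overflow"):
SELECT 999.994::numeric(5,2);  -- 999.99, rounded to the declared scale
SELECT 999.995::numeric(5,2);  -- fails: rounds to 1000.00, which needs 4 digits left of the point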

Double double precision in PostGIS in PostgreSQL

This is my SQL:
SELECT st_asText(ST_GeomFromText('POINT(52.000000000012345678 21.0000000000123456789)'))
SELECT st_asText(ST_MakePoint(52.000000000012345678, 21.0000000000123456789))
But the response is:
POINT(52.0000000000123 21.0000000000123)
I need double double precision in PostGIS. How can I fix it?
That is already double precision. Single-precision coordinates would be trimmed after about the sixth decimal, whereas double precision offers about 15 significant digits. You're trying to set a point with 18-19 decimal positions.
It is also important to note that the number of decimal places a double can hold depends on the number of digits to the left of the decimal separator (see the OSGeo discussions about that). You're using two digits for the integer part (52 and 21), so you have 13 digits left to play with, which is exactly what you're getting in the response.
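If you just want to print more of the digits the double actually holds, ST_AsText accepts an optional maxdecimaldigits argument in recent PostGIS versions (a sketch; the trailing digits will reflect double rounding, not your original input):
SELECT ST_AsText(
  ST_MakePoint(52.000000000012345678, 21.0000000000123456789),
  20  -- ask for up to 20 decimal digits; anything past ~15-17 significant digits is rounding noise
);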