Numeric Precision Rounding Artifacts with PostgreSQL - postgresql

We have an application that is attempting a bulk insert into a table in a PostgreSQL database (9.1, I think — I don't have direct access; I'm helping a coworker troubleshoot remotely). A trace shows the raw values are generated correctly and handed off to ODBC correctly.
The problem comes in with a column defined as NUMERIC without scale or precision. There seem to be 'random' rounding artifacts: sometimes it rounds up, sometimes down, with no relationship to the number of decimal places. This shows up when the values from the bulk insert are queried back.
I know encoding can cause issues with strings, but I'm not sure whether it matters for numeric data types. The database is Windows-1252 encoded and they are using the Unicode PostgreSQL ODBC driver. Finally, just FYI, it's on a 32-bit Windows VM with what looks like the default config_file parameters.
Question is what would/could be the cause of this?
Thanks in advance.

The data type numeric is an arbitrary precision data type, not a floating point type like float8 (double precision) or float4 (real). That is, it stores the decimal digits handed to it without any rounding whatsoever, and numbers are reproduced identically. Only the exact output format may depend on your locale or on settings of the middleware and client.
The fact that the precision and scale were not set lets the numeric column do that with almost¹ no limitation.
¹ Per the documentation:
up to 131072 digits before the decimal point; up to 16383 digits after the decimal point.
The gist of it: you can rule out the Postgres data type numeric as the source of the rounding effect. I can't say much about the rest, especially since you did not provide exact version numbers, demo values, or a reproducible test case.
My shot in the dark: ODBC might be treating the number like a floating point type. Maybe an outdated driver version?
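If a driver did remap numeric to a C float behind the scenes, the artifacts would look exactly like the ones described. A minimal Python sketch of that hypothesis (the sample value is made up for illustration; this only models what a misbehaving driver might do):

```python
import struct
from decimal import Decimal

# Hypothetical exact value, as the application might hand it to ODBC.
original = Decimal("1234567.891")

# Round-trip through a single-precision C float, as an outdated or
# misconfigured driver might do if it maps NUMERIC to a float internally.
as_float4 = struct.unpack("f", struct.pack("f", float(original)))[0]

print(original)    # the exact value numeric itself would have preserved
print(as_float4)   # after the float round-trip the trailing digits change
```

The rounding direction depends only on the nearest representable float, which would explain why the artifacts look random relative to the number of decimal places.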

Related

Why NumberLong(9007199254740993) matches NumberLong(9007199254740992) in MongoDB from mongo shell?

This situation happens when the given number is big enough (greater than 9007199254740992); with more tests, I even found that many adjacent numbers can match a single number.
Not only does NumberLong(9007199254740996) match NumberLong("9007199254740996"), but so do NumberLong(9007199254740995) and NumberLong(9007199254740997).
When I want to act upon a record using its number, I can actually use three different adjacent numbers to get back the same record.
The accepted answer from here makes sense, I quote the most relevant part below:
Caveat: Don't try to invoke the constructor with a too large number, i.e. don't try db.foo.insert({"t" : NumberLong(1234657890132456789)}); Since that number is way too large for a double, it will cause roundoff errors. Above number would be converted to NumberLong("1234657890132456704"), which is wrong, obviously.
Here are some supplements to make things more clear:
Firstly, the mongo shell is a JavaScript shell, and JS does not distinguish between integer and floating-point values: all numbers in JS are represented as floating point values, so the mongo shell uses 64-bit floating point numbers by default. If the shell sees "9007199254740995", it treats it as a string and converts it to a 64-bit integer. But when we omit the double quotes, the shell sees the unquoted 9007199254740995 and treats it as a floating-point number.
Secondly, JS uses the 64-bit floating-point format defined in the IEEE 754 standard to represent numbers. The largest finite value it can represent is about 1.7976931348623157 × 10^308, and the smallest positive value is about 5 × 10^-324.
There are an infinite number of real numbers, but only a limited number of real numbers can be accurately represented in the JS floating point format. This means that when you deal with real numbers in JS, the representation of the numbers will usually be an approximation of the actual numbers.
This brings the so-called rounding error issue. Because integers are also represented in binary floating-point format, the reason for the loss of trailing digits precision is actually the same as that of decimals.
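To make the analogy concrete, here is a small illustrative check in Python, whose floats are the same IEEE 754 doubles JS uses: converting the double behind the literal 0.1 to an exact decimal exposes the approximation.

```python
from decimal import Decimal

# The literal 0.1 has no finite binary expansion; the double actually
# stored is only the nearest representable value, slightly above 0.1.
print(Decimal(0.1))

# The tiny representation errors surface as soon as you do arithmetic.
print(0.1 + 0.2 == 0.3)   # False
```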
The JS number format allows you to accurately represent all integers between -9007199254740992 (-2^53) and 9007199254740992 (2^53).
Here, since the numbers are bigger than 9007199254740992, rounding error certainly occurs. The binary representations of NumberLong(9007199254740995), NumberLong(9007199254740996) and NumberLong(9007199254740997) are identical, so when we query with any of these three numbers we are practically asking for the same thing. As a result, we get back the same record.
I think understanding that this problem is not specific to JS is important: it affects any programming language that uses binary floating point numbers.
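Indeed, the collapse of adjacent integers is easy to reproduce in any language with IEEE 754 doubles; a quick check in Python (whose floats are the same 64-bit format):

```python
# Above 2**53 the spacing between representable doubles grows to 2,
# so several adjacent integers round to the same double.
candidates = [9007199254740995, 9007199254740996, 9007199254740997]
as_doubles = [float(n) for n in candidates]

print(as_doubles)
print(len(set(as_doubles)))   # 1: all three collapse to one value
```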
You are misusing the NumberLong constructor.
The correct usage is to give it a string argument, as stated in the relevant documentation.
NumberLong("2090845886852")

T-SQL Data type for fixed precision and variable scale

I have a set of data with a precision of 16 digits; however, values range from very large numbers with all 16 digits to the left of the decimal point to very small numbers with all 16 digits to the right (e.g. 1234567890123456.0 and 0.1234567890123456). I am trying to figure out the correct ("best") data type to store this data in. I need to store the exact values and not approximations, so float and real are not viable options. Numeric or decimal seem appropriate, but I am getting hung up on the most efficient precision and scale to set: it seems I must go with (32,16) to account for both extremes, but that seems inefficient, as I would be requesting twice the storage I will ever use. Is there a better option?
Thank You for your assistance.

When does PostgreSQL round a double precision column type

I have a colleague of mine that keeps telling me not to use a double precision type for a PostgreSQL column, because I will eventually have rounding issues.
I am only aware of one case where a value gets stored as an approximation, which is when a number with "too many" decimal digits gets saved.
For example if I try to store the result of 1/3, then I will get an approximation.
On the other hand, he claims that the above is not the only case. He says that sometimes, even if the user tries to store a number with a well-defined number of digits, such as 84.2 or 3.124, the value might get saved as 84.19 or 3.1239 respectively.
This sounds very strange to me.
Could anyone give me an example/proof that the above can actually happen?
Your colleague is right: stay away from float and double. Not so much because of rounding issues, but because those are approximate data types: what you put into such a column is not necessarily what you get out.
If you care for precision and accurate values, use numeric.
A more detailed explanation about the pitfalls of approximate data types can be found here:
https://floating-point-gui.de/
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
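For a concrete proof of the 84.2 claim, Python's Decimal can display the exact binary value a double actually holds (Python floats are IEEE 754 doubles, the same 64-bit format as Postgres double precision):

```python
from decimal import Decimal

# Neither 84.2 nor 3.124 has a finite binary expansion, so the stored
# double is an approximation even for "well defined" inputs.
print(Decimal(84.2))    # slightly off from 84.2
print(Decimal(3.124))   # slightly off from 3.124
```

Whether a client then displays 84.2 or 84.19999... depends on its output rounding, which is exactly how such values can resurface looking "wrong".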

Numeric vs Real Datypes for Storing Monetary Values

An answer to a question about a good schema for stock data recommended this schema:
Symbol - char 6
Date - date
Time - time
Open - decimal 18, 4
High - decimal 18, 4
Low - decimal 18, 4
Close - decimal 18, 4
Volume - int
In addition, Postgres documentation says:
"If you require exact storage and calculations (such as for monetary amounts), use the numeric type instead (of floating point types)."
I'm fairly new at SQL, and I hope this is not a really naive question. I'm wondering about the necessity of using the numeric datatype (especially 18,4) - it seems like overkill to me. And "exact" is not really something I'd specify, if exact means correct out to 12 decimal places.
I'm thinking of using real 10,2 for the monetary columns. Here's my rationale.
A typical calculation might compare a stock price (2 decimal places) to a moving average (that could have many decimal places), to determine which is larger. My understanding is that the displayed value of the average (and any calculated results) would be rounded to 2 decimal places, but that calculations would be performed using the higher precision of the stored internal number.
So such a calculation would be accurate to at least 2 decimal places, which is really all I need, I think.
Am I way off base here, and is it possible to get an incorrect answer to the above comparison by using the real 10,2 datatype?
I'd also welcome any other comments, pro or con, about using the numeric datatype.
Thanks in advance.
Floating point variables are vulnerable to floating point errors. Therefore, if accuracy is important (and anytime money is involved, it is), it is always recommended to use a numeric type.
https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems
Floating point inaccuracy examples
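As a quick illustrative sketch (in Python, whose floats are IEEE 754 doubles) of how those errors accumulate with money-like values:

```python
# Adding ten cents a hundred times in binary floating point does not
# land exactly on 10, because 0.10 is not exactly representable.
total = sum([0.10] * 100)

print(total == 10.0)   # False
print(total)           # close to, but not exactly, 10
```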
Let's start with the schema above and look at how decimal 18,4 values fare as floating point numbers:
select '12345678901234.5678'::float4;
float4
-------------
1.23457e+13
(1 row)
select '12345678901234.5678'::double precision;
float8
------------------
12345678901234.6
(1 row)
Therefore, with 14 digits before the decimal point, the float types will always round your number, and you store rounded (and therefore wrong) values.
Also, where does your assumption about rounding to two decimal places come from?
select '1.2345678'::float4;
float4
---------
1.23457
(1 row)
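The same single-precision truncation can be reproduced outside the database; an illustrative Python sketch that forces values through a 32-bit float, the storage float4 uses (the psql output above is the authoritative behavior):

```python
import struct

def to_float4(x: float) -> float:
    """Round-trip a value through a 32-bit IEEE 754 float."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Only about 7 significant decimal digits survive the round-trip.
print(to_float4(12345678901234.5678))
print(to_float4(1.2345678))
```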
So far you have presented a number of assumptions and shortcuts without showing why you want to use floating point numbers instead of numeric. What is your compelling reason? Just to save some bytes?
My next question is: if your application expands and does more than just avg calculations, will you need to change the data type to numeric after all?

Selecting floating point numbers in decimal form

I've a small number in a PostgreSQL table:
test=# CREATE TABLE test (r real);
CREATE TABLE
test=# INSERT INTO test VALUES (0.00000000000000000000000000000000000000000009);
INSERT 0 1
When I run the following query it returns the number as 8.96831e-44:
test=# SELECT * FROM test;
r
-------------
8.96831e-44
(1 row)
How can I show the value in psql in its decimal form (0.00000000000000000000000000000000000000000009) instead of the scientific notation? I'd be happy with 0.0000000000000000000000000000000000000000000896831 too. Unfortunately I can't change the table and I don't really care about loss of precision.
(I've played with to_char for a while with no success.)
Real in Postgres is a floating point datatype, stored in 4 bytes, that is, 32 bits.
Your value,
0.00000000000000000000000000000000000000000009
cannot be precisely represented in a 32-bit IEEE 754 floating point number. You can check the exact values in this calculator.
You could try using double precision (64 bits) to store it; according to the calculator, that seems to be an exact representation. NOT TRUE: Patricia showed that it was just the calculator rounding the value, even though it was explicitly asked not to. Double would mean a bit more precision, but still no exact value, as this number is not representable using a finite number of binary digits. (Thanks, Patricia; a lesson learnt, again: don't believe what you see on the Intertubez.)
Under normal circumstances, you should use a NUMERIC(precision, scale) format, that would store the number precisely to get back the correct value.
However, the value you want to store seems to have a scale larger than Postgres allows (which seems to be 30) for exact decimal representations. If you don't need to do calculations with these values but just store them (not a very common situation, I admit), you could try storing them as strings... (but this is ugly...)
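To see what the real column actually holds, one can round-trip the literal through a 32-bit float in Python and expand it in decimal form (illustrative; the exact digits fall out of the IEEE 754 representation):

```python
import struct
from decimal import Decimal

# Round-trip the literal through a 32-bit float, as Postgres real does.
stored = struct.unpack("f", struct.pack("f", 9e-44))[0]

print(stored)             # scientific notation, much like psql shows
print(f"{stored:.50f}")   # fixed-point decimal form, as the asker wants
print(Decimal(stored))    # the exact value of the stored bits
```

This is essentially the formatting psql's default output and to_char are struggling with below.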
EDIT
This to_char problem seems to be a known bug...
Quote:
My immediate reaction to that is that float8 values don't have 57 digits
of precision. If you are expecting that format string to do something
useful you should be applying it to a numeric column not a double
precision one.
It's possible that we can kluge things to make this particular case work
like you are expecting, but there are always going to be similar-looking
cases that can't work because the precision just isn't there.
In a quick look at the code, the reason you just get "0." is that it's
rounding off after 15 digits to ensure it doesn't print garbage. Maybe
it could be a bit smarter for cases where the value is very much smaller
than 1, but it wouldn't be a simple change.
(from here)
However, I find this indefensible. IMHO a double (an IEEE 754 64-bit floating point, to be exact) always has about 15 significant decimal digits, if the value fits into the type...
Recommended reading:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Postgres numeric types
BUG #6217: to_char() gives incorrect output for very small float values