Can the Postgres data type NUMERIC store signed values? - postgresql

In PostgreSQL, I would like to store signed values -999.9 - 9999.9.
Can I use numeric(5.1) for this?
Or what type should I use?

You can certainly use the arbitrary precision type numeric with a precision of 5 and a scale of 1, just like @Simon commented, but without the syntax error. Use a comma (,) instead of a dot (.) in the type modifier:
SELECT numeric(5,1) '-999.9' AS nr_lower
     , numeric(5,1) '9999.9' AS nr_upper;

 nr_lower | nr_upper
----------+----------
   -999.9 |   9999.9
The minus sign and the dot in the string literal do not count against the allowed maximum of significant digits (precision).
If you don't need to restrict the length, just use numeric.
If you need to enforce minimum and maximum, add a check constraint:
CHECK (nr_column BETWEEN -999.9 AND 9999.9)
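For example, a minimal table using that constraint (the table name is a placeholder):
CREATE TABLE tbl (
  nr_column numeric CHECK (nr_column BETWEEN -999.9 AND 9999.9)
);
INSERT INTO tbl VALUES (-999.9);   -- OK
INSERT INTO tbl VALUES (-1000.0);  -- rejected by the CHECK constraint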
numeric stores your number exactly. If you don't need the absolute precision and tiny rounding errors are no problem, you might also use one of the floating point types double precision (float8) or real (float4).
Or, since you only allow a single fractional decimal digit, you can multiply by 10 and use integer, which would be the most efficient storage: 4 bytes, no rounding errors and fastest processing. Just apply the factor consistently and document it properly, as sketched below.
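A sketch of that approach for the same range (names are placeholders):
-- store tenths as integer: -9999 .. 99999 maps to -999.9 .. 9999.9
CREATE TABLE tbl_int (
  nr_tenths integer CHECK (nr_tenths BETWEEN -9999 AND 99999)
);
INSERT INTO tbl_int VALUES (-9999);                 -- represents -999.9
SELECT nr_tenths / 10.0 AS nr_value FROM tbl_int;   -- convert back for display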
Details for numeric types in the manual.

Related

PureScript - Convert a number to an integer?

PureScript contains a function, fromNumber, in the Int library (Data.Int).
Here is an example of how it might be used:
myInteger = fromMaybe 0 (fromNumber myNumber)
However the docs provide this puzzling explanation:
Creates an Int from a Number value. The number must already be an integer and fall within the valid range of values for the Int type otherwise Nothing is returned.
Basically, your number must already be an Integer to convert it to an integer.
Assuming your number is not already an integer, a reasonable use case, how would you convert it to an Int?
If it's not already an integer, there is no one true way of converting it to an integer. You could round to the nearest integer, round up, round down, do banker's rounding, or some sort of crazy conversion scheme of your own.
The Data.Int module offers several functions for different conversion strategies, such as floor, ceil, and round.
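For illustration, a minimal sketch using those functions (assuming the integers and console packages):
module Main where

import Prelude
import Effect (Effect)
import Effect.Console (logShow)
import Data.Int (floor, ceil, round)

main :: Effect Unit
main = do
  logShow (floor 2.7)  -- 2
  logShow (ceil 2.1)   -- 3
  logShow (round 2.5)  -- 3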

Numeric overflow in insert query

I am receiving an error of:
Arithmetic overflow or division by zero has occurred. arithmetic
exception, numeric overflow, or string truncation. numeric value is
out of range.
This can be replicated with:
create table testing (avalue numeric(3,2));
and the following insert:
insert into testing values (328);
However, using the following works fine:
insert into testing values (327);
328 seems to be the magic figure at which the error occurs. To me, the numeric(3,2) declaration should allow 000-999 with 2 decimal places, but based on the above that is wrong.
Can someone explain why this is, and what I should declare my domain as if I want to allow 0-999 with 2 decimal places?
Thanks
328 is not a "magic" number :)
The magic number is 32767 (0x7FFF), which is the SMALLINT type limit.
Note: Firebird does not support unsigned integer types.
The limit for the NUMERIC type varies according to the storage type and scale.
The internal storage type is SMALLINT, INTEGER or BIGINT, chosen by precision:
precision 1..4   - SMALLINT
precision 5..9   - INTEGER
precision 10..18 - BIGINT
So NUMERIC(3,2) uses the SMALLINT internal type, with a maximum of 32767 / 100 = 327.67.
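That boundary is easy to verify, following the same arithmetic:
create table testing (avalue numeric(3,2));
insert into testing values (327.67);  -- OK: 32767 / 100, the SMALLINT-backed maximum
insert into testing values (327.68);  -- fails: 32768 exceeds the internal SMALLINT storage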
Update
The Firebird 2.5 Language Reference by Paul Vinkenoog, Dmitry Yemanov and Thomas Woinke contains a more comprehensive description of the NUMERIC type than the other official Firebird documents:
NUMERIC (precision, scale) is the exact number with the decimal
precision and scale specified by the precision and scale arguments.
Syntax:
NUMERIC [precision [, scale]]
The scale of NUMERIC is the count of decimal digits in the
fractional part, to the right of the decimal point. The precision of
NUMERIC is the total count of decimal digits in the number.
The precision must be positive, the maximum supported value is 18.
The scale must be zero or positive, up to the specified precision.
If the scale is omitted, then zero value is implied, thus
meaning an integer value of the specified precision, i.e.
NUMERIC (P) is equivalent to NUMERIC (P, 0). If both the precision and
the scale are omitted, then precision of 9 and zero scale are implied,
i.e. NUMERIC is equivalent to NUMERIC (9, 0).
The internal representation of the NUMERIC data type may vary.
Numerics with the precision up to (and including) 4 are always stored
as scaled short integers (SMALLINT). Numerics with the precision up to
(and including) 9 are always stored as scaled regular integers
(INTEGER). Storage of higher precision numerics depends on the SQL
dialect. In Dialect 3, they are stored as scaled large integers
(BIGINT). In Dialect 1, however, large integers are not available,
therefore they are stored as double precision floating-point values
(DOUBLE PRECISION).
The effective precision limit for the given value depends on the
corresponding storage. For example, NUMERIC (5) will be stored as
INTEGER, thus allowing values in the precision range up to (and
including) NUMERIC (9). So beware that the declared precision is not
strictly enforced.
Values outside the range limited by the effective precision are not
allowed. Values with the scale larger than the declared one will be
rounded to the declared scale while performing an assignment.
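A short sketch of what "not strictly enforced" means in practice (Dialect 3, names are placeholders):
create table t (n numeric(5));
insert into t values (99999);    -- within the declared precision
insert into t values (123456);   -- also accepted: the effective limit is the INTEGER range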
The declaration numeric(5, 2) gives you numbers from 0.00 to 999.99. The declaration numeric(3,2) gives you numbers from 0.00 to 9.99. This is sort-of illustrated here. But these are the standard declarations for numerics in SQL.
The "3" is the scale, which is the total number of digits in the number, not the number to the left of the decimal place.
I'm not sure why 327 is allowed.

PostgreSQL adds trailing zeros to numeric

Recently I migrated a DB to PostgreSQL that has some columns defined as numeric(9,3) and numeric(9,4). In testing the app I have found that when data is saved to these columns there are trailing zeros being added to the value inserted. I am using Hibernate, and my logs show the correct values being built for the prepared statements.
An example of the data I am inserting is 0.75 in the numeric(9,3) column and the value stored is 0.750. Another example for the numeric(9,4) column: I insert the value 12 and the DB is holding 12.0000.
I found this related question: postgresql numeric type without trailing zeros. But it did not offer a solution other than to quote the 9.x documentation saying trailing zeros are not added. From that question, the answer quoted the docs (which I have also read) which said:
Numeric values are physically stored without any extra leading or
trailing zeroes. Thus, the declared precision and scale of a column
are maximums, not fixed allocations.
However, like that question poster, I see trailing zeros being added. The raw insert generated by Hibernate in the logs does not show this extra baggage. So I am assuming it is a PostgreSQL thing I have not set correctly; I just can't find how I got it wrong.
I think this is it, if I am understanding "coerce" correctly in this context. This is from the PostgreSQL docs:
Both the maximum precision and the maximum scale of a numeric column
can be configured. To declare a column of type numeric use the syntax:
NUMERIC(precision, scale)
The precision must be positive, the scale zero or positive.
Alternatively:
NUMERIC(precision)
selects a scale of 0. Specifying:
NUMERIC
without any precision or scale creates a column in
which numeric values of any precision and scale can be stored, up to
the implementation limit on precision. A column of this kind will not
coerce input values to any particular scale, whereas numeric
columns with a declared scale will coerce input values to that scale.
Bold emphasis mine.
So it is misleading later in the same section:
Numeric values are physically stored without any extra leading or
trailing zeroes. Thus, the declared precision and scale of a column
are maximums, not fixed allocations.
Bold emphasis mine again.
This may be true of the precision part, but since the scale is being coerced when it is defined, trailing zeros are being added to the input values to meet the scale definition (and I would assume truncated if too large).
I am using precision,scale definitions for constraint enforcement. It is during the DB insert that the trailing zeros are being added to the numeric scale, which seems to support the coercion and conflicts with the statement of no trailing zeros being added.
Correct or not, I had to handle the problem in code after the select is made. Lucky for me the impacted attributes are BigDecimal so stripping trailing zeros was easy (albeit not graceful). If someone out there has a better suggestion for not having PostgreSQL add trailing zeros to the numeric scale on insert, I am open to them.
If you specify a precision and scale, Pg pads to that precision and scale.
regress=> SELECT '0'::NUMERIC(8,4);
 numeric
---------
  0.0000
(1 row)
There's no way to turn that off. It's still the same number, and the precision is defined by the type, not the value.
If you want to have the precision defined by the value you have to use unconstrained numeric:
regress=> SELECT '0'::NUMERIC, '0.0'::NUMERIC;
 numeric | numeric
---------+---------
       0 |     0.0
(1 row)
You can strip trailing zeros with the trim_scale function from PostgreSQL v13 on. That will reduce the storage size of the number.
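For example (PostgreSQL 13 or later):
SELECT trim_scale(0.7500::numeric(9,4));   -- 0.75
SELECT trim_scale(12.0000::numeric(9,4));  -- 12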

How must I use the digits function in MATLAB

I have code that uses the double function several times to convert sym to double. To increase precision, I want to use the digits function.
I want to know whether it is enough to write digits once at the top of the code, or whether I must write digits above every double call.
digits sets the precision until it is changed again. Calling digits() without any input returns the current precision, so you can verify it is set correctly.
In many cases digits has absolutely no influence on symbolic variables because an analytical solution is found. This means there are no precision errors unless you convert to double. When converting, digits should be set to at least 16 because this matches double precision.
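A minimal sketch, assuming the Symbolic Math Toolbox (the values are only illustrative):
digits(32)        % set the vpa precision to 32 significant digits for everything that follows
digits            % called with no input: shows the current precision, here 32
x = vpa(pi);      % symbolic value evaluated at the current precision
y = double(x);    % conversion to double keeps at most ~16 significant digits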

How to set precision and scale in ALTER TABLE

I have working code with PostgreSQL 9.3:
ALTER TABLE meter_data ALTER COLUMN w3pht TYPE float USING (w3pht::float);
but don't know how to set precision and scale.
The type float does not have precision and scale. Use numeric(precision, scale) instead if you need that.
Per documentation:
The data types real and double precision are inexact, variable-precision numeric types.
For your given example:
ALTER TABLE meter_data ALTER COLUMN w3pht TYPE numeric(15,2)
USING w3pht::numeric(15,2) -- may or may not be required
The manual:
A USING clause must be provided if there is no implicit or assignment cast from old to new type.
Example: if the old data type is text, you need the USING clause. If it's float, you don't.
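To illustrate both cases with the same column (the old type is noted in the comments; this is only a sketch):
-- old type text: no assignment cast to numeric exists, so an explicit USING is required
ALTER TABLE meter_data ALTER COLUMN w3pht TYPE numeric(15,2) USING w3pht::numeric(15,2);
-- old type float: an assignment cast exists, so USING can be omitted
ALTER TABLE meter_data ALTER COLUMN w3pht TYPE numeric(15,2);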
As per the PostgreSQL documentation, you can select the minimum precision for floating-point numbers using the syntax float(n), where n is the minimum number of binary digits, up to 53.
However, to store decimal values exactly, use numeric(precision, scale) or its synonym decimal(precision, scale), but notice that these are hard limits; according to the documentation:
If the scale of a value to be stored is greater than the declared
scale of the column, the system will round the value to the specified
number of fractional digits. Then, if the number of digits to the left
of the decimal point exceeds the declared precision minus the declared
scale, an error is raised.
Thus your alter table could be:
ALTER TABLE meter_data
ALTER COLUMN w3pht TYPE numeric(10, 2)
USING (w3pht::numeric(10, 2));
for 2 digits right of decimal point and 10 total digits. However if you do not
need to specify limits, simple numeric will allow "up to 131072 digits before the decimal point; up to 16383 digits after".
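A quick way to see the rounding and overflow behavior described above, using casts to numeric(10, 2):
SELECT 123.456::numeric(10, 2);        -- rounded to 123.46
SELECT 12345678.999::numeric(10, 2);   -- rounded to 12345679.00, still 10 digits in total
SELECT 123456789.99::numeric(10, 2);   -- ERROR: numeric field overflow (9 digits left of the point > 10 - 2)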