I am trying some operations on large numeric values such as 2^89.
The Postgres numeric data type can store up to 131072 digits before the decimal point and 16383 digits after it.
I tried something like this and it worked:
select 0.037037037037037037037037037037037037037037037037037037037037037037037037037037037037037037037037037::numeric;
But when I apply an operator, it rounds off the value after 15 digits.
select (2^89)::numeric(40,0);
numeric
-----------------------------
618970019642690000000000000
(1 row)
I know the value from elsewhere is:
>>> 2**89
618970019642690137449562112
Why is this strange behavior happening? It is not letting me enter numeric values beyond 15 digits into the database.
insert into x select (2^89-1)::numeric;
select * from x;
x
-----------------------------
618970019642690000000000000
(1 row)
Is there any way to circumvent this?
Thanks in advance.
bb23850
You should not cast the result but one of the operands, to make clear that this is a numeric operation rather than a floating-point one:
select (2^89::numeric);
Otherwise PostgreSQL takes the 2 and the 89 as type integer, and the ^ operator for them resolves to the double precision variant. A double precision value holds only about 15 significant digits, so the result is not exact at that size. Your cast is a cast of that already-rounded result, so it cannot restore the lost digits.
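The rounding can be reproduced in the same Python the questioner used for `>>> 2**89`: a 64-bit double, which is what the integer-operand `^` works in, has a 53-bit mantissa (roughly 15-17 significant decimal digits), so neighbouring integers near 2^89 collapse to the same float. A minimal sketch:

```python
# 2**89 - 1 needs 89 bits of precision; a double keeps only 53,
# so it rounds to the nearest representable value, which is exactly 2**89.
exact = 2**89 - 1
print(exact)                          # 618970019642690137449562111
print(float(exact) == float(2**89))   # True: both map to the same double
```

This is the same loss Postgres suffers before the cast to numeric is ever applied.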
Related
I am trying to SUM and cast at the same time. I have a column with big numbers with a lot of decimals, for example: "0.0000000000000000000000000000000000000000000043232137067129047"
When I try sum(amount::decimal) I get the following error message: org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [22003]: ERROR: value overflows numeric format Where: parallel worker
What I don't get is that the docs say: up to 131072 digits before the decimal point; up to 16383 digits after the decimal point
And my longest cast string is 63 digits, so I don't understand.
What am I missing, and how can I make my sum work?
EDIT:
amount type is varchar(255)
EDIT2:
I found out it only breaks when I try to CREATE a table from this query; the query works fine on its own. How can it be due to CREATE TABLE?
Complete request:
create table cross_dapp_ft as (select sender,receiver,sum(amount::decimal),contract from ft_transfer_event ftce
where receiver in (
select account_id from batch.cc cc
where classification not in ('ft')
)
group by sender,receiver,contract);
As Samuel Liew suggested in the comments, some rows were corrupted. Conclusion: to be safe, don't store numbers as strings.
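If you hit the same error, one way to locate the offending rows before fixing the data is to validate each string outside the database. A sketch using Python's decimal module (the helper name is hypothetical):

```python
from decimal import Decimal, InvalidOperation

def parses_as_decimal(s):
    """Return True if s looks like a value that would survive a ::decimal cast."""
    try:
        Decimal(s)
        return True
    except InvalidOperation:
        return False

print(parses_as_decimal("0.0000000000000000000000000000000000000000000043232137067129047"))  # True
print(parses_as_decimal("12,5"))  # False: a corrupted or locale-formatted value
```

Export the varchar column, run the check, and the rows that return False are the ones to repair.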
I am using PHP with PostgreSQL. I have the following query:
SELECT ra, de, concat(ra, de) AS com, count(*) OVER() AS total_rows
FROM mdust
LIMIT :pagesize OFFSET :starts
The columns ra and de are floats, and de can be positive or negative; however, de does not include the + sign for positive values (it does include the - sign for negatives). What I want is for the de column within concat(ra, de) to carry an explicit positive or negative sign.
I was looking at this documentation for PostgreSQL, which provides to_char(1, 'S9'), which is exactly what I want, but it only works for integers. I was unable to find such a function for floats.
to_char() works for float as well. You just have to define the desired output format. The simple pattern S9 would truncate fractional digits and fail for numbers > 9.
test=> SELECT to_char(float '0.123' , 'FMS9999990.099999')
test-> , to_char(float '123' , 'FMS9999990.099999')
test-> , to_char(float '123.123', 'FMS9999990.099999');
to_char | to_char | to_char
---------+---------+----------
+0.123 | +123.0 | +123.123
(1 row)
The added FM modifier stands for "fill mode" and suppresses insignificant trailing zeroes (unless forced by a 0 symbol instead of 9) and padding blanks.
Add as many 9s before and after the decimal point as digits you want to allow.
You can tailor the desired output format pretty much any way you want. Details in the manual here.
Aside: There are more efficient solutions for paging than LIMIT :pagesize OFFSET :starts:
Optimize query with OFFSET on large table
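As an aside for comparison (not part of the answer above): Python's format mini-language has the same "always print the sign" idea, where `+` corresponds to the S pattern and `g` drops insignificant trailing zeros much like the FM modifier:

```python
# '+' forces an explicit sign on positive numbers; 'g' suppresses trailing zeros.
for v in (0.123, 123.0, 123.123):
    print(f"{v:+g}")
# +0.123
# +123
# +123.123
```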
Is there a way in Amazon Redshift to convert a varchar column (with values such as A, B, D, M) to integer (1 for A, 2 for B, 3 for C... and so on)? I know Teradata has something like ASCII(), but that doesn't work in Redshift.
Note: My goal is to convert the varchar columns to numbers in my query and compare those two columns to see if the numbers are the same or different.
demo:db<>fiddle
Postgres:
SELECT
ascii(upper(t.letter)) - 64
FROM
table t
Explanation:
upper() converts the input to capital letters (to handle the different ASCII values of capital and lowercase letters)
ascii() converts the letters to their ASCII codes; the capital letters begin at number 65
decreasing the input by 64 shifts the ASCII starting point of 65 down to 1
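The steps above can be sketched in Python, where ord() plays the role of ascii():

```python
def letter_to_number(ch):
    # 'A' is ASCII 65, so subtracting 64 maps A->1 ... Z->26
    return ord(ch.upper()) - 64

print(letter_to_number("a"))  # 1
print(letter_to_number("M"))  # 13
```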
Redshift:
The ascii() function is marked as deprecated on Redshift (https://docs.aws.amazon.com/redshift/latest/dg/c_SQL_functions_leader_node_only.html)
So one possible (and more pragmatic) solution is to use a fixed alphabet string and return the index of a given letter:
SELECT
letter,
strpos('ABCDEFGHIJKLMNOPQRSTUVWXYZ', upper(t.letter))
FROM
table t
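The strpos() approach translates the same way; a sketch where str.find gives a 0-based index (hence the +1) and, conveniently, returns 0 for characters not in the alphabet, just like strpos:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def letter_pos(ch):
    # str.find returns -1 when missing, so +1 yields 0, matching strpos
    return ALPHABET.find(ch.upper()) + 1

print(letter_pos("d"))  # 4
print(letter_pos("?"))  # 0
```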
This is what I am observing:
q)type select date,time from table
98h
q)type select date,time,size from table
0h
q)select date,time,size from table
date time size
------------------------------------------------
2007.01.03 2007.01.03D09:31:00.000000000 200
2007.01.03 2007.01.03D09:31:00.000000000 313869
2007.01.03 2007.01.03D09:31:00.000000000 114852
2007.01.03 2007.01.03D09:31:00.000000000 566600
..
Why does the resulting table have type 0h? What is the meaning of it? Why does adding size to the query change the result type? Thank you.
It means a mixed list - https://code.kx.com/q/basics/datatypes/
Thus size is a mixed-type column. You can group the column by type and identify the offending indices by running:
exec i group type each size from table
To get it back to a typed column you will need to cast the values to your required type. Perhaps your size column has a mix of ints and longs, for example; just cast them to what you require (e.g. update size:`long$size from table).
I am new to Postgres, presently migrating from SQL Server, and facing some problems. Kindly help me with this.
I am not able to get a decimal result whenever the answer is a whole number:
the decimal conversion gives only the integer part as the answer.
For example: if the result is 48, decimal conversion gives 48, whereas I want 48.00.
You can start by using numeric(4,2) with an explicit scale instead of bare decimal, e.g.:
t=# select 48::numeric(4,2);
numeric
---------
48.00
(1 row)
or even:
t=# select 48*1.00;
?column?
----------
48.00
(1 row)
But keep in mind that not seeing zeroes after the decimal point does not mean the number is not decimal; e.g. here it is still a float:
t=# select 48::float;
float8
--------
48
(1 row)
You can round the value by using the statement
select round(48,2);
It will return 48.00. You can also round to more decimal places.
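The same fixed-scale idea exists outside SQL; in Python's decimal module, quantize plays the role of the cast to numeric(4,2) or round(..., 2), keeping the trailing zeros:

```python
from decimal import Decimal

# quantize fixes the scale to two fractional digits, so 48 prints as 48.00
print(Decimal(48).quantize(Decimal("0.01")))  # 48.00
```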