We have an existing column (type double precision) in our Postgres table and we want to convert that column's data type to numeric. We've tried the approaches below, but all of them truncated or lost the last decimal positions.
directly converting to numeric
converting to numeric with precision and scale
converting to text and then to numeric
converting to text only
The data loss looks like this: for example, if we have the value 23.291400909423828, then after altering the column data type it becomes 23.2914009094238, losing the last two decimal places.
Note: this happens only when the value has more than 13 digits to the right of the decimal point.
One way to possibly do this:
show extra_float_digits ;
extra_float_digits
--------------------
3
create table float_numeric(number_fld float8);
insert into float_numeric values (21.291400909423828), (23.291400909422436);
select * from float_numeric ;
number_fld
--------------------
21.291400909423828
23.291400909422435
alter table float_numeric alter COLUMN number_fld type numeric using number_fld::text::numeric;
\d float_numeric
Table "public.float_numeric"
Column | Type | Collation | Nullable | Default
------------+---------+-----------+----------+---------
number_fld | numeric | | |
select * from float_numeric ;
number_fld
--------------------
21.291400909423828
23.291400909422435
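The reason the text route keeps all the digits is that float8-to-text output honors extra_float_digits, whereas (before PostgreSQL 12) the direct float8-to-numeric cast rounds to 15 significant digits. A hedged one-query comparison, assuming extra_float_digits = 3 as above; the exact output depends on the server version:
select 23.291400909423828::float8::numeric       as direct_cast,  -- e.g. 23.2914009094238 on pre-12 servers
       23.291400909423828::float8::text::numeric as via_text;     -- 23.291400909423828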
Related
I have a column in a Postgresql table that is unique and character varying(10) type. The table contains old alpha-numeric values that I need to keep. Every time a new row is created from this point forward, I want it to be numeric only. I would like to get the max numeric-only value from this table for this column then create a new row with that max value incremented by 1.
Is there a way to query this table for the max numeric value only for this column?
For example, if this column currently has the values:
1111
A1111A
1234
1234A
3331
B3332
C-3333
33-D33
3**333*
Is there a query that will return 3333, AKA cut out all the non-numeric characters from the values and then perform a MAX() on them?
Not precisely what you're asking, but something that I think will work better for you.
To go over all the values in the column, strip the non-numeric characters, cast to integer, and return the max:
SELECT MAX(regexp_replace(my_column, '[^0-9]', '', 'g')::int) FROM public.foobar;
This gets you your max value... say 2999.
Now, going forward, consider making the default for your column a serial-like value, and convert it to text... that way you set the "MAX" once, and then let postgres do all the work for future values.
-- create simple integer sequence
CREATE SEQUENCE public.foobar_my_column_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
-- use new sequence as default value for column __and__ convert to text
ALTER TABLE foobar
ALTER COLUMN my_column
SET DEFAULT nextval('public.foobar_my_column_seq'::regclass)::text;
-- initialize "next value" of sequence to whatever is larger than
-- what you already have in your data ... say 3000:
ALTER SEQUENCE public.foobar_my_column_seq RESTART WITH 3000;
Because you're simply setting default, you don't change your current alpha-numeric values.
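As a quick sanity check (a hedged sketch; other_col is a made-up column name, since the question never shows the full table definition), a new row that omits my_column now picks up the next sequence value as text:
-- Hypothetical usage: my_column is filled from the sequence default.
INSERT INTO foobar (other_col) VALUES ('new row') RETURNING my_column;   -- e.g. '3000'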
I figured it out. The following query works.
select text_value, regexp_replace(text_value, '[^0-9]+', '') as new_value from the_table;
Result:
text_value | new_value
-----------------------+-------------
4*215474 | 4215474
740024 | 740024
4*100535 | 4100535
42356 | 42356
CASH |
4*215474 | 4215474
740025 | 740025
740026 | 740026
4*5089655798 | 45089655798
4*15680 | 415680
4*224034 | 4224034
4*265718708 | 4265718708
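To get the single max value the original question asks for, the same replace can be wrapped in MAX(). A hedged sketch that also guards against rows with no digits at all (such as CASH), which would otherwise make the cast fail, and uses bigint because some cleaned values exceed the integer range:
select MAX(NULLIF(regexp_replace(text_value, '[^0-9]+', '', 'g'), '')::bigint) as max_value
from the_table;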
I have a column that I want to get an average of; the column is varchar(200), and I keep getting the error shown below. How do I convert the column to numeric and get an average of it?
Values in the column look like
16,000.00
15,000.00
16,000.00 etc
When I execute
select CAST((COALESCE( bonus,'0')) AS numeric)
from tableone
... I get
ERROR: invalid input syntax for type numeric:
The standard way to represent (as text) a numeric in SQL is something like:
16000.00
15000.00
16000.00
So, your commas in the text are hurting you.
The most sensible way to solve this problem would be to store the data just as a numeric instead of using a string (text, varchar, character) type, as already suggested by a_horse_with_no_name.
However, assuming this is done for a good reason, such as having inherited a design you cannot change, one possibility is to get rid of all the characters that are not a minus sign, digit, or period before casting to numeric:
Let's assume this is your input data
CREATE TABLE tableone
(
bonus text
) ;
INSERT INTO tableone(bonus)
VALUES
('16,000.00'),
('15,000.00'),
('16,000.00'),
('something strange 25'),
('why do you actually use a "text" column if you could just define it as numeric(15,0)?'),
(NULL) ;
You can remove all the extraneous characters with a regexp_replace and the proper regular expression ([^-0-9.]), and do it globally:
SELECT
CAST(
COALESCE(
NULLIF(
regexp_replace(bonus, '[^-0-9.]+', '', 'g'),
''),
'0')
AS numeric)
FROM
tableone ;
| coalesce |
| -------: |
| 16000.00 |
| 15000.00 |
| 16000.00 |
| 25 |
| 150 |
| 0 |
See what happens to the 15,0 (this may NOT be what you want).
Check everything at dbfiddle here
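Since the question ultimately wants an average, the cleaned-up expression can simply be wrapped in AVG(). A hedged sketch against the same tableone (note that NULL and empty values are counted as 0 here, which lowers the average; drop the COALESCE if they should be ignored instead):
SELECT
  AVG(
    CAST(
      COALESCE(
        NULLIF(
          regexp_replace(bonus, '[^-0-9.]+', '', 'g'),
          ''),
        '0')
      AS numeric)
  ) AS avg_bonus
FROM
  tableone ;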
I'm going to go out on a limb and say that it might be because you have empty strings rather than NULLs in your column; that would result in the error you are seeing. Try wrapping the column name in a NULLIF:
SELECT CAST(coalesce(NULLIF(bonus, ''), '0') AS integer) as new_field
But I would really question your schema that you have numeric values stored in a varchar column...
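A hedged illustration of the empty-string point: an empty string is not valid numeric input, while NULLIF turns it into NULL and COALESCE supplies a default:
-- SELECT ''::numeric;   -- ERROR: invalid input syntax for type numeric: ""
SELECT COALESCE(NULLIF('', ''), '0')::numeric;   -- 0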
As stated in the Postgres 9.0 documentation, the double precision data type has a precision of 15 decimal digits and a storage size of 8 bytes, so an integer number larger than a normal bigint (8 bytes) stored in a double precision field is approximated. Correct me if I'm wrong; I say larger than a normal bigint because if you try to cast this number to bigint you get this error:
select 211116514527303268704::bigint;
>> ERROR: bigint out of range
When you convert this number to both double precision and numeric and compare them, they appear to be equal:
select 211116514527303268704::numeric,
211116514527303268704::double precision,
(211116514527303268704::double precision) = (211116514527303268704::numeric);
+-----------------------+----------------------+---------+
| numeric | float8 | boolean |
+-----------------------+----------------------+---------+
| 211116514527303268704 | 2.11116514527303e+20 | t |
+-----------------------+----------------------+---------+
and with the to_char() function they return different values:
select
trim(to_char((211116514527303268704::double precision),'999,999,999,999,999,999,999')),
trim(to_char((211116514527303268704::numeric),'999,999,999,999,999,999,999'));
+-----------------------------+-----------------------------+
| text | text |
+-----------------------------+-----------------------------+
| 211,116,514,527,303,270,400 | 211,116,514,527,303,268,704 |
+-----------------------------+-----------------------------+
As you can see, the value returned by the to_char/numeric combination is correct, but the to_char/double precision one loses precision, reflecting the rounded exponential representation 2.11116514527303e+20.
I'm not sure whether it affects anything, but the lc_numeric locale is 'es_PY.utf8'.
Is it completely pointless to use double precision in this case, or is there an alternative that lets me keep those double precision fields? If numeric is always the preferred option, is there some kind of cast from double precision to numeric that keeps all the original digits?
For additional information I have a PostgreSQL 9.0 installation running in a CentOS 6 x86-64 server.
The reason is that for the purpose of the equality comparison, the type with the higher resolution is cast to the type of the lower resolution. I.e.: in the example the numeric is cast to double precision.
Demo:
SELECT *
, num = dp AS no_cast
, num::float8 = dp AS dp_cast
, num = dp::numeric AS num_cast
FROM (
SELECT numeric '211116514527303268705' AS num
, float8 '211116514527303268705' AS dp
) t;
num | dp | no_cast | dp_cast | num_cast
----------------------+-----------------------+---------+---------+---------
211116514527303268705 | 2.11116514527303e+020 | t | t | f
float8 is an alias of double precision.
Note that for other calculations, like addition, the type with the higher resolution is the result - which is a logic necessity. (Here the result is boolean anyway.)
I'm trying to change a column type from "character varying(15)" to an integer.
If I run "=#SELECT columnX from tableY limit(10);" I get back:
columnX
----------
34.00
12.00
7.75
18.50
4.00
11.25
18.00
16.50
If I run "=#\d+ columnX" I get back:
Column | Type | Modifiers | Storage | Description
columnX | character varying(15) | not null | extended |
I've searched high and low and asked on the PostgreSQL IRC channel, but no one could figure out how to change it. I've tried:
ALTER TABLE race_horserecord ALTER COLUMN win_odds TYPE integer USING (win_odds::integer);
Also:
ALTER TABLE tableY ALTER COLUMN columnX TYPE integer USING (trim("columnX")::integer);
Every time I get back:
"ERROR: invalid input syntax for integer: "34.00""
Any help would be appreciated.
Try USING (win_odds::numeric::integer).
Note that it will round your fractional values (e.g., '7.75'::numeric::integer = 8).
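Applied to the statement from the question, that looks like the sketch below; the rounding caveat still applies, and trunc() can be used inside the USING clause if truncation toward zero is wanted instead:
ALTER TABLE race_horserecord
  ALTER COLUMN win_odds TYPE integer
  USING (win_odds::numeric::integer);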
I have a varchar column in Postgres 8.3 that holds values like: '0100011101111000'
I need a function that would consider that string to be a number in base 2 and spits out the numeric in base 10. Makes sense?
So, for instance:
'000001' -> 1.0
'000010' -> 2.0
'000011' -> 3.0
Thanks!
Cast to a bit string then to an integer.
An example:
'1110'::bit(4)::integer -> 14
You had varying-length examples, though, and were after bigint, so instead use bit(64) and pad the input with zeroes using the lpad function.
lpad('0100011101111000',64,'0')::bit(64)::bigint
Here's a complete example...
create temp table examples (val varchar(64));
insert into examples values('0100011101111000');
insert into examples values('000001');
insert into examples values('000010');
insert into examples values('000011');
select val,lpad(val,64,'0')::bit(64)::bigint as result from examples;
The result of the select is:
val | result
------------------+--------
0100011101111000 | 18296
000001 | 1
000010 | 2
000011 | 3
(4 rows)
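Since the question asked for a function, the same expression can be wrapped in a small SQL function. A hedged sketch (the name bin_to_num is made up here; it casts to numeric to match the 1.0 / 2.0 style output in the question):
CREATE OR REPLACE FUNCTION bin_to_num(varchar) RETURNS numeric AS $$
  -- Pad to 64 bits, reinterpret as a bit string, then as bigint, then numeric.
  SELECT lpad($1, 64, '0')::bit(64)::bigint::numeric;
$$ LANGUAGE sql IMMUTABLE;

select bin_to_num('0100011101111000');   -- 18296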