In Postgres I am trying to create a view from 2 tables. When the value '0' is supplied for the EAST_LONGITUDE_NMBR column of datatype NUMERIC[24,20] in the lower portion of a UNION SELECT statement, an error message is generated.
The view EXTENTS' column EAST_LONGITUDE_NMBR comes from the table column CELL_EXTENT.EAST_LONGITUDE_NMBR, which has a datatype of NUMERIC[24,20].
The following is the code.
CREATE VIEW EXTENTS
(
ID,
EXTENT_TYPE,
NAME,
EAST_LONGITUDE_NMBR
)
AS
SELECT
"CELL_EXTENT"."CELL_ID_NMBR",
'CELL',
UPPER ("CELL_EXTENT"."CELL_NAME"),
"CELL_EXTENT"."EAST_LONGITUDE_NMBR"
FROM "EARTH"."CELL_EXTENT"
UNION
(SELECT
"AREA_INTEREST"."AREA_ID_NMBR",
'GEOPOLITICAL',
UPPER ("AREA_INTEREST"."AREA_NAME"),
0
FROM "EARTH"."AREA_INTEREST");
The value '0' in the lower UNION SELECT causes the following error when creating the view EXTENTS.
ERROR: UNION types numeric[] and integer cannot be matched
I have tried the following and received the errors shown:
0 ERROR: UNION types numeric[] and integer cannot be matched
0.0 ERROR: UNION types numeric[] and numeric cannot be matched
0.0::NUMERIC[] ERROR: cannot cast type numeric to numeric[]
0::NUMERIC[] ERROR: cannot cast type integer to numeric[]
I have checked numerous websites with discussions about the Postgres datatypes, particularly NUMERIC, NUMERIC[], INTEGER, and DECIMAL:
Difference between DECIMAL and NUMERIC datatype in PSQL
https://github.com/npgsql/npgsql/issues/655
https://www.cybertec-postgresql.com/en/mapping-oracle-datatypes-to-postgresql/
http://www.postgresqltutorial.com/postgresql-cast/
http://www.postgresqltutorial.com/postgresql-to_number/
I could go on, but you get the picture. There is a lot about datatypes, but there are no examples of '0' as an actual value in Postgres code for a column of datatype NUMERIC[] in a UNION statement.
I feel this is a simple fix, a couple of keystrokes here or there to set the value properly, but it eludes me. I am using pgAdmin 4.
Can you help?
Thanks,
Margaret
Seems easy: use an array instead of 0.
Depending on what you prefer, you could use
ARRAY[]::numeric[] -- empty array
or
ARRAY[0]::numeric[] -- array with a single 0
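For example, if the column really is of array type numeric[], the lower branch of the UNION could supply an array literal instead of the plain 0. A minimal sketch of that branch (table and column names taken from the question):

SELECT
"AREA_INTEREST"."AREA_ID_NMBR",
'GEOPOLITICAL',
UPPER ("AREA_INTEREST"."AREA_NAME"),
ARRAY[0]::numeric[]   -- or ARRAY[]::numeric[] for an empty array
FROM "EARTH"."AREA_INTEREST"

With that change both UNION branches yield numeric[] for that column, so the type-mismatch error should go away.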
I have a task to create a Liquibase migration to change a column affext in table trp_order_sold, which is currently int8, to varchar (or any other text type if that is more feasible).
The script I made is the following:
ALTER TABLE public.trp_order_sold
ALTER COLUMN affext SET DATA TYPE VARCHAR
USING affext::varchar;
I expected the USING affext::text; part to work as a converter; however, with or without it I get this error:
ERROR: operator does not exist: varchar >= integer
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Any hints on what I'm doing wrong? Also, I am writing a PostgreSQL script, but a working XML equivalent would be fine for me as well.
The error comes from something that uses or depends on your column. Most typically that would be:
a generated column
a trigger
a trigger's when condition
a view or a rule
a check constraint
In my test (online demo) only the last one leads to the error you showed:
create table test_table(col1 int);
--CREATE TABLE
alter table test_table add constraint test_constraint check (col1 >= 1);
--ALTER TABLE
alter table test_table alter column col1 type text using col1::text;
--ERROR: operator does not exist: text >= integer
--HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
You'll have to check the constraints on your table with the \d+ command in psql, or by querying the system catalogs:
SELECT con.*
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel
ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp
ON nsp.oid = con.connamespace
WHERE nsp.nspname = 'your_table_schema'
AND rel.relname = 'your_table_name';
Then you will need to drop the constraint causing the problem and build a new one to work with your new data type.
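Continuing the demo table above, the sequence could look like this (a sketch only; your real constraint name and condition will differ):

alter table test_table drop constraint test_constraint;
alter table test_table alter column col1 type text using col1::text;
-- recreate an equivalent check for the new type; note that >= now compares
-- text lexically, so the condition itself may need rethinking
alter table test_table add constraint test_constraint check (col1 >= '1');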
Since integer 20 goes before integer 100, but text '20' goes after text '100', if you plan to keep the old ordering behaviour you'd need this type of cast:
case when affext<0 then '-' else '0' end||lpad(ltrim(affext::text,'-'),10,'0')
and then make sure new incoming affext values are cast accordingly in an insert and update trigger. Or use a numeric ICU collation similar to this.
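Applied to the ALTER statement from the question (after dropping whatever constraint is in the way), that cast could go directly into the USING clause; a sketch:

ALTER TABLE public.trp_order_sold
ALTER COLUMN affext SET DATA TYPE VARCHAR
USING case when affext < 0 then '-' else '0' end
      || lpad(ltrim(affext::text, '-'), 10, '0');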
I have a Postgres table that I loaded from a MongoDB collection.
Although the Postgres column is of type 'bigint', there are rows whose values are larger than the maximum bigint, so when I try to update another table from this table, it errors out. There are also bigint columns with illegal characters, such as "_2131441", which I cleared via
WHERE col_name !~ '^([0-9]+[.]?[0-9]*|[.][0-9]+)$';
How can I force cast an entire column to be valid according to its type, and set the value to NULL otherwise?
Use a CASE expression:
CASE WHEN col_name !~ '^(\+|-)?[[:digit:]]+$'
THEN NULL::bigint
WHEN col_name::numeric NOT BETWEEN -9223372036854775808 AND 9223372036854775807
THEN NULL::bigint
ELSE col_name::bigint
END
Note that bigint is an integer type and does not allow a decimal separator.
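For example, used in the UPDATE that feeds the other table (target_table, source_table, clean_col and id are placeholder names, not from the question):

UPDATE target_table t
SET    clean_col = CASE WHEN s.col_name !~ '^(\+|-)?[[:digit:]]+$'
                        THEN NULL::bigint
                        WHEN s.col_name::numeric NOT BETWEEN -9223372036854775808 AND 9223372036854775807
                        THEN NULL::bigint
                        ELSE s.col_name::bigint
                   END
FROM   source_table s
WHERE  t.id = s.id;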
I know someone asked the same question in PostgreSQL sum typecasting as a bigint a while ago, but I don't see that it was answered. I am adding up the values of an integer column using the sum function, but it overflows when I add two values of about 1.5 billion each. I want the sum result to be bigint. Is there any way to achieve it? Thanks in advance. I tried the following, but it didn't work:
sum(count)::bigint AS total
If I do the following, I still get an error:
sum(count::bigint) AS total
Caused by: org.postgresql.util.PSQLException: ERROR: cannot change data type of column "total" from integer to numeric
You should cast before summing. That is:
sum(count::bigint) as total
In Postgres, sum(integer) and sum(bigint) are different functions which return, respectively, integer and bigint.
In fact, all Postgres functions are identified not only by their name but by the combination of their name and their argument types.
If you don't cast first, you end up using the integer version of sum(), which always returns integer, even if you later cast the result to bigint. If its result overflows, you can't cast the overflow to bigint.
EDIT: As abelisto rightly points out, sum() already returns bigint for smallint and integer inputs. But, as I can see, your error message says "cannot change data type of column "total" from integer to numeric". As far as I understand, "total" is the result of the whole operation, so it should be bigint (even if it overflows).
I'm not sure whether it refers to the "count" column, which (after the operation) is labeled as "total" (that puzzles me), or whether it is simply saying that it can't cast numeric to bigint (which seems more plausible to me). It depends on the actual type of the count column. Is it already bigint or numeric?
If it is, the problem is probably in trying to cast a very large value of numeric type as bigint.
Can you tell us the exact type of the "count" column? And better than that: can you provide a failing example with a literal value?
Something like this (but I only got a "bigint out of range" error...):
somedb=> with foo as (
select 1000000000 as a
union select 231234241234123
union select 99999999999999999999999
) select sum(a) from foo;
sum
--------------------------
100000000231235241234122
(1 row)
somedb=> with foo as (
select 1000000000 as a
union select 231234241234123
union select 99999999999999999999999
) select sum(a)::bigint from foo;
ERROR: bigint out of range
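To see which return types are actually involved, a quick check with pg_typeof (my own illustration, not taken from the question):

select pg_typeof(sum(x::int))     as sum_of_int,     -- bigint
       pg_typeof(sum(x::bigint))  as sum_of_bigint,  -- numeric
       pg_typeof(sum(x::numeric)) as sum_of_numeric  -- numeric
from (values (1), (2)) as t(x);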
I have a table named "temp_table" and a column named "temp_column" of type varchar. The problem is "temp_column" must be of type integer. If I just alter the column to type integer, it generates an error since some rows have non-numeric data in them.
I want a query that will show all rows where "temp_column" has non-numeric values in it (or the other way around) and update or SET the value accordingly. I'm having a hard time since ISNUMERIC is not available in PostgreSQL.
How can I do this?
This will show all rows where you have non-integer values in that column. It uses a regular expression to find all values that contain anything other than digits:
select *
from temp_table
where temp_column ~ '[^0-9]';
this can also be used in an update statement:
update temp_table
set temp_column = null
where temp_column ~ '[^0-9]';
Note that this also catches "numeric" values like 3.14, as those aren't valid integers either.
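Since the stated goal is to end up with an integer column, the cleanup and the type change could be combined like this (a sketch, assuming NULL is an acceptable replacement for the bad values):

update temp_table
set temp_column = null
where temp_column ~ '[^0-9]'
   or temp_column = '';   -- empty strings can't be cast to integer either

alter table temp_table
alter column temp_column type integer
using temp_column::integer;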
Yesterday we had a PostgreSQL database upgraded to version 9.1.3. We thought we had everything tested and ready, but there is a function we missed. It returns a table type like this:
CREATE OR REPLACE FUNCTION myfunc( patient_number varchar
, tumor_number_param varchar, facility_number varchar)
RETURNS SETOF patient_for_registrar
LANGUAGE plpgsql
AS
$body$
BEGIN
RETURN QUERY
SELECT cast(nfa.patient_id_number as varchar),
...
I only give the first column of the select because that is where the error happens. Before today this function ran fine, but now it gives this error:
ERROR: structure of query does not match function result type
Detail: Returned type character varying does not match expected type
character varying(8) in column 1. Where: PL/pgSQL function
"getwebregistrarpatient_withdeletes" line 3 at RETURN QUERY [SQL
State=42804]
The column nfa.patient_id_number is text and is being cast to match the column patient_id_number in patient_for_registrar, which is varchar(8). After reading about this a bit, I think the problem is that the column length isn't being specified when casting from text. But I've tried various combinations of substrings to fix this and none of them solve the problem:
substring(cast(nfa.patient_id_number as varchar) from 1 for 8),
cast(substring(nfa.patient_id_number from 1 for 8) as varchar),
cast(substring(nfa.patient_id_number from 1 for 8) as varchar(8)),
Does anyone have any pointers?
Your function ..
RETURNS SETOF patient_for_registrar
The returned row type must match the declared type exactly. You did not disclose the definition of patient_for_registrar, probably the associated composite type of a table. I quote the manual about Declaration of Composite Types:
Whenever you create a table, a composite type is also automatically
created, with the same name as the table, to represent the table's row
type.
If the first column of that type (table) is defined as varchar(8) (with length modifier), as the error message indicates, you have to return varchar(8) with the same length modifier; plain varchar won't do. It is irrelevant for that matter whether the string length is only 8 characters; the data type has to match.
varchar, varchar(n) and varchar(m) are different data types for PostgreSQL.
Older versions did not enforce the type modifiers, but with PostgreSQL 9.0 this was changed for plpgsql:
PL/pgSQL now requires columns of composite results to match the
expected type modifier as well as base type (Pavel Stehule, Tom Lane)
For example, if a column of the result type is declared as
NUMERIC(30,2), it is no longer acceptable to return a NUMERIC of some
other precision in that column. Previous versions neglected to check
the type modifier and would thus allow result rows that didn't
actually conform to the declared restrictions.
Two basic ways to fix your problem:
You can cast the returned values to match the definition of patient_for_registrar:
nfa.patient_id_number::varchar(8)
Or you can change the RETURNS clause. I would use RETURNS TABLE and declare a matching composite type. Here is an example.
RETURNS TABLE (patient_id_number varchar, col2 some_type, ...)
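Applied to the function from the question, the header could look like this (a sketch only; the remaining output columns, elided here as in the question, would all have to be listed with matching types):

CREATE OR REPLACE FUNCTION myfunc( patient_number varchar
, tumor_number_param varchar, facility_number varchar)
RETURNS TABLE (patient_id_number varchar, ...)   -- plain varchar, no length modifier
LANGUAGE plpgsql
AS
$body$
BEGIN
RETURN QUERY
SELECT nfa.patient_id_number::varchar,
...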
As an aside: I never use varchar if I can avoid it, especially not with a length modifier. It offers hardly anything that the type text couldn't do. If I need a length restriction, I use a column constraint, which can be changed without rewriting the whole table.
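For example (my own illustration of that approach, with placeholder names):

CREATE TABLE example_tbl (
    id_number text CONSTRAINT id_number_len CHECK (length(id_number) <= 8)
);

-- the limit can later be changed without a table rewrite:
ALTER TABLE example_tbl DROP CONSTRAINT id_number_len;
ALTER TABLE example_tbl ADD CONSTRAINT id_number_len CHECK (length(id_number) <= 12);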