Why is this postgres function failing only on one specific database? - postgresql

I'm trying to fix an issue with a legacy database. The quote_literal function is not working for a specific database on an 8.4 install of postgres.
Here's my results on a fresh test database:
select quote_literal(42);
quote_literal
---------------
'42'
(1 row)
And now the same on the target db
select quote_literal(42);
ERROR: function quote_literal(integer) is not unique
LINE 1: select quote_literal(42);
^
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
AIUI, the quote_literal(anyelement) function should handle integer values fine, and this seems to be upheld by the first test.
So I figured the quote_literal function must have been overridden in this db, but no, that doesn't seem to be the case. I could override it with a specific quote_literal(integer) function, but I don't see why I should have to.
The question is: what could be causing this function to fail in this specific database while leaving the fresh db unaffected?

Another possibility: somebody has added implicit casts to text to your database. This was a common workaround for an intentional backwards-compatibility break in 8.3; see the release notes for 8.3, E.57.2. Migration to Version 8.3.
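One way to confirm this is to look in pg_cast for implicit casts targeting text that aren't built in; user-added objects normally get OIDs of 16384 and above. A query along these lines should work (my own query, not part of the original diagnosis):

SELECT castsource::regtype    AS source_type,
       casttarget::regtype    AS target_type,
       castfunc::regprocedure AS cast_function
FROM   pg_cast
WHERE  casttarget = 'text'::regtype
  AND  castcontext = 'i'   -- implicit casts only
  AND  oid >= 16384;       -- user-defined, not built in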
Demo:
regress=# \df quote_literal
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+---------------+------------------+---------------------+--------
pg_catalog | quote_literal | text | anyelement | normal
pg_catalog | quote_literal | text | text | normal
(2 rows)
regress=# CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE FUNCTION
regress=# CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
CREATE CAST
regress=# SELECT quote_literal(42);
ERROR: function quote_literal(integer) is not unique
LINE 1: SELECT quote_literal(42);
^
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
regress=#
This'll fix it, but probably break other code that's still relying on the cast:
regress=# DROP CAST (integer AS text);
DROP CAST
regress=# SELECT quote_literal(42);
quote_literal
---------------
'42'
(1 row)
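If dropping the cast would break too much other code, the narrower workaround is the one the HINT suggests: add an explicit cast at each affected call site, so only the quote_literal(text) candidate matches:

SELECT quote_literal(42::text);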

Somebody has probably defined another single-argument quote_literal function with an argument type that integer can be implicitly cast to, such as bigint.
In psql, connect and run:
\df quote_literal
and you'll see multiple entries, like this:
regress=> \df quote_literal
List of functions
Schema | Name | Result data type | Argument data types | Type
------------+---------------+------------------+---------------------+--------
pg_catalog | quote_literal | text | anyelement | normal
pg_catalog | quote_literal | text | text | normal
public | quote_literal | text | bigint | normal
(3 rows)
You only want the first two, in pg_catalog. However, I can't advise you to just:
DROP FUNCTION public.quote_literal(bigint);
... because you might have code that expects it to exist. Time to go digging and see where it's used. Have fun.
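As a rough starting point (my suggestion, not part of the original answer), you can at least find database-side code that mentions it by searching function bodies; application code has to be grepped separately:

SELECT n.nspname, p.proname
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
WHERE  p.prosrc ILIKE '%quote_literal%'
  AND  n.nspname NOT IN ('pg_catalog', 'information_schema');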
Demo showing that this is likely the problem:
regress=> SELECT quote_literal(42);
quote_literal
---------------
'42'
(1 row)
regress=> CREATE OR REPLACE FUNCTION quote_literal(bigint) RETURNS text AS 'SELECT ''borkborkbork''::text;' LANGUAGE sql;
CREATE FUNCTION
regress=> SELECT quote_literal(42);
ERROR: function quote_literal(integer) is not unique
LINE 1: SELECT quote_literal(42);
^
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
regress=>

equivalent of Oracle's DBMS_ASSERT.sql_object_name() in PostgreSQL?

I'm trying to come up with a function to verify an object identifier name, like in Oracle: if a given identifier is associated with any SQL object (tables, functions, views, ...), it returns the name as-is, otherwise it errors out. Following are a few examples.
SELECT SYS.DBMS_ASSERT.SQL_OBJECT_NAME('DBMS_ASSERT.sql_object_name') FROM DUAL;
SYS.DBMS_ASSERT.SQL_OBJECT_NAME('DBMS_ASSERT.SQL_OBJECT_NAME')
DBMS_ASSERT.sql_object_name
SELECT SYS.DBMS_ASSERT.SQL_OBJECT_NAME('unknown') FROM DUAL;
ORA-44002: invalid object name
For tables, views, sequences, you'd typically cast to regclass:
select 'some_table_I_will_create_later'::regclass;
ERROR: relation "some_table_I_will_create_later" does not exist`.
LINE 1: select 'some_table_I_will_create_later'::regclass;
^
For procedures and functions, it'd be a cast to regproc instead, so to get a function equivalent to DBMS_ASSERT.sql_object_name() you'd have to go through the full list of what the argument could be cast to:
create or replace function assert_sql_object_name(arg text)
returns text language sql as $function_body$
  select coalesce(
    to_regclass(arg)::text,
    to_regcollation(arg)::text,
    to_regoper(arg)::text,
    to_regproc(arg)::text,
    to_regtype(arg)::text,
    to_regrole(quote_ident(arg))::text,
    to_regnamespace(quote_ident(arg))::text )
$function_body$;
These functions work the same as a plain cast, except they return null instead of throwing an exception. coalesce() works the same in PostgreSQL as it does in Oracle, returning the first non-null argument it gets.
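For example, with a name that doesn't exist (a hypothetical one here):

select to_regclass('no_such_table');   -- returns NULL
select 'no_such_table'::regclass;      -- ERROR: relation "no_such_table" does not exist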
Note that unknown is a pseudo-type in PostgreSQL, so it doesn't make a good test.
select assert_sql_object_name('unknown');
-- assert_sql_object_name
-- ------------------------
-- unknown
select assert_sql_object_name('some_table_I_will_create_later');
-- assert_sql_object_name
-- ------------------------
-- null
create table some_table_I_will_create_later(id int);
select assert_sql_object_name('some_table_I_will_create_later');
-- assert_sql_object_name
-- --------------------------------
-- some_table_i_will_create_later
select assert_sql_object_name('different_schema.some_table_I_will_create_later');
-- assert_sql_object_name
-- ------------------------
-- null
create schema different_schema;
alter table some_table_i_will_create_later set schema different_schema;
select assert_sql_object_name('different_schema.some_table_I_will_create_later');
-- assert_sql_object_name
-- -------------------------------------------------
-- different_schema.some_table_i_will_create_later
Online demo
There is no direct equivalent, but if you know the expected type of the object, you can cast the name to one of the Object Identifier Types.
For tables, views and other objects that have an entry in pg_class, you can cast it to regclass:
select 'pg_catalog.pg_class'::regclass;
select 'public.some_table'::regclass;
The cast will result in an error if the object does not exist.
For functions or procedures you need to cast the name to regproc:
select 'my_schema.some_function'::regproc;
However, if that is an overloaded function (i.e. multiple entries exist in pg_catalog.pg_proc), then the cast would result in the error more than one function named "some_function". In that case you need to provide the full signature you want to test, using the type regprocedure instead, e.g.:
select 'my_schema.some_function(int4)'::regprocedure;
You can create a wrapper function in PL/pgSQL that tries the different casts to mimic the behaviour of the Oracle function.
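A minimal sketch of such a wrapper (the function name and the particular casts it tries are my own choices; adjust them to the object types you care about):

create or replace function assert_object_name(p_name text)
returns text
language plpgsql as
$$
begin
    -- try each cast in turn; the first one that succeeds wins
    begin
        return p_name::regclass::text;   -- tables, views, sequences, indexes
    exception when others then null;
    end;
    begin
        return p_name::regproc::text;    -- functions and procedures
    exception when others then null;
    end;
    begin
        return p_name::regtype::text;    -- data types
    exception when others then null;
    end;
    raise exception 'invalid object name: %', p_name;
end;
$$;

Each exception block sets up a subtransaction, so the to_reg*() approach shown above is cheaper where those functions are available.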
The orafce extension provides an implementation of dbms_assert.object_name.

Is there a way to organize Postgres Functions (using pgAdmin)?

I'm using pgAdmin 4.23, PostgreSQL 12.3 for Windows
Looks like all functions are dumped into one "folder". I installed the uuid and tablefunc extensions and they get tossed in with my own user-defined functions. At least the 10 uuid ones are all prefixed with "uuid_"; the 11 tablefunc ones all start with "connectby", "crosstab*" or "normal_rand".
I did prefix my own functions so those at least grouped together. But as this thing grows and I add extensions, I'm concerned that maintenance will become more difficult. Is there some sort of sub-foldering option I am missing, or is naming convention the normal approach for organization? Looks like stored procs would work the same way.
Would also be nice to be able to filter the Functions based on the names. I see the Search Objects popup, but it isn't as useful as a filter.
To filter functions based on their names you can use this function:
CREATE OR REPLACE FUNCTION public.find_function(fname text DEFAULT NULL::text)
RETURNS TABLE(routine_name text, routine_schema text, return_type text)
LANGUAGE sql
AS $function$
  select routine_name, routine_schema, data_type
  from information_schema.routines
  where specific_schema not in ('pg_catalog', 'information_schema')
    and case when fname is null then true else routine_name ~* fname end
  order by routine_name;
$function$;
Here is an example - find all functions that have "test" in their name:
select * from find_function('test');
+-----------------------------+----------------+-------------+
| routine_name | routine_schema | return_type |
+-----------------------------+----------------+-------------+
| clear_web_tests | datavato | void |
+-----------------------------+----------------+-------------+
| etl_generic_tests | webaccess | text |
+-----------------------------+----------------+-------------+
| fill_web_tests | datavato | void |
+-----------------------------+----------------+-------------+
| pan_arguments_test | helpers | jsonb |
+-----------------------------+----------------+-------------+
| test_bizday | public | boolean |
+-----------------------------+----------------+-------------+
| test_checkdigits | public | boolean |
+-----------------------------+----------------+-------------+
| test_jasper_dynamic_columns | scratch | record |
+-----------------------------+----------------+-------------+
I'm adding this here in case it helps for future searches...
Per @horse_with_no_name's suggestion above, and the official documentation, I went with a separate schema for my third-party stuff. Here is a bit from a conversation with the other db people in our company:
I didn't want third party extensions (like uuid-ossp for Guids)
installed in my public schema since that is where we keep all of our
user defined functions specific to that database. When I originally
installed the extension I put it into the public schema, then just
transferred it to a schema named extfunc. Then we have a table named
usr that references extfunc.uuid_generate_v4() in one of the column
constraints. Everything works as expected.
However, when I try to backup our Dev db and restore to QA, the
pg_dump and pg_restore tasks do not handle it properly. The restore
would error and not create the usr table. The extension was not being
properly restored to extfunc, which prevented the usr table from being
created.
The solution is to create the extension in the desired schema from the
start. Do not create it and then try to move it to the destination
schema. Backup/restore now work as expected.
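In SQL the approach looks roughly like this (the extfunc schema name is taken from the description above; the usr table is just a cut-down illustration):

create schema if not exists extfunc;
create extension if not exists "uuid-ossp" schema extfunc;

-- the extension's functions are then referenced schema-qualified, e.g.:
create table usr (id uuid primary key default extfunc.uuid_generate_v4());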

array_agg with distinct works in postgres 9.4 but not in postgres 9.6

I have a query that uses array_agg with distinct as an argument, and it is not accepted on Postgres 9.6.
I created this sample to illustrate the issue:
create table numbers (id integer primary key, name varchar(10));
insert into numbers values(1,'one');
insert into numbers values(2,'two');
postgres 9.4
select array_agg(distinct(id)) from numbers;
array_agg
-----------
{1,2}
postgres 9.6
ERROR: function array_agg(integer) is not unique
LINE 1: select array_agg(distinct(id)) from numbers;
^
HINT: Could not choose a best candidate function.
You might need to add explicit type casts.
What do I need to change in order to get this result on postgres 9.6?
Thanks.
This is what I get checking the functions:
  nspname   |  proname  |     proargtypes
------------+-----------+---------------------
 pg_catalog | array_agg | [0:0]={anyarray}
 public     | array_agg | [0:0]={anyelement}
 pg_catalog | array_agg | [0:0]={anynonarray}
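The catalog query behind that output isn't shown; something along these lines (my own formulation) lists every array_agg the planner can see, and the shadowing definition can then be dropped. Note that it needs DROP AGGREGATE rather than DROP FUNCTION, assuming it was created with CREATE AGGREGATE as a pre-8.4 compatibility shim:

select n.nspname as schema,
       p.proname as name,
       pg_get_function_arguments(p.oid) as arguments
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where p.proname = 'array_agg';

drop aggregate public.array_agg(anyelement);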
Now, I found the issue thanks to the comment by pozs: I removed the public definition of the aggregate function and it worked.
The issue was just with the database I was working on. As I found some people saying that the sample worked for them, I created a new database and ran the example there, and the only difference was the aggregate function definitions.
So I dropped the public array_agg(anyelement) definition and it worked.
Thanks a lot.
It works exactly like that as demonstrated by this dbfiddle on PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit.

how to create a range of values and then use them to insert data into postgresql database

Background Information:
I need to auto generate a bunch of records in a table. The only piece of information I have is a start range and an end range.
Let's say my table looks like this:
id
widgetnumber
The logic needs to be contained within a .sql file.
I'm running postgresql
Code
This is what I have so far... as a test... and it seems to be working:
DO $$
DECLARE
    widgetnum text;
BEGIN
    SELECT 5 INTO widgetnum;
    INSERT INTO widgets VALUES (DEFAULT, widgetnum);
END $$;
And then to run it, I do this from a command line on my database server:
testbox:/tmp# psql -U myuser -d widgets -f addwidgets.sql
DO
Questions
How would I modify this code to loop through a range of widget numbers and insert them all?
For example, I would be provided with a start range and an end range (100 to 150, let's say).
Can you point me to a good online resource to learn the syntax I should be using?
Thanks.
How would I modify this code to loop through a range of widget numbers and insert them all?
You can use generate_series() for that.
insert into widgets (widgetnumber)
select i
from generate_series(100, 150) as t(i);
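If you would rather keep the DO block style from your test, a loop over the same range might look like this (a sketch; it assumes the widgets table's id column has a usable default, as in your example):

do $$
declare
    start_num integer := 100;
    end_num   integer := 150;
begin
    for widgetnum in start_num .. end_num loop
        insert into widgets (widgetnumber) values (widgetnum);
    end loop;
end $$;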
Can you point me to a good online resource to learn the syntax i should be using?
https://www.postgresql.org/docs/current/static/index.html
dvdrental=# \d test
Table "public.test"
Column | Type | Collation | Nullable | Default
--------+------------------------+-----------+----------+---------
id | integer | | |
name | character varying(250) | | |
dvdrental=# begin;
BEGIN
dvdrental=# insert into test(id,name) select generate_series(1,100000),'Kishore';
INSERT 0 100000

Enforcing default time when only date in timestamptz provided

Assume I have the table:
postgres=# create table foo (datetimes timestamptz);
CREATE TABLE
postgres=# \d+ foo
Table "public.foo"
Column | Type | Modifiers | Storage | Description
-----------+--------------------------+-----------+---------+-------------
datetimes | timestamp with time zone | | plain |
Has OIDs: no
So let's insert some values into it...
postgres=# insert into foo values
('2012-12-12'), --This is the value I want to catch for.
(null),
('2012-12-12 12:12:12'),
('2012-12-12 12:12');
INSERT 0 4
And here's what we have:
postgres=# select * from foo ;
       datetimes
------------------------
 2012-12-12 00:00:00+00

 2012-12-12 12:12:12+00
 2012-12-12 12:12:00+00
(4 rows)
Ideally, I'd like to set up a default timestamp value for when a TIME is not provided with the input: rather than the de facto time for 2012-12-12 being 00:00:00, I would like a default of 15:45:10.
Meaning, my results should look like:
postgres=# select * from foo ;
       datetimes
------------------------
 2012-12-12 15:45:10+00   --This one gets the default time.

 2012-12-12 12:12:12+00
 2012-12-12 12:12:00+00
(4 rows)
I'm not really sure how to do this in Postgres 8.4; I can't find anything in the datetime section of the manual or in the sections about column default values.
Values for new rows can be tweaked in a BEFORE INSERT trigger. Such a trigger could test whether there is a non-zero time component in NEW.datetimes and, if not, set it to the desired fixed time.
However, the case when the time part is explicitly set to zero in the INSERT cannot be handled with this technique, because '2012-12-12'::timestamptz is equal to '2012-12-12 00:00:00'::timestamptz. It would be like trying to distinguish 0.0 from 0.00.
Technically, tweaking the value should happen before the implicit cast from string to the column's type, which even a RULE (dynamic query rewriting) cannot do.
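For reference, the trigger approach from the first paragraph would look roughly like this (the names are made up; per the caveat above, it also rewrites rows where midnight was genuinely intended):

create function foo_default_time() returns trigger as $$
begin
    if new.datetimes is not null
       and new.datetimes = date_trunc('day', new.datetimes) then
        new.datetimes := new.datetimes + interval '15:45:10';
    end if;
    return new;
end;
$$ language plpgsql;

create trigger foo_default_time
    before insert on foo
    for each row execute procedure foo_default_time();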
It seems to me that the best option is to rewrite the INSERT and apply a function to each value converting it explicitly from string to timestamp. This function would test the input format and add the time part when needed:
create function conv(text) returns timestamptz as $$
  select case when length($1) = 10 then ($1 || ' 15:45:10')::timestamptz
              else $1::timestamptz
         end;
$$ language sql strict immutable;
insert into foo values
(conv('2012-12-12')),
(conv(null)),
(conv('2012-12-12 12:12:12')),
(conv('2012-12-12 12:12'));