I'm trying to come up with a function to verify an object identifier name, like Oracle's: if a given identifier is associated with any SQL object (tables, functions, views, ...) it returns the name as-is, otherwise it errors out. Following are a few examples.
SELECT SYS.DBMS_ASSERT.SQL_OBJECT_NAME('DBMS_ASSERT.sql_object_name') FROM DUAL;
SYS.DBMS_ASSERT.SQL_OBJECT_NAME('DBMS_ASSERT.SQL_OBJECT_NAME')
DBMS_ASSERT.sql_object_name
SELECT SYS.DBMS_ASSERT.SQL_OBJECT_NAME('unknown') FROM DUAL;
ORA-44002: invalid object name
For tables, views, sequences, you'd typically cast to regclass:
select 'some_table_I_will_create_later'::regclass;
ERROR: relation "some_table_I_will_create_later" does not exist
LINE 1: select 'some_table_I_will_create_later'::regclass;
^
For procedures and functions, it'd be a cast to regproc instead, so to get a function equivalent to DBMS_ASSERT.sql_object_name() you'd have to go through the full list of what the argument could be cast to:
create or replace function assert_sql_object_name(arg text)
returns text language sql as $function_body$
select coalesce(
    to_regclass(arg)::text,
    to_regcollation(arg)::text,
    to_regoper(arg)::text,
    to_regproc(arg)::text,
    to_regtype(arg)::text,
    to_regrole(quote_ident(arg))::text,
    to_regnamespace(quote_ident(arg))::text)
$function_body$;
These functions work the same as a plain cast, except they return null instead of throwing an exception. coalesce() works the same in PostgreSQL as it does in Oracle, returning the first non-null argument it gets.
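For example, with a name that does not resolve to anything (missing_table here is just a hypothetical, non-existent name):
select to_regclass('missing_table');  -- returns null
select 'missing_table'::regclass;     -- ERROR: relation "missing_table" does not exist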
Note that unknown is a pseudo-type in PostgreSQL, so it doesn't make a good test.
select assert_sql_object_name('unknown');
-- assert_sql_object_name
-- ------------------------
-- unknown
select assert_sql_object_name('some_table_I_will_create_later');
-- assert_sql_object_name
-- ------------------------
-- null
create table some_table_I_will_create_later(id int);
select assert_sql_object_name('some_table_I_will_create_later');
-- assert_sql_object_name
-- --------------------------------
-- some_table_i_will_create_later
select assert_sql_object_name('different_schema.some_table_I_will_create_later');
-- assert_sql_object_name
-- ------------------------
-- null
create schema different_schema;
alter table some_table_i_will_create_later set schema different_schema;
select assert_sql_object_name('different_schema.some_table_I_will_create_later');
-- assert_sql_object_name
-- -------------------------------------------------
-- different_schema.some_table_i_will_create_later
There is no direct equivalent, but if you know the expected type of the object, you can cast the name to one of the Object Identifier Types
For tables, views and other objects that have an entry in pg_class, you can cast it to regclass:
select 'pg_catalog.pg_class'::regclass;
select 'public.some_table'::regclass;
The cast will result in an error if the object does not exist.
For functions or procedures you need to cast the name to regproc:
select 'my_schema.some_function'::regproc;
However, if that is an overloaded function (i.e. multiple entries exist in pg_catalog.pg_proc), it will result in the error more than one function named "some_function". In that case you need to provide the full signature you want to test, using the type regprocedure instead, e.g.:
select 'my_schema.some_function(int4)'::regprocedure;
You can create a wrapper function in PL/pgSQL that tries the different casts to mimic the behaviour of the Oracle function.
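A minimal sketch of such a wrapper, assuming the to_reg* helpers cover the object kinds you care about (assert_object_name is a made-up name, and only the relation, function and type cases are checked here; extend the coalesce() list as needed). It raises an exception when nothing matches, roughly mirroring ORA-44002:
create or replace function assert_object_name(arg text)
returns text language plpgsql as
$$
declare
    result text;
begin
    -- the to_reg* lookups return null instead of raising, so try them in turn
    result := coalesce(to_regclass(arg)::text,
                       to_regproc(arg)::text,
                       to_regtype(arg)::text);
    if result is null then
        raise exception 'invalid object name: %', arg;
    end if;
    return result;
end;
$$;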
The orafce extension provides an implementation of dbms_assert.object_name.
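If installing extensions is an option, usage would look roughly like this (a sketch assuming orafce is installed on the server; check the extension's documentation for the exact behaviour):
create extension if not exists orafce;
select dbms_assert.object_name('pg_catalog.pg_class');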
I'm trying to call a polymorphic function where the table type can be determined by a subsidiary function, say, which_table(). The problem is that I can't convert the returned varchar into an actual type.
I'm trying to base a solution off of Erwin Brandstetter's "Various complete table types" section of one of his previous SO answers, as well as his comments in the answer to SQL Injection-safe call of polymorphic function. So, referencing examples there, the behavior I want is to be able to do
SELECT * FROM data_of(NULL::pcdmet, 17);
but be able to specify the table name dynamically, such as,
SELECT * FROM data_of( which_table('arg that evaluates to type NULL::pcdmet') , 17 )
In my case, the pcdmet "table" types can be designed to be either all regular tables, temp tables, or composite types (so a solution using any of these would be fully acceptable).
Issue #1
I've been trying to use a PREPARE/EXECUTE approach as suggested, but haven't been able to see what I'm doing wrong.
create or replace function fooprep (inint int, inchar varchar) RETURNS varchar
AS $$
begin
RETURN ''||inint||' '||inchar;
end;
$$ language plpgsql;
dev_db=> create type int_char as (coli int, colv varchar);
CREATE TYPE
dev_db=> prepare fooplan (int_char) as
select * from fooprep($1, $2);
ERROR: function fooprep(int_char, unknown) does not exist
LINE 2: select * from fooprep($1, $2);
Issue #2
But furthermore, even if I get #1 to work, how do I return it as a type from which_table() when I don't know which table that function will return? I've tried specifying regclass as the return type for a stub which_table() that just returns a constant NULL::testtable, but that doesn't seem to be usable as a data type (I'd expected to be able to use it as a table type).
I've also tried something along the lines of
create or replace FUNCTION foofn (bar anyelement DEFAULT EXECUTE fooplan_mod(5))
but get an error there, too:
ERROR: syntax error at or near "fooplan"
LINE 1: ...ce FUNCTION foofn (bar anyelement DEFAULT EXECUTE fooplan_mod(5)...
^
I've tried a plethora of other things to the point that I've pretty much abandoned keyword capitalization (as you can see :-). I feel like I must be close but overlooking something.
I'm using PostgreSQL 13: psql (13.9 (Ubuntu 13.9-1.pgdg20.04+1), server 13.10 (Ubuntu 13.10-1.pgdg20.04+1))
Try the following approach using polymorphic types:
CREATE FUNCTION selectit(anycompatible, bigint)
   RETURNS SETOF anycompatible
   LANGUAGE plpgsql
AS $$BEGIN
   RETURN QUERY EXECUTE
      format('SELECT * FROM %s LIMIT %s',
             pg_typeof($1),
             $2);
END;$$;
Here is a test:
CREATE TABLE test (id integer PRIMARY KEY, value text);
INSERT INTO test VALUES (1, 'one'), (2, 'two'), (3, 'three');
SELECT * FROM selectit(NULL::test, 2);
id │ value
════╪═══════
1 │ one
2 │ two
(2 rows)
I am trying to create a set of functions for a PostgreSQL database I have, and I am facing this problem. What I am trying to do is to create a function which takes as parameters the name of the column a user wants to change and the new value they wish to insert.
I thought that the simplest way would be to create the queries as strings internally and execute them (sql injections are not of concern right now).
Searching a bit, I tried to use the EXECUTE command, with no success. No arrangement of the queries I tried worked, with or without SET, and even a super simple query produces an error, whether inside the function code or in the pgAdmin SQL editor:
EXECUTE "SELECT * FROM user_data.users;";
ERROR: prepared statement "SELECT * FROM user_data.users;" does not exist
********** Error **********
ERROR: prepared statement "SELECT * FROM user_data.users;" does not exist
SQL state: 26000
Any suggestions to solve this?
To alter a column name you could use a function like this.
It takes the table name, column name, and new column name.
CREATE OR REPLACE FUNCTION foo(_t regclass, _ocn text, _ncn text)
RETURNS void AS
$func$
BEGIN
EXECUTE 'ALTER TABLE '|| _t ||'
RENAME COLUMN '|| quote_ident(_ocn) ||' TO '|| quote_ident(_ncn) ||'';
END
$func$ LANGUAGE plpgsql;
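A call would then look like this (my_table and the column names are placeholders for a table that actually exists):
SELECT foo('my_table', 'old_name', 'new_name');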
Altering the type is a bit more work and may add to server overhead.
Let's use this table as an example:
CREATE TABLE foobar (id serial primary key, name text);
Ref: PostgreSQL docs, Type Conversion:
In many cases a user does not need to understand the details of the
type conversion mechanism. However, implicit conversions done by
PostgreSQL can affect the results of a query. When necessary, these
results can be tailored by using explicit type conversion.
Firstly we would have to get the column data type(s) from the information schema (Ref: The Information Schema, PostgreSQL 9.4):
select data_type from information_schema.columns where table_name = 'foobar' and column_name = 'name';
We could then cast types as mentioned in Type Conversion and Table 8-1, Data Types (Ref: PostgreSQL 9.4):
ALTER TABLE foobar ALTER COLUMN name TYPE varchar(20) USING name::text;
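If you want the type change wrapped in a function like the rename above, a rough sketch could look like this (alter_col_type is a made-up name, and the USING clause here is just a plain cast, which will not cover every conversion):
CREATE OR REPLACE FUNCTION alter_col_type(_t regclass, _cn text, _newtype text)
RETURNS void AS
$func$
BEGIN
    -- %I quotes the column identifier; the type name is interpolated as-is
    EXECUTE format('ALTER TABLE %s ALTER COLUMN %I TYPE %s USING %I::%s',
                   _t, _cn, _newtype, _cn, _newtype);
END
$func$ LANGUAGE plpgsql;

SELECT alter_col_type('foobar', 'name', 'varchar(20)');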
I hope this helps
Can anybody tell me why double-quoted types behave differently in PostgreSQL?
CREATE TABLE foo1 (pk int4); -- ok
CREATE TABLE foo2 (pk "int4"); -- ok
CREATE TABLE foo3 (pk int); -- ok
CREATE TABLE foo4 (pk "int"); -- fail: type "int" does not exist
CREATE TABLE foo5 (pk integer); -- ok
CREATE TABLE foo6 (pk "integer"); -- fail: type "integer" does not exist
I can't find anything about it in the documentation. Is this a bug?
Any information would be greatly appreciated.
Double quotes mean that an identifier is to be interpreted exactly as written. They cause case to be preserved instead of flattened, and they allow what would otherwise be a keyword to be interpreted as an identifier.
PostgreSQL's int is a parse-time transformation to the integer type. There is not actually any data type named int in the system catalogs:
regress=> select typname from pg_type where typname = 'int';
typname
---------
(0 rows)
It is instead handled as a parse-time transformation much like a keyword. So when you protect it from that transformation by quoting it, you're telling the DB to look for a real type by that name.
This can't really be undone in a backward compatible way, since it'd break someone's system if they created a type or table named "int". (Types and tables share the same namespace).
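As a contrived illustration (not a recommendation), you can create your own type with that name, after which the quoted spelling resolves to it:
CREATE DOMAIN "int" AS bigint;
CREATE TABLE foo7 (pk "int"); -- now succeeds, using the domain rather than integer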
This is similar to how user is transformed to current_user. Rails developers often use User as a model name, which causes Rails to try to SELECT * FROM user; in the DB, but this is transformed at parse time to SELECT * FROM current_user;, causing confused users to wonder why their table has a single row with their username in it. Query generators should always quote identifiers, i.e. they should be generating SELECT * FROM "user";... but few do.
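You can see the same parse-time substitution with a quick test (no table named user needs to exist):
SELECT * FROM user;   -- one row containing the current role name
SELECT * FROM "user"; -- ERROR: relation "user" does not exist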
The following syntax successfully creates a user defined function, but does not drop it. Can anyone identify where my error is?
-- Example 1 - scalar function
USE AdventureWorks2012
GO
CREATE FUNCTION Sales.uf_MostRecentCustomerOrderDate (@CustomerID int)
RETURNS
DATETIME
AS
BEGIN;
DECLARE @MostRecentOrderDate datetime;
SELECT @MostRecentOrderDate = MAX(OrderDate)
FROM Sales.SalesOrderHeader as soh
Where CustomerID = @CustomerID
RETURN @MostRecentOrderDate
END;
GO
-- Using user defined scalar function
SELECT Sales.uf_MostRecentCustomerOrderDate(29825); -- returns 2008-04-01 00:00:00.000
-- Delete existing scalar valued function
USE AdventureWorks2012;
GO
-- determines if function exists in database
IF OBJECT_ID (N'Sales.uf_MostRecentCustomerOrderDate', N'IF') IS NOT NULL
-- deletes function
DROP FUNCTION Sales.uf_MostRecentCustomerOrderDate;
GO
That function gets created with a type of FN (not IF as you've used).
Try this code to drop it:
-- determines if function exists in database
IF OBJECT_ID (N'Sales.uf_MostRecentCustomerOrderDate', N'FN') IS NOT NULL
-- deletes function
DROP FUNCTION Sales.uf_MostRecentCustomerOrderDate;
GO
Type IF stands for an inline table-valued function - this is not the case here.
Type FN stands for a scalar function - which this is.
See the TechNet docs on sys.objects, which also list all the defined types in the SQL Server catalog views.
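To see which type an object was actually registered with, you can query sys.objects directly:
SELECT name, type, type_desc
FROM sys.objects
WHERE object_id = OBJECT_ID(N'Sales.uf_MostRecentCustomerOrderDate');
-- expected: type = FN, type_desc = SQL_SCALAR_FUNCTION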
Here's the stored procedure I wrote. In this proc, "p_subjectid" is an array of numbers passed from the front end.
PROCEDURE getsubjects(p_subjectid subjectid_tab, p_subjects out refCursor)
as
BEGIN
open p_subjects for select * from empsubject where subject_id in
(select column_value from table(p_subjectid));
--select * from table(cast(p_subjectid as packg.subjectid_tab))
END getsubjects;
This is the error I am getting:
Oracle error ORA-22905: cannot access rows from a non-nested table item
Or, as I have seen in other posts, I tried casting, "cast(p_subjectid as packg.subjectid_tab)", inside the table function (the commented-out line in the procedure above), but then I get another error: ORA-00902: invalid datatype.
And this is the definition of the "subjectid_tab".
type subjectid_tab is table of number index by binary_integer;
Can anyone please tell me what the error is. Is anything wrong with my procedure?
You have to declare the type on "the database level" as ammoQ suggested:
CREATE TYPE subjectid_tab AS TABLE OF NUMBER;
instead of declaring the type within PL/SQL. If you declare the type just in the PL/SQL block, it won't be available to the SQL "engine".
Oracle has two execution scopes: SQL and PL/SQL. When you use a SELECT/INSERT/UPDATE (etc) statement you are working in the SQL scope and, in Oracle 11g and below, you cannot reference types that are defined in the PL/SQL scope. (Note: Oracle 12 changed this so you can reference PL/SQL types.)
TYPE subjectid_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
is an associative array and can only be defined in the PL/SQL scope, so it cannot be used in SQL statements.
What you want is to define a collection (not an associative array) in the SQL scope using:
CREATE TYPE subjectid_tab IS TABLE OF NUMBER;
(Note: you do not need the INDEX BY clause for a collection.)
Then you can do:
OPEN p_subjects FOR
SELECT *
FROM empsubject
WHERE subject_id MEMBER OF p_subjectid;
or
OPEN p_subjects FOR
SELECT *
FROM empsubject
WHERE subject_id IN ( SELECT COLUMN_VALUE FROM TABLE( p_subjectid ) );
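Put together, a sketch of the whole thing against the schema-level type might look like this (shown as a standalone procedure with a SYS_REFCURSOR out parameter for brevity; in your case it lives in a package):
CREATE TYPE subjectid_tab AS TABLE OF NUMBER;
/

CREATE OR REPLACE PROCEDURE getsubjects(
    p_subjectid IN  subjectid_tab,
    p_subjects  OUT SYS_REFCURSOR
)
AS
BEGIN
  OPEN p_subjects FOR
    SELECT *
    FROM   empsubject
    WHERE  subject_id MEMBER OF p_subjectid;
END getsubjects;
/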
This is the correct solution.
You cannot use a table(cast()) if the type that you cast is in the DECLARE part of the pl/sql block.
You REALLY need to use CREATE TYPE my_type [...]. Otherwise, it will throw the "cannot fetch row[...]" exception.
I just had this problem yesterday.
DECLARE
TYPE number_table IS TABLE OF NUMBER;
result_ids number_table := number_table();
BEGIN
/* .. bunch of code that uses my type successfully */
OPEN ? FOR
SELECT *
FROM TABLE(CAST(result_ids AS number_table)); /* BOOM! */
END;
This fails in both of the ways you described earlier when called from a Java routine. I discovered this was due to the fact that the type number_table is not defined in an exportable manner that can be shipped off the database. The type works great internally to the routine, but as soon as you try to execute a returnable recordset that references it in any way (including IN clauses?!?) you get a "datatype not defined" error.
So the solution really is CREATE TYPE myschema.number_table IS TABLE OF NUMBER; Then drop the type declaration from your block and use the schema level declaration. Use the schema qualifier to reference the type just to be sure you are using the right one.
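In other words, something along these lines (myschema and l_cur are placeholders):
CREATE TYPE myschema.number_table AS TABLE OF NUMBER;
/

DECLARE
  result_ids myschema.number_table := myschema.number_table(1, 2, 3);
  l_cur      SYS_REFCURSOR;
BEGIN
  OPEN l_cur FOR
    SELECT *
    FROM TABLE(CAST(result_ids AS myschema.number_table)); /* no more BOOM */
END;
/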
You have to cast the results of the pipelined query.
If your pipelined function returns a row type of varchar2, then define a type, for example:
CREATE OR REPLACE TYPE char_array_t is VARRAY(32) of varchar2(255);
select * from table(cast(fn(x) as char_array_t));
will now work.
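A fuller sketch reusing the char_array_t type above, with hypothetical names (fn stands in for your function and simply returns a fixed collection rather than being pipelined, to keep the example short):
CREATE OR REPLACE FUNCTION fn(x IN NUMBER) RETURN char_array_t
AS
BEGIN
  RETURN char_array_t('first value', 'second value'); -- dummy payload
END;
/

SELECT * FROM TABLE(CAST(fn(1) AS char_array_t));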