Postgres PL/pgSQL, possible to declare anonymous custom types? - postgresql

With DB2 I'm able to declare anonymous custom types (e.g. row types or composite types) for my user defined functions - see the following example (especially the last line):
DB2 example:
CREATE OR REPLACE FUNCTION myFunction(IN input1 DECIMAL(5), IN input2 DECIMAL(5))
RETURNS DECIMAL(2)
READS SQL DATA
LANGUAGE SQL
NO EXTERNAL ACTION
NOT DETERMINISTIC
BEGIN
DECLARE TYPE customAnonymousType AS ROW(a1 DECIMAL(2), a2 DECIMAL(2), a3 DECIMAL(2));
/* do something fancy... */
Can I do something similar with PL/pgSQL? I know I would be able to use existing row types, also existing user defined types - but do I really have to define the type in advance?
I also know about the RECORD type, but as far as I understand I would not be able to use it in arrays (and also it would not be a well defined type).
Comments asked for an example; even though it lengthens the question quite a bit, I have tried to define a fairly simple one (still for DB2):
CREATE OR REPLACE FUNCTION myFunction(IN input1 DECIMAL(5), IN input2 DECIMAL(5))
RETURNS DECIMAL(2)
READS SQL DATA
LANGUAGE SQL
NO EXTERNAL ACTION
NOT DETERMINISTIC
BEGIN
DECLARE TYPE customAnonymousType AS ROW(a1 DECIMAL(2), a2 CHARACTER VARYING(50));
DECLARE TYPE customArray AS customAnonymousType ARRAY[INTEGER];
DECLARE myArray customArray;
SET myArray[input1] = (50, 'Product 1');
SET myArray[input2] = (99, 'Product 2');
RETURN myArray[ARRAY_FIRST(myArray)].a1;
END
This function of course only serves as a dummy example (though I suppose it is already quite long for a question here). All it does is decide which number to return depending on whether input1 is greater than input2: if input1 is smaller than input2 it returns 50, and if input2 is smaller than or equal to input1 it returns 99.
I know I'm not even using the a2 character field of my type (so in this case I could just as well use a plain numeric array), and that there are probably many, many better ways to return two fixed numbers depending on the input values. Still, my original question remains: can I use anonymous custom types in PL/pgSQL (as I can in Oracle or DB2 procedures), or are there any similar alternatives?

You cannot create types with local visibility in Postgres; this functionality is not supported. Postgres supports only globally defined composite types.
See the CREATE TYPE documentation. This statement cannot be used in the DECLARE part of a PL/pgSQL block.
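For illustration, here is a sketch of that workaround (names invented, loosely mirroring the DB2 example from the question, with integer inputs so they can be used directly as array subscripts): the composite type is created once at schema level and can then be used, including as an array, inside the function.
CREATE TYPE custom_row AS (a1 numeric(2), a2 varchar(50));

CREATE OR REPLACE FUNCTION my_function(input1 int, input2 int)
RETURNS numeric(2)
LANGUAGE plpgsql AS
$$
DECLARE
    my_array custom_row[] := '{}';
BEGIN
    my_array[input1] := ROW(50, 'Product 1')::custom_row;
    my_array[input2] := ROW(99, 'Product 2')::custom_row;
    -- the element at the lower index "wins", much like ARRAY_FIRST in the DB2 version
    RETURN (my_array[LEAST(input1, input2)]).a1;
END;
$$;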

Related

How to use proper syntax when creating SQL Macro?

Using Oracle SQL Developer, I am trying to make this web (link) example working:
CREATE OR REPLACE FUNCTION concat_self(str VARCHAR2, cnt PLS_INTEGER)
RETURN VARCHAR2 SQL_MACRO(SCALAR)
IS BEGIN RETURN 'rpad(str, cnt * length(str), str)';
END;
/
But I get those errors I do not understand:
Function CONCAT_SELF compiled
LINE/COL ERROR
--------- -------------------------------------------------------------
2/37 PLS-00103: Encountered the symbol "SQL_MACRO" when expecting one of the following: . # % ; is authid as cluster order using external character deterministic parallel_enable pipelined aggregate result_cache accessible
3/4 PLS-00103: Encountered the symbol "BEGIN" when expecting one of the following: not null of nan infinite dangling a empty
5/0 PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: end not pragma final instantiable order overriding static member constructor map
Errors: check compiler log
Your code is perfectly 'valid' for any instance of Oracle where the SQL_MACRO keyword is recognized by the PL/SQL engine.
The errors start to make a little more sense once you realize that the database doesn't understand what you're asking for - it does not recognize SQL_MACRO as a valid part of the CREATE OR REPLACE FUNCTION syntax.
Those errors kind of allow you to see how the database's PL/SQL and SQL parser are taking your request and breaking it down into things it knows how to work with.
Everything after the first error is about the parser not being able to make it past the first problem it encountered.
This feature was introduced in version 21c of the database, as explained in the 21c New Features Guide.
You can create SQL Macros (SQM) to factor out common SQL expressions and statements into reusable, parameterized constructs that can be used in other SQL statements. SQL macros can either be scalar expressions, typically used in SELECT lists, WHERE, GROUP BY and HAVING clauses, to encapsulate calculations and business logic or can be table expressions, typically used in a FROM clause.
SQL macros increase developer productivity, simplify collaborative development, and improve code quality.
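For what it's worth, on a 21c (or newer) instance where scalar SQL macros are recognized, the function from the question should compile cleanly and can then be called like any ordinary function; a minimal sketch of the expected call:
-- assumes an Oracle 21c (or later) database where concat_self compiled successfully
SELECT concat_self('ab', 3) AS repeated
FROM dual;
-- expected result: 'ababab', since the macro expands to rpad(str, cnt * length(str), str)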

Pass initial condition as an argument to a custom aggregate

I want to create a function that takes an initial condition as an argument and uses a set of values to compute a final result. In my specific case (has to do with geometry processing in PostGIS), it's important that each member of the set is processed against the current (which might be the initial) state one at a time for keeping the result clean. (I need to deal with sliver and gap issues, and have had a very difficult time doing so any way other than one element at a time.) The processing I need to do is already defined as a function that takes two appropriate arguments (where the first can be the current state and the second can be a value from the set).
So I want something similar to what you would expect is intended by this:
SELECT my_func('some initial condition', my_table.some_column) FROM my_table;
Aggregates seem like a natural fit for this, but I can't figure out how to get the function to accept an initial state. An iterative approach in PL/pgSQL would be fairly straightforward:
CREATE FUNCTION my_func(initial sometype, vals sometype[])
-- Returns, language, etc.
AS $$
DECLARE
    current sometype := initial;
    v       sometype;
BEGIN
    -- the second parameter is named vals because VALUES is a reserved word
    FOREACH v IN ARRAY vals LOOP
        current := SomeBinaryOperation(current, v);
    END LOOP;
    RETURN current;
END;
$$
But it would require rolling the values up into an array manually:
SELECT my_func('some initial condition', ARRAY_AGG(my_table.some_column)) FROM my_table;
You can create aggregates with multiple arguments, but the arguments that follow the first one are used as additional arguments to the transition function. I can see no way that one of them could be turned into an initial condition. (At least not without a remarkably hacky function that treats its third argument as an initial condition if the first argument is NULL or similar. And that's only if the aggregate argument can be a constant instead of a column.)
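For context, here is a sketch using the placeholder names from above: the initial state of an aggregate is normally supplied as the INITCOND of its definition, i.e. a constant fixed at CREATE AGGREGATE time rather than something that can be passed per call.
-- sketch: the initial state is the INITCOND constant, fixed when the aggregate
-- is defined, not supplied at call time
CREATE AGGREGATE my_agg(sometype) (
    SFUNC    = somebinaryoperation,   -- transition function: (state, next value) -> new state
    STYPE    = sometype,
    INITCOND = 'some initial condition'
);

SELECT my_agg(some_column) FROM my_table;   -- no per-call slot for the initial state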
Am I best off just using the PL/pgSQL iterative approach, or is there a way to create an aggregate that accepts its initial condition as an argument? Or is there something I haven't thought of?
I'm on PostgreSQL 9.3 at the moment, but upgrading may be an option if there's new stuff that would help.

PL/pgSQL - %TYPE and ARRAY

Is it possible to use the %TYPE and array together?
CREATE FUNCTION role_update(
IN id "role".role_id % TYPE,
IN name "role".role_name % TYPE,
IN user_id_list "user".user_id % TYPE[],
IN permission_id_list INT[]
)
This gives me a syntax error, but I don't want to duplicate any column type: I want to use "user".user_id%TYPE instead of simply INT, because then it is easier to change a column's type later.
As the manual explains here:
The type of a column is referenced by writing table_name.column_name%TYPE. Using this feature can sometimes help make a function independent of changes to the definition of a table.
The same functionality can be used in the RETURNS clause.
But there is no simple way to derive an array type from a referenced column, at least none that I would know of.
About modifying any column type later:
You are aware that this type of syntax is only a syntactical convenience to derive the type from a table column? Once created, there is no link whatsoever to the table or column involved.
It helps to keep a whole create script in sync, but it doesn't help with later changes to live objects in the database.
Related answer on dba.SE:
Array of template type in PL/pgSQL function using %TYPE
Using referenced types in a function's parameters makes no sense in PostgreSQL, because they are translated immediately to the actual types and stored as such. Sorry, PostgreSQL doesn't support this functionality. Using referenced types inside the function body is a different matter: there the actual type is resolved at the first execution in each session.
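As a sketch of what does work (assuming the role and "user" tables from the question): %TYPE is accepted for scalar parameters and for local variables inside the body, while an array parameter has to spell out a concrete element type, since there is no %TYPE[] syntax.
CREATE FUNCTION role_update(
    id                 "role".role_id%TYPE,    -- resolved to the column's type at creation time
    name               "role".role_name%TYPE,
    user_id_list       int[],                  -- no %TYPE[] exists, so the array type is spelled out
    permission_id_list int[]
) RETURNS void
LANGUAGE plpgsql AS
$$
DECLARE
    one_user "user".user_id%TYPE;              -- %TYPE also works for local variables
BEGIN
    FOREACH one_user IN ARRAY user_id_list LOOP
        RAISE NOTICE 'would update user %', one_user;
    END LOOP;
END;
$$;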

What is the purpose of the input output functions in Postgresql 9.2 user defined types?

I have been implementing user defined types in Postgresql 9.2 and got confused.
In the PostgreSQL 9.2 documentation, there is a section (35.11) on user defined types. In the third paragraph of that section, the documentation refers to input and output functions that are used to construct a type. I am confused about the purpose of these functions. Are they concerned with on-disk representation or only in-memory representation? In the section referred to above, after defining the input and output functions, it states that:
If we want to do anything more with the type than merely store it, we must provide additional functions to implement whatever operations we'd like to have for the type.
Do the input and output functions deal with serialization?
As I understand it, the input function is the one used when performing an INSERT INTO, and the output function when performing a SELECT on the type. So basically, if we want to perform an INSERT INTO, do we need a serialization function embedded in or invoked by the input or output function? Can anyone help explain this to me?
Types must have a text representation, so that values of this type can be expressed as literals in a SQL query, and returned as results in output columns.
For example, '2013-01-20' is a text representation of a date. It's possible to write VALUES('2013-01-20'::date) in a SQL statement, because the input function of the date type recognizes this string as a date and transforms it into an internal representation (used both in memory and when storing to disk).
Conversely, when client code issues SELECT date_field FROM table, the values inside date_field are returned in their text representation, which is produced by the type's output function from the internal representation (unless the client requested a binary format for this column).
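A small illustration of the two directions, using the built-in date type (the same machinery applies to user-defined types):
-- the input function parses the text literal into the internal representation;
-- the output function renders the internal value back to text for the client
SELECT '2013-01-20'::date           AS parsed_by_input_function,
       ('2013-01-20'::date)::text   AS rendered_by_output_function;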

Convert a Navision Filter to SQL where

I have a field in a table in the following format: 1_2..1_10|1_6|1_8|, where 1_2..1_10 includes 1_2, 1_3, and so on.
How can I select the rows where the number = 1_3?
1st suggestion: Get rights to modify the db structure and figure out how to better store the Navision string.
2nd suggestion: CLR
I'll assume you are relatively comfortable with each of these concepts. If you aren't they are very well documented all over the web.
My approach would be to use a CLR function, as there are some high-level things that are awkward in SQL but that C# takes care of quite easily. The pseudo walk-through would go something like this.
Implementation
Create a CLR function and deploy it on the SQL Server instance.
Change the query so that it filters on the value returned by the CLR function for the Navision filter column, looking for "1_3".
CLR Function Logic
Create a c# function that takes in the value of the filter field and returns a string value.
The CLR function splits the filter field by the | char into a list.
Inside the CLR function create a second list. Iterate over the first list. When you find a ranged string, split it on the ".." and add every value in the range to the second list. When you find a value that isn't ranged, simply add it to the second list.
Join the contents of the second list together on the "|" character.
Return the joined value.
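To make the intended behaviour concrete, here is roughly what the call would produce for the sample value from the question (CLRFunctionName is still just a placeholder):
-- expected: '1_2|1_3|1_4|1_5|1_6|1_7|1_8|1_9|1_10|1_6|1_8'
SELECT dbo.CLRFunctionName('1_2..1_10|1_6|1_8|') AS FixedFilterValue;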
SQL Logic
SELECT Field1,Field2...dbo.CLRFunctionName(FilterValue) AS FixedFilterValue FROM Sometable WHERE dbo.CLRFunctionName(FilterValue) LIKE '%1[_]3%';
(The column alias cannot be referenced in the WHERE clause, so the function call is repeated there; the underscore is wrapped in brackets because _ is a single-character wildcard in LIKE, and the CLR scalar function has to be called with its schema, assumed here to be dbo.)