Trying to create an aggregate function in PostgreSQL

I'm trying to create a new aggregate function in PostgreSQL to use instead of the sum() function.
I started my journey in the manual here.
Since I wanted to create a function that takes an array of double precision values, sums them, and then does some additional calculations, I first created the final function. It takes double precision as input and returns double precision as output:
CREATE OR REPLACE FUNCTION eeincometax(tax double precision)
RETURNS double precision LANGUAGE plpgsql AS $$
DECLARE
v double precision;
BEGIN
IF tax > 256 THEN
v := 256;
ELSE
v := tax;
END IF;
RETURN v*0.21/0.79;
END;
$$;
Then I wanted to create the aggregate function that takes an array of double precision values and puts out a single double precision value for my previous function to handle.
CREATE AGGREGATE aggregate_ee_income_tax (float8[]) (
sfunc = array_agg
,stype = float8
,initcond = '{}'
,finalfunc = eeincometax);
What I get when I run that command is:
ERROR: function array_agg(double precision, double precision[]) does
not exist
I'm somewhat stuck here, because the manual lists array_agg() as an existing function. What am I doing wrong?
Also, when I run:
\da
List of aggregate functions
Schema | Name | Result data type | Argument data types | Description
--------+------+------------------+---------------------+-------------
(0 rows)
Does my installation have no aggregate functions at all? Or does \da only list user-defined functions?
Basically what I'm trying to understand:
1) Can I use an existing function to sum up my array values?
2) How can I find out about the input and output data types of functions? The docs claim that array_agg() takes any kind of input.
3) What is wrong with my own aggregate function?
Edit 1
To give more information and clearer picture of what I'm trying to achieve:
I have one huge query over several tables which goes something like this:
SELECT sum(tax) ... from (SUBQUERY) as foo group by id
I want to replace that sum function with my own aggregate function so I don't have to do additional calculations on the backend, since they can all be done at the database level.
Edit 2
Accepted Ants's answer. Since the final solution comes from the comments, I'm posting it here for reference:
CREATE AGGREGATE aggregate_ee_income_tax (float8)
(
sfunc = float8pl
,stype = float8
,initcond = '0.0'
,finalfunc = eeincometax
);
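In the query from Edit 1, the aggregate then simply replaces sum(); a sketch, reusing the placeholder names from above:
SELECT id, aggregate_ee_income_tax(tax) AS tax
FROM (SUBQUERY) AS foo
GROUP BY id;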

array_agg is an aggregate function, not a regular function, so it can't be used as the state transition function for a new aggregate. What you want to do is create an aggregate whose state transition function is identical to that of array_agg, combined with a custom final function.
Unfortunately, the state transition function of array_agg is defined in terms of an internal datatype, so it can't be reused directly. Fortunately, there is an existing function in core, array_append, that already does what you want.
CREATE AGGREGATE aggregate_ee_income_tax (float8)(
sfunc = array_append,
stype = float8[],
initcond = '{}',
finalfunc = eeincometax);
Also note that you had your types mixed up: you probably want to aggregate a set of floats into an array, not a set of arrays into a float.

In addition to @Ants's excellent advice:
1.) Your final function could be simplified to:
CREATE FUNCTION eeincometax(float8)
RETURNS float8 LANGUAGE SQL AS
$func$
SELECT (least($1, 256) * 21) / 79
$func$;
2.) It seems like you are dealing with money? In that case I would strongly advise using the type numeric (preferred) or money for the purpose. Floating-point operations are often not precise enough.
3.) The initial condition of the aggregate can simply be just 0:
CREATE AGGREGATE aggregate_ee_income_tax(float8)
(
sfunc = float8pl
,stype = float8
,initcond = 0
,finalfunc = eeincometax
);
4.) In your case (least(sum(tax), 256) * 21) / 79 is probably faster than your custom aggregate. Aggregate functions provided by PostgreSQL are written in C and optimized for performance. I would use that instead.
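Applied to the query from Edit 1, that would be roughly (a sketch, reusing the question's placeholder names):
SELECT id, (least(sum(tax), 256) * 21) / 79 AS tax
FROM (SUBQUERY) AS foo
GROUP BY id;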

Related

Postgres describe of aggregate function returning type modification value of -1 for user defined type

I've created an aggregate function with the following:
CREATE FUNCTION rtrim(mychar) RETURNS mychar
AS '$libdir/libmy_pgmod', 'mycharrtrim'
LANGUAGE C IMMUTABLE STRICT;
CREATE OR REPLACE FUNCTION mychar_max( mychar, mychar ) RETURNS mychar
AS '$libdir/libmy_pgmod', 'mychar_max'
LANGUAGE C IMMUTABLE STRICT;
CREATE OR REPLACE FUNCTION mychar_min( mychar, mychar ) RETURNS mychar
AS '$libdir/libmy_pgmod', 'mychar_min'
LANGUAGE C IMMUTABLE STRICT;
CREATE AGGREGATE MAX( mychar ) (
SFUNC = mychar_max,
STYPE = mychar,
SORTOP = >
);
CREATE AGGREGATE MIN( mychar ) (
SFUNC = mychar_min,
STYPE = mychar,
SORTOP = <
);
mychar is a type that is defined with 2 type modifiers. The first type modifier is the length of the string and the second is the CCSID of the string, since we are trying to simulate a z/OS string. I then create a table like the following:
create table t1 (c1 mychar(20, 1208), c2 char(20));
Within my C code I then try to do a describe of the following statement:
select c1, max(c1), max(c2) from t1 group by c1;
The describe returns fine, however, when I try to retrieve the data from the describe using the following code:
char *colName = PQfname( result, hvNum );
int colTmod = PQfmod( result, hvNum );
int colSize = PQfsize( result, hvNum );
Oid oid = PQftype( result, hvNum );
Oid tblOid = PQftable( result, hvNum );
For the first column I get the expected values (colName, colTmod, oid and tblOid). For the second column (max(c1)) it returns max as the colName (which I expected), and it also returns the correct oid. However, for colTmod it returns -1. Is there something I need to do to get the proper colTmod value returned in this case? For the max(c2) column, which is a native char, everything is returned correctly as expected, including the colTmod of 24. There must be something I am doing incorrectly that results in my implementation of the type or the aggregate function not returning the type modification value correctly.
I am not 100% certain, but I am pretty sure that the result of an aggregate function has no type modifiers.
I tried your experiment with a column defined as numeric(7,2), and like you I got -1 for PQfmod when I queried the maximum.
numeric's max aggregate is defined using numeric_larger, which is defined as:
Datum
numeric_larger(PG_FUNCTION_ARGS)
{
    Numeric num1 = PG_GETARG_NUMERIC(0);
    Numeric num2 = PG_GETARG_NUMERIC(1);

    /*
     * Use cmp_numerics so that this will agree with the comparison operators,
     * particularly as regards comparisons involving NaN.
     */
    if (cmp_numerics(num1, num2) > 0)
        PG_RETURN_NUMERIC(num1);
    else
        PG_RETURN_NUMERIC(num2);
}
So it is as simple as it can get and returns one of the input values.
If type modifiers are not preserved there, I'd guess they are never preserved in an aggregate.
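A quick way to see the same thing from SQL alone (a sketch, assuming a scratch table t and view v): wrap the aggregate in a view and inspect the reported precision; the type modifier is gone.
CREATE TABLE t (x numeric(7,2));
CREATE VIEW v AS SELECT max(x) AS mx FROM t;
SELECT table_name, column_name, data_type, numeric_precision, numeric_scale
FROM information_schema.columns
WHERE table_name IN ('t', 'v');
-- t.x shows precision 7, scale 2; v.mx is plain numeric with NULL precision and scale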

Create function to compute an average of 3 values

I'm trying to write a function in PostgreSQL to take an average across three columns. I have written the following function:
create function xcol_avg (col1, col2, col3)
returns numeric as $$
begin
return (coalesce(col1, 0) + coalesce(col2,0) +coalesce(col3, 0))/
case when (col 1 is null or col1 = 0 then 0 else 1 end +
case when (col 2 is null or col2 = 0 then 0 else 1 end +
case when (col 3 is null or col3 = 0 then 0 else 1 end;
end
What is the problem with my code? Also, is there a way to get the function to return null if it ends up dividing by 0? Any help is really appreciated.
Thanks!
Actually, you can make a function that takes a variable number of arguments and computes the average over however many it receives. In Postgres there's a keyword, VARIADIC, for such things:
SQL functions can be declared to accept variable numbers of arguments, so long as all the "optional" arguments are of the same data type
Function code:
CREATE FUNCTION xcol_avg(numeric, VARIADIC numeric[])
RETURNS numeric
LANGUAGE plpgsql
IMMUTABLE
AS $$
BEGIN
RETURN (SELECT AVG(vals) FROM unnest($2 || ARRAY[$1]) t(vals));
END;
$$;
Use case with different number of arguments:
select xcol_avg(1,6); -- returns 3.5
select xcol_avg(1,5.5,4); -- returns 3.5
select xcol_avg(1,2,3,4,5,6,7); -- returns 4
Explanation:
Marking a function as IMMUTABLE improves the execution time by allowing the optimizer to pre-evaluate the function. Immutable functions cannot modify the database and are guaranteed to always return the same results when called with the same input.
Declaring the last parameter of a function as VARIADIC which has to be of an array type lets you provide optional arguments that will be passed to the function as an array. Note that you don't explicitly write the array, you just list your parameters as you normally would.
unnest() is a function that returns a set of rows by expanding an array. In other words, it's "unpacking" the array elements into separate rows.
|| is an array operator that provides the array-to-array concatenation. Here it serves the purpose of connecting the first (required) argument with the rest given in a VARIADIC array.
AVG() is an aggregate function that computes an average of all input values. In our case it would take "unpacked" rows from a column named vals and compute the average.
With this solution you don't need to worry about dividing by zero, as at least one argument is required and avg() is doing the job you wanted to do manually by building up the denominator.
Apply it in a query:
This function would also work for computing an average of multiple columns in a row. Consider a table tbl with columns name, cost1, cost2, cost3 and the statement below:
SELECT
name, cost1, cost2, cost3,
xcol_avg(cost1, cost2, cost3) AS average_cost
FROM tbl
For more general information about CREATE FUNCTION, check the documentation.
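For reference, if you prefer to keep the original three-column signature from the question, here is a minimal corrected sketch (the name xcol_avg3 and the numeric parameter types are assumptions; like the original, it only counts non-NULL, non-zero values in the denominator, and NULLIF makes the result NULL instead of raising a division-by-zero error):
-- hypothetical fixed-arity variant of the function from the question
CREATE FUNCTION xcol_avg3(col1 numeric, col2 numeric, col3 numeric)
RETURNS numeric
LANGUAGE sql
IMMUTABLE
AS $$
-- numerator: treat NULLs as 0; denominator: count only non-NULL, non-zero values
-- NULLIF turns a zero denominator into NULL, so dividing yields NULL rather than an error
SELECT (coalesce(col1, 0) + coalesce(col2, 0) + coalesce(col3, 0))
       / NULLIF( (col1 IS NOT NULL AND col1 <> 0)::int
               + (col2 IS NOT NULL AND col2 <> 0)::int
               + (col3 IS NOT NULL AND col3 <> 0)::int, 0);
$$;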

How to create a custom windowing function for PostgreSQL? (Running Average Example)

I would really like to better understand what is involved in creating a UDF that operates over windows in PostgreSQL. I did some searching about how to create UDFs in general, but haven't found an example of how to do one that operates over a window.
To that end I am hoping that someone would be willing to share code for how to write a UDF (it can be in C, PL/pgSQL or any of the procedural languages supported by PostgreSQL) that calculates the running average of numbers in a window. I realize there are ways to do this by applying the standard average aggregate function with the windowing syntax (the ROWS BETWEEN syntax, I believe); I am simply asking for this because I think it makes a good, simple example. Also, I think that if there were a windowing version of the average function, the database could keep a running sum and observation count and wouldn't have to sum up almost identical sets of rows at each iteration.
You have to look at the PostgreSQL source code: postgresql/src/backend/utils/adt/windowfuncs.c and postgresql/src/backend/executor/nodeWindowAgg.c
There is no good documentation :( -- a fully functional window function can only be implemented in C or PL/v8; there is no API for other languages.
http://www.pgcon.org/2009/schedule/track/Version%208.4/128.en.html is a presentation from the author of the implementation in PostgreSQL.
I found only one non-core implementation: http://api.pgxn.org/src/kmeans/kmeans-1.1.0/
http://pgxn.org/dist/plv8/1.3.0/doc/plv8.html
According to the documentation "Other window functions can be added by the user. Also, any built-in or user-defined normal aggregate function can be used as a window function." (section 4.2.8). That worked for me for computing stock split adjustments:
CREATE OR REPLACE FUNCTION prod(float8, float8) RETURNS float8
AS 'SELECT $1 * $2;'
LANGUAGE SQL IMMUTABLE STRICT;
CREATE AGGREGATE prods ( float8 ) (
SFUNC = prod,
STYPE = float8,
INITCOND = 1.0
);
create or replace view demo.price_adjusted as
select id, vd,
prods(sdiv) OVER (PARTITION by id ORDER BY vd DESC ROWS UNBOUNDED PRECEDING) as adjf,
rawprice * prods(sdiv) OVER (PARTITION by id ORDER BY vd DESC ROWS UNBOUNDED PRECEDING) as price
from demo.prices_raw left outer join demo.adjustments using (id,vd);
Here are the schemas of the two tables:
CREATE TABLE demo.prices_raw (
id VARCHAR(30),
vd DATE,
rawprice float8 );
CREATE TABLE demo.adjustments (
id VARCHAR(30),
vd DATE,
sdiv float);
Starting with table
payments
+------------------------------+
| customer_id | amount | item |
| 5 | 10 | book |
| 5 | 71 | mouse |
| 7 | 13 | cover |
| 7 | 22 | cable |
| 7 | 19 | book |
+------------------------------+
SELECT customer_id,
AVG(amount) OVER (PARTITION BY customer_id) AS avg_amount,
item
FROM payments;
we get
+----------------------------------+
| customer_id | avg_amount | item |
| 5 | 40.5 | book |
| 5 | 40.5 | mouse |
| 7 | 18 | cover |
| 7 | 18 | cable |
| 7 | 18 | book |
+----------------------------------+
AVG being an aggregate function, it can act as a window function. However, not all window functions are aggregate functions. Aggregate functions are the simpler kind of window function.
In the query above, let's not use the built-in AVG function and use our own implementation. Does the same, just implemented by the user. The query above becomes:
SELECT customer_id,
my_avg(amount) OVER (PARTITION BY customer_id) AS avg_amount,
item
FROM payments;
The only difference from the former query is that AVG has been replaced with my_avg. We now need to implement our custom function.
On how to compute the average
Sum up all the elements, then divide by the number of elements. For customer_id of 7, that would be (13 + 22 + 19) / 3 = 18.
We can divide it into:
a step-by-step accumulation -- the sum.
a final operation -- division.
On how the aggregate function gets to the result
The average is computed in steps. Only the last value is necessary.
Start with an initial value of 0.
Feed 13. Compute the intermediate/accumulated sum, which is 13.
Feed 22. Compute the accumulated sum, which needs the previous sum plus this element: 13 + 22 = 35
Feed 19. Compute the accumulated sum, which needs the previous sum plus this element: 35 + 19 = 54. This is the total that needs to be divided by the number of elements (3).
The result of step 3 is fed to another function that knows how to divide the accumulated sum by the number of elements.
What happened here is that the state started with the initial value of 0 and was changed with every step, then passed to the next step.
State travels between steps for as long as there is data. When all data is consumed, the state goes to a final function (the terminal operation). We want the state to contain all the information needed by the accumulator as well as by the terminal operation.
In the specific case of computing the average, the terminal operation needs to know how many elements the accumulator worked with because it needs to divide by that. For that reason, the state needs to include both the accumulated sum and the number of elements.
We need a tuple that will contain both. The pre-defined PostgreSQL POINT type to the rescue: POINT(5, 89) means 5 elements accumulated so far with a sum of 89. The initial state will be POINT(0,0).
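The two components of a point can be read back by index, which is exactly what the state and final functions below rely on; a quick sanity check:
SELECT p[0] AS elements, p[1] AS accumulated_sum
FROM (SELECT point(5, 89) AS p) s;
-- elements = 5, accumulated_sum = 89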
The accumulator is implemented in what's called a state function. The terminal operation is implemented in what's called a final function.
When defining a custom aggregate function we need to specify:
the aggregate function name and return type
the initial state
the type of the state that the infrastructure will pass between steps and to the final function
a state function -- knows how to perform the accumulation steps
a final function -- knows how to perform the terminal operation. Not always needed (e.g. in a custom implementation of SUM the final value of the accumulated sum is the result.)
Here's the definition for the custom aggregate function.
CREATE AGGREGATE my_avg (NUMERIC) ( -- NUMERIC is the type of the aggregate's input argument
initcond = '(0,0)', -- this is the initial state of type POINT
stype = POINT, -- this is the type of the state that will be passed between steps
sfunc = my_acc, -- this is the function that folds the next element into the state. Takes in the state (type POINT) and an element for the step (type NUMERIC)
finalfunc = my_final_func -- returns the result for the aggregate function. Takes in the state of type POINT (like all other steps) and returns the result as what the aggregate function returns - NUMERIC
);
The only thing left is to define two functions my_acc and my_final_func.
CREATE FUNCTION my_acc (state POINT, elem_for_step NUMERIC) -- performs accumulated sum
RETURNS POINT
LANGUAGE SQL
AS $$
-- state[0] is the number of elements, state[1] is the accumulated sum
SELECT POINT(state[0]+1, state[1] + elem_for_step);
$$;
CREATE FUNCTION my_final_func (POINT) -- performs the division and returns the final value
RETURNS NUMERIC
LANGUAGE SQL
AS $$
-- $1[1] is the sum, $1[0] is the number of elements
SELECT ($1[1]/$1[0])::NUMERIC;
$$;
Now that the functions are available CREATE AGGREGATE defined above will run successfully. Now that we have the aggregate defined, the query based on my_avg instead of the built-in AVG can be run:
SELECT customer_id,
my_avg(amount) OVER (PARTITION BY customer_id) AS avg_amount,
item
FROM payments;
The results are identical with what you get when using the built-in AVG.
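A quick way to convince yourself is to run both side by side over the same payments table (a sketch):
SELECT customer_id,
AVG(amount) OVER (PARTITION BY customer_id) AS builtin_avg,
my_avg(amount) OVER (PARTITION BY customer_id) AS custom_avg,
item
FROM payments;
-- builtin_avg and custom_avg match for every row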
The PostgreSQL documentation suggests that users are limited to implementing user-defined aggregate functions (which can then be used as window functions), rather than true window functions:
In addition to these [pre-defined window] functions, any built-in or user-defined general-purpose or statistical aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function;
What I suspect ordered-set or hypothetical-set aggregates means:
the value returned is identical for all rows in the partition (e.g. AVG and SUM; in contrast, RANK returns different values for the rows in a group depending on more sophisticated criteria)
it makes no sense to ORDER BY when PARTITIONing, because the values are the same for all rows anyway; in contrast, we do want to ORDER BY when using RANK()
Query:
SELECT customer_id, item, rank() OVER (PARTITION BY customer_id ORDER BY amount desc) FROM payments;
Geometric mean
The following is a user-defined aggregate function for which I found no built-in equivalent and which may be useful to some.
The state function computes the average of the natural logarithms of the terms.
The final function raises constant e to whatever the accumulator provides.
CREATE OR REPLACE FUNCTION sum_of_log(state POINT, curr_val NUMERIC)
RETURNS POINT
LANGUAGE SQL
AS $$
SELECT POINT(state[0] + 1,
(state[1] * state[0] + LN(curr_val)) / (state[0] + 1));
$$;
CREATE OR REPLACE FUNCTION e_to_avg_of_log(POINT)
RETURNS NUMERIC
LANGUAGE SQL
AS $$
select exp($1[1])::NUMERIC;
$$;
CREATE AGGREGATE geo_mean (NUMERIC)
(
stype = POINT,
initcond = '(0,0)', -- represent POINT value
sfunc = sum_of_log,
finalfunc = e_to_avg_of_log
);
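A quick sanity check with a value list (a sketch): the geometric mean of 2 and 8 is 4.
SELECT geo_mean(x) FROM (VALUES (2.0), (8.0)) AS v(x);
-- exp((ln(2) + ln(8)) / 2) = 4 (up to floating-point rounding)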
PL/R provides such functionality. See here for some examples. That said, I'm not sure that it (currently) meets your requirement of "keep[ing] a running sum and observation count and [not] sum[ming] up almost identical sets of rows at each iteration" (see here).

calculation in postgresql function

I'm trying to create a function that serves as a replacement for the PostgreSQL sum() function in my query.
The query itself is LOOONG and goes something like this:
SELECT .... sum(tax) as tax ... from (SUBQUERY) as bar group by id;
I want to replace this sum function with my own, which alters the end value by a tax calculation.
So I created pgsql function like this:
DECLARE
v double precision;
BEGIN
IF val > 256 THEN
v := 256;
ELSE
v := val;
END IF;
RETURN (v*0.21)/0.79;
END;
That function takes double precision as input. Using this function gives me an error though:
column "bar.tax" must appear in the GROUP BY clause or be used in an aggregate function
So I tried to use double precision[] as the input type for the function; in that case the error told me that there was no PostgreSQL function matching the given name and input type.
So is there a way to replace the sum function in my query using PL/pgSQL functions or not? I don't want to change any other parts of my query unless I really, Really, REALLY have to.
Look into CREATE AGGREGATE for aggregate functions. And in your expression you can do something like sum(LEAST(tax,256)*21/79) if I read you right.
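If the cap is meant to apply to the aggregated total rather than to each row (as the accepted aggregate in the related question above does), the expression becomes, sketched with the question's own placeholders:
SELECT ..., (LEAST(sum(tax), 256) * 21) / 79 AS tax ...
FROM (SUBQUERY) AS bar
GROUP BY id;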

Are you able to use a custom Postgres comparison function for ORDER BY clauses?

In Python, I can write a sort comparison function which returns an item in the set {-1, 0, 1} and pass it to a sort function like so:
sorted(["some","data","with","a","nonconventional","sort"], custom_function)
This code will sort the sequence according to the collation order I define in the function.
Can I do the equivalent in Postgres?
e.g.
SELECT widget FROM items ORDER BY custom_function(widget)
Edit: Examples and/or pointers to documentation are welcome.
Yes you can; you can even create a functional index to speed up the sorting.
Edit: Simple example:
CREATE TABLE foo(
id serial primary key,
bar int
);
-- create some data
INSERT INTO foo(bar) SELECT i FROM generate_series(50,70) i;
-- show the result
SELECT * FROM foo;
CREATE OR REPLACE FUNCTION my_sort(int) RETURNS int
LANGUAGE sql
IMMUTABLE -- required so the function can be used in the index expression below
AS
$$
SELECT $1 % 5; -- get the modulo (remainder)
$$;
-- let's sort!
SELECT *, my_sort(bar) FROM foo ORDER BY my_sort(bar) ASC;
-- make an index as well:
CREATE INDEX idx_my_sort ON foo ((my_sort(bar)));
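On a table with enough rows for an index scan to pay off, you can check whether the planner picks up the expression index (a sketch; with only the 21 toy rows above it will most likely still choose a sequential scan):
EXPLAIN SELECT * FROM foo ORDER BY my_sort(bar) ASC;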
The manual is full of examples how to use your own functions, just start playing with it.
SQL: http://www.postgresql.org/docs/current/static/xfunc-sql.html
PL/pgSQL: http://www.postgresql.org/docs/current/static/plpgsql.html
We can avoid confusion about ordering methods by naming them:
the "score function" of standard SQL select * from t order by f(x) clauses, and
the "compare function" ("sort function" in the question text) of Python's list sort method.
The ORDER BY clause of PostgreSQL has 3 mechanisms to sort:
Standard, using a "score function", which you can also use with an INDEX.
Special "standard string-comparison alternatives", by collation configuration (only for text, varchar, etc. datatypes).
The ORDER BY ... USING clause. See this question or the docs example. Example: SELECT * FROM mytable ORDER BY somecol USING ~<~, where ~<~ is an operator that embeds a compare function.
Perhaps the "standard way" in an RDBMS (such as PostgreSQL) is not like Python's because indexing is the aim of an RDBMS, and it's easier to index score functions.
Answers to the question:
Direct solution. There is no direct way to use a user-defined function as a compare function, as in the sort method of languages like Python or JavaScript.
Indirect solution. You can use a user-defined compare function in a user-defined operator, and a user-defined operator class to index it (see the sketch after this list). See the PostgreSQL docs:
CREATE OPERATOR with the compare function;
CREATE OPERATOR CLASS, to be indexable.
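To make the indirect route concrete, here is a sketch that sorts text by string length through a three-way compare function, in the spirit of the question's items/widget example. All of the names (bylen_cmp, bylen_lt and friends, the <# family of operators, bylen_ops) are made up for illustration. Note that CREATE OPERATOR CLASS normally requires superuser privileges, and that ORDER BY ... USING only accepts an operator that is a less-than or greater-than member of some btree operator family, which is exactly what the operator class provides.
-- Three-way compare function (the Python-style cmp); all names here are hypothetical
CREATE FUNCTION bylen_cmp(text, text) RETURNS integer
LANGUAGE sql IMMUTABLE STRICT AS
$$ SELECT CASE WHEN length($1) < length($2) THEN -1
               WHEN length($1) > length($2) THEN  1
               ELSE 0 END $$;
-- Boolean helpers backing the five btree strategies
CREATE FUNCTION bylen_lt(text, text) RETURNS boolean
LANGUAGE sql IMMUTABLE STRICT AS $$ SELECT length($1) <  length($2) $$;
CREATE FUNCTION bylen_le(text, text) RETURNS boolean
LANGUAGE sql IMMUTABLE STRICT AS $$ SELECT length($1) <= length($2) $$;
CREATE FUNCTION bylen_eq(text, text) RETURNS boolean
LANGUAGE sql IMMUTABLE STRICT AS $$ SELECT length($1) =  length($2) $$;
CREATE FUNCTION bylen_ge(text, text) RETURNS boolean
LANGUAGE sql IMMUTABLE STRICT AS $$ SELECT length($1) >= length($2) $$;
CREATE FUNCTION bylen_gt(text, text) RETURNS boolean
LANGUAGE sql IMMUTABLE STRICT AS $$ SELECT length($1) >  length($2) $$;
-- Wrap the helpers in operators
CREATE OPERATOR <#  (LEFTARG = text, RIGHTARG = text, PROCEDURE = bylen_lt);
CREATE OPERATOR <=# (LEFTARG = text, RIGHTARG = text, PROCEDURE = bylen_le);
CREATE OPERATOR =#  (LEFTARG = text, RIGHTARG = text, PROCEDURE = bylen_eq);
CREATE OPERATOR >=# (LEFTARG = text, RIGHTARG = text, PROCEDURE = bylen_ge);
CREATE OPERATOR >#  (LEFTARG = text, RIGHTARG = text, PROCEDURE = bylen_gt);
-- Tie them together so the planner (and ORDER BY ... USING) can use them
CREATE OPERATOR CLASS bylen_ops FOR TYPE text USING btree AS
    OPERATOR 1 <# ,
    OPERATOR 2 <=# ,
    OPERATOR 3 =# ,
    OPERATOR 4 >=# ,
    OPERATOR 5 ># ,
    FUNCTION 1 bylen_cmp(text, text);
-- Sort by string length via the custom operator
SELECT widget FROM items ORDER BY widget USING <#;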
Explaining compare functions
In Python, the compare function looks like this:
def compare(a, b):
    return 1 if a > b else 0 if a == b else -1
The compare function uses less CPU than a score function. It is also useful to express an order when a score function is unknown.
See a complete description:
for the C language, see https://www.gnu.org/software/libc/manual/html_node/Comparison-Functions.html
for JavaScript, see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#Description
Other typical compare functions
Wikipedia's example to compare tuples:
function tupleCompare((lefta, leftb, leftc), (righta, rightb, rightc))
    if lefta ≠ righta
        return compare(lefta, righta)
    else if leftb ≠ rightb
        return compare(leftb, rightb)
    else
        return compare(leftc, rightc)
In JavaScript:
function compare(a, b) {
    if (a is less than b by some ordering criterion) {
        return -1;
    }
    if (a is greater than b by the ordering criterion) {
        return 1;
    }
    // a must be equal to b
    return 0;
}
C example from the PostgreSQL docs:
static int
complex_abs_cmp_internal(Complex *a, Complex *b)
{
    double amag = Mag(a),
           bmag = Mag(b);

    if (amag < bmag)
        return -1;
    if (amag > bmag)
        return 1;
    return 0;
}
You could do something like this
SELECT DISTINCT ON (interval_alias) *,
to_timestamp(floor((extract('epoch' FROM index.created_at) / 10)) * 10) AT
TIME ZONE 'UTC' AS interval_alias
FROM index
WHERE index.created_at >= '{start_date}'
AND index.created_at <= '{end_date}'
AND product = '{product_id}'
GROUP BY id, interval_alias
ORDER BY interval_alias;
First you define the expression that will be your ordering column with AS. It can be a function or any SQL expression. Then use it in the ORDER BY clause and you're done!
In my opinion, this is the smoothest way to do such an ordering.