PostgreSQL - Function to calculate percentages

In PostgreSQL, if I want percentages I just write:
select x / sum(x) over() ...
Inside a function it doesn't work, since aggregate functions don't behave the same way there.
I tried to find a solution but with no success.
This is a simple version of what I really need, but I believe the solution to this problem would surely point me in the right direction.
Some more details...
If I create this simple table:
create table ttt(v1 numeric, v2 numeric);
insert into ttt values (2,1),(5,2),(10,4);
If I run:
select v1/sum(v1) over() from ttt; --returns relative frequencies
I get:
select v1/sum(v1) over() from ttt;
?column?
------------------------
0.11764705882352941176
0.29411764705882352941
0.58823529411764705882
(3 rows)
Now, if I want to create a function which does the same thing, I would write:
create or replace function rfreq (double precision)
returns double precision
AS
'
select
$1 / sum($1) over()
'
LANGUAGE 'sql';
I get:
select rfreq(v1) from ttt;
rfreq
-------
1
1
1
(3 rows)
PostgreSQL is not summing over the rows inside the function.
Any suggestions?
Thank you,
Ali.

To debug your function, write the query with arbitrary parameters in a text file, and then use psql to run it:
\i ./myfunc.sql
Content of myfunc.sql would be:
select x / sum(y) over (...) ...
This will allow you to debug the query before wrapping it in a function.
When you're done and happy with the results for a few samples, copy/paste it into your function, and replace the hard-coded test values with parameters where applicable.
As to optimizing it when it has parameters, I'm not aware of any means to run EXPLAIN ANALYZE within a Postgres function, but you can get a plan which -- as best I'm aware -- is the same as the one the function will use, by preparing a statement with the same parameters. So you can EXPLAIN ANALYZE the latter instead, as sketched below.
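A minimal sketch of that prepare-then-explain approach, using the expression from the question (the statement name here is arbitrary):
-- prepare with the same parameter type the function receives
PREPARE rfreq_test(double precision) AS
    SELECT $1 / sum($1) OVER ();
-- inspect the plan and runtime the parameterized statement gets
EXPLAIN ANALYZE EXECUTE rfreq_test(2);
DEALLOCATE rfreq_test;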
Seeing the new details: note that the query you're running in the function will always return 1 (barring zero values), because each call sees only the single value passed in, so $1 / sum($1) over() reduces to $1 / $1.
There's an error in there, in the sense that you'd need to keep state from one call to the next to return the expected result. Per Pavel's suggestion, you actually need a custom aggregate or a custom window function here. See the link he suggested in a comment, as well as:
http://www.postgresql.org/docs/current/static/xaggr.html

I found the solution browsing through the pl/r mailing list.
Percentages (or relative frequencies) can be calculated in postgres using the following code:
CREATE OR REPLACE
FUNCTION rel_freq(float8)
RETURNS float8 AS
$BODY$
# farg1 holds all argument values in the window; prownum is the current row (PL/R window variables)
var <- as.vector(farg1)
return((var / sum(var))[prownum])
$BODY$
LANGUAGE plr WINDOW;
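For comparison, a window function also works fine inside a plain SQL set-returning function, because there the whole query (and thus the whole table) lives inside the function, rather than one scalar per call. A sketch against the ttt table from the question:
CREATE OR REPLACE FUNCTION rel_freq_ttt()
RETURNS SETOF numeric AS
$BODY$
-- the window frame covers all rows of ttt, so sum(v1) is the grand total
SELECT v1 / sum(v1) OVER () FROM ttt;
$BODY$
LANGUAGE sql;
SELECT * FROM rel_freq_ttt(); -- returns the same three relative frequencies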

Related

Execute Function First in Postgres

I need to call a FUNCTION in postgreSQL before a SELECT is done on a table. My first thought was to use a TRIGGER but it appears you cannot trigger on select.
Thus, to work around this I created a VIEW that runs a select on the table and the function all at once. I.e.:
CREATE VIEW people_view AS
SELECT get_department(), name, title, department
FROM people_table
So, in a nutshell... the get_department() function would update the department column from external data (this is all using foreign data tables and wrappers).
The problem is, the function executes after name, title, department are selected and not before. So if I run it once it doesn't work. If I run it twice it does (because it updated after the first run).
Sorry if this doesn't make much sense. I don't typically do DB work. What I'd like to do is get "get_department()" to execute first in the SELECT. I tried to put the function call in a sub-query, but it still did not execute first. The only thought I have left is perhaps an explicit join to force the order? I have no idea :-(.
Basically, I just want to execute a function before a SELECT transparently for the person running the query... and I guess you can't do that with triggers. If there's a better workaround, I'd be more than happy to give it a go.
Thanks,
Isekal
with t(x) as (
    select get_department() -- function executes here
)
select
    t.x, ...
from
    t, people_table -- you don't provide any relation between your function and the table...
Also check the LATERAL feature.
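A hedged sketch of the LATERAL idea, assuming (as in the question's view) that get_department() takes no arguments and returns a selectable value; with LATERAL, the subquery could also reference columns of people_table if the function took arguments:
SELECT d.dept, p.name, p.title, p.department
FROM people_table p
CROSS JOIN LATERAL (SELECT get_department() AS dept) d;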
Alternatively, you can construct a function that returns your table and calls your function before selecting the data.
CREATE OR REPLACE FUNCTION people_table()
RETURNS TABLE(name character varying, title character varying, department character varying) AS
$BODY$
BEGIN
-- call the function first; PERFORM discards the result (a plain SELECT with no
-- target raises "query has no destination for result data" in PL/pgSQL)
PERFORM get_department();
RETURN QUERY
SELECT people_table.* FROM people_table;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
-- then, later in your code, use table selecting on the new function
SELECT * from people_table();
-- notice the use of parentheses.
-- you may also write:
SELECT name FROM people_table() ORDER BY department DESC;
-- and use the function as if it were the table itself.
Hope it helps.

Postgres using functions inside queries

I have a table with common word values to match against brands - so when someone types in "coke" I want to match any possible brand names associated with it as well as the original term.
CREATE TABLE word_association ( commonterm TEXT, assocterm TEXT);
INSERT INTO word_association VALUES ('coke', 'coca-cola'), ('coke', 'cocacola'), ('coke', 'coca cola');
I have a function to create a list of these values in a pipe-delimited string for pattern matching:
CREATE OR REPLACE FUNCTION usp_get_search_terms(userterm text)
RETURNS text AS
$BODY$DECLARE
returnstr TEXT DEFAULT '';
BEGIN
SET DATESTYLE TO DMY;
returnstr := userterm;
IF EXISTS (SELECT 1 FROM word_association WHERE LOWER(commonterm) = LOWER(userterm)) THEN
SELECT returnstr || '|' || string_agg(assocterm, '|') INTO returnstr
FROM word_association
WHERE commonterm = userterm;
END IF;
RETURN returnstr;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION usp_get_search_terms(text)
OWNER TO customer_role;
If you call SELECT * FROM usp_get_search_terms('coke') you end up with
coke|coca-cola|cocacola|coca cola
EDIT: this function runs <100ms so it works fine.
I want to run a query with this text inserted e.g.
SELECT X.article_number, X.online_description
FROM articles X
WHERE LOWER(X.online_description) % usp_get_search_terms ('coke');
This takes approx 56s to run against my table of ~500K records.
If I get the raw text and use it in the query it takes ~300ms e.g.
SELECT X.article_number, X.online_description
FROM articles X
WHERE X.online_description % '(coke|coca-cola|cocacola|coca cola)';
The result sets are identical.
I've tried modifying the output string from the function, e.g. enclosing it in quotes and parentheses, but it doesn't seem to make a difference.
Can someone please advise why there is a difference here? Is it the data type or something about calling functions inside queries? Thanks.
Your function might take 100 ms, but the query isn't calling it once; it's calling it 500,000 times, once per row.
It's because your function is declared VOLATILE. This tells Postgres that either the function returns different values when called multiple times within a query (like clock_timestamp() or random()), or that it alters the state of the database in some way (for example, by inserting records).
If your function contains only SELECTs, with no INSERTs, calls to other VOLATILE functions, or other side-effects, then you can declare it STABLE instead. This tells the planner that it can call the function just once and reuse the result without affecting the outcome of the query.
But your function does have side-effects, due to the SET DATESTYLE statement, which takes effect for the rest of the session. I doubt this was the intention, however. You may be able to remove it, as it doesn't look like date formatting is relevant to anything in there. But if it is necessary, the correct approach is to use the SET clause of the CREATE FUNCTION statement to change it only for the duration of the function call:
...
$BODY$
LANGUAGE plpgsql STABLE
SET DATESTYLE TO DMY
COST 100;
The other issue with the slow version of the query is the call to LOWER(X.online_description), which will prevent the query from utilising the index (since online_description is indexed, but LOWER(online_description) is not).
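If you want to keep the LOWER() call, an expression index is one option. A sketch, assuming the % operator comes from the pg_trgm extension (which supplies the gin_trgm_ops operator class):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- index the same expression the WHERE clause uses, so the planner can match it
CREATE INDEX articles_descr_lower_trgm_idx
    ON articles USING gin (LOWER(online_description) gin_trgm_ops);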
With these changes, the performance of both queries is the same; see this SQLFiddle.
So the answer came to me about dawn this morning - CTEs to the rescue!
Particularly as this is the "simple" version of a very large query, it helps to get this defined once in isolation, then do the matching against it. The alternative (given I'm calling this from a NodeJS platform) is to have one request retrieve the string of terms, then make another request to pass the string back. Not elegant.
WITH matches AS (
    SELECT * FROM usp_get_search_terms('coke')
), main AS (
    SELECT X.article_number, X.online_description
    FROM articles X
    JOIN matches M ON X.online_description % M.usp_get_search_terms
)
SELECT * FROM main;
Execution time is somewhere around 300-500ms depending on term searched and articles returned.
Thanks for all your input guys - I've learned a few things about Postgres that my MS-SQL background didn't necessarily prepare me for :)
Have you tried removing the IF EXISTS() and simply using:
SELECT returnstr || '|' || string_agg(assocterm, '|') INTO returnstr
FROM word_association
WHERE LOWER(commonterm) = LOWER(userterm)
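One caveat with dropping the IF EXISTS: string_agg() returns NULL when no rows match, and concatenating NULL nulls out the whole string. A COALESCE preserves the fallback to the bare user term:
SELECT COALESCE(returnstr || '|' || string_agg(assocterm, '|'), returnstr)
INTO returnstr
FROM word_association
WHERE LOWER(commonterm) = LOWER(userterm);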
Instead of calling the function for each row, call it once:
select x.article_number, x.online_description
from
    woolworths.articles x
    cross join
    woolworths.usp_get_search_terms('coke') c (s)
where lower(x.online_description) % s

Executing queries dynamically in PL/pgSQL

I have found solutions (I think) to the problem I'm about to ask for on Oracle and SQL Server, but can't seem to translate this into a Postgres solution. I am using Postgres 9.3.6.
The idea is to be able to generate "metadata" about the table content for profiling purposes. This can only be done (AFAIK) by having queries run for each column so as to find out, say... min/max/count values and such. In order to automate the procedure, it is preferable to have the queries generated by the DB, then executed.
With an example salesdata table, I'm able to generate a select query for each column, returning the min() value, using the following snippet:
SELECT 'SELECT min('||column_name||') as minval_'||column_name||' from salesdata '
FROM information_schema.columns
WHERE table_name = 'salesdata'
The advantage being that the db will generate the code regardless of the number of columns.
Now there's a myriad places I had in mind for storing these queries, either a variable of some sort, or a table column, the idea being to then have these queries execute.
I thought of storing the generated queries in a variable, then executing them using the EXECUTE (or EXECUTE IMMEDIATE) statement, which is the approach employed here (see right pane). But Postgres won't let me declare a variable outside a function, and I've been scratching my head over how this fits together, whether it's even the direction to follow, or whether there's something simpler.
Would you have any pointers, I'm currently trying something like this, inspired by this other question but have no idea whether I'm headed in the right direction:
CREATE OR REPLACE FUNCTION foo()
RETURNS void AS
$$
DECLARE
dyn_sql text;
BEGIN
dyn_sql := SELECT 'SELECT min('||column_name||') from salesdata'
FROM information_schema.columns
WHERE table_name = 'salesdata';
execute dyn_sql
END
$$ LANGUAGE PLPGSQL;
System statistics
Before you roll your own, have a look at the system table pg_statistic or the view pg_stats:
This view allows access only to rows of pg_statistic that correspond to tables the user has permission to read, and therefore it is safe to allow public read access to this view.
It might already have some of the statistics you are about to compute. It's populated by ANALYZE, so you might run that for new (or any) tables before checking.
-- ANALYZE tbl; -- optionally, to init / refresh
SELECT * FROM pg_stats
WHERE tablename = 'tbl'
AND schemaname = 'public';
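A few pg_stats columns already map to common profiling needs, for example:
-- null fraction, estimated distinct count, and most common values per column
SELECT attname, null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'tbl'
AND schemaname = 'public';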
Generic dynamic plpgsql function
You want to return the minimum value for every column in a given table. This is not a trivial task, because a function (like SQL in general) demands to know the return type at creation time - or at least at call time with the help of polymorphic data types.
This function does everything automatically and safely. Works for any table, as long as the aggregate function min() is allowed for every column. But you need to know your way around PL/pgSQL.
CREATE OR REPLACE FUNCTION f_min_of(_tbl anyelement)
RETURNS SETOF anyelement
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE (
SELECT format('SELECT (t::%2$s).* FROM (SELECT min(%1$s) FROM %2$s) t'
, string_agg(quote_ident(attname), '), min(' ORDER BY attnum)
, pg_typeof(_tbl)::text)
FROM pg_attribute
WHERE attrelid = pg_typeof(_tbl)::text::regclass
AND NOT attisdropped -- no dropped (dead) columns
AND attnum > 0 -- no system columns
);
END
$func$;
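To make the dynamic statement concrete: for a hypothetical two-column table tbl(a int, b text), the EXECUTE above builds and runs roughly this:
SELECT (t::tbl).* FROM (SELECT min(a), min(b) FROM tbl) t;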
Call (important!):
SELECT * FROM f_min_of(NULL::tbl); -- tbl being the table name
You need to understand these concepts:
Dynamic SQL in plpgsql with EXECUTE
Polymorphic types
Row types and table types in Postgres
How to defend against SQL injection
Aggregate functions
System catalogs
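As a minimal illustration of the first and fourth points, EXECUTE with format() and %I quotes identifiers safely (hypothetical column name; salesdata is the question's table):
DO
$$
DECLARE
    _result numeric;
BEGIN
    -- %I applies identifier quoting, defending against SQL injection
    EXECUTE format('SELECT min(%I) FROM %I', 'some_column', 'salesdata')
    INTO _result;
    RAISE NOTICE 'min = %', _result;
END
$$;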
Related answer with detailed explanation:
Table name as a PostgreSQL function parameter
Refactor a PL/pgSQL function to return the output of various SELECT queries
Postgres data type cast
How to set value of composite variable field using dynamic SQL
How to check if a table exists in a given schema
Select columns with particular column names in PostgreSQL
Generate series of dates - using date type as input
Special difficulty with type mismatch
I am taking advantage of Postgres defining a row type for every existing table. Using the concept of polymorphic types I am able to create one function that works for any table.
However, some aggregate functions return related but different data types as compared to the underlying column. For instance, min(varchar_column) returns text, which is bit-compatible, but not exactly the same data type. PL/pgSQL functions have a weak spot here and insist on data types exactly as declared in the RETURNS clause. No attempt to cast, not even implicit casts, not to speak of assignment casts.
That should be improved. Tested with Postgres 9.3. Did not retest with 9.4, but I am pretty sure nothing has changed in this area.
That's where this construct comes in as workaround:
SELECT (t::tbl).* FROM (SELECT ... FROM tbl) t;
By casting the whole row to the row type of the underlying table explicitly we force assignment casts to get original data types for every column.
This might fail for some aggregate functions. sum() returns numeric for sum(bigint_column) to accommodate a sum overflowing the base data type. Casting back to bigint might fail ...
@Erwin Brandstetter, many thanks for the extensive answer. pg_stats does indeed provide a few things, but what I really need to draw a complete profile is a variety of things: min and max values, counts, counts of nulls, means, etc., so a bunch of queries have to be run for each column, some with GROUP BY and such.
Also, thanks for highlighting the importance of data types; I was sort of expecting this to throw a spanner in the works at some point. My main concern was with how to automate the query generation and, above all, its execution.
I have tried the function you provided (I probably will need to start learning some plpgsql) but get an error at the SELECT (t::tbl):
ERROR: type "tbl" does not exist
btw, what is the (t::abc) notation referred to as? In Python this would be a list slice, but that's probably not the case in PL/pgSQL.

How to cut seconds from an interval column?

In my table, results from the work_time column (interval type) display as 200:00:00. Is it possible to cut the seconds part, so it will be displayed as 200:00? Or, even better: 200h00min (I've seen it accepts the h unit in INSERT, so why not load it like this?).
Preferably, by altering work_time column, not by changing the select query.
This is not something you should do by altering a column but by changing the select query in some way. If you change the column you are changing storage and functional uses, and that's not good. To change it on output, you need to modify how it is retrieved.
You have two basic options. The first is to modify your select queries directly, using to_char(myintervalcol, 'HH24:MI')
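For example, using the placeholder names from this answer (myintervalcol in a table mytable):
-- HH24 prints the interval's hours field as stored (200:00:00 stays 200),
-- and double-quoted literals in the template give the 200h00min form
SELECT to_char(myintervalcol, 'HH24:MI') AS hhmm,            -- 200:00
       to_char(myintervalcol, 'HH24"h"MI"min"') AS pretty    -- 200h00min
FROM mytable;
-- note: an interval stored with a days component would need justify_hours() first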
However if your issue is that you have a common format you want to have universal access to in your select query, PostgreSQL has a neat trick I call "table methods." You can attach a function to a table in such a way that you can call it in a similar (but not quite identical) syntax to a new column. In this case you would do something like:
CREATE OR REPLACE FUNCTION myinterval_nosecs(mytable) RETURNS text LANGUAGE SQL
IMMUTABLE AS
$$
SELECT to_char($1.myintervalcol, 'HH24:MI');
$$;
This works on the row input, not on the underlying table. As it always returns the same information for the same input, you can mark it IMMUTABLE and even index the output (meaning it can be evaluated at plan time and indexes can be used).
To call this, you'd do something like:
SELECT myinterval_nosecs(m) FROM mytable m;
But you can then use the special syntax above to rewrite that as:
SELECT m.myinterval_nosecs FROM mytable m;
Note that since myinterval_nosecs is a function, you cannot omit the m. at the beginning. This is because the query planner rewrites the query into the former syntax and will not guess which relation you mean to run it against.

In postgres (plpgsql), how to make a function that returns select * on a variable table_name?

Basically, at least for proof of concept, I want a function where I can run:
SELECT res('table_name'); and this will give me the results of SELECT * FROM table_name;.
The issue I am having is schema...in the declaration of the function I have:
CREATE OR REPLACE FUNCTION res(table_name TEXT) RETURNS SETOF THISISTHEPROBLEM AS
The problem is that I do not know how to declare my return, as it wants me to specify a table or a schema, and I won't have that until the function is actually run.
Any ideas?
You can do this, but as mentioned before, you have to add a column definition list in the SELECT query.
CREATE OR REPLACE FUNCTION res(table_name TEXT) RETURNS SETOF record AS $$
BEGIN
RETURN QUERY EXECUTE 'SELECT * FROM ' || table_name;
END;
$$ LANGUAGE plpgsql;
SELECT * FROM res('sometable') sometable (col1 INTEGER, col2 INTEGER, col3 SMALLINT, col4 TEXT);
Why, for any real practical purpose, would you want to just pass in a table and select * from it? For fun, maybe?
You can't do it without defining some kind of known output, as jack and rudi show, or doing it like depesz does here using output parameters: http://www.depesz.com/index.php/2008/05/03/waiting-for-84-return-query-execute-and-cursor_tuple_fraction/
A few hack-around approaches are to issue RAISE NOTICEs in a loop and print out a result set one row at a time, or to create a function called get_rows_TABLENAME that has a definition for every table you want to return (see the sketch below); just use code to generate the procedure creations. But again, I'm not sure how much value there is in doing a select * from a table, especially with no constraints, other than for fun or making the DBA's blood boil.
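A sketch of that per-table approach (hypothetical table sometable); returning the table's row type avoids the column definition list at the call site:
CREATE OR REPLACE FUNCTION get_rows_sometable()
RETURNS SETOF sometable AS $$
BEGIN
    RETURN QUERY SELECT * FROM sometable;
END;
$$ LANGUAGE plpgsql;
-- no column definition list needed:
SELECT * FROM get_rows_sometable();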
Now in SQL Server you can have a stored procedure return a dynamic result set. This is both a blessing and curse as you can't be certain what comes back without looking up the definition. For me I look at PostgreSQL's implementation to be the more sound way to go about it.
Even if you manage to do this (see rudi-moore's answer for a way if you have 8.4 or above), you will have to expand the type explicitly in the select, e.g.:
SELECT res('table_name') as foo(id int,...)