How to loop over all rows and do a calculation in PostgreSQL - postgresql

How can I write a PostgreSQL function which, for each row in a table, takes the value in the "agg_id" column and does some calculations?
In:
FOR r IN SELECT * FROM foo
what is r?
Thanks

Wild guess from a rather unclear question:
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM foo
    LOOP
        -- r holds the current row; access its columns with dot notation
        RAISE NOTICE 'agg_id is %', r.agg_id;
    END LOOP;
END;
$$;
See the PL/PgSQL manual.
Note that it's almost always better to use plain SQL, but it's hard to make useful suggestions about how, given the total lack of information in the question.
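For instance, if the per-row calculation can be expressed as a function or expression, a single query does it all. A minimal sketch, assuming a hypothetical my_calc function and the foo table from above:
-- my_calc is a hypothetical placeholder for the actual calculation
SELECT agg_id, my_calc(agg_id) AS result
FROM foo;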

Related

How to migrate oracle's pipelined function into PostgreSQL

As a newbie to PL/pgSQL, I am stuck migrating an Oracle query to PostgreSQL.
Oracle query:
create or replace FUNCTION employee_all_case(
    p_ugr_id IN integer,
    p_case_type_id IN integer
)
RETURN number_tab_t PIPELINED
-- LANGUAGE 'plpgsql'
-- COST 100
-- VOLATILE
-- AS $$
-- DECLARE
is
    l_user_id NUMBER;
    l_account_id NUMBER;
BEGIN
    l_user_id := p_ugr_id;
    l_account_id := p_case_type_id;
    FOR cases IN
        (SELECT ccase.case_id, ccase.employee_id
         FROM ct_case ccase
         INNER JOIN ct_case_type ctype
             ON (ccase.case_type_id = ctype.case_type_id)
         WHERE ccase.employee_id = l_user_id)
    LOOP
        IF cases.employee_id IS NOT NULL THEN
            PIPE ROW (cases.case_id);
        END IF;
    END LOOP;
    RETURN;
END;
--$$
In Oracle I execute this function like this:
select * from table(select employee_all_case(14533,1190) from dual)
My question here is: I do not really understand pipelined functions. How can I obtain the same result in PostgreSQL as with the Oracle query?
Please help.
Thank you guys, your solution was very helpful.
I found the desired result:
-- select * from employee_all_case(14533,1190);
-- drop function employee_all_case
create or replace FUNCTION employee_all_case(p_ugr_id IN integer, p_case_type_id IN integer)
returns table (case_id double precision)
-- PIPELINED
LANGUAGE plpgsql
COST 100
VOLATILE
AS $$
DECLARE
    -- is
    l_user_id integer;
    l_account_id integer;
BEGIN
    l_user_id := cp_lookup$get_user_id_from_ugr_id(p_ugr_id);
    l_account_id := cp_lookup$acctid_from_ugr(p_ugr_id);
    RETURN QUERY SELECT ccase.case_id
                 FROM ct_case ccase
                 INNER JOIN ct_case_type ctype ON ccase.case_type_id = ctype.case_type_id
                 WHERE ccase.employee_id = p_ugr_id
                   AND ccase.employee_id IS NOT NULL;
    -- return NEXT;
END;
$$;
You would rewrite that as a set-returning function:
Change the return type to
RETURNS SETOF integer
and do away with the PIPELINED.
Change the PIPE ROW statement to
RETURN NEXT cases.case_id;
Of course, you will have to do the obvious syntactic changes, like using integer instead of NUMBER and putting the IN before the parameter name.
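Put together, a minimal sketch of the rewritten loop version (assuming the tables from the question and an integer case_id):
CREATE OR REPLACE FUNCTION employee_all_case(IN p_ugr_id integer, IN p_case_type_id integer)
RETURNS SETOF integer AS $$
DECLARE
    cases record;
BEGIN
    FOR cases IN
        SELECT ccase.case_id, ccase.employee_id
        FROM ct_case ccase
        INNER JOIN ct_case_type ctype ON ccase.case_type_id = ctype.case_type_id
        WHERE ccase.employee_id = p_ugr_id
    LOOP
        IF cases.employee_id IS NOT NULL THEN
            RETURN NEXT cases.case_id;  -- replaces Oracle's PIPE ROW
        END IF;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;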
But actually, it is quite unnecessary to write a function for that. Doing it in a single SELECT statement would be both simpler and faster.
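For example, a sketch assuming the same tables and the sample arguments from the question:
SELECT ccase.case_id
FROM ct_case ccase
INNER JOIN ct_case_type ctype ON ccase.case_type_id = ctype.case_type_id
WHERE ccase.employee_id = 14533   -- the value previously passed as p_ugr_id
  AND ccase.employee_id IS NOT NULL;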
Pipelined functions are best translated to a simple SQL function returning a table.
Something like this:
create or replace function employee_all_case(p_ugr_id integer, p_case_type_id integer)
returns table (case_id integer)
as
$$
    SELECT ccase.case_id
    FROM ct_case ccase
    INNER JOIN ct_case_type ctype ON ccase.case_type_id = ctype.case_type_id
    WHERE ccase.employee_id = p_ugr_id
      AND ccase.employee_id IS NOT NULL;
$$
language sql;
Note that your sample code did not use the second parameter p_case_type_id.
Usage is also more straightforward:
select *
from employee_all_case(14533,1190);
Before diving into the solution, here is some background that will help you understand better.
Basically, PIPELINED came into the picture to improve memory allocation at run time. As you know, collections occupy memory as soon as they are created, so the more you use, the more memory gets allocated.
Pipelining negates the need to build huge collections by piping rows out of the function as they are produced, saving memory and allowing subsequent processing to start before all the rows are generated.
Pipelined table functions include the PIPELINED clause and use the PIPE ROW call to push rows out of the function as soon as they are created, rather than building up a table collection.
How does pipelining optimize memory usage? It's very simple: instead of storing the data in an array, each row is processed and emitted with PIPE ROW (of the desired type), which returns the row to the caller and then moves on to the next one.
Coming to the solution in PL/pgSQL, there are two options.
The first is simple but not preferred when handling large amounts of data:
remove PIPELINED from the return declaration and return an array of the desired type, something like RETURNS typrec2[].
Wherever you used PIPE ROW(), append that entry to the array, and finally return the array.
The second is to create a temporary table (see the sketch below), e.g.
CREATE TEMPORARY TABLE temp_table (required fields) ON COMMIT DROP;
and insert data into it: replace each PIPE ROW with an INSERT statement, and finish with a statement like
return query select * from temp_table;
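A minimal sketch of the temp-table variant, assuming the tables from the question, an integer case_id, and at most one call per transaction (the temp table lives until commit):
CREATE OR REPLACE FUNCTION employee_all_case_tmp(p_ugr_id integer)
RETURNS SETOF integer AS $$
BEGIN
    -- collect the rows that Oracle would have piped out one by one
    CREATE TEMPORARY TABLE temp_case (case_id integer) ON COMMIT DROP;
    INSERT INTO temp_case
        SELECT ccase.case_id
        FROM ct_case ccase
        WHERE ccase.employee_id = p_ugr_id
          AND ccase.employee_id IS NOT NULL;
    RETURN QUERY SELECT t.case_id FROM temp_case t;
END;
$$ LANGUAGE plpgsql;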
The best link for understanding PIPELINED in Oracle: https://oracle-base.com/articles/misc/pipelined-table-functions
A fairly ordinary reference for the Postgres side: http://manojadinesh.blogspot.com/2011/11/pipelined-in-oracle-as-well-in.html
Hope this helps someone conceptually.

postgres inbuilt vs user defined function performance

I have to count the number of bits set in very large integer columns in PostgreSQL, for which I wrote a PL/pgSQL function that counts the bits in an integer.
CREATE OR REPLACE FUNCTION bitcount(i integer) RETURNS integer AS $$
DECLARE
    bitCount integer;
BEGIN
    bitCount := 0;
    -- Kernighan's method: i & (i-1) clears the lowest set bit,
    -- so the loop iterates once per set bit
    LOOP
        IF i = 0 THEN
            EXIT;
        END IF;
        i := i & (i - 1);
        bitCount := bitCount + 1;
    END LOOP;
    RETURN bitCount;
END
$$ LANGUAGE plpgsql;
But I found one more way to do this using Postgres's built-in functions, like:
SELECT char_length( replace(100::bit(31)::TEXT, '0', ''));
So I decided to compare the performance of both approaches, using the queries below.
First:
SELECT a.n, bitcount(a.n)
from generate_series(1, 100000) as a(n);
Second:
SELECT a.n, char_length( replace(a.n::bit(31)::TEXT, '0', ''))
FROM generate_series(1, 100000) as a(n);
I was expecting the first method to outperform the second,
but to my surprise both of them performed almost the same. In fact, on my machine the second one always completed a few seconds faster, even with a large number of integers.
Can anyone explain why the second is almost as fast as the first, despite using a cast and a string operation?
I'd say it is because PL/pgSQL is known to be slow.
Write the function in PL/Perl or PL/Python for better performance.
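As an alternative sketch (my own suggestion, not part of the original answer), the string trick can be wrapped in a plain SQL function, which the planner can inline and which avoids the PL/pgSQL interpreter entirely (assumes PostgreSQL 9.2+ for named parameters in SQL functions):
CREATE OR REPLACE FUNCTION bitcount_sql(i integer) RETURNS integer AS $$
    -- reuse the cast-and-replace trick from the question
    SELECT char_length(replace(i::bit(31)::text, '0', ''));
$$ LANGUAGE sql IMMUTABLE;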

Benchmarking/Performance of Postgresql Query

I want to measure the performance of postgresql code I wrote. In the code tables get created, selfwritten functions get called etc.
Looking around, I found EXPLAIN ANALYSE is the way to go.
However, as far as I understand it, the code only gets executed once. For a more realistic analysis I want to execute the code many times and have the results of each iteration written somewhere, ideally to a table (for statistics later).
Is there a way to do this with a native postgresql function? If there is no native postgresql function, would I accomplish this with a simple loop? Further, how would I write out the information of every EXPLAIN ANALYZE iteration?
One way to do this is to write a function that runs an explain and then spool its output into a file (or insert it into a table).
E.g.:
create or replace function show_plan(to_explain text)
returns table (line_nr integer, line text)
as
$$
declare
    l_plan_line record;
    l_line integer;
begin
    l_line := 1;
    -- note the space after the closing parenthesis, before the statement
    for l_plan_line in execute 'explain (analyze, verbose) '||to_explain loop
        return query select l_line, l_plan_line."QUERY PLAN";
        l_line := l_line + 1;
    end loop;
end;
$$
language plpgsql;
Then you can use generate_series() to run a statement multiple times:
select g.i as run_nr, e.*
from show_plan('select * from foo') e
cross join generate_series(1,10) as g(i)
order by g.i, e.line_nr;
This will run the function 10 times with the passed SQL statement. The result can either be spooled to a file (how you do that depends on the SQL client you are using) or inserted into a table.
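For example, to capture the runs in a table (a sketch, assuming a plan_runs table created for this purpose):
create table plan_runs (run_nr integer, line_nr integer, line text);

insert into plan_runs
select g.i, e.line_nr, e.line
from show_plan('select * from foo') e
cross join generate_series(1,10) as g(i);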
For an automatic analysis it's probably easier to use a more "parseable" explain format, e.g. XML or JSON. This is also easier to handle in the output, as the plan is a single XML (or JSON) value instead of multiple text lines:
create or replace function show_plan_xml(to_explain text)
returns xml
as
$$
declare
    l_plan xml;
begin
    execute 'explain (analyze, verbose, format xml) '||to_explain into l_plan;
    return l_plan;
end;
$$
language plpgsql;
Then use:
select g.i as run_nr, show_plan_xml('select * from foo')
from generate_series(1,10) as g(i)
order by g.i;

Calculate number of rows affected by batch query in PostgreSQL

First of all, yes, I've read the documentation for the DO statement :)
http://www.postgresql.org/docs/9.1/static/sql-do.html
So my question:
I need to execute a dynamic block of code that contains UPDATE statements and calculate the total number of affected rows. I'm using the ADO.NET provider.
In Oracle the solution would take 4 steps:
add an InputOutput parameter "N" to the command
add BEGIN ... END; around the command
add :N := :N + sql%rowcount after each statement
read the N parameter from the command after executing it
How can I do it with PostgreSQL? I'm using the Npgsql provider but can migrate to Devart if it helps.
DO statement blocks are good to execute dynamic SQL. They are no good to return values. Use a plpgsql function for that.
The key statement you need is:
GET DIAGNOSTICS integer_var = ROW_COUNT;
Details in the manual.
Example code:
CREATE OR REPLACE FUNCTION f_upd_some()
RETURNS integer AS
$func$
DECLARE
ct int;
i int;
BEGIN
EXECUTE 'UPDATE tbl1 ...'; -- something dynamic here
GET DIAGNOSTICS ct = ROW_COUNT; -- initialize with 1st count
UPDATE tbl2 ...; -- nothing dynamic here
GET DIAGNOSTICS i = ROW_COUNT;
ct := ct + i; -- add up
RETURN ct;
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM f_upd_some();
My solution is quite simple. In Oracle I needed variables to calculate the sum of updated rows because command.ExecuteNonQuery() returns only the count of rows affected by the last UPDATE in the batch.
However, Npgsql returns the sum of all rows updated by all UPDATE queries in the batch. So I only need to call command.ExecuteNonQuery() and get the result, without any variables. Much easier than with Oracle.

Putting EXPLAIN results into a table?

I see from the Postgres 8.1 docs that EXPLAIN generates something like tabular data:
Prior to PostgreSQL 7.3, the plan was emitted in the form of a NOTICE
message. Now it appears as a query result (formatted like a table with
a single text column).
I'm working with 9.0, for which, the docs say, the output can be a variety of types, including TEXT and XML. What I'd really like to do is treat the output as a standard query result so that I could generate a simple report for a query or a set of queries, e.g.,
SELECT maxcost FROM (
EXPLAIN VERBOSE
SELECT COUNT(*)
FROM Mytable
WHERE value>17);
The above doesn't work in any form that I've tried, and I made up the attribute maxcost to demonstrate how neat it would be to pull out specific bits of data (in this case, the maximum estimated cost of the query). Is there anything I can do that would get me part of the way there? I'd prefer to be able to work within a simple SQL console.
No other answers so far, so here's my own stab at it.
It's possible to read the results of explain into a variable within plpgsql, and since the output can be in XML one can wrap EXPLAIN in a stored function to yield the top-level costs using xpath:
CREATE OR REPLACE FUNCTION estimate_cost(IN query text,
                                         OUT startup numeric,
                                         OUT totalcost numeric,
                                         OUT planrows numeric,
                                         OUT planwidth numeric)
AS
$BODY$
DECLARE
    query_explain text;
    explanation xml;
    nsarray text[][];
BEGIN
    nsarray := ARRAY[ARRAY['x', 'http://www.postgresql.org/2009/explain']];
    query_explain := 'EXPLAIN (FORMAT XML) ' || query;
    EXECUTE query_explain INTO explanation;
    startup   := (xpath('/x:explain/x:Query/x:Plan/x:Startup-Cost/text()', explanation, nsarray))[1];
    totalcost := (xpath('/x:explain/x:Query/x:Plan/x:Total-Cost/text()', explanation, nsarray))[1];
    planrows  := (xpath('/x:explain/x:Query/x:Plan/x:Plan-Rows/text()', explanation, nsarray))[1];
    planwidth := (xpath('/x:explain/x:Query/x:Plan/x:Plan-Width/text()', explanation, nsarray))[1];
    RETURN;
END;
$BODY$
LANGUAGE plpgsql;
Hence the example from the question becomes:
SELECT totalcost
FROM estimate_cost('SELECT COUNT(*)
FROM Mytable
WHERE value>17');
A mailing list message suggests using "FOR rec IN EXECUTE EXPLAIN ...", which seems to do the trick, e.g.:
drop table if exists temp_a;
create temp table temp_a
(
    "QUERY PLAN" text
);

DO
$$
DECLARE
    rec record;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN VERBOSE select version()'
    LOOP
        -- RAISE NOTICE 'rec=%', row_to_json(rec);
        insert into temp_a
        select rec."QUERY PLAN";
    END LOOP;
END
$$ language plpgsql;

select *
from temp_a;