Below is a snippet where I pass a cursor as an input parameter in Oracle.
I'm trying to convert the same to EnterpriseDB.
create or replace FUNCTION FUNCTION_1 (
    p_source IN SYS_REFCURSOR
)
is
    TYPE row_ntt IS TABLE OF VARCHAR2(32767);
    v_rows row_ntt;
begin
    ---
    LOOP
        FETCH p_source BULK COLLECT INTO v_rows LIMIT 1000;
        FOR i IN 1 .. v_rows.COUNT LOOP
            -- Will do something here
        END LOOP;
        EXIT WHEN p_source%NOTFOUND;
    END LOOP;
END;
It's called in the following way in Oracle:
SELECT *
FROM TABLE(function_1(CURSOR(SELECT col1 FROM tbl)));
After migration I changed it as below:
create or replace FUNCTION FUNCTION_1 (
    p_source IN SYS_REFCURSOR
)
is
    TYPE row_ntt IS TABLE OF VARCHAR2(32767);
    v_rows row_ntt;
begin
    ---
    select unnest(p_source) into v_rows;
    FOR i IN 1 .. v_rows.COUNT LOOP
        -- Will do something here
    END LOOP;
END;
I'm calling it in the following way in EnterpriseDB/PostgreSQL:
SELECT *
FROM TABLE(function_1(ARRAY(SELECT col1 FROM tbl)));
I'm getting the below error when assigning the value to v_rows:
ERROR: cannot assign non-table type to a table
Please help me make unnest produce a table type so that it can be assigned to v_rows. Or else, if there is a way I can use SYS_REFCURSOR in EnterpriseDB, that is also fine. Please let me know how to call the function in EnterpriseDB when SYS_REFCURSOR is used. Thanks!!
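For reference, plain PostgreSQL (and therefore EnterpriseDB) does let you pass a cursor into a function: plpgsql has a refcursor type, and a cursor opened in the same transaction can be passed in by its name. A minimal sketch, assuming tbl(col1 text) from the question; plpgsql has no BULK COLLECT, so rows are fetched one at a time:
CREATE OR REPLACE FUNCTION function_1(p_source refcursor)
RETURNS SETOF text
LANGUAGE plpgsql AS $$
DECLARE
    v_row text;
BEGIN
    LOOP
        FETCH p_source INTO v_row;   -- no BULK COLLECT; one row per fetch
        EXIT WHEN NOT FOUND;
        RETURN NEXT v_row;           -- "will do something here"
    END LOOP;
END;
$$;
-- The cursor is opened in the same transaction and passed by name:
BEGIN;
DECLARE my_cur CURSOR FOR SELECT col1 FROM tbl;
SELECT * FROM function_1('my_cur');
COMMIT;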
How do I use a recursive query and then a cursor to update multiple rows in PostgreSQL? I try to return data, but no data is found. Is there an alternative to using a recursive query and a cursor, or maybe better code? Please help me.
drop function proses_stock_invoice(varchar, varchar, character varying);
create or replace function proses_stock_invoice
(p_medical_cd varchar,p_post_cd varchar, p_pstruserid character varying)
returns void
language plpgsql
as $function$
declare
cursor_data refcursor;
cursor_proses refcursor;
v_medicalCd varchar(20);
v_itemCd varchar(20);
v_quantity numeric(10);
begin
open cursor_data for
with recursive hasil(idnya, level, pasien_cd, id_root) as (
select medical_cd, 1, pasien_cd, medical_root_cd
from trx_medical
where medical_cd = p_pstruserid
union all
select A.medical_cd, level + 1, A.pasien_cd, A.medical_root_cd
from trx_medical A, hasil B
where A.medical_root_cd = B.idnya
)
select idnya from hasil where level >=1;
fetch next from cursor_data into v_medicalCd;
return v_medicalCd;
while (found)
loop
open cursor_proses for
select B.item_cd, B.quantity from trx_medical_resep A
join trx_resep_data B on A.medical_resep_seqno = B.medical_resep_seqno
where A.medical_cd = v_medicalCd and B.resep_tp = 'RESEP_TP_1';
fetch next from cursor_proses into v_itemCd, v_quantity;
while (found)
loop
update inv_pos_item
set quantity = quantity - v_quantity, modi_id = p_pstruserid, modi_id = now()
where item_cd = v_itemCd and pos_cd = p_post_cd;
end loop;
close cursor_proses;
end loop;
close cursor_data;
end
$function$;
But no data is found?
You have a function declared with returns void, so it will never return any data to you. Still, you have the statement return v_medicalCd after fetching the first record from the first cursor, so the function will return at that point and never reach the lines below.
When analyzing your function you have (1) a cursor that yields a number of idnya values from table trx_medical, which is input for (2) a cursor that yields a number of v_itemCd, v_quantity from tables trx_medical_resep, trx_resep_data for each idnya, which is then used to (3) update some rows in table inv_pos_item. You do not need cursors to do that and it is, in fact, extremely inefficient. Instead, turn the whole thing into a single update statement.
I am assuming here that you want to update an inventory of medicines by subtracting the medicines prescribed to patients from the stock in the inventory. This means that you will have to sum up prescribed amounts by type of medicine. That should look like this (note the comments):
CREATE FUNCTION proses_stock_invoice
-- VVV parameter not used
(p_medical_cd varchar, p_post_cd varchar, p_pstruserid varchar)
RETURNS void AS $function$
UPDATE inv_pos_item -- VVV column repeated VVV
SET quantity = quantity - prescribed.quantity, modi_id = p_pstruserid, modi_id = now()
FROM (
WITH RECURSIVE hasil(idnya, level, pasien_cd, id_root) AS (
SELECT medical_cd, 1, pasien_cd, medical_root_cd
FROM trx_medical
WHERE medical_cd = p_pstruserid
UNION ALL
SELECT A.medical_cd, level + 1, A.pasien_cd, A.medical_root_cd
FROM trx_medical A, hasil B
WHERE A.medical_root_cd = B.idnya
)
SELECT B.item_cd, sum(B.quantity) AS quantity
FROM trx_medical_resep A
JOIN trx_resep_data B USING (medical_resep_seqno)
JOIN hasil ON A.medical_cd = hasil.idnya
WHERE B.resep_tp = 'RESEP_TP_1'
--AND hasil.level >= 1 Useless because level is always >= 1
GROUP BY 1
) prescribed
WHERE item_cd = prescribed.item_cd
AND pos_cd = p_post_cd;
$function$ LANGUAGE sql STRICT;
Important
As with all UPDATE statements, test this code before you run the function. You can do that by running the prescribed sub-query separately as a stand-alone query to ensure that it does the right thing.
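As a hedged illustration of such a test, this is the prescribed sub-query from above run stand-alone, with the parameter replaced by a literal ('user42' is a made-up placeholder for p_pstruserid):
WITH RECURSIVE hasil(idnya, level, pasien_cd, id_root) AS (
    SELECT medical_cd, 1, pasien_cd, medical_root_cd
    FROM trx_medical
    WHERE medical_cd = 'user42'   -- placeholder for p_pstruserid
    UNION ALL
    SELECT A.medical_cd, level + 1, A.pasien_cd, A.medical_root_cd
    FROM trx_medical A, hasil B
    WHERE A.medical_root_cd = B.idnya
)
SELECT B.item_cd, sum(B.quantity) AS quantity
FROM trx_medical_resep A
JOIN trx_resep_data B USING (medical_resep_seqno)
JOIN hasil ON A.medical_cd = hasil.idnya
WHERE B.resep_tp = 'RESEP_TP_1'
GROUP BY 1;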
I have a trigger function that is called by several tables when COLUMN A is updated, so that COLUMN B can be updated based on the value from a different function. (More complicated to explain than it really is.) The trigger function takes in col_a and col_b since they differ between the tables.
IF needs_updated THEN
sql = format('($1).%2$s = dbo.foo(($1).%1$s); ', col_a, col_b);
EXECUTE sql USING NEW;
END IF;
When I try to run the above, the format produces this sql:
($1).NameText = dbo.foo(($1).Name);
When I execute the SQL with the USING I am expecting something like this to happen (which works when executed straight up without dynamic sql):
NEW.NameText = dbo.foo(NEW.Name);
Instead I get:
[42601] ERROR: syntax error at or near "$1"
How can I dynamically update the column on the record/composite type NEW?
This isn't going to work, because NEW.NameText = dbo.foo(NEW.Name); isn't a valid SQL query (EXECUTE runs SQL, not plpgsql). And I cannot think of a way to dynamically update a variable attribute of NEW. My suggestion is to explicitly define the behaviour for each of your tables:
IF TG_TABLE_SCHEMA = 'my_schema' THEN
    IF TG_TABLE_NAME = 'my_table_1' THEN
        NEW.a1 = foo(NEW.b1);
    ELSIF TG_TABLE_NAME = 'my_table_2' THEN
        NEW.a2 = foo(NEW.b2);
    ... etc ...
    END IF;
END IF;
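For completeness, this is how one shared trigger function might be attached so that TG_TABLE_NAME takes the different values above; the trigger and function names are made up:
CREATE TRIGGER trg_sync_col_a
    BEFORE INSERT OR UPDATE ON my_schema.my_table_1
    FOR EACH ROW
    EXECUTE PROCEDURE my_shared_trigger_fn();
-- repeated with the same function for my_schema.my_table_2, etc.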
First: This is a giant pain in plpgsql. So my best recommendation is to do this in some other PL, such as plpythonu or plperl. Doing this in either of those would be trivial. Even if you don't want to do the whole trigger in another PL, you could still do something like:
v_new RECORD;
BEGIN
v_new := plperl_function(NEW, column_a...)
The key to doing this in plpgsql is creating a CTE that has what you need in it:
c_new_old CONSTANT text := format(
    'WITH
        NEW AS (SELECT (r).* FROM (SELECT ($1)::%1$s r) s)
      , OLD AS (SELECT (r).* FROM (SELECT ($2)::%1$s r) s)
    '
    , TG_RELID::regclass
);
You will also need to define a v_new that is a plain record. You could then do something like:
-- Replace 2nd field in NEW with a new value
sql := c_new_old || $$SELECT row(NEW.a, $3, NEW.c) FROM NEW$$;
EXECUTE sql INTO v_new USING NEW, OLD, new_value;
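On PostgreSQL 9.5+ there is also a route around the problem that avoids dynamic SQL entirely: go through jsonb. A minimal sketch, assuming col_a and col_b are passed as trigger arguments and reusing dbo.foo from the question; everything else is illustrative:
CREATE OR REPLACE FUNCTION set_col_dynamic()
RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    col_a text := TG_ARGV[0];  -- name of the source column
    col_b text := TG_ARGV[1];  -- name of the target column
BEGIN
    -- Read the source column by name via jsonb, then merge the computed
    -- value for the target column back into NEW.
    NEW := jsonb_populate_record(
               NEW,
               jsonb_build_object(col_b, dbo.foo(to_jsonb(NEW) ->> col_a))
           );
    RETURN NEW;
END;
$$;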
I need to loop through type RECORD items by key/index, like I can do this using array structures in other programming languages.
For example:
DECLARE
    data1 record;
    data2 text;
    ...
BEGIN
    ...
    FOR data1 IN
        SELECT * FROM sometable
    LOOP
        FOR data2 IN
            SELECT unnest(data1)  -- THIS DOESN'T WORK!
        LOOP
            RETURN NEXT data1[data2];  -- SMTH LIKE THIS
        END LOOP;
    END LOOP;
As @Pavel explained, it is not simply possible to traverse a record, like you could traverse an array. But there are several ways around it - depending on your exact requirements. Ultimately, since you want to return all values in the same column, you need to cast them to the same type - text is the obvious common ground, because there is a text representation for every type.
Quick and dirty
Say, you have a table with an integer, a text and a date column.
CREATE TEMP TABLE tbl(a int, b text, c date);
INSERT INTO tbl VALUES
  (1, '1text', '2012-10-01')
, (2, '2text', '2012-10-02')
, (3, ',3,ex,', '2012-10-03')      -- text with commas
, (4, '",4,"ex,"', '2012-10-04');  -- text with commas and double quotes
Then the solution can be as simple as:
SELECT unnest(string_to_array(trim(t::text, '()'), ','))
FROM tbl t;
Works for the first two rows, but fails for the special cases of rows 3 and 4.
You can easily solve the problem with commas in the text representation:
SELECT unnest(('{' || trim(t::text, '()') || '}')::text[])
FROM tbl t
WHERE a < 4;
This would work fine - except for row 4, which has double quotes in the text representation. Those are escaped by doubling them up. But the array constructor would need them escaped by \. Not sure why this incompatibility is there ...
SELECT ('{' || trim(t::text, '()') || '}') FROM tbl t WHERE a = 4
Yields:
{4,""",4,""ex,""",2012-10-04}
But you would need:
SELECT '{4,"\",4,\"ex,\"",2012-10-04}'::text[]; -- works
Proper solution
If you knew the column names beforehand, a clean solution would be simple:
SELECT unnest(ARRAY[a::text,b::text,c::text])
FROM tbl
Since you operate on records of well-known type, you can just query the system catalog:
SELECT string_agg(a.attname || '::text', ',' ORDER BY a.attnum)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = 'tbl'::regclass
AND a.attnum > 0
AND a.attisdropped = FALSE
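For tbl above, this yields: a::text,b::text,c::text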
Put this in a function with dynamic SQL:
CREATE OR REPLACE FUNCTION unnest_table(_tbl text)
RETURNS SETOF text LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE '
SELECT unnest(ARRAY[' || (
SELECT string_agg(a.attname || '::text', ',' ORDER BY a.attnum)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = _tbl::regclass
AND a.attnum > 0
AND a.attisdropped = false
) || '])
FROM ' || _tbl::regclass;
END
$func$;
Call:
SELECT unnest_table('tbl') AS val
Returns:
val
-----
1
1text
2012-10-01
2
2text
2012-10-02
3
,3,ex,
2012-10-03
4
",4,"ex,"
2012-10-04
This works without installing additional modules. Another option is to install the hstore extension and use it like @Craig demonstrates.
PL/pgSQL isn't really designed for what you want to do. It doesn't consider a record to be iterable, it's a tuple of possibly different and incompatible data types.
PL/pgSQL has EXECUTE for dynamic SQL, but EXECUTE queries cannot refer to PL/pgSQL variables like NEW or other records directly.
What you can do is convert the record to a hstore key/value structure, then iterate over the hstore. Use each(hstore(the_record)), which produces a rowset of key,value tuples. All values are cast to their text representations.
This toy function demonstrates iteration over a record by creating an anonymous ROW(..) - which will have column names f1, f2, f3 - then converting that to hstore, iterating over its column/value pairs, and returning each pair.
CREATE EXTENSION hstore;
CREATE OR REPLACE FUNCTION hs_demo()
RETURNS TABLE ("key" text, "value" text)
LANGUAGE plpgsql AS
$$
DECLARE
data1 record;
hs_row record;
BEGIN
data1 = ROW(1, 2, 'test');
FOR hs_row IN SELECT kv."key", kv."value" FROM each(hstore(data1)) kv
LOOP
"key" = hs_row."key";
"value" = hs_row."value";
RETURN NEXT;
END LOOP;
END;
$$;
In reality you would never write it this way, since the whole loop can be replaced with a single RETURN QUERY statement that does the same thing each(hstore) does anyway - so this is only to show how each(hstore(record)) works, and the above function should never actually be used.
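The equivalent RETURN QUERY form, for reference:
RETURN QUERY SELECT kv."key", kv."value" FROM each(hstore(data1)) kv;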
This feature is not supported in plpgsql - a record is not a hash array like in other scripting languages - it is similar to C or Ada, where this functionality is impossible. You can use another PL language like PL/Perl or PL/Python, or some tricks - you can iterate with the HSTORE datatype (extension) or via dynamic SQL
see How to set value of composite variable field using dynamic SQL
But a request for this functionality usually means you are doing something wrong. When you use PL/pgSQL you have to think differently than when you use JavaScript or Python.
FOR data2 IN
    SELECT d
    FROM unnest(data1) s(d)
LOOP
    RETURN NEXT data2;
END LOOP;
If you order your results prior to looping, you will accomplish what you want.
for rc in select * from t1 order by t1.key asc loop
return next rc;
end loop;
will do exactly what you need. It is also the fastest way to perform that kind of task.
I wasn't able to find a proper way to loop over a record, so what I did was convert the record to json first and loop over the json:
declare
_src_schema varchar := 'db_utility';
_targetjson json;
_key text;
_value text;
BEGIN
select row_to_json(c.*)
into _targetjson
from information_schema.columns c
where c.table_name = prm_table
  and c.column_name = prm_column
  and c.table_schema = _src_schema;
raise notice '_targetjson %', _targetjson;
FOR _key, _value IN
    SELECT * FROM json_each_text(_targetjson)  -- json_each_text, since _targetjson is json, not jsonb
LOOP
-- do some math operation on its corresponding value
RAISE NOTICE '%: %', _key, _value;
END LOOP;
return true;
end;
I'm writing some pl/pgsql code in Postgres and have run into this issue. Simplified, my code looks like this:
declare
resulter mytype%rowtype;
...
for resulter in
select id, [a lot of other fields]
from mytable [joining a lot of other tables]
where [some reasonable where clause]
loop
if [certain condition on the resulter] then
[add id to a set];
end if;
return next resulter;
end loop;
select into myvar sum([some field])
from anothertable
where id in ([my set from above])
The question is around the [add to set]. In another scenario in the past, I used to deal with it this way:
declare
myset varchar := '';
...
loop
if [condition] then
myset := myset || ',' || id;
end if;
return next resulter;
end loop;
execute 'select sum([field]) from anothertable where id in (' || trim(leading ',' from myset) || ')' into myvar;
However, this doesn't seem too efficient to me when the number of IDs to be added to this set is large. What other options do I have for keeping track of this set and then using it?
-- update --
Obviously, another option is to create a temporary table and insert ids into it when needed. Then in the last select statement have a sub-select on that temporary table - like so:
create temporary table x (id integer);
loop
if [condition] then
insert into x values (id);
end if;
return next resulter;
end loop;
select into myvar sum([field]) from anothertable where id in (select id from x);
Any other options? Also, what would be the most efficient, considering that there may be many thousands of relevant IDs?
In my opinion, temp tables are the most efficient way to handle this:
create temp table x(id integer not null primary key) on commit drop;
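For smaller sets there is also an array-based alternative that avoids both string concatenation and dynamic SQL: collect the ids into an integer[] and use = ANY in the final query. A hedged sketch; the table and column names are made up:
DO $$
DECLARE
    ids   integer[] := '{}';
    r     record;
    total numeric;
BEGIN
    FOR r IN SELECT id, flag FROM mytable LOOP
        IF r.flag THEN
            ids := ids || r.id;  -- append to the array, no string building
        END IF;
    END LOOP;

    SELECT sum(amount) INTO total
    FROM anothertable
    WHERE id = ANY (ids);        -- no dynamic SQL needed

    RAISE NOTICE 'sum = %', total;
END
$$;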