I have a table with a series of months and cumulative activity, e.g.
month | activity
Jan-15 | 20
Feb-15 | 22
I also have a series of thresholds in another table, e.g. 50, 100, 200. I need to get the date when each threshold is reached, i.e. the first month where activity >= threshold.
The way I thought of doing this is to have a PL/pgSQL function that reads the thresholds table, iterates over that cursor, reads the months table into a second cursor, and iterates over those rows to work out the month where the threshold is reached. For performance reasons, rather than re-selecting all rows in the months table each time, I would then move back to the first row of that cursor and iterate over it again with the next value from the thresholds table.
Is this a sensible way to approach the problem? This is what I have so far; I am getting:
ERROR: cursor "curs" already in use
CREATE OR REPLACE FUNCTION schema.function()
RETURNS SETOF schema.row_type AS
$BODY$
DECLARE
rec RECORD;
rectimeline RECORD;
notification_threshold int;
notification_text text;
notification_date date;
output_rec schema.row_type;
curs SCROLL CURSOR FOR select * from schema.another_function_returning_set(); -- this is months table
curs2 CURSOR FOR select * from schema.notifications_table;
BEGIN
OPEN curs;
FOR rec IN curs2 LOOP
notification_threshold := rec.threshold;
LOOP
FETCH curs INTO rectimeline; -- this line seems to be the problem - not sure why cursor is closing
IF notification_threshold >= rectimeline.activity_total THEN
notification_text := rec.housing_notification_text;
notification_date := rectimeline.active_date;
SELECT notification_text, notification_date INTO output_rec.notification_text, output_rec.notification_date;
MOVE FIRST from curs;
RETURN NEXT output_rec;
END IF;
END LOOP;
END LOOP;
RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
You don't need cursors or a loop for this at all; a single set-based query does it:
select distinct on (t.threshold) *
from
thresholds t
inner join
months m on m.activity >= t.threshold
order by t.threshold desc, m.month
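To see the idea with sample data like the question's (values invented here so the threshold is actually reachable), the same query can be tested inline:

```sql
WITH months(month, activity) AS (
    VALUES ('2015-01-01'::date, 20),
           ('2015-02-01'::date, 45),
           ('2015-03-01'::date, 60)
), thresholds(threshold) AS (
    VALUES (50)
)
SELECT DISTINCT ON (t.threshold)
       t.threshold, m.month
FROM thresholds t
JOIN months m ON m.activity >= t.threshold   -- months that reach the threshold
ORDER BY t.threshold, m.month;               -- earliest such month wins
```

DISTINCT ON keeps the first row per threshold in ORDER BY order, i.e. the earliest month that reaches it; here threshold 50 is reached in March 2015.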
I am new to PL/pgSQL and am experimenting with cursors.
I have the following simple code:
create or replace function func_cursor_2()
returns setof numeric as $$
declare
cursor1 CURSOR for select empno,ename, job from emp;
r record;
begin
open cursor1;
loop
fetch from cursor1 into r;
exit when not found;
return next r.empno;
end loop;
close cursor1;
end;
$$ language plpgsql;
select func_cursor_2();
With fetch from cursor1 into r, it looks to me like I am fetching the result rows one by one.
Is there a way to fetch 100 rows in a single FETCH from the cursor?
Why bother with a cursor at all? This can be done in one statement.
create or replace function func_cursor_2()
returns setof numeric
language sql
as $$
select empno
from emp
limit 100;
$$;
However, the above will not return consistent results. To get consistent results you will need to add an ORDER BY empno, and perhaps an OFFSET, depending on your exact needs.
Note: Not Tested.
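As for fetching 100 rows per FETCH: inside PL/pgSQL, FETCH retrieves a single row at a time, but a SQL-level cursor does accept a row count. A sketch using the question's table:

```sql
BEGIN;
DECLARE emp_cur CURSOR FOR SELECT empno FROM emp ORDER BY empno;
FETCH 100 FROM emp_cur;   -- first batch of up to 100 rows
FETCH 100 FROM emp_cur;   -- next batch
CLOSE emp_cur;
COMMIT;
```

Note that a SQL-level cursor only lives until the end of its transaction unless it is declared WITH HOLD.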
I need to fetch data in batches of 100 records from a PostgreSQL table. I have tried the following function:
CREATE OR REPLACE FUNCTION fetch_compare_prices(n integer)
RETURNS SETOF varchar AS $$
DECLARE
curs CURSOR FOR SELECT * FROM compareprices LIMIT n;
row RECORD;
BEGIN
open curs;
LOOP
FETCH FROM curs INTO row;
EXIT WHEN NOT FOUND;
return next row.deal_id;
END LOOP;
END; $$ LANGUAGE plpgsql;
I ran select fetch_compare_prices(100); to get results from the above function, but it always gives me the same 100 records. Is there a way to fetch 100-record batches using a cursor?
Also, with the return next row.deal_id; statement I can only return the deal_id and no other columns. Is there a way to get all the columns of the row?
It should also work so that the first time I run select fetch_compare_prices(100); it returns the first 100 rows, and the second time it returns rows 100 to 200 (the next 100). What is the correct usage of a cursor to do this?
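A cursor declared inside a PL/pgSQL function is closed when the function returns, so repeated calls will always restart from the first row. One common alternative is keyset pagination; a sketch, assuming deal_id is a unique, ordered key, that also returns whole rows (RETURNS SETOF compareprices) rather than just deal_id:

```sql
CREATE OR REPLACE FUNCTION fetch_compare_prices(after_id integer, n integer)
RETURNS SETOF compareprices AS $$
    SELECT *
    FROM compareprices
    WHERE deal_id > after_id      -- resume after the last id already seen
    ORDER BY deal_id
    LIMIT n;
$$ LANGUAGE sql;

-- first batch:
--   SELECT * FROM fetch_compare_prices(0, 100);
-- next batch: pass the largest deal_id returned by the previous call
```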
I have a stored procedure as below:
CREATE OR REPLACE FUNCTION DELETE_REDUNDANT_RECORDS_STORED_PROCEDURE()
RETURNS void AS
$func$
DECLARE
interval_time BIGINT DEFAULT 0;
min_time BIGINT DEFAULT 0;
max_time BIGINT DEFAULT 0;
rec_old RECORD;
rec_new RECORD;
rec_start RECORD;
cursor_file CURSOR FOR
SELECT distinct filename,systemuid FROM BOOKMARK.MONITORING_TESTING;
cursor_data CURSOR FOR
SELECT * FROM BOOKMARK.MONITORING_TESTING WHERE filename = v_filename AND systemuid=v_systemuid ORDER BY mindatetime, maxdatetime;
BEGIN
-- Use cursors for iteration
-- Business logic to delete and update the table records based on certain conditions
END;
$func$
LANGUAGE plpgsql;
The distinct query returns around a million records, and each of those rows drives an iteration over the second cursor.
I want to split these million records into configurable chunks, for example 200k records each, until all the records are read.
How can I achieve this within my stored procedure?
You can add a window function call to the cursor's SELECT list:
(row_number() OVER ()) / 10000 AS chunk
That will add a number that you can use to split the result into chunks of 10000.
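Applied to the cursor in the question, that might look as follows (a sketch; 200k rows per chunk as mentioned, with an explicit ORDER BY so the numbering is deterministic, and - 1 so every chunk holds exactly 200000 rows):

```sql
cursor_file CURSOR FOR
    SELECT filename, systemuid,
           (row_number() OVER (ORDER BY filename, systemuid) - 1) / 200000 AS chunk
    FROM (SELECT DISTINCT filename, systemuid
          FROM bookmark.monitoring_testing) AS f;
```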
I am using PostgreSQL 11, and a function that works well in a single run fails when I add a LOOP statement with:
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
The function has VOID as its return value, selects data from a source table into a temp table, calculates some data, and inserts the result into a target table. The temp table is then dropped and the function ends. I would like to repeat this procedure at defined intervals, so I added a LOOP statement. With the LOOP it does not insert into the target table and does not actually loop at all.
create function transfer_cs_regular_loop(trading_pair character varying) returns void
language plpgsql
as
$$
DECLARE
first_open decimal;
first_price decimal;
last_close decimal;
last_price decimal;
highest_price decimal;
lowest_price decimal;
trade_volume decimal;
n_trades int;
start_time bigint;
last_entry bigint;
counter int := 0;
time_frame int := 10;
BEGIN
WHILE counter < 100 LOOP
SELECT max(token_trades.trade_time) INTO last_entry FROM token_trades WHERE token_trades.trade_symbol = trading_pair;
RAISE NOTICE 'Latest Entry: %', last_entry;
start_time = last_entry - (60 * 1000);
RAISE NOTICE 'Start Time: %', start_time;
CREATE TEMP TABLE temp_table AS
SELECT * FROM token_trades where trade_symbol = trading_pair and trade_time > start_time;
SELECT temp_table.trade_time,temp_table.trade_price INTO first_open, first_price FROM temp_table ORDER BY temp_table.trade_time ASC FETCH FIRST 1 ROW ONLY;
SELECT temp_table.trade_time,temp_table.trade_price INTO last_close, last_price FROM temp_table ORDER BY temp_table.trade_time DESC FETCH FIRST 1 ROW ONLY;
SELECT max(temp_table.trade_price) INTO highest_price FROM temp_table;
SELECT min(temp_table.trade_price) INTO lowest_price FROM temp_table;
SELECT INTO trade_volume sum(temp_table.trade_quantity) FROM temp_table;
SELECT INTO n_trades count(*) FROM temp_table;
INSERT INTO candlestick_data_5min_test(open, high, low, close, open_time, close_time, volume, number_trades, trading_pair) VALUES (first_price, highest_price, lowest_price, last_price, first_open, last_close, trade_volume, n_trades, trading_pair);
DROP TABLE temp_table;
counter := counter + 1;
SELECT pg_sleep(time_frame);
RAISE NOTICE '**************************Counter: %', counter;
END LOOP;
END;
$$;
The error refers to the last SELECT statement in the function. In PL/pgSQL a SELECT must deliver its result somewhere: INTO variables, or as the function's result set. SELECT pg_sleep(time_frame); does neither, so the query has no destination for its result data.
PERFORM does exactly the same thing as a SELECT but discards the result, which is what you want here.
Therefore change the last SELECT into a PERFORM and the error will go away:
PERFORM pg_sleep(time_frame);
I'm new to PostgreSQL. Assume I have a table (tbl_box) with thousands of records, and it is growing. I want to delete 10 rows starting at a specific position (for example, records from the 50th row to the 59th row). I wrote a function, which you can see below:
-- Function: public.signalreject()
-- DROP FUNCTION public.signalreject();
CREATE OR REPLACE FUNCTION public.signalreject()
RETURNS void AS
$BODY$
DECLARE
rec RECORD;
cur CURSOR
FOR SELECT barcode,id
FROM tbl_box where gf is null order by id desc;
counter int ;
BEGIN
-- Open the cursor
OPEN cur;
counter:=0;
LOOP
-- fetch row into the rec
FETCH cur INTO rec;
-- exit when no more row to fetch
EXIT WHEN NOT FOUND;
counter :=counter+1;
-- build the output
IF counter >= 50 and counter < 60 THEN
delete from tbl_box where barcode = rec.barcode;
END IF;
END LOOP;
-- Close the cursor
CLOSE cur;
END; $BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION public.signalreject()
OWNER TO Morteza;
I found that the cursor consumes memory and has high CPU usage. What would you suggest instead of a cursor?
Is this a good way to do this?
I need the fastest way, because it is important to me that the delete finishes in the shortest possible time.
This seems pretty elaborate; why not just do:
delete from tbl_box
where barcode in
( select barcode
from tbl_box
where gf is null
order by id desc limit 10 offset 49
);
assuming that barcode is unique. We skip 49 rows so the delete starts at row 50 and removes 10 rows (rows 50 to 59).
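If barcode turns out not to be unique, the same rows can be addressed by their physical row identifier instead; a sketch using ctid:

```sql
delete from tbl_box
where ctid in
  ( select ctid
    from tbl_box
    where gf is null
    order by id desc limit 10 offset 49
  );
```

ctid identifies a row's physical location and is stable for the duration of the statement, which is all this query needs.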