I am trying to do an update on a specific record every 1000 rows using Postgres. I am looking for a better way to do that. My function is described below:
CREATE OR REPLACE FUNCTION update_row()
RETURNS void AS
$BODY$
declare
myUID integer;
nRow integer;
maxUid integer;
BEGIN
nRow:=1000;
select max(uid_atm_inp) from tab into maxUid where field1 = '1240200';
loop
if (nRow > 1000 and nRow < maxUid) then
select uid from tab into myUID where field1 = '1240200' and uid >= nRow limit 1;
update tab
set field = 'xxx'
where field1 = '1240200' and uid = myUID;
nRow:=nRow+1000;
end if;
end loop;
END; $BODY$
LANGUAGE plpgsql VOLATILE
How can I improve this procedure? I think there is something wrong. The loop does not end and takes too much time.
To perform this task in SQL, you could use the row_number window function and update only those rows where the number is divisible by 1000.
Your loop doesn't finish because there is no EXIT or RETURN in it.
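A minimal sketch of that idea, assuming the rows should be numbered by uid (adjust the ORDER BY if "every 1000th row" means something else):
UPDATE tab
SET field = 'xxx'
WHERE uid IN (
    SELECT uid
    FROM (SELECT uid, row_number() OVER (ORDER BY uid) AS rn
          FROM tab
          WHERE field1 = '1240200') numbered
    WHERE rn % 1000 = 0
);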
I doubt you could ever rival the performance of a standard SQL update with a procedural loop. Instead of doing it a row at a time, just do it all as a single statement:
with t2 as (
select
uid, row_number() over (order by 1) as rn
from tab
where field1 = '1240200'
)
update tab t1
set field = 'xxx'
from t2
where
t1.uid = t2.uid and
mod (t2.rn, 1000) = 0
Per my comment, I am presupposing what you mean by "every 1000th row," since without some ordering criterion there is no way to say which tuple is which row number. That is easily adjusted by changing the "order by" criteria.
Adding a second where condition on the update (t1.field1 = '1240200') can't hurt, but it might not be necessary if the planner chooses a nested loop join.
This might be notionally similar to what Laurenz has in mind.
I solved this way:
declare
myUID integer;
nRow integer;
rowNum integer;
checkrow integer;
myString varchar(272);
cur_check_row cursor for select uid, row_number() over (order by 1) as rn, substr(fieldxx,1,244)
from tab where field1 = '1240200' and uid >= 1000 ORDER BY uid;
BEGIN
open cur_check_row;
loop
fetch cur_check_row into myUID, rowNum, myString;
EXIT WHEN NOT FOUND;
checkrow := mod(rowNum, 1000);
if checkrow = 0 then
update tab
set fieldxx= myString||'O'
where uid = myUID;
end if;
end loop;
close cur_check_row;
END;
DROP FUNCTION IF EXISTS top_5(customers.customerid%TYPE, products.prod_id%TYPE, orderlines.quantity%TYPE) CASCADE;
CREATE OR REPLACE FUNCTION top_5(c_id customers.customerid%TYPE, p_id products.prod_id%TYPE, quant orderlines.quantity%TYPE)
RETURNS orders.orderid%TYPE AS $$
DECLARE
top_prod CURSOR IS
SELECT inv.prod_id
FROM inventory AS inv, products AS prod
WHERE inv.prod_id=prod.prod_id
ORDER BY inv.quan_in_stock desc, inv.sales
limit 5;
ord_id orders.orderid%TYPE;
ord_date orders.orderdate%TYPE:= current_date;
ordln_id orderlines.orderlineid%TYPE:=1;
BEGIN
SELECT nova_orderid() INTO ord_id;
INSERT INTO orders(orderid, orderdate,customerid,netamount,tax,totalamount) VALUES(ord_id,ord_date,c_id,0,0,0);
PERFORM compra(c_id, p_id, 1::smallint, ord_id, ordln_id, ord_date);
IF (p_id = top_prod) THEN
UPDATE orders
SET totalamount = totalamount - (totalamount*0.2)
WHERE ord_id = (SELECT MAX(ord_id) FROM orders);
END IF;
END;
$$ LANGUAGE plpgsql;
I have the code above, and when I try to execute this
SELECT top_5(1,1,'2');
I get this error:
ERROR: operator does not exist: integer = refcursor
LINE 1: SELECT (p_id = top_prod)
You need to fetch the prod_id value from the cursor top_prod.
You cannot compare values of two different types (here, an integer with a refcursor).
Try this,
DECLARE
top_prod_id products.prod_id%TYPE;
BEGIN
OPEN top_prod;
LOOP
FETCH top_prod INTO top_prod_id;
EXIT WHEN NOT FOUND;
IF (p_id = top_prod_id) THEN
UPDATE orders
SET totalamount = totalamount - (totalamount*0.2)
WHERE orderid = ord_id; -- the order created above
END IF;
END LOOP;
CLOSE top_prod;
END;
How can I use a recursive query and a cursor to update multiple rows in PostgreSQL? I try to return data, but no data is found. Is there an alternative to using a recursive query and a cursor, or maybe better code? Please help me.
drop function proses_stock_invoice(varchar, varchar, character varying);
create or replace function proses_stock_invoice
(p_medical_cd varchar,p_post_cd varchar, p_pstruserid character varying)
returns void
language plpgsql
as $function$
declare
cursor_data refcursor;
cursor_proses refcursor;
v_medicalCd varchar(20);
v_itemCd varchar(20);
v_quantity numeric(10);
begin
open cursor_data for
with recursive hasil(idnya, level, pasien_cd, id_root) as (
select medical_cd, 1, pasien_cd, medical_root_cd
from trx_medical
where medical_cd = p_pstruserid
union all
select A.medical_cd, level + 1, A.pasien_cd, A.medical_root_cd
from trx_medical A, hasil B
where A.medical_root_cd = B.idnya
)
select idnya from hasil where level >=1;
fetch next from cursor_data into v_medicalCd;
return v_medicalCd;
while (found)
loop
open cursor_proses for
select B.item_cd, B.quantity from trx_medical_resep A
join trx_resep_data B on A.medical_resep_seqno = B.medical_resep_seqno
where A.medical_cd = v_medicalCd and B.resep_tp = 'RESEP_TP_1';
fetch next from cursor_proses into v_itemCd, v_quantity;
while (found)
loop
update inv_pos_item
set quantity = quantity - v_quantity, modi_id = p_pstruserid, modi_id = now()
where item_cd = v_itemCd and pos_cd = p_post_cd;
end loop;
close cursor_proses;
end loop;
close cursor_data;
end
$function$;
But why is no data found?
You have a function declared with RETURNS void, so it will never return any data to you. Still, you have the statement return v_medicalCd after fetching the first record from the first cursor, so the function will return at that point and never reach the lines below.
When analyzing your function you have (1) a cursor that yields a number of idnya values from table trx_medical, which is input for (2) a cursor that yields a number of v_itemCd, v_quantity from tables trx_medical_resep, trx_resep_data for each idnya, which is then used to (3) update some rows in table inv_pos_item. You do not need cursors to do that and it is, in fact, extremely inefficient. Instead, turn the whole thing into a single update statement.
I am assuming here that you want to update an inventory of medicines by subtracting the medicines prescribed to patients from the stock in the inventory. This means that you will have to sum up prescribed amounts by type of medicine. That should look like this (note the comments):
CREATE FUNCTION proses_stock_invoice
-- VVV parameter not used
(p_medical_cd varchar, p_post_cd varchar, p_pstruserid varchar)
RETURNS void AS $function$
UPDATE inv_pos_item -- VVV column repeated VVV
SET quantity = quantity - prescribed.quantity, modi_id = p_pstruserid, modi_id = now()
FROM (
WITH RECURSIVE hasil(idnya, level, pasien_cd, id_root) AS (
SELECT medical_cd, 1, pasien_cd, medical_root_cd
FROM trx_medical
WHERE medical_cd = p_pstruserid
UNION ALL
SELECT A.medical_cd, level + 1, A.pasien_cd, A.medical_root_cd
FROM trx_medical A, hasil B
WHERE A.medical_root_cd = B.idnya
)
SELECT B.item_cd, sum(B.quantity) AS quantity
FROM trx_medical_resep A
JOIN trx_resep_data B USING (medical_resep_seqno)
JOIN hasil ON A.medical_cd = hasil.idnya
WHERE B.resep_tp = 'RESEP_TP_1'
--AND hasil.level >= 1 Useless because level is always >= 1
GROUP BY 1
) prescribed
WHERE item_cd = prescribed.item_cd
AND pos_cd = p_post_cd;
$function$ LANGUAGE sql STRICT;
Important
As with all UPDATE statements, test this code before you run the function. You can do that by running the prescribed sub-query separately as a stand-alone query to ensure that it does the right thing.
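For example, the sub-query can be run on its own with a literal value substituted for the parameter ('SOME_MEDICAL_CD' below is just a placeholder):
WITH RECURSIVE hasil(idnya, level, pasien_cd, id_root) AS (
    SELECT medical_cd, 1, pasien_cd, medical_root_cd
    FROM trx_medical
    WHERE medical_cd = 'SOME_MEDICAL_CD'   -- placeholder for p_pstruserid
    UNION ALL
    SELECT A.medical_cd, level + 1, A.pasien_cd, A.medical_root_cd
    FROM trx_medical A, hasil B
    WHERE A.medical_root_cd = B.idnya
)
SELECT B.item_cd, sum(B.quantity) AS quantity
FROM trx_medical_resep A
JOIN trx_resep_data B USING (medical_resep_seqno)
JOIN hasil ON A.medical_cd = hasil.idnya
WHERE B.resep_tp = 'RESEP_TP_1'
GROUP BY 1;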
I have a function returning a list of Employees. My requirement is: if I pass a limit to the function, I should get the result with that limit and offset; if I don't pass a limit, all the rows should be returned.
For example, when Limit is greater than 0 (say I pass Limit as 10):
Select * from Employees
Limit 10 offset 0
When Limit is equal to 0:
Select * from Employees
Is there any way to do such logic in a function?
Yes, you can pass an expression for the LIMIT and OFFSET clauses, which includes using a parameter passed in to a function.
CREATE FUNCTION employees_limited(_limit integer) RETURNS SETOF employees AS $$
BEGIN
IF _limit = 0 THEN
RETURN QUERY SELECT * FROM employees;
ELSE
RETURN QUERY SELECT * FROM employees LIMIT (_limit);
END IF;
RETURN;
END; $$ LANGUAGE plpgsql STRICT;
Note the parentheses around the LIMIT clause. You can similarly pass in an OFFSET value.
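For instance, a variant that also takes an offset might look like this (the function name employees_paged and the parameter names are just illustrative):
CREATE FUNCTION employees_paged(_limit integer, _offset integer) RETURNS SETOF employees AS $$
BEGIN
IF _limit = 0 THEN
RETURN QUERY SELECT * FROM employees OFFSET (_offset);
ELSE
RETURN QUERY SELECT * FROM employees LIMIT (_limit) OFFSET (_offset);
END IF;
RETURN;
END; $$ LANGUAGE plpgsql STRICT;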
This example is very trivial, though. You could achieve the same effect by doing the LIMIT outside of the function:
SELECT * FROM my_function() LIMIT 10;
Doing this inside of a function would really only be useful for a complex query, potentially involving a large amount of data.
Also note that a LIMIT clause without an ORDER BY produces unpredictable results.
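So, assuming the employees table has an id column to sort on, the call above is better written as:
SELECT * FROM my_function() ORDER BY id LIMIT 10;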
Sorry, I can't comment. The solution is almost provided by Patrick.
First, we write a function that returns the result without limitation.
CREATE FUNCTION test ()
RETURNS TABLE (val1 varchar, val2 integer) AS $$
BEGIN
RETURN QUERY SELECT t.val1, t.val2 FROM test_table t;
END;
$$ LANGUAGE plpgsql;
Then we write a wrapper function, which will handle the limit.
CREATE FUNCTION test_wrapper (l integer DEFAULT 0)
RETURNS TABLE (name varchar, id integer) AS $$
BEGIN
IF l = 0 THEN
RETURN QUERY SELECT * FROM test(); -- returns everything
ELSE
RETURN QUERY SELECT * FROM test() LIMIT (l); -- returns accordingly
END IF;
END;
$$ LANGUAGE plpgsql;
In my case I needed to return tables as the final result, but one can return anything required from the wrapper function.
Please note that a SELECT with a LIMIT should always include an ORDER BY, because if it is not explicitly specified, the order of the returned rows can be undefined.
Instead of LIMIT you could use ROW_NUMBER(), as in the following query:
SELECT *
FROM (
SELECT *, row_number() OVER (ORDER BY id) AS rn
FROM Employees
) AS s
WHERE
(rn>:offset AND rn<=:limit+:offset) OR :limit=0
I used the approach below in Postgres:
CREATE FUNCTION employees_limited(_limit integer, _offset integer, pagination boolean) RETURNS SETOF employees AS $$
BEGIN
if pagination = false then -- skip offset and limit
_offset := 0;
_limit := 2147483647; -- int max value in Postgres
end if;
RETURN QUERY
SELECT * FROM employees order by createddate
LIMIT (_limit) OFFSET (_offset);
RETURN;
END; $$ LANGUAGE plpgsql STRICT;
Is there a way to run the query just once to select into a variable, considering that the query might return nothing? In that case the variable should be null.
Currently, I can't do a select into a variable directly, since if the query returns nothing, PL/SQL complains that the variable is not set. I can only run the query twice: the first one does the count, and if the count is zero, I set the variable to null; if the count is 1, I select into the variable.
So the code would be like:
v_column my_table.column%TYPE;
v_counter number;
select count(column) into v_counter from my_table where ...;
if (v_counter = 0) then
v_column := null;
elsif (v_counter = 1) then
select column into v_column from my_table where ...;
end if;
thanks.
Update:
The reason I didn't use an exception is that I still have some logic following the assignment of v_column, and I would have to use a goto in the exception section to jump back to that following code. I'm kind of hesitant about using goto.
You can simply handle the NO_DATA_FOUND exception by setting your variable to NULL. This way, only one query is required.
v_column my_table.column%TYPE;
BEGIN
BEGIN
select column into v_column from my_table where ...;
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_column := NULL;
END;
... use v_column here
END;
I know it's an old thread, but I still think it's worth to answer it.
select (
SELECT COLUMN FROM MY_TABLE WHERE ....
) into v_column
from dual;
Example of use:
declare v_column VARCHAR2(100);
begin
select (SELECT TABLE_NAME FROM ALL_TABLES WHERE TABLE_NAME = 'DOES NOT EXIST')
into v_column
from dual;
DBMS_OUTPUT.PUT_LINE('v_column=' || v_column);
end;
What about using MAX?
That way if no data is found the variable is set to NULL, otherwise the maximum value.
Since you expect either 0 or 1 value, MAX should be OK to use.
v_column my_table.column%TYPE;
select MAX(column) into v_column from my_table where ...;
Using a Cursor FOR LOOP statement is my favourite way to do this.
It is safer than using an explicit cursor, because you don't need to remember to close it, so you can't "leak" cursors.
You don't need "into" variables, you don't need to "FETCH", you don't need to catch and handle "NO DATA FOUND" exceptions.
Try it, you'll never go back.
v_column my_table.column%TYPE;
v_column := null;
FOR rMyTable IN (SELECT COLUMN FROM MY_TABLE WHERE ....) LOOP
v_column := rMyTable.COLUMN;
EXIT; -- Exit the loop if you only want the first result.
END LOOP;
Of all the answers above, Björn's seems to be the most elegant and the shortest. I personally have used this approach many times. The MAX or MIN function will do the job equally well. The complete PL/SQL follows; just the where clause needs to be specified.
declare v_column my_table.column%TYPE;
begin
select MIN(column) into v_column from my_table where ...;
DBMS_OUTPUT.PUT_LINE('v_column=' || v_column);
end;
I would recommend using a cursor. A cursor fetch is always a single row (unless you use a bulk collection), and cursors do not automatically throw no_data_found or too_many_rows exceptions; although you may inspect the cursor attribute once opened to determine if you have a row and how many.
declare
v_column my_table.column%type;
l_count pls_integer;
cursor my_cursor is
select count(*) from my_table where ...;
begin
open my_cursor;
fetch my_cursor into l_count;
close my_cursor;
if l_count = 1 then
select column into v_column from my_table where ...;
else
v_column := null;
end if;
end;
Or, even more simple:
declare
v_column my_table.column%type;
cursor my_cursor is
select column from my_table where ...;
begin
open my_cursor;
fetch my_cursor into v_column;
-- Optional IF .. THEN based on FOUND or NOTFOUND
-- Not really needed if v_column is not set
if my_cursor%notfound then
v_column := null;
end if;
close my_cursor;
end;
I use this syntax for flexibility and speed:
declare
has_rows number;
begin
--
with KLUJ as
( select 0 ROES from dual
union
select count(*) from MY_TABLE where rownum = 1
) select max(ROES) into has_rows from KLUJ;
--
end;
Dual returns 1 row, rownum adds 0 or 1 rows, and max() groups to exactly 1. This gives 0 for no rows in a table and 1 for any other number of rows.
I extend the where clause to count rows by condition, remove rownum to count rows meeting a condition, and increase rownum to count rows meeting the condition up to a limit.
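For instance, a sketch that counts rows meeting a made-up status = 'OPEN' condition up to a limit of 10 (both the column and the limit are placeholders) would go inside the same kind of block:
with KLUJ as
( select 0 ROES from dual
  union
  select count(*) from MY_TABLE where status = 'OPEN' and rownum <= 10
) select max(ROES) into has_rows from KLUJ;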
COALESCE will always return the first non-null result. By doing this, you will get the count that you want or 0:
select coalesce(count(column) ,0) into v_counter from my_table where ...;
Everybody using MySQL knows:
SELECT SQL_CALC_FOUND_ROWS ..... FROM table WHERE ... LIMIT 5, 10;
and right after, run this:
SELECT FOUND_ROWS();
How do I do this in PostgreSQL? So far, I have found only ways where I have to send the query twice...
No, there is not (at least not as of July 2007). I'm afraid you'll have to resort to:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT id, username, title, date FROM posts ORDER BY date DESC LIMIT 20;
SELECT count(*) AS total FROM posts;
END;
The isolation level needs to be SERIALIZABLE to ensure that the query does not see concurrent updates between the SELECT statements.
Another option you have, though, is to use a trigger to count rows as they're INSERTed or DELETEd. Suppose you have the following table:
CREATE TABLE posts (
id SERIAL PRIMARY KEY,
poster TEXT,
title TEXT,
time TIMESTAMPTZ DEFAULT now()
);
INSERT INTO posts (poster, title) VALUES ('Alice', 'Post 1');
INSERT INTO posts (poster, title) VALUES ('Bob', 'Post 2');
INSERT INTO posts (poster, title) VALUES ('Charlie', 'Post 3');
Then, perform the following to create a table called post_count that contains a running count of the number of rows in posts:
-- Don't let any new posts be added while we're setting up the counter.
BEGIN;
LOCK TABLE posts;
-- Create and initialize our post_count table.
SELECT count(*) INTO TABLE post_count FROM posts;
-- Create the trigger function.
CREATE FUNCTION post_added_or_removed() RETURNS TRIGGER AS $$
BEGIN
IF TG_OP = 'DELETE' THEN
UPDATE post_count SET count = count - 1;
ELSIF TG_OP = 'INSERT' THEN
UPDATE post_count SET count = count + 1;
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
-- Call the trigger function any time a row is inserted or deleted.
CREATE TRIGGER post_added_or_removed_tgr
AFTER INSERT OR DELETE
ON posts
FOR EACH ROW
EXECUTE PROCEDURE post_added_or_removed();
COMMIT;
Note that this maintains a running count of all of the rows in posts. To keep a running count of certain rows, you'll have to tweak it:
SELECT count(*) INTO TABLE post_count FROM posts WHERE poster <> 'Bob';
CREATE FUNCTION post_added_or_removed() RETURNS TRIGGER AS $$
BEGIN
-- The IF statements are nested because OR does not short circuit.
IF TG_OP = 'DELETE' THEN
IF OLD.poster <> 'Bob' THEN
UPDATE post_count SET count = count - 1;
END IF;
ELSIF TG_OP = 'INSERT' THEN
IF NEW.poster <> 'Bob' THEN
UPDATE post_count SET count = count + 1;
END IF;
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
There is a simple way, but keep in mind that the following count(*) window function will be applied to all rows matched by the WHERE clause, before LIMIT/OFFSET take effect (which may be costly):
SELECT
id,
"count" (*) OVER () AS cnt
FROM
objects
WHERE
id > 2
OFFSET 50
LIMIT 5
No, PostgreSQL doesn't try to count all relevant results when you only need 10 results. You need a separate COUNT to count all results.
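Reusing the posts table from the trigger example above, a minimal sketch of that would be one limited query for the page plus a separate count for the total:
SELECT id, poster, title, time FROM posts ORDER BY time DESC LIMIT 10;
SELECT count(*) FROM posts;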