I'm trying to write a script that assigns a ticket to the agent from a specific group who has the lowest number of tickets. I've already made a loop that can assign a ticket to the agent with the lowest ticket count, but when I add the lines that restrict the choice to a specific group, the script outputs one and the same agent every time.
This is the output:
NOTICE: Agent ID: 2
NOTICE: Ticket amount: 3
NOTICE: Agent ID: 2
NOTICE: Ticket amount: 3
NOTICE: Assign the ticket to agent 2
DO
Query returned successfully in 506 msec.
do $$
declare
    counter int := 1;
    zgloszenia int := 0;
    agenci int;
    agent int := 0;
    o int := 1000;
    x int;
    zgloszenie int;
begin
    select count(usr_id) into agenci from vuserssupportgroups;
    while agenci > counter loop
        select usr_id into agent from vuserssupportgroups
        where supportgroupid = '1' order by usr_id limit 1;
        select count(1) into zgloszenia from vincidents
        where serviceuserid = agent
          and (statusname = 'Nowy' or statusname = 'Otwarty' or statusname = 'W realizacji'); -- count the incidents
        select incidentid into zgloszenie from incidents
        where serviceuserid is null and (statusid = '1');
        raise notice 'Agent ID: %', agent;
        raise notice 'Ticket amount: %', zgloszenia;
        if zgloszenia < o then
            o := zgloszenia;
            x := agent;
        end if;
        counter := counter + 1;
    end loop;
    update incidents
    set serviceuserid = x
    where incidentid = zgloszenie and serviceuserid is null and supportgroupid = '1';
    raise notice 'Assign the ticket to agent %', x;
end
$$;
You wrote a nice race condition. When you UPDATE based on previously SELECTed data, you should always use the FOR UPDATE clause (you don't need it under the SERIALIZABLE isolation level, but that can have performance or operational impacts).
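A hedged sketch of what that could look like here, assuming the tables from the question and PostgreSQL 9.5+ for SKIP LOCKED. Locking the ticket row first means two concurrent runs cannot grab the same ticket; the per-agent counts can still drift slightly under concurrency, but that only spreads the load, it does not double-assign a ticket:

do $$
declare
    zgloszenie int;
    x int;
begin
    -- lock one unassigned ticket; a concurrent run will skip it
    select incidentid into zgloszenie
    from incidents
    where serviceuserid is null and statusid = '1' and supportgroupid = '1'
    limit 1
    for update skip locked;

    -- pick the group-1 agent with the fewest open tickets
    select g.usr_id into x
    from vuserssupportgroups g
    left join vincidents i
           on i.serviceuserid = g.usr_id
          and i.statusname in ('Nowy', 'Otwarty', 'W realizacji')
    where g.supportgroupid = '1'
    group by g.usr_id
    order by count(i.serviceuserid)
    limit 1;

    if zgloszenie is not null then
        update incidents set serviceuserid = x
        where incidentid = zgloszenie;
    end if;
end
$$;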
The query below fails with ERROR: column "page" does not exist.
If I replace (j) and (page) with hard-coded values, the query runs.
My goal is to walk through the records_to_update table and update a batch of records at a time.
DO $$
DECLARE
    page int := 50;
BEGIN
    FOR j IN 0..(select count(*) from records_to_update) BY page LOOP
        with subset_to_update as (
            select *
            from records_to_update
            order by target_class_id, main_type_id, source_class_id
            offset (j)
            limit (page)
        )
        update large_table_to_update
        set to_update_class_id = stu.target_class_id
        from subset_to_update stu
        where large_table_to_update.annotation_class_id = stu.source_class_id;
        COMMIT;
    END LOOP;
END; $$;
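If the variables are not being substituted inside that embedded SQL, one workaround is dynamic SQL, passing j and page as parameters via EXECUTE ... USING. A minimal sketch under the question's table names; note that COMMIT inside a DO block needs PostgreSQL 11+:

DO $$
DECLARE
    page  int := 50;
    total bigint;
    j     int := 0;
BEGIN
    SELECT count(*) INTO total FROM records_to_update;
    WHILE j < total LOOP
        EXECUTE 'WITH subset_to_update AS (
                     SELECT *
                     FROM records_to_update
                     ORDER BY target_class_id, main_type_id, source_class_id
                     OFFSET $1 LIMIT $2
                 )
                 UPDATE large_table_to_update
                 SET to_update_class_id = stu.target_class_id
                 FROM subset_to_update stu
                 WHERE large_table_to_update.annotation_class_id = stu.source_class_id'
        USING j, page;
        COMMIT;  -- releases locks batch by batch
        j := j + page;
    END LOOP;
END; $$;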
EDIT: I am creating a new question as per suggestion from sticky bit.
I have a query with CTEs that outputs results for several values (1 to 12). Below is a simplified example. Running it, I get the following error:
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT: PL/pgSQL function inline_code_block line 8 at SQL statement
I can't output the result of the SELECT as a table. How can I solve this problem?
DO $$
DECLARE
    r integer;
BEGIN
    r := 1;
    WHILE r <= 2
    LOOP
        r := r + 1;
        WITH params AS (
            SELECT r AS rownumber
        ),
        time AS (
            SELECT id
            FROM params, analysis
            ORDER BY date DESC
            LIMIT 1
            OFFSET (SELECT rownumber - 1 FROM params)
        )
        SELECT * FROM time;
    END LOOP;
END; $$;
An example of what I mentioned in my comment:
DO $$
DECLARE
    r integer;
    int_var integer;
BEGIN
    r := 1;
    WHILE r <= 12 LOOP
        WITH params AS (
            SELECT r AS rownumber
        ),
        time AS (
            SELECT id
            FROM params, analysis
            ORDER BY date DESC
            LIMIT 1
            OFFSET (SELECT rownumber - 1 FROM params)
        )
        SELECT INTO int_var id FROM time;
        RAISE NOTICE 'id: %', int_var;
        r := r + 1;
    END LOOP;
END; $$;
You can't RETURN a value from a DO block, but you can RAISE NOTICE a value as above. The SELECT INTO eliminates the error because it gives the SELECT a destination (the int_var) for its output. Note: SELECT INTO inside PL/pgSQL is different than the same command outside of it; outside, it is equivalent to CREATE TABLE AS.
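For illustration, the plain-SQL behavior, using a hypothetical table name analysis_ids:

-- Outside PL/pgSQL, SELECT INTO creates a new table.
-- Equivalent to: CREATE TABLE analysis_ids AS SELECT id FROM analysis;
SELECT id INTO analysis_ids FROM analysis;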
CREATE OR REPLACE FUNCTION file_compare()
RETURNS text LANGUAGE plpgsql
COST 100 VOLATILE AS $BODY$
DECLARE
    filedata text[];
    fpo_data jsonb;
    inddata jsonb;
    f_cardholderid text;
    f_call_receipt text;
    f_primary_key text;
    i INT;
BEGIN
    SELECT json_agg((fpdata))::jsonb
    INTO fpo_data
    FROM (SELECT fo_data AS fpdata
          FROM fpo
          LIMIT 100
         ) t;
    i := 0;
    FOR inddata IN SELECT * FROM jsonb_array_elements(fpo_data) LOOP
        f_cardholderid := (inddata->>0)::JSONB->'cardholder_id'->>'value';
        f_call_receipt := (inddata->>0)::JSONB->'call_receipt_date'->>'value';
        f_primary_key := f_cardholderid || f_call_receipt;
        filedata[i] := jsonb_build_object(
            'fc_primary_key', f_primary_key
        );
        i := i + 1;
    END LOOP;
    RAISE NOTICE 'PRINTING DATA %', filedata;
    RETURN filedata::text; -- the function is declared RETURNS text
END;
$BODY$;
I am getting the filedata as below:
NOTICE: PRINTING DATA ={"{\"fc_primary_key\": \"A1234567892017/06/27\"}","{\"fc_primary_key\": \"A1234567892017/06/27\"}","{\"fc_primary_key\": \"A1234567892017/08/07\"}","{\"fc_primary_key\": \"A1234567892017/08/07\"}","{\"fc_primary_key\": \"A1234567892017/08/07\"}","{\"fc_primary_key\": \"A1234567892017/08/07\"}","{\"fc_primary_key\": \"A1234567892017/08/07\"}","{\"fc_primary_key\": \"A1234567892024/03/01\"}","{\"fc_primary_key\": \"A12345678945353\"}","{\"fc_primary_key\": \"A1234567892023/11/22\"}","{\"fc_primary_key\": \"A12345678945252\"}","{\"fc_primary_key\": \"A1234567892017-07-01\"}"}
Now I want to iterate over this filedata, get each fc_primary_key value, and check how many times it appears in the entire JSON data.
Note: each fc_primary_key has to be checked only against the values that come after it. It should not be compared with the fc_primary_keys before it.
For example, if I check the third element, "A1234567892017/08/07", it appears 4 times after its position, so the count must be 4.
The same "A1234567892017/08/07" is also the seventh element, but there are no more occurrences of "A1234567892017/08/07" after the seventh position, so the count must be zero ("0").
How do I loop over the data and get the count? As I am new to Postgres, I am unable to find the solution. Please help!
I was able to get the result you describe with the code below. By unnesting the data, you can take advantage of regular SQL syntax (offset, grouping, counting), which is the crux of the problem you described.
DO
$body$
DECLARE
    fildata TEXT[] = ARRAY ['{''fc_primary_key'': ''A1234567892017/06/27''}','{''fc_primary_key'': ''A1234567892017/06/27''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892024/03/01''}','{''fc_primary_key'': ''A12345678945353''}','{''fc_primary_key'': ''A1234567892023/11/22''}','{''fc_primary_key'': ''A12345678945252''}','{''fc_primary_key'': ''A1234567892017-07-01''}'];
    count INTEGER;
BEGIN
    FOR i IN 1 .. array_length(fildata, 1) LOOP
        SELECT count(*) - 1
        INTO count
        FROM (
            SELECT unnest(fildata) AS x OFFSET (i - 1)
        ) AS t
        WHERE x = fildata[i]
        GROUP BY x;
        RAISE NOTICE 'Row % appears % times after the current', fildata[i], count;
    END LOOP;
END
$body$ LANGUAGE plpgsql;
Alternatively, you can get the entire result set in a single statement (if that would be helpful) by using windowing instead of OFFSET.
SELECT t
, count(*) OVER (PARTITION BY t ORDER BY rn RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) - 1 AS count
FROM (
SELECT row_number() OVER () AS rn, t
FROM unnest(
ARRAY ['{''fc_primary_key'': ''A1234567892017/06/27''}','{''fc_primary_key'': ''A1234567892017/06/27''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892017/08/07''}','{''fc_primary_key'': ''A1234567892024/03/01''}','{''fc_primary_key'': ''A12345678945353''}','{''fc_primary_key'': ''A1234567892023/11/22''}','{''fc_primary_key'': ''A12345678945252''}','{''fc_primary_key'': ''A1234567892017-07-01''}']) AS t
) AS x
ORDER BY rn;
I am trying to update a specific record every 1000 rows in Postgres, and I am looking for a better way to do it. My function is described below:
CREATE OR REPLACE FUNCTION update_row()
RETURNS void AS
$BODY$
DECLARE
    myUID integer;
    nRow integer;
    maxUid integer;
BEGIN
    nRow := 1000;
    SELECT max(uid_atm_inp) INTO maxUid FROM tab WHERE field1 = '1240200';
    LOOP
        IF (nRow > 1000 AND nRow < maxUid) THEN
            SELECT uid INTO myUID FROM tab
            WHERE field1 = '1240200' AND uid >= nRow LIMIT 1;
            UPDATE tab
            SET field = 'xxx'
            WHERE field1 = '1240200' AND uid = myUID;
            nRow := nRow + 1000;
        END IF;
    END LOOP;
END; $BODY$
LANGUAGE plpgsql VOLATILE;
How can I improve this procedure? I think there is something wrong. The loop does not end and takes too much time.
To perform this task in SQL, you could use the row_number window function and update only those rows where the number is divisible by 1000.
Your loop doesn't finish because there is no EXIT or RETURN in it.
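For illustration, a minimal sketch of the same function with a termination condition added, keeping the question's table and column names and assuming uid is the column the batches step over:

CREATE OR REPLACE FUNCTION update_row()
RETURNS void AS
$BODY$
DECLARE
    myUID  integer;
    nRow   integer := 1000;
    maxUid integer;
BEGIN
    SELECT max(uid) INTO maxUid FROM tab WHERE field1 = '1240200';
    LOOP
        EXIT WHEN maxUid IS NULL OR nRow > maxUid;  -- the missing way out of the loop
        SELECT uid INTO myUID FROM tab
        WHERE field1 = '1240200' AND uid >= nRow
        ORDER BY uid LIMIT 1;
        UPDATE tab SET field = 'xxx'
        WHERE field1 = '1240200' AND uid = myUID;
        nRow := nRow + 1000;
    END LOOP;
END; $BODY$
LANGUAGE plpgsql VOLATILE;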
I doubt you could ever rival the performance of a plain SQL UPDATE with a procedural loop. Instead of doing it a row at a time, just do it all in a single statement:
with t2 as (
    select uid, row_number() over (order by 1) as rn
    from tab
    where field1 = '1240200'
)
update tab t1
set field = 'xxx'
from t2
where
    t1.uid = t2.uid and
    mod(t2.rn, 1000) = 0;
Per my comment, I am presupposing what you mean by "every 1000th row", since without an ORDER BY there is no designation of which tuple is which row number. That is easily edited by changing the "order by" criteria.
Adding a second where clause on the update (t1.field1 = '1240200') can't hurt, but it might not be necessary if these are nested loops.
This might be notionally similar to what Laurenz has in mind.
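For concreteness, the same statement with a deterministic ordering and the extra guard just mentioned (ordering by uid is an assumption):

with t2 as (
    select uid, row_number() over (order by uid) as rn
    from tab
    where field1 = '1240200'
)
update tab t1
set field = 'xxx'
from t2
where
    t1.uid = t2.uid and
    t1.field1 = '1240200' and
    mod(t2.rn, 1000) = 0;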
I solved it this way:

do $$
declare
    myUID integer;
    nRow integer;
    rowNum integer;
    checkrow integer;
    myString varchar(272);
    cur_check_row cursor for
        select uid, row_number() over (order by 1) as rn, substr(fieldxx, 1, 244)
        from tab where field1 = '1240200' and uid >= 1000 order by uid;
begin
    open cur_check_row;
    loop
        fetch cur_check_row into myUID, rowNum, myString;
        exit when not found;
        select mod(rowNum, 1000) into checkrow;
        if checkrow = 0 then
            update tab
            set fieldxx = myString || 'O'
            where uid in (myUID);
        end if;
    end loop;
    close cur_check_row;
end;
$$;
I have a PostgreSQL function which is used for counting the usage of "items" by users.
Counter values are saved into a table:
users_items
user_id - integer (fk)
item_id - integer (fk)
counter - integer
There is at most one counter per user per item (unique key).
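For reference, a plausible shape for that table; the exact definition is assumed, only the unique key matters here:

CREATE TABLE users_items (
    user_id integer NOT NULL,  -- fk to users (assumed)
    item_id integer NOT NULL,  -- fk to items (assumed)
    counter integer NOT NULL,
    UNIQUE (user_id, item_id)  -- at most one counter per user per item
);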
Here is my function:
CREATE OR REPLACE FUNCTION increment_favorite_user_item (itemid integer, userid integer) RETURNS integer AS
$BODY$
DECLARE
    new_count integer; -- Usage counter
BEGIN
    IF NOT EXISTS (SELECT 1 FROM users_items WHERE user_id = userid AND item_id = itemid)
    THEN
        INSERT INTO users_items ("user_id", "item_id", "counter") VALUES (userid, itemid, 1); -- First usage - create new counter
        new_count := 1;
    ELSE
        UPDATE users_items SET counter = counter + 1 WHERE (user_id = userid AND item_id = itemid); -- Increment counter
        SELECT counter INTO new_count FROM users_items WHERE (user_id = userid AND item_id = itemid);
    END IF;
    RETURN new_count;
END;
$BODY$
LANGUAGE plpgsql
VOLATILE;
It is used by the application, which may call it multiple times.
Everything works fine until we call the function in quick succession for the same user and item, when the item is new for that user (the record in the users_items table does not exist yet).
On the second function call, I get a unique violation: "Key (user_id, item_id)=(1, 7912) already exists".
It seems like the "if not exists" check doesn't work properly: the second function call doesn't see the record inserted by the first one and tries to insert the same row, making the unique check fail.
What can I do to solve the problem?
Every function call runs in a separate transaction.
There is a) a race condition, and b) you should LOCK the table if you want to guarantee the INSERT:
DECLARE rc int;
BEGIN
    LOCK TABLE users IN SHARE ROW EXCLUSIVE MODE;
    UPDATE users SET counter = counter + 1 WHERE user_id = $1;
    GET DIAGNOSTICS rc = ROW_COUNT;
    IF rc = 0 THEN
        INSERT INTO users(id, counter) VALUES ($1, 1);
    END IF;
END;
or more complex code, but with less locking:
DECLARE rc int;
BEGIN
    -- fast path
    UPDATE users SET counter = counter + 1 WHERE user_id = $1;
    GET DIAGNOSTICS rc = ROW_COUNT;
    IF rc = 0 THEN
        LOCK TABLE users IN SHARE ROW EXCLUSIVE MODE;
        UPDATE users SET counter = counter + 1 WHERE user_id = $1;
        GET DIAGNOSTICS rc = ROW_COUNT;
        IF rc = 0 THEN
            INSERT INTO users(id, counter) VALUES ($1, 1);
        END IF;
    END IF;
END;
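For completeness, on PostgreSQL 9.5 and later the whole check-then-write can be collapsed into a single atomic INSERT ... ON CONFLICT, which avoids both the race and the table lock. A sketch against the users_items table from the question:

CREATE OR REPLACE FUNCTION increment_favorite_user_item(itemid integer, userid integer)
RETURNS integer AS
$BODY$
DECLARE
    new_count integer;
BEGIN
    INSERT INTO users_items (user_id, item_id, counter)
    VALUES (userid, itemid, 1)
    ON CONFLICT (user_id, item_id)           -- relies on the unique key
    DO UPDATE SET counter = users_items.counter + 1
    RETURNING counter INTO new_count;
    RETURN new_count;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;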