I have two tables as below.
TABLE 1          TABLE 2
--------         ---------
id               id
date             table1_id
total            subtotal
balance

table 1
--------
id | total | balance
 1 |    20 |      10
 2 |    30 |      30

table 2
---------
id | table1_id | subtotal | paid
 1 |         1 |        5 |    5
 2 |         1 |       15 |    5
 3 |         2 |       10 |    0
 4 |         2 |       10 |    0
 5 |         2 |       10 |    0
I have to add a paid column in table2. Can anyone help me add values to the newly added column for the existing data? I tried to write the procedure below, but as Postgres will not allow IF in a FOR loop, I am unable to do it.
CREATE OR REPLACE FUNCTION public.add_amountreceived_inbillitem() RETURNS void AS
$BODY$
DECLARE
    rec RECORD;
    inner_rec RECORD;
    distributebalance numeric;
    tempvar numeric;
BEGIN
    FOR rec IN select * from table1
    LOOP
        distributebalance = rec.balance;
        FOR inner_rec IN (select * from table2 where table1_id = rec.id order by id limit 1)
            tempvar = distributebalance - inner_rec.subtotal;
            if (distributebalance > 0 and tempvar >= 0) THEN
                update table2 set paid = inner_rec.subtotal where id = inner_rec.id;
                distributebalance = distributebalance - inner_rec.subtotal;
            else if (distributebalance > 0 and tempvar < 0) THEN
                update table2 set paid = distributebalance where id = inner_rec.id;
            END IF;
        END LOOP;
    END LOOP;
END; $BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
Thanks in advance :)
Postgres does allow IF statements in loops.
The issue is that by writing ELSE IF you've started a new, nested IF statement, so you now have two opening IFs but only one closing END IF. A "proper" else-if in PL/pgSQL is spelled ELSIF or ELSEIF. So just delete the space between those words and it should work.
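Putting that together: note that the inner FOR is also missing its LOOP keyword, and the LIMIT 1 likely has to go so that every child row is visited. A sketch of the corrected function follows, assuming the intent is to spread each table1 balance across its table2 rows in id order; zeroing out the remaining balance after a partial payment is my addition, so later rows are not overpaid:

CREATE OR REPLACE FUNCTION public.add_amountreceived_inbillitem() RETURNS void AS
$BODY$
DECLARE
    rec RECORD;
    inner_rec RECORD;
    distributebalance numeric;
BEGIN
    FOR rec IN SELECT * FROM table1
    LOOP
        distributebalance := rec.balance;
        FOR inner_rec IN SELECT * FROM table2 WHERE table1_id = rec.id ORDER BY id
        LOOP
            IF distributebalance >= inner_rec.subtotal THEN
                -- the balance covers this row in full
                UPDATE table2 SET paid = inner_rec.subtotal WHERE id = inner_rec.id;
                distributebalance := distributebalance - inner_rec.subtotal;
            ELSIF distributebalance > 0 THEN
                -- the balance only partially covers this row
                UPDATE table2 SET paid = distributebalance WHERE id = inner_rec.id;
                distributebalance := 0;  -- nothing left for later rows
            END IF;
        END LOOP;
    END LOOP;
END; $BODY$
LANGUAGE plpgsql VOLATILE;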
I am looking for a way, in PostgreSQL, to ensure that each value in each one of several columns is unique across all of these columns.
Example with 2 columns:
col_1   col_2
-------------
a       b        # ok
c       d        # ok
e                # ok
f       a        # forbidden
b                # forbidden
        b        # forbidden
I need each write to these columns to be handled in a single transaction, especially (for some rows):
copy col_2 in col_1 and delete col_2
Does anyone have an idea about this?
This should probably be a comment, but it is too long and I cannot format the code example there. You cannot get a unique constraint or index to enforce uniqueness across multiple columns. You may be able to do it with a trigger, but even there it is not simple:
create or replace function unique_over_2col()
returns trigger
language plpgsql
as $$
begin
if exists ( select null
from test
where new.col_1 = col_1
or new.col_1 = col_2
or new.col_2 = col_1
or new.col_2 = col_2
)
then
return null;
else
return new;
end if;
end;
$$;
create trigger test_biur
before insert or update
on <your table name here>
for each row
execute function unique_over_2col();
Your trigger will specifically have to compare every NEW column against every existing column. The above does so for just the 2 columns you mentioned, and that already leads to 4 comparisons; your several columns will expand this dramatically. I'll repeat the advice by @Bergi: normalize your schema, as sketched below.
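Purely for illustration, a hypothetical normalized layout (the table and column names here are invented, not from your question): with one value per row, a plain unique constraint enforces exactly the "unique across all these columns" rule, and no trigger is needed:

-- hypothetical normalized layout: one value per row
create table test_value (
    row_id integer not null,  -- which logical row the value belongs to
    slot   integer not null,  -- which of the former columns (1, 2, ...)
    value  text    not null unique,
    unique (row_id, slot)     -- at most one value per former column slot
);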
BTW: please explain copy col_2 in col_1 and delete col_2; as written it is meaningless. Perhaps it would be better to explain the business issue you are faced with rather than how you are trying to solve it.
A bit ugly but working solution:
CREATE TABLE tablename (col1 integer, col2 integer);
CREATE OR REPLACE FUNCTION pr_tablename_insertuniqueonly()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
v_new_values integer[] = ARRAY[NEW.col1, NEW.col2];
BEGIN
IF (NEW.col1=NEW.col2) THEN
RETURN null;
END IF;
IF EXISTS(SELECT 1 FROM tablename t WHERE t.col1 = ANY(v_new_values) OR t.col2 = ANY(v_new_values)) THEN
RETURN null;
ELSE
RETURN NEW;
END IF;
END;
$$;
CREATE OR REPLACE TRIGGER tr_iine_tablename
    BEFORE INSERT ON tablename
    FOR EACH ROW EXECUTE PROCEDURE pr_tablename_insertuniqueonly();
stack=# insert into tablename values (1,1);
INSERT 0 0
stack=# insert into tablename values (1,2);
INSERT 0 1
stack=# insert into tablename values (3,2);
INSERT 0 0
stack=# insert into tablename values (3,1);
INSERT 0 0
stack=# insert into tablename values (3,4);
INSERT 0 1
stack=# select * from tablename;
 col1 | col2
------+------
    1 |    2
    3 |    4
(2 rows)
Is there a way for Postgres to automatically generate child records with set parameters? I'm basically trying to create an employee timesheet, and each time a new timesheet for a given date is created, I'd like to create 7 child records (one record for each day of that given week, for the user to fill in).
Something like this:
date (automatically generated weekly) | hours | timesheet_id (FK) | project_id (FK)
--------------------------------------+-------+-------------------+----------------
2019-01-01                            |     8 |                 1 |               2
2019-01-02                            |    10 |                 1 |               2
2019-01-03                            |     8 |                 1 |               2
2019-01-04                            |     8 |                 1 |               2
2019-01-05                            |     0 |                 1 |               2
2019-01-06                            |     0 |                 1 |               2
2019-01-07                            |     9 |                 1 |               2
@Z4-tier is correct, but the trigger function does not need a loop: it can be reduced to a single INSERT statement.
create or replace function create_timesheet_days()
    returns trigger
    language plpgsql
as $$
begin
    -- one child row per day: the start date plus the following six days
    insert into timesheet_days(timesheet_id, week_day)
    select new.timesheet_id, wk_day
    from generate_series(new.timesheet_date,
                         new.timesheet_date + interval '6 day',
                         interval '1 day') wk_day;
    return new;
end;
$$;
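The function above is only half of the wiring. Assuming the parent table is called timesheet and carries the timesheet_id and timesheet_date columns (names guessed from the function, not given in the question), the trigger itself might look like:

-- hypothetical hookup; an AFTER trigger, so the parent row already
-- exists by the time the child rows reference it
create trigger create_timesheet_days_trigger
    after insert on timesheet
    for each row
    execute function create_timesheet_days();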
As mentioned in the comments, this is a textbook case for using triggers.
You'll need a procedure that handles creating the new records. This might work:
create or replace function create_timesheet_days()
returns trigger as $BODY$
declare counter INTEGER := 0;
begin
    -- seven iterations, one row per day of the week
    -- (note: counter < 7, not < 6, or the last day would be missing)
    while counter < 7 loop
        insert into my_schema.timesheet_week
            (timesheet_date, hrs, timesheet_id, project_id)
        select
            NEW.timesheet_date + counter as timesheet_date,
            0 as hrs,
            NEW.timesheet_id as timesheet_id,
            project.project_id as project_id
        from my_schema.project;
        counter := counter + 1;
    end loop;
    return NEW;
end;
$BODY$
language 'plpgsql';
Then you could create a trigger like this:
CREATE TRIGGER create_timesheet_days_trigger
BEFORE INSERT
ON my_schema.timesheet_week
FOR EACH ROW
EXECUTE PROCEDURE
create_timesheet_days();
I have two tables, A and P.
A
--------------
id | num_cars
--------------
 1 |        2
 2 |        0
 3 |        0

P
--------------------
id_driver | id_car
--------------------
        1 | Porsche
        1 | BMW
A.id and P.id_driver refer to the same person. I created the trigger below. The idea is that every time I add a new row in P for an existing driver, its corresponding row in A must be updated with the total number of cars owned by the person with that id.
CREATE OR REPLACE FUNCTION update_a() RETURNS trigger AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
UPDATE A a
SET num_cars = (SELECT COUNT(NEW.id_driver)
FROM P p
WHERE (a.id = p.id_driver AND a.id=NEW.id_driver));
ELSIF TG_OP = 'DELETE' THEN
UPDATE A a
SET num_cars = num_cars - 1
WHERE a.id = OLD.id_driver AND a.num_cars<>0;
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER add_car
AFTER INSERT OR DELETE ON PARTICIPATION
FOR EACH ROW EXECUTE PROCEDURE update_a();
The trigger works fine when I add a row in P for a driver. However, if I then add a row for a different driver in P, the rest of the rows in A are set back to 0. I would like the update to run only for the row where A.id = P.id_driver. How can I do this?
The UPDATE statement has no WHERE clause, so it updates every row of A; for every driver other than NEW.id_driver the correlated subquery counts 0 cars, which is why the other rows are reset to 0.
You need to restrict the update to the affected driver only, and also to count the cars for that driver only:
UPDATE A a
SET num_cars = (SELECT COUNT(*)
FROM P p
WHERE p.id_driver = NEW.id_driver)
WHERE a.id = NEW.id_driver;
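Folded back into the trigger function, it could look like the sketch below. Recounting on DELETE as well (instead of num_cars - 1) is my own choice here, not from the question; it keeps the counter from drifting:

CREATE OR REPLACE FUNCTION update_a() RETURNS trigger AS $$
DECLARE
    affected_id integer;
BEGIN
    -- NEW is only defined on INSERT, OLD only on DELETE
    IF TG_OP = 'INSERT' THEN
        affected_id := NEW.id_driver;
    ELSE
        affected_id := OLD.id_driver;
    END IF;

    -- recount from scratch for the one affected driver
    UPDATE A a
       SET num_cars = (SELECT COUNT(*)
                         FROM P p
                        WHERE p.id_driver = affected_id)
     WHERE a.id = affected_id;

    RETURN NULL;
END;
$$ LANGUAGE plpgsql;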
I am trying to do an update on a specific record every 1000 rows using Postgres. I am looking for a better way to do that. My function is described below:
CREATE OR REPLACE FUNCTION update_row()
RETURNS void AS
$BODY$
declare
    myUID integer;
    nRow integer;
    maxUid integer;
BEGIN
    nRow := 1000;
    select max(uid_atm_inp) from tab into maxUid where field1 = '1240200';
    loop
        if (nRow > 1000 and nRow < maxUid) then
            select uid from tab into myUID where field1 = '1240200' and uid >= nRow limit 1;
            update tab
               set field = 'xxx'
             where field1 = '1240200' and uid = myUID;
            nRow := nRow + 1000;
        end if;
    end loop;
END; $BODY$
LANGUAGE plpgsql VOLATILE
How can I improve this procedure? I think there is something wrong. The loop does not end and takes too much time.
To perform this task in SQL, you could use the row_number window function and update only those rows where the number is divisible by 1000.
Your loop doesn't finish because there is no EXIT or RETURN in it. Worse, since nRow starts at 1000, the condition nRow > 1000 is never true, so nRow is never incremented and the loop spins forever doing nothing.
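If you want to keep a loop at all, it needs an explicit exit condition inside it, something along these lines (assuming maxUid is the intended upper bound):

exit when nRow >= maxUid;  -- stop once nRow runs past the last uid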
I doubt you could ever rival the performance of a standard SQL update with a procedural loop. Instead of doing it a row at a time, just do it all as a single statement:
with t2 as (
select
uid, row_number() over (order by 1) as rn
from tab
where field1 = '1240200'
)
update tab t1
set field = 'xxx'
from t2
where
t1.uid = t2.uid and
mod (t2.rn, 1000) = 0
Per my comment, I am presupposing what you mean by "every 1000th row", since without an ORDER BY there is no defined way to say which tuple is which row number. That is easily edited by changing the "order by" criteria.
Adding a second condition to the update's WHERE clause (t1.field1 = '1240200') can't hurt, but it might not be necessary if the join is executed as a nested loop.
This might be notionally similar to what Laurenz has in mind.
I solved it this way:
CREATE OR REPLACE FUNCTION update_row()
RETURNS void AS
$BODY$
declare
    myUID integer;
    nRow integer;
    rowNum integer;
    checkrow integer;
    myString varchar(272);
    cur_check_row cursor for
        select uid, row_number() over (order by 1) as rn, substr(fieldxx, 1, 244)
        from table where field1 = '1240200' and uid >= 1000 ORDER BY uid;
BEGIN
    open cur_check_row;
    loop
        fetch cur_check_row into myUID, rowNum, myString;
        EXIT WHEN NOT FOUND;
        checkrow := mod(rowNum, 1000);
        if checkrow = 0 then
            -- append 'O' to the stored string on every 1000th row
            update table
               set fieldxx = myString || 'O'
             where uid = myUID;
        end if;
    end loop;
    close cur_check_row;
END; $BODY$
LANGUAGE plpgsql VOLATILE;
Can you please explain this strange behaviour to me?
I have this stored procedure, which tells me whether a row is locked:
CREATE OR REPLACE FUNCTION tg_availablega_is_unlocked(availablega_id integer)
RETURNS boolean AS
$BODY$
DECLARE
is_locked boolean = FALSE;
BEGIN
BEGIN
PERFORM id FROM tg_availablega WHERE id = availablega_id
FOR UPDATE NOWAIT;
EXCEPTION
WHEN lock_not_available THEN
is_locked := TRUE;
END;
RETURN not is_locked;
END;$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
If I start a transaction and execute this:
SELECT "tg_availablega"."id",
"tg_availablega"."isactive",
"tg_availablega"."schedule",
"tg_availablega"."zone_tg_id"
FROM "tg_availablega"
WHERE (tg_availablega_is_unlocked("tg_availablega"."id")
AND "tg_availablega"."zone_tg_id" = 1
AND "tg_availablega"."isactive" = TRUE
AND "tg_availablega"."schedule" = 20)
LIMIT 100
FOR
UPDATE;
It locks and returns 100 rows. If I execute the same statement simultaneously in another transaction, it locks and returns a different 100 rows. If there are 101 rows in total, the first execution returns 100 rows and the second execution returns just the 1 remaining row.
BUT if I add an ORDER BY clause:
SELECT "tg_availablega"."id",
"tg_availablega"."isactive",
"tg_availablega"."schedule",
"tg_availablega"."zone_tg_id"
FROM "tg_availablega"
WHERE (tg_availablega_is_unlocked("tg_availablega"."id")
AND "tg_availablega"."zone_tg_id" = 1
AND "tg_availablega"."isactive" = TRUE
AND "tg_availablega"."schedule" = 20)
ORDER BY "tg_availablega"."id"
LIMIT 100
FOR
UPDATE;
then the first transaction returns 100 locked rows, and the second transaction returns NO ROWS.
Why is that?
The problem is that the function tg_availablega_is_unlocked locks every row it examines. Without ORDER BY, Postgres stops scanning as soon as it has produced 100 rows, so the function doesn't get called on all of them. With ORDER BY, every matching row has to be visited (and therefore locked) before the LIMIT is applied, so the first transaction ends up locking them all and the second finds none unlocked. I think you meant:
select * from (
    SELECT "tg_availablega"."id",
           "tg_availablega"."isactive",
           "tg_availablega"."schedule",
           "tg_availablega"."zone_tg_id"
    FROM "tg_availablega"
    WHERE "tg_availablega"."zone_tg_id" = 1
      AND "tg_availablega"."isactive" = TRUE
      AND "tg_availablega"."schedule" = 20
    ORDER BY "tg_availablega"."id"
) a
where tg_availablega_is_unlocked(id)
limit 100
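As a side note, on PostgreSQL 9.5 and later the same "grab up to 100 unlocked rows" pattern is usually written with FOR UPDATE SKIP LOCKED, which needs no helper function at all; a sketch:

SELECT id, isactive, schedule, zone_tg_id
FROM tg_availablega
WHERE zone_tg_id = 1
  AND isactive = TRUE
  AND schedule = 20
ORDER BY id
LIMIT 100
FOR UPDATE SKIP LOCKED;  -- silently skips rows locked by other transactions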