This is my first post on Stack Overflow. I have a problem with my insert trigger.
I have two tables.
Table raw_data:
create table raw_data(
id varchar(20),
name varchar(20),
duration integer
)
and table total_duration:
create table total_duration(
id_raw varchar(20),
name varchar(20),
duration integer
)
When data is inserted into raw_data, I copy it with this trigger function body:
begin
insert into total_duration(id_raw, name, duration)
select new.id, new.name, new.duration;
return new;
end;
The question is: how can I also insert a row that stores the sum of the durations? The trigger above only copies the single row from raw_data.
I have tried writing another trigger to insert the sum row, but the result is not as expected (a sketch of what I have in mind follows the expected result below).
The expected result is like this:
id_raw | name | total
-------+------+------
1001   | a    |  450
1001   | b    |  450
1002   | c    |  450
1001   | a,b  |  900
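For illustration, the kind of combined trigger function I have in mind is roughly this (only a sketch: I assume an AFTER INSERT FOR EACH ROW trigger on raw_data, and the function name, the LIKE '%,%' check that marks the combined row, and the use of string_agg are my own choices):
create or replace function upsert_total_duration()
returns trigger
language plpgsql
as $$
begin
    -- copy the single inserted row
    insert into total_duration(id_raw, name, duration)
    values (new.id, new.name, new.duration);

    -- refresh the combined row for this id: remove the old combined row
    -- (crudely identified by the comma in name) and re-aggregate
    delete from total_duration
    where id_raw = new.id and name like '%,%';

    insert into total_duration(id_raw, name, duration)
    select id, string_agg(name, ',' order by name), sum(duration)
    from raw_data
    where id = new.id
    group by id
    having count(*) > 1;

    return new;
end;
$$;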
Thank you for your help
I am trying to make a trigger and function that inserts into the table purchases the values which have been inserted into the table customers.
Columns of table customers:
1- customer_id serial PK (references customer_id in purchases)
2- c_name VARCHAR
3- amount DOUBLE PRECISION
Columns of table purchases:
1- customer_id serial PK
2- amount DOUBLE PRECISION
The code for the trigger and the function:
CREATE OR REPLACE FUNCTION auto_insert_purchases()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$body$
BEGIN
insert into purchases(customer_id,purchase) values
(NEW.customer_id,NEW.purchase);
END
$body$
CREATE TRIGGER tr_auto_insert_purchases
AFTER INSERT ON customers
EXECUTE PROCEDURE auto_insert_purchases()
As you can see, it's supposed to take the new row's data and insert it into the purchases table, but after doing an insert into customers like this:
insert into customers values(2,'Stewie Griffin',4.99);
I get this error message:
ERROR: null value in column "customer_id" of relation "purchases" violates not-null
constraint
DETAIL: Failing row contains (null, null).
CONTEXT: SQL statement "insert into purchases(customer_id,purchase) values
(NEW.customer_id,NEW.purchase)"
PL/pgSQL function auto_insert_purchases() line 3 at SQL statement
SQL state: 23502
Why does the failing row contain null? Am I using the NEW keyword incorrectly?
CREATE TABLE customers (
customer_id int4 NULL,
c_name varchar NULL,
amount float8 NULL
);
CREATE TABLE purchases (
customer_id int4 NULL,
amount float8 NULL
);
CREATE OR REPLACE FUNCTION auto_insert_purchases()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
BEGIN
insert into purchases(customer_id, amount) values
(NEW.customer_id, NEW.amount);
return new;
END;
$function$
;
create trigger tr_auto_insert_purchases
after insert ON customers
for each row
execute procedure auto_insert_purchases();
insert into customers(customer_id, c_name, amount) values (2,'Stewie Griffin', 4.99);
select * from purchases;
-- Result:
customer_id|amount|
-----------+------+
2| 4.99|
Maybe you just forgot to write the FOR EACH ROW clause after CREATE TRIGGER tr_auto_insert_purchases AFTER INSERT ON customers. Without it the trigger is created as a statement-level trigger, and in a statement-level trigger NEW is NULL, which is why the failing row contains only nulls.
I have created a trigger, and it is taking a long time when inserting multiple records.
Inserting 1 or 2 records works, but with more than 1,000 records it is slow; the query has now been running for 2 hours.
I have created only 15 columns in the table below; my actual table has 300 columns.
Is there another way to insert multiple records into the table that has the trigger?
Table
create table patients (
id serial,
name character varying (50),
daily varchar (8),
month varchar (6),
quarter varchar (6),
registration_date timestamp,
age integer,
address text,
country text,
city text,
phone_number integer,
Education text,
Occupation text,
Marital_Status text,
"E-mail" text
);
trigger function
CREATE OR REPLACE FUNCTION update_data_after_insert_data_into_patients()
RETURNS trigger AS
$$BEGIN
update patients t1
set quarter=t2.quarter
from (SELECT (extract(year from registration_date)::text || 'Q' || extract(quarter from registration_date)::text) as quarter,registration_date
from patients) t2 where t1.registration_date =t2.registration_date;
update patients t1
set month=t2.month
from (select (extract(year from registration_date)::text || '' || to_char(registration_date,'MM')) as month,registration_date
from patients) t2 where t1.registration_date =t2.registration_date;
update patients t1
set daily=t2.daily
from (select extract(year from registration_date) || '' ||to_char(registration_date,'MM') || '' || to_char(registration_date,'DD') as daily,registration_date
from patients) t2 where t1.registration_date =t2.registration_date;
RETURN new;
END;
$$ LANGUAGE plpgsql;
Trigger definition
create TRIGGER trigger_update_data_after_insert_patients
AFTER insert ON patients
FOR EACH ROW
EXECUTE PROCEDURE update_data_after_insert_data_into_patients();
insert multiple records into patients table
INSERT INTO public.patients
("name", daily, "month", quarter, registration_date, age, address, country, city, phone_number, education, occupation, marital_status, "E-mail")
VALUES('Adam', '20221215', '202212', '2022Q4', '2022-08-17 19:01:10-08', 24, '', '', '', 1245578, '', '', '', '');
select statement
select * from patients;
You are updating all rows in the table that have the same registration date as the inserted one, three times, just to calculate those generated columns.
You can do this more efficiently by assigning the generated values to the NEW record in a BEFORE trigger.
CREATE OR REPLACE FUNCTION update_data_after_insert_data_into_patients()
RETURNS trigger AS
$$
BEGIN
new.quarter := to_char(new.registration_date, 'yyyy"Q"q');
new.month := to_char(new.registration_date, 'yyyymm');
new.daily := to_char(new.registration_date, 'yyyymmdd');
RETURN new;
END;
$$
LANGUAGE plpgsql;
create TRIGGER trigger_update_data_after_insert_patients
BEFORE insert ON patients
FOR EACH ROW
EXECUTE PROCEDURE update_data_after_insert_data_into_patients();
However, I don't see the need to store these calculated values at all when you can easily format registration_date when retrieving the data. I would get rid of those columns and the trigger, and create a VIEW that does the formatting.
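A minimal sketch of that alternative (assuming the daily, month and quarter columns are dropped from patients; the view name is mine):
create view patients_with_periods as
select p.*,
       to_char(p.registration_date, 'yyyy"Q"q') as quarter,
       to_char(p.registration_date, 'yyyymm')   as month,
       to_char(p.registration_date, 'yyyymmdd') as daily
from patients p;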
I have a composite primary key in a table in PostgreSQL (I am using pgAdmin4).
Let's call the two primary key columns productno and version.
version represents the version of productno.
So when I create a new record, it needs to be checked whether a record with this productno already exists:
If productno doesn't exist yet, version should be 1
If productno exists once, version should be 2
If productno exists twice, version should be 3
... and so on
So that we get something like:
productno | version
----------+--------
        1 | 1
        1 | 2
        1 | 3
        2 | 1
        2 | 2
I found a quite similar problem: auto increment on composite primary key
But I can't use that solution because the PostgreSQL syntax is a bit different, so I tried a lot with functions and triggers but couldn't figure out the right way to do it.
You can keep the version numbers in a separate table (one for each "base PK" value). That is way more efficient than doing a max() + 1 on every insert and has the additional benefit that it's safe for concurrent transactions.
So first we need a table that keeps track of the version numbers:
create table version_counter
(
product_no integer primary key,
version_nr integer not null
);
Then we create a function that increments the version for a given product_no and returns that new version number:
create function next_version(p_product_no int)
returns integer
as
$$
insert into version_counter (product_no, version_nr)
values (p_product_no, 1)
on conflict (product_no)
do update
set version_nr = version_counter.version_nr + 1
returning version_nr;
$$
language sql
volatile;
The trick here is the INSERT ... ON CONFLICT, which increments an existing value or inserts a new row if the passed product_no does not yet exist.
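For example, calling the function directly shows the counter behaviour (the values in the comments are what I would expect, not captured output):
select next_version(42);  -- 1: first call inserts (42, 1) into version_counter
select next_version(42);  -- 2: second call hits the conflict and increments
select next_version(99);  -- 1: a different product_no starts its own counter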
For the product table:
create table product
(
product_no integer not null,
version_nr integer not null,
created_at timestamp default clock_timestamp(),
primary key (product_no, version_nr)
);
Then create a trigger function and the trigger that uses it:
create function increment_version()
returns trigger
as
$$
begin
new.version_nr := next_version(new.product_no);
return new;
end;
$$
language plpgsql;
create trigger base_table_insert_trigger
before insert on product
for each row
execute procedure increment_version();
This is safe for concurrent transactions because the row in version_counter will be locked for that product_no until the transaction inserting the row into the product table is committed - which will commit the change to the version_counter table as well (and free the lock on that row).
If two concurrent transactions insert the same value for product_no, one of them will wait until the other finishes.
If two concurrent transactions insert different values for product_no, they can work without having to wait for the other.
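As an illustration of that locking behaviour, here is a sketch of two concurrent sessions (the session labels and comments are mine, not captured output):
-- Session 1:
begin;
insert into product (product_no) values (1);  -- takes the lock on the version_counter entry for product_no = 1
-- Session 2:
begin;
insert into product (product_no) values (1);  -- blocks here, waiting for session 1
-- Session 1:
commit;                                       -- releases the lock
-- Session 2 resumes, gets version_nr = 2, and can commit
commit;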
If we then insert these rows:
insert into product (product_no) values (1);
insert into product (product_no) values (2);
insert into product (product_no) values (3);
insert into product (product_no) values (1);
insert into product (product_no) values (3);
insert into product (product_no) values (2);
The product table looks like this:
select *
from product
order by product_no, version_nr;
product_no | version_nr | created_at
-----------+------------+------------------------
1 | 1 | 2019-08-23 10:50:57.880
1 | 2 | 2019-08-23 10:50:57.947
2 | 1 | 2019-08-23 10:50:57.899
2 | 2 | 2019-08-23 10:50:57.989
3 | 1 | 2019-08-23 10:50:57.926
3 | 2 | 2019-08-23 10:50:57.966
Online example: https://rextester.com/CULK95702
You can do it like this:
-- Check whether the pk already exists
SELECT pk INTO temp_pk FROM table a WHERE a.pk = v_pk1;
-- If it exists, insert a row that references it
IF temp_pk IS NOT NULL THEN
INSERT INTO table(pk, versionpk) VALUES (v_pk1, temp_pk);
END IF;
So, I got it to work now.
If you want a column to update depending on another column in PostgreSQL, have a look at this:
This is the function I use:
CREATE FUNCTION public.testfunction()
RETURNS trigger
LANGUAGE 'plpgsql'
COST 100
VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE v_productno INTEGER := NEW.productno;
BEGIN
IF NOT EXISTS (SELECT *
FROM testtable
WHERE productno = v_productno)
THEN
NEW.version := 1;
ELSE
NEW.version := (SELECT MAX(testtable.version)+1
FROM testtable
WHERE testtable.productno = v_productno);
END IF;
RETURN NEW;
END;
$BODY$;
And this is the trigger that runs the function:
CREATE TRIGGER testtrigger
BEFORE INSERT
ON public.testtable
FOR EACH ROW
EXECUTE PROCEDURE public.testfunction();
Thank you #ChechoCZ, you definitely helped me get in the right direction.
Given two tables, A and B:
A       B
-----   -----
id      id
high    high
low     low
bId
I want to find rows in table A where bId is null, create an entry in B based on the data in A, and update the row in A to reference the newly created row. I can create the rows, but I'm having trouble updating table A with the reference to the new row:
begin transaction;
with rows as (
insert into B (high, low)
select high, low
from A a
where a.bId is null
returning id as bId, a.id as aId
)
update A
set bId=(select bId from rows where id=rows.aId)
where id=rows.aId;
--commit;
rollback;
However, this fails with a cryptic error: ERROR: missing FROM-clause entry for table a.
Using a Postgres query, how can I achieve this?
Either
update "A"
set "bId"=(select "bId" from rows where id=rows."aId")
without the where clause, or
update "A"
set "bId"=(select "bId" from rows where id=rows."aId")
FROM rows
where "A".id=rows."aId";
I don't know if your tables really have those names. As mentioned in the comments, try to avoid uppercase table and field names, and try to avoid reserved keywords.
I found a way to get it to work, but I feel like it's not the most efficient (a set-based sketch follows the code below).
begin transaction;
do $body$
declare
newId int4;
tempB record;
begin
create temp table TempAB (
High float8,
Low float8,
AID int4
);
insert into TempAB (High, Low, AId)
select high, low, id
from A
where bId is null;
for tempB in (select * from TempAB)
loop
insert into B (high, low)
values (tempB.high, tempB.low)
returning id into newId;
update A
set bId=newId
where id=tempB.AId;
end loop;
end $body$;
rollback;
--commit;
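For comparison, here is a set-based sketch; it only works if the (high, low) pair uniquely identifies each row being copied (and contains no NULLs), which may not hold for real data:
begin transaction;
with rows as (
    insert into B (high, low)
    select high, low
    from A
    where A.bId is null
    returning id, high, low
)
update A
set bId = rows.id
from rows
where A.bId is null
  and A.high = rows.high
  and A.low = rows.low;
rollback;
--commit;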
I have records in a SOURCE1 table and I need to move those records into two different tables called DESTINATION1 and DESTINATION2.
I know how to copy records from the SOURCE1 table into the DESTINATION1 table using an INSERT INTO ... SELECT statement, but I run into a problem. When copying the REMARKS data from SOURCE1, I need to copy it into the DESTINATION2 table, retrieve the REFID, and copy that REFID into the FK_DESTINATION2_REFID column of the respective record in my DESTINATION1 table.
The criteria are to copy only the records in the SOURCE1 table with a STATUS of 1, and to copy the respective REMARKS data into the DESTINATION2 table only if it is not null. Also, is it possible to do this without a stored procedure? If not, it's not a big deal.
CREATE TABLE #Source1 (
RefID int IDENTITY(1,1) NOT NULL,
Status bit NULL,
ProviderID int NULL,
Remarks varchar(max) NULL
)
Create Table #Destination1 (
RefID int IDENTITY(1,1) NOT NULL,
Status bit NULL,
ProviderID int NULL,
FK_Destination2_RefID int
)
Create Table #Destination2 (
RefID int IDENTITY(1,1) NOT NULL,
Remarks varchar(max) NULL
)
-- Insert Records into #Source1
Insert Into #Source1 values (1,100,'Test 555')
Insert Into #Source1 values (0,400,'Test 123')
Insert Into #Source1 values (1,300,NULL)
Insert Into #Source1 values (1,500,'Test 999')
Insert Into #Source1 values (1,200,NULL)
--Drop table #Source1
--Drop table #Destination1
--Drop table #Destination2
Results would look like this:
Source1 Table
RefID Status ProviderID Remarks
----------- ------ ----------- -----------
1 1 100 Test 555
2 0 400 Test 123
3 1 300 NULL
4 1 500 Test 999
5 1 200 NULL
Destination1 Table
RefID Status ProviderID FK_Destination2_RefID
----------- ------ ----------- ---------------------
1 1 100 1
2 1 300 NULL
3 1 500 2
4 1 200 NULL
Destination2 Table
RefID Remarks
------ ---------
1 Test 555
2 Test 999
EDIT: My #SOURCE1 table will hold a varying number of records. In this instance I have 5 records, but next time it could be 50. Each time I use the #SOURCE1 table I truncate it, so the REFID starts back at 1. Since this is a temporary holding table for a batch of records, I need to move them permanently to the two destination tables as indicated, so that in essence they look like the original #SOURCE1 table.
Well, you are using the IDENTITY property on the #Destination tables. This means you are trying to assign a new PK to them, which breaks the uniqueness / PK --> FK link to the #Source table, and it's unnecessary since your source table is already handling this. So just remove that property from the #Destination tables and do your inserts as you expect. You can still add a UNIQUE constraint on the destination tables if you want, but if this is all they are used for, you should never run into non-uniqueness. Your FK will not be sequential, but that's because you are restricting which data you insert. If you want another PK IDENTITY column, just keep it separate; I have included one below as an example.
CREATE TABLE #Source1 (
RefID int IDENTITY(1,1) NOT NULL,
Status bit NULL,
ProviderID int NULL,
Remarks varchar(max) NULL
)
Create Table #Destination1 (
SomePK int IDENTITY(1,1),
RefID int ,
Status bit NULL,
ProviderID int NULL,
FK_Destination2_RefID int
)
Create Table #Destination2 (
SomePK int IDENTITY(1,1),
RefID int ,
Remarks varchar(max) NULL
)
-- Insert Records into #Source1
Insert Into #Source1 values (1,100,'Test 555')
Insert Into #Source1 values (0,400,'Test 123')
Insert Into #Source1 values (1,300,NULL)
Insert Into #Source1 values (1,500,'Test 999')
Insert Into #Source1 values (1,200,NULL)
insert into #Destination2
select
RefID
,Remarks
from #Source1
where
Remarks is not null and Status = 1
insert into #Destination1
select
s.RefID
,s.Status
,s.ProviderID
,d.RefID
from
#Source1 s
left join #Destination2 d on d.RefID = s.RefID
where
s.Status = 1
select * from #Source1
select * from #Destination1
select * from #Destination2
Drop table #Source1
Drop table #Destination1
Drop table #Destination2