I am new to database triggers/PostgreSQL and trying to convert the following SQL Server trigger to PostgreSQL.
SQL Server script:
CREATE TRIGGER tr_EmpMerger ON Emp INSTEAD OF INSERT
AS
BEGIN
MERGE INTO Emp AS Target
USING ( SELECT * FROM INSERTED ) AS Source
ON
( Target.EmpId = Source.EmpId
)
WHEN MATCHED THEN UPDATE SET
EmpName = Source.EmpName,
Age = Source.Age
WHEN NOT MATCHED THEN INSERT VALUES
(
Source.EmpId,
Source.EmpName,
Source.Age
);
END
GO
Questions:
1) Is there any equivalent of SQL Server's INSERTED table in PostgreSQL? If not, what is the workaround?
2) Does PostgreSQL support MERGE triggers? If not, what is the workaround?
3) What will be the equivalent PostgreSQL script for the above merge trigger?
EDIT:
Note - In this scenario the insertion of data into the Emp table (as well as other tables) happens through Postgres's bulk COPY command, so there is no direct INSERT INTO query available for this table.
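For reference, a sketch of the usual workaround: PostgreSQL has no INSTEAD OF triggers on plain tables and no MERGE trigger type (the MERGE statement itself only arrived in PostgreSQL 15), and the per-row equivalent of INSERTED is the NEW variable in a row-level trigger. Assuming the Emp columns shown above, a BEFORE INSERT trigger can emulate the merge (EXECUTE FUNCTION needs PostgreSQL 11+; older versions spell it EXECUTE PROCEDURE):
CREATE OR REPLACE FUNCTION emp_merger()
RETURNS trigger AS
$BODY$
BEGIN
    -- "WHEN MATCHED" branch: try the update first.
    UPDATE Emp
       SET EmpName = NEW.EmpName,
           Age     = NEW.Age
     WHERE EmpId = NEW.EmpId;
    IF FOUND THEN
        RETURN NULL;  -- row merged; suppress the original insert
    END IF;
    RETURN NEW;       -- "WHEN NOT MATCHED": let the insert proceed
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER tr_EmpMerger
BEFORE INSERT ON Emp
FOR EACH ROW
EXECUTE FUNCTION emp_merger();
This also covers the bulk-load path, since COPY fires BEFORE INSERT row triggers just like INSERT does (and COPY has no ON CONFLICT clause, so the plain upsert alternative is not available there).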
I am coming from the MSSQL world and moving over to Postgres. I am trying to create a new procedure from a query I wrote, and it fails on creation. I am using pgAdmin 4 to create the proc, and I've tried copy-pasting the query into the "code" tab of the dialog box.
What I'm trying to accomplish is inserting a bunch of rows into a table and outputting the IDs from the identity column into a temporary table. I will be using those IDs for more work further down the line, but it's failing before it is even usable. The way I did it in MSSQL was with a table variable, using "output inserted.id" to get those values into the table variable.
From what I understand, I have to create a temp table and use the RETURNING keyword in Postgres. The following query works if I run it in a query window:
CREATE TEMPORARY TABLE temp_table
(
temp_id integer
);
WITH ROWS AS
(
INSERT INTO table_a
(some_name_a)
SELECT some_name_b
FROM table_b
RETURNING id)
INSERT INTO temp_table(temp_id)
SELECT id FROM ROWS;
But when I try to create the procedure for that I get an error saying
"ERROR: syntax error at or near "CREATE" LINE 3: AS $BODY$CREATE TEMPORARY TABLE temp_table^"
Here is what the create proc code looks like:
CREATE OR REPLACE PROCEDURE public.temp()
LANGUAGE 'plpgsql'
AS $BODY$
CREATE TEMPORARY TABLE temp_table
(
temp_id integer
);
WITH ROWS AS
(
INSERT INTO table_a
(some_name_a)
SELECT some_name_b
FROM table_b
RETURNING id)
INSERT INTO temp_table(temp_id)
SELECT id FROM ROWS;
$BODY$;
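The likely cause: a plpgsql body must be a block, so the statements have to sit between BEGIN and END; the parser trips over the bare CREATE the moment the body starts. A sketch of the same procedure with the block added:
CREATE OR REPLACE PROCEDURE public.temp()
LANGUAGE plpgsql
AS $BODY$
BEGIN
    CREATE TEMPORARY TABLE temp_table
    (
        temp_id integer
    );

    WITH rows AS
    (
        INSERT INTO table_a (some_name_a)
        SELECT some_name_b
        FROM table_b
        RETURNING id
    )
    INSERT INTO temp_table (temp_id)
    SELECT id FROM rows;
END;
$BODY$;
One caveat: the temp table survives until the session ends, so a second CALL in the same session will fail with "relation already exists" unless you use CREATE TEMPORARY TABLE IF NOT EXISTS or add ON COMMIT DROP.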
I am using a Teiid VDB model where I need to extract query constraints inside the DDL and use them in a stored procedure to fetch results of my choice. For example, if I run the following query:
select * from Student where student_name = 'st123'
I want to pass st123 to my procedure and return the results based on some processing.
How can I extract this constraint inside the DDL, instead of Teiid doing the filtering for me and returning the matching row? Is there a way to handle this in the VDB rather than developing a connector?
See http://teiid.github.io/teiid-documents/master/content/reference/r_procedural-relational-command.html
If you have the procedure:
create virtual procedure student (in student_name string) returns table (<some cols>) as
begin
if (student_name like '...')
...
end
then you can call it as if it were a table:
select * from student where student_name = 'st123'
I am trying to write a trigger that gets data from the attribute table, where multiple rows corresponding to one actionId are inserted at one time, and groups all that data into one object:
Table Schema
actionId
key
value
I am firing the trigger on row insertion, so how can I handle this multiple-row insertion and how can I collect all the data?
CREATE TRIGGER attribute_changes
AFTER INSERT
ON attributes
FOR EACH ROW
EXECUTE PROCEDURE log_attribute_changes();
and the function (named to match the trigger above):
CREATE OR REPLACE FUNCTION log_attribute_changes()
RETURNS trigger AS
$BODY$
DECLARE
_message json;
_extendedAttributes jsonb;
BEGIN
SELECT json_agg(tmp)
INTO _extendedAttributes
FROM (
-- your subquery goes here, for example:
SELECT attributes.key, attributes.value
FROM attributes
WHERE attributes.actionId=NEW.actionId
) tmp;
_message :=json_build_object('actionId',NEW.actionId,'extendedAttributes',_extendedAttributes);
INSERT INTO wflowr222.irisevents(message)
VALUES(_message );
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
and the data format is:
actionId  key     value
2         flag    true
2         image   http:test.com/image
2         status  New
I tried to do it via an INSERT trigger, but it fires for each row inserted.
Does anyone have an idea how to handle this?
I expect that the problem is that you're using a FOR EACH ROW trigger; what you likely want is a FOR EACH STATEMENT trigger, i.e. one which only fires once for your multi-row INSERT statement. See the description at https://www.postgresql.org/docs/current/sql-createtrigger.html for a more thorough explanation.
AFAICT, you will also need to add REFERENCING NEW TABLE AS NEW in this mode to make the NEW reference available to the trigger function. So your CREATE TRIGGER syntax would need to be:
CREATE TRIGGER attribute_changes
AFTER INSERT
ON attributes
REFERENCING NEW TABLE AS NEW
FOR EACH STATEMENT
EXECUTE PROCEDURE log_attribute_changes();
I've read elsewhere that the required REFERENCING NEW TABLE ... syntax is only supported in PostgreSQL 10 and later.
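The trigger function then has to read NEW as a set (the transition table) rather than as a single row. A rough sketch under the same assumptions as the question (attributes columns actionId, key, value; events going into wflowr222.irisevents(message)):
CREATE OR REPLACE FUNCTION log_attribute_changes()
RETURNS trigger AS
$BODY$
DECLARE
    _message json;
BEGIN
    -- 'new' here is the transition table declared via REFERENCING NEW TABLE AS NEW:
    -- it contains every row inserted by the triggering statement.
    FOR _message IN
        SELECT json_build_object(
                   'actionId', n.actionId,
                   'extendedAttributes',
                   json_agg(json_build_object('key', n.key, 'value', n.value)))
        FROM new AS n
        GROUP BY n.actionId
    LOOP
        INSERT INTO wflowr222.irisevents(message) VALUES (_message);
    END LOOP;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;
Grouping by actionId keeps one event per actionId even if a single statement inserts rows for several of them.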
Considering the version of Postgres you have, and therefore keeping in mind that you can't use a trigger defined FOR EACH STATEMENT for your purpose, the only alternative I see is:
using an after-insert trigger to collect some information about the changes in a utility table
using a Unix cron job that executes a PL/pgSQL procedure that does the work on the data set
For example:
Your utility table
CREATE TABLE utility (
actionid integer,
createtime timestamp
);
You can define a trigger FOR EACH ROW with a body that does something like this:
INSERT INTO utility VALUES (NEW.actionId, current_timestamp);
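Spelled out as a complete (hypothetical) trigger and function, that could look like:
CREATE OR REPLACE FUNCTION queue_attribute_action()
RETURNS trigger AS
$BODY$
BEGIN
    -- Just record which actionId changed and when; the cron job does the rest.
    INSERT INTO utility VALUES (NEW.actionId, current_timestamp);
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER attribute_queue
AFTER INSERT ON attributes
FOR EACH ROW
EXECUTE PROCEDURE queue_attribute_action();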
And, finally, have a Unix crontab entry that executes a file or a procedure that does something like this:
SELECT a.* FROM utility u JOIN yourtable a ON a.actionid = u.actionid WHERE u.createtime < current_timestamp;
-- do something here with the records selected above
TRUNCATE TABLE utility;
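Wrapped into a (hypothetical) function that the cron job could invoke via psql, reusing the attributes and irisevents names from the question:
CREATE OR REPLACE FUNCTION process_utility_queue()
RETURNS void AS
$BODY$
BEGIN
    -- Aggregate everything queued so far into one event per actionId;
    -- DISTINCT guards against the same actionId being queued several times.
    INSERT INTO wflowr222.irisevents(message)
    SELECT json_build_object(
               'actionId', a.actionId,
               'extendedAttributes',
               json_agg(json_build_object('key', a.key, 'value', a.value)))
    FROM (SELECT DISTINCT actionId FROM utility) u
    JOIN attributes a ON a.actionId = u.actionId
    GROUP BY a.actionId;

    TRUNCATE TABLE utility;
END;
$BODY$
LANGUAGE plpgsql;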
If you had Postgres 9.5 you could have used pg_cron instead of Unix cron...
PostgreSQL DB: v 9.4.24
create table my_a_b_data ... -- with a_uuid, b_uuid, and c columns
NOTE: my_a_b_data keeps references to the a and b tables, i.e. it stores the UUIDs of a and b.
The primary key is (a_uuid, b_uuid).
there is also an index:
create unique index my_a_b_data_pkey
on my_a_b_data (a_uuid, b_uuid);
In the Java JDBC-style code, within the scope of one single transaction (start() -> [code (delete, insert)] -> commit()), using the org.postgresql:postgresql:42.2.5 driver:
delete from my_a_b_data where b_uuid = 'bbb';
insert into my_a_b_data (a_uuid, b_uuid, c) values ('aaa', 'bbb', null);
I found that the insert fails because the deleted row is apparently not yet gone, so the insert fails with a duplicate-key violation.
Q: Is it some kind of limitation in PostgreSQL that the DB can't do a delete and an insert in one transaction because PostgreSQL doesn't update its indexes until the delete is committed, so the insert fails since the id or key (whatever we may be using) already exists in the index?
What would be a possible solution? Splitting it into two transactions?
UPDATE: the order is exactly the same. When I test the SQL alone in the SQL console, it works fine. We use the JDBI library, v 5.29.
There it looks like this:
@Transaction
@SqlUpdate("insert into my_a_b_data (...; // similar for the delete
public abstract void addB() ..
So in the code:
this.begin();
this.deleteByB(b_id);
this.addB(a_id, b_id);
this.commit();
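For reference, the plain-SQL sequence that succeeds in the console is simply the following; PostgreSQL maintains its indexes per statement rather than at commit, so inside one transaction the insert already sees the row as deleted:
BEGIN;
DELETE FROM my_a_b_data WHERE b_uuid = 'bbb';
INSERT INTO my_a_b_data (a_uuid, b_uuid, c) VALUES ('aaa', 'bbb', NULL);
COMMIT;
That points the suspicion at how the JDBI calls are composed into a transaction, rather than at the database itself.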
I had a similar problem with inserting duplicated values, and I resolved it by using insert-and-update instead of delete. I built this process in Python, but you should be able to reproduce it:
First, create a temporary table shaped like the target table you want to insert values into; the difference is that this table is dropped after commit.
CREATE TEMP TABLE temp_my_a_b_data
(LIKE public.my_a_b_data INCLUDING DEFAULTS)
ON COMMIT DROP;
I created a CSV (I had to merge different data to input) with the values that I want to insert into my table, and I used the COPY command to load them into the temp table (temp_my_a_b_data).
I found this code on a post about Java and COPY, PostgreSQL - \copy command:
String query ="COPY tmp from 'E://load.csv' delimiter ','";
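Note that COPY ... FROM 'file' reads the file on the database server; if the CSV lives on the client, psql's \copy variant (or the JDBC driver's org.postgresql.copy.CopyManager) performs a client-side read instead. A psql equivalent would be something like:
\copy temp_my_a_b_data from 'load.csv' with (format csv, delimiter ',')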
Use INSERT INTO with the ON CONFLICT clause, which lets you choose an action to take when the insert cannot proceed because of the specified constraint; in the case below we do the update. Note that the conflict target has to match the unique index, i.e. (a_uuid, b_uuid), and that ON CONFLICT requires PostgreSQL 9.5 or later:
INSERT INTO public.my_a_b_data
SELECT *
FROM temp_my_a_b_data
ON CONFLICT (a_uuid, b_uuid) DO UPDATE
SET c = EXCLUDED.c;
Considerations:
I am not sure, but you might be able to perform the third step without the previous steps (temp table and COPY FROM) by just looping over the values:
INSERT INTO public.my_a_b_data VALUES (value1, value2, NULL)
ON CONFLICT (a_uuid, b_uuid) DO UPDATE
SET c = EXCLUDED.c;
I'm working on a JSF application that uses a Firebird 3.0 database containing hundreds of tables. I need to delete all tables from time to time.
I have checked this query:
DROP TABLE TABLE_NAME
but only one table can be deleted at a time with this query, which is very time-consuming for the program. Is there a faster approach?
You can create a procedure that drops the tables:
create or alter procedure PRC_DROP_TABLES
as
declare variable TBL varchar(50);
begin
  for select r.rdb$relation_name
      from rdb$relation_fields r
      where r.rdb$system_flag = 0 and r.rdb$view_context is null
      -- and r.rdb$relation_name not containing '$' -- uncomment and modify this if you want to filter tables by some condition
      group by r.rdb$relation_name
      into :tbl
  do
    execute statement 'drop table ' || :tbl;
end
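Once created, the cleanup is a single call (for example from isql):
execute procedure PRC_DROP_TABLES;
Bear in mind that foreign-key dependencies between tables can still make individual drops fail, so dependent tables may need their constraints dropped first.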