I have to rewrite code that was generated in TOAD DATA MODELER for an Oracle database so that it works on PostgreSQL. I have two examples:
CREATE OR REPLACE TRIGGER ts_Kluby_jezdzieckie_KlubyJe_0 BEFORE INSERT
ON Kluby_jezdzieckie FOR EACH ROW
BEGIN
:new.ID_klubu := KlubyJezdzieckieS.nextval;
END;
And the second one:
CREATE OR REPLACE TRIGGER tsu_Adresy_AdresyS AFTER UPDATE OF ID_adresu
ON Adresy FOR EACH ROW
BEGIN
RAISE_APPLICATION_ERROR(-20010,'Cannot update column ID_adresu in table Adresy as it uses sequence.');
END;
I don't understand PostgreSQL's triggers yet, and I'm hoping for some help. Have a great evening, Adam
The first one assigns a value to ID_klubu from the sequence KlubyJezdzieckieS. In Postgres the equivalent assignment (inside a PL/pgSQL trigger function) would be
NEW.ID_klubu := nextval('KlubyJezdzieckieS');
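If you really want to keep the trigger, that assignment has to live in a trigger function. A minimal sketch, reusing the names from the question (the function name is made up here):
CREATE FUNCTION kluby_jezdzieckie_set_id()   -- hypothetical name
RETURNS trigger AS $$
BEGIN
  NEW.ID_klubu := nextval('KlubyJezdzieckieS');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ts_Kluby_jezdzieckie_KlubyJe_0
BEFORE INSERT ON Kluby_jezdzieckie
FOR EACH ROW
EXECUTE PROCEDURE kluby_jezdzieckie_set_id();
Note that unquoted identifiers are folded to lower case in Postgres, so this assumes the sequence was also created without quotes.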
However, the trigger is not needed. In Postgres v10 and later, define the column as a generated identity column. This accomplishes the same thing and eliminates the trigger.
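If the table already exists, the column can be switched over in place. A sketch, assuming ID_klubu is an integer column declared NOT NULL:
ALTER TABLE Kluby_jezdzieckie
  ALTER COLUMN ID_klubu ADD GENERATED ALWAYS AS IDENTITY;
If rows already exist, restart the identity above the current maximum, e.g. with ALTER TABLE Kluby_jezdzieckie ALTER COLUMN ID_klubu RESTART WITH the next free value.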
The second one raises a user-defined exception (the code -20010 is from Oracle's user-defined error range). In Postgres this would be:
RAISE EXCEPTION 'Cannot update column ID_adresu in table Adresy as it uses a sequence.';
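Again, the RAISE has to sit in a trigger function if you keep the trigger. A minimal sketch, names carried over from the Oracle version (the function name is made up):
CREATE FUNCTION adresy_forbid_id_update()   -- hypothetical name
RETURNS trigger AS $$
BEGIN
  RAISE EXCEPTION 'Cannot update column ID_adresu in table Adresy as it uses a sequence.';
  -- no RETURN needed: the RAISE always aborts the statement
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tsu_Adresy_AdresyS
AFTER UPDATE OF ID_adresu ON Adresy
FOR EACH ROW
EXECUTE PROCEDURE adresy_forbid_id_update();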
Even better, define the column as
ID_adresu integer generated always as identity
and just eliminate the trigger.
Related
I'm bringing in data from Excel into a PostgreSQL Db. There's a lot wrong with this data, but one thing that seems to connect several tables is a customer_id.
However, in the customer table I've got a unique char(8) that always has a leading zero. Yes, if it were up to me I'd make sure this data weren't so screwy upstream, but I'm dealing with sales folks here, manufacturing there, financing, etc.
And the customer id ALMOST matches across these various sources! It's just that in some of the data the customer_id doesn't have the leading zero, so customers.id = '01234567' does represent orders.customer_id = '1234567'.
I'm using the COPY command in Postgres, which is a new thing to me. Unfortunately, I cannot define a foreign key relationship on customer.id because of this small discrepancy.
How would I do a COPY and tell the column value to add a leading zero? Is this possible? I'm hoping I can do it right in the COPY statement? Thanks for any insight in how to do this!
EDIT:
A comment led me to this documentation. I'll update with an answer after I figure this out. Looks like a BEFORE INSERT trigger is what I'll need.
CREATE TRIGGER trigger_name
{BEFORE | AFTER} { event }
ON table_name
[FOR [EACH] { ROW | STATEMENT }]
EXECUTE PROCEDURE trigger_function
I'm the original poster and this is the answer to my question. I was bringing in data from XLS to PG and the leading zeros on customer_id(s) were dropped when exporting XLS to CSV for a COPY into PG.
Thanks be to an answer here that really pointed me down the right path: Postgresql insert trigger to set value
-- create table
CREATE TABLE T (customer_id char(8));
-- draft function to be used by trigger. NOTE the double single quotes.
CREATE FUNCTION lpad_8_0 ()
RETURNS trigger AS '
BEGIN
NEW.customer_id := (SELECT LPAD(NEW.customer_id, 8, ''0''));
RETURN NEW;
END' LANGUAGE 'plpgsql';
-- setup on before insert trigger to execute lpad_8_0 function
CREATE TRIGGER my_on_before_insert_trigger
BEFORE INSERT ON T
FOR EACH ROW
EXECUTE PROCEDURE lpad_8_0();
-- some sample inserts
INSERT INTO T
VALUES ('1234'), ('7');
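With the trigger in place, a quick check should show both rows zero-padded:
-- verify the padding
SELECT customer_id FROM T;   -- expect '00001234' and '00000007'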
Here's a working fiddle: http://sqlfiddle.com/#!17/a176e/1/0
NOTE: if the incoming value is longer than char(8), the COPY will still fail.
A trigger seems to be ignoring the WHEN condition in my definition, but I'm unsure why. I'm running the following:
create trigger trigger_update_candidate_location
after update on candidates
for each row
when (
OLD.address1 is distinct from NEW.address1
or
OLD.address2 is distinct from NEW.address2
or
OLD.city is distinct from NEW.city
or
OLD.state is distinct from NEW.state
or
OLD.zip is distinct from NEW.zip
or
OLD.country is distinct from NEW.country
)
execute procedure entities.tf_update_candidate_location();
But when I check back in on it, I get the following:
-- auto-generated definition
create trigger trigger_update_candidate_location
after update
on candidates
for each row
execute procedure tf_update_candidate_location();
This is problematic because the procedure I call ends up doing an update on the same table for different columns (lat/lng). Since the WHEN condition is ignored, this creates an infinite loop.
My intention is to watch for address changes and do a lookup on another table to get lat/lng values.
Postgresql version: 10.6
IDE: DataGrip 2018.1.3
How exactly do you create the trigger and "check back"? With DataGrip?
The WHEN condition was added with Postgres 9.0. Some old (or poor) clients may not display it correctly. To be sure, check in psql with:
SELECT pg_get_triggerdef(oid, true)
FROM pg_trigger
WHERE tgrelid = 'candidates'::regclass -- schema-qualify name to be sure
AND NOT tgisinternal;
Any actual WHEN qualification is stored in internal format in pg_trigger.tgqual, btw. Details are in the manual page for pg_trigger.
Also what's your current search_path and what's the schema of table candidates?
It stands out that the table candidates is unqualified, while the trigger function entities.tf_update_candidate_location() has a schema-qualification ... You are not confusing tables of the same name in different DB schemas, are you?
Aside, you can simplify with this shorter, equivalent syntax:
create trigger trigger_update_candidate_location
after update on candidates -- schema-qualify??
for each row
when (
(OLD.address1, OLD.address2, OLD.city, OLD.state, OLD.zip, OLD.country)
IS DISTINCT FROM
(NEW.address1, NEW.address2, NEW.city, NEW.state, NEW.zip, NEW.country)
)
execute procedure entities.tf_update_candidate_location();
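Once the WHEN clause is actually in place, the recursion worry goes away as long as the trigger function only touches lat/lng: its own UPDATE changes no address column, so the trigger does not fire again. A rough sketch of what entities.tf_update_candidate_location() might look like; the lookup table zip_geo and the column names id, lat, lng are assumptions:
CREATE OR REPLACE FUNCTION entities.tf_update_candidate_location()
RETURNS trigger AS $$
BEGIN
   -- hypothetical lookup table zip_geo(zip, lat, lng)
   UPDATE candidates c
   SET    lat = g.lat
        , lng = g.lng
   FROM   zip_geo g
   WHERE  c.id  = NEW.id
   AND    g.zip = NEW.zip;
   RETURN NULL;   -- return value is ignored in an AFTER trigger
END
$$ LANGUAGE plpgsql;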
Unfortunately, that's an issue in DataGrip. Please follow the ticket to be notified when it's fixed.
https://youtrack.jetbrains.com/issue/DBE-7247
As we start to migrate our application from Oracle to PostgreSQL, we ran into the following problem:
A lot of our Oracle scripts create triggers that work on Oracle-specific tables which don't exist in PostgreSQL. When running these scripts on the PG database, they do not throw an error.
An error is only thrown when the trigger fires.
Example code:
-- Invalid query under PostgreSQL
select * from v$mystat;
-- Create a view with the invalid query does not work (as expected)
create or replace view Invalid_View as
select * from v$mystat;
-- Create a test table
create table aaa_test_table (test timestamp);
-- Create a trigger with the invalid query does(!) work (not as expected)
create or replace trigger Invalid_Trigger
before insert
on aaa_test_table
begin
select * from v$mystat;
end;
-- Insert fails if the trigger exists
insert into aaa_test_table (test) values(sysdate);
-- Select from the test table
select * from aaa_test_table
order by test desc;
Is there a way to change this behavior to throw an error on trigger creation instead?
Kind Regards,
Hammerfels
Edit:
I was made aware that we actually don't use plain PostgreSQL but EDB instead.
That would probably explain why the syntax for create trigger seems wrong.
I'm sorry for the confusion.
It will trigger an error, unless you have configured Postgres to postpone validation when creating functions.
Try issuing this before creating the trigger:
set check_function_bodies = on;
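To see what the current value is (it defaults to on), you can run:
SHOW check_function_bodies;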
Creating the trigger should show
ERROR: syntax error at or near "trigger"
LINE 1: create or replace trigger Invalid_Trigger
I have a shapefile loaded into a PostGIS database. This shapefile is frequently updated by the source, and thus my current process is:
1. Use shp2pgsql with the -a option to generate insert statements.
2. Run the SQL generated in step 1 to append to the database.
Of course, I end up with all the rows from both versions of the shapefile, and what I need is to get rid of all the previous rows and load only the rows from the updated shapefile.
I tried creating a trigger and trigger function in the database:
CREATE TRIGGER drop_all_rows_from_owner_table_trigger
BEFORE INSERT
ON owner_polygons_common_ownership_layer
FOR EACH STATEMENT
EXECUTE PROCEDURE drop_all_rows_from_owner_table();
Here's the trigger function:
CREATE OR REPLACE FUNCTION drop_all_rows_from_owner_table()
RETURNS trigger AS $$
BEGIN DELETE FROM owner_polygons_common_ownership_layer;
RETURN NEW;
END;
$$
LANGUAGE 'plpgsql';
I believe all I have accomplished is to delete all rows from the table, insert the new rows, then delete them again, because when I look at the table after the process ends I have zero rows. I used the FOR EACH STATEMENT clause because shp2pgsql created one INSERT statement.
My question is: Are triggers the way to go to accomplish this?
Your trigger function seems right.
However, I don't think this is the way to go: you cannot be sure that shp2pgsql produces a single statement.
If your shapefile grows, it could split your inserts in multiple statements.
So, if you can't use the -d option (which drops and recreates the table), I'd add a step to the process, between 1 and 2, to truncate the table.
You could also prepend the TRUNCATE statement to the generated SQL file, or execute a separate psql command to truncate the table.
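For example, the extra step can be a single statement, either prepended to the generated SQL file or run on its own right before loading it:
-- clear out the previous version of the shapefile before re-loading
TRUNCATE owner_polygons_common_ownership_layer;
TRUNCATE is also considerably faster than an unqualified DELETE on large tables.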
So, I think this should be fairly simple, but the documentation makes it seem somewhat more complicated. I've written an SQL function in PostgreSQL (8.1, for now) which does some cleanup on some string input. For what it's worth, the string is an LDAP distinguished name, and I want there to consistently be no spaces after the commas - and the function is clean_dn(), which returns the cleaned DN. I want to do the same thing to force all input to another couple of columns to lower case, etc - which should be easy once I figure this part out.
Anyway, I want this function to be run on the "dn" column of a table any time anyone attempts to insert into or update that column. But all the rule examples I can find seem to assume that all insert/update queries modify all the columns in a table all the time. In my situation, that is not the case. What I think I really want is a constraint which just changes the value rather than returning true or false, but that doesn't seem to fit the SQL idea of a constraint. Do I have my rule do an UPDATE on the NEW table? Do I have to create a new rule for every possible combination of NEW values? And if I add a column, do I have to go through and update all of my rule combinations to reflect every possible new combination of columns?
There has to be an easy way...
First, update to a current version of PostgreSQL. 8.1 is long dead and forgotten and unsupported and very, very old ... you get my point? The current version is PostgreSQL 9.2.
Then, use a trigger instead of a rule. It's simpler. It's the way most people go. I do.
For column col in table tbl ...
First, create a trigger function:
CREATE OR REPLACE FUNCTION trg_tbl_insupbef()
RETURNS trigger AS
$BODY$
BEGIN
NEW.col := f_myfunc(NEW.col); -- your function here, must return matching type
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Then use it in a trigger.
For ancient Postgres 8.1:
CREATE TRIGGER insupbef
BEFORE INSERT OR UPDATE
ON tbl
FOR EACH ROW
EXECUTE PROCEDURE trg_tbl_insupbef();
For modern-day Postgres (9.0+):
CREATE TRIGGER insbef
BEFORE INSERT OR UPDATE OF col -- only call trigger, if column was updated
ON tbl
FOR EACH ROW
EXECUTE PROCEDURE trg_tbl_insupbef();
You could pack more stuff into one trigger, but then you can't condition the UPDATE trigger on just the one column ...
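Applied to the question's case, the trigger function might look like this. The table name ldap_entries and the extra lower-cased column uid are assumptions; clean_dn() is the function you already wrote. Since it touches two columns, it uses the plain BEFORE INSERT OR UPDATE form, as noted above:
CREATE OR REPLACE FUNCTION trg_ldap_entries_insupbef()
RETURNS trigger AS
$BODY$
BEGIN
   NEW.dn  := clean_dn(NEW.dn);   -- strip spaces after commas in the DN
   NEW.uid := lower(NEW.uid);     -- hypothetical extra column forced to lower case
   RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER insupbef
BEFORE INSERT OR UPDATE ON ldap_entries
FOR EACH ROW
EXECUTE PROCEDURE trg_ldap_entries_insupbef();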