Postgres Trigger Function Dynamic Column Concat TG_TABLE_NAME - postgresql

I want to know if it's possible to do something like this on Postgres 11 in a trigger function.
CREATE OR REPLACE FUNCTION "searchs"."handle_content" ( )
RETURNS trigger
AS $BODY$
BEGIN
UPDATE table
set test = 1
WHERE item_id = NEW.format('%I_id',TG_TABLE_NAME);
END;
$BODY$
The primary key column name changes from table to table, so I need to build it by concatenating the table name:
NEW.format('%I_id',TG_TABLE_NAME)

I have solved it with a CASE expression, but I would still like to know whether it is possible to convert the record to an array type or access the field dynamically.
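One way this can work on Postgres 11 is to convert the NEW record to jsonb and read the column by its computed name. This is only a sketch of the idea, not a confirmed solution; some_table stands in for the real target table from the question, and the bigint key type is assumed:
CREATE OR REPLACE FUNCTION "searchs"."handle_content"()
RETURNS trigger AS $BODY$
DECLARE
    pk_value bigint;  -- assumed type of the <table>_id columns
BEGIN
    -- to_jsonb(NEW) turns the row into jsonb, so the field can be
    -- selected by a name built from the firing table's name
    pk_value := (to_jsonb(NEW) ->> (TG_TABLE_NAME || '_id'))::bigint;
    UPDATE some_table
    SET test = 1
    WHERE item_id = pk_value;
    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;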

Related

Alter PostgreSQL column into a GENERATED ALWAYS column

I have an already made table:
cotizacion(idCot(PK), unit_price,unit_price_taxes)
I need to convert unit_price_taxes into a generated column equal to unit_price * 1.16. The issue is I can't find an ALTER TABLE statement that does this. Dropping the table and creating it again is not an option, as this table is already deeply linked with the rest of the database and reinserting all records is not feasible at this point.
I tried the following:
ALTER TABLE cotizacion
alter column unit_price_taxes set
GENERATED ALWAYS AS (unit_price*1.16) STORED;
But it's not working. Does anybody know how to get this done or if it's even possible? I would like to avoid creating a new column.
Thanks!
EDIT:
I also tried the following trigger implementation:
CREATE OR REPLACE FUNCTION calculate_price_taxes()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
declare pu money;
begin
select unit_price from cotizacion into pu
where idCot = new."idCot";
update cotizacion
set unit_price_taxes = pu * (1.16)
where idCot = new."idCot";
return new;
end;
$function$
;
And the trigger declaration:
Create or replace trigger price_taxes
after update on cotizacion
for each row
execute procedure
calculate_price_taxes()
The most probable reason for your trigger to go into an infinite recursion is that you are running an UPDATE statement inside the trigger - which is the wrong thing to do. Create a before trigger and assign the calculated value to the new record:
create or replace function update_tax()
returns trigger
as
$$
begin
new.unit_price_taxes := new.unit_price * 1.16;
return new;
end;
$$
language plpgsql;
create trigger update_tax_trigger
before update or insert on cotizacion
for each row execute procedure update_tax();
The only way to "convert" that column to a generated one is to drop it and add it again:
alter table cotizacion
drop unit_price_taxes;
alter table cotizacion
add unit_price_taxes numeric generated always as (unit_price*1.16) stored;
Note that this will rewrite the entire table which will block access to it. Adding the trigger will be less invasive.

sync two tables after insert

I am using PostgreSQL. I have two schemas, main and sec, each containing a single table datastore with the same structure (this is only an extract).
I am trying unsuccessfully to create a trigger to keep both tables in sync when an insert occurs in either of them. The problem is some kind of circular or recursive reference.
Can you provide an example that solves this?
I am working on this, I'll post my solution later.
You can use this code as reference for creating schemas and tables
CREATE SCHEMA main;
CREATE SCHEMA sec;
SET search_path = main, pg_catalog;
CREATE TABLE datastore (
fullname character varying,
age integer
);
SET search_path = sec, pg_catalog;
CREATE TABLE datastore (
fullname character varying,
age integer
);
An updatable view is the best solution and is as simple as (Postgres 9.3+):
drop table sec.datastore;
create view sec.datastore
as select * from main.datastore;
However, if you cannot do that for some reason, use the pg_trigger_depth() function (Postgres 9.2+) to ensure that the trigger does not fire again when the mirrored insert runs. The trigger on main.datastore may look like this:
create or replace function main.datastore_insert_trigger()
returns trigger language plpgsql as $$
begin
insert into sec.datastore
select new.fullname, new.age;
return new;
end $$;
create trigger datastore_insert_trigger
before insert on main.datastore
for each row when (pg_trigger_depth() = 0)
execute procedure main.datastore_insert_trigger();
The trigger on sec.datastore should be defined analogously.
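For completeness, a sketch of the analogous function and trigger on sec.datastore, assuming the same two columns:
create or replace function sec.datastore_insert_trigger()
returns trigger language plpgsql as $$
begin
insert into main.datastore
select new.fullname, new.age;
return new;
end $$;
create trigger datastore_insert_trigger
before insert on sec.datastore
for each row when (pg_trigger_depth() = 0)
execute procedure sec.datastore_insert_trigger();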
create OR REPLACE function copytosec() RETURNS TRIGGER AS $$
BEGIN
insert into sec.datastore(fullname,age) values (NEW.fullname,NEW.age);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
create trigger copytosectrigger after insert on public.datastore
for each row
execute procedure copytosec();

How to create a Trigger in PostgreSql?

TRIGGER --> How do I get a column value from one table into another table when I insert values?
I have two tables (customer_details and loan_balance).
What I need is: the custid column of the customer_details table must be carried over into the loan_balance table when I insert data into loan_balance.
This is the full set up of my query: SQL FIDDLE
So I need a trigger to fire so that custid is filled in automatically, without having to insert it manually.
Postgres has an unconventional way of creating triggers:
create a function that returns type trigger and return the NEW row record
create a trigger that executes the function
Here's the code you need:
CREATE FUNCTION synch_custid_proc()
RETURNS trigger AS $$
BEGIN
NEW.custid = (
select max(custid)
from customer_details
where creditid = NEW.creditid
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER synch_custid_trig
BEFORE INSERT ON loan_balance
FOR EACH ROW
EXECUTE PROCEDURE synch_custid_proc();
I chose to select max(custid) rather than simply custid in case there are multiple rows that match. You might have to adjust this logic to suit your data.
See a live demo on SQLFiddle

Getting row values based on array of column names passed to PostgreSQL function

I'm trying to set up full text search in PostgreSQL 9.2. I created a new table to hold the content that I want to search (so that I can search across lots of different types of items), which looks like this:
CREATE TABLE search (
target_id bigint PRIMARY KEY,
target_type text,
fts tsvector
);
CREATE INDEX search_fts ON search USING gin(fts);
Every time a new item gets inserted (or updated) into one of the various tables I want to search across, it should automatically be added to the search table. Assuming that my table looks like the following:
CREATE TABLE item (id bigint PRIMARY KEY, name text NOT NULL, description text);
I created a trigger passing in the column names that I want to be able to search:
CREATE TRIGGER insert_item_search BEFORE INSERT
ON item FOR EACH ROW EXECUTE PROCEDURE
insert_search('{name, description}'::text[]);
Then created a new function insert_search as:
CREATE OR REPLACE FUNCTION insert_search(cols text[]) RETURNS TRIGGER AS $$
BEGIN
INSERT INTO search (target_id, target_type, fts) VALUES (
NEW.id, TG_TABLE_NAME, to_tsvector('english', 'foo')
);
RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;
My question is, how do I pass in the table values based on cols to to_tsvector? Right now, the function is getting called and inserts the id and type correctly, but I don't know the right way to dynamically grab the other values based on the cols argument.
First, to pass arguments, just send them directly:
CREATE TRIGGER insert_item_search BEFORE INSERT
ON item FOR EACH ROW EXECUTE PROCEDURE
insert_search('name', 'description');
And, from PL/pgSQL you will get those arguments as an array, called TG_ARGV. But, the problem is that PL/pgSQL cannot get the values from NEW record based on their names. To do that you can either use a language that lets you do that (like PL/python or PL/perl) or use the hstore extension.
I'd stick with the last one and use hstore (unless you already use one of the other languages to create functions):
CREATE OR REPLACE FUNCTION insert_search() RETURNS TRIGGER AS $$
DECLARE
v_new hstore;
BEGIN
v_new = hstore(NEW); -- convert the record to hstore
FOR i IN 0..(TG_NARGS-1) LOOP
INSERT INTO search (target_id, target_type, fts) VALUES (
NEW.id, TG_TABLE_NAME, to_tsvector('english', v_new -> TG_ARGV[i])
);
END LOOP;
RETURN NEW;
END;
$$ LANGUAGE PLPGSQL;
As you can see above, I used the hstore's operator -> to get the value based on the name (on TG_ARGV[i]).
You can access the parameters specified in the trigger definition with the TG_ARGV variable. You can find documentation on that here. TG_ARGV is an array accessed by a 0-based index, so it would be something like TG_ARGV[0], TG_ARGV[1], and so on.
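As a minimal illustration of TG_ARGV (the function and trigger names here are made up; it reuses the item table from the question):
CREATE OR REPLACE FUNCTION show_trigger_args() RETURNS trigger AS $$
BEGIN
-- TG_NARGS holds the argument count; TG_ARGV is a 0-based text array
RAISE NOTICE 'first arg: %, second arg: %', TG_ARGV[0], TG_ARGV[1];
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- the strings in the trigger definition arrive as TG_ARGV[0] and TG_ARGV[1]
CREATE TRIGGER show_item_args BEFORE INSERT ON item
FOR EACH ROW EXECUTE PROCEDURE show_trigger_args('name', 'description');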

Getting Postgres to truncate values if necessary?

If I create a table mytable with a data column of type varchar(2) and then insert something like '123' into that column, Postgres gives me a "value too long for type" error.
How can I have Postgres ignore this and truncate the value if necessary?
Also, I do not (when creating the query) know the actual size of the data column in mytable so I can't just cast it.
According to the postgres documentation, you have to explicitly cast it to achieve this behavior, as returning an error is a requirement of the SQL standard. Is there no way to inspect the table's schema before creating the query to know what to cast it to?
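If inspecting the schema up front is acceptable, the declared length can be read from information_schema before building the query; a sketch, assuming mytable lives in the public schema:
SELECT character_maximum_length
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'mytable'
  AND column_name = 'data';
-- returns 2 for a varchar(2) column, NULL for plain text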
Use the text type with a trigger instead:
create table mytable (
data text
);
create or replace function mytable_data_trunc_trigger()
returns trigger language plpgsql volatile as $$
begin
NEW.data = substring(NEW.data for 2);
return NEW;
end;
$$;
create trigger mytable_data_truncate_trigger
before insert or update on mytable for each row
execute procedure mytable_data_trunc_trigger();
insert into mytable values (NULL),('1'),('12'),('123');
select * from mytable;
data
------

1
12
12
(4 rows)
The easiest approach is just substring:
INSERT INTO mytable (data) VALUES (substring('123' from 1 for 2));
You could change the data type to varchar without a length modifier, and then use a trigger to enforce the two-character limit.
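A sketch of that variant (assumed names; it mirrors the trigger shown above, only keeping varchar instead of text):
ALTER TABLE mytable ALTER COLUMN data TYPE varchar;  -- drop the length limit
CREATE OR REPLACE FUNCTION mytable_data_limit_trigger()
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    NEW.data := left(NEW.data, 2);  -- enforce the old two-character limit
    RETURN NEW;
END;
$$;
CREATE TRIGGER mytable_data_limit
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW EXECUTE PROCEDURE mytable_data_limit_trigger();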