Creating and executing a stored procedure in Spring Boot - PostgreSQL

I have a stored procedure written in a SQL file which I want to execute from Spring Boot code using JdbcTemplate. The stored procedure looks like this:
CREATE OR REPLACE FUNCTION DELETE_REDUNDANT_RECORDS()
RETURNS void AS
$func$
DECLARE
    interval_time BIGINT DEFAULT 0;
    min_time BIGINT DEFAULT 0;
    max_time BIGINT DEFAULT 0;
    rec_old RECORD;
    rec_new RECORD;
    rec_start RECORD;
    v_filename VARCHAR(250);
    v_systemuid VARCHAR(50);
    is_continous_record BOOLEAN DEFAULT false;
    cursor_file CURSOR FOR
        SELECT DISTINCT filename, systemuid
        FROM ASP.MONITORING_BOOKMARK_ORIGINAL;
    cursor_data CURSOR FOR
        SELECT *
        FROM ASP.MONITORING_BOOKMARK_ORIGINAL
        WHERE filename = v_filename AND systemuid = v_systemuid
        ORDER BY mindatetime, maxdatetime;
BEGIN
    -- open the file cursor to fetch the filename and systemuid
    OPEN cursor_file;
    -- logic for procedure
    CLOSE cursor_file;
END;
$func$
LANGUAGE plpgsql;
I have added this in a SQL file in my Spring Boot project. I want to create a scheduler to create and execute this stored procedure using Spring JDBC. The database used is Postgres. Is there a way to do this? I have found references for calling an existing procedure, but what I need is to create and then execute the procedure.

For running the procedure, Spring's SimpleJdbcCall could be what you are looking for.
Its use should be fairly straightforward:
create an instance supplying a DataSource
call withCatalogName and withProcedureName as needed
call declareParameters as needed
call execute to run the procedure
For creating it, a JdbcTemplate and its own execute method should be sufficient; a minimal sketch follows below.
Be aware that you might want to create the function once and only run it afterwards. Creating and running the procedure both as part of some scheduled operation is quite an uncommon requirement. You might want to re-assess the need to create the procedure each and every time if possible.
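In plain SQL terms, the scheduled job boils down to issuing two statements, e.g. both via JdbcTemplate's execute(String) method. A minimal sketch, with the function body trimmed to a placeholder:
-- (Re)create the function; CREATE OR REPLACE is idempotent,
-- so running it repeatedly from a scheduler is safe.
CREATE OR REPLACE FUNCTION delete_redundant_records()
RETURNS void AS
$func$
BEGIN
    -- cursor logic from the question goes here
END;
$func$ LANGUAGE plpgsql;

-- Run it. A function returning void is invoked with SELECT;
-- CALL only applies to procedures created with CREATE PROCEDURE (Postgres 11+).
SELECT delete_redundant_records();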

Related

rails + psql with structure dump uses PROCEDURE over FUNCTION

Every time I dump my structure.sql on a Rails app, I get PROCEDURE over FUNCTION. FUNCTION is our default, and I have to commit the file in parts, which is annoying; sometimes I miss lines, which is even worse, as it is a rather big structure.sql file.
git diff example:
-CREATE TRIGGER cache_comments_count AFTER INSERT OR DELETE OR UPDATE ON public.comments FOR EACH ROW EXECUTE PROCEDURE public.update_comments_counter();
+CREATE TRIGGER cache_comments_count AFTER INSERT OR DELETE OR UPDATE ON public.comments FOR EACH ROW EXECUTE FUNCTION public.update_comments_counter();
I'm sure there is a PostgreSQL setting for this somewhere, but I can't find it.
Whether you use FUNCTION or PROCEDURE, you get exactly the same behavior. The documentation shows:
CREATE [ CONSTRAINT ] TRIGGER name...
EXECUTE { FUNCTION | PROCEDURE } function_name ( arguments )
This means you can use either keyword, FUNCTION or PROCEDURE, but either way function_name is always called. See the demo sketched below: it uses separate triggers for insert and update, the insert trigger created with EXECUTE PROCEDURE and the update trigger with EXECUTE FUNCTION. This cannot be changed in Postgres; it would have to be a Rails setting. NOTE: prior to v11, Postgres only allowed EXECUTE PROCEDURE, even though what you had to create (and what gets called) is a trigger function.
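A minimal sketch of such a demo (table and function names here are made up; assumes Postgres 11 or later, since EXECUTE FUNCTION is not accepted before v11):
CREATE TABLE demo_t (id int);

CREATE FUNCTION demo_trg_func()
RETURNS trigger AS
$$
BEGIN
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- old spelling, still accepted:
CREATE TRIGGER demo_ins BEFORE INSERT ON demo_t
FOR EACH ROW EXECUTE PROCEDURE demo_trg_func();

-- new spelling; behavior is identical:
CREATE TRIGGER demo_upd BEFORE UPDATE ON demo_t
FOR EACH ROW EXECUTE FUNCTION demo_trg_func();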
The output of pg_get_triggerdef() changed between Postgres 11 and 12, after Postgres introduced real procedures (in version 11). Since Postgres 12 it always returns a syntax that uses EXECUTE FUNCTION, as in reality it is a function that is called when the trigger fires, not a procedure.
So this code:
create table t1 (id int);
create function trg_func()
returns trigger
as
$$
begin
    return new;
end;
$$
language plpgsql;
create trigger test_trigger
before insert or update
on t1
for each row
execute procedure trg_func();
select pg_get_triggerdef(oid)
from pg_trigger
where tgname = 'test_trigger';
returns the following in Postgres 11 and earlier:
CREATE TRIGGER test_trigger BEFORE INSERT OR UPDATE ON public.t1 FOR EACH ROW EXECUTE PROCEDURE trg_func()
and the following in Postgres 12 and later:
CREATE TRIGGER test_trigger BEFORE INSERT OR UPDATE ON public.t1 FOR EACH ROW EXECUTE FUNCTION trg_func()
I guess Rails uses pg_get_triggerdef() to obtain the trigger source. So there is nothing you can do. If you want a consistent result, you should use the same Postgres version everywhere.
The column action_statement in the view information_schema.triggers also reflects the change in naming.

How to test PROCEDURE in PostgreSQL with pgTAP?

Is there a best practice for unit testing a PostgreSQL 11+ PROCEDURE (NOT a FUNCTION) using pgTAP?
For example, how would one recommend unit testing a stored procedure like this:
CREATE OR REPLACE PROCEDURE foo.do_something(IN i_value INT)
AS
$$
BEGIN
    PERFORM foo.call_function_1(i_value);
    COMMIT;
    PERFORM foo.call_function_2(i_value);
    COMMIT;
    CALL foo.another_procedure(i_value);
END;
$$
LANGUAGE plpgsql;
This becomes difficult since pgTAP unit tests run via a stored function like this:
SELECT * FROM runtests('foo'::NAME);
This executes in a transaction, making it impossible to execute stored procedures that modify transaction state by calling COMMIT or ROLLBACK.
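For example, calling such a procedure inside an open transaction block fails immediately (a sketch using foo.do_something from above):
BEGIN;
CALL foo.do_something(1);
-- ERROR:  invalid transaction termination
ROLLBACK;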
Here is an approach I came up with, inspired by the use of interfaces along with mocking frameworks in other languages.
First we move the COMMIT operation to a stored procedure like this:
CREATE PROCEDURE foo.do_commit()
AS
$$
BEGIN
    COMMIT;
END;
$$
LANGUAGE plpgsql;
Then we alter the actual stored procedure to call do_commit instead of using the COMMIT command directly. For example:
CREATE OR REPLACE PROCEDURE foo.do_something(IN i_value INT)
AS
$$
BEGIN
    PERFORM foo.call_function_1(i_value);
    CALL foo.do_commit();
    CALL foo.another_procedure(i_value);
END;
$$
LANGUAGE plpgsql;
Since the unit tests are executed in a transaction that gets rolled back, we can temporarily replace do_commit with a mocked version for testing. A test could look something like this:
CREATE FUNCTION test.test_do_something()
RETURNS SETOF TEXT
AS
$$
BEGIN
    CREATE TEMPORARY TABLE commit_calls
    (
        commit_call BOOLEAN NOT NULL DEFAULT TRUE
    )
    ON COMMIT DROP;

    CREATE TEMPORARY TABLE function_calls
    (
        the_value INT NOT NULL
    )
    ON COMMIT DROP;

    -- mock out the commit procedure so it records the call
    -- instead of ending the transaction
    CREATE OR REPLACE PROCEDURE foo.do_commit()
    AS
    $mock_do_commit$
    BEGIN
        INSERT INTO commit_calls (commit_call)
        VALUES (DEFAULT);
    END;
    $mock_do_commit$
    LANGUAGE plpgsql;

    -- mock out the collaborator function so it records its argument
    CREATE OR REPLACE FUNCTION foo.call_function_1(i_value INT)
    RETURNS VOID
    AS
    $mock_call_function_1$
        INSERT INTO function_calls (the_value)
        VALUES (i_value);
    $mock_call_function_1$
    LANGUAGE sql;

    -- EXECUTE
    CALL foo.do_something(9);
    CALL foo.do_something(100);

    -- VERIFY
    RETURN NEXT assert.is((SELECT COUNT(*) FROM commit_calls)::INT, 2, 'verify transaction commits');
    RETURN NEXT assert.bag_eq(
        'SELECT the_value FROM function_calls',
        'VALUES (9), (100)',
        'verify function call values');
END;
$$
LANGUAGE plpgsql;
The idea is to temporarily mock out the actual function calls for testing.
This way one can unit test a stored procedure without committing real transactions.
When the test ends, it rolls back the transaction and all of the changes are discarded.

Insert values in a loop and see the progress postgresql [duplicate]

I have a PostgreSQL function which has to INSERT about 1.5 million rows into a table. What I want is to see the table getting populated with every single record's insertion. Currently, when I try with, say, about 1000 records, the table gets populated only after the complete function has executed. If I stop the function halfway through, no data gets populated. How can I make the records stay committed even if I stop after a certain number of records have been inserted?
This can be done using dblink. I show an example with one insert being committed; you will need to add your loop logic and commit every iteration. See http://www.postgresql.org/docs/9.3/static/contrib-dblink-connect.html
CREATE OR REPLACE FUNCTION log_the_dancing(ip_dance_entry text)
RETURNS INT AS
$BODY$
BEGIN
    PERFORM dblink_connect('dblink_trans','dbname=sandbox port=5433 user=postgres');
    PERFORM dblink('dblink_trans','INSERT INTO dance_log(dance_entry) SELECT ' || '''' || ip_dance_entry || '''');
    PERFORM dblink('dblink_trans','COMMIT;');
    PERFORM dblink_disconnect('dblink_trans');
    RETURN 0;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

ALTER FUNCTION log_the_dancing(ip_dance_entry text)
OWNER TO postgres;
BEGIN TRANSACTION;
select log_the_dancing('The Flamingo');
select log_the_dancing('Break Dance');
select log_the_dancing('Cha Cha');
ROLLBACK TRANSACTION;
--Show records committed even though we rolled back outer transaction
select *
from dance_log;
What you're asking for is generally called an autonomous transaction.
PostgreSQL does not support autonomous transactions at this time (9.4).
To properly support them it really needs stored procedures, not just the user-defined functions it currently supports. It's also very complicated to implement autonomous tx's in PostgreSQL for a variety of internal reasons related to its session and process model.
For now, use dblink as suggested by Bob.
If you have the flexibility to change from a function to a procedure: from PostgreSQL 11 onwards you can do internal commits by using a procedure instead of a function, invoked with the CALL command. So your function would be changed to a procedure and invoked with CALL, e.g.:
CREATE PROCEDURE transaction_test2()
LANGUAGE plpgsql
AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT * FROM test2 ORDER BY x LOOP
        INSERT INTO test1 (a) VALUES (r.x);
        COMMIT;
    END LOOP;
END;
$$;
CALL transaction_test2();
More details about transaction management regarding Postgres are available here: https://www.postgresql.org/docs/12/plpgsql-transactions.html
For PostgreSQL 9.5 or newer you can use the dynamic background workers provided by the pg_background extension. It creates an autonomous transaction. Please refer to the GitHub page of the extension; the solution is better than dblink. There is a complete guide on autonomous transaction support in PostgreSQL. There is also a third way to start an autonomous transaction in Postgres, but it requires some patching; see Peter Eisentraut's patch proposal for Oracle-style autonomous transactions.
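A minimal sketch of pg_background usage, assuming the extension is installed and reusing the dance_log table from the dblink example above (see the extension's README for the exact result-set contract):
CREATE EXTENSION IF NOT EXISTS pg_background;

-- Launch the statement in a background worker and wait for its result.
-- It runs and commits in its own transaction, so the insert survives
-- even if the calling session rolls back:
SELECT * FROM pg_background_result(
    pg_background_launch('INSERT INTO dance_log(dance_entry) VALUES (''Tango'') RETURNING dance_entry')
) AS (dance_entry TEXT);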

Trigger to insert rows in remote database after deletion

I have created a trigger that works like this:
After deleting data from the table flux_tresorerie_historique, it inserts the deleted row into a table flux_tresorerie_historique that is located in another database, archive.
I use dblink to insert the data in the remote database; the problem is that building the query is laborious, especially as the table contains more than 20 columns, and I want to create similar functions for 10 other tables.
Is there a quicker way to accomplish this task?
Here is an example that works fine:
CREATE OR REPLACE FUNCTION flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$BODY$
DECLARE
    date_rapprochement_flux TEXT;
    code_commission TEXT;
    reference_flux TEXT;
BEGIN
    IF OLD.date_rapprochement_flux IS NULL THEN
        date_rapprochement_flux = 'NULL';
    ELSE
        date_rapprochement_flux = ''''||to_char(OLD.date_rapprochement_flux, 'YYYY-MM-DD')||'''';
    END IF;

    IF OLD.code_commission IS NULL THEN
        code_commission = 'NULL';
    ELSE
        code_commission = ''''||replace(OLD.code_commission,'''','''''')||'''';
    END IF;

    IF OLD.reference_flux IS NULL THEN
        reference_flux = 'NULL';
    ELSE
        reference_flux = ''''||replace(OLD.reference_flux,'''','''''')||'''';
    END IF;

    PERFORM dblink_connect('dbname=gtr_bd_archive user=postgres password=postgres');
    PERFORM dblink_exec('insert into flux_tresorerie_historique values('||OLD.id_flux_historique||','''||OLD.date_operation_flux||''','''||OLD.date_valeur_flux||''','||date_rapprochement_flux||','''||replace(OLD.libelle_flux,'''','''''')||''','||OLD.montant_flux||','||OLD.contre_valeur_dzd||','''||replace(OLD.rib_compte_bancaire,'''','''''')||''','||OLD.frais_flux||','''||replace(OLD.sens_flux,'''','''''')||''','''||replace(OLD.statut_flux,'''','''''')||''','''||replace(OLD.code_devise,'''','''''')||''','''||replace(OLD.code_mode_paiement,'''','''''')||''','''||replace(OLD.code_agence,'''','''''')||''','''||replace(OLD.code_compte,'''','''''')||''','''||replace(OLD.code_banque,'''','''''')||''','''||OLD.date_maj_flux||''','''||replace(OLD.statut_frais,'''','''''')||''','||reference_flux||','||code_commission||','||OLD.id_flux||');');
    PERFORM dblink_disconnect();
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;
This is a limited application of replication. Requirements vary a lot, so there are a number of different established solutions, addressing different situations. Consider the overview in the manual.
Your hand-knit, trigger-based solution is one viable option for relatively few deletions. Opening and closing a separate connection for every row incurs quite an overhead. There are various other options.
While working with dblink I suggest some modifications. Most importantly:
Use format() to escape strings more elegantly.
Pass the whole row instead of passing and escaping every single column.
Don't place the password in every single trigger function.
Use a FOREIGN SERVER plus USER MAPPING. Detailed instructions here:
Persistent inserts in a UDF even if the function aborts
Basically, run once on the source server:
CREATE SERVER myserver FOREIGN DATA WRAPPER dblink_fdw
OPTIONS (hostaddr '127.0.0.1', dbname 'gtr_bd_archive');
CREATE USER MAPPING FOR role_source SERVER myserver
OPTIONS (user 'postgres', password 'secret');
Preferably, don't log in as superuser at the target server. Use a dedicated role with limited privileges to avoid privilege escalation.
And use a password file (.pgpass) on the server initiating the connection to allow password-less access to the target server. This way you don't even have to store the password in the USER MAPPING. Instructions in the last chapter of this related answer:
Run batch file with psql command without password
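For reference, a .pgpass entry is a single line of the form hostname:port:database:username:password, matched against the connection parameters. With the foreign server above it could look like this (placeholder values):
127.0.0.1:5432:gtr_bd_archive:postgres:secret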
Then:
CREATE OR REPLACE FUNCTION pg_temp.flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$func$
BEGIN
PERFORM dblink_connect('myserver'); -- name of foreign server from above
PERFORM dblink_exec( format(
$$
INSERT INTO flux_tresorerie_historique -- provide target column list!
SELECT (r).id_flux_historique
, (r).date_operation_flux
, (r).date_valeur_flux
, (r).date_rapprochement_flux::date -- 'YYYY-MM-DD' is default ISO format anyway
, (r).libelle_flux
, (r).montant_flux
, (r).contre_valeur_dzd
, (r).rib_compte_bancaire
, (r).frais_flux
, (r).sens_flux
, (r).statut_flux
, (r).code_devise
, (r).code_mode_paiement
, (r).code_agence
, (r).code_compte
, (r).code_banque
, (r).date_maj_flux
, (r).statut_frais
, (r).reference_flux
, (r).code_commission
, (r).id_flux
FROM (SELECT %L::flux_tresorerie_historique) t(r)
$$, OLD::text)); -- cast whole row type
PERFORM dblink_disconnect();
RETURN NULL; -- only for AFTER trigger
END
$func$ LANGUAGE plpgsql;
You should spell out the list of columns for the target table if the row types don't match.
If you are serious about this:
insert this row in the table flux_tresorerie_historique
That is, if you insert the whole row and the target row type is identical (no extracting a date from a timestamp etc.), you can simplify much further by passing the whole row.
CREATE OR REPLACE FUNCTION flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$func$
BEGIN
PERFORM dblink_connect('myserver'); -- name of foreign server
PERFORM dblink_exec( format(
$$
INSERT INTO flux_tresorerie_historique
SELECT (%L::flux_tresorerie_historique).*
$$
, OLD::text));
PERFORM dblink_disconnect();
RETURN NULL; -- only for AFTER trigger
END
$func$ LANGUAGE plpgsql;
Related:
How do I do large non-blocking updates in PostgreSQL?
You can use quote_nullable for this! Also, concat_ws comes in very handy:
CREATE OR REPLACE FUNCTION flux_tresorerie_historique_backup_row()
RETURNS trigger AS
$BODY$
BEGIN
perform dblink_connect('dbname=gtr_bd_archive user=postgres password=postgres');
perform dblink_exec('insert into flux_tresorerie_historique values('||
concat_ws(', ', quote_nullable(OLD.id_flux_historique),
quote_nullable(OLD.date_operation_flux),
quote_nullable(OLD.date_valeur_flux),
quote_nullable(to_char(OLD.date_rapprochement_flux, 'YYYY-MM-DD')),
quote_nullable(OLD.libelle_flux),
quote_nullable(OLD.montant_flux),
quote_nullable(OLD.contre_valeur_dzd),
quote_nullable(OLD.rib_compte_bancaire),
quote_nullable(OLD.frais_flux),
quote_nullable(OLD.sens_flux),
quote_nullable(OLD.statut_flux),
quote_nullable(OLD.code_devise),
quote_nullable(OLD.code_mode_paiement),
quote_nullable(OLD.code_agence),
quote_nullable(OLD.code_compte),
quote_nullable(OLD.code_banque),
quote_nullable(OLD.date_maj_flux),
quote_nullable(OLD.statut_frais),
quote_nullable(OLD.reference_flux),
quote_nullable(OLD.code_commission),
quote_nullable(OLD.id_flux)
)||');');
perform dblink_disconnect();
RETURN NULL;
END;
Note that it is OK to place non-string values between single quotes, since a quoted literal is for PostgreSQL just as good a literal value as one without the quotes, so it is convenient to pass all of the columns through quote_nullable. Also note that quote_nullable already outputs dates in YYYY-MM-DD format (e.g. SELECT quote_nullable(now()::date) would result in '2016-05-04'), so you may want to simplify OLD.date_rapprochement_flux even further by removing the to_char.
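A quick illustration of what quote_nullable buys you here (results shown as comments):
SELECT quote_nullable(NULL);        -- returns the unquoted text NULL, ready to embed in the statement
SELECT quote_nullable('O''Brien');  -- returns 'O''Brien' (quoted, embedded quote doubled)
SELECT quote_nullable(42);          -- returns '42' (quoting a number is harmless in a literal)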

How to add a column to an existing table then use it in a single PostgreSQL function

I have a table being created in a PostgreSQL (version 9) database by a third-party product, and I need to change that table to add a new column and then set the column in question to a standard value.
I have the following in my function:
CREATE FUNCTION alterscorecolumns()
RETURNS void AS
$BODY$
ALTER TABLE "hi_scores" ADD "total_score" integer;
UPDATE "hi_scores" SET total_score = score1+score2+score3;
$BODY$
However, I'm not allowed to do this because the function doesn't know that the total_score column exists yet. I just get the message ERROR: column "total_score" of relation "hi_scores" does not exist.
I am guessing there is some execution-plan related reason for this and that maybe I need to tell it to run the ALTER TABLE before it tries to perform the update, but I can't seem to figure out what I need to do.
You can't do it that way. The SQL in the function is parsed when you create the function. At the time of the creation of the function the column is not there, so you get the error message.
You will need to use dynamic SQL to run the UPDATE statement.
Something like:
CREATE FUNCTION alterscorecolumns()
RETURNS void AS
$BODY$
begin
    execute 'ALTER TABLE hi_scores ADD total_score integer';
    execute 'UPDATE hi_scores SET total_score = score1+score2+score3';
end;
$BODY$
language plpgsql;
(Not tested, so there might be syntax errors in there)
Just add DEFAULT to your statement like this:
ALTER TABLE "hi_scores" ADD "total_score" integer DEFAULT 0;
@mu already provided the answer: if you want to save this procedure as a function, you have to use dynamic SQL with EXECUTE, but only for the UPDATE. The ALTER TABLE statement works just fine.
As this is obviously a one-time operation (you can't add the same column twice), it hardly makes sense to persist a function for the purpose. You could use a DO statement instead:
DO
$BODY$
BEGIN
ALTER TABLE hi_scores ADD total_score integer;
EXECUTE 'UPDATE hi_scores SET total_score = score1+score2+score3';
END;
$BODY$;
But then again, keep it simple: just execute two SQL statements. As soon as the ALTER TABLE is done, the UPDATE will just work normally. Inside a transaction or not doesn't matter, as long as you execute them in order.
ALTER TABLE hi_scores ADD total_score integer;
UPDATE hi_scores SET total_score = score1+score2+score3;