FOR LOOP without a transaction - postgresql

I am doing a system redesign, and due to the change in design we need to import data from multiple similar source tables into one table. To do this, I am running a loop over a list of tables and importing all the data. However, due to the massive amount of data, I got an out-of-memory error after around 12 hours of execution and 20 tables. I have now discovered that the loop runs in a single transaction, which I don't need, since the system that fills the data is suspended for that time. I believe this transaction is also making the run take longer. My requirement is to run my query without any transaction.
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT '
        INSERT INTO dbo.tb_requests
            (node_request_id, request_type, id, process_id, data, timestamp_d1, timestamp_d2, create_time, is_processed)
        SELECT lpad(A._id, 32, ''0'')::UUID, (A.data_type + 1) request_type, B.id, B.order_id, data_value, timestamp_d1, timestamp_d2, create_time, TRUE
        FROM dbo.data_store_' || id || ' A
        JOIN dbo.tb_new_processes B
            ON A.process_id = B.process_id
        WHERE A._id != ''0'';
    ' AS log_query
    FROM dbo.list_table
    ORDER BY line_id
    LOOP
        EXECUTE r.log_query;
    END LOOP;
END$$;
This is a sample code block. It is not the actual code, but I think it gives the idea.
Error message (translated from the original Japanese):
ERROR: Out of memory
DETAIL: Request for size 32 failed in memory context "ExprContext".
SQL state: 53200

You cannot run any statement on the server side without a transaction. In recent Postgres releases (11 and later) you can run a COMMIT statement inside a DO statement. It closes the current transaction and starts a new one. This can break up a very long transaction, and it can solve the problem with the memory leak, since Postgres releases some memory at transaction end.
Or, if possible, use a shell script (bash) instead.
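For illustration, here is how the loop from the question might be restructured to commit after each table. This is only a sketch: it assumes PostgreSQL 11 or later, and that the DO statement is not itself wrapped in an outer transaction block (otherwise COMMIT inside it is not allowed).
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT '...' AS log_query  -- same generated INSERT as above
             FROM dbo.list_table
             ORDER BY line_id
    LOOP
        EXECUTE r.log_query;
        COMMIT;  -- ends the current transaction and starts a new one
    END LOOP;
END$$;
With this shape, each table's import commits on its own, so memory tied to the transaction can be released along the way and a failure only loses the table in progress.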

Related

How do you handle error handling and commits in Postgres

I am using Postgres 13.5 and I am unsure how to combine commit and error handling in a stored procedure or DO block. I know that if I include the EXCEPTION clause in my block, then I cannot include a commit.
I am new to Postgres. It has also been over 15 years since I wrote SQL that worked with transactions. When I was working with transactions, I was using Oracle and recall using AUTONOMOUS_TRANSACTION to resolve some of these issues. I am just not sure how to do something like that in Postgres.
Here is a very simplified DO block. As I said above, I know that the commits will cause the procedure to throw an exception. But if I remove the EXCEPTION clause, then how will I trap an error if it happens? After reading many things, I still have not found a solution, so I must be missing something that would lead me to it.
DO
$$
DECLARE
    v_start timestamptz;
    v_id integer;
    v_message_type varchar(500);
BEGIN
    select current_timestamp into v_start;
    select q.id, q.message_type into v_id, v_message_type from message_queue q;
    call Load_data(v_id, v_message_type);
    commit; -- if Load_Data completes successfully, I want to commit the data
    insert into log (id, message_type, status, "start", "end")
    values (v_id, v_message_type, 'Success', v_start, current_timestamp);
    commit; -- commit the log insert for success
EXCEPTION
    WHEN others THEN
        insert into log (id, message_type, status, "start", "end", error_message)
        values (v_id, v_message_type, 'Failure', v_start, current_timestamp, SQLERRM || ', ' || SQLSTATE);
        commit; -- commit the log insert for failure.
END;
$$
Thanks!
Since this is a pattern that I will have to do tens of times, I want to understand the right way to do this.
Since you cannot use transaction management statements in a subtransaction, you will have to move part of the processing to the client side.
But your sample code doesn't need any transaction management at all! Simply remove all the COMMIT statements, and the procedure will work just as you want it to. Remember that PostgreSQL uses the autocommit mode, so your procedure call from the client will automatically run in its own transaction and commit when it is done.
But perhaps your sample code is simplified, and you would like more complicated processing (looping etc.) in your actual use cases. So let's discuss your options:
One option is to remove the EXCEPTION handler and move only that part to the client side: if the procedure causes an error, roll back and insert a log message. Another, perhaps cleaner, method is to move the whole transaction management to the client side. In that case, you would replace the complete procedure with client code and call load_data directly from client code.
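To make the "just remove the COMMITs" point concrete, here is a minimal sketch reusing the names from the question's block above (message_queue, Load_data, and the log table are the asker's). The whole DO block runs as one transaction that commits automatically on success; the EXCEPTION handler rolls back to its implicit savepoint, and the failure log row still commits with the block's transaction:
DO
$$
DECLARE
    v_start timestamptz := current_timestamp;
    v_id integer;
    v_message_type varchar(500);
BEGIN
    select q.id, q.message_type into v_id, v_message_type from message_queue q;
    call Load_data(v_id, v_message_type);
    -- success path: this row commits together with Load_data's work
    insert into log (id, message_type, status, "start", "end")
    values (v_id, v_message_type, 'Success', v_start, current_timestamp);
EXCEPTION
    WHEN others THEN
        -- Load_data's work is rolled back to the implicit savepoint;
        -- only this failure row is kept and committed
        insert into log (id, message_type, status, "start", "end", error_message)
        values (v_id, v_message_type, 'Failure', v_start, current_timestamp, SQLERRM || ', ' || SQLSTATE);
END;
$$;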

How do I chain a VACUUM off of a purge routine running with pg_cron?

Postgres 13.4
I've got some pg_cron jobs set up to periodically delete older records out of log-like tables. What I'd like to do is to run VACUUM ANALYZE after performing a purge. Unfortunately, I can't work out how to do this in a stored function. Am I missing a trick? Is a stored procedure more appropriate?
As an example, here's one of my purge routines
CREATE OR REPLACE FUNCTION dba.purge_event_log (
    retain_days_in integer_positive default 14)
RETURNS int4
AS $BODY$
WITH -- Use a CTE so that we've got a way of returning the count easily.
deleted AS (
    -- Normal-looking code for this requires a literal:
    --     where your_dts < now() - INTERVAL '14 days'
    -- Don't want to use a literal, SQL injection, etc.
    -- Instead, using an interval constructor to achieve the same result:
    DELETE
    FROM dba.event_log
    WHERE dts < now() - make_interval (days => $1)
    RETURNING *
),
----------------------------------------
-- Save details to a custom log table
----------------------------------------
logit AS (
    insert into dba.event_log (name, details)
    values ('purge_event_log(' || retain_days_in::text || ')',
            'count = ' || (select count(*)::text from deleted))
)
----------------------------------------
-- Return result count
----------------------------------------
select count(*) from deleted;
$BODY$
LANGUAGE sql;
COMMENT ON FUNCTION dba.purge_event_log (integer_positive) IS
'Delete dba.event_log records older than the day count passed in, with a default retention period of 14 days.';
The truth is, I don't really care about the count(*) result from this routine in this case. But I might want a result and an additional action in some other, similar context. As you can see, the routine deletes records, uses a CTE to insert a report into another table, and then returns a result. No matter what, I figure this example is a good way to get my head around the alternatives and options in stored functions. The main thing I want to achieve here is to delete records and then run maintenance. If this is an awkward fit for a stored function or procedure, I could write out an entry to a vacuum_list table with the table name, and have another job run through that list.
If there's a smarter way to approach vacuum without the extra machinery, I'm of course interested in that. But I'm also interested in understanding the limits on what operations you can combine in PL/pgSQL routines.
Pavel Stehule's answer is correct and complete. I decided to follow up a bit here, as I like to dig in on bugs in my code, behaviors in Postgres, etc. to get a better sense of what I'm dealing with. I'm including some notes below for anyone who finds them of use.
COMMAND cannot be executed...
The reference to "VACUUM cannot be executed inside a transaction block" gave me a better way to search the docs for similarly restricted commands. The information below probably doesn't cover everything, but it's a start.
Command                Limitation
-------------------    ----------------------------------------------------------
CREATE DATABASE
ALTER DATABASE         If creating a new tablespace.
DROP DATABASE
CLUSTER                Without any parameters.
CREATE TABLESPACE
DROP TABLESPACE
REINDEX                All in system catalogs, database, or schema.
CREATE SUBSCRIPTION    When creating a replication slot (the default behavior).
ALTER SUBSCRIPTION     With refresh option as true.
DROP SUBSCRIPTION      If the subscription is associated with a replication slot.
COMMIT PREPARED
ROLLBACK PREPARED
DISCARD ALL
VACUUM
The accepted answer indicates that the limitation has nothing to do with the specific server-side language used. I've just come across an older thread that has some excellent explanations and links for stored functions and transactions:
Do stored procedures run in database transaction in Postgres?
Sample Code
I also wondered about stored procedures, as they're allowed to control transactions. I tried them out in PG 13 and, no, the code is treated like a stored function, down to the error messages.
For anyone who goes in for this sort of thing, here are "hello world" samples of SQL and PL/pgSQL stored functions and procedures to test out how VACUUM behaves in these cases. Spoiler: it doesn't work, as advertised.
SQL Function
/*
select * from dba.vacuum_sql_function();
Fails:
ERROR: VACUUM cannot be executed from a function
CONTEXT: SQL function "vacuum_sql_function" statement 1. 0.000 seconds. (Line 13).
*/
DROP FUNCTION IF EXISTS dba.vacuum_sql_function();
CREATE FUNCTION dba.vacuum_sql_function()
RETURNS VOID
LANGUAGE sql
AS $sql_code$
VACUUM ANALYZE activity;
$sql_code$;
select * from dba.vacuum_sql_function(); -- Fails.
PL/PgSQL Function
/*
select * from dba.vacuum_plpgsql_function();
Fails:
ERROR: VACUUM cannot be executed from a function
CONTEXT: SQL statement "VACUUM ANALYZE activity"
PL/pgSQL function vacuum_plpgsql_function() line 4 at SQL statement. 0.000 seconds. (Line 22).
*/
DROP FUNCTION IF EXISTS dba.vacuum_plpgsql_function();
CREATE FUNCTION dba.vacuum_plpgsql_function()
RETURNS VOID
LANGUAGE plpgsql
AS $plpgsql_code$
BEGIN
VACUUM ANALYZE activity;
END
$plpgsql_code$;
select * from dba.vacuum_plpgsql_function();
SQL Procedure
/*
call dba.vacuum_sql_procedure();
ERROR: VACUUM cannot be executed from a function
CONTEXT: SQL function "vacuum_sql_procedure" statement 1. 0.000 seconds. (Line 20).
*/
DROP PROCEDURE IF EXISTS dba.vacuum_sql_procedure();
CREATE PROCEDURE dba.vacuum_sql_procedure()
LANGUAGE SQL
AS $sql_code$
VACUUM ANALYZE activity;
$sql_code$;
call dba.vacuum_sql_procedure();
PL/PgSQL Procedure
/*
call dba.vacuum_plpgsql_procedure();
ERROR: VACUUM cannot be executed from a function
CONTEXT: SQL statement "VACUUM ANALYZE activity"
PL/pgSQL function vacuum_plpgsql_procedure() line 4 at SQL statement. 0.000 seconds. (Line 23).
*/
DROP PROCEDURE IF EXISTS dba.vacuum_plpgsql_procedure();
CREATE PROCEDURE dba.vacuum_plpgsql_procedure()
LANGUAGE plpgsql
AS $plpgsql_code$
BEGIN
VACUUM ANALYZE activity;
END
$plpgsql_code$;
call dba.vacuum_plpgsql_procedure();
Other Options
Plenty. As I understand it, VACUUM and a handful of other commands are not supported in server-side code running within Postgres. Therefore, your code needs to start from somewhere else. That can be:
Whatever cron you've got in your server's OS.
Any external client you like.
pg_cron.
As we're deployed on RDS, those last two options are where I'll look. And there's one more:
Let AUTOVACUUM and an occasional manual VACUUM do their thing.
That's pretty easy to do, and seems to work fine for the bulk of our needs.
Another Idea
If you do want a bit more control and some custom logging, I'm imagining a table like this:
CREATE TABLE IF NOT EXISTS dba.vacuum_list (
    database_name text,
    schema_name text,
    table_name text,
    run boolean,
    run_analyze boolean,
    run_full boolean,
    last_run_dts timestamp);

ALTER TABLE dba.vacuum_list ADD CONSTRAINT
    vacuum_list_pk
    PRIMARY KEY (database_name, schema_name, table_name);
That's just a sketch. The idea is like this:
You INSERT into vacuum_list when a table needs some vacuuming, at least as far as you're concerned.
In my case, that would be an UPSERT, as I don't need a full log-like table, just a single row per table of interest with the last outcome and/or pending state (see the sketch after this answer).
Periodically, a remote client, etc. connects, reads the table, and executes each specified VACUUM, according to the options specified in the record.
The external client updates the row with the last run timestamp, and whatever else you're including in the row.
Optionally, you could include fields for duration and for the change in relation size before and after vacuuming.
That last option is what I'm interested in. None of our VACUUM calls were working for quite some time, as there was a months-old dead connection from something server-side. VACUUM appears to run fine in such a case, it just can't remove a whole lot of rows (because of the very old "open" transaction ID, visibility maps, etc.). The only way to see this sort of thing seems to be to run VACUUM VERBOSE and study the output. Or to record vacuum time and, more importantly, relation size change, to flag cases where nothing seems to happen when it seems like it should.
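For the UPSERT mentioned in the list above, a sketch against the vacuum_list table as defined earlier (the values are illustrative; the ON CONFLICT target relies on the primary key declared above):
INSERT INTO dba.vacuum_list
    (database_name, schema_name, table_name, run, run_analyze, run_full)
VALUES
    (current_database(), 'dba', 'event_log', true, true, false)
ON CONFLICT (database_name, schema_name, table_name)
DO UPDATE SET
    run = EXCLUDED.run,
    run_analyze = EXCLUDED.run_analyze,
    run_full = EXCLUDED.run_full;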
VACUUM is "top level" command. It cannot be executed from PL/pgSQL ever or from any other PL.
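Given that, one way to chain the VACUUM off the purge with pg_cron is to schedule both as separate top-level jobs, since pg_cron runs each command at top level where VACUUM is allowed. A sketch only: the cron expressions and job names are illustrative, and the three-argument named-job form of cron.schedule assumes a reasonably recent pg_cron.
SELECT cron.schedule('purge-event-log', '0 3 * * *',
                     $$SELECT dba.purge_event_log()$$);
SELECT cron.schedule('vacuum-event-log', '30 3 * * *',
                     $$VACUUM ANALYZE dba.event_log$$);
The half-hour gap between the two jobs is an assumption; pick a delay that comfortably exceeds the purge's runtime.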

Conflict between UPDATE and SELECT

I have a table DB.DATA_FEED that I update using a T-SQL procedure. Every minute, the procedure below is executed 100 times for different data.
ALTER PROCEDURE [DB].[UPDATE_DATA_FEED]
    @P_MARKET_DATE varchar(max),
    @P_CURR1 int,
    @P_CURR2 int,
    @P_PERIOD float(53),
    @P_MID float(53)
AS
BEGIN
    BEGIN TRY
        UPDATE DB.DATA_FEED
        SET
            MID = @P_MID,
            MARKET_DATE = convert(datetime, @P_MARKET_DATE, 103)
        WHERE
            cast(MARKET_DATE as date) =
                cast(convert(datetime, @P_MARKET_DATE, 103) as date) AND
            CURR1 = @P_CURR1 AND
            CURR2 = @P_CURR2 AND
            PERIOD = @P_PERIOD

        IF @@TRANCOUNT > 0
            COMMIT WORK
    END TRY
    BEGIN CATCH
        --error code
    END CATCH
END
When users use the application, they also read from this table, as per the SQL below. Potentially this select can run thousands of times in one minute. (Question marks are replaced by the parser with the appropriate dates/numbers.)
DECLARE @MYDATE AS DATE;
SET @MYDATE = '?'

SELECT *
FROM DB.DATA_FEED
WHERE MARKET_DATE >= @MYDATE
  AND MARKET_DATE < DATEADD(D, 1, @MYDATE)
  AND CURR1 = ?
  AND CURR2 = ?
  AND PERIOD = ?
ORDER BY PERIOD
I have sometimes, albeit rarely, got a database lock.
Using the script from http://sqlserverplanet.com/troubleshooting/blocking-processes-lead-blocker I saw it was SPID=58. I then ran DECLARE @SPID INT; SET @SPID = 58; DBCC INPUTBUFFER(@SPID) to find the SQL script, which turned out to be my select statement.
Is there something wrong with my SQL code? What can I do to prevent such locks happening in the future?
Thanks
Writers block readers, so when someone is writing, the readers have to wait for the write to finish. There are two table hints you can try: one is NOLOCK, which reads uncommitted rows (dirty reads), and the other is READPAST, which only reads rows that have already been committed. In both cases the readers never lock the table, and therefore cannot deadlock with a writer.
Writers can block other writers but, if I understood correctly, there is only one write per execution, so the reads will interleave with the writes, reducing the deadlocks.
Hope it helps.
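Applied to the select from the question, the hint goes in the FROM clause. A sketch only; which hint to choose depends on whether dirty reads (NOLOCK) or skipped rows (READPAST) are acceptable for the application:
SELECT *
FROM DB.DATA_FEED WITH (READPAST)  -- or WITH (NOLOCK) to allow dirty reads
WHERE MARKET_DATE >= @MYDATE
  AND MARKET_DATE < DATEADD(D, 1, @MYDATE)
  AND CURR1 = ?
  AND CURR2 = ?
  AND PERIOD = ?
ORDER BY PERIOD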

Are PostgreSQL functions transactional?

Is a PostgreSQL function such as the following automatically transactional?
CREATE OR REPLACE FUNCTION refresh_materialized_view(name)
RETURNS integer AS
$BODY$
DECLARE
_table_name ALIAS FOR $1;
_entry materialized_views%ROWTYPE;
_result INT;
BEGIN
EXECUTE 'TRUNCATE TABLE ' || _table_name;
UPDATE materialized_views
SET last_refresh = CURRENT_TIMESTAMP
WHERE table_name = _table_name;
RETURN 1;
END
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
In other words, if an error occurs during the execution of the function, will any changes be rolled back? If this isn't the default behavior, how can I make the function transactional?
PostgreSQL 12 update: there is limited support for top-level PROCEDUREs that can do transaction control. You still cannot manage transactions in regular SQL-callable functions, so the below remains true except when using the new top-level procedures.
Functions are part of the transaction they're called from. Their effects are rolled back if the transaction rolls back. Their work commits if the transaction commits. Any BEGIN ... EXCEPTION blocks within the function operate like (and under the hood use) savepoints, like the SAVEPOINT and ROLLBACK TO SAVEPOINT SQL statements.
The function either succeeds in its entirety or fails in its entirety, barring BEGIN ... EXCEPTION error handling. If an error is raised within the function and not handled, the transaction calling the function is aborted. Aborted transactions cannot commit, and if they try to commit, the COMMIT is treated as ROLLBACK, same as for any other transaction in error. Observe:
regress=# BEGIN;
BEGIN
regress=# SELECT 1/0;
ERROR: division by zero
regress=# COMMIT;
ROLLBACK
See how the transaction, which is in the error state due to the zero division, rolls back on COMMIT?
If you call a function without an explicit surrounding transaction, the rules are exactly the same as for any other Pg statement:
BEGIN;
SELECT refresh_materialized_view(name);
COMMIT;
(where COMMIT will fail if the SELECT raised an error).
PostgreSQL does not (yet) support autonomous transactions in functions, where the procedure/function could commit/rollback independently of the calling transaction. This can be simulated using a new session via dblink.
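A minimal sketch of that dblink simulation (it assumes the dblink extension is installed; the audit_log table and the loopback connection string are made up for illustration):
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION log_autonomously(msg text)
RETURNS void AS
$$
BEGIN
    -- dblink_exec runs the statement over a separate connection, so the
    -- INSERT commits independently and survives a rollback of the caller.
    PERFORM dblink_exec('dbname=' || current_database(),
                        format('INSERT INTO audit_log (message) VALUES (%L)', msg));
END;
$$ LANGUAGE plpgsql;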
BUT, things that aren't transactional or are imperfectly transactional exist in PostgreSQL. If it has non-transactional behaviour in a normal BEGIN; do stuff; COMMIT; block, it has non-transactional behaviour in a function too. For example, nextval and setval, TRUNCATE, etc.
As my knowledge of PostgreSQL is less deep than Craig Ringer's, I will try to give a shorter answer: yes.
If you execute a function that has an error in it, none of the steps will impact the database.
Also, if you execute a query in PgAdmin, the same happens.
For example, if you execute in a query:
update your_table yt set column1 = 10 where yt.id=20;
select anything_that_do_not_exists;
The update to the row with id = 20 of your_table will not be saved in the database.
UPDATE Sep 2018
To clarify the concept, I have made a little example with the non-transactional function nextval.
First, let's create a sequence:
create sequence test_sequence start 100;
Then, let's execute:
update your_table yt set column1 = 10 where yt.id=20;
select nextval('test_sequence');
select anything_that_do_not_exists;
Now, if we open another query and execute
select nextval('test_sequence');
We will get 101, because the first value (100) was used by the earlier query (sequences are not transactional), even though the update was not committed.
https://www.postgresql.org/docs/current/static/plpgsql-structure.html
It is important not to confuse the use of BEGIN/END for grouping statements in PL/pgSQL with the similarly-named SQL commands for transaction control. PL/pgSQL's BEGIN/END are only for grouping; they do not start or end a transaction. Functions and trigger procedures are always executed within a transaction established by an outer query — they cannot start or commit that transaction, since there would be no context for them to execute in. However, a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction. For more about that see Section 39.6.6.
At the function level, it is not transactional. In other words, all the statements in the function belong to one single transaction, following the database's default autocommit behavior (autocommit is true by default). But in any case, you have to call the function using
select schemaName.functionName()
The statement 'select schemaName.functionName()' is a single transaction; let's name the transaction T1. All the statements in the function then belong to the transaction T1. In this way, the whole function runs in a single transaction.
Postgres 14 update: all statements between the BEGIN and END block of a procedure/function are executed in a single transaction. Thus, any error arising during execution of this block will cause an automatic rollback of the transaction.
This atomic behavior applies to triggers as well.

DB2 deadlock timeout Sqlstate: 40001, reason code 68 due to update statements called from servlet using SQL

I am calling update statements one after the other from a servlet to DB2. I am getting the error SQLSTATE 40001, reason code 68, which I found is due to a deadlock timeout.
How can I resolve this issue?
Can it be resolved by setting query timeout?
If yes then how to use it with update statements in servlet or where to use it?
The reason code 68 already tells you this is due to a lock timeout (a deadlock is reason code 2). It could be due to other users running queries at the same time that use the same data you are accessing, or to your own multiple updates.
Begin by running db2pd -db locktest -locks show detail from a db2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the ID number you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order, meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
Addendum: if your servlet or another guilty application is using select statements found to be involved in the deadlock, you can try appending WITH UR to those select statements, if accuracy of the newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my select statements were returning so much data that they locked the tables long enough to interfere with other SQL statements.)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
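As a sketch of that clause in use (the table and column names here are made up for illustration):
SELECT ORDER_ID, STATUS
FROM APP.ORDERS
WHERE STATUS = 'PENDING'
FOR READ ONLY WITH UR  -- uncommitted read: takes no row locks, may see dirty data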