Rollback DML statement in pgAdmin - postgresql

In pgAdmin, if I execute an insert query, I don't see any way to either commit or roll back the statement I just ran (I know it auto-commits). I'm used to Oracle and SQL Developer, where I could run a statement and then roll back the last statement I ran at the press of a button. How would I achieve the same thing here?

Use a transaction in the SQL window:
BEGIN;
DROP TABLE foo;
ROLLBACK; -- or COMMIT;
Edit: another example:
BEGIN;
INSERT INTO foo(bar) VALUES ('baz') RETURNING bar; -- the results will be returned
SELECT * FROM other_table; -- some more result
UPDATE other_table SET var = 'bla' WHERE id = 1 RETURNING *; -- the results will be returned
-- and when you're done with all statements and have seen the results:
ROLLBACK; -- or COMMIT

I also DEARLY prefer the Oracle way of putting everything in a transaction automatically, to help avoid catastrophic manual mistakes.
Having auto-commit enabled by default in an enterprise product is, IMO, beyond vicious, and nothing but a COMPLETELY, UTTERLY INSANE design choice :(
Anyway: working with Postgres, one always needs to remember to issue
BEGIN;
at the start of manual work or SQL scripts.
As a practical habit: where you would say
COMMIT;
in Oracle, I use the line
END; BEGIN;
in Postgres, which does the same thing, i.e. it commits the current transaction and immediately starts a new one.
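A minimal sketch of that habit in action (my_table and the values are illustrative):
BEGIN;
UPDATE my_table SET active = true WHERE id = 42;
END; BEGIN; -- commits the UPDATE and immediately opens a new transaction
DELETE FROM my_table WHERE id = 43; -- suppose this one was a mistake
ROLLBACK; -- only the DELETE is undone; the UPDATE above is already committed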
When using JDBC or similar, to create a connection, always use some method, e.g. getPGConnection(), that includes:
...
Connection dbConn = DriverManager.getConnection(dbUrl, dbUser, dbPassword);
dbConn.setAutoCommit(false);
...
to make sure every connection has auto-commit disabled.

If you are using pgAdmin 4, you can turn auto commit and/or auto rollback on and off.
Go to the File menu and select Preferences. Under the SQL Editor tab -> Options you can see the settings to turn auto commit/rollback on and off.
(Screenshot: the auto commit/rollback options in the Preferences dialog.)

Related

How do you handle error handling and commits in Postgres

I am using Postgres 13.5 and I am unsure how to combine commit and error handling in a stored procedure or DO block. I know that if I include the EXCEPTION clause in my block, then I cannot include a commit.
I am new to Postgres. It has also been over 15 years since I have written SQL that was working with transactions. When I was working with transactions I was using Oracle and recall using AUTONOMOUS_TRANSACTION to resolve some of these issues. I am just not sure how to do something like that in Postgres.
Here is a very simplified DO block. As I said above, I know that the COMMITs will cause the procedure to throw an exception. But if I remove the EXCEPTION clause, then how will I trap an error if it happens? After reading many things, I still have not found a solution, so I am not understanding something that would lead me to it.
DO
$$
DECLARE
    v_start timestamptz;
    v_id integer;
    v_message_type varchar(500);
BEGIN
    select current_timestamp into v_start;
    select q.id, q.message_type into v_id, v_message_type from message_queue q;
    call load_data(v_id, v_message_type);
    commit; -- if load_data completes successfully, I want to commit the data
    insert into log (id, message_type, status, start, "end")
    values (v_id, v_message_type, 'Success', v_start, current_timestamp);
    commit; -- commit the log insert for success
EXCEPTION
    WHEN others THEN
        insert into log (id, message_type, status, start, "end", error_message)
        values (v_id, v_message_type, 'Failure', v_start, current_timestamp,
                SQLERRM || ', ' || SQLSTATE);
        commit; -- commit the log insert for failure
END;
$$;
Thanks!
Since this is a pattern that I will have to do tens of times, I want to understand the right way to do this.
Since you cannot use transaction management statements in a subtransaction, you will have to move part of the processing to the client side.
But your sample code doesn't need any transaction management at all! Simply remove all the COMMIT statements, and the procedure will work just as you want it to. Remember that PostgreSQL runs in autocommit mode, so your procedure call from the client will automatically run in its own transaction and commit when it is done.
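A minimal sketch of that COMMIT-free version (keeping the question's hypothetical message_queue, log, and load_data names):
DO
$$
DECLARE
    v_start timestamptz := current_timestamp;
    v_id integer;
    v_message_type varchar(500);
BEGIN
    SELECT q.id, q.message_type INTO v_id, v_message_type FROM message_queue q;
    CALL load_data(v_id, v_message_type);
    INSERT INTO log (id, message_type, status, start, "end")
    VALUES (v_id, v_message_type, 'Success', v_start, current_timestamp);
EXCEPTION
    WHEN others THEN
        -- the implicit savepoint undoes the work above; the log row
        -- then commits together with the block when the DO statement ends
        INSERT INTO log (id, message_type, status, start, "end", error_message)
        VALUES (v_id, v_message_type, 'Failure', v_start, current_timestamp,
                SQLERRM || ', ' || SQLSTATE);
END;
$$;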
But perhaps your sample code is simplified, and you would like more complicated processing (looping etc.) in your actual use cases. So let's discuss your options:
One option is to remove the EXCEPTION handler and move only that part to the client side: if the procedure causes an error, roll back and insert a log message. Another, perhaps cleaner, method is to move the whole transaction management to the client side. In that case, you would replace the complete procedure with client code and call load_data directly from client code.

How to prevent or avoid running update and delete statements without where clauses in PostgreSQL

How to prevent or avoid running update or delete statements without where clauses in PostgreSQL?
Something like MySQL's SQL_SAFE_UPDATES option is needed for PostgreSQL.
For example:
UPDATE table_name SET active=1; -- Prevent this statement or throw error message.
UPDATE table_name SET active=1 WHERE id=1; -- This is allowed
My company's database has many users with INSERT and UPDATE privileges, and any one of them could run such an unsafe update.
How can this scenario be handled?
Is there a way to handle unsafe updates in PostgreSQL, for example by writing a trigger or using an extension?
I have switched off autocommit to avoid these errors, so I always have a transaction that I can roll back. All you have to do is modify .psqlrc:
\set AUTOCOMMIT off
\echo AUTOCOMMIT = :AUTOCOMMIT
\set PROMPT1 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT2 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT3 '>> '
You don't have to insert the PROMPT statements. But they are helpful because they change the psql prompt to show the transaction status.
Another advantage of this approach is that it gives you a chance to prevent any erroneous changes.
Example (psql):
database=# SELECT * FROM my_table; -- implicit start transaction; see prompt
-- output result
database*# UPDATE my_table SET my_column = 1; -- missed where clause
UPDATE 525125 -- Oh, no!
database*# ROLLBACK; -- Phew! revert the wrong changes
ROLLBACK
database=# -- I'm completely operational and all of my circuits working perfectly
There actually was a discussion on the hackers list about this very feature. It had a mixed reception, but might have been accepted if the author had persisted.
As it is, the best you can do is a statement level trigger that bleats if you modify too many rows:
CREATE TABLE deleteme
AS SELECT i FROM generate_series(1, 1000) AS i;
CREATE FUNCTION stop_mass_deletes() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
    IF (SELECT count(*) FROM OLD) > TG_ARGV[0]::bigint THEN
        RAISE EXCEPTION 'must not modify more than % rows', TG_ARGV[0];
    END IF;
    RETURN NULL;
END;$$;
CREATE TRIGGER stop_mass_deletes AFTER DELETE ON deleteme
REFERENCING OLD TABLE AS old FOR EACH STATEMENT
EXECUTE FUNCTION stop_mass_deletes(10);
DELETE FROM deleteme WHERE i < 100;
ERROR: must not modify more than 10 rows
CONTEXT: PL/pgSQL function stop_mass_deletes() line 1 at RAISE
DELETE FROM deleteme WHERE i < 10;
DELETE 9
This will have a certain performance impact on deletes.
This works from v10 on, when transition tables were introduced.
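Since the question asks about UPDATE as well, the same guard function can be reused there; a sketch (only the trigger event changes, and the transition table still holds the old rows):
CREATE TRIGGER stop_mass_updates AFTER UPDATE ON deleteme
REFERENCING OLD TABLE AS old FOR EACH STATEMENT
EXECUTE FUNCTION stop_mass_deletes(10);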
If you can afford to make it a little less convenient for your users, you might try revoking the UPDATE privilege from all "standard" users and creating a stored procedure like this:
CREATE FUNCTION update(table_name, col_name, new_value, condition) RETURNS void
/*
Check if condition is acceptable, create and run UPDATE statement
*/
LANGUAGE plpgsql SECURITY DEFINER
Because of SECURITY DEFINER, your users will be able to UPDATE this way despite not having the UPDATE privilege themselves.
I'm not sure if this is a good approach, but this way you can enforce UPDATE (or anything else) requirements as strict as you wish.
Of course, the more complicated the required UPDATEs are, the more complicated your procedure has to be, but if this is mostly about updating a single row by ID (as in your example), this might be worth a try. A runnable sketch follows.
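A minimal sketch of such a wrapper, assuming updates are restricted to one column at a time, selected by integer id (safe_update and its parameter names are illustrative, not part of the original answer):
CREATE FUNCTION safe_update(p_table regclass, p_col text, p_value text, p_id integer)
RETURNS void
LANGUAGE plpgsql SECURITY DEFINER AS
$$BEGIN
    -- %I quotes the column identifier, %L quotes the value as a literal
    -- (so the server coerces it to the column's type); the id is a parameter
    EXECUTE format('UPDATE %s SET %I = %L WHERE id = $1', p_table, p_col, p_value)
    USING p_id;
END;$$;

-- usage: every update necessarily carries an id-based WHERE clause
SELECT safe_update('table_name', 'active', '1', 1);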

sqlworkbench-j not closing transaction connection

I am using SQL Workbench/J to query Redshift data. I am facing an issue with locked tables whenever I query them; it happens even for simple SELECT statements. I know this occurs because Workbench implicitly issues a BEGIN before every statement to guard any changes to the data, so after every query we need to write END TRANSACTION.
Is there any option in SQL Workbench/J to disable the BEGIN statement or to issue the END TRANSACTION statement automatically?
When you set up the Redshift connection profile, check the "Autocommit" option.
See here for more detailed instructions, especially step 10:
https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-using-workbench.html

Are PostgreSQL functions transactional?

Is a PostgreSQL function such as the following automatically transactional?
CREATE OR REPLACE FUNCTION refresh_materialized_view(name)
RETURNS integer AS
$BODY$
DECLARE
    _table_name ALIAS FOR $1;
    _entry materialized_views%ROWTYPE;
    _result INT;
BEGIN
    EXECUTE 'TRUNCATE TABLE ' || _table_name;
    UPDATE materialized_views
    SET last_refresh = CURRENT_TIMESTAMP
    WHERE table_name = _table_name;
    RETURN 1;
END
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
In other words, if an error occurs during the execution of the function, will any changes be rolled back? If this isn't the default behavior, how can I make the function transactional?
PostgreSQL 11 update: since PostgreSQL 11 there is limited support for top-level PROCEDUREs that can do transaction control. You still cannot manage transactions in regular SQL-callable functions, so the below remains true except when using the new top-level procedures.
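A minimal sketch of such a procedure (the table t(v) and the procedure name are illustrative assumptions):
CREATE PROCEDURE insert_and_commit(p_val integer)
LANGUAGE plpgsql AS
$$BEGIN
    INSERT INTO t(v) VALUES (p_val);
    COMMIT; -- allowed in a procedure invoked with CALL, but still an error in a function
END;$$;

CALL insert_and_commit(42);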
Functions are part of the transaction they're called from. Their effects are rolled back if the transaction rolls back; their work commits if the transaction commits. Any BEGIN ... EXCEPTION blocks within the function operate like (and under the hood use) savepoints, the same mechanism as the SAVEPOINT and ROLLBACK TO SAVEPOINT SQL statements.
The function either succeeds in its entirety or fails in its entirety, barring BEGIN ... EXCEPTION error handling. If an error is raised within the function and not handled, the transaction calling the function is aborted. Aborted transactions cannot commit, and if they try to commit the COMMIT is treated as ROLLBACK, the same as for any other transaction in error. Observe:
regress=# BEGIN;
BEGIN
regress=# SELECT 1/0;
ERROR: division by zero
regress=# COMMIT;
ROLLBACK
See how the transaction, which is in the error state due to the zero division, rolls back on COMMIT?
If you call a function without an explicit surrounding transaction, the rules are exactly the same as for any other Pg statement; the call behaves as if wrapped like this:
BEGIN;
SELECT refresh_materialized_view(name);
COMMIT;
(where COMMIT will fail if the SELECT raised an error).
PostgreSQL does not (yet) support autonomous transactions in functions, where the procedure/function could commit/rollback independently of the calling transaction. This can be simulated using a new session via dblink, as sketched below.
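A minimal sketch of that simulation (dblink_exec runs a statement over a second connection; the loopback connection string and the audit_log table are assumptions for illustration):
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE FUNCTION log_autonomously(p_message text) RETURNS void
LANGUAGE plpgsql AS
$$BEGIN
    -- the second session commits its INSERT immediately, even if the
    -- transaction calling this function later rolls back
    PERFORM dblink_exec('dbname=' || current_database(),
                        format('INSERT INTO audit_log(message) VALUES (%L)', p_message));
END;$$;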
BUT, things that aren't transactional or are imperfectly transactional exist in PostgreSQL. If it has non-transactional behaviour in a normal BEGIN; do stuff; COMMIT; block, it has non-transactional behaviour in a function too. For example, nextval and setval, TRUNCATE, etc.
As my knowledge of PostgreSQL is less deep than Craig Ringer's, I will try to give a shorter answer: Yes.
If you execute a function that has an error in it, none of its steps will affect the database.
Also, if you execute a query in pgAdmin, the same happens.
For example, if you execute in a query:
update your_table yt set column1 = 10 where yt.id=20;
select anything_that_do_not_exists;
The update to the row with id = 20 of your_table will not be saved in the database.
Update (Sep 2018):
To clarify the concept, I have made a little example with the non-transactional function nextval.
First, let's create a sequence:
create sequence test_sequence start 100;
Then, let's execute:
update your_table yt set column1 = 10 where yt.id=20;
select nextval('test_sequence');
select anything_that_do_not_exists;
Now, if we open another query window and execute
select nextval('test_sequence');
We will get 101, because the first value (100) was consumed by the previous query (sequences are not transactional), even though that query's update was not committed.
https://www.postgresql.org/docs/current/static/plpgsql-structure.html
It is important not to confuse the use of BEGIN/END for grouping statements in PL/pgSQL with the similarly-named SQL commands for transaction control. PL/pgSQL's BEGIN/END are only for grouping; they do not start or end a transaction. Functions and trigger procedures are always executed within a transaction established by an outer query — they cannot start or commit that transaction, since there would be no context for them to execute in. However, a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction. For more about that see Section 39.6.6.
At the function level, it is not transactional in itself. In other words, every statement in the function belongs to one single transaction, following the default db autocommit setting (autocommit is true by default). But anyway, you have to call the function using
select schemaName.functionName()
The statement select schemaName.functionName() is a single transaction; let's name the transaction T1. All the statements in the function therefore belong to the transaction T1. In this way, the function runs in a single transaction.
Postgres 14 update: all statements written between the BEGIN and END block of a procedure/function are executed in a single transaction. Thus, any error arising while this block executes will cause an automatic rollback of the transaction.
Additionally, this atomic behaviour applies to triggers as well.

detach database/take offline fails

I'm currently in the process of detaching a development database on the production server. Since this is a production server, I don't want to restart the SQL Server service; that is the worst-case scenario.
Obviously I tried detaching it through SSMS. It told me there was an active connection, and I disconnected it. When detaching the second time, it told me that was impossible since the database was in use.
I tried EXEC sp_detach_db 'DB' with no luck.
I tried taking the database offline. That ran for about 15 minutes, at which point I got bored and cancelled it.
Anyway, I tried everything: I made sure all connections were killed using the connections indicator in the Detach Database dialog in SSMS.
The following returned 0 results:
USE master
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('DB')
And the following is running for 18 minutes now:
ALTER DATABASE DB SET OFFLINE WITH ROLLBACK IMMEDIATE
I did restart SSMS regularly during all this to make sure SSMS wasn't the culprit by locking something invisibly.
Isn't there a way to brute force it? The database schema is something I'm pretty fond of but the data is expendable.
Hopefully there is some sort of a quick fix? :)
The DBA will try to reset the process tonight but I'd like to know the fix for this just in case.
Thx!
ps: I'm using DTC ... so perhaps this might explain why my database got locked up all of a sudden?
edit:
I'm now doing the following, which results in an infinite execution of the final statement. The first query even returns 0 rows, so I suppose killing the users won't even matter.
USE [master]
GO
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('Database')
GO
DECLARE @return_value int
EXEC @return_value = [dbo].[usp_KillUsers]
@p_DBName = 'Database'
SELECT 'Return Value' = @return_value
GO
ALTER DATABASE [Database] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
How are you connecting to SQL Server? Is it possible that you're trying to detach the database while you yourself are connected to it? This can block a detach, depending on the version of SQL Server involved.
You can try using the DAC (Dedicated Administrator Connection) for stuff like this.
Try killing all connections before detaching the database, e.g.:
USE [master]
GO
/****** Object: StoredProcedure [dbo].[usp_KillUsers] Script Date: 08/18/2009 10:42:48 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_KillUsers]
@p_DBName SYSNAME = NULL
AS
/* Check parameters */
/* Check for a DB name */
IF (@p_DBName IS NULL)
BEGIN
    PRINT 'You must supply a DB Name'
    RETURN
END -- DB is NULL
IF (@p_DBName = 'master')
BEGIN
    PRINT 'You cannot run this process against the master database!'
    RETURN
END -- master supplied
IF (@p_DBName = DB_NAME())
BEGIN
    PRINT 'You cannot run this process against your connections database!'
    RETURN
END -- your database supplied
SET NOCOUNT ON
/* Declare variables */
DECLARE @v_spid INT,
        @v_SQL NVARCHAR(255)
/* Declare the table cursor (identity) */
DECLARE c_Users CURSOR
FAST_FORWARD FOR
SELECT spid
FROM master..sysprocesses (NOLOCK)
WHERE db_name(dbid) LIKE @p_DBName
OPEN c_Users
FETCH NEXT FROM c_Users INTO @v_spid
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        SELECT @v_SQL = 'KILL ' + CONVERT(NVARCHAR, @v_spid)
        -- PRINT @v_SQL
        EXEC (@v_SQL)
    END -- -2
    FETCH NEXT FROM c_Users INTO @v_spid
END -- While
CLOSE c_Users
DEALLOCATE c_Users
This is a script to kill all user connections to a database; just pass the database name and it will close them. Then you can try to detach the database. This script is one I found a while back, and I cannot claim it as my own. I do not mean this as any sort of plagiarism; I just don't have the source.
SELECT DISTINCT req_transactionUOW FROM syslockinfo
KILL 'UOW_value_returned' -- kill the unit(s) of work belonging to process id -2
The cause was DTC being a little bit annoying and locking up the database completely with a failed transaction. Now I would like to know why this happened, but at least this gives me the ability to reset the broken transactions when the problem re-occurs.
I'm posting it here since I'm sure it will help some people who are experiencing the same issues.