Postgres - delete record after trigger

I have a Postgres table with some triggers that fire when records are updated or deleted. They basically archive the records so they are not truly deleted - they just change certain attributes so the records are no longer displayed in the corresponding views.
Sometimes I want to manually delete a record for good, but I can't, because the trigger fires and does its thing whenever I execute a DELETE query.
Example:
DELETE FROM records WHERE operator = 20
Is there a way to run DELETE query and bypass a trigger that fires on DELETE?

With a setup like this, I think the typical approach is to avoid granting any direct privileges on the underlying table, and put your INSERT / UPDATE / DELETE triggers on the view (allowing the table owner to change it as needed).
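For illustration, a minimal sketch of that layout (the view, column, function, and trigger names are all assumptions, not from the question):
CREATE VIEW active_records AS
SELECT * FROM records WHERE archived = false;
CREATE OR REPLACE FUNCTION archive_on_delete() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE records SET archived = true WHERE id = OLD.id;  -- archive instead of delete
    RETURN OLD;
END;
$$;
CREATE TRIGGER active_records_delete
INSTEAD OF DELETE ON active_records
FOR EACH ROW EXECUTE PROCEDURE archive_on_delete();
A DELETE against the view then archives the row, while the table owner can still run a real DELETE against records directly.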
If you are the table owner, you can use ALTER TABLE to temporarily DISABLE and re-ENABLE the trigger. As long as you do all of this within a transaction, you'll be safe against concurrent DELETEs, though they'll block until your transaction commits.
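Something like this, as a sketch (archive_records is a made-up trigger name):
BEGIN;
ALTER TABLE records DISABLE TRIGGER archive_records;  -- takes a strong table lock
DELETE FROM records WHERE operator = 20;              -- no trigger fires
ALTER TABLE records ENABLE TRIGGER archive_records;
COMMIT;                                               -- concurrent DELETEs resume here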
If you have superuser privileges, you can also prevent triggers from firing by setting session_replication_role to replica, provided that the trigger in question hasn't been configured to fire on replicated databases.
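A sketch of that variant:
SET session_replication_role = replica;  -- skips triggers, except ENABLE REPLICA/ALWAYS ones
DELETE FROM records WHERE operator = 20;
RESET session_replication_role;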

Postgres Table lock on delete query

I have two AWS tasks where one could be writing to a table while the other runs a delete query at the same time; both could be running parallel queries, but operating on different rows. So the question is: if I run DELETE FROM table WHERE column_name = some condition, will Postgres apply a table-level lock or a row-level lock? If a table-level lock is applied, the other task will not be able to write to the table.
PostgreSQL will only lock the rows you delete.
Concurrent INSERTs will not be affected.
There are many different lock modes. The table will be locked, but in a mode that still allows other INSERT, DELETE, and UPDATE operations to happen concurrently. The rows actually deleted will also be locked, in a more restrictive mode.
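You can watch this from the deleting session itself; a sketch with a hypothetical table my_table:
BEGIN;
DELETE FROM my_table WHERE some_column = 42;
SELECT mode, granted FROM pg_locks
WHERE relation = 'my_table'::regclass;
-- shows RowExclusiveLock on the table: it conflicts with DDL, but not with
-- other sessions' INSERT, UPDATE or DELETE; only the deleted rows themselves
-- are locked exclusively
COMMIT;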

Optimize the trigger to add audit log

I have a local database which is the production database, on which all operations are performed in real time. I am storing a log of each action in an audit log table in another database via a trigger. The trigger checks whether any of a row's columns have changed; if so, it removes that row and adds it AGAIN (which is not a good way I think, as it should simply update it, but for certain reasons I need to delete and re-add it).
There are some tables on which operations happen rapidly - hundreds of rows are added at a time. This slows down the process of saving the data into the audit log table. If the trigger has to delete 100 rows and add 100 again, it obviously affects performance, and as the number of rows grows, performance degrades further.
What would be the best practice to tackle this? I have been looking into Read Replicas and Foreign Data Wrappers, but a Read Replica is read-only in PostgreSQL, and I don't really understand how a Foreign Data Wrapper would help me; it was suggested by one of my colleagues.
Hope someone can guide me in the right direction.
A log is append-only by definition. Loggers should never be modifying or removing existing entries.
Audit logs are no different. Audit triggers should INSERT an entry for each change (however you want to define "change"). They should never UPDATE or DELETE anything*.
The change and the corresponding log entry should be written to the same database within the same transaction, to ensure atomicity/consistency; logging directly to a remote database will always leave you with a window where the log is committed but the change is not (or vice versa).
If you need to aggregate these log entries and push them to a different database, you should do it from an external process, not within the trigger itself. If you need this to happen in real time, you can inform the process of new changes via a notification channel.
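To make that concrete, here is a minimal sketch (the monitored table accounts, the audit_log layout, and the channel name are all assumptions):
CREATE TABLE IF NOT EXISTS audit_log (
    id         bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    action     text        NOT NULL,
    row_data   jsonb       NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now()
);
CREATE OR REPLACE FUNCTION audit_insert_only() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- INSERT only: the log entry commits or rolls back with the change itself
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, action, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
    ELSE
        INSERT INTO audit_log (table_name, action, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
    END IF;
    PERFORM pg_notify('audit_channel', TG_TABLE_NAME);  -- wake an external aggregator
    RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH ROW triggers
END;
$$;
CREATE TRIGGER accounts_audit
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE PROCEDURE audit_insert_only();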
* In fact, you should revoke UPDATE/DELETE privileges on the audit table from the user inserting the logs. Furthermore, the trigger should ideally be a SECURITY DEFINER function owned by a privileged user with INSERT rights on the log table. The user connecting to the database should not be given permission to write to the log table directly.
This ensures that if your client application is compromised (whether due to a malfunction, or a malicious user e.g. exploiting an SQL injection vulnerability), then your audit log retains a complete and accurate record of everything it changed.
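A sketch of that hardening, with made-up role names app_user (the connecting user) and audit_owner (the privileged owner):
REVOKE ALL ON audit_log FROM PUBLIC;
REVOKE ALL ON audit_log FROM app_user;                -- no direct writes to the log
GRANT INSERT ON audit_log TO audit_owner;
ALTER FUNCTION audit_insert_only() OWNER TO audit_owner;
ALTER FUNCTION audit_insert_only() SECURITY DEFINER;  -- runs with the owner's rights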

How to protect the trigger from deletion?

Version: PostgreSQL 9.6
I create a trigger. Even if I create the trigger as a superuser, the owner of the database can remove it.
Is it possible to protect the trigger from deletion?
You cannot keep the owner of the table from dropping a trigger on it unless you want to go to the extreme of writing an event trigger for that.
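If you do want to go that far, here is a sketch of such an event trigger (the names are made up; event triggers can only be created by a superuser):
CREATE OR REPLACE FUNCTION guard_triggers() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
        IF obj.object_type = 'trigger' THEN
            RAISE EXCEPTION 'dropping trigger % is not allowed', obj.object_identity;
        END IF;
    END LOOP;
END;
$$;
CREATE EVENT TRIGGER protect_triggers ON sql_drop
EXECUTE PROCEDURE guard_triggers();
Note that this also aborts a DROP TABLE whose cascade would remove a trigger.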
Maybe you should use a different permission concept that gives people only limited privileges if you want to keep them from dropping your triggers: instead of allowing others to own tables, grant them privileges on the tables.

How to drop a trigger in a resilient manner in postgresql

I'm looking to drop a trigger that's currently in production because it's no longer needed, but the problem is that when I try the simplest way, which is something like
drop trigger <triggername> on <tablename>
It caused a huge table lock and everything froze!
What the trigger does is:
When a row is inserted or updated, check for a field's contents, split it and populate another table.
How should I proceed to instantly disable it (and drop it afterwards) without causing problems in our production environment?
Thanks in advance and sorry for my English ;)
You could try ALTER TABLE ... DISABLE TRIGGER - but it requires the same strength of lock, so I don't think it'll do you much good.
There's work in PostgreSQL 9.4 to make ALTER TABLE take weaker locks for some operations. It might help with this.
In the meantime, I'd use CREATE OR REPLACE FUNCTION to replace the trigger's function with a simple no-op.
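A sketch, assuming the trigger's function is named split_and_copy(); CREATE OR REPLACE swaps the function body without touching the trigger itself, so no table lock is needed:
CREATE OR REPLACE FUNCTION split_and_copy() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    RETURN NEW;  -- no-op: for a BEFORE trigger this passes the row through unchanged
END;
$$;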
Then, to actually drop the trigger, I'd probably write a script that does:
BEGIN;
LOCK TABLE the_table IN ACCESS EXCLUSIVE MODE NOWAIT;
DROP TRIGGER ...;
COMMIT;
If anybody's using the table the script will abort at the LOCK TABLE.
I'd then run it in a loop until it succeeded.
If that didn't work (say, the table is always busy) but most transactions were really short, I might attempt a LOCK TABLE without NOWAIT, but set a short statement_timeout. So the script would be something like:
BEGIN;
SET LOCAL statement_timeout = '5s';
LOCK TABLE the_table IN ACCESS EXCLUSIVE MODE;
DROP TRIGGER ...;
COMMIT;
That ensures a fairly short disruption by failing if it can't complete the job in time. Again, I'd run it periodically until it succeeded.
If neither approach was effective - say, due to lots of long-running transactions - I'd probably just accept the need to lock the table for a little while. I'd start the DROP TRIGGER, then pg_terminate_backend all concurrent sessions that held locks on the table, so their connections dropped and their transactions terminated. That would let the DROP TRIGGER proceed promptly, at the cost of greater disruption. You can only consider an approach like this if your apps are well-written, so that they simply retry transactions on transient errors like connection drops.
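From another session, while the DROP TRIGGER waits, that could look something like this (a sketch; terminating the same pid more than once is harmless):
SELECT pg_terminate_backend(l.pid)
FROM pg_locks l
WHERE l.relation = 'the_table'::regclass
  AND l.pid <> pg_backend_pid();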
Another possible approach is to disable (not drop) the trigger by modifying the system catalogs directly.
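That hack would look roughly like this (superuser only, unsupported, and entirely at your own risk; the names are assumptions):
UPDATE pg_trigger
SET tgenabled = 'D'            -- 'D' = disabled; 'O' restores the normal state
WHERE tgrelid = 'the_table'::regclass
  AND tgname = 'the_trigger';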
According to the docs, since 9.5 ALTER TABLE ... DISABLE TRIGGER takes only a SHARE ROW EXCLUSIVE lock, so that might be the way to go now.

What is supported as transactional in postgres

I am trying to find out what Postgres can handle safely inside a transaction, but I cannot find the relevant information in the Postgres manual. So far I have found out the following:
UPDATE, INSERT and DELETE are fully supported inside transactions and are rolled back when the transaction is not completed
DROP TABLE is not handled safely inside a transaction and is undone with a CREATE TABLE, which recreates the dropped table but does not repopulate it
CREATE TABLE is also not truly transactional and is instead undone with a corresponding DROP TABLE
Is this correct? Also, I could not find any hints as to the handling of ALTER TABLE and TRUNCATE. How are those handled, and are they safe inside transactions? Is there a difference in handling between different types of transactions and different versions of Postgres?
DROP TABLE is transactional. To undo this, you need to issue a ROLLBACK not a CREATE TABLE. The same goes for CREATE TABLE (which is also undone using ROLLBACK).
ROLLBACK is always the only correct way to undo a transaction - that includes ALTER TABLE and TRUNCATE.
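For example (the table name is made up):
BEGIN;
DROP TABLE important_stuff;            -- gone, but only within this transaction
ROLLBACK;
SELECT count(*) FROM important_stuff;  -- the table is back, data intact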
The only things that are never transactional in Postgres are the numbers generated by a sequence (though CREATE/ALTER/DROP SEQUENCE themselves are transactional).
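You can see the sequence caveat directly (my_seq is a made-up name):
BEGIN;
SELECT nextval('my_seq');  -- say this returns 1
ROLLBACK;
SELECT nextval('my_seq');  -- returns 2: the rolled-back value is never reused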
As far as I'm aware, all of these commands are transaction-aware, except for TRUNCATE ... RESTART IDENTITY (and even that has been transactional since 9.1).
See the manual on concurrency control and transaction-related commands.