Revert an updated column in a table in PostgreSQL

I accidentally updated 71 rows of one column in a table. It's production, and I want to learn how I can revert those changes. It's a PostgreSQL environment.
Just after the update I realized the WHERE condition was missing. I tried to roll it back, but:
db=# ROLLBACK;
WARNING: there is no transaction in progress
ROLLBACK
I'm not sure why it isn't getting rolled back. I have no backup files for it; otherwise I would have restored the data from them. So, can someone suggest how else I can proceed with reverting those changes?
Is there any way PostgreSQL stores logs that can be used to restore my data?

If you have committed the transaction, you cannot roll back or undo the update. Please find a detailed answer here:
Can I rollback a transaction I've already committed? (data loss)
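
For the future, one way to protect yourself on production is to run such updates inside an explicit transaction, so a missing WHERE clause can still be rolled back before anything becomes permanent. A minimal sketch, with a made-up table and column standing in for the real ones:

BEGIN;
UPDATE mytable SET start_time = '13:06:00' WHERE id = 123;   -- hypothetical table/column
SELECT count(*) FROM mytable WHERE start_time = '13:06:00';  -- sanity-check the blast radius
-- COMMIT; only if the count looks right, otherwise:
ROLLBACK;

The WARNING: there is no transaction in progress in the question appears because psql runs in autocommit mode by default, so the UPDATE was already committed the moment the statement finished; ROLLBACK only helps while a transaction is still open.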

Related

How to fix "uncommitted xmin from before xid cutoff needs to be frozen, automatic vacuum of table "db.pg_catalog.pg_largeobject"

The PostgreSQL database error log produces this error all day, and it continues into the next day:
[23523] ERROR: uncommitted xmin 53354897 from before xid cutoff 210760077 needs to be frozen
[23523] CONTEXT: automatic vacuum of table "xxxx.pg_catalog.pg_largeobject"
[23523] ERROR: uncommitted xmin 53354897 from before xid cutoff 210760077 needs to be frozen
[23523] CONTEXT: automatic vacuum of table "xxxx.pg_catalog.pg_largeobject_metadata"
The errors involve system catalogs (pg_catalog.pg_largeobject, pg_catalog.pg_largeobject_metadata).
I need help on how to fix this, or on what will be affected if I disable autovacuum on these two tables.
Note:
DB : PostgreSQL version 11.6
OS : Red Hat Enterprise Linux Server release 7.8
You are experiencing data corruption, and if you don't take action, you are headed for disaster: if autovacuum keeps failing (as it will), you will eventually get close enough to transaction ID wraparound that your database will stop accepting transactions.
Create a new database cluster, dump the corrupted cluster with pg_dumpall, restore it into the new cluster and remove the old one.
You are running an old minor release (current is 11.10), so you are missing about a year of bug fixes. The cause could be a software bug or (more often) a hardware problem.
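
As a rough sketch of that dump-and-restore, assuming the new cluster has already been initialized and listens on a different port (5433 here is an assumption), and assuming pg_dumpall can still read the corrupted data:

pg_dumpall -p 5432 -f full_dump.sql         # dump everything from the corrupted cluster
psql -p 5433 -d postgres -f full_dump.sql   # load it into the new cluster

After verifying the new cluster, point the applications at it and retire the old one.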
As Laurenz told you, this is data corruption, but you don't have to dump and restore everything.
If the affected rows are not important, you can delete them by the xmin number 53354897.
For more safety, you can dump the affected table before and delete afterwards, with no downtime.
In my case, the error happened in a log table and I could delete the rows without any data loss.
Observation: if you have data corruption, you have to check your hardware and data integrity as well, even if you delete the problematic rows.
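
A minimal sketch of that approach, using a hypothetical log table named app_log; the xmin system column can be compared directly against the transaction ID from the error message:

-- keep a copy of the suspect row versions before touching anything
CREATE TABLE app_log_suspect AS
    SELECT * FROM app_log WHERE xmin = '53354897';
-- then remove the row versions that autovacuum keeps choking on
DELETE FROM app_log WHERE xmin = '53354897';

Note that in this question the affected tables are system catalogs (pg_largeobject and pg_largeobject_metadata), so any direct manipulation there needs superuser rights and deserves the same caution, plus the hardware check mentioned above.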

How does Postgres support rollback without undo logs

I was going through this link, and it mentions that PostgreSQL does not use an undo log. So I am wondering how PostgreSQL rolls back a transaction without an undo log.
When a row is updated or deleted in PostgreSQL, it is not really updated or deleted. The old row version just remains in the table and is marked as removed by a certain transaction. That makes an update much like a delete of the old and insert of the new row version.
To roll back a transaction is nothing but to mark the transaction as aborted. Then the old row version automatically becomes the current row version again, without a need for undoing anything.
The Achilles' heel of this technique is that data modifications produce “dead row versions”, which have to be reclaimed later by a background procedure called “vacuuming”.
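
A small sketch that makes this visible on a throwaway table; the xmin and xmax system columns record which transactions created and invalidated each row version:

CREATE TABLE demo (id int, val text);
INSERT INTO demo VALUES (1, 'old');

BEGIN;
UPDATE demo SET val = 'new' WHERE id = 1;  -- writes a new row version, marks the old one as removed
SELECT xmin, xmax, * FROM demo;            -- inside the transaction the new version is visible
ROLLBACK;                                  -- the transaction is merely marked as aborted

SELECT xmin, xmax, * FROM demo;            -- 'old' is the current version again; nothing was undone

The dead 'new' version stays in the table until vacuum reclaims it.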

Postgres returns errors on future transactions

I am currently migrating from MySQL to postgres using pgbouncer for my connection pool.
We select/insert/update/delete lots of data in Postgres, all of which comes from remote sources, so we try to make the data quality as good as possible before an insert, but sometimes bad data slips through.
This causes Postgres to report: current transaction is aborted, commands ignored until end of transaction block
This would be fine, except that the connection through pgbouncer then reports this error for every query. I get the same behavior if I connect directly to Postgres instead of pgbouncer. I'd expect it to roll back whichever transaction caused the issue.
Is there a way to just roll back and continue working as normal? Everything I've read just says to fix the query, but in this case that's not always possible.
You need to use the ROLLBACK command. This will undo everything since the last BEGIN TRANSACTION or START TRANSACTION. Note that transactions do not nest; if you've begun multiple transactions without committing, this will roll back the outermost transaction.
This will drop you into autocommit mode. You may want to issue a new BEGIN TRANSACTION command to open a new transaction.
You should also be able to ROLLBACK TO SAVEPOINT, if you have a savepoint from before the error.
(If at all possible, it is preferred to just fix the query, but depending on what you're doing, that may be prohibitively difficult.)
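
A short sketch of the savepoint pattern, assuming a hypothetical events table with a primary key on its first column, so one bad statement does not poison the rest of the transaction:

BEGIN;
INSERT INTO events VALUES (1, 'good row');

SAVEPOINT before_risky_insert;
INSERT INTO events VALUES (1, 'duplicate');        -- fails, e.g. on the primary key

ROLLBACK TO SAVEPOINT before_risky_insert;         -- clears the error, keeps the earlier insert
INSERT INTO events VALUES (2, 'another good row');
COMMIT;

pgbouncer does not issue savepoints on your behalf, so either the application wraps risky statements like this or it sends a plain ROLLBACK when it sees the error.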

How to rollback an update in PostgreSQL

While editing some records in my PostgreSQL database using SQL in the terminal (on Ubuntu Lucid), I made a wrong update.
Instead of -
update mytable set start_time='13:06:00' where id=123;
I typed -
update mytable set start_time='13:06:00';
So all records now have the same start_time value.
Is there a way to undo this change? There are some 500+ records in the table, and I do not know what the start_time value for each record was.
Is it lost forever?
I'm assuming it was a transaction that's already committed? If so, that's what "commit" means, you can't go back.
Some data may be recoverable if you're lucky. Stop the database NOW.
Here's an answer I wrote on the same topic earlier. I hope it's helpful.
This might help too: Recover deleted rows in postgresql.
Unless the data is absolutely critical, just restore from backups, it'll be lots easier and less painful. If you didn't have backups, consider yourself soundly thwacked.
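
If you do stop the database to attempt recovery, take a file-level copy of the whole data directory first, so nothing you try afterwards can make things worse. A rough sketch, where the data directory path is only a guess and must be adjusted to your installation:

pg_ctl stop -m fast -D /var/lib/postgresql/data              # path is an assumption
cp -a /var/lib/postgresql/data /backup/pgdata_before_repair  # cold copy of the stopped cluster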
If you catch the mistake and immediately bring down any applications using the database and take it offline, you can potentially use Point-in-Time Recovery (PITR) to replay your Write Ahead Log (WAL) files up to, but not including, the moment when the errant transaction was made. This would return the database to the state it was in prior, thus effectively 'undoing' that transaction.
As an approach for a production application database it has a number of obvious limitations, but there are circumstances in which PITR may be the best option available, especially when critical data loss has occurred. However, it is of no value if archiving was not already configured before the corruption event.
https://www.postgresql.org/docs/current/static/continuous-archiving.html
Similar capabilities exist with other relational database engines.
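
As a sketch of what the PITR step looks like on a current PostgreSQL (version 12 or later; older releases put these settings in recovery.conf), assuming WAL archiving to /mnt/wal_archive was already configured before the mistake:

# restore the latest base backup, create recovery.signal, then set in postgresql.conf:
restore_command = 'cp /mnt/wal_archive/%f "%p"'   # archive location is an assumption
recovery_target_time = '2021-03-01 13:05:00'      # placeholder: a moment just before the bad UPDATE
recovery_target_inclusive = false                 # stop before the target, not after it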

How commit works in oracle

I have a couple of statements; the pseudocode looks something like this:
insert into local_table
savepoint sp1
insert into remote_db -- using db_link
update local_table2
delete from local_table
commit
Now I am a bit confused about the insert into remote_db statement. Is there any chance that the commit has a different effect on the local DB than on the remote DB?
The problem is somewhat complex: the script that copies data from the local DB to the remote DB is producing duplicates. After investigating, that's the only place that looks suspicious, but I am not sure. I would really appreciate it if someone could shed light on how COMMIT works in Oracle.
If you are asking whether the commit could potentially cause duplicate rows, no, that's not possible.
Given the way that distributed transactions take place, it is possible that that transaction would not be committed at all on the remote database (in which case it would be an in-doubt distributed transaction that the remote DBA would likely need to resolve). But if the transaction is committed successfully, it's going to be committed correctly. It's not possible that some rows would get committed and others wouldn't or that duplicate rows that didn't exist prior to the commit would be created by the act of committing.
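
If you suspect the distributed part sometimes ends up in-doubt, a short sketch of how a DBA might look for and resolve such transactions (DBA_2PC_PENDING and COMMIT/ROLLBACK FORCE are Oracle's standard tools for this; the transaction id below is a placeholder):

-- list in-doubt distributed transactions on either database
select local_tran_id, state, fail_time from dba_2pc_pending;

-- resolve one manually, using a local_tran_id reported above
rollback force '1.23.456';   -- or commit force '...' if the work should be applied

That still cannot create duplicate rows by itself; duplicates are more likely to come from the copy script retrying an insert whose first attempt actually committed.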