I just accidentally updated a whole table: I set a column value without a WHERE clause, and now 80 rows have that value :s. Is there a way to revert that? I don't want to be murdered by my boss.
You can "revert" the table if you are using archive logging. You restore a backup and perform a roll-forward to just before your update. It is not that easy, but you can "undo" that update.
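The answer doesn't say which DBMS, but "archive logging" and "roll forward" are Db2 terms. Assuming Db2, the recovery could look roughly like this sketch; the database name, backup location, and timestamps are all hypothetical:

    -- restore the most recent backup taken before the bad UPDATE
    RESTORE DATABASE mydb FROM /db2/backups TAKEN AT 20240101120000

    -- then roll the logs forward to just before the UPDATE ran, and stop there
    ROLLFORWARD DATABASE mydb TO 2024-01-01-12.29.00.000000 USING LOCAL TIME AND STOP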
Related
I dropped a MySQL table, and I don't believe I explicitly enabled any sort of backups. Is there a way to undo the DROP statement?
Once a table is dropped, it is gone. Without a backup you won't get it back. So no, there is no way to undo the DROP statement.
The closest I got was to partially restore the structure (with no data) by retracing the CREATE/ALTER statements. In Workbench I went to Help > Locate Log Files and analyzed the log file for the given DB.
P.S.: The problem is that, for reasons unknown, when you alter a table via the "Alter Table" button in Workbench, the queries are not recorded in the log files; it just says "Altered changes" or something similar, which is pretty dumb. Otherwise the whole structure could have been restored...
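For what it's worth, if you want future DDL to be captured no matter how Workbench issues it, one option is MySQL's general query log. A minimal sketch (general_log, log_output, and the mysql.general_log table are standard MySQL, but note this logs every statement, so you may not want it permanently on):

    -- send the general log to a table and switch it on
    SET GLOBAL log_output = 'TABLE';
    SET GLOBAL general_log = 'ON';

    -- later, dig the DDL back out
    SELECT event_time, argument
    FROM mysql.general_log
    WHERE argument LIKE 'CREATE TABLE%' OR argument LIKE 'ALTER TABLE%';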
I was going through this link, and it mentions that PostgreSQL does not support an undo log. So I am wondering how PostgreSQL rolls back a transaction without an undo log.
When a row is updated or deleted in PostgreSQL, it is not really updated or deleted. The old row version just remains in the table and is marked as removed by a certain transaction. That makes an update much like a delete of the old and insert of the new row version.
Rolling back a transaction is nothing more than marking the transaction as aborted. The old row version then automatically becomes the current row version again, with no need to undo anything.
The Achilles' heel of this technique is that data modifications produce “dead row versions”, which have to be reclaimed later by a background procedure called “vacuuming”.
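A minimal demonstration of that behaviour, assuming a scratch table t:

    CREATE TABLE t (id int, val text);
    INSERT INTO t VALUES (1, 'old');

    BEGIN;
    UPDATE t SET val = 'new' WHERE id = 1;  -- writes a new row version; the old one stays
    ROLLBACK;                               -- merely marks this transaction as aborted

    SELECT xmin, xmax, * FROM t;            -- the old version is simply current again
    VACUUM t;                               -- reclaims the dead version left by the aborted UPDATE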
Imagine dropping a subscription and recreating it from scratch. Is it possible to ignore existing data during the first synchronization?
Creating a subscription with (copy_data=false) is not an option because I do want to copy data, I just don't want to copy already existing data.
Example: There is a users table and a corresponding publication on the master. This table has 1 million rows and every minute a new row is added. Then we drop the subscription for a day.
If we recreate the subscription with (copy_data=true), replication will not start due to a conflict with already existing data. If we specify (copy_data=false), 1440 new rows will be missing. How can we synchronize the publisher and the subscriber properly?
You cannot do that, because PostgreSQL has no way of telling when the data were added.
You'd have to reconcile the tables by hand (or INSERT ... ON CONFLICT DO NOTHING).
Unfortunately PostgreSQL does not support nice skip options for conflicts yet, but I believe this will be enhanced in the future.
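A sketch of that reconciliation, assuming the users table from the question has a primary key id, and that the publisher's current rows have been staged into a hypothetical table users_from_publisher (e.g. via a dump or a foreign table); subscription and connection names are made up:

    -- re-attach without the initial copy
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=app'
        PUBLICATION mypub
        WITH (copy_data = false);

    -- then backfill the rows that arrived while the subscription was gone
    INSERT INTO users
    SELECT * FROM users_from_publisher
    ON CONFLICT (id) DO NOTHING;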
Based on @Laurenz Albe's answer, which recommends the use of the statement:
INSERT ... ON CONFLICT DO NOTHING.
I believe it would be better to use the following command, which will also take care of any possible updates to your data before you start the subscription again:
INSERT ... ON CONFLICT ... DO UPDATE SET ...
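Spelled out, the upsert could look like this sketch, assuming the same users table with primary key id plus a single name column, and the same hypothetical users_from_publisher staging table as above:

    INSERT INTO users (id, name)
    SELECT id, name FROM users_from_publisher
    ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;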
Finally, I have to say that both are dirty solutions: during the execution of the above statement and the creation of the subscription, new rows may have arrived, and those will be lost until you perform the custom sync again.
I have seen some other suggested solutions using the LSN number from the PostgreSQL log file...
For me, the more elegant and safe solution is to delete all the data from the destination table and create the subscription again!
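That wipe-and-resync approach could look like this sketch (subscription, publication, and connection names are the same hypothetical ones as above):

    DROP SUBSCRIPTION IF EXISTS mysub;
    TRUNCATE users;
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=app'
        PUBLICATION mypub
        WITH (copy_data = true);  -- the initial copy now starts from an empty table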
Our target DB is DB2 and the source is Oracle. We found DDL changes in the CDC Management Console, and I need to get the instance back into proper running condition.
Paul Vernon's answer assumes that what you are looking for is how to replicate DDL changes. I will assume that you don't want to replicate DDL changes, but just want to restart the subscription after minor layout changes (for example, after a column size has been increased, or after a column you are not going to replicate has changed).
If that is the case, right-click the specific table mapping in your subscription and update the table definition. I am not sure, but I think that after that you have to refresh the entire subscription. If the tables are very large, you will want to avoid refreshing them all, but that's another question.
Of course, if a column has been added in the table change and you want to deal with it, you can edit the column mapping and make the specific assignment you want for that column.
I hope this helps.
We have a requirement that says we should keep a copy of all the items that were in our system at any point. The simplest way to explain it would be replication that ignores DELETE statements (INSERT and UPDATE are OK).
Is this possible? Or maybe the better question is: what is the best approach to tackle this kind of problem?
Make a copy/replica of the current database, and use triggers via dblink from the current database to the replica. Use AFTER INSERT and AFTER UPDATE triggers to insert and update data in the replica.
So whenever a row is inserted or updated in the current database, the change is directly reflected in the replica.
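A sketch of that trigger, assuming PostgreSQL 11+, the dblink extension, and a hypothetical items(id int primary key, val text) table that exists on both the source and the replica; the connection string is made up:

    CREATE EXTENSION IF NOT EXISTS dblink;

    CREATE OR REPLACE FUNCTION mirror_items() RETURNS trigger AS $$
    BEGIN
        PERFORM dblink_exec(
            'host=replica dbname=archive',  -- hypothetical replica connection
            format('INSERT INTO items (id, val) VALUES (%s, %L)
                    ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val',
                   NEW.id, NEW.val));
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_mirror
        AFTER INSERT OR UPDATE ON items
        FOR EACH ROW EXECUTE FUNCTION mirror_items();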
I'm not sure that I understand the question completely, but I'll try to help:
First (contrary to @Sunit), I suggest avoiding triggers. Triggers introduce additional overhead and impact performance.
The solution I would use (and am actually using in a few of my projects with similar requirements) is not to use DELETE at all. Instead, you can add a bit (boolean) column called "Deleted", set its default value to 0 (false), and instead of deleting the row, update this field to 1 (true). You'll also need to change your other queries (SELECT) to include something like "WHERE Deleted = 0".
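A sketch of that soft-delete approach, using a hypothetical items table and PostgreSQL booleans rather than the bit-style 0/1:

    ALTER TABLE items ADD COLUMN deleted boolean NOT NULL DEFAULT false;

    -- instead of: DELETE FROM items WHERE id = 42;
    UPDATE items SET deleted = true WHERE id = 42;

    -- and every SELECT filters the flag
    SELECT * FROM items WHERE deleted = false;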
Another option is to continue using DELETE as usual, allowing records to be deleted from both the primary and the replica, but to configure WAL archiving and store the WAL archives in a shared directory. This allows point-in-time recovery, meaning that you will be able to restore another PostgreSQL instance to the state of your cluster at any moment in time (i.e. before the deletion). This way you'll have a trace of deleted records, but a pretty complicated procedure for reaching them. Depending on how often the deleted records will be checked in the future (maybe they are not checked at all, but simply kept for just-in-case tracking), this approach may also help.
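A sketch of the relevant settings, assuming PostgreSQL 12+ and a hypothetical shared archive directory /mnt/wal_archive; these go in postgresql.conf, and archive_command here is the cp example from the documentation:

    wal_level = replica
    archive_mode = on
    archive_command = 'cp %p /mnt/wal_archive/%f'

    # later, on the instance being restored to a point before the deletion,
    # restore a base backup, create an empty recovery.signal file, and set:
    restore_command = 'cp /mnt/wal_archive/%f %p'
    recovery_target_time = '2021-06-01 12:00:00'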