Heroku Postgres pg:backups Restore From Copy

I suspect I already know the answer here, but wanted to confirm. When you use the command line interface for Postgres backups, the command heroku pg:backups returns a list of Backups, Restores, and Copies. Apparently, my daily backup routine stopped working somewhere along the way, so my backups list was empty. I did, however, see this:
My suspicion is that this is merely a historical record showing that a copy was performed; however, I was hopeful that maybe I would be able to recover the database from that point in time. The restore command doesn't work with that ID, though. It gives me:
Can anyone confirm whether this copy can be recovered?
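For reference, the restore invocation I'm attempting looks like the following (the ID and app name are placeholders, not my real values):
heroku pg:backups:restore c001 DATABASE_URL --app example-app
As far as I can tell, pg:backups:restore normally expects a backup (b-prefixed) ID or a public URL to a dump file.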

Related

GCP Cloud SQL (PostgreSQL) Backups remain after deleting the instance

We have recently started using GCP Cloud SQL, and have been testing backups and restores. We have done the following and seem to have backups that are not associated with an instance, and we can't remove them.
We created a database and have started using it.
We created a backup schedule for the database backup and wanted to test the restoration process.
We created a clone of the database, and then tested a restore from the original backup. This was all fine and worked as expected.
We left the clone running for a few days, then decided that it was not needed anymore.
We deleted the clone, but by this point, it had created a few backups.
If we run:
gcloud sql backups list --instance -
it shows all backups, including 3 from the deleted instance. This is strange, as automated backups are supposed to be removed when the instance is deleted. If we try to run:
gcloud sql backups delete [ID]
I get an HTTP 403 error saying not authorized (I am a GCP admin, so have all permissions).
We have tried re-creating the instance with the same name, but the backups are presumably linked via an ID, not the name.
Does anyone know:
a) Will we be charged for this?
b) Is there a way to delete these?
c) It may well be a legitimate scenario to want to keep backups and perhaps restore them later, but the restoration process does not work for these orphaned backups; we have tried.
d) I found this post; does anyone know if Google has progressed this?
Any help on this would be much appreciated.
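For completeness, the delete subcommand normally takes the source instance as a flag, so the full invocation we have been attempting looks like this (the ID and instance name are placeholders):
gcloud sql backups delete [BACKUP_ID] --instance=[INSTANCE_NAME]
Whether this flag accepts the name of an already-deleted instance is part of what we are unsure about.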

I have loaded the wrong psql dump into my database, any way to revert?

Ok, I screwed up.
I dumped one of my psql (9.6.18) staging databases with the following command:
pg_dump -U postgres -d <dbname> > db.out
And after doing some testing, I "restored" the data using the following command:
psql -f db.out postgres
Notice the absence of the -d option? Yup. That trailing postgres was supposed to be the username.
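What I meant to run, matching the pg_dump above, was of course something like:
psql -U postgres -d <dbname> -f db.out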
Annnd as the database happened to have the same name as its user, it overwrote the 'default' database (postgres), which had data that other QAs were using.
I cancelled the operation as soon as I realised my mistake, but the damage was still done. Around 1/3 to 1/2 of the database is roughly identical to the staging database - at least in terms of the schema.
Is there any way to revert this? I am still looking for other dumps, in case any of these guys made one, but I don't think there are any from the past two to three months. Seems like I've got no choice but to own up and apologise to them in the morning.
Without a recent dump or some sort of PITR/replication setup, you can't easily undo this. The only option is to manually go through the log of what was restored and remove/alter it in the postgres database. This will work for the schema; the data is another matter.
FYI, the postgres database should not really be used as a 'working' database. It is there to be a database to connect to for doing other operations, such as CREATE DATABASE, or to bootstrap your way into a cluster. Had it been left empty, the above would not have been a problem: you could have done, from another database, DROP DATABASE postgres; and then CREATE DATABASE postgres;.
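A minimal sketch of that reset, assuming nothing in postgres needs keeping (run it from template1, since you cannot drop the database you are connected to):
psql -U postgres -d template1 -c 'DROP DATABASE postgres;'
psql -U postgres -d template1 -c 'CREATE DATABASE postgres;'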
Do you have a capture of the output of the psql -f db.out postgres run?
Since the pg_dump didn't specify --clean or -c, it should not have overwritten anything, just appended. And if your tables have unique or primary keys, most of the data copy operations should have failed with unique-key violations and rolled back: even one overlapping row (per table) would roll back the entire COPY for that table.
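A quick illustration of that all-or-nothing behaviour, using a throwaway table:
CREATE TABLE t (id int PRIMARY KEY);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1), (2);
-- ERROR: duplicate key value violates unique constraint "t_pkey"
-- row (2) is not kept either; the whole statement rolls back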
Without having the output, it will be hard to figure out what damage has actually been done.
You should also immediately copy the pg_xlog data somewhere safe. If it comes down to it, you might be able to use pg_xlogdump to figure out which changes committed and which did not.
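A sketch of that, assuming a stock 9.6 layout (the data-directory path and the starting segment name are placeholders for your own):
cp -a /var/lib/postgresql/9.6/main/pg_xlog /safe/place/pg_xlog_copy
pg_xlogdump -p /safe/place/pg_xlog_copy 000000010000000000000001 | grep -i commit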

Unable to restore Cloud SQL backup

I followed this link to restore my backup:
https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring
I've tried restoring on multiple instances too, but on every instance this error comes up in the logs:
Couldn't repair table: mysql.general_log
Failed to write to mysql.general_log: Incorrect key file for table 'general_log'; try to repair it
First, address the error. Your general query log is enabled, although the installation default is disabled; if you do not need the table enabled, then once everything is working, disable it. I would suggest taking a fresh backup and then:
A. Repair the table using the mysqlcheck -r YourDB general_log command. (If this is a MyISAM table, use myisamchk instead.)
B. If that does not repair the table, try mysqlcheck -r YourDB to repair the whole database (sometimes more than just the one table needs repairing).
C. If the restore still doesn't work, then there are a couple of possibilities: the database may be corrupted, or the backup file may be corrupted. You don't mention any other errors, so I do not suspect the whole database is corrupted.
D. To check for a corrupted file, create a fresh database instance and try your restore there. If that does not work, try restoring a single table to confirm whether the backup file is usable.
Be prepared for the possibility your backup file is corrupt.
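On Cloud SQL, the general log is normally toggled through instance flags rather than SET GLOBAL; a sketch (the instance name is a placeholder, the flag list you pass replaces any flags already set, and patching may restart the instance):
gcloud sql instances patch [INSTANCE_NAME] --database-flags general_log=off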

Can I copy the postgresql /base directory as a DB backup?

Don't shoot me, I'm only the OP!
When we need to back up our DB, we are always able to shut down PostgreSQL completely. After it is down, I found I could copy the "/base" directory with the binary data in it to another location, checksum it for accuracy, and later restore it if/when necessary. This has even worked when upgrading to a later version of PostgreSQL. Integrity of the various 'conf' files is not an issue, as that is handled elsewhere (i.e. by other processes/procedures) in the system.
Is there any risk to this approach that I am missing?
The "File System Level Backup" link in Abelisto's comment is what JoeG is talking about doing. https://www.postgresql.org/docs/current/static/backup-file.html
To be safe I would go up one more level, to "main" on our Ubuntu systems, when taking the snapshot, and thoroughly go through the caveats of doing file-level backups. I was tempted to post the caveats here, but I'd end up quoting the entire page.
The thing to be most aware of (in a 'simple' Postgres environment) is the relationship between the postgres database, a user database, and the pg_clog and pg_xlog directories. If you only take "base", you lose the transaction and WAL information, and in more complex installations, other necessary information.
If the caveat conditions listed there do not exist in your environment, and you can do a full shutdown, this is a valid backup strategy, and it can be much faster than a pg_dump.
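A sketch of that directory-level approach on a stock Ubuntu install (the paths, version and service name are assumptions; adjust for your cluster):
sudo systemctl stop postgresql
sudo tar -czf /backup/pgdata-$(date +%F).tar.gz -C /var/lib/postgresql/9.6 main
sudo systemctl start postgresql
Tarring at the "main" level captures base, pg_xlog, and pg_clog together, which is exactly the point above.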

MongoDB: restoring a broken database

Running MongoDB Server on Windows.
I had a big DB, with backups etc., but foolish as I am, instead of using the shell to delete some entries, I first copied them to another directory and then deleted them via Explorer. Of course, nothing worked: MongoDB was missing some files and did not even start working properly, with an I/O error in the log (file not found). So I copied the files back where they belonged, again via Explorer, retried it, and I still get the error message in the log that some file is missing. The weird thing is, that file never existed in those folders I deleted from...
Well, at least I still have a backup dump made with mongodump, but I cannot restore the dump: to restore, I have to start the MongoDB server, which will not start because some folders of my DB entries are missing (the service itself starts, but I can't access the server instance), and to recreate the missing folders I would have to use mongorestore... Some bad loop I've got going there...
So I created a new DB and wanted to restore my old dump into the new DB, but now I get an invalid header error when using mongorestore --gzip --archive -d test "dump_path"
Any help, how to resolve my problem?
Solved it... I created a new DB and started the MongoDB server, but this time, instead of writing mongorestore --gzip --archive filename dumppath, which exited with a 'too many arguments' error, you have to type:
mongorestore --gzip --archive="filename" dumppath
and then everything works as one would expect...
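The = appears to matter because --archive takes an optional value: written with a space, the filename is parsed as an extra positional argument, hence the 'too many arguments' complaint. With the value attached, a positional dump path shouldn't be needed at all, so a minimal working form (the path is a placeholder) would be:
mongorestore --gzip --archive="/backups/dump.gz"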