Restore dump locally without pre-existing database - postgresql

I have received a backup file from a customer and wish to restore it.
I've tried executing the following on the command line (it runs for ages and seems to get to the end, but does not produce a new database on my localhost server).
pg_restore --create -h localhost c:\temp\myBackup.backup
I've also tried to run the restore through pgAdmin 4, but if I create an empty database and restore into it, it reports that indexes already exist, and I can't find the options to restore the backup and have it create a new database (without first selecting an empty one).
Any pointers would be greatly appreciated. I'm happy to use any method.
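One pointer, as a hedged sketch (the connection details here are assumptions about your setup): without -d, pg_restore writes the generated SQL to standard output instead of executing it, which would explain a long run that produces no database. Pointing it at an existing maintenance database (commonly postgres) lets --create issue the CREATE DATABASE itself:
pg_restore --create -h localhost -U postgres -d postgres c:\temp\myBackup.backup
pg_restore connects to the postgres database, creates the database named inside the backup, then reconnects to that new database and restores its contents.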

Related

Need to convert a dump.sql to a *fname.dump file for restoration of Odoo

My last working database backup of an Odoo 13 CE system was a full one, including the filestore. I'm getting timeouts when trying to restore "a copy" via the Odoo database manager page. I thought I could just do a partial restore (dump.sql & manifest.json), dump the filestore, recompress and upload, but that brought everything to its knees (it errored with "no *.dump file found"). So I logged into the server, dropped my failed restore and restarted the odoo service, and everything is back to somewhat normal, with the database I want to replace active.
Is there a way to convert that .sql to a .dump, or some other way to get my .sql added to my Postgres DB? I'm fairly green re: psql, so if I'm missing something simple, please feel free to shove it down my throat.
TIA
To restore an SQL backup file to a new database:
psql YOUR_DATABASE_NAME < YOUR_FILENAME
You can read more about backing up and restoring a Postgres DB here: https://www.postgresql.org/docs/11/backup-dump.html
When restoring a large database (with the filestore), you have to increase the server limits so the process can finish.
Add these parameters to your Odoo server command line:
--limit-time-cpu=6000 --limit-time-real=12000
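For example, if you start Odoo from the command line, the flags go on the server invocation (a sketch; odoo-bin and the config path are assumptions about your installation):
./odoo-bin -c /etc/odoo/odoo.conf --limit-time-cpu=6000 --limit-time-real=12000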
Restore the SQL File
psql database_name < your_file.sql
Restore the Dump File
pg_restore -d database_name < your_file.dump
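Putting it together for a brand-new database (a sketch; the database name, role, and file names are placeholders):
createdb -U odoo my_new_db
psql -U odoo my_new_db < dump.sql
pg_restore -U odoo -d my_new_db your_file.dump
Use the psql line for a plain-SQL dump and the pg_restore line for a custom-format .dump; you only need one of the two.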

What is the best way to backfill old database data to an existing Postgres database?

A new docker image was recently stood up to replace an existing postgres database. A dump was taken of the database before the old instance was shut down using the following command:
pg_dump -h localhost -p 5432 -d *dbname* -U postgres > *dbname*.pgdump
We'd like to concatenate or append this data to the new database in order to "backfill" some older historical data. The database name and schema of the two databases are identical. What is the easiest, safest way to do this? Secondly, does Postgres need to be shut down during the process?
If overlapping primary keys or unique columns have been assigned to the new data, then there will be no clean way to merge them without putting in some work to clean that up. Assuming that hasn't happened...
The current dump file will have CREATE statements for all the objects that already exist. If you replay that file into the current database, you will get a bunch of errors for all of those objects. If you don't run it all in one transaction, you could simply ignore those errors. But you might also load data in the wrong order and get foreign key violations. Those errors will be mixed in with all the other ones about existing objects, so they might be easy to overlook.
So what I would do is stand up an empty database server and replay your current dump into that. Then retake the pg_dump, but with either -a or --section=data. You should then be able to load that dump into your new database. This has two advantages: it will not dump CREATE statements that are not needed and would throw errors that have to be ignored, and it should dump the tables in an order that will not cause foreign key violations.
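A sketch of that sequence, assuming the scratch server listens on port 5433, the live one on 5432, and that the original dump is the plain-SQL file produced above (all names are placeholders):
createdb -p 5433 scratch_db
psql -p 5433 -d scratch_db -f dbname.pgdump
pg_dump -p 5433 -d scratch_db --section=data > dbname_data.sql
psql -p 5432 -d dbname -f dbname_data.sql
The live server can stay up the whole time; loading the data-only dump is just ordinary INSERT/COPY traffic against it.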

Unable to restore cloud sql backup

I followed this link to restore my backup
https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring
and I've tried restoring on multiple instances too,
but on every instance this error comes up in the logs:
Couldn't repair table: mysql.general_log
Failed to write to mysql.general_log: Incorrect key file for table 'general_log'; try to repair it
First, address the error. Your general query log is enabled, but the install default is disabled. If you do not need the table enabled, then once everything is working, disable it. I would suggest taking a fresh backup and then:
A. Repair the table using the mysqlcheck -r YourDB general_log command. (If this is a MyISAM table, use myisamchk instead.)
B. If that does not repair the table, first try mysqlcheck -r YourDB to repair the whole database (sometimes more than just the table needs to be repaired.)
C. If the restore still doesn't work, then there are a couple of possibilities: the database may be corrupted, or the backup file may be corrupted. You don't mention any other errors, so I do not suspect the whole database is corrupted.
D. To check whether the backup file is corrupted, you can create a fresh database instance and try your restore there. If that does not work, you can try restoring a single data table to confirm whether the backup file is usable.
Be prepared for the possibility your backup file is corrupt.
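As a concrete sketch of step A plus the cleanup (credentials are placeholders; note the general_log table lives in the mysql system database, and on Cloud SQL the general_log setting is managed through database flags rather than SET GLOBAL):
mysqlcheck -r -u root -p mysql general_log
mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"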

MongoDb restoring broken Database

Running MongoDB Server on Windows.
I had a big DB, with a backup etc., but instead of using the shell to delete some entries, I foolishly copied them to another directory first and then deleted them via Explorer. Of course, that did not work: MongoDB was missing some entries and did not even start working properly, with an I/O error in the log (file not found). So I copied the files back where they belonged, again via Explorer, retried it, and I still get the error message in the log that some file is missing. The weird thing is that the file never existed in the folders I deleted from...
Well, at least I still have a backup dump made with mongodump, but I cannot restore it, because to restore I have to start the MongoDB server, which will not start because some folders of my DB entries are missing (the service starts, but I can't access the server instance), and to get the missing folders back I have to use mongorestore... quite a loop I've got going there...
So I created a new DB and wanted to restore my old dump into the new DB, but now I get an invalid header error when using mongorestore --gzip --archive -d test "dump_path"
Any help, how to resolve my problem?
Solved it... I created a new DB and started the MongoDB server, but this time, instead of writing mongorestore --gzip --archive filename dumppath, which exited with a "too many arguments" error, you have to type:
mongorestore --gzip --archive="filename" dumppath
and then everything works as one would expect...
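For reference, a minimal round trip in archive form (the file name and database are placeholders, and this assumes the dump was created the same way):
mongodump --gzip --archive="C:\backups\mydb.archive.gz" --db=mydb
mongorestore --gzip --archive="C:\backups\mydb.archive.gz"
With --archive=<file>, the archive itself is the source of the restore.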

mysqldump by query, then use to update remote database

I have a database containing a very large table, including binary data, which I want to update on a remote machine once a day. Rather than dumping the entire table, transferring it and recreating it on the remote machine, I want to dump only the updates, then use that dump to update the remote machine.
I already understand that I can produce the dump file like this:
mysqldump -u user --password=pass --quick --result-file=dump_file \
--where "Updated >= one_day_ago" \
database_name table_name
1) Does the resulting "restore" on the remote machine
mysql -u user --password=pass database_name < dump_file
only update the necessary rows? Or are there other adverse effects?
2) Is this the best way to do this? (I am unable to pipe to the server directly using the --host option.)
If you only dump rows where the WHERE clause is true, you will get a .sql file that contains only the rows you want to update, so you will never get duplicate values with your current export options. However, simply inserting these into the other database will not work. You have to use the command-line parameter --replace; otherwise, if your dump contains a row with id 6 in table table1 and you import it into your other database, you'll get a duplicate-key error if a row with id = 6 already exists. With the --replace parameter, older values are overwritten, which only happens when there is a newer one (according to your WHERE clause).
So to quickly answer:
Yes, this will restore on the remote machine, but only if you dumped using --replace (it will then load the latest version of each row you have)
I am not entirely sure if you can pipe backups. According to this website, you can, but I have never tried it before.
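A sketch of the full cycle with --replace (credentials and the date expression are placeholders standing in for one_day_ago):
mysqldump -u user --password=pass --quick --replace \
  --where="Updated >= NOW() - INTERVAL 1 DAY" \
  database_name table_name > dump_file.sql
mysql -u user --password=pass database_name < dump_file.sql
The --replace option makes mysqldump emit REPLACE INTO statements, so re-importing rows that already exist on the remote side overwrites them instead of failing on duplicate keys.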