I want to make a copy of a live Firebird .fdb database.
I understand that simply copying it could result in a corrupted database and I looked into using the gbak command because it is able to perform a backup while the database is running.
So that will give me a database backup but I would need to restore this before I can use it. My database is nearly 1GB and takes 10 minutes to restore which is too long. Is there any other method to simply copy a Firebird live database from one location to another safely?
So far I have used the following to backup (which works):
gbak -v -t -user SYSDBA -password "masterkey" 127.0.0.1:"C:/Files/Live/Database.fdb" "C:\Test\Test.fbk"
I also tried using the following to backup and restore at the same time:
gbak -c [options] <source database> stdout | gbak -r [options] stdin <target database>
but this kept giving the error:
Done with volume #1, "new.gbak"
Press return to reopen that file, or type a new
name followed by return to open a different file.
The risk of corruption is due to the way Firebird writes its file. Your copy might contain inconsistent data if it is copied at the same time that Firebird (re)writes a data page. As far as I know, the only real risk of corruption is during writes of index pages (and then only for index page splits); otherwise it just leads to inconsistent data and dangling transactions (which wouldn't be visible anyway, as the transaction is not committed).
If you really don't want to use a backup, you can put the Firebird database into a backup state. This freezes the main database file and writes changes to a delta file instead. You enable this state using ALTER DATABASE BEGIN BACKUP and end it using ALTER DATABASE END BACKUP. See the documentation for ALTER DATABASE. This command was added in Firebird 2.0.
For the second part of your question (which really should have been posted as a separate question):
The command gbak -c doesn't create a backup; it creates a database from a backup (it is the brother of -r (replace), but doesn't overwrite an existing database). See Restore Switches for more information.
See Create a Database Clone Without a Dump File for an example of how you can make a backup like this:
gbak -backup emptest stdout | gbak -replace stdin emptest_2
You can do
alter database begin backup;
then copy the file using standard file copy of your OS and
alter database end backup;
Also I strongly recommend reading these pages [1] [2] about nbackup.
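A sketch of that freeze/copy/unfreeze flow on Windows, reusing the paths and SYSDBA credentials from the question (the begin.sql/end.sql script files are assumptions, not part of Firebird):
rem begin.sql contains:  ALTER DATABASE BEGIN BACKUP;
rem end.sql contains:    ALTER DATABASE END BACKUP;
isql -user SYSDBA -password masterkey -input begin.sql 127.0.0.1:"C:/Files/Live/Database.fdb"
copy "C:\Files\Live\Database.fdb" "C:\Test\DatabaseCopy.fdb"
isql -user SYSDBA -password masterkey -input end.sql 127.0.0.1:"C:/Files/Live/Database.fdb"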
My last working database backup of an Odoo13CE system was a full one, including the filestore. I'm getting timeouts when trying to restore "a copy" via the Odoo database manager page. I thought I could just do a partial restore (dump.sql & manifest.json), dump the filestore, recompress and upload, but that brought everything down to its knees (it errored with "no *.dump file found"). So I logged into the server, dropped my failed restore and restarted the odoo service, and all is back to somewhat normal, with the database I want to replace active.
Is there a way to convert that .sql to a .dump, or some other way to get my .sql added to my pgdb? I'm fairly green re: psql, so if I'm missing something simple, please feel free to shove it down my throat.
TIA
To restore an SQL backup file to a new database:
psql YOUR_DATABASE_NAME < YOUR_FILENAME
You can read more about restoring/back up Postgres Db here: https://www.postgresql.org/docs/11/backup-dump.html
When restoring a heavy database (with filestore) you have to increase the server limits for your process to complete.
Add these parameters to your Odoo server command line:
--limit-time-cpu=6000 --limit-time-real=12000
Restore the SQL File
psql database_name < your_file.sql
Restore the Dump File
pg_restore -d database_name < your_file.dump
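For the dump.sql from the Odoo backup zip, a minimal sketch (the database name odoo_restored, the odoo role and the filestore path are assumptions; adjust to your install):
# create an empty database owned by the Odoo DB user, then load the SQL dump
createdb -O odoo odoo_restored
psql odoo_restored < dump.sql
# the filestore from the backup zip then goes under the Odoo data dir, e.g.
#   ~/.local/share/Odoo/filestore/odoo_restored/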
I have a bit of a weird issue. We were trying to create a database baseline for our local environment that has very specific data pre-seeded into it. Our hopes were to make sure that everyone was operating with the same data, making collaboration and reviewing code a bit simpler.
My idea for this was to run a command to dump the database whenever we run a migration or decide a new account is necessary for local dev. The issue with this is the database dump is around 17MB. I'm trying to avoid us having to add a 17MB file to GitHub every time we update the database.
So the best solution I could think of was to set up a script to dump each individual table in the database. This way, if a single table is updated, we'd only be pushing that backup to GitHub, and it would be more along the lines of a ~200 KB file as opposed to 17 MB.
The main issue I'm running into with this is trying to restore the database. With a full dump, handling the foreign keys is relatively simple as it's all done in a single restore command. But with multiple restores, it gets a bit more complicated.
I'm looking to find a way to restore all tables to a database, ignoring triggers and constraints, and then enabling them again once the data has been populated. (or find a way to export the tables based on the order the foreign keys are defined). There are a lot of tables to work with, so doing this manually would be a bit of a task.
I'm also concerned about the relational integrity of the database if I disabled/re-enable constraints. Any help or advice would be appreciated.
Right now I'm running the following on every single table:
pg_dump postgres://user:password@pg:5432/database -t table_name -Fc -Z9 -f /data/www/database/data/table_name.bak
And then this command to restore all backups to the DB.
$data_command = "pg_restore --disable-triggers -d $dbUrl -Fc \"%s\"";
$backups = glob("$directory*.bak");

foreach ($backups as $data_file) {
    if ($data_file != 'data_roles.bak') {
        exec(sprintf($data_command, $data_file));
    }
}
This obviously doesn't work as I hit a ton of "Relationship doesn't exist" errors. I guess I'm just looking for a better way to accomplish this.
I would separate the table data and the database metadata.
Create a pre- and post-data script with
pg_dump --section=pre-data -f pre.sql mydb
pg_dump --section=post-data -f post.sql mydb
Then dump just the data for each table:
pg_dump --section=data --table=tab1 -f tab1.sql mydb
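Since there are a lot of tables, a hedged loop over every ordinary table in the public schema (the schema name is an assumption) can generate those per-table files:
# dump the data section of each table in the public schema, one file per table
for t in $(psql -At -d mydb -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"); do
    pg_dump --section=data --table="public.$t" -f "$t.sql" mydb
done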
To restore the database, first restore pre.sql, then all the table data, then post.sql.
The pre- and post-data will change often, but they are not large, so that shouldn't be a problem.
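A sketch of that restore order against a freshly created, empty database (file names follow the dumps above):
psql -d mydb -f pre.sql            # tables and other pre-data objects, no indexes/constraints yet
for f in tab1.sql tab2.sql; do     # every per-table data file
    psql -d mydb -f "$f"
done
psql -d mydb -f post.sql           # indexes, foreign keys, triggers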
I followed this link to restore my backup
https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring
and I've tried restoring on multiple instances too,
but in every instance this error comes up in the logs:
Couldn't repair table: mysql.general_log
Failed to write to mysql.general_log: Incorrect key file for table 'general_log'; try to repair it
First, address the error. Your general query log is enabled, although it is disabled by default on a fresh install. If you do not need the table enabled, then once everything is working, disable it. I would suggest taking a fresh backup and then:
A. Repair the table using the mysqlcheck -r YourDB general_log command. (If this is a MyISAM table, use myisamchk instead.) See the command sketch after this list.
B. If that does not repair the table, first try mysqlcheck -r YourDB to repair the whole database (sometimes more than just the table needs to be repaired.)
C. If the restore still doesn't work, then there are a couple of possibilities: the database may be corrupted or the backup file is corrupted. You don't mention any other errors, so I do not suspect the whole database is corrupted.
D. To check on the corrupted file, you can create a fresh database instance and try your restore there. If that does not work you can try restoring a data table to confirm if the backup file is usable.
Be prepared for the possibility your backup file is corrupt.
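A hedged sketch of steps A and B with placeholder credentials (the general_log table lives in the mysql system schema, and YourDB stands for your application database):
# step A: repair only the general_log table in the mysql system schema
mysqlcheck -r -u root -p mysql general_log
# step B: if the error persists, repair the whole application database
mysqlcheck -r -u root -p YourDB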
I'm using pgAdmin to restore PostgreSQL database. To restore the database I need to delete, drop and remake it. How to restore the database without deleting and remaking it?
This cannot be done in pgAdmin or with any database tools. Regular backup files cannot be restored without deleting the data first because they consist of normal COPY statements which will fail if you have rows in the database (primary keys collide etc).
For a simple way to get back to an earlier snapshot in a testing environment take a look at PostgreSQL documentation - 24.2. File System Level Backup:
For backup:
Shut down your database
copy all the files from your data directory
For restore:
Shut down your database
replace your data directory with the backup directory
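A minimal sketch of that procedure (the data directory path and pg_ctl usage are assumptions; adjust for your install and make sure the server is really stopped):
# backup: stop the server, copy the whole data directory, start again
pg_ctl -D /var/lib/postgresql/data stop
cp -a /var/lib/postgresql/data /backups/pgdata_snapshot
pg_ctl -D /var/lib/postgresql/data start
# restore: stop the server and put the snapshot back in place of the data directory
pg_ctl -D /var/lib/postgresql/data stop
rm -rf /var/lib/postgresql/data
cp -a /backups/pgdata_snapshot /var/lib/postgresql/data
pg_ctl -D /var/lib/postgresql/data start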
Note:
the size of the data might be significantly larger than with a regular backup especially if you have a lot of indexes
this is a server wide backup so you can't do this on individual databases
don't attempt to use it on a different version of PostgreSQL
this really deletes the data too - by replacing it with the backup
Also, with regular backups you don't have to DROP TABLE if you do a data-only restore, for example with pg_restore --data-only. You still have to delete the existing rows first, though.
I am fairly new with PostgreSQL and am slowly picking things up - I have added a new disk and would like to do two things:
Restore a backup to this new disk - /hda2/pgdata/
Move a database from /hda1/pgdata to /hda2/pgdata/
Q1. Use pg_restore to restore a database. Check out the documentation which is very clear.
One important thing to remember: if you want to move to a later version of PostgreSQL, use the later version of pg_dump to create the backup dump file. For example, if you want to move from PostgreSQL version 8.3 to version 8.4, then create a backup dump file using pg_dump from version 8.4 and then use pg_restore 8.4 to recreate the database in the 8.4 server.
http://www.postgresql.org/docs/8.4/static/app-pgrestore.html
Q2. Backup and restore is a safe way of doing it. Before restoring one can create a tablespace on the new disk and place the database in that space.
CREATE DATABASE mydb TABLESPACE myspace;
http://www.postgresql.org/docs/8.4/interactive/manage-ag-tablespaces.html
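A sketch of that, assuming the new disk from the question is mounted at /hda2/pgdata/ and that directory is empty and owned by the postgres OS user:
-- create a tablespace on the new disk, then place the restored database in it
CREATE TABLESPACE myspace LOCATION '/hda2/pgdata';
CREATE DATABASE mydb TABLESPACE myspace;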
Simple command to restore database
Open PSQL Command console
Provide credentials
Go to the specific database that you need to restore (if the database is not there, create an empty database first).
\i <SQL dump file path>, e.g. \i /usr/local/pgsql/db20121109.sql