I want to back up a Firebird database. Below is the command I use to execute the backup.
gbak -b -v -user SYSDBA -pass ***** -service localhost:service_mgr -verify -y E:/logs/backup_20111104.log D:\data\campaign.FDB E:/backup/campaign_20111104.fbk -FIX_FSS_METADATA WIN1252 -FIX_FSS_DATA WIN1252
However, after running for 2 hours it ends with a general error (only on the last line). Below is the error output.
gbak:22068 records written
gbak: writing data for table AA_1307265720107_842
gbak:19872 records written
gbak: writing data for table AA_1301338383915_1025
gbak:4 records written
gbak:Exiting before completion due to errors
There are 700+ tables in this database, and the database size is 129 GB.
I need to get the complete stack trace (if any). Or any idea where to check so I can find out the actual error?
Try to repair the database before backing it up:
gfix -mend -full -ignore <db name> -user sysdba -pas masterkey
While backing up, add the two switches -g and -ig in order to prohibit garbage collection and to ignore checksum errors.
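For example, the backup command from the question with those two switches added would look roughly like this (a sketch; credentials and paths are as in the original command):
gbak -b -v -g -ig -user SYSDBA -pass ***** -service localhost:service_mgr -y E:/logs/backup_20111104.log D:\data\campaign.FDB E:/backup/campaign_20111104.fbk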
Consult the gfix manual section Database Validation and Recovery.
Full Validation
By default, validation works at page level. If you need to go deeper and validate at the record level as well, the command to do this is:
gfix -v[alidate] -full database_name
Using this option will validate, report and update at both page and record level. Any corrupted structures etc. will be fixed.
[...]
Database Recovery
If the database validation described above produces no output then the database structures can be assumed to be valid. However, in the event that errors are reported, you may have to repair the database before it can be used again.
Recover a Corrupt Database
The option required to fix a corrupted database is the gfix -m[end] command. However, it cannot fix all problems and may result in a loss of data. It all depends on the level of corruption detected. The command is:
gfix -m[end] database_name
This causes the corruptions in data records to be ignored. While this sounds like a good thing, it is not. Subsequent database actions (such as taking a backup) will not include the corrupted records, leading to data loss.
Important
The best way to avoid data loss is to make sure that you have enough regular backups of your database and to regularly carry out test restorations. There is no point taking backups every night, for example, if they cannot be used when required. Test always and frequently.
Equally, when attempting to recover a potentially corrupted database, always work with a copy of the main database file and never with the original. Using the -mend option can lead to silent deletions of data because gfix doesn't care about internal database constraints like foreign keys etc.; the -mend option simply says to gfix "go ahead and clean out anything you don't like".
These are the most important points of this section, but make sure to read the entire chapter. If you need additional support, on the Firebird-support list there are a number of people familiar with the ins and outs of Firebird and how to handle these kinds of errors.
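Put together, a cautious recovery workflow that follows the advice above (always working on a copy of the database file) could look roughly like this; the file names are placeholders:
copy D:\data\campaign.FDB D:\data\campaign_copy.FDB
gfix -mend -full -ignore D:\data\campaign_copy.FDB -user SYSDBA -pas *****
gfix -validate -full D:\data\campaign_copy.FDB -user SYSDBA -pas *****
gbak -b -v -g -ig -user SYSDBA -pas ***** D:\data\campaign_copy.FDB E:\backup\campaign_repaired.fbk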
Related
I'm used to working with SQL Server, and SQL Server Management Studio has an option to automatically generate a script to drop and recreate everything in a database (tables/views/procedures/etc.). I find that when developing a new application and writing a bunch of junk into a local database for basic testing, it's very helpful to have the option to just nuke the whole thing and recreate it from a clean slate, so I'm looking for similar functionality in Postgres/pgAdmin.
pgAdmin has an option to generate a create script for a specific table, but right-clicking each table would be very tedious, and I'm wondering if there's another way to do it.
To recreate a clean schema-only database you can use the pg_dump client included with a Postgres server install. The options to use are:
-c
--clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)
This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.
and:
-s
--schema-only
Dump only the object definitions (schema), not data.
This option is the inverse of --data-only. It is similar to, but for historical reasons not identical to, specifying --section=pre-data --section=post-data.
(Do not confuse this with the --schema option, which uses the word “schema” in a different meaning.)
To exclude table data for only a subset of tables in the database, see --exclude-table-data.
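Combining the two, a drop-and-recreate script for the whole database schema could be generated and replayed roughly like this (a sketch; mydb and the user are placeholders):
pg_dump --clean --schema-only -U postgres mydb > recreate_schema.sql
psql -U postgres -d mydb -f recreate_schema.sql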
clean in Flyway
The database migration tool Flyway offers a clean command that drops all objects in the configured schemas.
To quote the documentation:
Clean is a great help in development and test. It will effectively give you a fresh start, by wiping your configured schemas completely clean. All objects (tables, views, procedures, …) will be dropped.
Needless to say: do not use against your production DB!
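For reference, a one-off invocation of the Flyway command line tool might look like this (a sketch; the connection settings are placeholders and are normally kept in flyway.conf instead):
flyway -url=jdbc:postgresql://localhost:5432/mydb -user=dev -password=secret -schemas=public clean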
Ok, I screwed up.
I dumped one of my psql (9.6.18) staging databases with the following command:
pg_dump -U postgres -d <dbname> > db.out
And after doing some testing, I "restored" the data using the following command.
psql -f db.out postgres
Notice the absence of the -d option? Yup. And that trailing postgres was supposed to be the username.
Annnd as the database happened to have the same name as its user, it overwrote the 'default' database (postgres), which had data that other QAs are using.
I cancelled the operation as soon as I realised my mistake, but the damage was still done. Around 1/3 to 1/2 of the database is roughly identical to the staging database, at least in terms of the schema.
Is there any way to revert this? I am still checking whether any of the others made a dump, but I don't think there has been one in the past two to three months. It seems I have no choice but to own up and apologise to them in the morning.
Without a recent dump or some sort of PITR or replication setup, you can't easily revert this. The only option is to manually go through the log of what was restored and remove/alter it in the postgres database. That will work for the schema; the data is another matter.
FYI, the postgres database should not really be used as a 'working' database. It is there to be a database to connect to for doing other operations, such as CREATE DATABASE, or to bootstrap your way into a cluster. If it had been left empty, the above would not have been a problem: you could have connected from another database, run DROP DATABASE postgres; and then CREATE DATABASE postgres;.
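If the postgres database had been kept empty, recovery would have been as simple as something like the following (a sketch; run from a connection to another database such as template1, and only when nothing is connected to postgres):
psql -U postgres -d template1 -c "DROP DATABASE postgres;"
psql -U postgres -d template1 -c "CREATE DATABASE postgres;"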
Do you have a capture of the output of the psql -f db.out postgres run?
Since the pg_dump didn't specify --clean or -c, it should not have overwritten anything, just appended. And if your tables have unique or primary keys, most of the data copy operations should have failed with unique key violations and rolled back. Even one overlapping row (per table) would roll back the entire dataset for that table.
Without having the output, it will be hard to figure out what damage has actually been done.
You should also immediately copy the pg_xlog data someplace safe. If it comes down to it, you might be able to use pg_xlogdump to figure out what changes committed and what did not.
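For example, something along these lines would preserve the WAL for later inspection (a sketch for a 9.6 cluster; the data directory path and the starting segment name are placeholders):
cp -a /var/lib/postgresql/9.6/main/pg_xlog /safe/location/pg_xlog_copy
pg_xlogdump -p /safe/location/pg_xlog_copy 000000010000000000000001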
I have a bit of a weird issue. We were trying to create a database baseline for our local environment that has very specific data pre-seeded into it. Our hopes were to make sure that everyone was operating with the same data, making collaboration and reviewing code a bit simpler.
My idea for this was to run a command to dump the database whenever we run a migration or decide a new account is necessary for local dev. The issue with this is that the database dump is around 17 MB. I'm trying to avoid us having to add a 17 MB file to GitHub every time we update the database.
So the best solution I could think of was to set up a script to dump each individual table in the database. This way, if a single table is updated, we'd only be pushing that backup to GitHub, and it would be more along the lines of a ~200 KB file as opposed to 17 MB.
The main issue I'm running into with this is trying to restore the database. With a full dump, handling the foreign keys is relatively simple as it's all done in a single restore command. But with multiple restores, it gets a bit more complicated.
I'm looking to find a way to restore all tables to a database, ignoring triggers and constraints, and then enabling them again once the data has been populated. (or find a way to export the tables based on the order the foreign keys are defined). There are a lot of tables to work with, so doing this manually would be a bit of a task.
I'm also concerned about the relational integrity of the database if I disabled/re-enable constraints. Any help or advice would be appreciated.
Right now I'm running the following on every single table:
pg_dump postgres://user:password@pg:5432/database -t table_name -Fc -Z9 -f /data/www/database/data/table_name.bak
And then this command to restore all backups to the DB.
$data_command = "pg_restore --disable-triggers -d $dbUrl -Fc \"%s\"";
$backups = glob("$directory*.bak");

foreach ($backups as $data_file) {
    // Skip the roles dump; glob() returns full paths, so compare the base name.
    if (basename($data_file) !== 'data_roles.bak') {
        exec(sprintf($data_command, $data_file));
    }
}
This obviously doesn't work, as I hit a ton of "relation does not exist" errors. I guess I'm just looking for a better way to accomplish this.
I would separate the table data and the database metadata.
Create a pre-data and a post-data script with:
pg_dump --section=pre-data -f pre.sql mydb
pg_dump --section=post-data -f post.sql mydb
Then dump just the data for each table:
pg_dump --section=data --table=tab1 -f tab1.sql mydb
To restore the database, first restore pre.sql, then all the table data, then post.sql.
The pre- and post-data will change often, but they are not large, so that shouldn't be a problem.
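The restore could then be scripted along these lines (a sketch; mydb and the file names follow the examples above):
psql -d mydb -f pre.sql
for f in tab*.sql; do psql -d mydb -f "$f"; done
psql -d mydb -f post.sql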
Don't shoot me, I'm only the OP!
When we need to back up our DB, we are always able to shut down PostgreSQL completely. After it is down, I found I can copy the "/base" directory with the binary data in it to another location, checksum it for accuracy, and later restore it if/when necessary. This has even worked when upgrading to a later version of PostgreSQL. Integrity of the various 'conf' files is not an issue, as that is handled elsewhere (i.e. by other processes/procedures!) in the system.
Is there any risk to this approach that I am missing?
The "File System Level Backup" link in Abelisto's comment is what JoeG is talking about doing. https://www.postgresql.org/docs/current/static/backup-file.html
To be safe I would go up one more level, to "main" on our Ubuntu systems, to take the snapshot, and thoroughly go through the caveats of doing file-level backups. I was tempted to post the caveats here, but I'd end up quoting the entire page.
The thing to be most aware of (in a 'simple' postgres environment) is the relationship between the postgres database, a user database and the pg_clog and pg_xlog files. If you only get "base" you lose the transaction and WAL information, and in more complex installations, other 'necessary' information.
If those caveat conditions listed do not exist in your environment, and you can do a full shutdown, this is a valid backup strategy, which can be much faster than a pg_dump.
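As a concrete illustration, a whole-data-directory file-level backup on an Ubuntu system might look roughly like this (a sketch; the service name and data directory path depend on your install):
sudo systemctl stop postgresql
sudo tar czf /backup/pgdata_$(date +%Y%m%d).tar.gz /var/lib/postgresql/9.6/main
sudo systemctl start postgresql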
As the subject says: what happens to the backup file when there are still many active transactions in the database? Does it export the data in real time, or is it only a partial backup?
Thanks in advance.
pg_dump runs in a serializable transaction, so it sees a consistent snapshot of the database including system catalogs.
However, it is possible to get a 'cache lookup failed' error if someone performs DDL changes while a dump is starting. The time window for this sort of thing isn't very large, but it can happen. See: http://archives.postgresql.org/pgsql-bugs/2010-02/msg00187.php
pg_dump will give you a consistent state. Any transaction not completed before pg_dump was started will not be reflected.