I have a database that seems to be broken for some reason. It's a development db for rails so I don't have a backup but I do need to continue development. I tried to just drop it but that's not working.
$ dropdb "database-name"
dropdb: database removal failed: ERROR: could not open file "global/2964": No such file or directory
Thanks in advance for any help!
There's more wrong here than a "broken" database. Something is badly wrong with your PostgreSQL data directory.
global/2964 looks like it's pg_catalog.pg_db_role_setting, which stores ALTER DATABASE ... SET ... and ALTER ROLE ... SET ... settings. It is not database-specific; it's a global table.
If files are missing from your data directory, the whole PostgreSQL data directory is probably damaged. You should back up whatever you can, if there's anything you care about, then rename or delete the damaged data directory and initdb a new blank one.
You won't be able to DROP this database (or do much else) because PostgreSQL can't load the files for the pg_db_role_setting table, but it needs to delete entries referring to the dropped database from there.
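A minimal sketch of that recovery, assuming a Debian-style layout with the data directory at /var/lib/postgresql/14/main and a Rails app (the paths, version, and service name are assumptions; adjust for your installation):

$ sudo systemctl stop postgresql
$ sudo mv /var/lib/postgresql/14/main /var/lib/postgresql/14/main.damaged   # keep the damaged directory in case you want to salvage anything
$ sudo -u postgres initdb -D /var/lib/postgresql/14/main                    # create a new blank cluster
$ sudo systemctl start postgresql
$ bin/rails db:setup   # recreate the development database from schema.rb and seed data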
As for how this happened:
Have you ever run with fsync = off in postgresql.conf?
Do you have SSD storage? If so, have you had any recent sudden power loss?
Have you ever done any direct modifications of any kind inside the PostgreSQL data directory?
Is the PostgreSQL data directory on external storage that might have been suddenly removed?
Have you ever deleted postmaster.pid?
See also https://wiki.postgresql.org/wiki/Corruption
It is a known fact that backup and restore are slow in Postgres.
I'd like to deploy a database to a PostgreSQL server as fast as possible (like the file-copy-and-attach approach that is possible in MS SQL).
So what if I:
back up and restore the schema only,
and then copy the database OID folder (the data files) into the appropriate OID directory?
Will it work?
If not, what else do I need to consider?
No, that won't work.
A database backup and restore can never be faster than copying the files. So stop the database, copy the complete cluster, and start PostgreSQL again. It won't get any faster than that.
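For example, a sketch of that whole-cluster copy (the data directory path and service name are assumptions):

$ sudo systemctl stop postgresql
$ sudo cp -a /var/lib/postgresql/14/main /mnt/target/main   # -a preserves ownership and permissions
$ sudo systemctl start postgresql

The copy must cover the whole cluster, be taken while the server is stopped, and the target server must run the same PostgreSQL major version on the same architecture.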
I am getting the following error while accessing a Postgres database:
ERROR: could not access status of transaction 69675
DETAIL: Could not open file "pg_clog/0000": No such file or directory.
I didn't do anything with the pg_clog folder but the 0000 file is not there.
Is there any way to recover that file or in any way to fix this issue?
Any help would be appreciated.
You are experiencing database corruption, and you should restore from a backup. You should try to figure out what happened to the database so you can prevent it in the future.
Is your storage reliable?
Are you using dangerous settings like fsync = off?
Were there any crashes recently?
Are you really running 9.1? If yes, you shouldn't do that, as it is out of support.
Are there any files in the pg_clog directory? There should be.
Did you have an out-of-space problem recently that may have led someone to remove files from a “log” directory?
As stated in the previous answer, you're better off restoring from a backup. However, while restoring a test server to an earlier state before a VACUUM FULL, I discovered that the metadata for those transaction files is not stored in the same location as the data. In cases where data integrity isn't critical, such as a test database, you can get away with creating zero-filled files for the missing transaction logs like this:
dd if=/dev/zero of=/path/to/db/pg_clog/xxxx bs=256k count=1   # one full 256 kB clog segment of zeroes
chown postgres:postgres /path/to/db/pg_clog/xxxx              # match ownership of the rest of the data directory
chmod go-rwX /path/to/db/pg_clog/xxxx                         # data directory files must not be group- or world-accessible
There may be multiple missing files, but if it's just a few files this is an alternative to consider.
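For reference, a quick way to work out which clog segment a given transaction falls in (a sketch based on the usual layout: 2 status bits per transaction, so 32768 transactions per 8 kB page, and 32 pages per 256 kB segment):

$ printf '%04X\n' $(( 69675 / (8192 * 4 * 32) ))   # 1048576 transactions per segment
0000

That is why transaction 69675 in the error above maps to the missing file 0000.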
I followed this link to restore my backup
https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring
and I've tried restoring on multiple instances too,
but on every instance this error comes up in the logs:
Couldn't repair table: mysql.general_log
Failed to write to mysql.general_log: Incorrect key file for table 'general_log'; try to repair it
First, address the error. Your general query log is enabled, but the installation default is disabled. If you do not need the table enabled, then once everything is working, disable it. I would suggest taking a fresh backup and then:
A. Repair the table using the mysqlcheck -r YourDB general_log command. (If this is a MyISAM table, use myisamchk instead.)
B. If that does not repair the table, first try mysqlcheck -r YourDB to repair the whole database (sometimes more than just the table needs to be repaired.)
C. If the restore still doesn't work, then there are a couple of possibilities: the database may be corrupted or the backup file is corrupted. You don't mention any other errors, so I do not suspect the whole database is corrupted.
D. To check on the corrupted file, you can create a fresh database instance and try your restore there. If that does not work you can try restoring a data table to confirm if the backup file is usable.
Be prepared for the possibility your backup file is corrupt.
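A sketch of what that sequence can look like on the command line (the credentials are assumptions, and note that the general_log table lives in the mysql schema):

$ mysqlcheck -r -u root -p mysql general_log             # step A: repair just the table
$ mysqlcheck -r -u root -p mysql                         # step B: repair the whole schema
$ mysql -u root -p -e "SET GLOBAL general_log = 'OFF'"   # disable the general query log once everything works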
I'm using pgAdmin to restore PostgreSQL database. To restore the database I need to delete, drop and remake it. How to restore the database without deleting and remaking it?
This cannot be done in pgAdmin or with any database tool. Regular backup files cannot be restored without deleting the data first, because they consist of plain COPY statements that will fail if you already have rows in the database (primary keys collide, etc.).
For a simple way to get back to an earlier snapshot in a testing environment take a look at PostgreSQL documentation - 24.2. File System Level Backup:
For backup:
Shut down your database
Copy all the files from your data directory
For restore:
Shut down your database
Replace your data directory with the backup directory
Note:
the size of the data might be significantly larger than with a regular backup, especially if you have a lot of indexes
this is a server wide backup so you can't do this on individual databases
don't attempt to use it on a different version of PostgreSQL
this really deletes the data too - by replacing it with the backup
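A sketch of that snapshot/restore cycle, assuming a systemd service named postgresql and a data directory at /var/lib/postgresql/14/main (both are assumptions; adjust for your setup):

$ sudo systemctl stop postgresql
$ sudo cp -a /var/lib/postgresql/14/main /backups/main.snapshot   # take the snapshot
$ sudo systemctl start postgresql

and later, to roll back:

$ sudo systemctl stop postgresql
$ sudo rm -rf /var/lib/postgresql/14/main
$ sudo cp -a /backups/main.snapshot /var/lib/postgresql/14/main   # this is the step that deletes your current data
$ sudo systemctl start postgresql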
Also, with regular backups you don't have to DROP TABLE if you do a data-only restore, for example with pg_restore --data-only. You still have to delete the existing rows first, though.
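For instance (the database and dump file names are hypothetical):

$ pg_restore --data-only -d mydb mydb.dump   # loads rows into the existing schema; empty the tables yourself first
$ pg_restore --clean -d mydb mydb.dump       # or let pg_restore drop and recreate the objects before loading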
I need to back up my database but I don't have enough disk space to dump it. Can I just use duplicity to perform incremental backups on the data directory? Would that corrupt the backup somehow? I don't mind a few of the latest rows missing, but I would like my backup to not be destroyed.
Does anyone know which is the case?
Thanks!
See this page. I believe duplicity uses an rsync-type mechanism, so you cannot simply grab the directory and go - see the part of that page about rsync. If you need to do a file-system-level backup while online, you'll need some sort of atomicity, like snapshots.
Most likely, the backup simply wouldn't work.
Postgres has lots of backup options though, like PITR. I suggest a read through the fine manual.
No, you can't back up the data directory while the database service is running. You could back up the WAL segments for a point-in-time recovery when you want to restore. Make sure you test your recovery as well; it's a little more complicated than pg_restore of an ordinary dump.
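A sketch of what supported online file-level backup looks like: enable WAL archiving in postgresql.conf (the archive directory is an assumption), then take a base backup with pg_basebackup, which is designed to copy the data directory while the server is running:

wal_level = replica
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/wal_archive/%f && cp %p /var/lib/postgresql/wal_archive/%f'

$ pg_basebackup -U postgres -D /backups/base -Ft -z -P   # tar format, gzipped, with a progress report

Restoring means unpacking the base backup and replaying the archived WAL up to the point in time you choose, which is exactly the part worth rehearsing before you need it.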