Is it possible to restore the db with incomplete data? - postgresql

I have a database (Postgres) backup which contains 100+ tables, and most of them have many rows (100K+). But when I restored the db from the backup file (production data, i.e. a large volume of data), one table was restored with less data: nearly 300K rows are missing. Is there any possibility of this happening, or am I missing something?
Thanks in advance

One option could be the following: store your data directory from the old db in a zip file and try the restore again. More description here
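A rough sketch of that, plus a way to check the restore for errors on the problem table. The paths, file and database names below are placeholders, and this assumes the backup is in pg_dump's custom format (for a plain SQL dump, restore with psql -f instead):
pg_ctl -D /path/to/old/data stop                  # stop the old server before copying its files
zip -r old-data-dir.zip /path/to/old/data         # archive the old data directory, as suggested above
createdb mydb_restored                            # restore into a fresh database
pg_restore --verbose -d mydb_restored backup.dump 2> restore.log
grep -i error restore.log                         # a failed COPY on one table would explain rows silently missing
psql -d mydb_restored -c "SELECT count(*) FROM the_problem_table;"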
Michael

Related

Copying a MongoDB record by record

We have a MongoDB sitting at 600GB. We've deleted a lot of documents, and in the hopes of shrinking it, we repaired it onto a 2TB drive.
It ran for hours, eventually running out of the 2TB space. When I looked at the repair directory, it had created way more files than the original database??
Anyway, I'm trying to look for alternative options. My first thought was to create a new MongoDB, and copy each row from the old to the new. Is it possible to do this, and what's the fastest way?
I have had a lot of success copying databases using the db.copyDatabase command:
link to mongodb copyDatabase
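A minimal sketch, run from the mongo shell on the destination server (the database and host names are placeholders; note that this helper was removed in MongoDB 4.2, so on newer servers use mongodump/mongorestore instead):
db.copyDatabase("olddb", "newdb", "old-server.example.com")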
I have also used MongoVUE, which is software that makes it easy to copy databases from one location to another (it is just a graphical interface on top of mongo).
If you have no luck with copyDatabase, I suggest you try dumping and restoring the database to an external location with something like mongodump, or take a filesystem snapshot with lvcreate.
Here is a full read on backup and restore which should allow you to copy the database easily: http://docs.mongodb.org/manual/core/backups/
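A rough sketch of the dump-and-restore route (host, database and path names are placeholders):
mongodump --host old-server --db mydb --out /backup/mydb-dump
mongorestore --host new-server --db mydb /backup/mydb-dump/mydb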

Reconstructing PostgreSQL database from data files

You've heard the story, disk fails, no recent db backup, recovered files in disarray...
I've got a pg 9.1 database with a particular table I want to bring up to date. In the postgres data/base/444444 directory are all the raw files with table and index data. One particular table I can identify, and its files are as follows:
[relfindnode]
[relfindnode]_vm
[relfindnode]_fsm
where [relfindnode] is the number corresponding to the table I want to reconstruct.
In the current out-of-date database the main [relfindnode] file is 16MB.
In my recovered files, I have found the corresponding [relfindnode] file and the _vm and _fsm files. The main [relfindnode] file is 20MB, so I'm hopeful that this contains more up-to-date data.
However, when I swap the files over and restart my machine (OS X) and I inspect the table, it has approximately the same number of records in it (not exactly the same).
Question: is it possible just to swap out these files and expect it to work? How else can I extract the data from the 20MB table file? I've read the other threads here regarding constructing from raw data files.
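For reference, queries along these lines map a table to its on-disk file (the table name and file number below are placeholders):
SELECT pg_relation_filepath('mytable'::regclass);        -- path of the table's main file, relative to the data directory
SELECT relname FROM pg_class WHERE relfilenode = 12345;  -- reverse lookup: substitute the [relfindnode] number found on disk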
Thanks.

How to Recover PostgreSQL 8.0 Database

On my PostgreSQL 8.0 database, I started receiving an "ERROR: could not open relation 1663/17269/16691: No such file or directory" message, and now my data is inaccessible.
Any ideas on how to recover at least some of the data? Professional support is an option.
Regards.
RP
If you want your data back in a hurry and it's worth something to you, then the professional support option should be simple enough.
Some things to check, now that you've got a full backup of your whole database cluster (that's base, pg_clog, pg_xlog and all the other folders at that level):
Does that file actually exist? It might be a permissions problem rather than the file actually going missing.
Check your anti-virus/security packages - have they mistakenly quarantined the file? If you can exclude PostgreSQL's database directories from scans/active scans that's worthwhile too.
Make a note of everything you can remember about when this happened and what happened just before. This will help with troubleshooting for you or a consultant.
Check the logs likewise - this error will be logged, find the first occurrence and see if there's anything odd before.
Double-check you really do have all your existing files backed up, and restart PostgreSQL.
Try connecting as user postgres to database postgres or database template1. If that works then the file is one of your database files rather than the global list of users or some such.
Try creating an empty file with the right name (and permissions - check the other files). If you are really lucky it's just an index. Otherwise it could be a data table you can live without. Then you can dump other tables individually.
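A rough sketch of those last few steps. The OIDs come from the error message; the paths, user and table names are placeholders, and you should match owner and mode against the neighbouring files (doing the file creation with the server stopped is safest):
psql -U postgres -d template1                                 # sanity check: can we connect at all?
cd /path/to/data/base/17269                                   # 17269 is the database directory from the error message
touch 16691                                                   # create the missing relation file, empty
chown postgres:postgres 16691                                 # match the owner of the other relation files
chmod 600 16691                                               # match their permissions
pg_dump -U postgres -t some_table your_db > some_table.sql    # then dump the readable tables one at a time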
OK - if you're here then you can connect to your DB. Those numbers in the file-path are PostgreSQL's OIDs identifying system objects. You can try a couple of useful queries here. These two queries should give you the IDs of the databases and then the object with the missing file. This is useful information for your professional too.
SELECT oid, datname, dattablespace FROM pg_database;
SELECT * FROM pg_class WHERE relfilenode = 16691;
Remember: make sure you have the filesystem backup before tinkering.

Accidentally deleted data from PostgreSQL 9.1 table. Is there any way to restore the data?

We were building an in-house tool with PostgreSQL 9.1 as the database. Accidentally running a delete script, we lost the data in three tables.
We didn't have a backup. :(
Trying the manuals didn't help. On a quick look at the data files in /PostgreSQL/9.1/data/base/, we found that the data is not deleted (at least not completely).
Is there a way to recover this data ?
Thanks Daniel, the directions were useful.
And luckily we found a tool to do just that. Find the tool and instructions at the link below:
pg_dirtyread
The instructions provided were simple and accurate.
Additionally, what we had to do was (a rough SQL sketch follows this list):
Create temporary tables for the restoration.
Instead of the plain SELECT statement in the instructions, use INSERT ... SELECT statements to insert into the backup tables.
Filter the corrupted data (manually).
Insert the data back into the original tables.
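A minimal SQL sketch of that workflow, assuming a table named orders with the columns shown; the exact pg_dirtyread call signature depends on the version you install, so check its README:
CREATE TABLE orders_recovered (LIKE orders);                 -- backup table with the same structure
INSERT INTO orders_recovered                                 -- pull deleted rows back out of the data file;
SELECT * FROM pg_dirtyread('orders'::regclass)               -- the column definition list must match the real table
       AS t(id integer, customer text, amount numeric);
-- after manually removing corrupted or duplicate rows from orders_recovered:
INSERT INTO orders SELECT * FROM orders_recovered;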
There were corrupted entries, but not many, as we could stop the running service immediately and avoid any updates to those tables.
Thanks to the OmniTI Labs team. This tool saved our day (night :)).

Firebird DB size does not change

I'm using the Firebird DB and I did the following: I created a DB and filled it with a lot of records (DB file size: 113MB). Then I deleted all the records, but the size remains the same. Is there any way to "shrink" or "pack" the DB file?
The page at Re: Firebird and vacuum? claims that you need to use gbak to rebuild the database for that. Note that the file will not grow when you insert records until the total amount of data hits 113MB again, so you might just want to leave the file at that size.
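A rough sketch of the gbak round trip (file names and credentials are placeholders):
gbak -b -user SYSDBA -password masterkey mydb.fdb mydb.fbk        # back up the bloated database
gbak -c -user SYSDBA -password masterkey mydb.fbk mydb_new.fdb    # rebuild it into a new, compact file
Once you have verified the new file, point your application at it (or swap the files while the server is down).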