postgresql: Accidentally deleted pg_filenode.map

Is there any way to recover or re-create the pg_filenode.map file that was accidentally deleted? Or is there any way to fix this issue without affecting the database? Any suggestions to fix this issue are highly appreciated! The PostgreSQL version we have is 9.0, running on Red Hat Linux 5. Thanks!

STOP TRYING TO FIX ANYTHING RIGHT NOW. Everything you do risks making it worse.
Treat this as critical database corruption. Read and act on this wiki article.
Only once you have followed its advice should you even consider attempting repair or recovery.
Since you may have some hope of recovering the deleted file if it hasn't been overwritten yet, you should also STOP THE ENTIRE SERVER MACHINE or unmount the file system PostgreSQL is on and disk image it.
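If you do go the disk-image route, something along these lines is the usual approach; this is only a rough sketch, and the device name, mount points, and output path are assumptions you must adapt:
# With the machine stopped (or booted from rescue media), image the partition that holds the data directory:
sudo dd if=/dev/sdb1 of=/mnt/rescue/pgdata.img bs=4M conv=noerror,sync
# Sanity-check the image by mounting it read-only before working on it:
sudo mkdir -p /mnt/pgdata_image
sudo mount -o loop,ro /mnt/rescue/pgdata.img /mnt/pgdata_image
From then on, work only on copies of that image.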
If this data is important to you I advise you to contact professional support. This will cost you, but is probably your best chance of getting your data back after a severe administrator mistake like this. See PostgreSQL professional support. (Disclaimer: I work for one of the listed companies as shown in my SO profile).
It's possible you could reconstruct pg_filenode.map by hand using information about the table structure and contents extracted from the on-disk tables. Probably a big job, though.

First, if this is urgent and the data is valuable, I strongly recommend contacting professional support. However, if you can work on a disk image, if it is not time critical, and so on, here are some important points to note and how to proceed (we recently had to recover a bad pg_filenode.map). You are better off working on a disk image of a disk image.
What follows is what I learned from having to recover a file damaged by an incomplete write on the containing directory. It is current as of PostgreSQL 10, but that could change at any time.
Before you begin
Data recovery is risky business. Before you begin, decide what recovery means to your organization, what data loss is tolerable, what downtime is tolerable, and so on. Work on a copy of a copy if you can. If anything doesn't seem right, circle back, evaluate what went wrong, and make sure you understand why before proceeding.
What this file is and what it does
The standard file node map for PostgreSQL is stored in the pg_class relation, which is referenced by object id inside the Pg catalogs. Unfortunately you need a way to bootstrap the mappings of the system tables so you can look up this sort of information in the first place, and that is what pg_filenode.map provides.
In most deployments this file will never be written to. It can be copied from a new initdb on the same version of Postgres with the same options passed to initdb, aside from the data directory. However, this is not guaranteed.
Several things can change this mapping. If you do a VACUUM FULL or similar on the system catalogs, this can change the mapping from the default, and then copying in a fresh file from an initdb will not help.
Some Things to Try
The first thing to try (on a copy of a copy!) is to replace the file with one from a fresh initdb onto another filesystem from the same server (this could be a thumb drive or whatever). This may work. It may not work.
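Concretely, that attempt looks roughly like the following. The scratch path, the data-directory path, and the assumption that it is the cluster-wide map under global/ that is missing are all placeholders here:
# Initialize a throwaway cluster with the SAME PostgreSQL version and the same initdb options,
# on another filesystem (e.g. a thumb drive):
initdb -D /mnt/usb/scratch_pgdata
# Copy the freshly generated map over the missing one in your copy-of-a-copy data directory
# (per-database copies of the map live under base/<database-oid>/ rather than global/):
cp /mnt/usb/scratch_pgdata/global/pg_filenode.map /copy/of/copy/pgdata/global/
# Then try to start that copy and see whether the catalogs are readable:
pg_ctl -D /copy/of/copy/pgdata start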
If that fails, it may be possible to use pg_filedump and custom scripting/C programming to create a new file based on looking through the data of each relation file in the data directory. This would be significant work, as Craig notes above.
If you get it to work
Take a fresh pg_dump of your database and restore it into a fresh initdb. This way you know everything is consistent and complete.
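As a rough sketch of that last step, assuming the repaired copy is running on port 5432 and you bring the fresh cluster up on port 5433 (the ports and paths are assumptions):
# Dump roles and all databases from the repaired cluster:
pg_dumpall -p 5432 -f /backups/recovered_full.sql
# Build a brand-new cluster and load the dump into it:
initdb -D /var/lib/pgsql/fresh_data
pg_ctl -D /var/lib/pgsql/fresh_data -o "-p 5433" -l /tmp/fresh.log start
psql -p 5433 -d postgres -f /backups/recovered_full.sql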

Related

Are there any negative performance or functionality downsides afterwards to using pg_upgrade with the --link option?

I'm about to upgrade a quite large PostgreSQL cluster from 9.3 to 11.
The upgrade
The cluster is approximately 1.2 TB in size. The database server has a disk system consisting of a fast HW RAID 10 array of 8 DC-edition SSDs, with 192 GB of RAM and 64 cores. I am performing the upgrade by replicating the data to a new server with streaming replication first, then upgrading that one to 11.
I tested the upgrade using pg_upgrade with the --link option; this takes less than a minute. I also tested the upgrade the regular way (without --link) with many jobs; that takes several hours (4+).
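For reference, the --link run I tested is roughly the following invocation (the Debian-style paths are assumptions; adjust them to your layout):
# --check does a dry run and reports incompatibilities without touching anything
/usr/lib/postgresql/11/bin/pg_upgrade \
  --old-bindir=/usr/lib/postgresql/9.3/bin \
  --new-bindir=/usr/lib/postgresql/11/bin \
  --old-datadir=/var/lib/postgresql/9.3/main \
  --new-datadir=/var/lib/postgresql/11/main \
  --link --check
# Re-run without --check to perform the actual upgrade; --link hard-links the data files
# into the new cluster instead of copying them.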
Questions
Now the obvious choice for me is of course to use the --link option. However, all this makes me wonder: are there any downsides (performance- or functionality-wise) to using that over the regular, slower method? I do not know the internal workings of PostgreSQL's data structures, but I have a feeling there could be a performance difference after the upgrade between rewriting the data entirely and just using hard links - whatever that means?
Considerations
The only thing I can find in the documentation about the drawbacks of --link is the downside of not being able to access the old data directory after the upgrade is performed (https://www.postgresql.org/docs/11/pgupgrade.htm). However, that is only a safety concern and not a performance drawback, and it doesn't really apply in my case since I replicate the data first.
The only other thing I can think of is reclaiming space, with whatever performance upsides that might have. However, as I understand it, that can also be achieved by running a VACUUM FULL (or CLUSTER?) command after the --link-upgraded database is up? Also, reclaiming space is not very impactful performance-wise on an SSD, as I understand it.
I would appreciate it if anyone could help shed some light on this.
There is absolutely no downside to using hard links (with the exception you noted, that the old cluster is dead and has to be removed).
A hard link is in no way different from a normal file.
A “file” in UNIX is in reality an “inode”, a structure containing file metadata. An entry in a directory is a (hard) link to that inode.
If you create another hard link to the inode, the same file will be in two different directories, but that has no impact whatsoever on the behavior of the file.
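You can see this for yourself with a couple of throwaway commands (the file names here are just for illustration):
echo "hello" > original.txt
ln original.txt hardlink.txt        # second directory entry pointing at the same inode
ls -li original.txt hardlink.txt    # identical inode numbers, link count of 2 on both
rm original.txt                     # dropping one name does not touch the data
cat hardlink.txt                    # still prints "hello"
This is all pg_upgrade --link does for each relation file, which is why it is so fast and why the old and new clusters must never both be started.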
Of course you must make sure that you don't start both the old and the new server. Instant data corruption would ensue. That's why you should remove the old cluster as soon as possible.

PostgreSQL - Recovery of Functions' code following accidental deletion of data files

So, I am (well... I was) running PostgreSQL within a container (Ubuntu 14.04 LTS with all the recent updates; back-end storage is "dir" for convenience).
To cut a long story short, the container folder got deleted. Using extundelete and ext4magic, I have managed to extract some of the database's physical files (it appears as if most of the files are there... but I am not 100% sure if anything is missing, or what).
I have two copies of the database files: one from 9.5.3 (which appears to be more complete) and one from 9.6 (I upgraded the container to 9.6 very recently; however, it appears to be missing data files).
All I am after is to attempt to extract the SQL code that relates to the user-defined functions. Is anyone aware of an approach that I could try?
P.S.: The last backup is a bit dated (due to bad practices, really), so it would be a last resort if the task of extracting the needed information is "reasonable" and "successful".
Regards,
G
Update - 20/4/2017
I was hoping for a "quick fix" by somehow extracting the function body text off the recovered data files... however, nothing's free in this life :)
Starting from the old-ish backup along with the recovered logs, we managed to cover a lot of ground in bringing the DB back to life.
Lessons learned:
1. Do implement a good backup/restore strategy
2. Do not store backups on the same physical machine
3. Hardware failure can be disruptive... Human error can be disastrous!
If you can reconstruct enough of a data directory to start postgres in single user mode you might be able to dump pg_proc. But this seems unlikely.
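If you do manage that, a sketch of what the single-user-mode dump could look like (the data directory path and database name are assumptions):
# Single-user mode: one backend process, no postmaster; SQL is typed at the "backend>" prompt
postgres --single -D /recovered/pgdata yourdb
# At the prompt:
#   SELECT proname, prosrc FROM pg_proc;
# Ctrl-D exits. If the cluster instead starts normally, a plain
# pg_dump --schema-only yourdb will contain the CREATE FUNCTION statements.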
Otherwise, if you're really lucky you'll be able to find the relation for pg_proc and its corresponding pg_toast relation. The latter will often contain compressed text, so searches for parts of variables you know appear in function bodies may not help you out.
Anything stored inline in pg_proc will be short functions, significantly less than 8k long. Everything else will be in the toast relation.
To decode that you have to unpack the pages to get the toast hunks, then reassemble them and uncompress them (if compressed).
If I had to do this, I would probably create a table with the exact same schema as pg_proc in a new postgres instance of the same version. I would then find the relfilenode(s) for pg_catalog.pg_proc and its toast table using the relfilenode map file (if it survived) or by pattern matching and guesswork. I would replace the empty relation files for the new table I created with the recovered ones, restart postgres, and if I was right, I'd be able to select from the tables.
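The kind of lookups involved, run in a healthy scratch instance of the same version, would look something like this. Only the catalog names are givens; the stand-in table name my_pg_proc_copy and everything else is guesswork you would adapt:
# Where do pg_proc and its TOAST table live on disk in this instance?
psql -d postgres -c "SELECT pg_relation_filepath('pg_catalog.pg_proc');"
psql -d postgres -c "SELECT reltoastrelid::regclass AS toast_table,
                            pg_relation_filepath(reltoastrelid) AS toast_file
                       FROM pg_class WHERE relname = 'pg_proc';"
# Where does the stand-in table you created live, so you know which files to swap out?
psql -d postgres -c "SELECT pg_relation_filepath('my_pg_proc_copy');"
# Stop the instance, swap the stand-in table's files for the recovered ones,
# start it again, and try selecting from the stand-in table.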
Not easy.
I suggest reading up on postgres's storage format as you'll need to understand it.
You may consider https://www.postgresql.org/support/professional_support/. (Disclaimer: I work for one of the listed companies.)
P.S.: The last backup is a bit dated (due to bad practices, really), so it would be a last resort if the task of extracting the needed information is "reasonable" and "successful".
Backups are your first resort here.
If the 9.5 files are complete and undamaged (or enough so to dump the schema), then simply copying them into place, checking permissions, and starting the server will get you going. Don't trust the data, though; you'll need to check it all.
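A rough sketch of that, assuming a typical Linux layout with the postgres OS user (all paths and the service name are assumptions):
service postgresql stop                          # nothing must be writing to the target directory
cp -a /recovered/9.5/main /var/lib/postgresql/9.5/main
chown -R postgres:postgres /var/lib/postgresql/9.5/main
chmod 700 /var/lib/postgresql/9.5/main           # the server refuses to start on a group/world-readable datadir
service postgresql start
pg_dumpall -f /backups/salvaged.sql              # dump immediately, then verify everything before trusting it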
Although it is possible to partially recover given damaged files, it's a long complicated process and the fact that you are asking on Stack Overflow probably means it's not for you.

I TRUNCATEd a table. How do I get the data back?

In my PostgreSQL database, I unfortunately ran TRUNCATE on the table mail_group, and the data is gone from the database. How do I get this table back?
Kindly help me, waiting for reply.
Thanks
Anyone else in the same situation: immediately stop your database with pg_ctl stop -m immediate (the immediate is important, you need to simulate a crash and prevent a checkpoint), then do not restart it. If you had concurrent transactions still in progress, you might be really lucky and PostgreSQL might not have unlinked the backing files for the table yet, so it could maybe be recoverable.
You very likely can't get the data back, you deleted it. Restore from a backup.
A normal DELETE in PostgreSQL marks the rows as deleted but does not actually erase the data immediately, so it can often be recovered if you promptly stop the database and you don't write anything else to the table.
This is not the case for TRUNCATE. TRUNCATE deletes the underlying files that represent the database table from the file system.
Recovering the data, if possible at all, would require forensic analysis of your hard drive. If the data is truly important then power the computer off now and take a disk image of the hard drive. Expect recovery work to cost multiple thousands of dollars, if it is possible at all, since you will need someone who knows both (a) file system internals and (b) PostgreSQL internals. The only person I can think of who I know has the skills to possibly be able to do this would probably cost about €5000 to €10000 for the time required for this sort of work. (It isn't me).
If you didn't have backups you have just learned a very expensive lesson.
If someone else is reading this and you DELETEd rows, please immediately follow the instructions in the corruption wiki article mentioned above, since the first recovery steps are the same. This will not help if you ran TRUNCATE.

SQL Server 2008 R2 log files filled up the drive

So some of us devs are starting to take over the management of some of our SQL Server boxes as we upgrade to SQL Server 2008 R2. In the past, we've manually reduced the log file sizes by using
USE [databaseName]
GO
DBCC SHRINKFILE('databaseName_log', 1)
BACKUP LOG databaseName WITH TRUNCATE_ONLY
DBCC SHRINKFILE('databaseName_log', 1)
and I'm sure you all know that the TRUNCATE_ONLY option has been deprecated.
So the solution that I've found so far is setting the recovery model to SIMPLE, then shrinking, then setting it back... however, this one got away from us before we could get there.
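(For anyone finding this later, the sequence I mean is roughly the following via sqlcmd; the database and logical log file names are placeholders, and flipping the recovery model breaks the log backup chain, so take a full backup right afterwards.)
sqlcmd -S . -Q "ALTER DATABASE [databaseName] SET RECOVERY SIMPLE"
sqlcmd -S . -d databaseName -Q "DBCC SHRINKFILE('databaseName_log', 1)"
sqlcmd -S . -Q "ALTER DATABASE [databaseName] SET RECOVERY FULL"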
Now we've got a full disk, and the mirroring that is going on is stuck in a half-completed, constantly erroring state where we can't alter any databases. We can't even open half of them in object explorer.
So from reading about it, the way to keep this from happening in the future is to have a maintenance plan set up. (Whoops. :/) But while we can create one, we can't start it with no disk space and SQL Server stuck in its erroring state (Event Viewer shows it recording about 5 errors per second... this has been going on since last night).
Anyone have any experience with this?
So you've kind of got a perfect storm of bad circumstances here in that you've already reached the point where SQL Server is unable to start. Normally at this point it's necessary to detach a database and move it to free space, but if you're unable to do that you're going to have to start breaking things and rebuilding.
If you have a mirror and a backup that is up to date, you're going to need to blast one unlucky database on the disk to get the instance back online. Once you have enough space, then take emergency measures to break any mirrors necessary to get the log files back to a manageable size and get them shrunk.
The above is very much emergency recovery and you've got to triple check that you have backups, transaction log backups, and logs anywhere you can so you don't lose any data.
Long term to manage the mirror you need to make sure that your mirrors are remaining synchronized, that full and transaction log backups are being taken, and potentially reconfiguring each database on the instance with a maximum file size where the sum of all log files does not exceed the available volume space.
Also, I would double check that your system databases are not on the same volume as your database data and log files. That should help with being able to start the instance when you have a full volume somewhere.
Bear in mind, if you are having to shrink your log files on a regular basis then there's already a problem that needs to be addressed.
Update: If everything is on the C: drive, then consider reducing the size of the page file to get enough space to bring the instance online. Not sure what your setup is here.

Sphinx: How to recover realtime index from backups?

I already know how to implement a backup of a realtime index by using FLUSH RTINDEX and compressing all involved files (.ram, .kill, .meta files) like below:
tar zcvf /backups/myrtbackup.tar.gz /sphinxdata/myrtindex.{*.sp*,ram,kill,meta} /sphinxdata/binlog.*
But here's the question: if the system crashes, or we somehow delete all the data by mistake, how can we recover from that backup?
Following a crash, or if the server dies and you need to bring a new one online, etc.: just restore those files to the data folder (while searchd is not running) and start searchd.
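A minimal sketch of that restore, assuming the same paths as the backup command above and a standard config location (the config path is an assumption):
searchd --config /etc/sphinx/sphinx.conf --stopwait   # make sure searchd is down first
tar zxvf /backups/myrtbackup.tar.gz -C /               # the archive was created with paths under /sphinxdata
searchd --config /etc/sphinx/sphinx.conf               # start it back up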
If searchd won't start, you might have some luck not restoring the binlog and restoring just the index files.
If you delete all the data, it's not really going to help you. You can't 'roll back' and go back to a specific point in time.
In general the Sphinx 'index' is designed to be created as an index over a real database somewhere else, not as an authoritative database in itself.
Sphinx's "backups" are just not robust enough for you to be able to rely on them. Because Sphinx indexes should be disposable, if they get corrupted, just recreate them from the source data.
(The backups that you can do are just 'hacks' that may help you get back online quicker in case of disaster.)