I want to do some work with write-ahead logging (WAL) in Postgres. Could anyone point me to the WAL implementation in the Postgres codebase? I just want to understand the current implementation and start modifying it. Any version of Postgres is fine as long as it has WAL.
Thanks in advance.
The main part of the code is here:
src/backend/access/transam/xlog.c
And:
src/backend/access/transam/README
But of course the need to do WAL permeates the entire code base.
You have picked perhaps the most difficult possible starting point to get your feet wet. (I should know--that is also how I did it).
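Not exactly what you asked, but before wading into xlog.c it can help to watch the WAL move from plain SQL. These are the 9.x function names (they were renamed to pg_current_wal_lsn() / pg_walfile_name() in PostgreSQL 10), so take this as a quick sketch:

    -- Where the next WAL record will be inserted, and how far WAL has been
    -- written out so far
    SELECT pg_current_xlog_insert_location();
    SELECT pg_current_xlog_location();

    -- The 16MB segment file under pg_xlog/ that the current position maps to
    SELECT pg_xlogfile_name(pg_current_xlog_location());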
WAL is write-ahead logging. Basically, before the database actually
performs an operation, it writes in a log what it's about to do. Then, it
goes and does it. This ensures data consistency. Let's say the computer is powered off suddenly. There are several points at which that could happen:
1) before a write - in this case the database would be fine with or
without write-ahead logging.
2) during a write - without write-ahead logging, if the machine is powered off during a write, the database has no way of knowing what remained to be written, or what was being written. With Postgres, this is further broken down into two possibilities:
The power-off occurred while it was writing to the log - in this
case, the log is rolled back. The database is unaffected because the data
was never written to the database proper.
The power-off occurred after writing to the log, while writing to
disk - in this case, Postgres can simply read from the log what was
supposed to be written, and complete the write.
3) after a write - again, this does not affect Postgres either with or
without WAL.
In addition, WAL increases PostgreSQL's efficiency, because it can delay random-access writes to disk and just do sequential writes to the log for a long time. This reduces the amount of head seeking the disks are doing.
If you store your WAL files on a different disk, you get even more speed
advantages.
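If you want to poke at this from SQL, the settings below govern how long those random-access writes can be postponed between checkpoints (checkpoint_segments applies up to 9.4; it was replaced by max_wal_size in 9.5), so treat this as a rough sketch:

    -- Checkpoints are the moments when the delayed random-access writes are
    -- forced out to the data files; more WAL between checkpoints = more delay.
    SHOW checkpoint_segments;   -- up to 9.4; max_wal_size from 9.5 onwards
    SHOW checkpoint_timeout;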
As I understand it, this is where a background thread is responsible for writing transactions to disk in "careful write" order, so that the user does not have to wait for the actual write to disk to occur.
I have seen references to this (e.g. here) from a long time ago relating to InterBase, but I could not see it mentioned in relation to Firebird anywhere.
Using the gfix utility you can set the FORCED WRITES flag on or off for a database file. When it is turned on, the server will wait until the actual disk write occurs. When it is turned off, the server will continue execution, leaving it to the OS to decide when to write the data to disk. Performance gains are up to 3x, but then there is a possibility that some data would be written in the wrong order if a power failure occurs.
We strongly advise our customers to use a RAID controller with an independent power source for its cache memory, together with FORCED WRITES = ON.
Based on the comments on this thread and searching online, it seems that Firebird does not have group commit.
In my PostgreSQL database I unfortunately ran TRUNCATE on the table mail_group, and its contents are gone from the database. How do I get the data back?
Kindly help me, waiting for reply.
Thanks
Anyone else in the same situation: immediately stop your database with pg_ctl stop -m immediate (the immediate is important, you need to simulate a crash and prevent a checkpoint), then do not restart it. If you had concurrent transactions still in progress you might be really lucky and PostgreSQL might not have unlinked the backing files for the table yet, so it could maybe be recoverable.
You very likely can't get the data back, you deleted it. Restore from a backup.
A normal DELETE in PostgreSQL marks the rows as deleted but does not actually erase the data immediately, so it can often be recovered if you promptly stop the database and you don't write anything else to the table.
This is not the case for TRUNCATE. TRUNCATE deletes the underlying files that represent the database table from the file system.
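If you want to see for yourself why the old data is unreachable from SQL: each table is backed by a file whose path you can look up, and TRUNCATE points the table at a brand-new, empty file (mail_group is the table name from the question):

    -- Path, relative to the data directory, of the file currently backing
    -- the table; after TRUNCATE this is a fresh relfilenode with no data in it.
    SELECT pg_relation_filepath('mail_group');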
Recovering the data, if possible at all, would require forensic analysis of your hard drive. If the data is truly important, then power the computer off now and take a disk image of the hard drive. Expect recovery work to cost several thousand dollars, if it is possible at all, since you will need someone who knows both (a) file system internals and (b) PostgreSQL internals. The only person I can think of who I know has the skills to possibly be able to do this would probably charge about €5000 to €10000 for the time required for this sort of work. (It isn't me.)
If you didn't have backups you have just learned a very expensive lesson.
If someone else is reading this and has DELETEd rows, please immediately follow the instructions in the corruption guide, since the first recovery steps are the same. This will not help if you ran TRUNCATE.
I am trying to understand the implementation of WAL in Postgres 9.3.5. In the file xlog.c there is a parameter XLOG_SWITCH which I don't understand. I googled this parameter but didn't find useful information. Could anyone explain its purpose?
Changes in the database are stored in xlog files, which are 16MB each by default. They exist principally for crash recovery and for feeding a hot standby server. Normally the server fills a segment with the effects of commands like CREATE TABLE, INSERT INTO, etc. before moving to the next one. There are reasons to switch to a new segment before the current 16MB is full, for example when you don't want to wait before shipping the current xlog to a standby server. There are also reasons to enlarge that size, for example when 16MB feels too small because the transactions in your database generate a great many xlog files. How often you switch xlogs ultimately depends on how much data you are willing to lose.
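From my own reading of xlog.c, so treat this as a sketch rather than a definitive answer: XLOG_SWITCH is the info code of the special WAL record that marks such a forced segment switch. Once it is written, the rest of the current 16MB segment is skipped and insertion continues at the start of the next segment. The usual ways to cause one are calling pg_switch_xlog() (renamed pg_switch_wal() in PostgreSQL 10) or letting archive_timeout expire:

    -- Force a segment switch; this writes an XLOG_SWITCH record, and the
    -- returned location sits at the end of the just-completed segment.
    SELECT pg_switch_xlog();

    -- Map a WAL location to the 16MB segment file it lives in.
    SELECT pg_xlogfile_name(pg_current_xlog_location());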
I need to track any changes to the data in a PostgreSQL database. Is there any option in the database, or any script, to view the changed data and the DML statements as well?
Sorry - I have no clue. But I do have some different suggestions:
Log /all/ queries and grep for those involving UPDATE, DELETE, INSERT, ALTER TABLE etc. Caveats: this may cause performance problems if there are lots of queries and the log is on the same RAID as the data and/or WAL. I'm not sure it's easy to write a regexp that is 100% certain to catch all modifying statements, and it may be difficult to catch rollbacks etc. To log everything, add this to the configuration file: log_min_duration_statement = 0. Have a look at the other log_* configuration variables to make sure they are sane as well.
The rules/trigger approach (as hinted at by another user) - I believe it involves writing up rules or triggers for each and every table, but it's of course doable (and it should be possible to create them through some external script if you have a lot of tables); a sketch of such a trigger follows below. You may also look a bit into how Slony works - Slony is a trigger-based replication system, so it should be possible to use it to catch all the changes in the DB.
All changes to the database end up in the WAL files, so maybe it's theoretically possible to extract something out of the WAL, but I suspect that's not practical unless you're already a skilled Postgres hacker ... and if you're a skilled Postgres hacker, you probably wouldn't ask this question in the first place ;-) (Eventually, the WALs may be used to see the rate of changes in the data and spot times of the day when there are more updates than otherwise, etc. They may also be used for replication and roll-forward from a binary backup.)
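To make the trigger suggestion above a bit more concrete, here is a minimal sketch of a generic audit trigger. The names audit_log, audit_dml and my_table are made up for illustration; adapt them to your schema and script the CREATE TRIGGER statement over every table you care about.

    -- One audit row per change, with old and new row images rendered as text
    CREATE TABLE audit_log (
        changed_at timestamptz NOT NULL DEFAULT now(),
        table_name text NOT NULL,
        operation  text NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
        old_row    text,
        new_row    text
    );

    CREATE OR REPLACE FUNCTION audit_dml() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO audit_log (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, OLD::text, NULL);
        ELSIF TG_OP = 'UPDATE' THEN
            INSERT INTO audit_log (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, OLD::text, NEW::text);
        ELSE  -- INSERT
            INSERT INTO audit_log (table_name, operation, old_row, new_row)
            VALUES (TG_TABLE_NAME, TG_OP, NULL, NEW::text);
        END IF;
        RETURN NULL;   -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    -- Repeat (or generate) this for every table you want to track
    CREATE TRIGGER my_table_audit
        AFTER INSERT OR UPDATE OR DELETE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE audit_dml();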
Besides setting log_statement = 'all' in postgresql.conf, you can also use tablelog to capture old data.
I'm wondering about experiences people have had with MongoDB backups. Assuming a filesystem snapshot is not an option, what have your experiences been with mongodump/restore versus doing a write lock and backing up the files? Have you run into any bugs with one method that caused you to switch?
From the reading I've done so far, it seems like mongodump/restore has the advantage of being able to run it while the server is live, but I'm not sure how well it will scale.
Locking and copying files is only an option when you don't have heavy write load.
mongodump can be run against a live server. It will create some additional load, so don't do it during peak hours. Also, it is advisable to run it on a secondary node (if you don't use replica sets, you should).
There are some complications when you have a DB so large that no single machine can hold it. See this document.
Also, if you have a replica set, you can take down one of the secondaries and copy its files directly. See http://www.mongodb.org/display/DOCS/Backups:
A simple approach is just to stop the database, back up the data files, and resume. This is safe but of course requires downtime. This can be done on a secondary without requiring downtime, but you must ensure your oplog is large enough to cover the time the secondary is unavailable so that it can catch up again when you restart it.