Big transaction (multiple inserts) exceeds RAM size: what happens? - postgresql

I do multiple inserts in a single transaction, and they exceed the RAM size.
I learned that inserts (and deletes) are written in RAM and that the modified pages become dirty pages.
But what happens if the dirty pages exceed the RAM size? Does a checkpoint write the dirty pages to disk before the end of the transaction?

Don't worry.
PostgreSQL caches 8kB pages of disk storage in RAM when the data are read or written, but modified (“dirty”) pages in the cache (shared buffers) are automatically written out to disk by the database system.
Normally, this happens during a checkpoint, but there is also a special background process, the background writer, that keeps writing out dirty pages from the cache so that there are always some clean pages in the cache that clients can use.
In the unlikely event that despite all that there is still no clean page to be found, the backend process that needs some cache space can clean out a dirty page itself. So, no matter what, you will always get some cache space eventually.
It also doesn't matter if your transaction is large or not. PostgreSQL doesn't wait for the transaction to be committed before it writes out data to disk, it will happily persist uncommitted data modifications from an active transaction. If the transaction fails, these data will just be rendered invisible (“dead”).
Owing to PostgreSQL's architecture, your transaction will never fail just because it changes too much data.
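If you want to see the knobs that govern this behaviour, here is a minimal sketch using psycopg2 (the connection string is a placeholder). It only reads the relevant settings; the actual writing is done by the checkpointer, the background writer, or, as a last resort, a backend itself.

# Minimal sketch: print the settings that control how dirty pages leave shared buffers.
# The DSN below is a placeholder; adjust it for your environment.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT name, setting, unit
        FROM pg_settings
        WHERE name IN ('shared_buffers',         -- size of the page cache
                       'checkpoint_timeout',     -- how often checkpoints run
                       'max_wal_size',           -- WAL volume that triggers a checkpoint
                       'bgwriter_delay',         -- background writer wake-up interval
                       'bgwriter_lru_maxpages')  -- pages the bgwriter may write per round
    """)
    for name, setting, unit in cur.fetchall():
        print(name, setting, unit)
conn.close()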

Related

What is the difference between the background writer and checkpoint in PostgreSQL?

As per my understanding,
the checkpoint periodically writes all dirty buffers (data) to disk, and
the background writer writes some specific dirty buffers (data) to disk.
It looks like both do almost the same work.
But which specific dirty buffers (data) are written to disk?
How frequently are the checkpoint and the bgwriter invoked?
I want to know the difference between them.
Thanks in advance.
It looks like both do almost the same work.
Looking at the source code link given by Adrian, you can see these words in the comments for the background writer:
As of Postgres 9.2 the bgwriter no longer handles checkpoints.
...which means in the past, the background writer and checkpointer tasks were handled by one component, which explains the similarity that probably led you to ask this question. The two components were split on 1/Nov/2011 in this commit and you can learn more about the checkpointer here.
From my own understanding, they are doing the same task from different perspectives. The task is making sure we use a limited amount of resources:
For the background writer, that resource is RAM: it writes dirty buffers to disk so the buffers can be reused to store other data, hence limiting the amount of RAM required.
For the checkpointer, that resource is DISK: it writes all dirty buffers to disk so it can add a checkpoint record to the WAL, which allows all segments of the WAL prior to that record to be removed/recycled, hence limiting the amount of DISK required to store the WAL files. You can confirm this in the docs, which say "...after a checkpoint, log segments preceding the one containing the redo record are no longer needed and can be recycled or removed."
It may be helpful to read more about the WAL (Write-Ahead Log) in general.
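If you want to see the split in practice, the cumulative statistics views show how many buffers each component has written. A minimal sketch with psycopg2 (placeholder DSN), valid for PostgreSQL 16 and earlier; from version 17 on, the checkpoint counters live in the separate pg_stat_checkpointer view:

# Sketch: how many buffers were written by the checkpointer, the background
# writer, and ordinary backends (the fallback case). Placeholder DSN.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT buffers_checkpoint,  -- written by the checkpointer
               buffers_clean,       -- written by the background writer
               buffers_backend      -- written by ordinary backend processes
        FROM pg_stat_bgwriter
    """)
    print(cur.fetchone())
conn.close()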

PostgreSql Query Caching Logic

So I have a database with 140 million entries. The first query takes around 500 ms; repeating the query takes 8 ms. So there is some caching.
I would like to learn more about how long this cache exists and under what conditions it is refreshed, e.g. time or the number of new entries made to the table.
Adding ~1500 new entries to the table still keeps queries around 10 ms, but adding 150k resets the cache. So it could be both, or only the transaction count.
Does anyone have a good resource where I could get more information on this?
I made this gist to run the load tests.
PostgreSQL uses a simple clock sweep algorithm to manage the cache (“shared buffers” in PostgreSQL lingo).
Shared buffers are in shared memory, and all access to data is via this cache. An 8kB block that is cached in RAM is also called a buffer.
Whenever a buffer is used, its usage count is increased, up to a maximum of 5.
There is a free list of buffers with usage count 0. If the free list is empty, any process that searches for a "victim" buffer to replace goes through shared buffers in a circular fashion and decreases the usage count of each buffer it encounters. If the usage count is 0, the buffer gets evicted and reused.
If a buffer is "dirty" (it has been modified, but not written to disk yet), it has to be written out before it can be evicted. The majority of dirty buffers get written out during one of the regular checkpoints, and a few of them get written out by the background writer between checkpoints. Occasionally a normal worker process has to do that as well, but that should be an exception.
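You can watch this from the client side with EXPLAIN (ANALYZE, BUFFERS): the first (cold) run of a query reports mostly read blocks, while a repeated (warm) run reports mostly shared hit blocks, which matches the 500 ms vs. 8 ms observation. A minimal sketch with psycopg2; the table name big_table and the DSN are placeholders:

# Sketch: run the same query twice and compare the buffer counters in the plan.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
query = "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table"
with conn, conn.cursor() as cur:
    for run in ("cold", "warm"):
        cur.execute(query)
        plan = "\n".join(row[0] for row in cur.fetchall())
        print(f"--- {run} run ---")
        print(plan)  # look for the "Buffers: shared hit=... read=..." lines
conn.close()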

How does write ahead logging improve IO performance in Postgres?

I've been reading through the WAL chapter of the Postgres manual and was confused by a portion of the chapter:
Using WAL results in a significantly reduced number of disk writes, because only the log file needs to be flushed to disk to guarantee that a transaction is committed, rather than every data file changed by the transaction.
How is it that continuously writing to the WAL is more performant than simply writing to the table/index data itself?
As I see it (forgetting for now the resiliency benefits of WAL), Postgres needs to complete two disk operations: first it needs to commit to the WAL on disk, and then it still needs to change the table data to be consistent with the WAL. I'm sure there's a fundamental aspect of this I've misunderstood, but it seems like adding an additional step between a client transaction and the final state of the table data couldn't actually increase overall performance. Thanks in advance!
You are fundamentally right: the extra writes to the transaction log will per se not reduce the I/O load.
But a transaction will normally touch several files (tables, indexes etc.). If you force all these files out to storage (“sync”), you will incur more I/O load than if you sync just a single file.
Of course all these files will have to be written and sync'ed eventually (during a checkpoint), but often the same data are modified several times between two checkpoints, and then the corresponding files will have to be sync'ed only once.
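One way to see that the commit cost is dominated by the WAL flush is to toggle synchronous_commit, which controls whether COMMIT waits for that flush. This is only a rough illustration, not a proper benchmark; the table name and DSN are placeholders.

# Rough sketch: time many tiny transactions with and without waiting for the WAL flush.
import time
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
conn.autocommit = True                      # every INSERT is its own transaction
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS wal_demo (id int)")

for mode in ("on", "off"):
    cur.execute(f"SET synchronous_commit = {mode}")   # off: COMMIT does not wait for the flush
    start = time.perf_counter()
    for i in range(1000):
        cur.execute("INSERT INTO wal_demo VALUES (%s)", (i,))
    print(f"synchronous_commit={mode}: {time.perf_counter() - start:.3f}s for 1000 commits")

conn.close()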

MongoDB Write and lock processes

I've been reading a lot about MongoDB recently, but one topic I can't find any clear material on is how data is written to the journal and the oplog.
So this is what I understand of the process so far; please correct me where I'm wrong:
A client connects to mongod and performs a write. The write is stored in the socket buffer.
When Mongo is available (I'm not sure what "available" means at this point), the data is written to the journal?
The MongoDB docs then say that writes are flushed from the journal onto disk every 60 seconds. By this I can only assume they mean written to the primary and the oplog. If this is the case, how do writes appear earlier than the 60-second sync interval?
Some time later, secondaries pull data from the primary or their sync source and update their oplog and databases. The documentation seems very vague about when exactly this happens and what delays it.
I'm also wondering: if journaling were disabled (I understand that's a really bad idea), at what point do the oplog and database get updated?
Lastly, I'm a bit stumped about the points in this process at which the write locks get created. Is this just when the database and oplog are updated, or at other times too?
Thanks to anyone who can shed some light on this or point me to some reading material.
Simon
Here is what happens as far as I understand it. I simplified a bit, but it should make clear how it works.
A client connects to mongod. No writes are done so far, and no connection is torn down, because what happens now really depends on the write concern. Let's assume that we go with the (at the time of this writing) default "acknowledged".
The client sends its write operation. Here is where I am really not sure: either after this step or the next one, the acknowledgement is sent to the driver.
The write operation is run through the query optimizer. It is here that the acknowledgment is sent, because with an acknowledged write concern you may be returned a duplicate key error. It is possible that this was checked in the previous step; if I had to bet, I'd say it happens after this one.
The output of the query optimizer is then applied to the data in memory: actually, to the data of the memory-mapped data files, to the memory-mapped oplog, and to the journal's memory-mapped files. Queries are answered from these memory-mapped parts, or the corresponding data is mapped into memory for answering the query. The oplog is read from memory as well, if present.
In general, the journal is synced to disk every 100 ms. The precise value is determined by a number of factors, one of them being the journalCommitInterval configuration parameter. If you have a write concern of "journaled", the driver will be notified now.
Every syncDelay seconds, the current state of the memory-mapped files is synced to disk. I think the journal is then truncated to the entries which weren't applied to the data yet, but I am not too sure of that, since it should basically never happen that data in the journal hasn't yet been applied to the current data.
If you have read carefully, you noticed that the data is ready for the oplog as soon as it has been run through the query optimizer and applied to the files mapped into memory. When an oplog entry is pulled by one of the secondaries, it is immediately applied to the secondary's own memory-mapped files and synced to disk the same way as on the primary.
Some things to note: as soon as the (relatively small) data is written to the journal, it is quite safe. If a node goes down between two syncs to the data files, both the data files and the oplog can be restored from their last state in the data files and the journal. In general, the maximum data loss you can have is the operations recorded in the journal after the last commit, 50 ms at the median.
As for the locks: if you have read carefully, there are no locks imposed at the database level when the data is synced to disk. Write locks may be created in order to ensure that only one thread at any given point in time modifies a given document. Other write locks are possible, but in general they should be rather rare.
Write locks on the filesystem layer are created once, though only implicitly, iirc. During application startup, a lock file is created in the root directory of the dbpath. Any other mongod instance will refuse to do any operation on those data files while a valid lock exists. And you shouldn't either ;)
Hope this helps.
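If you want the driver to wait for step 5 (the journal sync) rather than just the in-memory application, you can request a journaled write concern. A minimal PyMongo sketch; the connection string, database, and collection names are placeholders:

# Sketch: with j=True, insert_one only returns after the write has been
# committed to the journal, not merely applied to the memory-mapped files.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")          # placeholder URI
coll = client["test"]["events"].with_options(
    write_concern=WriteConcern(w=1, j=True)
)
result = coll.insert_one({"msg": "hello"})
print(result.acknowledged)   # True once the journal commit has happened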

Confirm basic understanding of MongoDB's acknowledged write concern

Using MongoDB (via PyMongo) in the default "acknowledged" write concern mode, is it the case that if I have a line that writes to the DB (e.g. a mapReduce that outputs a new collection) followed by a line that reads from the DB, the read will always see the changes from the write?
Further, is the above true for all stricter write concerns than "acknowledged," i.e. "journaled" and "replica acknowledged," but not true in the case of "unacknowledged"?
If the write has been acknowledged, it should have been written to memory, thus any subsequent query should get the current data. This won't work if you have a replica set and allow reads from secondaries.
Journaled writes are written to the journal file on disk, which protects your data in case of power / hardware failures, etc. This shouldn't have an impact on consistency, which is covered as soon as the data is in memory.
Any replica configuration in the write concern will ensure that writes are acknowledged by the majority / all nodes in the replica set. This only makes a difference if you read from replicas, or if you want to protect your data against unreachable / dead servers.
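To make the first caveat concrete, here is a small PyMongo sketch (the replica-set URI and the names are placeholders): an acknowledged write followed by a read from the primary will see the new document, while a read routed to a secondary may not, because replication is asynchronous.

# Sketch: read-your-own-write works against the primary, not necessarily
# against a secondary.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
db = client["test"]

db.items.insert_one({"_id": 1, "state": "new"})        # default: acknowledged (w=1)

print(db.items.find_one({"_id": 1}))                   # reads from the primary: sees the write
secondary_items = db.items.with_options(read_preference=ReadPreference.SECONDARY)
print(secondary_items.find_one({"_id": 1}))            # may still be None if replication lags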
For example, in the case of the WiredTiger storage engine, there is a cache of pages in memory that are periodically written to and read from disk, depending on memory pressure. In the case of the MMAPv1 storage engine, there is a memory-mapped address space that corresponds to pages on the disk. Then there is a secondary structure called the journal. The journal is a log of every single thing that the database processes; note that the journal is also in memory.
When does the journal get written to disk?
When the app sends a request to the MongoDB server via a TCP connection, the server processes the request and writes it into the memory pages, but those pages may not be written to disk for quite a while, depending on memory pressure. The server also records the request in the journal. By default, when we make a database request in the MongoDB driver, we wait for the response, say an acknowledged insert/update, but we don't wait for the journal to be written to disk. The value that represents whether we wait for the write to be acknowledged by the server is called w.
w = 1
j = false
By default, w is set to 1, which means: wait for this server to respond to the write. By default, j equals false; j, which stands for journal, represents whether or not we wait for the journal to be written to disk before we continue.
So, what are the implications of these defaults? When we do an update/insert, we're really doing the operation in memory and not necessarily on disk, which of course makes it very fast. Periodically (every few seconds) the journal gets written to disk. It won't be long, but during this window of vulnerability, when the data has been written into the server's memory pages but the journal has not yet been persisted to disk, if the server crashed we could lose the data. We also have to realize, as programmers, that just because the write came back as good, it was only written successfully to memory; it may never be persisted to disk if the server subsequently crashes.
Whether or not this is a problem depends on the application. For some applications, with lots of writes logging small amounts of data, we might find it very hard to even keep up with the data stream if we wait for the journal to be written to disk, because the disk is going to be 100 to 1,000 times slower than memory for every single write. But for other applications, we may find it completely necessary to wait for the write to be journaled and to know that it has been persisted to disk before we continue. So, it's really up to us.
The w and j values together are called the write concern. They can be set in the driver at the client level, the database level, or the collection level.
w = 1 : wait for the write to be acknowledged
w = 0 : do not wait for the write to be acknowledged
j = true : sync to journal
j = false : do not sync to journal
There are other values for w as well that have their own significance. With w=1 & j=true we can make sure that those writes have been persisted to disk. If the writes have been written to the journal and the server then crashes, then even though the pages may not have been written back to disk yet, on recovery the mongod process can look at the journal on disk and replay all the writes that were not yet persisted to the pages, because they have been written to the journal. That's why this gives us a greater level of safety.
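For completeness, here is a small PyMongo sketch of the same w/j trade-off expressed at the client, database, and collection level, as mentioned above. The URI and all names are placeholders:

# Sketch: setting the write concern at different levels.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# Client level: the fast default, acknowledged but not journaled.
client = MongoClient("mongodb://localhost:27017", w=1, journal=False)

# Database level override: wait for the journal commit.
db = client.get_database("test", write_concern=WriteConcern(w=1, j=True))

# Collection level override: fire-and-forget, unacknowledged.
logs = client["test"].get_collection("logs", write_concern=WriteConcern(w=0))

db["orders"].insert_one({"total": 42})   # waits for the journal sync
logs.insert_one({"evt": "click"})        # does not wait at all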