Force data to stay in cache - PostgreSQL

I have a part of a large table in PostgreSQL 12 that I would like to be cached at all times. The queries are such that the same rows would (almost) never be read twice, so I can't rely on the automatic caching. I'm reading up on pg_prewarm, which seems suitable for loading the cache, but I can't find anything about preventing it from being overwritten over time. Any hints? Thanks!

AFAIK there is no documented feature in PostgreSQL to lock pages for a specific object in the database cache.

That is true. The only way you can make sure that the table stays cached is to have shared_buffers big enough to contain the whole database.
But in practice that is no big problem: if a block isn't used regularly, it can drop out of the cache and has to be read in again the next time it is needed. A single block contains many rows, though, so it only drops out if none of those rows are needed.
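If you do want to warm the cache manually, pg_prewarm can load the pages into shared_buffers (it cannot pin them there). A minimal sketch, assuming a table named my_table (a placeholder) and the pg_prewarm and pg_buffercache extensions:

-- load the table into shared_buffers
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('my_table', 'buffer');

-- check how much of the table currently sits in shared_buffers
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT count(*) * current_setting('block_size')::int / 1024 / 1024 AS cached_mb
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname = 'my_table';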

Related

How to do caching if you can't afford misses at all?

I'm developing an app that is processing incoming data and currently needs to hit the database for each incoming datapoint. The problem is twofold:
the database can't keep up with the load
the database returns results for less than 5% of the queries
The first idea is to cache the data from the relational database into something like Redis to improve lookup speed. But all the regular caching strategies rely on the fact that you can fall back to the database if needed and fetch data from there. This is problematic in my case because for 95% of the queries there is nothing in the database and I don't have anything to store in the cache. I can of course store the empty results in the cache but that would mean that 95% (or even more, depending on the composition of data) of my cache storage would be rubbish.
The preferred way to do it would be to implement a caching system that doesn't have any misses: everything from the database is always present in the cache, and therefore if it's not in the cache, then it's not in the database. After looking around, though, I found that the consistency of Redis does not seem reliable enough to always make that assumption - if the key doesn't exist in Redis, how can I be 100% sure that it doesn't exist in the database (assuming that we're not in the midst of an update)? It is a strong requirement that if there is a row in the database about an incoming datapoint, it must be found; it cannot simply be missed.
How do I go about designing a caching system that will always have the same data as the relational database - without having a fallback to look the data up in the database? Redis might not be the tool but what would you recommend? Is there a pattern or a keyword that I should look up that I haven't thought of?
There already is such a cache in the database: shared buffers. So all you have to do is set shared_buffers big enough to contain the whole database and restart. Soon the whole database will be cached, and reads will cause no more I/O and will be fast.
That also works if you cannot cache the whole database, as long as you only need to access part of it: PostgreSQL will then just cache those 8kB-pages that are in use.
In my opinion, adding another external caching system can never do better than that. That is particularly true if data are ever modified: any external caching system would have to make sure that its data are not stale, which would introduce an additional overhead.
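A minimal sketch of what that looks like in practice; the 16GB value is an assumption standing in for "big enough to hold the whole database":

-- requires a restart to take effect
ALTER SYSTEM SET shared_buffers = '16GB';

-- after the restart, watch the buffer-cache hit ratio approach 100%
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE datname = current_database();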

When should one vacuum a database, and when analyze?

I just want to check that my understanding of these two things is correct. If it's relevant, I am using Postgres 9.4.
I believe that one should vacuum a database when looking to reclaim space from the filesystem, e.g. periodically after deleting tables or large numbers of rows.
I believe that one should analyse a database after creating new indexes, or (periodically) after adding or deleting large numbers of rows from a table, so that the query planner can make good calls.
Does that sound right?
vacuum analyze;
collects statistics and should be run as often as the data changes (especially after bulk inserts). It does not take exclusive locks on objects. It puts some load on the system, but it is worth it. It does not reduce the size of the table, but marks scattered freed-up space (e.g. deleted rows) for reuse.
vacuum full;
reorganises the table by creating a copy of it and switching to it. This form of vacuum requires additional disk space to run, but reclaims all unused space in the object. It therefore requires an exclusive lock on the object (other sessions must wait for it to complete). Run it when a lot of data has changed (deletes, updates) and when you can afford to make other sessions wait.
Both are very important on a dynamic database.
Correct.
I would add that you can change the value of the default_statistics_target parameter (default 100) in the postgresql.conf file to a higher number, after which you should reload the configuration and run ANALYZE to obtain more accurate statistics.
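As a minimal sketch, with my_table and some_column as placeholder names:

VACUUM ANALYZE my_table;   -- mark dead-row space for reuse and refresh planner statistics
VACUUM FULL my_table;      -- rewrite the table to return space to the OS; takes an exclusive lock

-- raise the statistics target for a single column instead of changing it globally
ALTER TABLE my_table ALTER COLUMN some_column SET STATISTICS 500;
ANALYZE my_table;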

PostgreSQL JOIN, under the hood

I have a question about PostgreSQL joins.
Does PostgreSQL create temporary tables for the JOINed tables, or does it do everything without any temporary tables?
The reason for my question:
When I run a SELECT with many JOINs, I see an I/O spike in write operations.
What could be the reason for this?
Thanks a lot.
PostgreSQL will temporarily spill result sets to disk if they get large enough. Unless your database is smaller than your RAM, it doesn't have much choice.
However, you can also see unexpected writes if this is the first time you have read in data pages since they were created. There are "hint bits" set on each page to optimise visibility checks. These get set when vacuuming etc. or when a page is accessed. If you do a large import followed by a table scan you can get a lot of unexpected IO as all the hint bits get set and written out.
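One way to see where the writes come from is to look at the plan. A minimal sketch, with orders and customers as hypothetical table names:

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- look for "Sort Method: external merge  Disk: ..." or non-zero temp blocks in the output;
-- raising work_mem for the session can keep modest sorts and hashes in memory
SET work_mem = '256MB';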

Is there a way to configure Heroku PostgreSQL to not bother loading a particular column into RAM?

This may be a long shot, but I thought I'd ask anyway.
I am looking at using Heroku's new Crane Postgres DB (400 MB RAM cache) in conjunction with an app I'm deploying on Heroku. The 400 MB cache size should be plenty for our needs... except for one column of one table, in which we store a cached PDF file as a string. The PDFs could easily use up the 400 MB of RAM pretty quickly if Heroku uses its cache for them.
If I were on an actual server, I'd just store the PDF as a file, but given Heroku's ephemeral file system, my life is much simpler if I just store the PDF in the DB rather than rigging up a connection to S3 just for this one thing. (It is further complicated by the fact that we're looking at deploying multiple Heroku instances, one for each client ... so using the DBs is simpler than creating a new bucket for each one.) I don't really care about the speed of this. If people are getting the file, they will expect speeds as if it were coming from a file system anyhow, since that's how most file downloads are done. Is there any way to tell PostgreSQL not to bother caching this column?
Or maybe I'm asking the wrong question, and there is some other way to solve the problem or design alternatives that make it irrelevant.
You don't have to do anything. PostgreSQL will automatically use TOAST on values larger than 8 kB.
From http://www.postgresql.org/docs/9.1/static/storage-toast.html
PostgreSQL uses a fixed page size (commonly 8 kB), and does not allow tuples to span multiple pages. Therefore, it is not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code. The technique is affectionately known as TOAST (or "the best thing since sliced bread").
PostgreSQL caching is also done at the page level so TOAST does not have to be cached with the rest of the row (http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf).
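To see this in action, here is a minimal sketch, assuming a table named documents with a large pdf_data column (both names are placeholders):

-- find the TOAST table backing the main table and its size
SELECT c.relname AS main_table,
       t.relname AS toast_table,
       pg_size_pretty(pg_relation_size(t.oid)) AS toast_size
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'documents';

-- force the wide column out of line (and uncompressed) so the main table's pages stay small
ALTER TABLE documents ALTER COLUMN pdf_data SET STORAGE EXTERNAL;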
The fact that Postgres can TOAST large field values doesn't mean it's the best thing to do.
If you store big fields in your main database, it will make many things harder, such as creating forks or followers, and creating and restoring backups in particular. I would strongly consider using S3 to store the PDF files instead, and simply invest in automated onboarding of new clients (create the Heroku app, provision the database, provision/create the S3 bucket).
I'm not quite sure how you're managing to store large PDFs, since Postgres imposes a maximum field size (or at least a maximum page size). However, you might be able to get around this by using TOAST. TOASTed items are stored in a separate (physical) table, so if you're not selecting them frequently, they shouldn't be cached.
If you are selecting them frequently, then I'm not sure if what you want is possible. Remember that Postgres only supplies one "level" of caching - the Linux VFS does caching also.

Question on checkpoints during large data loads?

This is a question about how PostgreSQL works. During large data loads using the 'COPY' command, I see multiple checkpoints occur where 100% of the log files (checkpoint_segments) are recycled.
I don't understand this, I guess. What does pgsql do when a single transaction requires more space than available log files? It seems that it is wrapping around multiple times in the course of this load which is a single transaction. What am I missing?
Everything is working, I just want to understand it better in case I can tune things, etc.
When a checkpoint happens, all dirty pages are written to disk. Since those pages can no longer be lost, the log segments covering them are no longer needed, so it is safe to recycle them. Writing dirty pages to disk doesn't mean the data is committed: the database can see from the metadata stored with each row that it belongs to a transaction that has not committed yet, and it can still abort that transaction, in which case vacuum will eventually clean up those rows.
When loading large amounts of data, it is advised to temporarily increase checkpoint_segments.
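A minimal sketch of what that could look like in postgresql.conf for the duration of the load; the values are illustrative assumptions, not recommendations (checkpoint_segments was replaced by max_wal_size in PostgreSQL 9.5):

checkpoint_segments = 64              # pre-9.5 releases; on 9.5+ use max_wal_size = '4GB' instead
checkpoint_completion_target = 0.9    # spread checkpoint writes over more of the checkpoint interval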