Does Postgres support in-memory temp tables? - postgresql

I know that Postgres does not allow for in-memory structures. But for a temp table to work efficiently, it is important to have an in-memory structure; otherwise it would have to go to disk and would not be that efficient. So my question is: does Postgres allow in-memory storage for temp tables? My hunch is that it does not. Just wanted to confirm it with someone.
Thanks in advance!

Yes, Postgres can keep temp tables in memory. The amount of memory available for that is configured through the parameter temp_buffers.
Quote from the manual:
Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables. The default is eight megabytes (8MB). The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session.
A session will allocate temporary buffers as needed up to the limit given by temp_buffers. The cost of setting a large value in sessions that do not actually need many temporary buffers is only a buffer descriptor, or about 64 bytes, per increment in temp_buffers. However if a buffer is actually used an additional 8192 bytes will be consumed for it (or in general, BLCKSZ bytes).
So if you really need that, you can increase temp_buffers.
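As a minimal sketch (the table in the example is hypothetical), the setting can be raised per session, but only before the first temporary table is used:

-- Must run before the first use of a temporary table in this session;
-- later changes have no effect for the rest of the session.
SET temp_buffers = '256MB';

-- Hypothetical example: a temp table that can now stay in the session-local buffers
CREATE TEMP TABLE recent_orders AS
SELECT * FROM orders WHERE created_at > now() - interval '1 day';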

Related

Will pg_dump and pg_restore of PostgreSQL affect the buffer cache and kernel file system cache?

May I ask about the buffer cache behavior of PostgreSQL during pg_dump and pg_restore?
As we know, PostgreSQL has a buffer cache to cache the recent working set, and Linux also has its file system level cache.
When we use pg_dump to backup the database, would the backup operation affect the PostgreSQL buffer cache and the system file cache?
And what about the pg_restore operation?
Since these operations read or write files on the machine, they will certainly affect the kernel's file system cache, potentially blowing out some data that were previously cached.
The same is true for the PostgreSQL shared buffers, although there is an optimization that avoids overwriting all shared buffers during a large sequential scan: if a table is bigger than a quarter of shared_buffers, a ring buffer of 256 kB is used rather than evicting a major part of the cache.
See the following quotes from src/backend/access/heap/heapam.c and src/backend/storage/buffer/README:
/*
 * If the table is large relative to NBuffers, use a bulk-read access
 * strategy and enable synchronized scanning (see syncscan.c). Although
 * the thresholds for these features could be different, we make them the
 * same so that there are only two behaviors to tune rather than four.
 * (However, some callers need to be able to disable one or both of these
 * behaviors, independently of the size of the table; also there is a GUC
 * variable that can disable synchronized scanning.)
 *
 * Note that table_block_parallelscan_initialize has a very similar test;
 * if you change this, consider changing that one, too.
 */
if (!RelationUsesLocalBuffers(scan->rs_base.rs_rd) &&
    scan->rs_nblocks > NBuffers / 4)
{
    allow_strat = (scan->rs_base.rs_flags & SO_ALLOW_STRAT) != 0;
    allow_sync = (scan->rs_base.rs_flags & SO_ALLOW_SYNC) != 0;
}
else
    allow_strat = allow_sync = false;
For sequential scans, a 256KB ring is used. That's small enough to fit in L2
cache, which makes transferring pages from OS cache to shared buffer cache
efficient. Even less would often be enough, but the ring must be big enough
to accommodate all pages in the scan that are pinned concurrently. 256KB
should also be enough to leave a small cache trail for other backends to
join in a synchronized seq scan. If a ring buffer is dirtied and its LSN
updated, we would normally have to write and flush WAL before we could
re-use the buffer; in this case we instead discard the buffer from the ring
and (later) choose a replacement using the normal clock-sweep algorithm.
Hence this strategy works best for scans that are read-only (or at worst
update hint bits). In a scan that modifies every page in the scan, like a
bulk UPDATE or DELETE, the buffers in the ring will always be dirtied and
the ring strategy effectively degrades to the normal strategy.
As the README indicates, that strategy is probably not very effective for bulk writes.
Still, a pg_dump or pg_restore will affect many tables, so you can expect that it will blow out a significant portion of shared buffers.
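If you want to see what actually remains in shared buffers after a dump or restore, the pg_buffercache contrib extension can list the most heavily cached relations. A sketch, assuming the extension is available and you have sufficient privileges:

-- pg_buffercache ships with the contrib modules
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top 10 relations by number of cached 8 kB buffers in the current database
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;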

Why does PostgreSQL reserve a fixed amount of memory for the text of the currently executing command?

An excerpt from the documentation:
track_activity_query_size (integer) Specifies the amount of memory reserved to store the text of the currently executing command for each active session, for the pg_stat_activity.query field. If this value is specified without units, it is taken as bytes. The default value is 1024 bytes. This parameter can only be set at server start.
As I understand it, this means that if, for example, track_activity_query_size is set to 10kB, each session will consume 10kB for the text of the currently executing command, regardless of the actual size of that text.
Why is it implemented this way? Would it be too slow to dynamically allocate only the amount actually needed?
This parameter determines how much memory is allocated in shared memory structures that contain query texts.
PostgreSQL allocates such shared memory areas at server start and does not change their size later. This is to make the code (that has to work on many operating systems) simple and robust. Once you know max_connections and track_activity_query_size, you can determine the maximum memory required.
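A rough back-of-the-envelope check (the values shown are hypothetical):

SHOW max_connections;             -- e.g. 200
SHOW track_activity_query_size;   -- e.g. 10kB
-- the query-text area in shared memory is then roughly 200 * 10 kB = ~2 MB,
-- allocated once at server start regardless of how long the queries actually are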

When should one vacuum a database, and when analyze?

I just want to check that my understanding of these two things is correct. If it's relevant, I am using Postgres 9.4.
I believe that one should vacuum a database when looking to reclaim space from the filesystem, e.g. periodically after deleting tables or large numbers of rows.
I believe that one should analyse a database after creating new indexes, or (periodically) after adding or deleting large numbers of rows from a table, so that the query planner can make good calls.
Does that sound right?
vacuum analyze;
collects statistics and should be run as often as the data changes (especially after bulk inserts). It does not take exclusive locks. It puts some load on the system, but it is worth it. It does not reduce the size of the table, but marks scattered freed-up space (e.g. from deleted rows) for reuse.
vacuum full;
reorganises the table by creating a copy of it and switching over to it. This form of vacuum requires additional disk space to run, but it reclaims all unused space in the object. It therefore requires an exclusive lock on the object (other sessions must wait for it to complete). Run it when data has changed heavily (deletes, updates) and when you can afford to make other sessions wait.
Both are very important on a dynamic database.
Correct.
I would add that you can change the value of the default_statistics_target parameter (default 100) in the postgresql.conf file to a higher number, after which you should reload the configuration and run ANALYZE to obtain more accurate statistics.
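For example (the table and column names are hypothetical):

VACUUM (VERBOSE, ANALYZE) events;   -- mark dead rows reusable and refresh planner statistics
VACUUM FULL events;                 -- rewrite the table; needs an exclusive lock and extra disk space

-- Raise the statistics target for one column only, instead of changing the global default
ALTER TABLE events ALTER COLUMN user_id SET STATISTICS 500;
ANALYZE events;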

How to create a Buffer Pool in a database dedicated only to ONE BIG table?

I have a table TICKET with 400K records in a DB2 database.
I wish to create one huge buffer pool dedicated only to this one big table, for faster response. What are the steps to do it?
Also, at the moment I have one buffer pool which covers the whole tablespace with all the tables (about 200) in the database! What will then happen to my specific table and that old, first-created buffer pool? Should the table stay in the first buffer pool, or how do I remove it from that buffer pool?
Also, are there any risks with this action?
Thank you
I think this article will help you: http://www.ibm.com/developerworks/data/library/techarticle/0212wieser/index.html
Moving your large table into a different buffer pool may increase performance, but it depends on your use case. A relevant quote from the article:
Having more than one buffer pool can preserve data in the buffers. For
example, you might have a database with many very-frequently used
small tables, which would normally be in the buffer in their entirety
to be accessible very quickly. You might also have a query that runs
against a very large table that uses the same buffer pool and involves
reading more pages than the total buffer size. When this query runs,
the pages from the small, very frequently used tables are lost, making
it necessary to re-read them when they are needed again. If the small
tables have their own buffer pool, thereby making it necessary for
them to have their own table space, their pages cannot be overwritten
by the large query. This can lead to better overall system
performance, albeit at the price of a small negative effect on the
large query.
If you do decide to do this, you can only have one buffer pool per tablespace, so you would need to move your large table into its own tablespace. The article gives examples of creating tablespaces and buffer pools.
A table can be moved to another tablespace with ADMIN_MOVE_TABLE. I don't think it is risky. It captures changes that may be made to the source table during moving. The only thing it does is disable a few (rarely used) actions on the source table during moving.
You assign a buffer pool to a tablespace by specifying it in the CREATE TABLESPACE or ALTER TABLESPACE statement.
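A rough sketch of the steps (all object names are hypothetical, and the exact ADMIN_MOVE_TABLE parameter list should be checked against your DB2 version):

-- 1. Create a dedicated buffer pool (SIZE is in pages of the given page size)
CREATE BUFFERPOOL ticket_bp SIZE 100000 PAGESIZE 8K;

-- 2. Create a tablespace that uses it (page sizes must match)
CREATE TABLESPACE ticket_ts PAGESIZE 8K BUFFERPOOL ticket_bp;

-- 3. Move the table online into the new tablespace
CALL SYSPROC.ADMIN_MOVE_TABLE(
  'MYSCHEMA', 'TICKET',                   -- source schema and table
  'TICKET_TS', 'TICKET_TS', 'TICKET_TS',  -- data, index and long/LOB tablespaces
  '', '', '', '', '',                     -- keep MDC, partitioning and column definitions unchanged
  'MOVE');                                -- perform the whole move in one call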

Is killing a "CLUSTER ON index" dangerous for the database?

The whole question is in the title:
if we kill a CLUSTER query on a 100 million row table, will it be dangerous for the database?
The query has been running for 2 hours now, and I need to access the table tomorrow morning (12 hours left, hopefully).
I thought it would be far quicker; my database is running on RAID SSDs and a dual Xeon processor.
Thanks for your wise advice.
Sid
No, you can kill the cluster operation without any risk. Until the operation is done, nothing is changed in the original table and index files. From the manual:
When an index scan is used, a temporary copy of the table is created
that contains the table data in the index order. Temporary copies of
each index on the table are created as well. Therefore, you need free
space on disk at least equal to the sum of the table size and the
index sizes.
When a sequential scan and sort is used, a temporary sort file is also
created, so that the peak temporary space requirement is as much as
double the table size, plus the index sizes.
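To check whether there is enough free disk space for the temporary copy, you can look at the current table and index sizes first (the table name is hypothetical):

SELECT pg_size_pretty(pg_table_size('big_table'))          AS table_size,
       pg_size_pretty(pg_indexes_size('big_table'))        AS index_size,
       pg_size_pretty(pg_total_relation_size('big_table')) AS total_size;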
As @Frank points out, it is perfectly fine to do so.
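If you do decide to stop it, here is a sketch of cancelling the backend from another session (the pid shown is a placeholder; use whatever pg_stat_activity reports):

-- Find the backend running the CLUSTER
SELECT pid, state, query FROM pg_stat_activity WHERE query ILIKE 'cluster%';

-- Ask it to cancel; fall back to pg_terminate_backend(pid) only if the cancel is ignored
SELECT pg_cancel_backend(12345);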
Assuming you want to run this query in the future and assuming you have the luxury of a service window and can afford some downtime, I'd tweak some settings to boost the performance a bit.
In your configuration:
turn off fsync, for higher throughput to the file system
Fsync stands for file system sync. With fsync on, the database waits for the file system to commit on every page flush.
maximize your maintenance_work_mem
It's OK to take most of the available memory, as it will not be allocated during production hours. I don't know how big your table and the index you are working on are, but things will run faster when they can be fully loaded into main memory.
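As a sketch of what that could look like during a maintenance window (the values and the table/index names are hypothetical; fsync can only be changed in postgresql.conf followed by a reload, not per session, and running with it off risks data loss if the server crashes):

-- in postgresql.conf, for the maintenance window only:
--   fsync = off
-- then reload, e.g. with: SELECT pg_reload_conf();

SET maintenance_work_mem = '8GB';    -- per-session; generous for the sort and index builds
CLUSTER ticket USING ticket_pkey;    -- hypothetical table and index
-- afterwards, restore fsync = on in postgresql.conf and reload again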