Azure Postgres AUTOVACUUM AND ANALYZE THRESHOLD - How to change it? - postgresql

I am back with another Postgres question. We are using the managed service from Azure, which relies on autovacuum: both vacuuming and statistics collection are automatic.
The problem I am seeing is that for one specific query, when it runs at certain hours, the plan is poor. I noticed that after collecting statistics manually, the plan becomes good again.
From the documentation of Azure I got the following:
The vacuum process reads physical pages and checks for dead tuples.
Every page in shared_buffers is considered to have a cost of 1
(vacuum_cost_page_hit). All other pages are considered to have a cost
of 20 (vacuum_cost_page_dirty), if dead tuples exist, or 10
(vacuum_cost_page_miss), if no dead tuples exist. The vacuum operation
stops when the process exceeds the autovacuum_vacuum_cost_limit.
After the limit is reached, the process sleeps for the duration
specified by the autovacuum_vacuum_cost_delay parameter before it
starts again. If the limit isn't reached, autovacuum starts after the
value specified by the autovacuum_naptime parameter.
In summary, the autovacuum_vacuum_cost_delay and
autovacuum_vacuum_cost_limit parameters control how much data cleanup
is allowed per unit of time. Note that the default values are too low
for most pricing tiers. The optimal values for these parameters are
pricing tier-dependent and should be configured accordingly.
The autovacuum_max_workers parameter determines the maximum number of
autovacuum processes that can run simultaneously.
With PostgreSQL, you can set these parameters at the table level or
instance level. Today, you can set these parameters at the table level
only in Azure Database for PostgreSQL.
Let's imagine that I want to override the default values for specific tables, as currently all of them use the database-wide defaults.
With that in mind, I could try the following (where X is the part I don't know):
ALTER TABLE tablename SET (autovacuum_vacuum_threshold = X );
ALTER TABLE tablename SET (autovacuum_vacuum_scale_factor = X);
ALTER TABLE tablename SET (autovacuum_vacuum_cost_limit = X );
ALTER TABLE tablename SET (autovacuum_vacuum_cost_delay = X );
Currently I have these values in pg_stat_all_tables
SELECT schemaname,relname,n_tup_ins,n_tup_upd,n_tup_del,last_analyze,last_autovacuum,last_autoanalyze,analyze_count,autoanalyze_count
FROM pg_stat_all_tables where schemaname = 'swp_am_hcbe_pro'
and relname in ( 'submissions','applications' )
"swp_am_hcbe_pro" "applications" "264615" "11688533" "18278" "2021-11-11 08:45:45.878654+00" "2021-11-11 13:50:27.498745+00" "2021-11-10 12:02:04.690082+00" "1" "152"
"swp_am_hcbe_pro" "submissions" "663107" "687757" "51603" "2021-11-11 08:46:48.054731+00" "2021-11-07 04:41:30.205468+00" "2021-11-04 15:25:45.758618+00" "1" "20"
Those two tables are by far the ones getting most of the DML activity.
Questions
How can I determine which values for these autovacuum parameters are best for tables with heavy DML activity?
How can I make Postgres run the automatic analyze more often on these tables, so that I get more up-to-date statistics? According to the documentation:
autovacuum_analyze_threshold
Specifies the minimum number of inserted, updated or deleted tuples
needed to trigger an ANALYZE in any one table. The default is 50
tuples. This parameter can only be set in the postgresql.conf file or
on the server command line; but the setting can be overridden for
individual tables by changing table storage parameters.
Does that mean that once deletes, updates, and inserts add up to 50 tuples, an auto-analyze is triggered? Because I am not seeing this behaviour.
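For reference, this is how I currently check how far each table is from the autoanalyze trigger point. My assumption is that the trigger is autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples; note the query uses the global settings and ignores any per-table overrides:
SELECT s.schemaname,
       s.relname,
       s.n_mod_since_analyze,
       current_setting('autovacuum_analyze_threshold')::bigint
       + current_setting('autovacuum_analyze_scale_factor')::numeric * c.reltuples::numeric
         AS approx_autoanalyze_trigger
FROM   pg_stat_all_tables s
JOIN   pg_class c ON c.oid = s.relid
WHERE  s.schemaname = 'swp_am_hcbe_pro'
AND    s.relname IN ('submissions', 'applications');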
If I change these values for the tables, should I do the same for their indexes? Is there any option, like a cascade, so that changing the table also applies the values to the corresponding indexes?
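For reference, this is the kind of per-table change I have in mind to make autoanalyze run more often on the two hot tables. The numbers are placeholders, which is exactly what I am asking about:
ALTER TABLE swp_am_hcbe_pro.applications SET (
    autovacuum_analyze_scale_factor = 0.01,   -- placeholder: analyze after ~1% of rows change
    autovacuum_analyze_threshold    = 1000    -- placeholder
);
ALTER TABLE swp_am_hcbe_pro.submissions SET (
    autovacuum_analyze_scale_factor = 0.01,
    autovacuum_analyze_threshold    = 1000
);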
Thank you in advance for any advice. If you need any further details, let me know.

Related

PostgreSQL update with indexed joins

PostgreSQL 14 for Windows on a medium-sized machine. I'm using the default settings - literally as shipped. I'm new to PostgreSQL, coming from MS SQL Server.
A seemingly simple statement that runs in a minute in MS SQL Server is taking hours in PostgreSQL, and I'm not sure why. I'm busy migrating over, i.e. it is the exact same data on the exact same hardware.
It's an update statement that joins a master table (roughly 1000 records) and fact table (roughly 8 million records). I've masked the tables and exact application here, but the structure is exactly reflective of the real data.
CREATE TABLE public.tmaster(
masterid SERIAL NOT NULL,
masterfield1 character varying,
PRIMARY KEY(masterid)
);
-- I've read that the primary key tag creates an index on that field automatically - correct?
CREATE TABLE public.tfact(
factid SERIAL NOT NULL,
masterid int not null,
fieldtoupdate character varying NULL,
PRIMARY KEY(factid),
CONSTRAINT fk_public_tfact_tmaster
FOREIGN KEY(masterid)
REFERENCES public.tmaster(masterid)
);
CREATE INDEX idx_public_fact_master on public.tfact(masterid);
The idea is to set public.tfact.fieldtoupdate = public.tmaster.masterfield1
I've tried the following ways (all taking over an hour to complete):
update public.tfact b
set fieldtoupdate = c.masterfield1
from public.tmaster c
where c.masterid = b.masterid;
update public.tfact b
set fieldtoupdate = c.masterfield1
from public.tfact bb
join public.tmaster c
on c.masterid = bb.masterid
where bb.factid = b.factid;
with t as (
select b.factid,
c.masterfield1 as fieldtoupdate
from public.tfact b
join public.tmaster c
on c.masterid = b.masterid
)
update public.tfact b
set fieldtoupdate = t.fieldtoupdate
from t
where t.factid = b.factid;
What am I missing? This should take no time at all, yet it takes over an hour?
Any help is appreciated...
If the table was tightly packed to start with, there will be no room to use the HOT (heap-only tuple) UPDATE shortcut. Updating 8 million rows will then mean inserting 8 million new row versions and doing index maintenance on each one.
If your indexed columns on tfact are not clustered, this can involve large amounts of random IO, and if your RAM is small most of this may be uncached. With slow disk, I can see why this would take a long time, maybe even much longer than an hour.
If you will be doing this on a regular basis, you should change the table's fillfactor to keep it loosely packed.
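For example, something along these lines, where 50 is only illustrative and the right value depends on how much of the table you update at a time:
ALTER TABLE public.tfact SET (fillfactor = 50);
-- the new fillfactor only applies to newly written pages;
-- rewrite the table once (VACUUM FULL or CLUSTER) to repack existing rows with free space
VACUUM FULL public.tfact;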
Note that the default settings are generally suited to a small machine, or at least to a machine where running the database is only a small part of its job. But the only setting likely to affect you here is work_mem, and even that is probably not really a problem for this task.
If you use psql, the command \d+ tfact will show you the fillfactor, if it is set to something other than the default. But note that a changed fillfactor only applies to newly written tuples, not to existing ones. To see how full an existing table actually is, check the free space map with the pg_freespacemap extension and verify that every block has about half of its space available.
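For example (the module ships with PostgreSQL as a contrib extension; the free space map is maintained by VACUUM, so run VACUUM first for accurate numbers):
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
-- avail is the free space per 8 kB page in bytes; around 4000 means roughly half-empty pages
SELECT count(*) AS pages, round(avg(avail)) AS avg_free_bytes
FROM   pg_freespace('public.tfact');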
To see if an index is well clustered, you can check the correlation column of pg_stats on the table for the leading column ("attname") of the index.
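For example, checking the leading column masterid of the index from the question:
SELECT attname, correlation
FROM   pg_stats
WHERE  schemaname = 'public'
AND    tablename  = 'tfact'
AND    attname    = 'masterid';   -- values near 1 or -1 mean good physical ordering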

PostgreSQL: UPDATE large table

I have a large PostgreSQL table of 29 million rows. The size (according to the stats tab in pgAdmin) is almost 9 GB. The table is PostGIS-enabled, with an empty geometry column.
I want to UPDATE the geometry column using ST_GeomFromText, reading from X and Y coordinate columns (SRID: 27700) stored in the same table. However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
To overcome this, should I UPDATE the 29 million rows in batches/stages? How can I do 1 million rows (the FIRST 1 million), then do the next 1 million rows until I reach 29 million?
Or are there other more efficient ways of updating large tables like this?
I should add, the table is hosted in AWS.
My UPDATE query is:
UPDATE schema.table
SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')',27700);
You did not give any server specs, but writing 9 GB can be pretty fast on recent hardware.
You should be OK with one long UPDATE - unless you have concurrent writes to this table.
A common trick to overcome this problem (a very long transaction locking writes to the table) is to split the UPDATE into ranges based on the primary key, run in separate transactions.
/* Use PK or any attribute with a known distribution pattern */
UPDATE schema.table SET ... WHERE id BETWEEN 0 AND 1000000;
UPDATE schema.table SET ... WHERE id BETWEEN 1000001 AND 2000000;
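If you would rather not write every range by hand, a procedural sketch like this generates the batches. It assumes a bigint primary key named id (not given in the question), and it needs PostgreSQL 11 or newer for COMMIT inside a DO block run outside an explicit transaction:
DO $$
DECLARE
    batch_size bigint := 1000000;
    last_id    bigint;
    lower_id   bigint := 0;
BEGIN
    SELECT max(id) INTO last_id FROM schema.tablename;  -- assumes an integer key named id
    WHILE lower_id < last_id LOOP
        UPDATE schema.tablename
        SET    geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')', 27700)
        WHERE  id > lower_id
        AND    id <= lower_id + batch_size;
        COMMIT;   -- transaction control in a DO block needs PostgreSQL 11+
        lower_id := lower_id + batch_size;
    END LOOP;
END
$$;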
For high levels of concurrent writes, people use more subtle tricks (like SELECT FOR UPDATE / NOWAIT, lightweight locks, retry logic, etc.).
From my original question:
However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
Turns out our Amazon AWS instance database was running out of space, stopping my original ST_GeomFromText query from completing. Freeing up space fixed it.
On an important note, as suggested by mlinth, ST_Point ran my query far quicker than ST_GeomFromText (24 minutes vs 2 hours).
My final query being:
UPDATE schema.tablename
SET geom = ST_SetSRID(ST_Point(eastingcolumn,northingcolumn),27700);

First call of query on big table is surprisingly slow

I have a query that feels like it is taking more time than it should. This only applies to the first run for a given set of parameters, so once cached there is no issue.
I am not sure what to expect, however. Given the setup and settings, I was hoping someone could shed some light on a few questions and give some insight into what can be done to speed up the query. The table in question is fairly large; Postgres estimates around 155,963,000 rows in it (14 GB).
Query
select ts, sum(amp) as total_amp, sum(230 * factor) as wh
from data_cbm_aggregation_15_min
where virtual_id in (1818) and ts between '2015-02-01 00:00:00' and '2015-03-31 23:59:59'
and deleted is null
group by ts
order by ts
When I started looking into this, the query took around 15 seconds; after some changes I have gotten it down to around 10 seconds, which still seems long for a simple query like this. Here are the results from EXPLAIN ANALYZE: http://explain.depesz.com/s/97V1. Note that the reason GroupAggregate returns the same number of rows is that this example only uses one virtual_id, but there can be more.
Table and index
The table being queried; it has values inserted into it every 15 minutes:
CREATE TABLE data_cbm_aggregation_15_min (
virtual_id integer NOT NULL,
ts timestamp without time zone NOT NULL,
amp real,
recs smallint,
min_amp real,
max_amp real,
deleted boolean,
factor real DEFAULT 0.25,
min_amp_ts timestamp without time zone,
max_amp_ts timestamp without time zone
);
ALTER TABLE data_cbm_aggregation_15_min ALTER COLUMN virtual_id SET STATISTICS 1000;
ALTER TABLE data_cbm_aggregation_15_min ALTER COLUMN ts SET STATISTICS 1000;
The index that is used in the query
CREATE UNIQUE INDEX idx_data_cbm_aggregation_15_min_virtual_id_ts
ON data_cbm_aggregation_15_min USING btree (virtual_id, ts DESC);
ALTER TABLE data_cbm_aggregation_15_min
CLUSTER ON idx_data_cbm_aggregation_15_min_virtual_id_ts;
Postgres settings
All settings other than the following are default:
default_statistics_target = 100
maintenance_work_mem = 2GB
effective_cache_size = 11GB
work_mem = 256MB
shared_buffers = 3840MB
random_page_cost = 1
What I have tried
I have been following the Things to try before you post in https://wiki.postgresql.org/wiki/Slow_Query_Questions and the results in a bit more detail were as follows:
Fiddling with the Postgres settings, mostly lowering random_page_cost, since the index scan, while nothing special, is miles ahead of the bitmap heap scan the planner chose when random_page_cost was higher.
Increasing the statistics targets for the virtual_id and ts columns, which the index and WHERE conditions are based on. The query planner's estimated row count was much closer to the actual row count after changing this.
Clustering on the idx_data_cbm_aggregation_15_min_virtual_id_ts index did not seem to change much, not that I noticed.
Running VACUUM manually did not change much, I am already running autovacuum so this was no surprise.
Running REINDEX on the index shrunk it considerably (by almost 50%!) but it did not improve the speed by much.
A couple of small improvements
SELECT ts, sum(amp) AS total_amp, sum(factor) * 230 AS wh
FROM data_cbm_aggregation_15_min
WHERE virtual_id = 1818
AND ts >= '2015-02-01 00:00'
AND ts < '2015-04-01 00:00'
AND deleted IS NULL
GROUP BY ts
ORDER BY ts;
sum(230 * factor): it's cheaper to multiply the sum once instead of multiplying each element: sum(factor) * 230. The result is the same, even with NULL values.
ts between '2015-02-01 00:00:00' and '2015-03-31 23:59:59' is potentially incorrect. To include all of March 2015, use the presented alternative. BETWEEN is translated to ts >= lower AND ts <= upper anyway. It is always slightly faster to spell it out.
virtual_id in (1818) is just a needlessly convoluted way to say virtual_id = 1818.
Better index, potentially bigger improvement
CREATE INDEX data_cbm_aggregation_15_min_special_idx
ON data_cbm_aggregation_15_min (virtual_id, ts, amp, factor)
WHERE deleted IS NULL;
I see nothing in your question that would suggest DESC in your original index. While Index Scan Backward is almost as fast as a plain Index Scan, it's still better to drop the modifier.
Most importantly, there are index-only scans since Postgres 9.2. The two index columns I appended (amp, factor) only make sense if you get index-only scans out of it.
Since you obviously are not interested in deleted rows, make it a partial index. Only pays if you have more than a few deleted rows in the table.
If you have other large parts of the table that can be excluded, add more conditions - and remember to repeat the condition in the query (even if it seems redundant) so Postgres understands that the index is applicable.
Table definition
Reordering table columns like this would save 8 bytes per row:
CREATE TABLE data_cbm_aggregation_15_min (
virtual_id integer NOT NULL,
recs smallint,
deleted boolean,
ts timestamp NOT NULL,
amp real,
min_amp real,
max_amp real,
factor real DEFAULT 0.25,
min_amp_ts timestamp,
max_amp_ts timestamp
);
Related:
Configuring PostgreSQL for read performance
Most important information for last
The first query call can be substantially more expensive for very big tables, since the whole table cannot be cached. Subsequent calls profit from the populated cache. Postgres caches blocks, not necessarily whole tables.
One more thing that can be important for the first call. Due to the MVCC model of Postgres it has to maintain visibility information. When reading pages of a table for the first time since the last write operation, Postgres opportunistically updates visibility information, which can impose some extra cost for the first access (and help a lot for subsequent calls). More in the manual. Related answer on dba.SE:
Why does a SELECT statement dirty cache buffers in Postgres?
About what you've tried so far
SET STATISTICS 1000 for ts and virtual_id was an excellent idea, but the effect was largely nullified by setting random_page_cost = 1, which basically forces an index scan for this query either way.
random_page_cost = 1 is telling Postgres that random access is just as cheap as sequential access. This makes sense for a DB that (almost) completely resides in cache. For a DB with huge tables like yours, this setting seems too extreme (even if it gets Postgres to favor the desired index scan). Set it to random_page_cost = 1.1 or probably higher.
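To test this without touching the server configuration, a per-session sketch (1.1 is only a starting point; persist the value in postgresql.conf once confirmed):
SET random_page_cost = 1.1;
EXPLAIN (ANALYZE, BUFFERS)
SELECT ts, sum(amp) AS total_amp, sum(factor) * 230 AS wh
FROM   data_cbm_aggregation_15_min
WHERE  virtual_id = 1818
AND    ts >= '2015-02-01 00:00'
AND    ts <  '2015-04-01 00:00'
AND    deleted IS NULL
GROUP  BY ts
ORDER  BY ts;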
A bitmap index scan is typically a good plan for the first call of the query you presented - for data distributed randomly across the table. Since you clustered the table just like you need it for this query, an index scan is more efficient. The question is: will your table stay clustered?
Your settings for work_mem and other resources depend on how much RAM you have, the speed of your disks, on access pattern, how many concurrent connections you typically have, what other programs on the server compete for resources, etc. work_mem = 256MB seems too high. You don't need nearly as much for the presented query. Setting it that high may actually harm performance, because it reduces RAM available to cache.
REINDEX is redundant immediately after CLUSTER, since CLUSTER recreates all indexes anyway. You must have run REINDEX before CLUSTER, or you must have heavy write access on the table to get so much bloat again already.
Various
Upgrade to Postgres 9.4 (or the upcoming 9.5, currently alpha). Version 9.2 is 3 years old now; the latest version has received many improvements.
The query plan suggests that nothing is actually aggregated. rows=4,117 are read from the index and rows=4,117 remain after GroupAggregate. Looks like rows are unique on ts already? Then you can remove the aggregation completely and make it a simple SELECT ...
If that's just a misleading EXPLAIN output and you typically output much fewer rows than are read, a MATERIALIZED VIEW with index on ts would be another option. Especially in combination with Postgres 9.4, which introduces REFRESH MATERIALIZED VIEW CONCURRENTLY.
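A hedged sketch of that option (the view and index names are made up; adjust the grouping to your real access pattern):
CREATE MATERIALIZED VIEW data_cbm_15_min_sums AS
SELECT virtual_id, ts, sum(amp) AS total_amp, sum(factor) * 230 AS wh
FROM   data_cbm_aggregation_15_min
WHERE  deleted IS NULL
GROUP  BY virtual_id, ts;

CREATE UNIQUE INDEX ON data_cbm_15_min_sums (virtual_id, ts);

REFRESH MATERIALIZED VIEW CONCURRENTLY data_cbm_15_min_sums;  -- Postgres 9.4+; requires the unique index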

In-order sequence generation

Is there a way to generate some kind of in-order identifier for a table's records?
Suppose that we have two threads doing queries:
Thread 1:
begin;
insert into table1(id, value) values (nextval('table1_seq'), 'hello');
commit;
Thread 2:
begin;
insert into table1(id, value) values (nextval('table1_seq'), 'world');
commit;
It's entirely possible (depending on timing) that an external observer would see the (2, 'world') record appear before the (1, 'hello').
That's fine, but I want a way to get all the records in the 'table1' that appeared since the last time the external observer checked it.
So, is there any way to get the records in the order they were inserted? Maybe OIDs can help?
No. Since there is no natural order of rows in a database table, all you have to work with is the values in your table.
Well, there are the Postgres specific system columns cmin and ctid you could abuse to some degree.
The tuple ID (ctid) contains the file block number and position in the block for the row. So this represents the current physical ordering on disk. Later additions will have a bigger ctid, normally. Your SELECT statement could look like this
SELECT *, ctid -- save ctid from last row in last_ctid
FROM tbl
WHERE ctid > last_ctid
ORDER BY ctid
ctid has the data type tid. Example: '(0,9)'::tid
However, it is not stable as a long-term identifier, since VACUUM, a concurrent UPDATE, or some other operations can change the physical location of a tuple at any time. For the duration of a transaction it is stable, though. And if you are just inserting and nothing else, it should work locally for your purpose.
I would add a timestamp column with default now() in addition to the serial column ...
I would also let a column default populate your id column (a serial or IDENTITY column). That retrieves the number from the sequence at a later stage than explicitly fetching and then inserting it, thereby minimizing (but not eliminating) the window for a race condition - the chance that a lower id would be inserted at a later time. Detailed instructions:
Auto increment table column
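A minimal sketch combining both suggestions (IDENTITY needs Postgres 10 or later; the column names are illustrative):
CREATE TABLE table1 (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    value       text,
    inserted_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO table1 (value) VALUES ('hello');  -- id and inserted_at are filled by their defaults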
What you want is to force transactions to commit (making their inserts visible) in the same order that they did the inserts. As far as other clients are concerned the inserts haven't happened until they're committed, since they might roll back and vanish.
This is true even if you don't wrap the inserts in an explicit begin / commit. Transaction commit, even if done implicitly, still doesn't necessarily happen in the same order that the row itself was inserted. It's subject to operating system CPU scheduler ordering decisions, etc.
Even if PostgreSQL supported dirty reads this would still be true. Just because you start three inserts in a given order doesn't mean they'll finish in that order.
There is no easy or reliable way to do what you seem to want that will preserve concurrency. You'll need to do your inserts in order on a single worker - or use table locking as Tometzky suggests, which has basically the same effect since only one of your insert threads can be doing anything at any given time.
You can use advisory locking, but the effect is the same.
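A sketch of the advisory-lock variant (the lock key 42 is arbitrary):
BEGIN;
SELECT pg_advisory_xact_lock(42);  -- released automatically at commit; serializes the inserting sessions
INSERT INTO table1 (id, value) VALUES (nextval('table1_seq'), 'hello');
COMMIT;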
Using a timestamp won't help, since you don't know if for any two timestamps there's a row with a timestamp between the two that hasn't yet been committed.
You can't rely on an identity column where you read rows only up to the first "gap" because gaps are normal in system-generated columns due to rollbacks.
I think you should step back and look at why you have this requirement and, given this requirement, why you're using individual concurrent inserts.
Maybe you'll be better off doing small-block batched inserts from a single session?
If you mean that every query that sees the world row must also see the hello row, then you'd need to do:
begin;
lock table table1 in share update exclusive mode;
insert into table1(id, value) values (nextval('table1_seq'), 'hello');
commit;
This share update exclusive mode is the weakest lock mode which is self-exclusive — only one session can hold it at a time.
Be aware that this will not make this sequence gap-less — this is a different issue.
We found another solution for recent PostgreSQL servers, similar to Erwin's answer, but using txid.
When inserting rows, instead of using a sequence, insert txid_current() as row id. This ID is monotonically increasing on each new transaction.
Then, when selecting rows from the table, add to the WHERE clause id < txid_snapshot_xmin(txid_current_snapshot()).
txid_snapshot_xmin(txid_current_snapshot()) corresponds to the transaction ID of the oldest still-open transaction. So if row 20 is committed before row 19, it will be filtered out because transaction 19 is still open. Once transaction 19 commits, both rows 19 and 20 become visible.
When no other transaction is open, the snapshot xmin is the transaction ID of the currently running SELECT statement.
The returned transaction IDs are 64-bit: the higher 32 bits are an epoch and the lower 32 bits are the actual ID.
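A minimal sketch of the scheme, using the table from the earlier examples (it assumes id is bigint; note that all rows inserted in one transaction get the same id this way):
-- insert: use the current transaction ID as the row id
INSERT INTO table1 (id, value) VALUES (txid_current(), 'hello');

-- read: only rows whose transaction can no longer be in flight
SELECT *
FROM   table1
WHERE  id < txid_snapshot_xmin(txid_current_snapshot())
ORDER  BY id;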
Here is the documentation of these functions: https://www.postgresql.org/docs/9.6/static/functions-info.html#FUNCTIONS-TXID-SNAPSHOT
Credits to tux3 for the idea.

Optimize Postgres query on timestamp range

I have the following table and indices defined:
CREATE TABLE ticket (
wid bigint NOT NULL DEFAULT nextval('tickets_id_seq'::regclass),
eid bigint,
created timestamp with time zone NOT NULL DEFAULT now(),
status integer NOT NULL DEFAULT 0,
argsxml text,
moduleid character varying(255),
source_id bigint,
file_type_id bigint,
file_name character varying(255),
status_reason character varying(255),
...
)
I created an index on the created timestamp as follows:
CREATE INDEX ticket_1_idx
ON ticket
USING btree
(created );
Here's my query:
select * from ticket
where created between '2012-12-19 00:00:00' and '2012-12-20 00:00:00'
This was working fine until the number of records started to grow (about 5 million) and now it's taking forever to return.
Explain analyze reveals this:
Index Scan using ticket_1_idx on ticket (cost=0.00..10202.64 rows=52543 width=1297) (actual time=0.109..125.704 rows=53340 loops=1)
Index Cond: ((created >= '2012-12-19 00:00:00+00'::timestamp with time zone) AND (created <= '2012-12-20 00:00:00+00'::timestamp with time zone))
Total runtime: 175.853 ms
So far I've tried setting:
random_page_cost = 1.75
effective_cache_size = 3
Also created:
create CLUSTER ticket USING ticket_1_idx;
Nothing works. What am I doing wrong? Why is it selecting sequential scan? The indexes are supposed to make the query fast. Anything that can be done to optimize it?
CLUSTER
If you intend to use CLUSTER, the displayed syntax is invalid.
create CLUSTER ticket USING ticket_1_idx;
Run once:
CLUSTER ticket USING ticket_1_idx;
This can help a lot with bigger result sets. Less for a single or few rows returned.
If your table isn't read-only the effect deteriorates over time. Re-run CLUSTER at reasonable intervals. Postgres remembers the index for subsequent calls, so this works, too:
CLUSTER ticket;
(But I would rather be explicit and use the first form.)
However, if you have lots of updates, CLUSTER (or VACUUM FULL) may actually be bad for performance. The right amount of bloat allows UPDATE to place new row versions on the same data page and avoids the need for extending the underlying physical file (expensively) too often. You can use a carefully tuned FILLFACTOR to get the best of both worlds:
Fillfactor for a sequential index that is PK
pg_repack / pg_squeeze
CLUSTER takes an exclusive lock on the table, which may be a problem in a multi-user environment. Quoting the manual:
When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired
on it. This prevents any other database operations (both reads and
writes) from operating on the table until the CLUSTER is finished.
Bold emphasis mine. Consider the alternatives!
pg_repack:
Unlike CLUSTER and VACUUM FULL it works online, without holding an
exclusive lock on the processed tables during processing. pg_repack is
efficient to boot, with performance comparable to using CLUSTER directly.
and:
pg_repack needs to take an exclusive lock at the end of the reorganization.
The current version 1.4.7 works with PostgreSQL 9.4 - 14.
pg_squeeze is a newer alternative that claims:
In fact we try to replace pg_repack extension.
The current version 1.4 works with Postgres 10 - 14.
Query
The query is simple enough not to cause any performance problems per se.
However: The BETWEEN construct includes boundaries. Your query selects all of Dec. 19, plus records from Dec. 20, 00:00. That's an extremely unlikely requirement. Chances are, you really want:
SELECT *
FROM ticket
WHERE created >= '2012-12-19 00:00'
AND created < '2012-12-20 00:00';
Performance
Why is it selecting sequential scan?
Your EXPLAIN output clearly shows an Index Scan, not a sequential table scan. There must be some kind of misunderstanding.
You may be able to improve performance, but the necessary background information is not in the question. Possible options include:
Only query required columns instead of * to reduce transfer cost (and other performance benefits).
Look at partitioning and put practical time slices into separate tables. Add indexes to partitions as needed.
If partitioning is not an option, another related but less intrusive technique would be to add one or more partial indexes.
For example, if you mostly query the current month, you could create the following partial index:
CREATE INDEX ticket_created_idx ON ticket(created)
WHERE created >= '2012-12-01 00:00:00'::timestamp;
CREATE a new index right before the start of a new month. You can easily automate the task with a cron job.
Optionally DROP partial indexes for old months later.
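A sketch of the monthly rotation (index names and dates are illustrative; CONCURRENTLY avoids blocking writes on a busy table):
-- shortly before January 2013 starts:
CREATE INDEX CONCURRENTLY ticket_created_2013_01_idx ON ticket (created)
WHERE created >= '2013-01-01 00:00:00'::timestamp;

-- once queries no longer target December 2012:
DROP INDEX IF EXISTS ticket_created_2012_12_idx;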
Keep the total index in addition for CLUSTER (which cannot operate on partial indexes). If old records never change, table partitioning would help this task a lot, since you only need to re-cluster newer partitions.
Then again if records never change at all, you probably don't need CLUSTER.
Performance Basics
You may be missing one of the basics. All the usual performance advice applies:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
https://wiki.postgresql.org/wiki/Performance_Optimization