PostgreSQL index bloat ratio higher than table bloat ratio and autovacuum_vacuum_scale_factor

Index bloat is reaching 57%, while table bloat is only 9% and autovacuum_vacuum_scale_factor is only 10%.
What is more surprising is that even the primary key has 57% bloat. My understanding is that, since my primary key is an auto-incrementing single-column key, once 10% of the table's tuples are dead, the primary key index should also have about 10% dead tuples.
Now, when autovacuum runs at 10% dead tuples, it cleans them up. The dead-tuple space then becomes bloat that should be reused by new inserts and updates. But this isn't happening in my database; the bloat size keeps increasing.
FYI:
Index Bloat:
current_database | schemaname | tblname     | idxname          | real_size  | extra_size | extra_ratio      | fillfactor | bloat_size | bloat_ratio      | is_na
------------------+------------+-------------+------------------+------------+------------+------------------+------------+------------+------------------+-------
stackdb          | public     | data_entity | data_entity_pkey | 2766848000 | 1704222720 | 61.5943745373797 |         90 | 1585192960 | 57.2923760177646 |
Table Bloat:
current_database | schemaname | tblname     | real_size   | extra_size | extra_ratio      | fillfactor | bloat_size | bloat_ratio      | is_na
------------------+------------+-------------+-------------+------------+------------------+------------+------------+------------------+-------
stackdb          | public     | data_entity | 10106732544 | 1007288320 | 9.96650812332014 |        100 | 1007288320 | 9.96650812332014 | f
Autovacuum Settings:
stackdb=> show autovacuum_vacuum_scale_factor;
autovacuum_vacuum_scale_factor
--------------------------------
0.1
(1 row)
stackdb=> show autovacuum_vacuum_threshold;
autovacuum_vacuum_threshold
-----------------------------
50
(1 row)
Note:
autovacuum is on.
autovacuum runs successfully at the defined intervals.
PostgreSQL is running version 10.6. The same issue has been observed with version 12.x.

First: an index bloat of 57% is totally healthy. Don't worry.
Indexes become more bloated than tables because their empty space cannot be reused as freely as it can in a table. The table, also known as the “heap”, has no predetermined ordering: if a new row is written as the result of an INSERT or UPDATE, it ends up in the first page with enough free space, so it is easy to keep bloat low as long as VACUUM does its job.
B-tree indexes are different: their entries have a defined ordering, so the database is not free to choose where to put a new entry. It may have to go into a page that is already full, causing a page split, while elsewhere in the index there are pages that are almost empty.
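If you want to measure the actual state of a B-tree index rather than rely on bloat-estimation queries, the pgstattuple extension provides pgstatindex(); a minimal check against the primary key from the question:
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- avg_leaf_density close to the fillfactor (here 90) means little real bloat;
-- low values mean the leaf pages are mostly empty
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('data_entity_pkey');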

Related

How to make a big Postgres DB faster?

I have a big Postgres database (around 75 GB) and queries are very slow. Is there any way to make them faster?
About database:
List of relations
Schema | Name | Type | Owner | Persistence | Access method | Size | Description
--------+-------------------+----------+----------+-------------+---------------+------------+-------------
public | fingerprints | table | postgres | permanent | heap | 35 GB |
public | songs | table | postgres | permanent | heap | 26 MB |
public | songs_song_id_seq | sequence | postgres | permanent | | 8192 bytes |
\d+ fingerprints
Table "public.fingerprints"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
---------------+-----------------------------+-----------+----------+---------+----------+-------------+--------------+-------------
hash | bytea | | not null | | extended | | |
song_id | integer | | not null | | plain | | |
offset | integer | | not null | | plain | | |
date_created | timestamp without time zone | | not null | now() | plain | | |
date_modified | timestamp without time zone | | not null | now() | plain | | |
Indexes:
"ix_fingerprints_hash" hash (hash)
"uq_fingerprints" UNIQUE CONSTRAINT, btree (song_id, "offset", hash)
Foreign-key constraints:
"fk_fingerprints_song_id" FOREIGN KEY (song_id) REFERENCES songs(song_id) ON DELETE CASCADE
Access method: heap
\d+ songs
Table "public.songs"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
---------------+-----------------------------+-----------+----------+----------------------------------------+----------+-------------+--------------+-------------
song_id | integer | | not null | nextval('songs_song_id_seq'::regclass) | plain | | |
song_name | character varying(250) | | not null | | extended | | |
fingerprinted | smallint | | | 0 | plain | | |
file_sha1 | bytea | | | | extended | | |
total_hashes | integer | | not null | 0 | plain | | |
date_created | timestamp without time zone | | not null | now() | plain | | |
date_modified | timestamp without time zone | | not null | now() | plain | | |
Indexes:
"pk_songs_song_id" PRIMARY KEY, btree (song_id)
Referenced by:
TABLE "fingerprints" CONSTRAINT "fk_fingerprints_song_id" FOREIGN KEY (song_id) REFERENCES songs(song_id) ON DELETE CASCADE
Access method: heap
No need to write to the database, only read. All queries are very simple, essentially:
SELECT song_id FROM fingerprints WHERE hash = X
EXPLAIN(analyze, buffers, format text) SELECT "song_id", "offset" FROM "fingerprints" WHERE "hash" = decode('eeafdd7ce9130f9697','hex');
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
Index Scan using ix_fingerprints_hash on fingerprints (cost=0.00..288.28 rows=256 width=8) (actual time=0.553..234.257 rows=871 loops=1)
Index Cond: (hash = '\xeeafdd7ce9130f9697'::bytea)
Buffers: shared hit=118 read=749
Planning Time: 0.225 ms
Execution Time: 234.463 ms
(5 rows)
234 ms looks fine for a single query. But in reality there are about 3000 queries at a time, which takes about 600 seconds. It is an audio recognition application, and that is how the algorithm works.
About indexes:
CREATE INDEX "ix_fingerprints_hash" ON "fingerprints" USING hash ("hash");
As a connection pooler I use Odyssey.
Little bit of info from config:
shared_buffers = 4GB
huge_pages = try
work_mem = 582kB
maintenance_work_mem = 2GB
effective_io_concurrency = 200
max_worker_processes = 24
max_parallel_workers_per_gather = 12
max_parallel_maintenance_workers = 4
max_parallel_workers = 24
wal_buffers = 16MB
checkpoint_completion_target = 0.9
max_wal_size = 16GB
min_wal_size = 4GB
random_page_cost = 1.1
effective_cache_size = 12GB
Info about hardware:
Xeon 12 core (24 threads)
RAM DDR4 16 GB ECC
NVME disk
Will the database be faster if I buy enough RAM to hold the whole DB in memory (128 GB, for example)? And what parameters should I change to tell Postgres to keep the DB in RAM?
I have read several topics about pg_tune etc., but the experiments didn't show any good results.
Increasing the RAM so that everything can stay in cache (perhaps after using pg_prewarm to get it into cache in the first place) would certainly work. But it is expensive and shouldn't be necessary.
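For reference, a minimal pg_prewarm call could look like this (the extension ships with PostgreSQL; the relation names are the ones from the question):
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- returns the number of blocks read into shared buffers
SELECT pg_prewarm('fingerprints');
SELECT pg_prewarm('ix_fingerprints_hash');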
Having a hash index on something which is already a hashed value is probably not very helpful. Have you tried just a default (btree) index instead?
If you CLUSTER the table on the index over the column named "hash" (which you can only do if it is a btree index) then rows with the same hash code should mostly share the same table page, which would greatly cut down on the number of different buffer reads needed to fetch them all.
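A sketch of that approach (the btree index name here is made up; note that CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the whole table, so plan for downtime and enough disk space):
-- CLUSTER needs a btree index; the existing hash index cannot be used
CREATE INDEX ix_fingerprints_hash_btree ON fingerprints USING btree (hash);

CLUSTER fingerprints USING ix_fingerprints_hash_btree;

-- refresh planner statistics after the rewrite
ANALYZE fingerprints;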
If you could get it to do a bitmap heap scan instead of an index scan, it should be able to have a large number of read requests outstanding at a time, due to effective_io_concurrency. But the planner does not account for effective_io_concurrency when planning, which means it won't choose a bitmap heap scan specifically to get that benefit. Normally an index read finding hundreds of rows on different pages would automatically use a bitmap heap scan, but in your case it is probably the low setting of random_page_cost that inhibits it. The low setting of random_page_cost is probably reasonable in itself, but it does have this unfortunate side effect.
A problem with this strategy is that it doesn't reduce the overall amount of IO needed, it just allows it to overlap and so make better use of multiple IO channels. But if many sessions run many instances of this query at the same time, they will start filling up those channels and competing with each other. So the CLUSTER method is probably superior, as it gets the same answer with less IO.
If you want to play around with bitmap scans, you could temporarily increase random_page_cost or temporarily set enable_indexscan to off.
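For example, to try that in a single session (these settings are session-local and reset at disconnect; the query is the one from the question):
SET enable_indexscan = off;          -- or: SET random_page_cost = 4.0;

EXPLAIN (ANALYZE, BUFFERS)
SELECT "song_id", "offset" FROM "fingerprints"
WHERE "hash" = decode('eeafdd7ce9130f9697', 'hex');
-- look for "Bitmap Heap Scan" in the plan

RESET enable_indexscan;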
No need to write to the database, only read.
So the DB is read-only.
And in comments:
The DB worked fine on a small amount of data (a few GB), but after I filled it up, the database started to slow down.
So indexes have been built up incrementally.
Indexes
UNIQUE CONSTRAINT on (song_id, "offset", hash)
I would replace that with:
ALTER TABLE fingerprints
  DROP CONSTRAINT uq_fingerprints
, ADD CONSTRAINT uq_fingerprints UNIQUE (hash, song_id, "offset") WITH (FILLFACTOR = 100);
This enforces the same constraint, but the leading hash column of the underlying B-tree index now supports the filter on hash in your displayed query. And since all needed columns are included in the index, it also allows much faster index-only scans. The (smaller) index should also be cached more easily than the (bigger) table (plus index).
See:
Is a composite index also good for queries on the first field?
This also rewrites the index in pristine condition, with FILLFACTOR 100 for the read-only DB (instead of the default 90 for a B-tree index).
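To verify that the rewritten constraint is actually used, re-run the query from the question and look for an Index Only Scan; after a VACUUM has updated the visibility map, the plan should also show "Heap Fetches: 0":
EXPLAIN (ANALYZE, BUFFERS)
SELECT "song_id", "offset" FROM "fingerprints"
WHERE "hash" = decode('eeafdd7ce9130f9697', 'hex');
-- expected: Index Only Scan using uq_fingerprints on fingerprints ...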
Hash index on (hash) and CLUSTER
The name of the column "hash" has nothing to do with the name of the index type, which also happens to be "hash". (The column should probably not be named "hash" to begin with.)
If (and only if) you also have other queries centered on one or a few hash values that cannot use index-only scans (and you actually see faster queries with the hash index than without), keep it additionally, and optimize it. (Else drop it!)
ALTER INDEX ix_fingerprints_hash SET (FILLFACTOR = 100);
An incrementally grown hash index may end up with bloat or unbalanced overflow pages. REINDEX takes care of that. While you are at it, increase the FILLFACTOR to 100 (from the default 75 for a hash index) for your read-only (!) DB. REINDEX makes the change effective:
REINDEX INDEX ix_fingerprints_hash;
Or you can CLUSTER (like jjanes already suggested) on the rearranged B-tree index from above:
CLUSTER fingerprints USING uq_fingerprints;
Rewrites the table and all indexes; rows are physically sorted according to the given index, so "clustered" around the leading column(s). Effects are permanent for your read-only DB. But index-only scans do not benefit from this.
When done optimizing, run once:
VACUUM ANALYZE fingerprints;
work_mem
The tiny setting for work_mem stands out:
work_mem = 582kB
Even the (very conservative!) default is 4MB.
But after reading your question again, it would seem you only have tiny queries. So maybe that's ok after all.
Else, with 16 GB RAM you can typically afford 100 times as much. It depends on your workload, of course.
Many small queries, many parallel workers --> keep small work_mem (like 4MB?)
Few big queries, few parallel workers --> go high (like 256MB? or more)
Large amounts of temporary files written by your database over time, and mentions of "disk" in the output of EXPLAIN ANALYZE, indicate the need for more work_mem.
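One way to check that (the counters in pg_stat_database are cumulative since the last statistics reset):
-- large or growing temp_bytes suggests work_mem is too small for some queries
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
WHERE datname = current_database();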
Additional questions
Will the database be faster if I buy enough RAM to hold the whole DB in memory (128 GB, for example)?
More RAM almost always helps until the whole DB can be cached in RAM and all processes can afford all the work_mem they desire.
And what parameters should I change to tell Postgres to keep the DB in RAM?
Everything that's read from the database is cached automatically in system cache and Postgres cache, up to the limit of available RAM. (Setting work_mem too high competes for that same resource.)

PostgreSQL: why table bloat ratio is higher than autovacuum_vacuum_scale_factor

I found that the bloat ratio of feedback_entity is 48%:
current_database | schemaname | tblname         | real_size  | extra_size | extra_ratio      | fillfactor | bloat_size | bloat_ratio      | is_na
------------------+------------+-----------------+------------+------------+------------------+------------+------------+------------------+-------
stackdb          | public     | feedback_entity | 5743878144 | 2785599488 | 48.4968416488746 |        100 | 2785599488 | 48.4968416488746 | f
But when I check the autovacuum settings, the scale factor is only 10%:
stackdb=> show autovacuum_vacuum_scale_factor;
autovacuum_vacuum_scale_factor
--------------------------------
0.1
(1 row)
stackdb=> show autovacuum_vacuum_threshold;
autovacuum_vacuum_threshold
-----------------------------
50
(1 row)
Also:
Autovacuum is on.
Autovacuum for the mentioned table runs regularly at the defined threshold.
My question is: when autovacuum runs at 10% dead tuples, why does the bloat size grow to 48%? I have seen similar behaviour in hundreds of databases/tables. Why does table bloat keep increasing and never come down after a vacuum?
The query that you used to calculate the table bloat is unreliable. To determine the actual bloat, use the pgstattuple extension and a query like this:
SELECT * FROM pgstattuple('public.feedback_entity');
But the table may really be bloated. There are two major reasons for that:
autovacuum runs and finishes in a reasonable time, but it cannot clean up the dead tuples. That may be because there is a long-running open transaction, an abandoned replication slot or a prepared transaction. See this article for details.
autovacuum runs too slow, so that dead rows are generated faster than it can clean them up. The symptoms are lots of dead tuples in pg_stat_user_tables and autovacuum processes that keep running forever. The straightforward solution is to use ALTER TABLE to increase autovacuum_vacuum_cost_limit or reduce autovacuum_vacuum_cost_delay for the afflicted table. An alternative approach, if possible, is to use HOT updates.
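A sketch of such per-table tuning; the values are illustrative only and need to be matched to your workload:
-- let autovacuum on this table do more work per round, with no sleeps
ALTER TABLE feedback_entity SET (
    autovacuum_vacuum_cost_limit = 2000,
    autovacuum_vacuum_cost_delay = 0
);

-- leave headroom in each page so updates can be HOT
-- (affects only newly written pages)
ALTER TABLE feedback_entity SET (fillfactor = 90);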

Postgresql relations per database limitations

In the Postgresql docs Appendix K (https://www.postgresql.org/docs/12/limits.html) there is a list of the limitations:
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| Item | Upper Limit | Comment |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| database size | unlimited | |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| number of databases | 4,294,950,911 | |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| relations per database | 1,431,650,303 | |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| relation size | 32 TB | with the default BLCKSZ of 8192 bytes |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| rows per table | limited by the number of tuples that can fit onto 4,294,967,295 pages | |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| columns per table | 1600 | further limited by tuple size fitting on a single page; see note below |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| field size | 1 GB | |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| identifier length | 63 bytes | can be increased by recompiling PostgreSQL |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| indexes per table | unlimited | constrained by maximum relations per database |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| columns per index | 32 | can be increased by recompiling PostgreSQL |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
| partition keys | 32 | can be increased by recompiling PostgreSQL |
+------------------------+-----------------------------------------------------------------------+------------------------------------------------------------------------+
Does the line 'relations per database' mean the total number of relations in the database, or the number of declared foreign keys?
For example, do 2 tables joined by a foreign key count as 1 relation in the database? Or as 1 × the number of rows (1,000 rows equals 1,000 relations)?
Thanks a lot for your help :)
“Relation” is anything that is stored in the catalog table pg_class: tables, views, sequences, indexes, materialized views, partitioned tables and partitioned indexes.
That limit is mostly theoretical; in practice you will get into trouble once you have 100,000 or so relations.
See here:
https://www.postgresql.org/docs/current/tutorial-concepts.html
Relation is essentially a mathematical term for table.
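To see how many relations a database actually holds, count the entries in pg_class (note that this includes the system catalogs), for example:
-- r = table, i = index, S = sequence, v = view, m = materialized view,
-- t = TOAST table, p = partitioned table
SELECT relkind, count(*)
FROM pg_class
GROUP BY relkind
ORDER BY count(*) DESC;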

Postgresql "CLUSTER tablename USING indexname;" takes more disk space than table size plus all indexes combined

I have a quite large table in the database. The sizes of the table and its indexes are shown below:
table_size
----------------
22 GB
schemaname | scan_count | tablename | indexname | index_size
------------+------------+-----------+-----------------------------------------------------------------+------------
public | 1352665306 | t1 | ind1 | 6686 MB
public | 1492127808 | t1 | ind2 | 6587 MB
public | 3492322 | t1 | ind3 | 4747 MB
public | 71810172 | t1 | ind4 | 4237 MB
public | 80547954 | t1 | cluster_ind | 4035 MB
public | 3628773 | t1 | ind6 | 3700 MB
As you can see, the table size is 22 GB. It has 6 different indexes, which take 30 GB of space combined.
According to postgresql docs:
CLUSTER can re-sort the table using either an index scan on the specified index, or (if the index is a b-tree) a sequential scan followed by sorting. It will attempt to choose the method that will be faster, based on planner cost parameters and available statistical information.
When an index scan is used, a temporary copy of the table is created that contains the table data in the index order. Temporary copies of each index on the table are created as well. Therefore, you need free space on disk at least equal to the sum of the table size and the index sizes.
However, although df -h shows 75 GB of free space on the partition containing my PostgreSQL data, I still run out of disk space when I try to cluster the table on the cluster_ind index using the command below:
CLUSTER t1 USING cluster_ind;
and I face this error:
ERROR: could not extend file "base/16385/2165933.3": wrote only 4096 of 8192 bytes at block 394731
HINT: Check free disk space.
Question: what else uses disk space during clustering, other than the table size plus the sum of the sizes of its indexes? And how can I estimate the space required to run a CLUSTER command on a table using an index?
That could be the TOAST table.
Use pg_total_relation_size('table_name') to get the full size of the table.
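For example, with the table from the question:
-- heap + TOAST + all indexes, in human-readable form
SELECT pg_size_pretty(pg_total_relation_size('t1'));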

Refreshing materialized view CONCURRENTLY causes table bloat

In PostgreSQL 9.5 I decided to create a materialized view "effects" and scheduled an hourly concurrent refresh, since I wanted it to always be available:
REFRESH MATERIALIZED VIEW CONCURRENTLY effects;
In the beginning everything worked well, my materialized view was refreshing and disk space usage remained more or less constant.
The Issue
After some time, though, disk usage started to grow linearly.
I've concluded that the reason for this growth is the materialized view and ran the query from this answer to get the following result:
what | bytes/ct | bytes_pretty | bytes_per_row
-----------------------------------+-------------+--------------+---------------
core_relation_size | 32224567296 | 30 GB | 21140
visibility_map | 991232 | 968 kB | 0
free_space_map | 7938048 | 7752 kB | 5
table_size_incl_toast | 32233504768 | 30 GB | 21146
indexes_size | 22975922176 | 21 GB | 15073
total_size_incl_toast_and_indexes | 55209426944 | 51 GB | 36220
live_rows_in_text_representation | 316152215 | 302 MB | 207
------------------------------ | | |
row_count | 1524278 | |
live_tuples | 676439 | |
dead_tuples | 1524208 | |
(11 rows)
Then, I found that the last time this table was autovacuumed was two days ago, by running:
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum FROM pg_stat_user_tables ORDER BY n_dead_tup desc;
I decided to manually call vacuum (VERBOSE) effects. It ran for about half an hour and produced the following output:
vacuum (VERBOSE) effects;
INFO: vacuuming "public.effects"
INFO: scanned index "effects_idx" to remove 129523454 row versions
DETAIL: CPU 12.16s/55.76u sec elapsed 119.87 sec
INFO: scanned index "effects_campaign_created_idx" to remove 129523454 row versions
DETAIL: CPU 19.11s/154.59u sec elapsed 337.91 sec
INFO: scanned index "effects_campaign_name_idx" to remove 129523454 row versions
DETAIL: CPU 28.51s/151.16u sec elapsed 315.51 sec
INFO: scanned index "effects_campaign_event_type_idx" to remove 129523454 row versions
DETAIL: CPU 38.60s/373.59u sec elapsed 601.73 sec
INFO: "effects": removed 129523454 row versions in 3865537 pages
DETAIL: CPU 59.02s/36.48u sec elapsed 326.43 sec
INFO: index "effects_idx" now contains 1524208 row versions in 472258 pages
DETAIL: 113679000 index row versions were removed.
463896 index pages have been deleted, 60386 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO: index "effects_campaign_created_idx" now contains 1524208 row versions in 664910 pages
DETAIL: 121637488 index row versions were removed.
41014 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: index "effects_campaign_name_idx" now contains 1524208 row versions in 711391 pages
DETAIL: 125650677 index row versions were removed.
696221 index pages have been deleted, 28150 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: index "effects_campaign_event_type_idx" now contains 1524208 row versions in 956018 pages
DETAIL: 127659042 index row versions were removed.
934288 index pages have been deleted, 32105 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "effects": found 0 removable, 493 nonremovable row versions in 3880239 out of 3933663 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 666922 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 180.49s/788.60u sec elapsed 1799.42 sec.
INFO: vacuuming "pg_toast.pg_toast_1371723"
INFO: index "pg_toast_1371723_index" now contains 0 row versions in 1 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "pg_toast_1371723": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
At this point I thought the problem was resolved and started thinking about what could interfere with the autovacuum. To be sure, I ran the query to check the table's space usage again, and to my surprise it hadn't changed.
Only after I called REFRESH MATERIALIZED VIEW effects; (not concurrently) did the size go down. Now the output of the size-check query was:
what | bytes/ct | bytes_pretty | bytes_per_row
-----------------------------------+-----------+--------------+---------------
core_relation_size | 374005760 | 357 MB | 245
visibility_map | 0 | 0 bytes | 0
free_space_map | 0 | 0 bytes | 0
table_size_incl_toast | 374013952 | 357 MB | 245
indexes_size | 213843968 | 204 MB | 140
total_size_incl_toast_and_indexes | 587857920 | 561 MB | 385
live_rows_in_text_representation | 316175512 | 302 MB | 207
------------------------------ | | |
row_count | 1524385 | |
live_tuples | 676439 | |
dead_tuples | 1524208 | |
(11 rows)
And everything went back to normal...
Questions
The problem is solved, but there's still a fair amount of confusion:
Could anyone please explain the issue I experienced?
How can I avoid this in the future?
First, let's explain the bloat
REFRESH MATERIALIZED CONCURRENTLY is implemented in src/backend/commands/matview.c, and the comment is enlightening:
/*
* refresh_by_match_merge
*
* Refresh a materialized view with transactional semantics, while allowing
* concurrent reads.
*
* This is called after a new version of the data has been created in a
* temporary table. It performs a full outer join against the old version of
* the data, producing "diff" results. This join cannot work if there are any
* duplicated rows in either the old or new versions, in the sense that every
* column would compare as equal between the two rows. It does work correctly
* in the face of rows which have at least one NULL value, with all non-NULL
* columns equal. The behavior of NULLs on equality tests and on UNIQUE
* indexes turns out to be quite convenient here; the tests we need to make
* are consistent with default behavior. If there is at least one UNIQUE
* index on the materialized view, we have exactly the guarantee we need.
*
* The temporary table used to hold the diff results contains just the TID of
* the old record (if matched) and the ROW from the new table as a single
* column of complex record type (if matched).
*
* Once we have the diff table, we perform set-based DELETE and INSERT
* operations against the materialized view, and discard both temporary
* tables.
*
* Everything from the generation of the new data to applying the differences
* takes place under cover of an ExclusiveLock, since it seems as though we
* would want to prohibit not only concurrent REFRESH operations, but also
* incremental maintenance. It also doesn't seem reasonable or safe to allow
* SELECT FOR UPDATE or SELECT FOR SHARE on rows being updated or deleted by
* this command.
*/
So the materialized view is refreshed by deleting rows and inserting new ones from a temporary table. This can of course lead to dead tuples and table bloat, which is confirmed by your VACUUM (VERBOSE) output.
In a way, that's the price you pay for CONCURRENTLY.
Second, let's debunk the myth that VACUUM cannot remove the dead tuples
VACUUM will remove the dead rows, but it cannot reduce the bloat (that can be done with VACUUM (FULL), but that would lock the view just like REFRESH MATERIALIZED VIEW without CONCURRENTLY).
I suspect that the query you use to determine the number of dead tuples is just an estimate that gets the number of dead tuples wrong.
An example that demonstrates all that
CREATE TABLE tab AS SELECT id, 'row ' || id AS val FROM generate_series(1, 100000) AS id;
-- make sure autovacuum doesn't spoil our demonstration
CREATE MATERIALIZED VIEW tab_v WITH (autovacuum_enabled = off)
AS SELECT * FROM tab;
-- required for CONCURRENTLY
CREATE UNIQUE INDEX ON tab_v (id);
Use the pgstattuple extension to accurately measure table bloat:
CREATE EXTENSION pgstattuple;
SELECT * FROM pgstattuple('tab_v');
-[ RECORD 1 ]------+--------
table_len | 4431872
tuple_count | 100000
tuple_len | 3788895
tuple_percent | 85.49
dead_tuple_count | 0
dead_tuple_len | 0
dead_tuple_percent | 0
free_space | 16724
free_percent | 0.38
Now let's delete some rows in the table, refresh and measure again:
DELETE FROM tab WHERE id BETWEEN 40001 AND 80000;
REFRESH MATERIALIZED VIEW CONCURRENTLY tab_v;
SELECT * FROM pgstattuple('tab_v');
-[ RECORD 1 ]------+--------
table_len | 4431872
tuple_count | 60000
tuple_len | 2268895
tuple_percent | 51.19
dead_tuple_count | 40000
dead_tuple_len | 1520000
dead_tuple_percent | 34.3
free_space | 16724
free_percent | 0.38
Lots of dead tuples. VACUUM gets rid of these:
VACUUM tab_v;
SELECT * FROM pgstattuple('tab_v');
-[ RECORD 1 ]------+--------
table_len | 4431872
tuple_count | 60000
tuple_len | 2268895
tuple_percent | 51.19
dead_tuple_count | 0
dead_tuple_len | 0
dead_tuple_percent | 0
free_space | 1616724
free_percent | 36.48
The dead tuples are gone, but now there is a lot of empty space.
I'm adding to Laurenz Albe's full answer provided above. There is another possible cause for the bloat. Consider the following scenario:
You have a view that changes rarely for the most part (1,000,000 records, 100 records change per refresh), and yet you still get 500,000 dead tuples. The reason can be NULLs in an indexed column.
As described in the answer above, when a view is refreshed concurrently, a copy is created and compared against the old version. The comparison uses the mandatory unique index. But what about NULLs? NULLs are never equal to each other in SQL. So if your unique index allows NULLs, records that contain them will always be deleted and re-inserted, even if unchanged.
To fix this and remove the bloat, add an additional column that coalesces the nullable column to some never-used value (-1, to_timestamp(0), ...) and use that column in the unique index, as sketched below.
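A sketch of that workaround; all object names here are made up, and the substitute value must be one that never occurs in the real data:
-- hypothetical: expose a never-NULL variant of the nullable column
CREATE MATERIALIZED VIEW my_view AS
SELECT id,
       maybe_null_col,
       COALESCE(maybe_null_col, -1) AS maybe_null_col_nn
FROM base_table;

-- use only non-null columns in the unique index that
-- REFRESH ... CONCURRENTLY relies on, so unchanged rows compare equal
CREATE UNIQUE INDEX ON my_view (id, maybe_null_col_nn);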