How to find dead tuple size in PostgreSQL?

I created a backup of a database using pg_dump and restored it on another server. I see there is a size difference of about 5 GB between the two databases. I have checked the live tuples and dead tuples for the tables. There is a difference in row counts because new data has been added to the current database, but that does not explain such a large difference in the restored database's size. What is the cause of it? I have not run VACUUM ANALYZE on the restored database yet.
I see there are no dead tuples in the restored database, which may be one reason. That is why I want to find the dead tuple size.

This is the view that you need to check:
SELECT n_live_tup, n_dead_tup, relname FROM pg_stat_all_tables;
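Note that n_dead_tup is only a row count, not a size. As a rough sketch (assuming the statistics are reasonably up to date), you can approximate the space occupied by dead tuples from the table size and the live/dead row counts:
SELECT relname,
       n_dead_tup,
       pg_size_pretty((pg_relation_size(relid) / greatest(n_live_tup + n_dead_tup, 1)) * n_dead_tup) AS approx_dead_size
FROM pg_stat_all_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC;
This is only an estimate based on the average row width; pgstattuple (see the next answer) measures the exact dead space.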

You can use the extension pgstattuple. It will report dead_tuple_len.
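For example (pgstattuple scans the whole relation, so it can be slow on big tables; 'public.mytable' is just a placeholder name):
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT table_len, tuple_count, dead_tuple_count, dead_tuple_len, dead_tuple_percent
FROM pgstattuple('public.mytable');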

Related

PostgreSQL: even read access changes data files on disk, leading to large incremental backups using pgbackrest

We are using pgbackrest to back up our database to Amazon S3. We do full backups once a week and an incremental backup every other day.
Our database is around 1 TB in size; a full backup is around 600 GB, and even an incremental backup is around 400 GB!
We found out that even read access (pure SELECT statements) on the database causes the underlying data files (in /usr/local/pgsql/data/base/xxxxxx) to change. This results in large incremental backups and also in very large storage (costs) on Amazon S3.
Usually the files with low index names (e.g. 391089.1) change on read access.
On an update, we see changes in one or more files - the index could correlate to the age of the row in the table.
Some more facts:
Postgres version 13.1
Database is running in docker container (docker version 20.10.0)
OS is CentOS 7
We see the phenomenon on multiple servers.
Can someone explain why PostgreSQL changes data files on pure read access?
We tested this on an otherwise idle database, with no other clients accessing it.
This is normal. Some cases I can think of right away are:
a SELECT or other SQL statement setting a hint bit
This is a shortcut for subsequent statements that access the data, so they don't have to consult the commit log any more.
a SELECT ... FOR UPDATE writing a row lock
autovacuum removing dead row versions
These are leftovers from DELETE or UPDATE.
autovacuum freezing old visible row versions
This is necessary to prevent data corruption if the transaction ID counter wraps around.
The only way to fairly reliably prevent PostgreSQL from modifying a table in the future is:
never perform an INSERT, UPDATE or DELETE on it
run VACUUM (FREEZE) on the table and make sure that there are no concurrent transactions (see the sketch below)
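A minimal sketch of that approach, where mytable is only a placeholder name:
VACUUM (FREEZE) mytable;
-- check how far the table's frozen transaction ID horizon has advanced
SELECT relname, relfrozenxid, age(relfrozenxid) FROM pg_class WHERE relname = 'mytable';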

pg_repack and logical replication: any risk to missing out on changes from the table while running pg_repack?

As I understand it, pg_repack creates a temporary 'mirror' table (table B), copies the rows from the original table (table A), re-indexes them, and then replaces the original with the mirror. The copying step creates a lot of noise with logical replication (a lot of inserts at once), so I'd like to exclude the mirror table from replication.
I'm a bit confused with what happens during the switch over though. Is there a risk with losing some changes? I don't think there is since all actual writes are still going to the original table before and after the switch, so it should be safe right?
We're running Postgres 10.7 on AWS Aurora, using wal2json as the output plugin for replication.
I have used neither pg_repack nor logical replication, but according to the pg_repack GitHub repository there is a possible issue when using pg_repack with logical replication: see
https://github.com/reorg/pg_repack/issues/135
To perform a repack, pg_repack will:
create a log table to record changes made to the original table.
add a trigger onto the original table, logging INSERTs, UPDATEs, and DELETEs into our log table.
create a new table containing all the rows in the old table.
build indexes on this new table.
apply all changes which have occurred in the log table to the new table.
swap the tables, including indexes and toast tables, using the system catalogs.
drop the original table.
In my experience, the log table captures all changes made while the copy is running, and they are applied after the indexes are built; the log is also used if pg_repack needs to roll back changes applied to the original table.
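As for keeping the mirror table's bulk inserts out of the change stream: pg_repack creates its working tables in the repack schema, and wal2json can filter out tables by schema. A rough sketch (my_slot is a placeholder slot name; check the wal2json README for the exact option syntax on your version):
SELECT data
FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL, 'filter-tables', 'repack.*');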

Need to drop 900+ postgres schemas but it wants me to vacuum first

I have 900+ postgres schemas (which collectively hold 40,000 tables) that I'd like to drop. However, it appears that it wants me to vacuum everything first, because I get this whenever I try to drop a schema.
ERROR: database is not accepting commands to avoid wraparound data loss in database
Is there a way to drop a large number of schemas without having to vacuum first?
Is there any problem with running the VACUUM command? It is like garbage collection for a database. I use PostgreSQL, and I run this command before doing any major work like taking a backup or creating SQL scripts of the whole database.
VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables.
You've got two choices. Do the vacuum, or drop the whole database. xid wrap-around must be avoided.
https://blog.sentry.io/2015/07/23/transaction-id-wraparound-in-postgres
There is not much you can do, except VACUUM or dropping the database.
In addition, if you don't do the VACUUM, the database will not work for anything, not just for the schemas you want to drop.
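To see how close each database is to the wraparound limit, and to run the required vacuum, a sketch like the following can be used (depending on the version, the vacuum may have to be run in single-user mode while the server is in this protective state):
SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database ORDER BY xid_age DESC;
-- then, connected to the affected database:
VACUUM (FREEZE, VERBOSE);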

DB2 9.5 + dropped indexes + SMS tablespaces = pages not reduced

I am using DB2 v9.5; the database does not use automatic storage and the table spaces are all SMS (I know that SMS is not best practice, but I am studying how to perform the migration).
I dropped a total of 144 unused indexes, but the number of pages used/allocated in the database did not change after the DROP INDEX statements.
As far as I remember, for SMS tablespaces a REORG should not be necessary after dropping objects (tables or indexes); it is only needed after deleting rows from a table, to reduce the space allocated to the table.
Any opinion on what can be done to actually free the space from the indexes that were dropped?
Thanks
If you are sure your indexes were in SMS tablespaces, you should look at the corresponding filesystem, e.g. with df -h or some such.
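To compare what DB2 itself reports with what the filesystem shows, one option is the TBSP_UTILIZATION administrative view; this is only a sketch and assumes the SYSIBMADM administrative views are available on your 9.5 instance:
SELECT TBSP_NAME, TBSP_TYPE, TBSP_TOTAL_PAGES, TBSP_USED_PAGES
FROM SYSIBMADM.TBSP_UTILIZATION
ORDER BY TBSP_NAME;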

PostgreSQL 9.2 never vacuumed or analyzed

I have been given a Postgres 9.2 DB, around 20 GB in size.
I looked through the database and saw that VACUUM and/or ANALYZE has never been run on any of its tables.
Autovacuum is on, and the transaction wraparound limit is still far away (we are only at about 1% of it).
I know nothing about the data activity (number of deletes, inserts, updates), but I can see that it uses a lot of indexes and sequences.
My question is:
does the lack of VACUUM and/or ANALYZE affect data integrity (for example, a SELECT not returning all the rows that match, whether read from the table or from an index)? The speed of queries and writes doesn't matter.
is it possible that after VACUUM and/or ANALYZE the same query gives a different answer than it would have given before the vacuum/analyze?
I'm fairly new to PG, thank you for your help!!
Regards,
Figaro88
Running VACUUM and/or ANALYZE will not change the result set produced by any SELECT operation (unless there is a bug in PostgreSQL). They may affect the order of results if you do not supply an ORDER BY clause.
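If you want to confirm whether autovacuum has in fact been processing the tables, a quick sketch against the statistics views (available in 9.2):
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;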