PostgreSQL: how to find what is causing a deadlock in vacuumdb when using the --jobs parameter

How can I find out, in PostgreSQL 9.5, what is causing the deadlock error/failure when running a full vacuumdb over the database with the --jobs option to run the full vacuum in parallel?
I just get some process numbers and table names... How can I prevent this so that I can successfully run a full vacuum over the database in parallel?

Completing a VACUUM FULL under load is a pretty hard task. The problem is that Postgres is compacting the space taken by the table, so any concurrent data manipulation interferes with that.
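One hedged way to see exactly which sessions the parallel vacuumdb workers are conflicting with (assuming you have superuser access and can change server settings) is to make the server log lock waits along with the statements involved, for example:

    -- assumption: run as a superuser; settings take effect after a reload
    ALTER SYSTEM SET log_lock_waits = on;      -- log whenever a session waits longer than deadlock_timeout for a lock
    ALTER SYSTEM SET deadlock_timeout = '1s';  -- how long to wait before the deadlock check (1s is the default)
    SELECT pg_reload_conf();

The server log will then record which process was waiting on which lock and what statement it was running, which is usually enough to identify the tables and sessions behind the deadlock.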
To achieve a full vacuum you have these options:
Lock access to the vacuumed table. Not sure if acquiring some exclusive lock will help, though. You may need to prevent access to the table at the application level.
Use a create-new-table / move-data / swap (rename tables) / drop-original technique, as sketched below. This way you do not compact space under the original table; you free it by simply dropping the old table. Of course you then have to rebuild all indexes, redirect FKs, etc.
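A minimal sketch of that approach, assuming a hypothetical table my_table with no dependent views or foreign keys pointing at it (those would need extra steps):

    -- build a compact copy of the table (copying first so the table is never seen empty)
    CREATE TABLE my_table_new (LIKE my_table INCLUDING ALL);
    INSERT INTO my_table_new SELECT * FROM my_table;

    -- swap the names in one transaction, then drop the bloated original to free its space
    BEGIN;
    ALTER TABLE my_table RENAME TO my_table_old;
    ALTER TABLE my_table_new RENAME TO my_table;
    COMMIT;
    DROP TABLE my_table_old;

Any writes that happen between the copy and the swap are lost in this naive form, so in practice you would also block writes during the copy or replay them afterwards.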
Another question is: do you need VACUUM FULL at all? The only thing it does that VACUUM ANALYZE does not is compact the table on the file system and return the space to the operating system. If you are not badly limited on disk space, you do not need a full vacuum that often.
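For illustration, with a hypothetical my_table:

    -- plain vacuum plus statistics update: marks dead space as reusable and runs
    -- alongside normal reads and writes, but does not return space to the OS
    VACUUM (ANALYZE) my_table;

    -- full vacuum: rewrites the table into new files and returns the freed space
    -- to the OS, but holds an ACCESS EXCLUSIVE lock for the whole run
    VACUUM FULL my_table;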
Hope that helps.

Related

Full Load in Redshift - DROP vs TRUNCATE

As part of the daily load in Redshift, I have a couple of tables to drop and fully reload (data size is small, less than 1 million rows).
My question is which of the below two strategies is better in terms of CPU utilization and memory in Redshift:
1) Truncate data
2) DROP and Recreate Table.
If I truncate the tables, should I run VACUUM on them every day? I have read that frequently dropping and recreating tables in the database causes page fragmentation.
Also, for one of the tables I would like to enable compression. So, is there any downside to creating the DDL with encoding every day?
Please advise! Thank you!
If you drop the tables you will lose the permissions assigned to them, and any views that reference them will become obsolete.
Truncate is the better option: it does not require a vacuum or analyze afterwards, and it is built for use cases like this.
For further info, see the Redshift TRUNCATE documentation.
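As a rough sketch of the two strategies (the table name, columns, and encoding are hypothetical):

    -- option 1: drop and recreate -- grants and dependent views on the table are lost
    DROP TABLE my_staging_table;
    CREATE TABLE my_staging_table (
        id      BIGINT,
        payload VARCHAR(256) ENCODE lzo
    );

    -- option 2: truncate -- the definition, column encodings, grants, and views all stay intact
    TRUNCATE my_staging_table;

Note that in Redshift TRUNCATE commits the current transaction, so run it outside any transaction you might need to roll back.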

Need to drop 900+ postgres schemas but it wants me to vacuum first

I have 900+ postgres schemas (which collectively hold 40,000 tables) that I'd like to drop. However, it appears that it wants me to vacuum everything first, because I get this whenever I try to drop a schema.
ERROR: database is not accepting commands to avoid wraparound data loss in database
Is there a way to drop a large number of schemas without having to vacuum first?
Is there any problem with running the vacuum command? It is like garbage collection for a database. I use a Postgres database and I run this command before doing any major work, like a backup or creating SQL scripts of the whole database.
VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables.
You've got two choices. Do the vacuum, or drop the whole database. XID wraparound must be avoided.
https://blog.sentry.io/2015/07/23/transaction-id-wraparound-in-postgres
There is not much you can do, except VACUUM or dropping the database.
In addition, if you don't do the VACUUM, the database will not work for anything, not just for the schemas you want to drop.
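As a rough illustration of the two options, you can first check how close each database is to the limit and then run the required vacuum (a plain VACUUM is enough; VACUUM FULL is not needed):

    -- how many transactions old is each database's oldest unfrozen XID?
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

    -- vacuum the whole current database; a plain VACUUM advances datfrozenxid
    -- and lifts the wraparound protection
    VACUUM;

If the database is already refusing commands (as in the error above), the VACUUM typically has to be run in single-user mode (postgres --single) before normal operation can resume.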

PostgreSQL vacuuming a frequently updating jsonb field

I have a Postgres table with a jsonb field. The field size is about 2-4 kB per row. My application updates 100k rows 2000 times per day (changing 0.1-0.5% of the data in the field). Autovacuum is off; VACUUM FULL runs every night.
The vacuum frees about 100-300 GB every day and takes a long time to run, causing application downtime.
The question is: can I solve this problem while keeping the jsonb field, or must I split that field out into other, simpler tables?
If your concern is the long downtime, then yes: VACUUM FULL requires an exclusive lock on the table being vacuumed for the entire run.
I suggest you try the pg_repack or pg_squeeze extension, depending on your Postgres version. Unlike CLUSTER and VACUUM FULL, they work online, without holding an exclusive lock on the processed tables during processing. These extensions are easy to install and use in Postgres, can reduce your downtime significantly, and also help reduce how often VACUUM FULL is needed.
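As a rough sketch of the pg_repack route (the table name is hypothetical, and the pg_repack server packages must already be installed for your Postgres version):

    -- inside the target database: register the extension
    CREATE EXTENSION IF NOT EXISTS pg_repack;
    -- the repack itself is driven from the command line, e.g.:
    --   pg_repack --dbname=mydb --table=my_jsonb_table
    -- it rebuilds the table online, holding exclusive locks only briefly at the start and end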

truncate on one table blocked by select of another

Postgres 9.4, Ubuntu 10
I have been unable to find this exact problem here, so here it goes:
For each table t in my database, I have a table t_audit. Each delete, insert, and update on table t triggers a function that inserts a record to table t_audit.
Each night, a process truncates each t_audit table.
Last night, a select on table t prevented the truncate on t_audit from proceeding. I did not save what was in pg_stat_activity at the time, but I did save the output from blocking_locks().
Blocking pid: RowExclusiveLock, t, select * from t where ...,
Waiting pid: AccessExclusiveLock, t_audit, truncate table t_audit,
I am uncertain as to why a select on t would block the truncate on t_audit. As I did not save pg_stat_activity, the best that I can assume is that the select was "idle in transaction". I asked the person who was running the query at the time, and he said he was not running the update as part of a transaction. He did update table t just prior to the select. He did not close his connection as the pid was still active until I ran pg_terminate_backend on the pid.
Has anyone experienced this issue before? Is there a recommended procedure for this other than running pg_terminate_backend on any pids which are "idle in transaction" just prior to calling truncates?
Thank you for reading and taking time to respond.
Are there any triggers in place that might cause even something as innocuous as a SELECT to run against the audit table at the same time as the TRUNCATE? (Although the fact that it's a RowExclusiveLock suggests that whatever is being triggered is something like an UPDATE instead.) Per the PG 9.4 locking documentation, SELECT and TRUNCATE blocking each other is indeed the expected behavior. The relevant tidbits are these:
ACCESS SHARE
Conflicts with the ACCESS EXCLUSIVE lock mode only.
The SELECT command acquires a lock of this mode on referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode.
ACCESS EXCLUSIVE
Conflicts with locks of all modes (ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE). This mode guarantees that the holder is the only transaction accessing the table in any way.
Acquired by the DROP TABLE, TRUNCATE, REINDEX, CLUSTER, and VACUUM FULL commands. Many forms of ALTER TABLE also acquire a lock at this level.
And even more specifically telling is this explicit tip on that page:
Tip: Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE/SHARE) statement.
As for what to do in this scenario, if your use case is tolerant of unceremonious terminations of (possibly idle) connections, that is certainly a straightforward way of ensuring that the TRUNCATE succeeds.
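A hedged sketch of how that might be scripted before the nightly truncate (the 10-minute threshold and the pid are purely illustrative):

    -- find sessions sitting idle inside an open transaction (PostgreSQL 9.4 columns)
    SELECT pid, usename, xact_start, state, query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND xact_start < now() - interval '10 minutes';

    -- terminate a specific offender so its locks on t and t_audit are released
    SELECT pg_terminate_backend(12345);  -- 12345 is a hypothetical pid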
A more flexible alternative may be to clear out the table with DELETE instead, and then follow up with some variation of VACUUM afterwards (DELETE and SELECT will not block each other, though DELETE will block UPDATE). The suitability of this approach depends a lot on things like the table's day-to-day growth pattern (a plain VACUUM may be enough if its maximum size does not vary much from day to day) and how badly you need the space reclaimed in the short term if it is a huge table. You may need to VACUUM FULL that table during a quiet window if you need the space quickly and badly, but VACUUM FULL is not a gentle hammer to swing by any means.
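A minimal sketch of that alternative, using the question's table name:

    -- DELETE takes only a ROW EXCLUSIVE lock, so concurrent SELECTs are not blocked
    DELETE FROM t_audit;

    -- mark the dead tuples as reusable; this does not shrink the file on disk,
    -- but it keeps the table from growing if its size is similar from day to day
    VACUUM t_audit;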

postgresql 9.2 never vacuumed and analyze

I have been given a Postgres 9.2 DB around 20 GB in size.
I looked through the database and saw that VACUUM and/or ANALYZE have never been run on any of its tables.
Autovacuum is on and the transaction wraparound limit is still very far away (only 1% of it used).
I know nothing about the data activity (number of deletes, inserts, updates), but I see it uses a lot of indexes and sequences.
My question is:
does the lack of VACUUM and/or ANALYZE affect data integrity (for example, a SELECT not showing all the rows that match, whether read from the table or from an index)? The speed of queries and writes doesn't matter.
is it possible that after the VACUUM and/or ANALYZE the same query gives a different answer than it would have given before the vacuum/analyze command?
I'm fairly new to PG, thank you for your help!!
Regards,
Figaro88
Running VACUUM and/or ANALYZE will not change the result set produced by any SELECT operation (unless there was a bug in PostgreSQL). They may affect the order of results if you do not supply an ORDER BY clause.
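A trivial illustration (table and column names are hypothetical):

    -- row order is unspecified without ORDER BY and may change after VACUUM/ANALYZE
    SELECT * FROM my_table;

    -- the same rows, but with a guaranteed ordering
    SELECT * FROM my_table ORDER BY id;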