Long Aurora autovacuum on Postgres system tables

We have had an incredibly long-running autovacuum process on one of our smaller database machines that we believe has been using a lot of Aurora:StorageIOUsage.
We determined this by running SELECT * FROM pg_stat_activity WHERE wait_event_type = 'IO';
and repeatedly seeing results like the ones below.
datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query | backend_type
--------+----------------------------+-------+----------+-----------+------------------+----------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+--------------+--------+-------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------
398954 | postgres | 17582 | | | | | | | 2022-09-29 18:45:55.364654+00 | 2022-09-29 18:46:20.253107+00 | 2022-09-29 18:46:20.253107+00 | 2022-09-29 18:46:20.253108+00 | IO | DataFileRead | active | | 66020718 | autovacuum: VACUUM pg_catalog.pg_depend | autovacuum worker
398954 | postgres | 17846 | | | | | | | 2022-09-29 18:46:04.092536+00 | 2022-09-29 18:46:29.196309+00 | 2022-09-29 18:46:29.196309+00 | 2022-09-29 18:46:29.19631+00 | IO | DataFileRead | active | | 66020732 | autovacuum: VACUUM pg_toast.pg_toast_2618 | autovacuum worker
As you can see from the screenshot, it has been going for well over a month, and it is mainly for the pg_depend, pg_attribute, and pg_toast_2618 tables, which are not all that large. I haven't been able to find any reason these tables would need so much vacuuming, other than maybe a database restore from our production environment (this is one of our lower environments). Here are the pg_stat_sys_tables entries for those tables, plus pg_rewrite, which is the table pg_toast_2618 is associated with:
relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
-------+------------+---------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------+-------------------------------+--------------+-------------------------------+--------------+------------------+---------------+-------------------
1249 | pg_catalog | pg_attribute | 185251 | 12594432 | 31892996 | 119366792 | 1102817 | 3792 | 1065737 | 1281 | 543392 | 1069529 | 23584 | | 2022-09-29 18:53:25.227334+00 | | 2022-09-28 01:12:47.628499+00 | 0 | 1266763 | 0 | 36
2608 | pg_catalog | pg_depend | 2429 | 369003445 | 14152628 | 23494712 | 7226948 | 0 | 7176855 | 0 | 476267 | 7176855 | 0 | | 2022-09-29 18:52:34.523257+00 | | 2022-09-28 02:02:52.232822+00 | 0 | 950137 | 0 | 71
2618 | pg_catalog | pg_rewrite | 25 | 155083 | 1785288 | 1569100 | 64127 | 314543 | 62472 | 59970 | 7086 | 377015 | 13869 | | 2022-09-29 18:53:11.288732+00 | | 2022-09-23 18:54:50.771969+00 | 0 | 1280018 | 0 | 81
2838 | pg_toast | pg_toast_2618 | 0 | 0 | 1413436 | 3954640 | 828571 | 0 | 825143 | 0 | 15528 | 825143 | 1653714 | | 2022-09-29 18:52:47.242386+00 | | | 0 | 608881 | 0 | 0
I'm pretty new to Postgres, and I'm wondering what could possibly cause this many dead rows to need cleaning up, and why it would take well over a month given that we always have autovacuum enabled. We are running Postgres 10.17 on a single db.t3.medium, and the only thing I can think of at this point is to try increasing the instance size. Do we simply need to increase the database instance size on our Aurora cluster so that more of this can be done in memory? I'm at a bit of a loss for how to reduce this huge sustained spike in Storage IO costs.
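For what it's worth, one way to watch how far each autovacuum worker actually gets is the pg_stat_progress_vacuum view (available since PostgreSQL 9.6, so it should exist on 10.17); a minimal sketch of the kind of query I mean:
-- Per-worker vacuum progress, joined to pg_stat_activity for the start time
SELECT p.pid,
       p.relid::regclass AS table_name,
       p.phase,
       p.heap_blks_total,
       p.heap_blks_scanned,
       p.heap_blks_vacuumed,
       a.query_start
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a ON a.pid = p.pid;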
Additional information on our autovacuum settings:
=> SELECT * FROM pg_catalog.pg_settings WHERE name LIKE '%autovacuum%';
name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline | pending_restart
-------------------------------------+-----------+------+-------------------------------------+-------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+------------+---------+--------------------+---------+------------+-----------------------------------------------------------------------------------------+-----------+-----------+-----------------------------------+------------+-----------------
autovacuum | on | | Autovacuum | Starts the autovacuum subprocess. | | sighup | bool | configuration file | | | | on | on | /rdsdbdata/config/postgresql.conf | 78 | f
autovacuum_analyze_scale_factor | 0.05 | | Autovacuum | Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples. | | sighup | real | configuration file | 0 | 100 | | 0.1 | 0.05 | /rdsdbdata/config/postgresql.conf | 55 | f
autovacuum_analyze_threshold | 50 | | Autovacuum | Minimum number of tuple inserts, updates, or deletes prior to analyze. | | sighup | integer | default | 0 | 2147483647 | | 50 | 50 | | | f
autovacuum_freeze_max_age | 200000000 | | Autovacuum | Age at which to autovacuum a table to prevent transaction ID wraparound. | | postmaster | integer | default | 100000 | 2000000000 | | 200000000 | 200000000 | | | f
autovacuum_max_workers | 3 | | Autovacuum | Sets the maximum number of simultaneously running autovacuum worker processes. | | postmaster | integer | configuration file | 1 | 262143 | | 3 | 3 | /rdsdbdata/config/postgresql.conf | 45 | f
autovacuum_multixact_freeze_max_age | 400000000 | | Autovacuum | Multixact age at which to autovacuum a table to prevent multixact wraparound. | | postmaster | integer | default | 10000 | 2000000000 | | 400000000 | 400000000 | | | f
autovacuum_naptime | 5 | s | Autovacuum | Time to sleep between autovacuum runs. | | sighup | integer | configuration file | 1 | 2147483 | | 60 | 5 | /rdsdbdata/config/postgresql.conf | 9 | f
autovacuum_vacuum_cost_delay | 5 | ms | Autovacuum | Vacuum cost delay in milliseconds, for autovacuum. | | sighup | integer | configuration file | -1 | 100 | | 20 | 5 | /rdsdbdata/config/postgresql.conf | 73 | f
autovacuum_vacuum_cost_limit | -1 | | Autovacuum | Vacuum cost amount available before napping, for autovacuum. | | sighup | integer | default | -1 | 10000 | | -1 | -1 | | | f
autovacuum_vacuum_scale_factor | 0.1 | | Autovacuum | Number of tuple updates or deletes prior to vacuum as a fraction of reltuples. | | sighup | real | configuration file | 0 | 100 | | 0.2 | 0.1 | /rdsdbdata/config/postgresql.conf | 22 | f
autovacuum_vacuum_threshold | 50 | | Autovacuum | Minimum number of tuple updates or deletes prior to vacuum. | | sighup | integer | default | 0 | 2147483647 | | 50 | 50 | | | f
autovacuum_work_mem | -1 | kB | Resource Usage / Memory | Sets the maximum memory to be used by each autovacuum worker process. | | sighup | integer | default | -1 | 2147483647 | | -1 | -1 | | | f
log_autovacuum_min_duration | -1 | ms | Reporting and Logging / What to Log | Sets the minimum execution time above which autovacuum actions will be logged. | Zero prints all actions. -1 turns autovacuum logging off. | sighup | integer | default | -1 | 2147483647 | | -1 | -1 | | | f
rds.force_autovacuum_logging_level | disabled | | Customized Options | Emit autovacuum log messages irrespective of other logging configuration. | Each level includes all the levels that follow it.Set to disabled to disable this feature and fall back to using log_min_messages. | sighup | enum | default | | | {debug5,debug4,debug3,debug2,debug1,info,notice,warning,error,log,fatal,panic,disabled} | disabled | disabled | | | f

I would say you have some very long-lived snapshot being held. These tables need to be vacuumed, but the vacuum doesn't accomplish anything because the dead tuples can't be removed while some old snapshot can still see them. So immediately after being vacuumed, they are still eligible to be vacuumed again, and autovacuum tries again every 5 seconds (autovacuum_naptime), because it has no way to say "don't bother until the snapshot that blocked me from accomplishing anything last time goes away."
Check pg_stat_activity for very old 'idle in transaction' sessions, and check for any prepared transactions.
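A minimal sketch of those two checks (the one-day threshold is just an example; adjust to taste):
-- Long-open transactions (including 'idle in transaction') that hold back the xmin horizon
SELECT pid, usename, state, xact_start, backend_xmin, query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 day'
ORDER BY xact_start;

-- Forgotten prepared transactions have the same effect
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;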

Related

Postgres cannot create database but can create a user

I am using Ubuntu Linux and trying to use Postgres as my database. Creating a user works fine:
CREATE USER username;
But when I try to create a database, it just hangs and returns nothing:
CREATE DATABASE databasename;
What is happening with my Postgres? Here is pg_stat_activity while the CREATE DATABASE is stuck:
datid | datname | pid | leader_pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query_id | query | backend_type
-------+----------+------+------------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-------------+--------------+----------+---------------------------------+------------------------------
| | 8237 | | | | | | | | 2022-02-02 13:00:47.683187+07 | | | | Activity | AutoVacuumMain | | | | | | autovacuum launcher
| | 8239 | | 10 | postgres | | | | | 2022-02-02 13:00:47.70127+07 | | | | Activity | LogicalLauncherMain | | | | | | logical replication launcher
13726 | postgres | 8329 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:08:52.250244+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651383+07 | 2022-02-02 13:09:10.651393+07 | Lock | object | active | | 740 | | CREATE DATABASE kong; | client backend
13726 | postgres | 8313 | | 10 | postgres | psql | | | -1 | 2022-02-02 13:04:57.265085+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.097817+07 | 2022-02-02 13:10:40.09782+07 | | | active | | 740 | | SELECT * FROM pg_stat_activity; | client backend
| | 8235 | | | | | | | | 2022-02-02 13:00:47.664058+07 | | | | Activity | BgWriterHibernate | | | | | | background writer
| | 8234 | | | | | | | | 2022-02-02 13:00:47.654713+07 | | | | Activity | CheckpointerMain | | | | | | checkpointer
| | 8236 | | | | | | | | 2022-02-02 13:00:47.673631+07 | | | | Activity | WalWriterMain | | | | | | walwriter
(7 rows)
and here is pg_locks:
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+------------------+---------+----------+-------------------------------
relation | 13726 | 12290 | | | | | | | | 7/17 | 8313 | AccessShareLock | t | t |
virtualxid | | | | | 7/17 | | | | | 7/17 | 8313 | ExclusiveLock | t | t |
virtualxid | | | | | 3/15 | | | | | 3/15 | 8329 | ExclusiveLock | t | t |
virtualxid | | | | | 6/12 | | | | | 6/12 | 8335 | ExclusiveLock | t | t |
virtualxid | | | | | 5/3 | | | | | 5/3 | 8266 | ExclusiveLock | t | t |
virtualxid | | | | | 4/1 | | | | | 4/1 | 8264 | ExclusiveLock | t | t |
object | 0 | | | | | | 1262 | 1 | 0 | 6/12 | 8335 | RowExclusiveLock | f | f | 2022-02-02 13:09:30.561821+07
object | 0 | | | | | | 1262 | 1 | 0 | 3/15 | 8329 | ShareLock | f | f | 2022-02-02 13:09:10.651571+07
object | 0 | | | | | | 1262 | 1 | 0 | 4/1 | 8264 | RowExclusiveLock | t | f |
relation | 0 | 1262 | | | | | | | | 3/15 | 8329 | AccessShareLock | t | f |
object | 0 | | | | | | 1262 | 1 | 0 | 5/3 | 8266 | RowExclusiveLock | t | f |
(11 rows)
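One way to see which backend is actually blocking the CREATE DATABASE is pg_blocking_pids() (available since PostgreSQL 9.6); the pid below is simply the one from the output above:
-- Show which pids are blocking the stuck CREATE DATABASE backend
SELECT pid, pg_blocking_pids(pid) AS blocked_by, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE pid = 8329;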
Database info
postgres=# \l
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+----------+----------+---------+---------+-----------------------+---------+------------+--------------------------------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | 8529 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8377 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +| 8529 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(3 rows)
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
kong | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=#
Using the same name for the database and the user is not good practice; it may result in various errors.
When you run the command
CREATE DATABASE databaseName;
PostgreSQL creates the database. This may take some time. Once the database has been created, you will receive the message:
CREATE DATABASE
postgres=#
The problem was solved by reinstalling an older version of Postgres (14 was installed; downgrading to 12 solved it). Thanks to everyone here who helped me.

how is backend_start greater than xact_start

How can backend_start be more than 2 days later than xact_start/query_start? The 3rd session looks fine, but the first 2 look weird. Is this possible? Would this mean anything?
pg=> select * from pg_catalog.pg_stat_activity where usename = 'etl_user' and state = 'active' and backend_xmin = 65201266;
datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | query | backend_type
-------+---------+-------+----------+----------+------------------------+----------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+------------+--------+-------------+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------
16408 | pg | 37908 | 229661 | etl_user | PostgreSQL JDBC Driver | | | | 2021-04-20 21:36:22.540271+00 | 2021-04-17 22:31:32.314106+00 | 2021-04-17 22:31:32.317577+00 | 2021-04-20 21:36:22.541472+00 | | | active | | 65201266 | SELECT 1 FROM (SELECT ...) | parallel worker
16408 | pg | 37909 | 229661 | etl_user | PostgreSQL JDBC Driver | | | | 2021-04-20 21:36:22.540909+00 | 2021-04-17 22:31:32.314106+00 | 2021-04-17 22:31:32.317577+00 | 2021-04-20 21:36:22.542134+00 | | | active | | 65201266 | SELECT 1 FROM (SELECT ...) | parallel worker
16408 | pg | 3601 | 229661 | etl_user | PostgreSQL JDBC Driver | 10.175.130.142 | | 49832 | 2021-04-17 22:31:32.232008+00 | 2021-04-17 22:31:32.314106+00 | 2021-04-17 22:31:32.317577+00 | 2021-04-17 22:31:32.317578+00 | | | active | | 65201266 | SELECT 1 FROM (SELECT ...) | client backend
(3 rows)
It looks to me like those are parallel workers started up to help the leader; they inherit the leader's xact_start but not its backend_start. It would help to see the rest of the columns in pg_stat_activity, and to know the version.
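If the server is on PostgreSQL 13 or later, pg_stat_activity also has a leader_pid column that ties parallel workers to their leader explicitly; a sketch (on older versions the column simply does not exist):
-- Group parallel workers under their leader backend
SELECT pid, leader_pid, backend_type, backend_start, xact_start
FROM pg_stat_activity
WHERE backend_type IN ('client backend', 'parallel worker')
ORDER BY COALESCE(leader_pid, pid), backend_start;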
Yes, that looks impossible.
The only explanation that I have is that someone changed the system time since the sessions started.

Will selecting from pg_locks always return a result for itself?

SELECT relation::regclass, * FROM pg_locks ;
Results in the following:
relation | locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath
----------+------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------+----------
pg_locks | relation | 16397 | 11187 | | | | | | | | 76/111628 | 2652 | AccessShareLock | t | t
| virtualxid | | | | | 76/111628 | | | | | 76/111628 | 2652 | ExclusiveLock | t | t
(2 rows)
Can I assume that my query of pg_locks is itself what is causing the ExclusiveLock in that result?
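One way to convince yourself is to filter out the current backend with pg_backend_pid(); if the ExclusiveLock on the virtualxid disappears, it belonged to the querying session itself. A minimal sketch:
-- Locks held by everyone except this session
SELECT relation::regclass, locktype, virtualxid, pid, mode, granted
FROM pg_locks
WHERE pid <> pg_backend_pid();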

Are pg_stat_database and pg_stat_activity really listing the same stuff aka how do I get a list of all backends

In this answer to the question "Right query to get the current number of connections in a PostgreSQL DB", the poster implies that
SELECT sum(numbackends) FROM pg_stat_database;
and
SELECT count(*) FROM pg_stat_activity;
give the same results.
However, if I do this on my db, the first one says 119 and the second one says 30.
This is the difference as shown by summing numbackends and counting:
+------+-------------+-------+
| | numbackends | count |
+------+-------------+-------+
| db1 | 1 | 1 |
| db2 | 1 | 1 |
| db3 | 1 | 1 |
| db4 | 1 | 1 |
| db5 | 2 | 2 |
| db6 | 2 | 2 |
| db7 | 12 | 3 | <--
| db8 | 4 | 4 |
| db9 | 5 | 5 |
| db10 | 78 | 35 | <--
+------+-------------+-------+
Why does this difference exist?
How can I list each of the 119-30=89 backends not shown in pg_stat_activity?
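For reference, a per-database comparison like the table above can be produced with something along these lines (a sketch; the column aliases are mine):
-- Compare numbackends with what pg_stat_activity shows per database
SELECT d.datname,
       d.numbackends,
       count(a.pid) AS activity_count
FROM pg_stat_database d
LEFT JOIN pg_stat_activity a ON a.datname = d.datname
GROUP BY d.datname, d.numbackends
ORDER BY d.datname;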

nova diagnostics in devstack development

Over SSH, when I run this command
nova diagnostics 2ad0dda0-072d-46c4-8689-3c487a452248
I get all the resources in devstack:
+---------------------------+----------------------+
| Property | Value |
+---------------------------+----------------------+
| cpu0_time | 3766640000000 |
| hdd_errors | 18446744073709551615 |
| hdd_read | 111736 |
| hdd_read_req | 73 |
| hdd_write | 0 |
| hdd_write_req | 0 |
| memory | 2097152 |
| memory-actual | 2097152 |
| memory-available | 1922544 |
| memory-major_fault | 2710 |
| memory-minor_fault | 10061504 |
| memory-rss | 509392 |
| memory-swap_in | 0 |
| memory-swap_out | 0 |
| memory-unused | 1079468 |
| tap5a148e0f-b8_rx | 959777 |
| tap5a148e0f-b8_rx_drop | 0 |
| tap5a148e0f-b8_rx_errors | 0 |
| tap5a148e0f-b8_rx_packets | 8758 |
| tap5a148e0f-b8_tx | 48872 |
| tap5a148e0f-b8_tx_drop | 0 |
| tap5a148e0f-b8_tx_errors | 0 |
| tap5a148e0f-b8_tx_packets | 615 |
| vda_errors | 18446744073709551615 |
| vda_read | 597230592 |
| vda_read_req | 31443 |
| vda_write | 164690944 |
| vda_write_req | 18422 |
+---------------------------+----------------------+
How can I get this in the devstack user interface?
Please help.
Thanks in advance.
It's not available in the OpenStack Icehouse/Juno versions, though Juno can be edited to retrieve it in devstack.
I didn't use OpenStack Kilo. In Juno, if your hypervisor is libvirt, vSphere, or XenAPI, then you can retrieve these statistics in the devstack UI. To do this:
For Libvirt
In this location ceilometer/compute/virt/libvirt/inspector.py, add this:
from oslo.utils import units

from ceilometer.compute.pollsters import util


def inspect_memory_usage(self, instance, duration=None):
    instance_name = util.instance_name(instance)
    domain = self._lookup_by_name(instance_name)
    state = domain.info()[0]
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'domain is in state of SHUTOFF'),
                 {'instance_name': instance_name})
        return
    try:
        memory_stats = domain.memoryStats()
        if (memory_stats and
                memory_stats.get('available') and
                memory_stats.get('unused')):
            memory_used = (memory_stats.get('available') -
                           memory_stats.get('unused'))
            # Stat provided from libvirt is in KB, converting it to MB.
            memory_used = memory_used / units.Ki
            return virt_inspector.MemoryUsageStats(usage=memory_used)
        else:
            LOG.warn(_('Failed to inspect memory usage of '
                       '%(instance_name)s, can not get info from libvirt'),
                     {'instance_name': instance_name})
    # memoryStats might launch an exception if the method
    # is not supported by the underlying hypervisor being
    # used by libvirt
    except libvirt.libvirtError as e:
        LOG.warn(_('Failed to inspect memory usage of %(instance_name)s, '
                   'can not get info from libvirt: %(error)s'),
                 {'instance_name': instance_name, 'error': e})
For more details, you can check the following link:
https://review.openstack.org/#/c/90498/