I am seeing that DBeaver tabs sometimes block a Postgres DB snapshot for an hour or more. The tab is set to 'manual commit - read committed' and definitely shows NO uncommitted work. I have also seen this coming not from a tab but from what DBeaver calls 'Main'.
The pg_stat_activity entry looks like the one below: the session is idle in transaction, a backend_xmin is set, and the last query hit pg_catalog.pg_proc, which was definitely not issued by the user. From 'Main' I have seen sessions idle in transaction on SHOW TRANSACTION ISOLATION LEVEL. When I click rollback in the tab, the idle-in-transaction session immediately goes plain idle.
I do not want to set a server-side idle_in_transaction_session_timeout for this user. I have already set 'automatically end long idle transactions' in DBeaver.
How can I prevent DBeaver from holding transactions open, so that backend_xmin or backend_xid do not age and endanger autovacuum's work?
Name |Value |
----------------+-----------------------------------------------------------------------------------------------------------------------------------------+
datid |16417 |
datname |<removed> |
pid |18974 |
leader_pid | |
usesysid |16394 |
usename |sys |
application_name|DBeaver - SQLEditor <Script-10.sql> |
client_addr |10.135.31.67 |
client_hostname | |
client_port |65397 |
backend_start |2023-02-08 12:12:45.098 +0000 |
xact_start |2023-02-08 12:16:23.121 +0000 |
query_start |2023-02-08 12:16:23.676 +0000 |
state_change |2023-02-08 12:16:23.677 +0000 |
wait_event_type |Client |
wait_event |ClientRead |
state |idle in transaction |
backend_xid | |
backend_xmin |222236770 |
query_id | |
query |SELECT pp.oid as poid, pp.* FROM pg_catalog.pg_proc pp WHERE pp.proname ILIKE $1 AND pp.pronamespace IN ($2) ORDER BY pp.proname LIMIT 10|
backend_type |client backend
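For reference, a query along these lines is how I spot the offending sessions (the 5-minute threshold is an arbitrary choice):
-- Sessions sitting idle in transaction for more than 5 minutes, oldest first;
-- each of these pins backend_xmin and can hold back autovacuum.
SELECT pid,
       usename,
       application_name,
       backend_xmin,
       now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - state_change > interval '5 minutes'
ORDER BY state_change;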
I have a PostgreSQL database that I want to run Alembic migrations on. After the migrations, locks are still present:
SELECT l.pid, l.locktype, l.mode
FROM pg_locks l
INNER JOIN pg_stat_activity s ON (l.pid = s.pid) where usename='migrations_user';
pid | locktype | mode
-------+------------+-----------------
19918 | relation | AccessShareLock
19918 | relation | AccessShareLock
19918 | relation | AccessShareLock
19918 | relation | AccessShareLock
19918 | virtualxid | ExclusiveLock
(5 rows)
The database is owned by db_owner and I ran grant db_owner to migrations_user; so it has all the permissions to run the migration, and the migration succeeds. But a follow-up migration on the same table fails due to these locks, and I have to go in and manually run pg_terminate_backend on the sessions holding them to proceed.
What could be causing this? The alembic migration code is pretty standard:
from alembic import context
from sqlalchemy import create_engine

def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    # LOG, get_conn_url_from_env and target_metadata are defined
    # elsewhere in env.py.
    LOG.info("Running run_migrations_online")
    connectable = create_engine(get_conn_url_from_env())
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()
    connectable.dispose()
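For now I work around it with the manual cleanup mentioned above, condensed into one statement (a blunt sketch: it terminates every idle-in-transaction session of migrations_user, not just leftovers):
-- Terminate leftover idle-in-transaction sessions of the migration role,
-- excluding the current connection.
SELECT pg_terminate_backend(s.pid)
FROM pg_stat_activity s
WHERE s.usename = 'migrations_user'
  AND s.state = 'idle in transaction'
  AND s.pid <> pg_backend_pid();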
I have PostgreSQL 9.4 up and running, and I recently enabled the pg_stat_statements module with the help of the official documentation.
But I'm getting the following errors when I try to use it:
postgres=# SELECT * FROM pg_stat_statements;
ERROR: relation "pg_stat_statements" does not exist
LINE 1: SELECT * FROM pg_stat_statements;
postgres=# SELECT pg_stat_statements_reset();
ERROR: function pg_stat_statements_reset() does not exist
LINE 1: SELECT pg_stat_statements_reset();
I'm logged in to psql with the postgres user.
I've also checked the list of available extensions:
postgres=# SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements'
;
name | default_version | installed_version | comment
--------------------+-----------------+-------------------+-----------------------------------------------------------
pg_stat_statements | 1.2 | | track execution statistics of all SQL statements executed
(1 row)
And here's the results of the extension versions query:
postgres=# SELECT * FROM pg_available_extension_versions WHERE name = 'pg_stat_statements';
name | version | installed | superuser | relocatable | schema | requires | comment
--------------------+---------+-----------+-----------+-------------+--------+----------+-----------------------------------------------------------
pg_stat_statements | 1.2 | f | t | t | | | track execution statistics of all SQL statements executed
(1 row)
Any help will be appreciated.
The extension isn't installed. You can check with:
SELECT *
FROM pg_available_extensions
WHERE
name = 'pg_stat_statements' and
installed_version is not null;
If the query returns no rows, create the extension:
CREATE EXTENSION pg_stat_statements;
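If that worked, the view should now be queryable (assuming the server was also started with pg_stat_statements in shared_preload_libraries; otherwise this errors out):
-- Smoke test: returns a count instead of "relation does not exist"
SELECT count(*) FROM pg_stat_statements;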
I faced this issue while configuring Percona Monitoring and Management (PMM): for some strange reason PMM connects to the database named postgres, so the pg_stat_statements extension has to be created in that database:
yourdb# \c postgres
postgres# CREATE EXTENSION pg_stat_statements SCHEMA public;
Follow the steps below:
Create the extension
CREATE EXTENSION pg_stat_statements;
Change the config
alter system set shared_preload_libraries='pg_stat_statements';
Restart
$ systemctl restart postgresql
Verify whether the change has been applied:
select * from pg_file_settings where name='shared_preload_libraries';
The applied attribute must be 'true'.
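Once the server is back up with the library loaded, a quick check like this confirms that statistics are being collected (the column names are from the standard view):
-- Show the loaded libraries and the five most frequently executed statements
SHOW shared_preload_libraries;
SELECT query, calls FROM pg_stat_statements ORDER BY calls DESC LIMIT 5;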
I had the same issue when deploying the environment using Liquibase for the first time.
I understand that my reply may not be related to your problem, but this was the first Google result, so I think others like me can land here with the same Liquibase issue.
These are PostgreSQL metadata views that Liquibase picks up when you generate your first XML changelog.
In my case it was just useless autogenerated code, so I solved it by deleting these lines:
<changeSet author="martinlarizzate (generated)" id="1588181532394-7">
<createView fullDefinition="false" viewName="pg_stat_statements"> SELECT pg_stat_statements.userid,
pg_stat_statements.dbid,
pg_stat_statements.queryid,
pg_stat_statements.query,
pg_stat_statements.calls,
pg_stat_statements.total_time,
pg_stat_statements.min_time,
pg_stat_statements.max_time,
pg_stat_statements.mean_time,
pg_stat_statements.stddev_time,
pg_stat_statements.rows,
pg_stat_statements.shared_blks_hit,
pg_stat_statements.shared_blks_read,
pg_stat_statements.shared_blks_dirtied,
pg_stat_statements.shared_blks_written,
pg_stat_statements.local_blks_hit,
pg_stat_statements.local_blks_read,
pg_stat_statements.local_blks_dirtied,
pg_stat_statements.local_blks_written,
pg_stat_statements.temp_blks_read,
pg_stat_statements.temp_blks_written,
pg_stat_statements.blk_read_time,
pg_stat_statements.blk_write_time
FROM pg_stat_statements(true) pg_stat_statements(userid, dbid, queryid, query, calls, total_time, min_time, max_time, mean_time, stddev_time, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time, blk_write_time);</createView>
</changeSet>
I have a replication slot that I want to delete, but when I try to DELETE from the view I get an error saying I can't delete from it. Any ideas?
postgres=# SELECT * FROM pg_replication_slots ;
slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
--------------+--------------+-----------+--------+----------+--------+------+--------------+-------------
bottledwater | bottledwater | logical | 12141 | postgres | t | | 374036 | E/FE8D9010
(1 row)
postgres=# delete from pg_replication_slots;
ERROR: cannot delete from view "pg_replication_slots"
DETAIL: Views that do not select from a single table or view are not automatically updatable.
HINT: To enable deleting from the view, provide an INSTEAD OF DELETE trigger or an unconditional ON DELETE DO INSTEAD rule.
postgres=#
Use pg_drop_replication_slot:
select pg_drop_replication_slot('bottledwater');
See the docs and this blog.
The replication slot must be inactive, i.e. no active connections may be using it. So if a streaming replica is using the slot, you must stop the streaming replica. Or you can change its recovery.conf so it doesn't use a slot anymore and restart it.
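You can check beforehand whether the slot is still in use; in the output above active is t, so the drop would fail until the consumer disconnects:
-- A slot can only be dropped once active = f
SELECT slot_name, active
FROM pg_replication_slots
WHERE slot_name = 'bottledwater';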
As a complement to the accepted answer, I'd like to mention that the following command will not fail if the slot does not exist (this was useful for me because I scripted it):
select pg_drop_replication_slot(slot_name) from pg_replication_slots where slot_name = 'bottledwater';
In pg_stat_activity I can see that a client is working its way through some query results using a cursor. But how can I see what the original query is?
pipeline=> select pid, query from pg_stat_activity where state = 'active' order by query_start;
pid | query
-------+--------------------------------------------------------------------------------------
6734 | FETCH FORWARD 1000 FROM "c_109886590_1"
26731 | select pid, query from pg_stat_activity where state = 'active' order by query_start;
(2 rows)
I see there is pg_cursors, but it is empty:
pipeline=> select * from pg_cursors;
name | statement | is_holdable | is_binary | is_scrollable | creation_time
------+-----------+-------------+-----------+---------------+---------------
(0 rows)
The server is on AWS RDS.
pipeline=> select version();
version
--------------------------------------------------------------------------------------------------------------
PostgreSQL 9.3.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit
(1 row)
You can't.
pg_cursors is backend-local. It doesn't show cursors that aren't part of the current connection.
PostgreSQL has no way to find out what query underlies a cursor from another session.
The only way I can think of to do this is using log analysis, with log_statement = all and a suitable log_line_prefix.
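For reference, on a self-managed PostgreSQL 9.4+ server the logging setup could look like the sketch below. On RDS (as here) the same two parameters must be set through the DB parameter group instead, since ALTER SYSTEM does not exist on 9.3:
-- Log every statement, prefixed with timestamp and backend PID, so a
-- cursor's DECLARE can later be matched to its FETCHes by PID.
ALTER SYSTEM SET log_statement = 'all';
ALTER SYSTEM SET log_line_prefix = '%m [%p] ';
SELECT pg_reload_conf();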