How to drop a Redshift database with connected users - amazon-redshift

Is it possible to drop active connections to Redshift in order to drop a database?
In my development environment I find myself recreating the schema very frequently, and if there happens to be some stray process connected to the database, this fails. I know it's possible to do this with PostgreSQL using pg_terminate_backend, but that doesn't seem to work on Redshift.
Deleting rows from the STV_SESSIONS table isn't an option, either.
Any ideas?

http://docs.aws.amazon.com/redshift/latest/dg/PG_TERMINATE_BACKEND.html
Find the PIDs of currently running queries:
select pid from stv_recents where status = 'Running';
and terminate each of them with:
select pg_terminate_backend(pid);
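
If the goal is specifically to clear every session attached to one database before dropping it, here is a minimal sketch ('dev' is a placeholder database name, and 12345 is a placeholder pid taken from the first query):

-- List sessions connected to the database you want to drop:
select process as pid, user_name, db_name
from stv_sessions
where trim(db_name) = 'dev';

-- Terminate each session by its pid:
select pg_terminate_backend(12345);

-- Once all sessions are gone:
drop database dev;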

Related

Lock table on postgresql 9.6

I'm new to Postgres administration. When a developer ran LOCK TABLE tab1; within a PREPARED statement, Postgres used ACCESS EXCLUSIVE by default. My problem is that the lock on the table is still there after one week, visible in the pg_prepared_xacts and pg_locks views even after a restart of Postgres. The views show something like:
vXID        mode
-1/192836   AccessExclusiveLock

Name   Database   Owner      XID      Prepared at
       db1        postgres   192836   20-07-2021
I would like to know why the lock is still there, how to solve it, and what the -1 in vXID means, because I can't even view my data in tab1.
"with PREPARED statement"
Prepared transactions and prepared statements are very different things. What you have here is a prepared transaction, and surviving a restart is exactly what prepared transactions are for. You need to find its "gid" in pg_prepared_xacts and then either manually commit it or roll it back. If you are not intentionally using prepared transactions, then you should set max_prepared_transactions = 0 so this can't recur. If you are intentionally using them, you need to learn how to handle them.
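
A sketch of that cleanup, assuming the gid string is the one reported by pg_prepared_xacts:

-- Find the stuck prepared transaction and its gid:
select gid, prepared, owner, database from pg_prepared_xacts;

-- Roll it back (or use COMMIT PREPARED instead, if the work should be kept):
rollback prepared 'gid-from-the-query-above';

-- If prepared transactions aren't used intentionally, set in postgresql.conf:
-- max_prepared_transactions = 0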

DBLink query doesn't terminate even after it completes

I have a dblink query on Amazon RDS (Postgres) that executes an INSERT with rows from an Amazon Redshift cluster.
The query terminates after 15-20 minutes, if not more, but I can see that all the rows have been inserted after only a few minutes.
I'm running these queries via JetBrains' DataGrip.
Some other, similar dblink queries on the same connection terminate as expected; the only difference I see is the size of the table, which is bigger in the first case.
All these queries simply copy the whole table, pretty much like this:
insert into rds_table (
    select *
    from dblink('foreign_server',
        $REDSHIFT$
        select *
        from redshift_table
        $REDSHIFT$) as table_n(...)
);
Where "foreign server" is my connection to Redshift.
I know that the query is completed because rds_table has the same number of rows as redshift_table.
DataGrip shows the query as still running and won't let me run other queries until I manually stop it.
If I do so, the inserted rows remain in the database, meaning that the transaction has already been committed.
Why is this happening? Is it a problem with DataGrip or with Postgres?
How can I fix it?
Is there any other better alternative to migrate data from Redshift to RDS?
If a concurrent transaction can already see the inserted data, that means that the inserting transaction and consequently the INSERT statement must already be finished.
If DataGrip shows the statement as still running, it is lying to you.
So this must be a DataGrip bug.
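
One way to double-check what the server itself reports, rather than trusting the client, is to inspect pg_stat_activity from another connection (a sketch; 'rds_db' is a placeholder for your database name):

-- An 'idle' state on the INSERT's backend means the statement has
-- finished on the server and the spinner is purely client-side:
select pid, state, query_start, left(query, 60) as query
from pg_stat_activity
where datname = 'rds_db';

As for alternatives for larger tables, unloading from Redshift to S3 and then loading into RDS with COPY is a common migration path, though it requires more setup than dblink.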

Postgres table queries not responding

I have been trying to truncate a table using SQL Workbench. Suddenly, SQL Workbench froze while the truncate was in progress, and I had to kill it from Task Manager. Now none of the queries against the table on which the truncate was aborted work, while queries against other tables work fine. I need help, as I have to upload fresh data to that same table, and currently I am not even able to drop it. What can be done to resolve this issue?
This looks like the TRUNCATE got stuck behind a lock, and then you killed the front end, while TRUNCATE kept running.
Connect to the database as superuser and examine the pg_stat_activity view; you should see some long running transactions.
Use the function pg_terminate_backend to kill these sessions by their pid.
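
A sketch of that diagnosis, assuming a reasonably recent PostgreSQL (12345 is a placeholder pid taken from the first query):

-- As superuser, look for long-running or non-idle sessions:
select pid, state, query_start, query
from pg_stat_activity
where state <> 'idle'
order by query_start;

-- Terminate the orphaned session by its pid:
select pg_terminate_backend(12345);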

How can I obtain the creation date of a DB2 database without connecting to it?

How can I obtain the creation date or time of an IBM DB2 database without connecting to it first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why do I need to do this and for what reasons, here is the background story.
I have a server with DB2 DBMS where many people and automated scripts are using it to create some databases for temporary tasks and tests. It's never meant to keep the data for long time. However for one reason or another (ex: developer not cleaning after himself or tests stopping forcefully before they can do the clean up) some databases never get dropped and they start to get accumulated till the hard disk is filled out eventually. So The idea of the app is to look up the age of the database and drop it, if it's older than 6 months (for example).

FDW seems to lock table on foreign server

I am trying to use foreign tables to link two PostgreSQL databases. Everything is fine and I can retrieve all the data I want. The only issue is that the foreign data wrapper seems to lock tables on the foreign server, which is very annoying when I unit test my code. If I don't run any SELECT, I can initialize data and truncate both the tables on the local server and the tables on the remote server; but as soon as I execute one SELECT statement, the TRUNCATE command on the remote server seems to end up in a deadlock state.
Do you know how I can avoid this lock?
Thanks
[edit]
I use this data wrapper to link the two PostgreSQL databases: http://interdbconnect.sourceforge.net/pgsql_fdw/pgsql_fdw-en.html
I use table1 of db1 as a foreign table in db2. When I execute a SELECT query on foreign_table1 in db2, there is an AccessShareLock on table1 in db1. The query is very simple:
select * from foreign_table1
The lock is never released, so when I execute a TRUNCATE command at the end of my unit test, there is a conflict, because the TRUNCATE adds an AccessExclusiveLock. I don't know how to release the first AccessShareLock, but I think that should be done automatically by the wrapper...
Hope this helps.
AccessExclusiveLock and AccessShareLock aren't generally obtained explicitly; they're acquired automatically by certain normal statements. See the explicit locking documentation, which lists which statements acquire which lock modes and says:
ACCESS SHARE
Conflicts with the ACCESS EXCLUSIVE lock mode only.
The SELECT command acquires a lock of this mode on referenced tables.
In general, any query that only reads a table and does not modify it
will acquire this lock mode.
What this means is that your first transaction hasn't committed or rolled back (thus releasing its locks) yet, so the second can't TRUNCATE the table, because TRUNCATE requires ACCESS EXCLUSIVE, which conflicts with ACCESS SHARE.
Make sure the 1st transaction commits or rolls back.
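
A minimal sketch of that fix, using the table and server names from the question and assuming the test drives the SELECT inside an explicit transaction:

-- In db2, inside the unit test:
begin;
select * from foreign_table1;  -- the FDW takes ACCESS SHARE on table1 in db1
commit;                        -- ends the remote transaction and releases the lock

-- In db1, the cleanup can now acquire ACCESS EXCLUSIVE:
truncate table1;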
BTW, is the "foreign" database actually the local database, i.e. are you using pgsql_fdw as an alternative to dblink to simulate autonomous transactions?