Postgres cursors behaviour

I am trying to understand a PostgreSQL behaviour.
In my application I am using a Postgres database (AWS Aurora, to be precise) through ODBC.
I am not specifying WITH HOLD on the cursors I use, but in the logs I see that most of the sessions are 'idle in transaction', although there is no explicit transaction, just a SELECT. The statements look like:
RELEASE _EXEC_SVP_05A0C768;
SAVEPOINT _EXEC_SVP_05A0C768;
declare "SQL_CUR05A0E3F8" cursor with hold for select u_selskap,u_kodetyp,u_tekst from u_hrm.u_stilltyp where u_selskap=50 and u_kodetyp='T';
fetch 100 in "SQL_CUR05A0E3F8"
In pgAdmin we can see how the statements are displayed.
The idling cursors seem to keep locks on the database, so subsequent queries cannot run.
I am not able to understand why these cursors are inside a transaction, and which part of the chain is introducing the WITH HOLD, since the default creation mode is WITHOUT HOLD (as seen here: https://www.postgresql.org/docs/11/sql-declare.html).
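To see exactly which locks those idle-in-transaction sessions are holding, a query along these lines can be used (just a diagnostic sketch joining pg_locks to pg_stat_activity; the filter on the session state is the only assumption):
-- locks held by sessions that are idle in transaction
SELECT a.pid,
       a.state,
       a.query,
       l.locktype,
       l.relation::regclass AS locked_relation,
       l.mode,
       l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE a.state = 'idle in transaction'
ORDER BY a.pid;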
Can it be a PostgreSQL ODBC issue?

Postgres add column on existing table takes very long

I have a table with 500k rows. Now I want to add a new column
(type boolean, not nullable) without a default value.
The query to do so runs forever.
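The statement is essentially this (table and column names are placeholders, not the real ones):
-- add a boolean column with a NOT NULL constraint and no default
ALTER TABLE my_table ADD COLUMN my_flag boolean NOT NULL;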
I'm using PostgreSQL 12.1, compiled by Visual C++ build 1914, 64-bit, on my Windows 2012 Server.
In pgAdmin I can see that the query is blocked by PID 0, but when I run the following query, I cannot find any entry with pid = 0:
SELECT *
FROM pg_stat_activity
Can someone help me here? Why is the query blocked, and how can I fix this so I can add the new column to my table?
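To see what is actually holding up the ALTER TABLE, pg_locks can be inspected directly; a sketch (the table name is a placeholder, and a NULL pid, which some tools display as 0, means the lock is held by a prepared transaction rather than a live backend):
-- granted and waiting locks on the table being altered
SELECT pid, locktype, relation::regclass, mode, granted, virtualtransaction
FROM pg_locks
WHERE relation = 'my_table'::regclass
ORDER BY granted;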
UPDATE attempt:
SELECT *
FROM pg_prepared_xacts
Update
It works after rolling back all prepared transactions:
ROLLBACK PREPARED 'gid goes here';
You have got stale prepared transactions. I say that as in "you have got the measles", because it is a disease for a database.
Such prepared transactions keep holding locks and block autovacuum progress, so they will bring your database to its knees if you don't take action. In addition, such transactions are persisted, so even a restart of the database won't get rid of them.
Remove them with
ROLLBACK PREPARED 'gid goes here'; /* use the transaction names shown in the view */
If you use prepared transactions, you need a distributed transaction manager. That is a piece of software that keeps track of all prepared transactions and their state and persists that information, so that no distributed transaction can become stale. Even if there is a crash, the distributed transaction manager will resolve in-doubt transactions in all involved databases.
If you don't have that, don't use prepared transactions. You now know why. Best is to set max_prepared_transactions to 0 in that case.
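A sketch of how that cleanup can be scripted (the one-hour age threshold is only an example; adjust or drop it as needed):
-- list stale prepared transactions and generate matching ROLLBACK PREPARED commands
SELECT gid,
       prepared,
       owner,
       database,
       format('ROLLBACK PREPARED %L;', gid) AS rollback_command
FROM pg_prepared_xacts
WHERE prepared < now() - interval '1 hour';

-- if prepared transactions are not needed at all, disable them entirely
-- (the change requires a server restart to take effect)
ALTER SYSTEM SET max_prepared_transactions = 0;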

DBLink query doesn't terminate even after it completes

I have a dblink query on Amazon RDS (Postgres) that executes an INSERT with rows coming from an Amazon Redshift cluster.
The query terminates after 15-20 minutes, if not more, but I can see that all the rows have been inserted after only a few minutes.
I'm running these queries via JetBrains' DataGrip.
Some other, similar dblink queries on the same connection terminate as expected.
The only difference I can see is the size of the table, which is bigger in the first case.
All these queries are simply copying the whole table. Pretty much like this:
insert into rds_table(
    select *
    from dblink('foreign_server',
                $REDSHIFT$
                select *
                from redshift_table
                $REDSHIFT$) as table_n(...)
);
Where 'foreign_server' is my connection to Redshift.
I know that the query is completed because rds_table has the same number of rows as redshift_table.
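For example, the row counts can be compared directly (a sketch reusing the table and server names from above):
-- count on the RDS side
select count(*) from rds_table;
-- count on the Redshift side, fetched through the same dblink connection
select n
from dblink('foreign_server', 'select count(*) from redshift_table') as t(n bigint);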
DataGrip shows the query as still running, and won't let me run other queries until I manually stop the query.
If I do so, the inserted rows remain in the database, meaning that the transaction has already committed.
Why is this happening? Is it a problem with DataGrip or with Postgres?
How can I fix it?
Is there any other better alternative to migrate data from Redshift to RDS?
If a concurrent transaction can already see the inserted data, that means that the inserting transaction and consequently the INSERT statement must already be finished.
If DataGrip shows the statement as still running, it is lying to you.
So this must be a DataGrip bug.
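One way to confirm which side is wrong is to ask the server itself whether the statement is still active; a sketch to run from a second connection on the RDS instance (the filter on the query text is just an assumption about how the INSERT begins):
-- if the INSERT no longer shows up as 'active' here, the server is done
-- and only the client still thinks the query is running
SELECT pid, state, query_start, wait_event_type, query
FROM pg_stat_activity
WHERE query ILIKE 'insert into rds_table%';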

Does dropping a database have to be done outside of any transaction?

From https://wiki.postgresql.org/wiki/Psycopg2_Tutorial
PostgreSQL can not drop databases within a transaction, it is an all
or nothing command. If you want to drop the database you would need to
change the isolation level of the database this is done using the
following.
conn.set_isolation_level(0)
You would place the above immediately preceding the DROP DATABASE
cursor execution.
Why "If you want to drop the database you would need to change the isolation level of the database"?
In particular, why do we need to change the isolation level to 0? (If I am correct, 0 means psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
From https://stackoverflow.com/a/51859484/156458
The operation of destroying a database is implemented in a way which
prevents undoing it - therefore you can not run it from inside a
transaction because transactions are always undoable. Also keep in
mind that unlike most other databases PostgreSQL allows almost all DDL
statements (obviously not the DROP DATABASE one) to be executed inside
a transaction.
Actually you can not drop a database if anyone (including you) is
currently connected to this database - so it does not matter what is
your isolation level, you still have to connect to another database
(e.g. postgres)
"you can not run it from inside a transaction because transactions are always undoable". Then how can I drop a database not from inside a transaction?
I found my answer at https://stackoverflow.com/a/51880577/156458
I'm unfamiliar with psycopg2 so I can only provide steps to be performed.
Steps to be taken to perform DROP DATABASE from Python:
1. Connect to a different database, one which you don't want to drop
2. Store the current isolation level in a variable
3. Set the isolation level to 0
4. Execute the DROP DATABASE query
5. Set the isolation level back to its original value (from step 2)
Steps to be taken to perform DROP DATABASE from psql:
1. Connect to a different database, one which you don't want to drop
2. Execute the DROP DATABASE query
Code in psql
\c second_db
DROP DATABASE first_db;
Remember that there can be no live connections to the database you are trying to drop.
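If other sessions are still connected, they can be terminated first; a sketch, run while connected to a different database such as postgres (first_db is the database being dropped, as in the example above):
-- kick out every other session connected to the target database
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'first_db'
  AND pid <> pg_backend_pid();
DROP DATABASE first_db;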

Inserting data manually into a table in AWS Redshift, SQL Workbench

I am able to connect to Redshift from SQL Workbench and I am able to create a table, but when I try to insert values into the table it throws the error below.
Since I am using a temp schema and the connection shows the schema as public, is this still an issue even if my insert statement is
Insert into tempschema.temp_staging values
Postgres (and thus Redshift, which is based on an ancient version of Postgres) has a very strict transaction concept: either all statements work or none do.
As soon as one statement in your transaction fails, the whole transaction needs to be rolled back.
So all you need to do is to issue a ROLLBACK command and you can continue. There is no need to restart SQL Workbench/J.
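In SQL terms the recovery is just this (the failed INSERT is left as a placeholder, since the original statement is truncated above):
ROLLBACK;  -- ends the aborted transaction and returns the session to a usable state
-- now correct and re-run the failed statement, e.g.
-- Insert into tempschema.temp_staging values (...);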
If you don't want to do that for every statement that throws an error, just enable autocommit in the connection profile:
7.3.5. Autocommit
This check box enables/disables the "auto commit" property for the connection. If autocommit is enabled, then each SQL statement is automatically committed on the DBMS. If this is disabled, any DML statement (UPDATE, INSERT, DELETE, ...) has to be committed in order to make the change permanent. Some DBMS require a commit for DDL statements (CREATE TABLE, ...) as well. Please refer to the documentation of your DBMS.
Link to manual
I am part of SQL Workbench/J support
It's just a temporarily acquired lock.
Disconnect the workbench from the data source, restart the workbench, and reconnect to your data source.
You'll be able to resume from there.

Postgres returns errors on future transactions

I am currently migrating from MySQL to Postgres, using PgBouncer for my connection pool.
We select/insert/update/delete lots of data in Postgres, and it all comes from remote sources, so we try to make the data quality as good as possible before an insert, but sometimes some bad data slips through.
This causes Postgres to report current transaction is aborted, commands ignored until end of transaction block.
This is fine, except that the connection through PgBouncer will report this error for every subsequent query. I get the same behaviour if I connect directly to Postgres instead of PgBouncer. I'd expect it to roll back whichever transaction caused this issue.
Is there a way to just roll back and continue working as normal? Everything I've read just says to fix the query, but in this case that's not always possible.
You need to use the ROLLBACK command. This will undo everything since the last BEGIN TRANSACTION or START TRANSACTION. Note that transactions do not nest; if you've begun multiple transactions without committing, this will roll back the outermost transaction.
This will drop you into autocommit mode. You may want to issue a new BEGIN TRANSACTION command to open a new transaction.
You should also be able to ROLLBACK TO SAVEPOINT, if you have a savepoint from before the error.
(If at all possible, it is preferred to just fix the query, but depending on what you're doing, that may be prohibitively difficult.)
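A minimal sketch of the savepoint pattern mentioned above (the statement and savepoint names are placeholders):
BEGIN;
SAVEPOINT before_insert;
-- a statement that may fail on bad data, e.g.
-- INSERT INTO target_table VALUES (...);
-- if it fails, undo only that statement and keep the transaction usable:
ROLLBACK TO SAVEPOINT before_insert;
-- continue with further statements, then
COMMIT;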