PREPARE/EXECUTE in Npgsql 3.2 - postgresql

I'm using Npgsql 3.2 and it seems that the driver is now preparing statements automatically.
I see the following in the PostgreSQL logs:
LOG: execute <unnamed>:SELECT * FROM database.update_item($1, $2, $3 , $4)
I may be wrong, but I have not activated this in the connection string, and I see no previous 'prepare' call in the logs.
What am I missing?
Also I'm using Dapper 1.50.2 on top of Npgsql to query the database.
Prepared statements are not yet implemented at that level, but I see there is some discussion of such a feature on GitHub.
I'm using a READ COMMITTED transaction to update a row in the database. The row is updated twice with 2 distinct statements.
When playing the statements one by one in a pgAdmin query window, everything works fine.
When the statements are executed by the driver through execute, the first statement seems to take locks on the record, and the second statement hangs.
Here are the queries run in the query window (they run to completion):
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT * FROM database.update_item(params...);
SELECT * from database.update_item(params...);
ROLLBACK;
The corresponding PG logs:
LOG: statement: BEGIN;
LOG: statement: SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
LOG: statement: SELECT * FROM database.update_item(params...);
LOG: statement: SELECT * from database.update_item(params...);
LOG: statement: ROLLBACK;
The PostgreSQL logs when the statements are run by Npgsql:
LOG: statement: BEGIN
LOG: statement: SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG: execute <unnamed>: select * from database.update_item(params...)
LOG: execute <unnamed>: select * from database.update_item(params...) <-- hangs
Any help appreciated.
EDIT 1:
Additional information: The update_item() plpgsql function is updating the row with an EXECUTE statement.
EDIT 2:
Added PG logs for query window.

First, Npgsql's automatic preparation feature is opt-in: it isn't activated unless you enable it via the Max Auto Prepare connection string parameter. Even then, the same SQL needs to be executed several times (5 by default) before Npgsql prepares it. See the Npgsql documentation for more details.
Regarding the hang: are you running your code concurrently? In other words, do you have multiple transactions updating the same rows at the same time? If so, this is probably expected PostgreSQL behavior and has nothing to do with Npgsql. When you update rows in a transaction, those rows are locked until the transaction is committed or rolled back. The general fix is to update rows in the same order across transactions. See the PostgreSQL documentation on explicit locking for more details.
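If concurrency is involved, the blocking is easy to reproduce by hand. A minimal sketch (the items table here is hypothetical), run in two separate psql sessions:

-- Session 1: update a row and leave the transaction open.
BEGIN;
UPDATE items SET qty = qty + 1 WHERE id = 1;   -- row 1 is now locked

-- Session 2: same row; this UPDATE blocks...
BEGIN;
UPDATE items SET qty = qty + 1 WHERE id = 1;   -- ...until session 1 ends

-- Session 1:
COMMIT;   -- session 2's UPDATE now proceeds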

Related

Cannot rename database due to prepared transactions

I'm trying to rename a database using:
ALTER DATABASE xxx RENAME TO yyy
I got an error message saying there is another open session. I killed it using:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'xxx' AND pid <> pg_backend_pid();
However, I then got an error message saying there are 2 prepared transactions pending. I made sure to kill all processes in the background with:
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE pid <> pg_backend_pid();
But still, I am not able to rename the database.
I then saw that there is a system view, pg_prepared_xacts, that shows me the prepared transactions (gids shortened for readability), and indeed, there are two of them:
transaction|gid |prepared |owner|database|
-----------+--------------+-----------------------------+-----+--------+
5697779 |4871251_EAAAAC|2022-08-05 15:50:59.782 +0200|xxx |xxx |
5487701 |4871251_DAAAAA|2022-07-08 08:05:36.066 +0200|xxx |xxx |
According to the Postgres documentation on prepared transactions, I can execute a ROLLBACK PREPARED on the transaction id, which is what I did:
ROLLBACK PREPARED '5697779';
I made sure to execute the ROLLBACK with the same user, but it shows an error message saying that the transaction does not exist...
How can I get rid of it / kill it in order to be able to rename the database?
Thank you for reading and taking time to respond.
From the PREPARE TRANSACTION documentation:
transaction_id
An arbitrary identifier that later identifies this transaction for COMMIT PREPARED or ROLLBACK PREPARED. The identifier must be written as a string literal, and must be less than 200 bytes long. It must not be the same as the identifier used for any currently prepared transaction.
and from the ROLLBACK PREPARED documentation:
transaction_id
The transaction identifier of the transaction that is to be rolled back.
Lastly, from the pg_prepared_xacts documentation:
gid text
Global transaction identifier that was assigned to the transaction
So to rollback the transaction you need the global identifier assigned in the PREPARE TRANSACTION as shown in the gid column in pg_prepared_xacts.
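Applied to the listing above, that means rolling back by gid, not by the numeric transaction value. Using the (shortened) gids from the table above:

-- Fetch the global identifiers of the pending prepared transactions:
SELECT gid FROM pg_prepared_xacts WHERE database = 'xxx';

-- Roll them back by gid:
ROLLBACK PREPARED '4871251_EAAAAC';
ROLLBACK PREPARED '4871251_DAAAAA';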

Postgresql - Unable to see exact Query or Transaction that blocked my actual Transaction

Note: I'm using DBeaver 21.1.3 as my PostgreSQL development tool.
For my testing, I have created a table:
CREATE TABLE test_sk1(n numeric(2));
then I disabled auto-commit in DBeaver to verify whether I can see the blocking query for my other transaction.
I have then executed an insert on my table:
INSERT INTO test_sk1(n) values(10);
Now this insert transaction is uncommitted, and it holds a lock on the table.
Then I opened another SQL window and tried an ALTER command on the table:
ALTER TABLE test_sk1 ADD COLUMN v VARCHAR(2);
Now I see that the ALTER transaction is blocked.
But when I check the locks, I see that the ALTER transaction is blocked by a "SHOW search_path;" query, whereas I was expecting the "INSERT..." transaction as the blocking query.
I used the query below to find the blocker:
SELECT p1.pid,
       p1.query AS blocked_query,
       p2.pid   AS blocked_by_pid,
       p2.query AS blocking_query
FROM pg_stat_activity p1, pg_stat_activity p2
WHERE p2.pid IN (SELECT unnest(pg_blocking_pids(p1.pid)))
  AND cardinality(pg_blocking_pids(p1.pid)) > 0;
Why is this happening on our databases?
Try using psql for such experiments. DBeaver and many other tools execute many SQL statements that you didn't intend to run.
The query that you see in pg_stat_activity is just the latest query sent by that process, not necessarily the one that is holding a lock on any resource.
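To see who actually holds or awaits a lock on the table, query pg_locks directly instead of relying on the query text. A sketch, using the test_sk1 table from the question:

-- Every lock held or awaited on test_sk1, with each backend's state.
-- Note that a.query is still only that backend's latest statement.
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'test_sk1'::regclass;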

sqlworkbench-j not closing transaction connection

I am using SQL Workbench/J to query Redshift data. I am facing an issue with table locks whenever I query a table, and it also happens for simple SELECT statements. I know this is happening because Workbench implicitly adds a BEGIN before every statement to take care of any changes happening to the data, so after every query we need to end the transaction ourselves.
Is there any option in SQL Workbench/J to disable the implicit BEGIN, or to end the transaction automatically?
When you set up the Redshift connection profile, enable the "Autocommit" option.
See here for more detailed instructions, especially step 10:
https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-using-workbench.html
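If changing the profile is not an option, one generic workaround is the one implied in the question: end the implicit transaction yourself after each statement.

-- Ends the transaction SQL Workbench/J opened implicitly,
-- releasing any locks the statement acquired:
END;        -- equivalent to COMMIT
-- or, to discard changes instead:
ROLLBACK;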

How can I log `PREPARE` statements in PostgreSQL?

I'm using a database tool (Elixir's Ecto) which uses prepared statements for most PostgreSQL queries. I want to see exactly how and when it does that.
I found the correct Postgres config file, postgresql.conf, by running SHOW config_file; in psql. I edited it to include
log_statement = 'all'
as Dogbert suggested here.
According to the PostgreSQL 9.6 docs, this setting should cause PREPARE statements to be logged.
After restarting PostgreSQL, I can tail -f its log file (the one specified by the -r flag when running the postgres executable) and I do see entries like this:
LOG: execute ecto_728: SELECT i0."id", i0."store_id", i0."title", i0."description"
FROM "items" AS i0 WHERE (i0."description" LIKE $1)
DETAIL: parameters: $1 = '%foo%'
This corresponds to a query (albeit done over binary protocol, I think) like
EXECUTE ecto_728('%foo%');
However, I don't see the original PREPARE that created ecto_728.
I tried dropping and recreating the database. Afterwards, the same query is executed as ecto_578, so it seems that the original prepared statement was dropped with the database and a new one was created.
But when I search the PostgreSQL log for ecto_578, I only see it being executed, not created.
How can I see PREPARE statements in the PostgreSQL log?
As you mentioned, your queries are being prepared via the extended query protocol, which is distinct from a PREPARE statement. And according to the docs for log_statement:
For clients using extended query protocol, logging occurs when an
Execute message is received
(Which is to say that logging does not occur when a Parse or Bind message is received.)
However, if you set log_min_duration_statement = 0, then:
For clients using extended query protocol, durations of the Parse,
Bind, and Execute steps are logged independently
Enabling both of these settings together will give you two log entries per Execute (one from log_statement when the message is received, and another from log_min_duration_statement once execution is complete).
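As a sketch, both settings can also be enabled from psql instead of editing postgresql.conf by hand (superuser required; both are reloadable without a restart):

ALTER SYSTEM SET log_statement = 'all';
ALTER SYSTEM SET log_min_duration_statement = 0;  -- logs Parse/Bind/Execute durations
SELECT pg_reload_conf();                          -- apply without restarting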
Nick's answer was correct; I'm just answering to add what I learned by trying it.
First, I was able to see three separate actions in the log: a parse to create the prepared statement, a bind to give the parameters for it, and an execute to have the database actually carry it out and return results. This is described in the PostgreSQL docs for the "extended query protocol".
LOG: duration: 0.170 ms parse ecto_918: SELECT i0."id", i0."store_id",
i0."title", i0."description"
FROM "items" AS i0 WHERE (i0."description" LIKE $1)
LOG: duration: 0.094 ms bind ecto_918: SELECT i0."id", i0."store_id",
i0."title", i0."description"
FROM "items" AS i0 WHERE (i0."description" LIKE $1)
DETAIL: parameters: $1 = '%priceless%'
LOG: execute ecto_918: SELECT i0."id", i0."store_id",
i0."title", i0."description"
FROM "items" AS i0 WHERE (i0."description" LIKE $1)
DETAIL: parameters: $1 = '%priceless%'
This output was generated during a run of some automated tests. On subsequent runs, I saw the same query with a different name, e.g. ecto_1573. This was without dropping the database or even restarting the PostgreSQL process. The docs say that
If successfully created, a named prepared-statement object lasts till
the end of the current session, unless explicitly destroyed.
So these statements have to be recreated in each session, and my test suite presumably opens a new session on each run.
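For comparison, here is a rough SQL-level equivalent of what the driver does over the wire; the statement name ecto_demo is made up, and like the driver's statements it only lasts for the current session:

-- Create a named prepared statement, run it, then destroy it explicitly.
PREPARE ecto_demo (text) AS
  SELECT i0."id", i0."store_id", i0."title", i0."description"
  FROM "items" AS i0 WHERE (i0."description" LIKE $1);

EXECUTE ecto_demo('%foo%');

DEALLOCATE ecto_demo;  -- otherwise it persists until the session ends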

How do I track transactions in PostgreSQL log?

I recently inherited an application that uses PostgreSQL and I have been troubleshooting issues relating to saving records in the database.
I know that PostgreSQL allows me to record the transaction ID in the log by including the special value of %x in the log_line_prefix. What I noticed, however, is that the first statement that occurs within a transaction always gets logged with a zero.
If I perform the following operations in psql,
begin;
insert into numbers values (1);
insert into numbers values (2);
commit;
the query log will contain the following entries:
2016-09-20 03:07:40 UTC 0 LOG: statement: begin;
2016-09-20 03:07:53 UTC 0 LOG: statement: insert into numbers values (1);
2016-09-20 03:07:58 UTC 689 LOG: statement: insert into numbers values (2);
2016-09-20 03:08:03 UTC 689 LOG: statement: commit;
My log format is %t %x and as you can see, the transaction ID for the first insert statement is 0, but it changes to 689 when I execute the second insert.
Can anyone explain why after starting a transaction PostgreSQL doesn't log the right transaction ID on the first statement? Or if I'm just doing this wrong, is there a more reliable way of identifying which queries were part of a single transaction by looking at the log file?
The transaction ID is assigned after the statement starts, so log_statement doesn't capture it. BEGIN doesn't assign a transaction ID; assignment is deferred until the first write operation.
Use the virtual txid instead; its placeholder is %v. Virtual txids are assigned immediately, but they are not persistent and are backend-local.
I find it useful to log both: the txid because it matches up with xmin and xmax system column contents, etc.; the vtxid to help me group up operations done in transactions.
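A sketch of a log_line_prefix that carries both identifiers (reloadable, no restart needed):

-- %t = timestamp, %v = virtual txid, %x = real txid (0 until one is assigned)
ALTER SYSTEM SET log_line_prefix = '%t %v %x ';
SELECT pg_reload_conf();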