What does "tuple (0,79)" in postgres log file mean when a deadlock happened? - postgresql

In postgres log:
2016-12-23 15:28:14 +07 [17281-351 trns: 4280939, vtrns: 3/20] postgres#deadlocks HINT: See server log for query details.
2016-12-23 15:28:14 +07 [17281-352 trns: 4280939, vtrns: 3/20] postgres#deadlocks CONTEXT: while locking tuple (0,79) in relation "account"
2016-12-23 15:28:14 +07 [17281-353 trns: 4280939, vtrns: 3/20] postgres#deadlocks STATEMENT: SELECT id FROM account where id=$1 for update;
when I provoke a deadlock I can see the text: tuple (0,79).
As far as I know, a tuple is just a row in a table. But I don't understand what (0,79) means. I have only 2 rows in the account table; it's just a toy, self-learning application.
So what does (0,79) mean?

This is the data type of the system column ctid. A tuple ID is a pair
(block number, tuple index within block) that identifies the physical
location of the row within its table.
read https://www.postgresql.org/docs/current/static/datatype-oid.html
It means block number 0, tuple index 79 within that block.
also read http://rachbelaid.com/introduction-to-postgres-physical-storage/
also run SELECT id, ctid FROM account WHERE id = $1 with the right $1 to check it out.
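For example, you can inspect the ctid system column directly. A minimal sketch (the output values are illustrative, not from the asker's database; with only 2 rows, a tuple index as high as 79 is still plausible because every UPDATE writes a new row version at a new ctid until VACUUM reclaims the old ones):

SELECT id, ctid FROM account;
--  id | ctid
-- ----+--------
--   1 | (0,78)
--   2 | (0,79)  -- block 0, tuple index 79: the row named in the deadlock CONTEXT line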

Related

EXECUTE IMMEDIATE in Enterprise Postgres - query returned no rows error

I'm using Enterprise Postgres 9.5 with Oracle Compatibility. I have a problem with the EXECUTE IMMEDIATE command.
Say I have a table with a few columns, one of which can accept NULLs. If I do
EXECUTE IMMEDIATE 'select null_col from '||table_name||' where col1=10' into x;
It assigns the value to x, if the query returns one.
When I give the condition col1=19, where 19 is not present in the table, I get an error like this:
query returned no rows
and my execution stops. So how can I handle that? Oracle doesn't give any error for such statements, whereas EDB does. Please help.
I didn't find any EDB tags, so please retag if you think this question is inappropriate here. Thanks for understanding.
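One common way to handle this, sketched here on the assumption that EDB's SPL surfaces the Oracle-style NO_DATA_FOUND condition for a no-row EXECUTE IMMEDIATE ... INTO (PL/pgSQL raises the same condition for a strict SELECT ... INTO):

BEGIN
    EXECUTE IMMEDIATE 'select null_col from '||table_name||' where col1=19' INTO x;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        x := NULL;  -- fall back to a default instead of aborting
END;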

How do I track transactions in PostgreSQL log?

I recently inherited an application that uses PostgreSQL and I have been troubleshooting issues relating to saving records in the database.
I know that PostgreSQL allows me to record the transaction ID in the log by including the special value of %x in the log_line_prefix. What I noticed, however, is that the first statement that occurs within a transaction always gets logged with a zero.
If I perform the following operations in psql,
begin;
insert into numbers values (1);
insert into numbers values (2);
commit;
the query log will contain the following entries:
2016-09-20 03:07:40 UTC 0 LOG: statement: begin;
2016-09-20 03:07:53 UTC 0 LOG: statement: insert into numbers values (1);
2016-09-20 03:07:58 UTC 689 LOG: statement: insert into numbers values (2);
2016-09-20 03:08:03 UTC 689 LOG: statement: commit;
My log format is %t %x and as you can see, the transaction ID for the first insert statement is 0, but it changes to 689 when I execute the second insert.
Can anyone explain why after starting a transaction PostgreSQL doesn't log the right transaction ID on the first statement? Or if I'm just doing this wrong, is there a more reliable way of identifying which queries were part of a single transaction by looking at the log file?
The transaction ID is assigned only after the statement has started, so log_statement doesn't capture it for the statement that triggers the assignment. BEGIN doesn't assign a transaction ID at all; assignment is delayed until the first write operation.
Use the virtual transaction ID instead; the placeholder is %v. Virtual transaction IDs are assigned immediately, but they are not persistent and are backend-local.
I find it useful to log both: the txid because it matches up with the xmin and xmax system column contents, and the vtxid to help group the operations done within each transaction.
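A minimal sketch of logging both, assuming you can change server settings (ALTER SYSTEM requires superuser and PostgreSQL 9.4+; editing postgresql.conf by hand works the same way):

ALTER SYSTEM SET log_line_prefix = '%t %v %x ';
SELECT pg_reload_conf();  -- log_line_prefix is reloadable, no restart needed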

Why does pgsql claim the currval of sequence undefined even after calling nextval?

I'm working with a PostgreSQL database in which a trigger function logs changes to a history table. I'm trying to add a column that keeps a logical "commit ID" to group master and detail records together. I've created a (non-temporary) sequence, and before I start the batch of updates, I bump it. All my SQL is logged to a log file, so you can clearly see this happening:
2015-04-16 10:43:37 SQLSelect: SELECT nextval('commit_id_seq')
2015-04-16 10:43:37 commit_id_seq: 8
...but when I then attempt the UPDATE, my trigger function attempts to use currval, and it fails:
2015-04-16 10:43:37 ERROR: ERROR: currval of sequence "commit_id_seq" is not yet defined in this session
CONTEXT: SQL statement "INSERT INTO history (table_name, record_id, sec_user_id, created, action, notes, status, before, after, commit_id)
SELECT TG_TABLE_NAME, rec.id, (SELECT oid FROM pg_roles WHERE rolname = CURRENT_USER), now(), SUBSTR(TG_OP,1,1), note, stat, hstore(old), hstore(new), currval('commit_id_seq')"
PL/pgSQL function log_to_history() line 18 at SQL statement
So my question is basically: WTF?
One of two reasons:
search_path differences, so you're actually talking about two different sequences.
Different sessions. The "current value" is only defined for the session you call nextval() in.
You can add the process ID (%p) to the log-file format to check whether they are different sessions.
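A quick way to check, sketched with two psql sessions (the PID is illustrative):

-- Session A:
SELECT pg_backend_pid();           -- e.g. 17281; compare with the PID in your log
SELECT nextval('commit_id_seq');   -- 8; currval is now defined, but only in this session
-- Session B:
SELECT currval('commit_id_seq');   -- ERROR: currval of sequence "commit_id_seq"
                                   --        is not yet defined in this session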

Acquiring advisory locks in postgres

I think there must be something basic I'm not understanding about advisory locking in postgres. If I enter the following commands on the psql command line client, the function returns true both times:
SELECT pg_try_advisory_lock(20); --> true
SELECT pg_try_advisory_lock(20); --> true
I was expecting that the second command should return false, since the lock should already have been acquired. Oddly, I do get the following, suggesting that the lock has been acquired twice:
SELECT pg_advisory_unlock(20); --> true
SELECT pg_advisory_unlock(20); --> true
SELECT pg_advisory_unlock(20); --> false
So I guess my question is, how does one acquire an advisory lock in a way that stops it being acquired again?
What if you try doing this from two different PostgreSQL sessions?
Check out more in the docs.
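For example (this is documented advisory-lock behavior; the session labels are illustrative):

-- Session A:
SELECT pg_try_advisory_lock(20);  -- true
SELECT pg_try_advisory_lock(20);  -- true: the same session can re-acquire its own lock
-- Session B:
SELECT pg_try_advisory_lock(20);  -- false: a different session really is locked out
-- Session A must unlock once per acquisition:
SELECT pg_advisory_unlock(20);    -- true
SELECT pg_advisory_unlock(20);    -- true
SELECT pg_advisory_unlock(20);    -- false: nothing left to release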
My first impression of advisory locks was similar. I expected the second query (SELECT pg_try_advisory_lock(20)) to return false too, because the first one got the lock. But within the same session the call just confirms that the bigint value 20 is locked; the interpretation is up to the user.
Imagine advisory locks as a table where you can store a value and take a lock on that value (normally a bigint). It is not a lock on any actual data, and no transaction will be delayed by it; how you interpret and use the result is up to you, and nothing is blocked automatically.
I use it in my projects with the two-integers-options. SELECT pg_try_advisory_lock(classId, objId) whereas both parameters are integers.
To make it work with more than one table, just use the OID of the table as classId and the primary key id (here 17) as objId:
SELECT pg_try_advisory_lock((SELECT 'first_table'::regclass::oid)::integer, 17);
In this example "first_table" is the name of the table and the second integer is the primary key id (here: 17).
Using a single bigint as the parameter allows a wider range of ids, but if you use the same id 17 with "second_table", it is locked as well, because you locked the number 17 itself, not a reference to a specific row in a specific table.
It took me some time to figure that out, so hopefully this helps in understanding the inner workings of advisory locks.
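A sketch of that collision, reusing the table names from the answer (session labels are illustrative):

-- Session A, meaning "row 17 of first_table":
SELECT pg_try_advisory_lock(17);  -- true
-- Session B, meaning "row 17 of second_table":
SELECT pg_try_advisory_lock(17);  -- false: same key 17, the table is not part of the key
-- The two-integer form keeps the keys distinct:
SELECT pg_try_advisory_lock('first_table'::regclass::oid::integer, 17);   -- Session A: true
SELECT pg_try_advisory_lock('second_table'::regclass::oid::integer, 17);  -- Session B: true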

DB2 deadlock timeout Sqlstate: 40001, reason code 68 due to update statements called from servlet using SQL

I am calling update statements one after the other from a servlet to DB2. I am getting error sqlstate 40001, reason code 68, which I found is due to deadlock timeout.
How can I resolve this issue?
Can it be resolved by setting query timeout?
If yes then how to use it with update statements in servlet or where to use it?
The reason code 68 already tells you this is due to a lock timeout (deadlock is reason code 2). It could be due to other users running queries at the same time that use the same data you are accessing, or to your own multiple updates.
Begin by running db2pd -db locktest -locks show detail from a db2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the ID numbers you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order – meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
Addendum: if your servlet or another guilty application is using select statements found to be involved in the deadlock, you can try appending with ur to the select statements if accuracy of the newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my select statements were returning so much data, it locked the tables long enough to interfere with other SQL statements)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
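A hedged sketch of what that looks like (the table and predicate here are hypothetical):

SELECT order_id, status
FROM app.orders
WHERE customer_id = 42
FOR READ ONLY WITH UR;  -- uncommitted read: takes no row locks, but may see dirty data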