pg_tables lock duration in PostgreSQL

When we run a query like CREATE TABLE in PostgreSQL, the dictionary table pg_tables gets locked for a certain duration. Is there any command or query with which we can track how long a query locks the dictionary table?
I have looked at the pg_locks table, but it only shows the type of lock. I need to find the duration for which pg_tables is locked by an executing query.

Postgres does not have any tool for lock-time monitoring; we maintained a patch to Postgres for this purpose. Anyway, pg_tables is not a table, it is a view:
postgres=# \d+ pg_tables;
View "pg_catalog.pg_tables"
+-------------+---------+-----------+----------+---------+---------+-------------+
| Column | Type | Collation | Nullable | Default | Storage | Description |
+-------------+---------+-----------+----------+---------+---------+-------------+
| schemaname | name | | | | plain | |
| tablename | name | | | | plain | |
| tableowner | name | | | | plain | |
| tablespace | name | | | | plain | |
| hasindexes | boolean | | | | plain | |
| hasrules | boolean | | | | plain | |
| hastriggers | boolean | | | | plain | |
| rowsecurity | boolean | | | | plain | |
+-------------+---------+-----------+----------+---------+---------+-------------+
View definition:
SELECT n.nspname AS schemaname,
    c.relname AS tablename,
    pg_get_userbyid(c.relowner) AS tableowner,
    t.spcname AS tablespace,
    c.relhasindex AS hasindexes,
    c.relhasrules AS hasrules,
    c.relhastriggers AS hastriggers,
    c.relrowsecurity AS rowsecurity
FROM pg_class c
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE c.relkind = ANY (ARRAY['r'::"char", 'p'::"char"]);
You can check whether one or more of these underlying tables is locked; only a few operations take locks that block reading them. There is more documentation about this at https://wiki.postgresql.org/wiki/Lock_Monitoring
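PostgreSQL does not record when a lock was acquired, so without a patch the closest you can get is to sample pg_locks joined to pg_stat_activity and use the holder's transaction start time as an upper bound (locks are held until commit or rollback). A minimal sketch against the catalogs behind pg_tables:

-- Sample current locks on the catalogs that back pg_tables, together with
-- how long the holding session's transaction and statement have been running.
-- now() - xact_start is an upper bound on how long the lock has been held.
SELECT l.pid,
       l.relation::regclass AS locked_relation,
       l.mode,
       l.granted,
       now() - a.xact_start  AS xact_duration,
       now() - a.query_start AS query_duration,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation IN ('pg_class'::regclass,
                     'pg_namespace'::regclass,
                     'pg_tablespace'::regclass);

Run it in a loop (for example with psql's \watch) to see how long a given pid keeps a lock across samples.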

Related

Interpretation of rows in Phoenix SYSTEM.CATALOG

When I create a Phoenix table there are two extra rows in SYSTEM.CATALOG. These are the first and the second rows in the output of SELECT * FROM SYSTEM.CATALOG ...... below. Can someone please help me understand what these two rows signify?
The third and fourth rows in the output of SELECT * FROM SYSTEM.CATALOG ...... below are easily relatable to the CREATE TABLE statement. Therefore, they look fine.
0: jdbc:phoenix:t40aw2.gaq> CREATE TABLE C5 (company_id INTEGER PRIMARY KEY, name VARCHAR(225));
No rows affected (4.618 seconds)
0: jdbc:phoenix:t40aw2.gaq> select * from C5;
+-------------+-------+
| COMPANY_ID | NAME |
+-------------+-------+
+-------------+-------+
No rows selected (0.085 seconds)
0: jdbc:phoenix:t40aw2.gaq> SELECT * FROM SYSTEM.CATALOG WHERE TABLE_NAME='C5';
+------------+--------------+-------------+--------------+----------------+----------------+-------------+----------+---------------+---------------+------------------+--------------+----+
| TENANT_ID | TABLE_SCHEM | TABLE_NAME | COLUMN_NAME | COLUMN_FAMILY | TABLE_SEQ_NUM | TABLE_TYPE | PK_NAME | COLUMN_COUNT | SALT_BUCKETS | DATA_TABLE_NAME | INDEX_STATE | IM |
+------------+--------------+-------------+--------------+----------------+----------------+-------------+----------+---------------+---------------+------------------+--------------+----+
| | | C5 | | | 0 | u | | 2 | null | | | fa |
| | | C5 | | 0 | null | | | null | null | | | |
| | | C5 | COMPANY_ID | | null | | | null | null | | | |
| | | C5 | NAME | 0 | null | | | null | null | | | |
+------------+--------------+-------------+--------------+----------------+----------------+-------------+----------+---------------+---------------+------------------+--------------+----+
4 rows selected (0.557 seconds)
0: jdbc:phoenix:t40aw2.gaq>
The Phoenix version I am using is: 4.1.8.29
Kindly note that no other operations were done on the table besides the three listed above, namely: CREATE TABLE, SELECT * from the table, and SELECT * FROM SYSTEM.CATALOG WHERE TABLE_NAME = the concerned table name.

TSQL - PIVOT but CONCATENATE Fields

In this thread I was assisted with my initial question. The answer supplied has been accepted because it was the actual answer to that question.
As an extension to that answer, please consider the same table:
+------------------------------------------------------------------------------+
| GUID | DeviceGUID | DetailGUID | sValue | iValue | gValue | DateStored |
| ENTRY1 | DEVICE1 | Detail1 | SN112 | | | 01/01/2020 |
| ENTRY2 | DEVICE1 | Detail4 | | 1241 | | 01/01/2020 |
| ENTRY3 | DEVICE1 | Detail7 | | | GUID12 | 01/01/2020 |
| ENTRY8 | DEVICE1 | Detail7 | | | GUID13 | 01/01/2020 |
| ENTRY9 | DEVICE1 | Detail7 | | | GUID14 | 01/01/2020 |
| ENTRY4 | DEVICE2 | Detail1 | SN111 | | | 01/01/2020 |
| ENTRY5 | DEVICE2 | Detail2 | RND123 | | | 01/01/2020 |
| ENRTY6 | DEVICE2 | Detail4 | | 2351 | | 03/01/2020 |
| ENTRY7 | DEVICE3 | Detail1 | SN100 | | | 02/01/2020 |
| [...] | [...] | [...] | | | | |
| | | | | | | |
+------------------------------------------------------------------------------+
The issue arises when there are multiple records with the same DetailGUID; PIVOT has been set to select MAX. I do not know exactly how that picks the actual record in this case, but that is not important.
Instead of selecting one record and displaying it, I need the records to be concatenated into a comma-separated list in the pivot.
The current SQL query is as follows:
DECLARE @columns NVARCHAR(MAX), @sql NVARCHAR(MAX), @OrderGUID uniqueidentifier;
SET @OrderGUID = '1B470FFB-7410-4950-A3BC-B9D778C459D3';
SET @columns = N'';
SELECT @columns += N', p.' + QUOTENAME([Name])
FROM
(
    SELECT GUID AS [Name]
    FROM [dbo].Details AS p
) AS x;
SET @sql =
N'
SELECT *
FROM
(
    SELECT DeviceObjectGUID
          ,DetailGUID
          ,CONCAT(sValue, iValue, gValue) AS [value]
          ,DateStored
    FROM DeviceDetails
    WHERE (DeviceObjectGUID IN (SELECT DeviceObjectGUID FROM DevicesPerOrder WHERE OrderGUID = ''' + CAST(@OrderGUID AS nvarchar(MAX)) + '''))
) DS
PIVOT
(MAX([value]) FOR DetailGUID IN (' + STUFF(REPLACE(@columns, ', p.[', ',['), 1, 1, '') + ')) PVT';
EXEC sp_executesql @sql;
This dynamically selects all the DetailGUIDs and transforms them into headers, but I am unsure where to start with the CONCAT or FOR XML statements.
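For reference, one way to get a comma-separated list per pivot column is to replace PIVOT with conditional aggregation. A minimal static sketch, assuming SQL Server 2017+ for STRING_AGG (on older versions the FOR XML PATH trick plays the same role); the two hard-coded DetailGUID values stand in for the dynamically built column list:

-- Conditional aggregation instead of PIVOT: STRING_AGG concatenates every
-- matching row into one comma-separated value, where MAX keeps only one.
-- The CASE yields NULL for non-matching rows, and STRING_AGG ignores NULLs.
SELECT DeviceObjectGUID,
       STRING_AGG(CASE WHEN DetailGUID = 'Detail1'
                       THEN CONCAT(sValue, iValue, gValue) END, ', ') AS [Detail1],
       STRING_AGG(CASE WHEN DetailGUID = 'Detail7'
                       THEN CONCAT(sValue, iValue, gValue) END, ', ') AS [Detail7]
FROM DeviceDetails
GROUP BY DeviceObjectGUID;

The dynamic version would generate one STRING_AGG(...) expression per DetailGUID into @columns instead of the p.[...] list.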

Will selecting from pg_locks always return a result for itself?

SELECT relation::regclass, * FROM pg_locks ;
Results in the following:
relation | locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath
----------+------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------+----------
pg_locks | relation | 16397 | 11187 | | | | | | | | 76/111628 | 2652 | AccessShareLock | t | t
| virtualxid | | | | | 76/111628 | | | | | 76/111628 | 2652 | ExclusiveLock | t | t
(2 rows)
Can I assume that my query of pg_locks is itself what is causing the ExclusiveLock in that result?
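Almost certainly yes: every transaction holds an ExclusiveLock on its own virtualxid, and reading pg_locks takes an AccessShareLock on it, so both rows above belong to the querying backend (note they share pid 2652). A quick way to confirm is to filter your own backend out:

-- Exclude the current session; our own AccessShareLock on pg_locks and the
-- virtualxid ExclusiveLock disappear from the result.
SELECT relation::regclass, locktype, mode, granted, pid
FROM pg_locks
WHERE pid <> pg_backend_pid();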

Redshift: table info query not working via Spark

I am trying to run this query from Spark code using Databricks:
select * from svv_table_info
but I am getting this error message:
Exception in thread "main" java.sql.SQLException: Amazon Invalid operation: Specified types or functions (one per INFO message) not supported on Redshift tables.;
Any idea why I am getting this?
That view returns table_id, which is of the Postgres system type OID.
psql=# \d+ svv_table_info
Column | Type | Modifiers | Storage | Description
---------------+---------------+-----------+----------+-------------
database | text | | extended |
schema | text | | extended |
table_id | oid | | plain |
table | text | | extended |
encoded | text | | extended |
diststyle | text | | extended |
sortkey1 | text | | extended |
max_varchar | integer | | plain |
sortkey1_enc | character(32) | | extended |
sortkey_num | integer | | plain |
size | bigint | | plain |
pct_used | numeric(10,4) | | main |
empty | bigint | | plain |
unsorted | numeric(5,2) | | main |
stats_off | numeric(5,2) | | main |
tbl_rows | numeric(38,0) | | main |
skew_sortkey1 | numeric(19,2) | | main |
skew_rows | numeric(19,2) | | main |
You can cast it to INTEGER and Spark should be able to handle it.
SELECT database,schema,table_id::INT
,"table",encoded,diststyle,sortkey1
,max_varchar,sortkey1_enc,sortkey_num
,size,pct_used,empty,unsorted,stats_off
,tbl_rows,skew_sortkey1,skew_rows
FROM svv_table_info;

Optimization of Sybase 15.5 union query

I'm having trouble trying to optimize the following query on Sybase 15.5. Does anyone know how I could improve it? Each of the tables used there has about 30 million rows. I tried my best to optimize it, but it still takes a long time (1.5 hours).
create table #tmp1( f_id smallint, a_date smalldatetime )
create table #tmp2( f_id smallint, a_date smalldatetime )
insert #tmp1
select f_id, a_date = max( a_date )
FROM audit_table
WHERE i_date = @pIDate
group by f_id
insert #tmp2
select f_id , a_date = max( a_date )
FROM n_audit_table
WHERE i_date = @pIDate
group by f_id
create table #tmp(
t_account varchar(32) not null,
t_id varchar(32) not null,
product varchar(64) null
)
insert into #tmp
select t_account,t_id, product
FROM audit_table nt, #tmp1 a
WHERE i_date = @pIDate
and nt.a_date = a.a_date
and nt.f_id = a.f_id
union
select t_account,t_id, product
FROM n_audit_table t, #tmp2 a
WHERE t.item_date = @pIDate
and t.a_date = a.a_date
and t.f_id = a.f_id
Both tables have indexes on i_date, a_date, and f_id. Please find below the showplan for the part that takes a long time.
QUERY PLAN FOR STATEMENT 2 (at line 24).
Optimized using Serial Mode
STEP 1
The type of query is INSERT.
10 operator(s) under root
|ROOT:EMIT Operator (VA = 10)
|
| |INSERT Operator (VA = 9)
| | The update mode is direct.
| |
| | |HASH UNION Operator (VA = 8) has 2 children.
| | | Using Worktable1 for internal storage.
| | | Key Count: 3
| | |
| | | |NESTED LOOP JOIN Operator (VA = 3) (Join Type: Inner Join)
| | | |
| | | | |SCAN Operator (VA = 0)
| | | | | FROM TABLE
| | | | | #tmp1
| | | | | a
| | | | | Table Scan.
| | | | | Forward Scan.
| | | | | Positioning at start of table.
| | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | With LRU Buffer Replacement Strategy for data pages.
| | | |
| | | | |RESTRICT Operator (VA = 2)(5)(0)(0)(0)(0)
| | | | |
| | | | | |SCAN Operator (VA = 1)
| | | | | | FROM TABLE
| | | | | | audit_table
| | | | | | nt
| | | | | | Index : IX_audit_table
| | | | | | Forward Scan.
| | | | | | Positioning by key.
| | | | | | Keys are:
| | | | | | i_date ASC
| | | | | | a_date ASC
| | | | | | Using I/O Size 2 Kbytes for index leaf pages.
| | | | | | With LRU Buffer Replacement Strategy for index leaf pages.
| | | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | | With LRU Buffer Replacement Strategy for data pages.
| | |
| | | |NESTED LOOP JOIN Operator (VA = 7) (Join Type: Inner Join)
| | | |
| | | | |SCAN Operator (VA = 4)
| | | | | FROM TABLE
| | | | | #tmp2
| | | | | a
| | | | | Table Scan.
| | | | | Forward Scan.
| | | | | Positioning at start of table.
| | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | With LRU Buffer Replacement Strategy for data pages.
| | | |
| | | | |RESTRICT Operator (VA = 6)(5)(0)(0)(0)(0)
| | | | |
| | | | | |SCAN Operator (VA = 5)
| | | | | | FROM TABLE
| | | | | | n_audit_table
| | | | | | t
| | | | | | Index : IX_n_audit_table
| | | | | | Forward Scan.
| | | | | | Positioning by key.
| | | | | | Keys are:
| | | | | | i_date ASC
| | | | | | a_date ASC
| | | | | | Using I/O Size 2 Kbytes for index leaf pages.
| | | | | | With LRU Buffer Replacement Strategy for index leaf pages.
| | | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | | With LRU Buffer Replacement Strategy for data pages.
| |
| | TO TABLE
| | #tmp
| | Using I/O Size 2 Kbytes for data pages.
Total estimated I/O cost for statement 2 (at line 24): 29322945.
I doubt it is a union issue; the queries themselves are the more probable troublemaker.
I suppose you should start by adding indexes to your temp tables:
create table #tmp1( f_id smallint, a_date smalldatetime )
Create clustered index IX1Temp on #tmp1(f_id)
Create index IX2Temp on #tmp1(a_date)
...
Also, I see not much sense in #tmp1 and #tmp2 the way you use them; you could use a CTE instead. I would also recommend trying PARTITION BY instead of a GROUP BY statement, as sketched below.
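For what it's worth, the PARTITION BY idea would look roughly like this sketch. Treat it as an illustration of the shape rather than drop-in code: it assumes a dialect with window functions (Sybase IQ or newer ASE releases; plain ASE 15.5 may not accept OVER):

-- Rank rows per f_id by a_date and keep only the latest, replacing the
-- GROUP BY plus the self-join through #tmp1.
select t_account, t_id, product
from (
    select t_account, t_id, product,
           rank() over (partition by f_id order by a_date desc) as rn
    from audit_table
    where i_date = @pIDate
) x
where rn = 1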
According to the query execution plan, the problem is the table scans on the temporary tables.
Please get the execution plan for the following query:
insert into #tmp
select t_account,t_id, product
FROM
audit_table nt,
(
select f_id, a_date = max(a_date)
FROM audit_table
WHERE i_date = @pIDate
group by f_id
) a
WHERE
i_date = @pIDate
and nt.a_date = a.a_date
and nt.f_id = a.f_id
union
select t_account,t_id, product
FROM
n_audit_table t,
(
select f_id , a_date = max( a_date )
FROM n_audit_table
WHERE i_date = @pIDate
group by f_id
) a
WHERE
t.item_date = @pIDate
and t.a_date = a.a_date
and t.f_id = a.f_id
How many rows end up in each of the temporary tables?
Looks like the temporary tables could be replaced by using HAVING. I would need to test it; it is always complicated when your GROUP BY is on a single column and you need more columns in the output.
Try running this statement with SET STATISTICS PLANCOST ON and SET STATISTICS IO ON, as that would give a good idea of the number of pages that are scanned and whether Sybase is going wrong somewhere while optimising the query.
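For reference, a minimal sketch of how those session-level diagnostics are usually switched on in isql before re-running the slow statement (nothing here is specific to this schema):

-- Enable per-statement plan-cost and I/O accounting for this session, then
-- re-run the slow insert; the output reports logical and physical page
-- reads per table alongside the costed plan tree.
set statistics plancost on
set statistics io on
go
-- ... re-run the INSERT ... UNION statement here ...
set statistics io off
set statistics plancost off
go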