Why does this query deadlock?

I have an application that reads the structure of an existing PostgreSQL 9.1 database, compares it against a "should be" state and updates the database accordingly. That works fine, most of the time. However, I have had several instances now where reading the current database structure deadlocked. The query responsible reads the existing foreign keys:
SELECT tc.table_schema, tc.table_name, tc.constraint_name, kcu.column_name,
       ccu.table_schema, ccu.table_name, ccu.column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
  ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
  ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY'
Viewing the server status in pgAdmin shows this to be the only active query/transaction that's running on the server. Still, the query doesn't return.
The error is reproducible in a way: When I find a database that produces the error, it will produce the error every time. But not all databases produce the error. This is one mysterious bug, and I'm running out of options and ideas on what else to try or how to work around this. So any input or ideas are highly appreciated!
PS: A colleague of mine just reported he produced the same error using PostgreSQL 8.4.

I tested and found your query very slow, too. The root of the problem is that "tables" in information_schema are in fact complicated views that provide catalogs according to the SQL standard. In this particular case, matters are further complicated because foreign keys can be built on multiple columns. Your query yields duplicate rows for those cases, which, I suspect, may be undesired.
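To see why, consider a multicolumn foreign key (a minimal, hypothetical example; the table names are made up):
-- Joining kcu and ccu on constraint_name alone pairs every
-- referencing column with every referenced column: this single
-- constraint yields 2 x 2 = 4 rows in the query above.
CREATE TABLE parent (a int, b int, PRIMARY KEY (a, b));
CREATE TABLE child (
  a int
, b int
, FOREIGN KEY (a, b) REFERENCES parent (a, b)
);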
Correlated subqueries with unnest(), fed to ARRAY constructors, avoid the problem in my query below.
This query yields the same information, just without duplicate rows and 100x faster. Also, I would venture to guarantee, without deadlocks.
It only works for PostgreSQL, though; it is not portable to other RDBMSes.
SELECT c.conrelid::regclass AS table_name
     , c.conname AS fk_name
     , ARRAY(SELECT a.attname
             FROM unnest(c.conkey) x
             JOIN pg_attribute a ON a.attrelid = c.conrelid
                                AND a.attnum = x) AS fk_columns
     , c.confrelid::regclass AS ref_table
     , ARRAY(SELECT a.attname
             FROM unnest(c.confkey) x
             JOIN pg_attribute a ON a.attrelid = c.confrelid
                                AND a.attnum = x) AS ref_columns
FROM pg_catalog.pg_constraint c
WHERE c.contype = 'f';
-- ORDER BY c.conrelid::regclass::text, 2
The cast to ::regclass yields table names as seen with your current search_path, which may or may not be what you want. For this query to include the absolute path (schema) for every table name, you can set the search_path like this:
SET search_path = pg_catalog;
SELECT ...
To continue your session with your default search_path:
RESET search_path;
Related:
Get column names and data types of a query, table or view

PostgreSQL n_distinct statistics setting

Are there multiple ways to set n_distinct in PostgreSQL? Both of these seem to be doing the same thing but end up changing a different value within pg_attribute. What is the difference between these two commands?
alter table my_table alter column my_column set (n_distinct = 500);
alter table my_table alter column my_column set statistics 1000;
select c.relname, a.attname, a.attoptions, a.attstattarget
from pg_class c
inner join pg_attribute a on c.oid = a.attrelid
where c.relname = 'my_table'
  and a.attname = 'my_column'
order by c.relname, a.attname;
Name          | Value
--------------|-----------------
relname       | my_table
attname       | my_column
attoptions    | {n_distinct=500}
attstattarget | 1000
Both of these seem to be doing the same thing
Why would you say that? The two commands are clearly distinct. Both relate to column statistics and query planning, but they do very different things.
The statistics target ...
controls the level of detail of statistics accumulated for this column by ANALYZE. See:
Check statistics targets in PostgreSQL
Basics in the manual.
Setting n_distinct is something completely different. It means hard-coding the number (or ratio) of distinct values to expect for the given column. (But only effective after the next ANALYZE.)
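To make the difference visible, here is a minimal sketch using the table and column names from the question; note that both settings only influence the planner after a fresh ANALYZE:
ALTER TABLE my_table ALTER COLUMN my_column SET (n_distinct = 500);  -- hard-codes the estimate
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 1000;     -- finer-grained sampling
ANALYZE my_table;

-- The hard-coded value now surfaces in the planner's statistics:
SELECT n_distinct
FROM pg_stats
WHERE tablename = 'my_table'
  AND attname = 'my_column';   -- returns 500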
Related answer on dba.SE with more on n_distinct:
Very bad query plan in PostgreSQL 9.6

Delete unused indexes

I run this query to check whether there are unused indexes in my database.
SELECT t.tablename AS "relation",
       indexname,
       c.reltuples AS num_rows,
       pg_relation_size(quote_ident(t.tablename)::text) AS table_size,
       pg_relation_size(quote_ident(indexrelname)::text) AS index_size,
       idx_scan AS number_of_scans,
       idx_tup_read AS tuples_read,
       idx_tup_fetch AS tuples_fetched
FROM pg_tables t
LEFT OUTER JOIN pg_class c ON t.tablename = c.relname
LEFT OUTER JOIN (
    SELECT c.relname AS ctablename,
           ipg.relname AS indexname,
           x.indnatts AS number_of_columns,
           psai.idx_scan,
           idx_tup_read,
           idx_tup_fetch,
           indexrelname,
           indisunique
    FROM pg_index x
    JOIN pg_class c ON c.oid = x.indrelid
    JOIN pg_class ipg ON ipg.oid = x.indexrelid
    JOIN pg_stat_all_indexes psai ON x.indexrelid = psai.indexrelid
) AS foo ON t.tablename = foo.ctablename
WHERE t.schemaname = 'public'
  AND idx_scan = 0
ORDER BY
  --1,2
  --6
  5 DESC
;
And I get a lot of rows where these fields are all zero:
number_of_scans,
tuples_read,
tuples_fetched
Does that mean I can drop those indexes? Is there a chance that this metadata is out of date? How can I check that?
I'm using Postgres version 9.6.
Your query misses some uses of indexes that do not require them to be scanned:
they enforce primary key, unique and exclusion constraints
they influence statistics collection (for “expression indexes”)
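To illustrate the first point: an index that backs a constraint is in use even if it is never scanned, and PostgreSQL refuses to drop it directly. A minimal, hypothetical example:
CREATE TABLE t (col int);
ALTER TABLE t ADD CONSTRAINT t_col_key UNIQUE (col);  -- implicitly creates index t_col_key

DROP INDEX t_col_key;
-- ERROR:  cannot drop index t_col_key because constraint t_col_key on table t requires it

ALTER TABLE t DROP CONSTRAINT t_col_key;  -- removes constraint and index together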
Here is my gold standard query from my blog post:
SELECT s.schemaname,
       s.relname AS tablename,
       s.indexrelname AS indexname,
       pg_relation_size(s.indexrelid) AS index_size
FROM pg_catalog.pg_stat_user_indexes s
JOIN pg_catalog.pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0          -- has never been scanned
  AND 0 <> ALL (i.indkey)     -- no index column is an expression
  AND NOT EXISTS              -- does not enforce a constraint
      (SELECT 1
       FROM pg_catalog.pg_constraint c
       WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC;
Anything that shows up there has not been used since the statistics have been reset and can be safely dropped.
There are a few caveats:
statistics collection must run (look for the “statistics collector” process and see if you have warnings about “stale statistics” in the log)
run the query against your production database
if your program is running at many sites, try it on all of them (different users have different usage patterns)
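Regarding the first caveat: to see since when the counters have been accumulating, check the statistics reset time of your database (this works on 9.6):
-- Counters in the pg_stat_* views accumulate since this timestamp:
SELECT datname, stats_reset
FROM pg_stat_database
WHERE datname = current_database();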
It is possible you can delete them; however, you should make sure your query runs after a typical workload. That is, are there indexes that show no usage in this query but are only used at certain times, when specialized queries run? Month-end reporting, weekly runs, etc.? We ran into this a couple of times: several large indexes didn't get used during the day but supported month-end summaries.

Incremental loading into Amazon Redshift from a local MySQL database - automation process

We are beginning to use Amazon Redshift for our reporting purposes. We are able to load our entire data onto Redshift through S3 and also manually update the data for the everyday incremental load. Now we are automating the whole thing, so that the scripts can be run at a particular time and the data gets updated automatically every day.
The method we are using for incremental load is as suggested in the documentation,
http://docs.aws.amazon.com/redshift/latest/dg/merge-create-staging-table.html
This works fine manually, but for the automated process I am not sure how to obtain the primary key for each table, based on which the existing records are updated. In short: how do I obtain the primary key field from Redshift? Is there something like "index" or some other term that can be used to obtain the primary key, or even the distkey? Thanks in advance.
I'm still working on the details of the query to extract the information easily, but you can use this query to find most of the interesting things about a table:
SELECT a.attname AS "column_name",
       format_type(a.atttypid, a.atttypmod) AS "column_type",
       format_encoding(a.attencodingtype::integer) AS "encoding",
       a.attisdistkey AS "distkey",
       a.attsortkeyord AS "sortkey",
       a.attnotnull AS "notnull",
       a.attnum,
       i.*
FROM pg_namespace n
JOIN pg_class c ON n.oid = c.relnamespace
JOIN pg_attribute a ON c.oid = a.attrelid
                   AND a.attnum > 0
                   AND NOT a.attisdropped
LEFT JOIN pg_index i ON c.oid = i.indrelid
                    AND i.indisprimary
WHERE c.relname = 'mytablename'
  AND n.nspname = 'myschemaname'
ORDER BY a.attnum;
If you look at the output, pg_index.indkey is a space-delimited concatenation of the primary key columns (a primary key may be compound), expressed as column numbers that tie back to the pg_attribute.attnum column.
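On stock PostgreSQL you can skip the string parsing entirely, because attnum = ANY(indkey) matches directly against the int2vector. Whether this shortcut works on Redshift I have not verified, so treat it as a sketch:
-- Primary key columns of the table (no guaranteed order):
SELECT a.attname
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid
                   AND a.attnum = ANY (i.indkey)
WHERE i.indrelid = 'myschemaname.mytablename'::regclass
  AND i.indisprimary;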

How to introspect materialized views

I have a utility that introspects columns of tables using:
select column_name, data_type from information_schema.columns
where table_name=%s
How can I extend this to introspect columns of materialized views?
Your query carries a few shortcomings / room for improvement:
A table name is not unique inside a database; you would have to narrow it down to a specific schema, or you could get surprising / misleading / totally incorrect results.
It's much more effective / convenient to cast the (optionally) schema-qualified table name to regclass ... see below.
A cast to regtype gives you generic type names instead of internal ones. But that's still only the base type.
Use the system catalog information function format_type() instead to get the exact type name including modifiers.
With the above improvements you don't need to join to additional tables. Just pg_attribute.
Dropped columns reside in the catalog until the table is vacuumed (fully). You need to exclude those.
SELECT attname, atttypid::regtype AS base_type
, format_type(atttypid, atttypmod) AS full_type
FROM pg_attribute
WHERE attrelid = 'myschema.mytable'::regclass
AND attnum > 0
AND NOT attisdropped; -- no dead columns
As an aside: the views in the information schema are only good for standard compliance and portability (which rarely works anyway). If you don't plan to switch your RDBMS, stick with the catalog tables, which are much faster, and more complete, apparently.
It would seem that Postgres 9.3 has left materialized views out of the information_schema. (See http://postgresql.1045698.n5.nabble.com/Re-Materialized-views-WIP-patch-td5740513i40.html for a discussion.)
The following will work for introspection:
select attname, typname
from pg_attribute a
join pg_class c on a.attrelid = c.oid
join pg_type t on a.atttypid = t.oid
where relname = %s and attnum >= 1;
The clause attnum >= 1 suppresses system columns. The type names are Postgres-specific this way, I guess, but good enough for my purposes.
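Note that the query matches any relation with that name, whether table, view or materialized view. If you want to restrict it to materialized views only, relkind = 'm' identifies them in pg_class; a sketch in the same style:
select attname, typname
from pg_attribute a
join pg_class c on a.attrelid = c.oid
join pg_type t on a.atttypid = t.oid
where c.relkind = 'm'   -- materialized views only
  and relname = %s and attnum >= 1;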

PostgreSQL: dump table without TOAST data

I am trying to isolate TOAST data from a table so that I can dump the table without the TOAST data. I know there must be a way to do that, but I can't find my way there... Suggestions would be highly appreciated.
Try COPY (or psql's \copy) with the query option: you can select the columns to export. You can also choose CSV format rather than tab-separated, the representation of NULLs, etc.
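For example, to export just two columns in CSV with a custom NULL marker (the column names here are hypothetical; the table name is taken from the query further down):
COPY (SELECT id, created_at FROM element)
TO STDOUT WITH (FORMAT csv, NULL '\N');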
TOAST is the way PostgreSQL stores your data internally. For you, as a user, there are only the values that you delegated to the database to keep for you.
TOAST comes into play mostly for textual data, when any of the tuple's attributes makes the tuple's size exceed roughly 8 kB (if PostgreSQL is compiled with the default page size). This happens inside the DB engine, transparently to the user. Say, if you insert a row with a text value of around 10,000 characters, the corresponding attribute will be TOASTed.
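To see whether a table has a TOAST table at all, and how big it is, you can check pg_class; a sketch (reltoastrelid is 0 when the table has no TOAST table):
SELECT reltoastrelid::regclass AS toast_table,
       pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size
FROM pg_class
WHERE relname = 'element'
  AND reltoastrelid <> 0;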
Given how TOAST works, your question reads as: how do I dump a table without the attributes containing big chunks of data? It is unclear to me what the purpose of this would be, as your dump will be incomplete.
EDIT: I don't know how to find out whether any attribute of any tuple has a TOASTed value. Instead, I will eliminate all attributes that can have TOASTed values.
The following query will give you all the columns of a table that are always in PLAIN storage mode:
SELECT a.attname
FROM pg_class t
JOIN pg_attribute a ON t.oid = a.attrelid
JOIN pg_type typ ON typ.oid = a.atttypid
WHERE t.relkind='r' AND t.relname = 'element'
AND a.attnum > 0 AND NOT a.attisdropped
AND typ.typstorage='p'
ORDER BY a.attnum;
And this query will generate the desired SQL; you can wrap it in a script, or feed the generated statement back to the server (see the sketch after the query):
SELECT 'COPY '||quote_ident(t.relname)||
'('||string_agg(a.attname, ',' ORDER BY a.attnum)||') TO stdout;'
FROM pg_class t
JOIN pg_attribute a ON t.oid = a.attrelid
JOIN pg_type typ ON typ.oid = a.atttypid
WHERE t.relkind='r' AND t.relname = '<YOUR_TABLE>'
AND a.attnum > 0 AND NOT a.attisdropped
AND typ.typstorage='p'
GROUP BY t.relname;
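If your client is psql 9.6 or later, one convenient way to execute the generated statement in the same session is \gexec, which runs each result cell as a statement; this should handle the COPY ... TO stdout output like any other COPY. A sketch:
-- Same query as above, terminated with \gexec instead of ';':
SELECT 'COPY ' || quote_ident(t.relname)
    || '(' || string_agg(a.attname, ',' ORDER BY a.attnum) || ') TO stdout;'
FROM pg_class t
JOIN pg_attribute a ON t.oid = a.attrelid
JOIN pg_type typ ON typ.oid = a.atttypid
WHERE t.relkind = 'r'
  AND t.relname = '<YOUR_TABLE>'  -- placeholder, as above
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND typ.typstorage = 'p'
GROUP BY t.relname
\gexec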