PostgreSQL n_distinct statistics setting

Are there multiple ways to set n_distinct in PostgreSQL? Both of these seem to do the same thing, but each ends up changing a different value within pg_attribute. What is the difference between these two commands?
alter table my_table alter column my_column set (n_distinct = 500);
alter table my_table alter column my_column set statistics 1000;
select
c.relname,
a.attname,
a.attoptions,
a.attstattarget
from
pg_class c
inner join
pg_attribute a
on c.oid = a.attrelid
where
c.relname = 'my_table'
and
a.attname = 'my_column'
order by
c.relname,
a.attname;
Name |Value
-------------|----------------
relname |my_table
attname |my_column
attoptions |{n_distinct=500}
attstattarget|1000

Both of these seem to be doing the same thing
Why would you say that? The two commands are clearly distinct. Both relate to column statistics and query planning, but they do very different things.
The statistics target ...
controls the level of detail of statistics accumulated for this column by ANALYZE. See:
Check statistics targets in PostgreSQL
Basics in the manual.
Setting n_distinct is something completely different. It means hard-coding the number (or ratio) of distinct values to expect for the given column. (But only effective after the next ANALYZE.)
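To see both at work, a minimal sketch using the table and column from the question (the value -0.5 is just an illustration; negative values are an alternative, ratio-based form):
-- n_distinct >= 1 is taken as an absolute count; a negative value
-- (>= -1) is a ratio: -0.5 means "half as many distinct values as rows".
alter table my_table alter column my_column set (n_distinct = -0.5);
-- raise the sampling detail for this column (the default is 100):
alter table my_table alter column my_column set statistics 1000;
-- neither setting takes effect until the next ANALYZE:
analyze my_table;
-- check what the planner will actually use:
select n_distinct from pg_stats
where tablename = 'my_table' and attname = 'my_column';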
Related answer on dba.SE with more on n_distinct:
Very bad query plan in PostgreSQL 9.6

Related

Delete unused indexes

I run this query to check whether there are unused indexes in my database.
SELECT
    t.tablename AS "relation",
    indexname,
    c.reltuples AS num_rows,
    pg_relation_size(quote_ident(t.tablename)::text) AS table_size,
    pg_relation_size(quote_ident(indexrelname)::text) AS index_size,
    idx_scan AS number_of_scans,
    idx_tup_read AS tuples_read,
    idx_tup_fetch AS tuples_fetched
FROM pg_tables t
LEFT OUTER JOIN pg_class c ON t.tablename = c.relname
LEFT OUTER JOIN (
    SELECT c.relname AS ctablename, ipg.relname AS indexname,
           x.indnatts AS number_of_columns,
           psai.idx_scan, idx_tup_read, idx_tup_fetch,
           indexrelname, indisunique
    FROM pg_index x
    JOIN pg_class c ON c.oid = x.indrelid
    JOIN pg_class ipg ON ipg.oid = x.indexrelid
    JOIN pg_stat_all_indexes psai ON x.indexrelid = psai.indexrelid
) AS foo ON t.tablename = foo.ctablename
WHERE t.schemaname = 'public'
AND idx_scan = 0
ORDER BY
--1,2
--6
5 DESC;
And I got a lot of rows where those fields are all zero:
number_of_scans,
tuples_read,
tuples_fetched
Does that mean that I can drop them? Is there a chance that this metadata is out of date? How can I check that?
I'm using Postgres version 9.6.
Your query misses some uses of indexes that do not require them to be scanned:
they enforce primary key, unique and exclusion constraints
they influence statistics collection (for “expression indexes”)
Here is my gold standard query from my blog post:
SELECT s.schemaname,
s.relname AS tablename,
s.indexrelname AS indexname,
pg_relation_size(s.indexrelid) AS index_size
FROM pg_catalog.pg_stat_user_indexes s
JOIN pg_catalog.pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0 -- has never been scanned
AND 0 <> ALL (i.indkey) -- no index column is an expression
AND NOT EXISTS -- does not enforce a constraint
(SELECT 1 FROM pg_catalog.pg_constraint c
WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC;
Anything that shows up there has not been used since the statistics have been reset and can be safely dropped.
There are a few caveats:
statistics collection must run (look for the “statistics collector” process and see if you have warnings about “stale statistics” in the log)
run the query against your production database
if your program is running at many sites, try it on all of them (different users have different usage patterns)
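Since "has never been scanned" only counts back to the last statistics reset, it is worth checking how old those counters are. A quick sketch against pg_stat_database:
SELECT datname, stats_reset
FROM pg_stat_database
WHERE datname = current_database();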
It is possible you can delete them; however, you should make sure your query runs after a typical workload. That is, are there indexes that show no usage in this query because they are only used at certain times, when specialized queries run? Month-end reporting, weekly runs, etc. We ran into this a couple of times: several large indexes didn't get used during the day but supported month-end summaries.
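When you do decide to drop one, DROP INDEX CONCURRENTLY (available in 9.6) avoids taking a lock that blocks concurrent writes; the index name below is a placeholder:
DROP INDEX CONCURRENTLY my_unused_index; -- cannot run inside a transaction block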

Query Constraint Clauses With Schema and Table (Postgres)

I am trying to query the constraint clauses along with schema and table in Postgres. I've gotten as far as identifying information_schema.check_constraints as a useful view. The problem is that doing
select *
from information_schema.check_constraints
results in constraint_catalog, constraint_schema, constraint_name, and check_clause. The check_clause is what I want, and this view also gives me the constraint_schema. However, it does not give the table that the constraint is defined on. In my current database, I have constraints with the same name defined on different tables within the same schema (which is in and of itself perhaps poor design, but it is what I need to deal with). Is it possible to get the table name here as well?
select
conname,
connamespace::regnamespace as schemaname,
conrelid::regclass as tablename,
consrc as checkclause, -- removed in PostgreSQL 12; use pg_get_constraintdef() instead
pg_get_constraintdef(oid) as definition
from
pg_constraint
where
contype = 'c'
and conrelid <> 0; -- to get only table constraints
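If you only need the check constraints of a single table, a variant (schema and table names here are placeholders) can filter on conrelid directly:
select conname, pg_get_constraintdef(oid) as definition
from pg_constraint
where contype = 'c'
and conrelid = 'myschema.mytable'::regclass;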
About pg_constraint
About Object Identifier Types

How to introspect materialized views

I have a utility that introspects columns of tables using:
select column_name, data_type from information_schema.columns
where table_name=%s
How can I extend this to introspect columns of materialized views?
Your query has a few shortcomings / room for improvement:
A table name is not unique inside a database; you would have to narrow it down to a specific schema, or you could get surprising / misleading / totally incorrect results.
It's much more effective / convenient to cast the (optionally) schema-qualified table name to regclass ... see below.
A cast to regtype gives you generic type names instead of internal ones. But that's still only the base type.
Use the system catalog information function format_type() instead to get the exact type name, including modifiers.
With the above improvements you don't need to join to additional tables. Just pg_attribute.
Dropped columns reside in the catalog until the table is vacuumed (fully). You need to exclude those.
SELECT attname, atttypid::regtype AS base_type
, format_type(atttypid, atttypmod) AS full_type
FROM pg_attribute
WHERE attrelid = 'myschema.mytable'::regclass
AND attnum > 0
AND NOT attisdropped; -- no dead columns
As an aside: the views in the information schema are only good for standard compliance and portability (rarely works anyway). If you don't plan to switch your RDBMS, stick with the catalog tables, which are much faster - and more complete, apparently.
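Note that the query above works unchanged for materialized views: they get regular entries in pg_class (relkind = 'm') and pg_attribute just like tables. A sketch with a placeholder matview name:
SELECT attname, format_type(atttypid, atttypmod) AS full_type
FROM pg_attribute
WHERE attrelid = 'myschema.my_matview'::regclass -- placeholder name
AND attnum > 0
AND NOT attisdropped;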
It would seem that postgres 9.3 has left materialized views out of the information_schema. (See http://postgresql.1045698.n5.nabble.com/Re-Materialized-views-WIP-patch-td5740513i40.html for a discussion.)
The following will work for introspection:
select attname, typname
from pg_attribute a
join pg_class c on a.attrelid = c.oid
join pg_type t on a.atttypid = t.oid
where relname = %s and attnum >= 1;
The clause attnum >= 1 suppresses system columns. The type names are Postgres-specific this way, I guess, but good enough for my purposes.
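Since relname alone is not unique across schemas (see the caveat in the first answer), a safer sketch pins the schema via pg_namespace; schema and matview names here are placeholders:
select a.attname, t.typname
from pg_attribute a
join pg_class c on a.attrelid = c.oid
join pg_namespace n on n.oid = c.relnamespace
join pg_type t on a.atttypid = t.oid
where n.nspname = 'public' -- placeholder schema
and c.relname = 'my_matview' -- placeholder name
and a.attnum >= 1
and not a.attisdropped; -- skip dead columns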

Why does this query deadlock?

I have an application that reads the structure of an existing PostgreSQL 9.1 database, compares it against a "should be" state, and updates the database accordingly. That works fine most of the time. However, I have now had several instances where reading the current database structure deadlocked. The query responsible reads the existing foreign keys:
SELECT tc.table_schema, tc.table_name, tc.constraint_name, kcu.column_name,
ccu.table_schema, ccu.table_name, ccu.column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY'
Viewing the server status in pgAdmin shows this to be the only active query/transaction that's running on the server. Still, the query doesn't return.
The error is reproducible in a way: When I find a database that produces the error, it will produce the error every time. But not all databases produce the error. This is one mysterious bug, and I'm running out of options and ideas on what else to try or how to work around this. So any input or ideas are highly appreciated!
PS: A colleague of mine just reported he produced the same error using PostgreSQL 8.4.
I tested and found your query very slow, too. The root of the problem is that "tables" in information_schema are in fact complicated views that provide catalogs according to the SQL standard. In this particular case, matters are further complicated because foreign keys can be built on multiple columns. Your query yields duplicate rows for those cases, which, I suspect, may be undesired.
In my query below, correlated subqueries with unnest(), fed to ARRAY constructors, avoid that problem.
This query yields the same information, just without duplicate rows and 100x faster. Also, I would venture to guarantee, without deadlocks.
Only works for PostgreSQL, not portable to other RDBMSes.
SELECT c.conrelid::regclass AS table_name
, c.conname AS fk_name
, ARRAY(SELECT a.attname
FROM unnest(c.conkey) x
JOIN pg_attribute a
ON a.attrelid = c.conrelid AND a.attnum = x) AS fk_columns
, c.confrelid::regclass AS ref_table
, ARRAY(SELECT a.attname
FROM unnest(c.confkey) x
JOIN pg_attribute a
ON a.attrelid = c.confrelid AND a.attnum = x) AS ref_columns
FROM pg_catalog.pg_constraint c
WHERE c.contype = 'f';
-- ORDER BY c.conrelid::regclass::text,2
The cast to ::regclass yields table names as seen with your current search_path. May or may not be what you want. For this query to include the absolute path (schema) for every table name you can set the search_path like this:
SET search_path = pg_catalog;
SELECT ...
To continue your session with your default search_path:
RESET search_path;
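Alternatively, you can scope the change to a single transaction with SET LOCAL; the setting reverts automatically at COMMIT or ROLLBACK:
BEGIN;
SET LOCAL search_path = pg_catalog;
-- ... run the query above ...
COMMIT;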
Related:
Get column names and data types of a query, table or view

Query the schema details of a table in PostgreSQL?

I need to know the column type in PostgreSQL (i.e. varchar(20)). I know that I could probably find this using \d something in psql, but I need it to be done with a select query.
Is this possible in PostgreSQL?
There is a much simpler way in PostgreSQL to get the type of a column.
SELECT pg_typeof(col)::text FROM tbl LIMIT 1
The table must hold at least one row, of course. And you only get the base type without type modifiers (if any). Use the alternative below if you need that, too.
You can use the function for constants as well. The manual on pg_typeof().
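A quick demonstration with constants; note how the (20) modifier is not reflected in the result:
SELECT pg_typeof(1), pg_typeof(1.0), pg_typeof('foo'::varchar(20));
-- integer | numeric | character varying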
For an empty (or any) table you can query the system catalog pg_attribute to get the full list of columns and their respective types, in order:
SELECT attnum, attname AS column, format_type(atttypid, atttypmod) AS type
FROM pg_attribute
WHERE attrelid = 'myschema.mytbl'::regclass -- optionally schema-qualified
AND NOT attisdropped
AND attnum > 0
ORDER BY attnum;
The manual on format_type() and on object identifier types like regclass.
You can fully describe a table in Postgres with the following query:
SELECT
a.attname as Column,
pg_catalog.format_type(a.atttypid, a.atttypmod) as Datatype
FROM
pg_catalog.pg_attribute a
WHERE
a.attnum > 0
AND NOT a.attisdropped
AND a.attrelid = (
SELECT c.oid
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(TABLENAME)$'
AND pg_catalog.pg_table_is_visible(c.oid)
)
With this you will retrieve column names and data types.
It is also possible to start the psql client with the -E option:
$ psql -E
Then a simple \d mytable will output the queries Postgres uses to describe the table. It works for every psql describe command.
Yes, look at the information_schema.
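For example, a sketch against information_schema.columns (schema and table names are placeholders); a varchar(20) column shows up with data_type = 'character varying' and character_maximum_length = 20:
SELECT column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_schema = 'myschema'
AND table_name = 'mytbl';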