I'm looking for a way to find the row count for all my tables in Postgres. I know I can do this one table at a time with:
SELECT count(*) FROM table_name;
but I'd like to see the row count for all the tables and then order by that to get an idea of how big all my tables are.
There are three ways to get this sort of count, each with its own tradeoffs.
If you want a true count, you have to execute a SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.
WITH tbl AS
(SELECT table_schema,
TABLE_NAME
FROM information_schema.tables
WHERE TABLE_NAME not like 'pg_%'
AND table_schema in ('public'))
SELECT table_schema,
TABLE_NAME,
(xpath('/row/c/text()', query_to_xml(format('select count(*) as c from %I.%I', table_schema, TABLE_NAME), FALSE, TRUE, '')))[1]::text::int AS rows_n
FROM tbl
ORDER BY rows_n DESC;
The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:
SELECT schemaname,relname,n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
That can also show you how many rows are dead, which is itself an interesting number to monitor.
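For example, a minimal sketch against the same view; n_dead_tup is the collector's dead-row counter:
SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;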
The third way is to note that the system ANALYZE command, which is executed by the autovacuum process regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:
SELECT
nspname AS schemaname,relname,reltuples
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE
nspname NOT IN ('pg_catalog', 'information_schema') AND
relkind='r'
ORDER BY reltuples DESC;
Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside of pg_class or inside of pg_stat_user_tables. For basic counting purposes just to see how big things are in general, either should be accurate enough.
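When both sources are useful at once, a sketch like this joins them; pg_stat_user_tables exposes the table's OID as relid, so it can be matched against pg_class:
SELECT s.schemaname, s.relname,
       c.reltuples AS analyze_estimate,
       s.n_live_tup, s.n_dead_tup
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
ORDER BY c.reltuples DESC;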
Here is a solution that does not require functions to get an accurate count for each table:
select table_schema,
table_name,
(xpath('/row/cnt/text()', xml_count))[1]::text::int as row_count
from (
select table_name, table_schema,
query_to_xml(format('select count(*) as cnt from %I.%I', table_schema, table_name), false, true, '') as xml_count
from information_schema.tables
where table_schema = 'public' --<< change here for the schema you want
) t
query_to_xml will run the passed SQL query and return the result as XML (here, the row count for that table). The outer xpath() then extracts the count from that XML and converts it to a number.
The derived table is not really necessary, but it makes the xpath() a bit easier to understand; otherwise the whole query_to_xml() would need to be passed to the xpath() function, as shown below.
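For reference, here is a sketch of the equivalent query without the derived table:
select table_schema,
       table_name,
       (xpath('/row/cnt/text()',
              query_to_xml(format('select count(*) as cnt from %I.%I', table_schema, table_name),
                           false, true, '')))[1]::text::int as row_count
from information_schema.tables
where table_schema = 'public';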
To get estimates, see Greg Smith's answer.
To get exact counts, the other answers so far are plagued with some issues, some of them serious (see below). Here's a version that's hopefully better:
CREATE FUNCTION rowcount_all(schema_name text default 'public')
RETURNS table(table_name text, cnt bigint) as
$$
declare
table_name text;
begin
for table_name in SELECT c.relname FROM pg_class c
JOIN pg_namespace s ON (c.relnamespace=s.oid)
WHERE c.relkind = 'r' AND s.nspname=schema_name
LOOP
RETURN QUERY EXECUTE format('select cast(%L as text),count(*) from %I.%I',
table_name, schema_name, table_name);
END LOOP;
end
$$ language plpgsql;
It takes a schema name as a parameter, defaulting to public if none is given.
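For example ('myschema' is a hypothetical schema name):
SELECT * FROM rowcount_all();                               -- counts tables in public
SELECT * FROM rowcount_all('myschema') ORDER BY cnt DESC;   -- counts tables in a given schema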
To work with a specific list of schemas or a list coming from a query without modifying the function, it can be called from within a query like this:
WITH rc(schema_name,tbl) AS (
select s.n,rowcount_all(s.n) from (values ('schema1'),('schema2')) as s(n)
)
SELECT schema_name,(tbl).* FROM rc;
This produces a three-column output with the schema, the table, and the row count.
Now here are some issues in the other answers that this function avoids:
Table and schema names shouldn't be injected into executable SQL without being quoted, either with quote_ident or with the more modern format() function and its %I format string. Otherwise some malicious person may name their table tablename;DROP TABLE other_table, which is perfectly valid as a table name (a short demonstration follows this list).
Even without the SQL injection and funny-characters problems, table names may exist in variants differing only by case. If one table is named ABCD and another abcd, the SELECT count(*) FROM... must use a quoted name, otherwise it will skip ABCD and count abcd twice. The %I of format does this automatically.
information_schema.tables lists custom composite types in addition to tables, even when table_type is 'BASE TABLE' (!). As a consequence, we can't iterate on information_schema.tables, otherwise we risk having select count(*) from name_of_composite_type, and that would fail. OTOH, pg_class where relkind='r' should always work fine.
The type of COUNT() is bigint, not int. Tables with more than 2.15 billion rows may exist (running a count(*) on them is a bad idea, though).
A permanent type need not be created for a function to return a result set with several columns. RETURNS TABLE(definition...) is a better alternative.
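A quick demonstration of the quoting point, with hypothetical names; format's %I adds double quotes exactly when an identifier needs them:
SELECT format('select count(*) from %I.%I', 'public', 'ABCD');
-- select count(*) from public."ABCD"
SELECT format('select count(*) from %I.%I', 'public', 'tablename;drop table other_table');
-- select count(*) from public."tablename;drop table other_table"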
The hacky, practical answer for people trying to evaluate which Heroku plan they need and who can't wait for Heroku's slow row counter to refresh:
Basically you want to run \dt in psql, copy the results to your favorite text editor (it will look like this:
public | auth_group | table | axrsosvelhutvw
public | auth_group_permissions | table | axrsosvelhutvw
public | auth_permission | table | axrsosvelhutvw
public | auth_user | table | axrsosvelhutvw
public | auth_user_groups | table | axrsosvelhutvw
public | auth_user_user_permissions | table | axrsosvelhutvw
public | background_task | table | axrsosvelhutvw
public | django_admin_log | table | axrsosvelhutvw
public | django_content_type | table | axrsosvelhutvw
public | django_migrations | table | axrsosvelhutvw
public | django_session | table | axrsosvelhutvw
public | exercises_assignment | table | axrsosvelhutvw
), then run a regex search and replace like this:
^[^|]*\|\s+([^|]*?)\s+\| table \|.*$
to:
select '\1', count(*) from \1 union/g
which will yield something very similar to this:
select 'auth_group', count(*) from auth_group union
select 'auth_group_permissions', count(*) from auth_group_permissions union
select 'auth_permission', count(*) from auth_permission union
select 'auth_user', count(*) from auth_user union
select 'auth_user_groups', count(*) from auth_user_groups union
select 'auth_user_user_permissions', count(*) from auth_user_user_permissions union
select 'background_task', count(*) from background_task union
select 'django_admin_log', count(*) from django_admin_log union
select 'django_content_type', count(*) from django_content_type union
select 'django_migrations', count(*) from django_migrations union
select 'django_session', count(*) from django_session
;
(You'll need to remove the last union and add the semicolon at the end manually)
Run it in psql and you're done.
?column? | count
--------------------------------+-------
auth_group_permissions | 0
auth_user_user_permissions | 0
django_session | 1306
django_content_type | 17
auth_user_groups | 162
django_admin_log | 9106
django_migrations | 19
[..]
If you don't mind potentially stale data, you can access the same statistics used by the query optimizer.
Something like:
SELECT relname, n_tup_ins - n_tup_del as rowcount FROM pg_stat_all_tables;
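A hedged variant that adds the schema and the ordering the question asks for:
SELECT schemaname, relname, n_tup_ins - n_tup_del AS rowcount
FROM pg_stat_all_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY rowcount DESC;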
Two simple steps (note: no need to change anything, just copy and paste):
1. create function
create function
cnt_rows(schema text, tablename text) returns integer
as
$body$
declare
result integer;
query varchar;
begin
query := 'SELECT count(1) FROM ' || schema || '.' || tablename;
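-- note (hedged): schema and tablename are concatenated into the SQL unquoted here;
-- for unusual or untrusted names, format('SELECT count(1) FROM %I.%I', schema, tablename)
-- would be safer, as discussed in the answer above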
execute query into result;
return result;
end;
$body$
language plpgsql;
2. Run this query to get the row count for all tables
select sum(cnt_rows) as total_no_of_rows from (select
cnt_rows(table_schema, table_name)
from information_schema.tables
where
table_schema not in ('pg_catalog', 'information_schema')
and table_type='BASE TABLE') as subq;
or
To get row counts per table:
select
table_schema,
table_name,
cnt_rows(table_schema, table_name)
from information_schema.tables
where
table_schema not in ('pg_catalog', 'information_schema')
and table_type='BASE TABLE'
order by 3 desc;
Not sure if an answer in bash is acceptable to you, but FWIW...
PGCOMMAND=" psql -h localhost -U fred -d mydb -At -c \"
SELECT table_name
FROM information_schema.tables
WHERE table_type='BASE TABLE'
AND table_schema='public'
\""
TABLENAMES=$(export PGPASSWORD=test; eval "$PGCOMMAND")
for TABLENAME in $TABLENAMES; do
PGCOMMAND=" psql -h localhost -U fred -d mydb -At -c \"
SELECT '$TABLENAME',
count(*)
FROM $TABLENAME
\""
eval "$PGCOMMAND"
done
Extracted from my comment on Greg Smith's answer to make it more readable:
with tbl as (
SELECT table_schema,table_name
FROM information_schema.tables
WHERE table_name not like 'pg_%' AND table_schema IN ('public')
)
SELECT
table_schema,
table_name,
(xpath('/row/c/text()',
query_to_xml(format('select count(*) AS c from %I.%I', table_schema, table_name),
false,
true,
'')))[1]::text::int AS rows_n
FROM tbl ORDER BY 3 DESC;
Thanks to @a_horse_with_no_name.
I usually don't rely on statistics, especially in PostgreSQL.
SELECT table_name, dsql2('select count(*) from '||table_name) as rownum
FROM information_schema.tables
WHERE table_type='BASE TABLE'
AND table_schema='livescreen'
ORDER BY 2 DESC;
CREATE OR REPLACE FUNCTION dsql2(i_text text)
RETURNS int AS
$BODY$
Declare
v_val int;
BEGIN
execute i_text into v_val;
return v_val;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
This worked for me:
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
I don't remember the URL where I found this, but I hope it helps:
CREATE TYPE table_count AS (table_name TEXT, num_rows INTEGER);
CREATE OR REPLACE FUNCTION count_em_all () RETURNS SETOF table_count AS '
DECLARE
the_count RECORD;
t_name RECORD;
r table_count%ROWTYPE;
BEGIN
FOR t_name IN
SELECT
c.relname
FROM
pg_catalog.pg_class c LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE
c.relkind = ''r''
AND n.nspname = ''public''
ORDER BY 1
LOOP
FOR the_count IN EXECUTE ''SELECT COUNT(*) AS "count" FROM '' || t_name.relname
LOOP
END LOOP;
r.table_name := t_name.relname;
r.num_rows := the_count.count;
RETURN NEXT r;
END LOOP;
RETURN;
END;
' LANGUAGE plpgsql;
Executing select count_em_all(); should get you the row count of all your tables.
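A usage note: calling the function in the FROM clause expands the composite result into separate columns, which also makes ordering easy:
SELECT * FROM count_em_all() ORDER BY num_rows DESC;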
I made a small variation to include all tables, including non-public ones.
CREATE TYPE table_count AS (table_schema TEXT,table_name TEXT, num_rows INTEGER);
CREATE OR REPLACE FUNCTION count_em_all () RETURNS SETOF table_count AS '
DECLARE
the_count RECORD;
t_name RECORD;
r table_count%ROWTYPE;
BEGIN
FOR t_name IN
SELECT table_schema,table_name
FROM information_schema.tables
where table_schema !=''pg_catalog''
and table_schema !=''information_schema''
ORDER BY 1,2
LOOP
FOR the_count IN EXECUTE ''SELECT COUNT(*) AS "count" FROM '' || t_name.table_schema||''.''||t_name.table_name
LOOP
END LOOP;
r.table_schema := t_name.table_schema;
r.table_name := t_name.table_name;
r.num_rows := the_count.count;
RETURN NEXT r;
END LOOP;
RETURN;
END;
' LANGUAGE plpgsql;
Use select count_em_all(); to call it.
Hope you find this useful.
Paul
You can use this query to generate a counting statement for all table names:
select ' select '''|| tablename ||''', count(*) from ' || tablename ||'
union' from pg_tables where schemaname='public';
The result of the above query will be:
select 'dim_date', count(*) from dim_date union
select 'dim_store', count(*) from dim_store union
select 'dim_product', count(*) from dim_product union
select 'dim_employee', count(*) from dim_employee union
You'll need to remove the last union and add a semicolon at the end:
select 'dim_date', count(*) from dim_date union
select 'dim_store', count(*) from dim_store union
select 'dim_product', count(*) from dim_product union
select 'dim_employee', count(*) from dim_employee;
Then run it!
Here is a much simpler way.
tables="$(echo '\dt' | psql -U "${PGUSER}" | tail -n +4 | head -n-2 | tr -d ' ' | cut -d '|' -f2)"
for table in $tables; do
printf "%s: %s\n" "$table" "$(echo "SELECT COUNT(*) FROM $table;" | psql -U "${PGUSER}" | tail -n +3 | head -n-2 | tr -d ' ')"
done
The output should look like this:
auth_group: 0
auth_group_permissions: 0
auth_permission: 36
auth_user: 2
auth_user_groups: 0
auth_user_user_permissions: 0
authtoken_token: 2
django_admin_log: 0
django_content_type: 9
django_migrations: 22
django_session: 0
mydata_table1: 9011
mydata_table2: 3499
You can update the psql -U "${PGUSER}" portion as needed to access your database.
Note that the head -n-2 syntax may not work on macOS; an alternative sketch is given below.
Tested on psql (PostgreSQL) 11.2 under CentOS 7.
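On macOS, a hedged alternative is to let psql emit tuples-only unaligned output (-At), which drops the header and footer so no head/tail trimming is needed:
tables="$(psql -U "${PGUSER}" -At -c '\dt' | cut -d '|' -f2)"
for table in $tables; do
    printf "%s: %s\n" "$table" "$(psql -U "${PGUSER}" -At -c "SELECT COUNT(*) FROM $table;")"
done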
If you want it sorted by row count, just wrap it with sort:
for table in $tables; do
printf "%s: %s\n" "$table" "$(echo "SELECT COUNT(*) FROM $table;" | psql -U "${PGUSER}" | tail -n +3 | head -n-2 | tr -d ' ')"
done | sort -k 2,2nr
Output:
mydata_table1: 9011
mydata_table2: 3499
auth_permission: 36
django_migrations: 22
django_content_type: 9
authtoken_token: 2
auth_user: 2
auth_group: 0
auth_group_permissions: 0
auth_user_groups: 0
auth_user_user_permissions: 0
django_admin_log: 0
django_session: 0
I like Daniel Vérité's answer.
But when you can't use a CREATE statement, you can either use a bash solution or, if you're a Windows user, a PowerShell one:
# You don't need this if you have pgpass.conf
$env:PGPASSWORD = "userpass"
# Get table list
$tables = & 'C:\Program Files\PostgreSQL\9.4\bin\psql.exe' -U user -w -d dbname -At -c "select table_name from information_schema.tables where table_type='BASE TABLE' AND table_schema='schema1'"
foreach ($table in $tables) {
& 'C:\path_to_postgresql\bin\psql.exe' -U root -w -d dbname -At -c "select '$table', count(*) from $table"
}
I wanted the total from all tables plus a list of tables with their counts, a little like a performance chart of where most time was spent:
WITH results AS (
SELECT nspname AS schemaname,relname,reltuples
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE
nspname NOT IN ('pg_catalog', 'information_schema') AND
relkind='r'
GROUP BY schemaname, relname, reltuples
)
SELECT * FROM results
UNION
SELECT 'all' AS schemaname, 'all' AS relname, SUM(reltuples) AS "reltuples" FROM results
ORDER BY reltuples DESC
You could of course put a LIMIT clause on the results in this version too so that you get the largest n offenders as well as a total.
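For example, a sketch that keeps the total plus the ten largest tables (the 'all' row is the sum, so it sorts first and survives the LIMIT):
WITH results AS (
  SELECT nspname AS schemaname, relname, reltuples
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    AND relkind = 'r'
)
SELECT * FROM results
UNION
SELECT 'all' AS schemaname, 'all' AS relname, SUM(reltuples) AS reltuples FROM results
ORDER BY reltuples DESC
LIMIT 11;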
One thing that should be noted about this approach is that you need to let it sit for a while after bulk imports. I tested this by adding 5000 rows to a database across several tables using real import data; it showed 1800 records for about a minute (probably a configurable window).
This is based on https://stackoverflow.com/a/2611745/1548557, so thanks and credit to that answer for the query used within the CTE.
If you're in the psql shell, using \gexec allows you to execute the syntax described in syed's answer and Aur's answer without manual edits in an external text editor.
with x (y) as (
select
'select count(*), '''||
tablename||
''' as "tablename" from '||
tablename||' '
from pg_tables
where schemaname='public'
)
select
string_agg(y,' union all '||chr(10)) || ' order by tablename'
from x \gexec
Note that string_agg() is used both to place union all between the statements and to collapse the separate data rows into a single unit to be passed into the query buffer.
\gexec
Sends the current query buffer to the server, then treats each column of each row of the query's output (if any) as a SQL statement to be executed.
The query below will give the row count and size (in MB) for each table:
select table_schema, table_name,
pg_relation_size('"'||table_schema||'"."'||table_name||'"')/1024/1024 size_MB,
(xpath('/row/c/text()', query_to_xml(format('select count(*) AS c from %I.%I', table_schema, table_name),
false, true,'')))[1]::text::int AS rows_n
from information_schema.tables
order by size_MB desc;
I am trying to do a regular-expression search on a table in PostgreSQL, but I am not able to find the right syntax to do so in a single statement.
I don't know whether creating an index column would let me search based on a regular expression. Could anyone help me, please?
The query is like this:
SELECT *
FROM flight_info
where to_tsvector('english', event_id || ' ' || type || ' ' || category || ' ' || number || ' ' || make || ' ' || model) @@ to_tsquery('Incident');
Here 'Incident' is matched exactly as text, but I need to search based on a regular expression, the way we do with LIKE.
Is using a LIKE clause on every column the only way? Or is there a way to write a LIKE clause that applies to many columns in the query?
You will not be able to mix regular expressions and full text search; it is not possible to use a regex within to_tsquery().
You can use regexp_matches() instead:
SELECT regexp_matches('foobarbequebaz', '(bar)(beque)');
regexp_matches
----------------
{bar,beque}
(1 row)
SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g');
regexp_matches
----------------
{bar,beque}
{bazil,barf}
(2 rows)
More info at https://www.postgresql.org/docs/9.6/static/functions-matching.html
Update:
To use a regex in the WHERE clause, use the ~ operator:
-- Sample schema
CREATE TABLE sample_table (
id serial,
col1 text,
col2 text,
col3 text
);
INSERT INTO sample_table (col1, col2, col3) VALUES ('this', 'is', 'foobarbequebazilbarfbonk');
INSERT INTO sample_table (col1, col2, col3) VALUES ('apple foobarbequebazequeb', 'rocky', 'sunny');
INSERT INTO sample_table (col1, col2, col3) VALUES ('not', 'a', 'match');
-- Query
SELECT *
FROM sample_table
WHERE (col1 || col2 || col3) ~ '(bar)(beque)';
->
id | col1 | col2 | col3
----+---------------------------+-------+--------------------------
1 | this | is | foobarbequebazilbarfbonk
2 | apple foobarbequebazequeb | rocky | sunny
(2 rows)
You can use ~* instead to make it case insensitive.
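For example, against the sample schema above:
SELECT *
FROM sample_table
WHERE (col1 || col2 || col3) ~* '(BAR)(BEQUE)';
-- matches the same two rows even though the pattern is uppercase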
More info: https://www.postgresql.org/docs/current/static/functions-matching.html
In Postgres 8.4 or higher, what is the most efficient way to get a row of data populated by defaults without actually creating the row? E.g., as a transaction (pseudocode):
create table "mytable"
(
id serial PRIMARY KEY NOT NULL,
parent_id integer NOT NULL DEFAULT 1,
random_id integer NOT NULL DEFAULT random()
)
begin transaction
fake_row = insert into mytable (id) values (0) returning *;
delete from mytable where id=0;
return fake_row;
end transaction
Basically I'd expect a query with a single row where parent_id is 1 and random_id is a random number (or other function return value), but I don't want this record to persist in the table or impact the primary key sequence serial_id_seq.
My options seem to be using a transaction like above or creating views which are copies of the table with the fake row added but I don't know all the pros and cons of each or whether a better way exists.
I'm looking for an answer that assumes no prior knowledge of the datatypes or default values of any column except id or the number or ordering of the columns. Only the table name will be known and that a record with id 0 should not exist in the table.
In the past I created the fake record 0 as a permanent record but I've come to consider this record a type of pollution (since I typically have to filter it out of future queries).
You can copy the table definition and defaults to a temp table with:
CREATE TEMP TABLE table_name_rt (LIKE table_name INCLUDING DEFAULTS);
And use this temp table to generate dummy rows. Such a table will be dropped at the end of the session (or transaction) and will only be visible to the current session.
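As a hedged sketch against the question's mytable (mytable_rt is a hypothetical name): insert a throwaway row with an explicit id, so the original serial sequence is never touched, and let the copied defaults fill in the remaining columns:
BEGIN;
CREATE TEMP TABLE mytable_rt (LIKE mytable INCLUDING DEFAULTS) ON COMMIT DROP;
INSERT INTO mytable_rt (id) VALUES (0);
SELECT * FROM mytable_rt;  -- parent_id = 1, random_id = a fresh random value
COMMIT;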
You can query the catalog and build a dynamic query
Say we have this table:
create table test10(
id serial primary key,
first_name varchar( 100 ),
last_name varchar( 100 ) default 'Tom',
age int not null default 38,
salary float default 100.22
);
When you run following query:
SELECT string_agg( txt, ' ' order by id )
FROM (
select 1 id, 'SELECT ' txt
union all
select 2, -9999 || ' as id '
union all
select 3, ', '
|| coalesce( column_default, 'null'||'::'||c.data_type )
|| ' as ' || c.column_name
from information_schema.columns c
where table_schema = 'public'
and table_name = 'test10'
and ordinal_position > 1
) xx
;
you will get this string as a result:
"SELECT -9999 as id , null::character varying as first_name ,
'Tom'::character varying as last_name , 38 as age , 100.22 as salary"
then execute this query and you will get the "phantom row".
We can build a function that builds and executes the query and returns our row as a result:
CREATE OR REPLACE FUNCTION get_phantom_rec (p_i test10.id%type )
returns test10 as $$
DECLARE
v_sql text;
myrow test10%rowtype;
begin
SELECT string_agg( txt, ' ' order by id )
INTO v_sql
FROM (
select 1 id, 'SELECT ' txt
union all
select 2, p_i || ' as id '
union all
select 3, ', '
|| coalesce( column_default, 'null'||'::'||c.data_type )
|| ' as ' || c.column_name
from information_schema.columns c
where table_schema = 'public'
and table_name = 'test10'
and ordinal_position > 1
) xx
;
EXECUTE v_sql INTO myrow;
RETURN myrow;
END$$ LANGUAGE plpgsql ;
and then this simple query gives you what you want:
select * from get_phantom_rec ( -9999 );
id | first_name | last_name | age | salary
-------+------------+-----------+-----+--------
-9999 | | Tom | 38 | 100.22
I would just select the fake values as literals:
select 1 id, 1 parent_id, 1 user_id
The returned row will be (virtually) indistinguishable from a real row.
To get the values from the catalog:
select
0 as id, -- special case for serial type, just return 0
(select column_default::int -- Cast to int, because we know the column is int
from INFORMATION_SCHEMA.COLUMNS
where table_name = 'mytable'
and column_name = 'parent_id') as parent_id,
(select column_default::int -- Cast to int, because we know the column is int
from INFORMATION_SCHEMA.COLUMNS
where table_name = 'mytable'
and column_name = 'user_id') as user_id;
Note that you must know what the columns are and their types, but this is reasonable. If you change the table schema (other than default values), you would need to tweak the query.
See the above as a SQLFiddle.
I have an interesting conundrum which I believe can be solved purely in SQL. I have tables similar to the following:
responses:
user_id | question_id | body
----------------------------
1 | 1 | Yes
2 | 1 | Yes
1 | 2 | Yes
2 | 2 | No
1 | 3 | No
2 | 3 | No
questions:
id | body
-------------------------
1 | Do you like apples?
2 | Do you like oranges?
3 | Do you like carrots?
and I would like to get the following output
user_id | Do you like apples? | Do you like oranges? | Do you like carrots?
---------------------------------------------------------------------------
1 | Yes | Yes | No
2 | Yes | No | No
I don't know how many questions there will be, and they will be dynamic, so I can't just code for every question. I am using PostgreSQL and I believe this is called transposition, but I can't seem to find anything that says the standard way of doing this in SQL. I remember doing this in my database class back in college, but it was in MySQL and I honestly don't remember how we did it.
I'm assuming it will be a combination of joins and a GROUP BY statement, but I can't even figure out how to start.
Anybody know how to do this? Thanks very much!
Edit 1: I found some information about using a crosstab which seems to be what I want, but I'm having trouble making sense of it. Links to better articles would be greatly appreciated!
Use:
SELECT r.user_id,
MAX(CASE WHEN r.question_id = 1 THEN r.body ELSE NULL END) AS "Do you like apples?",
MAX(CASE WHEN r.question_id = 2 THEN r.body ELSE NULL END) AS "Do you like oranges?",
MAX(CASE WHEN r.question_id = 3 THEN r.body ELSE NULL END) AS "Do you like carrots?"
FROM RESPONSES r
JOIN QUESTIONS q ON q.id = r.question_id
GROUP BY r.user_id
This is a standard pivot query, because you are "pivoting" the data from rows to columnar data.
I implemented a truly dynamic function to handle this problem without having to hard code any specific class of answers or use external modules/extensions. It also gives full control over column ordering and supports multiple key and class/attribute columns.
You can find it here: https://github.com/jumpstarter-io/colpivot
Example that solves this particular problem:
begin;
create temporary table responses (
user_id integer,
question_id integer,
body text
) on commit drop;
create temporary table questions (
id integer,
body text
) on commit drop;
insert into responses values (1,1,'Yes'), (2,1,'Yes'), (1,2,'Yes'), (2,2,'No'), (1,3,'No'), (2,3,'No');
insert into questions values (1, 'Do you like apples?'), (2, 'Do you like oranges?'), (3, 'Do you like carrots?');
select colpivot('_output', $$
select r.user_id, q.body q, r.body a from responses r
join questions q on q.id = r.question_id
$$, array['user_id'], array['q'], '#.a', null);
select * from _output;
rollback;
This outputs:
user_id | 'Do you like apples?' | 'Do you like carrots?' | 'Do you like oranges?'
---------+-----------------------+------------------------+------------------------
1 | Yes | No | Yes
2 | Yes | No | No
You can solve this example with the crosstab function in this way:
drop table if exists responses;
create table responses (
user_id integer,
question_id integer,
body text
);
drop table if exists questions;
create table questions (
id integer,
body text
);
insert into responses values (1,1,'Yes'), (2,1,'Yes'), (1,2,'Yes'), (2,2,'No'), (1,3,'No'), (2,3,'No');
insert into questions values (1, 'Do you like apples?'), (2, 'Do you like oranges?'), (3, 'Do you like carrots?');
select * from crosstab(
    'select responses.user_id, questions.body, responses.body
     from responses, questions
     where questions.id = responses.question_id
     order by user_id'
) as ct(userid integer, "Do you like apples?" text, "Do you like oranges?" text, "Do you like carrots?" text);
First, you must install the tablefunc extension. Since version 9.1 you can do this using CREATE EXTENSION:
CREATE EXTENSION tablefunc;
I wrote a function to generate the dynamic query.
It generates the SQL for the crosstab and creates a view (dropping it first if it exists).
You can then select from the view to get your results.
Here is the function:
CREATE OR REPLACE FUNCTION public.c_crosstab (
eavsql_inarg varchar,
resview varchar,
rowid varchar,
colid varchar,
val varchar,
agr varchar
)
RETURNS void AS
$body$
DECLARE
casesql varchar;
dynsql varchar;
r record;
BEGIN
dynsql='';
for r in
select * from pg_views where lower(viewname) = lower(resview)
loop
execute 'DROP VIEW ' || resview;
end loop;
casesql='SELECT DISTINCT ' || colid || ' AS v from (' || eavsql_inarg || ') eav ORDER BY ' || colid;
FOR r IN EXECUTE casesql Loop
dynsql = dynsql || ', ' || agr || '(CASE WHEN ' || colid || '=''' || r.v || ''' THEN ' || val || ' ELSE NULL END) AS ' || agr || '_' || r.v;
END LOOP;
dynsql = 'CREATE VIEW ' || resview || ' AS SELECT ' || rowid || dynsql || ' from (' || eavsql_inarg || ') eav GROUP BY ' || rowid;
RAISE NOTICE 'dynsql %', dynsql;
EXECUTE dynsql;
END
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100;
And here is how I use it:
SELECT c_crosstab('query_txt', 'view_name', 'entity_column_name', 'attribute_column_name', 'value_column_name', 'first');
Example:
First you run:
SELECT c_crosstab('Select * from table', 'ct_view', 'usr_id', 'question_id', 'response_value', 'first');
Then:
Select * from ct_view;
There is an example of this in contrib/tablefunc/.