Parallel append on foreign tables in PostgreSQL 11

Is it possible to achieve parallel append on foreign tables in PostgreSQL 11 (just like on local tables)?
Local tables:
# select version();
version
---------------------------------------------------------------------------------------------------
PostgreSQL 11beta3 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10+deb8u1) 4.9.2, 64-bit
(1 row)
# create table local_table_1 as select generate_series(1, 1000000);
SELECT 1000000
# create table local_table_2 as select generate_series(1, 1000000);
SELECT 1000000
# create table local_table (like local_table_1);
CREATE TABLE
# alter table local_table_1 inherit local_table;
ALTER TABLE
# alter table local_table_2 inherit local_table;
ALTER TABLE
# explain analyze select count(*) from local_table;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=25369.62..25369.63 rows=1 width=8) (actual time=209.538..209.538 rows=1 loops=1)
-> Gather (cost=25369.41..25369.62 rows=2 width=8) (actual time=209.446..211.924 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=24369.41..24369.42 rows=1 width=8) (actual time=191.494..191.494 rows=1 loops=3)
-> Parallel Append (cost=0.00..22152.35 rows=886824 width=0) (actual time=0.121..144.302 rows=666667 loops=3)
-> Parallel Seq Scan on local_table_2 (cost=0.00..9126.56 rows=470156 width=0) (actual time=0.064..47.920 rows=333333 loops=3)
-> Parallel Seq Scan on local_table_1 (cost=0.00..8591.67 rows=416667 width=0) (actual time=0.130..67.098 rows=500000 loops=2)
-> Parallel Seq Scan on local_table (cost=0.00..0.00 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
Planning Time: 0.516 ms
Execution Time: 212.026 ms
(11 rows)
Foreign tables:
# explain analyze select count(*) from foreign_table;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=4308.62..4308.63 rows=1 width=8) (actual time=93729.176..93729.176 rows=1 loops=1)
-> Append (cost=0.00..4044.11 rows=105804 width=0) (actual time=6868.904..93430.855 rows=5520175 loops=1)
-> Seq Scan on foreign_table (cost=0.00..0.00 rows=1 width=0) (actual time=0.013..0.013 rows=0 loops=1)
-> Foreign Scan on foreign_table foreign_table_1 (cost=1.00..113.39 rows=3413 width=0) (actual time=6868.889..6879.804 rows=195460 loops=1)
-> 29 more foreign tables
-> Foreign Scan on foreign_table foreign_table_31 (cost=1.00..113.39 rows=3413 width=0) (actual time=3139.317..3150.340 rows=199581 loops=1)
Planning Time: 3.205 ms
Execution Time: 110707.099 ms
(36 rows)
Are there any parameters that could hint Postgres to use a parallel append on foreign tables?
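For reference, these planner settings govern parallel append; a hedged sketch (assuming postgres_fdw, whose scans are not parallel-safe in PostgreSQL 11, so to my knowledge these settings alone will not produce a Parallel Append over foreign tables):
SET enable_parallel_append = on;          -- already on by default
SET max_parallel_workers_per_gather = 4;
SET parallel_setup_cost = 100;            -- hypothetical values, lower than the
SET parallel_tuple_cost = 0.01;           -- defaults, to make parallel plans cheaper
EXPLAIN ANALYZE SELECT count(*) FROM foreign_table;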

Related

MADLib + PostgreSQL Query Plan

While executing PostgreSQL's EXPLAIN ANALYZE for an ML query in MADlib, I'm getting output like the one below.
query & query plan
[Query]
EXPLAIN (VERBOSE, ANALYZE)
SELECT COUNT(linregr.linregr_predict) FROM(
SELECT madlib.linregr_predict(ARRAY[c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15,c16,c17,c18,c19,c20,c21,c22,c23,c24,c25,c26,c27,c28], ARRAY[f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12,f13,f14,f15,f16,f17,f18,f19,f20,f21,f22,f23,f24,f25,f26,f27,f28])
FROM higgs_50_linregr_model_coef, higgs_1k_test
WHERE higgs_1k_test.f1 >0.7) AS linregr;
[Query Plan]
Aggregate (cost=19158.81..19158.82 rows=1 width=8) (actual time=4.607..4.610 rows=1 loops=1)
-> Nested Loop (cost=0.00..1497.81 rows=117740 width=224) (actual time=0.056..1.827 rows=204 loops=1)
-> Seq Scan on higgs_50_linregr_model_coef (cost=0.00..15.80 rows=580 width=112) (actual time=0.017..0.019 rows=1 loops=1)
-> Materialize (cost=0.00..10.77 rows=203 width=112) (actual time=0.028..1.388 rows=204 loops=1)
-> Seq Scan on higgs_1k_test (cost=0.00..9.75 rows=203 width=112) (actual time=0.018..0.531 rows=204 loops=1)
Filter: (f1 > '0.7'::double precision)
Rows Removed by Filter: 96
Planning Time: 0.624 ms
Execution Time: 4.826 ms
It seems that the ML operation of the query is not included in the overall query plan.
Is that right? And if so, how does PostgreSQL execute the MADlib part of the query?

Planner not using index order to sort the records using CTE

I am trying to pass some ids into an IN clause on a sorted index, with the same ORDER BY condition, but the query planner is explicitly sorting the data after performing the index search. Below are my queries.
Generate a temporary table:
SELECT a.n/20 as n, md5(a.n::TEXT) as b INTO temp_table
From generate_series(1, 100000) as a(n);
Create an index:
CREATE INDEX idx_temp_table ON temp_table(n ASC, b ASC);
In the query below, the planner uses the index ordering and doesn't explicitly sort the data (expected):
EXPLAIN ANALYSE
SELECT * from
temp_table WHERE n = 10
ORDER BY n, b
limit 5;
Query Plan
QUERY PLAN
Limit (cost=0.42..16.07 rows=5 width=36) (actual time=0.098..0.101 rows=5 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..1565.17 rows=500 width=36) (actual time=0.095..0.098 rows=5 loops=1)
Index Cond: (n = 10)
Heap Fetches: 5
Planning time: 0.551 ms
Execution time: 0.128 ms
But when I use one or more ids from a CTE and pass them into the IN clause, the planner only uses the index to fetch the values and explicitly sorts them afterwards (not expected).
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
The planner then uses the query plan below:
QUERY PLAN
Limit (cost=85.18..85.20 rows=5 width=37) (actual time=0.073..0.075 rows=5 loops=1)
CTE cte
-> Values Scan on "*VALUES*" (cost=0.00..0.03 rows=2 width=4) (actual time=0.001..0.002 rows=2 loops=1)
-> Sort (cost=85.16..85.26 rows=40 width=37) (actual time=0.072..0.073 rows=5 loops=1)
Sort Key: temp_table.n, temp_table.b
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.47..84.50 rows=40 width=37) (actual time=0.037..0.056 rows=40 loops=1)
-> Unique (cost=0.05..0.06 rows=2 width=4) (actual time=0.009..0.010 rows=2 loops=1)
-> Sort (cost=0.05..0.06 rows=2 width=4) (actual time=0.009..0.010 rows=2 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.04 rows=2 width=4) (actual time=0.004..0.005 rows=2 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..42.02 rows=20 width=37) (actual time=0.012..0.018 rows=20 loops=2)
Index Cond: (n = cte.x)
Heap Fetches: 40
Planning time: 0.166 ms
Execution time: 0.101 ms
I tried adding an explicit sort while passing the ids in the WHERE clause, so that the sorted order of the ids is maintained, but the planner still sorted explicitly:
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
Query plan
QUERY PLAN
Limit (cost=42.62..42.63 rows=5 width=37) (actual time=0.042..0.044 rows=5 loops=1)
CTE cte
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
-> Sort (cost=42.61..42.66 rows=20 width=37) (actual time=0.042..0.042 rows=5 loops=1)
Sort Key: temp_table.n, temp_table.b
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.46..42.28 rows=20 width=37) (actual time=0.025..0.033 rows=20 loops=1)
-> HashAggregate (cost=0.05..0.06 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=1)
Group Key: cte.x
-> Sort (cost=0.03..0.04 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.02 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..42.02 rows=20 width=37) (actual time=0.014..0.020 rows=20 loops=1)
Index Cond: (n = cte.x)
Heap Fetches: 20
Planning time: 0.167 ms
Execution time: 0.074 ms
Can anyone explain why the planner is using an explicit sort on the data? Is there a way to bypass this and make the planner use the index sort order, so the additional sort on the records can be avoided? In production we have a similar case, but the size of our selection is much bigger and only a handful of records need to be fetched with pagination. Thanks in anticipation!
This is actually a decision made by the planner. With a larger set of VALUES, Postgres will switch to a smarter plan, with the sort done before the merge:
select version();
\echo +++++ Original
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10))
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
\echo +++++ TEN Values
EXPLAIN ANALYSE
WITH cte(x) AS (VALUES (10),(11),(12),(13),(14),(15),(16),(17),(18),(19)
)
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
\echo ++++++++ one row from table
EXPLAIN ANALYSE
WITH cte(x) AS (SELECT n FROM temp_table WHERE n = 10)
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
\echo ++++++++ one row from table TWO ctes
EXPLAIN ANALYSE
WITH val(x) AS (VALUES (10))
, cte(x) AS (
SELECT n FROM temp_table WHERE n IN (select x from val)
)
SELECT * from temp_table
WHERE n IN ( SELECT x from cte)
ORDER BY n, b
limit 5;
Resulting plans:
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 11.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4, 64-bit
(1 row)
+++++ Original
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=13.72..13.73 rows=5 width=37) (actual time=0.197..0.200 rows=5 loops=1)
CTE cte
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
-> Sort (cost=13.71..13.76 rows=20 width=37) (actual time=0.194..0.194 rows=5 loops=1)
Sort Key: temp_table.n, temp_table.b
Sort Method: top-N heapsort Memory: 25kB
-> Nested Loop (cost=0.44..13.37 rows=20 width=37) (actual time=0.083..0.097 rows=20 loops=1)
-> HashAggregate (cost=0.02..0.03 rows=1 width=4) (actual time=0.018..0.018 rows=1 loops=1)
Group Key: cte.x
-> CTE Scan on cte (cost=0.00..0.02 rows=1 width=4) (actual time=0.007..0.008 rows=1 loops=1)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..13.14 rows=20 width=37) (actual time=0.058..0.068 rows=20 loops=1)
Index Cond: (n = cte.x)
Heap Fetches: 20
Planning Time: 1.328 ms
Execution Time: 0.360 ms
(15 rows)
+++++ TEN Values
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.91..89.11 rows=5 width=37) (actual time=0.179..0.183 rows=5 loops=1)
CTE cte
-> Values Scan on "*VALUES*" (cost=0.00..0.12 rows=10 width=4) (actual time=0.001..0.007 rows=10 loops=1)
-> Merge Semi Join (cost=0.78..3528.72 rows=200 width=37) (actual time=0.178..0.181 rows=5 loops=1)
Merge Cond: (temp_table.n = cte.x)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..3276.30 rows=100000 width=37) (actual time=0.030..0.123 rows=204 loops=1)
Heap Fetches: 204
-> Sort (cost=0.37..0.39 rows=10 width=4) (actual time=0.023..0.023 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.20 rows=10 width=4) (actual time=0.003..0.013 rows=10 loops=1)
Planning Time: 0.197 ms
Execution Time: 0.226 ms
(13 rows)
++++++++ one row from table
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=14.39..58.52 rows=5 width=37) (actual time=0.168..0.173 rows=5 loops=1)
CTE cte
-> Index Only Scan using idx_temp_table on temp_table temp_table_1 (cost=0.42..13.14 rows=20 width=4) (actual time=0.010..0.020 rows=20 loops=1)
Index Cond: (n = 10)
Heap Fetches: 20
-> Merge Semi Join (cost=1.25..3531.24 rows=400 width=37) (actual time=0.167..0.170 rows=5 loops=1)
Merge Cond: (temp_table.n = cte.x)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..3276.30 rows=100000 width=37) (actual time=0.025..0.101 rows=204 loops=1)
Heap Fetches: 204
-> Sort (cost=0.83..0.88 rows=20 width=4) (actual time=0.039..0.039 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.40 rows=20 width=4) (actual time=0.012..0.031 rows=20 loops=1)
Planning Time: 0.243 ms
Execution Time: 0.211 ms
(15 rows)
++++++++ one row from table TWO ctes
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=14.63..58.76 rows=5 width=37) (actual time=0.224..0.229 rows=5 loops=1)
CTE val
-> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
CTE cte
-> Nested Loop (cost=0.44..13.37 rows=20 width=4) (actual time=0.038..0.052 rows=20 loops=1)
-> HashAggregate (cost=0.02..0.03 rows=1 width=4) (actual time=0.007..0.007 rows=1 loops=1)
Group Key: val.x
-> CTE Scan on val (cost=0.00..0.02 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=1)
-> Index Only Scan using idx_temp_table on temp_table temp_table_1 (cost=0.42..13.14 rows=20 width=4) (actual time=0.029..0.038 rows=20 loops=1)
Index Cond: (n = val.x)
Heap Fetches: 20
-> Merge Semi Join (cost=1.25..3531.24 rows=400 width=37) (actual time=0.223..0.226 rows=5 loops=1)
Merge Cond: (temp_table.n = cte.x)
-> Index Only Scan using idx_temp_table on temp_table (cost=0.42..3276.30 rows=100000 width=37) (actual time=0.038..0.114 rows=204 loops=1)
Heap Fetches: 204
-> Sort (cost=0.83..0.88 rows=20 width=4) (actual time=0.082..0.082 rows=1 loops=1)
Sort Key: cte.x
Sort Method: quicksort Memory: 25kB
-> CTE Scan on cte (cost=0.00..0.40 rows=20 width=4) (actual time=0.040..0.062 rows=20 loops=1)
Planning Time: 0.362 ms
Execution Time: 0.313 ms
(21 rows)
Beware of CTEs!
For the planner, CTEs are more or less black boxes: very little is known about the expected number of rows, the statistics distribution, or the ordering inside them.
In cases where CTEs result in a bad plan (the original question is not such a case), a CTE can often be replaced by a (temp) view, which is seen by the planner in its full naked glory.
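For instance, a minimal sketch of the view replacement for the temp_table example above (cte_view is a hypothetical name):
CREATE TEMP VIEW cte_view(x) AS VALUES (10);
SELECT * FROM temp_table
WHERE n IN (SELECT x FROM cte_view)
ORDER BY n, b
LIMIT 5;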
Update
Starting with version 12, CTEs are handled differently by the planner: if they have no side effects, are not recursive, and are referenced only once, they are candidates for being inlined into the main query (but it is still a good idea to check your query plans).
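On 12 and later you can also pin the behavior per CTE; a minimal sketch against the temp_table example above:
WITH cte(x) AS NOT MATERIALIZED (VALUES (10))  -- AS MATERIALIZED forces the old optimization-fence behavior
SELECT * FROM temp_table
WHERE n IN (SELECT x FROM cte)
ORDER BY n, b
LIMIT 5;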
The optimizer isn't aware that the CTE is sorted. If you scan an index for multiple values and have an ORDER BY, PostgreSQL will always sort.
The only thing that comes to my mind is to create a temporary table with the values from the IN list and put an index on that temporary table. Then when you join with that table, PostgreSQL will be aware of the ordering and might for example choose a merge join that can use the indexes.
Of course that means a lot of overhead, and it could easily be that the original sort wins out.
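A minimal sketch of that workaround, reusing temp_table from above (ids is a hypothetical helper table):
CREATE TEMP TABLE ids (x int PRIMARY KEY);  -- the primary key provides an ordered index
INSERT INTO ids VALUES (10), (11), (12);
ANALYZE ids;                                -- give the planner statistics on the value set
SELECT t.*
FROM temp_table t
JOIN ids ON ids.x = t.n
ORDER BY t.n, t.b
LIMIT 5;
Whether the sort node disappears still depends on the plan the optimizer picks, so check with EXPLAIN ANALYZE.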

Postgres: which is faster to delete data that does not exist in another table

Two tables in myschema: post and comment. comment's onId refers to post's id.
Which of the following two pieces of Postgres code is faster?
DELETE FROM myschema.comment comment
WHERE NOT EXISTS (
SELECT NULL
FROM myschema.post post
WHERE post.id = comment."onId"
);
DELETE FROM myschema.comment
USING myschema.comment AS newcomment LEFT JOIN myschema.post AS newpost
ON newpost.id = newcomment."onId"
WHERE myschema.comment.id = newcomment.id AND newpost.id is NULL;
Thanks
UPDATE
If id and onId are indexed, are there any differences?
t=# drop table a,b;
DROP TABLE
t=# create table a(i int);
CREATE TABLE
t=# create table b(i int);
CREATE TABLE
t=# insert into a select * from generate_series(1,10000,1);
INSERT 0 10000
t=# insert into b select * from generate_series(1000,11000,1);
INSERT 0 10001
and plans:
t=# begin; explain analyze delete from b where not exists (select null from a where a.i = b.i); rollback;
BEGIN
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Delete on b (cost=270.00..565.02 rows=1 width=12) (actual time=10.516..10.516 rows=0 loops=1)
-> Hash Anti Join (cost=270.00..565.02 rows=1 width=12) (actual time=9.154..9.927 rows=1000 loops=1)
Hash Cond: (b.i = a.i)
-> Seq Scan on b (cost=0.00..145.01 rows=10001 width=10) (actual time=0.008..2.014 rows=10001 loops=1)
-> Hash (cost=145.00..145.00 rows=10000 width=10) (actual time=4.554..4.554 rows=10000 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 430kB
-> Seq Scan on a (cost=0.00..145.00 rows=10000 width=10) (actual time=0.005..2.107 rows=10000 loops=1)
Total runtime: 10.589 ms
(8 rows)
ROLLBACK
t=# begin; explain analyze delete from b using b bj left outer join a on bj.i = a.i where a.i is null and b.i = bj.i; rollback;
BEGIN
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
Delete on b (cost=565.04..747.56 rows=1 width=18) (actual time=14.684..14.684 rows=0 loops=1)
-> Hash Join (cost=565.04..747.56 rows=1 width=18) (actual time=13.482..14.119 rows=1000 loops=1)
Hash Cond: (b.i = bj.i)
-> Seq Scan on b (cost=0.00..145.01 rows=10001 width=10) (actual time=0.010..1.928 rows=10001 loops=1)
-> Hash (cost=565.02..565.02 rows=1 width=16) (actual time=10.224..10.224 rows=1000 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 43kB
-> Hash Anti Join (cost=270.00..565.02 rows=1 width=16) (actual time=9.178..9.969 rows=1000 loops=1)
Hash Cond: (bj.i = a.i)
-> Seq Scan on b bj (cost=0.00..145.01 rows=10001 width=10) (actual time=0.003..2.098 rows=10001 loops=1)
-> Hash (cost=145.00..145.00 rows=10000 width=10) (actual time=4.570..4.570 rows=10000 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 430kB
-> Seq Scan on a (cost=0.00..145.00 rows=10000 width=10) (actual time=0.005..2.088 rows=10000 loops=1)
Total runtime: 14.775 ms
(13 rows)
ROLLBACK
The first one looks faster and cheaper.
Option 1 is faster.
In option 2, the database has to perform an extra join, whereas option 1 will be executed using an anti join (a negated semi join).
I believe this is now true for all major RDBMSs, not just PostgreSQL.
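Regarding the UPDATE about indexes, a sketch for repeating the same test with them (with only 10k rows the planner may well stick to the hash anti join, so verify with EXPLAIN ANALYZE rather than assuming the indexes get used):
CREATE INDEX ON a (i);
CREATE INDEX ON b (i);
VACUUM ANALYZE a;
VACUUM ANALYZE b;
BEGIN;
EXPLAIN ANALYZE DELETE FROM b
WHERE NOT EXISTS (SELECT NULL FROM a WHERE a.i = b.i);
ROLLBACK;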

Postgres: slow GROUP BY query with max

I am using Postgres 9.1 and I have a table with about 3.5M rows of eventtype (varchar) and eventtime (timestamp), along with some other fields. There are only about 20 distinct eventtypes, and the event times span about 4 years.
I want to get the last timestamp of each event type. If I run a query like:
select eventtype, max(eventtime)
from allevents
group by eventtype
it takes around 20 seconds. Selecting distinct eventtypes is equally slow. The query plan shows a full sequential scan of the table, so it's not surprising that it's slow.
EXPLAIN ANALYZE for the above query gives:
HashAggregate (cost=84591.47..84591.68 rows=21 width=21) (actual time=20918.131..20918.141 rows=21 loops=1)
-> Seq Scan on allevents (cost=0.00..66117.98 rows=3694698 width=21) (actual time=0.021..4831.793 rows=3694392 loops=1)
Total runtime: 20918.204 ms
If I add a where clause to select a specific eventtype, it takes anywhere from 40ms to 150ms, which is at least decent.
Query plan when selecting a specific eventtype:
GroupAggregate (cost=343.87..24942.71 rows=1 width=21) (actual time=98.397..98.397 rows=1 loops=1)
-> Bitmap Heap Scan on allevents (cost=343.87..24871.07 rows=14325 width=21) (actual time=6.820..89.610 rows=19736 loops=1)
Recheck Cond: ((eventtype)::text = 'TEST_EVENT'::text)
-> Bitmap Index Scan on allevents_idx2 (cost=0.00..340.28 rows=14325 width=0) (actual time=6.121..6.121 rows=19736 loops=1)
Index Cond: ((eventtype)::text = 'TEST_EVENT'::text)
Total runtime: 98.482 ms
Primary key is (eventtype, eventtime). I also have the following indexes:
allevents_idx (eventtime desc, eventtype)
allevents_idx2 (eventtype).
How can I speed up the query?
Results of the query plan for the correlated subquery suggested by @denis below, with 14 manually entered values:
Function Scan on unnest val (cost=0.00..185.40 rows=100 width=32) (actual time=0.121..8983.134 rows=14 loops=1)
SubPlan 2
-> Result (cost=1.83..1.84 rows=1 width=0) (actual time=641.644..641.645 rows=1 loops=14)
InitPlan 1 (returns $1)
-> Limit (cost=0.00..1.83 rows=1 width=8) (actual time=641.640..641.641 rows=1 loops=14)
-> Index Scan using allevents_idx on allevents (cost=0.00..322672.36 rows=175938 width=8) (actual time=641.638..641.638 rows=1 loops=14)
Index Cond: ((eventtime IS NOT NULL) AND ((eventtype)::text = val.val))
Total runtime: 8983.203 ms
Using the recursive query suggested by @jjanes, the query runs in between 4 and 5 seconds with the following plan:
CTE Scan on t (cost=260.32..448.63 rows=101 width=32) (actual time=0.146..4325.598 rows=22 loops=1)
CTE t
-> Recursive Union (cost=2.52..260.32 rows=101 width=32) (actual time=0.075..1.449 rows=22 loops=1)
-> Result (cost=2.52..2.53 rows=1 width=0) (actual time=0.074..0.074 rows=1 loops=1)
InitPlan 1 (returns $1)
-> Limit (cost=0.00..2.52 rows=1 width=13) (actual time=0.070..0.071 rows=1 loops=1)
-> Index Scan using allevents_idx2 on allevents (cost=0.00..9315751.37 rows=3696851 width=13) (actual time=0.070..0.070 rows=1 loops=1)
Index Cond: ((eventtype)::text IS NOT NULL)
-> WorkTable Scan on t (cost=0.00..25.58 rows=10 width=32) (actual time=0.059..0.060 rows=1 loops=22)
Filter: (eventtype IS NOT NULL)
SubPlan 3
-> Result (cost=2.53..2.54 rows=1 width=0) (actual time=0.059..0.059 rows=1 loops=21)
InitPlan 2 (returns $3)
-> Limit (cost=0.00..2.53 rows=1 width=13) (actual time=0.057..0.057 rows=1 loops=21)
-> Index Scan using allevents_idx2 on allevents (cost=0.00..3114852.66 rows=1232284 width=13) (actual time=0.055..0.055 rows=1 loops=21)
Index Cond: (((eventtype)::text IS NOT NULL) AND ((eventtype)::text > t.eventtype))
SubPlan 6
-> Result (cost=1.83..1.84 rows=1 width=0) (actual time=196.549..196.549 rows=1 loops=22)
InitPlan 5 (returns $6)
-> Limit (cost=0.00..1.83 rows=1 width=8) (actual time=196.546..196.546 rows=1 loops=22)
-> Index Scan using allevents_idx on allevents (cost=0.00..322946.21 rows=176041 width=8) (actual time=196.544..196.544 rows=1 loops=22)
Index Cond: ((eventtime IS NOT NULL) AND ((eventtype)::text = t.eventtype))
Total runtime: 4325.694 ms
What you need is a "skip scan" or "loose index scan". PostgreSQL's planner does not yet implement those automatically, but you can trick it into using one by using a recursive query.
WITH RECURSIVE t AS (
SELECT min(eventtype) AS eventtype FROM allevents
UNION ALL
SELECT (SELECT min(eventtype) as eventtype FROM allevents WHERE eventtype > t.eventtype)
FROM t where t.eventtype is not null
)
select eventtype, (select max(eventtime) from allevents where eventtype=t.eventtype) from t;
There may be a way to collapse the max(eventtime) into the recursive query rather than doing it outside that query, but if so I have not hit upon it.
This needs an index on (eventtype, eventtime) in order to be efficient. You can have it be DESC on the eventtime, but that is not necessary. It is efficient only if eventtype has only a few distinct values (21 of them, in your case).
Based on the question, you already have the relevant index.
If upgrading to Postgres 9.3 or adding an index on (eventtype, eventtime desc) doesn't make a difference, this is a case where rewriting the query to use a correlated subquery works very well, provided you can enumerate all of the event types manually:
select val as eventtype,
(select max(eventtime)
from allevents
where allevents.eventtype = val
) as eventtime
from unnest('{type1,type2,…}'::text[]) as val;
Here are the plans I get when running similar queries:
denis=# select version();
version
-----------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 9.3.1 on x86_64-apple-darwin11.4.2, compiled by Apple LLVM version 4.2 (clang-425.0.28) (based on LLVM 3.2svn), 64-bit
(1 row)
Test data:
denis=# create table test (evttype int, evttime timestamp, primary key (evttype, evttime));
CREATE TABLE
denis=# insert into test (evttype, evttime) select i, now() + (i % 3) * interval '1 min' - j * interval '1 sec' from generate_series(1,10) i, generate_series(1,10000) j;
INSERT 0 100000
denis=# create index on test (evttime, evttype);
CREATE INDEX
denis=# vacuum analyze test;
VACUUM
First query:
denis=# explain analyze select evttype, max(evttime) from test group by evttype;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=2041.00..2041.10 rows=10 width=12) (actual time=54.983..54.987 rows=10 loops=1)
-> Seq Scan on test (cost=0.00..1541.00 rows=100000 width=12) (actual time=0.009..15.954 rows=100000 loops=1)
Total runtime: 55.045 ms
(3 rows)
Second query:
denis=# explain analyze select val as evttype, (select max(evttime) from test where test.evttype = val) as evttime from unnest('{1,2,3,4,5,6,7,8,9,10}'::int[]) val;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Function Scan on unnest val (cost=0.00..48.39 rows=100 width=4) (actual time=0.086..0.292 rows=10 loops=1)
SubPlan 2
-> Result (cost=0.46..0.47 rows=1 width=0) (actual time=0.024..0.024 rows=1 loops=10)
InitPlan 1 (returns $1)
-> Limit (cost=0.42..0.46 rows=1 width=8) (actual time=0.021..0.021 rows=1 loops=10)
-> Index Only Scan Backward using test_pkey on test (cost=0.42..464.42 rows=10000 width=8) (actual time=0.019..0.019 rows=1 loops=10)
Index Cond: ((evttype = val.val) AND (evttime IS NOT NULL))
Heap Fetches: 0
Total runtime: 0.370 ms
(9 rows)
An index on (eventtype, eventtime desc) should help, or reindex the primary key index. I would also recommend replacing the type of eventtype with an enum (if the number of types is fixed) or int/smallint. This will decrease the size of the data and indexes, so queries will run faster.
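A sketch of the enum conversion; the two labels are hypothetical placeholders for the real ~20 event type names:
CREATE TYPE eventtype_t AS ENUM ('TEST_EVENT', 'OTHER_EVENT');  -- hypothetical labels
ALTER TABLE allevents
ALTER COLUMN eventtype TYPE eventtype_t
USING eventtype::eventtype_t;  -- fails if any stored value is not a declared label
Note that the ALTER rewrites the table and rebuilds the indexes on that column, holding an exclusive lock while it runs.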

Configuration parameter work_mem in PostgreSQL on Linux

I have to optimize queries by tuning basic PostgreSQL server configuration parameters. In the documentation I came across the work_mem parameter. Then I checked how changing this parameter would influence the performance of my query (which uses a sort). I measured query execution time with various work_mem settings and was very disappointed.
The table on which I perform my query contains 10,000,000 rows and there are 430 MB of data to sort. (Sort Method: external merge Disk: 430112kB).
With work_mem = 1MB, EXPLAIN output is:
Total runtime: 29950.571 ms (sort takes about 19300 ms).
Sort (cost=4032588.78..4082588.66 rows=19999954 width=8)
(actual time=22577.149..26424.951 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
With work_mem = 5MB:
Total runtime: 36282.729 ms (sort: 25400 ms).
Sort (cost=3485713.78..3535713.66 rows=19999954 width=8)
(actual time=25062.383..33246.561 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
With work_mem = 64MB:
Total runtime: 42566.538 ms (sort: 31000 ms).
Sort (cost=3212276.28..3262276.16 rows=19999954 width=8)
(actual time=28599.611..39454.279 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
Can anyone explain why performance gets worse? Or suggest any other methods to make query execution faster by changing server parameters?
My query (I know it's not optimal, but I have to benchmark this kind of query):
SELECT n
FROM (
SELECT n + 1 AS n FROM table_name
EXCEPT
SELECT n FROM table_name) AS q1
ORDER BY n DESC;
Full execution plan:
Sort (cost=5805421.81..5830421.75 rows=9999977 width=8) (actual time=30405.682..30405.682 rows=1 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 25kB
-> Subquery Scan q1 (cost=4032588.78..4232588.32 rows=9999977 width=8) (actual time=30405.636..30405.637 rows=1 loops=1)
-> SetOp Except (cost=4032588.78..4132588.55 rows=9999977 width=8) (actual time=30405.634..30405.634 rows=1 loops=1)
-> Sort (cost=4032588.78..4082588.66 rows=19999954 width=8) (actual time=23046.478..27733.020 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
-> Append (cost=0.00..513495.02 rows=19999954 width=8) (actual time=0.040..8191.185 rows=20000000 loops=1)
-> Subquery Scan "*SELECT* 1" (cost=0.00..269247.48 rows=9999977 width=8) (actual time=0.039..3651.506 rows=10000000 loops=1)
-> Seq Scan on table_name (cost=0.00..169247.71 rows=9999977 width=8) (actual time=0.038..2258.323 rows=10000000 loops=1)
-> Subquery Scan "*SELECT* 2" (cost=0.00..244247.54 rows=9999977 width=8) (actual time=0.008..2697.546 rows=10000000 loops=1)
-> Seq Scan on table_name (cost=0.00..144247.77 rows=9999977 width=8) (actual time=0.006..1079.561 rows=10000000 loops=1)
Total runtime: 30496.100 ms
I posted your query plan on explain.depesz.com, have a look.
The query planner's estimates are terribly wrong in some places.
Have you run ANALYZE recently?
Read the chapters in the manual on Statistics Used by the Planner and Planner Cost Constants. Pay special attention to the chapters on random_page_cost and default_statistics_target.
You might try:
ALTER TABLE diplomas ALTER COLUMN number SET STATISTICS 1000;
ANALYZE diplomas;
Or go even higher for a table with 10M rows. It depends on data distribution and actual queries. Experiment. The default is 100, the maximum is 10000.
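For random_page_cost, a hedged sketch (1.1 is a hypothetical value suited to fast storage; the default is 4):
SET random_page_cost = 1.1;  -- hypothetical value for SSD-backed storage
-- then re-run EXPLAIN ANALYZE on the query and compare the plans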
For a database of that size, 1 or 5 MB of work_mem is generally not enough. Read the Postgres Wiki page on Tuning Postgres that @aleroot linked to.
As your query needs 430104kB of memory on disk according to the EXPLAIN output, you have to set work_mem to something like 500MB or more to allow in-memory sorting. The in-memory representation of data needs somewhat more space than the on-disk representation. You may be interested in what Tom Lane posted on that matter recently.
Increasing work_mem by just a little, like you tried, won't help much and can even slow things down. Setting it too high globally can even hurt, especially with concurrent access: multiple sessions might starve one another for resources, and allocating more for one purpose takes memory away from another if the resource is limited. The best setup depends on the complete situation.
To avoid side effects, set it high enough only locally in your session, and temporarily for the query:
SET work_mem = '500MB';
Reset it to your default afterwards:
RESET work_mem;
Or use SET LOCAL to set it just for the current transaction to begin with.
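A sketch of the transaction-scoped variant, using the query from the question:
BEGIN;
SET LOCAL work_mem = '500MB';  -- reverts automatically at COMMIT or ROLLBACK
SELECT n
FROM (
SELECT n + 1 AS n FROM table_name
EXCEPT
SELECT n FROM table_name) AS q1
ORDER BY n DESC;
COMMIT;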
SET search_path='tmp';
-- Generate some data ...
-- DROP table tmp.table_name ;
-- CREATE table tmp.table_name ( n INTEGER NOT NULL PRIMARY KEY);
-- INSERT INTO tmp.table_name(n) SELECT generate_series(1,1000);
-- DELETE FROM tmp.table_name WHERE random() < 0.05 ;
The EXCEPT query is equivalent to the following NOT EXISTS form, which generates a different query plan (but the same results) here (9.0.1beta something):
-- EXPLAIN ANALYZE
WITH q1 AS (
SELECT 1+tn.n AS n
FROM table_name tn
WHERE NOT EXISTS (
SELECT * FROM table_name nx
WHERE nx.n = tn.n+1
)
)
SELECT q1.n
FROM q1
ORDER BY q1.n DESC;
(a version with a recursive CTE might also be possible :-)
EDIT: the query plans, all for 100K records with 0.2% deleted.
Original query:
------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=36461.76..36711.20 rows=99778 width=4) (actual time=2682.600..2682.917 rows=222 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 22kB
-> Subquery Scan q1 (cost=24984.41..26979.97 rows=99778 width=4) (actual time=2003.047..2682.036 rows=222 loops=1)
-> SetOp Except (cost=24984.41..25982.19 rows=99778 width=4) (actual time=2003.042..2681.389 rows=222 loops=1)
-> Sort (cost=24984.41..25483.30 rows=199556 width=4) (actual time=2002.584..2368.963 rows=199556 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 3512kB
-> Append (cost=0.00..5026.57 rows=199556 width=4) (actual time=0.071..1452.838 rows=199556 loops=1)
-> Subquery Scan "*SELECT* 1" (cost=0.00..2638.01 rows=99778 width=4) (actual time=0.067..470.652 rows=99778 loops=1)
-> Seq Scan on table_name (cost=0.00..1640.22 rows=99778 width=4) (actual time=0.063..178.365 rows=99778 loops=1)
-> Subquery Scan "*SELECT* 2" (cost=0.00..2388.56 rows=99778 width=4) (actual time=0.014..429.224 rows=99778 loops=1)
-> Seq Scan on table_name (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.011..143.320 rows=99778 loops=1)
Total runtime: 2684.840 ms
(14 rows)
NOT EXISTS-version with CTE:
----------------------------------------------------------------------------------------------------------------------
Sort (cost=6394.60..6394.60 rows=1 width=4) (actual time=699.190..699.498 rows=222 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 22kB
CTE q1
-> Hash Anti Join (cost=2980.01..6394.57 rows=1 width=4) (actual time=312.262..697.985 rows=222 loops=1)
Hash Cond: ((tn.n + 1) = nx.n)
-> Seq Scan on table_name tn (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.013..143.210 rows=99778 loops=1)
-> Hash (cost=1390.78..1390.78 rows=99778 width=4) (actual time=309.923..309.923 rows=99778 loops=1)
-> Seq Scan on table_name nx (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.007..144.102 rows=99778 loops=1)
-> CTE Scan on q1 (cost=0.00..0.02 rows=1 width=4) (actual time=312.270..698.742 rows=222 loops=1)
Total runtime: 700.040 ms
(11 rows)
NOT EXISTS-version without CTE
--------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=6394.58..6394.58 rows=1 width=4) (actual time=692.313..692.625 rows=222 loops=1)
Sort Key: ((1 + tn.n))
Sort Method: quicksort Memory: 22kB
-> Hash Anti Join (cost=2980.01..6394.57 rows=1 width=4) (actual time=308.046..691.849 rows=222 loops=1)
Hash Cond: ((tn.n + 1) = nx.n)
-> Seq Scan on table_name tn (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.014..142.781 rows=99778 loops=1)
-> Hash (cost=1390.78..1390.78 rows=99778 width=4) (actual time=305.732..305.732 rows=99778 loops=1)
-> Seq Scan on table_name nx (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.007..143.783 rows=99778 loops=1)
Total runtime: 693.139 ms
(9 rows)
My conclusion is that the "NOT EXISTS" versions cause Postgres to produce better plans.