Reduce execution time of my query in postgresql

Below is my query. It takes 9387.430 ms to execute, which is certainly too long for such a request. I would like to reduce this execution time. Can you please help me with this? I have also provided my EXPLAIN ANALYZE output.
EXPLAIN ANALYZE
SELECT a.artist, b.artist, COUNT(*)
FROM release_has_artist a, release_has_artist b
WHERE a.release = b.release AND a.artist <> b.artist
GROUP BY(a.artist,b.artist)
ORDER BY (a.artist,b.artist);
Output of EXPLAIN ANALYZE :
Sort (cost=1696482.86..1707588.14 rows=4442112 width=48) (actual time=9253.474..9314.510 rows=461386 loops=1)
Sort Key: (ROW(a.artist, b.artist))
Sort Method: external sort Disk: 24832kB
-> Finalize GroupAggregate (cost=396240.32..932717.19 rows=4442112 width=48) (actual time=1928.058..2911.463 rows=461386 loops=1)
Group Key: a.artist, b.artist
-> Gather Merge (cost=396240.32..860532.87 rows=3701760 width=16) (actual time=1928.049..2494.638 rows=566468 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial GroupAggregate (cost=395240.29..432257.89 rows=1850880 width=16) (actual time=1912.809..2156.951 rows=188823 loops=3)
Group Key: a.artist, b.artist
-> Sort (cost=395240.29..399867.49 rows=1850880 width=8) (actual time=1912.794..2003.776 rows=271327 loops=3)
Sort Key: a.artist, b.artist
Sort Method: external merge Disk: 4848kB
-> Merge Join (cost=0.85..177260.72 rows=1850880 width=8) (actual time=2.143..1623.628 rows=271327 loops=3)
Merge Cond: (a.release = b.release)
Join Filter: (a.artist <> b.artist)
Rows Removed by Join Filter: 687597
-> Parallel Index Only Scan using release_has_artist_pkey on release_has_artist a (cost=0.43..67329.73 rows=859497 width=8) (actual time=0.059..240.998 rows=687597 loops=3)
Heap Fetches: 711154
-> Index Only Scan using release_has_artist_pkey on release_has_artist b (cost=0.43..79362.68 rows=2062792 width=8) (actual time=0.072..798.402 rows=2329742 loops=3)
Heap Fetches: 2335683
Planning time: 2.101 ms
Execution time: 9387.430 ms

In your EXPLAIN ANALYZE output there are two Sort Method: ... Disk: ####kB lines (one external sort, one external merge), indicating that the sorts spilled to disk instead of being done in memory, due to an insufficiently sized work_mem. Try increasing your work_mem up to 32MB (30 might be OK, but I like multiples of 8) and try again.
Note that you can set work_mem on a per-session basis; a global change to work_mem could have negative side effects, such as running out of memory, because the postgresql.conf-configured work_mem can be claimed by every session (and by every sort or hash operation within a query), so it effectively has a multiplicative effect.
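For example (a minimal sketch; pick a value that fits your server's available RAM):
SET work_mem = '32MB';   -- affects only the current session
-- re-run the EXPLAIN ANALYZE here
RESET work_mem;          -- back to the configured default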

Optimize Postgres Query with nested join

I have a query that takes several seconds in my staging environment but a few minutes in my production environment. The query is the same, but the production environment has many more rows in each table (approx. 10x).
The goal of the query is to find all distinct software records which are installed on at least one of the assets in the search results (NOTE: the search criteria can vary across dozens of fields).
The tables:
Assets: dozens of fields on which a user can search - production has several million records
InstalledSoftwares: asset_id (reference), software_id (reference) - each asset has 10-100 installed software records, so there are 10s of millions of records in production.
Softwares: the results. Production has less than 4000 unique software records.
I've removed duplicate WHERE clauses based on the suggestions from Laurenz Albe.
How can I make this more efficient?
The Query:
SELECT DISTINCT "softwares"."id" FROM "softwares"
INNER JOIN "installed_softwares" ON "installed_softwares"."software_id" = "softwares"."id"
WHERE "installed_softwares"."asset_id" IN
(SELECT "assets"."id" FROM "assets"
WHERE "assets"."assettype_id" = 3
AND "assets"."archive_number" IS NULL
AND "assets"."expired" = FALSE
AND "assets"."local_status_id" != 4
AND "assets"."scorable" = TRUE)
Here is the EXPLAIN (analyze, buffers) for the query:
Unique (cost=153558.09..153588.92 rows=5524 width=8) (actual time=4224.203..5872.293 rows=3525 loops=1)
Buffers: shared hit=112428 read=187, temp read=3145 written=3164
I/O Timings: read=2.916
-> Sort (cost=153558.09..153573.51 rows=6165 width=8) (actual time=4224.200..5065.158 rows=1087807 loops=1)
Sort Key: softwares.id
Sort Method: external merge Disk: 19240kB
Buffers: shared hit=112428 read=187, temp read=3145 written=3164
I/O Timings: read=2.916
-> Hash Join (cost=119348.05..153170.01 rows=6165 width=8) (actual time=342.860..3159.458 rows=1087807 loops=1)
Hash Cond: (installed_softwares.software_id = softwares.id)
Buffers: shared hit=112428 read=187
I/O Timings: read=2.916
-> Gather (cost=119119.76..152925.53 rows=6165 width=8) (actual time=333.981..1320.277 rows=1087807 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=112324 read=187
I/O Timings: read=2.916
-> Parallel Hash Join (cost=118119.76..151309.03 rows=2569 width=8) (actual time=331.836..2010.171 rows=362602 loops=3)
Hash Cond: (installed_softwares.asset_id = assets.id)
Buffers: shared hit=112324 read=187
I/O Timings: read=2.916
-> Parallel Seq Scan on installed_softwares (cost=0.00..30518.88 rows=1017288 width=16) (actual time=0.007..667.564 rows=813396 loops=3)
Buffers: shared hit=20159 read=187
I/O Timings: read=2.916
-> Parallel Hash (cost=118065.04..118065.04 rows=4378 width=4) (actual time=331.648..331.651 rows=23407 loops=3)
Buckets: 131072 (originally 16384) Batches: 1 (originally 1) Memory Usage: 4672kB
Buffers: shared hit=92058
-> Parallel Seq Scan on assets (cost=0.00..118065.04 rows=4378 width=4) (actual time=0.012..302.134 rows=23407 loops=3)
Filter: ((archive_number IS NULL) AND (NOT expired) AND scorable AND (local_status_id <> 4) AND (assettype_id = 3))
Rows Removed by Filter: 1363624
Buffers: shared hit=92058
-> Hash (cost=159.24..159.24 rows=5524 width=8) (actual time=8.859..8.862 rows=5546 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 281kB
Buffers: shared hit=104
-> Seq Scan on softwares (cost=0.00..159.24 rows=5524 width=8) (actual time=0.006..4.426 rows=5546 loops=1)
Buffers: shared hit=104
Planning Time: 0.534 ms
Execution Time: 5878.476 ms
The problem is the bad row count estimate for the scan on assets. In your original query the estimate was so bad (48 rows instead of 23407) because the condition was somewhat redundant:
archived_at IS NULL AND archive_number IS NULL AND
NOT expired AND
scorable AND
local_status_id <> 4 AND local_status_id <> 4 AND
assettype_id = 3
PostgreSQL treats all these conditions as statistically independent, which leads it astray. One of the conditions (local_status_id <> 4) is present twice; remove one copy. The first two conditions seem somewhat redundant too; perhaps one of them can be omitted.
Perhaps that is enough to improve the estimate so that PostgreSQL does not choose the slow nested loop join.
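If simplifying the conditions is not enough, a complementary option (not mentioned in the original answer; it assumes PostgreSQL 12 or later, and the statistics name below is made up) is to create extended statistics on the correlated columns so the planner no longer treats them as independent:
CREATE STATISTICS assets_filter_stats (dependencies, mcv)
    ON assettype_id, expired, scorable, local_status_id
    FROM assets;
ANALYZE assets;  -- collect the new multivariate statistics
Note that dependencies statistics only apply to equality clauses and MCV lists cover a limited set of clause types, so this may not capture the whole filter.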

query taking nested loop instead of hash join

The query below uses a nested loop and runs for 21 minutes; after disabling nested loops it finishes in under a minute. Table stats are up to date and the tables have been vacuumed. Is there any way to figure out why Postgres chooses a nested loop instead of the more efficient hash join?
Also, to avoid the nested loop, is it better to set enable_nestloop to off or to increase random_page_cost? I think setting enable_nestloop to off would stop plans from using a nested loop even in places where it would be efficient. What would be a better alternative? Please advise.
SELECT DISTINCT ON (three.quebec_delta)
last_value(three.reviewed_by_nm) OVER wnd AS reviewed_by_nm,
last_value(three.reviewer_specialty_nm) OVER wnd AS reviewer_specialty_nm,
last_value(three.kilo) OVER wnd AS kilo,
last_value(three.review_reason_dscr) OVER wnd AS review_reason_dscr,
last_value(three.review_notes) OVER wnd AS review_notes,
last_value(three.seven_uniform_charlie) OVER wnd AS seven_uniform_charlie,
last_value(three.di_audit_source_system_cd) OVER wnd AS di_audit_source_system_cd,
last_value(three.di_audit_update_dtm) OVER wnd AS di_audit_update_dtm,
three.quebec_delta
FROM
ods_authorization.quebec_foxtrot seven_uniform_foxtrot
JOIN ods_authorization.golf echo ON seven_uniform_foxtrot.four = echo.oscar
JOIN ods_authorization.papa three ON echo.five = three.quebec_delta
AND three.xray = '0'::bpchar
WHERE
seven_uniform_foxtrot.two_india >= (zulu () - '2 years'::interval)
AND lima (three.kilo, 'ADVISOR'::character varying)::text = 'ADVISOR'::text
WINDOW wnd AS (PARTITION BY three.quebec_delta ORDER BY three.seven_uniform_charlie DESC ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
Plan that runs for 21 minutes and uses a nested loop:
Unique (cost=550047.63..550257.15 rows=5238 width=281) (actual time=1295000.966..1296128.356 rows=319863 loops=1)
-> WindowAgg (cost=550047.63..550244.06 rows=5238 width=281) (actual time=1295000.964..1296013.046 rows=461635 loops=1)
-> Sort (cost=550047.63..550060.73 rows=5238 width=326) (actual time=1295000.929..1295089.796 rows=461635 loops=1)
Sort Key: three.quebec_delta, three.seven_uniform_charlie DESC
Sort Method: quicksort Memory: 197021kB
-> Nested Loop (cost=1001.12..549724.06 rows=5238 width=326) (actual time=8.274..1292470.826 rows=461635 loops=1)
-> Gather (cost=1000.56..527782.84 rows=24896 width=391) (actual time=4.287..12701.687 rows=3484699 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=0.56..524293.24 rows=10373 width=391) (actual time=3.492..400998.923 rows=1161566 loops=3)
-> Parallel Seq Scan on papa three (cost=0.00..436912.84 rows=10373 width=326) (actual time=1.554..2455.626 rows=1161566 loops=3)
Filter: ((xray = 'november'::bpchar) AND ((lima_sierra(kilo, 'two_zulu'::character varying))::text = 'two_zulu'::text))
Rows Removed by Filter: 501723
-> Index Scan using five_tango on golf echo (cost=0.56..8.42 rows=1 width=130) (actual time=0.342..0.342 rows=1 loops=3484699)
Index Cond: (five_hotel = three.quebec_delta)
-> Index Scan using lima_alpha on quebec_foxtrot seven_uniform_foxtrot (cost=0.56..0.88 rows=1 width=65) (actual time=0.366..0.366 rows=0 loops=3484699)
Index Cond: (four = echo.oscar)
Filter: (two_india >= (zulu() - 'two_two'::interval))
Rows Removed by Filter: 1
Planning time: 0.777 ms
Execution time: 1296183.259 ms
Plan after setting enable_nestloop to off and work_mem to 8GB. I get the same plan when increasing random_page_cost to 1000.
Unique (cost=5933437.24..5933646.68 rows=5236 width=281) (actual time=19898.050..20993.124 rows=319980 loops=1)
-> WindowAgg (cost=5933437.24..5933633.59 rows=5236 width=281) (actual time=19898.049..20879.655 rows=461769 loops=1)
-> Sort (cost=5933437.24..5933450.33 rows=5236 width=326) (actual time=19898.022..19978.839 rows=461769 loops=1)
Sort Key: three.quebec_delta, three.seven_uniform_charlie DESC
Sort Method: quicksort Memory: 197056kB
-> Hash Join (cost=1947451.87..5933113.80 rows=5236 width=326) (actual time=11616.323..17931.146 rows=461769 loops=1)
Hash Cond: (echo.oscar = seven_uniform_foxtrot.four)
-> Gather (cost=438059.74..4423656.32 rows=24897 width=391) (actual time=1909.685..7291.289 rows=3484833 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Hash Join (cost=437059.74..4420166.62 rows=10374 width=391) (actual time=1904.546..7385.948 rows=1161611 loops=3)
Hash Cond: (echo.five = three.quebec_delta)
-> Parallel Seq Scan on golf echo (cost=0.00..3921922.09 rows=8152209 width=130) (actual time=0.003..1756.576 rows=6531668 loops=3)
-> Parallel Hash (cost=436930.07..436930.07 rows=10374 width=326) (actual time=1904.354..1904.354 rows=1161611 loops=3)
Buckets: 4194304 (originally 32768) Batches: 1 (originally 1) Memory Usage: 1135200kB
-> Parallel Seq Scan on papa three (cost=0.00..436930.07 rows=10374 width=326) (actual time=0.009..963.728 rows=1161611 loops=3)
Filter: ((xray = 'november'::bpchar) AND ((lima(kilo, 'two_zulu'::character varying))::text = 'two_zulu'::text))
Rows Removed by Filter: 502246
-> Hash (cost=1476106.74..1476106.74 rows=2662831 width=65) (actual time=9692.517..9692.517 rows=2685656 loops=1)
Buckets: 4194304 Batches: 1 Memory Usage: 287171kB
-> Seq Scan on quebec_foxtrot seven_uniform_foxtrot (cost=0.00..1476106.74 rows=2662831 width=65) (actual time=0.026..8791.556 rows=2685656 loops=1)
Filter: (two_india >= (zulu() - 'two_two'::interval))
Rows Removed by Filter: 9984069
Planning time: 0.742 ms
Execution time: 21218.770 ms
Try an index on papa(lima_sierra(kilo, 'two_zulu'::character varying)) and ANALYZE the table. With that index in place, PostgreSQL collects statistics on the expression, which should improve the estimate, so that you don't get a nested loop join.
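For example (a sketch using the obfuscated names from the plan; adjust to your real schema, table, and function names):
CREATE INDEX ON ods_authorization.papa (lima_sierra(kilo, 'two_zulu'::character varying));
ANALYZE ods_authorization.papa;  -- gather statistics on the indexed expression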
If you just replace COALESCE(r_cd, 'ADVISOR') = 'ADVISOR' with
(r_cd = 'ADVISOR' OR r_cd IS NULL)
that might allow the planner to use the current table statistics to improve the estimates enough to change the plan.
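Applied to the obfuscated names in the query above (assuming lima stands for COALESCE and kilo for r_cd), the filter would become:
AND (three.kilo = 'ADVISOR' OR three.kilo IS NULL)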

Explain not showing memory usage

I am trying to see how much memory a query takes. In articles I see EXPLAIN output that includes a memory value, but when I run EXPLAIN I do not get a memory value.
Here is my query:
explain (analyze, verbose, buffers) select * from watched_url_queue
join rate_check rc on watched_url_queue.domain_key = rc.domain_key
where rc.locked = false
order by rc.domain_key
limit 1;
And this is my output:
Limit (cost=0.70..0.85 rows=1 width=336) (actual time=0.009..0.011 rows=1 loops=1)
Output: watched_url_queue.watched_url_record_id, watched_url_queue.url, watched_url_queue.domain_key, watched_url_queue.targets, watched_url_queue.create_date, watched_url_queue.tries, watched_url_queue.defer_until, watched_url_queue.duration, watched_url_queue.user_auth_custom_id, watched_url_queue.completed, rc.domain_key, rc.last_scan, rc.locked, rc.domain_key
Buffers: shared hit=7
-> Merge Join (cost=0.70..32514.53 rows=219864 width=336) (actual time=0.009..0.009 rows=1 loops=1)
Output: watched_url_queue.watched_url_record_id, watched_url_queue.url, watched_url_queue.domain_key, watched_url_queue.targets, watched_url_queue.create_date, watched_url_queue.tries, watched_url_queue.defer_until, watched_url_queue.duration, watched_url_queue.user_auth_custom_id, watched_url_queue.completed, rc.domain_key, rc.last_scan, rc.locked, rc.domain_key
Inner Unique: true
Merge Cond: (watched_url_queue.domain_key = rc.domain_key)
Buffers: shared hit=7
-> Index Scan using idx_watchedurlqueue_domainkey on public.watched_url_queue (cost=0.42..29069.88 rows=439728 width=289) (actual time=0.003..0.003 rows=1 loops=1)
Output: watched_url_queue.watched_url_record_id, watched_url_queue.url, watched_url_queue.domain_key, watched_url_queue.targets, watched_url_queue.create_date, watched_url_queue.tries, watched_url_queue.defer_until, watched_url_queue.duration, watched_url_queue.user_auth_custom_id, watched_url_queue.completed
Buffers: shared hit=4
-> Index Scan using rate_check_pkey on public.rate_check rc (cost=0.28..141.85 rows=2519 width=28) (actual time=0.003..0.003 rows=1 loops=1)
Output: rc.domain_key, rc.last_scan, rc.locked
Filter: (NOT rc.locked)
Buffers: shared hit=3
Planning time: 0.362 ms
Execution time: 0.060 ms
How do I see the memory usage?
A Merge Join doesn't report memory usage because, unlike e.g. a Hash Join, it doesn't need to buffer anything: it simply walks through the sorted data from the two indexes as it is retrieved.
If you want to estimate the overall size of each step in the plan, you can multiply the width value by the rows value (using the actual rows).
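For example, for the Merge Join node above the planner estimated 219864 rows of width 336, i.e. roughly 219864 × 336 bytes ≈ 74 MB, while the single row it actually produced (because of the LIMIT) amounts to only about 336 bytes.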
But that is not necessarily the "memory" that is needed by the query, as the blocks are managed in the shared memory ("cache") rather than "by the query".

postgresql group by query taking too long

I am running the query below and it's taking 5 minutes:
SELECT "DID"
FROM location_signals
GROUP BY "DID";
I have an index on DID, which is varchar(100); the table has about 150 million records.
How can I improve and optimize this further?
Are there any additional indexes that could be added, or other recommendations? Thanks.
Edit: below are the results of EXPLAIN ANALYZE:
Finalize GroupAggregate (cost=23803276.36..24466411.92 rows=179625 width=44) (actual time=285577.900..321360.237 rows=4833061 loops=1)
Group Key: DID
-> Gather Merge (cost=23803276.36..24462819.42 rows=359250 width=44) (actual time=285577.874..320018.354 rows=10825153 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial GroupAggregate (cost=23802276.33..24420353.03 rows=179625 width=44) (actual time=281580.548..310818.137 rows=3608384 loops=3)
Group Key: DID
-> Sort (cost=23802276.33..24007703.15 rows=82170727 width=36) (actual time=281580.535..303887.638 rows=65736579 loops=3)
Sort Key: DID
Sort Method: external merge Disk: 2987656kB
Worker 0: Sort Method: external merge Disk: 3099408kB
Worker 1: Sort Method: external merge Disk: 2987648kB
-> Parallel Seq Scan on location_signals (cost=0.00..6259493.27 rows=82170727 width=36) (actual time=0.043..13460.990 rows=65736579 loops=3)
Planning Time: 1.332 ms
Execution Time: 322686.767 ms

Configuration parameter work_mem in PostgreSQL on Linux

I have to optimize queries by tuning basic PostgreSQL server configuration parameters. In the documentation I came across the work_mem parameter. Then I checked how changing this parameter would influence the performance of my query (which uses a sort). I measured query execution time with various work_mem settings and was very disappointed.
The table on which I perform my query contains 10,000,000 rows and there are 430 MB of data to sort. (Sort Method: external merge Disk: 430112kB).
With work_mem = 1MB, EXPLAIN output is:
Total runtime: 29950.571 ms (sort takes about 19300 ms).
Sort (cost=4032588.78..4082588.66 rows=19999954 width=8)
(actual time=22577.149..26424.951 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
With work_mem = 5MB:
Total runtime: 36282.729 ms (sort: 25400 ms).
Sort (cost=3485713.78..3535713.66 rows=19999954 width=8)
(actual time=25062.383..33246.561 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
With work_mem = 64MB:
Total runtime: 42566.538 ms (sort: 31000 ms).
Sort (cost=3212276.28..3262276.16 rows=19999954 width=8)
(actual time=28599.611..39454.279 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
Can anyone explain why performance gets worse? Or suggest any other methods to make query execution faster by changing server parameters?
My query (I know it's not optimal, but I have to benchmark this kind of query):
SELECT n
FROM (
SELECT n + 1 AS n FROM table_name
EXCEPT
SELECT n FROM table_name) AS q1
ORDER BY n DESC;
Full execution plan:
Sort (cost=5805421.81..5830421.75 rows=9999977 width=8) (actual time=30405.682..30405.682 rows=1 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 25kB
-> Subquery Scan q1 (cost=4032588.78..4232588.32 rows=9999977 width=8) (actual time=30405.636..30405.637 rows=1 loops=1)
-> SetOp Except (cost=4032588.78..4132588.55 rows=9999977 width=8) (actual time=30405.634..30405.634 rows=1 loops=1)
-> Sort (cost=4032588.78..4082588.66 rows=19999954 width=8) (actual time=23046.478..27733.020 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
-> Append (cost=0.00..513495.02 rows=19999954 width=8) (actual time=0.040..8191.185 rows=20000000 loops=1)
-> Subquery Scan "*SELECT* 1" (cost=0.00..269247.48 rows=9999977 width=8) (actual time=0.039..3651.506 rows=10000000 loops=1)
-> Seq Scan on table_name (cost=0.00..169247.71 rows=9999977 width=8) (actual time=0.038..2258.323 rows=10000000 loops=1)
-> Subquery Scan "*SELECT* 2" (cost=0.00..244247.54 rows=9999977 width=8) (actual time=0.008..2697.546 rows=10000000 loops=1)
-> Seq Scan on table_name (cost=0.00..144247.77 rows=9999977 width=8) (actual time=0.006..1079.561 rows=10000000 loops=1)
Total runtime: 30496.100 ms
I posted your query plan on explain.depesz.com, have a look.
The query planner's estimates are terribly wrong in some places.
Have you run ANALYZE recently?
Read the chapters in the manual on Statistics Used by the Planner and Planner Cost Constants. Pay special attention to the chapters on random_page_cost and default_statistics_target.
You might try:
ALTER TABLE diplomas ALTER COLUMN number SET STATISTICS 1000;
ANALYZE diplomas;
Or go even higher for a table with 10M rows. It depends on data distribution and actual queries. Experiment. The default is 100, the maximum is 10000.
For a database of that size, 1 or 5 MB of work_mem is generally not enough. Read the Postgres Wiki page on Tuning Postgres that @aleroot linked to.
As your query needs 430104kB of memory on disk according to EXPLAIN output, you have to set work_mem to something like 500MB or more to allow in-memory sorting. In-memory representation of data needs some more space than on-disk representation. You may be interested in what Tom Lane posted on that matter recently.
Increasing work_mem by just a little, like you tried, won't help much and can even slow things down. Setting it too high globally can also hurt, especially with concurrent access: multiple sessions might starve one another for resources, and allocating more memory for one purpose takes it away from another when the resource is limited. The best setup depends on the complete situation.
To avoid side effects, only set it high enough locally in your session, and temporarily for the query:
SET work_mem = '500MB';
Reset it to your default afterwards:
RESET work_mem;
Or use SET LOCAL to set it just for the current transaction to begin with.
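A minimal sketch of the transaction-local variant:
BEGIN;
SET LOCAL work_mem = '500MB';  -- reverts automatically at COMMIT or ROLLBACK
-- run the expensive query here
COMMIT;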
SET search_path='tmp';
-- Generate some data ...
-- DROP TABLE tmp.table_name;
-- CREATE TABLE tmp.table_name ( n INTEGER NOT NULL PRIMARY KEY);
-- INSERT INTO tmp.table_name(n) SELECT generate_series(1,100000);
-- DELETE FROM tmp.table_name WHERE random() < 0.002;
The EXCEPT query is equivalent to the following NOT EXISTS form (given that n is a non-null primary key), which generates a different query plan (but the same results) here (9.0.1beta something):
-- EXPLAIN ANALYZE
WITH q1 AS (
SELECT 1+tn.n AS n
FROM table_name tn
WHERE NOT EXISTS (
SELECT * FROM table_name nx
WHERE nx.n = tn.n+1
)
)
SELECT q1.n
FROM q1
ORDER BY q1.n DESC;
(a version with a recursive CTE might also be possible :-)
EDIT: the query plans, all for 100K records with 0.2% deleted.
Original query:
------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=36461.76..36711.20 rows=99778 width=4) (actual time=2682.600..2682.917 rows=222 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 22kB
-> Subquery Scan q1 (cost=24984.41..26979.97 rows=99778 width=4) (actual time=2003.047..2682.036 rows=222 loops=1)
-> SetOp Except (cost=24984.41..25982.19 rows=99778 width=4) (actual time=2003.042..2681.389 rows=222 loops=1)
-> Sort (cost=24984.41..25483.30 rows=199556 width=4) (actual time=2002.584..2368.963 rows=199556 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 3512kB
-> Append (cost=0.00..5026.57 rows=199556 width=4) (actual time=0.071..1452.838 rows=199556 loops=1)
-> Subquery Scan "*SELECT* 1" (cost=0.00..2638.01 rows=99778 width=4) (actual time=0.067..470.652 rows=99778 loops=1)
-> Seq Scan on table_name (cost=0.00..1640.22 rows=99778 width=4) (actual time=0.063..178.365 rows=99778 loops=1)
-> Subquery Scan "*SELECT* 2" (cost=0.00..2388.56 rows=99778 width=4) (actual time=0.014..429.224 rows=99778 loops=1)
-> Seq Scan on table_name (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.011..143.320 rows=99778 loops=1)
Total runtime: 2684.840 ms
(14 rows)
NOT EXISTS-version with CTE:
----------------------------------------------------------------------------------------------------------------------
Sort (cost=6394.60..6394.60 rows=1 width=4) (actual time=699.190..699.498 rows=222 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 22kB
CTE q1
-> Hash Anti Join (cost=2980.01..6394.57 rows=1 width=4) (actual time=312.262..697.985 rows=222 loops=1)
Hash Cond: ((tn.n + 1) = nx.n)
-> Seq Scan on table_name tn (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.013..143.210 rows=99778 loops=1)
-> Hash (cost=1390.78..1390.78 rows=99778 width=4) (actual time=309.923..309.923 rows=99778 loops=1)
-> Seq Scan on table_name nx (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.007..144.102 rows=99778 loops=1)
-> CTE Scan on q1 (cost=0.00..0.02 rows=1 width=4) (actual time=312.270..698.742 rows=222 loops=1)
Total runtime: 700.040 ms
(11 rows)
NOT EXISTS-version without CTE
--------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=6394.58..6394.58 rows=1 width=4) (actual time=692.313..692.625 rows=222 loops=1)
Sort Key: ((1 + tn.n))
Sort Method: quicksort Memory: 22kB
-> Hash Anti Join (cost=2980.01..6394.57 rows=1 width=4) (actual time=308.046..691.849 rows=222 loops=1)
Hash Cond: ((tn.n + 1) = nx.n)
-> Seq Scan on table_name tn (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.014..142.781 rows=99778 loops=1)
-> Hash (cost=1390.78..1390.78 rows=99778 width=4) (actual time=305.732..305.732 rows=99778 loops=1)
-> Seq Scan on table_name nx (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.007..143.783 rows=99778 loops=1)
Total runtime: 693.139 ms
(9 rows)
My conclusion is that the "NOT EXISTS" versions cause postgres to produce better plans.