I'm trying to understand why indexes are not used by this query:
SELECT
te.idturno
FROM turno t
JOIN turnoestado te ON t.idturno = te.idturno
ORDER BY te.idturno, te.idturnoestado
LIMIT 100
Row count for table Turno is ~1 million, and TurnoEstado is ~10 million.
Here are the index definitions:
CREATE INDEX idx_turnoestado_idturno
ON turnoestado USING btree (idturno);
CREATE INDEX idx_turnoestado_idturnoestado
ON turnoestado USING btree (idturnoestado);
As you can see, there are two indexes for the keys used in the ORDER BY clause, but neither of them is used according to the query plan returned by EXPLAIN ANALYZE:
Limit (cost=988700.56..988700.81 rows=100 width=8) (actual time=7332.740..7332.754 rows=100 loops=1)
-> Sort (cost=988700.56..1014125.36 rows=10169918 width=8) (actual time=7332.737..7332.746 rows=100 loops=1)
Sort Key: te.idturno, te.idturnoestado
Sort Method: top-N heapsort Memory: 29kB
-> Hash Join (cost=160603.86..600013.61 rows=10169918 width=8) (actual time=505.221..6279.233 rows=10169918 loops=1)
Hash Cond: (te.idturno = t.idturno)
-> Seq Scan on turnoestado te (cost=0.00..178003.18 rows=10169918 width=8) (actual time=0.011..1249.563 rows=10169918 loops=1)
-> Hash (cost=143894.94..143894.94 rows=1018394 width=4) (actual time=504.731..504.731 rows=1018394 loops=1)
Buckets: 4096 Batches: 64 Memory Usage: 571kB
-> Seq Scan on turno t (cost=0.00..143894.94 rows=1018394 width=4) (actual time=0.033..342.071 rows=1018394 loops=1)
Total runtime: 7332.824 ms
If I use only one key in the ORDER BY, the result is instantaneous; the same happens if I create a multicolumn index. Why does the planner not use the indexes that already exist?
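The multicolumn index mentioned above would look something like this (the index name is illustrative):
CREATE INDEX idx_turnoestado_idturno_idturnoestado
ON turnoestado USING btree (idturno, idturnoestado);
A single btree index covering both sort keys lets the planner return rows already in ORDER BY order and stop after 100 of them, instead of hash-joining and sorting all 10 million.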
P.S.: I'm using "PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit"
Table: Customer
Type:
telephone1 | character varying(255)
telephone2 | character varying(255)
location_id | integer
Index:
"idx_customers_location_id" btree (location_id)
"idx_customers_telephone1_txt" btree (telephone1 text_pattern_ops)
"idx_customers_trim_telephone_1" btree (btrim(telephone1::text))
"idx_customers_trim_telephone2" btree (btrim(telephone2::text))
I have a table called customers with 141,182 rows in total. I was checking the values in two columns (telephone1, telephone2): every row has data in telephone1, but only 8 rows have a value for the column telephone2.
When I check for the value '1', I get the execution plan and timing below:
SELECT customers.id, location_id, telephone1, telephone2 FROM "customers" INNER JOIN "locations" ON
"locations"."id" = "customers"."location_id" WHERE (customers.location_id = 189 AND (telephone1 = '1'
OR telephone2 = '1')) GROUP BY customers.id LIMIT 20 OFFSET 0;
Limit (cost=519.62..519.64 rows=4 width=125) (actual time=25.895..25.898 rows=1 loops=1)
-> GroupAggregate (cost=519.62..519.64 rows=4 width=125) (actual time=25.893..25.896 rows=1 loops=1)
Group Key: customers.id
-> Sort (cost=519.62..519.62 rows=4 width=127) (actual time=25.876..25.879 rows=1 loops=1)
Sort Key: customers.id
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=8.62..519.61 rows=4 width=127) (actual time=10.740..25.869 rows=1 loops=1)
-> Index Scan using locations_pkey on locations (cost=0.06..4.06 rows=1 width=70) (actual time=0.027..0.029 rows=1 loops=1)
Index Cond: (id = 189)
-> Bitmap Heap Scan on customers (cost=8.56..515.54 rows=4 width=61) (actual time=10.707..25.832 rows=1 loops=1)
Recheck Cond: (((telephone1)::text = '1'::text) OR ((telephone2)::text = '1'::text))
Filter: (location_id = 189)
Rows Removed by Filter: 1048
Heap Blocks: exact=1737
-> BitmapOr (cost=8.56..8.56 rows=259 width=0) (actual time=3.445..3.446 rows=0 loops=1)
-> Bitmap Index Scan on idx_customers_telephone1_txt (cost=0.00..2.10 rows=7 width=0) (actual time=0.065..0.066 rows=99 loops=1)
Index Cond: ((telephone1)::text = '1'::text)
-> Bitmap Index Scan on idx_customers_telephone2_txt (cost=0.00..6.47 rows=253 width=0) (actual time=3.378..3.378 rows=1664 loops=1)
Index Cond: ((telephone2)::text = '1'::text)
Planning Time: 0.419 ms
Execution Time: 25.995 ms
When I check for the value '0', there is a huge change in the execution time (7753.216 ms):
Limit (cost=0.14..2440.90 rows=10 width=125) (actual time=5900.924..7753.133 rows=4 loops=1)
-> GroupAggregate (cost=0.14..292402.20 rows=1198 width=125) (actual time=5900.922..7753.129 rows=4 loops=1)
Group Key: customers.id
-> Nested Loop (cost=0.14..292395.61 rows=1198 width=127) (actual time=4350.358..7753.087 rows=4 loops=1)
-> Index Scan using customers_pkey on customers (cost=0.09..292387.36 rows=1198 width=61) (actual time=4350.338..7753.054 rows=4 loops=1)
Filter: ((location_id = 189) AND (((telephone1)::text = '0'::text) OR ((telephone2)::text = '0'::text)))
Rows Removed by Filter: 8484280
-> Materialize (cost=0.06..4.06 rows=1 width=70) (actual time=0.005..0.005 rows=1 loops=4)
-> Index Scan using locations_pkey on locations (cost=0.06..4.06 rows=1 width=70) (actual time=0.013..0.013 rows=1 loops=1)
Index Cond: (id = 189)
Planning Time: 0.322 ms
Execution Time: 7753.216 ms
Is there any particular reason why it takes more time to execute for the value '0'? Or is anything wrong here?
One more thing I have noticed: this issue happens only with the column telephone2.
"but only 8 rows have a value for the column telephone2"
Your explain plan indicates otherwise, finding 1664 rows with one specific value for telephone2. Now maybe most of those are not visible, but in that case you really need to VACUUM ANALYZE the table.
Nested Loop (cost=0.14..292395.61 rows=1198 width=127) (actual time=4350.358..7753.087 rows=4 loops=1)
With this second query, it thinks it will find 1198 rows (if run to completion). But it thinks it can stop after the first 20, which would mean reading only about 1.67% of the index. Instead there are only 4 matching rows, so it unexpectedly has to scan the entire index without getting to stop early.
Why are the estimates off by so much? I don't know, it could just be stale statistics (again, VACUUM ANALYZE the table), or there could be some interrelation between the columns that make the estimation hard to do even with accurate statistics.
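As a concrete first step, refreshing the statistics (and visibility information) for the table from the question is just:
VACUUM ANALYZE customers;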
What is the point in joining to locations at all?
I'm using the Django ORM with select_related, and it generates a query of the form:
SELECT *
FROM "coupons_coupon"
LEFT OUTER JOIN "coupons_merchant"
ON ("coupons_coupon"."merchant_id" = "coupons_merchant"."slug")
WHERE ("coupons_coupon"."end_date" > '2020-07-10T09:10:28.101980+00:00'::timestamptz AND "coupons_coupon"."published" = true)
ORDER BY "coupons_coupon"."end_date" ASC, "coupons_coupon"."id"
LIMIT 5;
Which is then executed using the following plan:
Limit (cost=4363.28..4363.30 rows=5 width=604) (actual time=21.864..21.865 rows=5 loops=1)
-> Sort (cost=4363.28..4373.34 rows=4022 width=604) (actual time=21.863..21.863 rows=5 loops=1)
Sort Key: coupons_coupon.end_date, coupons_coupon.id
Sort Method: top-N heapsort Memory: 32kB
-> Hash Left Join (cost=2613.51..4296.48 rows=4022 width=604) (actual time=13.918..20.209 rows=4022 loops=1)
Hash Cond: ((coupons_coupon.merchant_id)::text = (coupons_merchant.slug)::text)
-> Seq Scan on coupons_coupon (cost=0.00..291.41 rows=4022 width=261) (actual time=0.007..1.110 rows=4022 loops=1)
Filter: (published AND (end_date > '2020-07-10 09:10:28.10198+00'::timestamp with time zone))
Rows Removed by Filter: 1691
-> Hash (cost=1204.56..1204.56 rows=24956 width=331) (actual time=13.894..13.894 rows=23911 loops=1)
Buckets: 16384 Batches: 4 Memory Usage: 1948kB
-> Seq Scan on coupons_merchant (cost=0.00..1204.56 rows=24956 width=331) (actual time=0.003..4.681 rows=23911 loops=1)
This is a bad execution plan, as the join could be done after the left table has been filtered, ordered and limited. When I remove id from the ORDER BY, it generates an efficient plan, which basically could have been used for the previous query as well.
Limit (cost=0.57..8.84 rows=5 width=600) (actual time=0.013..0.029 rows=5 loops=1)
-> Nested Loop Left Join (cost=0.57..6650.48 rows=4022 width=600) (actual time=0.012..0.028 rows=5 loops=1)
-> Index Scan using coupons_cou_end_dat_a8d5b7_btree on coupons_coupon (cost=0.28..1015.77 rows=4022 width=261) (actual time=0.007..0.010 rows=5 loops=1)
Index Cond: (end_date > '2020-07-10 09:10:28.10198+00'::timestamp with time zone)
Filter: published
-> Index Scan using coupons_merchant_pkey on coupons_merchant (cost=0.29..1.40 rows=1 width=331) (actual time=0.003..0.003 rows=1 loops=5)
Index Cond: ((slug)::text = (coupons_coupon.merchant_id)::text)
Why is this happening? Can the optimizer be nudged to use a similar plan for the former query?
I'm using Postgres 12.
v13 of PostgreSQL, which should be released in the next few months, implements incremental sorting: it can read rows in a pre-sorted order based on prefix columns, then sort just the ties on those prefix column(s) by the remaining column(s), in order to get a complete sort based on more columns than the index provides. I think that will do more or less what you want.
Limit (cost=2.46..2.99 rows=5 width=21)
-> Incremental Sort (cost=2.46..405.58 rows=3850 width=21)
Sort Key: coupons_coupon.end_date, coupons_coupon.id
Presorted Key: coupons_coupon.end_date
-> Nested Loop Left Join (cost=0.31..253.48 rows=3850 width=21)
-> Index Scan using coupons_coupon_end_date_idx on coupons_coupon (cost=0.15..54.71 rows=302 width=17)
Index Cond: (end_date > '2020-07-10 05:10:28.10198-04'::timestamp with time zone)
Filter: published
-> Index Only Scan using coupons_merchant_slug_idx on coupons_merchant (cost=0.15..0.53 rows=13 width=4)
Index Cond: (slug = coupons_coupon.merchant_id)
Of course, just adding "id" to the current index will work under currently released versions, and even under version 13 it should be more efficient to have the index fully order the rows in the way you need them.
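A sketch of that extended index, using the column names from the query (the index name is made up):
CREATE INDEX coupons_coupon_end_date_id_idx
ON coupons_coupon (end_date, id);
With both sort keys covered, the planner can scan the index to produce rows already in (end_date, id) order and stop as soon as the LIMIT is satisfied.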
I have a SQL query:
SELECT * FROM sharescheduledjob WHERE sharescheduledjob.status = 'SCHEDULED' ORDER BY id ASC LIMIT 1
With LIMIT 1 it is very slow:
Limit (cost=0.43..46.43 rows=1 width=8) (actual time=3490.958..3490.959 rows=1 loops=1)
-> Index Scan using sharescheduledjob_pkey on sharescheduledjob sharesched0_ (cost=0.43..171383.41 rows=3726 width=8) (actual time=3490.956..3490.956 rows=1 loops=1)
Filter: ((status)::text = 'SCHEDULED'::text)
Rows Removed by Filter: 6058511
Total runtime: 3490.985 ms
But with LIMIT 100 it's pretty fast:
Limit (cost=248.04..248.29 rows=100 width=8) (actual time=12.968..12.994 rows=100 loops=1)
-> Sort (cost=248.04..257.36 rows=3726 width=8) (actual time=12.966..12.978 rows=100 loops=1)
Sort Key: id
Sort Method: top-N heapsort Memory: 29kB
-> Index Scan using sharescheduledjob_status on sharescheduledjob sharesched0_ (cost=0.43..105.64 rows=3726 width=8) (actual time=0.044..8.636 rows=9284 loops=1)
Index Cond: ((status)::text = 'SCHEDULED'::text)
Total runtime: 13.042 ms
Is it possible to change database settings to force it to look up the status first?
I already tried
ANALYZE sharescheduledjob
but it didn't help.
And it is hard to change the query, because it's generated by Hibernate from HQL queries.
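For what it's worth, a common workaround for this pattern (a suggestion, not from the original thread; the index name is hypothetical) is a multicolumn index that both filters and orders:
CREATE INDEX idx_sharescheduledjob_status_id
ON sharescheduledjob (status, id);
An equality condition on the leading column status returns rows already sorted by id, so the LIMIT 1 plan can stop after the first match instead of filtering its way through the primary key index.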
As part of a query (one side of a join actually)
SELECT DISTINCT ON (shop_id) shop_id, id, color
FROM products
ORDER BY shop_id, id
There are btree indexes on shop_id and id, but for some reason they are not used.
EXPLAIN ANALYZE says:
-> Unique (cost=724198.71..742348.37 rows=360949 width=71) (actual time=179157.101..195998.646 rows=1673170 loops=1)
-> Sort (cost=724198.71..733273.54 rows=3629931 width=71) (actual time=179157.095..191853.377 rows=3599644 loops=1)
Sort Key: products.shop_id, products.id
Sort Method: external merge Disk: 285064kB
-> Seq Scan on products (cost=0.00..328690.31 rows=3629931 width=71) (actual time=0.025..7575.905 rows=3629713 loops=1)
I also tried creating a multicolumn btree index on both shop_id and id, but it wasn't used. (Maybe I did it wrong and would have had to restart PostgreSQL and everything?)
How can I accelerate this query (with an index or something)?
A multicolumn index on all three columns ("CREATE INDEX products_idx ON products USING btree (shop_id, id, color);") doesn't get used either.
I experimented, and if I set a LIMIT of 10000, it will be used:
Limit (cost=0.00..161337.91 rows=10000 width=14) (actual time=0.043..15.973 rows=10000 loops=1)
-> Unique (cost=0.00..2925620.98 rows=181335 width=14) (actual time=0.042..15.249 rows=10000 loops=1)
-> Index Scan using products_idx on products (cost=0.00..2922753.69 rows=1146917 width=14) (actual time=0.041..12.927 rows=14004 loops=1)
Total runtime: 16.293 ms
There are around 3*10^6 entries (3 million)
For a larger LIMIT, it uses a sequential scan again :(
Limit (cost=213533.52..215114.73 rows=50000 width=14) (actual time=816.580..835.075 rows=50000 loops=1)
-> Unique (cost=213533.52..219268.11 rows=181335 width=14) (actual time=816.578..831.963 rows=50000 loops=1)
-> Sort (cost=213533.52..216400.81 rows=1146917 width=14) (actual time=816.576..823.034 rows=80830 loops=1)
Sort Key: shop_id, id
Sort Method: quicksort Memory: 107455kB
-> Seq Scan on products (cost=0.00..98100.17 rows=1146917 width=14) (actual time=0.019..296.867 rows=1146917 loops=1)
Total runtime: 840.788 ms
(I also had to raise the work_mem here to 128MB, otherwise there would be an external merge which takes even longer)
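For reference, that can be done per session with something like:
SET work_mem = '128MB';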
I have to optimize queries by tuning basic PostgreSQL server configuration parameters. In the documentation I came across the work_mem parameter. I then checked how changing this parameter would influence the performance of my query (which uses a sort). I measured query execution time with various work_mem settings and was very disappointed.
The table on which I perform my query contains 10,000,000 rows and there are 430 MB of data to sort. (Sort Method: external merge Disk: 430112kB).
With work_mem = 1MB, EXPLAIN output is:
Total runtime: 29950.571 ms (sort takes about 19300 ms).
Sort (cost=4032588.78..4082588.66 rows=19999954 width=8)
(actual time=22577.149..26424.951 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
With work_mem = 5MB:
Total runtime: 36282.729 ms (sort: 25400 ms).
Sort (cost=3485713.78..3535713.66 rows=19999954 width=8)
(actual time=25062.383..33246.561 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
With work_mem = 64MB:
Total runtime: 42566.538 ms (sort: 31000 ms).
Sort (cost=3212276.28..3262276.16 rows=19999954 width=8)
(actual time=28599.611..39454.279 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
Can anyone explain why performance gets worse? Or suggest any other methods to make query execution faster by changing server parameters?
My query (I know it's not optimal, but I have to benchmark this kind of query):
SELECT n
FROM (
SELECT n + 1 AS n FROM table_name
EXCEPT
SELECT n FROM table_name) AS q1
ORDER BY n DESC;
Full execution plan:
Sort (cost=5805421.81..5830421.75 rows=9999977 width=8) (actual time=30405.682..30405.682 rows=1 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 25kB
-> Subquery Scan q1 (cost=4032588.78..4232588.32 rows=9999977 width=8) (actual time=30405.636..30405.637 rows=1 loops=1)
-> SetOp Except (cost=4032588.78..4132588.55 rows=9999977 width=8) (actual time=30405.634..30405.634 rows=1 loops=1)
-> Sort (cost=4032588.78..4082588.66 rows=19999954 width=8) (actual time=23046.478..27733.020 rows=20000000 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 430104kB
-> Append (cost=0.00..513495.02 rows=19999954 width=8) (actual time=0.040..8191.185 rows=20000000 loops=1)
-> Subquery Scan "*SELECT* 1" (cost=0.00..269247.48 rows=9999977 width=8) (actual time=0.039..3651.506 rows=10000000 loops=1)
-> Seq Scan on table_name (cost=0.00..169247.71 rows=9999977 width=8) (actual time=0.038..2258.323 rows=10000000 loops=1)
-> Subquery Scan "*SELECT* 2" (cost=0.00..244247.54 rows=9999977 width=8) (actual time=0.008..2697.546 rows=10000000 loops=1)
-> Seq Scan on table_name (cost=0.00..144247.77 rows=9999977 width=8) (actual time=0.006..1079.561 rows=10000000 loops=1)
Total runtime: 30496.100 ms
I posted your query plan on explain.depesz.com; have a look.
The query planner's estimates are terribly wrong in some places.
Have you run ANALYZE recently?
Read the chapters in the manual on Statistics Used by the Planner and Planner Cost Constants. Pay special attention to the chapters on random_page_cost and default_statistics_target.
You might try:
ALTER TABLE diplomas ALTER COLUMN number SET STATISTICS 1000;
ANALYZE diplomas;
Or go even higher for a table with 10M rows. It depends on the data distribution and the actual queries. Experiment. The default is 100, the maximum is 10000.
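To check the instance-wide default before overriding it per column:
SHOW default_statistics_target;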
For a database of that size, just 1 or 5 MB of work_mem is generally not enough. Read the Postgres Wiki page on Tuning Postgres that @aleroot linked to.
As your query needs 430104kB of memory on disk according to the EXPLAIN output, you have to set work_mem to something like 500MB or more to allow in-memory sorting. The in-memory representation of data needs somewhat more space than the on-disk representation. You may be interested in what Tom Lane posted on that matter recently.
Increasing work_mem by just a little, like you tried, won't help much and can even slow things down. Setting it high globally can hurt, too, especially with concurrent access: multiple sessions might starve one another for resources, since allocating more memory for one purpose takes it away from another if the resource is limited. The best setup depends on the complete situation.
To avoid side effects, only set it high enough locally in your session, and temporarily for the query:
SET work_mem = '500MB';
Reset it to your default afterwards:
RESET work_mem;
Or use SET LOCAL to set it just for the current transaction to begin with.
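A minimal sketch of that variant; SET LOCAL reverts automatically when the transaction ends:
BEGIN;
SET LOCAL work_mem = '500MB';
-- run the sort-heavy query here
COMMIT;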
SET search_path='tmp';
-- Generate some data ...
-- DROP table tmp.table_name ;
-- CREATE table tmp.table_name ( n INTEGER NOT NULL PRIMARY KEY);
-- INSERT INTO tmp.table_name(n) SELECT generate_series(1,1000);
-- DELETE FROM tmp.table_name WHERE random() < 0.05 ;
The EXCEPT query is equivalent to the following NOT EXISTS form, which generates a different query plan (but the same results) here (9.0.1beta something):
-- EXPLAIN ANALYZE
WITH q1 AS (
SELECT 1+tn.n AS n
FROM table_name tn
WHERE NOT EXISTS (
SELECT * FROM table_name nx
WHERE nx.n = tn.n+1
)
)
SELECT q1.n
FROM q1
ORDER BY q1.n DESC;
(a version with a recursive CTE might also be possible :-)
EDIT: the query plans, all for 100K records with 0.2% deleted.
Original query:
------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=36461.76..36711.20 rows=99778 width=4) (actual time=2682.600..2682.917 rows=222 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 22kB
-> Subquery Scan q1 (cost=24984.41..26979.97 rows=99778 width=4) (actual time=2003.047..2682.036 rows=222 loops=1)
-> SetOp Except (cost=24984.41..25982.19 rows=99778 width=4) (actual time=2003.042..2681.389 rows=222 loops=1)
-> Sort (cost=24984.41..25483.30 rows=199556 width=4) (actual time=2002.584..2368.963 rows=199556 loops=1)
Sort Key: "*SELECT* 1".n
Sort Method: external merge Disk: 3512kB
-> Append (cost=0.00..5026.57 rows=199556 width=4) (actual time=0.071..1452.838 rows=199556 loops=1)
-> Subquery Scan "*SELECT* 1" (cost=0.00..2638.01 rows=99778 width=4) (actual time=0.067..470.652 rows=99778 loops=1)
-> Seq Scan on table_name (cost=0.00..1640.22 rows=99778 width=4) (actual time=0.063..178.365 rows=99778 loops=1)
-> Subquery Scan "*SELECT* 2" (cost=0.00..2388.56 rows=99778 width=4) (actual time=0.014..429.224 rows=99778 loops=1)
-> Seq Scan on table_name (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.011..143.320 rows=99778 loops=1)
Total runtime: 2684.840 ms
(14 rows)
NOT EXISTS-version with CTE:
----------------------------------------------------------------------------------------------------------------------
Sort (cost=6394.60..6394.60 rows=1 width=4) (actual time=699.190..699.498 rows=222 loops=1)
Sort Key: q1.n
Sort Method: quicksort Memory: 22kB
CTE q1
-> Hash Anti Join (cost=2980.01..6394.57 rows=1 width=4) (actual time=312.262..697.985 rows=222 loops=1)
Hash Cond: ((tn.n + 1) = nx.n)
-> Seq Scan on table_name tn (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.013..143.210 rows=99778 loops=1)
-> Hash (cost=1390.78..1390.78 rows=99778 width=4) (actual time=309.923..309.923 rows=99778 loops=1)
-> Seq Scan on table_name nx (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.007..144.102 rows=99778 loops=1)
-> CTE Scan on q1 (cost=0.00..0.02 rows=1 width=4) (actual time=312.270..698.742 rows=222 loops=1)
Total runtime: 700.040 ms
(11 rows)
NOT EXISTS-version without CTE
--------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=6394.58..6394.58 rows=1 width=4) (actual time=692.313..692.625 rows=222 loops=1)
Sort Key: ((1 + tn.n))
Sort Method: quicksort Memory: 22kB
-> Hash Anti Join (cost=2980.01..6394.57 rows=1 width=4) (actual time=308.046..691.849 rows=222 loops=1)
Hash Cond: ((tn.n + 1) = nx.n)
-> Seq Scan on table_name tn (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.014..142.781 rows=99778 loops=1)
-> Hash (cost=1390.78..1390.78 rows=99778 width=4) (actual time=305.732..305.732 rows=99778 loops=1)
-> Seq Scan on table_name nx (cost=0.00..1390.78 rows=99778 width=4) (actual time=0.007..143.783 rows=99778 loops=1)
Total runtime: 693.139 ms
(9 rows)
My conclusion is that the "NOT EXISTS" versions cause Postgres to produce better plans.