Postgres GIN index on jsonb column not working

I have two jsonb columns in my table: etl_value and user_value. I have a jsonb_path_ops GIN index on each column:
"ix_variable_instance_versioned_etl_value" gin (etl_value jsonb_path_ops)
"ix_variable_instance_versioned_user_value" gin (user_value jsonb_path_ops)
The following query on user_value uses the index:
=> explain analyze select count(*) from clinical.variable_instances_versioned where user_value #? '$.coding[*].code ? (# like_regex "C" flag "i")';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1650284.49..1650284.50 rows=1 width=8) (actual time=691.730..691.731 rows=1 loops=1)
-> Bitmap Heap Scan on variable_instances_versioned (cost=1649812.91..1650284.19 rows=118 width=0) (actual time=118.388..691.377 rows=4268 loops=1)
Recheck Cond: (user_value #? '$."coding"[*]."code"?(# like_regex "C" flag "i")'::jsonpath)
Rows Removed by Index Recheck: 1248999
Heap Blocks: exact=58317 lossy=35532
-> Bitmap Index Scan on ix_variable_instance_versioned_user_value (cost=0.00..1649812.88 rows=118 width=0) (actual time=107.017..107.017 rows=184334 loops=1)
Index Cond: (user_value #? '$."coding"[*]."code"?(# like_regex "C" flag "i")'::jsonpath)
Planning Time: 0.098 ms
Execution Time: 693.030 ms
(9 rows)
But the exact same query on etl_value does not use the index. Why would that be?
=> explain analyze select count(*) from clinical.variable_instances_versioned where etl_value #? '$.coding[*].code ? (# like_regex "C" flag "i")';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=3558858.31..3558858.32 rows=1 width=8) (actual time=12911.026..12912.472 rows=1 loops=1)
-> Gather (cost=3558858.10..3558858.31 rows=2 width=8) (actual time=12910.955..12912.467 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=3557858.10..3557858.11 rows=1 width=8) (actual time=12908.707..12908.707 rows=1 loops=3)
-> Parallel Seq Scan on variable_instances_versioned (cost=0.00..3556075.62 rows=712989 width=0) (actual time=0.053..12871.096 rows=509619 loops=3)
Filter: (etl_value #? '$."coding"[*]."code"?(# like_regex "C" flag "i")'::jsonpath)
Rows Removed by Filter: 30108635
Planning Time: 0.188 ms
Execution Time: 12912.500 ms
(10 rows)

Related

Very slow search for NULL values with index

I have a Postgres table with ~50 columns and ~75 million records.
It has the following index among others:
"index_shipments_on_buyer_supplier_id" btree (buyer_supplier_id)
EXPLAIN shows it wants to use a sequential scan:
db=# EXPLAIN SELECT COUNT(*) FROM "shipments" WHERE (buyer_supplier_id IS NULL)
db-# ;
QUERY PLAN
--------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=15427130.32..15427130.33 rows=1 width=8)
-> Gather (cost=15427130.11..15427130.32 rows=2 width=8)
Workers Planned: 2
-> Partial Aggregate (cost=15426130.11..15426130.12 rows=1 width=8)
-> Parallel Seq Scan on shipments (cost=0.00..15354385.03 rows=28698029 width=0)
Filter: (buyer_supplier_id IS NULL)
(6 rows)
Now force use of the index:
db=# set enable_seqscan = false;
SET
db=# EXPLAIN SELECT COUNT(*) FROM "shipments" WHERE (buyer_supplier_id IS NULL);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=17314493.48..17314493.49 rows=1 width=8)
-> Gather (cost=17314493.26..17314493.47 rows=2 width=8)
Workers Planned: 2
-> Partial Aggregate (cost=17313493.26..17313493.27 rows=1 width=8)
-> Parallel Bitmap Heap Scan on shipments (cost=1922711.90..17241748.19 rows=28698029 width=0)
Recheck Cond: (buyer_supplier_id IS NULL)
-> Bitmap Index Scan on index_shipments_on_buyer_supplier_id (cost=0.00..1905493.08 rows=68875269 width=0)
Index Cond: (buyer_supplier_id IS NULL)
(8 rows)
db=# EXPLAIN ANALYZE SELECT COUNT(*) FROM "shipments" WHERE (buyer_supplier_id IS NULL);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=17314493.48..17314493.49 rows=1 width=8) (actual time=795551.977..795573.311 rows=1 loops=1)
-> Gather (cost=17314493.26..17314493.47 rows=2 width=8) (actual time=795528.063..795573.304 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=17313493.26..17313493.27 rows=1 width=8) (actual time=795519.276..795519.277 rows=1 loops=3)
-> Parallel Bitmap Heap Scan on shipments (cost=1922711.90..17241748.19 rows=28698029 width=0) (actual time=7642.771..794473.494 rows=5439073 loops=3)
Recheck Cond: (buyer_supplier_id IS NULL)
Rows Removed by Index Recheck: 10948389
Heap Blocks: exact=14343 lossy=3993510
-> Bitmap Index Scan on index_shipments_on_buyer_supplier_id (cost=0.00..1905493.08 rows=68875269 width=0) (actual time=7633.652..7633.652 rows=62174015 loops=1)
Index Cond: (buyer_supplier_id IS NULL)
Planning time: 0.102 ms
Execution time: 795573.347 ms
(13 rows)
I don't understand why getting a COUNT of NULL buyer_supplier_ids should be so taxing on the system. What am I missing here, and how can I make this count fast?
Postgres sorts NULLs last in indexes by default. See https://www.postgresql.org/docs/current/indexes-ordering.html for more info.
In your case, if the table has high cardinality for buyer_supplier_id, Postgres has to scroll through the entire index to look for NULLs, hence the planner may decide a seq scan is cheaper.
To fix this:
You can either recreate the index with the NULLS FIRST option, or create a partial index with a buyer_supplier_id IS NULL condition, as #a_horse_with_no_name mentioned; see the sketch below.
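For illustration, a minimal sketch of both options (table and column names are taken from the question; the index names here are made up):
-- Option 1: partial index containing only the NULL rows
CREATE INDEX shipments_buyer_supplier_id_null_idx
    ON shipments (buyer_supplier_id)
    WHERE buyer_supplier_id IS NULL;

-- Option 2: recreate the index with NULLs sorted first
CREATE INDEX shipments_buyer_supplier_id_nulls_first_idx
    ON shipments (buyer_supplier_id NULLS FIRST);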
Another thing to look into is index bloat. If this table is updated frequently and has not been vacuumed, the index may become bloated, reducing performance.
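If bloat is suspected, a vacuum is a cheap first step, and the index can be rebuilt if needed. A sketch (REINDEX ... CONCURRENTLY needs PostgreSQL 12 or later; older versions must use plain REINDEX, which blocks writes):
VACUUM (ANALYZE) shipments;
REINDEX INDEX CONCURRENTLY index_shipments_on_buyer_supplier_id;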

PostgreSQL: Sequential scan despite having indexes

I have the following two tables.
person_addresses
address_normalization
The person_addresses table has a field named address_id as its primary key, and address_normalization has a corresponding address_id field with an index on it.
Now, when I explain the following query, I see a sequential scan.
SELECT
count(*)
FROM
mp_member2.person_addresses pa
JOIN mp_member2.address_normalization an ON
an.address_id = pa.address_id
WHERE
an.sr_modification_time >= 1550692189468;
-- Result: 2654
Please refer to the following screenshot.
You see that there is a sequential scan after the hash join. I'm not sure I understand this part; why would a sequential scan follow a hash join?
And as seen in the query above, the number of records returned is also low.
Is this expected behaviour or am I doing something wrong?
Update #1: I also have indexes on the sr_modification_time fields of both tables.
Update #2: Full execution plan
Aggregate (cost=206944.74..206944.75 rows=1 width=0) (actual time=2807.844..2807.844 rows=1 loops=1)
Buffers: shared hit=4629 read=82217
-> Hash Join (cost=2881.95..206825.15 rows=47836 width=0) (actual time=0.775..2807.160 rows=2654 loops=1)
Hash Cond: (pa.address_id = an.address_id)
Buffers: shared hit=4629 read=82217
-> Seq Scan on person_addresses pa (cost=0.00..135924.93 rows=4911993 width=8) (actual time=0.005..1374.610 rows=4911993 loops=1)
Buffers: shared hit=4588 read=82217
-> Hash (cost=2432.05..2432.05 rows=35992 width=18) (actual time=0.756..0.756 rows=1005 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 41kB
Buffers: shared hit=41
-> Index Scan using mp_member2_address_normalization_mod_time on address_normalization an (cost=0.43..2432.05 rows=35992 width=18) (actual time=0.012..0.424 rows=1005 loops=1)
Index Cond: (sr_modification_time >= 1550692189468::bigint)
Buffers: shared hit=41
Planning time: 0.244 ms
Execution time: 2807.885 ms
Update #3: I tried with a newer timestamp and it used an index scan.
EXPLAIN (
ANALYZE
, buffers
, format TEXT
) SELECT
COUNT(*)
FROM
mp_member2.person_addresses pa
JOIN mp_member2.address_normalization an ON
an.address_id = pa.address_id
WHERE
an.sr_modification_time >= 1557507300342;
-- count: 1364
Query Plan:
Aggregate (cost=295.48..295.49 rows=1 width=0) (actual time=2.770..2.770 rows=1 loops=1)
Buffers: shared hit=1404
-> Nested Loop (cost=4.89..295.43 rows=19 width=0) (actual time=0.038..2.491 rows=1364 loops=1)
Buffers: shared hit=1404
-> Index Scan using mp_member2_address_normalization_mod_time on address_normalization an (cost=0.43..8.82 rows=14 width=18) (actual time=0.009..0.142 rows=341 loops=1)
Index Cond: (sr_modification_time >= 1557507300342::bigint)
Buffers: shared hit=14
-> Bitmap Heap Scan on person_addresses pa (cost=4.46..20.43 rows=4 width=8) (actual time=0.004..0.005 rows=4 loops=341)
Recheck Cond: (address_id = an.address_id)
Heap Blocks: exact=360
Buffers: shared hit=1390
-> Bitmap Index Scan on idx_mp_member2_person_addresses_address_id (cost=0.00..4.46 rows=4 width=0) (actual time=0.003..0.003 rows=4 loops=341)
Index Cond: (address_id = an.address_id)
Buffers: shared hit=1030
Planning time: 0.214 ms
Execution time: 2.816 ms
That is the expected behavior, because you don't have a suitable index for sr_modification_time, so after building the hash join the database has to scan the whole set to check each row's sr_modification_time value.
You should create either:
an index on (sr_modification_time),
or a composite index on (address_id, sr_modification_time); see the sketch below.
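For example, a sketch of both suggestions, assuming the indexes go on address_normalization, where the sr_modification_time filter is applied:
-- single-column index on the filter column
CREATE INDEX ON mp_member2.address_normalization (sr_modification_time);

-- or a composite index combining the join key and the filter column
CREATE INDEX ON mp_member2.address_normalization (address_id, sr_modification_time);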

Partition and Indexes

I have a table partitioned by quarter. The table name is data. The table has a couple of columns, including a date column, which has an index on it, created with:
create index on data (date);
Now I am querying the table:
justpremium=> EXPLAIN analyze SELECT sum(col_1) FROM data WHERE "date" BETWEEN '2018-12-01' AND '2018-12-31';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=355709.66..355709.67 rows=1 width=32) (actual time=577.072..577.072 rows=1 loops=1)
-> Gather (cost=355709.44..355709.65 rows=2 width=32) (actual time=577.005..578.418 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=354709.44..354709.45 rows=1 width=32) (actual time=573.255..573.256 rows=1 loops=3)
-> Append (cost=0.42..352031.07 rows=1071346 width=8) (actual time=15.286..524.604 rows=837204 loops=3)
-> Parallel Index Scan using data_date_idx on data (cost=0.42..8.44 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=3)
Index Cond: ((date >= '2018-12-01'::date) AND (date <= '2018-12-31'::date))
-> Parallel Seq Scan on data_y2018q4 (cost=0.00..352022.64 rows=1071345 width=8) (actual time=15.282..465.859 rows=837204 loops=3)
Filter: ((date >= '2018-12-01'::date) AND (date <= '2018-12-31'::date))
Rows Removed by Filter: 1479844
Planning time: 1.437 ms
Execution time: 578.465 ms
(13 rows)
We can see there is a Parallel Seq Scan on data_y2018q4. That seems normal to me: I have one quarterly partition, and I am querying a third of the whole partition, so a seq scan makes sense.
But now let's query the partition table directly:
justpremium=> EXPLAIN analyze SELECT sum(col_1) FROM data_y2018q4 WHERE "date" BETWEEN '2018-12-01' AND '2018-12-31';
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=286475.38..286475.39 rows=1 width=32) (actual time=277.830..277.830 rows=1 loops=1)
-> Gather (cost=286475.16..286475.37 rows=2 width=32) (actual time=277.760..279.194 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=285475.16..285475.17 rows=1 width=32) (actual time=275.950..275.950 rows=1 loops=3)
-> Parallel Index Scan using data_y2018q4_date_idx on data_y2018q4 (cost=0.43..282796.80 rows=1071345 width=8) (actual time=0.022..227.687 rows=837204 loops=3)
Index Cond: ((date >= '2018-12-01'::date) AND (date <= '2018-12-31'::date))
Planning time: 0.187 ms
Execution time: 279.233 ms
(9 rows)
Now I get an Index Scan using data_y2018q4_date_idx, and the whole query is twice as fast: 279.233 ms compared to 578.465 ms. What is the explanation for this? How can I force the planner to use the index scan when querying the data table, and get the two-times-better timing?

Order by ASC 100x faster than Order by DESC ? Why?

I have a complex query generated by Hibernate for JBPM. I can't really modify it, and I'm trying to optimize it as much as possible.
I found out that ORDER BY ... DESC is way slower than ORDER BY ... ASC. Do you have any idea why?
PostgreSQL version: 9.4
Schema: https://pastebin.com/qNZhrbef
Query :
select
taskinstan0_.ID_ as ID1_27_,
taskinstan0_.VERSION_ as VERSION3_27_,
taskinstan0_.NAME_ as NAME4_27_,
taskinstan0_.DESCRIPTION_ as DESCRIPT5_27_,
taskinstan0_.ACTORID_ as ACTORID6_27_,
taskinstan0_.CREATE_ as CREATE7_27_,
taskinstan0_.START_ as START8_27_,
taskinstan0_.END_ as END9_27_,
taskinstan0_.DUEDATE_ as DUEDATE10_27_,
taskinstan0_.PRIORITY_ as PRIORITY11_27_,
taskinstan0_.ISCANCELLED_ as ISCANCE12_27_,
taskinstan0_.ISSUSPENDED_ as ISSUSPE13_27_,
taskinstan0_.ISOPEN_ as ISOPEN14_27_,
taskinstan0_.ISSIGNALLING_ as ISSIGNA15_27_,
taskinstan0_.ISBLOCKING_ as ISBLOCKING16_27_,
taskinstan0_.LOCKED as LOCKED27_,
taskinstan0_.QUEUE as QUEUE27_,
taskinstan0_.TASK_ as TASK19_27_,
taskinstan0_.TOKEN_ as TOKEN20_27_,
taskinstan0_.PROCINST_ as PROCINST21_27_,
taskinstan0_.SWIMLANINSTANCE_ as SWIMLAN22_27_,
taskinstan0_.TASKMGMTINSTANCE_ as TASKMGM23_27_
from JBPM_TASKINSTANCE taskinstan0_, JBPM_VARIABLEINSTANCE stringinst1_, JBPM_PROCESSINSTANCE processins2_, JBPM_VARIABLEINSTANCE variablein3_
where stringinst1_.CLASS_='S'
and taskinstan0_.PROCINST_=processins2_.ID_
and taskinstan0_.ID_=variablein3_.TASKINSTANCE_
and variablein3_.NAME_ = 'NIR'
and taskinstan0_.QUEUE = 'ERT_TPS'
and (processins2_.ORGAPATH_ like '/ERT%')
and taskinstan0_.ISOPEN_= 't'
and variablein3_.ID_=stringinst1_.ID_
order by stringinst1_.STRINGVALUE_ ASC limit '10';
Explain result for ASC:
Limit (cost=1.71..11652.93 rows=10 width=646) (actual time=6.588..82.407 rows=10 loops=1)
-> Nested Loop (cost=1.71..6215929.27 rows=5335 width=646) (actual time=6.587..82.402 rows=10 loops=1)
-> Nested Loop (cost=1.29..6213170.78 rows=5335 width=646) (actual time=6.578..82.363 rows=10 loops=1)
-> Nested Loop (cost=1.00..6159814.66 rows=153812 width=13) (actual time=0.537..82.130 rows=149 loops=1)
-> Index Scan Backward using totoidx10 on jbpm_variableinstance stringinst1_ (cost=0.56..558481.07 rows=11199905 width=13) (actual time=0.018..11.914 rows=40182 loops=1)
Filter: (class_ = 'S'::bpchar)
-> Index Scan using jbpm_variableinstance_pkey on jbpm_variableinstance variablein3_ (cost=0.43..0.49 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=40182)
Index Cond: (id_ = stringinst1_.id_)
Filter: ((name_)::text = 'NIR'::text)
Rows Removed by Filter: 1
-> Index Scan using jbpm_taskinstance_pkey on jbpm_taskinstance taskinstan0_ (cost=0.29..0.34 rows=1 width=641) (actual time=0.001..0.001 rows=0 loops=149)
Index Cond: (id_ = variablein3_.taskinstance_)
Filter: (isopen_ AND ((queue)::text = 'ERT_TPS'::text))
Rows Removed by Filter: 0
-> Index Only Scan using idx_procin_2 on jbpm_processinstance processins2_ (cost=0.42..0.51 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=10)
Index Cond: (id_ = taskinstan0_.procinst_)
Filter: ((orgapath_)::text ~~ '/ERT%'::text)
Heap Fetches: 0
Planning time: 2.598 ms
Execution time: 82.513 ms
Explain result for DESC:
Limit (cost=1.71..11652.93 rows=10 width=646) (actual time=8144.871..8144.986 rows=10 loops=1)
-> Nested Loop (cost=1.71..6215929.27 rows=5335 width=646) (actual time=8144.870..8144.984 rows=10 loops=1)
-> Nested Loop (cost=1.29..6213170.78 rows=5335 width=646) (actual time=8144.858..8144.951 rows=10 loops=1)
-> Nested Loop (cost=1.00..6159814.66 rows=153812 width=13) (actual time=8144.838..8144.910 rows=20 loops=1)
-> Index Scan using totoidx10 on jbpm_variableinstance stringinst1_ (cost=0.56..558481.07 rows=11199905 width=13) (actual time=0.066..2351.727 rows=2619671 loops=1)
Filter: (class_ = 'S'::bpchar)
Rows Removed by Filter: 906237
-> Index Scan using jbpm_variableinstance_pkey on jbpm_variableinstance variablein3_ (cost=0.43..0.49 rows=1 width=16) (actual time=0.002..0.002 rows=0 loops=2619671)
Index Cond: (id_ = stringinst1_.id_)
Filter: ((name_)::text = 'NIR'::text)
Rows Removed by Filter: 1
-> Index Scan using jbpm_taskinstance_pkey on jbpm_taskinstance taskinstan0_ (cost=0.29..0.34 rows=1 width=641) (actual time=0.002..0.002 rows=0 loops=20)
Index Cond: (id_ = variablein3_.taskinstance_)
Filter: (isopen_ AND ((queue)::text = 'ERT_TPS'::text))
-> Index Only Scan using idx_procin_2 on jbpm_processinstance processins2_ (cost=0.42..0.51 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=10)
Index Cond: (id_ = taskinstan0_.procinst_)
Filter: ((orgapath_)::text ~~ '/ERT%'::text)
Heap Fetches: 0
Planning time: 2.080 ms
Execution time: 8145.053 ms
Table info:
jbpm_variableinstance 12100592 rows
jbpm_taskinstance 69913 rows
jbpm_processinstance 97546 rows
If you have any ideas, thanks!
This typically only happens when OFFSET and / or LIMIT are involved (as is the case here).
The key difference is this line in the EXPLAIN output for the query with DESC:
Rows Removed by Filter: 906237
Meaning that while the first 10 rows in the index totoidx10 match when scanning backwards (which matches your ASC ordering, obviously), Postgres has to filter ~ 900k rows before it finally finds qualifying rows when scanning the same index forward.
A matching multicolumn index (with the right sort order) might help a lot.
Or, since Postgres chooses an unfavorable query plan, maybe just updated (or more detailed) table statistics or cost settings.
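For illustration, one possible shape for such an index on jbpm_variableinstance, matching the class_ filter and the stringvalue_ ordering of this query (a sketch only, not verified against the actual schema or data):
CREATE INDEX ON jbpm_variableinstance (class_, stringvalue_);
With class_ as a leading equality column, Postgres can walk the stringvalue_ part of the index in either direction, so both ASC and DESC should stop after the first matching rows instead of filtering hundreds of thousands of non-matching ones.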
Related:
Keep PostgreSQL from sometimes choosing a bad query plan
Optimizing queries on a range of timestamps (two columns)

Postgres Slow group by query with max

I am using Postgres 9.1 and I have a table with about 3.5M rows of eventtype (varchar) and eventtime (timestamp), along with some other fields. There are only about 20 different eventtypes, and the event times span about 4 years.
I want to get the last timestamp of each event type. If I run a query like:
select eventtype, max(eventtime)
from allevents
group by eventtype
it takes around 20 seconds. Selecting distinct eventtypes is equally slow. The query plan shows a full sequential scan of the table, so it's not surprising that it is slow.
Explain analyse for the above query gives:
HashAggregate (cost=84591.47..84591.68 rows=21 width=21) (actual time=20918.131..20918.141 rows=21 loops=1)
-> Seq Scan on allevents (cost=0.00..66117.98 rows=3694698 width=21) (actual time=0.021..4831.793 rows=3694392 loops=1)
Total runtime: 20918.204 ms
If I add a where clause to select a specific eventtype, it takes anywhere from 40ms to 150ms which is at least decent.
Query plan when selecting specific eventtype:
GroupAggregate (cost=343.87..24942.71 rows=1 width=21) (actual time=98.397..98.397 rows=1 loops=1)
-> Bitmap Heap Scan on allevents (cost=343.87..24871.07 rows=14325 width=21) (actual time=6.820..89.610 rows=19736 loops=1)
Recheck Cond: ((eventtype)::text = 'TEST_EVENT'::text)
-> Bitmap Index Scan on allevents_idx2 (cost=0.00..340.28 rows=14325 width=0) (actual time=6.121..6.121 rows=19736 loops=1)
Index Cond: ((eventtype)::text = 'TEST_EVENT'::text)
Total runtime: 98.482 ms
Primary key is (eventtype, eventtime). I also have the following indexes:
allevents_idx (eventtime desc, eventtype)
allevents_idx2 (eventtype).
How can I speed up the query?
Results of the query plan for the correlated subquery suggested by #denis below, with 14 manually entered values:
Function Scan on unnest val (cost=0.00..185.40 rows=100 width=32) (actual time=0.121..8983.134 rows=14 loops=1)
SubPlan 2
-> Result (cost=1.83..1.84 rows=1 width=0) (actual time=641.644..641.645 rows=1 loops=14)
InitPlan 1 (returns $1)
-> Limit (cost=0.00..1.83 rows=1 width=8) (actual time=641.640..641.641 rows=1 loops=14)
-> Index Scan using allevents_idx on allevents (cost=0.00..322672.36 rows=175938 width=8) (actual time=641.638..641.638 rows=1 loops=14)
Index Cond: ((eventtime IS NOT NULL) AND ((eventtype)::text = val.val))
Total runtime: 8983.203 ms
Using the recursive query suggested by #jjanes, the query runs between 4 and 5 seconds with the following plan:
CTE Scan on t (cost=260.32..448.63 rows=101 width=32) (actual time=0.146..4325.598 rows=22 loops=1)
CTE t
-> Recursive Union (cost=2.52..260.32 rows=101 width=32) (actual time=0.075..1.449 rows=22 loops=1)
-> Result (cost=2.52..2.53 rows=1 width=0) (actual time=0.074..0.074 rows=1 loops=1)
InitPlan 1 (returns $1)
-> Limit (cost=0.00..2.52 rows=1 width=13) (actual time=0.070..0.071 rows=1 loops=1)
-> Index Scan using allevents_idx2 on allevents (cost=0.00..9315751.37 rows=3696851 width=13) (actual time=0.070..0.070 rows=1 loops=1)
Index Cond: ((eventtype)::text IS NOT NULL)
-> WorkTable Scan on t (cost=0.00..25.58 rows=10 width=32) (actual time=0.059..0.060 rows=1 loops=22)
Filter: (eventtype IS NOT NULL)
SubPlan 3
-> Result (cost=2.53..2.54 rows=1 width=0) (actual time=0.059..0.059 rows=1 loops=21)
InitPlan 2 (returns $3)
-> Limit (cost=0.00..2.53 rows=1 width=13) (actual time=0.057..0.057 rows=1 loops=21)
-> Index Scan using allevents_idx2 on allevents (cost=0.00..3114852.66 rows=1232284 width=13) (actual time=0.055..0.055 rows=1 loops=21)
Index Cond: (((eventtype)::text IS NOT NULL) AND ((eventtype)::text > t.eventtype))
SubPlan 6
-> Result (cost=1.83..1.84 rows=1 width=0) (actual time=196.549..196.549 rows=1 loops=22)
InitPlan 5 (returns $6)
-> Limit (cost=0.00..1.83 rows=1 width=8) (actual time=196.546..196.546 rows=1 loops=22)
-> Index Scan using allevents_idx on allevents (cost=0.00..322946.21 rows=176041 width=8) (actual time=196.544..196.544 rows=1 loops=22)
Index Cond: ((eventtime IS NOT NULL) AND ((eventtype)::text = t.eventtype))
Total runtime: 4325.694 ms
What you need is a "skip scan" or "loose index scan". PostgreSQL's planner does not yet implement those automatically, but you can trick it into using one by using a recursive query.
WITH RECURSIVE t AS (
    SELECT min(eventtype) AS eventtype FROM allevents
    UNION ALL
    SELECT (SELECT min(eventtype) AS eventtype FROM allevents WHERE eventtype > t.eventtype)
    FROM t WHERE t.eventtype IS NOT NULL
)
SELECT eventtype,
       (SELECT max(eventtime) FROM allevents WHERE eventtype = t.eventtype)
FROM t;
There may be a way to collapse the max(eventtime) into the recursive query rather than doing it outside that query, but if so I have not hit upon it.
This needs an index on (eventtype, eventtime) in order to be efficient. You can have it be DESC on the eventtime, but that is not necessary. This is efficient only if eventtype has only a few distinct values (21 of them, in your case).
Based on the question you already have the relevant index.
If upgrading to Postgres 9.3 or an index on (eventtype, eventtime desc) doesn't make a difference, this is a case where rewriting the query so it uses a correlated subquery works very well if you can enumerate all of the event types manually:
select val as eventtype,
(select max(eventtime)
from allevents
where allevents.eventtype = val
) as eventtime
from unnest('{type1,type2,…}'::text[]) as val;
Here are the plans I get when running similar queries:
denis=# select version();
version
-----------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 9.3.1 on x86_64-apple-darwin11.4.2, compiled by Apple LLVM version 4.2 (clang-425.0.28) (based on LLVM 3.2svn), 64-bit
(1 row)
Test data:
denis=# create table test (evttype int, evttime timestamp, primary key (evttype, evttime));
CREATE TABLE
denis=# insert into test (evttype, evttime) select i, now() + (i % 3) * interval '1 min' - j * interval '1 sec' from generate_series(1,10) i, generate_series(1,10000) j;
INSERT 0 100000
denis=# create index on test (evttime, evttype);
CREATE INDEX
denis=# vacuum analyze test;
VACUUM
First query:
denis=# explain analyze select evttype, max(evttime) from test group by evttype;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=2041.00..2041.10 rows=10 width=12) (actual time=54.983..54.987 rows=10 loops=1)
-> Seq Scan on test (cost=0.00..1541.00 rows=100000 width=12) (actual time=0.009..15.954 rows=100000 loops=1)
Total runtime: 55.045 ms
(3 rows)
Second query:
denis=# explain analyze select val as evttype, (select max(evttime) from test where test.evttype = val) as evttime from unnest('{1,2,3,4,5,6,7,8,9,10}'::int[]) val;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Function Scan on unnest val (cost=0.00..48.39 rows=100 width=4) (actual time=0.086..0.292 rows=10 loops=1)
SubPlan 2
-> Result (cost=0.46..0.47 rows=1 width=0) (actual time=0.024..0.024 rows=1 loops=10)
InitPlan 1 (returns $1)
-> Limit (cost=0.42..0.46 rows=1 width=8) (actual time=0.021..0.021 rows=1 loops=10)
-> Index Only Scan Backward using test_pkey on test (cost=0.42..464.42 rows=10000 width=8) (actual time=0.019..0.019 rows=1 loops=10)
Index Cond: ((evttype = val.val) AND (evttime IS NOT NULL))
Heap Fetches: 0
Total runtime: 0.370 ms
(9 rows)
An index on (eventtype, eventtime desc) should help, or a reindex of the primary key index. I would also recommend replacing the type of eventtype with an enum (if the number of types is fixed) or int/smallint. This will decrease the size of the data and the indexes, so queries will run faster.
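A sketch of those suggestions (the index name, the enum name, and any enum values beyond the TEST_EVENT seen in the question are placeholders):
CREATE INDEX allevents_type_time_idx ON allevents (eventtype, eventtime DESC);

-- if the ~20 event types are fixed, an enum shrinks both the data and the indexes:
CREATE TYPE allevents_eventtype AS ENUM ('TEST_EVENT' /* , ... the remaining types */);
ALTER TABLE allevents
    ALTER COLUMN eventtype TYPE allevents_eventtype
    USING eventtype::allevents_eventtype;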