I recently ran a full vacuum on a range of tables, and a specific monitoring query suddenly became really slow. It is a query we use for monitoring, so it had been happily running every 10 seconds for the past 2 months, but with the performance hit after the vacuum, most dashboards using it are down, and load ramps up until the server runs out of connections or resources.
Unfortunately I do not have the explain output from before the vacuum.
Without a date restriction:
explain (analyze,timing) select min(id) from iqsim_cdrs;
Result (cost=0.64..0.65 rows=1 width=8) (actual time=6.222..6.222 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Limit (cost=0.57..0.64 rows=1 width=8) (actual time=6.216..6.217 rows=1 loops=1)
-> Index Only Scan using iqsim_cdrs_pkey on iqsim_cdrs (cost=0.57..34265771.63 rows=531041357 width=8) (actual time=6.213..6.213 rows=1 loops=1)
Index Cond: (id IS NOT NULL)
Heap Fetches: 1
Planning time: 1.876 ms
Execution time: 6.313 ms
(8 rows)
With a date restriction:
explain (analyze,timing) select min(id) from iqsim_cdrs where timestamp < '2019-01-01 00:00:00';
Result (cost=7.38..7.39 rows=1 width=8) (actual time=363763.144..363763.145 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Limit (cost=0.57..7.38 rows=1 width=8) (actual time=363763.137..363763.138 rows=1 loops=1)
-> Index Scan using iqsim_cdrs_pkey on iqsim_cdrs (cost=0.57..35593384.68 rows=5227047 width=8) (actual time=363763.133..363763.133 rows=1 loops=1)
Index Cond: (id IS NOT NULL)
Filter: ("timestamp" < '2019-01-01 00:00:00'::timestamp without time zone)
Rows Removed by Filter: 488693105
Planning time: 7.707 ms
Execution time: 363763.219 ms
(9 rows)
I'm not sure what could have caused this; I can only presume that before the full vacuum it used the index on timestamp?
UPDATE
As per @jjanes's recommendation, here is the plan with the id+0 variant:
explain (analyze,timing) select min(id+0) from iqsim_cdrs where timestamp < '2019-01-01 00:00:00';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=377400.34..377400.35 rows=1 width=8) (actual time=109.176..109.177 rows=1 loops=1)
-> Index Scan using index_iqsim_cdrs_on_timestamp on iqsim_cdrs (cost=0.57..351196.84 rows=5240699 width=8) (actual time=0.131..108.911 rows=126 loops=1)
Index Cond: ("timestamp" < '2019-01-01 00:00:00'::timestamp without time zone)
Planning time: 4.756 ms
Execution time: 109.405 ms
(5 rows)
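Given that the id+0 plan above shows the row estimate for the timestamp filter is far off (5,240,699 estimated vs 126 actual rows), a hedged first step would be to refresh and widen the statistics on timestamp, so that the primary-key scan stops looking artificially cheap to the planner. This is only a sketch; the statistics target value is a guess:

-- Sketch: refresh the statistics, and if the estimate for
-- "timestamp < '2019-01-01'" is still far off, collect more detail
-- for that column and analyze again.
ANALYZE iqsim_cdrs;
ALTER TABLE iqsim_cdrs ALTER COLUMN "timestamp" SET STATISTICS 1000;
ANALYZE iqsim_cdrs;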
Related
I have a query that is very fast with a large date filter:
EXPLAIN ANALYZE
SELECT "advertisings"."id",
"advertisings"."page_id",
"advertisings"."page_name",
"advertisings"."created_at",
"posts"."image_url",
"posts"."thumbnail_url",
"posts"."post_content",
"posts"."like_count"
FROM "advertisings"
INNER JOIN "posts" ON "advertisings"."post_id" = "posts"."id"
WHERE "advertisings"."created_at" >= '2020-01-01T00:00:00Z'
AND "advertisings"."created_at" < '2020-12-02T23:59:59Z'
ORDER BY "like_count" DESC LIMIT 20
And the query plan is:
Limit (cost=0.85..20.13 rows=20 width=552) (actual time=0.026..0.173 rows=20 loops=1)
-> Nested Loop (cost=0.85..951662.55 rows=987279 width=552) (actual time=0.025..0.169 rows=20 loops=1)
-> Index Scan using posts_like_count_idx on posts (cost=0.43..378991.65 rows=1053015 width=504) (actual time=0.013..0.039 rows=20 loops=1)
-> Index Scan using advertisings_post_id_index on advertisings (cost=0.43..0.53 rows=1 width=52) (actual time=0.005..0.006 rows=1 loops=20)
Index Cond: (post_id = posts.id)
Filter: ((created_at >= '2020-01-01 00:00:00'::timestamp without time zone) AND (created_at < '2020-12-02 23:59:59'::timestamp without time zone))
Planning Time: 0.365 ms
Execution Time: 0.199 ms
However, when I narrow the filter (changing it to "created_at" >= '2020-11-25T00:00:00Z'), which returns 9 records (fewer than the limit of 20), the query is very slow:
EXPLAIN ANALYZE
SELECT "advertisings"."id",
"advertisings"."page_id",
"advertisings"."page_name",
"advertisings"."created_at",
"posts"."image_url",
"posts"."thumbnail_url",
"posts"."post_content",
"posts"."like_count"
FROM "advertisings"
INNER JOIN "posts" ON "advertisings"."post_id" = "posts"."id"
WHERE "advertisings"."created_at" >= '2020-11-25T00:00:00Z'
AND "advertisings"."created_at" < '2020-12-02T23:59:59Z'
ORDER BY "like_count" DESC LIMIT 20
Query plan:
Limit (cost=1000.88..8051.73 rows=20 width=552) (actual time=218.485..4155.336 rows=9 loops=1)
-> Gather Merge (cost=1000.88..612662.09 rows=1735 width=552) (actual time=218.483..4155.328 rows=9 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop (cost=0.85..611461.80 rows=723 width=552) (actual time=118.170..3786.176 rows=3 loops=3)
-> Parallel Index Scan using posts_like_count_idx on posts (cost=0.43..372849.07 rows=438756 width=504) (actual time=0.024..1542.094 rows=351005 loops=3)
-> Index Scan using advertisings_post_id_index on advertisings (cost=0.43..0.53 rows=1 width=52) (actual time=0.006..0.006 rows=0 loops=1053015)
Index Cond: (post_id = posts.id)
Filter: ((created_at >= '2020-11-25 00:00:00'::timestamp without time zone) AND (created_at < '2020-12-02 23:59:59'::timestamp without time zone))
Rows Removed by Filter: 1
Planning Time: 0.394 ms
Execution Time: 4155.379 ms
I spent hours googling but couldn't find the right solution. Any help would be greatly appreciated.
Updated
When I continue narrowing the filter to
WHERE "advertisings"."created_at" >= '2020-11-27T00:00:00Z'
AND "advertisings"."created_at" < '2020-12-02T23:59:59Z'
which also returns the same 9 records as the slow query above. However, this time the query is really fast again:
Limit (cost=8082.99..8083.04 rows=20 width=552) (actual time=0.062..0.065 rows=9 loops=1)
-> Sort (cost=8082.99..8085.40 rows=962 width=552) (actual time=0.061..0.062 rows=9 loops=1)
Sort Key: posts.like_count DESC
Sort Method: quicksort Memory: 32kB
-> Nested Loop (cost=0.85..8057.39 rows=962 width=552) (actual time=0.019..0.047 rows=9 loops=1)
-> Index Scan using advertisings_created_at_index on advertisings (cost=0.43..501.30 rows=962 width=52) (actual time=0.008..0.012 rows=9 loops=1)
Index Cond: ((created_at >= '2020-11-27 00:00:00'::timestamp without time zone) AND (created_at < '2020-12-02 23:59:59'::timestamp without time zone))
-> Index Scan using posts_pkey on posts (cost=0.43..7.85 rows=1 width=504) (actual time=0.003..0.003 rows=1 loops=9)
Index Cond: (id = advertisings.post_id)
Planning Time: 0.540 ms
Execution Time: 0.096 ms
I have no idea what is happening.
PostgreSQL follows two different strategies in the first two and the last query:
If there are many matching advertisings rows, it uses a nested loop join to fetch the rows in the order of the ORDER BY clause and discards rows that don't match the condition until it has found 20.
If there are few matching advertisings rows, it fetches those few rows, then the matching rows in posts, then sorts and takes the first 20 rows.
The second execution is slow because PostgreSQL overestimates the rows in advertisings that match the condition. See how it estimates 962 instead of 9 in the third query?
The solution is to improve PostgreSQL's estimate:
if running
ANALYZE advertisings;
is enough to make the slow query fast, tell PostgreSQL to collect statistics more often:
ALTER TABLE advertisings SET (autovacuum_analyze_scale_factor = 0.05);
if that is not enough, try collecting more detailed statistics:
SET default_statistics_target = 1000;
ANALYZE advertisings;
You can experiment with values up to 10000. Once you have found a value that works, persist it:
ALTER TABLE advertisings ALTER created_at SET STATISTICS 1000;
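If better statistics alone do not change the plan, a possible workaround (a sketch only, assuming PostgreSQL 12 or later, where a CTE can be forced to materialize) is to make the planner filter advertisings first and sort afterwards, which is the strategy the fast third query used:

-- Sketch: materialize the filtered advertisings before joining, so the
-- planner cannot start from the like_count index. On PostgreSQL 11 and
-- older, CTEs are always materialized, so the keyword can be dropped.
WITH filtered_ads AS MATERIALIZED (
    SELECT id, page_id, page_name, created_at, post_id
    FROM advertisings
    WHERE created_at >= '2020-11-25T00:00:00Z'
      AND created_at <  '2020-12-02T23:59:59Z'
)
SELECT a.id, a.page_id, a.page_name, a.created_at,
       p.image_url, p.thumbnail_url, p.post_content, p.like_count
FROM filtered_ads a
JOIN posts p ON a.post_id = p.id
ORDER BY p.like_count DESC
LIMIT 20;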
I've got the following issue with Postgres:
I have two tables, A and B:
A has 64 million records
B has 16 million records
A has a b_id column, which is indexed --> ix_A_b_id
B has a datetime_field column, which is indexed --> ix_B_datetime
I have the following query:
SELECT
A.id,
B.some_field
FROM
A
JOIN
B
ON A.b_id = B.id
WHERE
B.datetime_field BETWEEN 'from' AND 'to'
This query is fine when the difference between from and to is small; in that case Postgres uses both indexes and I get results quite fast.
When the difference between the dates is bigger, the query slows down a lot, because Postgres decides to use only ix_B_datetime and then a full scan on the table with 64 M records... which is just plain stupid.
I found the point at which the optimizer decides that using a full scan is faster.
For dates between 2019-03-10 17:05:00 and 2019-03-15 01:00:00 it gets a cost similar to that for 2019-03-10 17:00:00 and 2019-03-15 01:00:00, but the fetch time for the first query is about 50 ms and for the second almost 2 minutes.
The plans are below:
Nested Loop (cost=1.00..3484455.17 rows=113057 width=8)
-> Index Scan using ix_B_datetime on B (cost=0.44..80197.62 rows=28561 width=12)
Index Cond: ((datetime_field >= '2019-03-10 17:05:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
-> Index Scan using ix_A_b_id on A (cost=0.56..112.18 rows=701 width=12)
Index Cond: (b_id = B.id)
Hash Join (cost=80615.72..3450771.89 rows=113148 width=8)
Hash Cond: (A.b_id = B.id)
-> Seq Scan on spot (cost=0.00..3119079.50 rows=66652050 width=12)
-> Hash (cost=80258.42..80258.42 rows=28584 width=12)
-> Index Scan using ix_B_datetime on B (cost=0.44..80258.42 rows=28584 width=12)
Index Cond: ((datetime_field >= '2019-03-10 17:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
So my question is: why does Postgres get the costs so wrong? Why does it estimate one plan as more expensive than it actually is? How can I fix that?
As a temporary workaround I had to rewrite the query to always use the index on table A, but I don't like the following solution, because it's hacky, not clear, and slower for small chunks of data (though much faster for bigger ones):
with cc as (
select id, some_field from B WHERE B.datetime_field >= '2019-03-08'
AND B.datetime_field < '2019-03-15'
)
SELECT X.id, Y.some_field
FROM (SELECT b_id, id from A where b_id in (SELECT id from cc)) X
JOIN (SELECT id, some_field FROM cc) Y ON X.b_id = Y.id
EDIT:
As @a_horse_with_no_name suggested, I've played with random_page_cost.
I've modified the query to count the number of entries, because fetching everything was unnecessary, so the query now looks like this:
SELECT count(*) FROM (
SELECT
A.id,
B.some_field
FROM
A
JOIN
B
ON A.b_id = B.id
WHERE
B.datetime_field BETWEEN '2019-03-01 00:00:00' AND '2019-03-15 01:00:00'
) A
And I've tested different levels of cost
RANDOM_PAGE_COST=0.25
Aggregate (cost=3491773.34..3491773.35 rows=1 width=8) (actual time=4166.998..4166.999 rows=1 loops=1)
Buffers: shared hit=1939402
-> Nested Loop (cost=1.00..3490398.51 rows=549932 width=0) (actual time=0.041..3620.975 rows=2462836 loops=1)
Buffers: shared hit=1939402
-> Index Scan using ix_B_datetime_field on B (cost=0.44..24902.79 rows=138927 width=8) (actual time=0.013..364.018 rows=313399 loops=1)
Index Cond: ((datetime_field >= '2019-03-01 00:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
Buffers: shared hit=311461
-> Index Only Scan using A_b_id_index on A (cost=0.56..17.93 rows=701 width=8) (actual time=0.004..0.007 rows=8 loops=313399)
Index Cond: (b_id = B.id)
Heap Fetches: 2462836
Buffers: shared hit=1627941
Planning time: 0.316 ms
Execution time: 4167.040 ms
RANDOM_PAGE_COST=1
Aggregate (cost=3918191.39..3918191.40 rows=1 width=8) (actual time=281236.100..281236.101 rows=1 loops=1)
" Buffers: shared hit=7531789 read=2567818, temp read=693 written=693"
-> Merge Join (cost=102182.07..3916816.56 rows=549932 width=0) (actual time=243755.551..280666.992 rows=2462836 loops=1)
Merge Cond: (A.b_id = B.id)
" Buffers: shared hit=7531789 read=2567818, temp read=693 written=693"
-> Index Only Scan using A_b_id_index on A (cost=0.56..3685479.55 rows=66652050 width=8) (actual time=0.010..263635.124 rows=64700055 loops=1)
Heap Fetches: 64700055
Buffers: shared hit=7220328 read=2567818
-> Materialize (cost=101543.05..102237.68 rows=138927 width=8) (actual time=523.618..1287.145 rows=2503965 loops=1)
" Buffers: shared hit=311461, temp read=693 written=693"
-> Sort (cost=101543.05..101890.36 rows=138927 width=8) (actual time=523.616..674.736 rows=313399 loops=1)
Sort Key: B.id
Sort Method: external merge Disk: 5504kB
" Buffers: shared hit=311461, temp read=693 written=693"
-> Index Scan using ix_B_datetime_field on B (cost=0.44..88589.92 rows=138927 width=8) (actual time=0.013..322.016 rows=313399 loops=1)
Index Cond: ((datetime_field >= '2019-03-01 00:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
Buffers: shared hit=311461
Planning time: 0.314 ms
Execution time: 281237.202 ms
RANDOM_PAGE_COST=2
Aggregate (cost=4072947.53..4072947.54 rows=1 width=8) (actual time=166896.775..166896.776 rows=1 loops=1)
" Buffers: shared hit=696849 read=2067171, temp read=194524 written=194516"
-> Hash Join (cost=175785.69..4071572.70 rows=549932 width=0) (actual time=29321.835..166332.812 rows=2462836 loops=1)
Hash Cond: (A.B_id = B.id)
" Buffers: shared hit=696849 read=2067171, temp read=194524 written=194516"
-> Seq Scan on A (cost=0.00..3119079.50 rows=66652050 width=8) (actual time=0.008..108959.789 rows=64700055 loops=1)
Buffers: shared hit=437580 read=2014979
-> Hash (cost=173506.11..173506.11 rows=138927 width=8) (actual time=29321.416..29321.416 rows=313399 loops=1)
Buckets: 131072 (originally 131072) Batches: 8 (originally 2) Memory Usage: 4084kB
" Buffers: shared hit=259269 read=52192, temp written=803"
-> Index Scan using ix_B_datetime_field on B (cost=0.44..173506.11 rows=138927 width=8) (actual time=1.676..29158.413 rows=313399 loops=1)
Index Cond: ((datetime_field >= '2019-03-01 00:00:00'::timestamp without time zone) AND (datetime_field < '2019-03-15 01:00:00'::timestamp without time zone))
Buffers: shared hit=259269 read=52192
Planning time: 7.367 ms
Execution time: 166896.824 ms
Still, it's unclear to me: a cost of 0.25 works best for me, but everywhere I read that for SSD disks it should be 1-1.5 (I'm using an AWS instance with SSD).
What is weird is that the plan at cost 1 is worse than at 2 and at 0.25.
So what value should I pick? Is there any way to calculate it?
In this case the efficiency ranking is 0.25 > 2 > 1, but what about other cases? How can I be sure that 0.25, which is good for this query, won't break other queries? Do I need to write performance tests for every query I have?
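As to the last question: one way to keep such an experiment from affecting other queries (a sketch, not a definitive recommendation) is to scope random_page_cost to a single transaction with SET LOCAL; the rest of the workload keeps the server default:

-- SET LOCAL only lasts until COMMIT/ROLLBACK, so other queries on this
-- connection and all other sessions keep the configured default.
BEGIN;
SET LOCAL random_page_cost = 0.25;
SELECT count(*)
FROM A
JOIN B ON A.b_id = B.id
WHERE B.datetime_field BETWEEN '2019-03-01 00:00:00' AND '2019-03-15 01:00:00';
COMMIT;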
I have a table partitioned by quarter. The table name is data. The table has a couple of columns, including date, which has an index created on it:
create index on data (date);
Now I am trying to querying the table:
justpremium=> EXPLAIN analyze SELECT sum(col_1) FROM data WHERE "date" BETWEEN '2018-12-01' AND '2018-12-31';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=355709.66..355709.67 rows=1 width=32) (actual time=577.072..577.072 rows=1 loops=1)
-> Gather (cost=355709.44..355709.65 rows=2 width=32) (actual time=577.005..578.418 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=354709.44..354709.45 rows=1 width=32) (actual time=573.255..573.256 rows=1 loops=3)
-> Append (cost=0.42..352031.07 rows=1071346 width=8) (actual time=15.286..524.604 rows=837204 loops=3)
-> Parallel Index Scan using data_date_idx on data (cost=0.42..8.44 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=3)
Index Cond: ((date >= '2018-12-01'::date) AND (date <= '2018-12-31'::date))
-> Parallel Seq Scan on data_y2018q4 (cost=0.00..352022.64 rows=1071345 width=8) (actual time=15.282..465.859 rows=837204 loops=3)
Filter: ((date >= '2018-12-01'::date) AND (date <= '2018-12-31'::date))
Rows Removed by Filter: 1479844
Planning time: 1.437 ms
Execution time: 578.465 ms
(13 rows)
We can see that there is a Parallel Seq Scan on data_y2018q4. That actually seems normal to me: I have one quarterly partition, and I am querying a third of that whole partition, so a seq scan is fine.
But now let's query the partition table directly:
justpremium=> EXPLAIN analyze SELECT sum(col_1) FROM data_y2018q4 WHERE "date" BETWEEN '2018-12-01' AND '2018-12-31';
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=286475.38..286475.39 rows=1 width=32) (actual time=277.830..277.830 rows=1 loops=1)
-> Gather (cost=286475.16..286475.37 rows=2 width=32) (actual time=277.760..279.194 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=285475.16..285475.17 rows=1 width=32) (actual time=275.950..275.950 rows=1 loops=3)
-> Parallel Index Scan using data_y2018q4_date_idx on data_y2018q4 (cost=0.43..282796.80 rows=1071345 width=8) (actual time=0.022..227.687 rows=837204 loops=3)
Index Cond: ((date >= '2018-12-01'::date) AND (date <= '2018-12-31'::date))
Planning time: 0.187 ms
Execution time: 279.233 ms
(9 rows)
Now I get an Index Scan using data_y2018q4_date_idx, and the whole query is twice as fast: 279.233 ms compared to 578.465 ms. What is the explanation for this? How can I force the planner to use the index scan when querying the data table, and so get the two-times-better timing?
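As a quick diagnostic (a sketch, not a fix), you can temporarily make sequential scans look prohibitively expensive for one session and re-run the EXPLAIN against the parent table; that shows whether the planner can use the partition's index through the parent at all, and what it thinks that plan would cost:

-- Diagnostic only: enable_seqscan = off does not forbid seq scans, it just
-- prices them very high, so an index plan is chosen whenever one exists.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT sum(col_1) FROM data WHERE "date" BETWEEN '2018-12-01' AND '2018-12-31';
RESET enable_seqscan;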
Every time I run an EXPLAIN ANALYZE over a query in PostgreSQL, the reported Execution time decreases. Why?
I need to do some indexing on the table, and this way I can't be sure whether my changes will actually enhance performance. What do you recommend?
Here is an example of the result of successive executions of explain (analyze, buffers):
my_db=# explain (analyze, buffers)
SELECT COUNT(*) AS count
FROM my_view
WHERE creation_time >= '2019-01-18 00:00:00'
AND creation_time <= '2019-01-18 09:43:36'
ORDER BY count DESC
LIMIT 10000;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=2568854.23..2568854.23 rows=1 width=8) (actual time=24380.613..24380.614 rows=1 loops=1)
Buffers: shared hit=14915 read=2052993, temp read=562 written=563
-> Sort (cost=2568854.23..2568854.23 rows=1 width=8) (actual time=24380.611..24380.611 rows=1 loops=1)
Sort Key: (count(*)) DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=14915 read=2052993, temp read=562 written=563
-> Aggregate (cost=2568854.21..2568854.22 rows=1 width=8) (actual time=24380.599..24380.599 rows=1 loops=1)
Buffers: shared hit=14915 read=2052993, temp read=562 written=563
-> GroupAggregate (cost=2568854.18..2568854.20 rows=1 width=28) (actual time=24339.455..24380.589 rows=40 loops=1)
Group Key: my_table.creation_time, my_table.some_field
Buffers: shared hit=14915 read=2052993, temp read=562 written=563
-> Sort (cost=2568854.18..2568854.18 rows=1 width=12) (actual time=24338.361..24357.171 rows=199309 loops=1)
Sort Key: my_table.creation_time, my_table.some_field
Sort Method: external merge Disk: 4496kB
Buffers: shared hit=14915 read=2052993, temp read=562 written=563
-> Gather (cost=1000.00..2568854.17 rows=1 width=12) (actual time=23799.237..24142.217 rows=199309 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=14915 read=2052993
-> Parallel Seq Scan on my_table (cost=0.00..2567854.07 rows=1 width=12) (actual time=23796.695..24087.574 rows=66436 loops=3)
Filter: (creation_time >= '2019-01-18 00:00:00+00'::timestamp with time zone) AND (creation_time <= '2019-01-18 09:43:36+00'::timestamp with time zone)
Rows Removed by Filter: 21818095
Buffers: shared hit=14915 read=2052993
Planning time: 10.982 ms
Execution time: 24381.544 ms
(25 rows)
my_db=# explain (analyze, buffers)
SELECT COUNT(*) AS count
FROM my_view
WHERE creation_time >= '2019-01-18 00:00:00'
AND creation_time <= '2019-01-18 09:43:36'
ORDER BY count DESC
LIMIT 10000;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=2568854.23..2568854.23 rows=1 width=8) (actual time=6836.247..6836.248 rows=1 loops=1)
Buffers: shared hit=15181 read=2052727, temp read=562 written=563
-> Sort (cost=2568854.23..2568854.23 rows=1 width=8) (actual time=6836.245..6836.246 rows=1 loops=1)
Sort Key: (count(*)) DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=15181 read=2052727, temp read=562 written=563
-> Aggregate (cost=2568854.21..2568854.22 rows=1 width=8) (actual time=6836.232..6836.232 rows=1 loops=1)
Buffers: shared hit=15181 read=2052727, temp read=562 written=563
-> GroupAggregate (cost=2568854.18..2568854.20 rows=1 width=28) (actual time=6792.036..6836.221 rows=40 loops=1)
Group Key: my_table.creation_time, my_table.some_field
Buffers: shared hit=15181 read=2052727, temp read=562 written=563
-> Sort (cost=2568854.18..2568854.18 rows=1 width=12) (actual time=6790.807..6811.469 rows=199309 loops=1)
Sort Key: my_table.creation_time, my_table.some_field
Sort Method: external merge Disk: 4496kB
Buffers: shared hit=15181 read=2052727, temp read=562 written=563
-> Gather (cost=1000.00..2568854.17 rows=1 width=12) (actual time=6271.571..6604.946 rows=199309 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=15181 read=2052727
-> Parallel Seq Scan on my_table (cost=0.00..2567854.07 rows=1 width=12) (actual time=6268.383..6529.416 rows=66436 loops=3)
Filter: (creation_time >= '2019-01-18 00:00:00+00'::timestamp with time zone) AND (creation_time <= '2019-01-18 09:43:36+00'::timestamp with time zone)
Rows Removed by Filter: 21818095
Buffers: shared hit=15181 read=2052727
Planning time: 0.570 ms
Execution time: 6837.137 ms
(25 rows)
Thank you.
What version of PostgreSQL is this?
There are two problems:
A missing index:
CREATE INDEX ON my_table (creation_time);
The value distribution statistics for creation_time are way off, which causes PostgreSQL to underestimate the number of result rows terribly.
This is not the immediate cause of your problem, but you should investigate what is going on there:
If ANALYZE my_table improves the estimate, make sure that autoanalyze runs more often for that table.
If ANALYZE my_table does not help, collect better statistics:
ALTER TABLE my_table ALTER creation_time SET STATISTICS 1000;
ANALYZE my_table;
The reduced execution time you observe with EXPLAIN (ANALYZE) is probably due to caching effects.
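If you want to see how much of the difference between the two runs is actual disk I/O (a sketch, assuming you have superuser rights, since only superusers can change this parameter), enable I/O timing and repeat the measurement; the BUFFERS output then includes an "I/O Timings" line:

-- track_io_timing adds I/O timing information to EXPLAIN (ANALYZE, BUFFERS),
-- which makes it obvious whether a faster run is simply hitting cached pages.
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*) AS count
FROM my_view
WHERE creation_time >= '2019-01-18 00:00:00'
  AND creation_time <= '2019-01-18 09:43:36';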
I have a very small table "events" with just 10,703 records.
The following query takes about 600 ms:
SELECT count(id)
FROM events
WHERE event_date > now()
AND earth_distance((select position from zips where zip='94121'), ll_to_earth(venue_lat, venue_lon))<16090;
I tried to create a GiST index like this:
CREATE INDEX latlon_idx on events USING gist(ll_to_earth(venue_lat, venue_lon));
but it didn't change anything. I also have an index on event_date.
Here's the explain analyze output:
Aggregate  (cost=5400.48..5400.49 rows=1 width=8) (actual time=615.479..615.479 rows=1 loops=1)
  InitPlan 1 (returns $0)
    ->  Index Scan using zips_zip_idx on zips  (cost=0.00..8.27 rows=1 width=56) (actual time=0.051..0.056 rows=1 loops=1)
          Index Cond: ((zip)::text = '94121'::text)
  ->  Bitmap Heap Scan on events  (cost=144.41..5386.03 rows=2468 width=8) (actual time=16.065..599.613 rows=3347 loops=1)
        Recheck Cond: (event_date > now())
        Filter: (sec_to_gc(cube_distance(($0)::cube, (ll_to_earth((venue_lat)::double precision, (venue_lon)::double precision))::cube)) < 16090::double precision)
        ->  Bitmap Index Scan on events_date_idx  (cost=0.00..143.79 rows=7405 width=0) (actual time=13.523..13.523 rows=7614 loops=1)
              Index Cond: (event_date > now())
Total runtime: 615.663 ms
(10 rows)
What else can I try to speed it up?
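One thing worth trying (a sketch following the standard earthdistance pattern, not tested against this schema) is adding an earth_box() condition, since the GiST index on ll_to_earth(venue_lat, venue_lon) can only be used through a bounding-box search; earth_distance() alone is not indexable:

-- earth_box(point, radius) builds a bounding cube the GiST index can search;
-- the exact earth_distance() check then filters out the corner cases.
SELECT count(id)
FROM events
WHERE event_date > now()
  AND earth_box((SELECT position FROM zips WHERE zip = '94121'), 16090)
      @> ll_to_earth(venue_lat, venue_lon)
  AND earth_distance((SELECT position FROM zips WHERE zip = '94121'),
                     ll_to_earth(venue_lat, venue_lon)) < 16090;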