I'm currently working on a data tracking system. The system is a multiprocess application written in Python that works in the following manner:
1. Every S seconds it selects the N most appropriate tasks from the database (currently Postgres) and finds data for them.
2. If there are no tasks, it creates N new tasks and returns to (1).
The problem is the following: I currently have approx. 80GB of data and 36M tasks, and queries against the task table keep getting slower and slower (it's the most populated and the most frequently used table).
The main performance bottleneck is the task tracking query:
LOCK TABLE task IN ACCESS EXCLUSIVE MODE;
SELECT *
FROM task
WHERE line = 1
  AND action = ANY(ARRAY['Find', 'Get'])
  AND (stat IN ('', 'CR1')
       OR stat = 'ERROR' AND (actiondate <= NOW() OR actiondate IS NULL))
ORDER BY taskid, actiondate, action DESC, idtype, date ASC
LIMIT 36;
Table "public.task"
Column | Type | Modifiers
------------+-----------------------------+-------------------------------------------------
number | character varying(16) | not null
date | timestamp without time zone | default now()
stat | character varying(16) | not null default ''::character varying
idtype | character varying(16) | not null default 'container'::character varying
uri | character varying(1024) |
action | character varying(16) | not null default 'Find'::character varying
reason | character varying(4096) | not null default ''::character varying
rev | integer | not null default 0
actiondate | timestamp without time zone |
modifydate | timestamp without time zone |
line | integer |
datasource | character varying(512) |
taskid | character varying(32) |
found | integer | not null default 0
Indexes:
"task_pkey" PRIMARY KEY, btree (idtype, number)
"action_index" btree (action)
"actiondate_index" btree (actiondate)
"date_index" btree (date)
"line_index" btree (line)
"modifydate_index" btree (modifydate)
"stat_index" btree (stat)
"taskid_index" btree (taskid)
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=312638.87..312638.96 rows=36 width=668) (actual time=1838.193..1838.197 rows=36 loops=1)
-> Sort (cost=312638.87..313149.54 rows=204267 width=668) (actual time=1838.192..1838.194 rows=36 loops=1)
Sort Key: taskid, actiondate, action, idtype, date
Sort Method: top-N heapsort Memory: 43kB
-> Bitmap Heap Scan on task (cost=107497.61..306337.31 rows=204267 width=668) (actual time=1013.491..1343.751 rows=914586 loops=1)
Recheck Cond: ((((stat)::text = ANY ('{"",CR1}'::text[])) OR ((stat)::text = 'ERROR'::text)) AND (line = 1))
Filter: (((action)::text = ANY ('{Find,Get}'::text[])) AND (((stat)::text = ANY ('{"",CR1}'::text[])) OR (((stat)::text = 'ERROR'::text) AND ((actiondate <= now()) OR (actiondate IS NULL)))))
Rows Removed by Filter: 133
Heap Blocks: exact=76064
-> BitmapAnd (cost=107497.61..107497.61 rows=237348 width=0) (actual time=999.457..999.457 rows=0 loops=1)
-> BitmapOr (cost=9949.15..9949.15 rows=964044 width=0) (actual time=121.936..121.936 rows=0 loops=1)
-> Bitmap Index Scan on stat_index (cost=0.00..9449.46 rows=925379 width=0) (actual time=117.791..117.791 rows=920900 loops=1)
Index Cond: ((stat)::text = ANY ('{"",CR1}'::text[]))
-> Bitmap Index Scan on stat_index (cost=0.00..397.55 rows=38665 width=0) (actual time=4.144..4.144 rows=30262 loops=1)
Index Cond: ((stat)::text = 'ERROR'::text)
-> Bitmap Index Scan on line_index (cost=0.00..97497.14 rows=9519277 width=0) (actual time=853.033..853.033 rows=9605462 loops=1)
Index Cond: (line = 1)
Planning time: 0.284 ms
Execution time: 1838.882 ms
(19 rows)
Of course, all the involved fields are indexed. I'm currently thinking in two directions:
1. How to optimize the query, and whether it will actually give a lasting performance improvement (currently each query takes approx. 10 seconds, which is unacceptable for dynamic task tracking).
2. Where and how it would be more effective to store the task data: maybe I should use another DB for this purpose, such as Cassandra, VoltDB, or another Big Data store?
I think the data should somehow be pre-ordered so that the most relevant tasks can be fetched as quickly as possible.
And please also keep in mind that my current volume of 80GB is more likely a minimum than a maximum for such a task.
Thanks in advance!
I don't quite understand your use case, but it doesn't look to me like your indexes are working too well: the query seems to rely mostly on the stat index. I think you need to look into a composite index, something like (action, line, stat).
Another option is to shard your data across multiple tables, splitting it on some key with low cardinality. I don't use Postgres, but I don't think looking at another DB solution is going to work better unless you know exactly what you're optimizing for.
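For illustration, a minimal sketch of such an index (the names and exact column order are assumptions to experiment with, not tested recommendations):
-- Composite index on the suggested columns:
CREATE INDEX task_action_line_stat_idx ON task (action, line, stat);
-- Another variant worth measuring: a partial index restricted to the rows
-- the tracking query can ever select:
CREATE INDEX task_pending_idx ON task (taskid, actiondate)
WHERE line = 1 AND action IN ('Find', 'Get') AND stat IN ('', 'CR1', 'ERROR');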
Related
I have a query which takes 2.5 seconds to run. On checking the query plan, I found that Postgres is heavily underestimating the number of rows, leading to nested loops.
Following is the query
explain analyze
SELECT
reprocessed_videos.video_id AS reprocessed_videos_video_id
FROM
reprocessed_videos
JOIN commit_info ON commit_info.id = reprocessed_videos.commit_id
WHERE
commit_info.tag = 'stop_sign_tbc_inertial_fix'
AND reprocessed_videos.reprocess_type_id = 28
AND reprocessed_videos.classification_crop_type_id = 0
AND reprocessed_videos.reprocess_status = 'success';
Following is the explain analyze output.
Nested Loop (cost=0.84..22941.18 rows=1120 width=4) (actual time=31.169..2650.181 rows=179524 loops=1)
-> Index Scan using commit_info_tag_key on commit_info (cost=0.28..8.29 rows=1 width=4) (actual time=0.395..0.397 rows=1 loops=1)
Index Cond: ((tag)::text = 'stop_sign_tbc_inertial_fix'::text)
-> Index Scan using ix_reprocessed_videos_commit_id on reprocessed_videos (cost=0.56..22919.99 rows=1289 width=8) (actual time=30.770..2634.546 rows=179524 loops=1)
Index Cond: (commit_id = commit_info.id)
Filter: ((reprocess_type_id = 28) AND (classification_crop_type_id = 0) AND ((reprocess_status)::text = 'success'::text))
Rows Removed by Filter: 1190
Planning Time: 0.326 ms
Execution Time: 2657.724 ms
As we can see, the index scan using ix_reprocessed_videos_commit_id anticipated 1289 rows, whereas there were actually 179524. I have been trying to find the reason for this but have been unsuccessful with everything I tried.
Following are the things I tried.
Vacuuming and analyzing all the involved tables (helped a little, but not much; maybe because the tables were already being automatically vacuumed and analyzed).
Increasing the statistics target for the commit_id column: alter table reprocessed_videos alter column commit_id set statistics 1000; (helped a little).
I read about extended statistics, but not sure if they are of any use here.
Following are the number of tuples in each of these tables
kpis=> SELECT relname, reltuples FROM pg_class where relname in ('reprocessed_videos', 'video_catalog', 'commit_info');
relname | reltuples
--------------------+---------------
commit_info | 1439
reprocessed_videos | 3.1563756e+07
Following is some information related to table schemas
Table "public.reprocessed_videos"
Column | Type | Collation | Nullable | Default
-----------------------------+-----------------------------+-----------+----------+------------------------------------------------
id | integer | | not null | nextval('reprocessed_videos_id_seq'::regclass)
video_id | integer | | |
reprocess_status | character varying | | |
commit_id | integer | | |
reprocess_type_id | integer | | |
classification_crop_type_id | integer | | |
Indexes:
"reprocessed_videos_pkey" PRIMARY KEY, btree (id)
"ix_reprocessed_videos_commit_id" btree (commit_id)
"ix_reprocessed_videos_video_id" btree (video_id)
"reprocessed_videos_video_commit_reprocess_crop_key" UNIQUE CONSTRAINT, btree (video_id, commit_id, reprocess_type_id, classification_crop_type_id)
Foreign-key constraints:
"reprocessed_videos_commit_id_fkey" FOREIGN KEY (commit_id) REFERENCES commit_info(id)
Table "public.commit_info"
Column | Type | Collation | Nullable | Default
------------------------+-------------------+-----------+----------+-----------------------------------------
id | integer | | not null | nextval('commit_info_id_seq'::regclass)
tag | character varying | | |
commit | character varying | | |
Indexes:
"commit_info_pkey" PRIMARY KEY, btree (id)
"commit_info_tag_key" UNIQUE CONSTRAINT, btree (tag)
I am sure that postgres should not use nested loops in this case, but is using them because of bad row estimates. Any help is highly appreciated.
Following are the experiments I tried.
Disabling index scan
Nested Loop (cost=734.59..84368.70 rows=1120 width=4) (actual time=274.694..934.965 rows=179524 loops=1)
-> Bitmap Heap Scan on commit_info (cost=4.29..8.30 rows=1 width=4) (actual time=0.441..0.444 rows=1 loops=1)
Recheck Cond: ((tag)::text = 'stop_sign_tbc_inertial_fix'::text)
Heap Blocks: exact=1
-> Bitmap Index Scan on commit_info_tag_key (cost=0.00..4.29 rows=1 width=0) (actual time=0.437..0.439 rows=1 loops=1)
Index Cond: ((tag)::text = 'stop_sign_tbc_inertial_fix'::text)
-> Bitmap Heap Scan on reprocessed_videos (cost=730.30..84347.51 rows=1289 width=8) (actual time=274.250..920.137 rows=179524 loops=1)
Recheck Cond: (commit_id = commit_info.id)
Filter: ((reprocess_type_id = 28) AND (classification_crop_type_id = 0) AND ((reprocess_status)::text = 'success'::text))
Rows Removed by Filter: 1190
Heap Blocks: exact=5881
-> Bitmap Index Scan on ix_reprocessed_videos_commit_id (cost=0.00..729.98 rows=25256 width=0) (actual time=273.534..273.534 rows=180714 loops=1)
Index Cond: (commit_id = commit_info.id)
Planning Time: 0.413 ms
Execution Time: 941.874 ms
I also updated the statistics for the commit_id column and observed an approximately 3x speed increase.
On trying to disable bitmapscan as well, the query does a sequential scan and takes 19 seconds to run.
The nested loop is the perfect join strategy, because there is only one row from commit_info. Any other join strategy would lose.
The question is if the index scan on reprocessed_videos is really too slow. To experiment, try again after SET enable_indexscan = off; to get a bitmap index scan and see if that is better. Then also SET enable_bitmapscan = off; to get a sequential scan. I suspect that your current plan will win, but the bitmap index scan has a good chance.
If the bitmap index scan is better, you should indeed try to improve the estimate:
ALTER TABLE reprocessed_videos ALTER commit_id SET STATISTICS 1000;
ANALYZE reprocessed_videos;
You can try with other values; pick the lowest that gives you a good enough estimate.
Another thing to try is extended statistics:
CREATE STATISTICS corr (dependencies)
ON reprocess_type_id, classification_crop_type_id, reprocess_status
FROM reprocessed_videos;
ANALYZE reprocessed_videos;
Perhaps you don't even need all three columns in there; play with it.
If the bitmap index scan does not offer enough benefit, there is one way to speed up the current index scan:
CLUSTER reprocessed_videos USING ix_reprocessed_videos_commit_id;
That rewrites the table in index order (and blocks concurrent access while it is running, so be careful!). After that, the index scan could be considerably faster. However, the order is not maintained, so you'll have to repeat the CLUSTER occasionally if enough of the table has been modified.
Create a covering index: one that has all the condition columns (first, in descending order of cardinality) and the value columns (last) needed for your query, which means the index alone can be used, avoiding access to the table:
create index covering_index on reprocessed_videos(
reprocess_type_id,
classification_crop_type_id,
reprocess_status,
commit_id,
video_id
);
There is no need for a separate index on commit_info(id): in Postgres a primary key constraint automatically creates an index (commit_info_pkey above), so that column is already covered.
To get more accurate query plans, you can manually set the number of distinct values for a condition column. For example:
select count(distinct reprocess_type_id) from reprocessed_videos;
Then set that value on the column (it takes effect at the next ANALYZE):
alter table reprocessed_videos alter column reprocess_type_id set (n_distinct = number_from_above_query);
analyze reprocessed_videos;
I have a simple query for a simple table in postgres. I have a simple index on that table.
In some environments it is using the index when performing the query, in other environments (on the same RDS instance, different database) it isn't. (checked using EXPLAIN ANALYSE)
One thing I've noticed is that if the 'Check X Min' flag on the index is TRUE, then the index is not used (pg_catalog.pg_index.indcheckxmin).
How do I ensure the index is used and, presumably, have the 'Check X Min' flag set to false?
Table contains 100K+ rows.
Things I have tried:
The index is valid and is always used in environments where the 'Check X Min' is set to false.
set enable_seqscan to off; still does not use the index.
Creating/recreating an index in these environments always seems to have 'Check X Min' set to true.
Vacuuming does not seem to help.
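For reference, this is how the flag can be inspected (a minimal sketch against the system catalogs):
SELECT c.relname AS index_name, i.indcheckxmin
FROM pg_catalog.pg_index i
JOIN pg_catalog.pg_class c ON c.oid = i.indexrelid
WHERE c.relname = 'table_1_field_1_field_3_idx';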
Setup of table and index:
CREATE TABLE schema_1.table_1 (
field_1 varchar(20) NOT NULL,
field_2 int4 NULL,
field_3 timestamptz NULL,
field_4 numeric(10,2) NULL
);
CREATE INDEX table_1_field_1_field_3_idx ON schema_1.table_1 USING btree (field_1, field_3 DESC);
Query:
select field_1, field_2, field_3, field_4
from schema_1.table_1
where field_1 = 'abcdef'
order by field_3 desc limit 1;
When not using index:
QUERY PLAN |
---------------------------------------------------------------------------------------------------------------------|
Limit (cost=4.41..4.41 rows=1 width=51) (actual time=3.174..3.176 rows=1 loops=1) |
-> Sort (cost=4.41..4.42 rows=6 width=51) (actual time=3.174..3.174 rows=1 loops=1) |
Sort Key: field_3 DESC |
Sort Method: top-N heapsort Memory: 25kB |
-> Seq Scan on table_1 (cost=0.00..4.38 rows=6 width=51) (actual time=3.119..3.150 rows=3 loops=1)|
Filter: ((field_1)::text = 'abcdef'::text) |
Rows Removed by Filter: 96 |
Planning time: 2.895 ms |
Execution time: 3.197 ms |
When using index:
QUERY PLAN |
--------------------------------------------------------------------------------------------------------------------------------------------------------|
Limit (cost=0.28..6.30 rows=1 width=51) (actual time=0.070..0.144 rows=1 loops=1) |
->  Index Scan using table_1_field_1_field_3_idx on table_1 (cost=0.28..12.31 rows=2 width=51) (actual time=0.049..0.066 rows=1 loops=1)|
Index Cond: ((field_1)::text = 'abcdef'::text) |
Planning time: 0.184 ms |
Execution time: 0.303 ms |
I have renamed the fields, schema, and table to avoid sharing business context.
You seem to be using CREATE INDEX CONCURRENTLY, and have long-open transactions. From the docs:
Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build.
You don't have a lot of options here. Hunt down and fix your long-open transactions, don't use CONCURRENTLY, or put up with the limitation.
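A starting point for that hunt, as a sketch (pg_stat_activity is the standard view for this; adjust the filter to taste):
SELECT pid, usename, state, xact_start, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;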
I'm running into an issue in PostgreSQL (version 9.6.10) with indexes not working to speed up a MAX query with a simple equality filter on another column. Logically it seems that a simple multicolumn index on (A, B DESC) should make the query super fast.
I can't for the life of me figure out why I can't get a query to be performant regardless of what indexes are defined.
The table definition has the following:
- A primary key foo VARCHAR PRIMARY KEY (not used in the query)
- A UUID field that is NOT NULL called bar UUID
- A sequential_id column that was created as a BIGSERIAL UNIQUE type
Here's what the relevant columns look like exactly (with names modified for privacy):
Table "public.foo"
Column | Type | Modifiers
----------------------+--------------------------+--------------------------------------------------------------------------------
foo_uid | character varying | not null
bar_uid | uuid | not null
sequential_id | bigint | not null default nextval('foo_sequential_id_seq'::regclass)
Indexes:
"foo_pkey" PRIMARY KEY, btree (foo_uid)
"foo_bar_uid_sequential_id_idx", btree (bar_uid, sequential_id DESC)
"foo_sequential_id_key" UNIQUE CONSTRAINT, btree (sequential_id)
Despite the index listed above on (bar_uid, sequential_id DESC), the following query ends up doing a slow index scan and takes 100-300 ms with a few million rows in the database.
The Query (get the max sequential_id for a given bar_uid):
SELECT MAX(sequential_id)
FROM foo
WHERE bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f';
The EXPLAIN ANALYZE result doesn't use the proper index. Also, for some reason it checks sequential_id IS NOT NULL even though the column is declared NOT NULL.
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Result (cost=0.75..0.76 rows=1 width=8) (actual time=321.110..321.110 rows=1 loops=1)
InitPlan 1 (returns $0)
-> Limit (cost=0.43..0.75 rows=1 width=8) (actual time=321.106..321.106 rows=1 loops=1)
-> Index Scan Backward using foo_sequential_id_key on foo (cost=0.43..98936.43 rows=308401 width=8) (actual time=321.106..321.106 rows=1 loops=1)
Index Cond: (sequential_id IS NOT NULL)
Filter: (bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'::uuid)
Rows Removed by Filter: 920761
Planning time: 0.196 ms
Execution time: 321.127 ms
(9 rows)
I can add a seemingly unnecessary GROUP BY to this query, which speeds it up a bit, but it's still really slow for a query that should be near-instantaneous with the indexes defined:
SELECT MAX(sequential_id)
FROM foo
WHERE bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'
GROUP BY bar_uid;
The EXPLAIN (ANALYZE, BUFFERS) result:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate (cost=8510.54..65953.61 rows=6 width=24) (actual time=234.529..234.530 rows=1 loops=1)
Group Key: bar_uid
Buffers: shared hit=1 read=11909
-> Bitmap Heap Scan on foo (cost=8510.54..64411.55 rows=308401 width=24) (actual time=65.259..201.969 rows=309023 loops=1)
Recheck Cond: (bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'::uuid)
Heap Blocks: exact=10385
Buffers: shared hit=1 read=11909
-> Bitmap Index Scan on foo_bar_uid_sequential_id_idx (cost=0.00..8433.43 rows=308401 width=0) (actual time=63.549..63.549 rows=309023 loops=1)
Index Cond: (bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'::uuid)
Buffers: shared read=1525
Planning time: 3.067 ms
Execution time: 234.589 ms
(12 rows)
Does anyone have any idea what's blocking this query from running on the order of 10 milliseconds? With the right index it should logically be near-instantaneous, requiring only the time to follow the B-tree down to the leaf value.
Someone asked:
What do you get for SELECT * FROM pg_stats WHERE tablename = 'foo' and attname = 'bar_uid';?
-[ RECORD 1 ]----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
schemaname             | public
tablename              | foo
attname                | bar_uid
inherited              | f
null_frac              | 0
avg_width              | 16
n_distinct             | 6
most_common_vals       | {fa61424d-389f-4e75-ba2d-b77e6bb8491f,5c5dcae9-1b7e-4413-99a1-62fde2b89c32,50b1e842-fc32-4c2c-b00f-4a17c3c1c5fa,7ff1999c-c0ea-b700-343f-9a737f6ad659,f667b353-e199-4890-9ffd-4940ea11fe2c,b24ce968-29fd-4587-ba1f-227036ee3135}
most_common_freqs      | {0.203733,0.203167,0.201567,0.195867,0.1952,0.000466667}
histogram_bounds       |
correlation            | -0.158093
most_common_elems      |
most_common_elem_freqs |
elem_count_histogram   |
(1 row)
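One way to test the B-tree reasoning above is the equivalent ORDER BY ... LIMIT 1 formulation, which can descend the (bar_uid, sequential_id DESC) index directly (a sketch, not a guaranteed fix):
SELECT sequential_id
FROM foo
WHERE bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'
ORDER BY sequential_id DESC
LIMIT 1;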
I'm running Geodjango/Postgres 9.1/PostGIS and I'm trying to get the following query (and others like it) to run faster.
[query snipped for brevity]
SELECT "crowdbreaks_incomingkeyword"."keyword_id"
, COUNT("crowdbreaks_incomingkeyword"."keyword_id") AS "cnt"
FROM "crowdbreaks_incomingkeyword"
INNER JOIN "crowdbreaks_tweet"
ON ("crowdbreaks_incomingkeyword"."tweet_id"
= "crowdbreaks_tweet"."tweet_id")
LEFT OUTER JOIN "crowdbreaks_place"
ON ("crowdbreaks_tweet"."place_id"
= "crowdbreaks_place"."place_id")
WHERE (("crowdbreaks_tweet"."coordinates"
# ST_GeomFromEWKB(E'\\001 ... \\000\\000\\000\\0008#'::bytea)
OR ST_Overlaps("crowdbreaks_place"."bounding_box"
, ST_GeomFromEWKB(E'\\001...00\\000\\0008#'::bytea)
))
AND "crowdbreaks_tweet"."created_at" > E'2012-04-17 15:46:12.109893'
AND "crowdbreaks_tweet"."created_at" < E'2012-04-18 15:46:12.109899' )
GROUP BY "crowdbreaks_incomingkeyword"."keyword_id"
, "crowdbreaks_incomingkeyword"."keyword_id"
;
Here is what the crowdbreaks_tweet table looks like:
\d+ crowdbreaks_tweet;
Table "public.crowdbreaks_tweet"
Column | Type | Modifiers | Storage | Description
---------------+--------------------------+-----------+----------+-------------
tweet_id | bigint | not null | plain |
tweeter | bigint | not null | plain |
text | text | not null | extended |
created_at | timestamp with time zone | not null | plain |
country_code | character varying(3) | | extended |
place_id | character varying(32) | | extended |
coordinates | geometry | | main |
Indexes:
"crowdbreaks_tweet_pkey" PRIMARY KEY, btree (tweet_id)
"crowdbreaks_tweet_coordinates_id" gist (coordinates)
"crowdbreaks_tweet_created_at" btree (created_at)
"crowdbreaks_tweet_place_id" btree (place_id)
"crowdbreaks_tweet_place_id_like" btree (place_id varchar_pattern_ops)
Check constraints:
"enforce_dims_coordinates" CHECK (st_ndims(coordinates) = 2)
"enforce_geotype_coordinates" CHECK (geometrytype(coordinates) = 'POINT'::text OR coordinates IS NULL)
"enforce_srid_coordinates" CHECK (st_srid(coordinates) = 4326)
Foreign-key constraints:
"crowdbreaks_tweet_place_id_fkey" FOREIGN KEY (place_id) REFERENCES crowdbreaks_place(place_id) DEFERRABLE INITIALLY DEFERRED
Referenced by:
TABLE "crowdbreaks_incomingkeyword" CONSTRAINT "crowdbreaks_incomingkeyword_tweet_id_fkey" FOREIGN KEY (tweet_id) REFERENCES crowdbreaks_tweet(tweet_id) DEFERRABLE INITIALLY DEFERRED
TABLE "crowdbreaks_tweetanswer" CONSTRAINT "crowdbreaks_tweetanswer_tweet_id_id_fkey" FOREIGN KEY (tweet_id_id) REFERENCES crowdbreaks_tweet(tweet_id) DEFERRABLE INITIALLY DEFERRED
Has OIDs: no
And here is the explain analyze for the query:
HashAggregate (cost=184022.03..184023.18 rows=115 width=4) (actual time=6381.707..6381.769 rows=62 loops=1)
-> Hash Join (cost=103857.48..183600.24 rows=84357 width=4) (actual time=1745.449..6377.505 rows=3453 loops=1)
Hash Cond: (crowdbreaks_incomingkeyword.tweet_id = crowdbreaks_tweet.tweet_id)
-> Seq Scan on crowdbreaks_incomingkeyword (cost=0.00..36873.97 rows=2252597 width=12) (actual time=0.008..2136.839 rows=2252597 loops=1)
-> Hash (cost=102535.68..102535.68 rows=80544 width=8) (actual time=1744.815..1744.815 rows=3091 loops=1)
Buckets: 4096 Batches: 4 Memory Usage: 32kB
-> Hash Left Join (cost=16574.93..102535.68 rows=80544 width=8) (actual time=112.551..1740.651 rows=3091 loops=1)
Hash Cond: ((crowdbreaks_tweet.place_id)::text = (crowdbreaks_place.place_id)::text)
Filter: ((crowdbreaks_tweet.coordinates # '0103000020E61000000100000005000000AE47E17A141E5FC00000000000003840AE47E17A141E5FC029ED0DBE30B14840A4703D0AD7A350C029ED0DBE30B14840A4703D0AD7A350C00000000000003840AE47E17A141E5FC00000000000003840'::geometry) OR ((crowdbreaks_place.bounding_box && '0103000020E61000000100000005000000AE47E17A141E5FC00000000000003840AE47E17A141E5FC029ED0DBE30B14840A4703D0AD7A350C029ED0DBE30B14840A4703D0AD7A350C00000000000003840AE47E17A141E5FC00000000000003840'::geometry) AND _st_overlaps(crowdbreaks_place.bounding_box, '0103000020E61000000100000005000000AE47E17A141E5FC00000000000003840AE47E17A141E5FC029ED0DBE30B14840A4703D0AD7A350C029ED0DBE30B14840A4703D0AD7A350C00000000000003840AE47E17A141E5FC00000000000003840'::geometry)))
-> Bitmap Heap Scan on crowdbreaks_tweet (cost=15874.18..67060.28 rows=747873 width=125) (actual time=96.012..940.462 rows=736784 loops=1)
Recheck Cond: ((created_at > '2012-04-17 15:46:12.109893+00'::timestamp with time zone) AND (created_at < '2012-04-18 15:46:12.109899+00'::timestamp with time zone))
-> Bitmap Index Scan on crowdbreaks_tweet_crreated_at (cost=0.00..15687.22 rows=747873 width=0) (actual time=94.259..94.259 rows=736784 loops=1)
Index Cond: ((created_at > '2012-04-17 15:46:12.109893+00'::timestamp with time zone) AND (created_at < '2012-04-18 15:46:12.109899+00'::timestamp with time zone))
-> Hash (cost=217.11..217.11 rows=6611 width=469) (actual time=15.926..15.926 rows=6611 loops=1)
Buckets: 1024 Batches: 4 Memory Usage: 259kB
-> Seq Scan on crowdbreaks_place (cost=0.00..217.11 rows=6611 width=469) (actual time=0.005..6.908 rows=6611 loops=1)
Total runtime: 6381.903 ms
(17 rows)
That's a pretty bad runtime for the query. Ideally, I'd like to get results back within a second or two.
I've increased shared_buffers on Postgres to 2GB (I have 8GB of RAM), but other than that I'm not quite sure what to do. What are my options? Should I do fewer joins? Are there any other indexes I can add? The sequential scan on crowdbreaks_incomingkeyword doesn't make sense to me: it's a table of foreign keys to other tables, and thus has indexes on it.
Judging from your comment I would try two things:
Raise the statistics target for the involved columns (and run ANALYZE):
ALTER TABLE tbl ALTER COLUMN column SET STATISTICS 1000;
The data distribution may be uneven; a bigger sample may provide the query planner with more accurate estimates.
Play with the cost settings in postgresql.conf. Your sequential scans might need to become more expensive relative to your index scans for the planner to produce good plans.
Try lowering cpu_index_tuple_cost, and set effective_cache_size to as much as three quarters of your total RAM for a dedicated DB server.
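For example, as session-level experiments (the values are illustrative assumptions, not tuned recommendations):
SET cpu_index_tuple_cost = 0.001;  -- below the default of 0.005, making index scans look cheaper
SET effective_cache_size = '6GB';  -- roughly three quarters of the 8GB RAM mentioned above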
Our application has a very slow statement: it takes more than 11 seconds, so I want to know whether there is any way to optimize it.
The SQL statement
SELECT id FROM mapfriends.cell_forum_topic WHERE id in (
SELECT topicid FROM mapfriends.cell_forum_item WHERE skyid=103230293 GROUP BY topicid )
AND categoryid=29 AND hidden=false ORDER BY restoretime DESC LIMIT 10 OFFSET 0;
id
---------
2471959
2382296
1535967
2432006
2367281
2159706
1501759
1549304
2179763
1598043
(10 rows)
Time: 11444.976 ms
Plan
friends=> explain SELECT id FROM friends.cell_forum_topic WHERE id in (
friends(> SELECT topicid FROM friends.cell_forum_item WHERE skyid=103230293 GROUP BY topicid)
friends-> AND categoryid=29 AND hidden=false ORDER BY restoretime DESC LIMIT 10 OFFSET 0;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Limit (cost=1443.15..1443.15 rows=2 width=12)
-> Sort (cost=1443.15..1443.15 rows=2 width=12)
Sort Key: cell_forum_topic.restoretime
-> Nested Loop (cost=1434.28..1443.14 rows=2 width=12)
-> HashAggregate (cost=1434.28..1434.30 rows=2 width=4)
-> Index Scan using cell_forum_item_idx_skyid on cell_forum_item (cost=0.00..1430.49 rows=1516 width=4)
Index Cond: (skyid = 103230293)
-> Index Scan using cell_forum_topic_pkey on cell_forum_topic (cost=0.00..4.40 rows=1 width=12)
Index Cond: (cell_forum_topic.id = cell_forum_item.topicid)
Filter: ((NOT cell_forum_topic.hidden) AND (cell_forum_topic.categoryid = 29))
(10 rows)
Time: 1.109 ms
Indexes
friends=> \d cell_forum_item
Table "friends.cell_forum_item"
Column | Type | Modifiers
---------+--------------------------------+--------------------------------------------------------------
id | integer | not null default nextval('cell_forum_item_id_seq'::regclass)
topicid | integer | not null
skyid | integer | not null
content | character varying(200) |
addtime | timestamp(0) without time zone | default now()
ischeck | boolean |
Indexes:
"cell_forum_item_pkey" PRIMARY KEY, btree (id)
"cell_forum_item_idx" btree (topicid, skyid)
"cell_forum_item_idx_1" btree (topicid, id)
"cell_forum_item_idx_skyid" btree (skyid)
friends=> \d cell_forum_topic
Table "friends.cell_forum_topic"
Column | Type | Modifiers
-------------+--------------------------------+--------------------------------------------------------------------------------------
id | integer | not null default nextval(('"friends"."cell_forum_topic_id_seq"'::text)::regclass)
categoryid | integer | not null
topic | character varying | not null
content | character varying | not null
skyid | integer | not null
addtime | timestamp(0) without time zone | default now()
reference | integer | default 0
restore | integer | default 0
restoretime | timestamp(0) without time zone | default now()
locked | boolean | default false
settop | boolean | default false
hidden | boolean | default false
feature | boolean | default false
picid | integer | default 29249
managerid | integer |
imageid | integer | default 0
pass | boolean | default false
ischeck | boolean |
Indexes:
"cell_forum_topic_pkey" PRIMARY KEY, btree (id)
"idx_cell_forum_topic_1" btree (categoryid, settop, hidden, restoretime, skyid)
"idx_cell_forum_topic_2" btree (categoryid, hidden, restoretime, skyid)
"idx_cell_forum_topic_3" btree (categoryid, hidden, restoretime)
"idx_cell_forum_topic_4" btree (categoryid, hidden, restore)
"idx_cell_forum_topic_5" btree (categoryid, hidden, restoretime, feature)
"idx_cell_forum_topic_6" btree (categoryid, settop, hidden, restoretime)
Explain analyze
mapfriends=> explain analyze SELECT id FROM mapfriends.cell_forum_topic
mapfriends-> join (SELECT topicid FROM mapfriends.cell_forum_item WHERE skyid=103230293 GROUP BY topicid) as tmp
mapfriends-> on mapfriends.cell_forum_topic.id=tmp.topicid
mapfriends-> where categoryid=29 AND hidden=false ORDER BY restoretime DESC LIMIT 10 OFFSET 0;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=1446.89..1446.90 rows=2 width=12) (actual time=18016.006..18016.013 rows=10 loops=1)
-> Sort (cost=1446.89..1446.90 rows=2 width=12) (actual time=18016.001..18016.002 rows=10 loops=1)
Sort Key: cell_forum_topic.restoretime
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=1438.02..1446.88 rows=2 width=12) (actual time=16988.492..18015.869 rows=20 loops=1)
-> HashAggregate (cost=1438.02..1438.04 rows=2 width=4) (actual time=15446.735..15447.243 rows=610 loops=1)
-> Index Scan using cell_forum_item_idx_skyid on cell_forum_item (cost=0.00..1434.22 rows=1520 width=4) (actual time=302.378..15429.782 rows=7133 loops=1)
Index Cond: (skyid = 103230293)
-> Index Scan using cell_forum_topic_pkey on cell_forum_topic (cost=0.00..4.40 rows=1 width=12) (actual time=4.210..4.210 rows=0 loops=610)
Index Cond: (cell_forum_topic.id = cell_forum_item.topicid)
Filter: ((NOT cell_forum_topic.hidden) AND (cell_forum_topic.categoryid = 29))
Total runtime: 18019.461 ms
Could you give us some more information about the tables (the statistics) and the configuration?
SELECT version();
SELECT category, name, setting FROM pg_settings WHERE name IN('effective_cache_size', 'enable_seqscan', 'shared_buffers');
SELECT * FROM pg_stat_user_tables WHERE relname IN('cell_forum_topic', 'cell_forum_item');
SELECT * FROM pg_stat_user_indexes WHERE relname IN('cell_forum_topic', 'cell_forum_item');
SELECT * FROM pg_stats WHERE tablename IN('cell_forum_topic', 'cell_forum_item');
And before getting this data, use ANALYZE.
It looks like you have a problem with an index; this is where the query spends all its time:
-> Index Scan using cell_forum_item_idx_skyid on cell_forum_item
   (cost=0.00..1434.22 rows=1520 width=4) (actual time=302.378..15429.782 rows=7133 loops=1)
If you use VACUUM FULL on a regular basis (NOT RECOMMENDED!), index bloat might be your problem. A REINDEX might be a good idea, just to be sure:
REINDEX TABLE cell_forum_item;
And talking about indexes, you can drop a couple of them; these are redundant:
"idx_cell_forum_topic_6" btree (categoryid, settop, hidden, restoretime)
"idx_cell_forum_topic_3" btree (categoryid, hidden, restoretime)
These are prefixes of idx_cell_forum_topic_1 and idx_cell_forum_topic_2, which contain the same data, so the database can use those instead.
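For example (schema-qualify the names if needed):
DROP INDEX idx_cell_forum_topic_3;
DROP INDEX idx_cell_forum_topic_6;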
It looks like you have a couple of problems:
1. Autovacuum is turned off or it's way behind. The last autovacuum was on 2010-12-02, and you have 256734 dead tuples in one table and 451430 dead ones in the other. You have to do something about this; it is a serious problem.
2. When autovacuum is working again, you have to do a VACUUM FULL and a REINDEX to force a table rewrite and get rid of all the empty space in your tables.
3. After fixing the vacuum problem, you have to ANALYZE as well: the database expects 1520 results but it gets 7133. This could be a problem with statistics; maybe you have to increase the statistics target.
4. The query itself needs some rewriting as well: it gets 7133 results but it needs only 610, so over 90% of the results are thrown away, and getting those 7133 takes a lot of time, over 15 seconds. Get rid of the subquery by using a JOIN without the GROUP BY, or use EXISTS, also without the GROUP BY (see the sketch below).
But first get autovacuum back on track, before you get new or other problems.
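A sketch of the suggested EXISTS rewrite (intended to return the same rows as the original query, without the GROUP BY subquery):
SELECT t.id
FROM mapfriends.cell_forum_topic t
WHERE t.categoryid = 29
  AND t.hidden = false
  AND EXISTS (SELECT 1
              FROM mapfriends.cell_forum_item i
              WHERE i.topicid = t.id
                AND i.skyid = 103230293)
ORDER BY t.restoretime DESC
LIMIT 10 OFFSET 0;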
The problem isn't due to a lack of query plan caching, but most likely due to the choice of plan, which in turn comes from a lack of appropriate indexes.