Slow PostgreSQL query if WHERE clause matches no rows

I have a simple table in a PostgreSQL 9.0.3 database that holds data polled from a wind turbine controller. Each row represents the value of a particular sensor at a particular time. Currently the table has around 90M rows:
wtdata=> \d integer_data
Table "public.integer_data"
Column | Type | Modifiers
--------+--------------------------+-----------
date | timestamp with time zone | not null
name | character varying(64) | not null
value | integer | not null
Indexes:
"integer_data_pkey" PRIMARY KEY, btree (date, name)
"integer_data_date_idx" btree (date)
"integer_data_name_idx" btree (name)
One query that I need is to find the last time that a variable was updated:
select max(date) from integer_data where name = '<name of variable>';
This query works fine when searching for a variable that exists in the table:
wtdata=> select max(date) from integer_data where name = 'STATUS_OF_OUTPUTS_UINT16';
max
------------------------
2011-04-11 02:01:40-05
(1 row)
However, if I try to search for a variable that doesn't exist in the table, the query hangs (or takes longer than I have patience for):
select max(date) from integer_data where name = 'Message';
I've let the query run for hours and sometimes days with no end in sight. There are no rows in the table with name = 'Message':
wtdata=> select count(*) from integer_data where name = 'Message';
count
-------
0
(1 row)
I don't understand why one query is fast and the other takes forever. Is the query somehow being forced to scan the entire table for some reason?
wtdata=> explain select max(date) from integer_data where name = 'Message';
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Result  (cost=13.67..13.68 rows=1 width=0)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..13.67 rows=1 width=8)
           ->  Index Scan Backward using integer_data_pkey on integer_data  (cost=0.00..6362849.53 rows=465452 width=8)
                 Index Cond: ((date IS NOT NULL) AND ((name)::text = 'Message'::text))
(5 rows)
Here's the query plan for a fast query:
wtdata=> explain select max(date) from integer_data where name = 'STATUS_OF_OUTPUTS_UINT16';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Result  (cost=4.64..4.65 rows=1 width=0)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..4.64 rows=1 width=8)
           ->  Index Scan Backward using integer_data_pkey on integer_data  (cost=0.00..16988170.38 rows=3659570 width=8)
                 Index Cond: ((date IS NOT NULL) AND ((name)::text = 'STATUS_OF_OUTPUTS_UINT16'::text))
(5 rows)

Change the primary key to (name, date). The plans above show why: to compute max(date), PostgreSQL walks the (date, name) primary key backward from the newest date and returns the first entry whose name matches. For a name that occurs frequently, a match turns up almost immediately; for a name that never occurs, the scan has to visit all ~90M index entries before it can give up. With name as the leading column, the planner can descend directly to that name's section of the index (or find it empty) in a handful of page reads.
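A sketch of the two usual fixes (assuming the lock needed to rebuild the key is affordable; the index name below is made up):

-- Option 1: rebuild the primary key with name leading
BEGIN;
ALTER TABLE integer_data DROP CONSTRAINT integer_data_pkey;
ALTER TABLE integer_data ADD PRIMARY KEY (name, date);
COMMIT;

-- Option 2: keep the existing key and add a dedicated index;
-- this also makes integer_data_name_idx redundant
CREATE INDEX integer_data_name_date_idx ON integer_data (name, date);

Either way, max(date) for a given name becomes a single index descent, whether or not the name exists.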

Related

Why is a MAX query with an equality filter on one other column so slow in Postgresql?

I'm running into an issue in PostgreSQL (version 9.6.10) with indexes not working to speed up a MAX query with a simple equality filter on another column. Logically it seems that a simple multicolumn index on (A, B DESC) should make the query super fast.
I can't for the life of me figure out why I can't get a query to be performant regardless of what indexes are defined.
The table definition has the following:
- A primary key foo VARCHAR PRIMARY KEY (not used in the query)
- A UUID field that is NOT NULL called bar UUID
- A sequential_id column that was created as a BIGSERIAL UNIQUE type
Here's what the relevant columns look like exactly (with names modified for privacy):
Table "public.foo"
Column | Type | Modifiers
----------------------+--------------------------+--------------------------------------------------------------------------------
foo_uid | character varying | not null
bar_uid | uuid | not null
sequential_id | bigint | not null default nextval('foo_sequential_id_seq'::regclass)
Indexes:
"foo_pkey" PRIMARY KEY, btree (foo_uid)
"foo_bar_uid_sequential_id_idx", btree (bar_uid, sequential_id DESC)
"foo_sequential_id_key" UNIQUE CONSTRAINT, btree (sequential_id)
Despite the index on (bar_uid, sequential_id DESC) listed above, the following query ends up doing a backward scan of the sequential_id unique index and takes 100-300 ms with a few million rows in the database.
The Query (get the max sequential_id for a given bar_uid):
SELECT MAX(sequential_id)
FROM foo
WHERE bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f';
The EXPLAIN ANALYZE result doesn't use the proper index. Also, for some reason it checks if sequential_id IS NOT NULL even though it's declared as not null.
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=0.75..0.76 rows=1 width=8) (actual time=321.110..321.110 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.43..0.75 rows=1 width=8) (actual time=321.106..321.106 rows=1 loops=1)
           ->  Index Scan Backward using foo_sequential_id_key on foo  (cost=0.43..98936.43 rows=308401 width=8) (actual time=321.106..321.106 rows=1 loops=1)
                 Index Cond: (sequential_id IS NOT NULL)
                 Filter: (bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'::uuid)
                 Rows Removed by Filter: 920761
 Planning time: 0.196 ms
 Execution time: 321.127 ms
(9 rows)
I can add a seemingly unnecessary GROUP BY to this query, and that speeds it up a bit, but it's still really slow for a query that should be near instantaneous with indexes defined:
SELECT MAX(sequential_id)
FROM foo
WHERE bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'
GROUP BY bar_uid;
The EXPLAIN (ANALYZE, BUFFERS) result:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=8510.54..65953.61 rows=6 width=24) (actual time=234.529..234.530 rows=1 loops=1)
   Group Key: bar_uid
   Buffers: shared hit=1 read=11909
   ->  Bitmap Heap Scan on foo  (cost=8510.54..64411.55 rows=308401 width=24) (actual time=65.259..201.969 rows=309023 loops=1)
         Recheck Cond: (bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'::uuid)
         Heap Blocks: exact=10385
         Buffers: shared hit=1 read=11909
         ->  Bitmap Index Scan on foo_bar_uid_sequential_id_idx  (cost=0.00..8433.43 rows=308401 width=0) (actual time=63.549..63.549 rows=309023 loops=1)
               Index Cond: (bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'::uuid)
               Buffers: shared read=1525
 Planning time: 3.067 ms
 Execution time: 234.589 ms
(12 rows)
Does anyone have any idea what's blocking this query from being on the order of 10 milliseconds? This should logically be instantaneous with the right index defined. It should only require the time to follow links to the leaf value in the B-Tree.
Someone asked:
What do you get for SELECT * FROM pg_stats WHERE tablename = 'foo' and attname = 'bar_uid';?
-[ RECORD 1 ]----------+---------------------------------------------------------------------------------
schemaname             | public
tablename              | foo
attname                | bar_uid
inherited              | f
null_frac              | 0
avg_width              | 16
n_distinct             | 6
most_common_vals       | {fa61424d-389f-4e75-ba2d-b77e6bb8491f,5c5dcae9-1b7e-4413-99a1-62fde2b89c32,50b1e842-fc32-4c2c-b00f-4a17c3c1c5fa,7ff1999c-c0ea-b700-343f-9a737f6ad659,f667b353-e199-4890-9ffd-4940ea11fe2c,b24ce968-29fd-4587-ba1f-227036ee3135}
most_common_freqs      | {0.203733,0.203167,0.201567,0.195867,0.1952,0.000466667}
histogram_bounds       |
correlation            | -0.158093
most_common_elems      |
most_common_elem_freqs |
elem_count_histogram   |
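No resolution is quoted here, but one rewrite commonly suggested for this plan shape (my suggestion, not from the original thread) is to spell the MAX as ORDER BY ... LIMIT 1, which gives the planner a path that walks the (bar_uid, sequential_id DESC) index directly:

SELECT sequential_id
FROM foo
WHERE bar_uid = 'fa61424d-389f-4e75-ba2d-b77e6bb8491f'
ORDER BY sequential_id DESC
LIMIT 1;

Whether this flips the plan still depends on the cost estimates; with n_distinct = 6 and the heavy skew the pg_stats row shows, raising the statistics target on bar_uid is the other lever worth trying.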

Will selecting from table 1 into a temp table before joining with table 2 help a query run faster?

I have table 1 with 50,000,000 rows
I left join it to table 2 with 100,000,000 rows
The query runs slowly. That doesn't surprise me as both tables are large.
When I examine table 1 I see that I only need 39,000 rows from it.
If I were to select those 39,000 rows into a temp table, then join the temp table with table 2, would it help my query run faster?
Does Postgresql somehow optimize my query to a similar effect?
The fields I join on are indexed, but some of my where clause fields are not indexed.
My table indexing, counts, and current slow query are below.
Thanks!
iii=> select * from pg_indexes where tablename = 'table1';
schemaname | tablename | indexname | tablespace | indexdef
------------+------------+---------------------------------+------------+-------------------------------------------------------------------------------------------
schema1 | table1 | idx_table1_ind1 | | CREATE INDEX idx_table1_ind1 ON table1 USING btree(field2)
schema1 | table1 | pk_table1 | | CREATE UNIQUE INDEX pk_table1 ON table1 USING btree (id)
(5 rows)
iii=> select * from pg_indexes where tablename = 'table2';
schemaname | tablename | indexname | tablespace | indexdef
------------+-----------+---------------------------------------+------------+--------------------------------------------------------------------------------------------------------
schema2 | table2 | idx_table2_ind1 | | CREATE INDEX idx_table2_ind1 ON table2 USING btree (field3)
schema2 | table2 | idx_table2_ind3 | | CREATE INDEX idx_table2_ind3 ON table2 USING btree (field6)
schema2 | table2 | pk_table2 | | CREATE UNIQUE INDEX pk_table2 ON table2 USING btree (id)
(5 rows)
iii=> select count(id) from table1;
count
----------
50,442,468
(1 row)
iii=> select count(id) from table2;
count
-----------
107,978,483
(1 row)
select
table2.field1,
count(*)
from table1
left join table2
on table1.field2 = table2.field3
where 1=1
and extract(year from cast(table1.field4 as date)) = 2018
and extract(month from cast(table1.field4 as date)) = 1
and table1.field5 = 'foo'
and table2.field6 = 'bar'
group by table2.field1
Explain results:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2945968.78..2945968.88 rows=10 width=40) (actual time=22526.520..22526.543 rows=10 loops=1)
   ->  HashAggregate  (cost=2945968.77..2945974.93 rows=616 width=40) (actual time=22526.515..22526.531 rows=11 loops=1)
         Group Key: varfield.field_content
         ->  Nested Loop  (cost=0.57..2945965.69 rows=616 width=40) (actual time=15151.373..22343.986 rows=102148 loops=1)
               ->  Seq Scan on circ_trans  (cost=0.00..2870045.85 rows=59 width=8) (actual time=15151.324..17935.922 rows=102147 loops=1)
                     Filter: ((item_agency_code_num = 9) AND (date_part('year'::text, ((transaction_gmt)::date)::timestamp without time zone) = 2018::double precision) AND (date_part('month'::text, ((transaction_gmt)::date)::timestamp without time zone) = 1::double precision))
                     Rows Removed by Filter: 50371955
               ->  Index Scan using idx_varfield_record_id on varfield  (cost=0.57..1286.68 rows=10 width=48) (actual time=0.014..0.040 rows=1 loops=102147)
                     Index Cond: (record_id = circ_trans.bib_record_id)
                     Filter: ((marc_tag)::text = '245'::text)
                     Rows Removed by Filter: 26
 Planning time: 0.528 ms
 Execution time: 22527.076 ms
(13 rows)
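No answer is quoted for this question, but the temp-table approach it describes would look roughly like this (a sketch using the anonymized names above; I've also rewritten the extract() filters as a plain date range, since the plan shows them forcing a 50M-row seq scan, and a range predicate can use an index on field4, assuming field4 is a date or timestamp):

CREATE TEMP TABLE table1_subset AS
SELECT field2
FROM table1
WHERE field4 >= '2018-01-01' AND field4 < '2018-02-01'
  AND field5 = 'foo';

ANALYZE table1_subset;  -- give the planner real row counts for the ~39,000-row subset

SELECT table2.field1, count(*)
FROM table1_subset
JOIN table2 ON table1_subset.field2 = table2.field3
WHERE table2.field6 = 'bar'
GROUP BY table2.field1;

The WHERE clause on table2.field6 already turns the original LEFT JOIN into an inner join, so a plain JOIN is used here.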

Limit slows down my postgres query

Hi, I have a simple query on a single table which runs pretty fast, but I want to page my results, and the LIMIT slows the SELECT down incredibly. The table contains about 80 million rows. I'm on Postgres 9.2.
Without LIMIT it takes 330ms and returns 2100 rows
EXPLAIN SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc
 Sort  (cost=156599.71..156622.43 rows=45438 width=108)
   Sort Key: "time"
   ->  Bitmap Heap Scan on "interval"  (cost=1608.05..155896.71 rows=45438 width=108)
         Recheck Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
         ->  Bitmap Index Scan on interval_username  (cost=0.00..1605.77 rows=45438 width=0)
               Index Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
EXPLAIN ANALYZE SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc
 Sort  (cost=156599.71..156622.43 rows=45438 width=108) (actual time=1.734..1.887 rows=2131 loops=1)
   Sort Key: id
   Sort Method: quicksort  Memory: 396kB
   ->  Bitmap Heap Scan on "interval"  (cost=1608.05..155896.71 rows=45438 width=108) (actual time=0.425..0.934 rows=2131 loops=1)
         Recheck Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
         ->  Bitmap Index Scan on interval_username  (cost=0.00..1605.77 rows=45438 width=0) (actual time=0.402..0.402 rows=2131 loops=1)
               Index Cond: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
 Total runtime: 2.065 ms
With LIMIT it takes several minutes (I never waited for it to end):
EXPLAIN SELECT * from interval where username='1228321f131084766f3b0c6e40bc5edc41d4677e' order by time desc LIMIT 10
 Limit  (cost=0.00..6693.99 rows=10 width=108)
   ->  Index Scan Backward using interval_time on "interval"  (cost=0.00..30416156.03 rows=45438 width=108)
         Filter: ((username)::text = '1228321f131084766f3b0c6e40bc5edc41d4677e'::text)
Table definition
-- Table: "interval"
-- DROP TABLE "interval";
CREATE TABLE "interval"
(
uuid character varying(255) NOT NULL,
deleted boolean NOT NULL,
id bigint NOT NULL,
"interval" bigint NOT NULL,
"time" timestamp without time zone,
trackerversion character varying(255),
username character varying(255),
CONSTRAINT interval_pkey PRIMARY KEY (uuid),
CONSTRAINT fk_272h71b2gfyov9fwnksyditdd FOREIGN KEY (username)
REFERENCES appuser (panelistcode) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE CASCADE,
CONSTRAINT uk_hyi5iws50qif6jwky9xcch3of UNIQUE (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE "interval"
OWNER TO postgres;
-- Index: interval_time
-- DROP INDEX interval_time;
CREATE INDEX interval_time
ON "interval"
USING btree
("time");
-- Index: interval_username
-- DROP INDEX interval_username;
CREATE INDEX interval_username
ON "interval"
USING btree
(username COLLATE pg_catalog."default");
-- Index: interval_uuid
-- DROP INDEX interval_uuid;
CREATE INDEX interval_uuid
ON "interval"
USING btree
(uuid COLLATE pg_catalog."default");
Further results
SELECT n_distinct FROM pg_stats WHERE tablename='interval' AND attname='username';
n_distinct=1460
SELECT AVG(length) FROM (SELECT username, COUNT(*) AS length FROM interval GROUP BY username) as freq;
45786.022605591910
SELECT COUNT(*) FROM interval WHERE username='1228321f131084766f3b0c6e40bc5edc41d4677e';
2131
The planner expects 45438 rows for username '1228321f131084766f3b0c6e40bc5edc41d4677e', while in reality there are only 2131 such rows, so it thinks it will find the 10 rows you want faster by scanning backward through the interval_time index.
Try increasing the stats on the username column and see whether the query plan will change.
ALTER TABLE interval ALTER COLUMN username SET STATISTICS 100;
ANALYZE interval;
You can try different values of statistics up to 10000.
If you are still not satisfied with the plan, and you are sure that you can do better than the planner and know what you are doing, then you can easily keep an index from being used by wrapping the indexed expression in an operation that does not change its value.
For example, instead of ORDER BY time, you can use ORDER BY time + '0 seconds'::interval; any index on the stored value of time is then bypassed. For integer values you can multiply by 1, and so on.
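Applied to the query above, the trick looks like this (a sketch; adding a zero interval leaves the sort order unchanged but makes the interval_time index unusable for ordering):

SELECT * FROM "interval"
WHERE username = '1228321f131084766f3b0c6e40bc5edc41d4677e'
ORDER BY "time" + '0 seconds'::interval DESC
LIMIT 10;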
The page http://thebuild.com/blog/2014/11/18/when-limit-attacks/ showed that I could force Postgres to do better by using a CTE: in 9.2 a CTE is an optimization fence, so the planner materializes the inner query (using the username index) before the ORDER BY and LIMIT are applied.
WITH inner_query AS (SELECT * from interval where username='7823721a3eb9243be63c6c3a13dffee44753cda6')
SELECT * FROM inner_query order by time desc LIMIT 10;

Slow postgres text column query

I have a btree index on a text column which holds a status token. Queries including this field run 100x slower than queries without it - what can I do to speed it up? The column has rather low cardinality. I've tried hash, btree with text_pattern_ops, and partial indexes; the collation is UTF-8. Queries without the status index run in about the same time...
db=# show lc_collate;
lc_collate
-------------
en_US.UTF-8
(1 row)
db=# drop index job_status_idx;
DROP INDEX
db=# CREATE INDEX job_status_idx ON job(status text_pattern_ops);
CREATE INDEX
db=# select status, count(*) from job group by 1;
status | count
-------------+--------
pending | 365027
booked | 37515
submitted | 20783
cancelled | 191707
negotiating | 30
completed | 241339
canceled | 56
(7 rows)
db=# explain analyze SELECT key_ FROM "job" WHERE active = true and start > '2014-06-15T19:23:23.691670'::timestamp and status = 'completed' ORDER BY start DESC OFFSET 450 LIMIT 150;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5723.07..7630.61 rows=150 width=51) (actual time=634.978..638.086 rows=150 loops=1)
   ->  Index Scan Backward using job_start_idx on job  (cost=0.42..524054.39 rows=41209 width=51) (actual time=625.776..637.023 rows=600 loops=1)
         Index Cond: (start > '2014-06-15 19:23:23.69167'::timestamp without time zone)
         Filter: (active AND (status = 'completed'::text))
         Rows Removed by Filter: 94866
Total runtime: 638.358 ms
(6 rows)
db=# explain analyze SELECT key_ FROM "job" WHERE active = true and start > '2014-06-15T19:23:23.691670'::timestamp ORDER BY start DESC OFFSET 450 LIMIT 150;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1585.61..2114.01 rows=150 width=51) (actual time=4.620..6.333 rows=150 loops=1)
   ->  Index Scan Backward using job_start_idx on job  (cost=0.42..523679.58 rows=148661 width=51) (actual time=0.080..5.271 rows=600 loops=1)
         Index Cond: (start > '2014-06-15 19:23:23.69167'::timestamp without time zone)
         Filter: active
Total runtime: 6.584 ms
(5 rows)
This is your query:
SELECT key_
FROM "job"
WHERE active = true and
start > '2014-06-15T19:23:23.691670'::timestamp and
status = 'completed'
ORDER BY start DESC
OFFSET 450 LIMIT 150;
The index on status alone is not very selective. I would suggest a composite index:
CREATE INDEX job_status_idx ON job (status text_pattern_ops, active, start, key_);
This is a covering index and it should do a better job of matching the where clause.
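Since ORDER BY start DESC is what drives the plan, an index ordered to match it is also worth trying (my own variant, not part of the original answer):

CREATE INDEX job_status_start_idx ON job (status, start DESC) WHERE active;

With equality on status, the entries for status = 'completed' are already in start DESC order within this index, so the OFFSET/LIMIT can be satisfied by walking it without a sort.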

A slow SQL statement, is there any way to optimize it?

Our application has a very slow statement; it takes more than 11 seconds, so I want to know whether there is any way to optimize it.
The SQL statement
SELECT id FROM mapfriends.cell_forum_topic WHERE id in (
SELECT topicid FROM mapfriends.cell_forum_item WHERE skyid=103230293 GROUP BY topicid )
AND categoryid=29 AND hidden=false ORDER BY restoretime DESC LIMIT 10 OFFSET 0;
id
---------
2471959
2382296
1535967
2432006
2367281
2159706
1501759
1549304
2179763
1598043
(10 rows)
Time: 11444.976 ms
Plan
friends=> explain SELECT id FROM friends.cell_forum_topic WHERE id in (
friends(> SELECT topicid FROM friends.cell_forum_item WHERE skyid=103230293 GROUP BY topicid)
friends-> AND categoryid=29 AND hidden=false ORDER BY restoretime DESC LIMIT 10 OFFSET 0;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1443.15..1443.15 rows=2 width=12)
   ->  Sort  (cost=1443.15..1443.15 rows=2 width=12)
         Sort Key: cell_forum_topic.restoretime
         ->  Nested Loop  (cost=1434.28..1443.14 rows=2 width=12)
               ->  HashAggregate  (cost=1434.28..1434.30 rows=2 width=4)
                     ->  Index Scan using cell_forum_item_idx_skyid on cell_forum_item  (cost=0.00..1430.49 rows=1516 width=4)
                           Index Cond: (skyid = 103230293)
               ->  Index Scan using cell_forum_topic_pkey on cell_forum_topic  (cost=0.00..4.40 rows=1 width=12)
                     Index Cond: (cell_forum_topic.id = cell_forum_item.topicid)
                     Filter: ((NOT cell_forum_topic.hidden) AND (cell_forum_topic.categoryid = 29))
(10 rows)
Time: 1.109 ms
Indexes
friends=> \d cell_forum_item
Table "friends.cell_forum_item"
Column | Type | Modifiers
---------+--------------------------------+--------------------------------------------------------------
id | integer | not null default nextval('cell_forum_item_id_seq'::regclass)
topicid | integer | not null
skyid | integer | not null
content | character varying(200) |
addtime | timestamp(0) without time zone | default now()
ischeck | boolean |
Indexes:
"cell_forum_item_pkey" PRIMARY KEY, btree (id)
"cell_forum_item_idx" btree (topicid, skyid)
"cell_forum_item_idx_1" btree (topicid, id)
"cell_forum_item_idx_skyid" btree (skyid)
friends=> \d cell_forum_topic
Table "friends.cell_forum_topic"
Column | Type | Modifiers
-------------+--------------------------------+--------------------------------------------------------------------------------------
id | integer | not null default nextval(('"friends"."cell_forum_topic_id_seq"'::text)::regclass)
categoryid | integer | not null
topic | character varying | not null
content | character varying | not null
skyid | integer | not null
addtime | timestamp(0) without time zone | default now()
reference | integer | default 0
restore | integer | default 0
restoretime | timestamp(0) without time zone | default now()
locked | boolean | default false
settop | boolean | default false
hidden | boolean | default false
feature | boolean | default false
picid | integer | default 29249
managerid | integer |
imageid | integer | default 0
pass | boolean | default false
ischeck | boolean |
Indexes:
"cell_forum_topic_pkey" PRIMARY KEY, btree (id)
"idx_cell_forum_topic_1" btree (categoryid, settop, hidden, restoretime, skyid)
"idx_cell_forum_topic_2" btree (categoryid, hidden, restoretime, skyid)
"idx_cell_forum_topic_3" btree (categoryid, hidden, restoretime)
"idx_cell_forum_topic_4" btree (categoryid, hidden, restore)
"idx_cell_forum_topic_5" btree (categoryid, hidden, restoretime, feature)
"idx_cell_forum_topic_6" btree (categoryid, settop, hidden, restoretime)
Explain analyze
mapfriends=> explain analyze SELECT id FROM mapfriends.cell_forum_topic
mapfriends-> join (SELECT topicid FROM mapfriends.cell_forum_item WHERE skyid=103230293 GROUP BY topicid) as tmp
mapfriends-> on mapfriends.cell_forum_topic.id=tmp.topicid
mapfriends-> where categoryid=29 AND hidden=false ORDER BY restoretime DESC LIMIT 10 OFFSET 0;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1446.89..1446.90 rows=2 width=12) (actual time=18016.006..18016.013 rows=10 loops=1)
   ->  Sort  (cost=1446.89..1446.90 rows=2 width=12) (actual time=18016.001..18016.002 rows=10 loops=1)
         Sort Key: cell_forum_topic.restoretime
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop  (cost=1438.02..1446.88 rows=2 width=12) (actual time=16988.492..18015.869 rows=20 loops=1)
               ->  HashAggregate  (cost=1438.02..1438.04 rows=2 width=4) (actual time=15446.735..15447.243 rows=610 loops=1)
                     ->  Index Scan using cell_forum_item_idx_skyid on cell_forum_item  (cost=0.00..1434.22 rows=1520 width=4) (actual time=302.378..15429.782 rows=7133 loops=1)
                           Index Cond: (skyid = 103230293)
               ->  Index Scan using cell_forum_topic_pkey on cell_forum_topic  (cost=0.00..4.40 rows=1 width=12) (actual time=4.210..4.210 rows=0 loops=610)
                     Index Cond: (cell_forum_topic.id = cell_forum_item.topicid)
                     Filter: ((NOT cell_forum_topic.hidden) AND (cell_forum_topic.categoryid = 29))
 Total runtime: 18019.461 ms
Could you give us some more information about the tables (the statistics) and the configuration?
SELECT version();
SELECT category, name, setting FROM pg_settings WHERE name IN('effective_cache_size', 'enable_seqscan', 'shared_buffers');
SELECT * FROM pg_stat_user_tables WHERE relname IN('cell_forum_topic', 'cell_forum_item');
SELECT * FROM pg_stat_user_indexes WHERE relname IN('cell_forum_topic', 'cell_forum_item');
SELECT * FROM pg_stats WHERE tablename IN('cell_forum_topic', 'cell_forum_item');
And before getting this data, use ANALYZE.
It looks like you have a problem with an index; this is where the query spends all its time:
-> Index Scan using cell_forum_item_idx_skyid on cell_forum_item (cost=0.00..1434.22 rows=1520 width=4) (actual time=302.378..15429.782 rows=7133 loops=1)
If you use VACUUM FULL on a regular basis (NOT RECOMMENDED!), index bloat might be your problem. A REINDEX might be a good idea, just to be sure:
REINDEX TABLE cell_forum_item;
And talking about indexes, you can drop a couple of them; these are redundant:
"idx_cell_forum_topic_6" btree (categoryid, settop, hidden, restoretime)
"idx_cell_forum_topic_3" btree (categoryid, hidden, restoretime)
The remaining indexes cover the same leading columns, so the database can use them for the same queries.
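Concretely, dropping them looks like this (schema-qualified, since the tables live in the friends schema):

DROP INDEX friends.idx_cell_forum_topic_3;
DROP INDEX friends.idx_cell_forum_topic_6;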
It looks like you have a couple of problems:
- autovacuum is turned off or it's way behind. The last autovacuum was on 2010-12-02 and you have 256734 dead tuples in one table and 451430 dead ones in the other. You have to do something about this; it is a serious problem.
- When autovacuum is working again, you have to do a VACUUM FULL and a REINDEX to force a table rewrite and get rid of all the empty space in your tables.
- After fixing the vacuum problem, you have to ANALYZE as well: the database expects 1520 results but it gets 7133. This could be a problem with statistics; maybe you have to increase the STATISTICS target.
- The query itself needs some rewriting as well: it gets 7133 results but it needs only 610. Over 90% of the results are thrown away, and getting those 7133 takes a lot of time, over 15 seconds. Get rid of the subquery by using a JOIN without the GROUP BY, or use EXISTS, also without the GROUP BY (see the sketch below).
But first get autovacuum back on track, before you get new or other problems.
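A sketch of the EXISTS rewrite suggested above (my rendering, not code from the original answer):

SELECT t.id
FROM mapfriends.cell_forum_topic t
WHERE EXISTS (SELECT 1
              FROM mapfriends.cell_forum_item i
              WHERE i.topicid = t.id
                AND i.skyid = 103230293)
  AND t.categoryid = 29
  AND t.hidden = false
ORDER BY t.restoretime DESC
LIMIT 10 OFFSET 0;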
The problem isn't due to a lack of query-plan caching but most likely to the choice of plan, caused by the lack of appropriate indexes.