I have a master/detail table situation. For every entry in the master table I have dozens in the detail table.
Let's say these are my tables:
+-----------------------
| Master
+-----------------------
| master_key integer,
| insert_date timestamp
+-----------------------
+-----------------------
| Detail
+-----------------------
| detail_key integer,
| master_key integer
| quantity numeric
| amount numeric
+-----------------------
And my most-used query is something like:
SELECT extract(year from insert_date) AS Insert_Year,
       extract(month from insert_date) AS Insert_Month,
       sum(quantity) AS Quantity,
       sum(amount) AS Amount
FROM Master, Detail
WHERE (amount IS NOT NULL)
  AND (insert_date <= '2016-12-31')
  AND (insert_date >= '2015-01-01')
  AND (Detail.master_key = Master.master_key)
GROUP BY Insert_Year, Insert_Month
ORDER BY Insert_Year ASC, Insert_Month ASC;
This query has become too slow because there are tons of data for many years in both tables.
Of course I have indexes on both tables, and EXPLAIN ANALYZE tells me that the index scan takes more than 80% of the whole execution time.
"Sort (cost=44013.52..44013.53 rows=1 width=19) (actual time=17073.129..17073.129 rows=16 loops=1)"
" Sort Key: (date_part('year'::text, master.insert_date)), (date_part('month'::text, master.insert_date))"
" Sort Method: quicksort Memory: 26kB"
" -> HashAggregate (cost=44013.49..44013.51 rows=1 width=19) (actual time=17073.046..17073.053 rows=16 loops=1)"
" Group Key: date_part('year'::text, master.insert_date), date_part('month'::text, master.insert_date)"
" -> Nested Loop (cost=0.43..43860.32 rows=15317 width=19) (actual time=0.056..15951.178 rows=843647 loops=1)"
" -> Seq Scan on master (cost=0.00..18881.38 rows=3127 width=12) (actual time=0.027..636.202 rows=182338 loops=1)"
" Filter: ((date(insert_date) >= '2015-01-01'::date) AND (date(insert_date) <= '2016-12-31'::date))"
" Rows Removed by Filter: 443031"
" -> Index Scan using idx_detail_master_key on detail (cost=0.43..7.89 rows=7 width=15) (actual time=0.055..0.077 rows=5 loops=182338)"
" Index Cond: (master_key = master.master_key)"
" Filter: (amount IS NOT NULL)"
" Rows Removed by Filter: 2"
"Planning time: 105.317 ms"
"Execution time: 17073.396 ms"
So my idea was to reduce the index sizes by making them partial. In most cases only the data of the last two years is queried.
So I tried something like this:
CREATE INDEX idx_detail_table_master_keys
ON detail (master_key)
WHERE master_key in (SELECT master_key FROM master WHERE (extract( year from insert_date) = 2016) or (extract( year from insert_date) = 2015))
Of course this is not the final version; it is just meant as a proof of concept, and it failed. pgAdmin tells me that I'm not allowed to use subselects in index creation.
So my question is: is it possible to create a partial index based on the data of another table?
And of course I would be thankful for any tips on speeding up constellations like this.
regards
It is not possible to create a partial index based on the data of another table, because relational databases like PostgreSQL have three ways of joining: nested loops, hash join and sort-merge. All of these methods load the tables of a join separately. Since the database optimizer decides which of those methods will be used, and also in which direction the join will be executed, it doesn't make sense to create a table index that covers data of another table. This is why you can't define such indexes. A more detailed description of these topics can be found here: http://use-the-index-luke.com/sql/join (and the following sections of the online book).
For further optimization, see the comment from Gabriel's Messanger.
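If it helps, here is a hedged sketch of the usual workaround: a partial index predicate may only reference columns of the indexed table, so the date would have to be denormalized into detail first. The column and index names added here are assumptions, and you would also need a trigger (or application logic) to keep the copied date in sync.
-- Assumption: copy insert_date into detail so a partial index can reference it directly.
ALTER TABLE detail ADD COLUMN insert_date timestamp;

UPDATE detail d
SET insert_date = m.insert_date
FROM master m
WHERE m.master_key = d.master_key;

-- Partial index covering only the recent years that are usually queried.
CREATE INDEX idx_detail_recent
    ON detail (master_key)
    WHERE insert_date >= DATE '2015-01-01';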
Related
I have a total of over 6,000,000 rows of data in this table; after filtering with the where condition, there are about 120,000 rows left.
Currently I have created two multi-column indexes:
CREATE INDEX "IDX_module_method_height" ON "events" ("module", "method", "block_height")
CREATE INDEX "IDX_module_method" ON "events" ("module", "method")
When I run the following SQL it is really fast.
The fast one:
explain analyze Select block_height from events where (module='amm' and method in ('Traded', 'LiquidityAdded')) order by block_height desc limit 500 offset 200;
"Limit (cost=2748.32..2749.57 rows=500 width=4) (actual time=51.207..51.288 rows=500 loops=1)"
" -> Sort (cost=2747.82..2757.85 rows=4010 width=4) (actual time=51.183..51.236 rows=700 loops=1)"
" Sort Key: block_height DESC"
" Sort Method: top-N heapsort Memory: 81kB"
" -> Index Only Scan using ""IDX_module_method_height"" on events (cost=0.56..2538.28 rows=4010 width=4) (actual time=0.061..35.880 rows=128860 loops=1)"
" Index Cond: ((method = ANY ('{Traded,LiquidityAdded}'::text[])) AND (module = 'amm'::text))"
" Heap Fetches: 17403"
"Planning Time: 0.212 ms"
"Execution Time: 51.344 ms"
But when I add one more selected column (e.g. data), it is really slow, and I really need the data column.
The slow one (just adds the data field to the select):
explain analyze Select block_height, data from events where (module='amm' and method in ('Traded', 'LiquidityAdded')) order by block_height desc limit 500 offset 200;
"Limit (cost=14459.53..14460.78 rows=500 width=133) (actual time=12061.968..12062.068 rows=500 loops=1)"
" -> Sort (cost=14459.03..14469.06 rows=4011 width=133) (actual time=12061.935..12062.012 rows=700 loops=1)"
" Sort Key: block_height DESC"
" Sort Method: top-N heapsort Memory: 371kB"
" -> Index Scan using "IDX_module_method" on events (cost=0.43..14249.43 rows=4011 width=133) (actual time=1.302..12014.625 rows=128860 loops=1)"
" Index Cond: (((module)::text = 'amm'::text) AND ((method)::text = ANY ('{Traded,LiquidityAdded}'::text[])))"
"Planning Time: 0.144 ms"
"Execution Time: 12063.364 ms"
Why do the selected columns affect which index is chosen? I need the data field, so how should I create my indexes to make this SQL efficient?
The reason is that it needs to do a table lookup (a heap fetch) to get the data column. Your WHERE clause can be satisfied by both indexes, and the smaller one is faster. With a single selected field it could do an INDEX ONLY SCAN; now it has to do an INDEX SCAN.
You can try to make your index fit "better":
CREATE INDEX "IDX_module_method_height_desc" ON "events" ("module", "method", "block_height" DESC)
If your table has a primary key, you could keep the table lookups small by doing the index-only scan and the LIMIT first:
SELECT block_height, data
FROM events
JOIN (
    SELECT id
    FROM events
    WHERE ...
    ORDER BY block_height DESC
    LIMIT 500 OFFSET 200
) x USING (id)
ORDER BY block_height DESC;  -- re-apply the order; the join does not preserve it
Otherwise, as others suggested, you can extend your index with INCLUDE if you use Postgres 11 or newer:
CREATE INDEX "IDX_module_method_height_desc" ON "events" ("module", "method", "block_height" DESC) INCLUDE ("data")
My table in PostgreSQL consists of 1.2M rows. I am using a compound index on (timestamp1, id13).
My query looks like this:
EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS)
select date_trunc('day', timestamp1) as day, avg(id9)
from oneindextwocolumns
where timestamp1 >= '2019-01-01 00:00:00' and timestamp1 <= '2020-12-31 00:55:00'
group by day, id13
order by day desc
The explain plan is this:
"GroupAggregate (cost=40802.98..46082.38 rows=211176 width=24) (actual time=150.115..213.289 rows=91645 loops=1)"
" Output: (date_trunc('day'::text, timestamp1)), avg(id9), id13"
" Group Key: (date_trunc('day'::text, oneindextwocolumns.timestamp1)), oneindextwocolumns.id13"
" Buffers: shared hit=5937, temp read=875 written=877"
" -> Sort (cost=40802.98..41330.92 rows=211176 width=24) (actual time=150.088..169.306 rows=210252 loops=1)"
" Output: (date_trunc('day'::text, timestamp1)), id13, id9"
" Sort Key: (date_trunc('day'::text, oneindextwocolumns.timestamp1)) DESC, oneindextwocolumns.id13"
" Sort Method: external merge Disk: 7000kB"
" Buffers: shared hit=5937, temp read=875 written=877"
" -> Index Scan using oneindextwocolumns_timestamp1_id13_idx on public.oneindextwocolumns (cost=0.43..22126.48 rows=211176 width=24) (actual time=0.020..67.899 rows=210252 loops=1)"
" Output: date_trunc('day'::text, timestamp1), id13, id9"
" Index Cond: ((oneindextwocolumns.timestamp1 >= '2019-01-01 00:00:00'::timestamp without time zone) AND (oneindextwocolumns.timestamp1 <= '2020-12-31 00:55:00'::timestamp without time zone))"
" Buffers: shared hit=5937"
"Planning Time: 0.118 ms"
"Execution Time: 217.945 ms"
How can I know whether the GROUP BY stage uses the index?
Thanks in advance!
It does not use the index to provide the order. If it did, what would be the function of the sort occurring before the aggregate? If it used the order supplied by the index to do the grouping, then the plan would show a GroupAggregate being fed directly by the Index Scan, with no Sort in between.
There is a caveat to that. In the newest versions of PostgreSQL there is also an Incremental Sort. In this, the index (or something else) supplies rows in an order which is mostly what we need, but the ties then need to be re-sorted to put them into the exact order required. This can be used, for example, if the index provides ordering by (col1) but we need ordering by (col1, col2). That is not the case here, though--your situation is not eligible for such a sort, and if one were being used, the sort node would be labelled "Incremental Sort", not just "Sort".
The planner does not reason through the fact that ordering by timestamp1 necessarily orders by date_trunc('day', timestamp1) as well. That is a missing optimization; maybe someone will implement it some day.
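A hedged workaround, not part of the original answer: you can give the planner an index whose order matches the grouping expression directly, which it may then use to feed the GroupAggregate without a separate sort. The index name is made up, and this assumes timestamp1 is a timestamp without time zone (so date_trunc is immutable and indexable) and that you can afford the extra index.
-- Expression index matching the GROUP BY keys exactly.
CREATE INDEX oneindextwocolumns_day_id13_idx
    ON oneindextwocolumns (date_trunc('day', timestamp1), id13);
Whether the planner actually prefers it depends on how selective the timestamp1 range is, so check with EXPLAIN after creating it.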
I have one table that includes about 100K rows, and it keeps growing and growing.
My queries have a bad response time, and it affects my front-end user experience.
I want to ask for your help to improve the response time from the DB.
Today PostgreSQL runs on my local machine, a MacBook Pro 13 2019 with 16 GB RAM and an i5 core.
In the future I will load this DB into Docker and run it on a better server.
What can I do to improve it for now?
Table Structure:
CREATE TABLE dots
(
dot_id INT,
site_id INT,
latitude float ( 6 ),
longitude float ( 6 ),
rsrp float ( 6 ),
dist INT,
project_id INT,
dist_from_site INT,
geom geometry,
dist_from_ref INT,
file_name VARCHAR
);
The dot_id resets after each bulk insert, i.e. per value of the file_name column.
The queries:
Query #1:
await db.query(
`select MAX(rsrp) FROM dots where site_id=$1 and ${table}=$2 and project_id = $3 and file_name ilike $4`,
[site_id, dist, project_id, filename]
);
Time for response: 200ms
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=37159.88..37159.89 rows=1 width=4) (actual time=198.416..201.762 rows=1 loops=1)
Buffers: shared hit=16165 read=16031
-> Gather (cost=37159.66..37159.87 rows=2 width=4) (actual time=198.299..201.752 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=16165 read=16031
-> Partial Aggregate (cost=36159.66..36159.67 rows=1 width=4) (actual time=179.009..179.010 rows=1 loops=3)
Buffers: shared hit=16165 read=16031
-> Parallel Seq Scan on dots (cost=0.00..36150.01 rows=3861 width=4) (actual time=122.889..178.817 rows=1088 loops=3)
Filter: (((file_name)::text ~~* 'BigFile'::text) AND (site_id = 42047) AND (dist_from_ref = 500) AND (project_id = 1))
Rows Removed by Filter: 157073
Buffers: shared hit=16165 read=16031
Planning Time: 0.290 ms
Execution Time: 201.879 ms
(14 rows)
Query #2:
await db.query(
`SELECT DISTINCT (${table}) FROM dots where site_id=$1 and project_id = $2 and file_name ilike $3 order by ${table}`,
[site_id, project_id, filename]
);
Time for response: 1100ms
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Sort (cost=41322.12..41322.31 rows=77 width=4) (actual time=1176.071..1176.077 rows=66 loops=1)
Sort Key: dist_from_ref
Sort Method: quicksort Memory: 28kB
Buffers: shared hit=16175 read=16021
-> HashAggregate (cost=41318.94..41319.71 rows=77 width=4) (actual time=1176.024..1176.042 rows=66 loops=1)
Group Key: dist_from_ref
Batches: 1 Memory Usage: 24kB
Buffers: shared hit=16175 read=16021
-> Seq Scan on dots (cost=0.00..40499.42 rows=327807 width=4) (actual time=0.423..1066.316 rows=326668 loops=1)
Filter: (((file_name)::text ~~* 'BigFile'::text) AND (site_id = 42047) AND (project_id = 1))
Rows Removed by Filter: 147813
Buffers: shared hit=16175 read=16021
Planning:
Buffers: shared hit=5 dirtied=1
Planning Time: 0.242 ms
Execution Time: 1176.125 ms
(16 rows)
Query #3:
await db.query(
`SELECT count(*) FROM dots WHERE site_id = $1 AND ${table} = $2 and project_id = $3 and file_name ilike $4`,
[site_id, dist, project_id, filename]
);
Time for response: 200ms
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=37159.88..37159.89 rows=1 width=8) (actual time=198.725..202.335 rows=1 loops=1)
Buffers: shared hit=16160 read=16036
-> Gather (cost=37159.66..37159.87 rows=2 width=8) (actual time=198.613..202.328 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=16160 read=16036
-> Partial Aggregate (cost=36159.66..36159.67 rows=1 width=8) (actual time=179.182..179.183 rows=1 loops=3)
Buffers: shared hit=16160 read=16036
-> Parallel Seq Scan on dots (cost=0.00..36150.01 rows=3861 width=0) (actual time=119.340..179.020 rows=1088 loops=3)
Filter: (((file_name)::text ~~* 'BigFile'::text) AND (site_id = 42047) AND (dist_from_ref = 500) AND (project_id = 1))
Rows Removed by Filter: 157073
Buffers: shared hit=16160 read=16036
Planning Time: 0.109 ms
Execution Time: 202.377 ms
(14 rows)
The tables do not have any indexes.
I added an index and it helped a bit:
create index idx1 on dots (site_id, project_id, file_name, dist_from_site, dist_from_ref)
OK, that's a bit too much.
An index on columns (a,b) is useful for "where a=..." and also for "where a=... and b=..." but it is not really useful for "where b=...". Creating an index with many columns uses more disk space and is slower than with fewer columns, which is a waste if the extra columns in the index don't make your queries faster. Both dist_ columns in the index will probably not be used.
Indexes are a compromise: if your table has small rows, like two integer columns, and you create an index on these two columns, then it will be as big as the table itself, so you had better be sure you need it. But in your case your rows are pretty large, at around 5 kB, and the number of rows is small, so adding an index (or several) on small int columns costs very little overhead.
So, since you very often use WHERE conditions on both site_id and project_id, you can create an index on (site_id, project_id). This will also work for a WHERE condition on site_id alone. If you sometimes use project_id alone, you can swap the order of the columns so it appears first, or even create another index.
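As a minimal sketch of that suggestion (the index names here are made up):
-- Composite index driven by the most common equality filters.
CREATE INDEX idx_dots_site_project ON dots (site_id, project_id);

-- Only if project_id is also queried on its own:
-- CREATE INDEX idx_dots_project ON dots (project_id);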
You say the cardinality of these columns is about 30, so selecting one value of site_id or project_id should hit about 1/30, or 3.3%, of the table, and selecting on both columns should hit about 0.1% of the table, if they are uncorrelated and evenly distributed. This should already result in a substantial speedup.
You could also add an index on dist_from_site, and another one on dist_from_ref, if they have good selectivity (i.e. high cardinality in those columns). Postgres can combine indexes with a bitmap index scan. But if, say, 50% of the rows in the table have the same value of dist_from_site, then an index will be useless for that value, due to not having enough selectivity.
You could also replace the previous 2-column index with two indexes, on (site_id, project_id, dist_from_site) and (site_id, project_id, dist_from_ref). You can try it and see whether it is worth the extra resources.
Also, there's the file_name column and ILIKE. ILIKE can't use an index, which means it's slow. One solution is to use an expression index:
CREATE INDEX dots_filename ON dots( lower(file_name) );
and replace your where condition with:
lower(file_name) like lower($4)
This will use the index unless the parameter $4 starts with a "%" (for prefix LIKE patterns the index may also need to be created with text_pattern_ops, unless your database uses the C collation). And if you never use the LIKE '%' wildcards, and you're just using ILIKE for a case-insensitive comparison, then you can replace LIKE with the = operator: lower(a) = lower(b) is basically a case-insensitive comparison.
Likewise, you could replace the previous 2-column index with an index on (site_id, project_id, lower(file_name)) if you often use these three together in the WHERE condition. But, as said above, it won't optimize searches on file_name alone.
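A sketch of that combined index (again, the index name is made up):
-- Equality filters first, then the case-folded file name.
CREATE INDEX idx_dots_site_project_fname
    ON dots (site_id, project_id, lower(file_name));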
Since your rows are huge, even adding 1 kB of index per row will only add 20% overhead to your table, so you can overdo it without too much trouble. So go ahead and experiment; you'll see what works best.
If I want to select 0.5% of the rows, or even 5% of the rows, from the following table via the PK, the query planner correctly chooses to use the PK index. Here is the table:
create table weather as
with numbers as(
select generate_series as id from generate_series(0,1048575))
select id,
50 + 50*sin(id) as temperature_in_f,
50 + 50*sin(id) as humidity_in_percent
from numbers;
alter table weather
add constraint pk_weather primary key(id);
vacuum analyze weather;
The stats are up-to-date, and the following query does use the PK index:
explain analyze select sum(w.id), sum(humidity_in_percent), count(*)
from weather as w
where w.id between 1 and 66720;
Suppose, however, that we need to join this table with another, much smaller, one:
create table lightnings
as
select id as weather_id
from weather
where humidity_in_percent between 99.99 and 100;
alter table lightnings
add constraint pk_lightnings
primary key(weather_id);
analyze lightnings;
Here is my join, in four logically equivalent forms:
explain analyze select sum(w.id), count(*) from weather as w
where w.humidity_in_percent between 99.99 and 100
and exists(select * from lightnings as l
where l.weather_id=w.id);
explain analyze select sum(w.id), count(*)
from weather as w
join lightnings as l
on l.weather_id=w.id
where w.humidity_in_percent between 99.99 and 100;
explain analyze select sum(w.id), count(*)
from lightnings as l
join weather as w
on l.weather_id=w.id
where w.humidity_in_percent between 99.99 and 100;
-- replaced explicit join with where clause
explain analyze select sum(w.id), count(*)
from lightnings as l, weather as w
where w.humidity_in_percent between 99.99 and 100
and l.weather_id=w.id;
Unfortunately the query planner resorts to scanning the whole weather table:
"Aggregate (cost=22645.68..22645.69 rows=1 width=4) (actual time=167.427..167.427 rows=1 loops=1)"
" -> Hash Join (cost=180.12..22645.52 rows=32 width=4) (actual time=2.500..166.444 rows=6672 loops=1)"
" Hash Cond: (w.id = l.weather_id)"
" -> Seq Scan on weather w (cost=0.00..22407.64 rows=5106 width=4) (actual time=0.013..158.593 rows=6672 loops=1)"
" Filter: ((humidity_in_percent >= 99.99::double precision) AND (humidity_in_percent <= 100::double precision))"
" Rows Removed by Filter: 1041904"
" -> Hash (cost=96.72..96.72 rows=6672 width=4) (actual time=2.479..2.479 rows=6672 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 235kB"
" -> Seq Scan on lightnings l (cost=0.00..96.72 rows=6672 width=4) (actual time=0.009..0.908 rows=6672 loops=1)"
"Planning time: 0.326 ms"
"Execution time: 167.581 ms"
The query planner's estimate of how many rows in the weather table will be selected is rows=5106. This is more or less close to the exact value of 6672. If I select this small number of rows from the weather table via id, the PK index is used. If I select the same amount via a join with another table, the query planner goes for scanning the whole table.
What am I missing?
select version()
"PostgreSQL 9.4.0"
Edit: if I remove the condition on humidity, the query planner correctly recognizes that the condition on weather.id is quite selective, and chooses to use the index on PK:
explain analyze select sum(w.id), count(*) from weather as w
where exists(select * from lightnings as l
where l.weather_id=w.id);
"Aggregate (cost=14677.84..14677.85 rows=1 width=4) (actual time=37.200..37.200 rows=1 loops=1)"
" -> Nested Loop (cost=0.42..14644.48 rows=6672 width=4) (actual time=0.022..36.189 rows=6672 loops=1)"
" -> Seq Scan on lightnings l (cost=0.00..96.72 rows=6672 width=4) (actual time=0.011..0.868 rows=6672 loops=1)"
" -> Index Only Scan using pk_weather on weather w (cost=0.42..2.17 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=6672)"
" Index Cond: (id = l.weather_id)"
" Heap Fetches: 0"
"Planning time: 0.321 ms"
"Execution time: 37.254 ms"
Yet adding a condition totally confuses the query planner.
Expecting the optimiser to use an index on the PK of the larger table implies that you expect the query to be driven from the smaller table. Of course, you know that the rows that the smaller table will join to in the larger one are the same as those selected by the predicate on it, but the optimiser does not.
Look at the line on the plan:
Hash Join (cost=180.12..22645.52 rows=32 width=4) (actual time=2.500..166.444 rows=6672 loops=1)"
It expects 32 rows to result from the join, but 6672 actually result.
Anyway, it pretty much has the option of:
A full scan on the smaller table, and an index lookup on the larger, with the predicate being used to filter out rows subsequent to the join (and expecting most of the rows to then be filtered out).
A full scan on both tables, with rows being removed by the predicate on the larger table, and a hash join of the result.
A scan of the larger table with rows being removed by the predicate, and an index lookup on the smaller table that may fail to find a value.
The second of these has been judged to be the lowest cost, and I think it is correct to do so based on the evidence it has, as hash joins are very efficient for joining many rows.
Of course it would probably be more efficient to place an index on weather(humidity_in_percent,id) in this particular case, but I suspect that this is a modified version of your real situation (the sum of the id column?) so specific advice may not be applicable.
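For reference, a sketch of that index, using only what the answer already names (the index name itself is made up):
-- Serves the humidity range predicate and carries id for the aggregate.
CREATE INDEX idx_weather_humidity_id
    ON weather (humidity_in_percent, id);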
I believe the difference you're seeing between the first query, which uses the index, and the other three, which don't, is in the WHERE clause.
In the first query, your WHERE clause is on w.id, which is indexed.
In the other three, the effective WHERE clause is on w.humidity_in_percent. I tested the following ...
create index wh_idx on weather(humidity_in_percent);
explain analyse select sum(w.id), count(*) from weather as w
where w.humidity_in_percent between 99.99 and 100
and exists(select * from lightnings as l
where l.weather_id=w.id);
and get a much better plan. I tried to post the actual plan returned, but I'm having trouble formatting it for proper display, sorry.
I am using Postgres 9.2.4.
We have a background job that imports users' emails into our system and stores them in a Postgres database table.
Below is the table:
CREATE TABLE emails
(
id serial NOT NULL,
subject text,
body text,
personal boolean,
sent_at timestamp without time zone NOT NULL,
created_at timestamp without time zone,
updated_at timestamp without time zone,
account_id integer NOT NULL,
sender_user_id integer,
sender_contact_id integer,
html text,
folder text,
draft boolean DEFAULT false,
check_for_response timestamp without time zone,
send_time timestamp without time zone,
CONSTRAINT emails_pkey PRIMARY KEY (id),
CONSTRAINT emails_account_id_fkey FOREIGN KEY (account_id)
REFERENCES accounts (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE CASCADE,
CONSTRAINT emails_sender_contact_id_fkey FOREIGN KEY (sender_contact_id)
REFERENCES contacts (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE CASCADE
)
WITH (
OIDS=FALSE
);
ALTER TABLE emails
OWNER TO paulcowan;
-- Index: emails_account_id_index
-- DROP INDEX emails_account_id_index;
CREATE INDEX emails_account_id_index
ON emails
USING btree
(account_id);
-- Index: emails_sender_contact_id_index
-- DROP INDEX emails_sender_contact_id_index;
CREATE INDEX emails_sender_contact_id_index
ON emails
USING btree
(sender_contact_id);
-- Index: emails_sender_user_id_index
-- DROP INDEX emails_sender_user_id_index;
CREATE INDEX emails_sender_user_id_index
ON emails
USING btree
(sender_user_id);
The query is further complicated because I have a view on this table where I pull in other data:
CREATE OR REPLACE VIEW email_graphs AS
SELECT emails.id, emails.subject, emails.body, emails.folder, emails.html,
emails.personal, emails.draft, emails.created_at, emails.updated_at,
emails.sent_at, emails.sender_contact_id, emails.sender_user_id,
emails.addresses, emails.read_by, emails.check_for_response,
emails.send_time, ts.ids AS todo_ids, cs.ids AS call_ids,
ds.ids AS deal_ids, ms.ids AS meeting_ids, c.comments, p.people,
atts.ids AS attachment_ids
FROM emails
LEFT JOIN ( SELECT todos.reference_email_id AS email_id,
array_to_json(array_agg(todos.id)) AS ids
FROM todos
GROUP BY todos.reference_email_id) ts ON ts.email_id = emails.id
LEFT JOIN ( SELECT calls.reference_email_id AS email_id,
array_to_json(array_agg(calls.id)) AS ids
FROM calls
GROUP BY calls.reference_email_id) cs ON cs.email_id = emails.id
LEFT JOIN ( SELECT deals.reference_email_id AS email_id,
array_to_json(array_agg(deals.id)) AS ids
FROM deals
GROUP BY deals.reference_email_id) ds ON ds.email_id = emails.id
LEFT JOIN ( SELECT meetings.reference_email_id AS email_id,
array_to_json(array_agg(meetings.id)) AS ids
FROM meetings
GROUP BY meetings.reference_email_id) ms ON ms.email_id = emails.id
LEFT JOIN ( SELECT comments.email_id,
array_to_json(array_agg(( SELECT row_to_json(r.*) AS row_to_json
FROM ( VALUES (comments.id,comments.text,comments.author_id,comments.created_at,comments.updated_at)) r(id, text, author_id, created_at, updated_at)))) AS comments
FROM comments
WHERE comments.email_id IS NOT NULL
GROUP BY comments.email_id) c ON c.email_id = emails.id
LEFT JOIN ( SELECT email_participants.email_id,
array_to_json(array_agg(( SELECT row_to_json(r.*) AS row_to_json
FROM ( VALUES (email_participants.user_id,email_participants.contact_id,email_participants.kind)) r(user_id, contact_id, kind)))) AS people
FROM email_participants
GROUP BY email_participants.email_id) p ON p.email_id = emails.id
LEFT JOIN ( SELECT attachments.reference_email_id AS email_id,
array_to_json(array_agg(attachments.id)) AS ids
FROM attachments
GROUP BY attachments.reference_email_id) atts ON atts.email_id = emails.id;
ALTER TABLE email_graphs
OWNER TO paulcowan;
We then run paginated queries against this view e.g.
SELECT "email_graphs".* FROM "email_graphs" INNER JOIN "email_participants" ON ("email_participants"."email_id" = "email_graphs"."id") WHERE (("user_id" = 75) AND ("folder" = 'INBOX')) ORDER BY "sent_at" DESC LIMIT 5 OFFSET 0
As the table has grown, queries on this table have dramatically slowed down.
If I run the paginated query with EXPLAIN ANALYZE
EXPLAIN ANALYZE SELECT "email_graphs".* FROM "email_graphs" INNER JOIN "email_participants" ON ("email_participants"."email_id" = "email_graphs"."id") WHERE (("user_id" = 75) AND ("folder" = 'INBOX')) ORDER BY "sent_at" DESC LIMIT 5 OFFSET 0;
I get this result
-> Seq Scan on deals (cost=0.00..9.11 rows=36 width=8) (actual time=0.003..0.044 rows=34 loops=1)
-> Sort (cost=5.36..5.43 rows=131 width=36) (actual time=0.416..0.416 rows=1 loops=1)
Sort Key: ms.email_id
Sort Method: quicksort Memory: 26kB
-> Subquery Scan on ms (cost=3.52..4.44 rows=131 width=36) (actual time=0.408..0.411 rows=1 loops=1)
-> HashAggregate (cost=3.52..4.05 rows=131 width=8) (actual time=0.406..0.408 rows=1 loops=1)
-> Seq Scan on meetings (cost=0.00..3.39 rows=131 width=8) (actual time=0.006..0.163 rows=161 loops=1)
-> Sort (cost=18.81..18.91 rows=199 width=36) (actual time=0.012..0.012 rows=0 loops=1)
Sort Key: c.email_id
Sort Method: quicksort Memory: 25kB
-> Subquery Scan on c (cost=15.90..17.29 rows=199 width=36) (actual time=0.007..0.007 rows=0 loops=1)
-> HashAggregate (cost=15.90..16.70 rows=199 width=60) (actual time=0.006..0.006 rows=0 loops=1)
-> Seq Scan on comments (cost=0.00..12.22 rows=736 width=60) (actual time=0.004..0.004 rows=0 loops=1)
Filter: (email_id IS NOT NULL)
Rows Removed by Filter: 2
SubPlan 1
-> Values Scan on "*VALUES*" (cost=0.00..0.00 rows=1 width=56) (never executed)
-> Materialize (cost=4220.14..4883.55 rows=27275 width=36) (actual time=247.720..1189.545 rows=29516 loops=1)
-> GroupAggregate (cost=4220.14..4788.09 rows=27275 width=15) (actual time=247.715..1131.787 rows=29516 loops=1)
-> Sort (cost=4220.14..4261.86 rows=83426 width=15) (actual time=247.634..339.376 rows=82632 loops=1)
Sort Key: public.email_participants.email_id
Sort Method: external sort Disk: 1760kB
-> Seq Scan on email_participants (cost=0.00..2856.28 rows=83426 width=15) (actual time=0.009..88.938 rows=82720 loops=1)
SubPlan 2
-> Values Scan on "*VALUES*" (cost=0.00..0.00 rows=1 width=40) (actual time=0.004..0.005 rows=1 loops=82631)
-> Sort (cost=2.01..2.01 rows=1 width=36) (actual time=0.074..0.077 rows=3 loops=1)
Sort Key: atts.email_id
Sort Method: quicksort Memory: 25kB
-> Subquery Scan on atts (cost=2.00..2.01 rows=1 width=36) (actual time=0.048..0.060 rows=3 loops=1)
-> HashAggregate (cost=2.00..2.01 rows=1 width=8) (actual time=0.045..0.051 rows=3 loops=1)
-> Seq Scan on attachments (cost=0.00..2.00 rows=1 width=8) (actual time=0.013..0.021 rows=5 loops=1)
-> Index Only Scan using email_participants_email_id_user_id_index on email_participants (cost=0.00..990.04 rows=269 width=4) (actual time=1.357..2.886 rows=43 loops=1)
Index Cond: (user_id = 75)
Heap Fetches: 43
Total runtime: 1642.157 ms
(75 rows)
I am definitely not looking for fixes :) or a refactored query. Any sort of high-level advice would be most welcome.
Per my comment, the gist of the issue lies in aggregates that get joined with one another. This prevents the use of indexes, and yields a bunch of merge joins (and a materialize) in your query plan…
Put another way, think of it as a plan so convoluted that Postgres proceeds by materializing temporary tables in memory, and then sorting them repeatedly until they're all merge-joined as appropriate. From where I'm standing, the full hogwash seems to amount to selecting all rows from all tables, and all of their possible relationships. Once it has been figured out and conjured up into existence, Postgres proceeds to sorting the mess in order to extract the top-n rows.
Anyway, you want to rewrite the query so it can actually use indexes to begin with.
Part of this is simple. This, for instance, is a big no-no:
select …,
ts.ids AS todo_ids, cs.ids AS call_ids,
ds.ids AS deal_ids, ms.ids AS meeting_ids, c.comments, p.people,
atts.ids AS attachment_ids
Fetch the emails in one query. Fetch the related objects in separate queries with a bite-sized email_id in (…) clause. Merely doing that should speed things up quite a bit.
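As a rough illustration of that two-step pattern (the table and column names come from the view above; the parameter placeholders and the exact shape of the queries are assumptions):
-- Step 1: fetch just the page of emails.
SELECT e.*
FROM emails e
JOIN email_participants ep ON ep.email_id = e.id
WHERE ep.user_id = $1 AND e.folder = 'INBOX'
ORDER BY e.sent_at DESC
LIMIT 5 OFFSET 0;

-- Step 2: fetch the related objects only for those ids.
SELECT reference_email_id AS email_id, array_agg(id) AS todo_ids
FROM todos
WHERE reference_email_id IN ($2, $3, $4, $5, $6)  -- the 5 ids from step 1
GROUP BY reference_email_id;
-- ...and similarly for calls, deals, meetings, comments, and attachments.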
For the rest, it may or may not be simple or involve some re-engineering of your schema. I only scanned through the incomprehensible monster and its gruesome query plan, so I cannot comment for sure.
I think the big view is not likely to ever perform well and that you should break it into more manageable components, but still here are two specific bits of advice that come to mind:
Schema change
Move the text and html bodies out of the main table.
Although large contents get automatically stored in TOAST space, mail parts will often be smaller than the TOAST threshold (~2000 bytes), especially for plain text, so it won't happen systematically.
Each non-TOASTED content inflates the table in a way that is detrimental to I/O performance and caching, if you consider that the primary purpose of the table is to contain the header fields like sender, recipients, date, subject...
I can test this with contents I happen to have in a mail database. On a sample of the 55k mails in my inbox:
average text/plain size: 1511 bytes.
average text/html size: 11895 bytes (but 42395 messages have no html at all)
Size of the mail table without the bodies: 14Mb (no TOAST)
If adding the bodies as 2 more TEXT columns like you have: 59Mb in main storage, 61Mb in TOAST.
Despite TOAST, the main storage appears to be 4 times larger. So when scanning the table without needing the TEXT columns, 80% of the I/O is wasted. Future row updates are likely to make this worse through fragmentation.
The effect in terms of block reads can be spotted through the pg_statio_all_tables view (compare heap_blks_read + heap_blks_hit before and after the query)
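To make the suggested split concrete, a minimal sketch (the new table name is made up; adjust the foreign key and migration to your schema):
-- Keep only header-like fields in emails; move the large contents out.
CREATE TABLE email_bodies (
    email_id integer PRIMARY KEY REFERENCES emails (id) ON DELETE CASCADE,
    body     text,
    html     text
);

INSERT INTO email_bodies (email_id, body, html)
SELECT id, body, html FROM emails;

ALTER TABLE emails DROP COLUMN body, DROP COLUMN html;
Queries that need the content then join email_bodies explicitly, while the common header-only scans stay small.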
Tuning
This part of the EXPLAIN
Sort Method: external sort Disk: 1760kB
suggests that your work_mem is too small. You don't want to hit the disk for such small sorts. Make it at least 10 MB unless you're low on free memory. While you're at it, set shared_buffers to a reasonable value if it's still the default. See http://wiki.postgresql.org/wiki/Performance_Optimization for more.
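For example (the exact values are assumptions; size them to your machine and workload):
-- Per session, handy for testing the effect on a single query:
SET work_mem = '16MB';

-- Or persistently in postgresql.conf (shared_buffers needs a restart):
-- work_mem = '16MB'
-- shared_buffers = '1GB'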