Database where filter using compound index - postgresql

I have a like object that can be used to store likes for any other DB table, each table represented by a content_type_id:
SELECT *
FROM "like" LEFT OUTER JOIN "accounts_user"
ON ("like"."user_id" = "accounts_user"."id")
WHERE ("like"."object_id" = %s AND "like"."content_type_id" = %s AND "like"."like" = %s)
like is a Boolean field. What would be the best indexes for this table?
The operations are: checking whether a user liked an object, and checking which users liked which objects.
My current indices are
CREATE INDEX like_417f1b1d ON like USING btree (content_type_id)
CREATE INDEX like_e8701ad5 ON like USING btree (user_id)
CREATE UNIQUE INDEX like_user_id_uniq ON like USING btree (user_id, content_type_id, object_id)
And the query plan is:
Nested Loop Left Join  (cost=26563.29..168218.18 rows=1 width=1383)
  ->  Bitmap Heap Scan on like  (cost=26562.86..168209.72 rows=1 width=29)
        Recheck Cond: ?
        Filter: ?
        ->  Bitmap Index Scan on like_417f1b1d  (cost=0.00..26562.86 rows=1271257 width=0)
              Index Cond: ?
  ->  Index Scan using accounts_user_pkey on accounts_user  (cost=0.43..8.45 rows=1 width=1354)
        Index Cond: ?
Can you spot any irregularities? This is taking way longer than expected (1.5 seconds).
I assume that since my indices don't exactly match the WHERE clause this might cause a slowdown, but the DB engine should be smart enough to handle that, which is why I am here.

I would recommend
CREATE INDEX ON like(object_id);
This assumes that object_id is selective (there are many different object_ids in the table) and content_type_id and like are not selective.
If there are many different values for content_type_id, add it to the index.
If like is not evenly distributed (e.g. 99% of all values are TRUE), it might also help to add it to the index.
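If all three columns turn out to be selective, a composite index covering the whole WHERE clause could also be worth testing; the sketch below is an assumption about column order (most selective first), not a definitive recommendation:
-- Sketch only: the index name and column order are assumptions based on the discussion above
CREATE INDEX like_object_content_like_idx ON "like" (object_id, content_type_id, "like");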
I'd like to mention that posting a truncated execution plan like that does not help at all. We need the complete thing.

Related

Why doesn't PostgreSQL use all columns in a multi-column index?

I am using the extension
CREATE EXTENSION btree_gin;
I have an index that looks like this...
create index boundaries2 on rets USING GIN(source, isonlastsync, status, (geoinfo::jsonb->'boundaries'), ctcvalidto, searchablePrice, ctcSortOrder);
Before I started messing with it, the index looked like this, with the same results I'm about to share, so it seems minor variations in the index definition don't make a difference:
create index boundaries on rets USING GIN((geoinfo::jsonb->'boundaries'), source, status, isonlastsync, ctcvalidto, searchablePrice, ctcSortOrder);
I give PostgreSQL 11 this query:
explain analyze select id from rets where ((geoinfo::jsonb->'boundaries' ?| array['High School: Torrey Pines']) AND source='SDMLS'
AND searchablePrice>=800000 AND searchablePrice<=1200000 AND YrBlt>=2000 AND EstSF>=2300
AND Beds>=3 AND FB>=2 AND ctcSortOrder>'2019-07-05 16:02:54 UTC' AND Status IN ('ACTIVE','BACK ON MARKET')
AND ctcvalidto='9999-12-31 23:59:59 UTC' AND isonlastsync='true') order by LstDate desc, ctcSortOrder desc LIMIT 3000;
with result...
Limit (cost=120.06..120.06 rows=1 width=23) (actual time=472.849..472.850 rows=1 loops=1)
-> Sort (cost=120.06..120.06 rows=1 width=23) (actual time=472.847..472.848 rows=1 loops=1)
Sort Key: lstdate DESC, ctcsortorder DESC
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on rets (cost=116.00..120.05 rows=1 width=23) (actual time=472.748..472.841 rows=1 loops=1)
Recheck Cond: ((source = 'SDMLS'::text) AND (((geoinfo)::jsonb -> 'boundaries'::text) ?| '{"High School: Torrey Pines"}'::text[]) AND (ctcvalidto = '9999-12-31 23:59:59+00'::timestamp with time zone) AND (searchableprice >= 800000) AND (searchableprice <= 1200000) AND (ctcsortorder > '2019-07-05 16:02:54+00'::timestamp with time zone))
Rows Removed by Index Recheck: 93
Filter: (isonlastsync AND (yrblt >= 2000) AND (estsf >= 2300) AND (beds >= 3) AND (fb >= 2) AND (status = ANY ('{ACTIVE,"BACK ON MARKET"}'::text[])))
Rows Removed by Filter: 10
Heap Blocks: exact=102
-> Bitmap Index Scan on boundaries2 (cost=0.00..116.00 rows=1 width=0) (actual time=471.762..471.762 rows=104 loops=1)
Index Cond: ((source = 'SDMLS'::text) AND (((geoinfo)::jsonb -> 'boundaries'::text) ?| '{"High School: Torrey Pines"}'::text[]) AND (ctcvalidto = '9999-12-31 23:59:59+00'::timestamp with time zone) AND (searchableprice >= 800000) AND (searchableprice <= 1200000) AND (ctcsortorder > '2019-07-05 16:02:54+00'::timestamp with time zone))
Planning Time: 0.333 ms
Execution Time: 474.311 ms
(14 rows)
The Question
Why are the columns status and isonlastsync not used by the Bitmap Index Scan on boundaries2?
PostgreSQL can skip columns during the Bitmap Index Scan if it predicts that filtering them out afterwards will be faster. This is usually the case when the cardinality of those columns is very low and you fetch a large enough portion of all rows; this is true for a boolean like isonlastsync and usually true for status columns with just a few distinct values.
Rows Removed by Filter: 10 shows there is very little to filter out, either because your table does not hold a large number of rows or because most of them satisfy the conditions you specified for those two columns. You might try generating more data in that table or selecting rows with a rare status.
I suggest using partial indexes (with a WHERE condition), at least for the boolean column, and removing those two columns from the index to make it a bit more lightweight.
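A partial variant along those lines might look like the sketch below. It assumes that queries using this index always filter on isonlastsync = true and one of the active statuses, so those columns move into the index's WHERE clause instead of its key:
-- Sketch only: the index name is illustrative, and the WHERE clause assumes those
-- two conditions are present in every query that should use this index
CREATE INDEX boundaries2_partial ON rets USING GIN
    (source, (geoinfo::jsonb->'boundaries'), ctcvalidto, searchableprice, ctcsortorder)
    WHERE isonlastsync AND status IN ('ACTIVE', 'BACK ON MARKET');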
I cannot tell you why, but I can help you optimize the query.
You should not use a multi-column GIN index, but a GIN index on only the jsonb expression and a b-tree index on the other columns.
The order of columns matters: put the ones used in an equality condition first, with the most selective at the beginning. Next put the column with the most selective inequality or IN condition. For the following columns, the order doesn't matter, as they will only act as filters in the index scan.
Make sure that the indexes are cached in RAM.
I'd expect the query to be faster that way.
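A sketch of that split (the index names are illustrative, and the exact B-tree column order is an assumption that follows the rules above rather than a measurement):
-- GIN index on the jsonb expression only
CREATE INDEX rets_boundaries_gin ON rets USING GIN ((geoinfo::jsonb -> 'boundaries'));
-- B-tree index on the other columns: equality conditions first,
-- then the most selective range condition
CREATE INDEX rets_filters_btree ON rets (source, ctcvalidto, ctcsortorder, searchableprice);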
I think you're asking yourself the wrong question. As Lukasz already answered, PostgreSQL may find it inefficient to check all columns in the index. The problem here is that your index is too big on disk.
Probably by trying to make this SQL faster you added as many columns as possible to the index, but it is backfiring.
The trick is to realize how much data PostgreSQL has to read to find your records. If your index contains too much data, it will have to read a lot. Also, be aware that low cardinality columns don't play well with BTree and common index types; generally you want to avoid indexing them.
To keep an index as small as possible and quick to look up, you have to find the column with the highest cardinality, or better, the column that will return the fewest rows for your query. My guess is ctcSortOrder. This will be the first column of your index.
Next, consider which column, after filtering by the first one, has the highest cardinality or will filter out the most rows. I have no idea about your data, but source looks like a good candidate.
Try to avoid jsonb searches unless they are the primary source of the cardinality, and keep the index as a B-tree. A B-tree is several times faster.
And like Lukasz suggested, look at partial indexes. For example add WHERE Status IN ('ACTIVE','BACK ON MARKET') AND isonlastsync='true' if these two conditions are common to all your searches.
Bottom line is, having a simpler, smaller index is faster than having all columns indexed. And the order of the columns does matter a lot. Stick with BTree unless there is a good reason (lots of cardinality in non-btree compatible types).
If your table is huge (>10M rows) consider table partitioning, for example by ctcSortOrder. But I don't think this is your case.
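Putting those suggestions together, a lean index might look something like this sketch (the column order and the partial WHERE clause are guesses based on the cardinality assumptions above, not measurements):
-- Sketch only: assumes ctcsortorder filters out the most rows and that nearly
-- all searches want an active status from the last sync
CREATE INDEX rets_lean_idx ON rets (ctcsortorder, source, searchableprice)
    WHERE status IN ('ACTIVE', 'BACK ON MARKET') AND isonlastsync;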

Postgres combining multiple Indexes

I have the following table/indexes -
CREATE TABLE test
(
coords geography(Point,4326),
user_id varchar(50),
created_at timestamp
);
CREATE INDEX ix_coords ON test USING GIST (coords);
CREATE INDEX ix_user_id ON test (user_id);
CREATE INDEX ix_created_at ON test (created_at DESC);
This is the query I want to execute:
select *
from updates
where ST_DWithin(coords, ST_MakePoint(-126.4, 45.32)::geography, 30000)
and user_id='3212312'
order by created_at desc
limit 60
When I run the query it only uses ix_coords index. How can I ensure that Postgres uses ix_user_id and ix_created_at index as well for the query?
This is a new table in which I did bulk insert of production data. Total rows in the test table: 15,069,489
I am running PostgreSQL 9.2.1 (with Postgis) with (effective_cache_size = 2GB). This is my local OSX with 16GB RAM, Core i7/2.5 GHz, non-SSD disk.
Adding the EXPLAIN ANALYZE output -
Limit (cost=71.64..71.65 rows=1 width=280) (actual time=1278.652..1278.665 rows=60 loops=1)
-> Sort (cost=71.64..71.65 rows=1 width=280) (actual time=1278.651..1278.662 rows=60 loops=1)
Sort Key: created_at
Sort Method: top-N heapsort Memory: 33kB
-> Index Scan using ix_coords on test (cost=0.00..71.63 rows=1 width=280) (actual time=0.198..1278.227 rows=178 loops=1)
Index Cond: (coords && '0101000020E61000006666666666E63C40C3F5285C8F824440'::geography)
Filter: (((user_id)::text = '4f1092000b921a000100015c'::text) AND ('0101000020E61000006666666666E63C40C3F5285C8F824440'::geography && _st_expand(coords, 30000::double precision)) AND _st_dwithin(coords, '0101000020E61000006666666666E63C40C3F5285C8F824440'::geography, 30000::double precision, true))
Rows Removed by Filter: 3122459
Total runtime: 1278.701 ms
UPDATE:
Based on the suggestions below I tried an index on coords + user_id:
CREATE INDEX ix_coords_and_user_id ON updates USING GIST (coords, user_id);
..but get the following error:
ERROR: data type character varying has no default operator class for access method "gist"
HINT: You must specify an operator class for the index or define a default operator class for the data type.
UPDATE:
So the CREATE EXTENSION btree_gist; solved the btree/gist compound index issue. And now my index looks like
CREATE INDEX ix_coords_user_id_created_at ON test USING GIST (coords, user_id, created_at);
NOTE: btree_gist does not accept DESC/ASC.
New query plan:
Limit (cost=134.99..135.00 rows=1 width=280) (actual time=273.282..273.292 rows=60 loops=1)
-> Sort (cost=134.99..135.00 rows=1 width=280) (actual time=273.281..273.285 rows=60 loops=1)
Sort Key: created_at
Sort Method: quicksort Memory: 41kB
-> Index Scan using ix_updates_coords_user_id_created_at on updates (cost=0.00..134.98 rows=1 width=280) (actual time=0.406..273.110 rows=115 loops=1)
Index Cond: ((coords && '0101000020E61000006666666666E63C40C3F5285C8F824440'::geography) AND ((user_id)::text = '4e952bb5b9a77200010019ad'::text))
Filter: (('0101000020E61000006666666666E63C40C3F5285C8F824440'::geography && _st_expand(coords, 30000::double precision)) AND _st_dwithin(coords, '0101000020E61000006666666666E63C40C3F5285C8F824440'::geography, 30000::double precision, true))
Rows Removed by Filter: 1
Total runtime: 273.331 ms
The query is performing better than before, almost a second better, but still not great. I guess this is the best that I can get? I was hoping for somewhere around 60-80 ms. Also, removing order by created_at desc from the query shaves off another 100 ms, which suggests the sort is unable to use the index. Any way to fix this?
I don't know if Pg can combine a GiST index and regular b-tree indexes with a bitmap index scan, but I suspect not. You may be getting the best result you can without adding a user_id column to your GiST index (and consequently making it bigger and slower for other queries that don't use user_id).
As an experiment you could:
CREATE EXTENSION btree_gist;
CREATE INDEX ix_coords_and_user_id ON test USING GIST (coords, user_id);
which is likely to result in a big index, but might boost that query - if it works. Be aware that maintaining such an index will significantly slow INSERT and UPDATEs. If you drop the old ix_coords your queries will use ix_coords_and_user_id even if they don't filter on user_id, but it'll be slower than ix_coords. Keeping both will make the INSERT and UPDATE slowdown even worse.
See btree-gist
(Obsoleted by edit to question that changes the question completely; when written the user had a multicolumn index they've now split into two separate ones):
You don't seem to be filtering or sorting on user_id, only create_date. Pg won't (can't?) use only the second column of a multi-column index like (user_id, create_date); it needs the first column too.
If you want to index create_date, create a separate index for it. If you use and need the (user_id, create_date) index and don't generally use just user_id alone, see if you can reverse the column order. Alternatively, create two independent indexes, (user_id) and (create_date). When both columns are needed Pg can combine the two independent indexes using a bitmap index scan.
I think Craig is correct with his answer, but I just wanted to add a few things (and it wouldn't fit in a comment)
You have to work pretty hard to force PostgreSQL to use an index. The query optimizer is smart and there are times when it will believe that a sequential table scan will be faster. It is usually right! :) But you can adjust some settings (such as seq_page_cost, random_page_cost, etc.) to try and get it to favor an index. Here is a link to some of the configurations that you might want to examine if you feel like it is not making the correct decision. But, again... my experience is that most of the time, Postgres is smarter than I am! :)
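For example, to nudge the planner toward index scans for a single session you might try something like this (the values are illustrative and should be tuned for your own hardware):
-- Lowering random_page_cost relative to seq_page_cost makes index scans
-- look cheaper to the planner (the defaults are 4.0 and 1.0)
SET random_page_cost = 1.5;
SET seq_page_cost = 1.0;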
Hope this helps you (or someone in the future).

How can I optimize this postgresql query?

Below is a postgres query that seems to be taking far longer than I would expect. The field_instances table is indexed on both form_instance_id and field_id, and the form_instances table is indexed on workflow_state. So I thought it would be a fast query, but it takes forever. Can anybody help me interpret the query plan and what kinds of indexes to add to speed it up? Thanks.
explain analyze
select form_id,form_instance_id,answer,field_id
from form_instances,field_instances
where workflow_state = 'DRqueued'
and form_instance_id = form_instances.id
and field_id = 'Book_EstimatedDueDate';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=8733.85..95692.90 rows=9277 width=29) (actual time=2550.000..15430.000 rows=11431 loops=1)
Hash Cond: (field_instances.form_instance_id = form_instances.id)
-> Bitmap Heap Scan on field_instances (cost=2681.11..89071.72 rows=47567 width=25) (actual time=850.000..13690.000 rows=51726 loops=1)
Recheck Cond: ((field_id)::text = 'Book_EstimatedDueDate'::text)
-> Bitmap Index Scan on index_field_instances_on_field_id (cost=0.00..2669.22 rows=47567 width=0) (actual time=830.000..830.000 rows=51729 loops=1)
Index Cond: ((field_id)::text = 'Book_EstimatedDueDate'::text)
-> Hash (cost=5911.34..5911.34 rows=11312 width=8) (actual time=1590.000..1590.000 rows=11431 loops=1)
-> Bitmap Heap Scan on form_instances (cost=511.94..5911.34 rows=11312 width=8) (actual time=720.000..1570.000 rows=11431 loops=1)
Recheck Cond: ((workflow_state)::text = 'DRqueued'::text)
-> Bitmap Index Scan on index_form_instances_on_workflow_state (cost=0.00..509.11 rows=11312 width=0) (actual time=650.000..650.000 rows=11509 loops=1)
Index Cond: ((workflow_state)::text = 'DRqueued'::text)
Total runtime: 15430.000 ms
(12 rows)
When you say "the field_instances table is indexed on both form_instance_id and field_id", do you mean that there are separate indexes on form_instance_id and field_id on that table?
Try dropping the index on form_instance_id and putting a concatenated index on (form_instance_id, field_id).
An index works by giving you a quick lookup that tells you where the rows are that match your index. It then has to fetch through those rows to do what you want. So you always want your index to be as specific as possible. If you put two indexes on the table, you'll have two different ways to do a lookup, but a query will usually only take advantage of one of them. If you put a concatenated index on the table, you'll be able to look up on the first field in the index, the first two fields, etc efficiently. (So a concatenated index on (a, b) gives you fast lookups on a, even faster lookups on both a and b, but doesn't help you look things up on b)
Right now it is figuring out all possible rows in form_instances that have the right state. It separately figures out all of the field_instances that have the right field id. It then does a hash join: it makes a lookup hash from one result set and scans the other for matches.
With my suggestion it should figure out all possible form_instances of interest. It will then go to the index, and figure out all of the field_instances that match on both the form instance and field id, and then it will find exactly the results of interest. Because the index is more specific, the database will have fewer rows of data to deal with to process your query.
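In SQL terms the change might look like the sketch below (the name of the existing index on form_instance_id is a guess, since it isn't shown in the question):
-- Hypothetical index names: adjust to match what actually exists on field_instances
DROP INDEX index_field_instances_on_form_instance_id;
CREATE INDEX index_field_instances_on_form_instance_id_and_field_id
    ON field_instances (form_instance_id, field_id);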
http://explain.depesz.com is a fantastic online tool that helps you identify the hot spots visually. I pasted your results into the tool and got this analysis: http://explain.depesz.com/s/VIk
It's hard to tell anything specifically without seeing your tables and indexes, however.
Without knowing the data in your table, just from looking at the SQL and column names I would recommend:
do you really need an index on workflow_state? Assuming its values are not very unique, the index might not improve SELECTs, but dropping it would help INSERTs and UPDATEs...
try making the field_id check the first condition in your WHERE statement

Postgresql index on xpath expression gives no speed up

We are trying to create OEBS-analog functionality in PostgreSQL. Let's say we have a form constructor and need to store form results in the database (e.g. email bodies). In Oracle you could use a table with ~150 columns (and some mapping stored elsewhere) to store each field in a separate column. But in contrast to Oracle we would like to store the whole form in a PostgreSQL xml field.
An example of the tree is:
<row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<object_id>2</object_id>
<pack_form_id>23</pack_form_id>
<prod_form_id>34</prod_form_id>
</row>
We would like to search through this field.
Test table contains 400k rows and the following select executes in 90 seconds:
select *
from params
where (xpath('//prod_form_id/text()'::text, xmlvalue))[1]::text::int=34
So I created this index:
create index prod_form_idx
ON params using btree(
((xpath('//prod_form_id/text()'::text, xmlvalue))[1]::text::int)
);
And it made no difference. Still 90 seconds execution. The EXPLAIN plan shows this:
Bitmap Heap Scan on params (cost=40.29..6366.44 rows=2063 width=292)
Recheck Cond: ((((xpath('//prod_form_id/text()'::text, xmlvalue, '{}'::text[]))[1])::text)::integer = 34)
-> Bitmap Index Scan on prod_form_idx (cost=0.00..39.78 rows=2063 width=0)
Index Cond: ((((xpath('//prod_form_id/text()'::text, xmlvalue, '{}'::text[]))[1])::text)::integer = 34)
I am not a great plan interpreter, so I suppose this means that the index is being used. The question is: where is all the speed? And what can I do to optimize this kind of query?
Well, at least the index is used. You get a bitmap index scan instead of a normal index scan though, which means the xpath() function will be called lots of times.
Let's do a little check:
-- hstore is needed for the h column used in this test
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE foo ( id serial primary key, x xml, h hstore );
insert into foo (x,h) select XMLPARSE( CONTENT '<row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<object_id>2</object_id>
<pack_form_id>' || n || '</pack_form_id>
<prod_form_id>34</prod_form_id>
</row>' ),
('object_id=>2,prod_form_id=>34,pack_form_id=>'||n)::hstore
FROM generate_series( 1,100000 ) n;
test=> EXPLAIN ANALYZE SELECT count(*) FROM foo;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
Aggregate (cost=4821.00..4821.01 rows=1 width=0) (actual time=24.694..24.694 rows=1 loops=1)
-> Seq Scan on foo (cost=0.00..4571.00 rows=100000 width=0) (actual time=0.006..13.996 rows=100000 loops=1)
Total runtime: 24.730 ms
test=> explain analyze select * from foo where (h->'pack_form_id')='123';
QUERY PLAN
----------------------------------------------------------------------------------------------------
Seq Scan on foo (cost=0.00..5571.00 rows=500 width=68) (actual time=0.075..48.763 rows=1 loops=1)
Filter: ((h -> 'pack_form_id'::text) = '123'::text)
Total runtime: 36.808 ms
test=> explain analyze select * from foo where ((xpath('//pack_form_id/text()'::text, x))[1]::text) = '123';
QUERY PLAN
------------------------------------------------------------------------------------------------------
Seq Scan on foo (cost=0.00..5071.00 rows=500 width=68) (actual time=4.271..3368.838 rows=1 loops=1)
Filter: (((xpath('//pack_form_id/text()'::text, x, '{}'::text[]))[1])::text = '123'::text)
Total runtime: 3368.865 ms
As we can see,
scanning the whole table with count(*) takes 25 ms
extracting one key/value from a hstore adds a small extra cost, about 0.12 µs/row
extracting one key/value from xml using xpath adds a huge cost, about 33 µs/row
Conclusions :
xml is slow (but everyone knows that)
if you want to put a flexible key/value store in a column, use hstore
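If you do go the hstore route, the lookup in the test above can also be indexed. A minimal sketch, using an expression index on the single key from the example (the index name is illustrative):
-- A plain b-tree expression index on one key; for lookups on arbitrary keys,
-- a GiST or GIN index on the whole hstore column is the usual alternative
CREATE INDEX foo_pack_form_id_idx ON foo ((h -> 'pack_form_id'));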
Also since your xml data is pretty big it will be toasted (compressed and stored out of the main table). This makes the rows in the main table much smaller, hence more rows per page, which reduces the efficiency of bitmap scans since all rows on a page have to be rechecked.
You can fix this though. For some reason the xpath() function (which is very slow, since it handles xml) has the same cost (1 unit) as say, the integer operator "+"...
update pg_proc set procost=1000 where proname='xpath';
You may need to tweak the cost value. When given the right info, the planner knows xpath is slow and will avoid a bitmap index scan, using a plain index scan instead, which doesn't need to recheck the condition for all rows on a page.
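If you'd rather not update pg_proc directly, the same hint can usually be expressed through ALTER FUNCTION (this assumes a PostgreSQL version that supports the COST clause, which modern versions do):
-- The argument types select the three-argument xpath() variant;
-- pick a cost large enough that the planner treats the function as expensive
ALTER FUNCTION xpath(text, xml, text[]) COST 1000;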
Note that this does not solve the row estimates problem. Since you can't ANALYZE the inside of the xml (or hstore) you get default estimates for the number of rows (here, 500). So, the planner may be completely wrong and choose a catastrophic plan if some joins are involved. The only solution to this is to use proper columns.

optimize Query in PostgreSQL

SELECT count(*)
FROM contacts_lists
JOIN plain_contacts
ON contacts_lists.contact_id = plain_contacts.contact_id
JOIN contacts
ON contacts.id = plain_contacts.contact_id
WHERE plain_contacts.has_email
AND NOT contacts.email_bad
AND NOT contacts.email_unsub
AND contacts_lists.list_id =67339
How can I optimize this query? Could you please explain?
Reformatting your query plan for clarity:
QUERY PLAN
Aggregate (cost=126377.96..126377.97 rows=1 width=0)
-> Hash Join (cost=6014.51..126225.38 rows=61033 width=0)
Hash Cond: (contacts_lists.contact_id = plain_contacts.contact_id)
-> Hash Join (cost=3067.30..121828.63 rows=61033 width=8)
Hash Cond: (contacts_lists.contact_id = contacts.id)
-> Index Scan using index_contacts_lists_on_list_id_and_contact_id
on contacts_lists (cost=0.00..116909.97 rows=61033 width=4)
Index Cond: (list_id = 66996)
-> Hash (cost=1721.41..1721.41 rows=84551 width=4)
-> Seq Scan on contacts (cost=0.00..1721.41 rows=84551 width=4)
Filter: ((NOT email_bad) AND (NOT email_unsub))
-> Hash (cost=2474.97..2474.97 rows=37779 width=4)
-> Seq Scan on plain_contacts (cost=0.00..2474.97 rows=37779 width=4)
Filter: has_email
Two partial indexes might eliminate seq scans depending on your data distribution:
-- if many contacts have bad emails or are unsubscribed:
CREATE INDEX contacts_valid_email_idx ON contacts (id)
WHERE (NOT email_bad AND NOT email_unsub);
-- if many contacts have no email:
CREATE INDEX plain_contacts_valid_email_idx ON plain_contacts (id)
WHERE (has_email);
You might be missing an index on a foreign key:
CREATE INDEX plain_contacts_contact_id_idx ON plain_contacts (contact_id);
Last but not least if you've never analyzed your data, you need to run:
VACUUM ANALYZE;
If it's still slow once all that is done, there isn't much you can do short of merging your plain_contacts and your contacts tables: getting the above query plan in spite of the above indexes means most/all of your subscribers are subscribed to that particular list -- in which case the above query plan is the fastest you'll get.
This is already a very simple query that the database will run in the most efficient way, provided that statistics are up to date.
So in terms of the query itself there's not much to do.
In terms of database administration you can add indexes - there should be indexes in the database for all the join conditions and also for the most selective part of the where clause (list_id, contact_id as FK in plain_contacts and contacts_lists). This is the most significant opportunity to improve performance of this query (orders of magnitude). Still as SpliFF notes, you probably already have those indexes, so check.
Also, Postgres has a good EXPLAIN command that you should learn and use. It will help with optimizing queries.
Since you only want to include rows that have some flags set in the joined tables, I would move those conditions into the join clauses:
SELECT count(*)
FROM contacts_lists
JOIN plain_contacts
ON contacts_lists.contact_id = plain_contacts.contact_id
AND plain_contacts.has_email
JOIN contacts
ON contacts.id = plain_contacts.contact_id
AND NOT contacts.email_unsub
AND NOT contacts.email_bad
WHERE contacts_lists.list_id =67339
I'm not sure if this would make a great impact on performance, but worth a try. You should probably have indexes on the joined tables as well for optimal performance, like this:
plain_contacts: contact_id, has_email
contacts: id, email_unsub, email_bad
Have you run ANALYZE on the database recently? Do the row counts in the EXPLAIN plan look like they make sense? (Looks like you ran only EXPLAIN. EXPLAIN ANALYZE gives both estimated and actual timings.)
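For reference, running the original query under EXPLAIN ANALYZE would look like this; it actually executes the query and reports real row counts and timings next to the estimates:
EXPLAIN ANALYZE
SELECT count(*)
FROM contacts_lists
JOIN plain_contacts ON contacts_lists.contact_id = plain_contacts.contact_id
JOIN contacts ON contacts.id = plain_contacts.contact_id
WHERE plain_contacts.has_email
  AND NOT contacts.email_bad
  AND NOT contacts.email_unsub
  AND contacts_lists.list_id = 67339;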
You can use SELECT count(1) ... but other than that I'd say it looks fine. You could always cache some parts of the query using views or put indexes on contact_id and list_id if you're really struggling (I assume you have one on id already).