I have a query that is strangely slow in Postgres 13 on a database containing only small amounts of data. I have even seen the problem in my test suite, where I fabricate some fake data.
SELECT sales.* FROM sales
INNER JOIN members ON members.id = sales.member_id
INNER JOIN members_teams ON members_teams.member_id = members.id
INNER JOIN teams ON teams.id = members_teams.team_id
WHERE teams.id IN (1, 2)
In my test suite I have the following counts of data in the different tables:
| Table | Count |
| -------- | -------------- |
| members | 501 |
| teams | 3 |
| members_teams | 501 |
| sales | 502 |
Here is an example of when it is slow:
Nested Loop (cost=0.75..25.83 rows=1 width=631) (actual time=38226.620..38226.622 rows=0 loops=1)
Join Filter: (members_teams.team_id = teams.id)
-> Nested Loop (cost=0.75..24.82 rows=1 width=635) (actual time=0.082..38220.385 rows=502 loops=1)
Join Filter: (members.id = members_teams.member_id)
Rows Removed by Join Filter: 251000
-> Index Scan using index_members_teams_on_team_id on members_teams (cost=0.25..8.26 rows=1 width=8) (actual time=0.031..0.544 rows=501 loops=1)
-> Nested Loop (cost=0.50..16.54 rows=1 width=635) (actual time=0.014..76.217 rows=502 loops=501)
Join Filter: (sales.member_id = members.id)
Rows Removed by Join Filter: 125250
-> Index Scan using index_sales_on_member_id on sales (cost=0.25..8.26 rows=1 width=631) (actual time=0.005..0.262 rows=502 loops=501)
-> Index Only Scan using members_pkey on members (cost=0.25..8.26 rows=1 width=4) (actual time=0.008..0.124 rows=251 loops=251502)
Heap Fetches: 63001752
-> Seq Scan on teams (cost=0.00..1.00 rows=1 width=4) (actual time=0.005..0.005 rows=0 loops=502)
Filter: (id = ANY ('{1,2}'::integer[]))
Rows Removed by Filter: 3
Planning Time: 0.690 ms
Execution Time: 38226.701 ms
Here is an example of when it is a more normal speed:
Nested Loop (cost=0.75..24.82 rows=1 width=631) (actual time=224.746..224.747 rows=0 loops=1)
Join Filter: (members.id = members_teams.member_id)
-> Nested Loop (cost=0.50..16.54 rows=1 width=635) (actual time=0.047..80.953 rows=502 loops=1)
Join Filter: (sales.member_id = members.id)
Rows Removed by Join Filter: 125250
-> Index Scan using index_sales_on_member_id on sales (cost=0.25..8.26 rows=1 width=631) (actual time=0.015..0.367 rows=502 loops=1)
-> Index Only Scan using members_pkey on members (cost=0.25..8.26 rows=1 width=4) (actual time=0.009..0.131 rows=251 loops=502)
Heap Fetches: 125752
-> Index Only Scan using index_members_teams_on_member_id_and_team_id on members_teams (cost=0.25..8.27 rows=1 width=4) (actual time=0.286..0.286 rows=0 loops=502)
Filter: (team_id = ANY ('{1,2}'::integer[]))
Rows Removed by Filter: 501
Heap Fetches: 251502
Planning Time: 0.481 ms
Execution Time: 224.798 ms
Summary
A key difference seems to be which index it uses for the join table members_teams. Do you have any suggestions for how I can make this consistently performant? I thought about removing the join to teams and filtering on the team_id on the join table, but I'm worried that in the future we may need to use this query with additional constraints from the teams table.
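Roughly, the variant I'm considering would look like this (untested sketch):
SELECT sales.* FROM sales
INNER JOIN members ON members.id = sales.member_id
INNER JOIN members_teams ON members_teams.member_id = members.id
WHERE members_teams.team_id IN (1, 2)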
Your estimates seem completely off. Do you have autovacuum disabled, or is your statistics collector malfunctioning? You should get better plans by explicitly collecting statistics:
ANALYZE sales;
ANALYZE members;
ANALYZE members_teams;
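You can also check whether autovacuum/autoanalyze has processed these tables at all, for example via the statistics view (a quick diagnostic; only the table names are taken from your question):
SELECT relname, last_analyze, last_autoanalyze, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname IN ('sales', 'members', 'members_teams', 'teams');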
Table: Customer
Type:
telephone1 | character varying(255)
telephone2 | character varying(255)
location_id | integer
Index:
"idx_customers_location_id" btree (location_id)
"idx_customers_telephone1_txt" btree (telephone1 text_pattern_ops)
"idx_customers_trim_telephone_1" btree (btrim(telephone1::text))
"idx_customers_trim_telephone2" btree (btrim(telephone2::text))
I have a table called customers with 141182 rows in total. I was checking the values in two columns (telephone1, telephone2): every row has data in telephone1, but only 8 rows have a value in telephone2.
When I check for the value '1', I get the execution time below.
SELECT customers.id, location_id, telephone1, telephone2 FROM "customers" INNER JOIN "locations" ON
"locations"."id" = "customers"."location_id" WHERE (customers.location_id = 189 AND (telephone1 = '1'
OR telephone2 = '1')) GROUP BY customers.id LIMIT 20 OFFSET 0;
Limit (cost=519.62..519.64 rows=4 width=125) (actual time=25.895..25.898 rows=1 loops=1)
-> GroupAggregate (cost=519.62..519.64 rows=4 width=125) (actual time=25.893..25.896 rows=1 loops=1)
Group Key: customers.id
-> Sort (cost=519.62..519.62 rows=4 width=127) (actual time=25.876..25.879 rows=1 loops=1)
Sort Key: customers.id
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=8.62..519.61 rows=4 width=127) (actual time=10.740..25.869 rows=1 loops=1)
-> Index Scan using locations_pkey on locations (cost=0.06..4.06 rows=1 width=70) (actual time=0.027..0.029 rows=1 loops=1)
Index Cond: (id = 189)
-> Bitmap Heap Scan on customers (cost=8.56..515.54 rows=4 width=61) (actual time=10.707..25.832 rows=1 loops=1)
Recheck Cond: (((telephone1)::text = '1'::text) OR ((telephone2)::text = '1'::text))
Filter: (location_id = 189)
Rows Removed by Filter: 1048
Heap Blocks: exact=1737
-> BitmapOr (cost=8.56..8.56 rows=259 width=0) (actual time=3.445..3.446 rows=0 loops=1)
-> Bitmap Index Scan on idx_customers_telephone1_txt (cost=0.00..2.10 rows=7 width=0) (actual time=0.065..0.066 rows=99 loops=1)
Index Cond: ((telephone1)::text = '1'::text)
-> Bitmap Index Scan on idx_customers_telephone2_txt (cost=0.00..6.47 rows=253 width=0) (actual time=3.378..3.378 rows=1664 loops=1)
Index Cond: ((telephone2)::text = '1'::text)
Planning Time: 0.419 ms
Execution Time: 25.995 ms
When I check for the value '0', there is a huge change in the execution time (7753.216 ms):
Limit (cost=0.14..2440.90 rows=10 width=125) (actual time=5900.924..7753.133 rows=4 loops=1)
-> GroupAggregate (cost=0.14..292402.20 rows=1198 width=125) (actual time=5900.922..7753.129 rows=4 loops=1)
Group Key: customers.id
-> Nested Loop (cost=0.14..292395.61 rows=1198 width=127) (actual time=4350.358..7753.087 rows=4 loops=1)
-> Index Scan using customers_pkey on customers (cost=0.09..292387.36 rows=1198 width=61) (actual time=4350.338..7753.054 rows=4 loops=1)
Filter: ((location_id = 189) AND (((telephone1)::text = '0'::text) OR ((telephone2)::text = '0'::text)))
Rows Removed by Filter: 8484280
-> Materialize (cost=0.06..4.06 rows=1 width=70) (actual time=0.005..0.005 rows=1 loops=4)
-> Index Scan using locations_pkey on locations (cost=0.06..4.06 rows=1 width=70) (actual time=0.013..0.013 rows=1 loops=1)
Index Cond: (id = 189)
Planning Time: 0.322 ms
Execution Time: 7753.216 ms
Is there any particular reason why it takes more time to execute for the value '0', or is anything wrong here?
One more thing I have noticed: this issue happens only with the column telephone2.
but only 8 rows have value for the column telephone2
Your explain plan indicates otherwise, finding 1664 rows with one specific value for telephone2. Now maybe most of those are not visible, but in that case you really need to VACUUM ANALYZE the table.
Nested Loop (cost=0.14..292395.61 rows=1198 width=127) (actual time=4350.358..7753.087 rows=4 loops=1)
With this second query, it thinks it will find 1198 rows (if run to completion). But it thinks it can stop after the first 20, so that would be 1.67% of the index. Instead there are only 4 matching rows, so it unexpectedly has to scan the entire index without getting to stop early.
Why are the estimates off by so much? I don't know, it could just be stale statistics (again, VACUUM ANALYZE the table), or there could be some interrelation between the columns that make the estimation hard to do even with accurate statistics.
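A sketch of both suggestions (the statistics object name is made up, and you should only create it if the columns really are correlated, e.g. location_id and the telephone columns):
VACUUM ANALYZE customers;
-- Optional, PostgreSQL 10+: help the planner with cross-column estimates
CREATE STATISTICS customers_loc_phone_stats (dependencies)
    ON location_id, telephone1, telephone2 FROM customers;
ANALYZE customers;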
What is the point in joining to locations at all?
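If the join is not actually needed, the query could be reduced to something like this (a sketch; the GROUP BY also becomes pointless if customers.id is the primary key, and keep the join if you rely on it to filter out customers without a matching location row):
SELECT id, location_id, telephone1, telephone2
FROM customers
WHERE location_id = 189
  AND (telephone1 = '1' OR telephone2 = '1')
LIMIT 20 OFFSET 0;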
We have a PostgreSQL query with multiple tables and left outer joins, and it is running very slowly.
It completes in 25-40 s, so we want to optimize it further and bring the run time down to 1-2 seconds.
select a.campaignid, b.campaign_name , case when b.message_type_id = 1 then 'Promotional'
when b.message_type_id = 2 then 'Transactional'
else 'Other' end as Campaign_type, c.username , aggregator_type,
e.cli_manager_id as senderID,
b.schedule_time as campaign_schedule_date,
count(a.mobile) as campaign_submitted_count, count(case when a.status = 'DELIVRD' then mobile end) as Delivered,
count(a.mobile) as Total_count,
count(case when a.status = 'FAILED' then mobile end) as failure_count,
count(case when a.status = 'DND_check_failed' then mobile end) as DND_count,
sum(credits_used) as credits_used
from tbl_cdr_test a left outer join tbl_campaign b
on a.campaignid = b.tbl_campaign_id left outer join tbl_users_master c
on b.user_id =c.user_master_id
left outer join tbl_cli_manager e on b.user_id = e.user_id
left outer join tbl_user_channel f on b.user_id =f.user_id
left outer join tbl_user_configurations g on b.user_id = g.user_id
where date(insert_datetime) between '2020-05-23' and '2020-06-23'
and c.username = coalesce(null, c.username)
and g.msg_cat_id = coalesce(null, g.msg_cat_id)
and a.campaignid = coalesce(null, a.campaignid)
and e.cli_manager_id = coalesce(null, e.cli_manager_id)
group by a.campaignid, b.campaign_name , b.message_type_id,c.username , b.schedule_time,
aggregator_type, e.cli_manager_id;
We have created appropriate indexes as well, but it is still taking time.
Moreover, the execution plan shows an "external merge Disk" sort method; to resolve that I set work_mem = 50MB, but it still sorts on disk instead of in memory. Please advise.
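For reference, this is roughly how I set it (session level):
SET work_mem = '50MB';
SHOW work_mem;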
Below is execution plan:
GroupAggregate (cost=4872.01..4872.07 rows=1 width=543) (actual time=20564.239..27415.264 rows=8 loops=1)
Group Key: a.campaignid, b.campaign_name, b.message_type_id, c.username, b.schedule_time, f.aggregator_type, e.cli_manager_id
-> Sort (cost=4872.01..4872.01 rows=1 width=483) (actual time=19627.424..25020.702 rows=3206196 loops=1)
Sort Key: a.campaignid, b.campaign_name, b.message_type_id, c.username, b.schedule_time, f.aggregator_type, e.cli_manager_id
Sort Method: external merge Disk: 281456kB
-> Nested Loop (cost=22.03..4872.00 rows=1 width=483) (actual time=99.704..12086.244 rows=3206196 loops=1)
Join Filter: (b.user_id = g.user_id)
-> Nested Loop Left Join (cost=21.89..4871.79 rows=1 width=495) (actual time=99.688..4518.533 rows=3206196 loops=1)
-> Nested Loop (cost=21.75..4871.54 rows=1 width=77) (actual time=99.664..935.689 rows=356244 loops=1)
-> Nested Loop (cost=21.33..31.57 rows=1 width=65) (actual time=0.295..2.376 rows=588 loops=1)
Join Filter: (b.user_id = c.user_master_id)
-> Merge Join (cost=21.18..30.22 rows=6 width=46) (actual time=0.246..0.663 rows=588 loops=1)
Merge Cond: (e.user_id = b.user_id)
-> Index Scan using "idx_FK_7hc6agd_tbl_cli_ma_1592228110_32" on tbl_cli_manager e (cost=0.42..6281.84 rows=762 width=12) (actual time=0.014..0.035 rows=5 loops=1)
Filter: (cli_manager_id = COALESCE(cli_manager_id))
-> Sort (cost=20.76..21.13 rows=147 width=34) (actual time=0.225..0.333 rows=585 loops=1)
Sort Key: b.user_id
Sort Method: quicksort Memory: 36kB
-> Seq Scan on tbl_campaign b (cost=0.00..15.47 rows=147 width=34) (actual time=0.013..0.154 rows=147 loops=1)
-> Index Scan using ind_user_master_c_user on tbl_users_master c (cost=0.14..0.21 rows=1 width=19) (actual time=0.002..0.002 rows=1 loops=588)
Index Cond: (user_master_id = e.user_id)
Filter: ((username)::text = (COALESCE(username))::text)
-> Append (cost=0.42..4839.94 rows=3 width=20) (actual time=0.546..1.426 rows=606 loops=588)
-> Index Scan using testh11_campaignid_idx on testh11 a (cost=0.42..4253.99 rows=2 width=20) (actual time=0.543..0.543 rows=0 loops=588)
Index Cond: (campaignid = b.tbl_campaign_id)
Filter: ((campaignid = COALESCE(campaignid)) AND (date(insert_datetime) >= '2020-05-23'::date) AND (date(insert_datetime) <= '2020-06-23'::date))
Rows Removed by Filter: 656
-> Index Scan using testh21_campaignid_idx on testh21 a_1 (cost=0.42..585.94 rows=1 width=20) (actual time=0.002..0.796 rows=606 loops=588)
Index Cond: (campaignid = b.tbl_campaign_id)
Filter: ((campaignid = COALESCE(campaignid)) AND (date(insert_datetime) >= '2020-05-23'::date) AND (date(insert_datetime) <= '2020-06-23'::date))
-> Index Scan using idx_user_id_tbl_user_c_1592227657_19 on tbl_user_channel f (cost=0.14..0.24 rows=1 width=422) (actual time=0.002..0.004 rows=9 loops=356244)
Index Cond: (user_id = b.user_id)
-> Index Scan using "idx_FK_6958qvy_tbl_user_c_1592228774_151" on tbl_user_configurations g (cost=0.14..0.20 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=3206196)
Index Cond: (user_id = e.user_id)
Filter: (msg_cat_id = COALESCE(msg_cat_id))
Planning Time: 6.561 ms
Execution Time: 27477.860 ms
There is a gross underestimate of the result rows for the index scan on testh21. The consequence is that PostgreSQL chooses nested loop joins, which is where your time is spent.
Try the following:
New statistics:
ANALYZE testh21;
If that improves the estimate, make sure that autoanalyze processes the table more often.
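For example, per-table autovacuum settings can be used for that (the values below are only placeholders to illustrate the mechanism):
ALTER TABLE testh21 SET (
    autovacuum_analyze_scale_factor = 0.02,
    autovacuum_analyze_threshold = 500
);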
Prevent bad estimates caused by correlation:
CREATE STATISTICS testh21_stat (dependencies)
ON campaignid, insert_datetime FROM testh21;
ANALYZE testh21;
Perhaps there is a correlation between the columns, and that improves the estimate.
More detailed statistics: try raising default_statistics_target before running ANALYZE on the table.
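A sketch of that (1000 is just an example value; alternatively, pin a higher target on individual columns so later autoanalyze runs keep it):
SET default_statistics_target = 1000;
ANALYZE testh21;
RESET default_statistics_target;
-- or, per column and persistent:
ALTER TABLE testh21 ALTER COLUMN campaignid SET STATISTICS 1000;
ANALYZE testh21;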
If you cannot improve the estimates, take the hammer and set enable_nestloop = off for the duration of the query.
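If you go that route, scope the setting to the one query, for example with SET LOCAL inside a transaction (sketch):
BEGIN;
SET LOCAL enable_nestloop = off;
-- run the slow report query here
COMMIT;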
I'm trying to understand if it's possible to optimize the query containing a self-join, and if it is possible - how to do it.
I'm working on a bigger real-life task, but here I extracted a simple sub-task from it to keep focus on a particular issue: optimizing a self-join query.
I have a table called parties. It contains over 85k records and looks like this:
# \d test.parties
Table "test.parties"
Column | Type | Collation | Nullable | Default
-------------+------+-----------+----------+---------
id | uuid | | |
contract_id | uuid | | |
Doing a self-join on contract_id I get this plan:
# explain analyse select p1.id from test.parties p1 join test.parties p2 on p1.contract_id = p2.contract_id;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Merge Join (cost=20207.87..628157.87 rows=40500000 width=16) (actual time=109.709..184.523 rows=197632 loops=1)
Merge Cond: (p1.contract_id = p2.contract_id)
-> Sort (cost=11181.94..11406.94 rows=90000 width=32) (actual time=55.560..66.173 rows=86332 loops=1)
Sort Key: p1.contract_id
Sort Method: external merge Disk: 3560kB
-> Seq Scan on parties p1 (cost=0.00..1620.00 rows=90000 width=32) (actual time=0.018..14.518 rows=86332 loops=1)
-> Sort (cost=9025.94..9250.94 rows=90000 width=16) (actual time=54.135..74.973 rows=197631 loops=1)
Sort Key: p2.contract_id
Sort Method: external sort Disk: 2544kB
-> Seq Scan on parties p2 (cost=0.00..1620.00 rows=90000 width=16) (actual time=0.009..10.462 rows=86332 loops=1)
Planning Time: 0.167 ms
Execution Time: 199.677 ms
(12 rows)
Adding an index on contract_id I get this plan:
# create index on test.parties(contract_id);
CREATE INDEX
# explain analyse select p1.id from test.parties p1 join test.parties p2 on p1.contract_id = p2.contract_id;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=3084.47..10570.76 rows=192484 width=16) (actual time=32.457..97.662 rows=197632 loops=1)
Hash Cond: (p1.contract_id = p2.contract_id)
-> Seq Scan on parties p1 (cost=0.00..1583.32 rows=86332 width=32) (actual time=0.013..11.293 rows=86332 loops=1)
-> Hash (cost=1583.32..1583.32 rows=86332 width=16) (actual time=32.133..32.133 rows=86332 loops=1)
Buckets: 131072 Batches: 2 Memory Usage: 3048kB
-> Seq Scan on parties p2 (cost=0.00..1583.32 rows=86332 width=16) (actual time=0.007..12.815 rows=86332 loops=1)
Planning Time: 0.444 ms
Execution Time: 110.692 ms
(8 rows)
Is there a way I could get rid of those Seq Scans?
I don't see any index being used in your explain plan, so assuming that you have not yet looked into using indexes, here is one suggestion:
CREATE INDEX idx ON parties (contract_id, id);
This should speed up the join, and it also covers the id value, which is required in the SELECT clause.
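Both of your plans also spill to disk (external sort/merge, hash Batches: 2). Given how small the table is, a modest work_mem increase might keep the whole join in memory; a sketch, with 32MB as an arbitrary example value:
SET work_mem = '32MB';
explain analyse select p1.id from test.parties p1 join test.parties p2 on p1.contract_id = p2.contract_id;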
I'm trying to select the users with the most "followed_by", joining to posts to filter by "tag". Both tables have millions of records. I'm using distinct to select only unique users.
select distinct u.*
from users u join posts p
on u.id=p.user_id
where p.tags #> ARRAY['love']
order by u.followed_by desc nulls last limit 21
It runs for over 16 s, seemingly because the distinct causes a sequential scan over 6+ million rows. Here is the explain analyse output:
Limit (cost=15509958.30..15509959.09 rows=21 width=292) (actual time=16882.861..16883.753 rows=21 loops=1)
-> Unique (cost=15509958.30..15595560.30 rows=2282720 width=292) (actual time=16882.859..16883.749 rows=21 loops=1)
-> Sort (cost=15509958.30..15515665.10 rows=2282720 width=292) (actual time=16882.857..16883.424 rows=525 loops=1)
Sort Key: u.followed_by DESC NULLS LAST, u.id, u.username, u.fullname, u.follows, u.media, u.profile_pic_url_hd, u.is_private, u.is_verified, u.biography, u.external_url, u.updated, u.location_id, u.final_post
Sort Method: external merge Disk: 583064kB
-> Gather (cost=1000.57..14956785.06 rows=2282720 width=292) (actual time=0.377..11506.001 rows=1680890 loops=1)
Workers Planned: 9
Workers Launched: 9
-> Nested Loop (cost=0.57..14727513.06 rows=253636 width=292) (actual time=1.013..12031.634 rows=168089 loops=10)
-> Parallel Seq Scan on posts p (cost=0.00..13187797.79 rows=253636 width=8) (actual time=0.940..10872.630 rows=168089 loops=10)
Filter: (tags #> '{love}'::text[])
Rows Removed by Filter: 6251355
-> Index Scan using user_pk on users u (cost=0.57..6.06 rows=1 width=292) (actual time=0.006..0.006 rows=1 loops=1680890)
Index Cond: (id = p.user_id)
Planning time: 1.276 ms
Execution time: 16964.271 ms
Would appreciate tips on how to make this fast.
Update
Thanks to @a_horse_with_no_name, for the "love" tag it became really fast:
Limit (cost=1.14..4293986.91 rows=21 width=292) (actual time=1.735..31.613 rows=21 loops=1)
-> Nested Loop Semi Join (cost=1.14..10959887484.70 rows=53600 width=292) (actual time=1.733..31.607 rows=21 loops=1)
-> Index Scan using idx_followed_by on users u (cost=0.57..322693786.19 rows=232404560 width=292) (actual time=0.011..0.103 rows=32 loops=1)
-> Index Scan using fki_user_fk1 on posts p (cost=0.57..1943.85 rows=43 width=8) (actual time=0.983..0.983 rows=1 loops=32)
Index Cond: (user_id = u.id)
Filter: (tags #> '{love}'::text[])
Rows Removed by Filter: 1699
Planning time: 1.322 ms
Execution time: 31.656 ms
However, for some other tags like "beautiful" it's better but still a little slow. It also takes a different execution path:
Limit (cost=3893365.84..3893365.89 rows=21 width=292) (actual time=2813.876..2813.892 rows=21 loops=1)
-> Sort (cost=3893365.84..3893499.84 rows=53600 width=292) (actual time=2813.874..2813.887 rows=21 loops=1)
Sort Key: u.followed_by DESC NULLS LAST
Sort Method: top-N heapsort Memory: 34kB
-> Nested Loop (cost=3437011.27..3891920.70 rows=53600 width=292) (actual time=1130.847..2779.928 rows=35230 loops=1)
-> HashAggregate (cost=3437010.70..3437546.70 rows=53600 width=8) (actual time=1130.809..1148.209 rows=35230 loops=1)
Group Key: p.user_id
-> Bitmap Heap Scan on posts p (cost=10484.20..3434173.21 rows=1134993 width=8) (actual time=268.602..972.390 rows=814919 loops=1)
Recheck Cond: (tags #> '{beautiful}'::text[])
Heap Blocks: exact=347002
-> Bitmap Index Scan on idx_tags (cost=0.00..10200.45 rows=1134993 width=0) (actual time=168.453..168.453 rows=814919 loops=1)
Index Cond: (tags #> '{beautiful}'::text[])
-> Index Scan using user_pk on users u (cost=0.57..8.47 rows=1 width=292) (actual time=0.045..0.046 rows=1 loops=35230)
Index Cond: (id = p.user_id)
Planning time: 1.388 ms
Execution time: 2814.132 ms
I already had a gin index on 'tags' in place.
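For reference, the existing index is roughly this (the name comes from the plan; the exact definition is from memory):
create index idx_tags on posts using gin (tags);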
This should be faster:
select *
from users u
where exists (select *
from posts p
where u.id=p.user_id
and p.tags #> ARRAY['love'])
order by u.followed_by desc nulls last
limit 21;
If there are only a few (<10%) posts with that tag, an index on posts.tags should help as well:
create index on posts using gin (tags);
I have the following query:
SELECT "person_dimensions"."dimension"
FROM "person_dimensions"
join users
on users.id = person_dimensions.user_id
where users.team_id = 2
The following is the result of EXPLAIN ANALYZE:
Nested Loop (cost=0.43..93033.84 rows=452 width=11) (actual time=1245.321..42915.426 rows=827 loops=1)
-> Seq Scan on person_dimensions (cost=0.00..254.72 rows=13772 width=15) (actual time=0.022..9.907 rows=13772 loops=1)
-> Index Scan using users_pkey on users (cost=0.43..6.73 rows=1 width=4) (actual time=2.978..3.114 rows=0 loops=13772)
Index Cond: (id = person_dimensions.user_id)
Filter: (team_id = 2)
Rows Removed by Filter: 1
Planning time: 0.396 ms
Execution time: 42915.678 ms
Indexes exist on person_dimensions.user_id and users.team_id, so it is unclear why this seemingly simple query is taking so long.
Maybe it has something to do with team_id not being usable in the join condition? Any ideas how to speed this up?
EDIT:
I tried this query:
SELECT "person_dimensions"."dimension"
FROM "person_dimensions"
join users ON users.id = person_dimensions.user_id
WHERE users.id IN (2337,2654,3501,56,4373,1060,3170,97,4629,41,3175,4541,2827)
which contains the IDs returned by the subquery:
SELECT id FROM users WHERE team_id = 2
The result was 380ms versus 42s as above. I could use this as a workaround, but I am really curious as to what is going on here...
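In other words, the workaround would be something like this (untested sketch combining the two queries above):
SELECT "person_dimensions"."dimension"
FROM "person_dimensions"
WHERE "person_dimensions"."user_id" IN (SELECT id FROM users WHERE team_id = 2);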
I rebooted my DB server yesterday, and when it came back up this same query was performing as expected with a completely different query plan that used expected indices:
QUERY PLAN
Hash Join (cost=1135.63..1443.45 rows=84 width=11) (actual time=0.354..6.312 rows=835 loops=1)
Hash Cond: (person_dimensions.user_id = users.id)
-> Seq Scan on person_dimensions (cost=0.00..255.17 rows=13817 width=15) (actual time=0.002..2.764 rows=13902 loops=1)
-> Hash (cost=1132.96..1132.96 rows=214 width=4) (actual time=0.175..0.175 rows=60 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 11kB
-> Bitmap Heap Scan on users (cost=286.07..1132.96 rows=214 width=4) (actual time=0.032..0.157 rows=60 loops=1)
Recheck Cond: (team_id = 2)
Heap Blocks: exact=68
-> Bitmap Index Scan on index_users_on_team_id (cost=0.00..286.02 rows=214 width=0) (actual time=0.021..0.021 rows=82 loops=1)
Index Cond: (team_id = 2)
Planning time: 0.215 ms
Execution time: 6.474 ms
Does anyone have any idea why it required a reboot to become aware of all of this? Could it be that manual vacuums that hadn't been done in a while were needed, or something like that? Recall that I did run an analyze on the relevant tables before the reboot and it didn't change anything.