Efficient querying/indexing in Postgres for WHERE IN (...) and ORDER BY - postgresql

I have a table of posts, and each post belongs to a classroom. I want to be able to query for most recent posts across several classrooms, like this:
SELECT * FROM posts
WHERE posts.classroom_id IN (6691, 6693, 6695, 6702)
ORDER BY date desc, created_at desc
LIMIT 30;
Unfortunately, this results in Postgres pulling and sorting tens of thousands of records - it has to get all the posts for each classroom, and sort all of them together, in order to find the 30 most recent overall.
Here's the explain+analyze:
-> Sort (cost=67525.77..67571.26 rows=18194 width=489) (actual time=9373.376..9373.381 rows=30 loops=1)
Sort Key: date DESC, created_at DESC
Sort Method: top-N heapsort Memory: 62kB
-> Bitmap Heap Scan on posts (cost=350.74..66988.42 rows=18194 width=489) (actual time=41.360..9271.782 rows=42924 loops=1)
Recheck Cond: (classroom_id = ANY ('{6691,6693,6695,6702}'::integer[]))
Heap Blocks: exact=29456
-> Bitmap Index Scan on optimize_finding_photos_and_tagged_posts_by_classroom (cost=0.00..346.19 rows=18194 width=0) (actual time=16.205..16.205 rows=42924 loops=1)
Index Cond: (classroom_id = ANY ('{6691,6693,6695,6702}'::integer[]))
Planning time: 0.216 ms
Execution time: 9390.323 ms
From various index options, the planner chose one that starts with classroom_id, which makes sense (the subsequent fields in that index are irrelevant). But it seems so inefficient that it has to gather 42,924 rows and sort them all.
It seems it could take a big shortcut by retrieving only the 30 most recent for each classroom, and then sorting those. To facilitate this, I tried adding a new index on [classroom_id, date DESC, created_at DESC], but the planner chose not to use it. Is Postgres just not quite clever enough to use the shortcut I describe? Or is there something I'm overlooking?
So, is there a better way to index or query, so that this kind of lookup can be more efficient?
One more side question: in the explain+analyze, why does the sort node take so little time? I would expect the sorting to be fairly slow/expensive.

Creating a test database...
CREATE TABLE posts( classroom_id INT NOT NULL, date FLOAT NOT NULL, foo TEXT );
INSERT INTO posts SELECT random()*100, random() FROM generate_series( 1,1500000 );
CREATE INDEX posts_cd ON posts( classroom_id, date );
CREATE INDEX posts_date ON posts( date );
VACUUM ANALYZE posts;
Note the "foo" column is there to avoid an index-only scan on posts which would be very fast on this test setup which only contains indexed columns classroom_id,date but would be useless for you since you will select other columns also.
If you have an index on date that you use for other things, like displaying the most recent posts for all classrooms, then you can use it here too:
EXPLAIN ANALYZE SELECT * FROM posts WHERE posts.classroom_id IN (1,2,6)
ORDER BY date desc LIMIT 30;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.29..55.67 rows=30 width=44) (actual time=0.040..0.983 rows=30 loops=1)
-> Index Scan Backward using posts_date on posts (cost=0.29..5447.29 rows=2951 width=44) (actual time=0.039..0.978 rows=30 loops=1)
Filter: (classroom_id = ANY ('{1,2,6}'::integer[]))
Rows Removed by Filter: 916
Planning time: 0.117 ms
Execution time: 1.008 ms
This one is a bit risky since the condition on classroom is not indexed: it scans the date index backwards, so if many classrooms that are excluded by the WHERE condition have recent posts, it may have to skip lots of rows in the index before finding the requested ones. My test data distribution is random; this query may perform differently if your data distribution is different.
Now, without the index on date.
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=10922.61..10922.69 rows=30 width=44) (actual time=41.038..41.049 rows=30 loops=1)
-> Sort (cost=10922.61..11028.44 rows=42331 width=44) (actual time=41.036..41.040 rows=30 loops=1)
Sort Key: date DESC
Sort Method: top-N heapsort Memory: 26kB
-> Bitmap Heap Scan on posts (cost=981.34..9672.39 rows=42331 width=44) (actual time=10.275..33.056 rows=44902 loops=1)
Recheck Cond: (classroom_id = ANY ('{1,2,6}'::integer[]))
Heap Blocks: exact=8069
-> Bitmap Index Scan on posts_cd (cost=0.00..970.76 rows=42331 width=0) (actual time=8.613..8.613 rows=44902 loops=1)
Index Cond: (classroom_id = ANY ('{1,2,6}'::integer[]))
Planning time: 0.145 ms
Execution time: 41.086 ms
Note I've adjusted the number of rows in the table so the bitmap scan finds about the same number as yours.
It's the same plan you had, including the Top-N heapsort which is much faster than a complete sort (and uses a lot less memory):
One more side question: in the explain+analyze, why does the sort node take so little time?
Basically, it only keeps the top N rows in the heapsort buffer, since the rest would be discarded by the LIMIT anyway. As each row is fetched it is either pushed into the buffer or discarded immediately. So the sort doesn't happen as a separate step after all the data has been gathered; it happens while the data is gathered, which is why the sort node appears to take about the same time as retrieving the data.
Now, my query is a lot faster than yours, even though they use the same plan. Several things could explain this, for example I ran it on an SSD, which is fast. But I think the most likely explanation is that your posts table probably contains ... posts ... which means large-ish TEXT data. That means a lot of data has to be fetched and then discarded, keeping only 30 rows. To test this I just did:
UPDATE posts SET foo = repeat('x', 992);  -- fill foo with ~992 bytes of text
VACUUM ANALYZE posts;
...and the query is much slower, 360ms, and it says:
Heap Blocks: exact=41046
So that's probably your problem. In order to solve it, the query should not fetch large amounts of data only to discard them, which means we're going to use the primary key... you must already have one, but I forgot to add it to the test table, so here it is.
ALTER TABLE posts ADD post_id SERIAL PRIMARY KEY;
VACUUM ANALYZE posts;
DROP INDEX posts_cd;
CREATE INDEX posts_cdi ON posts( classroom_id, date, post_id );
I add the PK to the index, and drop the previous index, because I want an index-only scan in order to avoid fetching all the data from the table. Scanning only the index involves much less data since it doesn't contain the actual posts. Of course, we only get the PKs, so we have to JOIN back to the main table to get the posts, but that happens only after all the filtering is done, so it's only 30 rows.
EXPLAIN ANALYZE SELECT p.* FROM posts p
JOIN (SELECT post_id FROM posts WHERE posts.classroom_id IN (1,2,6)
ORDER BY date desc LIMIT 30) pids USING (post_id)
ORDER BY date desc LIMIT 30;
Limit (cost=3212.05..3212.12 rows=30 width=1012) (actual time=38.410..38.421 rows=30 loops=1)
-> Sort (cost=3212.05..3212.12 rows=30 width=1012) (actual time=38.410..38.419 rows=30 loops=1)
Sort Key: p.date DESC
Sort Method: quicksort Memory: 85kB
-> Nested Loop (cost=2957.71..3211.31 rows=30 width=1012) (actual time=38.108..38.329 rows=30 loops=1)
-> Limit (cost=2957.29..2957.36 rows=30 width=12) (actual time=38.092..38.105 rows=30 loops=1)
-> Sort (cost=2957.29..3067.84 rows=44223 width=12) (actual time=38.092..38.104 rows=30 loops=1)
Sort Key: posts.date DESC
Sort Method: top-N heapsort Memory: 26kB
-> Index Only Scan using posts_cdi on posts (cost=0.43..1651.19 rows=44223 width=12) (actual time=0.023..22.186 rows=44902 loops=1)
Index Cond: (classroom_id = ANY ('{1,2,6}'::integer[]))
Heap Fetches: 0
-> Index Scan using posts_pkey on posts p (cost=0.43..8.45 rows=1 width=1012) (actual time=0.006..0.006 rows=1 loops=30)
Index Cond: (post_id = posts.post_id)
Planning time: 0.305 ms
Execution time: 38.468 ms
OK. Much faster now. This trick is pretty useful: when the table contains lots of data, or lots of columns, that would have to be lugged around inside the query engine, filtered, and then mostly discarded, it is sometimes faster to do the filtering and sorting on only the few small columns that are actually used, and then fetch the rest of the data only for the rows that survive the filtering. Sometimes it is even worth splitting the table in two, with the columns used for filtering and sorting in one table and everything else in the other.
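As a rough sketch of that last idea (not from the original answer; table and column names are made up to match the test setup), the split could look something like this:
-- narrow table: only the columns used for filtering and sorting
CREATE TABLE posts_key (
    post_id      SERIAL PRIMARY KEY,
    classroom_id INT NOT NULL,
    date         FLOAT NOT NULL
);
CREATE INDEX posts_key_cdi ON posts_key (classroom_id, date, post_id);
-- wide table: the bulky content, joined back by primary key
CREATE TABLE posts_body (
    post_id INT PRIMARY KEY REFERENCES posts_key (post_id),
    foo     TEXT
);
-- filter and sort on the narrow table, fetch the bulky columns only for the 30 survivors
SELECT k.post_id, k.date, b.foo
FROM posts_key k
JOIN posts_body b USING (post_id)
WHERE k.classroom_id IN (1, 2, 6)
ORDER BY k.date DESC
LIMIT 30;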
To go even faster we can make the query ugly:
SELECT p.* FROM posts p
JOIN (
SELECT * FROM (SELECT post_id, date FROM posts WHERE posts.classroom_id=1 ORDER BY date desc LIMIT 30) a
UNION ALL
SELECT * FROM (SELECT post_id, date FROM posts WHERE posts.classroom_id=2 ORDER BY date desc LIMIT 30) b
UNION ALL
SELECT * FROM (SELECT post_id, date FROM posts WHERE posts.classroom_id=3 ORDER BY date desc LIMIT 30) c
ORDER BY date desc LIMIT 30
) q USING (post_id)
ORDER BY date desc LIMIT 30;
This exploits the fact that, if there is only one classroom_id in the WHERE condition, then postgres will use an index scan backward on (classroom_id,date) directly. And since I've added post_id to it, it doesn't even need to touch the table. And since the three selects in the union have the same sort order, it combines them with a merge, which means it doesn't even need to sort, or even fetch, the rows that are cut off by the outer LIMIT 30.
Limit (cost=257.97..258.05 rows=30 width=1012) (actual time=0.357..0.367 rows=30 loops=1)
-> Sort (cost=257.97..258.05 rows=30 width=1012) (actual time=0.356..0.364 rows=30 loops=1)
Sort Key: p.date DESC
Sort Method: quicksort Memory: 85kB
-> Nested Loop (cost=1.73..257.23 rows=30 width=1012) (actual time=0.063..0.319 rows=30 loops=1)
-> Limit (cost=1.31..3.28 rows=30 width=12) (actual time=0.050..0.085 rows=30 loops=1)
-> Merge Append (cost=1.31..7.24 rows=90 width=12) (actual time=0.049..0.081 rows=30 loops=1)
Sort Key: posts.date DESC
-> Limit (cost=0.43..1.56 rows=30 width=12) (actual time=0.024..0.032 rows=12 loops=1)
-> Index Only Scan Backward using posts_cdi on posts (cost=0.43..531.81 rows=14136 width=12) (actual time=0.024..0.029 rows=12 loops=1)
Index Cond: (classroom_id = 1)
Heap Fetches: 0
-> Limit (cost=0.43..1.55 rows=30 width=12) (actual time=0.018..0.024 rows=9 loops=1)
-> Index Only Scan Backward using posts_cdi on posts posts_1 (cost=0.43..599.55 rows=15950 width=12) (actual time=0.017..0.023 rows=9 loops=1)
Index Cond: (classroom_id = 2)
Heap Fetches: 0
-> Limit (cost=0.43..1.56 rows=30 width=12) (actual time=0.006..0.014 rows=11 loops=1)
-> Index Only Scan Backward using posts_cdi on posts posts_2 (cost=0.43..531.81 rows=14136 width=12) (actual time=0.006..0.014 rows=11 loops=1)
Index Cond: (classroom_id = 3)
Heap Fetches: 0
-> Index Scan using posts_pkey on posts p (cost=0.43..8.45 rows=1 width=1012) (actual time=0.006..0.007 rows=1 loops=30)
Index Cond: (post_id = posts.post_id)
Planning time: 0.445 ms
Execution time: 0.432 ms
The resulting speedup is pretty ridiculous. I think this should work.
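As an aside (this is not from the answer above, just an equivalent formulation), the same per-classroom top-N can be written with a LATERAL join instead of spelling out one subquery per classroom. It should still hit the (classroom_id, date, post_id) index once per classroom, although the planner may not produce the nice Merge Append plan shown above:
SELECT p.*
FROM unnest('{1,2,6}'::int[]) AS c(classroom_id)
CROSS JOIN LATERAL (
    SELECT post_id, date
    FROM posts
    WHERE posts.classroom_id = c.classroom_id
    ORDER BY date DESC
    LIMIT 30
) top_posts
JOIN posts p USING (post_id)
ORDER BY top_posts.date DESC
LIMIT 30;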

To facilitate this, I tried adding a new index on [classroom_id, date DESC, created_at DESC], but the planner chose not to use it. Is Postgres just not quite clever enough to use the shortcut I describe?
It is just not clever enough. You could write it out explicitly to get the execution you envision. It is ugly, but it should be effective:
(SELECT * FROM posts WHERE classroom_id = 6691 ORDER BY date desc, created_at desc LIMIT 30)
union all
(SELECT * FROM posts WHERE classroom_id = 6693 ORDER BY date desc, created_at desc LIMIT 30)
union all
(SELECT * FROM posts WHERE classroom_id = 6695 ORDER BY date desc, created_at desc LIMIT 30)
union all
(SELECT * FROM posts WHERE classroom_id = 6702 ORDER BY date desc, created_at desc LIMIT 30)
order by date desc, created_at desc LIMIT 30;
One more side question: in the explain+analyze, why does the sort node take so little time? I would expect the sorting to be fairly slow/expensive.
CPUs are very fast, and 40,000 rows is just not very many. Your storage, however, is not nearly as fast, and stomping all over a very large table to collect 40,000 rows takes a lot of time. There are all kinds of ways to try to address this (other than fixing the planner or rewriting your query): get faster primary storage or more caching, get it to use an index-only scan (is selecting * really necessary?), cluster or partition the table on classroom_id so that rows from the same classroom are stored together, etc.
If you don't want to rearrange your data, change your hardware, or rewrite your query, then maybe the simplest thing to try would be to build an index on (date, created_at). That might lead to a plan which is less good than the perfect plan, but much better than the current one: it could use the index to walk the rows in already-sorted order, collecting rows that meet the IN condition until it has collected 30.
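A minimal sketch of that suggestion (the index name is made up):
CREATE INDEX posts_date_created_at_idx ON posts (date DESC, created_at DESC);
The planner could then read this index in order and stop as soon as 30 rows matching the IN condition have been found, at the risk of reading many index entries if the wanted classrooms have few recent posts.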

Related

How to use ts_headline() in PostgreSQL while doing efficient full-text search? Comparing two query plans

I am experimenting with a full-text search system over my PostgreSQL database, where I am using tsvectors with ts_rank() to pull out items relevant to a user's search query. In general this works really well as a simple solution (i.e. no major extra infrastructure). However, I am finding that the ts_headline() component (which gives users context for the relevance of the search results) is slowing down my queries significantly, by about 10x. I wanted to ask what the best way is to use ts_headline() without incurring this computational expense.
To give an example, here is a very fast tsvector search that does not use ts_headline(). For context, my table has two relevant fields: search_text, which has the natural-language text being searched against, and search_text_tsv, which is a tsvector that is queried directly (and also used to rank the item). When I use ts_headline(), it references the main search_text field in order to produce a user-readable headline. Further, the column search_text_tsv is indexed using GIN, which provides very fast lookups for @@ websearch_to_tsquery('my query here').
Again, here is query #1:
SELECT
item_id,
title,
author,
search_text,
ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my query text here')), 1) as rank
FROM search_index
WHERE search_text_tsv @@ websearch_to_tsquery(unaccent('my query text here'))
ORDER BY rank DESC
LIMIT 20 OFFSET 20
This gives me 20 top results very fast, running in about 50ms on my laptop.
Now, query #2 uses ts_headline() to produce a user-readable headline. I found that this was very slow when it ran against all possible search results, so I used a sub-query to produce the top 20 results and then calculated the ts_headline() only for those top results (as opposed to, say, 1000 possible results).
SELECT *,
ts_headline(search_text,websearch_to_tsquery(unaccent('my query text here')),'StartSel=<b>,StopSel=</b>,MaxFragments=2,' || 'FragmentDelimiter=...,MaxWords=10,MinWords=1') AS headline
FROM (
SELECT
item_id,
title,
author,
search_text,
ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my query text here')), 1) as rank
FROM search_index
WHERE search_text_tsv @@ websearch_to_tsquery(unaccent('my query text here'))
ORDER BY rank DESC
LIMIT 20 OFFSET 20) as foo
Basically, this limits the number of results (as in the first query), uses that as a subquery, and returns all of the columns from the subquery (i.e. *) plus the ts_headline() calculation. However, this is very slow, roughly 10x slower, coming in at around 800ms on my laptop.
Is there anything I can do to speed up ts_headline()? It seems pretty clear that this is what is slowing down the second query.
For reference, here are the query plans being produced by Postgresql (from EXPLAIN ANALYZE):
Query plan 1: (straight full-text search)
Limit (cost=56.79..56.79 rows=1 width=270) (actual time=66.118..66.125 rows=20 loops=1)
-> Sort (cost=56.78..56.79 rows=1 width=270) (actual time=66.113..66.120 rows=40 loops=1)
Sort Key: (ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my search query here'::text)), 1)) DESC
Sort Method: top-N heapsort Memory: 34kB
-> Bitmap Heap Scan on search_index (cost=52.25..56.77 rows=1 width=270) (actual time=1.070..65.641 rows=462 loops=1)
Recheck Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Heap Blocks: exact=424
-> Bitmap Index Scan on idx_fts_search (cost=0.00..52.25 rows=1 width=0) (actual time=0.966..0.966 rows=462 loops=1)
Index Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Planning Time: 0.182 ms
Execution Time: 66.154 ms
Query plan 2: (full text search w/ subquery & ts_headline())
Subquery Scan on foo (cost=56.79..57.31 rows=1 width=302) (actual time=116.424..881.617 rows=20 loops=1)
-> Limit (cost=56.79..56.79 rows=1 width=270) (actual time=62.470..62.497 rows=20 loops=1)
-> Sort (cost=56.78..56.79 rows=1 width=270) (actual time=62.466..62.484 rows=40 loops=1)
Sort Key: (ts_rank(search_index.search_text_tsv, websearch_to_tsquery(unaccent('my search query here'::text)), 1)) DESC
Sort Method: top-N heapsort Memory: 34kB
-> Bitmap Heap Scan on search_index (cost=52.25..56.77 rows=1 width=270) (actual time=2.378..62.151 rows=462 loops=1)
Recheck Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Heap Blocks: exact=424
-> Bitmap Index Scan on idx_fts_search (cost=0.00..52.25 rows=1 width=0) (actual time=2.154..2.154 rows=462 loops=1)
Index Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Planning Time: 0.350 ms
Execution Time: 881.702 ms
Just encountered exactly the same issue. When collecting a list of search results (20-30 documents) and also getting their ts_headline highlight in the same query, the execution time was at least 10x higher.
To be fair, the Postgres documentation warns about this [1]:
ts_headline uses the original document, not a tsvector summary, so it can be slow and should be used with care.
I ended up getting the list of documents first and then loading the highlights with ts_headline asynchronously, one by one. Single queries are still slow (>150ms), but it is a better user experience than waiting multiple seconds for the initial load.
[1] https://www.postgresql.org/docs/15/textsearch-controls.html#TEXTSEARCH-HEADLINE
I think I can buy you a few more milliseconds. In your query, you're returning "SELECT *, ts_headline" which includes the full original document search_text in the return. When I limited my SELECT to everything but the "search_text" from the subquery (+ ts_headline as headline), my queries dropped from 500-800ms to 100-400ms. I'm also using AWS RDS so that might play a role on my end.
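A sketch of that variant, assuming the schema and options from the question (only the outer select list changes, so search_text is no longer shipped back to the client):
SELECT
    item_id,
    title,
    author,
    rank,
    ts_headline(search_text,
                websearch_to_tsquery(unaccent('my query text here')),
                'StartSel=<b>,StopSel=</b>,MaxFragments=2,FragmentDelimiter=...,MaxWords=10,MinWords=1') AS headline
FROM (
    SELECT
        item_id,
        title,
        author,
        search_text,
        ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my query text here')), 1) AS rank
    FROM search_index
    WHERE search_text_tsv @@ websearch_to_tsquery(unaccent('my query text here'))
    ORDER BY rank DESC
    LIMIT 20 OFFSET 20
) AS foo;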

Slow varchar index performance in Postgres

I've got a table with ~500,000 rows with a column with values like Brutus, Dreamer of the Wanton Wasteland. I need to do a case-insensitive LIKE query on these, but it seems to perform very slowly. I tried making an index with:
create index name_idx on deck (name);
and
create index deck_name_idx on deck (lower(name));
But the query is equally slow either way. Here is my query:
select *
from deck
where lower(deck.name) like '%brutus, dreamer of the%'
order by deck.id desc
limit 20
Here are the results of my explain analyze (this is with the second index, but both are equally slow.)
Limit (cost=152534.89..152537.23 rows=20 width=1496) (actual time=627.480..627.490 rows=1 loops=1)
-> Gather Merge (cost=152534.89..152539.56 rows=40 width=1496) (actual time=627.479..627.488 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Sort (cost=151534.87..151534.92 rows=20 width=1496) (actual time=611.447..611.447 rows=0 loops=3)
Sort Key: id DESC
Sort Method: quicksort Memory: 25kB
-> Parallel Seq Scan on deck (cost=0.00..151534.44 rows=20 width=1496) (actual time=609.818..611.304 rows=0 loops=3)
Filter: (lower((name)::text) ~~ '%brutus, dreamer of the%'::text)
Rows Removed by Filter: 162210
Planning time: 0.786 ms
Execution time: 656.510 ms
Is there a better way to set up this index? If I have to I could denormalize the column to a lowercase version, but I'd rather not do that unless it will help a lot and there's no better way.
To support LIKE queries with no wildcard at the beginning, use
CREATE INDEX ON deck (lower(name) varchar_pattern_ops);
To support LIKE searches that can have a wildcard at the beginning, you can
CREATE EXTENSION pg_trgm;
CREATE INDEX ON deck USING gin (lower(name) gin_trgm_ops);
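To make the distinction concrete (example queries, not from the answer): the varchar_pattern_ops index only helps patterns anchored at the start, while the trigram index can also serve the original query with its leading wildcard:
-- can use the btree index with varchar_pattern_ops (prefix search)
SELECT * FROM deck WHERE lower(name) LIKE 'brutus, dreamer%' ORDER BY id DESC LIMIT 20;
-- can use the GIN index with gin_trgm_ops (leading wildcard)
SELECT * FROM deck WHERE lower(name) LIKE '%brutus, dreamer of the%' ORDER BY id DESC LIMIT 20;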

Slow on first query

I'm having trouble with the first query I perform on a table. Subsequent queries are much faster, even if I change the date range to look for. I assume that PostgreSQL implements a caching mechanism that allows subsequent queries to be much faster. I can try to warm up the cache so the first user request hits the cache. However, I think I can somehow improve the following query:
SELECT
y.id, y.title, x.visits, x.score
FROM (
SELECT
article_id, visits,
COALESCE(ROUND((visits / NULLIF(hits ,0)::float)::numeric, 4), 0) score
FROM (
SELECT
article_id, SUM(visits) visits, SUM(hits) hits
FROM
article_reports a
WHERE
a.site_id = 'XYZ' AND a.date >= '2017-04-13' AND a.date <= '2017-06-28'
GROUP BY
article_id
) q ORDER BY score DESC, visits DESC LIMIT(20)
) x
INNER JOIN
articles y ON x.article_id = y.id
Any ideas on how I can improve this? The following is the result of EXPLAIN:
Nested Loop (cost=84859.76..85028.54 rows=20 width=272) (actual time=12612.596..12612.836 rows=20 loops=1)
-> Limit (cost=84859.34..84859.39 rows=20 width=52) (actual time=12612.502..12612.517 rows=20 loops=1)
-> Sort (cost=84859.34..84880.26 rows=8371 width=52) (actual time=12612.499..12612.503 rows=20 loops=1)
Sort Key: q.score DESC, q.visits DESC
Sort Method: top-N heapsort Memory: 27kB
-> Subquery Scan on q (cost=84218.04..84636.59 rows=8371 width=52) (actual time=12513.168..12602.649 rows=28965 loops=1)
-> HashAggregate (cost=84218.04..84301.75 rows=8371 width=36) (actual time=12513.093..12536.823 rows=28965 loops=1)
Group Key: a.article_id
-> Bitmap Heap Scan on article_reports a (cost=20122.78..77122.91 rows=405436 width=36) (actual time=135.588..11974.774 rows=398242 loops=1)
Recheck Cond: (((site_id)::text = 'XYZ'::text) AND (date >= '2017-04-13'::date) AND (date <= '2017-06-28'::date))
Heap Blocks: exact=36911
-> Bitmap Index Scan on index_article_reports_on_site_id_and_article_id_and_date (cost=0.00..20021.42 rows=405436 width=0) (actual time=125.846..125.846 rows=398479 loops=1)
Index Cond: (((site_id)::text = 'XYZ'::text) AND (date >= '2017-04-13'::date) AND (date <= '2017-06-28'::date))
-> Index Scan using articles_pkey on articles y (cost=0.42..8.44 rows=1 width=128) (actual time=0.014..0.014 rows=1 loops=20)
Index Cond: (id = q.article_id)
Planning time: 1.443 ms
Execution time: 12613.689 ms
Thanks in advance
There are two levels of "cache" that Postgres uses:
OS file cache
shared buffers.
Important: Postgres directly controls only the second one, and relies on the first one, which is under the OS's control.
First thing I would check are these two settings in postgresql.conf:
effective_cache_size – usually I set it to ~3/4 of all available RAM. Note that it is not a setting that tells Postgres how much memory to allocate; it is just a hint to the Postgres planner giving an estimate of the OS file cache size.
shared_buffers – usually I set it to ~1/4 of RAM. This is an allocation setting.
Also, I'd check other memory-related settings (work_mem, maintenance_work_mem) to understand how much RAM might be consumed, so that my effective_cache_size estimate stays roughly correct most of the time (example values below).
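For example, on a dedicated machine with 16 GB of RAM (an assumption; adjust to your hardware) postgresql.conf might contain something like:
# rough starting points, not tuned recommendations
shared_buffers = 4GB            # ~1/4 of RAM, actually allocated by Postgres
effective_cache_size = 12GB     # ~3/4 of RAM, a planner hint only
work_mem = 32MB                 # per sort/hash node, per query
maintenance_work_mem = 512MB    # for VACUUM, CREATE INDEX, etc.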
But if you have just started Postgres, the first queries will most probably be slow because there is no data yet in the OS file cache or in shared buffers. You can check this with the extended EXPLAIN options:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...
-- you will see how many buffers were fetched from disk ("read") or from cache ("hit")
Here you can find good material on using EXPLAIN: http://www.dalibo.org/_media/understanding_explain.pdf
Additionally, there is an extension aimed at solving the "cold cache" problem: pg_prewarm https://www.postgresql.org/docs/current/static/pgprewarm.html
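A minimal usage sketch (the table and index names are taken from the query plan above):
CREATE EXTENSION pg_prewarm;
-- load the table into shared buffers ahead of the first user query
SELECT pg_prewarm('article_reports');
-- indexes can be prewarmed the same way
SELECT pg_prewarm('index_article_reports_on_site_id_and_article_id_and_date');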
Also, working with SSD disks instead of magnetic ones will mean that disk reads will be much faster.
Have fun and a well-working Postgres :-)
If it is the first query after inserting several rows, you must run
ANALYZE
on the whole database or on the involved tables. Try executing it at the database level.

PostgresQL index not used

I have a table with several million rows called item with columns that look like this:
CREATE TABLE item (
id bigint NOT NULL,
company_id bigint NOT NULL,
date_created timestamp with time zone,
....
)
There is an index on company_id
CREATE INDEX idx_company_id ON item USING btree (company_id);
This table is often searched for the last 10 items for a certain customer, i.e.,
SELECT * FROM item WHERE company_id = 5 ORDER BY date_created LIMIT 10;
Currently, there is one customer that accounts for about 75% of the data in that table, the other 25% of the data is spread across 25 or so other customers, meaning that 75% of the rows have a company id of 5, the other rows have company ids between 6 and 25.
The query generally runs very fast for all companies except the predominant one (id = 5). I can understand why, since the index on company_id is useful for every company except 5.
I have experimented with different indexes to make this search more efficient for company 5. The one that seemed to make the most sense is
CREATE INDEX idx_date_created
ON item (date_created DESC NULLS LAST);
If I add this index, queries for the predominant company (id = 5) are greatly improved, but queries for all other companies go to crap.
Some results of EXPLAIN ANALYZE for company id 5 & 6 with and without the new index:
Company Id 5
Before new index
QUERY PLAN
Limit (cost=214874.63..214874.65 rows=10 width=639) (actual time=10481.989..10482.017 rows=10 loops=1)
-> Sort (cost=214874.63..218560.33 rows=1474282 width=639) (actual time=10481.985..10481.994 rows=10 loops=1)
Sort Key: photo_created
Sort Method: top-N heapsort Memory: 35kB
-> Seq Scan on photo (cost=0.00..183015.92 rows=1474282 width=639) (actual time=0.009..5345.551 rows=1473561 loops=1)
Filter: (company_id = 5)
Rows Removed by Filter: 402513
Total runtime: 10482.075 ms
After new index:
QUERY PLAN
Limit (cost=0.43..1.98 rows=10 width=639) (actual time=0.087..0.120 rows=10 loops=1)
-> Index Scan using idx_photo__photo_created on photo (cost=0.43..228408.04 rows=1474282 width=639) (actual time=0.084..0.099 rows=10 loops=1)
Filter: (company_id = 5)
Rows Removed by Filter: 26
Total runtime: 0.164 ms
Company Id 6
Before new index:
QUERY PLAN
Limit (cost=2204.27..2204.30 rows=10 width=639) (actual time=0.044..0.053 rows=3 loops=1)
-> Sort (cost=2204.27..2207.55 rows=1310 width=639) (actual time=0.040..0.044 rows=3 loops=1)
Sort Key: photo_created
Sort Method: quicksort Memory: 28kB
-> Index Scan using idx_photo__company_id on photo (cost=0.43..2175.96 rows=1310 width=639) (actual time=0.020..0.026 rows=3 loops=1)
Index Cond: (company_id = 6)
Total runtime: 0.100 ms
After new index:
QUERY PLAN
Limit (cost=0.43..1744.00 rows=10 width=639) (actual time=0.039..3938.986 rows=3 loops=1)
-> Index Scan using idx_photo__photo_created on photo (cost=0.43..228408.04 rows=1310 width=639) (actual time=0.035..3938.975 rows=3 loops=1)
Filter: (company_id = 6)
Rows Removed by Filter: 1876071
Total runtime: 3939.028 ms
I have run a full VACUUM and ANALYZE on the table, so PostgreSQL should have up-to-date statistics. Any ideas how I can get PostgreSQL to choose the right index for the company being queried?
This is known as the "abort-early plan problem", and it's been a chronic mis-optimization for years. Abort-early plans are amazing when they work, but terrible when they don't; see that linked mailing list thread for a more detailed explanation. Basically, the planner thinks it'll find the 10 rows you want for customer 6 without scanning the whole date_created index, and it's wrong.
There isn't any hard-and-fast way to improve this query categorically prior to PostgreSQL 10 (now in beta). What you'll want to do is nudge the query planner in various ways in hopes of getting what you want. Primary methods include anything which makes PostgreSQL more likely to use multi-column indexes, such as:
lowering random_page_cost (a good idea anyway if you're on SSDs).
lowering cpu_index_tuple_cost (example settings below).
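For example (the values are illustrative, not recommendations):
-- make index access look cheaper to the planner
SET random_page_cost = 1.1;          -- default 4.0; ~1.1 is common on SSDs
SET cpu_index_tuple_cost = 0.001;    -- default 0.005
-- once verified, persist them in postgresql.conf or via ALTER SYSTEM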
It's also possible that you may be able to fix the planner behavior by playing with the table statistics. This includes:
raising the statistics target for the relevant columns and running ANALYZE again, in order to make PostgreSQL take more samples and get a better picture of the row distribution;
increasing n_distinct in the column stats so that it accurately reflects the number of distinct company_ids or created_dates (a sketch of both follows this list).
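A sketch of both tweaks, using the column from the question (the exact numbers are guesses you would have to tune):
-- take a larger sample for company_id so the skew toward company 5 is captured
ALTER TABLE item ALTER COLUMN company_id SET STATISTICS 1000;
-- or hard-code the number of distinct companies if the planner's estimate is off
ALTER TABLE item ALTER COLUMN company_id SET (n_distinct = 26);
ANALYZE item;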
However, all of these solutions are approximate, and if query performance goes to heck as your data changes in the future, this should be the first query you look at.
In PostgreSQL 10, you'll be able to create Cross-Column Stats which should improve the situation more reliably. Depending on how broken this is for you, you could try using the beta.
If none of that works, I suggest the #postgresql IRC channel on Freenode or the pgsql-performance mailing list. Folks there will ask for your detailed table stats in order to make some suggestions.
Yet another point: why do you create the index
CREATE INDEX idx_date_created ON item (date_created DESC NULLS LAST);
but then run:
SELECT * FROM item WHERE company_id = 5 ORDER BY date_created LIMIT 10;
Maybe you meant:
SELECT * FROM item WHERE company_id = 5 ORDER BY date_created DESC NULLS LAST LIMIT 10;
It is also better to create a combined index:
CREATE INDEX idx_company_id_date_created ON item (company_id, date_created DESC NULLS LAST);
And after that:
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..28.11 rows=10 width=16) (actual time=0.120..0.153 rows=10 loops=1)
-> Index Only Scan using idx_company_id_date_created on item (cost=0.43..20763.68 rows=7500 width=16) (actual time=0.118..0.145 rows=10 loops=1)
Index Cond: (company_id = 5)
Heap Fetches: 10
Planning time: 1.003 ms
Execution time: 0.209 ms
(6 rows)
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..28.11 rows=10 width=16) (actual time=0.085..0.115 rows=10 loops=1)
-> Index Only Scan using idx_company_id_date_created on item (cost=0.43..20763.68 rows=7500 width=16) (actual time=0.084..0.108 rows=10 loops=1)
Index Cond: (company_id = 6)
Heap Fetches: 10
Planning time: 0.136 ms
Execution time: 0.155 ms
(6 rows)
On your server it might be slower, but in any case it should be much better than in the examples above.

Trivial order by double type: performance crash

Characters:
id BIGINT
geo_point POINT (PostGIS)
stroke_when TIMESTAMPTZ (indexed!)
stroke_when_second DOUBLE PRECISION
PostgreSQL 9.1, PostGIS 2.0.
1. Query:
SELECT ST_AsText(geo_point)
FROM lightnings
ORDER BY stroke_when DESC, stroke_when_second DESC
LIMIT 1
Total runtime: 31100.911 ms !
EXPLAIN (ANALYZE on, VERBOSE off, COSTS on, BUFFERS on):
Limit (cost=169529.67..169529.67 rows=1 width=144) (actual time=31100.869..31100.869 rows=1 loops=1)
Buffers: shared hit=3343 read=120342
-> Sort (cost=169529.67..176079.48 rows=2619924 width=144) (actual time=31100.865..31100.865 rows=1 loops=1)
Sort Key: stroke_when, stroke_when_second
Sort Method: top-N heapsort Memory: 17kB
Buffers: shared hit=3343 read=120342
-> Seq Scan on lightnings (cost=0.00..156430.05 rows=2619924 width=144) (actual time=1.589..29983.410 rows=2619924 loops=1)
Buffers: shared hit=3339 read=120342
2. Selecting another field:
SELECT id
FROM lightnings
ORDER BY stroke_when DESC, stroke_when_second DESC
LIMIT 1
Total runtime: 2144.057 ms.
EXPLAIN (ANALYZE on, VERBOSE off, COSTS on, BUFFERS on):
Limit (cost=162979.86..162979.86 rows=1 width=24) (actual time=2144.013..2144.014 rows=1 loops=1)
Buffers: shared hit=3513 read=120172
-> Sort (cost=162979.86..169529.67 rows=2619924 width=24) (actual time=2144.011..2144.011 rows=1 loops=1)
Sort Key: stroke_when, stroke_when_second
Sort Method: top-N heapsort Memory: 17kB
Buffers: shared hit=3513 read=120172
-> Seq Scan on lightnings (cost=0.00..149880.24 rows=2619924 width=24) (actual time=0.056..1464.904 rows=2619924 loops=1)
Buffers: shared hit=3509 read=120172
3. Correct optimization:
SELECT id
FROM lightnings
ORDER BY stroke_when DESC
LIMIT 1
Total runtime: 0.044 ms
EXPLAIN (ANALYZE on, VERBOSE off, COSTS on, BUFFERS on):
Limit (cost=0.00..3.52 rows=1 width=16) (actual time=0.020..0.020 rows=1 loops=1)
Buffers: shared hit=5
-> Index Scan Backward using lightnings_idx on lightnings (cost=0.00..9233232.80 rows=2619924 width=16) (actual time=0.018..0.018 rows=1 loops=1)
Buffers: shared hit=5
As you can see, there are two bad and quite different problems here, even though the query is primitive and an index is available for the optimizer to use:
Even when the optimizer doesn't use the index, why does selecting ST_AsText(geo_point) instead of id take so much more time? There is only one row in the result!
Why can't it use the index on the first ORDER BY column when an unindexed field is also present in the ORDER BY? Note that in practice there are only a few rows for each second in the DB.
Of course, the above is a simplified query, extracted from a more complex construction. Usually I select rows by date range, applying complicated filters.
PostgreSQL can't use your index to produce values in the desired order for the first two queries. When two or more rows have identical stroke_when values, they are returned from the index scan in arbitrary order. Deciding the correct order for those rows would require a secondary sorting pass. Because the PostgreSQL executor doesn't have a facility to perform that secondary sort, it falls back to a full sort approach.
If you regularly need to query the table with that order then replace your current index with a composite index that includes both columns.
You can transform your current query into a form that explicitly performs the secondary sort on only the largest value of stroke_when:
SELECT ST_AsText(geo_point) FROM lightnings
WHERE stroke_when = (SELECT max(stroke_when) FROM lightnings)
ORDER BY stroke_when_second DESC LIMIT 1
A first step could be to create a composite index on (stroke_when, stroke_when_second).
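A sketch of that index (the name is made up; DESC matches the ORDER BY in the queries):
CREATE INDEX lightnings_when_idx ON lightnings (stroke_when DESC, stroke_when_second DESC);
With such an index, the ORDER BY stroke_when DESC, stroke_when_second DESC LIMIT 1 query can be answered by reading a single index entry plus one heap fetch for geo_point.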