Slow varchar index performance in Postgres

I've got a table with ~500,000 rows and a column containing values like Brutus, Dreamer of the Wanton Wasteland. I need to run a case-insensitive LIKE query on this column, but it performs very slowly. I tried creating an index with:
create index name_idx on deck (name);
and
create index deck_name_idx on deck (lower(name));
But the query is equally slow either way. Here is my query:
select *
from deck
where lower(deck.name) like '%brutus, dreamer of the%'
order by deck.id desc
limit 20
Here are the results of EXPLAIN ANALYZE (this is with the second index, but both are equally slow):
Limit (cost=152534.89..152537.23 rows=20 width=1496) (actual time=627.480..627.490 rows=1 loops=1)
-> Gather Merge (cost=152534.89..152539.56 rows=40 width=1496) (actual time=627.479..627.488 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Sort (cost=151534.87..151534.92 rows=20 width=1496) (actual time=611.447..611.447 rows=0 loops=3)
Sort Key: id DESC
Sort Method: quicksort Memory: 25kB
-> Parallel Seq Scan on deck (cost=0.00..151534.44 rows=20 width=1496) (actual time=609.818..611.304 rows=0 loops=3)
Filter: (lower((name)::text) ~~ '%brutus, dreamer of the%'::text)
Rows Removed by Filter: 162210
Planning time: 0.786 ms
Execution time: 656.510 ms
Is there a better way to set up this index? If I have to I could denormalize the column to a lowercase version, but I'd rather not do that unless it will help a lot and there's no better way.

To support LIKE queries with no wildcard in the beginning, use
CREATE INDEX ON deck (lower(name) varchar_pattern_ops);
To support LIKE searches that can have a wildcard at the beginning, you can use a trigram index:
CREATE EXTENSION pg_trgm;
CREATE INDEX ON deck USING gin (lower(name) gin_trgm_ops);
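Since the query in the question has a leading wildcard, the trigram index is the relevant one here. A minimal sketch (the index name is arbitrary; the filter must use the same expression the index is built on, i.e. lower(name)):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX deck_name_trgm_idx ON deck USING gin (lower(name) gin_trgm_ops);

EXPLAIN ANALYZE
SELECT *
FROM deck
WHERE lower(deck.name) LIKE '%brutus, dreamer of the%'
ORDER BY deck.id DESC
LIMIT 20;
With that index in place, the planner can replace the parallel sequential scan with a bitmap index scan on the GIN index.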

Related

Postgres Optimizing Free Text Search when many results are returned

We are building lightweight text search on top of our data in Postgres with GIN indexes. When the set of matching rows is small, it works really fast. However, when we search for common terms, performance degrades significantly because there are many matches.
Consider the following query:
EXPLAIN ANALYZE
SELECT count(id)
FROM data_change_records d
WHERE to_tsvector('english', d.content) @@ websearch_to_tsquery('english', 'mustafa');
The result is as follows:
Finalize Aggregate (cost=47207.99..47208.00 rows=1 width=8) (actual time=15.461..17.129 rows=1 loops=1)
-> Gather (cost=47207.78..47207.99 rows=2 width=8) (actual time=9.734..17.119 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=46207.78..46207.79 rows=1 width=8) (actual time=3.773..3.774 rows=1 loops=3)
-> Parallel Bitmap Heap Scan on data_change_records d (cost=1759.41..46194.95 rows=5130 width=37) (actual time=1.765..3.673 rows=1143 loops=3)
Recheck Cond: (to_tsvector('english'::regconfig, content) @@ '''mustafa'''::tsquery)
Heap Blocks: exact=2300
-> Bitmap Index Scan on data_change_records_content_to_tsvector_idx (cost=0.00..1756.33 rows=12311 width=0) (actual time=4.219..4.219 rows=3738 loops=1)
Index Cond: (to_tsvector('english'::regconfig, content) @@ '''mustafa'''::tsquery)
Planning Time: 0.141 ms
Execution Time: 17.163 ms
If mustafa is replaced with a common term like aws (which the tokenizer reduces to aw), the analysis is as follows:
Finalize Aggregate (cost=723889.39..723889.40 rows=1 width=8) (actual time=1073.513..1086.414 rows=1 loops=1)
-> Gather (cost=723889.17..723889.38 rows=2 width=8) (actual time=1069.439..1086.401 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=722889.17..722889.18 rows=1 width=8) (actual time=1063.847..1063.848 rows=1 loops=3)
-> Parallel Bitmap Heap Scan on data_change_records d (cost=17128.34..721138.59 rows=700233 width=37) (actual time=389.347..1014.440 rows=542724 loops=3)
Recheck Cond: (to_tsvector('english'::regconfig, content) @@ '''aw'''::tsquery)
Heap Blocks: exact=167605
-> Bitmap Index Scan on data_change_records_content_to_tsvector_idx (cost=0.00..16708.20 rows=1680560 width=0) (actual time=282.517..282.518 rows=1647916 loops=1)
Index Cond: (to_tsvector('english'::regconfig, content) @@ '''aw'''::tsquery)
Planning Time: 0.150 ms
Execution Time: 1086.455 ms
At this point, we are not sure how to proceed. One option is to change the tokenization so that two-letter tokens are not indexed; we have lots of aws occurrences, which is the cause. For instance, if we search for ok, which is also a two-letter token but not as common, the query returns in 61.378 ms.
Searching for frequent words can never be as fast as searching for rare words.
One thing that strikes me is that you are using English stemming to search for names. If that is really your use case, you should use the simple dictionary, which wouldn't stem aws to aw.
Alternatively, you could add a synonym dictionary containing aws to a custom text search configuration, so that the word is not stemmed.
But, as I said, searching for frequent words cannot be fast if you want all result rows. A trick you could use is to set gin_fuzzy_search_limit to the limit of hits you want to find, then the index scan will stop early and may be faster (but you won't get all results).
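A sketch of that trick (the limit value is illustrative; with a non-zero gin_fuzzy_search_limit the count becomes approximate, since the index scan may stop early):
SET gin_fuzzy_search_limit = 10000;  -- soft cap on rows returned from a GIN index scan; 0 (the default) means no limit

SELECT count(id)
FROM data_change_records d
WHERE to_tsvector('english', d.content) @@ websearch_to_tsquery('english', 'aws');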
If you have a new enough version of PostgreSQL and your table is well-vacuumed, you can get a bitmap-only scan that doesn't need to visit the table, just the index. But you would need to use count(*), not count(id), to get that. If id is never NULL, the two give identical answers.
The query plan does not make it easy to tell when the bitmap-only optimization kicks in or how effective it is. If you use EXPLAIN (ANALYZE, BUFFERS) you should get at least some clue based on the buffer counts.
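For example, combining both suggestions (count(*) plus buffer statistics):
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM data_change_records d
WHERE to_tsvector('english', d.content) @@ websearch_to_tsquery('english', 'aws');
A noticeable drop in the buffer counts, compared with the Heap Blocks figures in the plans above, is the clue that heap visits were skipped.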

Improve Postgres performance

I am new to Postgres and I'm sure I'm doing something wrong.
So I just wondered if anybody had experienced something similar to my experiences below or could point me in the right direction to improve Postgres performance.
My initial goal was to speed up the analytical processing of my Datamarts in various Dashboards by moving from MS SQL Server to Postgres.
To get a sample query to compare speeds I ran query profiler on MS SQL Server whilst referencing a BI dashboard, which produced something similar to this (I know there are redundant columns in the sub query):
SELECT COUNT(*)
FROM (
SELECT
BM.Key_Date, BM.[Actual Date], BM.[Month]
,BM.[Month Number], BM.[Month Year], BM.[No of Working Days]
,SDI.Key_Delivery, SDI.[Order Number], SDI.[Quantity SKU]
,SDI.[Quantity Sales Unit], SDI.[FactSales - GBP], SDI.[NNSA Capsules]
,SFI.[Ship-to], SFI.[Sold-to], SFI.[Sales Force Type], SFI.Region
,SFI.[Top Level Account], SFI.[Customer Organisation]
,EX.Rate
,PDI.[Product Description], PDI.[Product Type Group], PDI.[Product Type],
PDI.[Main Product Categories], PDI.Section, PDI.Family
FROM Fact.SalesDataInvoiced AS SDI
JOIN Dimension.SalesforceInvoiced AS SFI
ON SDI.[Key_Ship-to]=SFI.[Key_Ship-to]
JOIN Dimension.BillingMonth AS BM
ON SDI.[Key_Billing Month]=BM.Key_Date
JOIN Dimension.ProductDataInvoiced AS PDI
ON SDI.[Key_Product Code]=PDI.[Key_Product Code]
CROSS JOIN Dimension.Exchange AS EX
WHERE BM.[Actual Date] BETWEEN '20160101' AND '20211001'
) AS a
GROUP BY [Product Type], [Product Type Group],[Main Product Categories]
I then installed Postgres 14 (on CentOS 8) and MS SQL Server Developer 2017 (on Windows 10) on separate, identical laptops and created a database and tables from the same CSV data files to enable replication of the above query.
Running a Postgres query with indexing performs massively slower than MS SQL without indexing.
Adding indexes to MS SQL produces results almost instantly.
Because of the difference in processing time I even installed Citus with Postgres14 and created Fact.SalesDataInvoiced as a columnar table (This made the processing time worse).
I have played about with memory settings in postgresql.conf but nothing seems to enable speeds comparable to MSSQL.
Explain Analyze shows that despite the indexes it always runs a sequential scan of all tables. Forcing indexed scans doesn't make any difference to processing time.
Would I be right in thinking Postgres would perform significantly better using a cluster and partitioning? Even if that's the case, surely a simple query like the one I'm trying to run on a standalone machine should be faster?
TABLE DETAILS
Dimension.BillingMonth
Records 120,
Primary Key is KeyDate,
Clustered Unique Index on KeyDate
Dimension.Exchange
Records 1
Dimension.ProductDataInvoiced
Records 275563,
Primary Key is KeyProduct,
Clustered Unique Index on KeyProduct
Dimension.SalesforceInvoiced
Records 377414,
Primary Key is KeyShipTo,
Clustered Unique Index on KeyShipTo
Fact.SalesDataInvoiced
Records 43807943,
Non-Clustered Unique Index on KeyShipTo, KeyProduct, KeyBillingMonth
Any help would be appreciated as previously mentioned I'm sure I must be missing something obvious.
Many thanks in advance.
David
Thank you for the responses. I have placed additional info below.
Forgot to add: my Postgres performance woes appeared after I'd carried out a full VACUUM and REINDEX. I performed these maintenance tasks after I had imported the data and created my indexes.
Output after querying pg_indexes
tablename | indexname | indexdef
BillingMonth | BillingMonth_pkey | CREATE UNIQUE INDEX BillingMonth_pkey ON public.BillingMonth USING btree (KeyDate)
ProductDataInvoiced | ProductDataInvoiced_pkey | CREATE UNIQUE INDEX ProductDataInvoiced_pkey ON public.ProductDataInvoiced USING btree (KeyProductCode)
SalesforceInvoiced | SalesforceInvoiced_pkey | CREATE UNIQUE INDEX SalesforceInvoiced_pkey ON public.SalesforceInvoiced USING btree (KeyShipTo)
SalesDataInvoiced | CI_SalesData | CREATE INDEX CI_SalesData ON public.SalesDataInvoiced USING btree (KeyShipTo, KeyProductCode, KeyBillingMonth)
Output After running EXPLAIN (ANALYZE, BUFFERS)
Finalize GroupAggregate (cost=1435439.30..1435565.71 rows=480 width=53) (actual time=25960.468..25973.326 rows=31 loops=1)
Group Key: pdi."ProductType", pdi."ProductTypeGroup", pdi."MainProductCategories"
Buffers: shared hit=71246 read=859119
-> Gather Merge (cost=1435439.30..1435551.31 rows=960 width=53) (actual time=25960.458..25973.282 rows=89 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=71246 read=859119
-> Sort (cost=1434439.28..1434440.48 rows=480 width=53) (actual time=25956.982..25956.989 rows=30 loops=3)
Sort Key: pdi."ProductType", pdi."ProductTypeGroup", pdi."MainProductCategories"
Sort Method: quicksort Memory: 28kB
Buffers: shared hit=71246 read=859119
Worker 0: Sort Method: quicksort Memory: 29kB
Worker 1: Sort Method: quicksort Memory: 29kB
-> Partial HashAggregate (cost=1434413.10..1434417.90 rows=480 width=53) (actual time=25956.878..25956.895 rows=30 loops=3)
Group Key: pdi."ProductType", pdi."ProductTypeGroup", pdi."MainProductCategories"
Batches: 1 Memory Usage: 49kB
Buffers: shared hit=71230 read=859119
Worker 0: Batches: 1 Memory Usage: 49kB
Worker 1: Batches: 1 Memory Usage: 49kB
-> Parallel Hash Join (cost=62124.74..1327935.46 rows=10647764 width=45) (actual time=285.864..19240.004 rows=14602648 loops=3)
Hash Cond: (sdi."KeyShipTo" = sfi."KeyShipTo")
Buffers: shared hit=71230 read=859119
-> Hash Join (cost=19648.48..1257508.51 rows=10647764 width=49) (actual time=204.794..12862.063 rows=14602648 loops=3)
Hash Cond: (sdi."KeyProductCode" = pdi."KeyProductCode")
Buffers: shared hit=32264 read=859119
-> Hash Join (cost=3.67..1091456.95 rows=10647764 width=8) (actual time=0.143..7076.104 rows=14602648 loops=3)
Hash Cond: (sdi."KeyBillingMonth" = bm."KeyDate")
Buffers: shared hit=197 read=859119
-> Parallel Seq Scan on "SalesData_Invoiced" sdi (cost=0.00..1041846.10 rows=18253310 width=12) (actual time=0.071..2585.596 rows=14602648 loops=3)
Buffers: shared hit=194 read=859119
-> Hash (cost=2.80..2.80 rows=70 width=4) (actual time=0.049..0.050 rows=70 loops=3)
Buckets: 1024 Batches: 1 Memory Usage: 11kB
Buffers: shared hit=3
-> Seq Scan on "BillingMonth" bm (cost=0.00..2.80 rows=70 width=4) (actual time=0.012..0.028 rows=70 loops=3)
Filter: (("ActualDate" >= '2016-01-01'::date) AND ("ActualDate" <= '2021-10-01'::date))
Rows Removed by Filter: 50
Buffers: shared hit=3
-> Hash (cost=16200.27..16200.27 rows=275563 width=49) (actual time=203.237..203.238 rows=275563 loops=3)
Buckets: 524288 Batches: 1 Memory Usage: 26832kB
Buffers: shared hit=32067
-> Nested Loop (cost=0.00..16200.27 rows=275563 width=49) (actual time=0.034..104.143 rows=275563 loops=3)
Buffers: shared hit=32067
-> Seq Scan on "Exchange" ex (cost=0.00..1.01 rows=1 width=0) (actual time=0.024..0.024 rows=1 loops=3)
Buffers: shared hit=3
-> Seq Scan on "ProductData_Invoiced" pdi (cost=0.00..13443.63 rows=275563 width=49) (actual time=0.007..48.176 rows=275563 loops=3)
Buffers: shared hit=32064
-> Parallel Hash (cost=40510.56..40510.56 rows=157256 width=4) (actual time=79.536..79.536 rows=125805 loops=3)
Buckets: 524288 Batches: 1 Memory Usage: 18912kB
Buffers: shared hit=38938
-> Parallel Seq Scan on "Salesforce_Invoiced" sfi (cost=0.00..40510.56 rows=157256 width=4) (actual time=0.011..42.968 rows=125805 loops=3)
Buffers: shared hit=38938
Planning:
Buffers: shared hit=426
Planning Time: 1.936 ms
Execution Time: 25973.709 ms
(55 rows)
Firstly, remember to run VACUUM ANALYZE after rebuilding indexes, or after importing large amounts of data. (VACUUM FULL is mainly useful for returning disk space to the OS, and you'd still need to ANALYZE afterwards, especially after rebuilding indexes.)
It seems from your query that your main table is SalesDataInvoiced (SDI) and that you'd want to use an index on KeyBillingMonth if possible (since it's the main restriction you're placing). In general, you'd also want indexes on the other tables, at least on the columns used for the joins.
As the documentation for multi-column indexes in PostgreSQL says:
A multicolumn B-tree index can be used with query conditions that involve any subset of the index's columns, but the index is most efficient when there are constraints on the leading (leftmost) columns. The exact rule is that equality constraints on leading columns, plus any inequality constraints on the first column that does not have an equality constraint, will be used to limit the portion of the index that is scanned. Constraints on columns to the right of these columns are checked in the index, so they save visits to the table proper, but they do not reduce the portion of the index that has to be scanned. For example, given an index on (a, b, c) and a query condition WHERE a = 5 AND b >= 42 AND c < 77, the index would have to be scanned from the first entry with a = 5 and b = 42 up through the last entry with a = 5. Index entries with c >= 77 would be skipped, but they'd still have to be scanned through. This index could in principle be used for queries that have constraints on b and/or c with no constraint on a — but the entire index would have to be scanned, so in most cases the planner would prefer a sequential table scan over using the index.
In your example, the main column you'd want to use a constraint on (KeyBillingMonth) is in third position, so it's unlikely to be used.
CREATE INDEX CI_SalesData ON public.SalesDataInvoiced
USING btree (KeyShipTo, KeyProductCode, KeyBillingMonth)
Creating this should make it more likely to be used:
CREATE INDEX ON SalesDataInvoiced(KeyBillingMonth);
Then, run VACUUM ANALYZE and try your query again.
You may also want an index on BillingMonth(ActualDate), but that's not necessarily useful since there seem to be few rows (and most of them are returned by your query).
It's not clear what the BillingMonth table is for. If it's basically about truncating ActualDate to the first day of the month, you could get rid of the join on BillingMonth and put the constraint on SalesDataInvoiced.KeyBillingMonth directly, e.g. ... WHERE SDI.KeyBillingMonth BETWEEN '2016-01-01' AND '2021-10-01' ... (see the sketch below).
As a side note, BETWEEN is inclusive for its upper bound. I'd imagine a query like this is meant to produce monthly statistics, so it should probably not include 2021-10-01 alone while excluding the rest of that month.
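A sketch of that rewrite, assuming KeyBillingMonth is a date holding the first day of each month (table and column names taken from the EXPLAIN output above; the half-open upper bound sidesteps the BETWEEN issue):
SELECT pdi."ProductType", pdi."ProductTypeGroup", pdi."MainProductCategories", count(*)
FROM "SalesData_Invoiced" sdi
JOIN "Salesforce_Invoiced" sfi ON sdi."KeyShipTo" = sfi."KeyShipTo"
JOIN "ProductData_Invoiced" pdi ON sdi."KeyProductCode" = pdi."KeyProductCode"
CROSS JOIN "Exchange" ex
WHERE sdi."KeyBillingMonth" >= DATE '2016-01-01'
  AND sdi."KeyBillingMonth" < DATE '2021-11-01'
GROUP BY pdi."ProductType", pdi."ProductTypeGroup", pdi."MainProductCategories";
This gives the planner the option of using the new index on KeyBillingMonth instead of scanning the whole fact table.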

What are the things you look for when using EXPLAIN ANALYZE to determine if there are improvements you can make or not

I've been reading a bunch of the Postgres docs, but when looking at the output it's still not super clear to me whether a query is optimized or not. I've tried adding some indexes, which has reduced the number of lines in the output. If you were to look at something like this:
Limit (cost=26.16..26.18 rows=10 width=322) (actual time=0.077..0.079 rows=10 loops=1)
-> Sort (cost=26.16..26.19 rows=12 width=322) (actual time=0.076..0.077 rows=10 loops=1)
Sort Key: like_count DESC, inserted_at DESC
Sort Method: top-N heapsort Memory: 28kB
-> Bitmap Heap Scan on comments c0 (cost=4.40..25.94 rows=12 width=322) (actual time=0.036..0.049 rows=38 loops=1)
Recheck Cond: ((post_id = 'dc1ab68f-db3f-4b45-aa48-b5c30298e261'::uuid) AND (parent_id IS NULL))
Heap Blocks: exact=9
-> Bitmap Index Scan on comments_post_id_parent_id_index (cost=0.00..4.40 rows=12 width=0) (actual time=0.013..0.013 rows=38 loops=1)
Index Cond: ((post_id = 'dc1ab68f-db3f-4b45-aa48-b5c30298e261'::uuid) AND (parent_id IS NULL))
Planning Time: 0.099 ms
Execution Time: 0.099 ms
(11 rows)
Are there any key things you look at to say "This query is pretty optimized", or "Wow, there's an index I can add to reduce all that work"?
The first thing I'd notice is that it took less than 1/10,000 of a second to run, and so is unlikely to need manual optimization work. And then I'd wonder, why did I get started looking at such a fast query in the first place? Surely I should be examining the slow queries, not the fast ones.
I first look for sequential table scans, which indicate that the query planner could not use an index, either because there isn't one or because it has failed to use it for some reason.
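One quick diagnostic for the second case (a sketch; the query is reconstructed from the plan in the question purely as an example): temporarily discourage sequential scans inside a transaction and re-run EXPLAIN ANALYZE. If the planner then switches to an index and the query gets faster, the problem is cost estimation rather than a missing index.
BEGIN;
SET LOCAL enable_seqscan = off;  -- applies only to this transaction
EXPLAIN ANALYZE
SELECT *
FROM comments c0
WHERE post_id = 'dc1ab68f-db3f-4b45-aa48-b5c30298e261' AND parent_id IS NULL
ORDER BY like_count DESC, inserted_at DESC
LIMIT 10;
ROLLBACK;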

PostgreSQL index not used

I have a table with several million rows called item with columns that look like this:
CREATE TABLE item (
id bigint NOT NULL,
company_id bigint NOT NULL,
date_created timestamp with time zone,
....
)
There is an index on company_id
CREATE INDEX idx_company_id ON photo USING btree (company_id);
This table is often searched for the last 10 items for a certain customer, i.e.,
SELECT * FROM item WHERE company_id = 5 ORDER BY date_created LIMIT 10;
Currently, there is one customer that accounts for about 75% of the data in that table, the other 25% of the data is spread across 25 or so other customers, meaning that 75% of the rows have a company id of 5, the other rows have company ids between 6 and 25.
The query generally runs very fast for all companies except the predominant one (id = 5). I can understand why, since the index on company_id can be used effectively for every company except 5.
I have experimented with different indexes to make this search more efficient for company 5. The one that seemed to make the most sense is
CREATE INDEX idx_date_created
ON item (date_created DESC NULLS LAST);
If I add this index, queries for the predominant company (id = 5) are greatly improved, but queries for all other companies go to crap.
Some results of EXPLAIN ANALYZE for company id 5 & 6 with and without the new index:
Company Id 5
Before new index
QUERY PLAN
Limit (cost=214874.63..214874.65 rows=10 width=639) (actual time=10481.989..10482.017 rows=10 loops=1)
-> Sort (cost=214874.63..218560.33 rows=1474282 width=639) (actual time=10481.985..10481.994 rows=10 loops=1)
Sort Key: photo_created
Sort Method: top-N heapsort Memory: 35kB
-> Seq Scan on photo (cost=0.00..183015.92 rows=1474282 width=639) (actual time=0.009..5345.551 rows=1473561 loops=1)
Filter: (company_id = 5)
Rows Removed by Filter: 402513
Total runtime: 10482.075 ms
After new index:
QUERY PLAN
Limit (cost=0.43..1.98 rows=10 width=639) (actual time=0.087..0.120 rows=10 loops=1)
-> Index Scan using idx_photo__photo_created on photo (cost=0.43..228408.04 rows=1474282 width=639) (actual time=0.084..0.099 rows=10 loops=1)
Filter: (company_id = 5)
Rows Removed by Filter: 26
Total runtime: 0.164 ms
Company Id 6
Before new index:
QUERY PLAN
Limit (cost=2204.27..2204.30 rows=10 width=639) (actual time=0.044..0.053 rows=3 loops=1)
-> Sort (cost=2204.27..2207.55 rows=1310 width=639) (actual time=0.040..0.044 rows=3 loops=1)
Sort Key: photo_created
Sort Method: quicksort Memory: 28kB
-> Index Scan using idx_photo__company_id on photo (cost=0.43..2175.96 rows=1310 width=639) (actual time=0.020..0.026 rows=3 loops=1)
Index Cond: (company_id = 6)
Total runtime: 0.100 ms
After new index:
QUERY PLAN
Limit (cost=0.43..1744.00 rows=10 width=639) (actual time=0.039..3938.986 rows=3 loops=1)
-> Index Scan using idx_photo__photo_created on photo (cost=0.43..228408.04 rows=1310 width=639) (actual time=0.035..3938.975 rows=3 loops=1)
Filter: (company_id = 6)
Rows Removed by Filter: 1876071
Total runtime: 3939.028 ms
I have run a full VACUUM and ANALYZE on the table, so PostgreSQL should have up-to-date statistics. Any ideas how I can get PostgreSQL to choose the right index for the company being queried?
This is known as the "abort-early plan problem", and it's been a chronic mis-optimization for years. Abort-early plans are amazing when they work, but terrible when they don't; see that linked mailing list thread for a more detailed explanation. Basically, the planner thinks it'll find the 10 rows you want for customer 6 without scanning the whole date_created index, and it's wrong.
There isn't any hard-and-fast way to improve this query categorically prior to PostgreSQL 10 (now in beta). What you'll want to do is nudge the query planner in various ways in hopes of getting what you want. Primary methods include anything which makes PostgreSQL more likely to use multi-column indexes, such as:
lowering random_page_cost (a good idea anyway if you're on SSDs).
lowering cpu_index_tuple_cost
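A sketch of trying these in a session before changing them globally (the values are illustrative, not recommendations):
SET random_page_cost = 1.1;        -- default 4.0; lower values make index scans look cheaper (reasonable on SSDs)
SET cpu_index_tuple_cost = 0.001;  -- default 0.005
EXPLAIN ANALYZE
SELECT * FROM item WHERE company_id = 6 ORDER BY date_created LIMIT 10;
If the plan improves, the settings can be persisted in postgresql.conf or via ALTER SYSTEM.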
It's also possible that you may be able to fix the planner behavior by playing with the table statistics. This includes:
raising statistics_target for the table and running ANALYZE again, in order to make PostgreSQL take more samples and get a better picture of row distribution;
increasing n_distinct in the stats to accurately reflect the number of customer_ids or different created_dates.
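A sketch of both statistics tweaks (column names from the question; the target and n_distinct values are illustrative):
-- Sample company_id more thoroughly on the next ANALYZE (the default statistics target is 100).
ALTER TABLE item ALTER COLUMN company_id SET STATISTICS 1000;
-- Or pin the distinct-value estimate directly, roughly matching the number of customers.
ALTER TABLE item ALTER COLUMN company_id SET (n_distinct = 26);
ANALYZE item;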
However, all of these solutions are approximate, and if query performance goes to heck as your data changes in the future, this should be the first query you look at.
In PostgreSQL 10, you'll be able to create Cross-Column Stats which should improve the situation more reliably. Depending on how broken this is for you, you could try using the beta.
If none of that works, I suggest the #postgresql IRC channel on Freenode or the pgsql-performance mailing list. Folks there will ask for your detailed table stats in order to make some suggestions.
Yet another point:
Why do you create the index
CREATE INDEX idx_date_created ON item (date_created DESC NULLS LAST);
But then run:
SELECT * FROM item WHERE company_id = 5 ORDER BY date_created LIMIT 10;
Maybe you meant:
SELECT * FROM item WHERE company_id = 5 ORDER BY date_created DESC NULLS LAST LIMIT 10;
Also, it is better to create a combined index:
CREATE INDEX idx_company_id_date_created ON item (company_id, date_created DESC NULLS LAST);
And after that, the plans look like this:
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..28.11 rows=10 width=16) (actual time=0.120..0.153 rows=10 loops=1)
-> Index Only Scan using idx_company_id_date_created on item (cost=0.43..20763.68 rows=7500 width=16) (actual time=0.118..0.145 rows=10 loops=1)
Index Cond: (company_id = 5)
Heap Fetches: 10
Planning time: 1.003 ms
Execution time: 0.209 ms
(6 rows)
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..28.11 rows=10 width=16) (actual time=0.085..0.115 rows=10 loops=1)
-> Index Only Scan using idx_company_id_date_created on item (cost=0.43..20763.68 rows=7500 width=16) (actual time=0.084..0.108 rows=10 loops=1)
Index Cond: (company_id = 6)
Heap Fetches: 10
Planning time: 0.136 ms
Execution time: 0.155 ms
(6 rows)
On your server it might be slower, but in any case it should be much better than in the examples above.

Trivial order by double type: performance crash

The columns involved:
id BIGINT
geo_point POINT (PostGIS)
stroke_when TIMESTAMPTZ (indexed!)
stroke_when_second DOUBLE PRECISION
PostgreSQL 9.1, PostGIS 2.0.
1. Query:
SELECT ST_AsText(geo_point)
FROM lightnings
ORDER BY stroke_when DESC, stroke_when_second DESC
LIMIT 1
Total runtime: 31100.911 ms !
EXPLAIN (ANALYZE on, VERBOSE off, COSTS on, BUFFERS on):
Limit (cost=169529.67..169529.67 rows=1 width=144) (actual time=31100.869..31100.869 rows=1 loops=1)
Buffers: shared hit=3343 read=120342
-> Sort (cost=169529.67..176079.48 rows=2619924 width=144) (actual time=31100.865..31100.865 rows=1 loops=1)
Sort Key: stroke_when, stroke_when_second
Sort Method: top-N heapsort Memory: 17kB
Buffers: shared hit=3343 read=120342
-> Seq Scan on lightnings (cost=0.00..156430.05 rows=2619924 width=144) (actual time=1.589..29983.410 rows=2619924 loops=1)
Buffers: shared hit=3339 read=120342
2. Selecting another field:
SELECT id
FROM lightnings
ORDER BY stroke_when DESC, stroke_when_second DESC
LIMIT 1
Total runtime: 2144.057 ms.
EXPLAIN (ANALYZE on, VERBOSE off, COSTS on, BUFFERS on):
Limit (cost=162979.86..162979.86 rows=1 width=24) (actual time=2144.013..2144.014 rows=1 loops=1)
Buffers: shared hit=3513 read=120172
-> Sort (cost=162979.86..169529.67 rows=2619924 width=24) (actual time=2144.011..2144.011 rows=1 loops=1)
Sort Key: stroke_when, stroke_when_second
Sort Method: top-N heapsort Memory: 17kB
Buffers: shared hit=3513 read=120172
-> Seq Scan on lightnings (cost=0.00..149880.24 rows=2619924 width=24) (actual time=0.056..1464.904 rows=2619924 loops=1)
Buffers: shared hit=3509 read=120172
3. Correct optimization:
SELECT id
FROM lightnings
ORDER BY stroke_when DESC
LIMIT 1
Total runtime: 0.044 ms
EXPLAIN (ANALYZE on, VERBOSE off, COSTS on, BUFFERS on):
Limit (cost=0.00..3.52 rows=1 width=16) (actual time=0.020..0.020 rows=1 loops=1)
Buffers: shared hit=5
-> Index Scan Backward using lightnings_idx on lightnings (cost=0.00..9233232.80 rows=2619924 width=16) (actual time=0.018..0.018 rows=1 loops=1)
Buffers: shared hit=5
As you can see, there are two bad and very different problems, even though the query is quite primitive:
Even if the optimizer doesn't use the index, why does selecting ST_AsText(geo_point) instead of id take so much more time? There is only one row in the result!
The index on the first ORDER BY column cannot be used when an unindexed field appears in the ORDER BY. Note that in practice only a few rows per second are present in the database.
Of course, the above is a simplified query, extracted from a more complex construction. Usually I select rows by date range and apply complicated filters.
PostgreSQL can't use your index to produce values in the desired order for the first two queries. When two or more rows have identical stroke_when values, they are returned from the index scan in arbitrary order. Deciding the correct order for those rows would require a secondary sorting pass. Because the PostgreSQL executor doesn't have a facility to perform that secondary sort, it falls back to a full sort.
If you regularly need to query the table with that order then replace your current index with a composite index that includes both columns.
You can transform your current query into a form that explicitly applies the secondary sort only to the largest value of stroke_when:
SELECT ST_AsText(geo_point) FROM lightnings
WHERE stroke_when = (SELECT max(stroke_when) FROM lightnings)
ORDER BY stroke_when_second DESC LIMIT 1
A first step could be to create a composite index on (stroke_when, stroke_when_second).
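A sketch of that composite index (the index name is arbitrary), matching the ORDER BY direction used above:
CREATE INDEX lightnings_when_composite_idx
ON lightnings (stroke_when DESC, stroke_when_second DESC);
With it in place, ORDER BY stroke_when DESC, stroke_when_second DESC LIMIT 1 can be answered by reading a single index entry instead of sorting the whole table; a plain ascending index on the same two columns, scanned backward, would work as well.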