I'm having trouble with a SELECT query. It's a simple query to select the most recent row from a table, but the query planner is doing something weird.
SELECT id FROM plots WHERE fk_id='73a711a5-cb31-545d-b8d6-75c2a0e3ba9d'
ORDER BY created_at DESC LIMIT 1
Limit (cost=0.43..155.97 rows=1 width=24)
-> Index Scan Backward using idx_plot_created_at on plots
(cost=0.43..304694.48 rows=1959 width=24)
Filter: (fk_id = '73a711a5-cb31-545d-b8d6-75c2a0e3ba9d'::uuid)
And it takes about 2.8 seconds to execute. But when I remove the ORDER BY and LIMIT, it uses the index on the fk_id field and returns 6 results in 102 ms. Shouldn't PostgreSQL just read the 6 results using the fk_id index and order them by the created_at field? Instead it seems to scan the table using the index on the ORDER BY column and then check every row against the Filter condition. Why does it do that?
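For what it's worth, the usual fix for this pattern is a composite index that covers both the filter and the sort, so the planner can jump straight to the rows for the given fk_id, already ordered by created_at, and stop after the first one. A minimal sketch assuming the names above (the index name is made up):

CREATE INDEX plots_fk_id_created_at_idx ON plots (fk_id, created_at DESC);

-- The same query should then show an Index Scan with
-- Index Cond: (fk_id = '...') and no Filter step.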
Related
I have the following basic table in PostgreSQL 12.13:
Record:
database_id: FK to database table
updated_at: timestamp
I've created separate indexes on the database_id and updated_at fields.
I have a query that fetches the most recent 100 records for a given database id:
SELECT * FROM record WHERE database_id='fa9bcfa6-8d89-4c95-b04a-24c85b169066'
ORDER BY record.updated_at DESC
LIMIT 100;
This query is EXTREMELY slow (recently took about 6 min to run). Here is the query plan:
Limit (cost=0.09..1033.82 rows=100 width=486)
-> Index Scan Backward using record_updated_at on record (cost=0.09..8149369.05 rows=788343 width=486)
Filter: (database_id = 'fa9bcfa6-8d89-4c95-b04a-24c85b169066'::uuid)
If I change ORDER BY DESC to ORDER BY ASC then the query takes milliseconds, even though the query plan looks about the same:
SELECT * FROM record WHERE database_id='fa9bcfa6-8d89-4c95-b04a-24c85b169066'
ORDER BY record.updated_at
LIMIT 100;
Limit (cost=0.09..1033.86 rows=100 width=486)
-> Index Scan using record_updated_at on record (cost=0.09..8149892.78 rows=788361 width=486)
Filter: (database_id = 'fa9bcfa6-8d89-4c95-b04a-24c85b169066'::uuid)
If I remove the ORDER BY completely then the query is also fast:
SELECT * FROM record WHERE database_id='fa9bcfa6-8d89-4c95-b04a-24c85b169066'
LIMIT 100;
Limit (cost=0.11..164.75 rows=100 width=486)
-> Index Scan using record_database_id on record (cost=0.11..1297917.10 rows=788366 width=486)
Index Cond: (database_id = 'fa9bcfa6-8d89-4c95-b04a-24c85b169066'::uuid)
A few questions:
Why is the first query so much slower than the other two? I understand why the last one is faster, but I don't understand why changing ORDER BY ... DESC to ORDER BY ... ASC makes such a difference. Am I missing an index?
How can I speed up the initial query?
The plan follows the index to read records in order of updated_at DESC, testing each for the desired database_id, and stops as soon as it finds 100 that match. But that specific value of database_id is much more common at one end of the index than the other. There is nothing magical about the DESC here; presumably there is some other value of database_id for which it works the opposite way, finding 100 of them much faster when descending than when ascending. If you had run EXPLAIN (ANALYZE), this would have been immediately clear from the different reported values for "Rows Removed by Filter".
If you have a multicolumn index on (database_id,updated_at), then it can immediately jump to the desired spot of the index and read the first 100 rows it finds (after "filtering" them for visibility), and that would work fast no matter which direction you want the ORDER BY to go.
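A sketch of that index, using the column names from the question (the index name is my own):

CREATE INDEX record_database_id_updated_at_idx ON record (database_id, updated_at DESC);

-- Both the ASC and the DESC variants of the query can then use an
-- Index Cond on database_id and read the first 100 rows already in order.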
I'm trying to improve query performance on a big (500M row) time-partitioned table. Here is the simplified table structure:
CREATE TABLE execution (
start_time TIMESTAMP WITH TIME ZONE NOT NULL,
end_time TIMESTAMP WITH TIME ZONE,
restriction_criteria VARCHAR(36) NOT NULL
) PARTITION BY RANGE (start_time);
Time partitioning
is based on the start_time column, because the end_time value is not known when the row is created.
is used to implement the retention policy efficiently (see the sketch just below).
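For context, this is roughly what the retention job looks like with this layout; a sketch only, using the monthly partition naming that shows up in the plan further down:

-- Expired data is removed by dropping (or detaching) whole partitions,
-- which is far cheaper than DELETE + VACUUM on a 500M-row table.
DROP TABLE IF EXISTS execution_2021_10;
-- or, to archive the data first:
-- ALTER TABLE execution DETACH PARTITION execution_2021_10;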
The query follows this generic pattern:
SELECT *
FROM execution
WHERE start_time BETWEEN :from AND :to
AND restriction_criteria IN ('123', '456')
ORDER BY end_time DESC, id
FETCH NEXT 20 ROWS ONLY;
I've gotten the "best" performance using this index:
CREATE INDEX IF NOT EXISTS end_time_desc_start_time_index ON execution USING btree (end_time DESC, start_time);
Yet, performances are not good enough.
Limit (cost=1303.21..27189.31 rows=20 width=1674) (actual time=6791.191..6791.198 rows=20 loops=1)
-> Incremental Sort (cost=1303.21..250693964.74 rows=193689 width=1674) (actual time=6791.189..6791.194 rows=20 loops=1)
Sort Key: execution.end_time DESC, execution.id
Presorted Key: execution.end_time
Full-sort Groups: 1 Sort Method: quicksort Average Memory: 64kB Peak Memory: 64kB
-> Merge Append (cost=8.93..250685248.74 rows=193689 width=1674) (actual time=4082.161..6791.047 rows=21 loops=1)
Sort Key: execution.end_time DESC
Subplans Removed: 15
-> Index Scan using execution_2021_10_end_time_start_time_idx on execution_2021_10 execution_1 (cost=0.56..113448316.66 rows=93103 width=1674) (actual time=578.896..578.896 rows=1 loops=1)
Index Cond: ((start_time <= '2021-12-05 02:00:04+00'::timestamp with time zone) AND (start_time >= '2021-10-02 02:00:04+00'::timestamp with time zone))
Filter: ((restriction_criteria)::text = ANY ('{123,456}'::text[]))
Rows Removed by Filter: 734
-> Index Scan using execution_2021_11_end_time_start_time_idx on execution_2021_11 execution_2 (cost=0.56..113653576.54 rows=87605 width=1674) (actual time=116.841..116.841 rows=1 loops=1)
Index Cond: ((start_time <= '2021-12-05 02:00:04+00'::timestamp with time zone) AND (start_time >= '2021-10-02 02:00:04+00'::timestamp with time zone))
Filter: ((restriction_criteria)::text = ANY ('{123,456}'::text[]))
Rows Removed by Filter: 200
-> Index Scan using execution_2021_12_end_time_start_time_idx on execution_2021_12 execution_3 (cost=0.56..16367185.18 rows=12966 width=1674) (actual time=3386.416..6095.261 rows=21 loops=1)
Index Cond: ((start_time <= '2021-12-05 02:00:04+00'::timestamp with time zone) AND (start_time >= '2021-10-02 02:00:04+00'::timestamp with time zone))
Filter: ((restriction_criteria)::text = ANY ('{123,456}'::text[]))
Rows Removed by Filter: 5934
Planning Time: 4.108 ms
Execution Time: 6791.317 ms
The Filter step in the index scans looks very slow.
I set up a multi-column index hoping the filtering would be done in the Index Cond, but it doesn't work:
CREATE INDEX IF NOT EXISTS pagination_index ON execution USING btree (end_time DESC, start_time, restriction_criteria);
My feeling is that the first index column should be end_time because we want to leverage the btree index sorting capability. The second one should be restriction_criteria so that an Index Cond filters out rows that don't match the restriction_criteria. However, this doesn't work because the query planner also needs to check the start_time clause.
The alternative I imagine is to get rid of the partitioning, because a multi-column (end_time, restriction_criteria) index would work just fine.
Yet, this is not a perfect solution because dealing with our retention policy would become a pain.
Is there another alternative that allows keeping the start_time partitioning?
I set up a multi-column index hoping the filtering would be done in the Index cond
The index machinery is very circumspect about what code it runs inside the index. It won't call any operators that it doesn't 'trust', because if the operator throws an error then the whole query will error out, possibly due to rows that weren't even user 'visible' in the first place (i.e. ones that were already deleted or created but never committed). No one wants that. Now the =ANY construct could be considered trustable, but it is not. That means it won't be applied in the Index Cond, but must be applied against the table row, which in turn means you need to visit the table, which is probably where all your time is going, visiting random table rows.
I don't know what it would take code-wise to make =ANY trusted. I've made efforts to investigate that in the past but really never got anywhere, the code around the ANY is too complicated for me to grasp. That would be a nice improvement for the future, but won't help you now anyway.
One way around this is to get an index-only scan. At that point it will call arbitrary code in the index, as it already knows the tuple is visible. But it won't do that for you, because you are selecting at least one column not in the index (and also not shown in your CREATE command, but obviously present anyway)
If you create an index like your widest one but with "id" added to the end, and only select from among those columns, then you should get much faster index-only scans with merge appends.
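A sketch of that covering index, based on the widest index in the question plus "id" (the index name here is illustrative):

CREATE INDEX IF NOT EXISTS execution_ios_idx
ON execution USING btree (end_time DESC, start_time, restriction_criteria, id);

-- A query selecting only id, end_time, start_time and restriction_criteria
-- can then be satisfied by index-only scans on each partition feeding the
-- Merge Append.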
If you have even more columns than the ones you've shown plus "id", and you really need to select those columns, and don't want to add all of them to the index, then you can use a trick to use an index-only scan anyway by doing a dummy self join:
with t as (SELECT id
FROM execution
WHERE start_time BETWEEN :from AND :to
AND restriction_criteria IN ('123', '456')
ORDER BY end_time DESC, id
FETCH NEXT 20 ROWS ONLY
)
select real.* from execution real join t using (id)
ORDER BY end_time DESC, id
(If "id" is not unique, then you might need to join on an additional column. Also, you would need an index on "id", which you probably already have.)
This one will still need to visit the table to fetch the extra columns, but only for the 20 rows being returned, not for all the ones failing the restriction_criteria.
If the restriction_criteria is very selective, another approach might be better: an index on or leading with that column. It will need to read and sort all of those rows (in the relevant partitions) before applying the LIMIT, but if it is very selective this will not take long.
While you can have the output sorted if the leading column is end_time, you can reduce the amount of data processed if you use start_time as the leading column.
Since your filter on start_time and restriction_criteria is excluding ~7,000 rows in order to retrieve 20, maybe speeding up the filtering is more important than speeding up the sorting.
CREATE INDEX IF NOT EXISTS execution_start_time_restriction_idx
ON execution USING btree (start_time, restriction_criteria);
CREATE INDEX IF NOT EXISTS execution_restriction_start_time_idx
ON execution USING btree (restriction_criteria, start_time);
ANALYZE execution;
If
SELECT count(*)
FROM execution
WHERE start_time BETWEEN :from AND :to
AND restriction_criteria IN ('123', '456')
returns more than the number of rows removed by the filter, then having end_time as the leading column might be a good idea. But the planner should be able to figure that out for you.
In the end, if some of those indexes are not used, you can drop them.
I have the following PostgreSQL table:
CREATE TABLE staff (
id integer primary key,
full_name VARCHAR(100) NOT NULL,
department VARCHAR(100) NULL,
tier bigint
);
I filled the table with random data using the following block:
do $$
begin
  -- random_string(), get_department() and staff_sequence are helper
  -- objects created separately (not shown here).
  FOR counter IN 1 .. 100000 LOOP
    INSERT INTO staff (id, full_name, department, tier)
    VALUES (nextval('staff_sequence'),
            random_string(10),
            get_department(),
            floor(random() * 5 + 1)::bigint);
  END LOOP;
end; $$;
After the data is populated, I created an index on this table on the tier column:
create index staff_tier_idx on staff(tier);
Although I created this index, when I execute a query using this column, I want this index NOT to be used. To accomplish this, I tried to execute this query:
select count(*) from staff where tier=1::numeric;
Due to the mismatch between the indexed column's data type and the type in the query condition, I thought the index would not be used and a sequential scan would be executed instead. However, when I run EXPLAIN ANALYZE on the above query I get the following output:
Aggregate (cost=2349.54..2349.55 rows=1 width=8) (actual time=17.078..17.079 rows=1 loops=1)
-> Index Only Scan using staff_tier_idx on staff (cost=0.29..2348.29 rows=500 width=0) (actual time=0.022..15.925 rows=19942 loops=1)
Filter: ((tier)::numeric = '1'::numeric)
Rows Removed by Filter: 80058
Heap Fetches: 0
Planning Time: 0.305 ms
Execution Time: 17.130 ms
Showing that the index has indeed been used.
How do I change this so that the query uses a sequential scan instead of the index? This is purely for testing/learning purposes.
If it's of any importance, I am running this on an Amazon RDS database instance.
From the "Filter" rows of the plan like
Rows Removed by Filter: 80058
you can see that the index is not being used as a real index, but just as a skinny table, testing the casted condition for each row. This appears favorable because the index is less than 1/4 the size of the table, while the default ratio of random_page_cost/seq_page_cost = 4.
In addition to just outright disabling index scans as Adrian already suggested, you could also discourage this "skinny table" usage by just increasing random_page_cost, since pages of indexes are assumed to be read in random order.
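A sketch of both knobs, for a test session only (these are standard planner settings, shown with arbitrary values; remember to RESET them afterwards):

-- Option 1: tell the planner not to consider index scans at all.
SET enable_indexscan = off;
SET enable_indexonlyscan = off;

-- Option 2: keep index scans allowed, but make index pages look expensive.
SET random_page_cost = 100;

select count(*) from staff where tier=1::numeric;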
Another method would be to change the query so it can't use the index-only scan. For example, just using count(full_name) would do that, as PostgreSQL then needs to visit the table to make sure full_name is not NULL (even though it has a constraint asserting that already--sometimes it is not very clever)
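That variant, for reference, is just the same query counting a column that is not in staff_tier_idx:

-- full_name is not in the index, so an index-only scan is impossible; with
-- the numeric cast there is no usable Index Cond either, so a sequential
-- scan becomes the cheapest option.
select count(full_name) from staff where tier=1::numeric;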
Which method is better depends on what it is you are wanting to test/learn.
PostgreSQL version 10
Windows 10
16GB RAM
SSD
I'm ashamed to admit that, despite searching the hundred years of PG support archives, I cannot figure out this most basic problem. But here it is...
I have big_table with 45 million rows and little_table with 12,000 rows. I need to do a left join to include all big_table rows, along with the ids of little_table rows where big_table's timestamp falls between the two timestamps in little_table.
This doesn't seem like it should be an extreme operation for PG, but it is taking 2 1/2 hours!
Any ideas on what I can do here? Or do you think I have unwittingly come up against the limitations of my software/hardware combo given the table size?
Thanks!
little_table with 12,000 rows
CREATE TABLE public.little_table
(
id bigint,
start_time timestamp without time zone,
stop_time timestamp without time zone
);
CREATE INDEX idx_little_table
ON public.little_table USING btree
(start_time, stop_time DESC);
big_table with 45 million rows
CREATE TABLE public.big_table
(
id bigint,
datetime timestamp without time zone
) ;
CREATE INDEX idx_big_table
ON public.big_table USING btree
(datetime);
Query
explain analyze
select
bt.id as bt_id,
lt.id as lt_id
from
big_table bt
left join
little_table lt
on
(bt.datetime between lt.start_time and lt.stop_time)
Explain Results
Nested Loop Left Join (cost=0.29..3260589190.64 rows=64945831346 width=16) (actual time=0.672..9163998.367 rows=1374445323 loops=1)
-> Seq Scan on big_table bt (cost=0.00..694755.92 rows=45097792 width=16) (actual time=0.014..10085.746 rows=45098790 loops=1)
-> Index Scan using idx_little_table on little_table lt (cost=0.29..57.89 rows=1440 width=24) (actual time=0.188..0.199 rows=30 loops=45098790)
Index Cond: ((bt.datetime >= start_time) AND (bt.datetime <= stop_time))
Planning time: 0.165 ms
Execution time: 9199473.052 ms
NOTE: My actual query criteria is a bit more complex, but this seems to be the root of the problem. If I can fix this part, I think I can fix the rest.
This query cannot perform any faster.
Since there is no equality operator (=) in your join condition, the only strategy left to PostgreSQL is a nested loop join. 45 million repetitions of an index scan on the small table just take a while.
I would suggest trying to change the start_time and stop_time columns in the
little table to a single tsrange column. According to the docs, this datatype supports a GiST index which can speed up the "range contains element" operator @>. Maybe this will do better than the index scan on your current btree.
Generating 1.3 billion rows seems pretty extreme to me. How often do you need to do this, and how fast do you need it to be?
To explain a bit about your current plan:
Index Cond: ((bt.datetime >= start_time) AND (bt.datetime <= stop_time))
While it is not obvious from what is displayed above, this always scans about half the index. It starts at the beginning of the index and stops once start_time > bt.datetime, using bt.datetime <= stop_time as an in-index filter that must examine each row before rejecting it.
To flesh out Bergi's answer, you could do this:
alter table little_table add range tsrange;
update little_table set range =tsrange(start_time,stop_time,'[]');
create index on little_table using gist(range);
select
bt.id as bt_id,
lt.id as lt_id
from
big_table bt
left join
little_table lt
on
(bt.datetime <@ lt.range)
In my hands, that is about 4 times faster than your current method.
If your join did not need to be a left join, then you could get some more efficient operations by joining the tables in the opposite order. Perhaps you could get better performance by separating this into 2 operations, an inner join and then a probe for missing values, and combining the results.
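A hedged sketch of that two-step rewrite, assuming the schema above; it produces the same rows as the LEFT JOIN by gluing the inner join onto an anti-join probe with UNION ALL:

select bt.id as bt_id, lt.id as lt_id
from big_table bt
join little_table lt
  on bt.datetime between lt.start_time and lt.stop_time
union all
select bt.id as bt_id, null::bigint as lt_id
from big_table bt
where not exists (
    select 1
    from little_table lt
    where bt.datetime between lt.start_time and lt.stop_time
);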
My queries get very slow when I add a limit 1.
I have a table object_values with timestamped values for objects:
timestamp | objectID | value
--------------------------------
2014-01-27| 234 | ksghdf
Per object I want to get the latest value:
SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC LIMIT 1;
(I cancelled the query after more than 10 minutes)
This query is very slow when there are no values for a given objectID (it is fast if there are results).
If I remove the limit it tells me nearly instantaneously that there are no results:
SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC;
...
Time: 0.463 ms
An explain shows me that the query without the limit uses the index, whereas the query with limit 1 does not make use of the index:
Slow query:
explain SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC limit 1;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..2350.44 rows=1 width=126)
-> Index Scan Backward using object_values_timestamp on object_values (cost=0.00..3995743.59 rows=1700 width=126)
Filter: (objectID = 53708)
Fast query:
explain SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------
Sort (cost=6540.86..6545.11 rows=1700 width=126)
Sort Key: timestamp
-> Index Scan using object_values_objectID on object_values (cost=0.00..6449.65 rows=1700 width=126)
Index Cond: (objectID = 53708)
The table contains 44,884,559 rows and 66,762 distinct objectIDs.
I have separate indexes on both fields: timestamp and objectID.
I have done a vacuum analyze on the table and I have reindexed the table.
Additionally the slow query becomes fast when I set the limit to 3 or higher:
explain SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC limit 3;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Limit (cost=6471.62..6471.63 rows=3 width=126)
-> Sort (cost=6471.62..6475.87 rows=1700 width=126)
Sort Key: timestamp
-> Index Scan using object_values_objectID on object_values (cost=0.00..6449.65 rows=1700 width=126)
Index Cond: (objectID = 53708)
In general I assume it has to do with the planner making wrong assumptions about the execution costs and therefore choosing a slower execution plan.
Is this the real reason? Is there a solution for this?
You can avoid this issue by adding an unneeded extra column to the ORDER BY clause, which the single-column timestamp index cannot satisfy:
SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp DESC, objectID DESC limit 1;
You're running into an issue which relates, I think, to the lack of statistics on row correlations. Consider reporting it to pg-bugs for reference if this is happening on the latest version of Postgres.
The interpretation I'd suggest for your plans is:
limit 1 makes Postgres look for a single row, and in doing so it assumes that your object_id is common enough that it'll show up reasonably quickly in an index scan.
Based on the stats you gave, it probably thinks it'll need to read ~70 rows on average to find one row that fits; it just doesn't realize that object_id and timestamp correlate to the point where it's actually going to read a large portion of the table.
limit 3, in contrast, makes it realize that the value is uncommon enough that it seriously considers (and ends up) top-N sorting an expected 1700 rows with the object_id you want, on the grounds that doing so is likely cheaper.
For instance, it might know that the distribution of these rows is such that they're all packed in the same area on the disk.
no limit clause means it'll fetch the 1700 rows anyway, so it goes straight for the index on object_id.
Solution, btw: add an index on (object_id, timestamp) or (object_id, timestamp desc).
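In the column naming of this question that would be something like (the index name is mine):

create index object_values_objectid_timestamp_idx on object_values (objectID, timestamp DESC);

-- The limit 1 query can then use Index Cond: (objectID = 53708) and read the
-- rows already ordered by timestamp DESC, stopping after the first one.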
I started having similar symptoms on an update-heavy table, and what was needed in my case was
analyze $table_name;
In this case the statistics needed to be refreshed, which then fixed the slow query plans that were occurring.
Supporting docs: https://www.postgresql.org/docs/current/sql-analyze.html
Not a fix, but sure enough, switching from limit 1 to limit 50 (for me) and returning the first result row is way faster... Postgres 9.x in this instance. Just thought I'd mention it as a workaround similar to what the OP observed with limit 3.