I have this PostgreSQL 9.4 query that runs very fast (~12ms):
SELECT
auth_web_events.id,
auth_web_events.time_stamp,
auth_web_events.description,
auth_web_events.origin,
auth_user.email,
customers.name,
auth_web_events.client_ip
FROM
public.auth_web_events,
public.auth_user,
public.customers
WHERE
auth_web_events.user_id_fk = auth_user.id AND
auth_user.customer_id_fk = customers.id AND
auth_web_events.user_id_fk = 2
ORDER BY
auth_web_events.id DESC;
But if I embed it in a function, the query runs very slowly; it seems to scan every record. What am I missing? I have ~1M rows, and I want to simplify my database layer by storing the large queries in functions and views.
CREATE OR REPLACE FUNCTION get_web_events_by_userid(int) RETURNS TABLE(
id int,
time_stamp timestamp with time zone,
description text,
origin text,
userlogin text,
customer text,
client_ip inet
) AS
$func$
SELECT
auth_web_events.id,
auth_web_events.time_stamp,
auth_web_events.description,
auth_web_events.origin,
auth_user.email AS user,
customers.name AS customer,
auth_web_events.client_ip
FROM
public.auth_web_events,
public.auth_user,
public.customers
WHERE
auth_web_events.user_id_fk = auth_user.id AND
auth_user.customer_id_fk = customers.id AND
auth_web_events.user_id_fk = $1
ORDER BY
auth_web_events.id DESC;
$func$ LANGUAGE SQL;
The query plan is:
"Sort (cost=20.94..20.94 rows=1 width=791) (actual time=61.905..61.906 rows=2 loops=1)"
" Sort Key: auth_web_events.id"
" Sort Method: quicksort Memory: 25kB"
" -> Nested Loop (cost=0.85..20.93 rows=1 width=791) (actual time=61.884..61.893 rows=2 loops=1)"
" -> Nested Loop (cost=0.71..12.75 rows=1 width=577) (actual time=61.874..61.879 rows=2 loops=1)"
" -> Index Scan using auth_web_events_fk1 on auth_web_events (cost=0.57..4.58 rows=1 width=61) (actual time=61.860..61.860 rows=2 loops=1)"
" Index Cond: (user_id_fk = 2)"
" -> Index Scan using auth_user_pkey on auth_user (cost=0.14..8.16 rows=1 width=524) (actual time=0.005..0.005 rows=1 loops=2)"
" Index Cond: (id = 2)"
" -> Index Scan using customers_id_idx on customers (cost=0.14..8.16 rows=1 width=222) (actual time=0.004..0.005 rows=1 loops=2)"
" Index Cond: (id = auth_user.customer_id_fk)"
"Planning time: 0.369 ms"
"Execution time: 61.965 ms"
I'm calling the function this way:
SELECT * from get_web_events_by_userid(2)
The query plan for the function:
"Function Scan on get_web_events_by_userid (cost=0.25..10.25 rows=1000 width=172) (actual time=279107.142..279107.144 rows=2 loops=1)"
"Planning time: 0.038 ms"
"Execution time: 279107.175 ms"
EDIT: I just changed the parameters, and the issue persists.
EDIT 2: Query plan for Erwin's answer:
"Sort (cost=20.94..20.94 rows=1 width=791) (actual time=0.048..0.049 rows=2 loops=1)"
" Sort Key: w.id"
" Sort Method: quicksort Memory: 25kB"
" -> Nested Loop (cost=0.85..20.93 rows=1 width=791) (actual time=0.030..0.037 rows=2 loops=1)"
" -> Nested Loop (cost=0.71..12.75 rows=1 width=577) (actual time=0.023..0.025 rows=2 loops=1)"
" -> Index Scan using auth_user_pkey on auth_user u (cost=0.14..8.16 rows=1 width=524) (actual time=0.011..0.012 rows=1 loops=1)"
" Index Cond: (id = 2)"
" -> Index Scan using auth_web_events_fk1 on auth_web_events w (cost=0.57..4.58 rows=1 width=61) (actual time=0.008..0.008 rows=2 loops=1)"
" Index Cond: (user_id_fk = 2)"
" -> Index Scan using customers_id_idx on customers c (cost=0.14..8.16 rows=1 width=222) (actual time=0.003..0.004 rows=1 loops=2)"
" Index Cond: (id = u.customer_id_fk)"
"Planning time: 0.541 ms"
"Execution time: 0.101 ms"
While rewriting your function I realized that you added column aliases here:
SELECT
...
auth_user.email AS user,
customers.name AS customer,
... which wouldn't do anything to begin with, since those aliases are invisible outside the function and are not referenced inside it, so they would be ignored. For documentation purposes, better to use a comment.
But it also makes your query invalid, because user is a completely reserved word and cannot be used as a column alias unless double-quoted.
Oddly, in my tests the function seems to work with the invalid alias. Probably because it is ignored (?). But I am not sure this couldn't have side effects.
Your function rewritten (otherwise equivalent):
CREATE OR REPLACE FUNCTION get_web_events_by_userid(int)
RETURNS TABLE (
id int
, time_stamp timestamptz
, description text
, origin text
, userlogin text
, customer text
, client_ip inet
)
LANGUAGE sql STABLE AS
$func$
SELECT w.id
, w.time_stamp
, w.description
, w.origin
, u.email -- AS user -- make this a comment!
, c.name -- AS customer
, w.client_ip
FROM public.auth_user u
JOIN public.auth_web_events w ON w.user_id_fk = u.id
JOIN public.customers c ON c.id = u.customer_id_fk
WHERE u.id = $1 -- reverted the logic here
ORDER BY w.id DESC
$func$;
Obviously, the STABLE keyword changed the outcome. Function volatility should not be an issue in the test situation you describe: the setting does not normally benefit a single, isolated function call. Read the details in the manual. Also, standard EXPLAIN does not show query plans for what's going on inside functions. You could employ the additional module auto_explain for that:
Postgres query plan of a UDF invocation written in plpgsql
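A minimal auto_explain setup that also captures plans of statements nested inside functions might look like this (session-level sketch; parameter names are those of the auto_explain module):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log the plan of every statement
SET auto_explain.log_analyze = on;            -- include actual run times, not just estimates
SET auto_explain.log_nested_statements = on;  -- also log statements executed inside functions

SELECT * FROM get_web_events_by_userid(2);    -- the inner plan now shows up in the server log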
You have a very odd data distribution:
auth_web_events table has 100,000,000 records, auth_user -> 2 records, customers -> 1 record
Since you didn't define otherwise, the function assumes an estimate of 1000 rows to be returned. But your function is actually returning only 2 rows. If all your calls only return (in the vicinity of) 2 rows, just declare that with an added ROWS 2. Might change the query plan for the VOLATILE variant as well (even if STABLE is the right choice anyway here).
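For example, the row estimate can be declared together with the volatility (a sketch of the same function with ROWS added):
CREATE OR REPLACE FUNCTION get_web_events_by_userid(int)
RETURNS TABLE (
id int
, time_stamp timestamptz
, description text
, origin text
, userlogin text
, customer text
, client_ip inet
)
LANGUAGE sql STABLE
ROWS 2 AS  -- planner now assumes 2 result rows instead of the default 1000
$func$
SELECT w.id, w.time_stamp, w.description, w.origin
     , u.email, c.name, w.client_ip
FROM   public.auth_user u
JOIN   public.auth_web_events w ON w.user_id_fk = u.id
JOIN   public.customers c ON c.id = u.customer_id_fk
WHERE  u.id = $1
ORDER  BY w.id DESC
$func$;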
You will get better performance by making this query dynamic and using plpgsql.
CREATE OR REPLACE FUNCTION get_web_events_by_userid(uid int) RETURNS TABLE(
id int,
time_stamp timestamp with time zone,
description text,
origin text,
userlogin text,
customer text,
client_ip inet
) AS $$
BEGIN
RETURN QUERY EXECUTE
'SELECT
auth_web_events.id,
auth_web_events.time_stamp,
auth_web_events.description,
auth_web_events.origin,
auth_user.email,
customers.name,
auth_web_events.client_ip
FROM
public.auth_web_events,
public.auth_user,
public.customers
WHERE
auth_web_events.user_id_fk = auth_user.id AND
auth_user.customer_id_fk = customers.id AND
auth_web_events.user_id_fk = ' || uid || '
ORDER BY
auth_web_events.id DESC';
END;
$$ LANGUAGE plpgsql;
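If you do go the dynamic-SQL route, it is cleaner to pass the value with USING instead of concatenating it into the query string. A sketch, assuming the same tables as above:
CREATE OR REPLACE FUNCTION get_web_events_by_userid(uid int) RETURNS TABLE(
id int,
time_stamp timestamp with time zone,
description text,
origin text,
userlogin text,
customer text,
client_ip inet
) AS $$
BEGIN
RETURN QUERY EXECUTE
'SELECT w.id, w.time_stamp, w.description, w.origin, u.email, c.name, w.client_ip
 FROM public.auth_web_events w
 JOIN public.auth_user u ON w.user_id_fk = u.id
 JOIN public.customers c ON u.customer_id_fk = c.id
 WHERE w.user_id_fk = $1
 ORDER BY w.id DESC'
USING uid;  -- the value is bound as a parameter rather than spliced into the string
END;
$$ LANGUAGE plpgsql;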
Related
I have a stored procedure on PostgreSQL like this:
create or replace procedure new_emp_sp (f_name varchar, l_name varchar, age integer, threshold integer, dept varchar)
language plpgsql
as $$
declare
new_emp_count integer;
begin
INSERT INTO employees (id, first_name, last_name, age)
VALUES (nextval('emp_id_seq'),
random_string(10),
random_string(20),
age);
select count(*) into new_emp_count from employees where age > threshold;
update dept_employees set emp_count = new_emp_count where id = dept;
end; $$
I have enabled auto_explain module and set log_min_duration to 0 so that it logs everything.
I have an issue with the update statement in the procedure. From the auto_explain logs I see that it is not using the primary key index to update the table:
-> Seq Scan on dept_employees (cost=0.00..1.05 rows=1 width=14) (actual time=0.005..0.006 rows=1 loops=1)
Filter: ((id)::text = 'ABC'::text)
Rows Removed by Filter: 3
This worked as expected until a couple of hours ago and I used to get a log like this:
-> Index Scan using dept_employees_pkey on dept_employees (cost=0.15..8.17 rows=1 width=48) (actual time=0.010..0.011 rows=1 loops=1)
Index Cond: ((id)::text = 'ABC'::text)
Without the procedure, if I run the statement standalone like this:
explain analyze update dept_employees set emp_count = 123 where id = 'ABC';
The statement correctly uses the primary key index:
Update on dept_employees (cost=0.15..8.17 rows=1 width=128) (actual time=0.049..0.049 rows=0 loops=1)
-> Index Scan using dept_employees_pkey on dept_employees (cost=0.15..8.17 rows=1 width=128) (actual time=0.035..0.036 rows=1 loops=1)
Index Cond: ((id)::text = 'ABC'::text)
I can't figure out what has gone wrong especially because it worked perfectly just a couple of hours ago.
It is faster to scan N rows sequentially than to scan N rows using an index, so for small tables Postgres may decide that a sequential scan is faster than an index scan.
PL/pgSQL can cache prepared statements and execution plans, so you're probably getting a cached execution plan from when the table was smaller.
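A few ways to get a fresh plan in that situation (sketch; the last setting exists in PostgreSQL 12 and later):
ANALYZE dept_employees;                    -- refresh statistics so the planner sees the current table size
DISCARD PLANS;                             -- throw away cached plans in the current session
SET plan_cache_mode = force_custom_plan;   -- PostgreSQL 12+: re-plan every execution instead of reusing a generic plan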
I have a complex query which has been greatly simplified below, running on "PostgreSQL 11.9 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit", running on an AWS Aurora Serverless 2xlarge server (8 cores, 64GB RAM).
I have the following...
mv_journey, a materialized view with ~550M rows which contains information about journeys which have an origin and destination, and some measures about those (how long the journey took, etc), defined with columns from_id and from_region which identify the origins, and to_id and to_region for the destinations.
place_from and place_to, which are calculated by a function, fn_location_get, in an initial step of a CTE, and contain id and region (which map to from_id/from_region and to_id/to_region, respectively). These also contain rollup levels derived from the region, e.g. country and continent. Typically these return between ~100 and 20,000 rows.
Later in that CTE, I use place_from and place_to to filter the 550M mv_journey rows and GROUP BY to create a rollup report based on journeys, e.g. from country to country.
The simplified query is something like this.
WITH place_from AS (
select *
from fn_location_get(...)
), place_to AS (
select *
from fn_location_get(...)
)
select [many dimension columns...]
, [a few aggregated measure columns]
from mv_journey j
inner join place_from o on j.from_id = o.id
and j.from_region = o.region
inner join place_to d on j.to_id = d.id
and j.to_region = d.region
where service_type_id = ?
group by [many dimension columns...]
I have indexes on mv_journey
CREATE INDEX idx_mv_journey_from ON mv_journey (from_id, from_region);
CREATE INDEX idx_mv_journey_to ON mv_journey (to_id, to_region);
When I run the query (using SET LOCAL work_mem = '2048MB' to invoke quicksorts) with a small number of rows in the place_from (92) and a large number in place_to (~18,000), the query runs in about 25 seconds with the following query plan (which includes the steps in the CTE to generate place_from and place_to).
"GroupAggregate (cost=530108.64..530129.64 rows=30 width=686) (actual time=13097.187..25408.707 rows=92 loops=1)"
" Group Key: [many dimension columns...]"
" CTE place_from"
" -> Function Scan on fn_location_get (cost=0.25..10.25 rows=1000 width=396) (actual time=34.275..34.331 rows=92 loops=1)"
" CTE place_to"
" -> Function Scan on fn_location_get (cost=0.25..10.25 rows=1000 width=396) (actual time=96.287..97.428 rows=18085 loops=1)"
" -> Sort (cost=530088.14..530088.22 rows=30 width=622) (actual time=12935.329..13295.468 rows=1871349 loops=1)"
" Sort Key: [many dimension columns...]"
" Sort Method: quicksort Memory: 826782kB"
" -> Merge Join (cost=529643.68..530087.41 rows=30 width=622) (actual time=4708.780..6021.449 rows=1871349 loops=1)"
" Merge Cond: ((j.to_id = d.id) AND (j.to_region = d.region))"
" -> Sort (cost=529573.85..529719.16 rows=58124 width=340) (actual time=4583.265..4788.625 rows=1878801 loops=1)"
" Sort Key: j.to_id, j.to_region"
" Sort Method: quicksort Memory: 623260kB"
" -> Nested Loop (cost=0.57..524974.25 rows=58124 width=340) (actual time=34.324..3079.815 rows=1878801 loops=1)"
" -> CTE Scan on place_from o (cost=0.00..20.00 rows=1000 width=320) (actual time=34.277..34.432 rows=92 loops=1)"
" -> Index Scan using idx_mv_journey_from on mv_journey j (cost=0.57..524.37 rows=58 width=60) (actual time=0.018..30.022 rows=20422 loops=92)"
" Index Cond: ((from_id = o.id) AND (from_region = o.region))"
" Filter: (service_type_id = 'ALL'::text)"
" Rows Removed by Filter: 81687"
" -> Sort (cost=69.83..72.33 rows=1000 width=320) (actual time=125.505..223.780 rows=1871350 loops=1)"
" Sort Key: d.id, d.region"
" Sort Method: quicksort Memory: 3329kB"
" -> CTE Scan on place_to d (cost=0.00..20.00 rows=1000 width=320) (actual time=96.292..103.677 rows=18085 loops=1)"
"Planning Time: 0.546 ms"
"Execution Time: 25501.827 ms"
The problem is that when I swap the locations in the from/to, i.e. a large number of rows in place_from (~18,000) and a small number in place_to (92), the query takes forever. By the way, mv_journey is expected to match roughly the same number of rows in both cases - there are not more records expected in one direction than the other.
I have not once got this second query to complete without it running for hours and pgAdmin 4 losing the connection to the server, so I cannot even get an EXPLAIN ANALYZE for it. However, I have the EXPLAIN:
"GroupAggregate (cost=474135.40..474152.90 rows=25 width=686)"
" Group Key: [many dimension columns...]"
" CTE place_from"
" -> Function Scan on fn_location_get (cost=0.25..10.25 rows=1000 width=396)"
" CTE place_to"
" -> Function Scan on fn_location_get (cost=0.25..10.25 rows=1000 width=396)"
" -> Sort (cost=474114.90..474114.96 rows=25 width=622)"
" Sort Key: [many dimension columns...]"
" -> Merge Join (cost=473720.23..474114.31 rows=25 width=622)"
" Merge Cond: ((j.to_id = d.id) AND (j.to_region = d.region))## Heading ##"
" -> Sort (cost=473650.40..473779.18 rows=51511 width=340)"
" Sort Key: j.to_id, j.to_region"
" -> Nested Loop (cost=0.57..469619.00 rows=51511 width=340)"
" -> CTE Scan on place_from o (cost=0.00..20.00 rows=1000 width=320)"
" -> Index Scan using idx_mv_journey_from on mv_journey j (cost=0.57..469.08 rows=52 width=60)"
" Index Cond: ((from_id = o.id) AND (from_region = o.region))"
" Filter: (service_type_id = 'ALL'::text)"
" -> Sort (cost=69.83..72.33 rows=1000 width=320)"
" Sort Key: d.id, d.region"
" -> CTE Scan on place_to d (cost=0.00..20.00 rows=1000 width=320)"
My assumption was that if I had the equivalent indexes on both sides of the from/to, then Postgres would use the mirror-opposite query plan, doing a merge join for the origin and a nested loop join using idx_mv_journey_to for the destination.
But it looks like the query planner's row count estimates are way off in both queries. It seems only luck that the first query performs so well despite that.
I have tried the following, none of which worked:
swap the inner join statements so the destination join is first
ALTER TABLE mv_journey ALTER COLUMN to_id SET STATISTICS 1000; ANALYZE mv_journey
ALTER TABLE mv_journey ALTER COLUMN from_id SET STATISTICS 1000; ANALYZE mv_journey
I guess the plan is done before the start of CTE execution? And that's why it has no idea what will come out of the fn_location_get calls that create the place_from and place_to sets?
fn_location_get is a complicated function with its own recursive CTE and I don't want to bring its logic out of the function and into this CTE.
What's the best way out of this mess?
The most straightforward approach would be to create two temp tables as the results of the function calls, manually ANALYZE them, then run the query against the temp tables rather than the function calls.
I sort of worked out an answer in writing the question... don't use a CTE, but use temp tables instead.
DROP TABLE IF EXISTS place_from;
CREATE TEMP TABLE place_from AS
select *
from fn_location_get(...);
DROP TABLE IF EXISTS place_to;
CREATE TEMP TABLE place_to AS
select *
from fn_location_get(...);
select [many dimension columns...]
, [a few aggregated measure columns]
from mv_journey j
inner join place_from o on j.from_id = o.id
and j.from_region = o.region
inner join place_to d on j.to_id = d.id
and j.to_region = d.region
where service_type_id = ?
group by [many dimension columns...]
I thought this worked because by the time the query plan for the reporting select is done, the temp tables' row counts are known, and a better query plan can be made.
However, the row estimates are still inaccurate - good enough to choose the right plan, but inaccurate.
"GroupAggregate (cost=200682.98..200706.78 rows=34 width=686) (actual time=21233.486..33200.052 rows=92 loops=1)"
" Group Key: [many dimension columns...]"
" -> Sort (cost=200682.98..200683.07 rows=34 width=622) (actual time=21077.807..21443.739 rows=1802571 loops=1)"
" Sort Key: [many dimension columns...]"
" Sort Method: quicksort Memory: 800480kB"
" -> Merge Join (cost=200555.00..200682.12 rows=34 width=622) (actual time=4820.798..6106.722 rows=1802571 loops=1)"
" Merge Cond: ((from_id = o.id) AND (from_region = o.region))"
" -> Sort (cost=199652.79..199677.24 rows=9779 width=340) (actual time=4794.354..5003.954 rows=1810023 loops=1)"
" Sort Key: j.from_id, j.from_region"
" Sort Method: quicksort Memory: 603741kB"
" -> Nested Loop (cost=0.57..199004.67 rows=9779 width=340) (actual time=0.044..3498.767 rows=1810023 loops=1)"
" -> Seq Scan on place_to d (cost=0.00..11.90 rows=190 width=320) (actual time=0.006..0.078 rows=92 loops=1)"
" -> Index Scan using idx_mv_journey_to on mv_journey j (cost=0.57..1046.82 rows=51 width=60) (actual time=0.020..35.055 rows=19674 loops=92)"
" Index Cond: ((j.to_id = d.id) AND (j.to_region = d.region))"
" Filter: (service_type_id = 'ALL'::text)"
" Rows Removed by Filter: 78697"
" -> Sort (cost=902.20..920.02 rows=7125 width=320) (actual time=26.434..121.106 rows=1802572 loops=1)"
" Sort Key: o.id, o.region"
" Sort Method: quicksort Memory: 3329kB"
" -> Seq Scan on place_from o (cost=0.00..446.25 rows=7125 width=320) (actual time=0.016..4.205 rows=18085 loops=1)"
"Planning Time: 0.792 ms"
"Execution Time: 33286.461 ms"
UPDATE: When I add a manual ANALYZE after each CREATE, as jjanes suggests, the estimates are now as expected.
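A sketch of that change (function arguments elided as above):
DROP TABLE IF EXISTS place_from;
CREATE TEMP TABLE place_from AS
select *
from fn_location_get(...);
ANALYZE place_from;  -- collect statistics so the planner knows the real row count

DROP TABLE IF EXISTS place_to;
CREATE TEMP TABLE place_to AS
select *
from fn_location_get(...);
ANALYZE place_to;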
In some cases, PostgreSQL does not filter out window function partitions until they are calculated, while in a very similar scenario PostgreSQL filters rows before performing the window function calculation.
Tables used for a minimal reproduction: log is the main data table; each row contains either an incremental or an absolute value. An absolute value resets the running counter to a new base value. The window function needs to process all log rows for a given account_id to calculate the correct running total. The view uses a subquery to ensure that the underlying log rows are not filtered by ts, which would otherwise break the window function.
CREATE TABLE account(
id serial,
name VARCHAR(100)
);
CREATE TABLE log(
id serial,
absolute int,
incremental int,
account_id int,
ts timestamp,
PRIMARY KEY(id),
CONSTRAINT fk_account
FOREIGN KEY(account_id)
REFERENCES account(id)
);
CREATE FUNCTION get_running_total_func(
aggregated_total int,
absolute int,
incremental int
) RETURNS int
LANGUAGE sql IMMUTABLE CALLED ON NULL INPUT AS
$$
SELECT
CASE
WHEN absolute IS NOT NULL THEN absolute
ELSE COALESCE(aggregated_total, 0) + incremental
END
$$;
CREATE AGGREGATE get_running_total(integer, integer) (
sfunc = get_running_total_func,
stype = integer
);
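For illustration, the state function behaves like this on a few hand-picked inputs (values are arbitrary):
SELECT get_running_total_func(10, NULL, 5);   -- 15: no absolute value, so the increment is added to the running total
SELECT get_running_total_func(10, 100, 5);    -- 100: an absolute value resets the running total
SELECT get_running_total_func(NULL, NULL, 5); -- 5: COALESCE starts the total from 0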
Slow view:
CREATE VIEW test_view
(
log_id,
running_value,
account_id,
ts
)
AS
SELECT log_running.* FROM
(SELECT
log.id,
get_running_total(
log.absolute,
log.incremental
)
OVER(
PARTITION BY log.account_id
ORDER BY log.ts RANGE UNBOUNDED PRECEDING
),
account.id,
ts
FROM log log JOIN account account ON log.account_id=account.id
) AS log_running;
CREATE VIEW
postgres=# EXPLAIN ANALYZE SELECT * FROM test_view WHERE account_id=1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on log_running (cost=12734.02..15981.48 rows=1 width=20) (actual time=7510.851..16122.404 rows=20 loops=1)
Filter: (log_running.id_1 = 1)
Rows Removed by Filter: 99902
-> WindowAgg (cost=12734.02..14732.46 rows=99922 width=32) (actual time=7510.830..14438.783 rows=99922 loops=1)
-> Sort (cost=12734.02..12983.82 rows=99922 width=28) (actual time=7510.628..9312.399 rows=99922 loops=1)
Sort Key: log.account_id, log.ts
Sort Method: external merge Disk: 3328kB
-> Hash Join (cost=143.50..2042.24 rows=99922 width=28) (actual time=169.941..5431.650 rows=99922 loops=1)
Hash Cond: (log.account_id = account.id)
-> Seq Scan on log (cost=0.00..1636.22 rows=99922 width=24) (actual time=0.063..1697.802 rows=99922 loops=1)
-> Hash (cost=81.00..81.00 rows=5000 width=4) (actual time=169.837..169.865 rows=5000 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 240kB
-> Seq Scan on account (cost=0.00..81.00 rows=5000 width=4) (actual time=0.017..84.639 rows=5000 loops=1)
Planning Time: 0.199 ms
Execution Time: 16127.275 ms
(15 rows)
Fast view - only change is account.id -> log.account_id (!):
CREATE VIEW test_view
(
log_id,
running_value,
account_id,
ts
)
AS
SELECT log_running.* FROM
(SELECT
log.id,
get_running_total(
log.absolute,
log.incremental
)
OVER(
PARTITION BY log.account_id
ORDER BY log.ts RANGE UNBOUNDED PRECEDING
),
log.account_id,
ts
FROM log log JOIN account account ON log.account_id=account.id
) AS log_running;
CREATE VIEW
postgres=# EXPLAIN ANALYZE SELECT * FROM test_view WHERE account_id=1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on log_running (cost=1894.96..1895.56 rows=20 width=20) (actual time=34.718..45.958 rows=20 loops=1)
-> WindowAgg (cost=1894.96..1895.36 rows=20 width=28) (actual time=34.691..45.307 rows=20 loops=1)
-> Sort (cost=1894.96..1895.01 rows=20 width=24) (actual time=34.367..35.925 rows=20 loops=1)
Sort Key: log.ts
Sort Method: quicksort Memory: 26kB
-> Nested Loop (cost=0.28..1894.53 rows=20 width=24) (actual time=0.542..34.066 rows=20 loops=1)
-> Index Only Scan using account_pkey on account (cost=0.28..8.30 rows=1 width=4) (actual time=0.025..0.054 rows=1 loops=1)
Index Cond: (id = 1)
Heap Fetches: 1
-> Seq Scan on log (cost=0.00..1886.03 rows=20 width=24) (actual time=0.195..32.937 rows=20 loops=1)
Filter: (account_id = 1)
Rows Removed by Filter: 99902
Planning Time: 0.297 ms
Execution Time: 47.300 ms
(14 rows)
Is this a bug in the PostgreSQL implementation? It seems that this change in the view definition shouldn't affect performance at all; PostgreSQL should be able to filter the data before applying the window function to the whole data set.
I have a problem with a query that uses a bad query plan. Because of the non-optimal plan, the query takes almost 20 seconds.
The problem occurs only for a small number of owner_ids. The distribution of the owner_ids is not uniform. The owner_id in the example has 7948 routes. The total number of routes is 2903096.
The database is hosted on Amazon RDS on a server with 34.2 GiB memory, 4vCPU and provisioned IOPS (instance type db.m2.2xlarge). The Postgres version is 9.3.5.
EXPLAIN ANALYZE SELECT
route.id, route_meta.name
FROM
route
INNER JOIN
route_meta
USING (id)
WHERE
route.owner_id = 128905
ORDER BY
route_meta.name
LIMIT
61
Query plan:
"Limit (cost=0.86..58637.88 rows=61 width=24) (actual time=49.731..18828.052 rows=61 loops=1)"
" -> Nested Loop (cost=0.86..7934263.10 rows=8254 width=24) (actual time=49.728..18827.887 rows=61 loops=1)"
" -> Index Scan using route_meta_i_name on route_meta (cost=0.43..289911.22 rows=2902910 width=24) (actual time=0.016..2825.932 rows=1411126 loops=1)"
" -> Index Scan using route_pkey on route (cost=0.43..2.62 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=1411126)"
" Index Cond: (id = route_meta.id)"
" Filter: (owner_id = 128905)"
" Rows Removed by Filter: 1"
"Total runtime: 18828.214 ms"
If I increase the limit to 100, a better query plan is used. It now takes less than 100 ms.
EXPLAIN ANALYZE SELECT
route.id, route_meta.name
FROM
route
INNER JOIN
route_meta
USING (id)
WHERE
route.owner_id = 128905
ORDER BY
route_meta.name
LIMIT
100
Query plan:
"Limit (cost=79964.98..79965.23 rows=100 width=24) (actual time=93.037..93.294 rows=100 loops=1)"
" -> Sort (cost=79964.98..79985.61 rows=8254 width=24) (actual time=93.033..93.120 rows=100 loops=1)"
" Sort Key: route_meta.name"
" Sort Method: top-N heapsort Memory: 31kB"
" -> Nested Loop (cost=0.86..79649.52 rows=8254 width=24) (actual time=0.039..77.955 rows=7948 loops=1)"
" -> Index Scan using route_i_owner_id on route (cost=0.43..22765.84 rows=8408 width=4) (actual time=0.023..13.839 rows=7948 loops=1)"
" Index Cond: (owner_id = 128905)"
" -> Index Scan using route_meta_pkey on route_meta (cost=0.43..6.76 rows=1 width=24) (actual time=0.003..0.004 rows=1 loops=7948)"
" Index Cond: (id = route.id)"
"Total runtime: 93.444 ms"
I have already tried the following things:
increasing the statistics target for owner_id (the owner_id in the example is included in pg_stats)
ALTER TABLE route ALTER COLUMN owner_id SET STATISTICS 1000;
reindex owner_id and name
vacuum analyse
increased work_mem from 1MB to 16MB
rewriting the query to use row_number() OVER (ORDER BY xxx) AS rn in a subquery and filtering with WHERE rn <= yyy (see the sketch after this list): this solves the specific case, but it introduces performance problems with other owner_ids.
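A sketch of that row_number() rewrite, for illustration (the LIMIT value inlined as 61):
SELECT id, name
FROM (
    SELECT route.id, route_meta.name,
           row_number() OVER (ORDER BY route_meta.name) AS rn
    FROM route
    INNER JOIN route_meta USING (id)
    WHERE route.owner_id = 128905
) AS sub
WHERE rn <= 61
ORDER BY rn;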
A similar problem was solved with a combined index, but that seems impossible here because of the different tables.
Postgres uses wrong index in query plan
In short: DISTINCT, MIN, and MAX on the left-hand side of a LEFT JOIN should be answerable without doing the join.
I'm using a SQL array type (on Postgres 9.3) to condense several rows of data into a single row, and then a view to return the unnested, normalized view. I do this to save on index costs, as well as to get Postgres to compress the data in the array.
Things work pretty well, but some queries that could be answered without unnesting and materializing/exploding the view are quite expensive because they are deferred till after the view is materialized. Is there any way to solve this?
Here is the basic table:
CREATE TABLE mt_count_by_day
(
run_id integer NOT NULL,
type character varying(64) NOT NULL,
start_day date NOT NULL,
end_day date NOT NULL,
counts bigint[] NOT NULL,
CONSTRAINT mt_count_by_day_pkey PRIMARY KEY (run_id, type)
)
An index on ‘type’ just for good measure:
CREATE INDEX runinfo_mt_count_by_day_type_idx on runinfo.mt_count_by_day (type);
Here is the view that uses generate_series and unnest
CREATE OR REPLACE VIEW runinfo.v_mt_count_by_day AS
SELECT mt_count_by_day.run_id,
mt_count_by_day.type,
mt_count_by_day.brand,
generate_series(mt_count_by_day.start_day::timestamp without time zone, mt_count_by_day.end_day - '1 day'::interval, '1 day'::interval) AS row_date,
unnest(mt_count_by_day.counts) AS row_count
FROM runinfo.mt_count_by_day;
What if I want to do distinct on the 'type' column?
explain analyze select distinct(type) from mt_count_by_day;
"HashAggregate (cost=9566.81..9577.28 rows=1047 width=19) (actual time=171.653..172.019 rows=1221 loops=1)"
" -> Seq Scan on mt_count_by_day (cost=0.00..9318.25 rows=99425 width=19) (actual time=0.089..99.110 rows=99425 loops=1)"
"Total runtime: 172.338 ms"
Now what happens if I do the same on the view?
explain analyze select distinct(type) from v_mt_count_by_day;
"HashAggregate (cost=1749752.88..1749763.34 rows=1047 width=19) (actual time=58586.934..58587.191 rows=1221 loops=1)"
" -> Subquery Scan on v_mt_count_by_day (cost=0.00..1501190.38 rows=99425000 width=19) (actual time=0.114..37134.349 rows=68299959 loops=1)"
" -> Seq Scan on mt_count_by_day (cost=0.00..506940.38 rows=99425000 width=597) (actual time=0.113..24907.147 rows=68299959 loops=1)"
"Total runtime: 58587.474 ms"
Is there a way to get postgres to recognize that it can solve this without first exploding the view?
Here, for comparison, we count the number of rows matching a criterion in the table vs. the view. Everything works as expected: Postgres filters down the rows before materializing the view. Not quite the same query, but this property is what makes our data more manageable.
explain analyze select count(*) from mt_count_by_day where type = 'SOCIAL_GOOGLE'
"Aggregate (cost=157.01..157.02 rows=1 width=0) (actual time=0.538..0.538 rows=1 loops=1)"
" -> Bitmap Heap Scan on mt_count_by_day (cost=4.73..156.91 rows=40 width=0) (actual time=0.139..0.509 rows=122 loops=1)"
" Recheck Cond: ((type)::text = 'SOCIAL_GOOGLE'::text)"
" -> Bitmap Index Scan on runinfo_mt_count_by_day_type_idx (cost=0.00..4.72 rows=40 width=0) (actual time=0.098..0.098 rows=122 loops=1)"
" Index Cond: ((type)::text = 'SOCIAL_GOOGLE'::text)"
"Total runtime: 0.625 ms"
explain analyze select count(*) from v_mt_count_by_day where type = 'SOCIAL_GOOGLE'
"Aggregate (cost=857.11..857.12 rows=1 width=0) (actual time=6.827..6.827 rows=1 loops=1)"
" -> Bitmap Heap Scan on mt_count_by_day (cost=4.73..357.11 rows=40000 width=597) (actual time=0.124..5.294 rows=15916 loops=1)"
" Recheck Cond: ((type)::text = 'SOCIAL_GOOGLE'::text)"
" -> Bitmap Index Scan on runinfo_mt_count_by_day_type_idx (cost=0.00..4.72 rows=40 width=0) (actual time=0.082..0.082 rows=122 loops=1)"
" Index Cond: ((type)::text = 'SOCIAL_GOOGLE'::text)"
"Total runtime: 6.885 ms"
Here is the code required to reproduce this:
CREATE TABLE base_table
(
run_id integer NOT NULL,
type integer NOT NULL,
start_day date NOT NULL,
end_day date NOT NULL,
counts bigint[] NOT NULL,
CONSTRAINT match_check CHECK (end_day > start_day AND (end_day - start_day) = array_length(counts, 1)),
CONSTRAINT base_table_pkey PRIMARY KEY (run_id, type)
);
--Just because...
CREATE INDEX base_type_idx on base_table (type);
CREATE OR REPLACE VIEW v_foo AS
SELECT m.run_id,
m.type,
t.row_date::date,
t.row_count
FROM base_table m
LEFT JOIN LATERAL ROWS FROM (
unnest(m.counts),
generate_series(m.start_day, m.end_day-1, interval '1d')
) t(row_count, row_date) ON true;
insert into base_table
select a.run_id, a.type, '20120101'::date as start_day, '20120401'::date as end_day, b.counts from (SELECT N AS run_id, L as type
FROM
generate_series(1, 10000) N
CROSS JOIN
generate_series(1, 7) L
ORDER BY N, L) a, (SELECT array_agg(generate_series)::bigint[] as counts FROM generate_series(1, 91) ) b
And the results on 9.4.1:
explain analyze select distinct type from base_table;
"HashAggregate (cost=6750.00..6750.03 rows=3 width=4) (actual time=51.939..51.940 rows=3 loops=1)"
" Group Key: type"
" -> Seq Scan on base_table (cost=0.00..6600.00 rows=60000 width=4) (actual time=0.030..33.655 rows=60000 loops=1)"
"Planning time: 0.086 ms"
"Execution time: 51.975 ms"
explain analyze select distinct type from v_foo;
"HashAggregate (cost=1356600.01..1356600.04 rows=3 width=4) (actual time=9215.630..9215.630 rows=3 loops=1)"
" Group Key: m.type"
" -> Nested Loop Left Join (cost=0.01..1206600.01 rows=60000000 width=4) (actual time=0.112..7834.094 rows=5460000 loops=1)"
" -> Seq Scan on base_table m (cost=0.00..6600.00 rows=60000 width=764) (actual time=0.009..42.694 rows=60000 loops=1)"
" -> Function Scan on t (cost=0.01..10.01 rows=1000 width=0) (actual time=0.091..0.111 rows=91 loops=60000)"
"Planning time: 0.132 ms"
"Execution time: 9215.686 ms"
Generally, the Postgres query planner does "inline" views to optimize the whole query. Per documentation:
One application of the rewrite system is in the realization of views.
Whenever a query against a view (i.e., a virtual table) is made, the
rewrite system rewrites the user's query to a query that accesses the
base tables given in the view definition instead.
But I don't think Postgres is smart enough to conclude that it can reach the same result from the base table without exploding rows.
You can try this alternative query with a LATERAL join. It's cleaner:
CREATE OR REPLACE VIEW runinfo.v_mt_count_by_day AS
SELECT m.run_id, m.type, m.brand
, m.start_day + c.rn - 1 AS row_date
, c.row_count
FROM runinfo.mt_count_by_day m
LEFT JOIN LATERAL unnest(m.counts) WITH ORDINALITY c(row_count, rn) ON true;
It also makes clear that one of (end_day, start_day) is redundant.
Using LEFT JOIN because that might allow the query planner to ignore the join from your query:
SELECT DISTINCT type FROM v_mt_count_by_day;
Else (with a CROSS JOIN or INNER JOIN) it must evaluate the join to see whether rows from the first table are eliminated.
BTW, it's:
SELECT DISTINCT type ...
not:
SELECT DISTINCT(type) ...
Note that this returns a date instead of the timestamp in your original. Easier, and I guess it's what you want anyway?
Requires Postgres 9.3+. Details:
PostgreSQL unnest() with element number
ROWS FROM in Postgres 9.4+
To explode both columns in parallel safely:
CREATE OR REPLACE VIEW runinfo.v_mt_count_by_day AS
SELECT m.run_id, m.type, m.brand
     , t.row_date::date, t.row_count
FROM runinfo.mt_count_by_day m
LEFT JOIN LATERAL ROWS FROM (
unnest(m.counts)
, generate_series(m.start_day, m.end_day, interval '1d')
) t(row_count, row_date) ON true;
The main benefit: this would not derail into a Cartesian product if the two SRFs don't return the same number of rows. Instead, NULL values are padded.
Again, I can't say whether this would help the query planner with a faster plan for DISTINCT type without testing.