A SELECT query scans all the child tables. Can we make the query optimizer scan only the right child table?
Example:
I created a parent table and two child tables using inheritance in Postgres 9.6, leaving out constraints to keep it simple:
create table student(id INTEGER, name varchar(10), result varchar(1) );
create table student_pass() inherits (student);
create table student_fail() inherits (student);
Indexes
create index student_result_idx on student (result);
create index student_result_idx2 on student_pass (result) where result='P';
create index student_result_idx3 on student_fail (result) where result='F';
Procedure
CREATE OR REPLACE FUNCTION student_partition()
RETURNS TRIGGER AS $$
BEGIN
    IF (NEW.result = 'P') THEN
        INSERT INTO student_pass VALUES (NEW.*);
    ELSE
        INSERT INTO student_fail VALUES (NEW.*);
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;
Trigger
CREATE TRIGGER insert_trigger BEFORE INSERT ON student
FOR EACH ROW EXECUTE procedure student_partition();
Insert
insert into student values
(1,'aaa','P'),
(2,'bbb','F');
The inserts land in their respective child tables, as expected.
Select
select * from student where result='P';
The problem is that the SELECT scans all the tables. How can I make the query optimizer smart enough to pick only the right child table?
Also, do we even need the WHERE condition on the partial indexes, given that each child table will hold only 'P' or only 'F' rows?
Output of EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM student WHERE result='P':
Append (cost=0.00..37.94 rows=11 width=50) (actual time=0.016..0.042 rows=2 loops=1)
Buffers: shared hit=4
-> Seq Scan on student (cost=0.00..2.30 rows=1 width=50) (actual time=0.015..0.017 rows=1 loops=1)
Filter: ((result)::text = 'P'::text)
Rows Removed by Filter: 1
Buffers: shared hit=1
-> Bitmap Heap Scan on student_pass (cost=4.17..12.64 rows=5 width=50) (actual time=0.013..0.014 rows=1 loops=1)
Recheck Cond: ((result)::text = 'P'::text)
Heap Blocks: exact=1
Buffers: shared hit=2
-> Bitmap Index Scan on student_result_idx2 (cost=0.00..4.17 rows=5 width=0) (actual time=0.007..0.007 rows=1 loops=1)
Buffers: shared hit=1
-> Seq Scan on student_fail (cost=0.00..23.00 rows=5 width=50) (actual time=0.007..0.007 rows=0 loops=1)
Filter: ((result)::text = 'P'::text)
Rows Removed by Filter: 1
Buffers: shared hit=1
Planning time: 0.447 ms
Execution time: 0.120 ms
Adding CHECK constraints helps:
alter table student_pass add constraint pass_cst check (result ='P');
alter table student_fail add constraint fail_cst check (result not in ('P'));
Output of EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM student WHERE result='P' after adding the constraints:
Append (cost=0.00..23.00 rows=6 width=50) (actual time=0.299..0.303 rows=2 loops=1)
Buffers: shared read=1
-> Seq Scan on student (cost=0.00..0.00 rows=1 width=50) (actual time=0.004..0.004 rows=0 loops=1)
Filter: ((result)::text = 'P'::text)
-> Seq Scan on student_pass (cost=0.00..23.00 rows=5 width=50) (actual time=0.294..0.296 rows=2 loops=1)
Filter: ((result)::text = 'P'::text)
Buffers: shared read=1
Planning time: 10.488 ms
Execution time: 0.361 ms
The query optimizer skipped the student_fail table.
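For context: with inheritance partitioning, the planner can prune child tables using their CHECK constraints only when constraint exclusion is enabled. The default setting already covers inheritance queries, but it is worth a sanity check (a minimal sketch):
SHOW constraint_exclusion;
-- 'partition' (the default) applies constraint exclusion to queries on
-- inheritance children; re-enable it for the session if it was turned off:
SET constraint_exclusion = partition;
Note that the (empty) parent table still appears in the plan with a cheap scan, since rows could in principle be stored in it directly.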
This is a follow-up to this issue I posted a while ago.
I have the following code:
SET work_mem = '16MB';
SELECT s.start_date, s.end_date, s.resources, s.activity_index, r.resource_id, sa.usedresourceset
FROM rm_o_resource_usage_instance_splits_new s
INNER JOIN rm_o_resource_usage r ON s.usage_id = r.id
INNER JOIN scheduledactivities sa ON s.activity_index = sa.activity_index AND r.schedule_id = sa.solution_id and s.solution = sa.solution_id
WHERE r.schedule_id = 10
ORDER BY r.resource_id, s.start_date
When I run EXPLAIN (ANALYZE, BUFFERS) I get the following:
Sort (cost=3724.02..3724.29 rows=105 width=89) (actual time=245.802..247.573 rows=22302 loops=1)
Sort Key: r.resource_id, s.start_date
Sort Method: quicksort Memory: 6692kB
Buffers: shared hit=198702 read=5993 written=612
-> Nested Loop (cost=703.76..3720.50 rows=105 width=89) (actual time=1.898..164.741 rows=22302 loops=1)
Buffers: shared hit=198702 read=5993 written=612
-> Hash Join (cost=703.34..3558.54 rows=105 width=101) (actual time=1.815..11.259 rows=22302 loops=1)
Hash Cond: (s.usage_id = r.id)
Buffers: shared hit=3 read=397 written=2
-> Bitmap Heap Scan on rm_o_resource_usage_instance_splits_new s (cost=690.61..3486.58 rows=22477 width=69) (actual time=1.782..5.820 rows=22302 loops=1)
Recheck Cond: (solution = 10)
Heap Blocks: exact=319
Buffers: shared hit=2 read=396 written=2
-> Bitmap Index Scan on rm_o_resource_usage_instance_splits_new_solution_idx (cost=0.00..685.00 rows=22477 width=0) (actual time=1.609..1.609 rows=22302 loops=1)
Index Cond: (solution = 10)
Buffers: shared hit=2 read=77
-> Hash (cost=12.66..12.66 rows=5 width=48) (actual time=0.023..0.023 rows=1 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
Buffers: shared hit=1 read=1
-> Bitmap Heap Scan on rm_o_resource_usage r (cost=4.19..12.66 rows=5 width=48) (actual time=0.020..0.020 rows=1 loops=1)
Recheck Cond: (schedule_id = 10)
Heap Blocks: exact=1
Buffers: shared hit=1 read=1
-> Bitmap Index Scan on rm_o_resource_usage_sched (cost=0.00..4.19 rows=5 width=0) (actual time=0.017..0.017 rows=1 loops=1)
Index Cond: (schedule_id = 10)
Buffers: shared read=1
-> Index Scan using scheduledactivities_activity_index_idx on scheduledactivities sa (cost=0.42..1.53 rows=1 width=16) (actual time=0.004..0.007 rows=1 loops=22302)
Index Cond: (activity_index = s.activity_index)
Filter: (solution_id = 10)
Rows Removed by Filter: 5
Buffers: shared hit=198699 read=5596 written=610
Planning time: 7.070 ms
Execution time: 248.691 ms
Every time I run EXPLAIN, I get roughly the same results. The execution time is always between 170 ms and 250 ms, which, to me, is perfectly fine. However, when this query is run through a C++ project (using PQexec(conn, query), where conn is a dedicated connection and query is the above query), the time it takes varies widely. In general the query is very quick and you don't notice a delay. The problem is that, on occasion, this query takes 2 to 3 minutes to complete.
If I open pgAdmin and look at the "server activity" for the database, there are about 30 connections, mostly sitting at "idle". The above query's connection is marked "active", and stays "active" for several minutes.
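For what it's worth, the same information pgAdmin shows can be pulled straight from pg_stat_activity while the query is stuck; on 10.x the wait-event columns may hint at what the backend is doing (a sketch):
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime,
       left(query, 60) AS query_head
FROM pg_stat_activity
WHERE state <> 'idle';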
I am at a loss as to why it randomly takes several minutes to complete the same query, with no change in the data in the DB either. I have tried increasing work_mem, which didn't make any difference (nor did I really expect it to). Any help or suggestions would be greatly appreciated.
There aren't any more specific tags, but I'm currently using Postgres 10.11; it has also been an issue on other 10.x versions. The system is a quad-core Xeon @ 3.4 GHz, with an SSD and 24 GB of memory.
Per jjanes's suggestion, I put in auto_explain. Eventually I got this output:
duration: 128057.373 ms
plan:
Query Text: SET work_mem = '32MB';SELECT s.start_date, s.end_date, s.resources, s.activity_index, r.resource_id, sa.usedresourceset FROM rm_o_resource_usage_instance_splits_new s INNER JOIN rm_o_resource_usage r ON s.usage_id = r.id INNER JOIN scheduledactivities sa ON s.activity_index = sa.activity_index AND r.schedule_id = sa.solution_id and s.solution = sa.solution_id WHERE r.schedule_id = 12642 ORDER BY r.resource_id, s.start_date
Sort (cost=14.36..14.37 rows=1 width=98) (actual time=128042.083..128043.287 rows=21899 loops=1)
Output: s.start_date, s.end_date, s.resources, s.activity_index, r.resource_id, sa.usedresourceset
Sort Key: r.resource_id, s.start_date
Sort Method: quicksort Memory: 6585kB
Buffers: shared hit=21198435 read=388 dirtied=119
-> Nested Loop (cost=0.85..14.35 rows=1 width=98) (actual time=4.995..127958.935 rows=21899 loops=1)
Output: s.start_date, s.end_date, s.resources, s.activity_index, r.resource_id, sa.usedresourceset
Join Filter: (s.activity_index = sa.activity_index)
Rows Removed by Join Filter: 705476285
Buffers: shared hit=21198435 read=388 dirtied=119
-> Nested Loop (cost=0.42..9.74 rows=1 width=110) (actual time=0.091..227.705 rows=21899 loops=1)
Output: s.start_date, s.end_date, s.resources, s.activity_index, s.solution, r.resource_id, r.schedule_id
Inner Unique: true
Join Filter: (s.usage_id = r.id)
Buffers: shared hit=22102 read=388 dirtied=119
-> Index Scan using rm_o_resource_usage_instance_splits_new_solution_idx on public.rm_o_resource_usage_instance_splits_new s (cost=0.42..8.44 rows=1 width=69) (actual time=0.082..17.418 rows=21899 loops=1)
Output: s.start_time, s.end_time, s.resources, s.activity_index, s.usage_id, s.start_date, s.end_date, s.solution
Index Cond: (s.solution = 12642)
Buffers: shared hit=203 read=388 dirtied=119
-> Seq Scan on public.rm_o_resource_usage r (cost=0.00..1.29 rows=1 width=57) (actual time=0.002..0.002 rows=1 loops=21899)
Output: r.id, r.schedule_id, r.resource_id
Filter: (r.schedule_id = 12642)
Rows Removed by Filter: 26
Buffers: shared hit=21899
-> Index Scan using scheduled_activities_idx on public.scheduledactivities sa (cost=0.42..4.60 rows=1 width=16) (actual time=0.006..4.612 rows=32216 loops=21899)
Output: sa.usedresourceset, sa.activity_index, sa.solution_id
Index Cond: (sa.solution_id = 12642)
Buffers: shared hit=21176333
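For reference, auto_explain can be switched on roughly like this (a sketch; the threshold values are illustrative, not the exact configuration used):
-- Load per session (or add 'auto_explain' to shared_preload_libraries
-- in postgresql.conf to enable it server-wide):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '5s'; -- log only plans of slow statements
SET auto_explain.log_analyze = on;        -- include actual times and row counts
SET auto_explain.log_buffers = on;        -- include buffer usage, as seen above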
EDIT: Full definitions of the tables are below:
CREATE TABLE public.rm_o_resource_usage_instance_splits_new
(
start_time integer NOT NULL,
end_time integer NOT NULL,
resources jsonb NOT NULL,
activity_index integer NOT NULL,
usage_id bigint NOT NULL,
start_date text COLLATE pg_catalog."default" NOT NULL,
end_date text COLLATE pg_catalog."default" NOT NULL,
solution bigint NOT NULL,
CONSTRAINT rm_o_resource_usage_instance_splits_new_pkey PRIMARY KEY (start_time, activity_index, usage_id),
CONSTRAINT rm_o_resource_usage_instance_splits_new_solution_fkey FOREIGN KEY (solution)
REFERENCES public.rm_o_schedule_stats (id) MATCH SIMPLE
ON UPDATE CASCADE
ON DELETE CASCADE,
CONSTRAINT rm_o_resource_usage_instance_splits_new_usage_id_fkey FOREIGN KEY (usage_id)
REFERENCES public.rm_o_resource_usage (id) MATCH SIMPLE
ON UPDATE CASCADE
ON DELETE CASCADE
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
CREATE INDEX rm_o_resource_usage_instance_splits_new_activity_idx
ON public.rm_o_resource_usage_instance_splits_new USING btree
(activity_index ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX rm_o_resource_usage_instance_splits_new_solution_idx
ON public.rm_o_resource_usage_instance_splits_new USING btree
(solution ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX rm_o_resource_usage_instance_splits_new_usage_idx
ON public.rm_o_resource_usage_instance_splits_new USING btree
(usage_id ASC NULLS LAST)
TABLESPACE pg_default;
CREATE TABLE public.rm_o_resource_usage
(
id bigint NOT NULL DEFAULT nextval('rm_o_resource_usage_id_seq'::regclass),
schedule_id bigint NOT NULL,
resource_id text COLLATE pg_catalog."default" NOT NULL,
CONSTRAINT rm_o_resource_usage_pkey PRIMARY KEY (id),
CONSTRAINT rm_o_resource_usage_schedule_id_fkey FOREIGN KEY (schedule_id)
REFERENCES public.rm_o_schedule_stats (id) MATCH SIMPLE
ON UPDATE CASCADE
ON DELETE CASCADE
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
CREATE INDEX rm_o_resource_usage_idx
ON public.rm_o_resource_usage USING btree
(id ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX rm_o_resource_usage_sched
ON public.rm_o_resource_usage USING btree
(schedule_id ASC NULLS LAST)
TABLESPACE pg_default;
CREATE TABLE public.scheduledactivities
(
id bigint NOT NULL DEFAULT nextval('scheduledactivities_id_seq'::regclass),
solution_id bigint NOT NULL,
activity_id text COLLATE pg_catalog."default" NOT NULL,
sequence_index integer,
startminute integer,
finishminute integer,
issue text COLLATE pg_catalog."default",
activity_index integer NOT NULL,
is_objective boolean NOT NULL,
usedresourceset integer DEFAULT '-1'::integer,
start timestamp without time zone,
finish timestamp without time zone,
is_ore boolean,
is_ignored boolean,
CONSTRAINT scheduled_activities_pkey PRIMARY KEY (id),
CONSTRAINT scheduledactivities_solution_id_fkey FOREIGN KEY (solution_id)
REFERENCES public.rm_o_schedule_stats (id) MATCH SIMPLE
ON UPDATE CASCADE
ON DELETE CASCADE
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
CREATE INDEX scheduled_activities_activity_id_idx
ON public.scheduledactivities USING btree
(activity_id COLLATE pg_catalog."default" ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX scheduled_activities_id_idx
ON public.scheduledactivities USING btree
(id ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX scheduled_activities_idx
ON public.scheduledactivities USING btree
(solution_id ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX scheduledactivities_activity_index_idx
ON public.scheduledactivities USING btree
(activity_index ASC NULLS LAST)
TABLESPACE pg_default;
EDIT: Additional output from auto_explain after adding an index on scheduledactivities (solution_id, activity_index):
Output: s.start_date, s.end_date, s.resources, s.activity_index, r.resource_id, sa.usedresourceset
Sort Key: r.resource_id, s.start_date
Sort Method: quicksort Memory: 6283kB
Buffers: shared hit=20159117 read=375 dirtied=190
-> Nested Loop (cost=0.85..10.76 rows=1 width=100) (actual time=5.518..122489.627 rows=20761 loops=1)
Output: s.start_date, s.end_date, s.resources, s.activity_index, r.resource_id, sa.usedresourceset
Join Filter: (s.activity_index = sa.activity_index)
Rows Removed by Join Filter: 668815615
Buffers: shared hit=20159117 read=375 dirtied=190
-> Nested Loop (cost=0.42..5.80 rows=1 width=112) (actual time=0.057..217.563 rows=20761 loops=1)
Output: s.start_date, s.end_date, s.resources, s.activity_index, s.solution, r.resource_id, r.schedule_id
Inner Unique: true
Join Filter: (s.usage_id = r.id)
Buffers: shared hit=20947 read=375 dirtied=190
-> Index Scan using rm_o_resource_usage_instance_splits_new_solution_idx on public.rm_o_resource_usage_instance_splits_new s (cost=0.42..4.44 rows=1 width=69) (actual time=0.049..17.622 rows=20761 loops=1)
Output: s.start_time, s.end_time, s.resources, s.activity_index, s.usage_id, s.start_date, s.end_date, s.solution
Index Cond: (s.solution = 12644)
Buffers: shared hit=186 read=375 dirtied=190
-> Seq Scan on public.rm_o_resource_usage r (cost=0.00..1.35 rows=1 width=59) (actual time=0.002..0.002 rows=1 loops=20761)
Output: r.id, r.schedule_id, r.resource_id
Filter: (r.schedule_id = 12644)
Rows Removed by Filter: 22
Buffers: shared hit=20761
-> Index Scan using scheduled_activities_idx on public.scheduledactivities sa (cost=0.42..4.94 rows=1 width=16) (actual time=0.007..4.654 rows=32216 loops=20761)
Output: sa.usedresourceset, sa.activity_index, sa.solution_id
Index Cond: (sa.solution_id = 12644)
Buffers: shared hit=20138170
The easiest way to reproduce the issue is to add more rows to the three tables. I didn't delete any; I only did a few thousand INSERTs.
-> Index Scan using .. s (cost=0.42..8.44 rows=1 width=69) (actual time=0.082..17.418 rows=21899 loops=1)
Index Cond: (s.solution = 12642)
The planner thinks it will find 1 row, and instead finds 21899. That error can pretty clearly lead to bad plans. And a single equality condition should be estimated quite accurately, so I'd say the statistics on your table are way off. It could be that the autovac launcher is tuned poorly so it doesn't run often enough, or it could be that select parts of your data change very rapidly (did you just insert 21899 rows with s.solution = 12642 immediately before running the query?) and so the stats can't be kept accurate enough.
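To check whether autoanalyze is keeping up, one option is to compare the last analyze time against the number of rows modified since then (a sketch):
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 'rm_o_resource_usage_instance_splits_new';
-- A manual ANALYZE refreshes the statistics immediately:
ANALYZE rm_o_resource_usage_instance_splits_new;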
-> Nested Loop ...
Join Filter: (s.activity_index = sa.activity_index)
Rows Removed by Join Filter: 705476285
-> ...
-> Index Scan using scheduled_activities_idx on public.scheduledactivities sa (cost=0.42..4.60 rows=1 width=16) (actual time=0.006..4.612 rows=32216 loops=21899)
Output: sa.usedresourceset, sa.activity_index, sa.solution_id
Index Cond: (sa.solution_id = 12642)
If you can't get it to use the Hash Join, you can at least reduce the harm of the Nested Loop by building an index on scheduledactivities (solution_id, activity_index). That way the activity_index criterion could be part of the Index Condition, rather than being a Join Filter. You could probably then drop the index exclusively on solution_id, as there is little point in maintaining both indexes.
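In the style of the existing DDL, that index would look something like this (the index name is illustrative):
CREATE INDEX scheduledactivities_solution_activity_idx
    ON public.scheduledactivities USING btree
    (solution_id, activity_index)
    TABLESPACE pg_default;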
The SQL statement of the fast plan uses WHERE r.schedule_id = 10 and returns about 22000 rows (estimated: 105).
The SQL statement of the slow plan uses WHERE r.schedule_id = 12642 and returns about 21000 rows (estimated: only 1).
The slow plan uses nested loops instead of hash joins, maybe because of a bad join estimate: the estimated row count is 1, but the actual count is 21899.
For example in this step:
Nested Loop (cost=0.42..9.74 rows=1 width=110) (actual time=0.091..227.705 rows=21899 loops=1)
If the data does not change, there may be a statistics issue (skewed data) for some columns.
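If skew is the culprit, raising the per-column statistics target and re-analyzing is a cheap experiment (a sketch; 1000 is an arbitrary bump from the default of 100):
ALTER TABLE rm_o_resource_usage_instance_splits_new
    ALTER COLUMN solution SET STATISTICS 1000;
ANALYZE rm_o_resource_usage_instance_splits_new;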
I am currently working on improving the performance of our DB, and I need some help from you.
I have a table and its index, like this:
CREATE TABLE public.ar
(
id integer NOT NULL DEFAULT nextval('id_seq'::regclass),
user_id integer NOT NULL,
duration double precision,
is_idle boolean NOT NULL,
activity_id integer NOT NULL,
device_id integer NOT NULL,
calendar_id integer,
on_taskness integer,
week_id integer,
some_other_column_below,
CONSTRAINT id_ PRIMARY KEY (id),
CONSTRAINT a_unique_key UNIQUE (user_id, device_id, start_time_local, start_time_utc, end_time_local, end_time_utc)
)
CREATE INDEX ar_idx
ON public.ar USING btree
(week_id, calendar_id, user_id, activity_id, duration, on_taskness, is_idle)
TABLESPACE pg_default;
Then I try to run a query like this:
EXPLAIN ANALYZE
SELECT COUNT(*)
FROM (
SELECT ar.user_id
FROM ar
WHERE ar.user_id = ANY(array[some_data]) -- data size is 352
AND ROUND(ar.duration) >0 AND ar.is_idle = false
AND ar.week_id = ANY(ARRAY[some_data]) -- data size is 37
AND ar.calendar_id = ANY(array[some_data]) -- data size is 16716
GROUP by ar.user_id
) tmp;
And below is the EXPLAIN result:
Aggregate (cost=31389954.72..31389954.73 rows=1 width=8) (actual time=252020.695..252020.695 rows=1 loops=1)
-> Group (cost=31389032.69..31389922.37 rows=2588 width=4) (actual time=251089.270..252020.659 rows=351 loops=1)
Group Key: ar.user_id
-> Sort (cost=31389032.69..31389477.53 rows=177935 width=4) (actual time=251089.268..251776.202 rows=6993358 loops=1)
Sort Key: ar.user_id
Sort Method: external merge Disk: 95672kB
-> Bitmap Heap Scan on ar (cost=609015.18..31371079.88 rows=177935 width=4) (actual time=1670.413..248939.440 rows=6993358 loops=1)
Recheck Cond: ((week_id = ANY ('{some_data}'::integer[])) AND (user_id = ANY ('{some_data}'::integer[])))
Rows Removed by Index Recheck: 2081028
Filter: ((NOT is_idle) AND (round(duration) > '0'::double precision) AND (calendar_id = ANY ('{some_data}'::integer[])))
Rows Removed by Filter: 534017
Heap Blocks: exact=29551 lossy=313127
-> BitmapAnd (cost=609015.18..609015.18 rows=1357521 width=0) (actual time=1666.334..1666.334 rows=0 loops=1)
-> Bitmap Index Scan on test_index_only_scan_idx (cost=0.00..272396.77 rows=6970353 width=0) (actual time=614.366..614.366 rows=7269830 loops=1)
Index Cond: ((week_id = ANY ('{some_data}'::integer[])) AND (is_idle = false))
-> Bitmap Index Scan on unique_key (cost=0.00..336529.20 rows=9948573 width=0) (actual time=1041.999..1041.999 rows=14959355 loops=1)
Index Cond: (user_id = ANY ('{some_data}'::integer[]))
Planning time: 25.563 ms
Execution time: 252029.237 ms
I used distinct as well, and the result is the same.
So my questions are below.
The ar_idx contains user_id, but when searching for rows, why does it use the unique_key instead of the index I created?
I thought GROUP BY would not sort (that is why I did not choose DISTINCT), so why does a Sort show up in the EXPLAIN ANALYZE output?
The running time is pretty long (more than 4 minutes). How do I make it faster? Is the index wrong, or is there anything else I can do?
Be advised, the ar table contains 51,585,203 rows.
Any help will be appreciated. Thx.
---------------------------update--------------------------
After I created the index below, everything goes really fast now. I don't understand why; can anyone explain this to me?
CREATE INDEX ar_1_idx
ON public.ar USING btree
(calendar_id, user_id)
TABLESPACE pg_default;
And I changed the old index to:
CREATE INDEX ar_idx
ON public.ar USING btree
(week_id, calendar_id, user_id, activity_id, duration, on_taskness, start_time_local, end_time_local) WHERE is_idle IS FALSE
TABLESPACE pg_default;
-----updated analyze results-----------
Aggregate (cost=31216435.97..31216435.98 rows=1 width=8) (actual time=13206.941..13206.941 rows=1 loops=1)
Buffers: shared hit=25940518 read=430315, temp read=31079 written=31079
-> Group (cost=31215436.80..31216403.88 rows=2567 width=4) (actual time=12239.336..13206.894 rows=351 loops=1)
Group Key: ar.user_id
Buffers: shared hit=25940518 read=430315, temp read=31079 written=31079
-> Sort (cost=31215436.80..31215920.34 rows=193417 width=4) (actual time=12239.334..12932.801 rows=6993358 loops=1)
Sort Key: ar.user_id
Sort Method: external merge Disk: 95664kB
Buffers: shared hit=25940518 read=430315, temp read=31079 written=31079
-> Index Scan using ar_1_idx on activity_report ar (cost=0.56..31195807.48 rows=193417 width=4) (actual time=0.275..10387.051 rows=6993358 loops=1)
Index Cond: ((calendar_id = ANY ('{some_data}'::integer[])) AND (user_id = ANY ('{some_data}'::integer[])))
Filter: ((NOT is_idle) AND (round(duration) > '0'::double precision) AND (week_id = ANY ('{some_data}'::integer[])))
Rows Removed by Filter: 590705
Buffers: shared hit=25940518 read=430315
Planning time: 25.577 ms
Execution time: 13217.611 ms
This query searches for product_groupings often purchased with product_grouping ID 99999. It fans out to all the orders that contain product_grouping 99999, then joins back down to count the number of times each other product_grouping has been ordered, and takes the top 10.
Is there any way to speed this query up?
SELECT product_groupings.*, count(product_groupings.id) AS product_groupings_count
FROM "product_groupings"
INNER JOIN "products" ON "product_groupings"."id" = "products"."product_grouping_id"
INNER JOIN "variants" ON "products"."id" = "variants"."product_id"
INNER JOIN "order_items" ON "variants"."id" = "order_items"."variant_id"
INNER JOIN "shipments" ON "order_items"."shipment_id" = "shipments"."id"
INNER JOIN "orders" ON "shipments"."order_id" = "orders"."id"
INNER JOIN "shipments" "shipments_often_purchased_with_join" ON "orders"."id" = "shipments_often_purchased_with_join"."order_id"
INNER JOIN "order_items" "order_items_often_purchased_with_join" ON "shipments_often_purchased_with_join"."id" = "order_items_often_purchased_with_join"."shipment_id"
INNER JOIN "variants" "variants_often_purchased_with_join" ON "order_items_often_purchased_with_join"."variant_id" = "variants_often_purchased_with_join"."id"
INNER JOIN "products" "products_often_purchased_with_join" ON "variants_often_purchased_with_join"."product_id" = "products_often_purchased_with_join"."id"
WHERE "products_often_purchased_with_join"."product_grouping_id" = 99999 AND (product_groupings.id != 99999) AND "product_groupings"."state" = 'active' AND ("shipments"."state" NOT IN ('pending', 'cancelled'))
GROUP BY product_groupings.id
ORDER BY product_groupings_count desc LIMIT 10
schema:
CREATE TABLE product_groupings (
id integer NOT NULL,
state character varying(255) DEFAULT 'active'::character varying,
brand_id integer,
product_content_id integer,
hierarchy_category_id integer,
hierarchy_subtype_id integer,
hierarchy_type_id integer,
product_type_id integer,
description text,
keywords text,
created_at timestamp without time zone,
updated_at timestamp without time zone
);
CREATE INDEX index_product_groupings_on_brand_id ON product_groupings USING btree (brand_id);
CREATE INDEX index_product_groupings_on_hierarchy_category_id ON product_groupings USING btree (hierarchy_category_id);
CREATE INDEX index_product_groupings_on_hierarchy_subtype_id ON product_groupings USING btree (hierarchy_subtype_id);
CREATE INDEX index_product_groupings_on_hierarchy_type_id ON product_groupings USING btree (hierarchy_type_id);
CREATE INDEX index_product_groupings_on_name ON product_groupings USING btree (name);
CREATE INDEX index_product_groupings_on_product_content_id ON product_groupings USING btree (product_content_id);
CREATE INDEX index_product_groupings_on_product_type_id ON product_groupings USING btree (product_type_id);
ALTER TABLE ONLY product_groupings
ADD CONSTRAINT product_groupings_pkey PRIMARY KEY (id);
CREATE TABLE products (
id integer NOT NULL,
name character varying(255) NOT NULL,
prototype_id integer,
deleted_at timestamp without time zone,
created_at timestamp without time zone,
updated_at timestamp without time zone,
item_volume character varying(255),
upc character varying(255),
state character varying(255),
volume_unit character varying(255),
volume_value numeric,
container_type character varying(255),
container_count integer,
upc_ext character varying(8),
product_grouping_id integer,
short_pack_size character varying(255),
short_volume character varying(255),
additional_upcs character varying(255)[] DEFAULT '{}'::character varying[]
);
CREATE INDEX index_products_on_additional_upcs ON products USING gin (additional_upcs);
CREATE INDEX index_products_on_deleted_at ON products USING btree (deleted_at);
CREATE INDEX index_products_on_name ON products USING btree (name);
CREATE INDEX index_products_on_product_grouping_id ON products USING btree (product_grouping_id);
CREATE INDEX index_products_on_prototype_id ON products USING btree (prototype_id);
CREATE INDEX index_products_on_upc ON products USING btree (upc);
ALTER TABLE ONLY products
ADD CONSTRAINT products_pkey PRIMARY KEY (id);
CREATE TABLE variants (
id integer NOT NULL,
product_id integer NOT NULL,
sku character varying(255) NOT NULL,
name character varying(255),
price numeric(8,2) DEFAULT 0.0 NOT NULL,
deleted_at timestamp without time zone,
supplier_id integer,
created_at timestamp without time zone,
updated_at timestamp without time zone,
inventory_id integer,
product_active boolean DEFAULT false NOT NULL,
original_name character varying(255),
original_item_volume character varying(255),
protected boolean DEFAULT false NOT NULL,
sale_price numeric(8,2) DEFAULT 0.0 NOT NULL
);
CREATE INDEX index_variants_on_inventory_id ON variants USING btree (inventory_id);
CREATE INDEX index_variants_on_product_id_and_deleted_at ON variants USING btree (product_id, deleted_at);
CREATE INDEX index_variants_on_sku ON variants USING btree (sku);
CREATE INDEX index_variants_on_state_attributes ON variants USING btree (deleted_at, product_active, protected, id);
CREATE INDEX index_variants_on_supplier_id ON variants USING btree (supplier_id);
ALTER TABLE ONLY variants
ADD CONSTRAINT variants_pkey PRIMARY KEY (id);
CREATE TABLE order_items (
id integer NOT NULL,
price numeric(8,2),
total numeric(8,2),
variant_id integer NOT NULL,
shipment_id integer,
created_at timestamp without time zone,
updated_at timestamp without time zone,
quantity integer DEFAULT 1
);
CREATE INDEX index_order_items_on_shipment_id ON order_items USING btree (shipment_id);
CREATE INDEX index_order_items_on_variant_id ON order_items USING btree (variant_id);
ALTER TABLE ONLY order_items
ADD CONSTRAINT order_items_pkey PRIMARY KEY (id);
CREATE TABLE shipments (
id integer NOT NULL,
order_id integer,
shipping_method_id integer NOT NULL,
number character varying,
state character varying(255) DEFAULT 'pending'::character varying NOT NULL,
created_at timestamp without time zone,
updated_at timestamp without time zone,
supplier_id integer,
confirmed_at timestamp without time zone,
canceled_at timestamp without time zone,
out_of_hours boolean DEFAULT false NOT NULL,
delivered_at timestamp without time zone,
uuid uuid DEFAULT uuid_generate_v4()
);
CREATE INDEX index_shipments_on_order_id_and_supplier_id ON shipments USING btree (order_id, supplier_id);
CREATE INDEX index_shipments_on_state ON shipments USING btree (state);
CREATE INDEX index_shipments_on_supplier_id ON shipments USING btree (supplier_id);
ALTER TABLE ONLY shipments
ADD CONSTRAINT shipments_pkey PRIMARY KEY (id);
CREATE TABLE orders (
id integer NOT NULL,
number character varying(255),
ip_address character varying(255),
state character varying(255),
ship_address_id integer,
active boolean DEFAULT true NOT NULL,
completed_at timestamp without time zone,
created_at timestamp without time zone,
updated_at timestamp without time zone,
tip_amount numeric(8,2) DEFAULT 0.0,
confirmed_at timestamp without time zone,
delivery_notes text,
cancelled_at timestamp without time zone,
courier boolean DEFAULT false NOT NULL,
scheduled_for timestamp without time zone,
client character varying(255),
subscription_id character varying(255),
pickup_detail_id integer
);
CREATE INDEX index_orders_on_bill_address_id ON orders USING btree (bill_address_id);
CREATE INDEX index_orders_on_completed_at ON orders USING btree (completed_at);
CREATE UNIQUE INDEX index_orders_on_number ON orders USING btree (number);
CREATE INDEX index_orders_on_ship_address_id ON orders USING btree (ship_address_id);
CREATE INDEX index_orders_on_state ON orders USING btree (state);
ALTER TABLE ONLY orders
ADD CONSTRAINT orders_pkey PRIMARY KEY (id);
Query plan:
Limit (cost=685117.80..685117.81 rows=10 width=595) (actual time=33659.659..33659.661 rows=10 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name, (count(product_groupings.id))
Buffers: shared hit=259132 read=85657, temp read=30892 written=30886
I/O Timings: read=5542.213
-> Sort (cost=685117.80..685117.81 rows=14 width=595) (actual time=33659.658..33659.659 rows=10 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name, (count(product_groupings.id))
Sort Key: (count(product_groupings.id))
Sort Method: top-N heapsort Memory: 30kB
Buffers: shared hit=259132 read=85657, temp read=30892 written=30886
I/O Timings: read=5542.213
-> HashAggregate (cost=685117.71..685117.75 rows=14 width=595) (actual time=33659.407..33659.491 rows=122 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name, count(product_groupings.id)
Buffers: shared hit=259129 read=85657, temp read=30892 written=30886
I/O Timings: read=5542.213
-> Hash Join (cost=453037.24..685117.69 rows=14 width=595) (actual time=26019.889..33658.886 rows=181 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name
Hash Cond: (order_items_often_purchased_with_join.variant_id = variants_often_purchased_with_join.id)
Buffers: shared hit=259129 read=85657, temp read=30892 written=30886
I/O Timings: read=5542.213
-> Hash Join (cost=452970.37..681530.70 rows=4693428 width=599) (actual time=22306.463..32908.056 rows=8417034 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name, order_items_often_purchased_with_join.variant_id
Hash Cond: (products.product_grouping_id = product_groupings.id)
Buffers: shared hit=259080 read=85650, temp read=30892 written=30886
I/O Timings: read=5540.529
-> Hash Join (cost=381952.28..493289.49 rows=5047613 width=8) (actual time=21028.128..25416.504 rows=8417518 loops=1)
Output: products.product_grouping_id, order_items_often_purchased_with_join.variant_id
Hash Cond: (order_items_often_purchased_with_join.shipment_id = shipments_often_purchased_with_join.id)
Buffers: shared hit=249520 read=77729
I/O Timings: read=5134.878
-> Seq Scan on public.order_items order_items_often_purchased_with_join (cost=0.00..82689.54 rows=4910847 width=8) (actual time=0.003..1061.456 rows=4909856 loops=1)
Output: order_items_often_purchased_with_join.shipment_id, order_items_often_purchased_with_join.variant_id
Buffers: shared hit=67957
-> Hash (cost=373991.27..373991.27 rows=2274574 width=8) (actual time=21027.220..21027.220 rows=2117538 loops=1)
Output: products.product_grouping_id, shipments_often_purchased_with_join.id
Buckets: 262144 Batches: 1 Memory Usage: 82717kB
Buffers: shared hit=181563 read=77729
I/O Timings: read=5134.878
-> Hash Join (cost=249781.35..373991.27 rows=2274574 width=8) (actual time=10496.552..20383.404 rows=2117538 loops=1)
Output: products.product_grouping_id, shipments_often_purchased_with_join.id
Hash Cond: (shipments.order_id = orders.id)
Buffers: shared hit=181563 read=77729
I/O Timings: read=5134.878
-> Hash Join (cost=118183.04..233677.13 rows=1802577 width=8) (actual time=6080.516..14318.439 rows=1899610 loops=1)
Output: products.product_grouping_id, shipments.order_id
Hash Cond: (variants.product_id = products.id)
Buffers: shared hit=107220 read=55876
I/O Timings: read=5033.540
-> Hash Join (cost=83249.21..190181.06 rows=1802577 width=8) (actual time=4526.391..11330.434 rows=1899808 loops=1)
Output: variants.product_id, shipments.order_id
Hash Cond: (order_items.variant_id = variants.id)
Buffers: shared hit=88026 read=44439
I/O Timings: read=4009.465
-> Hash Join (cost=40902.30..138821.27 rows=1802577 width=8) (actual time=3665.477..8553.803 rows=1899816 loops=1)
Output: order_items.variant_id, shipments.order_id
Hash Cond: (order_items.shipment_id = shipments.id)
Buffers: shared hit=56654 read=43022
I/O Timings: read=3872.065
-> Seq Scan on public.order_items (cost=0.00..82689.54 rows=4910847 width=8) (actual time=0.003..2338.108 rows=4909856 loops=1)
Output: order_items.variant_id, order_items.shipment_id
Buffers: shared hit=55987 read=11970
I/O Timings: read=1059.971
-> Hash (cost=38059.31..38059.31 rows=812284 width=8) (actual time=3664.973..3664.973 rows=834713 loops=1)
Output: shipments.id, shipments.order_id
Buckets: 131072 Batches: 1 Memory Usage: 32606kB
Buffers: shared hit=667 read=31052
I/O Timings: read=2812.094
-> Seq Scan on public.shipments (cost=0.00..38059.31 rows=812284 width=8) (actual time=0.017..3393.420 rows=834713 loops=1)
Output: shipments.id, shipments.order_id
Filter: ((shipments.state)::text <> ALL ('{pending,cancelled}'::text[]))
Rows Removed by Filter: 1013053
Buffers: shared hit=667 read=31052
I/O Timings: read=2812.094
-> Hash (cost=37200.34..37200.34 rows=1470448 width=8) (actual time=859.887..859.887 rows=1555657 loops=1)
Output: variants.product_id, variants.id
Buckets: 262144 Batches: 1 Memory Usage: 60768kB
Buffers: shared hit=31372 read=1417
I/O Timings: read=137.400
-> Seq Scan on public.variants (cost=0.00..37200.34 rows=1470448 width=8) (actual time=0.009..479.528 rows=1555657 loops=1)
Output: variants.product_id, variants.id
Buffers: shared hit=31372 read=1417
I/O Timings: read=137.400
-> Hash (cost=32616.92..32616.92 rows=661973 width=8) (actual time=1553.664..1553.664 rows=688697 loops=1)
Output: products.product_grouping_id, products.id
Buckets: 131072 Batches: 1 Memory Usage: 26903kB
Buffers: shared hit=19194 read=11437
I/O Timings: read=1024.075
-> Seq Scan on public.products (cost=0.00..32616.92 rows=661973 width=8) (actual time=0.011..1375.757 rows=688697 loops=1)
Output: products.product_grouping_id, products.id
Buffers: shared hit=19194 read=11437
I/O Timings: read=1024.075
-> Hash (cost=125258.00..125258.00 rows=1811516 width=12) (actual time=4415.081..4415.081 rows=1847746 loops=1)
Output: orders.id, shipments_often_purchased_with_join.order_id, shipments_often_purchased_with_join.id
Buckets: 262144 Batches: 1 Memory Usage: 79396kB
Buffers: shared hit=74343 read=21853
I/O Timings: read=101.338
-> Hash Join (cost=78141.12..125258.00 rows=1811516 width=12) (actual time=1043.228..3875.433 rows=1847746 loops=1)
Output: orders.id, shipments_often_purchased_with_join.order_id, shipments_often_purchased_with_join.id
Hash Cond: (shipments_often_purchased_with_join.order_id = orders.id)
Buffers: shared hit=74343 read=21853
I/O Timings: read=101.338
-> Seq Scan on public.shipments shipments_often_purchased_with_join (cost=0.00..37153.55 rows=1811516 width=8) (actual time=0.006..413.785 rows=1847766 loops=1)
Output: shipments_often_purchased_with_join.order_id, shipments_often_purchased_with_join.id
Buffers: shared hit=31719
-> Hash (cost=70783.52..70783.52 rows=2102172 width=4) (actual time=1042.239..1042.239 rows=2097229 loops=1)
Output: orders.id
Buckets: 262144 Batches: 1 Memory Usage: 73731kB
Buffers: shared hit=42624 read=21853
I/O Timings: read=101.338
-> Seq Scan on public.orders (cost=0.00..70783.52 rows=2102172 width=4) (actual time=0.012..553.606 rows=2097229 loops=1)
Output: orders.id
Buffers: shared hit=42624 read=21853
I/O Timings: read=101.338
-> Hash (cost=20222.66..20222.66 rows=637552 width=595) (actual time=1278.121..1278.121 rows=626176 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name
Buckets: 16384 Batches: 4 Memory Usage: 29780kB
Buffers: shared hit=9559 read=7921, temp written=10448
I/O Timings: read=405.651
-> Seq Scan on public.product_groupings (cost=0.00..20222.66 rows=637552 width=595) (actual time=0.020..873.844 rows=626176 loops=1)
Output: product_groupings.id, product_groupings.featured, product_groupings.searchable, product_groupings.state, product_groupings.brand_id, product_groupings.product_content_id, product_groupings.hierarchy_category_id, product_groupings.hierarchy_subtype_id, product_groupings.hierarchy_type_id, product_groupings.product_type_id, product_groupings.meta_description, product_groupings.meta_keywords, product_groupings.name, product_groupings.permalink, product_groupings.description, product_groupings.keywords, product_groupings.created_at, product_groupings.updated_at, product_groupings.tax_category_id, product_groupings.trimmed_name
Filter: ((product_groupings.id <> 99999) AND ((product_groupings.state)::text = 'active'::text))
Rows Removed by Filter: 48650
Buffers: shared hit=9559 read=7921
I/O Timings: read=405.651
-> Hash (cost=66.86..66.86 rows=4 width=4) (actual time=2.223..2.223 rows=30 loops=1)
Output: variants_often_purchased_with_join.id
Buckets: 1024 Batches: 1 Memory Usage: 2kB
Buffers: shared hit=49 read=7
I/O Timings: read=1.684
-> Nested Loop (cost=0.17..66.86 rows=4 width=4) (actual time=0.715..2.211 rows=30 loops=1)
Output: variants_often_purchased_with_join.id
Buffers: shared hit=49 read=7
I/O Timings: read=1.684
-> Index Scan using index_products_on_product_grouping_id on public.products products_often_purchased_with_join (cost=0.08..5.58 rows=2 width=4) (actual time=0.074..0.659 rows=6 loops=1)
Output: products_often_purchased_with_join.id
Index Cond: (products_often_purchased_with_join.product_grouping_id = 99999)
Buffers: shared hit=5 read=4
I/O Timings: read=0.552
-> Index Scan using index_variants_on_product_id_and_deleted_at on public.variants variants_often_purchased_with_join (cost=0.09..30.60 rows=15 width=8) (actual time=0.222..0.256 rows=5 loops=6)
Output: variants_often_purchased_with_join.id, variants_often_purchased_with_join.product_id
Index Cond: (variants_often_purchased_with_join.product_id = products_often_purchased_with_join.id)
Buffers: shared hit=44 read=3
I/O Timings: read=1.132
Total runtime: 33705.142 ms
Gained a significant ~20x increase in throughput using a sub-select:
SELECT product_groupings.*, count(product_groupings.id) AS product_groupings_count
FROM "product_groupings"
INNER JOIN "products" ON "products"."product_grouping_id" = "product_groupings"."id"
INNER JOIN "variants" ON "variants"."product_id" = "products"."id"
INNER JOIN "order_items" ON "order_items"."variant_id" = "variants"."id"
INNER JOIN "shipments" ON "shipments"."id" = "order_items"."shipment_id"
WHERE ("product_groupings"."id" != 99999)
AND "product_groupings"."state" = 'active'
AND ("shipments"."state" NOT IN ('pending', 'cancelled'))
AND ("shipments"."order_id" IN (
SELECT "shipments"."order_id"
FROM "shipments"
INNER JOIN "order_items" ON "order_items"."shipment_id" = "shipments"."id"
INNER JOIN "variants" ON "variants"."id" = "order_items"."variant_id"
INNER JOIN "products" ON "products"."id" = "variants"."product_id"
WHERE "products"."product_grouping_id" = 99999 AND ("shipments"."state" NOT IN ('pending', 'cancelled'))
GROUP BY "shipments"."order_id"
ORDER BY "shipments"."order_id" ASC
))
GROUP BY product_groupings.id
ORDER BY product_groupings_count desc
LIMIT 10
Although I'd welcome any further optimisations. :)
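One further idea, offered only as a sketch to be verified with EXPLAIN: the GROUP BY/ORDER BY inside the IN (...) subquery should be unnecessary (IN de-duplicates implicitly), and the same filter can be phrased as a semi-join by replacing the AND ("shipments"."order_id" IN (...)) clause with an EXISTS (the aliases s2, oi2, v2, p2 are made up here):
AND EXISTS (
  SELECT 1
  FROM "shipments" "s2"
  INNER JOIN "order_items" "oi2" ON "oi2"."shipment_id" = "s2"."id"
  INNER JOIN "variants" "v2" ON "v2"."id" = "oi2"."variant_id"
  INNER JOIN "products" "p2" ON "p2"."id" = "v2"."product_id"
  WHERE "p2"."product_grouping_id" = 99999
    AND "s2"."state" NOT IN ('pending', 'cancelled')
    AND "s2"."order_id" = "shipments"."order_id"
)
Whether the planner actually produces a better plan than the IN form is something to measure.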
I have several large tables in Postgres 9.2 (millions of rows) where I need to generate a unique code based on the combination of two fields, 'source' (varchar) and 'id' (int). I can do this by generating row_numbers over the result of:
SELECT source,id FROM tablename GROUP BY source,id
but the results can take a while to process. It has been recommended that if the fields are indexed and there is a proportionally small number of distinct values (which is my case), a loose index scan may be a better option: http://wiki.postgresql.org/wiki/Loose_indexscan
WITH RECURSIVE
t AS (SELECT min(col) AS col FROM tablename
UNION ALL
SELECT (SELECT min(col) FROM tablename WHERE col > t.col) FROM t WHERE t.col IS NOT NULL)
SELECT col FROM t WHERE col IS NOT NULL
UNION ALL
SELECT NULL WHERE EXISTS(SELECT * FROM tablename WHERE col IS NULL);
The example operates on a single field, though. Trying to return more than one field generates an error: "subquery must return only one column". One possibility might be to retrieve an entire ROW, e.g. SELECT ROW(min(source),min(id)..., but then I'm not sure what the syntax of the WHERE clause would need to look like to work with individual row elements.
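For the WHERE syntax specifically, PostgreSQL supports row-wise comparison, which lets two columns be treated as a single ordered key (a sketch with placeholder values; the answers below build on this):
-- Finds the first (source, id) pair that sorts after a given pair in
-- lexicographic order, matching the ordering of a (source, id) index.
SELECT source, id
FROM tablename
WHERE (source, id) > ('SRC_1', 42) -- placeholder values
ORDER BY source, id
LIMIT 1;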
The question is: can the recursion-based code be modified to work with more than one column, and if so, how? I'm committed to using Postgres, but it looks like MySQL has implemented loose index scans for more than one column: http://dev.mysql.com/doc/refman/5.1/en/group-by-optimization.html
As recommended, I'm attaching my EXPLAIN ANALYZE results.
For my situation, where I'm selecting distinct values for 2 columns using GROUP BY, it's the following:
HashAggregate (cost=1645408.44..1654099.65 rows=869121 width=34) (actual time=35411.889..36008.475 rows=1233080 loops=1)
-> Seq Scan on tablename (cost=0.00..1535284.96 rows=22024696 width=34) (actual time=4413.311..25450.840 rows=22025768 loops=1)
Total runtime: 36127.789 ms
(3 rows)
I don't know how to do a 2-column index scan (that's the question), but for purposes of comparison, using a GROUP BY on one column, I get:
HashAggregate (cost=1590346.70..1590347.69 rows=99 width=8) (actual time=32310.706..32310.722 rows=100 loops=1)
-> Seq Scan on tablename (cost=0.00..1535284.96 rows=22024696 width=8) (actual time=4764.609..26941.832 rows=22025768 loops=1)
Total runtime: 32350.899 ms
(3 rows)
But for a loose index scan on one column, I get:
Result (cost=181.28..198.07 rows=101 width=8) (actual time=0.069..1.935 rows=100 loops=1)
CTE t
-> Recursive Union (cost=1.74..181.28 rows=101 width=8) (actual time=0.062..1.855 rows=101 loops=1)
-> Result (cost=1.74..1.75 rows=1 width=0) (actual time=0.061..0.061 rows=1 loops=1)
InitPlan 1 (returns $1)
-> Limit (cost=0.00..1.74 rows=1 width=8) (actual time=0.057..0.057 rows=1 loops=1)
-> Index Only Scan using tablename_id on tablename (cost=0.00..38379014.12 rows=22024696 width=8) (actual time=0.055..0.055 rows=1 loops=1)
Index Cond: (id IS NOT NULL)
Heap Fetches: 0
-> WorkTable Scan on t (cost=0.00..17.75 rows=10 width=8) (actual time=0.017..0.017 rows=1 loops=101)
Filter: (id IS NOT NULL)
Rows Removed by Filter: 0
SubPlan 3
-> Result (cost=1.75..1.76 rows=1 width=0) (actual time=0.016..0.016 rows=1 loops=100)
InitPlan 2 (returns $3)
-> Limit (cost=0.00..1.75 rows=1 width=8) (actual time=0.016..0.016 rows=1 loops=100)
-> Index Only Scan using tablename_id on tablename (cost=0.00..12811462.41 rows=7341565 width=8) (actual time=0.015..0.015 rows=1 loops=100)
Index Cond: ((id IS NOT NULL) AND (id > t.id))
Heap Fetches: 0
-> Append (cost=0.00..16.79 rows=101 width=8) (actual time=0.067..1.918 rows=100 loops=1)
-> CTE Scan on t (cost=0.00..2.02 rows=100 width=8) (actual time=0.067..1.899 rows=100 loops=1)
Filter: (id IS NOT NULL)
Rows Removed by Filter: 1
-> Result (cost=13.75..13.76 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
One-Time Filter: $5
InitPlan 5 (returns $5)
-> Index Only Scan using tablename_id on tablename (cost=0.00..13.75 rows=1 width=0) (actual time=0.002..0.002 rows=0 loops=1)
Index Cond: (id IS NULL)
Heap Fetches: 0
Total runtime: 2.040 ms
The full table definition looks like this:
CREATE TABLE tablename
(
source character(25),
id bigint NOT NULL,
time_ timestamp without time zone,
height numeric,
lon numeric,
lat numeric,
distance numeric,
status character(3),
geom geometry(PointZ,4326),
relid bigint
)
WITH (
OIDS=FALSE
);
CREATE INDEX tablename_height
ON public.tablename
USING btree
(height);
CREATE INDEX tablename_geom
ON public.tablename
USING gist
(geom);
CREATE INDEX tablename_id
ON public.tablename
USING btree
(id);
CREATE INDEX tablename_lat
ON public.tablename
USING btree
(lat);
CREATE INDEX tablename_lon
ON public.tablename
USING btree
(lon);
CREATE INDEX tablename_relid
ON public.tablename
USING btree
(relid);
CREATE INDEX tablename_sid
ON public.tablename
USING btree
(source COLLATE pg_catalog."default", id);
CREATE INDEX tablename_source
ON public.tablename
USING btree
(source COLLATE pg_catalog."default");
CREATE INDEX tablename_time
ON public.tablename
USING btree
(time_);
Answer selection:
I took some time in comparing the approaches that were provided. It's at times like this that I wish that more than one answer could be accepted, but in this case, I'm giving the tick to #jjanes. The reason for this is that his solution matches the question as originally posed more closely, and I was able to get some insights as to the form of the required WHERE statement. In the end, the HashAggregate is actually the fastest approach (for me), but that's due to the nature of my data, not any problems with the algorithms. I've attached the EXPLAIN ANALYZE for the different approaches below, and will be giving +1 to both jjanes and joop.
HashAggregate:
HashAggregate (cost=1018669.72..1029722.08 rows=1105236 width=34) (actual time=24164.735..24686.394 rows=1233080 loops=1)
-> Seq Scan on tablename (cost=0.00..908548.48 rows=22024248 width=34) (actual time=0.054..14639.931 rows=22024982 loops=1)
Total runtime: 24787.292 ms
Loose Index Scan modification
CTE Scan on t (cost=13.84..15.86 rows=100 width=112) (actual time=0.916..250311.164 rows=1233080 loops=1)
Filter: (source IS NOT NULL)
Rows Removed by Filter: 1
CTE t
-> Recursive Union (cost=0.00..13.84 rows=101 width=112) (actual time=0.911..249295.872 rows=1233081 loops=1)
-> Limit (cost=0.00..0.04 rows=1 width=34) (actual time=0.910..0.911 rows=1 loops=1)
-> Index Only Scan using tablename_sid on tablename (cost=0.00..965442.32 rows=22024248 width=34) (actual time=0.908..0.908 rows=1 loops=1)
Heap Fetches: 0
-> WorkTable Scan on t (cost=0.00..1.18 rows=10 width=112) (actual time=0.201..0.201 rows=1 loops=1233081)
Filter: (source IS NOT NULL)
Rows Removed by Filter: 0
SubPlan 1
-> Limit (cost=0.00..0.05 rows=1 width=34) (actual time=0.100..0.100 rows=1 loops=1233080)
-> Index Only Scan using tablename_sid on tablename (cost=0.00..340173.38 rows=7341416 width=34) (actual time=0.100..0.100 rows=1 loops=1233080)
Index Cond: (ROW(source, id) > ROW(t.source, t.id))
Heap Fetches: 0
SubPlan 2
-> Limit (cost=0.00..0.05 rows=1 width=34) (actual time=0.099..0.099 rows=1 loops=1233080)
-> Index Only Scan using tablename_sid on tablename (cost=0.00..340173.38 rows=7341416 width=34) (actual time=0.098..0.098 rows=1 loops=1233080)
Index Cond: (ROW(source, id) > ROW(t.source, t.id))
Heap Fetches: 0
Total runtime: 250491.559 ms
Merge Anti Join
Merge Anti Join (cost=0.00..12099015.26 rows=14682832 width=42) (actual time=48.710..541624.677 rows=1233080 loops=1)
Merge Cond: ((src.source = nx.source) AND (src.id = nx.id))
Join Filter: (nx.time_ > src.time_)
Rows Removed by Join Filter: 363464177
-> Index Only Scan using tablename_pkey on tablename src (cost=0.00..1060195.27 rows=22024248 width=42) (actual time=48.566..5064.551 rows=22024982 loops=1)
Heap Fetches: 0
-> Materialize (cost=0.00..1115255.89 rows=22024248 width=42) (actual time=0.011..40551.997 rows=363464177 loops=1)
-> Index Only Scan using tablename_pkey on tablename nx (cost=0.00..1060195.27 rows=22024248 width=42) (actual time=0.008..8258.890 rows=22024982 loops=1)
Heap Fetches: 0
Total runtime: 541750.026 ms
Rather hideous, but this seems to work:
WITH RECURSIVE
t AS (
select a,b from (select a,b from foo order by a,b limit 1) asdf union all
select (select a from foo where (a,b) > (t.a,t.b) order by a,b limit 1),
(select b from foo where (a,b) > (t.a,t.b) order by a,b limit 1)
from t where t.a is not null)
select * from t where t.a is not null;
I don't really understand why the "is not null" checks are needed; where do the NULLs come from in the first place? (Presumably from the scalar subqueries: once no further row exists, each returns NULL, which is also what terminates the recursion.)
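Adapted to the question's table, the same pattern might read like this (a sketch; the two row comparisons line up with the two SubPlans visible in the "Loose Index Scan modification" plan above):
WITH RECURSIVE t AS (
  (SELECT source, id FROM tablename ORDER BY source, id LIMIT 1)
  UNION ALL
  SELECT (SELECT source FROM tablename
          WHERE (source, id) > (t.source, t.id)
          ORDER BY source, id LIMIT 1),
         (SELECT id FROM tablename
          WHERE (source, id) > (t.source, t.id)
          ORDER BY source, id LIMIT 1)
  FROM t
  WHERE t.source IS NOT NULL
)
SELECT * FROM t WHERE source IS NOT NULL;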
DROP SCHEMA zooi CASCADE;
CREATE SCHEMA zooi ;
SET search_path=zooi,public,pg_catalog;
CREATE TABLE tablename
( source character(25) NOT NULL
, id bigint NOT NULL
, time_ timestamp without time zone NOT NULL
, height numeric
, lon numeric
, lat numeric
, distance numeric
, status character(3)
, geom geometry(PointZ,4326)
, relid bigint
, PRIMARY KEY (source,id,time_) -- <<-- Primary key here
) WITH ( OIDS=FALSE);
-- invent some bogus data
INSERT INTO tablename(source,id,time_)
SELECT 'SRC_'|| (gs%10)::text
,gs/10
,gt
FROM generate_series(1,1000) gs
, generate_series('2013-12-01', '2013-12-07', '1hour'::interval) gt
;
Select unique values for two key fields:
VACUUM ANALYZE tablename;
EXPLAIN ANALYZE
SELECT source,id,time_
FROM tablename src
WHERE NOT EXISTS (
SELECT * FROM tablename nx
WHERE nx.source =src.source
AND nx.id = src.id
AND time_ > src.time_
)
;
Generates this plan here (Pg-9.3):
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Hash Anti Join (cost=4981.00..12837.82 rows=96667 width=42) (actual time=547.218..1194.335 rows=1000 loops=1)
Hash Cond: ((src.source = nx.source) AND (src.id = nx.id))
Join Filter: (nx.time_ > src.time_)
Rows Removed by Join Filter: 145000
-> Seq Scan on tablename src (cost=0.00..2806.00 rows=145000 width=42) (actual time=0.010..210.810 rows=145000 loops=1)
-> Hash (cost=2806.00..2806.00 rows=145000 width=42) (actual time=546.497..546.497 rows=145000 loops=1)
Buckets: 16384 Batches: 1 Memory Usage: 9063kB
-> Seq Scan on tablename nx (cost=0.00..2806.00 rows=145000 width=42) (actual time=0.006..259.864 rows=145000 loops=1)
Total runtime: 1197.374 ms
(9 rows)
The hash-joins will probably disappear once the data outgrows the work_mem:
Merge Anti Join (cost=0.83..8779.56 rows=29832 width=120) (actual time=0.981..2508.912 rows=1000 loops=1)
Merge Cond: ((src.source = nx.source) AND (src.id = nx.id))
Join Filter: (nx.time_ > src.time_)
Rows Removed by Join Filter: 184051
-> Index Scan using tablename_sid on tablename src (cost=0.41..4061.57 rows=32544 width=120) (actual time=0.055..250.621 rows=145000 loops=1)
-> Index Scan using tablename_sid on tablename nx (cost=0.41..4061.57 rows=32544 width=120) (actual time=0.008..603.403 rows=328906 loops=1)
Total runtime: 2510.505 ms
Lateral joins give you clean code for selecting multiple columns in nested selects, and no NULL checks are needed, since there are no scalar subqueries in the SELECT clause.
-- Assuming you want to get one '(a,b)' for every 'a'.
with recursive t as (
(select a, b from foo order by a, b limit 1)
union all
(select s.* from t, lateral(
select a, b from foo f
where f.a > t.a
order by a, b limit 1) s)
)
select * from t;
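If the goal is every distinct (a, b) pair rather than one row per a, a row-wise comparison in the lateral condition should do it (a sketch under the same assumptions):
with recursive t as (
  (select a, b from foo order by a, b limit 1)
  union all
  (select s.* from t, lateral(
    select a, b from foo f
    where (f.a, f.b) > (t.a, t.b)
    order by a, b limit 1) s)
)
select * from t;
Recursion stops on its own: once the lateral subquery finds no further pair, it returns zero rows and the join produces nothing.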