Speeding up a query where one of the conditions is not met - PostgreSQL

I have a simple query on a large table:
UPDATE a
SET xyz = b.xyz
FROM b
WHERE a.xyz IS NULL AND b.id=a.id
None of a.xyz are NULL any more, but the query takes almost as long to execute (5 minutes) as it did when all of a.xyz were NULL.
The following gets executed in less than a second
UPDATE a
SET xyz = 1
WHERE a.xyz IS NULL
So I am wondering: is there a way to speed up the first query when most of a.xyz are not NULL?
P.S. Clarifying: yes, indexes on a.xyz, b.xyz, a.id, and b.id are present.
P.S.2. After adding a composite index on (a.xyz, a.id) and a partial index on a.xyz WHERE a.xyz IS NULL, the time went down to 83 seconds. But there must be a way to bring it down to less than a second, since there are no records to update and SELECT COUNT(*) FROM a WHERE a.xyz IS NULL executes in less than a second.
P.S.3. SOLVED. The problem was another trigger inadvertently firing on update. The composite index on (a.xyz, a.id) and the partial index on a.xyz WHERE a.xyz IS NULL seem to have solved the rest of the speed issue.
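For anyone hitting the same issue: one way to spot unexpected triggers is to list the user-level triggers on the table from the pg_trigger catalog (a minimal sketch; a is the table from the query above):
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'a'::regclass
  AND NOT tgisinternal;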

Make sure you have indexes on a.id and b.id.
Run your query with EXPLAIN to see the plan the database engine uses. You want to see an Index Scan; if you see a Seq Scan instead, an index is missing or not being used.
EXPLAIN ANALYZE
UPDATE a
SET xyz = b.xyz
FROM b
WHERE a.xyz IS NULL AND b.id=a.id
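Note that EXPLAIN ANALYZE actually executes the UPDATE. To measure it without keeping the changes, wrap it in a transaction and roll back:
BEGIN;
EXPLAIN ANALYZE
UPDATE a
SET xyz = b.xyz
FROM b
WHERE a.xyz IS NULL AND b.id = a.id;
ROLLBACK; -- keeps the plan output, discards the update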
Also, if you create a composite index on (a.id, a.xyz), you could get even better performance.
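In DDL form (the index name is illustrative):
CREATE INDEX ix_a_id_xyz ON a (id, xyz);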
Here is an example:
EXPLAIN ANALYZE
SELECT *
FROM projects
JOIN images USING (project_id)
WHERE project_id = 1
You can see that projects has an index (project_pk) that is used, but images has no index on project_id, so it falls back to a Seq Scan:
"Nested Loop (cost=0.15..17.31 rows=273 width=92) (actual time=0.029..0.110 rows=320 loops=1)"
" -> Index Scan using project_pk on projects (cost=0.15..8.17 rows=1 width=52) (actual time=0.012..0.013 rows=1 loops=1)"
" Index Cond: (project_id = 1)"
" -> Seq Scan on images (cost=0.00..6.41 rows=273 width=44) (actual time=0.007..0.049 rows=320 loops=1)"
" Filter: (project_id = 1)"
"Planning time: 0.172 ms"
"Execution time: 0.161 ms"

Create a partial index on a.xyz:
create index a_xyz_null on a (xyz)
where xyz is null;
Because the UPDATE's condition a.xyz IS NULL matches the index predicate exactly, the index only contains the rows that still need updating; once almost nothing is NULL it is nearly empty, and finding the remaining rows is very cheap.
https://www.postgresql.org/docs/9.6/static/indexes-partial.html

Related

Postgres / PostGIS query optimisation

I've put together a query which works; I'm just wanting to learn how I can optimise it. The idea of the query is that, given a particular row in table A, it takes its geometry and finds in table B the closest matching geometry, filtered by certain criteria.
SELECT a.id,
       closest_pt.dist,
       closest_pt.name,
       closest_pt.meters
FROM "hex-hex-uk" a
CROSS JOIN LATERAL
    (SELECT a.id,
            b.name AS name,
            a.geom <-> b.way AS dist,
            st_distance(a.geom, b.way, FALSE) AS meters
     FROM "osm-polygons-uk" b
     WHERE (b.landuse = 'industrial'
            OR b.man_made = 'works')
       AND st_area(b.way, FALSE) > 15000
     ORDER BY a.geom <-> b.way
     LIMIT 1) AS closest_pt
WHERE a.id = 'abc'
Currently the query executes in 30-90 ms, but I need to perform millions of these lookups. I tried swapping a.id='abc' for a.id IN ('abc','def','ghi',...) and looking up 10000 at a time, but that takes 10+ minutes, which doesn't really add up.
Here's the query plan as it stands:
" -> Index Scan using ""hex-hex-uk_id_idx"" on ""hex-hex-uk"" a (cost=0.43..8.45 rows=1 width=168) (actual time=0.029..0.046 rows=1 loops=1)"
" Index Cond: ((id)::text = '89195c849a3ffff'::text)"
" -> Limit (cost=0.28..536.88 rows=1 width=43) (actual time=33.009..33.062 rows=1 loops=1)"
" -> Index Scan using ""idx_osm-polygons-uk_geom"" on ""osm-polygons-uk"" b (cost=0.28..4935623.77 rows=9198 width=43) (actual time=32.992..33.001 rows=1 loops=1)"
" Order By: (way <-> a.geom)"
" Filter: (((landuse = 'industrial'::text) OR (man_made = 'works'::text)) AND (st_area((way)::geography, false) > '15000'::double precision))"
" Rows Removed by Filter: 7"
"Planning Time: 0.142 ms"
"Execution Time: 33.311 ms"
What would be the process for trying to optimise a query like this? I learn best by example, hence I think it makes sense to post on here rather than just reading about optimisation techniques.
Thanks!
CREATE TABLE "osm-polygons-uk" (id bigint,name text,landuse text, man_made text,way geometry);
CREATE INDEX "idx_osm-polygons-uk_geom" ON "osm-polygons-uk" USING gist (way);
ALTER TABLE "osm-polygons-uk" ADD PRIMARY KEY (id);
CREATE TABLE "hex-hex-uk" (id varchar(15), geom geometry);
CREATE UNIQUE INDEX ON "hex-hex-uk" (id);
Some great tips above. The comment about the indexed materialized view led me to create a view with only the filtered data. It cut the number of rows down from 1 million to ~20000, and it executed in a couple of seconds.
From there I tweaked the original query, and it ended up blasting through 2,400,000 rows in a couple of minutes. A huge improvement on the original 13 hours it was going to take to run!
SELECT a.id, closest_pt.name, ST_Distance(a.geom, closest_pt.way, false) AS meters
FROM "hex-hex-uk" a
CROSS JOIN LATERAL
    (SELECT b.id,
            b.name AS name,
            a.geom <-> b.way AS dist,
            b.way AS way
     FROM "tmp_industrial" b
     ORDER BY dist ASC
     LIMIT 1) AS closest_pt
WHERE a.id IN ('abc','def','ghi',...);
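For reference, a minimal sketch of how the filtered data can be materialized (the exact definition isn't shown above, so the column list and index are assumptions based on the original query):
CREATE MATERIALIZED VIEW "tmp_industrial" AS
SELECT id, name, way
FROM "osm-polygons-uk"
WHERE (landuse = 'industrial' OR man_made = 'works')
  AND st_area(way, false) > 15000;
CREATE INDEX ON "tmp_industrial" USING gist (way);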
Thanks for the tips; they give me a bit of a guide as to how to go about debugging query performance.

Optimizing indexes for query on large table with multiple joins

We have an images table containing around ~25 million records, and when I query it based on values from several joins, the planner's row-count estimates are quite different from the actual results. We have other queries that are roughly the same without all of the joins, and they are much faster. I would like to know what steps I can take to debug and optimize the query. Also, is it better to have one index covering all columns used in the joins and the WHERE clause, or multiple indexes, one for each join column, plus another with all of the fields in the WHERE clause?
The query:
EXPLAIN ANALYZE
SELECT "images".* FROM "images"
INNER JOIN "locations" ON "locations"."id" = "images"."location_id"
INNER JOIN "users" ON "images"."creator_id" = "users"."id"
INNER JOIN "user_groups" ON "users"."id" = "user_groups"."user_id"
WHERE "images"."deleted_at" IS NULL
AND "user_groups"."group_id" = 7
AND "images"."creator_type" = 'User'
AND "images"."status" = 2
AND "locations"."active" = TRUE
ORDER BY date_uploaded DESC
LIMIT 50
OFFSET 0;
The explain:
Limit (cost=25670.61..25670.74 rows=50 width=585) (actual time=1556.250..1556.278 rows=50 loops=1)
-> Sort (cost=25670.61..25674.90 rows=1714 width=585) (actual time=1556.250..1556.264 rows=50 loops=1)
Sort Key: images.date_uploaded
Sort Method: top-N heapsort Memory: 75kB
-> Nested Loop (cost=1.28..25613.68 rows=1714 width=585) (actual time=0.097..1445.777 rows=160886 loops=1)
-> Nested Loop (cost=0.85..13724.04 rows=1753 width=585) (actual time=0.069..976.326 rows=161036 loops=1)
-> Nested Loop (cost=0.29..214.87 rows=22 width=8) (actual time=0.023..0.786 rows=22 loops=1)
-> Seq Scan on user_groups (cost=0.00..95.83 rows=22 width=4) (actual time=0.008..0.570 rows=22 loops=1)
Filter: (group_id = 7)
Rows Removed by Filter: 5319
-> Index Only Scan using users_pkey on users (cost=0.29..5.40 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=22)
Index Cond: (id = user_groups.user_id)
Heap Fetches: 18
-> Index Scan using creator_date_uploaded_Where_pub_not_del on images (cost=0.56..612.08 rows=197 width=585) (actual time=0.062..40.992 rows=7320 loops=22)
Index Cond: ((creator_id = users.id) AND ((creator_type)::text = 'User'::text) AND (status = 2))
-> Index Scan using locations_pkey on locations (cost=0.43..6.77 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=161036)
Index Cond: (id = images.location_id)
Filter: active
Rows Removed by Filter: 0
Planning time: 1.694 ms
Execution time: 1556.352 ms
We are running Postgres 9.4 on an RDS db.m4.large instance.
As for the query itself, the only thing you can do is skip the users table. From the EXPLAIN you can see that users is only touched with an Index Only Scan, never read for data, so user_groups.user_id can be joined directly to images.creator_id. Technically, your query could look like this:
SELECT images.* FROM images
INNER JOIN locations ON locations.id = images.location_id
INNER JOIN user_groups ON images.creator_id = user_groups.user_id
WHERE images.deleted_at IS NULL
AND user_groups.group_id = 7
AND images.creator_type = 'User'
AND images.status = 2
AND locations.active = TRUE
ORDER BY date_uploaded DESC
OFFSET 0 LIMIT 50
The rest is about indexes. locations seems to hold very little data, so optimization there will gain you nothing. user_groups, on the other hand, could benefit from an index ON (user_id) WHERE group_id = 7, or ON (group_id, user_id). This should remove some extra filtering on table content.
-- Option 1
CREATE INDEX ix_usergroups_userid_groupid7
ON user_groups (user_id)
WHERE group_id = 7;
-- Option 2
CREATE INDEX ix_usergroups_groupid_userid
ON user_groups (group_id, user_id);
Of course, the biggest thing here is images. Currently, the planner does an index scan on creator_date_uploaded_Where_pub_not_del, which I suspect does not fully match the requirements. Here, multiple options come to mind depending on your usage pattern, from one where the search parameters are rather common:
-- Option 1
CREATE INDEX ix_images_creatorid_typeuser_status2_notdel
ON images (creator_id)
WHERE creator_type = 'User' AND status = 2 AND deleted_at IS NULL;
to one with completely dynamic parameters:
-- Option 2
CREATE INDEX ix_images_status_creatortype_creatorid_notdel
ON images (status, creator_type, creator_id)
WHERE deleted_at IS NULL;
The first index is preferable as it is smaller (values are filtered out rather than indexed).
To summarize, unless you are limited by memory (or other factors), I would add indexes on user_groups and images. Correct choice of indexes must be confirmed empirically, as multiple options are usually available and the situation depends on statistical distribution of data.
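One cheap way to confirm a candidate empirically is to build it inside a transaction, inspect the plan, and roll it back (a plain CREATE INDEX blocks writes while it builds, so on a 25-million-row production table do this on a copy):
BEGIN;
CREATE INDEX ix_images_creatorid_typeuser_status2_notdel
ON images (creator_id)
WHERE creator_type = 'User' AND status = 2 AND deleted_at IS NULL;
EXPLAIN ANALYZE
SELECT images.* FROM images
INNER JOIN locations ON locations.id = images.location_id
INNER JOIN user_groups ON images.creator_id = user_groups.user_id
WHERE images.deleted_at IS NULL
AND user_groups.group_id = 7
AND images.creator_type = 'User'
AND images.status = 2
AND locations.active = TRUE
ORDER BY date_uploaded DESC
LIMIT 50;
ROLLBACK; -- discard the candidate index after inspecting the plan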
Here's a different approach:
I think one of the problems is that the join produces an estimated 1714 rows (160886 in reality), all of which are sorted before just the first 50 are returned. We'll probably want to avoid that extra join work as early as possible.
For this, we'll try an index ordered by date_uploaded first, and then filter by the rest of the columns. We also add creator_id to allow an index-only scan:
CREATE INDEX ix_images_sort_test
ON images (date_uploaded desc, creator_id)
WHERE creator_type = 'User' AND status = 2 AND deleted_at IS NULL;
You may also use the generic (unfiltered) version, but it should perform somewhat worse: since the first column is date_uploaded, the whole index may have to be read to filter on the rest of the columns.
CREATE INDEX ix_images_sort_test
ON images (date_uploaded desc, status, creator_type, creator_id)
WHERE deleted_at IS NULL;
The pity here is that you are also filtering by group_id, which is in another table. But even so, it may be worth trying this approach.
Also, verify that all joined tables have an index on the foreign key.
So, add an index on user_groups covering (user_id, group_id).
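In DDL form (the name is illustrative):
CREATE INDEX ix_usergroups_userid_groupid
ON user_groups (user_id, group_id);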
Also, as Boris noticed, you may remove the "Users" join.

Postgres execution plan order of where clause and order by

I have a very simple SQL:
select * from email.email_task where acquire_time < now() and state IN ('CREATED', 'RELEASED') order by creation_time asc limit 1;
I have 2 indexes created:
An index on (state)
An index on (state, acquire_time, creation_time)
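In DDL form, these would be something like (index names are illustrative):
CREATE INDEX ix_email_task_state
ON email.email_task (state);
CREATE INDEX ix_email_task_state_acquire_creation
ON email.email_task (state, acquire_time, creation_time);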
Ideally, I think Postgres should pick the 2nd one, since it matches every column required by this SQL.
However, the execution plan shows otherwise; it uses neither of the indexes:
Limit (cost=187404.36..187404.36 rows=1 width=743)
-> Sort (cost=187404.36..190753.58 rows=1339690 width=743)
Sort Key: creation_time
-> Seq Scan on email_task (cost=0.00..180705.91 rows=1339690 width=743)
Filter: (((state)::text = 'CREATED'::text) AND (acquire_time < now()))
I understand that if the number of rows returned approaches something like 10% of the total, the planner will pick a Seq Scan over an Index Scan (as explained in Why does PostgreSQL perform sequential scan on indexed column?). So that's why index 1 is not picked.
What I don't understand is why index 2 is not picked, since it matches all the columns.
Then I tried a 3rd index:
An index on (creation_time, acquire_time, state)
And this time it uses index 3 (I added the index, named perf_1, on another smaller database, because the original table has 2 million rows and building the index takes too much time):
Limit (cost=0.29..0.36 rows=1 width=75) (actual time=0.043..0.043 rows=1 loops=1)
-> Index Scan using perf_1 on email_task (cost=0.29..763.76 rows=9998 width=75) (actual time=0.042..0.042 rows=1 loops=1)
Index Cond: (acquire_time < now())
Filter: ((state)::text = ANY ('{CREATED,RELEASED}'::text[]))
It seems that the Postgres planner is satisfying the ORDER BY clause first and the WHERE clause second, which is a little counter-intuitive.
Is my understanding correct, or are there other factors that impact the Postgres planner?
Thanks in advance.

Why does PostgreSQL not use the PK index?

If I want to select 0.5% of the rows, or even 5% of the rows, from the following table via the PK, the query planner correctly chooses to use the PK index. Here is the table:
create table weather as
with numbers as(
select generate_series as id from generate_series(0,1048575))
select id,
50 + 50*sin(id) as temperature_in_f,
50 + 50*sin(id) as humidity_in_percent
from numbers;
alter table weather
add constraint pk_weather primary key(id);
vacuum analyze weather;
The stats are up-to-date, and the following query does use the PK index:
explain analyze select sum(w.id), sum(humidity_in_percent), count(*)
from weather as w
where w.id between 1 and 66720;
Suppose, however, that we need to join this table with another, much smaller, one:
create table lightnings
as
select id as weather_id
from weather
where humidity_in_percent between 99.99 and 100;
alter table lightnings
add constraint pk_lightnings
primary key(weather_id);
analyze lightnings;
Here is my join, in four logically equivalent forms:
explain analyze select sum(w.id), count(*) from weather as w
where w.humidity_in_percent between 99.99 and 100
and exists(select * from lightnings as l
where l.weather_id=w.id);
explain analyze select sum(w.id), count(*)
from weather as w
join lightnings as l
on l.weather_id=w.id
where w.humidity_in_percent between 99.99 and 100;
explain analyze select sum(w.id), count(*)
from lightnings as l
join weather as w
on l.weather_id=w.id
where w.humidity_in_percent between 99.99 and 100;
-- replaced explicit join with where clause
explain analyze select sum(w.id), count(*)
from lightnings as l, weather as w
where w.humidity_in_percent between 99.99 and 100
and l.weather_id=w.id;
Unfortunately the query planner resorts to scanning the whole weather table:
"Aggregate (cost=22645.68..22645.69 rows=1 width=4) (actual time=167.427..167.427 rows=1 loops=1)"
" -> Hash Join (cost=180.12..22645.52 rows=32 width=4) (actual time=2.500..166.444 rows=6672 loops=1)"
" Hash Cond: (w.id = l.weather_id)"
" -> Seq Scan on weather w (cost=0.00..22407.64 rows=5106 width=4) (actual time=0.013..158.593 rows=6672 loops=1)"
" Filter: ((humidity_in_percent >= 99.99::double precision) AND (humidity_in_percent <= 100::double precision))"
" Rows Removed by Filter: 1041904"
" -> Hash (cost=96.72..96.72 rows=6672 width=4) (actual time=2.479..2.479 rows=6672 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 235kB"
" -> Seq Scan on lightnings l (cost=0.00..96.72 rows=6672 width=4) (actual time=0.009..0.908 rows=6672 loops=1)"
"Planning time: 0.326 ms"
"Execution time: 167.581 ms"
The query planner's estimate of how many rows will be selected from the weather table is rows=5106, which is more or less close to the exact value of 6672. If I select this small number of rows from weather via id, the PK index is used. If I select the same amount via a join with another table, the query planner goes for scanning the whole table.
What am I missing?
select version()
"PostgreSQL 9.4.0"
Edit: if I remove the condition on humidity, the query planner correctly recognizes that the condition on weather.id is quite selective, and chooses to use the index on PK:
explain analyze select sum(w.id), count(*) from weather as w
where exists(select * from lightnings as l
where l.weather_id=w.id);
"Aggregate (cost=14677.84..14677.85 rows=1 width=4) (actual time=37.200..37.200 rows=1 loops=1)"
" -> Nested Loop (cost=0.42..14644.48 rows=6672 width=4) (actual time=0.022..36.189 rows=6672 loops=1)"
" -> Seq Scan on lightnings l (cost=0.00..96.72 rows=6672 width=4) (actual time=0.011..0.868 rows=6672 loops=1)"
" -> Index Only Scan using pk_weather on weather w (cost=0.42..2.17 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=6672)"
" Index Cond: (id = l.weather_id)"
" Heap Fetches: 0"
"Planning time: 0.321 ms"
"Execution time: 37.254 ms"
Yet adding a condition totally confuses the query planner.
Expecting the optimiser to use an index on the PK of the larger table implies that you expect the query to be driven from the smaller table. Of course, you know that the rows that the smaller table will join to in the larger one are the same as those selected by the predicate on it, but the optimiser does not.
Look at the line on the plan:
Hash Join  (cost=180.12..22645.52 rows=32 width=4) (actual time=2.500..166.444 rows=6672 loops=1)
It expects 32 rows to result from the join, but 6672 actually result.
Anyway, it pretty much has these options:
A full scan on the smaller table, and an index lookup on the larger, with the predicate being used to filter out rows subsequent to the join (and expecting most of the rows to then be filtered out).
A full scan on both tables, with rows being removed by the predicate on the larger table, and a hash join of the result.
A scan of the larger table with rows being removed by the predicate, and an index lookup on the smaller table that may fail to find a value.
The second of these has been judged to be the lowest cost, and I think it is correct to do so based on the evidence it has, as hash joins are very efficient for joining many rows.
Of course it would probably be more efficient to place an index on weather(humidity_in_percent,id) in this particular case, but I suspect that this is a modified version of your real situation (the sum of the id column?) so specific advice may not be applicable.
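A minimal sketch of that index (the name is illustrative):
create index ix_weather_humidity_id
on weather (humidity_in_percent, id);
Including id means the query above can be answered from the index alone, so the humidity predicate no longer forces a full scan of weather.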
I believe the difference you're seeing between the first query, which uses the index, and the other 3, which don't, is in the where clause.
In the first query, your where clause is on w.id, which is indexed.
In the other 3, the effective where clause is on w.humidity_in_percent. I tested the following ...
create index wh_idx on weather(humidity_in_percent);
explain analyse select sum(w.id), count(*) from weather as w
where w.humidity_in_percent between 99.99 and 100
and exists(select * from lightnings as l
where l.weather_id=w.id);
and get a much better plan. I tried to post the actual plan returned, but I'm having trouble formatting it for proper display, sorry.

How to optimize this Postgres count query

EXPLAIN ANALYZE
SELECT count(*)
FROM "businesses"
WHERE (
source = 'facebook'
OR EXISTS(
SELECT *
FROM provider_business_map pbm
WHERE
pbm.hotstepper_business_id=businesses.id
AND pbm.provider_name='facebook'
)
);
PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=233538965.74..233538965.75 rows=1 width=0) (actual time=116169.720..116169.721 rows=1 loops=1)
-> Seq Scan on businesses (cost=0.00..233521096.48 rows=7147706 width=0) (actual time=11.284..116165.646 rows=3693 loops=1)
Filter: (((source)::text = 'facebook'::text) OR (alternatives: SubPlan 1 or hashed SubPlan 2))
SubPlan 1
-> Index Scan using idx_provider_hotstepper_business on provider_business_map pbm (cost=0.00..16.29 rows=1 width=0) (never executed)
Index Cond: (((provider_name)::text = 'facebook'::text) AND (hotstepper_business_id = businesses.id))
SubPlan 2
-> Index Scan using idx_provider_hotstepper_business on provider_business_map pbm (cost=0.00..16.28 rows=1 width=4) (actual time=0.045..5.685 rows=3858 loops=1)
Index Cond: ((provider_name)::text = 'facebook'::text)
Total runtime: 116169.820 ms
(10 rows)
This query takes over a minute, and it's doing a count that results in only ~3000 rows. The bottleneck seems to be the sequential scan, but I'm not sure what index I would need to optimize this. It's also worth noting that I haven't tuned Postgres, so there may be tuning worth considering; however, my DB is 15 GB and I don't plan on trying to fit all of that in memory anytime soon, so I'm not sure changing RAM-related settings would help a lot.
OR is notorious for lousy performance. Try splitting it into a union of two completely separate queries on the two tables:
SELECT COUNT(*) FROM (
SELECT id
FROM businesses
WHERE source = 'facebook'
UNION -- union makes the ids unique in the result
SELECT hotstepper_business_id
FROM provider_business_map
WHERE provider_name = 'facebook'
AND hotstepper_business_id IS NOT NULL
) x
If hotstepper_business_id cannot be NULL, you may remove the line
AND hotstepper_business_id IS NOT NULL
If you want the whole business rows, you could simply wrap the above query in an IN (...):
SELECT * FROM businesses
WHERE ID IN (
-- above inner query
)
But a much better performing query would modify the above to use a join:
SELECT *
FROM businesses
WHERE source = 'facebook'
UNION
SELECT b.*
FROM provider_business_map m
JOIN businesses b
ON b.id = m.hotstepper_business_id
WHERE provider_name = 'facebook'
I'd at least try rewriting the dependent subquery as:
SELECT COUNT(DISTINCT b.*)
FROM businesses b
LEFT JOIN provider_business_map pbm
ON b.id=pbm.hotstepper_business_id
WHERE b.source = 'facebook'
OR pbm.provider_name = 'facebook';
Unless I'm misreading something, an index on businesses.id exists, but make sure there are also indexes on provider_business_map.hotstepper_business_id, businesses.source, and provider_business_map.provider_name for best performance.
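For example (the name is illustrative; note the plan above shows idx_provider_hotstepper_business already covering (provider_name, hotstepper_business_id)):
CREATE INDEX ix_pbm_hotstepper_business_id
ON provider_business_map (hotstepper_business_id);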
create index index_name on businesses(source);
Since only 3,693 rows match out of more than 7 million, it will probably use the index. Do not forget to
analyse businesses;