Postgres CTE exponentially saving time?

I would love a clear explanation of the behavior below. I would have thought PG would optimize the first query to be just as fast as the second query, which uses a CTE, since it's basically using simple indexes to filter and join on two columns. Everything in the joins and filtering, except "l"."type", has an index. This is on PG 10.
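For context, hypothetical definitions of the kind of indexes described above (index names are omitted and the exact columns are assumptions for illustration, not the actual schema):
CREATE INDEX ON lines (transaction);       -- join to transactions.id
CREATE INDEX ON lines (account);           -- join to accounts.id
CREATE INDEX ON accounts (user_id);        -- the user_id filter
CREATE INDEX ON transactions ("sequence"); -- the "sequence" > ... filter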
The query below takes 20+ minutes.
SELECT
transactions.id::text AS id,
transactions.amount,
transactions.currency::text AS currency,
transactions.external_id::text AS external_id,
transactions.check_sender_balance,
transactions.created,
transactions.type::text AS type,
transactions.sequence,
transactions.legacy_id::text AS legacy_id,
transactions.reference_transaction::text AS reference_transaction,
a.user_id as user_id
FROM transactions
JOIN lines l ON transactions.id = l.transaction
JOIN accounts a ON l.account = a.id
WHERE l.type='DEBIT'
AND "sequence" > 357550718
AND user_id IN ('5bf4ceb45d27fd2985a000000')
But the following, which I suppose explicitly optimizes accounts via a CTE, finishes in ~2-4 minutes. I would have thought PG would optimize the first query to match this kind of performance?
WITH "accts" AS (
SELECT "id", "user_id"
FROM "accounts" WHERE "user_id" IN ('5bf4ceb45d27fd2985a000000')
)
SELECT "transactions"."id"::TEXT AS "id",
"transactions"."amount",
"transactions"."currency"::TEXT AS "currency",
"transactions"."external_id"::TEXT AS "external_id",
"transactions"."check_sender_balance",
"transactions"."created",
"transactions"."type"::TEXT AS "type",
"transactions"."sequence",
"transactions"."legacy_id"::TEXT AS "legacy_id",
"transactions"."reference_transaction"::TEXT AS "reference_transaction",
a."user_id" AS "user_id"
FROM "transactions"
JOIN "lines" "l" ON "transactions"."id" = "l"."transaction"
JOIN "accts" "a" ON "a"."id" = "l"."account"
WHERE "l"."type" = 'DEBIT'
AND "sequence" > 357550718

The difference is where the user_id predicate is applied. In your second query the CTE limits accounts to a specific user_id before the join, whereas the first query only applies that filter in the outer WHERE clause. If there is an index on the user_id field, it is probably helping your performance. You can run an explain plan on both queries separately by adding EXPLAIN to the beginning of each and see how the plans differ; this will help you figure out why there is a difference (a sketch follows below).
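A minimal sketch of that comparison (EXPLAIN ANALYZE actually executes the query, so the slow variant will still take its full runtime):
-- ANALYZE adds actual row counts and timings; BUFFERS adds I/O statistics
EXPLAIN (ANALYZE, BUFFERS)
SELECT transactions.id::text AS id, ...;   -- rest of the first query, unchanged
EXPLAIN (ANALYZE, BUFFERS)
WITH "accts" AS (...) SELECT ...;          -- the second, CTE-based query, unchanged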

Related

Extremely slow planning for query with lot of joins in PostgreSQL

(Postgres v13)
I've got a query which takes 2-5 seconds to plan. The query joins my languages table and translations table to get translation results for multiple languages. When I add even more languages/translations to load, the planning time grows exponentially.
select
key0_.id as col_0_0_,
key0_.name as col_1_0_,
(select
count(screenshot60_.id)
from
screenshot screenshot60_
inner join
key key61_
on screenshot60_.key_id=key61_.id
where
key0_.id=key61_.id) as col_2_0_,
languages2_.tag as col_3_0_,
translatio31_.id as col_4_0_,
translatio31_.text as col_5_0_,
translatio31_.state as col_6_0_,
translatio31_.auto as col_7_0_,
translatio31_.mt_provider as col_8_0_,
languages3_.tag as col_11_0_,
translatio32_.id as col_12_0_,
translatio32_.text as col_13_0_,
translatio32_.state as col_14_0_,
translatio32_.auto as col_15_0_,
translatio32_.mt_provider as col_16_0_,
... the same over and over many times ...
languages30_.tag as col_227_0_,
translatio59_.id as col_228_0_,
translatio59_.text as col_229_0_,
translatio59_.state as col_230_0_,
translatio59_.auto as col_231_0_,
translatio59_.mt_provider as col_232_0_,
0 as col_233_0_,
0 as col_234_0_
from
key key0_
inner join
project project1_
on key0_.project_id=project1_.id
inner join
language languages2_
on project1_.id=languages2_.project_id
and (
languages2_.tag='en-US'
)
inner join
language languages3_
on project1_.id=languages3_.project_id
and (
languages3_.tag='es-PE'
)
... many times the same ...
inner join
language languages30_
on project1_.id=languages30_.project_id
and (
languages30_.tag='es-MX'
)
left outer join
translation translatio31_
on key0_.id=translatio31_.key_id
and (
translatio31_.language_id=languages2_.id
)
... many times the same ...
left outer join
translation translatio59_
on key0_.id=translatio59_.key_id
and (
translatio59_.language_id=languages30_.id
)
where
(
key0_.name in (
'base_administrative_notes.desc'
)
)
and
key0_.project_id=836
group by
key0_.id ,
languages2_.tag ,
translatio31_.id ,
languages3_.tag ,
translatio32_.id ,
... many times the same ...
languages30_.tag ,
translatio59_.id
order by
key0_.name asc nulls first,
key0_.id asc nulls first limit 1
The visualised EXPLAIN ANALYSE result: https://explain.dalibo.com/plan/uWS (the full query can be found there as well as raw output from explain (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)).
I found in other threads that this can be caused by using too many indexes on the tables, but I only have a unique index on my translations table on key_id and language_id columns.
EDIT:
I've found out that setting join_collapse_limit to some value between 1 and 5 reduces the planning time to under 200ms. I don't know if this is the best solution, but I am going to use it as a workaround for now.
As Laurenz Albe explained, the planner is probably trying to reorder the joins to optimize the query.
With n tables, the number of possible join orders is n! (n factorial); with just 10 tables that is already 3,628,800 orderings to consider.
My suggestion is to:
make sure the join order written in your query is already the best one
set that particular parameter to 1 before the query
run the query
reset the parameter (see the sketch below)
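A minimal sketch of that workaround (the comment stands in for your own slow query):
-- With join_collapse_limit = 1 the planner keeps the join order exactly as written in the query
SET join_collapse_limit = 1;   -- use SET LOCAL inside a transaction to scope the change
-- ... run the slow SELECT here ...
RESET join_collapse_limit;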
You can check Alicja's slide deck (from slide 22) where she illustrates that particular problem with examples here: https://www.postgresql.eu/events/pgconfeu2017/sessions/session/1617/slides/9/FromMinutesToMilliseconds.pdf

Select query became very very very slow in postgresql

I have one table which contains 133,072,194 records, and I am trying to execute
SELECT COUNT(test)
FROM mytable
WHERE test = false
but it is taking Execution time: 128320.712 ms
I already have an index on the test column. Could you please let me know what I can optimize or change so that my query becomes faster?
Because of this, my other select query is also not working.
If there are many rows where test is FALSE, you won't be able to get an exact result faster than with a sequential scan, which is slow for big tables.
If only a few rows satisfy the condition, you should create a partial index:
CREATE INDEX mytable_notest_ind ON mytable(id) WHERE NOT test;
(assuming that id is the primary key) and keep mytable autovacuumed often enough that you get an index-only scan.
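A hedged sketch of per-table autovacuum tuning that helps keep the visibility map fresh enough for index-only scans (the threshold is an illustrative assumption, not a recommendation):
-- Trigger autovacuum after roughly 1% of rows change instead of the default 20%
ALTER TABLE mytable SET (autovacuum_vacuum_scale_factor = 0.01);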
But usually exact results for queries like this are not required.
You could calculate an estimated count from the table statistics with a query like this:
SELECT t.reltuples
* (1 - t.nullfrac)
* mcv.freq AS count_false
FROM pg_stats AS s
CROSS JOIN LATERAL unnest(s.most_common_vals::text::boolean[],
s.most_common_freqs) AS mcv(val, freq)
JOIN pg_class AS t
ON s.tablename = t.relname
AND s.schemaname = t.relnamespace::regnamespace::text
WHERE s.tablename = 'mytable'
AND s.attname = 'test'
AND mcv.val = FALSE;
That would be very fast.
See my blog post for more considerations about the speed of SELECT count(*).

Ecto JOIN complications for append-only table query

I'm trying to query an Ecto table with append-only semantics, so I'd like the most recent version of a complete row for a given ID. The technique is described here, but in short: I want to JOIN a table on itself with a subquery that fetches the most recent time for an ID. In SQL this would look like:
SELECT r.*
FROM rules AS r
JOIN (
SELECT id, MAX(inserted_at) AS inserted_at FROM rules GROUP BY id
) AS recent_rules
ON (
recent_rules.id = r.id
AND recent_rules.inserted_at = r.inserted_at)
I'm having trouble expressing this in Ecto. I tried something like this:
maxes =
from(m in Rule,
select: {m.id, max(m.inserted_at)},
group_by: m.id)
from(r in Rule,
join: m in ^maxes, on: r.id == m.id and r.inserted_at == m.inserted_at)
But trying to run this, I hit a restriction:
queries in joins can only have where conditions in query
suggesting maxes must just be a SELECT _ FROM _ WHERE form.
If I try switching maxes and Rule in the JOIN:
maxes =
from(m in Rule,
select: {m.id, max(m.inserted_at)},
group_by: m.id)
from(m in maxes,
join: r in Rule, on: r.id == m.id and r.inserted_at == m.inserted_at)
then I'm not able to SELECT the whole row, just id and MAX(inserted_at).
Does anyone know how to do this JOIN? Or a better way to query append-only in Ecto? Thanks 🙂
Doing m in ^maxes is not running a subquery; it is either query composition (if in a from) or converting the query into join conditions (if in a join). In both cases, you are operating on the same query. Given your initial query, I believe you want subqueries.
Also note that a subquery requires the select to return a map, so we can refer to the fields later on. Something along these lines should work:
maxes =
from(m in Rule,
select: %{id: m.id, inserted_at: max(m.inserted_at)},
group_by: m.id)
from(r in Rule,
join: m in ^subquery(maxes), on: r.id == m.id and r.inserted_at == m.inserted_at)
PS: I have pushed a commit to Ecto that clarifies the error message in cases like yours.
invalid query was interpolated in a join.
If you want to pass a query to a join, you must either:
1. Make sure the query only has `where` conditions (which will be converted to ON clauses)
2. Or wrap the query in a subquery by calling subquery(query)

SQLite: Optimize ORDER BY Query

All,
I am an iOS developer. We currently have about 2.5 lakh (250,000) records stored in the database, and we have implemented search functionality on top of them. Below is the query we are using.
select CustomerMaster.CustomerName ,CustomerMaster.CustomerNumber,
CallActivityList.CallActivityID,CallActivityList.CustomerID,CallActivityList.UserID,
CallActivityList.ActivityType,CallActivityList.Objective,CallActivityList.Result,
CallActivityList.Comments,CallActivityList.CreatedDate,CallActivityList.UpdateDate,
CallActivityList.CallDate,CallActivityList.OrderID,CallActivityList.SalesPerson,
CallActivityList.GratisProduct,CallActivityList.CallActivityDeviceID,
CallActivityList.IsExported,CallActivityList.isDeleted,CallActivityList.TerritoryID,
CallActivityList.TerritoryName,CallActivityList.Hours,UserMaster.UserName,
(FirstName ||' '||LastName) as UserNameFull,UserMaster.TerritoryID as UserTerritory
from
CallActivityList
inner join CustomerMaster
ON CustomerMaster.DeviceCustomerID = CallActivityList.CustomerID
inner Join UserMaster
On UserMaster.UserID = CallActivityList.UserID
where
(CustomerMaster.CustomerName like '%T%' or
CustomerMaster.CustomerNumber like '%T%' or
CallActivityList.ActivityType like '%T%' or
CallActivityList.TerritoryName like '%T%' or
CallActivityList.SalesPerson like '%T%' )
and CallActivityList.IsExported!='2' and CallActivityList.isDeleted != '1'
order by
CustomerMaster.CustomerName
limit 50 offset 0
Without ORDER BY, the query returns results in 0.5 seconds. But when I attach the ORDER BY, the time increases to 2 seconds.
I have tried indexing, but it does not make any noticeable difference. Can anyone please help? If not through the query, how else can we make this fast?
Thanks in advance.
This is due to the LIMIT. Without ORDER BY, only 50 records have to be processed, and any 50 will be returned. With ORDER BY, all the records have to be processed in order to determine which ones are the first 50 (in order).
The problem is that the ORDER BY is performed on a joined table. Otherwise you could apply the limit to the main table (I assume it is CallActivityList) first and then join.
SELECT ...
FROM
(SELECT ... FROM CallActivityList ORDER BY ... LIMIT 50 OFFSET 0) AS CAL
INNER JOIN CustomerMaster ON ...
INNER JOIN UserMaster ON ...
ORDER BY ...
This would reduce the cost of joining the tables. If this is not possible, try at least to join CallActivityList with CustomerMaster first, apply the limit to that result, and finally join with UserMaster.
SELECT ...
FROM
(SELECT ...
FROM
CallActivityList
INNER JOIN CustomerMaster ON ...
ORDER BY CustomerMaster.CustomerName
LIMIT 50 OFFSET 0) AS ActCust
INNER JOIN UserMaster ON ...
ORDER BY ...
Also, in order to make the ordering unambiguous, I would include more columns in the ORDER BY, like the call date and call ID, as sketched below. Otherwise this could result in inconsistent paging.
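A minimal sketch of such a tie-breaking ORDER BY, using columns that already appear in the query (assuming CallActivityID is unique):
ORDER BY
CustomerMaster.CustomerName,
CallActivityList.CallDate,          -- keeps rows with the same name in a stable order
CallActivityList.CallActivityID     -- a unique column makes the ordering fully deterministic
LIMIT 50 OFFSET 0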

Sequential scan rather than index scan

I have a bunch of tables in PostgreSQL, and I run a query as follows:
SELECT DISTINCT ON ...some stuff...
FROM "rent_flats" INNER JOIN "rent_flats_linked_users"
ON "rent_flats_linked_users"."rent_flat_id" = "rent_flats"."id"
INNER JOIN "users"
ON "users"."id" = "rent_flats_linked_users"."user_id"
INNER JOIN "owners"
ON "owners"."id" = "users"."profile_id" AND "users"."profile_type" = 'Owner'
INNER JOIN "phone_numbers"
ON "phone_numbers"."person_id" = "owners"."id" AND "phone_numbers"."person_type" = 'Owner'
INNER JOIN "phone_number_categories"
ON "phone_number_categories"."id" = "phone_numbers"."phone_number_category_id"
INNER JOIN "localities"
ON "localities"."id" = "rent_flats"."locality_id"
INNER JOIN "regions"
ON "regions"."id" = "localities"."region_id"
INNER JOIN "cities"
ON "cities"."id" = "regions"."city_id"
INNER JOIN "property_types"
ON "property_types"."id" = "rent_flats"."property_type_id"
INNER JOIN "apartment_types"
ON "apartment_types"."id" = "rent_flats"."apartment_type_id"
WHERE "rent_flats"."status" = 3
AND (((extract(epoch from age(current_date,rent_flats.date_added))/86400)::int) IN (cities.short_period,cities.long_period))
AND (phone_number_categories.name IN ('SMS','SMS & Mobile'))
ORDER BY rf_id, phone_numbers.priority ASC
Note: The rent_flats table contains around 5 million rows, rent_flats_linked_users contains around 600k rows, and users contains 350k rows. The other tables are small.
The query takes about 6.8 seconds to execute, and the EXPLAIN ANALYZE output shows that around 50% of the total time goes into sequential scans of the rent_flats, users, and rent_flats_linked_users tables, and another 30% into hash joins.
With enable_seqscan set to off, the query takes even longer, ~11 seconds (in this case the Hash and Hash Join nodes take up to 97.5% of the time).
Here's the EXPLAIN ANALYZE query plan.
I have put indexes on the fields involved in the inner joins as well as on the fields involved in the filters, like phone_numbers.priority and cities.short_period and cities.long_period, but I still get sequential scans. What could the reasons be, and what are possible solutions to speed up the query?
I suspect that if there is a part of that query worth optimising then it is this:
(((extract(epoch from age(current_date,rent_flats.date_added))/86400)::int) IN (cities.short_period,cities.long_period))
You really need to turn that into something like:
rent_flats.date_added in (...)
Then you can index date_added, and maybe index (date_added, status).
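A hedged sketch of that rewrite, assuming date_added is a date column and short_period/long_period hold integer day counts (both are assumptions about the schema):
AND rent_flats.date_added IN (current_date - cities.short_period,
current_date - cities.long_period)
-- plus a supporting index along the lines suggested above
CREATE INDEX ON rent_flats (date_added, status);
Written this way the comparison is against the bare indexed column, so a nested-loop join with cities on the outer side can use a parameterized index scan.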
The next step would be to make sure that the join columns are indexed.