PostgreSQL 9.x: Index to optimize `xpath_exists` (XMLEXISTS) queries

We have queries of the form
select sum(acol)
from some_table
where xpath_exists('/Root/KeyValue[Key="val"]/Value//text()', xmlcol)
What index can be built to speed up the where clause?
A btree index created using
create index idx_01 on some_table using btree (xpath_exists('/Root/KeyValue[Key="val"]/Value//text()', xmlcol))
does not seem to be used at all.
EDIT
Setting enable_seqscan to off, the query using xpath_exists is much faster (by an order of magnitude), and the plan clearly shows the corresponding index being used (the btree index built with xpath_exists).
Any clue why PostgreSQL would not use the index and instead attempts a much slower sequential scan?
Since I do not want to disable sequential scans globally, I am back to square one and happily welcome suggestions.
EDIT 2 - Explain plans
See below. The cost of the first plan (seqscan off) is slightly higher, but the processing time is much faster.
b2box=# set enable_seqscan=off;
SET
b2box=# explain analyze
Select count(*)
from B2HEAD.item
where cluster = 'B2BOX' and ( ( xpath_exists('/MessageInfo[FinalRecipient="ABigBank"]//text()', content) ) ) offset 0 limit 1;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=22766.63..22766.64 rows=1 width=0) (actual time=606.042..606.042 rows=1 loops=1)
-> Aggregate (cost=22766.63..22766.64 rows=1 width=0) (actual time=606.039..606.039 rows=1 loops=1)
-> Bitmap Heap Scan on item (cost=1058.65..22701.38 rows=26102 width=0) (actual time=3.290..603.823 rows=4085 loops=1)
Filter: (xpath_exists('/MessageInfo[FinalRecipient="ABigBank"]//text()'::text, content, '{}'::text[]) AND ((cluster)::text = 'B2BOX'::text))
-> Bitmap Index Scan on item_counter_01 (cost=0.00..1052.13 rows=56515 width=0) (actual time=2.283..2.283 rows=4085 loops=1)
Index Cond: (xpath_exists('/MessageInfo[FinalRecipient="ABigBank"]//text()'::text, content, '{}'::text[]) = true)
Total runtime: 606.136 ms
(7 rows)
plan on explain.depesz.com
b2box=# set enable_seqscan=on;
SET
b2box=# explain analyze
Select count(*)
from B2HEAD.item
where cluster = 'B2BOX' and ( ( xpath_exists('/MessageInfo[FinalRecipient="ABigBank"]//text()', content) ) ) offset 0 limit 1;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=22555.71..22555.72 rows=1 width=0) (actual time=10864.163..10864.163 rows=1 loops=1)
-> Aggregate (cost=22555.71..22555.72 rows=1 width=0) (actual time=10864.160..10864.160 rows=1 loops=1)
-> Seq Scan on item (cost=0.00..22490.45 rows=26102 width=0) (actual time=33.574..10861.672 rows=4085 loops=1)
Filter: (xpath_exists('/MessageInfo[FinalRecipient="ABigBank"]//text()'::text, content, '{}'::text[]) AND ((cluster)::text = 'B2BOX'::text))
Rows Removed by Filter: 108945
Total runtime: 10864.242 ms
(6 rows)
plan on explain.depesz.com

Planner cost parameters
The cost of the first plan (seqscan off) is slightly higher, but the processing time is much faster.
This tells me that your random_page_cost and seq_page_cost are probably wrong. You're likely on storage with fast random I/O - either because most of the database is cached in RAM or because you're using SSD, SAN with cache, or other storage where random I/O is inherently fast.
Try:
SET random_page_cost = 1;
SET seq_page_cost = 1.1;
to greatly reduce the cost parameter differences and then re-run. If that does the job, consider changing those params in postgresql.conf.
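For reference, a hedged sketch of how those settings could be persisted once the effect is verified (ALTER SYSTEM assumes PostgreSQL 9.4 or later; on older 9.x releases edit postgresql.conf directly and reload):
-- persist the planner cost settings suggested above (values from this answer)
ALTER SYSTEM SET random_page_cost = 1;
ALTER SYSTEM SET seq_page_cost = 1.1;
SELECT pg_reload_conf();
-- equivalent postgresql.conf lines:
--   random_page_cost = 1
--   seq_page_cost = 1.1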
Your row-count estimates are reasonable, so it doesn't look like a planner mis-estimation problem or a problem with bad table statistics.
Incorrect query
Your query is also incorrect. OFFSET 0 LIMIT 1 without an ORDER BY will produce unpredictable results unless you're guaranteed to have exactly one match, in which case the OFFSET ... LIMIT ... clauses are unnecessary and can be removed entirely.
You're usually much better off phrasing such queries as SELECT max(...) or SELECT min(...) where possible; PostgreSQL will tend to be able to use an index to just pluck off the desired value without doing an expensive table scan or an index scan and sort.
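As a hedged, generic illustration of that rewrite (the table and column names here are hypothetical, not taken from the schema above):
-- instead of picking one row off an unordered result:
--   SELECT created_at FROM some_table WHERE customer_id = 42 OFFSET 0 LIMIT 1;
-- prefer an aggregate the planner can satisfy from an index endpoint:
SELECT max(created_at)
FROM some_table
WHERE customer_id = 42;
-- with an index on (customer_id, created_at) this can read a single index entry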
Tips
BTW, for future questions the PostgreSQL wiki has some good information in the performance category and a guide to asking Slow query questions.

Related

Very slow query planning time with many indexes

I have a table "nodes" with a JSONB-column "data", in which I store various types of information.
The JSON includes pieces of text in different languages that need to be frequently searched by end-users. Per language, I therefore create about 4 indices similar to the following (usually with a separate search dictionary for that language):
CREATE INDEX nodes_label_sv_idx
ON nodes
USING GIN (to_tsvector('swedish_text', data #>> '{label,sv}'));
This works fine when only 2 or 3 languages are present, but when adding 20 more languages (each with 4 indices for that language's path into the JSON), the query planner becomes extremely slow for some queries (180 ms), even though those queries still execute very fast (less than 1 ms). The table currently contains about 50K records.
The weird thing is that those queries are simple joins on other columns of the table (unrelated to the "data" column), so the language-related indices are completely irrelevant. Also, the more language-related indices I drop, the faster the planner becomes again.
I completely understand that the planner needs to check all (150+) indices for potential relevance, but 180 ms is extreme. Does anybody have a suggestion? By the way, the problem only seems to occur when using a view (not when directly using the query underlying the view).
I am using PostgreSQL 13 on Mac & Linux.
Edit:
query:
EXPLAIN (ANALYZE, BUFFERS)
select 1
from ca_nodes can
where (can.owner_id = 168 or can.aco_id = 0)
limit 1;
underlying view:
CREATE VIEW ca_nodes AS
SELECT n.nid, n.owner_id, c.aco_id
FROM nodes n inner join acos c on n.nid = c.nid;
explain (analyze, buffers) output:
Limit (cost=0.58..32.45 rows=1 width=4) (actual time=0.038..0.039 rows=1 loops=1)
Buffers: shared hit=6
-> Merge Join (cost=0.58..15136.78 rows=475 width=4) (actual time=0.037..0.037 rows=1 loops=1)
Merge Cond: (n.nid = c.nid)
Join Filter: ((n.owner_id = 168) OR (c.aco_id = 0))
Buffers: shared hit=6
-> Index Scan using nodes_pkey on nodes n (cost=0.29..12094.35 rows=47604 width=8) (actual time=0.017..0.017 rows=1 loops=1)
Buffers: shared hit=3
-> Index Scan using acos_nid_idx on precalculated_acos c (cost=0.29..2090.35 rows=47604 width=8) (actual time=0.014..0.014 rows=1 loops=1)
Buffers: shared hit=3
Planning:
Buffers: shared hit=83
Planning Time: 180.392 ms
Execution Time: 0.079 ms

Postgres not using Covering Index with Aggregate

Engine version: 12.4
Postgres wasn't using an index-only scan until I ran vacuum analyze verbose table_name; after that it started using the index-only scan. Earlier, when I had run analyze verbose table_name without vacuum, the index-only scan was still not used.
So it seems there is a very heavy dependency on vacuum for index-only plans. Is there any way to eliminate this dependency, or should we run vacuum very regularly, e.g. daily?
Our objective is to reduce CPU usage. Overall machine CPU usage is 10%-15% throughout the day, but when this query runs the CPU goes very high (the query runs in multiple threads at the same time with different values).
EXPLAIN ANALYZE SELECT COALESCE(requested_debit, 0) AS requestedDebit, COALESCE(requested_credit, 0) AS requestedCredit
FROM (SELECT COALESCE(Sum(le.credit), 0) AS requested_credit, COALESCE(Sum(le.debit), 0) AS requested_debit
FROM ledger_entries le
WHERE le.accounting_entity_id = 1
AND le.general_ledger_id = 503
AND le.post_date BETWEEN '2020-09-10' AND '2020-11-30') AS requested_le;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on requested_le (cost=66602.65..66602.67 rows=1 width=64) (actual time=81.263..81.352 rows=1 loops=1)
-> Finalize Aggregate (cost=66602.65..66602.66 rows=1 width=64) (actual time=81.261..81.348 rows=1 loops=1)
-> Gather (cost=66602.41..66602.62 rows=2 width=64) (actual time=79.485..81.331 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=65602.41..65602.42 rows=1 width=64) (actual time=74.293..74.294 rows=1 loops=3)
-> Parallel Index Only Scan using post_date_gl_id_accounting_entity_id_include_idx on ledger_entries le (cost=0.56..65203.73 rows=79735 width=8) (actual time=47.874..74.212 rows=197 loops=3)
Index Cond: ((post_date >= '2020-09-10'::date) AND (post_date <= '2020-11-30'::date) AND (general_ledger_id = 503) AND (accounting_entity_id = 1))
Heap Fetches: 0
Planning Time: 0.211 ms
Execution Time: 81.395 ms
(11 rows)
There is a very strong connection between VACUUM and index-only scans in PostgreSQL: an index-only scan can only skip fetching the table row (to check for visibility) if the block containing the row is marked all-visible in the visibility map. And the visibility map is updated by VACUUM.
So yes, you have to VACUUM often to get efficient index-only scans.
Typically, there is no need to schedule a manual VACUUM; you can simply run
ALTER TABLE mytab SET (autovacuum_vacuum_scale_factor = 0.01);
(or a similar value) and let autovacuum do the job.
The only case where this is problematic is insert-only tables, because for them autovacuum won't be triggered in PostgreSQL versions below v13. From v13 on, you can simply adjust autovacuum_vacuum_insert_scale_factor, while in older versions you will have to set autovacuum_freeze_max_age to a low value for that table.
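As a hedged sketch of those per-table settings (the table name mytab is reused from above; the numbers are illustrative, not recommendations):
-- PostgreSQL 13 or later: let plain inserts trigger autovacuum on the table
ALTER TABLE mytab SET (autovacuum_vacuum_insert_scale_factor = 0.01);
-- older versions: force more frequent vacuums via a low per-table freeze age
ALTER TABLE mytab SET (autovacuum_freeze_max_age = 100000);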

Optimizing recent event search and cache usage using `random_page_cost` on postgres

I have a table where I store information about a customer together with the timestamp and time range of an event.
The indices I use look as follows:
event_index (customer_id, time)
state_index (customer_id, end, start desc)
The vast majority of queries ask about states and events from the last few days.
This is a sample query (events have the same problem as the one I'll describe for states):
SELECT "states".*
FROM "states"
WHERE ("states"."customer_id" = $1 AND "states"."start" < $2)
AND ("states"."end" IS NULL OR "states"."end" > $3)
AND ("states"."obsolete" = $4)
ORDER BY "states"."start" DESC
I see that the query planner sometimes uses only customer_id to filter, and then filters all of the customer's rows with a heap scan:
Sort (cost=103089.00..103096.17 rows=2869 width=78)
Sort Key: start DESC
-> Bitmap Heap Scan on states (cost=1222.56..102924.23 rows=2869 width=78)
Recheck Cond: (customer_id = '----'::bpchar)
Filter: ((NOT obsolete) AND ((start)::double precision < '1557711009'::double precision) AND ((end IS NULL) OR ((end)::double precision > '1557666000'::double precision)))
-> Bitmap Index Scan on states_index (cost=0.00..1221.85 rows=26820 width=0)
Index Cond: (customer_id = '----'::bpchar)
This is in contrast to what I see in a session manually:
Sort Key: start DESC
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on states (cost=111.12..9338.04 rows=1 width=78) (actual time=141.674..141.674 rows=0 loops=1)
Recheck Cond: (((customer_id = '-----'::bpchar) AND (end IS NULL) AND (start < '1557349200'::numeric)) OR ((customer_id = '----'::bpchar) AND (end > '1557249200'::numeric) AND (start < '1557349200'::numeric)))
Filter: ((NOT obsolete) AND ((title)::text = '---'::text))
Rows Removed by Filter: 112
Heap Blocks: exact=101
-> BitmapOr (cost=111.12..111.12 rows=2333 width=0) (actual time=4.198..4.198 rows=0 loops=1)
-> Bitmap Index Scan on states_index (cost=0.00..4.57 rows=1 width=0) (actual time=0.086..0.086 rows=0 loops=1)
Index Cond: ((customer_id = '----'::bpchar) AND (end IS NULL) AND (start < '1557349200'::numeric))
-> Bitmap Index Scan on state_index (cost=0.00..106.55 rows=2332 width=0) (actual time=4.109..4.109 rows=112 loops=1)
Index Cond: ((customer_id = '---'::bpchar) AND (end > '1557262800'::numeric) AND (start < '1557349200'::numeric))
In other words, the query planner sometimes chooses to use only the first column of the index, which slows the query down significantly.
I can see why it makes sense to just fetch the customer's entire data when it is small enough and filter in memory, but the problem is that this data is very sparse and probably not entirely cached (data from a year ago is probably not cached for the customer; the database is a few hundred GBs). If the index were used to its fullest extent (as in the second example), the result should be much faster, since recent data is cached.
I created a partial index on the last week to see if the query time drops, but Postgres only uses it sometimes. This solves the problem when the partial index is used, since old rows do not exist in that index - but sadly Postgres still selects the bigger index even when it doesn't have to. I ran VACUUM ANALYZE, but to no visible effect.
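A sketch of what such a "last week" partial index could look like (column names are taken from the query above; the cutoff is a literal epoch value that has to be refreshed periodically, since the predicate cannot call now()):
CREATE INDEX states_recent_idx
ON states (customer_id, "end", start DESC)
WHERE start > 1557100000;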
I tried to see the cache hits using this:
Database Name | Temporary files | Size of temporary files | Block Hits | Block Reads
------------------+-----------------+-------------------------+---------------+-------------
customers | 1922 | 18784440622 | 69553504584 | 2401546773
And then I calculated (block_hits/(block_hits + block_reads)):
>>> 69553504584.0 / (69553504584.0 + 2401546773.0)
0.9666243477322406
So this shows ~96.6% cache hits (I want it much closer to 100% because I know the nature of the queries).
I also tried increasing the statistics target (SET STATISTICS) on customer_id, start and end, since that seemed to be a common suggestion for people facing query planner issues. It didn't help either (and I ran ANALYZE afterwards...).
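A sketch of what that statistics bump could look like (the target value 1000 is illustrative only):
ALTER TABLE states ALTER COLUMN customer_id SET STATISTICS 1000;
ALTER TABLE states ALTER COLUMN "start" SET STATISTICS 1000;
ALTER TABLE states ALTER COLUMN "end" SET STATISTICS 1000;
ANALYZE states;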
After reading further about this issue I saw that there is a way to make the query planner prefer index scans using lower random_page_cost than the default (4). I also saw a post backing that here:
https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0
Does this make sense for my use case? Will it make the query planner use the index to the fullest more often (preferably always)?
If not, is there something else I can do to lower the query time? I know partitioning can be very effective, but it seems to be overkill and is not fully supported on my current Postgres version (9.5.9), as far as I can tell from what I've read.
Update: After lowering random_page_cost I don't see a conclusive difference. There are still times when the query planner chooses to use only part of the index, with a much slower result.
Any suggestions are very welcome.
Thanks :)

Slow on first query

I'm having trouble with the first query I perform on a table. Subsequent queries are much faster, even if I change the date range to look for. I assume that PostgreSQL implements a caching mechanism that allows subsequent queries to be much faster. I can try to warm up the cache so the first user request hits the cache. However, I think I can somehow improve the following query:
SELECT
y.id, y.title, x.visits, x.score
FROM (
SELECT
article_id, visits,
COALESCE(ROUND((visits / NULLIF(hits ,0)::float)::numeric, 4), 0) score
FROM (
SELECT
article_id, SUM(visits) visits, SUM(hits) hits
FROM
article_reports a
WHERE
a.site_id = 'XYZ' AND a.date >= '2017-04-13' AND a.date <= '2017-06-28'
GROUP BY
article_id
) q ORDER BY score DESC, visits DESC LIMIT(20)
) x
INNER JOIN
articles y ON x.article_id = y.id
Any ideas on how I can improve this? The following is the result of EXPLAIN:
Nested Loop (cost=84859.76..85028.54 rows=20 width=272) (actual time=12612.596..12612.836 rows=20 loops=1)
-> Limit (cost=84859.34..84859.39 rows=20 width=52) (actual time=12612.502..12612.517 rows=20 loops=1)
-> Sort (cost=84859.34..84880.26 rows=8371 width=52) (actual time=12612.499..12612.503 rows=20 loops=1)
Sort Key: q.score DESC, q.visits DESC
Sort Method: top-N heapsort Memory: 27kB
-> Subquery Scan on q (cost=84218.04..84636.59 rows=8371 width=52) (actual time=12513.168..12602.649 rows=28965 loops=1)
-> HashAggregate (cost=84218.04..84301.75 rows=8371 width=36) (actual time=12513.093..12536.823 rows=28965 loops=1)
Group Key: a.article_id
-> Bitmap Heap Scan on article_reports a (cost=20122.78..77122.91 rows=405436 width=36) (actual time=135.588..11974.774 rows=398242 loops=1)
Recheck Cond: (((site_id)::text = 'XYZ'::text) AND (date >= '2017-04-13'::date) AND (date <= '2017-06-28'::date))
Heap Blocks: exact=36911
-> Bitmap Index Scan on index_article_reports_on_site_id_and_article_id_and_date (cost=0.00..20021.42 rows=405436 width=0) (actual time=125.846..125.846 rows=398479 loops=1)
Index Cond: (((site_id)::text = 'XYZ'::text) AND (date >= '2017-04-13'::date) AND (date <= '2017-06-28'::date))
-> Index Scan using articles_pkey on articles y (cost=0.42..8.44 rows=1 width=128) (actual time=0.014..0.014 rows=1 loops=20)
Index Cond: (id = q.article_id)
Planning time: 1.443 ms
Execution time: 12613.689 ms
Thanks in advance
There are two levels of "cache" that Postgres uses:
OS file cache
shared buffers.
Important: Postgres directly controls only the second one, and relies on the first one, which is under the OS's control.
The first things I would check are these two settings in postgresql.conf:
effective_cache_size – usually I set it to ~3/4 of the available RAM. Note that this setting does not tell Postgres how to allocate memory; it is just a hint to the planner, giving an estimate of the OS file cache size.
shared_buffers – usually I set it to 1/4 of the RAM size. This is an allocation setting.
Also, I'd check other memory-related settings (work_mem, maintenance_work_mem) to understand how much RAM might be consumed, so that my effective_cache_size estimate remains correct most of the time.
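As a hedged example of those ratios, for a hypothetical machine with 16 GB of RAM (the numbers are illustrative, not a recommendation for your hardware):
-- postgresql.conf
--   shared_buffers = 4GB            -- ~1/4 of RAM, real allocation, needs a restart
--   effective_cache_size = 12GB     -- ~3/4 of RAM, planner hint only
--   work_mem = 32MB                 -- per sort/hash node, per query
-- check the currently effective values from SQL:
SHOW shared_buffers;
SHOW effective_cache_size;
SHOW work_mem;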
But if you have just started Postgres, the first queries will most probably be slow because there is no data in the OS file cache or in shared buffers yet. You can check this with extended EXPLAIN options:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...
-- you will see how many buffers were fetched from disk ("read") or from cache ("hit")
Here you can find good material on using EXPLAIN: http://www.dalibo.org/_media/understanding_explain.pdf
Additionally, there is an extension aiming to solve the "cold cache" problem: pg_prewarm https://www.postgresql.org/docs/current/static/pgprewarm.html
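A minimal sketch of warming a table with pg_prewarm (the table name is taken from the query above):
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('article_reports');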
Also, working with SSD disks instead of magnetic ones will mean that disk reads will be much faster.
Have fun and well working Postgres :-)
If it is the first query after inserting several rows, you must run an
ANALYZE
on the whole database or on the involved tables. Try executing it at the database level.
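For example (table names taken from the query above):
ANALYZE;                  -- whole database
ANALYZE article_reports;  -- or just the involved tables
ANALYZE articles;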

PostgreSQL chooses not to use index despite improved performance

I had a DB in MySQL and am in the process of moving to PostgreSQL with a Django front-end.
I have a table of 650k-750k rows on which I perform the following query:
SELECT "MMG", "Gene", COUNT(*) FROM at_summary_typing WHERE "MMG" != '' GROUP BY "MMG", "Gene" ORDER BY COUNT(*);
In MySQL this returns in ~0.5s. However, when I switched to PostgreSQL the same query takes ~3s. I have put an index on MMG and Gene together to try to speed it up, but EXPLAIN (analyse, buffers, verbose) shows that the index is not used:
Sort (cost=59013.54..59053.36 rows=15927 width=14) (actual time=2880.222..2885.475 rows=39314 loops=1)
Output: "MMG", "Gene", (count(*))
Sort Key: (count(*))
Sort Method: external merge Disk: 3280kB
Buffers: shared hit=16093 read=11482, temp read=2230 written=2230
-> GroupAggregate (cost=55915.50..57901.90 rows=15927 width=14) (actual time=2179.809..2861.679 rows=39314 loops=1)
Output: "MMG", "Gene", count(*)
Buffers: shared hit=16093 read=11482, temp read=1819 written=1819
-> Sort (cost=55915.50..56372.29 rows=182713 width=14) (actual time=2179.782..2830.232 rows=180657 loops=1)
Output: "MMG", "Gene"
Sort Key: at_summary_typing."MMG", at_summary_typing."Gene"
Sort Method: external merge Disk: 8168kB
Buffers: shared hit=16093 read=11482, temp read=1819 written=1819
-> Seq Scan on public.at_summary_typing (cost=0.00..36821.60 rows=182713 width=14) (actual time=0.010..224.658 rows=180657 loops=1)
Output: "MMG", "Gene"
Filter: ((at_summary_typing."MMG")::text <> ''::text)
Rows Removed by Filter: 559071
Buffers: shared hit=16093 read=11482
Total runtime: 2888.804 ms
After some searching I found that I could force the use of the index by running SET enable_seqscan = OFF;, and EXPLAIN now shows the following:
Sort (cost=1181591.18..1181631.00 rows=15927 width=14) (actual time=555.546..560.839 rows=39314 loops=1)
Output: "MMG", "Gene", (count(*))
Sort Key: (count(*))
Sort Method: external merge Disk: 3280kB
Buffers: shared hit=173219 read=87094 written=7, temp read=411 written=411
-> GroupAggregate (cost=0.42..1180479.54 rows=15927 width=14) (actual time=247.546..533.202 rows=39314 loops=1)
Output: "MMG", "Gene", count(*)
Buffers: shared hit=173219 read=87094 written=7
-> Index Only Scan using mm_gene_idx on public.at_summary_typing (cost=0.42..1178949.93 rows=182713 width=14) (actual time=247.533..497.771 rows=180657 loops=1)
Output: "MMG", "Gene"
Filter: ((at_summary_typing."MMG")::text <> ''::text)
Rows Removed by Filter: 559071
Heap Fetches: 739728
Buffers: shared hit=173219 read=87094 written=7
Total runtime: 562.735 ms
Performance is now comparable with MySQL.
The problem is that I understand that setting this is bad practice and that I should try and find a way to improve my query/encourage the use of the index automatically. However I'm very inexperienced with PostgreSQL and cannot work out how or why it is choosing to use a Seq Scan over an Index Scan in the first place.
why it is choosing to use a Seq Scan over an Index Scan in the first place
Because the seq scan is actually twice as fast as the index scan (224ms vs. 497ms) despite the fact that the index was nearly completely in the cache, but the table was not.
So choosing the seq scan was the right thing to do.
The bottleneck in the first plan is the sorting and grouping that needs to be done on disk.
The better strategy would be to increase work_mem to something more realistic than the really small default of 4MB. You might want to start with something like 16MB, by running
set work_mem='16MB';
before running your query. If that doesn't remove the "Sort Method: external merge Disk" steps, increase work_mem further.
With a higher work_mem it is also possible that Postgres switches to a hash aggregate instead of the sort it currently does, which will probably be faster anyway (but isn't feasible if not enough memory is available).
Once you find a good value, you might want to make that permanent by putting the new value into postgresql.conf
Don't set this too high: that memory may be requested multiple times for each query.
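A hedged alternative, if you don't want to raise it globally, is to scope the setting to the session or transaction that runs this particular query:
SET work_mem = '16MB';    -- rest of the session
-- or, inside a transaction, only until COMMIT/ROLLBACK:
BEGIN;
SET LOCAL work_mem = '16MB';
SELECT "MMG", "Gene", COUNT(*)
FROM at_summary_typing
WHERE "MMG" != ''
GROUP BY "MMG", "Gene"
ORDER BY COUNT(*);
COMMIT;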
If your where condition is static, you could also create a partial index matching that criteria:
create index on at_summary_typing ("MMG", "Gene")
where "MMG" <> '';
Don't forget to analyze the table to update the statistics.
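The analyze step would simply be:
ANALYZE at_summary_typing;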