Query slows down 5-fold after copying DB (on the same computer!) - postgresql

In a location-based app, there's a specific query that has to run fast:
SELECT count(*) FROM users
WHERE earth_box(ll_to_earth(40.71427000, -74.00597000), 50000) @> ll_to_earth(latitude, longitude)
However, after copying the database using Postgres' tools:
pg_dump dummy_users > dummy_users.dump
createdb slow_db
psql slow_db < dummy_users.dump
the query takes 2.5 seconds on slow_db instead of 0.5 seconds!
The planner chooses a different plan in slow_db.
EXPLAIN ANALYZE on slow_db:
"Aggregate (cost=10825.18..10825.19 rows=1 width=8) (actual time=2164.396..2164.396 rows=1 loops=1)"
" -> Bitmap Heap Scan on users (cost=205.45..10818.39 rows=2714 width=0) (actual time=26.188..2155.680 rows=122836 loops=1)"
" Recheck Cond: ('(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube #> (ll_to_earth(latitude, longitude))::cube)"
" Rows Removed by Index Recheck: 364502"
" Heap Blocks: exact=57514 lossy=33728"
" -> Bitmap Index Scan on distance_index (cost=0.00..204.77 rows=2714 width=0) (actual time=20.068..20.068 rows=122836 loops=1)"
" Index Cond: ((ll_to_earth(latitude, longitude))::cube <# '(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube)"
"Planning Time: 1.002 ms"
"Execution Time: 2164.807 ms"
EXPLAIN ANALYZE on the original db:
"Aggregate (cost=8807.01..8807.02 rows=1 width=8) (actual time=239.524..239.525 rows=1 loops=1)"
" -> Index Scan using distance_index on users (cost=0.41..8801.69 rows=2130 width=0) (actual time=0.156..233.760 rows=122836 loops=1)"
" Index Cond: ((ll_to_earth(latitude, longitude))::cube <# '(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube)"
"Planning Time: 3.928 ms"
"Execution Time: 239.546 ms"
For both tables there's an index on the location, created in exactly the same way:
CREATE INDEX distance_index ON users USING gist (ll_to_earth(latitude, longitude));
I've tried running maintenance tools (ANALYZE/VACUUM etc.) before and after running that query, with or without the index; it doesn't help!
Both DBs run on the exact same machine (so same Postgres server, Postgres distribution, configuration).
Data in both DBs is the same (one single table), and isn't changing.
The Postgres version is 12.8.
psql's \l output for those databases:
List of databases
    Name     | Owner | Encoding | Collate | Ctype | Access privileges
-------------+-------+----------+---------+-------+-------------------
 dummy_users | yoni  | UTF8     | en_IL   | en_IL |
 slow_db     | yoni  | UTF8     | en_IL   | en_IL |
What is going on?
(Thanks to Laurenz Albe) After SET enable_bitmapscan = off; and SET enable_seqscan = off; on the slow database, I ran the query again. Here is the EXPLAIN (ANALYZE, BUFFERS) output:
"Aggregate (cost=11018.63..11018.64 rows=1 width=8) (actual time=213.544..213.545 rows=1 loops=1)"
" Buffers: shared hit=11667 read=110537"
" -> Index Scan using distance_index on users (cost=0.41..11011.86 rows=2711 width=0) (actual time=0.262..207.164 rows=122836 loops=1)"
" Index Cond: ((ll_to_earth(latitude, longitude))::cube <# '(1282077.0159892815, -4697331.573647572, 4110397.4955141144),(1382076.7599323571, -4597331.829704497, 4210397.23945719)'::cube)"
" Buffers: shared hit=11667 read=110537"
"Planning Time: 0.940 ms"
"Execution Time: 213.591 ms"

Manual VACUUM / ANALYZE after restore
After restoring a new database, there are no column statistics yet. Normally, autovacuum will kick in eventually, but since "data [...] isn't changing", autovacuum wouldn't be triggered.
For the same reason (data isn't changing), I suggest running this once after restoring your single table:
VACUUM (ANALYZE, FREEZE) users;
You might as well run FREEZE for a table that's never changed.
(FULL isn't necessary, since there are no dead tuples in a freshly restored table.)
Explanation for the plan change
With everything else being equal, I suspect at least two major problems:
Bad column statistics
Bad database configuration (the more severe problem)
See:
Keep PostgreSQL from sometimes choosing a bad query plan
In the slow DB, Postgres expects rows=2714, while it expects rows=2130 in the fast one. The difference may not seem huge, but may well be enough to tip Postgres over to the other query plan (that turns out to be inferior).
Seeing that Postgres actually finds rows=122836, either estimate is bad. The one in the slow DB is actually less bad. But the bitmap scan turns out to be slower than the index scan, even with many more qualifying rows than expected (!), so your database configuration is most probably way off.
The main problem typically is the default random_page_cost of 4, while a realistic setting for a fully cached, read-only table is much closer to 1. Maybe 1.1 to allow for some additional cost. There are a couple of other settings that encourage index scans, like effective_cache_size. Start here:
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
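For example, a minimal sketch of nudging the planner toward index scans (the values are purely illustrative and assume a mostly cached, read-only workload; adjust them to your hardware):
-- assumption: the table is effectively fully cached in RAM
ALTER SYSTEM SET random_page_cost = 1.1;       -- default is 4
ALTER SYSTEM SET effective_cache_size = '4GB'; -- roughly the RAM available for caching
SELECT pg_reload_conf();                       -- both settings take effect on reload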
Estimates are just that: estimates. And column statistics are also just that: statistics. So not exact but subject to random variation. You might increase the statistics target to increase the validity of column statistics.
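As a sketch (the target value and the per-column approach are illustrative; the statistics that matter most here are those ANALYZE gathers for the expression index, which the final ANALYZE also refreshes):
-- raise the sample size for the columns feeding ll_to_earth(); the default target is 100
ALTER TABLE users ALTER COLUMN latitude  SET STATISTICS 500;
ALTER TABLE users ALTER COLUMN longitude SET STATISTICS 500;
ANALYZE users;  -- also refreshes statistics on the expression index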
Cheap random reads favor index scans and discourage bitmap index scans.
More qualifying rows favor a bitmap index scan; fewer favor an index scan. See:
Postgres not using index when index scan is much better option

Related

Slow postgres query even though it does bitmap index scan

I have a table with 4707838 rows. When I run the following query on this table it takes around 9 seconds to execute.
SELECT json_agg(
         json_build_object(
           'accessorId', p."accessorId",
           'mobile', json_build_object(
             'enabled', p.mobile,
             'settings', json_build_object(
               'proximityAccess', p."proximity",
               'tapToAccess', p."tapToAccess",
               'clickToAccessRange', p."clickToAccessRange",
               'remoteAccess', p."remote")),
           'card', json_build_object('enabled', p."card"),
           'fingerprint', json_build_object('enabled', p."fingerprint"))
       ) AS permissions
FROM permissions AS p
WHERE p."accessPointId" = 99
The output of explain analyze is as follows:
Aggregate (cost=49860.12..49860.13 rows=1 width=32) (actual time=9011.711..9011.712 rows=1 loops=1)
Buffers: shared read=29720
I/O Timings: read=8192.273
-> Bitmap Heap Scan on permissions p (cost=775.86..49350.25 rows=33991 width=14) (actual time=48.886..8704.470 rows=36556 loops=1)
Recheck Cond: ("accessPointId" = 99)
Heap Blocks: exact=29331
Buffers: shared read=29720
I/O Timings: read=8192.273
-> Bitmap Index Scan on composite_key_accessor_access_point (cost=0.00..767.37 rows=33991 width=0) (actual time=38.767..38.768 rows=37032 loops=1)
Index Cond: ("accessPointId" = 99)
Buffers: shared read=105
I/O Timings: read=32.592
Planning Time: 0.142 ms
Execution Time: 9012.719 ms
This table has a btree index on the accessorId column and a composite index on (accessorId, accessPointId).
Can anyone tell me what could be the reason for this query to be slow even though it uses an index?
Over 90% of the time is spent waiting to get data from disk. At 3.6 ms per read, that is pretty fast for a hard drive (suggesting that much of the data was already in the filesystem cache, or that some of the reads brought in neighboring data that was also eventually required -- that is, sequential reads, not just random reads) but slow for an SSD.
If you set enable_bitmapscan=off and clear the cache (or pick a not recently used "accessPointId" value) what performance do you get?
How big is the table? If you are reading a substantial fraction of the table and think you are not getting as much benefit from sequential reads as you should be, you can try making your OSes readahead settings more aggressive. On Linux that is something like sudo blockdev --setra ...
You could put all columns referred to by the query into the index, to enable index-only scans. But given the number of columns you are using that might be impractical. You would want "accessPointId" to be the first column in the index. By the way, is the index currently used really on (accessorId, accessPointId)? It looks to me like "accessPointId" is really the first column in that index, not the 2nd one.
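For illustration, a sketch of what such an index could look like on PostgreSQL 11+ (the index name and exact column list are assumptions based on the query shown, not a recommendation for your schema):
-- hypothetical covering index: "accessPointId" leads, payload columns are INCLUDEd
CREATE INDEX permissions_access_point_covering_idx
    ON permissions ("accessPointId")
    INCLUDE ("accessorId", mobile, "proximity", "tapToAccess",
             "clickToAccessRange", "remote", "card", "fingerprint");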
You could cluster the table by an index which has "accessPointId" as the first column. That would group the related records together for faster access. But note it is a slow operation and takes a strong lock on the table while it is running, and future data going into the table won't be clustered, only the current data.
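A hedged sketch of that approach (the index name is hypothetical; CLUSTER rewrites the table and blocks access while it runs):
-- cluster the heap on an index that leads with "accessPointId"
CREATE INDEX permissions_access_point_idx ON permissions ("accessPointId");
CLUSTER permissions USING permissions_access_point_idx;
ANALYZE permissions;  -- refresh statistics after the rewrite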
You could try to increase effective_io_concurrency so that you can have multiple io requests outstanding at a time. How effective this is will depend on your hardware.
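For example (the value is illustrative; sensible settings depend on how many concurrent requests your storage can actually service):
-- allow more prefetch requests in flight during bitmap heap scans, for this session
SET effective_io_concurrency = 200;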

Aurora Postgres SELECT is slow

I'm trying to figure out what is causing the Aurora RDS Postgres database to deteriorate in terms of performance over time.
I have a quite big table in a multi-tenant architecture, but for only one tenant a SELECT on this table has become quite slow.
I have tried running REINDEX on the table and the schema, and I have also done a VACUUM on that table, but I can't see any improvement.
Strangely, if I do a backup and restore of this schema (so the tables are the same size), it is really fast for a few days, then it starts to deteriorate.
There are a few batch jobs with Spark that ingest and populate the schema, but I do not know the details; I assume that something happens in the ingestion process.
What I'm trying to understand is: what does a backup and restore do more than reindex and vacuum? What am I missing?
EXPLAIN (ANALYZE, BUFFERS) SELECT...
"Limit (cost=0.56..14527.17 rows=1 width=99) (actual time=12056.841..12056.842 rows=1 loops=1)"
" Buffers: shared hit=62835 read=34334"
" -> Index Scan using idx_namekey on delqmst (cost=0.56..7379519.03 rows=508 width=99) (actual time=12056.840..12056.840 rows=1 loops=1)"
" Filter: ((upper((dmacctg)::text) ~~ '6'::text) AND (upper((dmacct)::text) ~~ '0000000000025454448'::text))"
" Rows Removed by Filter: 793350"
" Buffers: shared hit=62835 read=34334"
"Planning time: 11.435 ms"
"Execution time: 12056.876 ms"```
As you can see it took around 12 seconds; if I do a backup and restore of the schema, it takes around 1 second.

Postgres not using Covering Index with Aggregate

Engine version: 12.4
Postgres wasn't using an index-only scan, so I ran VACUUM ANALYZE VERBOSE table_name. After that it started using an index-only scan. Earlier, when I had run ANALYZE VERBOSE table_name without VACUUM, the index-only scan wasn't used.
So it seems there is a very heavy dependency on VACUUM to get an index-only plan. Is there any way to eliminate this dependency, or should we run VACUUM very regularly, e.g. daily?
Our objective is to reduce CPU usage. Overall machine CPU usage is 10-15% throughout the day, but when this query runs the CPU goes very high (this query runs in multiple threads at the same time with different values).
EXPLAIN ANALYZE SELECT COALESCE(requested_debit, 0) AS requestedDebit, COALESCE(requested_credit, 0) AS requestedCredit
FROM (SELECT COALESCE(Sum(le.credit), 0) AS requested_credit, COALESCE(Sum(le.debit), 0) AS requested_debit
FROM ledger_entries le
WHERE le.accounting_entity_id = 1
AND le.general_ledger_id = 503
AND le.post_date BETWEEN '2020-09-10' AND '2020-11-30') AS requested_le;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan on requested_le (cost=66602.65..66602.67 rows=1 width=64) (actual time=81.263..81.352 rows=1 loops=1)
-> Finalize Aggregate (cost=66602.65..66602.66 rows=1 width=64) (actual time=81.261..81.348 rows=1 loops=1)
-> Gather (cost=66602.41..66602.62 rows=2 width=64) (actual time=79.485..81.331 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=65602.41..65602.42 rows=1 width=64) (actual time=74.293..74.294 rows=1 loops=3)
-> Parallel Index Only Scan using post_date_gl_id_accounting_entity_id_include_idx on ledger_entries le (cost=0.56..65203.73 rows=79735 width=8) (actual time=47.874..74.212 rows=197 loops=3)
Index Cond: ((post_date >= '2020-09-10'::date) AND (post_date <= '2020-11-30'::date) AND (general_ledger_id = 503) AND (accounting_entity_id = 1))
Heap Fetches: 0
Planning Time: 0.211 ms
Execution Time: 81.395 ms
(11 rows)
There is a very strong connection between VACUUM and index-only scans in PostgreSQL: an index-only scan can only skip fetching the table row (to check for visibility) if the block containing the row is marked all-visible in the visibility map. And the visibility map is updated by VACUUM.
So yes, you have to VACUUM often to get efficient index-only scans.
Typically, there is no need to schedule a manual VACUUM, you can simply
ALTER TABLE mytab SET (autovacuum_vacuum_scale_factor = 0.01);
(or a similar value) and let autovacuum do the job.
The only case where this is problematic is insert-only tables, because for them autovacuum won't be triggered for PostgreSQL versions below v13. In v13, you can simply change autovacuum_vacuum_insert_scale_factor, while in older versions you will have to set autovacuum_freeze_max_age to a low value for that table.
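A sketch of both variants (table name taken from the question; the values are illustrative):
-- PostgreSQL 13+: let inserts alone trigger autovacuum sooner
ALTER TABLE ledger_entries SET (autovacuum_vacuum_insert_scale_factor = 0.01);
-- older versions: force periodic (anti-wraparound) vacuums on the table instead
ALTER TABLE ledger_entries SET (autovacuum_freeze_max_age = 10000000);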

How can I force postgres 9.4 to hit a gin full text index a little more predictably? See my query plan bug

Postgres 9.4 has been generating a pretty poor query plan for a full text query with LIMIT 10 at the end:
SELECT * FROM Tbl
WHERE to_tsvector('english'::regconfig, ginIndexedColumn) @@ to_tsquery('rareword')
LIMIT 10
this generates:
"Limit (cost=0.00..306.48 rows=10 width=702) (actual time=5470.323..7215.486 rows=3 loops=1)"
" -> Seq Scan on tbl (cost=0.00..24610.69 rows=803 width=702) (actual time=5470.321..7215.483 rows=3 loops=1)"
" Filter: (to_tsvector('english'::regconfig, ginIndexedColumn) ## to_tsquery('rareword'::text))"
" Rows Removed by Filter: 609661"
"Planning time: 0.436 ms"
"Execution time: 7215.573 ms"
using an index defined by:
CREATE INDEX fulltext_idx
ON Tbl
USING gin
(to_tsvector('english'::regconfig, ginIndexedColumn));
and it takes 5 or 6 seconds to execute. Even LIMIT 12 is slow.
However, the same query with LIMIT 13 (the lowest limit that hits the index)
SELECT * FROM Tbl
WHERE to_tsvector('english'::regconfig, ginIndexedColumn) @@ to_tsquery('rareword')
LIMIT 13
hits the index just fine and takes a few thousandths of a second. See output below:
"Limit (cost=350.23..392.05 rows=13 width=702) (actual time=2.058..2.062 rows=3 loops=1)"
" -> Bitmap Heap Scan on tbl (cost=350.23..2933.68 rows=803 width=702) (actual time=2.057..2.060 rows=3 loops=1)"
" Recheck Cond: (to_tsvector('english'::regconfig, ginIndexedColumn) ## to_tsquery('rareword'::text))"
" Heap Blocks: exact=2"
" -> Bitmap Index Scan on fulltext_idx (cost=0.00..350.03 rows=803 width=0) (actual time=2.047..2.047 rows=3 loops=1)"
" Index Cond: (to_tsvector('english'::regconfig, ginIndexedColumn) ## to_tsquery('rareword'::text))"
"Planning time: 0.324 ms"
"Execution time: 2.145 ms"
The reason why the query plan is poor is that the word is rare and there's only 2 or 3 records in the whole 610K record table that satisfy the query, meaning the sequential scan the query optimizer picks will have to scan the whole table before the limit is ever filled. The sequential scan would obviously be quite fast if the word is common because the limit would be filled in no time.
Obviously, this little bug is no big deal. I'll simply use Limit 13 instead of 10. What's three more items. But it took me so long to realize the limit clause might affect whether it hits the index. I'm worried that there might be other little surprises in store with other SQL functions that prevent the index from being hit. What I'm looking for is assistance in tweaking Postgres to hit the GIN index all the time instead of sometimes for this particular table.
I'm quite willing to forgo possibly cheaper queries if I could be satisfied that the index is always being hit. It's incredibly fast. I don't care to save any more microseconds.
Well, it's obviously an incorrect selectivity estimation. The planner thinks that the to_tsvector('english'::regconfig, ginIndexedColumn) @@ to_tsquery('rareword') predicate will result in 803 rows, but actually there are only 3.
To tweak PostgreSQL to use the index you can:
Rewrite the query, for example using a CTE, to postpone application of the LIMIT:
WITH t AS (
    SELECT * FROM Tbl
    WHERE to_tsvector('english'::regconfig, ginIndexedColumn) @@ to_tsquery('rareword')
)
SELECT * FROM t
LIMIT 10
Of course, it makes LIMIT absolutely inefficient. (But in the case of a GIN index it's not as efficient as it could be anyway, because GIN cannot fetch results tuple-by-tuple; instead it returns all the TIDs at once as a bitmap. See also gin_fuzzy_search_limit.)
Set enable_seqscan=off or increase seq_page_cost to discourage the planner from using sequential scans (doc).
It can, however, be undesirable if your query should use sequential scans of other tables.
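For instance, applied per session only (the cost value is illustrative):
-- discourage sequential scans for this session only
SET enable_seqscan = off;
-- or, less drastically, make sequential page reads look more expensive
SET seq_page_cost = 5;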
Use pg_hint_plan extension.
Increasing the cost-estimate of the to_tsvector function as described here will probably solve the problem. This cost will automatically be increased in the next release (9.5) so adopting that change early should be considered a rather safe tweak to make.
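A sketch of that tweak (the cost value is chosen for illustration; the default cost of to_tsvector in 9.4 is much lower):
-- make per-row calls to to_tsvector look expensive, so the planner prefers the GIN index
ALTER FUNCTION to_tsvector(regconfig, text) COST 100;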

PostgreSQL table query speed not improved after row deletion

I have a table (let's call it table_a). It has around 15 million rows. It's a simple table that has a primary key.
Recently I created a backup table (let's call it table_a_bkp) and moved 12 million rows out of table_a.
I used simple SQL (delete/insert) to perform the task.
Both tables have the same structure and use the same tablespace.
Query speed on table_a didn't improve even though the total row count was reduced to 2+ million.
In fact, table_a_bkp (12M rows) even has faster query speed than table_a (2M rows).
Checking pg_stat_all_tables, both tables seem to have been autovacuumed and analyzed after the deletion was performed.
I expected table_a's query speed to improve, as it holds much less data now...
DB Version : PostgreSQL 9.1 hosted on Linux
EXPLAIN (the backup table is faster than the first table even though it has far more rows):
EXPLAIN (ANALYZE TRUE, COSTS TRUE, BUFFERS TRUE) select count(*) from txngeneral
"Aggregate (cost=742732.94..742732.95 rows=1 width=0) (actual time=73232.598..73232.599 rows=1 loops=1)"
" Buffers: shared hit=8910 read=701646"
" -> Seq Scan on txngeneral (cost=0.00..736297.55 rows=2574155 width=0) (actual time=17.614..72763.873 rows=2572550 loops=1)"
" Buffers: shared hit=8910 read=701646"
"Total runtime: 73232.647 ms"
EXPLAIN (ANALYZE TRUE, COSTS TRUE, BUFFERS TRUE) select count(*) from txngeneral_bkp
"Aggregate (cost=723840.13..723840.14 rows=1 width=0) (actual time=57134.270..57134.270 rows=1 loops=1)"
" Buffers: shared hit=96 read=569895"
" -> Seq Scan on txngeneral_bkp (cost=0.00..693070.30 rows=12307930 width=0) (actual time=5.436..54889.543 rows=12339180 loops=1)"
" Buffers: shared hit=96 read=569895"
"Total runtime: 57134.321 ms"
Resolved: VACUUM FULL did speed up table scan.
You should VACUUM ANALYZE your original table. (Probably a full VACUUM will be needed.) Since the new table was created and populated all at once, its rows form more or less contiguous blocks, while the tuples of the original table are spread all over the disk.
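Roughly like this (on 9.1 this rewrites the table and takes an exclusive lock, so schedule it in a quiet window):
-- reclaim the space left behind by the mass DELETE and refresh statistics
VACUUM (FULL, ANALYZE, VERBOSE) table_a;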