Reading pg_buffercache output - postgresql

I am using Postgres 9.3 (on CentOS 6.9) and am trying to understand the output of the pg_buffercache view.
I ran this:
SELECT c.relname, count(*) AS buffers
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 5;
and the output below showed one of the tables using 6594 buffers. This was while I had tons of INSERTs followed by SELECTs and UPDATEs on the data_main table:
   relname      | buffers
----------------+---------
 data_main      |    6594
 objt_main      |    1897
 objt_access    |     788
 idx_data_mai   |     736
I also ran "select * from pg_buffercache where isdirty", which showed around 50 entries.
How should I interpret these numbers? Does the buffer count reflect everything since I created the extension, or only recent activity? And how can I find out whether my specific operation is using a reasonable amount of buffers?
Here are my settings:
# show shared_buffers;
shared_buffers
----------------
1GB
# show work_mem;
work_mem
----------
128kB
# show maintenance_work_mem;
maintenance_work_mem
----------------------
64GB
And here is the current free memory (the machine has 64GB of RAM). It is a mixed-workload machine with periodic bursts of INSERTs and lots of SELECTs. Currently the database and tables are small, but they will grow to at least 2 million rows.
$ free -m
             total       used       free     shared    buffers     cached
Mem:         64375      33483      30891        954         15      15731
-/+ buffers/cache:      18097      46278
Swap:        32767         38      32729
Basically, I am trying to understand how to use pg_buffercache properly. Should I run this query periodically? And do I need to adjust shared_buffers accordingly?

I did some reading and testing, and this is what I have found. I found a useful query here: How large is a "buffer" in PostgreSQL
Here are a few notes for others who have similar questions.
You will need to create the extension in each database: "\c db_name", then "create extension pg_buffercache". The same goes for running the queries.
Restarting the database clears the buffer cache, so the counts start over.
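Note that pg_buffercache is a point-in-time snapshot of what currently sits in shared_buffers, not a cumulative counter of past transactions. Building on the linked answer, here is a hedged sketch that turns the per-relation buffer counts into sizes and a share of shared_buffers (it assumes the default 8 kB block size):
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(100.0 * count(*) / (SELECT setting::integer
                                 FROM pg_settings
                                 WHERE name = 'shared_buffers'), 1) AS pct_of_cache
FROM pg_buffercache b
INNER JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid::regclass)
INNER JOIN pg_database d ON b.reldatabase = d.oid
                        AND d.datname = current_database()
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 5;
Running this periodically during the INSERT bursts should show whether data_main's share of the 1GB cache keeps growing or stabilizes.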

Related

CloudSQL with PostgreSQL very slow performance

I wanted to migrate from BigQuery to Cloud SQL to save cost.
My problem is that Cloud SQL with PostgreSQL is very, very slow compared to BigQuery.
A query that takes 1.5 seconds in BigQuery takes almost 4.5 minutes(!) on Cloud SQL with PostgreSQL.
I have a Cloud SQL PostgreSQL server with the configs linked below under "Server config".
My database has a main table with 16M rows (around 14GB in RAM).
An example query:
EXPLAIN ANALYZE
SELECT "title"
FROM public.videos
WHERE EXISTS (
    SELECT *
    FROM (
        SELECT COUNT(DISTINCT CASE WHEN LOWER(param) LIKE '%thriller%' THEN '0'
                                   WHEN LOWER(param) LIKE '%crime%' THEN '1'
                              END) AS count
        FROM UNNEST(categories) AS param
    ) alias
    WHERE count = 2
)
ORDER BY views DESC
LIMIT 12 OFFSET 0
The table is a videos table with a categories column of type text[].
The search condition matches rows where the categories array contains at least one element like '%thriller%' and at least one element like '%crime%' (the COUNT(DISTINCT ...) = 2 requires both distinct matches to be present).
The EXPLAIN ANALYZE of this query gives this output (CSV): link.
The EXPLAIN (BUFFERS) of this query gives this output (CSV): link.
Query Insights graph:
Memory profile:
BigQuery reference for the same query on the same table size:
Server config: link.
Table describe: link.
My goal is to have Cloud SQL reach the same query speed as BigQuery.
For anyone coming here wondering how to tune their Postgres machine on Cloud SQL: the settings are called flags, and you can change them from the UI, although not all config options are editable.
https://cloud.google.com/sql/docs/postgres/flags#console
The initial query looks overcomplicated. It could be rewritten as:
SELECT v."title"
FROM public.videos v
WHERE array_to_string(v.categories, '^') ILIKE ALL (ARRAY['%thriller%', '%crime%'])
ORDER BY views DESC
LIMIT 12 OFFSET 0;
db<>fiddle demo
PostgreSQL is slow by design on every query involving the COUNT aggregate function, and there is absolutely nothing to do about it except using a materialized view to recover the performance.
The tests I have made on my 48-core machine comparing COUNT performance between PostgreSQL and MS SQL Server are clear: SQL Server is between 61 and 561 times faster in all situations, and with a columnstore index SQL Server can be 1,533 times faster…
Similar speedups appear with other RDBMSs. The explanation is clearly PG's MVCC, which keeps ghost rows inside table and index pages, so every row must be visited to decide whether it is live or a ghost. In the other RDBMSs, the count can be served by reading a single piece of information at the top of each page (the number of rows in the page), by parallelized access, or, in SQL Server, by batch-mode rather than row-mode execution.
There is nothing you can do to speed up COUNT in PG until the storage engine is entirely rewritten to avoid ghost slots inside pages…
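For completeness: one commonly cited workaround (not part of the answer above) when an exact count is not required is to read the planner's row estimate instead of counting, for example:
-- Approximate row count from the statistics that ANALYZE/autovacuum maintain;
-- far cheaper than an exact COUNT(*), but only an estimate.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'public.videos'::regclass;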
I believe you need to use full-text search and a special GIN index. The steps:
Create the helper function for the index:
CREATE OR REPLACE FUNCTION immutable_array_to_string(text[]) RETURNS text AS $$
    SELECT array_to_string($1, ',');
$$ LANGUAGE sql IMMUTABLE;
Create index itself:
CREATE INDEX videos_cats_fts_idx ON videos USING gin(to_tsvector('english', LOWER(immutable_array_to_string(categories))));
Use the following query:
SELECT title
FROM videos
WHERE to_tsvector('english', LOWER(immutable_array_to_string(categories)))
      @@ to_tsquery('english', 'thriller & crime')
LIMIT 12 OFFSET 0;
Be aware that this query has a different meaning for 'crime' and 'thriller': they are not just substrings, they are tokens in English phrases. But it looks like that is actually better for your task. Also, this index is not good for frequently changing data; it should work fine when you have mostly read-only data.
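As a quick sanity check of the index expression and query, here is an illustrative one-liner (the category values are made up); note that to_tsvector already lower-cases and stems tokens:
SELECT to_tsvector('english',
           LOWER(immutable_array_to_string(ARRAY['Crime Drama', 'Psychological Thriller'])))
       @@ to_tsquery('english', 'thriller & crime');  -- returns true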
PS
This answer is inspired by answer & comments: https://stackoverflow.com/a/29640418/159923
Apart from the SQL syntax optimization, have you tried tuning PostgreSQL itself?
Checking the explain output, I found only two parallel workers planned and 25 kB of memory used for sorting:
Workers Planned: 2
Sort Method: quicksort Memory: 25kB
Your query is a typical OLAP query. Its performance is usually tied to memory (work memory and the number of CPU cores/workers used). By default, Postgres uses kB-level work memory and few workers. You can tune your postgresql.conf to make it behave like an OLAP-type database.
===================================================
Here is my suggestion: use more memory (about 9 MB of work_mem) and more CPUs (max 16):
# DB Version: 13
# OS Type: linux
# DB Type: dw
# Total Memory (RAM): 24 GB
# CPUs num: 16
# Data Storage: ssd
max_connections = 40
shared_buffers = 6GB
effective_cache_size = 18GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 9830kB
min_wal_size = 4GB
max_wal_size = 16GB
max_worker_processes = 16
max_parallel_workers_per_gather = 8
max_parallel_workers = 16
max_parallel_maintenance_workers = 4
You can append these lines at the end of your postgresql.conf and restart the PostgreSQL server for them to take effect.
For further optimization, reduce the number of connections and increase work_mem.
200 × 9830 kB is about 2 GB of memory for all connections; if you have fewer connections (for example, 100), each query gets more working memory.
====================================
Regarding the text array type and unnest: you can try adding a proper index.
That's all, good luck.
WangYong

PostgreSQL "pg_prewarm" buffer size

Table orders contains 1,500,000 tuples in total. After a fresh restart of the system, I ran the following:
SELECT pg_prewarm('orders');
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE o_totalprice < 100
Which gave a buffer output as following:
Buffers: shared hit=15768 read=10327
The select statement returns no records.
Now my question is: how did PostgreSQL arrive at 15768 blocks hit in the buffer cache?
Your shared_buffers is set to 128MB, right?
128 MB of shared buffers translates to 16384 blocks of size 8KB in the cache.
So when you run pg_prewarm('orders'), PostgreSQL will read the complete table into shared buffers. Now the table is bigger than your shared_buffers, so the first blocks “drop out” of the cache again when the last blocks are read, because shared_buffers cannot fit them all.
Increase shared_buffers if you want to have the whole table in the cache.
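To make the arithmetic concrete, here is a hedged sketch (default 8 kB block size assumed) for comparing the table's size against the cache:
SHOW shared_buffers;   -- 128MB = 128 * 1024 kB / 8 kB = 16384 blocks
SHOW block_size;       -- 8192
SELECT pg_size_pretty(pg_relation_size('orders')) AS table_size,
       pg_relation_size('orders') / 8192          AS table_blocks;
If table_blocks exceeds 16384, pg_prewarm cannot keep the whole table cached, which matches the mix of shared hits and reads you saw.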

Postgres multi-column index is taking forever to complete

I have a table with around 270,000,000 rows and this is how I created it.
CREATE TABLE init_package_details AS
SELECT pcont.package_content_id as package_content_id,
pcont.activity_id as activity_id,
pc.org_id as org_id,
pc.bed_type as bed_type,
pc.is_override as is_override,
pmmap.package_id as package_id,
pcont.activity_qty as activity_qty,
pcont.charge_head as charge_head,
pcont.activity_charge as charge,
COALESCE(pc.charge,0) - COALESCE(pc.discount,0) as package_charge
FROM a pc
JOIN b od ON
(od.org_id = pc.org_id AND od.status='A')
JOIN c pm ON
(pc.package_id=pm.package_id)
JOIN d pmmap ON
(pmmap.pack_master_id=pm.package_id)
JOIN e pcont ON
(pcont.package_id=pmmap.package_id);
I need to build indexes on the init_package_details table.
The table itself gets created in around 5-6 minutes.
I created a btree index like:
CREATE INDEX init_package_details_package_content_id_idx
ON init_package_details(package_content_id);
which took 10 minutes (more than the time to create and populate the table itself).
And when I create another index like:
CREATE INDEX init_package_details_package_act_org_bt_id_idx
ON init_package_details(activity_id,org_id,bed_type);
It just freezes and takes forever to complete. I waited around 30 minutes before manually cancelling it.
Below are stats from iotop -o, if it helps:
When I created the table: averaging around 110-120 MB/s (this is how 270 million rows got inserted in 5-6 minutes).
When I created the first index: averaging around 70 MB/s.
On the second index: crawling at 5-7 MB/s.
Could someone explain why this is happening? Is there any way I can speed up the index creation here?
EDIT 1: There are no other connections accessing the table, and pg_stat_activity shows active as the status throughout the running time. This happens inside a transaction (between BEGIN and COMMIT; the same .sql file contains many other scripts).
EDIT 2:
postgres=# show work_mem ;
work_mem
----------
5MB
(1 row)
postgres=# show maintenance_work_mem;
maintenance_work_mem
----------------------
16MB
Building indexes takes a long time; that's normal.
If you are not bottlenecked on I/O, you are probably bottlenecked on CPU.
There are a few things that improve the performance (see the sketch after this list):
Set maintenance_work_mem very high.
Use PostgreSQL v11 or better, where several parallel workers can be used.
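A minimal sketch of both suggestions, assuming you have enough free RAM for the session-level setting (the values are illustrative, not a sizing recommendation):
-- Raise the memory available to CREATE INDEX for this session only.
SET maintenance_work_mem = '2GB';
-- On PostgreSQL 11+, allow parallel workers for the index build.
SET max_parallel_maintenance_workers = 4;
CREATE INDEX init_package_details_package_act_org_bt_id_idx
    ON init_package_details (activity_id, org_id, bed_type);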

High number of live/dead tuples in postgresql/ Vacuum not working

There is a table which has 200 rows, but the number of live tuples showing there is much higher (around 60K):
select count(*) from subscriber_offset_manager;
count
-------
200
(1 row)
SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'subscriber_offset_manager'
ORDER BY n_dead_tup;
schemaname | relname | n_live_tup | n_dead_tup
------------+---------------------------+------------+------------
public | subscriber_offset_manager | 61453 | 5
(1 row)
But as seen from pg_stat_activity and pg_locks, we are not able to track any open connection:
SELECT query, state,locktype,mode
FROM pg_locks
JOIN pg_stat_activity
USING (pid)
WHERE relation::regclass = 'subscriber_offset_manager'::regclass
;
query | state | locktype | mode
-------+-------+----------+------
(0 rows)
I also tried a full vacuum on this table. The results:
Every time, zero rows are removed.
Sometimes, all the live tuples become dead tuples.
Here is the output:
vacuum FULL VERBOSE ANALYZE subscriber_offset_manager;
INFO: vacuuming "public.subscriber_offset_manager"
INFO: "subscriber_offset_manager": found 0 removable, 67920 nonremovable row versions in 714 pages
DETAIL: 67720 dead row versions cannot be removed yet.
CPU 0.01s/0.06u sec elapsed 0.13 sec.
INFO: analyzing "public.subscriber_offset_manager"
INFO: "subscriber_offset_manager": scanned 710 of 710 pages, containing 200 live rows and 67720 dead rows; 200 rows in sample, 200 estimated total rows
VACUUM
SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'subscriber_offset_manager'
ORDER BY n_dead_tup;
schemaname | relname | n_live_tup | n_dead_tup
------------+---------------------------+------------+------------
public | subscriber_offset_manager | 200 | 67749
and after 10 seconds:
SELECT schemaname, relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'subscriber_offset_manager'
ORDER BY n_dead_tup;
schemaname | relname | n_live_tup | n_dead_tup
------------+---------------------------+------------+------------
public | subscriber_offset_manager | 68325 | 132
How our app queries this table:
Our application generally selects some rows and, based on some business calculation, updates them.
select query -- select based on some id
select * from subscriber_offset_manager where shard_id=1 ;
update query -- update some other column for this selected shard id
Around 20 threads do this in parallel, and one thread works on only one row.
The app is written in Java, and we use Hibernate for DB operations.
The PostgreSQL version is 9.3.24.
One more interesting observation: when I stop the Java app and then run a full vacuum, it works fine (the number of rows and live tuples become equal). So something is wrong when we select and update continuously from the Java app.
Problem/Issue:
These live tuples sometimes become dead tuples and after some time come back to life.
Due to the above behaviour, selects from the table take time and increase the load on the server, since lots of live/dead tuples are there.
I know three things that keep VACUUM from doing its job:
Long running transactions.
Prepared transactions that did not get committed.
Stale replication slots.
See my blog post for details.
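Hedged diagnostic sketches for those three causes (note that the replication-slot view only exists on PostgreSQL 9.4+, so the third check does not apply to a 9.3 instance):
-- 1. Long-running transactions:
SELECT pid, state, xact_start, query
FROM pg_stat_activity
ORDER BY xact_start NULLS LAST;
-- 2. Prepared transactions that never got committed:
SELECT * FROM pg_prepared_xacts;
-- 3. Stale replication slots (9.4+):
SELECT * FROM pg_replication_slots;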
I found the issue ☺.
To understand the issue, consider the following flow:
Thread 1:
Opens a Hibernate session.
Makes some queries on Table-A.
Selects from subscriber_offset_manager.
Updates subscriber_offset_manager.
Closes the session.
Many threads of type Thread-1 run in parallel.
Thread 2 (these threads also run in parallel):
Opens a Hibernate session.
Makes some select queries on Table-A.
Does not close the session (a session leak).
Temporary solution: if I close all those connections made by Thread-2 using pg_cancel_backend, then vacuuming starts working.
We have recreated the issue many times and tried this solution, and it worked.
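A hedged sketch of that temporary fix using pg_terminate_backend, which ends the whole connection (pg_cancel_backend only cancels the currently running query, which does nothing for a session that is idle in transaction):
-- Terminate sessions that have been idle in transaction for a while;
-- the 5-minute threshold is illustrative.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < now() - interval '5 minutes';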
Now, the following doubts are still not answered:
Why is Postgres not showing any data related to the table "subscriber_offset_manager"?
Why does the issue not reproduce when, instead of running Thread-2, we run the selects on Table-A using psql?
Why does Postgres behave like this with JDBC?
Some more mind-blowing observations:
Even if we run the queries on "subscriber_offset_manager" in a different session, the issue still appears.
We found many instances where Thread-2 was working on some third table "Table-C" and the issue still appeared.
The state of all these transactions in pg_stat_activity is "idle in transaction".
@Erwin Brandstetter and @Laurenz Albe, do you know whether there is a bug related to Postgres/JDBC?
There might be locks after all; your query might be misleading:
SELECT query, state,locktype,mode
FROM pg_locks
JOIN pg_stat_activity USING (pid)
WHERE relation = 'subscriber_offset_manager'::regclass
pg_locks.pid can be NULL, then the join would eliminate rows. The manual for Postgres 9.3:
Process ID of the server process holding or awaiting this lock, or null if the lock is held by a prepared transaction
Bold emphasis mine. (Still the same in pg 10.)
Do you get anything for the simple query?
SELECT * FROM pg_locks
WHERE relation = 'subscriber_offset_manager'::regclass;
This could explain why VACUUM complains:
DETAIL: 67720 dead row versions cannot be removed yet.
This, in turn, would point to problems in your application logic / queries, locking more rows than necessary.
My first idea would be long running transactions, where even a simple SELECT (acquiring a lowly ACCESS SHARE lock) can block VACUUM from doing its job. 20 threads in parallel might chain up and lock out VACUUM indefinitely. Keep your transactions (and their locks) as brief as possible. And make sure your queries are optimized and don't lock more rows than necessary.
One more thing to note: transaction isolation levels SERIALIZABLE or REPEATABLE READ make it much harder for VACUUM to clean up. Default READ COMMITTED mode is less restrictive, but VACUUM can still be blocked as discussed.
Related:
What are the consequences of not ending a database transaction?
Postgres UPDATE … LIMIT 1
VACUUM VERBOSE outputs, nonremovable “dead row versions cannot be removed yet”?

Do I need to manually VACUUM temporary tables in PostgreSQL?

Consider I have an application server which:
uses connection pooling (with a relatively high number of allowed idle connections),
can run for months, and
makes heavy use of temporary tables (which are not DROP'ped on COMMIT).
The above means that I may have N "eternal" database sessions "holding" N temporary tables, which will only be dropped when the server is restarted.
I'm well aware that the autovacuum daemon can't access those temporary tables.
My question is: if I make frequent INSERTs to and DELETEs from temporary tables, and the tables are supposed to "live" for a long time, do I need to manually VACUUM those tables after a deletion, or would a single manual ANALYZE be enough?
Currently, if I execute
select
n_tup_del,
n_live_tup,
n_dead_tup,
n_mod_since_analyze,
vacuum_count,
analyze_count
from
pg_stat_user_tables
where
relname = '...'
order by
n_dead_tup desc;
I see that vacuum_count is always zero:
n_tup_del  n_live_tup  n_dead_tup  n_mod_since_analyze  vacuum_count  analyze_count
       64           3          64                    0             0             16
       50           1          50                   26             0              3
       28           1          28                    2             0              5
        7           1           7                    4             0              4
        3           1           3                    2             0              4
        1           6           1                    8             0              2
        0           0           0                    0             0              0
which may mean that manual VACUUM is indeed required.
https://www.postgresql.org/docs/current/static/sql-commands.html
ANALYZE — collect statistics about a database
VACUUM — garbage-collect and optionally analyze a database
VACUUM can optionally also analyze. So if all you want is fresh stats, just ANALYZE. If you want to "recover" unused rows, then VACUUM. If you want both, use VACUUM ANALYZE.
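An illustrative usage on a hypothetical temp table my_temp:
ANALYZE my_temp;          -- refresh planner statistics only
VACUUM my_temp;           -- reclaim dead row versions only
VACUUM ANALYZE my_temp;   -- do both in one pass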
We had an application that ran for 24+ hours using a lot of long-lived, quite heavily updated temp tables, and we used ANALYZE on them. But there is a problem with VACUUM: if you try to use it in a function, you get an error:
ERROR: VACUUM cannot be executed from a function or multi-command string
CONTEXT: SQL statement "vacuum xxxxxx"
PL/pgSQL function inline_code_block line 4 at SQL statement
SQL state: 25001
But later we discovered that temp tables were actually not so advantageous, at least for our app. Technically they are normal tables existing as data files on disk in a so-called temporary tablespace (either pg_default, or one you set via temp_tablespaces in postgresql.conf). But they use only so-called temp_buffers; they are not loaded into shared_buffers. So you have to set temp_buffers properly and rely more on the Linux page cache. And, as you already mentioned, the autovacuum daemon "does not see" them. Therefore we later switched to using normal tables.
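For reference, temp_buffers is a per-session setting and can only be changed before the session first touches a temporary table; a minimal sketch with an illustrative value:
-- Must run before the first use of temp tables in this session:
SET temp_buffers = '256MB';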