We run PostgreSQL 9.5.2 in an RDS instance. One thing we noticed is that a certain table sometimes grows very rapidly in size.
The table in question has only 33k rows and ~600 columns. All columns are numeric (decimal(25, 6)). After vacuum full, the "total_bytes" as reported in the following query
select c.relname, pg_total_relation_size(c.oid) AS total_bytes
from pg_class c;
is about 150MB. However, we observed this grew to 71GB at one point. In a recent episode, total_bytes grew by 10GB in a 30 minute period.
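To watch just this one table it can be narrowed down to something like the following (a sketch, with my_table standing in for the real table name):
select pg_size_pretty(pg_total_relation_size('my_table'));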
During the episode mentioned above, we had a batch update query that runs ~4 times per minute and updates every record in the table. However, at other times the table size remained constant despite similar update activity.
I understand this is probably caused by "dead records" being left over from the updates. Indeed, when this table grows too big, simply running vacuum full shrinks it back to its normal size (150MB). My questions are:
Have other people experienced similar rapid growth in table size in PostgreSQL, and is this normal?
If our batch update queries are causing the rapid growth in table size, why doesn't it happen every time? In fact, I tried to reproduce it manually by running something like
update my_table set x = x * 2
but couldn't -- table size remained the same before and after the query.
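One way to see whether such a manual update actually leaves dead rows behind is to compare the statistics and size before and after running it (a sketch against the hypothetical my_table, using the standard pg_stat_user_tables view):
select n_live_tup, n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) as total_size
from pg_stat_user_tables
where relname = 'my_table';
-- note: n_dead_tup is updated with a slight delay by the statistics collector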
The problem is having 600 columns in a single table, which is never a good idea. This is going to cause a lot of problems; table size is just one of them.
From the PostgreSQL docs...
The actual storage requirement [for numeric values] is two bytes for each group of four decimal digits, plus three to eight bytes overhead.
So decimal(25, 6) is something like 8 + (31 / 4 * 2) or about 24 bytes per column. At 600 columns per row that's about 14,400 bytes per row or 14k per row. At 33,000 rows that's about 450 megs.
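That per-row estimate can be compared against the actual on-disk width with pg_column_size over whole rows (a rough sketch against the hypothetical my_table; numeric stores only the digits actually present, so real rows may be smaller than the worst case):
select avg(pg_column_size(t.*)) as avg_row_bytes
from my_table t;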
If you're updating every row 4 times per minute, that's going to leave about 1.8 gigs per minute of dead rows.
You should fix your schema design.
You shouldn't need to touch every row of a table 4 times a minute.
You should ask a question about redesigning that table and process.
Related
We've noticed one of our tables growing considerably on PG 12. This table is the target of very frequent updates, with a mix of column types, including a very large text column (often with over 50KB of data). We run a local cron job that looks for rows older than X time and sets the text column to NULL (as we no longer need the data for that particular column after X amount of time).
We understand this does not actually free up disk space due to the MVCC model, but we were hoping that auto-vacuum would take care of this. To our surprise, the table continues to grow (now over 40GB) without auto-vacuum running. Running a vacuum manually has addressed the issue and we no longer see growth.
This has led me to investigate other tables, and I'm realising that I don't understand how auto-vacuum is triggered at all.
Here is my understanding of how it works, which hopefully someone can pick apart:
I look for tables that have a large amount of dead tuples in them:
select * from pg_stat_all_tables ORDER BY n_dead_tup desc;
I identify tableX with 33169557 dead tuples (n_dead_tup column).
I run select * from pg_class ORDER BY reltuples desc; to check how many estimated rows there are in table tableX.
I identify 1725253 rows via the reltuples column.
I confirm my autovacuum settings: autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor = 0.2
I apply the formula threshold + pg_class.reltuples * scale_factor, so 50 + 1725253 * 0.2, which returns 345100.6 (a query applying this formula across all tables is sketched below).
It is my understanding that auto-vacuum will start on this table once ~345100 dead tuples are found. But tableX is already at a whopping 33169557 dead tuples! The last_autovacuum on this table was back in February.
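For reference, the same formula can be applied across all tables with something like the following (a sketch using pg_stat_user_tables and the global settings; it ignores any per-table reloptions overrides):
select s.relname,
       s.n_dead_tup,
       current_setting('autovacuum_vacuum_threshold')::bigint
         + current_setting('autovacuum_vacuum_scale_factor')::numeric * c.reltuples
         as autovacuum_trigger_point
from pg_stat_user_tables s
join pg_class c on c.oid = s.relid
order by s.n_dead_tup desc;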
Any clarification would be welcome.
Your algorithm is absolutely correct.
Here are some reasons why things could go wrong:
autovacuum runs, but is so slow that it never gets done
If you see no running autovacuum, that is not your problem.
autovacuum runs, but a long running open transaction prevents it from removing dead tuples
other tables need to be vacuumed more urgently (to avoid transaction ID wraparound), so the three workers are busy with other things
autovacuum runs, but conflicts with high concurrent locks on the table (LOCK TABLE, ALTER TABLE, ...)
This makes autovacuum give up and try again later.
autovacuum is disabled, perhaps only for that table
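Each of these can be checked from the usual catalog views, for example (a sketch; tableX stands in for the real table name):
-- is an autovacuum worker running right now, and on what?
select pid, query, xact_start
from pg_stat_activity
where backend_type = 'autovacuum worker';

-- long-running transactions that keep dead tuples from being removed
select pid, state, xact_start
from pg_stat_activity
where xact_start < now() - interval '1 hour'
order by xact_start;

-- is autovacuum disabled for this table via storage parameters?
select relname, reloptions
from pg_class
where relname = 'tableX';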
I have a table with around 270,000,000 rows and this is how I created it.
CREATE TABLE init_package_details AS
SELECT pcont.package_content_id as package_content_id,
pcont.activity_id as activity_id,
pc.org_id as org_id,
pc.bed_type as bed_type,
pc.is_override as is_override,
pmmap.package_id as package_id,
pcont.activity_qty as activity_qty,
pcont.charge_head as charge_head,
pcont.activity_charge as charge,
COALESCE(pc.charge,0) - COALESCE(pc.discount,0) as package_charge
FROM a pc
JOIN b od ON
(od.org_id = pc.org_id AND od.status='A')
JOIN c pm ON
(pc.package_id=pm.package_id)
JOIN d pmmap ON
(pmmap.pack_master_id=pm.package_id)
JOIN e pcont ON
(pcont.package_id=pmmap.package_id);
I need to build index on the init_package_details table.
This table is getting created at around 5-6 mins.
I have created btree index like,
CREATE INDEX init_package_details_package_content_id_idx
ON init_package_details(package_content_id);
which is taking 10 mins (more than the time to create and populate the table itself).
And, when I create another index like,
CREATE INDEX init_package_details_package_act_org_bt_id_idx
ON init_package_details(activity_id,org_id,bed_type);
It just freezes and takes forever to complete. I waited for around 30 mins before I manually cancelled it.
Below are stats from iotop -o if it helps,
When I created table Averaging around 110-120 MB/s (This is how 270 million rows got inserted in 5-6 mins)
When I created First Index, It was averaging at around 70 MB/s
On second index, it is snailing at 5-7 MB/s
Could someone explain Why is this happening? Is there anyway I can speedup the index creations here?
EDIT 1: There are no other connections accessing the table. And pg_stat_activity shows active as the status throughout the running time. This happens inside a transaction (between BEGIN and COMMIT; the same .sql file contains many other scripts).
EDIT 2:
postgres=# show work_mem ;
work_mem
----------
5MB
(1 row)
postgres=# show maintenance_work_mem;
maintenance_work_mem
----------------------
16MB
(1 row)
Building indexes takes a long time; that's normal.
If you are not bottlenecked on I/O, you are probably bottlenecked on CPU.
There are a few things you can do to improve performance:
Set maintenance_work_mem very high.
Use PostgreSQL v11 or better, where several parallel workers can be used.
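For example, for the session that builds the indexes (a sketch; the exact values depend on available RAM and CPU cores):
SET maintenance_work_mem = '2GB';
-- PostgreSQL 11 or later: allow parallel workers for CREATE INDEX
SET max_parallel_maintenance_workers = 4;

CREATE INDEX init_package_details_package_act_org_bt_id_idx
    ON init_package_details (activity_id, org_id, bed_type);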
I have a large PostgreSQL table of 29 million rows. The size (according to the stats tab in pgAdmin) is almost 9GB. The table is PostGIS-enabled with an empty geometry column.
I want to UPDATE the geometry column using ST_GeomFromText, reading from X and Y coordinate columns (SRID: 27700) stored in the same table. However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
To overcome this, should I UPDATE the 29 million rows in batches/stages? How can I do 1 million rows (the FIRST 1 million), then do the next 1 million rows until I reach 29 million?
Or are there other more efficient ways of updating large tables like this?
I should add, the table is hosted in AWS.
My UPDATE query is:
UPDATE schema.table
SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')',27700);
You did not give any server specs; writing 9GB can be pretty fast on recent hardware.
You should be OK with one long UPDATE, unless you have concurrent writes to this table.
A common trick to overcome this problem (a very long transaction locking writes to the table) is to split the UPDATE into ranges based on the primary key, run in separate transactions.
/* Use PK or any attribute with a known distribution pattern */
UPDATE schema.table SET ... WHERE id BETWEEN 0 AND 1000000;
UPDATE schema.table SET ... WHERE id BETWEEN 1000001 AND 2000000;
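If writing out every range by hand is impractical, psql can generate and run the statements, one implicit transaction per range when autocommit is on (a sketch, assuming a dense integer primary key named id and reusing the UPDATE from the question; schema.table is the placeholder name already used above):
SELECT format(
         $sql$UPDATE schema.table
              SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')', 27700)
              WHERE id BETWEEN %s AND %s$sql$,
         g, g + 999999)
FROM generate_series(0, 28000000, 1000000) AS g
\gexec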
For a high level of concurrent writes, people use more subtle tricks (like SELECT FOR UPDATE / NOWAIT, lightweight locks, retry logic, etc.).
From my original question:
However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
Turns out our Amazon AWS instance database was running out of space, stopping my original ST_GeomFromText query from completing. Freeing up space fixed it.
On an important note, as suggested by #mlinth, ST_Point ran my query far quicker than ST_GeomFromText (24mins vs 2hours).
My final query being:
UPDATE schema.tablename
SET geom = ST_SetSRID(ST_Point(eastingcolumn,northingcolumn),27700);
I am relatively new to using Postgres, but am wondering what could be the workaround here.
I have a table with about 20 columns and 250 million rows, and an index created for the timestamp column time (but no partitions).
Queries sent to the table have been failing (although the "view first/last 100 rows" function in pgAdmin works), running endlessly, even simple select * queries.
For example, if I want to LIMIT a selection of the data to 10
SELECT * from mytable
WHERE time::timestamp < '2019-01-01'
LIMIT 10;
Such a query hangs - what can be done to optimize queries in a table this large? When the table was of a smaller size (~ 100 million rows), queries would always complete. What should one do in this case?
If time is of data type timestamp or the index is created on (time::timestamp), the query should be fast as lightning.
Please show the CREATE TABLE and the CREATE INDEX statement, and the EXPLAIN output for the query for more details.
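For example, the estimated plan for the query from the question can be captured with plain EXPLAIN (EXPLAIN (ANALYZE, BUFFERS) gives actual timings, but it executes the query):
EXPLAIN
SELECT * FROM mytable
WHERE time::timestamp < '2019-01-01'
LIMIT 10;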
"Query that doesn't complete" usually means that it does disk swaps. Especially when you mention the fact that with 100M rows it manages to complete. That's because index for 100M rows still fits in your memory. But index twice this size doesn't.
Limit won't help you here, as database probably decides to read the index first, and that's what kills it.
You could try and increase available memory, but partitioning would actually be the best solution here.
Partitioning means smaller tables. Smaller tables means smaller indexes. Smaller indexes have better chances to fit into your memory.
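A minimal sketch of declarative range partitioning (PostgreSQL 10 or later; the column name time comes from the question, while the other columns and the yearly bounds are just placeholders):
CREATE TABLE mytable_partitioned (
    id    bigint,                   -- placeholder columns; the real table has ~20
    value numeric,
    time  timestamp NOT NULL
) PARTITION BY RANGE (time);

CREATE TABLE mytable_2018 PARTITION OF mytable_partitioned
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

CREATE TABLE mytable_2019 PARTITION OF mytable_partitioned
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');

-- on PostgreSQL 11+ an index on the partitioned table cascades to every partition
CREATE INDEX ON mytable_partitioned (time);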
The table contains around 700,000 rows. Is there any way to make the query run faster?
This table is stored on a server.
I have tried running the query selecting only specific columns.
If select * from table_name is unusually slow, check for these things:
Network speed. How large is the data and how fast is your network? For large queries you may want to think about your data in bytes instead of rows. Run select bytes/1024/1024/1024 gb from dba_segments where segment_name = 'TABLE_NAME'; and compare that with your network speed.
Row fetch size. If the application or IDE is fetching one-row-at-a-time, each row has a large overhead with network lag. You may need to increase that setting.
Empty segment. In a few weird cases the table's segment size can increase and never shrink. For example, if the table used to have billions of rows, and they were deleted but not truncated, the space would not be released. Then a select * from table_name may need to read a lot of empty extents to get to the real data. If the GB size from the above query seems too large, run alter table table_name move; to rebuild the table and possibly save space.
Recursive query. A query that simple almost couldn't have a bad execution plan. It's possible, but rare, for a recursive query to have a bad execution plan. While the query is running, look at select * from gv$sql where users_executing > 0;. There might be a data dictionary query that's really slow and needs to be tuned.