PostgreSQL database size increasing

I have a strange problem: the size of my PostgreSQL (8.3) database keeps increasing. So I made a dump, cleaned up the database, and re-imported the dump. The database size was reduced by roughly 50%.
Some information:
(1) AUTOVACUUM and REINDEX are running regularly in background.
(2) Database encoding is ASCII.
(3) Database location: /database/pgsql/data
(4) System: Suse-Ent. 10.
Any hints are appreciated.

If the dead tuples have stacked up beyond what can be accounted for in max_fsm_pages, a regular VACUUM will not be able to free everything. The end result is that the database grows larger and larger over time as dead space continues to accumulate. Running a VACUUM FULL should fix this problem; unfortunately, it can take a very long time on a large database.
If you're running into this problem frequently, you either need to vacuum more often (autovacuum can help here) or increase the max_fsm_pages setting. VACUUM VERBOSE tells you how many pages were freed and warns you if max_fsm_pages was exceeded, which helps you determine what the value should be. See the manual for more information: http://www.postgresql.org/docs/8.3/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM
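For example, a database-wide VACUUM VERBOSE run as the superuser ends with an FSM summary and prints an explicit hint once max_fsm_pages is too low; the output below is illustrative, not from the poster's system:

VACUUM VERBOSE;
-- ...per-table output trimmed; the summary at the very end looks like:
-- INFO:  free space map contains 114500 pages in 120 relations
-- DETAIL:  A total of 120000 page slots are in use (including overhead).
-- 160000 page slots are required to track all free space.
-- Current limits are:  153600 page slots, 1000 relations, using 965 kB.
-- NOTICE:  number of page slots needed (160000) exceeds max_fsm_pages (153600)
-- HINT:  Consider increasing the configuration parameter "max_fsm_pages" to a value over 160000.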
Fortunately, 8.4's visibility map resolves this issue. depesz has a great story on the subject, as usual: http://www.depesz.com/index.php/2008/12/08/waiting-for-84-visibility-maps/

Without knowing more specifics about your particular setup, a couple of things come to mind. When autovacuum runs, is it actually reclaiming space, and can you verify through the server logs that it is?
Secondly, especially if the previous answer was no, your AUTOVACUUM values may be incorrect. I would highly recommend reading the following on the subject: http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#AUTOVACUUM
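As a quick check, the relevant knobs are all visible in pg_settings (parameter names as of 8.3):

-- Is autovacuum on, and how aggressively is it triggered?
SELECT name, setting
FROM pg_settings
WHERE name IN ('autovacuum',
               'autovacuum_naptime',
               'autovacuum_vacuum_threshold',
               'autovacuum_vacuum_scale_factor');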

Running REINDEX shouldn't be necessary.
Run a database-wide VACUUM VERBOSE and check the last lines of the output for the FSM settings hint; maybe that is what's wrong.

Did you try a VACUUM FULL, too? (Warning: it takes an exclusive lock on each table it processes, potentially for a long time.) I am not sure that autovacuum is that aggressive...

If you haven't already, check your system for long-running idle transactions. They will prevent VACUUM (both manual and auto) from clearing out space.
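On 8.3 they show up in pg_stat_activity with a current_query of '<IDLE> in transaction' (newer versions expose a state column instead); a query along these lines lists the oldest offenders first:

-- Sessions idling inside an open transaction, which pin dead rows.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;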

Related

How to determine how much "slack" is in a postgres database?

I've got a postgres database which I recently vacuumed. I understand that process marks space as available for future use, but for the most part does not return it to the OS.
I need to track how close I am to using up that available "slack space" so I can ensure the entire database does not start to grow again.
Is there a way to see how much empty space the database has inside it?
I'd prefer to just do a VACUUM FULL and monitor disk consumption, but I can't lock the table for a prolonged period, nor do I have the disk space.
Running version 13 on headless Ubuntu if that's important.
Just like internal free space is not given back to the OS, it also isn't shared between tables or other relations (like indexes). So having free space in one table isn't going to help if a different table is the one growing. You can use pg_freespacemap to get a fast approximate answer for each table, or pgstattuple for more detailed data.
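For example, something along these lines; both modules ship in contrib, and 'my_table' is a placeholder:

CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Fast approximation: bytes the free space map considers reusable.
SELECT sum(avail) AS approx_free_bytes
FROM pg_freespace('my_table');

-- Slower but exact: scans the whole table and reports dead tuples too.
SELECT table_len, dead_tuple_len, free_space, free_percent
FROM pgstattuple('my_table');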

Reclaim disk space without locking table - PostgreSQL 10

I have a couple of tables in a PostgreSQL database which are used very frequently (for insert/delete purposes). Sometimes the tables grow to several GB in size. How do I reclaim the disk space from these tables without locking them? The tables need to be available almost all the time, so I can't afford to have them locked. VACUUM FULL reclaims the disk space but locks the table, so I can't use the FULL option.
Can someone please suggest a way?
Thanks
Often you can avoid the problem by configuring autovacuum to be aggressive enough that it can keep up with the change rate.
If that doesn't do the trick, or if you have regular mass DELETEs, look into a tool like pg_squeeze or pg_repack.
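As a sketch of the first option, autovacuum can be tuned per table; 'busy_table' and the thresholds below are illustrative, not a universal recommendation:

-- Vacuum after ~1% of rows are dead (the default is 20%), and let the
-- autovacuum worker run unthrottled on this table.
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 0
);

Both pg_squeeze and pg_repack rebuild the table online and only take a short exclusive lock at the very end, to swap in the compacted copy.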

How to Remove Dead Row Versions in Postgres 9.2

I ran a vacuum on the tables of my database and it does not appear to be helping. For example, I have a huge table, and when I run VACUUM on it, it reports 87887889 dead row versions that cannot yet be removed.
My question is: how do I get rid of these dead rows?
You have two basic options if a routine vacuum is insufficient. Both require a full table lock, and on 9.2 both rewrite the table, so both need enough additional disk space to hold a complete copy of it while they run.
VACUUM FULL. This rewrites the table without the dead rows, keeping its existing physical order.
CLUSTER. This rewrites the table in a physical order optimized for a given index, which can also speed up later scans that follow that order.
In general I would recommend using CLUSTER during a maintenance window if disk space allows.
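A minimal sketch of the CLUSTER route, assuming the table has a primary-key index (the names are placeholders):

-- Rewrites the table in primary-key order, discarding the dead row
-- versions; needs an exclusive lock plus disk space for a full copy.
CLUSTER big_table USING big_table_pkey;
ANALYZE big_table;  -- refresh planner statistics after the rewrite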

Executing VACUUM FULL on postgresql db which is stopped/down

I am attempting to restart a postgresql db which has stopped/is down and requires a VACUUM.
http://suwala.eu/blog/2010/10/09/how-to-vacuum-postgresql/
Following the above sequence of commands, I can't seem to get the last line to execute right.
$ postgres -D /var/lib/pgsql/data YOUR_DATABASE_NAME < /tmp/fix.sql
This gives me an error that says
postgres: invalid argument: "YOUR_DATABASE_NAME"
Try "postgres --help" for more information.
Any idea why?
CLARIFICATION
The 'YOUR_DATABASE_NAME' and the data directory I used on my server are the correct ones.
The referenced "how-to-vacuum-postgresql" page referenced in the question gives some very bad advice when it recommends VACUUM FULL. All that is needed is a full-database vacuum, which is simply a VACUUM run as the database superuser against the entire database (i.e., you don't specify any table name).
A VACUUM FULL works differently based on the version, but it eliminates all space within the heap files which is held by the database for quick re-use, and releases it to the OS. This can be much slower than the minimum needed to get back to a usable database, by orders of magnitude. And since any inserts or updates after the VACUUM FULL require OS calls to re-allocate space to the database, it can cause slower execution afterward, unless your database had a lot of bloat. (Although, if you turned off autovacuum, it might be in horrible shape, but you probably want to get back on your feet first, and sort that out later.)
Another issue with VACUUM FULL before version 9.0 is that while it eliminates bloat in a table's heap files, it tends to increase bloat in its index files, sometimes dramatically. If you issue a VACUUM FULL, you should normally follow it with a REINDEX to get the indexes back into good shape.
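On those older versions the sequence would look like this ('my_table' is a placeholder):

VACUUM FULL my_table;   -- compacts the heap, but bloats the indexes
REINDEX TABLE my_table; -- rebuilds the indexes from scratch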
The page referenced in the question also fails to heed the advice given in the PostgreSQL docs at http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND to use single-user mode:
since the system will not execute commands once it has gone into the
safety shutdown mode, the only way to do this is to stop the server
and use a single-user backend to execute VACUUM. The shutdown mode is
not enforced by a single-user backend. See the postgres reference page
for details about using a single-user backend.
As others have mentioned -- there is almost no use case where turning off autovacuum is beneficial. It may be useful to supplement the autovacuum activity with explicit vacuums on large tables, or you may want to adjust autovacuum configuration, but really -- don't turn it off or you will see bloat which saps performance and you'll run into transaction ID wraparound problems periodically. People who notice a performance hit when autovacuum is performing maintenance sometimes have an instinct to make it less aggressive in triggering, but that is usually counter-productive. It is generally better to adjust the autovacuum cost limitation parameters to pace the work, rather than have it neglect tables which need maintenance.
This appears to be an issue in PostgreSQL: according to the documentation for both 9.0 and 8.3, the command should work on those versions, but it doesn't.
However, using --single switch makes it work:
postgres --single -D [path-to-data-dir] [db-name] < /tmp/fix.sql
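Per the advice above, the fix.sql fed to that backend only needs a plain, database-wide vacuum; something like this is sufficient:

-- Minimal fix.sql: a plain VACUUM (no table name, no FULL) is all the
-- wraparound safety shutdown requires before a normal restart.
VACUUM;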

Free space after massive postgres delete

I have a 9 million row table. I figured out that a large amount of it (around 90%) can be freed up. What actions are needed after the cleanup? Vacuum, reindex etc.
If you want to free up space on the file system, either VACUUM FULL or CLUSTER can help you. You may also want to run ANALYZE after these, to make sure the planner has up-to-date statistics but this is not specifically required.
It is important to note using VACUUM FULL places an ACCESS EXCLUSIVE lock on your table(s) (blocking any operation, writes & reads), so you probably want to take your application offline for the duration.
In PostgreSQL 8.2 and earlier, VACUUM FULL is probably your best bet.
In PostgreSQL 8.3 and 8.4, the CLUSTER command was significantly improved, so VACUUM FULL is not recommended -- it's slow and it will bloat your indexes. CLUSTER will re-create indexes from scratch and without the bloat. In my experience, it's usually much faster too. CLUSTER will also sort the whole physical table using an index, so you must pick an index. If you don't know which, the primary key will work fine.
In PostgreSQL 9.0, VACUUM FULL was changed to work like CLUSTER, so both are good.
It's hard to make predictions, but on a properly tuned server with commodity hardware, 9 million rows shouldn't take longer than 20 minutes.
See the documentation for CLUSTER.
PostgreSQL wiki about VACUUM FULL and recovering dead space
You definitely want to run a VACUUM, to free up that space for future inserts. If you want to actually reclaim that space on disk, making it available to the OS, you'll need to run VACUUM FULL. Keep in mind that VACUUM can run concurrently, but VACUUM FULL requires an exclusive lock on the table.
You will also want to REINDEX, since the indexes will remain bloated even after the VACUUM runs. If possible, a much faster way to do this is to drop the index and create it again from scratch.
You'll also want to ANALYZE, which you can just combine with the VACUUM.
See the documentation for more info.
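Put together, the cleanup might look like this ('big_table' is a placeholder):

VACUUM ANALYZE big_table;  -- mark dead space reusable, refresh statistics
REINDEX TABLE big_table;   -- rebuild the still-bloated indexes (takes a lock)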
Wouldn't it be more optimal to create a temporary table with just the 10% of records you need, then drop the original table and rename the temporary one to the original name?
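A sketch of that copy-and-swap approach, assuming a brief outage is acceptable and you re-create indexes, constraints, and grants yourself (the names and the predicate are placeholders):

BEGIN;
-- Copy only the rows worth keeping into a new table.
CREATE TABLE big_table_new AS
    SELECT * FROM big_table
    WHERE keep_me;                    -- placeholder predicate
DROP TABLE big_table;                 -- takes an exclusive lock
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;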
I'm relatively new to the world of Postgres, but I understand VACUUM ANALYZE is recommended. I think there's also a sub-option which just frees up space. I found REINDEX useful as well when doing batch inserts or deletes. Yes, I've been working with tables with a similar number of rows, and the speed increase is very noticeable (Ubuntu, Core 2 Quad).