Deleted rows from PostgreSQL table but disk space not released

I run df -h on my server and it reports 36 GB used, 9 GB available. I can see that my PostgreSQL database takes up 26 GB on disk.
I delete millions of rows from my database, roughly half of the data in there.
I run df -h again and it shows exactly the same numbers: 36 GB used, 9 GB available.
I googled and found something about the VACUUM command, which said that deleted rows still take up disk space until you run a vacuum. I ran VACUUM in verbose mode and the output looks fine, but still no disk space has been released.

Related

FATAL: could not write lock file "postmaster.pid": No space left on device

I'm running a PostgreSQL database in a Docker container. I allocated 300 GB for Docker and everything was going great, with around 90% of the space free. One day I got this error: PostgreSQL Database directory appears to contain a database; Skipping initialization:: FATAL: could not write lock file "postmaster.pid": No space left on device. I don't know where it came from or how to actually fix it. As a temporary fix I increased the size to 400 GB, which solved it, but why? Now I have 95% space left, which is consistent with the previous output of 90% free space. I tried tracing the usage with df -h and some other commands and didn't find anything. Has anyone faced something similar?
I tried df -h to check disk usage and allocated more Docker resources.

psql runs out of memory when restoring dump

I have a PostgreSQL text dump file approximately 4.5GB in size (uncompressed) that I am trying to restore, but the restore always fails due to running out of memory.
Interestingly enough, no matter what I try it always fails at the exact same line number of the dump file, which leads me to believe the changes I have attempted have had no effect. (I did look at this line number in the file and it is just another row of data, nothing significant is occurring at that point in the file.)
I am using psql with the -f option, as I read that it can be better than feeding the dump via standard input. Both methods fail, however.
I have tried the following:
increase work_mem from 4MB to 128MB
increase shared_buffers from 128MB to 2GB
increase VM memory from 8GB to 16GB
Using both top and pg_top I can see (what I believe shows that) both the OS and the database still have memory available when psql fails. I'm not doubting that something somewhere is running out of memory; I just wish I had a better way of telling what exactly that was.
Other information that may be helpful:
PostgreSQL 10.5
Ubuntu 16.04 LTS running on VMWare Workstation

Unable to recover disk space in PostgreSQL

The system is FC21 with PostgreSQL 9.3.9. The cluster has 6 databases and uses 38 GB of storage in the pgsql directory. Recently over 20 GB of redundant data has been removed. Each db has been vacuumed twice with a 'vacuum all' command; additionally, the entire cluster has been vacuumed twice with a vacuumdb -a command. All ran successfully. PostgreSQL has been stopped and restarted.
For verification, a pg_dumpall command creates a 12 GB file.
All the tables from one db were removed, yet:
select pg_size_pretty(pg_database_size('db'));
shows over 6 GB remaining.
How can the space be recovered? It seems unreasonable to have to do a pg_restore to recover the space. I have read and re-read the 'recovering disk space' document.
A plain VACUUM only returns space to the operating system when the free space is at the end of the table files. You will want VACUUM FULL or vacuumdb -f.
You might also want to consider reindexdb, since all this row rewriting might leave your indexes a little bloated.
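As a rough sketch (mydb and mytable are placeholder names; the flags are the standard vacuumdb/reindexdb options):
vacuumdb --full --verbose mydb
reindexdb mydb
or, for a single table from psql:
VACUUM FULL VERBOSE mytable;
Keep in mind that VACUUM FULL takes an exclusive lock on each table while it rewrites it.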

How to reclaim space occupied by unused LOBs in PostgreSQL

I have a medium sized database cluster running on PostgreSQL 8.3.
The database stores digital files (images) as LOBs.
There is a fair bit of activity in the database cluster; a lot of content is created and deleted in an ongoing manner.
Even though the application table that holds the OIDs is maintained properly by the application (its rows are deleted when an image file is deleted), the size of the database cluster grows continuously.
Autovacuum is active, so this shouldn't happen.
LOBs are NOT deleted from the database when the rows of your application table (containing the OIDs) get deleted.
This also means that space will NOT be reclaimed by the VACUUM process.
In order to get rid of unused LOBs, you have to run vacuumlo on the databases. vacuumlo will delete all of the unreferenced LOBs from a database.
Example call:
vacuumlo -U postgres -W -v <database_name>
(I only included the -v to make vacuumlo a bit more verbose so that you see how many LOBs it removes)
After vacuumlo has deleted the LOBs, you can run VACUUM FULL (or let the auto-vacuum process run).
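As a rough illustration (assuming the unused space lives in the standard pg_largeobject catalog), you can check its size from psql before and after:
select pg_size_pretty(pg_total_relation_size('pg_largeobject'));
then run vacuumlo from the shell as above, and finally:
VACUUM FULL pg_largeobject;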

PostgreSQL vacuum temp files?

I've got a "little" problem. A week ago my database was reaching full disk capacity. I deleted many rows in different tables trying to free up disk space. After which I tried running a full vacuum which did not complete.
What I want to know is. When I stopped the vacuum from fully completing does it leave any temp files on the disk that I have to delete manually?
I now have a database which is at a 100% disk capacity, which needlessly to say is a big problem.
Any tips to free disk space?
I'm running SUSE with a PostgreSQL 8.1.4 database.
First of all:
UPGRADE
Even if you can't go to 8.2, 8.3 or 8.4, at least upgrade to the newest 8.1 (which is 8.1.17 at the moment, but will be 8.1.18 in 1-2 days).
Second: diagnose what the problem actually is.
Use the du tool to find out where exactly the space went. Which directory is occupying too much space?
Check with df what the total used space is, and then check how much of it is the PostgreSQL directory.
The best option is to:
cd YOUR_PGDATA_DIR
du -sk *
cd base
du -sk *
cd LARGEST DIR FROM PREVIOUS COMMAND
du -sk * | sort -nr | head
Now that you know which directory in PGDATA is using the space, you can do something about it.
If it's logs or pg_temp, restart PostgreSQL or remove the logs (pg_clog and pg_xlog are not logs in the common meaning of the word; never delete anything from there!).
If it's something in your base directory, then:
The numerical directories in the base directory correspond to databases. You can check which is which with:
select oid, datname from pg_database;
When you know the database that is using most of the space, connect to it, and check which files are using most of the space.
File names will be numerical, with an optional ".digits" suffix; this suffix is (for now) irrelevant, and you can check what exactly a file represents by issuing:
select relname from pg_class where relfilenode = <NUMBER_FROM_FILE_NAME>;
Once you know which tables/indexes use most of the space, you can VACUUM FULL them, or (much better) issue a CLUSTER command on them.
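For example (big_table and big_table_pkey are placeholder names; note that on the 8.1 series CLUSTER uses the older index-first syntax):
CLUSTER big_table_pkey ON big_table;
or, for the simpler option:
VACUUM FULL VERBOSE big_table;
CLUSTER rewrites the table in index order, so it needs roughly as much free space as the table itself while it runs.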
On a different tangent: you can find out what in the database is using lots of space with a query. That can help you locate candidates to TRUNCATE, to reclaim enough working space to clean up the tables with deleted rows.
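A query along these lines (a generic sketch using the standard catalogs) does the job:
select relname, relkind, pg_size_pretty(pg_total_relation_size(oid)) as total_size
from pg_class
where relkind in ('r', 'i')
order by pg_total_relation_size(oid) desc
limit 20;
Including relkind lets you see at a glance whether the space is going to tables ('r') or indexes ('i').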
Note that deleting lots of rows but not VACUUMing frequently enough to keep disk space in check will often lead to a condition called index bloat, which VACUUM FULL doesn't help with at all. You'll know you're there when the query I suggested shows most of your space is taken up by indexes rather than regular tables. You'll need CLUSTER, which needs as much free disk space as the table itself to rebuild everything, to recover from that problem.