I am getting a disk space error while running a batch process against a PostgreSQL database.
However, the df -h command shows that the machine has enough space.
Below is the exact error:
org.springframework.dao.DataAccessResourceFailureException: PreparedStatementCallback; SQL [INSERT into BATCH_JOB_INSTANCE(JOB_INSTANCE_ID, JOB_NAME, JOB_KEY, VERSION) values (?, ?, ?, ?)]; ERROR: could not extend file "base/16388/16452": No space left on device
Hint: Check free disk space.
What is causing this issue?
EDIT
The postgres data directory is /var/opt/rh/rh-postgresql96/lib/pgsql/data:
df -h /var/opt/rh/rh-postgresql96/lib/pgsql/data
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 100G 63G 38G 63% /
Most likely there are some queries that create large temporary files which fill up your hard disk temporarily. These files will be deleted as soon as the query is done (or has failed), so the file system has enough free space when you look.
Set log_temp_files = 10240 in postgresql.conf (and reload) to log all temporary files exceeding 10 MB; then you can check the log file to see if this is indeed the reason.
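On 9.4 and later (you are on 9.6) the same change can be made from SQL; a minimal sketch, assuming superuser access:
ALTER SYSTEM SET log_temp_files = 10240;  -- threshold is in kB, so 10240 = 10 MB
SELECT pg_reload_conf();                  -- log_temp_files only needs a reload, not a restart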
Try to identify the bad queries and fix them.
If temporary files are not the problem, maybe temporary tables are. They are dropped automatically when the database session ends. Does your application use temporary tables?
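If you want to check for them from another session, a query along these lines lists the temporary tables currently present (relpersistence requires 9.1 or later):
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relpersistence = 't'   -- 't' marks temporary relations
  AND c.relkind = 'r';         -- ordinary tables only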
Another possibility might be files created by something other than the database.
Related
I'm running a PostgreSQL database in a Docker container. I allocated 300 GB for Docker and everything was going great, with about 90% free space. One day I got this error, which I don't know where it came from or how to actually fix it:
PostgreSQL Database directory appears to contain a database; Skipping initialization:: FATAL: could not write lock file "postmaster.pid": No space left on device
Temporarily, I just increased the size to 400 GB, which fixed it, but why? Now I have 95% space left, which is consistent with the previous output of 90% free space. I tried tracing the usage with df -h and some other commands and didn't find anything. Has anyone faced something similar?
I tried df -h to check disk usage and allocated more Docker resources.
I'm running a query that duplicates a very large table (92 million rows) on PostgreSQL. After 3 iterations I got this error message:
The query was:
CREATE TABLE table_name
AS SELECT * FROM big_table
The issue isn't due to lack of space in the database cluster: at the time of running the query the cluster was at 0.3% of its maximum possible storage, and the table size is about 0.01% of max storage including all replicas. I also checked temporary files, and it's not that.
You are definitely running out of file system resources.
Make sure you got the size right:
SELECT pg_table_size('big_table');
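Note that pg_table_size does not include indexes. To see the full on-disk footprint of the table together with its indexes and TOAST data, you could also run:
SELECT pg_size_pretty(pg_total_relation_size('big_table'));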
Don't forget that the files backing the new table are deleted after the error, so it is no surprise that you have lots of free space after the statement has failed.
One possibility is that you are not running out of disk space, but of free inodes (df -i reports inode usage per file system). How to examine the free resources differs from file system to file system; for ext4 on Linux it would be
sudo dumpe2fs -h /dev/... | grep Free
The Postgres file system is out of sync with the database.
Basically, some tables were dropped in Postgres, but the files under /base still exist.
Now I have orphaned files taking up too much space. How do I reclaim the disk space?
I'd rather not mess with deleting files from the PG data directory unless I am certain they can be deleted.
I tried locating the file names (relfilenode values) in pg_class, but they don't exist there.
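The lookup looked roughly like this (the file name here is only a hypothetical example):
SELECT relname FROM pg_class WHERE relfilenode = 123456;  -- returned no rows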
When I calculate database sizes with
SELECT pg_size_pretty(pg_database_size(datname)) FROM pg_database;
the master is 100 GB larger than the slave, so there is definitely space that needs to be reclaimed, but how?
I've got a "little" problem. A week ago my database was reaching full disk capacity. I deleted many rows in different tables trying to free up disk space, after which I tried running a full vacuum, which did not complete.
What I want to know is: when I stopped the vacuum before it fully completed, did it leave any temp files on the disk that I have to delete manually?
I now have a database which is at 100% disk capacity, which, needless to say, is a big problem.
Any tips to free disk space?
I'm running SUSE with a PostgreSQL 8.1.4 database.
First of all:
UPGRADE
Even if you can't go to 8.2, 8.3 or 8.4, at least upgrade to the newest 8.1 (which is 8.1.17 at the moment, but will be 8.1.18 in 1-2 days).
Second: diagnose what the problem is.
Use the du tool to diagnose where exactly the space went. Which directory is occupying too much space?
Check with df what the total used space is, and then check how much of it is the PostgreSQL directory.
The best option is to:
cd YOUR_PGDATA_DIR
du -sk *
cd base
du -sk *
cd LARGEST DIR FROM PREVIOUS COMMAND
du -sk * | sort -nr | head
Now that you know which directory in PGDATA is using the space, you can do something about it.
If it's logs or pg_temp, restart Postgres or remove the logs (pg_clog and pg_xlog are not logs in the common meaning of the word; never delete anything from there!).
If it's something in your base directory, then:
The numerical directories in the base directory correspond to databases. You can check this with:
select oid, datname from pg_database;
When you know which database is using most of the space, connect to it and check which files are using most of the space.
File names will be numerical, with an optional ".digits" suffix; this suffix is (for now) irrelevant, and you can check what exactly a file represents by issuing:
select relname from pg_class where relfilenode = <NUMBER_FROM_FILE_NAME>;
Once you know which tables/indexes use most of the space, you can VACUUM FULL them, or (much better) issue the CLUSTER command on them.
On a different tangent to your problem: you can find out what in the database is using lots of space with a query. That can help you locate candidates to TRUNCATE, to reclaim enough working space to clean up the ones with deleted info.
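A sketch of such a query (pg_relation_size and pg_size_pretty have been available since 8.1; both tables and indexes will show up in the output):
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
ORDER BY pg_relation_size(oid) DESC
LIMIT 20;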
Note that deleting lots of rows but not VACUUMing frequently enough to keep disk space in check will often lead to a condition called index bloat, which VACUUM FULL doesn't help with at all. You'll know you're there when the query I suggested shows most of your space is taken up by indexes rather than regular tables. You'll need CLUSTER, which needs as much free disk space as the table itself to rebuild everything, to recover from that problem.
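If you go the CLUSTER route, a hedged example with hypothetical names; note that on 8.1 CLUSTER takes the index name first (the CLUSTER table USING index form only arrived in 8.3):
CLUSTER my_table_pkey ON my_table;  -- rewrites the table in index order, reclaiming dead space and rebuilding its indexes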
I'm trying to configure a PostgreSQL 9.6 database to limit the size of the pg_xlog folder. I've read a lot of threads about this or similar issues, but nothing I've tried has helped.
I wrote a setup script for my PostgreSQL 9.6 service instance. It executes initdb, registers a Windows service, starts it, creates an empty database and restores a dump into it. After the script is done, the database structure is fine and the data is there, but the xlog folder already contains 55 files (880 MB).
To reduce the size of the folder, I tried setting wal_keep_segments to 0 or 1, setting max_wal_size to 200 MB, reducing checkpoint_timeout, setting archive_mode to off and archive_command to an empty string. I can see that the properties have been set correctly when I query pg_settings.
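For reference, on 9.6 those changes can also be applied from SQL; a sketch of what I set (the checkpoint_timeout value here is hypothetical, since I only noted that I reduced it):
ALTER SYSTEM SET wal_keep_segments = 0;
ALTER SYSTEM SET max_wal_size = '200MB';
ALTER SYSTEM SET checkpoint_timeout = '1min';  -- hypothetical value
ALTER SYSTEM SET archive_mode = off;           -- takes effect only after a restart
ALTER SYSTEM SET archive_command = '';
SELECT pg_reload_conf();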
I then forced checkpoints through SQL, vacuumed the database, restarted the Windows service and tried pg_archivecleanup, but nothing really worked. My xlog folder shrank to 50 files (800 MB), nowhere near the 200 MB limit I set in the config.
I have no clue what else to try. If anyone can tell me what I'm doing wrong, I would be very grateful. If more information is required, I'll be glad to provide it.
Many thanks
PostgreSQL won't aggressively remove WAL segments that have already been allocated when max_wal_size was at the default value of 1GB.
The reduction will happen gradually, whenever a WAL segment is full and needs to be recycled. Then PostgreSQL will decide whether to delete the file (if max_wal_size is exceeded) or rename it to a new WAL segment for future use.
If you don't want to wait that long, you could force a number of WAL switches by calling the pg_switch_xlog() function; that should reduce the number of files in your pg_xlog.
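A minimal sketch (9.6 function names; pg_switch_xlog was renamed to pg_switch_wal in PostgreSQL 10), to be repeated until the folder shrinks:
SELECT pg_switch_xlog();  -- close out the current WAL segment
CHECKPOINT;               -- lets PostgreSQL recycle or remove old segments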