I have used the query below to find the temporary files present in a PostgreSQL 9.6 instance:
SELECT datname, temp_files AS "Temporary files", temp_bytes AS "Size of temporary files"
FROM pg_stat_database
ORDER BY temp_bytes DESC;
The result is as below.
Why is PostgreSQL maintaining temporary files when there is no active session?
You are misunderstanding what those numbers are. temp_files and temp_bytes are cumulative totals over the lifetime of the database (until the statistics are reset). They are not counts of currently existing temporary files.
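As a quick illustration (a sketch, not something to run casually: pg_stat_reset() wipes all statistics for the current database and needs appropriate privileges):

```sql
-- The counters only ever grow with activity; stats_reset shows
-- when they last started counting from zero.
SELECT datname, temp_files, temp_bytes, stats_reset
FROM pg_stat_database
WHERE datname = current_database();

-- Resetting the statistics zeroes the counters,
-- even though no temporary files are deleted by this:
SELECT pg_stat_reset();
```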
Related
I have a ~2.5 TB database, which is divided into tablespaces. The problem is that ~250 GB are stored in the pg_default tablespace.
I have 3 tables and 6 tablespaces: one per table and one for each table's index. None of the tablespace directories is empty, so no table or index is missing from its tablespace. Yet the size of the data/main/base/OID_of_database directory is about 250 GB.
Can anyone tell me what is stored there, whether that is OK, and if not, how I can move it to a tablespace?
I am using PostgreSQL 10.
Inspect the base subdirectory of the data directory. It will contain numbered directories that correspond to your databases, and perhaps a pgsql_tmp directory.
Find out which directory contains the 250 GB. Map directory names to databases using
SELECT oid, datname
FROM pg_database;
Once you have identified the directory, change into it and see what it contains.
Map the numbers to database objects using
SELECT relname, relkind, relfilenode
FROM pg_class;
(Make sure you are connected to the correct database.)
Now you know which objects take up the space.
If you had frequent crashes during operations like ALTER TABLE or VACUUM (FULL), the files may be leftovers from those. They can theoretically be deleted, but I wouldn't do that without consulting a PostgreSQL expert.
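If you want to cross-check which files are unaccounted for, something along these lines can help (a rough sketch, not a definitive procedure: pg_ls_dir is superuser-only by default, and mapped system catalogs have relfilenode = 0 in pg_class, so expect some false positives):

```sql
-- List files in the current database's directory whose base name
-- (segment suffixes .1, .2, ... and forks _fsm/_vm/_init stripped)
-- matches no relfilenode in pg_class.
SELECT f AS file
FROM pg_ls_dir('base/' || (SELECT oid
                           FROM pg_database
                           WHERE datname = current_database())) AS f
WHERE regexp_replace(f, '(\.[0-9]+|_fsm|_vm|_init)$', '')
      NOT IN (SELECT relfilenode::text
              FROM pg_class
              WHERE relfilenode <> 0);
```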
I have a situation in which summing the size of the tables in a tablespace (using pg_class among others) reveals that there is 550G of datafiles in a particular tablespace in a particular database.
However, there is 670G of files in that directory on the server.
FWIW, I don't know how that can happen. No files have been written to that directory by any mechanism other than Postgres. My best guess is that the database crashed while an autovacuum was running, leaving orphan files lying around. Does that sound plausible?
So I worked out a way: I read the output of an ls command into the database, stripped the numeric extensions from files belonging to tables > 1 GB in size, compared the names with the contents of pg_class, and in fact found about 120 GB of files not reflected in pg_class.
My question is, is it safe for me to delete these files, or could they be in active use by the database but not reflected in pg_class?
Do not manually delete files in the PostgreSQL data directory.
This is not safe and will corrupt your database.
The safe way to purge any files that don't belong to the database is to perform a pg_dumpall, stop the server, remove the data directory and the contents of all tablespace directories, create a new cluster with initdb, and restore the dump.
If you want to investigate the issue, you could try to create a new tablespace and move everything from the old to the new tablespace. I will describe that in the rest of my answer.
Move all the tables and indexes in all databases to the new tablespace:
ALTER TABLE ALL IN TABLESPACE oldtblsp SET TABLESPACE newtblsp;
ALTER INDEX ALL IN TABLESPACE oldtblsp SET TABLESPACE newtblsp;
If oldtblsp is the default tablespace of a database:
ALTER DATABASE mydb SET TABLESPACE newtblsp;
Then run a checkpoint, so that obsolete relation files are actually removed:
CHECKPOINT;
Make sure you didn't forget any database:
SELECT datname
FROM pg_database d
JOIN pg_tablespace s
ON d.dattablespace = s.oid
WHERE s.spcname = 'oldtblsp';
Make sure that there are no objects in the old tablespace by running this query in all databases:
SELECT t.relname, t.relnamespace::regnamespace, t.relkind
FROM pg_class t
JOIN pg_tablespace s
ON t.reltablespace = s.oid
WHERE s.spcname = 'oldtblsp';
This should return no results.
Now the old tablespace should be empty and you can
DROP TABLESPACE oldtblsp;
If you still get an error
ERROR: tablespace "tblsp" is not empty
there might be some files left behind.
Delete them at your own risk...
I have a loaded OLTP database. I am running ALTER TABLE .. ADD PK on a 100 GB relation and want to check its progress. But until the index is built, it does not appear in pg_catalog for other transactions, so I can't simply query its size.
I tried find ./base/14673648/ -ctime 1 and also -mtime: hundreds of files. And then I thought: why do I assume it has created a filenode? Just because it ate some space.
So forgive my ignorance and advise: how do I check the size of the PK built so far?
Update: I can sum ./base/pgsql_tmp/pgsql_tmpPID.N, where PID is the pid of the session that creates the PK, as per the docs:
Temporary files (for operations such as sorting more data than can fit
in memory) are created within PGDATA/base/pgsql_tmp, or within a
pgsql_tmp subdirectory of a tablespace directory if a tablespace other
than pg_default is specified for them. The name of a temporary file
has the form pgsql_tmpPPP.NNN, where PPP is the PID of the owning
backend and NNN distinguishes different temporary files of that
backend.
New question: How can I get it from pg_catalog?
pondstats=# select pg_size_pretty(temp_bytes) from pg_stat_database where datid = 14673648;
pg_size_pretty
----------------
89 GB
(1 row)
shows the sum of all temp files, not per relation
A primary key is implemented with a unique index, and that has files in the data directory.
Unfortunately there is no built-in way to check the progress of index creation (PostgreSQL 12 introduced the pg_stat_progress_create_index view, but on earlier versions you would have to know your way around the source and attach a debugger to the backend).
You only need to look at relation files that do not appear in the output of
SELECT relfilenode FROM pg_class
WHERE relfilenode <> 0
UNION
SELECT pg_relation_filenode(oid) FROM pg_class
WHERE pg_relation_filenode(oid) IS NOT NULL;
Once you know which file belongs to your index-in-creation (it should be growing fast, unless there is a lock blocking the statement) you can start guessing how long it has to go by comparing it to files belonging to a comparable index on a comparable table.
All pretty hand-wavy, I'm afraid.
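If you do spot a candidate file, one way to watch it grow is pg_stat_file (superuser-only by default; the path below is a made-up example, relative to the data directory):

```sql
-- Repeat this query to see whether the file is still being extended:
SELECT size, modification
FROM pg_stat_file('base/14673648/pgsql_tmp/pgsql_tmp12345.0');
```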
I have a problem encountered lately in our Postgres database: when I query select * from myTable,
it results in 'could not open relation with OID 892600370', and the front-end application can't run properly anymore. Based on my research I have determined the affected column, but I want to locate exactly which rows are affected so that I can fix them. Please help.
Thank you in advance.
You've got a corrupted database. Might be a bug, but more likely bad hardware. If you have a recent backup, just use that. I'm guessing you don't though.
Make sure you locate any backups of either the database or its file tree and keep them safe.
Stop the PostgreSQL server and take a file backup of the entire database tree (base, global, pg_xlog - everything at that level). It is now safe to start fiddling...
Now, start the database server again and dump tables one at a time. If a table won't dump, try dropping any indexes and foreign-key constraints and give it another go.
For a table that won't dump, it might be just certain rows. Drop any indexes and dump a range of rows using COPY ... SELECT. That should let you narrow down any corrupted rows and get the rest.
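For the row-range approach, something like this can serve as a sketch (table, column, and path are hypothetical; COPY ... TO a server-side file requires superuser or the pg_write_server_files role, and psql's \copy is the client-side alternative):

```sql
-- Dump one slice of the table; bisect the range around
-- whichever slice errors out to isolate the corrupted rows.
COPY (SELECT * FROM mytable WHERE id BETWEEN 1 AND 100000)
TO '/tmp/mytable_part1.csv' (FORMAT csv);
```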
Now you have a mostly-recovered database, restore it on another machine and take whatever steps are needed to establish what is damaged/lost and what needs to be done.
Run a full set of tests on the old machine and see if anything needs replacement. Consider whether your monitoring needs improvement.
Then - make sure you keep proper backups next time, that way you won't have to do all this, you'll just use them instead.
could not open relation with OID 892600370
A relation is a table or index. A relation's OID is the OID of the row in pg_class where this relation is defined.
Try select relname from pg_class where oid=892600370;
Often it's immediately obvious from relname what this relation is, otherwise you want to look at the other fields in pg_class: relnamespace, relkind,...
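For example, a single lookup that pulls the useful context in one go (the OID is the one from the error message):

```sql
SELECT relname,
       relnamespace::regnamespace AS schema,
       relkind
FROM pg_class
WHERE oid = 892600370;
```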
I am running PostgreSQL on Windows 8 using the OpenGeo Suite. I'm running out of disk space on a large join. How can I change the temporary directory where the "hash-join temporary file" gets stored?
I am looking at the PostgreSQL configuration file and I don't see a tmp file directory.
Note: I am merging two tables with 10 million rows using a variable text field which is set to a primary key.
This is my query:
UPDATE blocks
SET "PctBlack1" = race_blocks."PctBlack1"
FROM race_blocks
WHERE race_blocks.esriid = blocks.geoid10
First, make sure you have an index on these columns (in both tables); that will make PostgreSQL use fewer temporary files. Also, set the work_mem parameter as high as you can afford, so PostgreSQL can use more memory for operations like this.
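Sketched out, that advice could look like this (the index names are auto-generated; the work_mem value is only an example, size it to your available RAM):

```sql
-- Indexes on the join columns of both tables:
CREATE INDEX ON race_blocks (esriid);
CREATE INDEX ON blocks (geoid10);

-- Session-level setting; can also be set in postgresql.conf.
SET work_mem = '256MB';
```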
Now, if you still need to change the temporary path, you first need to create a tablespace (if you didn't do it already):
CREATE TABLESPACE temp_disk LOCATION 'F:\pgtemp';
Then, you have to set the GUC temp_tablespaces. You can set it per database, per user, in postgresql.conf, or inside the current session (before your query):
SET temp_tablespaces TO 'temp_disk';
UPDATE blocks
SET "PctBlack1" = race_blocks."PctBlack1"
FROM race_blocks
WHERE race_blocks.esriid = blocks.geoid10
One more thing: the user must have the CREATE privilege on the tablespace to use it:
GRANT CREATE ON TABLESPACE temp_disk TO app_user;
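If you prefer the setting to stick rather than issuing SET in every session, the same GUC can be pinned per database or per role (names reused from the examples above):

```sql
ALTER DATABASE mydb SET temp_tablespaces = 'temp_disk';
ALTER ROLE app_user SET temp_tablespaces = 'temp_disk';
```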
I was unable to point PostgreSQL at the F:\pgtemp directory directly due to a lack of permissions.
So I created a symlink to it using the Windows command line mklink /D (a soft link). Now PostgreSQL writes temporary files to c:\Users\Administrator.opengeo\pgdata\Administrator\base\pgsql_tmp, but they actually get stored on the F: drive.