PostgreSQL - restored database smaller than original

I have made a backup of my PostgreSQL database using pg_dump to a ".sql" file.
When I restored the database, its size was 2.8 GB compared with 3.7 GB for the source (original) database. The application that accesses the database appears to work fine.
What is the reason for the smaller size of the restored database?

The short answer is that database storage is more optimised for speed than space.
For instance, if you inserted 100 rows into a table, then deleted every row with an odd numbered ID, the DBMS could write out a new table with only 50 rows, but it's more efficient for it to simply mark the deleted rows as free space and reuse them when you next insert a row. Therefore the table takes up twice as much space as is currently needed.
Postgres's use of "MVCC", rather than locking, for transaction management makes this even more likely, since an UPDATE usually involves writing a new row to storage, then marking the old row deleted once no transactions are looking at it.
By dumping and restoring the database, you are recreating a DB without all this free space. This is essentially what the VACUUM FULL command does - it rewrites the current data into a new file, then deletes the old file.
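As a self-contained illustration of that effect (the table name demo and the row counts here are made up for the example):

-- Hypothetical example: delete half the rows, the file stays the same size
CREATE TABLE demo (id int PRIMARY KEY, payload text);
INSERT INTO demo SELECT g, repeat('x', 1000) FROM generate_series(1, 100000) g;
SELECT pg_size_pretty(pg_relation_size('demo'));   -- size with 100,000 rows

DELETE FROM demo WHERE id % 2 = 1;                 -- remove every odd-numbered row
VACUUM demo;                                       -- marks the space reusable, file is not shrunk
SELECT pg_size_pretty(pg_relation_size('demo'));   -- roughly unchanged

VACUUM FULL demo;                                  -- rewrites the table into a new file
SELECT pg_size_pretty(pg_relation_size('demo'));   -- now roughly half the size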
There is an extension distributed with Postgres called pg_freespacemap which lets you examine some of this. For example, you can list the main table size (not including indexes or columns stored in separate "TOAST" tables) and the free space tracked for each table with the query below:
SELECT oid::regclass::varchar AS table,
       pg_size_pretty(pg_relation_size(oid)) AS size,   -- main fork only: no indexes, no TOAST
       pg_size_pretty(sum(free)) AS free
FROM (
    SELECT c.oid,
           (pg_freespace(c.oid)).avail AS free          -- free bytes reported per page
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('information_schema', 'pg_catalog')
) tbl
GROUP BY oid
ORDER BY pg_relation_size(oid) DESC, sum(free) DESC;
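Note that pg_freespace() is provided by the pg_freespacemap contrib module, so it needs to be installed in the database before the query above will work:

-- requires the contrib modules to be available on the server
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;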

The reason is simple: during its normal operation, when rows are updated, PostgreSQL adds a new copy of each row and marks the old copy as deleted. This is multi-version concurrency control (MVCC) in action. VACUUM then reclaims the space taken by the old rows so future inserts can reuse it, but it doesn't return this space to the operating system, because it sits in the middle of a file. Note that autovacuum doesn't run immediately, only after enough data has been modified in or deleted from the table.
What you're seeing is entirely normal. It just shows that a PostgreSQL database will be larger on disk than the sum of the sizes of its rows. Your new database will most likely grow back to 3.7 GB eventually, once you start actively using it.
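If you want to see how much reclaimable space a table is carrying at any moment, the per-table statistics give a rough indication. A minimal sketch using standard pg_stat_user_tables columns:

-- dead tuples still waiting to be reclaimed, and when (auto)vacuum last ran
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;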

Related

PostgreSQL Database size is not equal to sum of size of all tables

I am using an AWS RDS PostgreSQL instance. I am using the query below to get the size of all databases.
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
One database's size is 23 GB, but when I ran the query below to get the sum of the sizes of all individual tables in this particular database, it came to around 8 GB.
select pg_size_pretty(sum(pg_total_relation_size(table_schema || '.' || table_name)))
from information_schema.tables
As it is an AWS RDS instance, I don't have rights on pg_toast schema.
How can I find out which database objects are consuming the space?
Thanks in advance.
The documentation says:
pg_total_relation_size ( regclass ) → bigint
Computes the total disk space used by the specified table, including all indexes and TOAST data. The result is equivalent to pg_table_size + pg_indexes_size.
So TOAST tables are covered, and so are indexes.
One simple explanation could be that you are connected to a different database than the one that is shown to be 23GB in size.
Another likely explanation would be materialized views, which consume space, but do not show up in information_schema.tables.
Yet another explanation could be that there have been crashes that left some garbage files behind, for example after an out-of-space condition during the rewrite of a table or index.
This is of course harder to debug on a hosted platform, where you don't have shell access...
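If you want to rule out the materialized-view explanation, one option is to sum relation sizes from pg_class rather than information_schema.tables, since pg_class also lists materialized views. A sketch (run it while connected to the 23 GB database and compare the result with pg_database_size):

-- ordinary tables (r) and materialized views (m), excluding system schemas
SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'm')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');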

Bloating of pg_attribute caused by repetitive temporary table creations

I have a process which is creating thousands of temporary tables a day to import data into a system.
It is using the form of:
create temp table if not exists test_table_temp as
select * from test_table where 1=0;
This very quickly creates a lot of dead rows in pg_attribute, because each temporary table adds rows for its columns and deletes them again shortly afterwards when the table is dropped. I have seen solutions elsewhere that suggest using on commit delete rows. However, this does not appear to have the desired effect either.
To test the above, you can create two separate sessions on a test database. In one of them, check:
select count(*)
from pg_catalog.pg_attribute;
and also note down the value for n_dead_tup from:
select n_dead_tup
from pg_stat_sys_tables
where relname = 'pg_attribute';
In the other session, create a temp table (you will need another table to select from):
create temp table if not exists test_table_temp on commit delete rows as
select * from test_table where 1=0;
The count from the pg_attribute query immediately goes up, even before we reach the commit. Upon closing the session that created the temp table, the pg_attribute count goes down, but n_dead_tup goes up, suggesting that vacuuming is still required.
I guess my real question is: have I missed something above, or is the only way of dealing with this issue to vacuum aggressively and take the performance hit that comes with it?
Thanks for any responses in advance.
No, you have understood the situation correctly.
You either need to make autovacuum more aggressive, or you need to use fewer temporary tables.
Unfortunately you cannot change the storage parameters on a catalog table (at least not in a supported fashion that will survive an upgrade), so you will have to tune autovacuum for the whole cluster.
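For the first option, here is a hedged sketch of cluster-wide settings; the values are only examples that would need tuning for your workload, and ALTER SYSTEM needs superuser rights plus a configuration reload:

-- make autovacuum wake up and act more often, cluster-wide
ALTER SYSTEM SET autovacuum_naptime = '15s';              -- default is 1min
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01;   -- default is 0.2
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '1ms';    -- default is 2ms
SELECT pg_reload_conf();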

PostgreSQL: UPDATE large table

I have a large PostgreSQL table of 29 million rows. The size (according to the stats tab in pgAdmin) is almost 9 GB. The table is PostGIS-enabled, with an empty geometry column.
I want to UPDATE the geometry column using ST_GeomFromText, reading from X and Y coordinate columns (SRID: 27700) stored in the same table. However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
To overcome this, should I UPDATE the 29 million rows in batches/stages? How can I do 1 million rows (the FIRST 1 million), then do the next 1 million rows until I reach 29 million?
Or are there other more efficient ways of updating large tables like this?
I should add, the table is hosted in AWS.
My UPDATE query is:
UPDATE schema.table
SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')',27700);
You did not give any server specs; writing 9 GB can be pretty fast on recent hardware.
You should be OK with one long UPDATE, unless you have concurrent writes to this table.
A common trick to overcome this problem (a very long transaction that locks writes to the table) is to split the UPDATE into ranges based on the primary key, run in separate transactions.
/* Use PK or any attribute with a known distribution pattern */
UPDATE schema.table SET ... WHERE id BETWEEN 0 AND 1000000;
UPDATE schema.table SET ... WHERE id BETWEEN 1000001 AND 2000000;
For high levels of concurrent writes, people use more subtle tricks (like SELECT FOR UPDATE / NOWAIT, lightweight locks, retry logic, etc.).
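If you don't want to write out each range by hand, the same idea can be wrapped in a loop. A hedged sketch, assuming PostgreSQL 11 or later (so that COMMIT is allowed inside a DO block run outside an explicit transaction) and a bigint primary key named id; schema.tablename and the column names are placeholders from the question:

DO $$
DECLARE
    batch_size  bigint := 1000000;
    batch_start bigint := 0;
    max_id      bigint;
BEGIN
    SELECT max(id) INTO max_id FROM schema.tablename;
    WHILE batch_start <= max_id LOOP
        UPDATE schema.tablename
        SET geom = ST_GeomFromText('POINT(' || eastingcolumn || ' ' || northingcolumn || ')', 27700)
        WHERE id BETWEEN batch_start AND batch_start + batch_size - 1
          AND geom IS NULL;              -- lets the loop be re-run safely after an interruption
        COMMIT;                          -- keeps each transaction, and the space it pins, small
        batch_start := batch_start + batch_size;
    END LOOP;
END;
$$;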
From my original question:
However, running this query on the whole table at once results in 'out of disk space' and 'connection to server lost' errors... the latter being less frequent.
Turns out our Amazon AWS instance database was running out of space, which stopped my original ST_GeomFromText query from completing. Freeing up space fixed it.
On an important note, as suggested by #mlinth, ST_Point ran my query far quicker than ST_GeomFromText (24 minutes vs 2 hours).
My final query being:
UPDATE schema.tablename
SET geom = ST_SetSRID(ST_Point(eastingcolumn,northingcolumn),27700);

Postgres VACUUM doesn't free up space

I have a table in my database which is occupying 161 GB of hard disk space. Only 5 GB of free space is left out of the 200 GB hard disk.
The following command shows that my table is consuming 161 GB of disk space:
select pg_size_pretty(pg_total_relation_size('Employee'));
There are close to 527 rows in the table. Now I deleted 250 rows. Again I checked the pg_total_relation_size of Employee. Still the size is 161 GB.
After seeing the output of the above query, I ran the vacuum command:
VACUUM VERBOSE ANALYZE Employee;
I checked whether the VACUUM actually happened using:
SELECT relname, last_vacuum, last_autovacuum FROM pg_stat_user_tables;
I can see the last vacuum time matching the time I ran the VACUUM command.
I also ran the command below to see if there are any dead tuples:
SELECT relname, n_dead_tup FROM pg_stat_user_tables;
The n_dead_tup count for the Employee table is 0.
Still, after running all of the above commands, if I run
select pg_size_pretty(pg_total_relation_size('Employee'));
it still shows 161 GB.
May I please know the reason behind this? Also, please correct me on how to free up this space.
VACUUM doesn't physically "free" space. It only marks space that is no longer used as re-usable, so subsequent UPDATE or INSERT statements can use that space instead of appending to the table.
Quote from the manual
The standard form of VACUUM removes dead row versions in tables and indexes and marks the space available for future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be easily obtained
(emphasis mine)
If you re-insert the 250 deleted rows, you will see that the table doesn't grow again, as the newly inserted rows simply use the space that was marked free by vacuum.
If you actually want to physically reduce the size of the table to what is needed, you need to run VACUUM FULL.
Quote from the manual
VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes
(emphasis mine)
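Put together for the table from the question, a minimal sketch looks like this; keep in mind that the rewrite needs enough free disk space for the compacted copy, which could be a problem with only 5 GB free:

SELECT pg_size_pretty(pg_total_relation_size('Employee'));   -- before

VACUUM FULL VERBOSE Employee;   -- rewrites the table, holding an ACCESS EXCLUSIVE lock

SELECT pg_size_pretty(pg_total_relation_size('Employee'));   -- afterwards, only live rows remain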

Evaluate how much space will be freed by VACUUM in Redshift

According to AWS doc:
Amazon Redshift does not automatically reclaim and reuse space that is freed when you delete rows and update rows.
Before running VACUUM, is there a way to know or estimate how much disk space will be freed by the VACUUM?
Thx
References:
http://docs.aws.amazon.com/redshift/latest/dg/t_Reclaiming_storage_space202.html
http://docs.aws.amazon.com/redshift/latest/dg/r_VACUUM_command.html
You can estimate the amount of storage that will be freed up by a VACUUM command by looking at the tbl_rows column in the svv_table_info view; this count includes rows that are marked for deletion. Compare it to a select count(*) from the same table and you'll have a ratio. Something like this on a theoretical table named factsales:
select (select cast(count(*) as numeric(12,0)) from factsales)
       / cast(tbl_rows as numeric(12,0)) as "percentage of non deleted rows"
from svv_table_info
where "table" = 'factsales';
There doesn't appear to be a straightforward way to execute dynamic SQL or cursors in Redshift, so to get this same ratio across all tables you would have to run the code from an external source or programming language, e.g. Python.
It's not an extremely accurate method, but you can query svv_table_info and look at the deleted_pct column. This will give you a rough idea, in percentage terms, of what fraction of the table needs to be rebuilt using VACUUM.
You can run it for all the tables in your system to get this estimate for the whole system.
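The same estimate can also be written as a single query. A hedged sketch, assuming the tbl_rows and estimated_visible_rows columns of svv_table_info (the former counts rows including those marked for deletion, the latter excludes them):

-- rows marked deleted but not yet reclaimed, per table
select "table",
       tbl_rows,
       estimated_visible_rows,
       tbl_rows - estimated_visible_rows as deleted_rows
from svv_table_info
order by tbl_rows - estimated_visible_rows desc;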