CloudSQL instance has less storage usage after importing - postgresql

I have a CloudSQL instance (PostgreSQL) with 112.7 GB data in it:
I wanted to transfer the data in this instance to another one.
I did an export first and then created another instance and imported the data there.
All went well. However, the resulting instance has less storage usage, only 102 GB:
No errors found in the logs. I am wondering where the 10 GB of data went.
Is this expected?

This is caused by fragmentation in the table. In the case of MySQL:
One symptom of fragmentation is that a table takes more space than it
“should” take. How much that is exactly, is difficult to determine.
All InnoDB data and indexes are stored in B-trees, and their fill
factor may vary from 50% to 100%. Another symptom of fragmentation is
that a table scan such as this takes more time than it “should”
take:...
In the PostgreSQL docs (see the section 23.1.2. Recovering Disk Space), it is explained:
In PostgreSQL, an UPDATE or DELETE of a row does not immediately
remove the old version of the row. This approach is necessary to gain
the benefits of multiversion concurrency control (MVCC, see Chapter
13): the row version must not be deleted while it is still potentially
visible to other transactions. But eventually, an outdated or deleted
row version is no longer of interest to any transaction. The space it
occupies must then be reclaimed for reuse by new rows, to avoid
unbounded growth of disk space requirements. This is done by running
VACUUM.
Also read the Vacuum the Dirt out of Your Database doc for steps to overcome this.
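If you want to confirm this on the source instance, a quick sketch like the following (standard PostgreSQL statistics views and size functions, nothing CloudSQL-specific) shows which tables carry dead row versions that an export/import would shed:
-- Tables with the most dead row versions (rough bloat indicator):
SELECT relname,
       n_live_tup,
       n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;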
Hope this helps.

Related

How to determine how much "slack" is in a postgres database?

I've got a postgres database which I recently vacuumed. I understand that process marks space as available for future use, but for the most part does not return it to the OS.
I need to track how close I am to using up that available "slack space" so I can ensure the entire database does not start to grow again.
Is there a way to see how much empty space the database has inside it?
I'd prefer to just do a VACUUM FULL and monitor disk consumption, but I can't lock the table for a prolonged period, nor do I have the disk space.
Running version 13 on headless Ubuntu if that's important.
Just like internal free space is not given back to the OS, it also isn't shared between tables or other relations (like indexes). So having freespace in one table isn't going to help if a different table is the one growing. You can use pg_freespacemap to get a fast approximate answer for each table, or pgstattuple for more detailed data.
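A minimal sketch of both approaches, assuming you can install extensions and with 'my_table' as a placeholder:
-- Fast, approximate: read the free space map (no table scan).
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
SELECT pg_size_pretty(SUM(avail)) AS approx_free
FROM pg_freespace('my_table');

-- Slower but exact: pgstattuple scans the whole table.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT free_space, free_percent, dead_tuple_percent
FROM pgstattuple('my_table');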

Heroku Postgres database size doesn't go down after deleting rows

I'm using a Dev level database on Heroku that was about 63 GB and approaching 9.9 million rows (close to the limit of 10 million for this tier). I ran a script that deleted about 5 million rows I didn't need, and now (a few days later) the Postgres control panel/pginfo:table-size shows roughly 4.7 million rows, but it's still at 63 GB. 64 is the limit for the next tier, so I need to reduce the size.
I've tried vacuuming but pginfo:bloat said the bloat was only about 3GB. Any idea what's happening here?
If you have vacuumed the table, don't worry about the size on disk remaining unchanged. The space has been marked as reusable for new rows. So you can easily add another 4.7 million rows and the size on disk won't grow.
The standard form of VACUUM removes dead row versions in tables and
indexes and marks the space available for future reuse. However, it
will not return the space to the operating system, except in the
special case where one or more pages at the end of a table become
entirely free and an exclusive table lock can be easily obtained. In
contrast, VACUUM FULL actively compacts tables by writing a complete
new version of the table file with no dead space. This minimizes the
size of the table, but can take a long time. It also requires extra
disk space for the new copy of the table, until the operation
completes.
If you want to shrink the database on disk, you will need to run VACUUM FULL, which locks the tables and needs extra space roughly equal to the size of the tables while the operation is in progress. So you will have to check your quota before you try this, and your site will be unresponsive in the meantime.
Update:
You can get a rough idea of the size of your data on disk by querying the pg_class table like this (8192 bytes is the default page size):
SELECT SUM(relpages * 8192) FROM pg_class;
Another method is a query of this nature:
SELECT pg_database_size('yourdbname');
This link: https://www.postgresql.org/docs/9.5/static/disk-usage.html provides additional information on disk usage.
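If a single total isn't enough, a per-table breakdown (a plain catalog query, nothing Heroku-specific) helps locate where the space actually sits:
-- Largest tables first, including their indexes and TOAST data:
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;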

When should one vacuum a database, and when analyze?

I just want to check that my understanding of these two things is correct. If it's relevant, I am using Postgres 9.4.
I believe that one should vacuum a database when looking to reclaim space from the filesystem, e.g. periodically after deleting tables or large numbers of rows.
I believe that one should analyse a database after creating new indexes, or (periodically) after adding or deleting large numbers of rows from a table, so that the query planner can make good calls.
Does that sound right?
vacuum analyze;
collects statistics and should be run as often as the data changes (especially after bulk inserts). It does not take exclusive locks on objects. It puts load on the system, but it is worth it. It does not reduce the size of the table, but it marks scattered freed-up space (e.g. from deleted rows) for reuse.
vacuum full;
reorganises the table by creating a copy of it and switching to it. This vacuum requires additional space to run, but it reclaims all unused space in the object. It therefore requires an exclusive lock on the object (other sessions must wait for it to complete). It should be run when large amounts of data have changed (deletes, updates) and when you can afford to have others wait.
Both are very important on a dynamic database.
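To judge how often manual runs are actually needed, the standard statistics view shows when vacuum and analyze (manual or automatic) last touched each table:
-- Last manual and automatic vacuum/analyze per table:
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;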
Correct.
I would add that you can raise the default_statistics_target parameter (default 100) in the postgresql.conf file to a higher number; after reloading the configuration (a full restart is not needed for this parameter) and running ANALYZE, you will obtain more accurate statistics.
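As a sketch (table and column names are placeholders; ALTER SYSTEM is available on the asker's 9.4), the target can be raised globally with a reload, or per column where it matters most:
-- Raise the global default; a configuration reload picks it up:
ALTER SYSTEM SET default_statistics_target = 500;
SELECT pg_reload_conf();

-- Or raise it for one skewed column only, then refresh statistics:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;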

How to Remove Dead Row Versions in Postgres 9.2

I ran a vacuum on the tables of my database and it appears it is not helping me. For example, I have a huge table, and when I run vacuum on it, it reports that there are 87887889 dead row versions that cannot be removed yet.
My question is: how do I get rid of these dead rows?
You have two basic options if a routine vacuum is insufficient. Both require a full table lock, and on PostgreSQL 9.0 and later both work by rewriting the table, so both need enough free disk space for the new copy.
VACUUM FULL. This compacts the table by rewriting it in its existing physical order.
CLUSTER. This rewrites the table in a physical order optimized for a given index, which can also speed up later index-based scans.
In general I would recommend using CLUSTER during a maintenance window if disk space allows.
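A minimal sketch with placeholder table and index names; both commands hold an exclusive lock for their full duration:
-- Rewrite the table in the physical order of the given index:
CLUSTER big_table USING big_table_pkey;

-- Or compact it without reordering:
VACUUM FULL big_table;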

What is the effect on record size of reordering columns in PostgreSQL?

Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns.
So, what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did before? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record?
The question is old, but since both answers are wrong or misleading, I'll add another one.
When updating a row, Postgres writes a new row version and the old one is eventually removed by VACUUM after no running transaction can see it any more.
Plain VACUUM does not return disk space from the physical file that contains the table to the system, unless it finds completely dead or empty blocks at the physical end of the table. You need to run VACUUM FULL or CLUSTER to aggressively compact the table and return excess space to the system. This is not typically desirable in normal operation. Postgres can re-use dead tuples to keep new row versions on the same data page, which benefits performance.
In your case, since you update every row, the size of the table is doubled (from its minimum size). It's advisable to run VACUUM FULL or CLUSTER to return the bloat to the system.
Both take an exclusive lock on the table. If that interferes with concurrent access, consider pg_repack, which can do the same without exclusive locks.
To clarify: Running CLUSTER reclaims the space completely. No VACUUM FULL is needed after CLUSTER (and vice versa).
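To verify that the rewrite actually returned the space, you can compare the table's on-disk size before and after (table and index names are placeholders):
SELECT pg_size_pretty(pg_table_size('my_table'));   -- before
CLUSTER my_table USING my_table_pkey;
SELECT pg_size_pretty(pg_table_size('my_table'));   -- after the rewrite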
More details:
PostgreSQL 9.2: vacuum returning disk space to operating system
From the docs:
The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated.
You'll need to do a CLUSTER followed by a VACUUM FULL to reclaim the space.
Why do you "reorder" ? There is no order in SQL, it doesn't make sence. If you need a fixed order, tell your queries what order you need or use a view, that's what views are made for.
Diskspace will be used again after vacuum, auto_vacuum will do the job. Unless you disabled this process.
Your current approach will kill overall performance (table locks), indexes have to be recreated, statistics go down the toilet, etc. etc. And in the end, you end up with the same situation you allready had. So why the effort?