Is database backup size the same as database size - PostgreSQL

I am working with PostgreSQL. I checked the following command and it returns 12MB:
SELECT pg_size_pretty(pg_database_size('itcs'));
but when I took a backup using pgAdmin the backup size is 1MB. Why this difference?

If you are taking a logical backup (with pg_dump), the backup contains only the data, no empty pages, no old versions of rows, no padding, no indexes. It may also be compressed. All that can greatly reduce the size.
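For example, a minimal sketch of such a logical backup from the command line (the database name 'itcs' is taken from the question; the output file names are just examples):

# Custom-format dump: compressed by default, contains schema and data but no index contents
pg_dump -Fc -f itcs.dump itcs
# A plain SQL dump piped through gzip gives a similar reduction
pg_dump itcs | gzip > itcs.sql.gz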
If you are taking a physical backup, the backup more or less consists of the actual database files as they are, plus recovery logs to get them to a consistent state. So that would be roughly the same size as the database itself (but you can also compress it).
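A minimal sketch of such a physical backup with pg_basebackup (the target directory is just an example):

# Copies the cluster's data files plus the WAL needed to bring them to a consistent state,
# written as compressed tar archives
pg_basebackup -D /backups/base -F tar -z -X fetch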

Related

Postgres database dump size larger than physical size

I just made a pg_dump backup of my database and its size is about 95GB, but the size of the directory /pgsql/data is about 38GB.
I ran a VACUUM FULL and the size of the dump did not change. The version of my Postgres installation is 9.3.4, on a CentOS release 6.3 server.
Is it weird that the dump is so much larger than the physical size, or can I consider this normal?
Thanks in advance!
Regards.
Neme.
The size of pg_dump output and the size of a Postgres cluster (aka 'instance') on disk have very, very little correlation. Consider:
pg_dump has 3 different output formats, 2 of which allow compression on-the-fly
pg_dump output contains only schema definition and raw data in a text (or possibly "binary" format). It contains no index data.
The text/"binary" representation of different data types can be larger or smaller than actual data stored in the database. For example, the number 1 stored in a bigint field will take 8 bytes in a cluster, but only 1 byte in pg_dump.
This is also why VACUUM FULL had no effect on the size of the backup.
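As a rough illustration of how the dump representation can differ from the on-disk size (database, table and file names here are hypothetical), you can compare the on-disk size of a table of small bigint values with the size of its plain-text dump:

# Create a throwaway table holding one million bigints and check its on-disk size
psql -d mydb -c "CREATE TABLE dump_demo AS SELECT g::bigint AS n FROM generate_series(1, 1000000) g;"
psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size('dump_demo'));"
# Dump just that table as plain SQL and compare the file size
pg_dump -t dump_demo -f dump_demo.sql mydb
ls -lh dump_demo.sql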
Note that a Point In Time Recovery (PITR) based backup is entirely different from a pg_dump backup. PITR backups are essentially copies of the data on disk.
Postgres does compress its data in certain situations, using a technique called TOAST:
PostgreSQL uses a fixed page size (commonly 8 kB), and does not allow tuples to span multiple pages. Therefore, it is not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code. The technique is affectionately known as TOAST (or "the best thing since sliced bread").
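To see how much of a table actually lives in compressed TOAST storage, a query along these lines helps (the database and table names are just placeholders):

psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size('big_table'))       AS heap_only,
                        pg_size_pretty(pg_table_size('big_table'))          AS heap_plus_toast,
                        pg_size_pretty(pg_total_relation_size('big_table')) AS including_indexes;"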

What affects DB2 restored database size?

I have database TESTDB with following details:
Database size: 3.2GB
Database Capacity: 302 GB
One of its tablespaces has its HWM too high due to an SMP extent, so it is not letting me reduce the high water mark.
My backup size is around 3.2 GB (as backups contain only used pages).
If I restore this database backup image via a redirected restore, what will be the newly restored database's size?
Will it be around 3.2 GB or around 302 GB?
The short answer is that RESTORE DATABASE will produce a target database that occupies about as much disk space as the source database did when it was backed up.
On its own, the size of a DB2 backup image is not a reliable indicator of how big the target database will be. For one thing, DB2 provides the option to compress the data being backed up, which can make the backup image significantly smaller than the DB2 object data it contains.
As you correctly point out, the backup image only contains non-empty extents (blocks of contiguous pages), but the RESTORE DATABASE command will recreate each tablespace container to its original size (including empty pages) unless you specify different container locations and sizes via the REDIRECT parameter.
The 302GB of capacity you're seeing is from GET_DBSIZE_INFO and similar utilities, and is quite often larger than the total storage the database currently occupies. This is because DB2's capacity calculation includes not only unused pages in DMS tablespaces, but also any free space on volumes or drives that are used by an SMS tablespace (most DB2 LUW databases contain at least one SMS tablespace).
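A minimal sketch of such a redirected restore from the command line (database names, timestamp, tablespace ID, path and size are all placeholders, and this assumes a DMS tablespace rather than automatic storage):

db2 "RESTORE DATABASE TESTDB FROM /backups TAKEN AT 20240101120000 INTO TESTCOPY REDIRECT"
# Re-define the container for tablespace ID 2 with a smaller size (given in pages)
db2 "SET TABLESPACE CONTAINERS FOR 2 USING (FILE '/db2data/testcopy/ts02c1' 100000)"
db2 "RESTORE DATABASE TESTDB CONTINUE"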

db.repairDatabase() did not reduce size of database

I have a db that is about 8G.
I made a copy of the db.
Then I pruned the copy using the JS console.
Then I ran a repairDatabase and the copy is still the exact same size as the original.
In all likelihood, this means that you did not free up enough space to return an entire extent or file to the OS. Imagine that you have five 2GB files (MongoDB preallocates files in 2GB increments after the first few smaller files) and now imagine that you had 8GB of data in this DB. The last file will always be empty because MongoDB preallocates a file before it needs it. So 8GBs are occupying four 2GB files and one 2GB file is empty.
Now you do some pruning - maybe even 1.8GBs worth of deleting of stuff. You run repairDB which rewrites every single record, as compactly as possible in a new set of database files. Except it still needs the same five 2GB files because the fourth file has 100MB of data and the last file always has to be empty.
You can look at the output of db.stats() to see what the data size is compared to the storage size, but the fact is that these are relatively small numbers compared to the size of allocated files and that's likely why you are seeing what you are seeing.
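To check those numbers (run from the system shell with the legacy mongo shell; the database name is a placeholder):

# Prints the data size next to the allocated storage and total file size, scaled to MB
mongo mydb --eval 'var s = db.stats(1024 * 1024); print("dataSize MB: " + s.dataSize); print("storageSize MB: " + s.storageSize); print("fileSize MB: " + s.fileSize);'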

Compact command not freeing up space in MongoDB 2.0

I just installed MongoDB 2.0 and tried to run the compact command instead of the repair command used in earlier versions. My database is empty at the moment, meaning there is only one collection with 0 entries and the two system collections (indexes, users). Currently the db takes about 4 GB of space on the hard disk. The db is used as a temp queue, with all items being removed after they have been processed.
I tried to run the following in the mongo shell.
use mydb
db.theOnlyCollection.runCommand("compact")
It returns with
ok: 1
But the same space is still taken on the hard disk. I tried to compact the system collections as well, but this did not work.
When I run the normal repair command
db.repairDatabase()
the database is compacted and only takes 400 MB.
Does anyone have an idea why the compact command is not working?
Thanks a lot for your help.
Best
Alex
Collection compaction is not supposed to decrease the size of data files. Its main point is to defragment collection and index data: it combines unused gaps into continuous space so that new data can be stored there. Moreover, it may actually increase the size of the data files:
Compaction may increase the total size of your data files by up to 2GB. Even in this case, total collection storage space will decrease.
http://www.mongodb.org/display/DOCS/compact+Command#compactCommand-Effectsofacompaction
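In other words, on MongoDB 2.0 compact only defragments; to actually hand disk space back to the OS you still need a repair (or a resync), as the question already observed. For example (the database name is a placeholder; the collection name is taken from the question):

# Defragments the collection but keeps the preallocated data files
mongo mydb --eval 'printjson(db.runCommand({ compact: "theOnlyCollection" }))'
# Rewrites all data files compactly and returns unused space to the OS
# (needs free disk space roughly equal to the current database size)
mongo mydb --eval 'printjson(db.repairDatabase())'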

Firebird database keeps getting larger

This one is interesting to me - despite the almost inane title. I have used Firebird for a long time, but not until recently noticed an interesting behavior.
I am using embedded Firebird 1.5, and noticed that if I stuff the database full of blobs (let's say 10MB worth), the size of the database increases. I can then delete all the fields in the database, and the file size of the DB remains at its expanded size. Currently it is at 20MB and is completely empty.
I know that Firebird has this built into its architecture (for quick indexing, speed issues, etc.), but I always thought it would decrease back down to its original ~2MB default.
Does anyone have any suggestions to 'deflate' the file size? The reason is that this is a space-conscious issue. If I had tons of space to work with, I wouldn't care. However, that is not the case, and I need things to be as optimal as possible.
The only way to free unused space in a Firebird database is to do a backup and then an immediate restore of that backup (Reference: Firebird FAQ).
Here is a good technical explanation of why this is so.
Note that Firebird will reuse the currently unused space - ie. if you put another 10MB of blobs in now, the database should not grow to 30MB.
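A minimal sketch of that backup/restore cycle with gbak (paths and credentials are placeholders; an embedded server may not require the login switches):

# Back up the database to a portable backup file
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /data/mydb.fbk
# Restore into a fresh file: the restored database contains no unused pages
gbak -c -user SYSDBA -password masterkey /data/mydb.fbk /data/mydb_restored.fdb
# Or, once you trust the backup, replace the original in place
gbak -r -user SYSDBA -password masterkey /data/mydb.fbk /data/mydb.fdb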