Backup taken from pgAdmin is smaller than backup taken from pg_dump - postgresql

Hello experts, I am using Postgres 9.5. When I take a backup from pgAdmin it is 950 MB, but when I take a backup of the same database with the pg_dump.exe command, the backup is 7.5 GB. I am confused about which backup file is safe for me to restore from. The restore process is also slow in PostgreSQL. Please help me.

When you take a backup in pgAdmin, it just calls pg_dump with the appropriate options, so both of your backups are made by the same pg_dump utility.
I guess you're comparing dumps in two different formats.
The default format for pg_dump is plain, which is basically an enormous uncompressed SQL file.
pgAdmin, on the other hand, uses the custom format by default, which is a highly compressed binary file.
Also note that pgAdmin always displays the actual pg_dump command used to create your dump in the log window, along with its full output.
You should be able to run this command in your command prompt to generate an identical backup file.
You can read more about the different output formats and other pg_dump options in the PostgreSQL docs.
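For comparison, both formats can be produced directly from the command line (mydb and postgres below are placeholder names). The first command writes the huge plain SQL file; the second writes the compressed custom-format file that pg_restore understands:

pg_dump -U postgres -F p -f mydb.sql mydb
pg_dump -U postgres -F c -f mydb.dump mydb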

Related

restore db postgres - format .gz

I am trying to restore a database on my local machine. I downloaded this database from the server as file.gz. Inside this archive is the file file.out. How can I restore the entire structure on my computer through pgAdmin or via the console?
If I understand correctly, I need to use the pg_restore command, but all my attempts end with a syntax error.
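A minimal sketch of the usual approach, assuming the target database mydb already exists: decompress the archive first, then use pg_restore if file.out is a custom- or tar-format dump, or psql if it turns out to be a plain SQL script:

gzip -dc file.gz > file.out
pg_restore -U postgres -d mydb file.out
psql -U postgres -d mydb -f file.out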

how to decompress .sql extension file in windows server

I have taken a full backup of a PostgreSQL instance which consists of 100 databases. The backup format is .sql (e.g. pg_dumpall.exe -U postgres > D:\Backup\fullbkp.sql). Now one of my databases has crashed, and I want to extract just that database's backup from this file for restoration.
I have searched a lot but couldn't find any way to decompress the file so that I can get that particular database out of the full backup.
Please suggest!
Regards
Sadam
Such a backup is not compressed. Also, it contains a backup of all databases in the cluster, and there is no easy way to extract a single database.
Create a new PostgreSQL cluster with initdb, restore the dump there using psql, then use pg_dump to extract the single database you need.
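A sketch of that workflow on Windows (the scratch data directory, the spare port 5433, and the database name crashed_db are assumptions): create a throwaway cluster, start it on the spare port, replay the full dump into it, then dump just the one database you need.

initdb -U postgres -D D:\scratch_cluster
pg_ctl -D D:\scratch_cluster -o "-p 5433" start
psql -p 5433 -U postgres -f D:\Backup\fullbkp.sql postgres
pg_dump -p 5433 -U postgres -F c -f D:\Backup\crashed_db.dump crashed_db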

Postgresql - restore SQL dump with tablespaces

I'm planning to move some tables to different tablespaces (folders) on my PROD Linux box.
Overnight DB backups are done using pg_dumpall.
I also have a DEV environment running on Windows, where I usually restore the SQL dump made on Linux.
I'm now wondering how to restore such SQL dumps when they contain pointers to a Linux partition, in Linux notation.
I have read on various web pages that the same folder structure has to be created in order to restore non-standard tablespaces, but folder paths in Windows and Linux look totally different (c:\... vs /opt/...).
Is there any command-line switch that allows remapping a tablespace to another (Windows-like) location during restore? If not, how do you manage that scenario?
I guess I should be able to achieve that by editing the SQL dump file, but it's huge (a few hundred gigabytes), and that is also a bit problematic to automate.
You can retrieve the actual tablespace definitions with a separate pg_dumpall command. You still need to do some editing, but the output is not that large. (The same applies to users.)
pg_dumpall --tablespaces-only mydatabasename >stuff.out
There is no option to remap tablespace names during import, so you will need to create them in your Windows installation with the same names. The actual physical location ("folder structure") is irrelevant, as the SQL dump only references tablespaces by name.
If the script contains a create tablespace command, you need to change that command to use a directory path that exists on your system before you can run the SQL script. But that is all you need to change; every other place refers to the tablespace name, not the folder path.
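For example, a tablespace definition dumped on Linux might look like the first line below (fastspace is a hypothetical tablespace name); for the Windows DEV box you edit only the LOCATION path, as in the second line. PostgreSQL on Windows accepts forward slashes in the path.

CREATE TABLESPACE fastspace OWNER postgres LOCATION '/opt/pg/fastspace';
CREATE TABLESPACE fastspace OWNER postgres LOCATION 'c:/pgdata/fastspace';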
Typically pg_dump is easier than pg_dumpall for moving databases around (e.g. because of tablespaces).

Is pg_dump available in AgensGraph?

I know about the "pg_dump" utility for backup and restore, but I have never tried it because I was too scared.
My question is simple: can I use that utility for graph data? Or is there some other tool for that? There is no information in their documentation.
You may refer to the PostgreSQL pg_dump documentation, because it is no different from doing a backup on PostgreSQL.
I followed that guide to create a dump script with crontab, and both dump and restore worked fine.
In my case, I used pg_dump to create the dump file and restored it with psql. You may choose pg_restore instead if necessary.
agens#karl ~] pg_dump --port=5432 --username=agens --file=agens.dump agens
agens#karl ~] psql --port=5432 --username=agens --dbname=agens2 -f agens.dump
However, I no longer use pg_dump for the backup task because of an incremental backup requirement, so I googled the open-source backup tools available for PostgreSQL. Among the options I found, pg_rman is what I am currently using.
It made it easy to build a scheduling script for an archive backup every 6 hours, an incremental backup every day, and a full backup every week; those jobs have been working properly for more than 2 months so far.
Restoring the data on other servers has been tested successfully as well.
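For illustration, a crontab along these lines can implement that schedule (the backup catalog path /var/lib/pgrman is an assumption; pg_rman also requires each backup to be validated before it can be restored):

# archive every 6 hours, incremental daily, full weekly
0 */6 * * * pg_rman backup --backup-mode=archive -B /var/lib/pgrman && pg_rman validate -B /var/lib/pgrman
0 1 * * * pg_rman backup --backup-mode=incremental -B /var/lib/pgrman && pg_rman validate -B /var/lib/pgrman
0 2 * * 0 pg_rman backup --backup-mode=full -B /var/lib/pgrman && pg_rman validate -B /var/lib/pgrman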
Hope this is helpful for you.

PostgreSQL backup with smallest output files

We have a PostgreSQL database that is over 732 GB when backed up as a file system backup. When we do a pg_dump we can get it down to 585 GB. If I combine pg_dump with the PITR method, will this give me the best backup with the smallest backup data file size? My plan was to run pg_start_backup, then pg_dump, then pg_stop_backup. I know the documentation says to take a file system backup, but I want a smaller backup data set. I would then copy off the WAL files and back them up at night.
To truly get the smallest file, you'll have to try compressing your pg_dump -Fc dump file with one of the many compression tools and settings. Using gzip or xz with the maximum possible compression level would be a start. This will of course require an excellent CPU and lots of CPU time.
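A minimal sketch of that comparison (mydb is a placeholder; -Z sets the custom format's built-in compression level):

pg_dump -F c -Z 9 -f mydb.dump mydb
pg_dump -F p mydb | gzip -9 > mydb.sql.gz
pg_dump -F p mydb | xz -9 > mydb.sql.xz

Of the three, xz -9 typically produces the smallest file but takes by far the longest to run.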