Automatic backup of PostgreSQL database in Ubuntu

How to take an automatic backup of a PostgreSQL database in Ubuntu?
Or is there a script available to take time-to-time PostgreSQL database backups?

You can use the following:
sudo crontab -e
At the end of the file, add this line:
0 6 * * * pg_dump -U USERNAME -h REMOTE_HOST -p REMOTE_PORT NAME_OF_DB > LOCATION_AND_NAME_OF_BACKUP_FILE
This will take an automated backup of your selected database every day at 6:00 AM (after you change the command's options to fit your database). You don't need sudo inside the entry itself, since the root crontab already runs as root.

It is advisable to give each backup a new name so that you can restore data from a specific date. It is also good practice to send a notification in case a backup fails.
Here is a good script for automatic backup, as well as general recommendations for automating backups:
How to Automate PostgreSQL Database Backups
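For example, here is a minimal sketch of such a script, assuming pg_dump is on the PATH, credentials come from ~/.pgpass or PGPASSWORD, and USERNAME / REMOTE_HOST / REMOTE_PORT / NAME_OF_DB are the same placeholders as above; the backup directory and script path are hypothetical:
#!/bin/bash
# Timestamped backup with a basic failure hook (sketch).
set -euo pipefail
DB_NAME="NAME_OF_DB"                 # placeholder
BACKUP_DIR="/var/backups/postgres"   # placeholder
STAMP=$(date +%Y-%m-%d_%H%M%S)
OUTFILE="$BACKUP_DIR/${DB_NAME}_${STAMP}.sql.gz"
mkdir -p "$BACKUP_DIR"
if pg_dump -U USERNAME -h REMOTE_HOST -p REMOTE_PORT "$DB_NAME" | gzip > "$OUTFILE"; then
    echo "Backup succeeded: $OUTFILE"
else
    # Replace this with your notification of choice (mail, webhook, etc.).
    echo "Backup of $DB_NAME failed at $STAMP" >&2
    exit 1
fi
You can then call the script from cron instead of the raw pg_dump line, e.g. 0 6 * * * /usr/local/bin/pg_backup.sh (the path is hypothetical).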

Related

Is pg_dump available in AgensGraph?

I know about pg_dump for backup and restore, but I have never tried it because I was afraid to.
The question is simple: can I use it for graph data?
Or is another tool supported for that? There's no information in their documentation.
You may refer to the PostgreSQL pg_dump documentation, because it's no different from doing a backup on PostgreSQL.
I followed that guide to create a dump script with crontab, and both dump and restore worked fine.
In my case, I used pg_dump to create the dump file and restored it with psql. You may choose pg_restore instead if necessary.
agens#karl ~] pg_dump --port=5432 --username=agens --file=agens.dump agens
agens#karl ~] psql --port=5432 --username=agens --dbname=agens2 -f agens.dump
However, I no longer use pg_dump for backups because of an incremental backup requirement, so I googled the available open-source backup tools for PostgreSQL. Among the options I found, pg_rman is what I am currently using.
It made it easy to build a scheduling script for an archive backup every 6 hours, an incremental backup every day, and a full backup every week, and those jobs have been working properly for more than 2 months so far.
Restoring the data on other servers has been tested successfully as well.
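A rough crontab sketch of that schedule (the times are arbitrary, /var/lib/pg_rman stands in for your backup catalog path, and the pg_rman option names are my assumption from its README, so verify them with pg_rman --help):
# archive backup every 6 hours, incremental daily, full weekly (sketch)
0 */6 * * * pg_rman backup -b archive -B /var/lib/pg_rman && pg_rman validate -B /var/lib/pg_rman
30 1 * * * pg_rman backup -b incremental -B /var/lib/pg_rman && pg_rman validate -B /var/lib/pg_rman
30 3 * * 0 pg_rman backup -b full -B /var/lib/pg_rman && pg_rman validate -B /var/lib/pg_rman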
Hope this is helpful for you.

database backup in jelastic can't be done from the app node

My goal is to have an automatic database backup that will be sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to fetch it manually using an FTP add-on!
But as I said earlier, my goal is to send the backup file to my S3 bucket automatically. What I tried to do is run pg_dump from my app node instead of the PostgreSQL node (hopefully I can have some control from the app side). The command I run basically looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net"
-U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output in my log file is:
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node has a different pg_dump version than the nginx/app node, so the backup can't be performed. I looked around but can't find an easy way to solve this. I am open to any alternative that helps me achieve my initial goal.
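One workaround sketch, assuming you can install a pg_dump client matching the server's major version (10.x here, e.g. the postgresql-client-10 package from the PGDG repository) and the AWS CLI on the app node; the bucket name is a placeholder and the host is the one from the question:
#!/bin/bash
# Dump from the app node and stream the compressed backup straight to S3 (sketch).
set -euo pipefail
export PGPASSWORD="my_database_password"   # or use ~/.pgpass
STAMP=$(date +%Y-%m-%d)
pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" -U my_db_username -p 5432 "database_name" \
  | gzip \
  | aws s3 cp - "s3://YOUR_BUCKET/backups/database_name_${STAMP}.sql.gz"
aws s3 cp accepts - as the source, so the dump is streamed to the bucket without a local file.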

Best way to make PostgreSQL backups

I have a site that uses PostgreSQL. All content that I provide on my site is created in a development environment (this is because it's web-crawler content). The only information created in the production environment is information about the users.
I need to find a good way to update the data stored in production. Should I restore only the tables updated in the development environment and let PostgreSQL update those records in production, or would it be better to back up the user information from production, insert it into development, and then restore the whole database to production?
Thank you
You can use pg_dump to export the data just from the non-user tables in the development environment and pg_restore to bring that into prod.
The -t switch will let you pick specific tables.
pg_dump -d <database_name> -t <table_name>
https://www.postgresql.org/docs/current/static/app-pgdump.html
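For example (a sketch; the table and database names are placeholders), you could dump just the crawler tables in custom format and restore them into production:
# Dump only the content tables from the development database (custom format).
pg_dump -Fc -d dev_db -t articles -t crawl_results -f content_tables.dump
# Recreate those tables in production; --clean/--if-exists drop them first if they already exist.
pg_restore --clean --if-exists -d prod_db content_tables.dump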
There are many tips around this subject here and here.
I'd suggest you take a look at those links before anything else.
If your data is discarded on each update, then a plain dump will be enough. You can pipe the pg_dump output directly to psql connected to production and skip the pg_restore step, something like below:
# Of course you must drop the tables to load them again,
# so it's reasonable to make a full backup before this.
pg_dump -Fp -U user -h host_to_dev -T user your_db | psql -U user -h host_to_production your_db
You might be asking yourself, "Why is he telling me to drop my tables?"
Bulk loading data into a fresh table is faster than deleting the old data and inserting it again. A quote from the docs:
Creating an index on pre-existing data is quicker than updating it incrementally as each row is loaded.
Ps¹: If you can't connect to both environments at the same time, then you'll need to run pg_restore manually.
Ps²: I don't recommend it, but you can append the --clean option to pg_dump to generate DROP statements automatically. Be extremely careful with this option to avoid dropping unexpected objects.
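A hedged illustration of that option, reusing the placeholder hosts and users from above; --if-exists (which requires --clean) keeps the generated DROP statements from failing on objects that don't exist yet:
# Review the generated SQL before pointing this at production.
pg_dump -Fp --clean --if-exists -U user -h host_to_dev -T user your_db | psql -U user -h host_to_production your_db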

postgres db backup is small after crontab

I'm trying to do a simple postgres backup with crontab. Here is the command I use:
# m h dom mon dow user command
49 13 * * * postgres /usr/bin/pg_dump store | bzip2 > /home/backups/postgres/$(date +"\%Y-\%m-\%d")_store.sq.bz2
A backup file is created but it is very small (looks like 14 bytes).
I can run this command just fine in the terminal (with a filesize that matches my db).
The log files don't mention any errors (grep CRON /var/log/syslog). Any idea what might be off?
The key to solving this is to realize that running the 'same' command in bash and running it via cron isn't the same thing!
For example, when run via cron, the usual defaults (.bash_profile, .pgpass, default binary paths) don't apply, so what works in bash may not work under cron.
As a checklist:
Ensure bzip2 is replaced with its complete path (e.g. /usr/bin/bzip2 on CentOS / RHEL).
Ensure that the store database is readable by the cron command (e.g. adding -U postgres would be a good addition). If the DB login depends on a .pgpass file, it may not work under cron because the environment differs; in that case you may need to configure pg_hba.conf, e.g. allowing 'trust' authentication for a specific, known DB / machine / user combination.
Ensure that /home/backups/postgres/... is writable by the cron user, for obvious reasons. A corrected crontab line is sketched below.
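As a sketch, the corrected /etc/crontab line from the question might look like this (assuming the postgres OS user can connect without a password, e.g. via peer or trust authentication):
# Full binary paths, explicit -U, and escaped % signs for cron
49 13 * * * postgres /usr/bin/pg_dump -U postgres store | /usr/bin/bzip2 > /home/backups/postgres/$(date +"\%Y-\%m-\%d")_store.sql.bz2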

Remote trigger a postgres database backup

I would like to back up my production database before and after running a database migration, from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.NET application; it checks out changes in the source code, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up with a crontab job for daily backups, but I also want a way of triggering a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database, from a client connection, to back itself up. If I call pg_dump from the web server as part of the deploy script, it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQL Server lets you call the BACKUP command from within a DB connection, which puts the backups on the database machine.
I found this post about MySQL saying that it's not a supported feature in MySQL. Is Postgres the same? Remote backup of MySQL database
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB server and then calls pg_dump. This would mean storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and assign it read-only privileges on all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate with the private key. It will not be able to do anything besides creating a database backup, so it should be reasonably safe.
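From the deploy server, triggering the backup is then just an SSH login (a sketch; the host name and key path are placeholders, and on Windows you can use the bundled OpenSSH client or plink):
ssh -i /path/to/pgbackup_id_rsa pgbackup@your-centos-db-server
The exec in ~pgbackup/.bash_profile runs pg_dump on the DB server and then the session exits.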
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even create a tablespace there.
Hi Mike, you are correct.
Using pg_dump, we can save the backup only on the local system. In our case we created a script on the DB server for taking the base backup, and an expect script on another server which runs that script on the database server.
All our servers are Linux servers; we did this with shell scripts.