How to automate using a production Postgres database backup in a local Flask environment - postgresql

We use Postgres and Flask for our website, and we use the production database dump locally pretty often. To get a fresh dump, I use a remote desktop connection (RDC) to connect to pgAdmin, then use RDC again to copy the .bak file from the server and save it locally. Likewise, I use a local instance of pgAdmin to restore the database state from the backup.
My manager asked me to automate this process so that the production database is used each time a local Flask instance is launched. How can I do that?

You could write a shell script that dumps the database to a local file using pg_dump, then use pg_restore to build a new local database from that dump. You could probably even just pipe the output of pg_dump straight into pg_restore (using the custom dump format)... something like
pg_dump --host <remote-database-host> --dbname <remote-database-name> --username <remote-username> --format=custom | pg_restore --host <local-database-host> --username <local-username> --dbname <local-database-name>
To get your password into pg_dump / pg_restore you'll probably want to use a .pgpass file, as described here: How to pass in password to pg_dump?
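For reference, a .pgpass file is just one colon-separated line per server and must be readable only by you (chmod 600); the hosts, databases and passwords below are placeholders:
# ~/.pgpass -- format is hostname:port:database:username:password
prod-db.example.com:5432:myapp:readonly_user:productionsecret
localhost:5432:myapp_local:postgres:localsecret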
If you want this to happen automatically when you launch a Flask instance locally, you could call the shell script from your initialization code using a subprocess call when a LOCAL_INSTANCE environment variable is set, or something along those lines.
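Putting that together, a minimal sketch of such a script could look like the following; the host, database and user names are placeholders, and it assumes both passwords live in ~/.pgpass:
#!/usr/bin/env bash
# refresh_local_db.sh -- hypothetical helper; all names below are placeholders.
# Assumes credentials for both servers are stored in ~/.pgpass.
set -euo pipefail

REMOTE_HOST=prod-db.example.com
REMOTE_DB=myapp
REMOTE_USER=readonly_user
LOCAL_DB=myapp_local
LOCAL_USER=postgres

# Recreate the local database from scratch...
dropdb --if-exists --host localhost --username "$LOCAL_USER" "$LOCAL_DB"
createdb --host localhost --username "$LOCAL_USER" "$LOCAL_DB"

# ...then stream a custom-format dump from production straight into the local restore.
pg_dump --host "$REMOTE_HOST" --username "$REMOTE_USER" --dbname "$REMOTE_DB" --format=custom \
  | pg_restore --host localhost --username "$LOCAL_USER" --dbname "$LOCAL_DB" --no-owner
In the Flask app's startup code you could then run something like subprocess.run(["./refresh_local_db.sh"], check=True) whenever os.environ.get("LOCAL_INSTANCE") is set.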

Related

Dump Contents of RDS Postgres Query

Short Version of this Question:
I'd like to dump the contents of a Postgres query from a db instance hosted in RDS inside of a shell script.
Complete Version:
Right now I'm writing a shell script with which I'd like to dump the contents of a query from a source database into a .dump file, and then run that dump file against a destination database instance. Both DB instances are hosted in RDS.
MySQL allows you to do this using the mysqldump tool, but the recommended answer to this problem in Postgres seems to be to use the COPY command. However, the COPY command isn't available in RDS instances. The recommended solution in this case seems to be to use the '\copy' command, which does the same thing locally using the psql tool. However, it doesn't seem like this is a supported option inside of a shell script.
What's the best way to accomplish this?
Thank you!
I am not familiar with shell, but I have used a batch file on Windows to dump the output of a query to a file and to import the file on another instance.
Here is what I used to export from Postgres RDS to a file on Windows.
SET PGPASSWORD=your_password
cd "C:\Program Files (x86)\pgAdmin 4\v3\runtime"
psql -h your_host -U your_username -d your_databasename -c "\copy (your_query) TO 'path\file_name.sql'"
All of the above commands are in one batch file.
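Since the original question asked for a shell script, the same idea on Linux might look roughly like this; the RDS endpoints, credentials, query and table names are all placeholders:
#!/usr/bin/env bash
# Sketch only -- adjust hosts, users, the query and the table names to your setup.
export PGPASSWORD=your_password   # or keep both passwords in ~/.pgpass instead

# Export the query result from the source RDS instance to a local file.
psql -h source-instance.xxxx.us-east-1.rds.amazonaws.com -U source_user -d source_db \
     -c "\copy (SELECT * FROM your_table) TO 'query.csv' WITH CSV"

# Load that file into the matching table on the destination RDS instance.
psql -h dest-instance.xxxx.us-east-1.rds.amazonaws.com -U dest_user -d dest_db \
     -c "\copy destination_table FROM 'query.csv' WITH CSV"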

Database backup in Jelastic can't be done from the app node

My goal is to have an automatic database backup that is sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to fetch it manually using an FTP add-on!
But as I said earlier, my goal is to send the backup file automatically to my S3 bucket. What I tried to do is run pg_dump from my app node instead of the PostgreSQL node (hopefully I can have some control from the app side). The command I run basically looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net"
-U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output of my log file is :
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node has a different pg_dump version than the nginx/app node, so the backup can't be performed! I looked around but can't find an easy way to solve this. I'm open to any alternative way that helps achieve my initial goal.

How to load data from a local file into PostgreSQL on OpenShift?

I created an app on OpenShift and added a PostgreSQL cartridge.
There is no management application like pgAdmin supported by OpenShift.
I manage the DB with PuTTY under Windows.
But how can I import the local data to the DB on OpenShift?
Thank you in advance!
First take a backup of your database using the command below.
pg_dump dbname > outfile
Then
add the PostgreSQL database like so:
rhc-ctl-app -a postgresApp -e add-postgresql-8.4
then connect to the remote database and restore the dump (with psql for a plain-text dump like the one above, or with pg_restore if the dump was made in custom format).
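For the restore step, one way to do it (assuming the OpenShift v2 rhc client tools and a plain-text dump like the one above; the app name, user and database are placeholders) is to forward the gear's database port and load the dump through it:
# In one terminal: forward the gear's PostgreSQL port to your local machine.
rhc port-forward -a postgresApp

# In another terminal: load the plain SQL dump through the forwarded port.
# (Use pg_restore instead of psql if the dump was made with pg_dump -Fc.)
psql -h 127.0.0.1 -p 5432 -U adminuser -d dbname < outfile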

Heroku: Storing local MongoDB to MongoLab

It might be a dead simple question, yet I still wanted to ask. I've created a Node.js application and deployed it on Heroku. I've also set up the database connection without any trouble.
However, I cannot get the local data in my MongoDB loaded into the MongoLab database I use on Heroku. I've searched Google and could not find a useful solution, so I ended up trying these commands:
mongodump
And:
mongorestore -h mydburl:mydbport -d mydbname -u myusername -p mypassword --db Collect.1
Now when I run the mongorestore command, I receive the error:
ERROR: multiple occurrences
Import BSON files into MongoDB.
When I take a look at the DB files for the MongoDB instance I specified and used during local development, I see that there are files Collect.0, Collect.1 and Collect.ns. Now I know that my db name is 'Collect' since when I use the shell I always type `use Collect`. So I specified the db as Collect.1 on the command line, but I still receive the same error. Should I remove all the other Collect files, or is there another way around this?
You can't use 'mongorestore' against the raw database files. 'mongorestore' is meant to work off of a dump file generated by 'mongodump'. First use 'mongodump' to dump your local database and then use 'mongorestore' to restore that dump file.
If you go to the Tools tab in the MongoLab UI for your database, and click 'Import / Export' you can see an example of each command with the correct params for your database.
Email us at support@mongolab.com if you continue to have trouble.
-will
This can be done in two steps.
1. Dump the database:
mongodump -d mylocal_db_name -o dump/
2. Restore the database:
mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password dump/mylocal_db_name/

Remotely trigger a Postgres database backup

I would like to back up my production database before and after running a database migration, from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.NET application; it checks out changes from source control, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up with a crontab job for daily backups, but I also want a way of triggering a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database, from a client connection, to back itself up. If I call pg_dump from the web server as part of the deploy script, it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQL Server lets you call the BACKUP command from within a DB connection, which will put the backups on the database machine.
I found this post about MySQL, which says it's not a supported feature in MySQL. Is Postgres the same? Remote backup of MySQL database
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB Server, then calls pg_dump? This would mean I'm storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and grant it read-only privileges on all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using SSH and authenticate using the private key. The pgbackup account will not be able to do anything besides creating a database backup, so it should be reasonably safe.
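The deploy-side call is then just an SSH login, since .bash_profile runs the dump and exits; a rough sketch (the key path and host name are placeholders, and on Windows plink from the PuTTY suite can be used the same way):
# Trigger the backup from the deploy script: logging in is enough, because
# pgbackup's .bash_profile exec's pg_dump and the session ends when the dump finishes.
ssh -i /path/to/pgbackup_id_rsa pgbackup@your-centos-db-server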
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is installed, but you don't need to start the PostgreSQL service or even create a tablespace there.
Hi Mike, you are correct.
Using pg_dump, we can save the backup only on the local system. In our case we created a script on the DB server for taking the base backup, and an expect script on another server which runs that script on the database server.
All our servers are Linux servers; we have done this using shell scripts.