Dump Contents of RDS Postgres Query - postgresql

Short Version of this Question:
I'd like to dump the contents of a Postgres query from a db instance hosted in RDS inside of a shell script.
Complete Version:
Right now I'm writing a shell script that should dump the contents of a query from a source database into a .dump file, then run that dump file against a destination database instance. Both db instances are hosted in RDS.
MySQL allows you to do this using the mysqldump tool, but the recommended answer to this problem in Postgres seems to be to use the COPY command. However, the server-side COPY command isn't available in RDS instances. The recommended workaround in this case seems to be the '\copy' command, which does the same thing client-side using the psql tool. However, it doesn't seem like this is a supported option inside of a shell script.
What's the best way to accomplish this?
Thank you!

I am not familiar with shell scripting, but I have used a batch file on Windows to dump the output of a query to a file and to import that file on another instance.
Here is what I used to export from a Postgres RDS instance to a file on Windows:
SET PGPASSWORD=your_password
cd "C:\Program Files (x86)\pgAdmin 4\v3\runtime"
psql -h your_host -U your_username -d your_databasename -c "\copy (your_query) TO path\file_name.sql"
All of the above commands go in one batch file.
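On Linux the same idea works in a plain shell script as well -- `psql -c` accepts `\copy` like any other meta-command, and since `\copy` runs client-side it works against RDS, where server-side `COPY TO` a file does not. A rough sketch (hosts, credentials, database names, and the query are all placeholders):

```shell
#!/bin/sh
# Sketch: export the result of a query from one RDS instance and load it
# into another. All connection details below are placeholders.
export PGPASSWORD="source_password"

# \copy runs on the client, so it is allowed on RDS.
export_query() {
    psql -h "$1" -U "$2" -d "$3" \
         -c "\\copy ($4) TO 'query_dump.csv' WITH CSV HEADER"
}

import_query() {
    psql -h "$1" -U "$2" -d "$3" \
         -c "\\copy $4 FROM 'query_dump.csv' WITH CSV HEADER"
}

# Example invocation (uncomment and substitute real values):
# export_query source.rds.amazonaws.com admin mydb "SELECT * FROM orders"
# import_query dest.rds.amazonaws.com admin mydb orders_copy
```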

Related

Is it possible to import postgresql custom-format dump file in cloud sql without using pg_restore?

I downloaded the postgresql .dmp file from the chembl database.
I want to import this into gcp cloudsql.
When I run the import from the console or with the gcloud command, I get the following error:
Importing data into Cloud SQL instance...failed.
ERROR: (gcloud.sql.import.sql) [ERROR_RDBMS] exit status 1
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.
Can I import custom-format dmp files without using the pg_restore command?
https://cloud.google.com/sql/docs/postgres/import-export/importing
That page describes pg_restore, but I couldn't get it to work.
For custom-format files, is it necessary to run pg_restore after uploading them to Cloud Shell?
According to the CloudSQL docs:
Only plain SQL format is supported by the Cloud SQL Admin API.
The custom format is allowed if the dump file is intended for use with pg_restore.
If you cannot use pg_restore for some reason, I would spin up a local Postgres instance (e.g., on your laptop) and use pg_restore to restore the database there.
After loading it into your local database, you can use pg_dump to dump to a file in plain SQL format, then load that into Cloud SQL with the console or the gcloud command.
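That round trip might look something like the sketch below. The dump file name, bucket, instance name, and database name are all placeholders, and gcloud/gsutil are assumed to be installed and authenticated:

```shell
#!/bin/sh
# Sketch: convert a custom-format dump to plain SQL via a local Postgres,
# then import it into Cloud SQL. All names below are placeholders.
restore_locally() {
    # Load the custom-format dump into a scratch database on localhost.
    createdb scratch_db
    pg_restore --no-owner --dbname=scratch_db chembl.dmp
}

dump_plain() {
    # Re-dump in plain SQL format, which the Cloud SQL import accepts.
    pg_dump --no-owner --format=plain scratch_db > chembl_plain.sql
}

import_to_cloudsql() {
    # Upload to a bucket, then import into the Cloud SQL instance.
    gsutil cp chembl_plain.sql gs://my-bucket/chembl_plain.sql
    gcloud sql import sql my-instance \
        gs://my-bucket/chembl_plain.sql --database=chembl
}
```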

How to automate using a production postgres database backup in local Flask environment

We use Postgres and Flask for our website, and we use the production database dump locally pretty often. To get a fresh dump, I use a remote desktop connection (RDC) to connect to pgAdmin, then use RDC again to copy the .bak file from the server and save it locally. Likewise, I use a local instance of pgAdmin to restore the database state from the backup.
My manager asked me to automate this process so the production database is pulled each time a local Flask instance is launched. How can I do that?
You could write a shell script that dumps the database to a local file using pg_dump, then uses pg_restore to build a new local database from that dump. You can even pipe the output of pg_dump straight into pg_restore with no intermediate file. Note that pg_restore only understands non-plain dump formats, so pass --format=custom to pg_dump, and that the two commands are joined with a pipe (|), not a redirect... something like
pg_dump --format=custom --host <remote-database-host> --dbname <remote-database-name> --username <remote-username> | pg_restore --host <local-database-host> --username <local-username> --dbname <local-database-name>
To get your password into pg_dump / pg_restore you'll probably want to use a .pgpass file, as described here: How to pass in password to pg_dump?
If you want this to happen automatically when you launch a Flask instance locally, you could call the shell script from your initialization code with a subprocess call when a LOCAL_INSTANCE environment variable is set, or something along those lines.
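Putting those pieces together, a rough sketch of such a script (hosts, database names, and users are placeholders; passwords come from ~/.pgpass):

```shell
#!/bin/sh
# Sketch: refresh the local dev database from production.
# All hosts, database names, and users below are placeholders.
refresh_local_db() {
    # Start from a clean local database.
    dropdb --if-exists --host localhost flask_dev
    createdb --host localhost flask_dev
    # Custom-format dump piped straight into pg_restore, no temp file.
    pg_dump --format=custom \
            --host prod-db.example.com \
            --username prod_user \
            flask_prod \
        | pg_restore --host localhost --dbname flask_dev --no-owner
}

# Only refresh when explicitly requested, e.g. from Flask init code:
# if [ "$LOCAL_INSTANCE" = "1" ]; then refresh_local_db; fi
```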

database backup in jelastic can't be done from the app node

My goal is to have an automatic database backup that is sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to fetch it manually using an FTP add-on!
But as I said earlier, my goal is to send the backup file to my S3 bucket automatically, so what I tried was to run pg_dump from my app node instead of the PostgreSQL node (hoping to have some control from the app side). The command I run basically looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" -U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output of my log file is :
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node has a different pg_dump version than the nginx/app node, so the backup can't be performed. I looked around but couldn't find an easy way to solve this. I'm open to any alternative that helps achieve my initial goal.
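One way around the mismatch is to install a pg_dump on the app node that matches (or is newer than) the server version. A sketch assuming a Debian/Ubuntu-based app node and the PGDG apt repository; the package name, bucket, and environment variables are placeholders:

```shell
#!/bin/sh
# Sketch: install a server-matching pg_dump (PostgreSQL 10 here, to match
# the 10.3 server in the error message), then back up straight to S3.
# Assumes a Debian/Ubuntu app node; all names are placeholders.
install_matching_pg_dump() {
    sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
        > /etc/apt/sources.list.d/pgdg.list'
    wget -qO - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    sudo apt-get update && sudo apt-get install -y postgresql-client-10
}

backup_to_s3() {
    PGPASSWORD="$DB_PASSWORD" pg_dump -h "$DB_HOST" -U "$DB_USER" -p 5432 \
        -f sql_backup.sql "$DB_NAME"
    # Requires the AWS CLI configured with credentials for the bucket.
    aws s3 cp sql_backup.sql "s3://my-bucket/backups/sql_backup_$(date +%F).sql"
}
```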

Heroku: Storing local MongoDB to MongoLab

It might be a dead simple question, yet I still wanted to ask. I've created a Node.js application and deployed it on Heroku. I've also set up the database connection without any trouble.
However, I cannot load the local data in my MongoDB into the MongoLab database I use on Heroku. I've searched Google and could not find a useful solution, so I ended up trying these commands:
mongodump
And:
mongorestore -h mydburl:mydbport -d mydbname -u myusername -p mypassword --db Collect.1
Now when I run the mongorestore command, I receive the error:
ERROR: multiple occurrences
Import BSON files into MongoDB.
When I take a look at the DB file directory for MongoDB I specified and used during local development, I see that there are files Collect.0, Collect.1 and Collect.ns. Now I know that my db name is 'Collect', since when I use the shell I always type `use Collect`. So I specified the db as Collect.1 on the command line, but I still receive the same error. Should I remove all the other Collect files, or is there another way around this?
You can't use 'mongorestore' against the raw database files. 'mongorestore' is meant to work off of a dump file generated by 'mongodump'. First use 'mongodump' to dump your local database and then use 'mongorestore' to restore that dump file.
If you go to the Tools tab in the MongoLab UI for your database, and click 'Import / Export' you can see an example of each command with the correct params for your database.
Email us at support@mongolab.com if you continue to have trouble.
-will
This can be done in two steps.
1. Dump the database:
mongodump -d mylocal_db_name -o dump/
2. Restore the database:
mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password dump/mylocal_db_name/
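The two steps can be wrapped into one small script. Everything below (host, port, credentials, database names) is a placeholder -- copy the real values from the MongoLab UI:

```shell
#!/bin/sh
# Sketch: push a local MongoDB database up to MongoLab.
# All connection details below are placeholders.
push_to_mongolab() {
    # Dump the *running* local database (not its raw Collect.* data files).
    mongodump -d Collect -o dump/
    # Restore that dump into the remote MongoLab database.
    mongorestore -h xyz.mongolab.com:12345 -d remote_db_name \
        -u myusername -p mypassword dump/Collect/
}
```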

Can PostgreSQL COPY read CSV from a remote location?

I've been using JDBC with a local Postgres DB to copy data from CSV files into the database with the Postgres COPY command. I use Java to parse an existing CSV file into a CSV format that matches the tables in the DB, save the parsed CSV to my local disk, and then have JDBC execute a COPY command using the parsed CSV against my local DB. Everything works as expected.
Now I'm trying to perform the same process on a Postgres DB on a remote server using JDBC. However, when JDBC tries to execute the COPY I get
org.postgresql.util.PSQLException: ERROR: could not open file "C:\data\datafile.csv" for reading: No such file or directory
Am I correct in understanding that the COPY command tells the DB to look locally for this file? I.E. the remote server is looking on its C: drive (doesn't exist).
If this is the case, is there anyway to indicate to the copy command to look on my computer rather than "locally" on the remote machine? Reading through the copy documentation I didn't find anything that indicated this functionality.
If the functionality doesn't exist, I'm thinking of just populating the whole database locally and then copying the database to the remote server, but I wanted to check that I wasn't missing anything.
Thanks for your help.
Create your sql file as follows on your client machine
COPY testtable (column1, c2, c3) FROM STDIN WITH CSV;
1,2,3
4,5,6
\.
Then execute, on your client
psql -U postgres -f /mylocaldrive/copy.sql -h remoteserver.example.com
If you use JDBC, the best solution for you is to use the PostgreSQL COPY API
http://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html
Otherwise (as already noted by others) you can use \copy from psql which allows accessing the local files on the client machine
To my knowledge, the server-side COPY command only reads files on the machine where the database is running; COPY FROM STDIN is the exception, since it reads from the client connection (which is what psql's \copy uses under the hood).
You could make a shell script where you run the Java conversion, then use psql to issue a \copy command, which reads from a file on the client machine.
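Sketching that script out -- the Java conversion step (jar name and arguments), file names, hosts, and table columns below are all hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: parse a CSV locally, then stream it to the remote Postgres
# server with client-side \copy. All names below are placeholders,
# including the hypothetical csv-converter.jar conversion step.
load_csv_remote() {
    # Run the existing Java parsing step first.
    java -jar csv-converter.jar raw.csv parsed.csv
    # \copy reads parsed.csv on *this* machine and streams it over the
    # connection, so the server never needs access to the local disk.
    psql -h remoteserver.example.com -U postgres -d mydb \
         -c "\\copy testtable (column1, c2, c3) FROM 'parsed.csv' WITH CSV"
}
```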