Database backup in Jelastic can't be done from the app node - PostgreSQL

My goal is to have an automatic database backup that is sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to retrieve it manually using an FTP add-on!
As I said, my goal is to send the backup file to my S3 bucket automatically, so what I tried is to run pg_dump from my app node instead of the PostgreSQL node (hoping to get some control from the app side). The command I run basically looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" \
  -U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output of my log file is:
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node runs a newer PostgreSQL version than the pg_dump installed on the nginx/app node, so the backup can't be performed! I looked around but can't find an easy way to solve this. I am open to any alternative way that helps to achieve my initial goal.
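For reference, the kind of flow I am aiming for looks roughly like this (an untested sketch: it assumes a pg_dump client matching the server's major version 10 can be installed on the app node, that the AWS CLI is configured there, and that my-backup-bucket and the log file name are placeholders):
#!/bin/bash
# placeholder log file for pg_dump errors
LOG_FILE="pg_dump_errors.log"
# dump the database with a client that matches the server version (10.x)
PGPASSWORD="my_database_password" pg_dump \
  --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" -p 5432 \
  -U my_db_username -f sql_backup.sql "database_name" 2> "$LOG_FILE"
# upload the resulting dump to S3
aws s3 cp sql_backup.sql "s3://my-backup-bucket/backups/sql_backup_$(date +%F).sql"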

Related

Trying to dump a PostgreSQL-10 DB running in a CentOS 7 machine and restore it into a Windows 10 machine

I am trying to execute a backup of my PostgreSQL-10 database running on a CentOS 7 machine and then restore it on a development machine running Windows 10, but I am getting errors during the restore process:
pg_restore: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
I have made sure that the parameters passed to both the dump and restore commands are the same:
pg_dump --format=c --compress=9 --encoding=UTF-8 -n public --verbose --username=postgres databaseName -W -f /usr/local/production-dump.backup
However, it does not work at all. Even though the schema is restored, the data is not: right before the restore process starts restoring data, it gives a "pipe has ended" error and does not proceed with the full restore. I am using the "custom" format because the plain SQL and tar formats generate huge backup files.
What am I doing wrong? Is there any parameter that I need to pass to the dump or restore commands?
The likely explanation is that the file was modified during file transfer. Could you calculate a checksum of the file before and after transfer and verify that it is the same?
If the file did not change, then you have probably found a PostgreSQL bug. If you have a dump that you can share and that exhibits the problem, please report this problem to PostgreSQL.
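For example, a checksum can be taken on each side and compared (the Windows path below is a placeholder).
On the CentOS 7 machine, before transferring the file:
sha256sum /usr/local/production-dump.backup
On the Windows 10 machine, after transferring (run in cmd.exe):
CertUtil -hashfile C:\path\to\production-dump.backup SHA256
If the two hashes differ, the file was corrupted in transit; a common culprit is transferring a binary dump in FTP text/ASCII mode.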

Dump Contents of RDS Postgres Query

Short Version of this Question:
I'd like to dump the contents of a Postgres query from a db instance hosted in RDS inside of a shell script.
Complete Version:
Right now I'm writing a shell script that should dump the contents of a query from a source database into a .dump file and then run that file against a destination database instance. Both db instances are hosted in RDS.
MySQL allows you to do this using the mysqldump tool, but the recommended answer to this problem in Postgres seems to be to use the COPY command. However, the COPY command isn't available in RDS instances. The recommended solution in this case seems to be to use the '\copy' command, which does the same thing locally using the psql tool. However, it doesn't seem like this is a supported option inside of a shell script.
What's the best way to accomplish this?
Thank you!
I am not familiar with shell scripting, but I have used a batch file on Windows to dump the output of a query to a file and to import the file on another instance.
Here is what I used to export from Postgres RDS to a file on Windows.
SET PGPASSWORD=your_password
cd "C:\Program Files (x86)\pgAdmin 4\v3\runtime"
psql -h your_host -U your_username -d your_databasename -c "\copy (your_query) TO path\file_name.sql"
All of the above commands are in one batch file.
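The same \copy approach works from a Linux shell script (an untested sketch; the hosts, credentials, query, destination table, and file path are all placeholders):
#!/bin/bash
# export the query result from the source RDS instance to a local CSV file
export PGPASSWORD=source_password
psql -h source_host -U your_username -d your_databasename \
  -c "\copy (your_query) TO '/tmp/query_output.csv' WITH CSV"
# import the file into a table on the destination RDS instance
export PGPASSWORD=destination_password
psql -h destination_host -U your_username -d your_databasename \
  -c "\copy destination_table FROM '/tmp/query_output.csv' WITH CSV"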

Custom pg:dump options with Heroku pg:backups capture?

When developing, I need to pull the latest database so I know I'm working with the latest data. However, we keep a table full of Archives that I don't need to bother downloading because it's a very large table.
I know pg_dump allows for custom parameters that will let you exclude a certain table from being dumped.
Without doing anything crazy like having 2 databases, 1 for data and 1 for archives, is there any way to download everything BUT the archives table from Heroku?
I still need it to keep backups of the archives table, but I don't want to be downloading it. Can I just do a pg_dump when needed that is separate from the backups?
I know it's a long shot, but any suggestions would be greatly appreciated.
You can't add any custom pg_dump options when using heroku pg:backups capture. This command actually calls an undocumented Heroku Postgres API and it doesn't pass any parameters (see here for the code if you are curious).
What you can do is run your own pg_dump dump command that points to the Heroku Postgres instance.
Get the connection info with pg:credentials, where DATABASE_URL can also be the database color if you have more than one database attached to the app:
> heroku pg:credentials DATABASE_URL --app app_name
Connection info string:
"dbname=zzxcasdqwe host=ec2-1-1-1-1.compute-1.amazonaws.com port=1111 user=asdfasdf password=qwertyqwerty sslmode=require"
Connection URL:
postgres://asdfasdf:qwertyqwerty@ec2-1-1-1-1.compute-1.amazonaws.com:1111/zzxcasdqwe
Take either the connection info string or the connection URL and include that as the first argument to pg_dump, then add your custom options:
pg_dump "dbname=zzxcasdqwe host=ec2-1-1-1-1.compute-1.amazonaws.com port=1111 user=asdfasdf password=qwertyqwerty sslmode=require"\
-n schema -t table -O -x -Fc -f dump.out
# OR
pg_dump postgres://asdfasdf:qwertyqwerty@ec2-1-1-1-1.compute-1.amazonaws.com:1111/zzxcasdqwe \
-n schema -t table -O -x -Fc -f dump.out
I also co-wrote a Heroku plugin (parse_db_url) that will parse DATABASE_URLs into other formats like pg_dump, pg_restore, pgpass etc. I find it useful when dealing with several different Heroku databases.
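For the original goal of pulling everything but the Archives table, pg_dump's exclude options can be added to the same command (a sketch; it assumes the table is named archives):
pg_dump postgres://asdfasdf:qwertyqwerty@ec2-1-1-1-1.compute-1.amazonaws.com:1111/zzxcasdqwe \
  --exclude-table-data=archives -O -x -Fc -f dump_without_archives.out
--exclude-table-data keeps the table definition but skips its rows; use --exclude-table=archives instead if you don't want the table at all.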

pg_basebackup fails with message: could not create directory

I'm trying to create a hot standby server using PostgreSQL 9.3.5 and Red Hat 6.5.
I receive the following error when running pg_basebackup:
$ pg_basebackup -h 172.28.250.10 -D /var/lib/pgsql/9.3/data -U replicador -v -P
pg_basebackup: could not create directory "/var/lib/pgsql/9.3/data/osm_indices":
File exists
/var/lib/pgsql/9.3/data exists and is empty when I launch the tool; when it fails, there is data at /var/lib/pgsql/9.3/data/osm_indices. The DB has 5 tablespaces and 4 are completely copied.
Both servers are running the same O.S. and DB server version.
I've tried the same with 2 different masters and 3 slaves with the same result, but it is not always the same tablespace that fails to copy.
Thanks,
Luis.
It looks like you might have tablespaces inside the data directory.
You should not do that. Tablespaces are meant to be separate paths, and some of the tools assume that they will be.
Move the tablespaces outside the datadir and pg_basebackup should behave, so long as you have corresponding paths on the destination server.
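To check whether any tablespaces live inside the data directory, their locations can be listed from the catalog; any path under /var/lib/pgsql/9.3/data is the problem:
psql -c "SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;"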

Heroku: Storing local MongoDB to MongoLab

It might be a dead simple question, yet I still wanted to ask. I've created a Node.js application and deployed it on Heroku. I've also set up the database connection without any trouble.
However, I cannot load the local data from my MongoDB into the MongoLab database I use on Heroku. I've searched Google and could not find a useful solution, so I ended up trying these commands:
mongodump
And:
mongorestore -h mydburl:mydbport -d mydbname -u myusername -p mypassword --db Collect.1
Now when I run the mongorestore command, I receive the error:
ERROR: multiple occurrences
Import BSON files into MongoDB.
When I take a look at the DB path I specified and used during local development, I see the files Collect.0, Collect.1 and Collect.ns. Now I know that my db name is 'Collect', since when I use the shell I always type `use Collect`. So I specified the db as Collect.1 on the command line, but I still receive the same errors. Should I remove all the other Collect files, or is there another way around this?
You can't use 'mongorestore' against the raw database files. 'mongorestore' is meant to work off of a dump file generated by 'mongodump'. First use 'mongodump' to dump your local database and then use 'mongorestore' to restore that dump file.
If you go to the Tools tab in the MongoLab UI for your database, and click 'Import / Export' you can see an example of each command with the correct params for your database.
Email us at support@mongolab.com if you continue to have trouble.
-will
This can be done in two steps.
1. Dump the database
mongodump -d mylocal_db_name -o dump/
2. Restore the database
mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password dump/mylocal_db_name/