pg_dump from remote server to localhost - postgresql

Hi, can anyone help me dump a PostgreSQL database on a remote AWS server into a PostgreSQL database on my local machine?
I've been trying to do it using the answer in this Stack Overflow post, but it keeps failing.
The command I'm using is
pg_dump -C -h ssh ubuntu@ec2-59-16-143-85.eu-west-1.compute.amazonaws.com -U dev_user paycloud_dev | psql -h localhost -U dev_user paycloud_dev
But I keep getting the error
pg_dump: too many command-line arguments (first is "paycloud_dev")
Can't figure out what I'm doing wrong.
Just to add: dev_user is the role I've set up in PostgreSQL on both the local machine and the remote server, and paycloud_dev is the name of the database on both (owner is dev_user).
Edit 1
Tried the command below as per a post that has since been deleted for some reason
pg_dump -C -h ec2-59-16-143-85.eu-west-1.compute.amazonaws.com -U dev_user paycloud_dev | psql -h localhost -U dev_user paycloud_dev
This is now giving me the error
pg_dump: [archiver (db)] connection to database "paycloud_dev" failed: could not connect to server: Connection refused
Is the server running on host "ec2-59-16-143-85.eu-west-1.compute.amazonaws.com" (59.16.143.85) and accepting
TCP/IP connections on port 5432?
I went on to AWS and noted that this is the Elastic IP of the server. I then tried the following (the private IP address):
pg_dump -C -h 170.30.43.35 -U dev_user paycloud_dev | psql -h localhost -U dev_user paycloud_dev
This asks me for the password for paycloud_dev and, when I enter it, pauses for a good 2 or 3 minutes before coming back with:
pg_dump: [archiver (db)] connection to database "paycloud_dev" failed: could not connect to server: Connection refused
Is the server running on host "170.30.43.35" and accepting
TCP/IP connections on port 5432?
I've tried editing the AWS security group to add a rule that accepts all traffic (port range 0-65535) but the same error is occurring.
Edit 2
Tried the following, as per the post by pokoli:
ssh ubuntu@ec2-59-16-143-85.eu-west-1.compute.amazonaws.com pg_dump -C -h -U dev_user paycloud_dev | psql -U dev_user paycloud_dev
It's not working, though. It first asks me for the psql password for my laptop, then, before I can input anything, it gives an error.
[sudo] password for alzer: pg_dump: too many command-line arguments (first is "paycloud_dev")
Try "pg_dump --help" for more information.
Anyone?

You have to connect with SSH to the remote host, execute the dump there, and pipe it to your local machine. The following command should do it:
ssh ubuntu@ec2-59-16-143-85.eu-west-1.compute.amazonaws.com -C pg_dump -U dev_user paycloud_dev | psql -U dev_user paycloud_dev
The command will ask for the passwords of both users if needed, and the paycloud_dev database must already exist on localhost, otherwise the restore will fail.
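If the local database does not exist yet, you can create it before running the pipe; a minimal sketch, assuming the local role dev_user is allowed to create databases:
createdb -h localhost -U dev_user -O dev_user paycloud_dev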

Try doing this over an SSH tunnel:
ssh -fT -L 5432:127.0.0.1:5432 %remote_user_login%@%your_aws_host% sleep 10
pg_dump -C -h localhost -U dev_user paycloud_dev | psql -h localhost -U dev_user paycloud_dev
The first line creates the SSH tunnel and port mapping.
Also check your AWS security settings under "Security Groups"; maybe you forgot to open the ports.
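Note that if a local PostgreSQL server is already listening on 5432, the tunnel above cannot bind that port, and the psql side of the pipe would end up talking to the tunnel instead of your local server. A variant that forwards a different local port (5433 here, picked arbitrarily) avoids both problems:
ssh -fT -L 5433:127.0.0.1:5432 %remote_user_login%@%your_aws_host% sleep 10
pg_dump -C -h localhost -p 5433 -U dev_user paycloud_dev | psql -h localhost -p 5432 -U dev_user paycloud_dev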

Related

How do I access PostgreSQL by both socket and port?

I have started a PostgreSQL process as follows:
pg_ctl start -w -D /path/to/data -l /path/to/log -o "-F -k /path/to/unix/socket -h ''"
(alternatively -h '*' instead of -h '')
As an aside, see the reference documentation for pg_ctl (the outer command) and for postgres (which receives the -o parameters).
and created a user (with password) and database:
createuser -P admin
createdb -O admin db
I can connect to the database via the unix socket (does not trigger a password prompt):
psql -h /path/to/unix/socket -U admin -d dbname
but connecting via the TCP port fails:
psql -h localhost -U admin -d dbname
Password for user snaprevs_admin:
psql: FATAL: password authentication failed for user "admin"
What do I need to change so that both the unix socket and TCP connections work as expected?
If you are performing a custom pg_ctl launch, you should check that there isn't a default service already running. If there is, it will consume the default TCP port.
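For example, to see whether something is already listening on the default port, a quick check (assuming lsof is available):
sudo lsof -i :5432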
To stop the default service and prevent it from starting on next boot:
sudo systemctl disable postgresql --now
If you only want to stop it temporarily:
sudo service postgresql stop
You should now be able to stop and re-start your custom service (with -h '*'), then log in over TCP.
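Putting that together, restarting the custom instance and then logging in over TCP would look roughly like this (reusing the paths and names from the question; adjust as needed):
pg_ctl restart -w -D /path/to/data -l /path/to/log -o "-F -k /path/to/unix/socket -h '*'"
psql -h localhost -p 5432 -U admin -d dbname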

Postgres connection issue

I installed and started my Postgres database with brew (on my Mac). I also defined an entry in my /etc/hosts file (I tried both with 127.0.0.1 postgres and with postgres).
However, when I try
psql -h postgres -U postgres -p 5432
I cannot connect
psql: could not connect to server: Connection refused.
However, when I try with
psql -h localhost -U postgres -p 5432
I can connect. What is needed to be able to connect with psql -h postgres -U postgres -p 5432?
Make sure your PostgreSQL server is willing to accept TCP/IP connections on port 5432.
In your PostgreSQL configuration file (postgresql.conf), check these values:
listen_addresses = '*'
port = 5432
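If you are unsure where that configuration file lives, you can ask the running server and then restart it; a quick sketch, assuming a Homebrew install (the service name may differ, e.g. postgresql@16):
psql -h localhost -U postgres -c "SHOW config_file;"
brew services restart postgresql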

pgAdmin3 backup over ssh tunnel

I have a running PostgreSQL server on Amazon EC2. I connect to it with pgAdmin3 over an SSH tunnel configured directly in pgAdmin3 from my Mac.
I can run queries and see the full schema, no problem there.
If I try to make a backup of the database (from the pgAdmin3 GUI), then I get (even though the connection is actually open and working) the following exception:
/Applications/pgAdmin3.app/Contents/SharedSupport/pg_dump --host localhost --port 5432 --username "MY_USERNAME" --role "MY_ROLE" --no-password --format custom --encoding UTF8 --verbose --file "/Users/XXX/filename" "DATABASENAME"
pg_dump: [archiver (db)] connection to database "DATABASENAME" failed: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
Process ended with Exitcode 1.
Any idea why pg_dump in the background can not connect over the ssh tunnel?
An alternative, until I find a solution, is to do it from the terminal:
ssh <HOST> "pg_dump -U <USERNAME> -W -h localhost -F c <DATABASENAME> | gzip -c" > ./backup.sql.gz
This line worked for me:
ssh -o "Compression=no" server_adress "pg_dump -Z9 -Fc -U postgres db_name" > backup_name.dump

pg_dump postgres database from remote server when port 5432 is blocked

I'm trying to pg_dump a SQL database on a remote server in our DMZ. There are 2 problems.
There is not a lot of space left on the remote server, so the normal command to back the database up locally,
pg_dump -C database > sqldatabase.sql.bak
won't work due to space issues.
I also can't run the other version of the pg_dump command, dumping the database from the remote server to the local server with:
pg_dump -C -h remotehost -U remoteuser db_name | psql localhost -U localuser db_name
as the server is in our DMZ and port 5432 is blocked. What I'm looking to see is whether it is possible to pg_dump the database and immediately save it (over ssh or some other way) as a file on a remote server.
What I was trying was: pg_dump -C testdb | ssh admin@ourserver.com | > /home/admin/testdb.sql.bak
Does anyone know if what I am trying to achieve is possible?
You can connect with ssh to your remote server, run the pg_dump call over that connection, and send the output back to stdout on your local machine:
ssh user#remote_machine "pg_dump -U dbuser -h localhost -C --column-inserts" \
> backup_file_on_your_local_machine.sql
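The resulting file is a plain SQL script, so it can be replayed locally with psql; a small sketch, assuming a local database and role (db_name and localuser are placeholder names here):
psql -U localuser -d db_name -f backup_file_on_your_local_machine.sql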
Let's create a backup of a remote PostgreSQL database using pg_dump:
pg_dump -h [host address] -Fc -o -U [database user] <database name> > [dump file]
Later it can be restored on the same remote server using:
sudo -u postgres pg_restore -C mydb_backup.dump
Ex:
pg_dump -h 67.8.78.10 -Fc -o -U myuser mydb > mydb_backup.dump
Complete (all databases and objects):
pg_dumpall -U myuser -h 67.8.78.10 --clean --file=mydb_backup.dump
Restore from pg_dumpall --clean:
psql -f mydb_backup.dump postgres # it doesn't matter which db you select here
Copied from: https://codepad.co/snippet/73eKCuLx
You can try to dump part of a table to a file on your local machine like this (assuming your local machine has psql installed):
psql -h ${db_host} -p 5432 -U ${db_user} -d ${db_name} \
-c "\copy (SELECT * FROM my_table LIMIT 10000) to 'some_local_file.csv' csv;"
And you can import the exported csv into another db later like this:
COPY my_table FROM '/path/to/some_local_file.csv' WITH (FORMAT csv);
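Note that COPY ... FROM reads the file on the database server; if the CSV only exists on your client machine, the psql \copy form of the same command works from the client side (host, user, and database are placeholders, as above):
psql -h ${db_host} -p 5432 -U ${db_user} -d ${db_name} \
-c "\copy my_table FROM 'some_local_file.csv' csv;"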
One possible solution, piping through ssh, has already been mentioned.
You could also make your DB server listen on its public inet address, add a hostssl entry for your backup machine to pg_hba.conf, perhaps configure a client certificate for security, and then simply run the dump on the client/backup machine with pg_dump -h dbserver.example.com ...
This is simpler for unattended backups.
For the configuration of the connection (sslmode) see also the supported environment variables.
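As an illustration of that setup (the address, user, and database names below are made up), the relevant pieces would look roughly like this:
# postgresql.conf
listen_addresses = '*'
# pg_hba.conf: allow the backup machine over TLS with password authentication
hostssl  db_name  backup_user  203.0.113.10/32  scram-sha-256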
If you would like to periodically back up a PostgreSQL database that runs inside a container on the remote server to your local host using pg_dump over SSH, this may be useful:
https://github.com/omidraha/periodic-pgdump-over-ssh

PostgreSQL (psql) client expecting sockets in a different location from postgresql-server

While trying to connect to postgres running locally on my workstation, I get:
$ sudo -u postgres psql -c "create role ..."
could not change directory to "/home/esauer/workspace/cfme"
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My postgres-server install creates sockets in /var/run/postgresql.
How do I get the client to look in the proper location?
Check the --host option with psql --help.
Then you can make it permanent by setting the PGHOST environment variable (the connection host cannot be set from .psqlrc, since that file is only read after the connection is made).
In your case try:
psql -h /var/run/postgresql -d your_database
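For example (a sketch, assuming a bash-like shell and the socket directory mentioned above):
export PGHOST=/var/run/postgresql
psql -d your_database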