Imagine this situation: I have a server with only 1 GB of usable space. A Postgres database takes about 600 MB of that (as per SELECT pg_size_pretty(pg_database_size('dbname'));), and other stuff takes another 300 MB, so I have only 100 MB of free space.
I want to take a dump of this database (to move to another server).
Naturally, the simple solution of pg_dump dbname > dump fails with a "Quota exceeded" error.
I tried to shrink it first with VACUUM FULL (not sure whether it would help the dump size, but anyway), but that failed because of the disk limitation as well.
I have SSH access to this server. So I was wondering: is there a way to pipe the output of pg_dump over ssh so that it would be output to my home machine?
(Ubuntu is installed both on the server and on the local machine.)
Other suggestions are welcome as well.
Of course there is.
On your local machine do something like:
ssh -L15432:127.0.0.1:5432 user@remote-machine
Then on your local machine you can do something like:
pg_dump -h localhost -p 15432 ...
This sets up a tunnel from port 15432 on your local box to 5432 on the remote one. Assuming permissions etc. allow you to connect, you are good to go.
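Putting it together, a complete sketch might look like this (the port number, user, and database names are illustrative):

ssh -fN -L15432:127.0.0.1:5432 user@remote-machine
pg_dump -h localhost -p 15432 -U dbuser dbname > dump.sql

The -fN flags background the SSH session without running a remote command, so the tunnel stays up while pg_dump writes the dump to your local disk instead of the space-constrained server.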
If the machine is connected to a network, you can do everything remotely, given sufficient authorisation:
from your local machine:
pg_dump -h source_machine -U user_id the_database_name > the_output.dmp
And you can even pipe it straight into your local machine (after taking care of user roles, creation of DBs, etc.):
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} \
-Fc --create | pg_restore -c -C | psql -U postgres template1
- pg_dump executes on the local (new) machine
- but it connects to $ORIG_HOST as user $ORIG_USER to db $ORIG_DB
- pg_restore also runs on the local machine
- pg_restore is not really needed here, but can come in handy to drop/rename/create databases, etc.
- psql runs on the local machine; it accepts a stream of SQL and data from the pipe and executes/inserts it into the (newly created) database
- the connection to template1 is just a stub, because psql insists on being called with a database name
If you want to build a command pipe like this, you should probably start by replacing the stuff after one of the | pipes with more (or less), or by redirecting it to a file.
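For instance, a debugging sketch reusing the variables from the pipe above, which pages through the SQL that would otherwise be fed to psql:

pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} -Fc --create | pg_restore -c -C | less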
You might need to import system-wide things (usernames and tablespaces) first
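One way to do that (a sketch, reusing the variables from above) is pg_dumpall's --globals-only mode, which dumps roles and tablespaces but no database contents:

pg_dumpall -h ${ORIG_HOST} -U ${ORIG_USER} --globals-only | psql -U postgres -d postgres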
Related
I have been building and testing a Postgres database on my local machine, and the time has come to move it to a server. I have my dump file, but I'm not sure how to actually get it onto the server to restore it there. Obviously, something like the following doesn't work:
psql -U user -d database < C:/dbbackups/dbbackup.dump
But what is the proper way to get my dump on the server from my local to properly restore it?
You can use pg_dump.
pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser postgres
If you want to use your dump file, you could of course also transfer it over to the new server securely (e.g. using SFTP) and then restore it there as you would on your local machine.
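For example (host names and paths here are hypothetical), copy the file over and then restore it on the server:

scp C:/dbbackups/dbbackup.dump user@remotehost:/tmp/dbbackup.dump
ssh user@remotehost "psql -U remoteuser -d dbname -f /tmp/dbbackup.dump"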
I am trying to back up a Postgres database, but I always get a backup of size 0 bytes. If I use the verbose switch, I can see that it is stuck at
pg_dump: saving database definition
I have tried taking a backup of a specific table, and that works fine. Below is my command:
pg_dump -U backupuser -p 5432 -Ft -v -d database > database.tar
I was able to resolve this. While creating the backup, I checked the Postgres logs, and it was clearly written there that the server wasn't reachable during the backup. To resolve this, I added a command to restart Postgres before the scheduled backup, and it is working fine now.
I have a database server without much disk space, so I took a backup of the entire db (let's just call it redblue) and saved it locally using the following command (I don't have pg running on my computer):
ssh admin@w.x.y.z "pg_dump -U postgres redblue -h localhost" \
>> db_backup_redblue.sql
I'd now like to restore it to another server (1.2.3.4), which contains an older version of the "redblue" database. However, I wanted to ask if this is right before I try it:
ssh admin@1.2.3.4 "pg_restore -U postgres -C redblue" \
<< db_backup_redblue.sql
I wasn't sure whether I need to use -C with the name of the db or not.
Will the above command overwrite/restore the remote database with the file I have locally?
Thanks!
No, that will do nothing good.
You have to start pg_restore on the machine where the dump is. Actually, since this is a plain-format dump, you have to use psql rather than pg_restore:
psql -h 1.2.3.4 -U postgres -d redblue -f db_backup_redblue.sql
That requires that there is already an empty database redblue on the target system.
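For instance, you could create the empty database first with createdb (host and names taken from the question):

createdb -h 1.2.3.4 -U postgres redblue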
If you want to replace an existing database, you have to use the --clean and --create options with pg_dump.
If you want to use SSL, you'll have to configure the PostgreSQL server to accept SSL connections, see the documentation.
I'd recommend the “custom” format of pg_dump.
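A sketch of the whole transfer using the custom format (hosts taken from the question; --clean --create makes pg_restore drop and recreate the target database, using postgres as the maintenance database for the initial connection):

ssh admin@w.x.y.z "pg_dump -U postgres -h localhost -Fc redblue" > redblue.dump
pg_restore -h 1.2.3.4 -U postgres --clean --create -d postgres redblue.dump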
Of course you can do this :) This assumes you use SSH keys to authorize the user from the source host on the destination host.
On the source host you run pg_dump, then pipe it through ssh to the destination host, like this:
pg_dump -C nextcloud | ssh -i .ssh/pg_nextcloud_key postgres@192.168.0.54 psql -d template1
Hope that helps ;)
I have an application that creates a Postgres database, and the user who owns that database has /bin/nologin as their shell.
The database functions without error and is also otherwise secure. Per best practices, it makes sense not to assign a Linux shell to the database owner; postgres runs as this user.
I do, however, have to take a pg_dump of this database for archiving purposes. How can I do that without assigning a valid shell to the username the database runs as?
This is no problem at all.
pg_dump is a client tool, and you can use it as a different user on the same machine or from a different machine.
Use the -h <host/socket> -p <port> options of pg_dump to connect to a database server that might be on a different machine and use -U <user> to specify which database user to connect as.
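As a sketch (host, user, and database names are placeholders):

pg_dump -h db.example.com -p 5432 -U dbowner dbname > dbname_archive.sql

No login shell on the database server is involved at all; pg_dump talks to it over an ordinary client connection.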
sudo -u pe-postgres /opt/puppetlabs/server/apps/postgresql/bin/pg_dumpall -c -f <BACKUP_FILE>.sql
does the trick.
I have a dockerized Postgres 9.3.5 OLTP instance that I'm going to upgrade to 9.5.2. Instead of shutting it down, doing a pg_dumpall to a file, and then loading that, I would like to spin up a new Docker container and pipe the database over with pg_dumpall -h localhost -p [port] -U postgres | psql -h localhost -U postgres -p [port]. I'm thinking I could alter postgresql.conf to put the old instance in read-only mode. This would temporarily mess with my app, but at least users could still SELECT. Is there a better way to go about this? Are there any big issues with putting it in read-only mode while I pipe the database over?
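For reference, "read-only mode" here presumably means setting default_transaction_read_only = on in postgresql.conf and reloading; the pipe between the two containers might then look like this (the ports are placeholders):

pg_dumpall -h localhost -p 5432 -U postgres | psql -h localhost -p 5433 -U postgres -d postgres

Note that default_transaction_read_only only sets a default; a session can still override it with SET transaction_read_only = off, so this is a soft guard rather than a hard freeze.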