I have a dockerized Postgres 9.3.5 OLTP instance that I'm going to upgrade to 9.5.2. Instead of shutting it down, doing a pg_dumpall to a file, and then loading that file, I would like to spin up a new Docker container and pipe the database across:
pg_dumpall -h localhost -p [port] -U postgres | psql -h localhost -U postgres -p [port]
I'm thinking I could alter postgresql.conf to put the instance in read-only mode. This would temporarily mess with my app, but at least users could still SELECT. Is there a better way to go about this? Are there any big issues with putting it in read-only mode while I pipe the database over?
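For concreteness, the read-only change I have in mind would be something like this (the data directory path is just illustrative, and note this only changes the default for new transactions; a session can still run SET transaction_read_only = off):
echo "default_transaction_read_only = on" >> /var/lib/postgresql/data/postgresql.conf
psql -h localhost -p [port] -U postgres -c "SELECT pg_reload_conf();"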
I want to take a DB dump from a remote server and then copy this dump to my local machine.
I tried a couple of commands, but they didn't work. Finally I tried the command below:
pg_dump -h 10.10.10.70 -p 5432 -U postgres -d mydb | gzip > db1.gz
I successfully took the dump, but when I tried to restore it from pgAdmin, it gives:
pg_restore: error: input file appears to be a text format dump. Please use psql
But at this point I can't use psql; I have to use pgAdmin, so I'm not sure whether I actually got a good DB dump to my local machine. I mean, I can't verify it with a restore.
How can I take a DB dump from a remote server to my local machine?
Thanks!
Use the "custom" format:
pg_dump -F c -h 10.10.10.70 -p 5432 -U postgres -f mydb.dmp mydb
That can be restored with pg_restore and hence with pgAdmin.
You do not have to use pgAdmin. pgAdmin uses pg_restore, and there is nothing that keeps you from using it too.
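For example, the restore side could look like this (a sketch; it assumes an empty target database mydb already exists, or you can add -C to have pg_restore create it):
pg_restore -h localhost -p 5432 -U postgres -d mydb mydb.dmp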
Can anyone help me out with how to back up and restore a specific database instance when I have multiple PostgreSQL instances running on a single server?
For example, I have db1, db2 and db3 on a single server. How do I back up db1 and restore it without affecting db2 and db3?
Here's how I restart the instances separately.
/usr/pgsql-9.6/bin/pg_ctl restart -D /var/lib/pgsql/9.6/db1
/usr/pgsql-9.6/bin/pg_ctl restart -D /var/lib/pgsql/9.6/db2
/usr/pgsql-9.6/bin/pg_ctl restart -D /var/lib/pgsql/9.6/db3
Thank you, @FatFreddy.
I was able to backup and restore a specific database instance on a server having multiple PostgreSQL instances using the following commands:
Backup: pg_dumpall -p 5435 > /var/lib/pgsql/9.6/db1/PostgreSQL_db1_{date}.sql
Restore: psql -U postgres -p 5435 -f /var/lib/pgsql/9.6/db1/PostgreSQL_db1_{date}.sql
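If you only need a single database out of that instance rather than the whole cluster, pg_dump against the same port should work too (a sketch; the database name appdb is hypothetical):
pg_dump -p 5435 -U postgres -f /var/lib/pgsql/9.6/db1/appdb_{date}.sql appdb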
I have a database server without much disk space, so I took a backup of the entire db (let's just call it redblue) and saved it locally using the following command (I don't have pg running on my computer):
ssh admin@w.x.y.z "pg_dump -U postgres redblue -h localhost " \
>> db_backup_redblue.sql
I'd now like to restore it to another server (1.2.3.4), which contains an older version of the "redblue" database. However, I wanted to ask if this is right before I try it:
ssh admin@1.2.3.4 "pg_restore -U postgres -C redblue" \
<< db_backup_redblue.sql
I wasn't sure whether I need to use -C with the name of the database or not.
Will the above command overwrite/restore the remote database with the file I have locally?
Thanks!
No, that will do nothing good.
You have to start pg_restore on the machine where the dump is. Actually, since this is a plain format dump, you have to use psql rather than pg_restore:
psql -h 1.2.3.4 -U postgres -d redblue -f db_backup_redblue.sql
That requires that there is already an empty database redblue on the target system.
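Creating that empty database first could look like this (a sketch using createdb; any maintenance connection works):
createdb -h 1.2.3.4 -U postgres redblue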
If you want to replace an existing database, you have to use the --clean and --create options with pg_dump.
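That variant might look like this (a sketch; with --create the dump script itself reconnects to the new database, so psql is pointed at the maintenance database postgres instead):
ssh admin@w.x.y.z "pg_dump --clean --create -U postgres -h localhost redblue" > db_backup_redblue.sql
psql -h 1.2.3.4 -U postgres -d postgres -f db_backup_redblue.sql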
If you want to use SSL, you'll have to configure the PostgreSQL server to accept SSL connections; see the documentation.
I'd recommend the “custom” format of pg_dump.
Of course you can do this :) It assumes you use SSH keys to authorize the user from the source host to the destination host.
On the source host you run pg_dump, then pipe it through ssh to the destination host like this:
pg_dump -C nextcloud | ssh -i .ssh/pg_nextcloud_key postgres@192.168.0.54 psql -d template1
Hope that helps ;)
I have an application that creates a Postgres database, and the user who owns that database has /bin/nologin as their shell.
The database functions without error and is also otherwise secure. As a best practice it makes sense not to assign a Linux shell to the database owner; PostgreSQL runs as this user.
However, I have to take a pg_dump of this database for archiving purposes. How can I do that without assigning a valid shell to the user the database runs as?
This is no problem at all.
pg_dump is a client tool, and you can use it as a different user on the same machine or from a different machine.
Use the -h <host/socket> -p <port> options of pg_dump to connect to a database server that might be on a different machine and use -U <user> to specify which database user to connect as.
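For example, a remote dump connecting as the database owner might look like this (a sketch; the host, port, user, and database names are placeholders):
pg_dump -h db.example.com -p 5432 -U dbowner -F c -f archive.dmp mydb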
sudo -u pe-postgres /opt/puppetlabs/server/apps/postgresql/bin/pg_dumpall -c -f <BACKUP_FILE>.sql
does the trick; sudo -u executes the command directly, so the account's nologin shell never comes into play.
Imagine this situation. I have a server that has only 1 GB of usable space. A Postgres database takes about 600MB of that (as per SELECT pg_size_pretty(pg_database_size('dbname'));), and other stuff another 300MB, so I have only 100 MB free space.
I want to take a dump of this database (to move to another server).
Naturally a simple solution of pg_dump dbname > dump fails with a Quota exceeded error.
I tried to condense it first with VACUUM FULL (not sure if it would help for the dump size, but anyway), but it failed because of disk limitation as well.
I have SSH access to this server. So I was wondering: is there a way to pipe the output of pg_dump over ssh so that it would be output to my home machine?
(Ubuntu is installed both on the server and on the local machine.)
Other suggestions are welcome as well.
Of course there is.
On your local machine do something like:
ssh -L15432:127.0.0.1:5432 user@remote-machine
Then, in another terminal on your local machine, you can do something like:
pg_dump -h localhost -p 15432 ...
This sets up a tunnel from port 15432 on your local box to 5432 on the remote one. Assuming permissions etc allow you to connect, you are good to go.
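If you're tight on local space too, you can compress on the fly through the same tunnel (a sketch; dbname is a placeholder):
pg_dump -h localhost -p 15432 -U postgres dbname | gzip > dump.sql.gz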
If the machine is connected to a network, you can do everything remotely, given sufficient authorisation.
From your local machine:
pg_dump -h source_machine -U user_id the_database_name >>the_output.dmp
And you can even pipe it straight into your local machine (after taking care of user roles and creation of DBs, etc):
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} \
-Fc --create | pg_restore -c -C | psql -U postgres template1
pg_dump executes on the local (new) machine
but it connects to the $ORIG_HOST as user $ORIG_USER to db $ORIG_DB
pg_restore also runs on the local machine
pg_restore is not really needed (here) but can come in handy to drop/rename/create databases, etc
psql runs on the local machine, it accepts a stream of SQL and data from the pipe, and executes/inserts it to the (newly created) database
the connection to template1 is just a stub, because psql insists on being called with a database name
If you want to build a command pipe like this, you should probably start by replacing the stuff after one of the | pipes with more or less, or redirecting it to a file.
You might need to import system-wide things (usernames and tablespaces) first.
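That step could be a pipe of its own (a sketch; --globals-only tells pg_dumpall to emit just the roles and tablespaces):
pg_dumpall -h ${ORIG_HOST} -U ${ORIG_USER} --globals-only | psql -U postgres template1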