pg_dumpall save results to another server?

I have used most of the space on my Postgres server and cannot save the output of pg_dumpall on the same machine. Is there any way to send the output of pg_dumpall to another server while pg_dumpall is running?

pg_dumpall is a client application. It can run on any computer that can (remotely) connect to the Postgres server.
So you can run pg_dumpall on the server that has space, connecting to the actual database server using the --host=... parameter.
Or you can store the output of pg_dumpall on a network drive.
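For example, a minimal sketch run from the machine that has free space (the host name, user, and output file name are placeholders, not from the original answer):
pg_dumpall --host=db.example.com --port=5432 --username=postgres > all_databases.sql
The dump is written to the disk of the machine the command runs on; only the database connection goes over the network.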

Related

Want to execute a local sql file (using \i to execute multiple sql files) using psql command on ssh server

I have multiple .sql files on my local machine, which I was executing quickly with one command:
psql ..... -f abc.sql (containing \i def.sql; \i ghi.sql) against a db server hosted on AWS.
Now I have established SSH tunneling to this db server and want to execute my local SQL files on it, to quickly create, drop, and update objects on the db hosted on AWS.
I have tried:
1. cat abc.sql | ssh -i abc.pem user@server.com psql -c
Put simply: how can I execute my local files on the SSH server using the psql command?
You don't need the -c at the end of the command. However, when the script reaches an \i command, psql (now running on the server) will try to load those files from the server's filesystem, not from your local machine. You will either have to send them there via scp, or inline their contents into the master script.
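A sketch of both options (the key file, user, host, and database name are placeholders):
# Option 1: copy all the scripts to the server, then run the master script there
scp -i abc.pem abc.sql def.sql ghi.sql user@server.com:~/
ssh -i abc.pem user@server.com 'psql -d mydb -f abc.sql'
# Option 2: avoid \i entirely by concatenating the included files locally
cat def.sql ghi.sql | ssh -i abc.pem user@server.com 'psql -d mydb'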

postgres user with /bin/nologin

I have an application which creates a Postgres database, and the user who owns that database has /bin/nologin as their shell.
The database functions without error and is also otherwise secure. Following best practices, it makes sense not to assign a Linux shell to the database owner; postgres runs as this user.
However, I have to take a pg_dump of this database for archiving purposes. How can I do that without assigning a valid shell to the user the database runs as?
This is no problem at all.
pg_dump is a client tool, and you can use it as a different user on the same machine or from a different machine.
Use the -h <host/socket> -p <port> options of pg_dump to connect to a database server that might be on a different machine and use -U <user> to specify which database user to connect as.
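For example, a sketch that connects over TCP as the database owner (host, port, user, database, and file names are placeholders); no OS shell for that user is involved:
pg_dump -h localhost -p 5432 -U db_owner -Fc -f archive.dump mydb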
Alternatively, running the client as that OS user on the same machine also works, even with /bin/nologin, because sudo -u executes the command directly rather than through a login shell:
sudo -u pe-postgres /opt/puppetlabs/server/apps/postgresql/bin/pg_dumpall -c -f <BACKUP_FILE>.sql
does the trick.

Dump Postgres database when space is tight?

Imagine this situation: I have a server that has only 1 GB of usable space. A Postgres database takes about 600 MB of that (as per SELECT pg_size_pretty(pg_database_size('dbname'));), and other stuff another 300 MB, so I have only 100 MB of free space.
I want to take a dump of this database (to move it to another server).
Naturally, the simple pg_dump dbname > dump fails with a Quota exceeded error.
I tried to shrink the database first with VACUUM FULL (not sure whether it would help the dump size, but anyway); that failed because of the disk limitation as well.
I have SSH access to this server. So I was wondering: is there a way to pipe the output of pg_dump over ssh so that it ends up on my home machine?
(Ubuntu is installed both on the server and on the local machine.)
Other suggestions are welcome as well.
Of course there is.
On your local machine do something like:
ssh -L15432:127.0.0.1:5432 user@remote-machine
Then on your local machine you can do something like:
pg_dump -h localhost -p 15432 ...
This sets up a tunnel from port 15432 on your local box to 5432 on the remote one. Assuming permissions etc allow you to connect, you are good to go.
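If you'd rather not set up a tunnel, you can also run pg_dump on the server and pipe its output straight over ssh; a sketch, with user, host, and database name as placeholders (gzip keeps the transfer small, and nothing is written to the server's disk):
ssh user@remote-machine 'pg_dump dbname | gzip' > dump.sql.gz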
If the machine is connected to a network, you can do everything from remote, given sufficient authorisation.
From your local machine:
pg_dump -h source_machine -U user_id the_database_name >>the_output.dmp
And you can even pipe it straight into your local machine (after taking care of user roles and creation of DBs, etc):
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} \
-Fc --create | pg_restore -c -C | psql -U postgres template1
- pg_dump executes on the local (new) machine, but it connects to $ORIG_HOST as user $ORIG_USER, to the database $ORIG_DB.
- pg_restore also runs on the local machine. It is not strictly needed here, but it can come in handy to drop/rename/create databases, etc.
- psql runs on the local machine; it accepts the stream of SQL and data from the pipe and executes/inserts it into the (newly created) database.
- The connection to template1 is just a stub, because psql insists on being called with a database name.
If you want to build a command pipe like this, you should probably start by replacing the stage after one of the | pipes with a pager such as more or less, or by redirecting it to a file.
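For example, to inspect the SQL that pg_restore would feed into psql before running the pipeline for real:
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} -Fc --create | pg_restore -c -C | less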
You might need to import the system-wide things (user names and tablespaces) first.
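pg_dumpall can export just those globals; a sketch using the same placeholder variables:
pg_dumpall -h ${ORIG_HOST} -U ${ORIG_USER} --globals-only | psql -U postgres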

How to restore PostgreSQL database backup without psql and pg_restore?

I am using PostgreSQL on an embedded device running Yocto (a customized Linux). The package containing PostgreSQL does not provide psql or pg_restore. Among the tools /usr/bin does provide, pg_basebackup indicates that I am able to create backups. But how am I supposed to restore a backup from a terminal? With pg_restore or psql this would not be a problem for me.
Please note: I want to use a backup created on my windows/linux computer to create the initial database on the embedded device.
Create a backup of the db using this command:
pg_basebackup -h {host_ip} -D pg_backup -U {USER}
To restore, just make pg_backup the server's data directory, then start the PostgreSQL server.
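A sketch of that swap (the data directory path and the use of pg_ctl are assumptions; both vary by image). Note that a base backup is a physical copy, so it only restores onto the same PostgreSQL major version and a compatible architecture:
# stop the server, swap in the base backup as the new data directory, restart
pg_ctl -D /var/lib/postgresql/data stop
mv /var/lib/postgresql/data /var/lib/postgresql/data.old
cp -a pg_backup /var/lib/postgresql/data
chown -R postgres:postgres /var/lib/postgresql/data
chmod 700 /var/lib/postgresql/data
pg_ctl -D /var/lib/postgresql/data start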

How to Transfer MySQL Data From Raspberry Pi to my PC

I've started using a Raspberry Pi a few days back. I need to transfer a database from my Raspberry Pi to my PC.
I've installed MySQL on the Raspberry Pi and put some data in the database. I've already installed MySQL on my PC. I need to transfer data from the MySQL database on the Raspberry Pi to the MySQL on the PC.
Is this possible over the LAN?
Or is there another technique to transfer the data from the Raspberry Pi to the PC?
Is there any technique to transfer directly from one MySQL to another?
Use mysqldump to output a file containing the SQL statements that can rebuild your database, then run those statements against the database on your PC, like so (note that mysqldump needs the database name, and the target database must already exist on the PC):
pi$ mysqldump -u username -p database_name > mysql.dump
pi$ mysql -u username -p --host=<your pc's ip> database_name < mysql.dump
Instead of copying the file(s), you can pipe the output directly to the remote database.
pi_shell> mysqldump -uuser -ppassword --single-transaction database_name | mysql -uuser -ppassword -hremote_mysql_db database_name
Back up the database on the Pi, copy the dump file to the other computer, then restore it there.
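A sketch of those three steps (user names, the PC's address, and the database name are placeholders):
pi$ mysqldump -u username -p database_name > mysql.dump
pi$ scp mysql.dump user@<your pc's ip>:~/
pc$ mysql -u username -p database_name < mysql.dump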