I started using a Raspberry Pi a few days ago. I need to transfer a database from the Raspberry Pi to my PC.
I've installed MySQL on the Raspberry Pi and put some data into a database there; MySQL is also installed on the PC. I need to move the data from the MySQL database on the Pi into the MySQL server on the PC.
Is this possible over the LAN?
Or is there another technique to transfer the data from the Raspberry Pi to the PC?
Is there any way to transfer directly from one MySQL server to another?
Use mysqldump to write out a file of SQL statements that can rebuild your database, then run those statements against the MySQL server on your PC, like so:
pi$ mysqldump -u username -p database_name > mysql.dump
pi$ mysql -u username -p --host=<your pc's ip> database_name < mysql.dump
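For the second command to work, the MySQL server on the PC also has to accept connections from the Pi. A minimal sketch of that, assuming a default install (the user, password, and address below are placeholders, not values from the question):
# on the PC: let MySQL listen on the LAN (bind-address in my.cnf / my.ini), then:
mysql -u root -p -e "CREATE USER 'username'@'<pi ip>' IDENTIFIED BY 'password'; GRANT ALL PRIVILEGES ON database_name.* TO 'username'@'<pi ip>';"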
Instead of copying the file(s), you can pipe the output directly to the remote database.
pi_shell> mysqldump -uuser -ppassword --single-transaction database_name | mysql -uuser -ppassword -hremote_mysql_db database_name
Back up the database on the Pi, copy the file to the other computer, then restore it there.
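In commands, that could look something like this (a sketch assuming a database called database_name, SSH enabled on the Pi, and an scp client on the PC):
pi$ mysqldump -u username -p database_name > mydb.sql
pc$ scp pi@<pi's ip>:mydb.sql .
pc$ mysql -u username -p database_name < mydb.sql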
I have used most of the space on my Postgres server and cannot save the results of pg_dumpall on my machine. Is there any way to get the results of pg_dumpall to another server while pg_dumpall is running?
pg_dumpall is a client application. It can run on any computer that can (remotely) connect to the Postgres server.
So you can run pg_dumpall on the server that has space, connecting to the actual database server using the --host=... parameter.
Or you can store the output of pg_dumpall on a network drive.
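For example (a sketch; the host name and user are placeholders), run this on the machine that has free space:
pg_dumpall --host=db-server.example.com --username=postgres > all_databases.sql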
I'm logged into a remote machine using ssh:
ubuntu@ubuntu:~$
Now I switch to the postgres account using the sudo su - postgres command, which puts me at postgres@ubuntu:~$
From here I'm able to take a dump using the pg_dump command,
e.g. postgres@ubuntu:~$ pg_dump db_name > mydbdump.sql
So far so good. But now I want to copy this dump file to my local machine, or at least to my original/default ubuntu user on the remote machine (ubuntu@ubuntu:~$), so that I can scp it from there.
How do I copy these dump SQL files from the postgres account to the ubuntu account on the remote machine?
If you are using Ubuntu, the postgres user's default directory is /var/lib/postgresql,
so you can scp directly from the remote machine to your local one.
On your local machine, run this command:
user@user:~$ scp ubuntu@someaddress.com:/var/lib/postgresql/mydbdump.sql /path/to/local/dir
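If you would rather move the file to the ubuntu account on the remote machine first (as the question asks), a sketch run from the ubuntu shell there (same file name as in the question):
ubuntu@ubuntu:~$ sudo cp /var/lib/postgresql/mydbdump.sql ~/
ubuntu@ubuntu:~$ sudo chown ubuntu:ubuntu ~/mydbdump.sql
After that you can scp it from your local machine as usual.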
I am attempting to use a Zabbix server running in an Ubuntu virtual machine to monitor the Postgres database of our application, which runs on the same host machine (not in a VM). To be clear, I am trying to connect from an Ubuntu virtual machine on my computer to a Postgres instance running directly on that same computer. Zabbix uses ODBC, so a preliminary step is to get the ODBC connection to Postgres working correctly. However, I am having a problem.
Steps I have taken:
installed unixODBC via sudo apt-get install unixodbc unixodbc-dev
installed unixODBC driver for Postgres via sudo apt-get install odbc-postgresql
configured odbc.ini to the following:
[test]
Description = test database
Driver = /usr/lib/x86_64-linux-gnu/odbc/psqlodbca.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcpsqlS.so
Server = 192.168.240.1
User = postgres
Password =
Port = 5432
Database = mydb
Yet when I test the connection via:
isql test -v
I get the following error:
[08001][unixODBC]could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
More notes:
I can successfully connect to Postgres from the admin running on the local (non-VM) machine
port 5432 has been completely opened in Windows Firewall on the local machine
telnet to 192.168.240.1 (the network IP of the local machine) on port 5432 succeeds
This all implies that the problem has to do with the ODBC configuration in the Ubuntu VM. I spent several hours searching and trying various things but to no avail. If I can get isql to work correctly, I should be in business, as Zabbix basically sits right on top of ODBC for its database monitoring functions.
Thanks in advance for your help.
I think your configuration options are a little off. Try this:
[test]
Driver = /usr/lib/x86_64-linux-gnu/odbc/psqlodbca.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcpsqlS.so
Database = mydb
Servername = 192.168.240.1
UserName = postgres
Password =
Port = 5432
Protocol = 7.4
Using Servername instead of Server might be sufficient.
I'd recommend the following steps to get ODBC and PostgreSQL to play together (skipping the apt install steps, since you already did those):
sudo odbcinst -i -d -f /usr/share/psqlodbc/odbcinst.ini.template
sudo odbcinst -i -s -l -n test -f /usr/share/doc/odbc-postgresql/examples/odbc.ini.template
sudo nano /etc/odbc.ini
Here's what these do:
Sets up your odbcinst.ini file with the files in the right places.
Sets up your odbc.ini file (for the system).
Edits the system odbc.ini file you created in step 2, where you can replace options to match your needs.
If you do not want the odbc.ini file to be system-wide, you can also limit it to just the user if you call step #2 without the -l parameter. In that case, it'll create or modify a ~/.odbc.ini file, which you can edit for your needs.
The unixODBC folks seem to recommend using odbcinst for setting this stuff up, as it knows where to put the files. Unfortunately, to use it to great effect, you'd need to know where to find the drivers' template files for your driver. The paths I've provided here match the ones for the Ubuntu package.
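After step 3, the DSN in /etc/odbc.ini could end up looking roughly like this (a sketch based on the setup in the question; the driver name comes from the odbcinst.ini created in step 1, and odbcinst -q -d lists the exact names on your system):
[test]
Description = test database
Driver = PostgreSQL ANSI
Database = mydb
Servername = 192.168.240.1
UserName = postgres
Password =
Port = 5432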
I am trying to take a MongoDB dump from an Amazon AWS server.
Kindly share the command.
From the local machine it works:
sudo mongodump -d db** -o /opt/backup/
How do I do it from the server?
sudo mongodump -d db** -i /opt/x.pem ubuntu@ip:/
There are three things you need to do to make a remote mongodump possible:
Make sure the security group allows communication between your computer and port 27017 (or whichever port mongo uses on your server).
Check whether mongodb is configured to bind to a specific IP (by default it is bound to 127.0.0.1, which allows local connections only); see the config sketch after this list.
Change your mongodump command to something like this: mongodump -d <db**> -u <username> -p <password> --host <server_ip/dns>
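For the second point, the bind setting lives in MongoDB's server config file; a minimal sketch, assuming a newer install that reads the YAML-style /etc/mongod.conf (older versions use /etc/mongodb.conf with a bind_ip = ... line instead):
net:
  port: 27017
  bindIp: 0.0.0.0   # or a comma-separated list that includes your client's address
Restart mongod afterwards (e.g. sudo service mongod restart) so the change takes effect.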
Having said that, it is often better to ssh into the server and dump the data locally, then zip it and copy it to your local machine in order to minimize network load. If you have ssh access to the server this would be a much better (and more secure) approach for dumping your data.
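A sketch of that ssh-and-dump approach, assuming key-based SSH access as in the question (the server address and database name are placeholders):
# on the server: dump the database and compress it
ssh -i /opt/x.pem ubuntu@<server_ip> 'mongodump -d <db**> -o /tmp/dump && tar czf /tmp/dump.tar.gz -C /tmp dump'
# back on your machine: copy the archive down
scp -i /opt/x.pem ubuntu@<server_ip>:/tmp/dump.tar.gz .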
Imagine this situation. I have a server that has only 1 GB of usable space. A Postgres database takes about 600MB of that (as per SELECT pg_size_pretty(pg_database_size('dbname'));), and other stuff another 300MB, so I have only 100 MB free space.
I want to take a dump of this database (to move to another server).
Naturally a simple solution of pg_dump dbname > dump fails with a Quota exceeded error.
I tried to condense it first with VACUUM FULL (not sure if it would help with the dump size, but anyway), but it failed because of the disk limitation as well.
I have SSH access to this server. So I was wondering: is there a way to pipe the output of pg_dump over ssh so that it would be output to my home machine?
(Ubuntu is installed both on the server and on the local machine.)
Other suggestions are welcome as well.
Of course there is.
On your local machine do something like:
ssh -L15432:127.0.0.1:5432 user@remote-machine
Then on your local machine you can do something like:
pg_dump -h localhost -p 15432 ...
This sets up a tunnel from port 15432 on your local box to 5432 on the remote one. Assuming permissions etc allow you to connect, you are good to go.
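With the tunnel up, the dump itself can be written straight to your home machine; a sketch with placeholder user and database names (-Fc produces a compressed custom-format dump, so nothing extra lands on the full server):
pg_dump -h localhost -p 15432 -U dbuser -Fc dbname > dbname.dump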
If the machine is connected to a network, you can do everything remotely, given sufficient authorisation.
From your local machine:
pg_dump -h source_machine -U user_id the_database_name > the_output.dmp
And you can even pipe it straight into your local machine (after taking care of user roles and creation of DBs, etc):
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} \
-Fc --create | pg_restore -c -C | psql -U postgres template1
pg_dump executes on the local (new) machine
but it connects to the $ORIG_HOST as user $ORIG_USER to db $ORIG_DB
pg_restore also runs on the local machine
pg_restore is not really needed (here) but can come in handy to drop/rename/create databases, etc
psql runs on the local machine, it accepts a stream of SQL and data from the pipe, and executes/inserts it to the (newly created) database
the connection to template1 is just a stub, because psql insists on being called with a database name
if you want to build a command pipe like this, you should probably start by replacing the stuff after one of the | pipes with more or less, or by redirecting it to a file.
You might need to import system-wide things (usernames and tablespaces) first
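For that last point, pg_dumpall's globals-only mode is the usual way to carry roles and tablespaces over before running the pipe above (same placeholder variables as in the command above):
pg_dumpall -h ${ORIG_HOST} -U ${ORIG_USER} --globals-only | psql -U postgres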