Take MongoDB dump from Amazon AWS server from local - mongodb

I am trying to take a MongoDB dump from an Amazon AWS server.
Kindly share the command.
Locally this is working:
sudo mongodump -d db** -o /opt/backup/
How do I do it from the server? I tried:
sudo mongodump -d db** -i /opt/x.pem ubuntu@ip:/

There are three things you need to do in order to make sure a remote mongodump is possible:
1. Make sure the security group allows communication between your computer and port 27017 (or whatever other port mongo is running on on your server).
2. Check whether mongodb is configured to bind to a specific IP (by default it is bound to 127.0.0.1, which allows local communications only).
3. Change your mongodump command to something like this:
mongodump -d <db**> -u <username> -p <password> --host <server_ip/dns>
Having said that, it is often better to ssh into the server and dump the data locally, then zip it and copy it to your local machine in order to minimize network load. If you have ssh access to the server, this is a much better (and more secure) approach for dumping your data; see the sketch below.
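For example, a minimal sketch of that ssh-and-copy approach, assuming you connect as ubuntu@your-server-ip with the key from the question, db_name stands in for your database name, and the archive should end up in ~/backups locally (all names and paths are placeholders):
# dump on the server and compress it there to keep the transfer small
ssh -i /opt/x.pem ubuntu@your-server-ip "mongodump -d db_name -o /tmp/dump && tar czf /tmp/dump.tar.gz -C /tmp dump"
# pull the archive down to your local machine
scp -i /opt/x.pem ubuntu@your-server-ip:/tmp/dump.tar.gz ~/backups/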

Related

How to restore remote MongoDB server with local mongodump data

We have a remote MongoDB server and we have mongodump data on a local developer's machine. What is the best way to restore the remote MongoDB server data with the local data? Is there a mongo command that we can use?
Alright so we did this in two steps. I think you can do it in one step, with just mongorestore.
First we moved the data from the local machine to the remote machine with the scp command:
scp <path-to-mongofile> <remote-host>:<absolute-file-path>
Then we ssh'd into the remote mongod server and used mongorestore to restore the db:
mongorestore --host=$HOST --port=$PORT -u $ADMIN_USER -p $PSWD --db <your-db> <absolute-path-to-restore-db> --authenticationDatabase "admin"
But I think the first scp command is redundant. In fact, if you cannot ssh into the server running mongod, then you will have to use the mongorestore command directly from the local developer's machine.
Just use mongorestore but point it towards the remote server, such as:
$ mongorestore -h ds01234567.mlab.com:12345 -d heroku_fj33kf -u <user> -p <password> <input db directory>
From MongoLab's docs

Dump Postgres database when space is tight?

Imagine this situation. I have a server that has only 1 GB of usable space. A Postgres database takes about 600MB of that (as per SELECT pg_size_pretty(pg_database_size('dbname'));), and other stuff another 300MB, so I have only 100 MB free space.
I want to take a dump of this database (to move to another server).
Naturally, the simple solution of pg_dump dbname > dump fails with a "Quota exceeded" error.
I tried to condense it first with VACUUM FULL (not sure whether that would help the dump size, but anyway), but it failed because of the disk limitation as well.
I have SSH access to this server. So I was wondering: is there a way to pipe the output of pg_dump over ssh so that it would be output to my home machine?
(Ubuntu is installed both on the server and on the local machine.)
Other suggestions are welcome as well.
Of course there is.
On your local machine do something like:
ssh -L15432:127.0.0.1:5432 user@remote-machine
Then on your local machine you can do something like:
pg_dump -h localhost -p 15432 ...
This sets up a tunnel from port 15432 on your local box to 5432 on the remote one. Assuming permissions etc allow you to connect, you are good to go.
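Alternatively, you can skip the tunnel and pipe pg_dump's output straight over ssh from your local machine; a minimal sketch, assuming the server is reachable as user@remote-machine, the database is called dbname, and the ssh user is allowed to connect to Postgres locally (all names are placeholders):
# pg_dump runs on the server, the compressed output lands on your local disk
ssh user@remote-machine "pg_dump dbname | gzip" > dbname.sql.gz
Compressing on the server side also keeps the network transfer small.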
If the machine is connected to a network, you can do everything remotely, given sufficient authorisation.
From your local machine:
pg_dump -h source_machine -U user_id the_database_name > the_output.dmp
And you can even pipe it straight into your local machine (after taking care of user roles, creation of DBs, etc.):
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} \
-Fc --create | pg_restore -c -C | psql -U postgres template1
- pg_dump executes on the local (new) machine,
- but it connects to $ORIG_HOST as user $ORIG_USER, to db $ORIG_DB.
- pg_restore also runs on the local machine.
- pg_restore is not really needed here, but it can come in handy to drop/rename/create databases, etc.
- psql runs on the local machine; it accepts a stream of SQL and data from the pipe and executes/inserts it into the (newly created) database.
- The connection to template1 is just a stub, because psql insists on being called with a database name.
- If you want to build a command pipe like this, you should probably start by replacing the stuff after one of the | pipes with more or less, or redirect it to a file; see the sketch below.
- You might need to import system-wide things (usernames and tablespaces) first.
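A small sketch of those last two points, reusing the same $ORIG_* variables; these two commands are illustrative assumptions, not part of the original answer:
# preview the generated SQL with a pager instead of executing it
pg_dump -h ${ORIG_HOST} -U ${ORIG_USER} -d ${ORIG_DB} -Fc --create | pg_restore -c -C | less
# copy roles and tablespaces across first (run against the target cluster as a superuser)
pg_dumpall -h ${ORIG_HOST} -U ${ORIG_USER} --globals-only | psql -U postgres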

Backup a MongoDB database using SSH

Guys, I am trying to back up a database.
First I connect to the server using an SSH tunnel, then I execute the following command:
mongodump -d mydatabase -o ~/myfolder
and I get this message:
connected to: 127.0.0.1 Thu Feb 6 18:00:56 DATABASE: mydatabase to
/home/backups/myfolder/myfolder
As you can see, mongodump is creating a folder inside a folder, but inside this folder I don't have any files: no JSON, no BSON files.
Could someone explain to me how to make a backup on my server using SSH and then move the files to my local machine?
Thanks in advance.
This is the command you are looking for; it will let you access your server's database locally.
4321 is an arbitrary local port (it can be any free port), 27017 is the port your mongodb server is running on, and root@144.154.22.11 is your server's user and IP:
ssh -L 4321:localhost:27017 root@144.154.22.11 -f -N
And after this:
mongodump --port 4321
This command will make your mongodb dump.
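Note that with this tunnel the dump is written on your local machine already, so there is nothing left to copy afterwards. A slightly fuller sketch, assuming the database from the question is called mydatabase and you want the dump in ~/myfolder (names and paths are placeholders):
ssh -L 4321:localhost:27017 root@144.154.22.11 -f -N
mongodump --port 4321 -d mydatabase -o ~/myfolder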

Copying MongoDB Database into Local Machine

I have a MongoDB database that resides on a remote server machine whose IP address is 192.168.1.20 on a local network. For development and testing purposes, and since I am not allowed to modify or delete the database on the server for security reasons, I want to copy the database to my local machine for my personal use.
Can anyone please tell me, how do I achieve this?
I do this by creating a dump of the remote db to my local machine, which I then restore:
1. Make sure you have a mongo instance up and running (e.g. run mongod.exe from your bin folder in a terminal window; on my Windows computer that's C:\mongodb\bin).
2. Make a dump from the remote db: open a new terminal window, move to the bin folder again, and run:
mongodump -h example.host.com --port 21018 -d dbname --username username --password yourpass
(Change the parameters to suit your own situation.)
3. Restore the dumped database: once the dump has been made, run the following command so that you have a local db:
mongorestore -d theNameYouWantForYourLocalDB dump\nameOfRemoteDB
(Replace nameOfRemoteDB with the name of the remote db, the same as in the previous command, and replace theNameYouWantForYourLocalDB with the name that you want your new local db to have.)
There is a copy database command which I guess should be a good fit for your need.
db.copyDatabase("DATABASENAME", "DATABASENAME", "localhost:27018");
Alternatively, you can just stop MongoDb, copy the database files to another server and run an instance of MongoDb there.
EDIT 2020-04-25
Quote from MongoDB documentation
MongoDB 4.0 deprecates the copydb and the clone commands and their mongo shell helpers db.copyDatabase() and db.cloneDatabase().
As alternatives, users can use mongodump and mongorestore (with the mongorestore options --nsFrom and --nsTo) or write a script using the drivers.
Reference here
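A minimal sketch of that mongodump/mongorestore alternative, assuming you want to copy a database called remotedb from 192.168.1.20 into a local database called localdb (the names, port, and dump path are placeholders):
# dump the remote database to a local directory
mongodump --host 192.168.1.20 --port 27017 -d remotedb -o ./dump
# restore it locally under a different database name
mongorestore --nsFrom "remotedb.*" --nsTo "localdb.*" ./dump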
This should be a comment on the answer of @malla, but I don't have enough reputation to comment, so I'm posting it here for others' reference.
In step 2, when you are trying to dump a database from a remote server, remember to add the --out option so that you can restore locally later (in my first try I didn't add it and it failed, saying dump\db_name was not found). I'm not sure whether my way is efficient or not, but it worked for me.
Step 2:
mongodump -h example.host.com --port 21018 -d dbname --username username --password yourpass --out <path_you_want_to_dump>
Step 3:
mongorestore -d theNameYouWantForYourLocalDB <path_you_want_to_dump>/nameOfRemoteDB
The mongoexport command:
http://docs.mongodb.org/manual/core/import-export/
Or, the mongodump command:
http://docs.mongodb.org/manual/reference/program/mongodump/
MongoDB has command-line tools for importing and exporting. Take a look at mongodump --collection collection --db test and mongorestore --collection people --db accounts dump/accounts/
http://docs.mongodb.org/v2.2/reference/mongodump/
http://docs.mongodb.org/v2.2/reference/mongorestore/
This even works over the network.
You can use the mongoexport command to copy the database to your local machine.
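For completeness, a per-collection sketch with mongoexport and mongoimport, assuming a collection called people in a database called accounts on the remote host 192.168.1.20 (all names are placeholders):
# export one collection from the remote server to a local JSON file
mongoexport --host 192.168.1.20 --db accounts --collection people --out people.json
# import it into the local mongod
mongoimport --db accounts --collection people --file people.json
Keep in mind that mongoexport/mongoimport work with JSON and do not preserve all BSON types, so mongodump/mongorestore is usually the better choice for full backups.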

mongoimport hangs when running from within firewall

I am trying to import data into my MongoDB server that is hosted in the cloud.
I run the following command from a linux server that is inside a corporate firewall:
mongoimport --host myhost:10081 --db mydb -u myusr -p mypass --collection imptest --file test.dat --drop --stopOnError
The import starts running, connects to the remote mongod successfully, creates one record of data (checked my db) and then simply hangs forever with no error message.
I am quite sure that this happens due to some firewall settings which block communications back from the mongo server - when I do the same thing from outside the firewall it works perfectly.
Can I make mongoimport work with a more optimistic write concern and not wait for acks? Or, better yet, how can I find out which blocked port is causing me the trouble?
I assume there are some ports which are most certainly open, like 22 for SSH. You could try setting up an SSH tunnel from within your firewall to the cloud-based server, and then forward connections to the MongoDB port through the SSH tunnel.
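A minimal sketch of such a tunnel, assuming you have SSH access to the cloud server as user@myhost, mongod is listening there on port 10081, and local port 27018 is an arbitrary free port (all of these are assumptions):
# forward local 27018 to the mongod port on the cloud server, over SSH
ssh -L 27018:localhost:10081 user@myhost -f -N
# point the import at the local end of the tunnel
mongoimport --host localhost:27018 --db mydb -u myusr -p mypass --collection imptest --file test.dat --drop --stopOnError
Since the import traffic now goes over port 22, the blocked MongoDB port no longer matters.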