Backup mongodb directly to remote server - mongodb

Is there a way to save a mongodump archive directly on a remote machine (via ssh or rsync) without saving it locally first?
I see the --out option in the docs, but no suitable examples.

If you do not specify a file name for --archive, the dump just goes to stdout, where you can catch it via | ssh. This works for me:
mongodump --db dbname --gzip --archive | ssh user@remotehost "cat > /path/to/dump.gz"
Or
mongodump --db dbname --archive | gzip -c | ssh user@remotehost "cat > /path/to/dump.gz"
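To restore that archive later on the remote host, a minimal sketch (assuming mongorestore is installed there and the path used above):
ssh user@remotehost "mongorestore --gzip --archive=/path/to/dump.gz"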

Related

Dumping and restoring MongoDB in a single command

I'm trying to dump an existing MongoDB into a new MongoDB without using dump files at all.
Is there a way to pass data from mongodump into mongorestore using pipes?
something like this:
mongodump --host=0.0.0.0 --port=27017 --db=database_name | mongorestore --host 0.0.0.0 -d database_name --port 27000
Thanks!
I found this solution:
mongodump -vvvvv --host=0.0.0.0 --port=27017 --db=database_name --archive | mongorestore --archive -vvvvv
Explained:
mongodump a MongoDB util to dump (backup) a database
-vvvvv v is for verbose; the command's output is written to stdout, and each extra v adds more detail, so you can use -v, -vv, -vvv, etc.
--host specify the host of the MongoDB you want to dump
--port specify the port of the MongoDB you want to dump
--archive writes the output to a specified archive file or, if the archive file is unspecified, writes to the standard output (stdout)
| takes the output of the command and passes it as input to the next command
mongorestore a MongoDB util to restore a database
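If either side requires authentication, a hedged variant of the same pipe using connection strings (placeholder hosts and credentials; adjust to your deployment) would be:
mongodump --uri="mongodb://user:pass@source-host:27017/database_name" --archive | mongorestore --uri="mongodb://user:pass@target-host:27000" --archive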

how to create a copy of my mongo database to remote server

I want to copy the db from one server to another server.
I found an answer that works from my machine to another machine at home
(the two machines have the same username):
ssh -fN -L 27117:localhost:27017 remote_host
mongodump --port 27117 --db dbname --archive | mongorestore --archive
I tried doing it with
ssh -fN -L 27117:localhost:27017 myUsername@remote_host
mongodump --port 27117 --db dbname --archive | mongorestore --archive
but it doesn't work.
I also want to know what -fN and -L mean.
Thanks in advance.
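For reference, here is an annotated version of the tunnel commands (a sketch, assuming the remote mongod listens on localhost:27017 as seen from remote_host):
# -f  put ssh in the background once the connection is established
# -N  do not run a remote command (the session is used only for port forwarding)
# -L 27117:localhost:27017  forward local port 27117 to localhost:27017 on the remote side
ssh -fN -L 27117:localhost:27017 myUsername@remote_host
# dump through the tunnel and pipe straight into the local mongorestore
mongodump --port 27117 --db dbname --archive | mongorestore --archive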

How can I replace the following 2 commands to use BSON file?

I use the following command to backup my local db
mongodump -h 127.0.0.1 --port 8001 -d meteor -c products --archive --gzip > dump.gz
Then I use the following command to restore on my server
cat dump.gz | ssh root@66.205.148.23 "cat | docker exec -i mongodb mongorestore --archive --gzip"
I want to do the same but with only one collection. Adding the -c parameter to the above commands does not work when trying to restore. I get a message that states that the -c param can only be used with BSON files.
How can I do the above for only one collection using the -c parameter?
Thanks
Use the --out option for mongodump instead of --archive to write BSON files.
Specifies the directory where mongodump will write BSON files for the dumped databases. By default, mongodump saves output files in a directory named dump in the current working directory.
To send the database dump to standard output, specify "-" instead of a path. Write to standard output if you want to process the output before saving it, such as using gzip to compress the dump. When writing to standard output, mongodump does not write the metadata that it would otherwise write to a .metadata.json file when writing to files directly.
You cannot use the --archive option with the --out option.
This will create a dump folder with BSON files:
mongodump -h 127.0.0.1 --port 8001 -d meteor --gzip --out dump
To restore a single collection from it:
mongorestore -h 127.0.0.1 --port 8001 -d meteor --gzip -c collname dump/meteor/collname.bson.gz
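To then get that single collection onto the remote Docker setup from the question, one hedged approach (assuming a products collection and that mongorestore accepts a single-collection BSON stream on standard input via "-") is:
gunzip -c dump/meteor/products.bson.gz | ssh root@66.205.148.23 "docker exec -i mongodb mongorestore -d meteor -c products -"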

Mongorestore through stdin to db with different name

I have been trying to find a way to do this but cannot, and I have a feeling there isn't an easy way to do what I want.
I have been testing using mongodump | mongorestore as a way to avoid creating an actual stored set of files, which is very useful on a cloud-based service. So far, I have been testing this by specifying the same db, although it technically isn't necessary, like so...
mongodump -h hostname -d dumpdb --excludeCollection bigcollection --archive | mongorestore --archive -d dumpdb --drop
mongodump -h hostname -d dumpdb -c bigcollection --archive --queryFile onlythese.json | mongorestore --archive -d dumpdb -c bigcollection --drop
I have found that these options work best for me; when I tried the -o - option with a single db and removed --archive, I ran into some issues, but since this worked, I didn't mess with it.
Since I was restoring to the same db and because only the collections that were in there at the time were restored, I realize I can knock off the -d and -c in both mongorestore commands. But, it was easy to do, and is the set up for the next step...
All I wanted to do was restore the specified db in two steps, to a db of a different name, like so...
mongodump -h hostname -d dumpdb --excludeCollection bigcollection --archive | mongorestore --archive -d restoredb --drop
mongodump -h hostname -d dumpdb -c bigcollection --archive --queryFile onlythese.json | mongorestore --archive -d restoredb -c bigcollection --drop
The dump works fine, but restore does not. Based on the limited documentation, my assumption is that the dump db and restore db need to be the same for this to work; if the db specified in mongorestore is not in the dump, it just won't restore.
I find this to be rather annoying; my other thought was to restore the db as is, and just copy it to the other db.
I have thought about using other tools such as mongo-dump-stream, but considering everything was working swimmingly so far, I was hoping that it would work with the default tools.
On another point, I did dump the archive to a file (like so: > dumpdb.tar) and attempt to restore from there (like so --archive=dumpdb.tar) which is how I confirmed that the db needed to be in the dump.
Any suggestions/comments/hacks would be welcome.
As of version 3.4 of mongorestore, you can accomplish this using the --nsFrom and --nsTo options, which provide a pattern-based way to manipulate the names of your collections and/or dbs between the source and destination.
For example, to dump from a database named dumpdb into a new database named restoredb:
mongodump -h hostname -d dumpdb --archive | mongorestore --archive --nsFrom "dumpdb.*" --nsTo "restoredb.*" --drop
More from the mongodb docs: https://docs.mongodb.com/manual/reference/program/mongorestore/#change-collections-namespaces-during-restore
I haven't really found an answer to this, but based on the MongoDB Jira, this is an open issue and a feature request on their timeline: being able to restore to a different DB than the one you started with.
In the meantime, I came up with a not-so-great but workable solution: the archive also contains the metadata, and from the mongorestore code that is all that needs to change, so you can replace the name of the DB in the binary stream with one of the same length. My code now looks like this:
mongodump -h hostname -d dumpdb --excludeCollection bigcollection --archive | bbe -e "s/$(echo -n dumpdb | xxd -g 0 -u -ps -c 256 | sed -r 's/[A-F0-9]{2}/\\x&/g')\x00/$(echo -n rstrdb | xxd -g 0 -u -ps -c 256 | sed -r 's/[A-F0-9]{2}/\\x&/g')\x00/g" | mongorestore --archive --drop
You'll notice that I used bbe and dropped the -d tag, because this just changes the name of db in the archived metadata, and will restore to the new one.
The obvious issue here is that you cannot change the length gracefully; a longer name won't work, and shorter requires padding, which probably won't work. Otherwise, for those looking for a better solution than dumping to a file, or running a copyDatabase command (don't do it!), this is a workable solution.

How to export all collections in MongoDB?

I want to export all collections in MongoDB by the command:
mongoexport -d dbname -o Mongo.json
The result is:
No collection specified!
The manual says, if you don't specify a collection, all collections will be exported.
However, why doesn't this work?
http://docs.mongodb.org/manual/reference/mongoexport/#cmdoption-mongoexport--collection
My MongoDB version is 2.0.6.
For lazy people, use mongodump; it's faster:
mongodump -d <database_name> -o <directory_backup>
And to "restore/import" it (from directory_backup/dump/):
mongorestore -d <database_name> <directory_backup>
This way, you don't need to deal with all collections individually. Just specify the database.
Note that I would recommend against using mongodump/mongorestore for big data storages. It is very slow and once you get past 10/20GB of data it can take hours to restore.
I wrote a bash script for that. Just run it with two parameters (database name, directory to store files).
#!/bin/bash
# Export every collection of a database to JSON files.
if [ -z "$1" ]; then
    echo "Example of use: $0 database_name [dir_to_store]"
    exit 1
fi
db="$1"
out_dir="${2:-./}"
mkdir -p "$out_dir"
# Ask the mongo shell for the collection names, prefixed with '_ ' so the line can be grepped out.
tmp_file="fadlfhsdofheinwvw.js"
echo "print('_ ' + db.getCollectionNames())" > "$tmp_file"
cols=$(mongo "$db" "$tmp_file" | grep '_' | awk '{print $2}' | tr ',' ' ')
for c in $cols
do
    mongoexport -d "$db" -c "$c" -o "$out_dir/exp_${db}_${c}.json"
done
rm "$tmp_file"
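A hypothetical invocation, assuming the script is saved as export_collections.sh and made executable:
./export_collections.sh mydb ./backup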
For local and remote dump and restore:
For Local
Local dump
mongodump -d mydb -o ./mongo-backup
Local restore
mongorestore -d mydb ./mongo-backup/mydb
For remote
Remote dump
mongodump --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" -o ./mongo-backup
Remote restore
mongorestore --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" ./mongo-backup/mytestdb
Update:
If you're using mongo 4.0 you may encounter a snapshot error; in that case, run with the --forceTableScan argument. See here for more information. The error looks something like this:
mongodump error reading collection: BSON field 'FindCommandRequest.snapshot' is an unknown field.
To export all collections:
mongodump -d database_name -o directory_to_store_dumps
To restore them:
mongorestore -d database_name directory_backup_where_mongodb_tobe_restored
Follow the steps below to create a mongodump on the server and import it onto another server/local machine that has a username and a password:
1. mongodump -d dbname -o dumpname -u username -p password
2. scp -r user@remote:~/location/of/dumpname ./
3. mongorestore -d dbname dumpname/dbname/ -u username -p password
Where have you installed your MongoDB? (Ubuntu or Windows)
For Windows:
Before exporting, connect to your MongoDB in the cmd prompt and make sure you can connect to your localhost.
Now open a new cmd prompt and execute the command below:
mongodump --db <database name> --out <path to save>
eg: mongodump --db mydb --out c:\TEMP\dump
Visit https://www.youtube.com/watch?v=hOCp3Jv6yKo for more details.
For Ubuntu:
Log in to the terminal where MongoDB is installed and make sure you can connect to your MongoDB.
Now open a new terminal and execute the command below:
mongodump -d <database name> -o <directory to save>
eg: mongodump -d mydb -o ./dump
Visit https://www.youtube.com/watch?v=5Fwd2ZB86gg for more details.
Previous answers explained it well; I am adding my answer to help in case you are dealing with a remote, password-protected database.
mongodump --host xx.xxx.xx.xx --port 27017 --db your_db_name --username your_user_name --password your_password --out /target/folder/path
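A matching restore sketch for the other direction (hedged: whether you need --authenticationDatabase admin depends on where the user was created):
mongorestore --host xx.xxx.xx.xx --port 27017 --username your_user_name --password your_password --authenticationDatabase admin -d your_db_name /target/folder/path/your_db_name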
I realize that this is quite an old question and that mongodump/mongorestore is clearly the right way if you want a 100% faithful result, including indexes.
However, I needed a quick and dirty solution that would likely be forwards and backwards compatible between old and new versions of MongoDB, provided there's nothing especially wacky going on. And for that I wanted the answer to the original question.
There are other acceptable solutions above, but this Unix pipeline is relatively short and sweet:
mongo --quiet mydatabase --eval "db.getCollectionNames().join('\n')" | \
grep -v system.indexes | \
xargs -L 1 -I {} mongoexport -d mydatabase -c {} --out {}.json
This produces an appropriately named .json file for each collection.
Note that the database name ("mydatabase") appears twice. I'm assuming the database is local and you don't need to pass credentials but it's easy to do that with both mongo and mongoexport.
Note that I'm using grep -v to discard system.indexes, because I don't want an older version of MongoDB to try to interpret a system collection from a newer one. Instead I'm allowing my application to make its usual ensureIndex calls to recreate the indexes.
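To load those per-collection .json files back in later, a hedged companion loop (assuming the files sit in the current directory and the target database is again local and unauthenticated):
for f in *.json; do
  mongoimport -d mydatabase -c "$(basename "$f" .json)" --file "$f"
done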
You can do it using the mongodump command
Step 1 : Open a command prompt
Step 2 : Go to the bin folder of your MongoDB installation (C:\Program Files\MongoDB\Server\4.0\bin)
Step 3 : Then execute the following command
mongodump -d your_db_name -o destination_path
your_db_name = test
destination_path = C:\Users\HP\Desktop
Exported files will be created in destination_path\your_db_name folder (in this example C:\Users\HP\Desktop\test)
References : o7planning
In case you want to connect to a remote MongoDB server like mongolab.com, you should pass connection credentials,
eg.
mongoexport -h id.mongolab.com:60599 -u username -p password -d mydb -c mycollection -o mybackup.json
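The corresponding import back into such a server (a sketch, assuming the same credentials and a file produced by the command above):
mongoimport -h id.mongolab.com:60599 -u username -p password -d mydb -c mycollection --file mybackup.json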
If you are OK with the bson format, then you can use the mongodump utility with the same -d flag. It will dump all the collections to the dump directory (the default, can be changed via the -o option) in the bson format. You can then import these files using the mongorestore utility.
If you're dealing with remote databases, you can try these commands, given that you don't mind the output being BSON.
1. Dump out as a gzip archive
mongodump --uri="mongodb://YOUR_USER_ID:YOUR_PASSWORD@YOUR_HOST_IP/YOUR_DB_NAME" --gzip --archive > YOUR_FILE_NAME
2. Restore (Copy a database from one to another)
mongorestore --uri="mongodb://$targetUser:$targetPwd@$targetHost/$targetDb" --nsFrom="$sourceDb.*" --nsTo="$targetDb.*" --gzip --archive
You can use mongo --eval 'printjson(db.getCollectionNames())' to get the list of collections
and then do a mongoexport on all of them.
Here is an example in ruby
out = `mongo #{DB_HOST}/#{DB_NAME} --eval "printjson(db.getCollectionNames())"`
collections = out.scan(/\".+\"/).map { |s| s.gsub('"', '') }
collections.each do |collection|
system "mongoexport --db #{DB_NAME} --collection #{collection} --host '#{DB_HOST}' --out #{collection}_dump"
end
I needed the Windows batch script version. This thread was useful, so I thought I'd contribute my answer to it too.
mongo "{YOUR SERVER}/{YOUR DATABASE}" --eval "rs.slaveOk();db.getCollectionNames()" --quiet>__collections.txt
for /f %%a in ('type __collections.txt') do @set COLLECTIONS=%%a
for %%a in (%COLLECTIONS%) do mongoexport --host {YOUR SERVER} --db {YOUR DATABASE} --collection %%a --out data\%%a.json
del __collections.txt
I had some issues using set /p COLLECTIONS=<__collections.txt, hence the convoluted for /f method.
After trying lots of convoluted examples, I found that a very simple approach worked for me.
I just wanted to take a dump of a db from local and import it on a remote instance:
on the local machine:
mongodump -d databasename
then I scp'd my dump to my server machine:
scp -r dump user@xx.xxx.xxx.xxx:~
then from the parent dir of the dump simply:
mongorestore
and that imported the database.
assuming mongodb service is running of course.
If you want, you can export all collections to csv without specifying --fields (will export all fields).
From http://drzon.net/export-mongodb-collections-to-csv-without-specifying-fields/ run this bash script
OIFS=$IFS;
IFS=",";
# fill in your details here
dbname=DBNAME
user=USERNAME
pass=PASSWORD
host=HOSTNAME:PORT
# first get all collections in the database
collections=`mongo "$host/$dbname" -u $user -p $pass --eval "rs.slaveOk();db.getCollectionNames();"`;
# or, if no authentication is needed:
# collections=`mongo $dbname --eval "rs.slaveOk();db.getCollectionNames();"`;
collectionArray=($collections);
# for each collection
for ((i=0; i<${#collectionArray[@]}; ++i));
do
echo 'exporting collection' ${collectionArray[$i]}
# get comma separated list of keys. do this by peeking into the first document in the collection and get his set of keys
keys=`mongo "$host/$dbname" -u $user -p $pass --eval "rs.slaveOk();var keys = []; for(var key in db.${collectionArray[$i]}.find().sort({_id: -1}).limit(1)[0]) { keys.push(key); }; keys;" --quiet`;
# now use mongoexport with the set of keys to export the collection to csv
mongoexport --host $host -u $user -p $pass -d $dbname -c ${collectionArray[$i]} --fields "$keys" --csv --out $dbname.${collectionArray[$i]}.csv;
done
IFS=$OIFS;
If you want to dump all collections in all databases (which is an expansive interpretation of the original questioner's intent) then use
mongodump
All the databases and collections will be created in a directory called 'dump' in the 'current' location
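To load everything back from that directory, a minimal sketch (assuming a default mongod on localhost):
mongorestore dump/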
You can create a gzipped archive file of the database {dbname} using the following command. You can later restore that file into your MongoDB.
Windows filepath = C:\Users\Username\mongo
mongodump --archive={filepath}\{filename}.gz --gzip --db {dbname}
Here's what worked for me when restoring an exported database:
mongorestore -d 0 ./0 --drop
where ./0 contained the exported bson files. Note that --drop will overwrite existing data.
If you want to use mongoexport and mongoimport to export/import each collection from a database, I think this utility can be helpful for you.
I've used a similar utility a couple of times;
LOADING=false
usage()
{
cat << EOF
usage: $0 [options] dbname
OPTIONS:
-h Show this help.
-l Load instead of export
-u Mongo username
-p Mongo password
-H Mongo host string (ex. localhost:27017)
EOF
}
while getopts "hlu:p:H:" opt; do
MAXOPTIND=$OPTIND
case $opt in
h)
usage
exit
;;
l)
LOADING=true
;;
u)
USERNAME="$OPTARG"
;;
p)
PASSWORD="$OPTARG"
;;
H)
HOST="$OPTARG"
;;
\?)
echo "Invalid option $opt"
exit 1
;;
esac
done
shift $(($MAXOPTIND-1))
if [ -z "$1" ]; then
echo "Usage: export-mongo [opts] <dbname>"
exit 1
fi
DB="$1"
if [ -z "$HOST" ]; then
CONN="localhost:27017/$DB"
else
CONN="$HOST/$DB"
fi
ARGS=""
if [ -n "$USERNAME" ]; then
ARGS="-u $USERNAME"
fi
if [ -n "$PASSWORD" ]; then
ARGS="$ARGS -p $PASSWORD"
fi
echo "*************************** Mongo Export ************************"
echo "**** Host: $HOST"
echo "**** Database: $DB"
echo "**** Username: $USERNAME"
echo "**** Password: $PASSWORD"
echo "**** Loading: $LOADING"
echo "*****************************************************************"
if $LOADING ; then
echo "Loading into $CONN"
tar -xzf $DB.tar.gz
pushd $DB >/dev/null
for path in *.json; do
collection=${path%.json}
echo "Loading into $DB/$collection from $path"
mongoimport $ARGS -d $DB -c $collection $path
done
popd >/dev/null
rm -rf $DB
else
DATABASE_COLLECTIONS=$(mongo $CONN $ARGS --quiet --eval 'db.getCollectionNames()' | sed 's/,/ /g')
mkdir /tmp/$DB
pushd /tmp/$DB 2>/dev/null
for collection in $DATABASE_COLLECTIONS; do
mongoexport --host $HOST -u $USERNAME -p $PASSWORD -d $DB -c $collection --jsonArray -o $collection.json >/dev/null
done
pushd /tmp 2>/dev/null
tar -czf "$DB.tar.gz" $DB 2>/dev/null
popd 2>/dev/null
popd 2>/dev/null
mv /tmp/$DB.tar.gz ./ 2>/dev/null
rm -rf /tmp/$DB 2>/dev/null
fi
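Hypothetical invocations, assuming the script is saved as export-mongo.sh and made executable:
# export mydb to mydb.tar.gz in the current directory
./export-mongo.sh -H localhost:27017 -u myuser -p mypass mydb
# load mydb.tar.gz from the current directory back into mydb
./export-mongo.sh -l -H localhost:27017 -u myuser -p mypass mydb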
If you have this issue:
Failed: can't create session: could not connect to server: connection() : auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
then add --authenticationDatabase admin
eg:
mongodump -h 192.168.20.30:27018 --authenticationDatabase admin -u dbAdmin -p dbPassword -d dbName -o path/to/folder
If you want to back up all the dbs on the server, without having to worry about what the dbs are called, use the following shell script:
#!/bin/sh
md=`which mongodump`
pidof=`which pidof`
mdi=`$pidof mongod`
dir='/var/backup/mongo'
if [ ! -z "$mdi" ]
then
if [ ! -d "$dir" ]
then
mkdir -p $dir
fi
$md --out $dir >/dev/null 2>&1
fi
This uses the mongodump utility, which will backup all DBs if none is specified.
You can put this in your cronjob, and it will only run if the mongod process is running. It will also create the backup directory if none exists.
Each DB backup is written to an individual directory, so you can restore individual DBs from the global dump.
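For example, a hedged crontab entry (assuming the script is saved as /usr/local/bin/mongo-backup.sh and is executable) to run it every night at 2am:
0 2 * * * /usr/local/bin/mongo-backup.sh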
I dump all collections with Robo 3T.
I run the command below on Vagrant/Homestead; it works for me:
mongodump --host localhost --port 27017 --db db_name --out db_path
Some of the options are now deprecated; in version 4.4.5, here is how I have done it:
mongodump --archive="my-local-db" --db=my
mongorestore --archive="my-local-db" --nsFrom='my.*' --nsTo='mynew.*'
Read more about restore here: https://docs.mongodb.com/database-tools/mongorestore/
First of all, start MongoDB: go to C:\Program Files\MongoDB\Server\3.2\bin and run mongod.exe to start the MongoDB server.
Command in Windows to Export
Command to export a MongoDB database in Windows from a remote server (with its internal IP address and port) to the directory C:/Users/Desktop/temp-folder on the local machine:
C:\> mongodump --host remote_ip_address:27017 --db <db-name> -o C:/Users/Desktop/temp-folder
Command in Windows to Import
Command to import a MongoDB database in Windows to the remote server from the local machine directory C:/Users/Desktop/temp-folder/db-dir:
C:\> mongorestore --host=ip --port=27017 -d <db-name> C:/Users/Desktop/temp-folder/db-dir
This is the simplest technique to achieve your aim.
mongodump -d db_name -o path/to/dump_dir
#mongodump using sh script
#!/bin/bash
TIMESTAMP=`date +%F-%H%M`
APP_NAME="folder_name"
BACKUPS_DIR="/xxxx/tst_file_bcup/$APP_NAME"
BACKUP_NAME="$APP_NAME-$TIMESTAMP"
/usr/bin/mongodump -h 127.0.0.1 -d <dbname> -o $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME
tar -zcvf $BACKUPS_DIR/$BACKUP_NAME.tgz $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME
# remove the uncompressed dump once it has been archived
rm -rf $BACKUPS_DIR/$APP_NAME
### delete backups older than 7 days automatically with the command below
find $BACKUPS_DIR/ -mindepth 1 -mtime +7 -delete
There are multiple options depending on what you want to do
1) If you want to export your database to another mongo database, you should use mongodump. This creates a folder of BSON files which have metadata that JSON wouldn't have.
mongodump
mongorestore --host mongodb1.example.net --port 37017 dump/
2) If you want to export your database into JSON you can use mongoexport except you have to do it one collection at a time (this is by design). However I think it's easiest to export the entire database with mongodump and then convert to JSON.
# -d is a valid option for both mongorestore and mongodump
mongodump -d <DATABASE_NAME>
for file in dump/*/*.bson; do bsondump $file > $file.json; done
Even in mongo version 4 there is no way to export all collections at once. To export a specified collection to a specified output file from a local MongoDB instance running on port 27017, you can use the following command:
.\mongoexport.exe --db=xstaging --collection=products --out=c:/xstaging.products.json
Open the Connection
Start the server
Open a new command prompt
Export:
mongo/bin> mongoexport -d webmitta -c domain -o domain-k.json
Import:
mongoimport -d dbname -c newCollecionname --file domain-k.json
where
webmitta = db name
domain = collection name
domain-k.json = output file name