Migrating databases in MongoDB is a well-understood problem domain, and there is a range of tools available to do so at the host level: everything from mongodump and mongoexport to rsync on the data files. If you're getting very fancy, you can use network mounts like SSHFS and NFS to mitigate disk space and IOPS constraints.
Migrating a Database on a Host
# Using a temporary archive
mongodump --db my_db --gzip --archive=/tmp/my_db.dump --port 27017
mongorestore --db my_db --gzip --archive=/tmp/my_db.dump --port 27018
rm /tmp/my_db.dump
# Or you can stream it...
mongodump --db my_db --port 27017 --archive \
| mongorestore --db my_db --port 27018 --archive
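As a rough sketch of the network-mount idea (the host and mount point here are hypothetical), you can dump straight onto an SSHFS mount so the archive never touches the local disk:
# Mount a remote directory, dump directly onto it, then unmount
sshfs backup-host:/srv/backups /mnt/remote-backups
mongodump --db my_db --gzip --archive=/mnt/remote-backups/my_db.dump --port 27017
fusermount -u /mnt/remote-backups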
Performing the same migrations in a containerized environment, however, can be somewhat more complicated, and the lightweight, purpose-specific nature of containers means that you often don't have the same set of tools available to you.
As an engineer managing containerized infrastructure, I'm interested in what approaches can be used to migrate a database from one container/cluster to another whether for backup, cluster migration or development (data sampling) purposes.
For the purpose of this question, let's assume that the database is NOT a multi-TB cluster spread across multiple hosts and seeing thousands(++) of writes per second (i.e. that you can make a backup and have "enough" data for it to be valuable without needing to worry about replicating oplogs etc).
I've used a couple of approaches to solve this before. The specific approach depends on what I'm doing and what requirements I need to work within.
1. Working with files inside the container
# Dump the old container's DB to an archive file within the container
docker exec $OLD_CONTAINER \
bash -c 'mongodump --db my_db --gzip --archive=/tmp/my_db.dump'
# Copy the archive from the old container to the new one (docker cp can't
# copy directly between two containers, so stage it on the local filesystem)
docker cp $OLD_CONTAINER:/tmp/my_db.dump /tmp/my_db.dump
docker cp /tmp/my_db.dump $NEW_CONTAINER:/tmp/my_db.dump
rm /tmp/my_db.dump
# Restore the archive in the new container
docker exec $NEW_CONTAINER \
bash -c 'mongorestore --db my_db --gzip --archive=/tmp/my_db.dump'
This approach works quite well and avoids many of the encoding issues encountered when piping data over stdout. However, it doesn't work particularly well when migrating to containers on different hosts (you have to scp/rsync the staged local file to the new host before you can docker cp it into the new container), nor when migrating from, say, Docker to Kubernetes.
Migrating to a different Docker cluster
# Dump the old container's DB to an archive file within the container
docker -H old_cluster exec $OLD_CONTAINER \
bash -c 'mongodump --db my_db --gzip --archive=/tmp/my_db.dump'
# Copy the archive from the old container to the new one (via your machine)
docker -H old_cluster cp $OLD_CONTAINER:/tmp/my_db.dump /tmp/my_db.dump
docker -H old_cluster exec $OLD_CONTAINER rm /tmp/my_db.dump
docker -H new_cluster cp /tmp/my_db.dump $NEW_CONTAINER:/tmp/my_db.dump
rm /tmp/my_db.dump
# Restore the archive in the new container
docker -H new_cluster exec $NEW_CONTAINER \
bash -c 'mongorestore --db my_db --gzip --archive=/tmp/my_db.dump'
docker -H new_cluster exec $NEW_CONTAINER rm /tmp/my_db.dump
Downsides
The biggest downside to this approach is the need to store temporary dump files everywhere. In the best case you'd have a dump file in your old container and another in your new container; in the worst case you'd have a third on your local machine (or potentially copies on multiple machines if you need to scp/rsync it around). These temp files are likely to be forgotten about, wasting space and cluttering your containers' filesystems.
2. Copying over stdout
# Copy the database over stdout (base64 encoded)
docker exec $OLD_CONTAINER \
bash -c 'mongodump --db my_db --gzip --archive 2>/dev/null | base64' \
| docker exec -i $NEW_CONTAINER \
bash -c 'base64 --decode | mongorestore --db my_db --gzip --archive'
Copying the archive over stdout and passing it via stdin to the new container allows you to remove the copy step and join the commands into a beautiful little one-liner (for some definition of beautiful). It also allows you to potentially mix and match hosts and even container schedulers...
Migrating between different Docker clusters
# Copy the database over stdout (base64 encoded)
docker -H old_cluster exec $(docker -H old_cluster ps -q -f 'name=mongo') \
bash -c 'mongodump --db my_db --gzip --archive 2>/dev/null | base64' \
| docker -H new_cluster exec -i $(docker -H new_cluster ps -q -f 'name=mongo') \
bash -c 'base64 --decode | mongorestore --db my_db --gzip --archive'
Migrating from Docker to Kubernetes
# Copy the database over stdout (base64 encoded)
docker exec $(docker ps -q -f 'name=mongo') \
bash -c 'mongodump --db my_db --gzip --archive 2>/dev/null | base64' \
| kubectl exec -i mongodb-0 -- \
bash -c 'base64 --decode | mongorestore --db my_db --gzip --archive'
Downsides
This approach works well in the "success" case, but in situations where the dump fails partway through, the need to suppress the stderr stream (with 2>/dev/null) can cause serious headaches when debugging the cause.
It is also roughly 33% less network-efficient than the file-based approach, since it needs to base64-encode the data for transport (potentially a big issue for larger databases). As with all streaming modes, there's also no way to inspect the data that was sent after the fact, which can be a problem if you need to track down a failure.
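One mitigation worth sketching (assuming binary-safe, TTY-less exec on both sides): skip base64 entirely and keep mongodump's stderr in a log file inside the old container, so failures stay debuggable:
# Stream the raw gzip archive; stderr goes to a log inside the old container
docker exec $OLD_CONTAINER \
bash -c 'mongodump --db my_db --gzip --archive 2>/tmp/mongodump.log' \
| docker exec -i $NEW_CONTAINER \
bash -c 'mongorestore --db my_db --gzip --archive'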
UPDATE: this post applied to meteor.com free hosting, which has been shut down and replaced with Galaxy, a paid Meteor hosting service.
I'm using this command
C:\kanjifinder>meteor mongo --url kanjifinder.meteor.com
to get access credentials for my deployed mongo app, but I can't get mongoimport to work with the credentials. I think I just don't understand exactly which part is the username, password and client. Could you break it down for me?
result from server (I modified it to obfuscate the real values):
mongodb://client:e63aaade-xxxx-yyyy-93e4-de0c1b80416f@meteor.m0.mongolayer.com:27017/kanjifinder_meteor_com
my mongoimport attempt (fails authentication):
C:\mongodb\bin>mongoimport -h meteor.m0.mongolayer.com:27017 -u client -p e63aaade-xxxx-yyyy-93e4-de0c1b80416f --db meteor --collection kanji --type csv --file c:\kanjifinder\kanjifinder.csv --headerline
OK got it. This helped:
http://docs.mongodb.org/manual/reference/connection-string/
mongoimport --host meteor.m0.mongolayer.com --port 27017 --username client --password e63aaade-xxxx-yyyy-93e4-de0c1b80416f --db kanjifinder_meteor_com --collection kanji --type csv --file c:\kanjifinder\kanjifinder.csv --headerline
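To break the connection string down explicitly, the format is mongodb://<username>:<password>@<host>:<port>/<database>, so in this case:
username: client
password: e63aaade-xxxx-yyyy-93e4-de0c1b80416f
host:     meteor.m0.mongolayer.com
port:     27017
database: kanjifinder_meteor_com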
Using mongodump and mongorestore also works:
Dump data from existing mongodb (mongodb url: mongodb://USER:PASSWORD@DBHOST/DBNAME)
mongodump -h DBHOST -d DBNAME -u USER -p PASSWORD
This will create a "dump" directory, with all the data going to dump/DBNAME.
Get the mongodb url for the deployed meteor app (i.e. www.mymeteorapp.com)
meteor mongo --url METEOR_APP_URL
Note: the PASSWORD expires every minute.
Upload the db dump data to the meteor app (using an example meteor db url)
mongorestore -u client -p dcc56e04-a563-4147-eff4-5ae7c1253c9b -h production-db-b2.meteor.io:27017 -d www_mymeteorapp_com dump/DBNAME/
All the data should get transferred!
If you get an auth_failed error message, your mongoimport version is too different from what's being used on meteor.com, so you need to upgrade. For Ubuntu see https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/#install-the-latest-stable-version-of-mongodb
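To check which version you have locally before upgrading:
mongoimport --version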
#!/bin/sh
# Script to import csvfile to meteor application deployed to free meteor.com hosting.
# Make sure your versions of mongo match with the meteor.com mongo versions.
# As of Jan 2016 it seems to be 3.x something. Tested with mongoimport 3.12.
if [ $# -eq 0 ]
then
    echo "usage: $0 xxx.meteor.com collection filename.csv"
    exit 1
fi
URL=$1
COLLECTION=$2
FILE=$3
echo Connecting to $URL, please stand by.... collection=$COLLECTION file=$FILE
PUPMS=`meteor mongo --url $URL | sed 's/mongodb:\/\// -u /' | sed 's/:/ -p /' | sed 's/@/ -h /' | sed 's/\// -d /'`
mongoimport -v $PUPMS --type csv --headerline --collection $COLLECTION --file $FILE
I want to export all collections in MongoDB by the command:
mongoexport -d dbname -o Mongo.json
The result is:
No collection specified!
The manual says that if you don't specify a collection, all collections will be exported.
However, why doesn't this work?
http://docs.mongodb.org/manual/reference/mongoexport/#cmdoption-mongoexport--collection
My MongoDB version is 2.0.6.
For lazy people, use mongodump, it's faster:
mongodump -d <database_name> -o <directory_backup>
And to "restore/import" it (from directory_backup/dump/):
mongorestore -d <database_name> <directory_backup>
This way, you don't need to deal with all collections individually. Just specify the database.
Note that I would recommend against using mongodump/mongorestore for big data stores. It is very slow, and once you get past 10/20GB of data it can take hours to restore.
I wrote a bash script for that. Just run it with 2 parameters (database name, directory to store files).
#!/bin/bash
if [ -z "$1" ]; then
    echo " Example of use: $0 database_name [dir_to_store]"
    exit 1
fi
db=$1
out_dir=$2
if [ -z "$out_dir" ]; then
    out_dir="./"
else
    mkdir -p "$out_dir"
fi
tmp_file="fadlfhsdofheinwvw.js"
echo "print('_ ' + db.getCollectionNames())" > $tmp_file
cols=`mongo $db $tmp_file | grep '_' | awk '{print $2}' | tr ',' ' '`
for c in $cols
do
    mongoexport -d $db -c $c -o "$out_dir/exp_${db}_${c}.json"
done
rm $tmp_file
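Assuming you save the script as export_all.sh (a hypothetical name), usage looks like:
chmod +x export_all.sh
./export_all.sh my_db ./backup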
For local and remote dump and restore:
For Local
Local dump
mongodump -d mydb -o ./mongo-backup
Local restore
mongorestore -d mydb ./mongo-backup/mydb
For remote
Remote dump
mongodump --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" -o ./mongo-backup
Remote restore
mongorestore --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" ./mongo-backup/mytestdb
Update:
If you're using mongo 4.0 you may encounter a snapshot error; in that case you can run with the --forceTableScan argument. See here for more information. The error is something like this:
mongodump error reading collection: BSON field 'FindCommandRequest.snapshot' is an unknown field.
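In other words, the dump command above becomes:
mongodump --uri "mongodb+srv://Admin:MYPASS@appcluster.15lf4.mongodb.net/mytestdb" --forceTableScan -o ./mongo-backup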
To export all collections:
mongodump -d database_name -o directory_to_store_dumps
To restore them:
mongorestore -d database_name directory_backup_where_mongodb_tobe_restored
Follow the steps below to create a mongodump from the server and import it to another server/local machine that has a username and a password:
1. mongodump -d dbname -o dumpname -u username -p password
2. scp -r user@remote:~/location/of/dumpname ./
3. mongorestore -d dbname dumpname/dbname/ -u username -p password
Where have you installed your MongoDB, Ubuntu or Windows? The steps differ slightly:
For Windows:
Before exporting, connect to your MongoDB in a cmd prompt and make sure that you are able to connect to your localhost.
Now open a new cmd prompt and execute the below command,
mongodump --db <database name> --out <path to save>
eg: mongodump --db mydb --out c:\TEMP\backup
Visit https://www.youtube.com/watch?v=hOCp3Jv6yKo for more details.
For Ubuntu:
Login to your terminal where Mongo DB is installed and make sure you are able to connect to your Mongo DB.
Now open a new terminal and execute the below command,
mongodump -d <database name> -o <directory to save>
eg: mongodump -d mydb -o ./backup
Visit https://www.youtube.com/watch?v=5Fwd2ZB86gg for more details.
Previous answers explained it well; I am adding my answer to help in case you are dealing with a remote, password-protected database:
mongodump --host xx.xxx.xx.xx --port 27017 --db your_db_name --username your_user_name --password your_password --out /target/folder/path
I realize that this is quite an old question and that mongodump/mongorestore is clearly the right way if you want a 100% faithful result, including indexes.
However, I needed a quick and dirty solution that would likely be forwards and backwards compatible between old and new versions of MongoDB, provided there's nothing especially wacky going on. And for that I wanted the answer to the original question.
There are other acceptable solutions above, but this Unix pipeline is relatively short and sweet:
mongo --quiet mydatabase --eval "db.getCollectionNames().join('\n')" | \
grep -v system.indexes | \
xargs -L 1 -I {} mongoexport -d mydatabase -c {} --out {}.json
This produces an appropriately named .json file for each collection.
Note that the database name ("mydatabase") appears twice. I'm assuming the database is local and you don't need to pass credentials but it's easy to do that with both mongo and mongoexport.
Note that I'm using grep -v to discard system.indexes, because I don't want an older version of MongoDB to try to interpret a system collection from a newer one. Instead I'm allowing my application to make its usual ensureIndex calls to recreate the indexes.
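For completeness, a sketch of the reverse direction, re-importing each file (here again into "mydatabase") using the filename as the collection name:
for f in *.json; do mongoimport -d mydatabase -c "${f%.json}" --file "$f"; done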
You can do it using the mongodump command
Step 1 : Open command prompt
Step 2 : go to bin folder of your mongoDB installation (C:\Program Files\MongoDB\Server\4.0\bin)
Step 3 : then execute the following command
mongodump -d your_db_name -o destination_path
your_db_name = test
destination_path = C:\Users\HP\Desktop
Exported files will be created in destination_path\your_db_name folder (in this example C:\Users\HP\Desktop\test)
Reference: o7planning
In case you want to connect a remote mongoDB server like mongolab.com, you should pass connection credentials
eg.
mongoexport -h id.mongolab.com:60599 -u username -p password -d mydb -c mycollection -o mybackup.json
If you are OK with the bson format, then you can use the mongodump utility with the same -d flag. It will dump all the collections to the dump directory (the default, can be changed via the -o option) in the bson format. You can then import these files using the mongorestore utility.
If you're dealing with remote databases you can try these commands given that you don't mind the output being BSON
1. Dump out as a gzip archive
mongodump --uri="mongodb://YOUR_USER_ID:YOUR_PASSWORD@YOUR_HOST_IP/YOUR_DB_NAME" --gzip --archive > YOUR_FILE_NAME
2. Restore (Copy a database from one to another)
mongorestore --uri="mongodb://$targetUser:$targetPwd@$targetHost/$targetDb" --nsFrom="$sourceDb.*" --nsTo="$targetDb.*" --gzip --archive
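If you'd rather avoid the intermediate file entirely, the two steps can be piped together, since --archive with no value writes to stdout and reads from stdin (the $source* variables here are hypothetical counterparts of the $target* ones above):
mongodump --uri="mongodb://$sourceUser:$sourcePwd@$sourceHost/$sourceDb" --gzip --archive \
| mongorestore --uri="mongodb://$targetUser:$targetPwd@$targetHost/$targetDb" --nsFrom="$sourceDb.*" --nsTo="$targetDb.*" --gzip --archive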
You can use mongo --eval 'printjson(db.getCollectionNames())' to get the list of collections
and then do a mongoexport on all of them.
Here is an example in ruby
out = `mongo #{DB_HOST}/#{DB_NAME} --eval "printjson(db.getCollectionNames())"`
collections = out.scan(/\".+\"/).map { |s| s.gsub('"', '') }
collections.each do |collection|
system "mongoexport --db #{DB_NAME} --collection #{collection} --host '#{DB_HOST}' --out #{collection}_dump"
end
I needed the Windows batch script version. This thread was useful, so I thought I'd contribute my answer to it too.
mongo "{YOUR SERVER}/{YOUR DATABASE}" --eval "rs.slaveOk();db.getCollectionNames()" --quiet>__collections.txt
for /f %%a in ('type __collections.txt') do #set COLLECTIONS=%%a
for %%a in (%COLLECTIONS%) do mongoexport --host {YOUR SERVER} --db {YOUR DATABASE} --collection %%a --out data\%%a.json
del __collections.txt
I had some issues using set /p COLLECTIONS=<__collections.txt, hence the convoluted for /f method.
After trying lots of convoluted examples, I found that a very simple approach worked for me.
I just wanted to take a dump of a db from local and import it on a remote instance:
on the local machine:
mongodump -d databasename
then I scp'd my dump to my server machine:
scp -r dump user@xx.xxx.xxx.xxx:~
then from the parent dir of the dump simply:
mongorestore
and that imported the database.
This assumes the mongodb service is running, of course.
If you want, you can export all collections to CSV without specifying --fields (which will export all fields).
From http://drzon.net/export-mongodb-collections-to-csv-without-specifying-fields/ run this bash script:
OIFS=$IFS;
IFS=",";
# fill in your details here
dbname=DBNAME
user=USERNAME
pass=PASSWORD
host=HOSTNAME:PORT
# first get all collections in the database
collections=`mongo "$host/$dbname" -u $user -p $pass --eval "rs.slaveOk();db.getCollectionNames();"`;
collections=`mongo $dbname --eval "rs.slaveOk();db.getCollectionNames();"`;
collectionArray=($collections);
# for each collection
for ((i=0; i<${#collectionArray[@]}; ++i));
do
    echo 'exporting collection' ${collectionArray[$i]}
    # get a comma-separated list of keys by peeking into the first document in the collection and getting its set of keys
    keys=`mongo "$host/$dbname" -u $user -p $pass --eval "rs.slaveOk();var keys = []; for(var key in db.${collectionArray[$i]}.find().sort({_id: -1}).limit(1)[0]) { keys.push(key); }; keys;" --quiet`;
    # now use mongoexport with the set of keys to export the collection to csv
    mongoexport --host $host -u $user -p $pass -d $dbname -c ${collectionArray[$i]} --fields "$keys" --csv --out $dbname.${collectionArray[$i]}.csv;
done
IFS=$OIFS;
If you want to dump all collections in all databases (which is an expansive interpretation of the original questioner's intent) then use
mongodump
All the databases and collections will be created in a directory called 'dump' in the 'current' location
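And to restore everything from that directory later:
mongorestore dump/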
You can create a gzip archive using the following command. It will create a compressed archive of the database {dbname} provided, which you can later import into your MongoDB.
Windows filepath=C:\Users\Username\mongo
mongodump --archive={filepath}\{filename}.gz --gzip --db {dbname}
Here's what worked for me when restoring an exported database:
mongorestore -d 0 ./0 --drop
where ./0 contained the exported bson files. Note that --drop will overwrite existing data.
If you want to use mongoexport and mongoimport to export/import each collection from a database, this utility can be helpful. I've used a similar utility a couple of times:
#!/bin/bash
LOADING=false

usage()
{
cat << EOF
usage: $0 [options] dbname

OPTIONS:
   -h   Show this help.
   -l   Load instead of export
   -u   Mongo username
   -p   Mongo password
   -H   Mongo host string (ex. localhost:27017)
EOF
}

while getopts "hlu:p:H:" opt; do
    MAXOPTIND=$OPTIND
    case $opt in
        h)
            usage
            exit
            ;;
        l)
            LOADING=true
            ;;
        u)
            USERNAME="$OPTARG"
            ;;
        p)
            PASSWORD="$OPTARG"
            ;;
        H)
            HOST="$OPTARG"
            ;;
        \?)
            echo "Invalid option $opt"
            exit 1
            ;;
    esac
done
shift $(($MAXOPTIND-1))

if [ -z "$1" ]; then
    echo "Usage: export-mongo [opts] <dbname>"
    exit 1
fi
DB="$1"

if [ -z "$HOST" ]; then
    CONN="localhost:27017/$DB"
else
    CONN="$HOST/$DB"
fi

ARGS=""
if [ -n "$USERNAME" ]; then
    ARGS="-u $USERNAME"
fi
if [ -n "$PASSWORD" ]; then
    ARGS="$ARGS -p $PASSWORD"
fi

echo "*************************** Mongo Export ************************"
echo "**** Host: $HOST"
echo "**** Database: $DB"
echo "**** Username: $USERNAME"
echo "**** Password: $PASSWORD"
echo "**** Loading: $LOADING"
echo "*****************************************************************"

if $LOADING ; then
    echo "Loading into $CONN"
    tar -xzf $DB.tar.gz
    pushd $DB >/dev/null
    for path in *.json; do
        collection=${path%.json}
        echo "Loading into $DB/$collection from $path"
        mongoimport $ARGS -d $DB -c $collection $path
    done
    popd >/dev/null
    rm -rf $DB
else
    DATABASE_COLLECTIONS=$(mongo $CONN $ARGS --quiet --eval 'db.getCollectionNames()' | sed 's/,/ /g')
    mkdir /tmp/$DB
    pushd /tmp/$DB 2>/dev/null
    for collection in $DATABASE_COLLECTIONS; do
        mongoexport --host $HOST -u $USERNAME -p $PASSWORD -d $DB -c $collection --jsonArray -o $collection.json >/dev/null
    done
    pushd /tmp 2>/dev/null
    tar -czf "$DB.tar.gz" $DB 2>/dev/null
    popd 2>/dev/null
    popd 2>/dev/null
    mv /tmp/$DB.tar.gz ./ 2>/dev/null
    rm -rf /tmp/$DB 2>/dev/null
fi
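Hypothetical usage, assuming the script is saved as export-mongo.sh:
# Export: produces mydb.tar.gz in the current directory
./export-mongo.sh -H localhost:27017 -u myuser -p mypass mydb
# Load: imports the collections from mydb.tar.gz back into the database
./export-mongo.sh -l -H localhost:27017 -u myuser -p mypass mydb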
If you have this issue:
Failed: can't create session: could not connect to server: connection() : auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
then add --authenticationDatabase admin
eg:
mongodump -h 192.168.20.30:27018 --authenticationDatabase admin -u dbAdmin -p dbPassword -d dbName -o path/to/folder
If you want to back up all the dbs on the server without worrying about what the dbs are called, use the following shell script:
#!/bin/sh
md=`which mongodump`
pidof=`which pidof`
mdi=`$pidof mongod`
dir='/var/backup/mongo'

if [ ! -z "$mdi" ]
then
    if [ ! -d "$dir" ]
    then
        mkdir -p $dir
    fi
    $md --out $dir >/dev/null 2>&1
fi
This uses the mongodump utility, which will backup all DBs if none is specified.
You can put this in your cronjob, and it will only run if the mongod process is running. It will also create the backup directory if none exists.
Each DB backup is written to an individual directory, so you can restore individual DBs from the global dump.
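A hypothetical crontab entry for that (nightly at 02:00; the script path is assumed):
0 2 * * * /usr/local/bin/mongo-backup.sh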
I dump all collections with Robo 3T. I run the command below on Vagrant/Homestead, and it works for me:
mongodump --host localhost --port 27017 --db db_name --out db_path
Some of the options are now deprecated; in version 4.4.5, here is how I have done it:
mongodump --archive="my-local-db" --db=my
mongorestore --archive="my-local-db" --nsFrom='my.*' --nsTo='mynew.*'
Read more about restore here: https://docs.mongodb.com/database-tools/mongorestore/
First of all, start MongoDB: go to C:\Program Files\MongoDB\Server\3.2\bin and run mongod.exe to start the MongoDB server.
Command in Windows to Export
Command to export a MongoDB database in Windows from a remote server (internal IP address and port) to the local machine, into the directory C:/Users/Desktop/temp-folder:
C:\> mongodump --host remote_ip_address:27017 --db <db-name> -o C:/Users/Desktop/temp-folder
Command in Windows to Import
Command to import a MongoDB database in Windows to a remote server from the local machine directory C:/Users/Desktop/temp-folder/db-dir:
C:\> mongorestore --host=ip --port=27017 -d <db-name> C:/Users/Desktop/temp-folder/db-dir
This is the simplest technique to achieve your aim.
mongodump -d db_name -o path/to/output_directory
#!/bin/bash
# mongodump using an sh script
TIMESTAMP=`date +%F-%H%M`
APP_NAME="folder_name"
BACKUPS_DIR="/xxxx/tst_file_bcup/$APP_NAME"
BACKUP_NAME="$APP_NAME-$TIMESTAMP"

/usr/bin/mongodump -h 127.0.0.1 -d <dbname> -o $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME
tar -zcvf $BACKUPS_DIR/$BACKUP_NAME.tgz $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME
rm -rf $BACKUPS_DIR/$APP_NAME/$BACKUP_NAME

### Delete backups older than 7 days automatically with the following command
find $BACKUPS_DIR/ -mindepth 1 -mtime +7 -delete
There are multiple options depending on what you want to do
1) If you want to export your database to another mongo database, you should use mongodump. This creates a folder of BSON files which have metadata that JSON wouldn't have.
mongodump
mongorestore --host mongodb1.example.net --port 37017 dump/
2) If you want to export your database into JSON you can use mongoexport except you have to do it one collection at a time (this is by design). However I think it's easiest to export the entire database with mongodump and then convert to JSON.
# -d is a valid option for both mongorestore and mongodump
mongodump -d <DATABASE_NAME>
for file in dump/*/*.bson; do bsondump "$file" > "$file.json"; done
Even in mongo version 4 there is no way to export all collections at once. To export the specified collection to the specified output file from a local MongoDB instance running on port 27017, you can run:
.\mongoexport.exe --db=xstaging --collection=products --out=c:/xstaging.products.json
Open the connection
Start the server
Open a new command prompt
Export:
mongo/bin> mongoexport -d webmitta -c domain -o domain-k.json
Import:
mongoimport -d dbname -c newCollecionname --file domain-k.json
where webmitta is the db name, domain is the collection name, and domain-k.json is the output file name.
Is there a simple way to export the data from a meteor deployed app?
So, for example, if I had deployed an app named test.meteor.com...
How could I easily download the data that has been collected by that app - so that I could run it locally with data from the deployed app?
To get the URL for your deployed site at meteor.com use the command (you may need to provide your site password if you password protected it):
meteor mongo --url YOURSITE.meteor.com
Which will return something like :
mongodb://client:PASSWORD@sky.member1.mongolayer.com:27017/YOURSITE_meteor_com
Which you can give to a program like mongodump
mongodump -u client -h sky.member1.mongolayer.com:27017 -d YOURSITE_meteor_com \
    -p PASSWORD
The password is only good for one minute. For usage:
$ meteor --help mongo
And here's how to do the opposite (uploading your local mongo db to meteor):
https://gist.github.com/IslamMagdy/5519514
# How to upload local db to meteor:
# -h = host, -d = database name, -o = dump folder name
mongodump -h 127.0.0.1:3002 -d meteor -o meteor
# get meteor db url, username, and password
meteor mongo --url myapp.meteor.com
# -h = host, -d = database name (app domain), -p = password, folder = the path to the dumped db
mongorestore -u client -h c0.meteor.m0.mongolayer.com:27017 -d myapp_meteor_com -p 'password' folder/
Based on Kasper Souren's solution I created an updated script that works with current versions of Meteor and also works when you protect your remote Meteor app with a password.
Please create the following script parse-mongo-url.coffee:
spawn = require('child_process').spawn
mongo = spawn 'meteor', ['mongo', '--url', 'YOURPROJECT.meteor.com'], stdio: [process.stdin, 'pipe', process.stderr]
mongo.stdout.on 'data', (data) ->
data = data.toString()
m = data.match /mongodb:\/\/([^:]+):([^@]+)@([^:]+):27017\/([^\/]+)/
if m?
process.stdout.write "-u #{m[1]} -p #{m[2]} -h #{m[3]} -d #{m[4]}"
else
if data == 'Password: '
process.stderr.write data
Then execute it like this in a *nix shell:
mongodump `coffee parse-mongo-url.coffee`
I have created a tool, mmongo, that wraps all the Mongo DB client shell commands for convenient use on a Meteor database. If you use npm (Node Package Manager), you can install it with:
npm install -g mmongo
Otherwise, see README.
To back up your Meteor database, you can now do:
mmongo test.meteor.com dump
To upload it to your local development meteor would be:
mmongo restore dump/test_meteor_com
And if you accidentally delete your production database:
mmongo test.meteor.com --eval 'db.dropDatabase()' # whoops!
You can easily restore it:
mmongo test.meteor.com restore dump/test_meteor_com
If you'd rather export a collection (say tasks) to something readable:
mmongo test.meteor.com export -c tasks -o tasks.json
Then you can open up tasks.json in your text editor, do some changes and insert the changes with:
mmongo test.meteor.com import tasks.json -c tasks --upsert
Github, NPM
I suppose your data is in a mongodb database, so if that's the case, the question is more mongo-related than meteor-related. You may take a look at the mongoexport and mongoimport command line tools.
Edit (for example):
mongoexport -h flame.mongohq.com:12345 -u my_user -p my_pwd -d my_db -c my_coll
You need to install mongodb on your computer to have this command line tool, and obviously you need your mongodb information. In the above example, I connect to MongoHQ (flame.mongohq.com is the host, '12345' is the port of your mongo server), but I don't know which Mongo host is actually used by the meteor hosting. If you tried the Meteor examples (TODOs, Leaderboard, etc.) locally, chances are you already have Mongo installed, since it uses a local server by default.
Here is another solution in bash
#! /bin/bash
# inspired by http://stackoverflow.com/questions/11353547/bash-string-extraction-manipulation
# http://www.davidpashley.com/articles/writing-robust-shell-scripts/
set -o nounset
set -o errexit
set -o pipefail
set -x
# stackoverflow.com/questions/7216358/date-command-on-os-x-doesnt-have-iso-8601-i-option
function nowString {
date -u +"%Y-%m-%dT%H:%M:%SZ"
}
NOW=$(nowString)
# prod_url="mongodb://...:...#...:.../..."
prod_pattern="mongodb://([^:]+):([^#]+)#([^:]+):([^/]+)/(.*)"
prod_url=$(meteor mongo katapoolt --url | tr -d '\n')
[[ ${prod_url} =~ ${prod_pattern} ]]
PROD_USER="${BASH_REMATCH[1]}"
PROD_PASSWORD="${BASH_REMATCH[2]}"
PROD_HOST="${BASH_REMATCH[3]}"
PROD_PORT="${BASH_REMATCH[4]}"
PROD_DB="${BASH_REMATCH[5]}"
PROD_DUMP_DIR=dumps/${NOW}
mkdir -p dumps
# local_url="mongodb://...:.../..."
local_pattern="mongodb://([^:]+):([^/]+)/(.*)"
local_url=$(meteor mongo --url | tr -d '\n')
[[ ${local_url} =~ ${local_pattern} ]]
LOCAL_HOST="${BASH_REMATCH[1]}"
LOCAL_PORT="${BASH_REMATCH[2]}"
LOCAL_DB="${BASH_REMATCH[3]}"
mongodump --host ${PROD_HOST} --port ${PROD_PORT} --username ${PROD_USER} --password ${PROD_PASSWORD} --db ${PROD_DB} --out ${PROD_DUMP_DIR}
mongorestore --port ${LOCAL_PORT} --host ${LOCAL_HOST} --db ${LOCAL_DB} ${PROD_DUMP_DIR}/${PROD_DB}
meteor-backup is by far the easiest way to do this.
sudo npm install -g meteor-db-utils
meteor-backup [domain] [collection...]
As of March 2015 you still need to specify all the collections you want to fetch, though (until this issue is resolved).
Stuff from the past below
I'm doing
mongodump $(meteor mongo -U example.meteor.com | coffee url2args.cfee)
together with this little coffeescript, with a mangled extension in order not to confuse Meteor, url2args.cfee:
stdin = process.openStdin()
stdin.setEncoding 'utf8'
stdin.on 'data', (input) ->
m = input.match /mongodb:\/\/(\w+):((\w+-)+\w+)@((\w+\.)+\w+):27017\/(\w+)/
console.log "-u #{m[1]} -h #{m[4]} -p #{m[2]} -d #{m[6]}"
(it would be nicer if meteor mongo -U --mongodumpoptions would give these options, or if mongodump would accept the mongo:// URL)
# How to upload local db to meteor:
# -h = host, -d = database name, -o = dump folder name
mongodump -h 127.0.0.1:3001 -d meteor -o meteor
# get meteor db url, username, and password
meteor mongo --url myapp.meteor.com
# -h = host, -d = database name (app domain), -p = password, folder = the path to the dumped db
mongorestore -u client -h production-db-a2.meteor.io:27017 -d myapp_meteor_com -p 'password' folder/
While uploading the local db to the remote db, I got an assertion exception (note the http:// prefix on -h below; mongorestore expects a bare host:port):
shubham#shubham-PC:$ mongorestore -u client -h http://production-db-a2.meteor.io:27017 -d myapp_meteor_com -p my_password local/
2015-04-22T16:37:38.504+0530 Assertion failure _setName.size() src/mongo/client/dbclientinterface.h 219
2015-04-22T16:37:38.506+0530 0xdcc299 0xd6c7c8 0xd4bfd2 0x663468 0x65d82e 0x605f98 0x606442 0x7f5d102f8ec5 0x60af41
mongorestore(_ZN5mongo15printStackTraceERSo+0x39) [0xdcc299]
mongorestore(_ZN5mongo10logContextEPKc+0x198) [0xd6c7c8]
mongorestore(_ZN5mongo12verifyFailedEPKcS1_j+0x102) [0xd4bfd2]
mongorestore(_ZN5mongo16ConnectionStringC2ENS0_14ConnectionTypeERKSsS3_+0x1c8) [0x663468]
mongorestore(_ZN5mongo16ConnectionString5parseERKSsRSs+0x1ce) [0x65d82e]
mongorestore(_ZN5mongo4Tool4mainEiPPcS2_+0x2c8) [0x605f98]
mongorestore(main+0x42) [0x606442]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f5d102f8ec5]
mongorestore() [0x60af41]
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/client/dbclientinterface.h:219
Aborted (core dumped)
I made this simple Rakefile to copy the live db to local.
To restore the live db to my local machine I just do...
rake copy_live_db
Replace myapp with the name of your meteor.com app, e.g. myapp.meteor.com.
require 'rubygems'
require 'open-uri'
desc "Backup the live db to local ./dump folder"
task :backup_live_db do
uri = `meteor mongo myapp --url`
pass = uri.match(/client:([^@]+)@/)[1]
puts "Using live db password: #{pass}"
`mongodump -h meteor.m0.mongolayer.com:27017 -d myapp_meteor_com -u client -p #{pass}`
end
desc "Copy live database to local"
task :copy_live_db => :backup_live_db do
server = `meteor mongo --url`
uri = URI.parse(server)
`mongorestore --host #{uri.host} --port #{uri.port} --db meteor --drop dump/myapp_meteor_com/`
end
desc "Restore last backup"
task :restore do
server = `meteor mongo --url`
uri = URI.parse(server)
`mongorestore --host #{uri.host} --port #{uri.port} --db meteor --drop dump/myapp_meteor_com/`
end
To use an existing local mongodb database on your meteor deploy myAppName site, you need to dump, then restore the mongodb.
Follow the instructions above to mongodump (remember the path) and then run the following to generate your mongorestore command (this replaces the second step and the copy/pasting):
CMD=$(meteor mongo -U myAppName.meteor.com | tail -1 | sed 's_mongodb://\([a-z0-9\-]*\):\([a-f0-9\-]*\)@\(.*\)/\(.*\)_mongorestore -u \1 -p \2 -h \3 -d \4_')
then
$CMD /path/to/dump
From Can mongorestore take a single url argument instead of separate arguments?
I think you can use a remotely mounted file system via sshfs and then use rsync to synchronize the mongodb folder itself (or, I believe, your entire Meteor folder as well). This is like doing an incremental backup and is potentially more efficient.
It's possible to use the same solution for sending changes to your code, etc., so why not get your database changes back at the same time too? (Killing two birds with one stone.)
Here is a simple bash script that lets you dump your database from meteor.com hosted sites.
#!/bin/bash
site="rankz.meteor.com"
name="$(meteor mongo --url $site)"
echo $name
IFS='@' read -a mongoString <<< "$name"
echo "HEAD: ${mongoString[0]}"
echo "TAIL: ${mongoString[1]}"
IFS=':' read -a pwd <<< "${mongoString[0]}"
echo "${pwd[1]}"
echo "${pwd[1]:2}"
echo "${pwd[2]}"
IFS='/' read -a site <<< "${mongoString[1]}"
echo "${site[0]}"
echo "${site[1]}"
mongodump -u ${pwd[1]:2} -h ${site[0]} -d ${site[1]} \
    -p ${pwd[2]}