I'm failing miserably to restore a single collection into an existing database.
I'm running Ubuntu 14.04 with Mongo version 2.6.7.
There is a dump/mydbname/contents.bson under my home directory.
If I run
mongorestore --collection contents --db mydbname
Then I get:
connected to: 127.0.0.1
don't know what to do with file [dump]
If I add in the path
mongorestore --collection contents --db mydbname --dbpath dump/mydbname
Then I get:
If you are running a mongod on the same path you should connect to that instead of direct data file access
I've tried various other combinations, options, etc. and just can't puzzle it out, so I'm coming to the community for help!
If you want to restore a single collection then you have to specify the dump file of the collection. The dump file of the collection is found in the 'dump/dbname/' folder. So assuming your dump folder is in your current working directory, the command would go something like:
mongorestore --db mydbname --collection mycollection dump/mydbname/mycollection.bson
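A note on overwriting: if the collection already exists in the target database and you want to replace it rather than merge into it, mongorestore supports a --drop flag that drops each collection before restoring it. A minimal sketch using the same paths as above:
mongorestore --drop --db mydbname --collection contents dump/mydbname/contents.bson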
I think this is now done with the --nsInclude option:
mongorestore --nsInclude test.purchaseorders dump/
dump/ is the folder with your mongodump data, test is the db, and purchaseorders is the collection.
https://docs.mongodb.com/manual/reference/program/mongorestore/
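--nsInclude also works when restoring from an archive file rather than a dump directory. A sketch, assuming a gzipped archive named dump.gz that was created with mongodump --archive=dump.gz --gzip:
mongorestore --gzip --archive=dump.gz --nsInclude "test.purchaseorders"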
Steps to restore a specific collection in MongoDB:
1) Go to the directory where your dump folder exists.
2) Execute the following command, modifying it according to your db name and collection name.
mongorestore --db mydbname --collection mycollection dump/mydbname/mycollection.bson
If you get an error like Failed: yourdbname.collection.name: error creating indexes for collection.name: createIndex error: The field 'safe' is not valid for an index specification, you can skip index restoration with the following command:
mongorestore --db mydbname --collection mycollection dump/mydbname/mycollection.bson --noIndexRestore
If you are restoring multiple collections, you can use a loop:
for file in "$HOME/mongodump/dev/<your-db>/"* ; do
    file="$(basename "$file")"    # filename only
    # skip metadata, system, and lock files
    if [[ "$file" != *metadata* && "$file" != system.* && "$file" != locks.* ]]; then
        # "${file%.*}" strips the extension to get the collection name
        mongorestore \
            --db cdt_dev \
            --collection "${file%.*}" \
            --host "<your-host>" \
            --authenticationDatabase "<your-auth-db>" \
            -u "user" \
            -p "pwd" \
            "$HOME/mongodump/dev/<your-db>/$file"
    fi
done
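On mongorestore 3.4 and later, a sketch of the same idea without a shell loop: an --nsInclude wildcard restores every collection of a database in one pass (this assumes you are restoring into the same database name as in the dump; combine with --nsFrom/--nsTo to rename, as other answers here show):
mongorestore --nsInclude "<your-db>.*" \
    --host "<your-host>" \
    --authenticationDatabase "<your-auth-db>" \
    -u "user" -p "pwd" \
    "$HOME/mongodump/dev/"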
Related
In MongoDB, is it possible to dump a database and restore the content to a different database? For example like this:
mongodump --db db1 --out dumpdir
mongorestore --db db2 --dir dumpdir
But it doesn't work. Here's the error message:
building a list of collections to restore from dumpdir dir
don't know what to do with subdirectory "dumpdir/db1", skipping...
done
You need to actually point at the "database name" container directory "within" the output directory from the previous dump:
mongorestore -d db2 dumpdir/db1
And usually just <path> is fine as a positional argument rather than with --dir, which would only be needed when "out of position", i.e. "in the middle of the arguments list".
P.S. For an archive backup file (tested with mongorestore v3.4.10):
mongorestore --gzip --archive=${BACKUP_FILE_GZ} --nsFrom "${DB_NAME}.*" --nsTo "${DB_NAME_RESTORE}.*"
mongodump --db=DB_NAME --out=/path-to-dump
mongorestore --nsFrom "DB_NAME.*" --nsTo "NEW_DB_NAME.*" /path-to-dump
In addition to Blakes Seven's answer: if your databases use authentication, I got this to work using the --uri option, which requires a recent MongoDB version (>3.4.6):
mongodump --uri="mongodb://$sourceUser:$sourcePwd@$sourceHost/$sourceDb" --gzip --archive | mongorestore --uri="mongodb://$targetUser:$targetPwd@$targetHost/$targetDb" --nsFrom="$sourceDb.*" --nsTo="$targetDb.*" --gzip --archive
Thank you, @Blakes Seven!
Adding Docker notes:
container names are interchangeable with container IDs
(assumes authentication, and named containers my_db and new_db)
dump:
docker exec -it my_db bash -c "mongodump --uri mongodb://db:password@localhost:27017/my_db --archive --gzip | cat > /tmp/backup.gz"
copy to workstation:
docker cp my_db:/tmp/backup.gz c:\backups\backup.gz
copy into new container (from backups folder):
docker cp .\backup.gz new_db:/tmp
restore from container tmp folder:
docker exec -it new_db bash -c "mongorestore --uri mongodb://db:password@localhost:27017/new_db --nsFrom 'my_db.*' --nsTo 'new_db.*' --gzip --archive=/tmp/backup.gz"
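A sketch of a direct container-to-container transfer with no intermediate file, piping mongodump's archive output on the host straight into mongorestore (assumes the same container names and credentials as above; note the -i flag so the second container reads stdin):
docker exec my_db mongodump --uri mongodb://db:password@localhost:27017/my_db --archive --gzip | docker exec -i new_db mongorestore --uri mongodb://db:password@localhost:27017/new_db --nsFrom 'my_db.*' --nsTo 'new_db.*' --archive --gzip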
You can restore a DB with another name. The syntax is:
mongorestore --port 27017 -u="username" -p="password" \
    --nsFrom "dbname.*" \
    --nsTo "new_dbname.*" \
    --authenticationDatabase admin /backup_path
I want to import dump data from my .gz file.
The file is located at /home/Alex/Documents/Abc/dump.gz and the name of the db is "Alex".
I have tried mongorestore --gzip --db "Alex" /home/Alex/Documents/Abc/dump.gz
But it shows this error:
2018-10-31T12:54:58.359+0530 the --db and --collection args should
only be used when restoring from a BSON file. Other uses are
deprecated and will not exist in the future; use --nsInclude instead
2018-10-31T12:54:58.359+0530 Failed: file
/home/Alex/Documents/Abc/dump.gz does not have .bson extension.
How can I import it?
Dump command:
mongodump --host localhost:27017 --gzip --db Alex --out ./testSO
Restore Command:
mongorestore --host localhost:27017 --gzip --db Alex ./testSO/Alex
Works perfectly!
While using archive:
Dump command:
mongodump --host localhost:27017 --archive=dump.gz --gzip --db Alex
Restore Command:
mongorestore --host localhost:27017 --gzip --archive=dump.gz --db Alex
Note: while using an archive you need to stick with the original database name. A different database name or collection name is not supported this way.
This is what worked for me with the latest versions (100.5.1) of the MongoDB database tools.
mongorestore --uri=<CONNECTION_URI> --gzip --archive=<ARCHIVE_NAME> --nsFrom "<SOURCE_DB_NAME>.*" --nsTo "<DEST_DB_NAME>.*"
Unpack .tgz files and restore the DB
tar zxvf fileNameHere.tgz
mongorestore --port 27017 -u="username" -p="password" --authenticationDatabase admin /backup_path
mongorestore doesn't find the BSON files inside the gzip file because the mongodump was made with different paths than the ones used for the restore.
To solve the problem, the fastest and safest way is to extract the gzip file and run mongorestore from the folder above the one containing the json and bson files.
For example, the dump.gz file was made in such a way that the backups are saved within the data/backup/mongo/dump/ folder path.
Extracting the dump.gz file with the command tar -xvf dump.gz, you will find a folder named data with the subfolders data/backup/mongo/dump/ inside (the dump folder contains all the backup files with json and bson extensions; these files represent the databases and collections).
Go to the parent folder of the dump folder, e.g. cd data/backup/mongo/
Now you can run the restore command
mongorestore --authenticationDatabase admin dump/
Where dump/ is the folder containing the backup files.
You may need the -h argument to point at the server host (e.g. localhost) and -u followed by a username allowed to run restore operations (e.g. root), as sketched below.
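A sketch of that fuller invocation, run from data/backup/mongo/ (the host localhost and user root are illustrative):
mongorestore -h localhost -u root --authenticationDatabase admin dump/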
I created a Java application on OpenShift with the MongoDB cartridge.
My application runs fine, both locally on JBoss AS7 and on OpenShift.
So far so good.
Now I would like to import a csv into the MongoDB on the OpenShift cloud.
The command is fairly simple:
mongoimport -d dbName -c collectionName --type csv data.csv --headerline
This works fine locally, and I know how to connect to the OpenShift shell and the remote MongoDB. But my question is: how can I use a locally stored file (data.csv) when executing this command in an SSH shell?
I found this on the OpenShift forum, but I don't really know what this tmp directory is or how to use it.
I work on Windows, so I use Cygwin as a shell substitute.
Thanks for any help
The tmp directory is shorthand for /tmp. On Linux, it's a directory that is cleaned out whenever you restart the computer, so it's a good place for temporary files.
So, you could do something like:
$ rsync data.csv openshiftUsername#openshiftHostname:/tmp
$ ssh openshiftUsername#openshiftHostname
$ mongoimport -d dbName -c collectionName --type csv /tmp/data.csv --headerline
This is what I needed in October 2014:
mongoimport --host $OPENSHIFT_MONGODB_DB_HOST --port $OPENSHIFT_MONGODB_DB_PORT -u admin -p 123456789 -d dbName -c users /tmp/db.json
Note that I used a json file instead of csv
When using OpenShift you must use the environment variables to ensure your values are always correct. See the OpenShift documentation for more about OpenShift environment variables.
SSH into your OpenShift server, then run the following (remember to change the starred placeholders in the command to match your values):
mongoimport --headerline --type csv \
--host $OPENSHIFT_NOSQL_DB_HOST \
--port $OPENSHIFT_NOSQL_DB_PORT \
--db **your db name** \
--collection **your collection name** \
--username $OPENSHIFT_NOSQL_DB_USERNAME \
--password $OPENSHIFT_NOSQL_DB_PASSWORD \
--file ~/**your app name**/data/**your csv file name**
NOTE
When importing csv files using mongoimport the data is saved as strings and numbers only. It will not save arrays or objects. If you have arrays or objects to be saved you must first convert your csv file into a proper json file and then mongoimport the json file, as sketched below.
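A sketch of that follow-up JSON import, assuming the converted file data.json contains one document per line (add --jsonArray if it is instead a single JSON array):
mongoimport --host $OPENSHIFT_NOSQL_DB_HOST --port $OPENSHIFT_NOSQL_DB_PORT --username $OPENSHIFT_NOSQL_DB_USERNAME --password $OPENSHIFT_NOSQL_DB_PASSWORD --db **your db name** --collection **your collection name** --file data.json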
I installed RockMongo on my openshift instance to manage the mongodb.
It's a nice user interface, a bit like phpMyAdmin for MySQL.
For users who wish to use mongorestore, the following worked for me:
First copy your dump using scp to the data dir on openshift:
scp yourfile.bson yourhex#yourappname.rhcloud.com:app-root/data
Then rhc ssh into your app, cd to the app-root/data folder, and run:
mongorestore --host $OPENSHIFT_MONGODB_DB_HOST \
    --port $OPENSHIFT_MONGODB_DB_PORT \
    --username $OPENSHIFT_MONGODB_DB_USERNAME \
    --password $OPENSHIFT_MONGODB_DB_PASSWORD \
    -d yourdb \
    -c yourcollection \
    yourfilename.bson --drop
Similar to Simon's answer, but this is how I imported .json to the database:
mongoimport --host $OPENSHIFT_MONGODB_DB_HOST -u admin -p 123456 --db dbname --collection grades < grades.json
Dumped a MongoDB successfully:
$ mongodump -h ourhost.com:portnumber -d db_name01 -u username -p
I need to import it into a test server and am struggling with it; please help me figure it out.
I tried some ways:
$ mongoimport -h host.com:port -c dbname -d dbname_test -u username -p
connected to host.
Password: ...
Gives this error:
assertion: 9997 auth failed: { errmsg: "auth fails", ok: 0.0 }
$ mongoimport -h host.com:port -d dbname_test -u username -p
Gives this error:
no collection specified!
How do I specify which collection to use? What should I use for -d: the database I'd like to upload, or the one I want to use as the test database? I would like to import the full DB, not only one collection of it.
The counterpart to mongodump is mongorestore (and the counterpart to mongoimport is mongoexport); the major difference is in the format of the files created and understood by the tools (dump and restore read and write BSON files; export and import deal with text file formats: JSON, CSV, TSV).
If you've already run mongodump, you should have a directory named dump, with a subdirectory for each database that was dumped, and a file in those directories for each collection. You can then restore this with a command like:
mongorestore -h host.com:port -d dbname_test -u username -p password dump/dbname/
Assuming that you want to put the contents of the database dbname into a new database called dbname_test.
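For reference, a typical dump directory produced by mongodump looks like this (names are illustrative):
dump/
  dbname/
    contents.bson
    contents.metadata.json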
You may have to specify the authentication database:
mongoimport -h localhost:27017 --authenticationDatabase admin -u user -p -d database -c collection --type csv --headerline --file awesomedata.csv
For anyone else who might reach this question after all these years (like I did): if you are using
a dump which was created using mongodump
and trying to restore from a dump directory
and going to be using the default port 27017
then all you have to do is:
mongorestore dump/
Refer to the mongorestore docs for more info. Cheers!
When you do a mongodump, it dumps in a binary format. You need to use mongorestore to "import" this data.
mongoimport is for importing data that was exported using mongoexport.
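For completeness, a sketch of that text-based counterpart pair (database, collection, and file names are illustrative):
mongoexport --db mydb --collection mycoll --out mycoll.json
mongoimport --db mydb --collection mycoll --file mycoll.json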