I have the following document structure in my DB:
"_id" : ...,
"index" : [
...,
...,
...
],
"value" : {
...,
...,
...,
...,
...,
...
}
I want to export all records for which the second element of index is "London", so I used:
mongoexport --db DbReport --collection cityconsumption --query {'index.1':"London"} --csv --out /tmp/me/Query1.csv --username 'DBReport' --password '...' --fields 'index,value'
but I got an error:
assertion: 10340 Failure parsing JSON string near: index.1:1^
Could you please help me?
Thanks,
Amir
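Assuming a Unix-like shell, the usual fix is to quote the entire query document so the shell hands it to mongoexport as a single argument, with double quotes inside the JSON:
mongoexport --db DbReport --collection cityconsumption --query '{"index.1": "London"}' --csv --fields 'index,value' --username 'DBReport' --password '...' --out /tmp/me/Query1.csv
(On 3.0+ versions of the tools, --csv is replaced by --type=csv.)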
I have the following BSON and JSON files from https://github.com/Apress/def-guide-to-mongodb/tree/master/9781484211830/The%20Definitive%20Guide%20to%20MongoDB
$ ls .
aggregation.bson aggregation.metadata.json mapreduce.bson mapreduce.metadata.json storage.bson text.json
How can I import them into MongoDB?
I tried to import each of them as a collection, but failed:
$ mongorestore -d test -c aggregation
2018-07-18T01:44:25.376-0400 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2018-07-18T01:44:25.377-0400 using default 'dump' directory
2018-07-18T01:44:25.377-0400 see mongorestore --help for usage information
2018-07-18T01:44:25.377-0400 Failed: mongorestore target 'dump' invalid: stat dump: no such file or directory
I am not sure whether I specified the file aggregation.bson correctly; the above command is what I learned from a similar example in a book.
Thanks.
UPDATE
In the following, why did the first fail and the second succeed? Which command shall I use?
$ mongoimport -d test -c aggregation --file aggregation.bson
2018-07-18T09:45:44.698-0400 connected to: localhost
2018-07-18T09:45:44.720-0400 Failed: error processing document #1: invalid character 'ยบ' looking for beginning of value
2018-07-18T09:45:44.720-0400 imported 0 documents
$ mongoimport -d test -c aggregation --file aggregation.metadata.json
2018-07-18T09:46:05.058-0400 connected to: localhost
2018-07-18T09:46:05.313-0400 imported 1 document
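The difference is the file format: mongoimport only parses JSON, CSV, and TSV text, so it fails on the binary BSON in aggregation.bson (hence the "invalid character" error), while the plain-JSON metadata file imports fine. For .bson files the matching tool is mongorestore pointed at the file itself, along the lines of:
mongorestore -d test -c aggregation aggregation.bson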
For plain JSON files, mongoimport works:
mongoimport --db dbName --collection collectionName --type json --file fileName.json
Update:
C:\Program Files\MongoDB\Server\4.0\bin>mongorestore -d test -c aggregation aggregation.bson
2018-07-19T10:28:39.963+0300 checking for collection data in aggregation.bson
2018-07-19T10:28:40.099+0300 restoring test.aggregation from aggregation.bson
2018-07-19T10:28:41.113+0300 no indexes to restore
2018-07-19T10:28:41.113+0300 finished restoring test.aggregation (1000 documents)
2018-07-19T10:28:41.113+0300 done
So I tried it and it worked fine for me. Do you have the file in your bin folder, or maybe the command you used wasn't complete? Here is the restored data:
db.aggregation.find().pretty().limit(2)
{
"_id" : ObjectId("51de841747f3a410e3000001"),
"num" : 1,
"color" : "blue",
"transport" : "train",
"fruits" : [
"orange",
"banana",
"kiwi"
],
"vegetables" : [
"corn",
"broccoli",
"potato"
]
}
{
"_id" : ObjectId("51de841747f3a410e3000005"),
"num" : 5,
"color" : "yellow",
"transport" : "plane",
"fruits" : [
"lemon",
"cherry",
"dragonfruit"
],
"vegetables" : [
"mushroom",
"capsicum",
"zucchini"
]
}
My doc:
db.org.insert({
"id" : 28,
"organisation" : "Mickey Mouse company",
"country" : "US",
"contactpersons" : [{
"title" : "",
"typecontact" : "D",
"mobilenumber" : "757784854",
"firstname" : "Mickey",
"lastname" : "Mouse",
"emailaddress" : "mickey#mouse.com"
},
{
"title" : "",
"typecontact" : "E",
"mobilenumber" : "757784854",
"firstname" : "Donald",
"lastname" : "Duck",
"emailaddress" : "donald#duck.com"
}],
"modifieddate" : "2013-11-21T16:04:49+0100"
});
My query:
mongoexport --host localhost --db sample --collection org --type csv --fields country,contactpersons.0.firstname,contactpersons.0.emailaddress --out D:\info_docs\org.csv
With this query I'm able to get only the first element's values from contactpersons, but I'm trying to export the second element's values as well.
How can I resolve this issue? Can anyone please help me out?
You're getting exactly the first document in contactpersons because you are only exporting the first element of the array (contactpersons.0.firstname). mongoexport can't export several or all elements of an array, so you need to unwind the array and save the result in another collection. You can do this with the aggregation framework.
First, do an $unwind of contactpersons, then $project the fields you want to use (in your example, country and contactpersons), and finally save the output in a new collection with $out.
db.org.aggregate([
{$unwind: '$contactpersons'},
{$project: {_id: 0, org_id: '$id', contacts: '$contactpersons', country: 1}},
{$out: 'aggregate_org'}
])
Now you can do a mongoexport of contacts (which is the result of the $unwind of contactpersons) and country.
mongoexport --host localhost --db sample --collection aggregate_org --type=csv --fields country,contacts.firstname,contacts.emailaddress --out D:\info_docs\org.csv
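With the sample document above, the exported CSV should look roughly like this (mongoexport writes the requested field names as the header row):
country,contacts.firstname,contacts.emailaddress
US,Mickey,mickey@mouse.com
US,Donald,donald@duck.com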
I have a MongoDB collection with the following documents. Some of the documents have one field and some have two. I am interested in exporting only those that have the field "productid". I am using the query below but getting the error: "cannot unmarshal string into Go value of type map[string]interface {}".
The document looks like this:
[
{
"id" : 1
},
{
"id" : 2
},
{
"id" : 3,
"Product Info" : {
"ProductName" : "test"
}
}
]
The mongoexport command I am using is as follows:
mongoexport --username x --password x --host x --db mydb --collection mycol --query '{"Product Info.ProductName":{"$exists":true}}' --type=csv --fields id,productid --out "c:\myfile.csv"
I fixed this issue by wrapping the whole query in double quotes, so that the Windows shell passes it to mongoexport as a single argument (cmd.exe does not treat single quotes as quoting characters, so the original query never reached mongoexport as one JSON document and was parsed as a string instead):
mongoexport --username x --password x --host x --db mydb --collection mycol --query "{ 'Product Info.ProductName':{$exists:true}}" --type=csv --fields id,productid --out "c:\myfile.csv"
I'm working on a Java program to migrate data from MongoDB to Neo4j.
I have to export some Mongo documents to a CSV file.
I have, for example, this document:
"coached_Team" : [
{
"team_id" : "Pal.00",
"in_charge" : {
"from" : {
"day" : 25,
"month" : 9,
"year" : 2013
}
},
"matches" : 75
}
]
I have to export it to CSV. I read some other questions, for example this one, and I used that tip to export my document.
To export in csv I use this command:
Z:\path\to\Mongo\3.0\bin>mongoexport --db <database> --collection <collection> --type=csv --fields coached_Team.0.team_id,coached_Team.0.in_charge.from.day,coached_Team.0.in_charge.from.month,coached_Team.0.in_charge.from.year,coached_Team.0.matches --out "C:\path\to\output\file\output.csv"
But it did not work for me.
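One approach, borrowing the $unwind/$out technique from the earlier answer (a sketch: coaches and coached_team_flat are placeholder collection names), is to flatten the array first and then export the flat collection:
db.coaches.aggregate([
{$unwind: '$coached_Team'},
{$project: {_id: 0, team_id: '$coached_Team.team_id', from_day: '$coached_Team.in_charge.from.day', from_month: '$coached_Team.in_charge.from.month', from_year: '$coached_Team.in_charge.from.year', matches: '$coached_Team.matches'}},
{$out: 'coached_team_flat'}
])
mongoexport --db <database> --collection coached_team_flat --type=csv --fields team_id,from_day,from_month,from_year,matches --out "C:\path\to\output\file\output.csv"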
Is there any way to tar/gzip mongo dumps like you can with MySQL dumps?
For example, for mysqldumps, you can write a command as such:
mysqldump -u <username> --password=<password> --all-databases | gzip > all-databases.`date +%F`.gz
Is there an equivalent way to do the same for mongo dumps?
For mongo dumps I run this command:
mongodump --host localhost --out /backup
Is there a way to just pipe that to gzip? I tried, but that didn't work.
Any ideas?
Version 3.2 introduced the --gzip and --archive options:
mongodump --db <yourdb> --gzip --archive=/path/to/archive
Then you can restore with:
mongorestore --gzip --archive=/path/to/archive
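To mirror the date-stamped mysqldump example above: --archive with no filename writes to stdout, so in a Unix shell the dump can be redirected into a dated file (the filename here is just an illustration):
mongodump --gzip --archive > all-databases.`date +%F`.gz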
Update (July 2015):
TOOLS-675 is now marked as complete, which allows dumping to an archive format in 3.2; gzip will be one of the options in the 3.2 versions of the mongodump/mongorestore tools. I will update with the relevant docs once they are live for 3.2.
Original answer (3.0 and below):
You can do this with a single collection by outputting mongodump to stdout and piping it to a compression program (gzip, bzip2), but you will only get data (no index information), and you cannot do it for a full database (multiple collections) for now. The relevant feature request for this functionality is SERVER-5190, for upvoting/watching purposes.
Here is a quick sample run-through of what is possible, using bzip2 in this example:
./mongo
MongoDB shell version: 2.6.1
connecting to: test
> db.foo.find()
{ "_id" : ObjectId("53ad8a3eb74b5ae2ff0ec93a"), "a" : 1 }
{ "_id" : ObjectId("53ad8ba445be9c4f7bd018b4"), "a" : 2 }
{ "_id" : ObjectId("53ad8ba645be9c4f7bd018b5"), "a" : 3 }
{ "_id" : ObjectId("53ad8ba845be9c4f7bd018b6"), "a" : 4 }
{ "_id" : ObjectId("53ad8baa45be9c4f7bd018b7"), "a" : 5 }
>
bye
$ ./mongodump -d test -c foo -o - | bzip2 - > foo.bson.bz2
connected to: 127.0.0.1
$ bunzip2 foo.bson.bz2
$ ./bsondump foo.bson
{ "_id" : ObjectId( "53ad8a3eb74b5ae2ff0ec93a" ), "a" : 1 }
{ "_id" : ObjectId( "53ad8ba445be9c4f7bd018b4" ), "a" : 2 }
{ "_id" : ObjectId( "53ad8ba645be9c4f7bd018b5" ), "a" : 3 }
{ "_id" : ObjectId( "53ad8ba845be9c4f7bd018b6" ), "a" : 4 }
{ "_id" : ObjectId( "53ad8baa45be9c4f7bd018b7" ), "a" : 5 }
5 objects found
Compare that with a straight mongodump (you get the same foo.bson but the extra foo.metadata.json describing the indexes is not included above):
$ ./mongodump -d test -c foo -o .
connected to: 127.0.0.1
2014-06-27T16:24:20.802+0100 DATABASE: test to ./test
2014-06-27T16:24:20.802+0100 test.foo to ./test/foo.bson
2014-06-27T16:24:20.802+0100 5 documents
2014-06-27T16:24:20.802+0100 Metadata for test.foo to ./test/foo.metadata.json
$ ./bsondump test/foo.bson
{ "_id" : ObjectId( "53ad8a3eb74b5ae2ff0ec93a" ), "a" : 1 }
{ "_id" : ObjectId( "53ad8ba445be9c4f7bd018b4" ), "a" : 2 }
{ "_id" : ObjectId( "53ad8ba645be9c4f7bd018b5" ), "a" : 3 }
{ "_id" : ObjectId( "53ad8ba845be9c4f7bd018b6" ), "a" : 4 }
{ "_id" : ObjectId( "53ad8baa45be9c4f7bd018b7" ), "a" : 5 }
5 objects found
Export MongoDB as:
mongodump --host <host-ip> --port 27017 --db <database> --authenticationDatabase admin --username <username> --password <password> --gzip --archive > dump_`date "+%Y-%m-%d"`.gz
Import as:
mongorestore --host <host-ip> --port 27017 --db <database> --authenticationDatabase admin --username <username> --password <password> --gzip --archive=mongodump.gz
If you want to do it passing uri for your MongoDB replica set cluster
Dump:
mongodump --uri='mongodb://user:pass@primary_host,secondary_host/<db-name>?replicaSet=<replica-name>&authSource=admin' --gzip --archive > dump_`date "+%Y-%m-%d"`.gz
Restore:
mongorestore --uri='mongodb://user:pass@primary_host,secondary_host/<db-name>?replicaSet=<replica-name>&authSource=admin' --gzip --archive=<dump-file>.gz