How to import comma-separated JSON documents into MongoDB? - mongodb

I have dozens of JSON files given to me by a colleague. They contain comma-separated documents that look like this:
{
"_id" : ObjectId("566a8d08b9ac7b7dc2ddb90a"),
"login_ip" : "180.173.143.x",
"login_time" : NumberLong("1478757697373"),
"logout_time" : NumberLong("1478757878035"),
"role" : NumberInt("5"),
"server_ip" : "115.28.94.x",
"thirdPartyId" : NumberInt("-1"),
"ver" : NumberInt("1036")
},
{
"_id" : ObjectId("566a8d0db9ac7b7dc2ddb90b"),
"login_ip" : "116.226.162.x",
"login_time" : NumberLong("1456103011531"),
"logout_time" : NumberLong("1456111567354"),
"role" : NumberInt("10002"),
"server_ip" : "115.28.94.x",
"thirdPartyId" : NumberInt("6"),
"ver" : NumberInt("1056")
},
...
I've tried to import them into my local MongoDB with the mongoimport tool, but it fails to locate the start of the second document and complains about the files' syntax, even though the first document parses fine.
+ mongoimport.exe --db eques --collection users_2017_04_25 --type json --bypassDocumentValidation --file 'E:\sample\mongdo/users_2017_04_25.json'
2017-06-12T14:01:32.029+0800 connected to: localhost
2017-06-12T14:01:32.040+0800 Failed: error processing document #2: invalid character ',' looking for beginning of value
2017-06-12T14:01:32.040+0800 imported 0 documents
PS: there are many files to import, so please don't suggest turning them into JSON arrays.
Please help.

The documents above are written in MongoDB shell syntax (ObjectId, NumberLong, NumberInt), much of which the mongoimport tool doesn't understand.
You should first convert your data to standard JSON (in the Mongo world this is called Strict mode), then feed it to the mongoimport tool.
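One way to do that conversion is a small script that rewrites the shell type wrappers into strict-mode JSON and emits one document per line, which mongoimport accepts directly. This is a rough sketch assuming the simple one-field-per-line layout shown above; the regexes are not robust if string values happen to contain these patterns:

```python
import re

def shell_to_strict(text):
    """Convert mongo-shell-style documents to newline-delimited strict JSON."""
    # ObjectId("...")   -> {"$oid": "..."}
    text = re.sub(r'ObjectId\("([^"]*)"\)', r'{"$oid": "\1"}', text)
    # NumberLong("...") -> {"$numberLong": "..."}
    text = re.sub(r'NumberLong\("([^"]*)"\)', r'{"$numberLong": "\1"}', text)
    # NumberInt("...")  -> plain integer literal
    text = re.sub(r'NumberInt\("(-?\d+)"\)', r'\1', text)
    # the "}," between documents becomes a newline boundary
    text = re.sub(r'\}\s*,\s*(?=\{)', '}\n', text)
    docs = []
    for chunk in text.split('\n{'):
        chunk = chunk.strip()
        if not chunk:
            continue
        if not chunk.startswith('{'):
            chunk = '{' + chunk
        # collapse each document onto a single line
        docs.append(' '.join(chunk.split()))
    return '\n'.join(docs)
```

You could then run mongoimport on each converted file without --jsonArray, since one JSON document per line is its native input format.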

Related

mongoexport convert numeric value

I'm trying to export phone numbers from a collection. Below is the sample document
{ "_id" : ObjectId("5ad5cf864717256ff02b4923"),"userName":"9619324746", "firstName" : "D H", "contactPhone" : 9619324746}
The export command that I used is below
mongoexport --db dbname --collection accounts --type=json --out accounts.json --fields contactPhone,userName
And the contents of JSON looks like below
{"_id":{"$oid":"5ad5cf864717256ff02b4923"},"userName":"9619324746","contactPhone":9.619324746e+09}
Can somebody help me keep the contactPhone value from being converted? Thank you.
-Srini
If mongoexport exported 123 as 123.0, then 123 was stored as a Double in the document. You should try inserting the value as a 32- or 64-bit integer:
db.collection.insert({
"tweetId" : NumberLong(1234567)
})
mongoexport emits JSON using the strict-mode JSON representation, which embeds type information so that MongoDB JSON parsers (like mongoimport) can reproduce the correct BSON data types while the exported JSON still conforms to the JSON standard:
{ "tweetId" : { "$numberLong" : "1234567" } }
To preserve all the type information, use mongodump/mongorestore instead. To export all field values as strings, you'll need to write your own script with a driver that fetches each doc and stringifies all the values.
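For the last option, the per-document stringifying could look roughly like this. It is a hypothetical sketch working on already-decoded documents (plain dicts); a real script would fetch them with a driver such as PyMongo and write the results back out:

```python
def stringify_values(doc):
    """Recursively turn every leaf value into a string, keeping structure."""
    if isinstance(doc, dict):
        return {key: stringify_values(value) for key, value in doc.items()}
    if isinstance(doc, list):
        return [stringify_values(value) for value in doc]
    return str(doc)
```

Applied to a document whose contactPhone was stored as an integer, the phone number survives as text instead of being rendered in scientific notation.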

How to get mongodb schema dump

I can take a MongoDB data backup, but I am not sure about a MongoDB schema backup.
Is there any way to dump only the MongoDB schema, not the data?
You need to use mongorestore, which restores the binary dump produced by mongodump (for importing JSON or CSV you would use mongoimport instead).
You can read more about mongorestore in the docs below; I'd take a look and read up on them, as they are very helpful.
http://www.mongodb.org/display/DOCS/Import+Export+Tools#ImportExportTools-mongorestore
You can also check out http://learnmongo.com for tips and help!
Or you can visit the linked question: How to use the dumped data by mongodump? Hope this is helpful for you.
MongoDB is a NoSQL database.
There is no fixed schema for any collection, so there are no functions available in the mongo shell to find a collection's schema.
A fixed schema applies to RDBMS databases. In a NoSQL DB such as MongoDB it is not required, but you can enforce the same schema through your application logic if needed.
Documents in the same collection can have different schemas. See the example below:
db.mycollection.insert([
{ "_id":1, "name":"A"},
{ "_id":2, "name":"CD", "age":29},
{ "_id":3, "name":"AB", "age":28},
{ "_id":4, "name":"ABC", "age":27, "emailId":"abc@xyz.com"},
{ "_id":5, "name":"ABCD", "age":29, "emailId":"abcd@xyz.com"}]);
db.mycollection.find();
{ "_id" : 1, "name" : "A" }
{ "_id" : 2, "name" : "CD", "age" : 29 }
{ "_id" : 3, "name" : "AB", "age" : 28 }
{ "_id" : 4, "name" : "ABC", "age" : 27, "emailId" : "abc@xyz.com" }
{ "_id" : 5, "name" : "ABCD", "age" : 29, "emailId" : "abcd@xyz.com" }
One approach to find the schema, in the mongo shell:
var k = db.mycollection.findOne();
for ( i in k){print (i)};
_id
name
This approach will work for you only if all the documents in your collection follow the same schema.
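If the documents do differ, the same idea can be extended by unioning the keys of every document instead of reading only the first one. A sketch of that in Python over plain dicts (with a driver you would iterate a find() cursor instead):

```python
def collection_keys(docs):
    """Union of all top-level field names across the documents."""
    keys = set()
    for doc in docs:
        keys.update(doc.keys())
    return sorted(keys)
```

Run against the sample documents above, this reports every field name that appears anywhere in the collection, even though no single document has all of them.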
Here's how I did it:
mongodump --uri="mongodb://localhost/mydb" -o ./mydb-dump
find ./mydb-dump -name '*.bson' -exec truncate -s 0 {} \;
Explanation: I'm dumping the whole database, then truncating all the .bson files (which hold collection data) to zero bytes.
Limitation: Obviously, this is only practical if the source database is small, otherwise you're generating a huge data dump only to throw away most of it.
To restore this:
mongorestore --uri="mongodb://some-other-server/mydb" ./mydb-dump
If there's a better way to do this, I'd love to know what it is!
MongoDB Compass GUI has a way to export the schema to JSON.
At the time of this post, there doesn't seem to be a way to do this by bulk, so this will have to be done for each collection one by one.
From the docs:
You can export your schema after analyzing it. This is useful for sharing your schema and comparing schemas across collections.
If you have not already done so, analyze your schema: select your desired collection, click the Schema tab, and click Analyze Schema.
Once your schema has been analyzed, export it: in the top menu bar, click Collection and, from the dropdown, click Share Schema as JSON.
Your schema is copied to your clipboard as a JSON object.
See full docs here ~ https://www.mongodb.com/docs/compass/master/schema/export/

MongoDB: use MongoImport with csv to update single field only

I am trying to update a single field in each document in my collection using a CSV file and mongoimport with --upsert.
However the process removes all other fields in the document.
I have a Books Collection with documents like:
{
"_id" : "knOIv8ZUUK",
"Price" : 2.2,
"Title" : "Rats Ahoy"
}
{
"_id" : "okYEGuWznv",
"Price" : 3.3,
"Title" : "Friendly Fish"
}
a csv file:
_id,Price
knOIv8ZUUK,2.2
okYEGuWznv,3.3
And import using:
mongoimport --db local --collection Books --upsert --type csv
--headerline --file c:\import\newPrice
The result removes the Title field:
{
"_id" : "knOIv8ZUUK",
"Price" : 2.2
}
{
"_id" : "okYEGuWznv",
"Price" : 3.3
}
I incorrectly thought upsert would just update the imported fields.
So is there another process I can use to update just one field across a large number of documents?
Thanks
From the mongoimport --upsertFields doc, you need to use merge mode:
Merge existing documents that match a document in the import file with the new document. mongoimport will insert all other documents. Merge Matching Documents during Import describes how to use --mode merge.
Then specify the field name to match on (the default is _id):
--mode merge --upsertFields <fieldname>
So for your case, just:
--mode merge --upsertFields
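Conceptually, --mode merge treats each CSV row as a partial document: matching fields are overwritten, new fields are added, and fields missing from the row are left untouched. A rough Python illustration of that behaviour (not mongoimport's actual code):

```python
def merge_mode(existing, imported):
    """Mimic mongoimport --mode merge for one matched document."""
    merged = dict(existing)   # start from the stored document
    merged.update(imported)   # imported fields replace or add
    return merged
```

With the Books example, merging a row containing only _id and Price into the stored document keeps "Title" : "Rats Ahoy" intact, which is exactly what the full-replacement upsert fails to do.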
This feature was added in version 3.4; documentation here.
Check this option:
--mode insert|upsert|merge
In your case you could use:
--mode merge

Querying objectId in Meteor

I recently started loading MongoDB using mongoimport and realized that it has added an ObjectId associated with the "_id" field. When I query this using the "meteor mongo" command line, it works fine:
meteor:PRIMARY> db.Warehouses.find({"_id":ObjectId("571b7a89a990b5b8779b1315")})
{ "_id" : ObjectId("571b7a89a990b5b8779b1315"), "name" : "Stephan Lumber", "street" : "23 East St", "city" : "Plano", "state" : "TX"}
meteor:PRIMARY>
My code can read the value of "_id" using console.log("id ", currentId);
it returns ObjectID("571b7a89a990b5b8779b1315").
The variable currentId contains the ID of the currently selected warehouse.
However, when I try to use this to access the data in the code I keep getting "undefined" errors. I have tried many different ways. Here are a few:
warehouse = Warehouses.findOne({"_id":Mongo.ObjectID(currentId)});
warehouse = Warehouses.findOne({"_id":ObjectId(currentId)});
Also, for some reason "ObjectId" is not recognized in the latter.
I don't know what else to try. Any help would be appreciated.
You don't have to wrap it in Mongo.ObjectID or ObjectId; just pass currentId directly:
warehouse = Warehouses.findOne({"_id": currentId});

How do you import binary data with mongoimport?

I've tried every combination to import binary data for Mongo and I CANNOT get it to work. I've tried using new BinData(0, <bindata>) and I've tried using
{
"$binary" : "<bindata>",
"$type" : "0"
}
The first one gives me a parsing error. The second gives me an error reading "Invalid use of a reserved field name."
I can import other objects fine. For reference, I'm trying to import a BASE64-encoded image string. Here is my current version of the JSON I'm using:
{"_id" : "72984ce4-de03-407f-8911-e7b03f0fec26","OriginalWidth" : 73, "OriginalHeight" : 150, { "$binary" : "", "$type" : "0" }, "ContentType" : "image/jpeg", "Name" : "test.jpg", "Type" : "5ade8812-e64a-4c64-9e23-b3aa7722cfaa"}
I actually figured out this problem and thought I'd come back to SO to help anyone out who might be struggling.
Essentially, what I was doing was using C# to generate a JSON file. That file was used by an import script that ran and brought in all kinds of data. One of the fields in a collection required storing binary image data as a Base64-encoded string. The Mongo docs (Import Export Tools and Importing Interesting Types) were helpful, but only to a certain point.
To format the JSON properly for this, I had to use the following C# snippet to get an image file as a byte array and dump it into a string. There is a more efficient way of doing this for larger strings (StringBuilder for starters), but I'm simplifying for the purpose of illustrating the example:
byte[] bytes = File.ReadAllBytes(imageFile);
output = "{\"Data\" : {\"$binary\" : \"" + Convert.ToBase64String(bytes) + "\", \"$type\" : \"00\"}, \"ContentType\" : \"" + GetMimeType(fileInfo.Name) + "\", \"Name\" : \"" + fileInfo.Name + "\"}";
I kept failing on the type part, by the way. Type "00" translates to generic binary data, as specified in the BSON spec here: http://bsonspec.org/#/specification.
If you want to skip straight to the JSON, the above code outputs a string very similar to this:
{"Data": {"$binary": "[Byte array as Base64 string]", "$type": "00"}, "ContentType": "image/jpeg", "Name": "test.jpg"}
Then, I just used the mongoimport tool to process the resulting JSON.
Note: since I'm already in C#, I could've just used the Mongo DLL and done processing there, but for this particular case, I had to create the JSON files raw in the code. Fun times.
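For comparison, the same document can be built without manual string concatenation by handing the structure to a JSON library, which avoids escaping bugs entirely. A Python equivalent of the C# snippet above (the field names Data, ContentType, and Name are taken from that example):

```python
import base64
import json

def binary_doc(data, content_type, name):
    """Build a strict-mode JSON document embedding binary data for mongoimport."""
    return json.dumps({
        "Data": {
            "$binary": base64.b64encode(data).decode("ascii"),
            "$type": "00",   # generic binary subtype from the BSON spec
        },
        "ContentType": content_type,
        "Name": name,
    })
```

Writing one such document per line to a file gives mongoimport its native newline-delimited input.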