How to import a large JSON file into MongoDB using mongoimport?

I am trying to import a large JSON data set into MongoDB using mongoimport.
mongoimport --db test --collection sam1 --file 1234.json --jsonArray
error:
2014-07-02T15:57:16.406+0530 error: object to insert too large
2014-07-02T15:57:16.406+0530 tried to import 1 objects

Please try adding this option: --batchSize 1
Like:
mongoimport --db test --collection sam1 --file 1234.json --jsonArray --batchSize 1
The data will then be parsed and stored in the database batch by batch. With --batchSize 1, each document is sent to the server in its own insert, so an accumulated batch can no longer exceed the server's maximum message size.

Related

Is there any way to import a zipped JSON file into MongoDB using mongoimport?

I have created a zip file from a large JSON file (containing a JSON array). I want to use this zip file with the mongoimport command. Is it possible to import this zip file into MongoDB using mongoimport?
COMMAND:
mongoimport --db test --collection inventory ^
  --authenticationDatabase admin --username <user> --password <password> ^
  --drop --file ~\downloads\inventory.crud.json.zip --jsonArray
OUTPUT:
Failed: error reading separator after document #1: bad JSON array format
Since this is a zip file, mongoimport does not find a valid JSON array in it. Is there a way to unzip the file as part of the mongoimport command?
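mongoimport cannot decompress archives itself, but it reads from standard input when --file is omitted, so on a Unix-like shell you can stream the unzipped JSON into it. A minimal sketch, assuming the archive contains a single JSON file:
unzip -p ~/downloads/inventory.crud.json.zip | mongoimport --db test --collection inventory --authenticationDatabase admin --username <user> --password <password> --drop --jsonArray
Here unzip -p extracts the archive's contents to stdout, and mongoimport consumes the stream as if it were the file.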

Import and export a single collection in MongoDB using cmd

I tried to import a single database collection, but found that the following command imports the whole database, not a single collection:
mongorestore -d db_name dump_folder_path
So here is the solution for importing and exporting a single collection from the MongoDB command line.
For Import
mongoimport --db Mydatabase --collection mycollection --drop --file ~/var/www/html/collection/mycollection.json
For Export
mongoexport --collection=mycollection --db=Mydatabase --out=/var/www/html/collection/mycollection.json
Hope this works :)
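As an aside, if you have a BSON dump produced by mongodump rather than a JSON export, mongorestore can also target a single collection. A sketch, assuming the standard dump folder layout:
mongorestore -d Mydatabase -c mycollection dump_folder_path/Mydatabase/mycollection.bson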

How to import a large JSON file into MongoDB?

I have a big JSON file (~300 GB) composed of many dicts, and I am trying to import it into MongoDB. The method I tried was mongoimport, using this:
mongoimport --db <DB_NAME> --collection <COLLECTION_NAME> --file <FILE_DIRECTORY> --jsonArray --batchSize 1
but after some insertions it shows an error like this: Failed: error processing document #89602: unexpected EOF. I have no idea why this happens.
Are there any other methods to make it work?
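An "unexpected EOF" at a specific document usually means the JSON is truncated or malformed at that point, so the first step is to validate the end of the file. If the file is valid but simply huge, a common workaround is to convert the top-level array to newline-delimited JSON (one document per line) and drop --jsonArray, since mongoimport streams line-delimited input. A sketch using jq's streaming parser, which avoids loading the whole 300 GB array into memory:
jq -cn --stream 'fromstream(1|truncate_stream(inputs))' <FILE_DIRECTORY> | mongoimport --db <DB_NAME> --collection <COLLECTION_NAME>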

mongoimport csv with headerline and datatype

I'm trying to import a CSV into MongoDB using the following command:
mongoimport --db users --collection contacts --file data.csv --headerline
The database exists, but the collection does not; I want to create it and use the first row of the CSV as the field names. Why am I getting this error:
error validating settings: must specify --fields, --fieldFile or --headerline to import this file type
I also would like to know:
- how to copy/import data from one collection into another (basically the syntax)
- how datatypes from the CSV are handled in MongoDB when imported; do I need to specify datatypes for the headers, or will MongoDB infer them from the CSV values?
To solve this:
Either make sure the first line of your data.csv file contains the field names of the data to be parsed, and then execute:
mongoimport --db users --collection contacts --type csv --headerline --file data.csv
Or
Define the list of field names that the CSV values should be mapped to using --fields:
mongoimport --db users --collection contacts --type csv --file data.csv --fields "name,surname,etc"
Note that --fields takes a single comma-separated string of field names, not a bracketed list.
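On the two side questions: to copy data from one collection into another, you can run an aggregation with $out from the mongo shell. A sketch, with a hypothetical target collection name contacts_copy:
mongo users --eval 'db.contacts.aggregate([{ $match: {} }, { $out: "contacts_copy" }])'
As for datatypes, mongoimport does basic type inference for CSV input; for explicit control, recent versions support --columnsHaveTypes, where each field name carries a type annotation. Another sketch, with hypothetical columns and assuming the file has no header row (with --headerline, the annotations go in the CSV header itself):
mongoimport --db users --collection contacts --type csv --file data.csv --columnsHaveTypes --fields "name.string(),age.int32()"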

I want to import the JSON file only if its documents don't exist

I am using MongoDB 3.4.
I want to import a JSON file (containing a JSON array) into mongod using a bash script, and I want to import the documents only if they don't already exist. I tried --upsert, but it does not work.
Is there any easy way to do it? Thanks
mongoimport --db dbName --collection collectionName --file fileName.json --jsonArray --upsert
mongoimport -d dbName -c collectionName jsonFile.json -vvvvv
Even though the output of mongoimport says that n objects were imported, the existing documents with the same data have not been overwritten.
If you use --upsert, it will update the existing documents.
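Note that --upsert matches documents by _id by default, so if the incoming JSON carries different _id values (or none at all), every run simply inserts new documents. With mongoimport 3.4 there are two options, sketched here with a hypothetical unique field named email. The default --mode insert already gives "import only if it doesn't exist" behaviour when _id matches, because duplicates fail with a duplicate key error and are skipped; to match on your own key instead, upsert on chosen fields:
mongoimport --db dbName --collection collectionName --file fileName.json --jsonArray --mode upsert --upsertFields email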