mongoexport does not write any records to json output file - mongodb

I have tried to export a json file from MongoDB with mongoexport in the following way:
$ mongoexport --db db --collection ds --dbpath ~/db --out ds.json
exported 0 records
Sat Apr 20 23:13:18 dbexit:
Sat Apr 20 23:13:18 [tools] shutdown: going to close listening sockets...
Sat Apr 20 23:13:18 [tools] shutdown: going to flush diaglog...
Sat Apr 20 23:13:18 [tools] shutdown: going to close sockets...
Sat Apr 20 23:13:18 [tools] shutdown: waiting for fs preallocator...
Sat Apr 20 23:13:18 [tools] shutdown: closing all files...
Sat Apr 20 23:13:18 [tools] closeAllFiles() finished
Sat Apr 20 23:13:18 [tools] shutdown: removing fs lock...
Sat Apr 20 23:13:18 dbexit: really exiting now
I do not understand why the created JSON file is empty, because the database actually contains the following data:
$ mongo
MongoDB shell version: 2.2.3
connecting to: test
> use ds
switched to db ds
> db.ds.find().pretty()
{
"_id" : "1_522311",
"chr" : 1,
"kg" : {
"yri" : {
"major" : "D",
"minor" : "A",
"maf" : 0.33036
},
"ceu" : {
"major" : "C",
"minor" : "A",
"maf" : 0.05263
}
},
"pos" : 522311
}
{
"_id" : "1_223336",
"chr" : 1,
"kg" : {
"yri" : {
"major" : "G",
"minor" : "C",
"maf" : 0.473214
},
"ceu" : {
"major" : "C",
"minor" : "G",
"maf" : 0.017544
},
"jptchb" : {
"major" : "C",
"minor" : "G",
"maf" : 0.220339
}
},
"pos" : 223336
}
What did I do wrong?
Thank you in advance.

It appears that you have a database called ds:
> use ds
switched to db ds
use ds switches the current database to the ds database (db from the shell is just an alias for the current database).
Then, you have a collection called ds as well:
> db.ds.find().pretty()
So, that means you have a ds database with a ds collection (ds.ds).
You should then use the export like this with the --db option set to ds (assuming the path to the database is correct):
mongoexport --db ds --collection ds --dbpath ~/db --out ds.json
Update for 3.0+: the --dbpath option is no longer available.
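On 3.0+ the export therefore runs against a live server instead of a data directory. A sketch of the equivalent command, assuming a mongod listening on the default localhost:27017:

```sh
mongoexport --host localhost --port 27017 --db ds --collection ds --out ds.json
```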

I know this answer may not address the question exactly, but I hope it helps people struggling with mongoexport not writing any records to the JSON output file.
My problem was that I was using quotes. For example:
$mongoexport --db 'my-database' --collection 'my-collection' --out ds.json
but the correct command is (without quotes):
$mongoexport --db my-database --collection my-collection --out ds.json
I discovered this when I ran mongodump and it created a folder with quotes in its name. This was very strange to me, but I realized that mongoexport interprets the quotes as part of the name, so when I removed them it worked fine.

Related

Monstache not initiating the first synchronisation of current data from MongoDB and keeps waiting for events on the change stream

It's my first time using monstache.
In fact, I've migrated my infrastructure from on-premise to the cloud, and I'm now using MongoDB Atlas and AWS OpenSearch.
I've installed monstache on an AWS EC2 instance and configured it properly. Everything seems to be working and monstache is connected to Elasticsearch and MongoDB, but it is not indexing the documents that were migrated into MongoDB Atlas into Elasticsearch. It just keeps waiting for events on my collection/index, like this:
[ec2-user@ip-172-31-1-200 ~]$ journalctl -u monstache.service -f
-- Logs begin at Wed 2022-11-09 10:22:04 UTC. --
Jan 26 08:54:00 ip-172-31-1-200.eu-west-3.compute.internal systemd[1]: Starting monstache sync service...
Jan 26 08:54:00 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:00 Started monstache version 6.1.0
Jan 26 08:54:00 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:00 Successfully connected to MongoDB version 4.4.18
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Successfully connected to Elasticsearch version 7.10.2
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal systemd[1]: Started monstache sync service.
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Joined cluster HA
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Starting work for cluster HA
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Listening for events
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Watching changes on collection wtlive.myuser
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Resuming from timestamp {T:1674723241 I:1}
Do I absolutely have to initiate a write on the MongoDB collection for monstache to start syncing? Why doesn't it start syncing the current data from MongoDB?
My Elasticsearch still shows a document count of 0, while the collection is full of documents in MongoDB.
[ec2-user@ip-172-31-0-5 ~]$ curl --insecure -u es-appuser https://vpc-wtlive-domain-staging-om2cbdeex4qk6trkdrcb3dg4vm.eu-west-3.es.amazonaws.com/_cat/indices?v
Enter host password for user 'es-appuser':
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open wtlive.myuser YzqLx9_uTZ2qFVjFF2CMag 1 1 0 0 416b 208b
green open .opendistro_security Jb1fLqGjRd2vvluoX-ZgKw 1 1 9 4 129kb 64.4kb
green open .kibana_1 v6WdqQDvSN2L16EZTXxuHQ 1 1 30 2 70.4kb 33.4kb
green open .kibana_252235597_esappuser_1 OY1bbDGvTqK8oEgwopzbhQ 1 1 1 0 10.1kb 5kb
[ec2-user@ip-172-31-0-5 ~]$
Here is my monstache configuration:
[ec2-user@ip-172-31-1-200 ~]$ cat monstache/wtlive_pipeline.toml
enable-http-server = true
http-server-addr = ":8888"
#direct-read-namespaces = ["wtlive.myuser"]
change-stream-namespaces = ["wtlive.myuser"]
namespace-regex = '^wtlive.myuser$'
cluster-name="HA"
resume = true
replay = false
resume-write-unsafe = false
exit-after-direct-reads = false
elasticsearch-user = "es-appuser"
elasticsearch-password = "9V#xxxxxx"
elasticsearch-urls = ["https://vpc-wtlive-domain-staging-om2cbdeek6trkdrcb3dg4vm.eu-west-3.es.amazonaws.com"]
mongo-url = "mongodb://admin:VYn7ZD4CHDh8@wtlive-dedicated-shard-00-00.ynxpn.mongodb.net:27017,wtlive-dedicated-shard-00-01.ynxpn.mongodb.net:27017,wtlive-dedicated-shard-00-02.ynxpn.mongodb.net:27017/?tls=true&replicaSet=atlas-lmkye1-shard-0&authSource=admin&retryWrites=true&w=majority&tlsCAFile=/home/ec2-user/mongodb-ca.pem"
#[logs]
#info = "/home/ec2-user/logs/monstache/info.log"
#error = "/home/ec2-user/logs/monstache/error.log"
#warn = "/home/ec2-user/logs/monstache/warn.log"
#[[mapping]]
#namespace = "wtlive.myuser"
#index = "wtlive.myuser"
[[pipeline]]
namespace = "wtlive.myuser"
script = """
module.exports = function(ns, changeStream) {
if (changeStream) {
return [
{
$project: {
_id: 1,
operationType : 1,
clusterTime : 1,
documentKey : 1,
to : 1,
updateDescription : 1,
txnNumber : 1,
lsid : 1,
"fullDocument._id": 1,
"fullDocument.created": 1,
"fullDocument.lastVisit": 1,
"fullDocument.verified": 1,
"fullDocument.device.locale": "$fullDocument.device.locale",
"fullDocument.device.country": "$fullDocument.device.country",
"fullDocument.device.tz": "$fullDocument.device.tz",
"fullDocument.device.latLonCountry": "$fullDocument.device.latLonCountry",
"fullDocument.details.firstname": "$fullDocument._details.firstname",
"fullDocument.details.gender": "$fullDocument._details.gender",
"fullDocument.details.category": "$fullDocument._details.category",
"fullDocument.details.dob": "$fullDocument._details.dob",
"fullDocument.details.lookingFor": "$fullDocument._details.lookingFor",
"fullDocument.details.height": "$fullDocument._details.height",
"fullDocument.details.weight": "$fullDocument._details.weight",
"fullDocument.details.cigarette": "$fullDocument._details.cigarette",
"fullDocument.details.categorizedBy": "$fullDocument._details.categorizedBy",
"fullDocument.details.origin": "$fullDocument._details.origin",
"fullDocument.details.city": "$fullDocument._details.city",
"fullDocument.details.country": "$fullDocument._details.country",
"fullDocument.lifeSkills.educationLevel": "$fullDocument._lifeSkills.educationLevel",
"fullDocument.lifeSkills.pets": "$fullDocument._lifeSkills.pets",
"fullDocument.lifeSkills.religion": "$fullDocument._lifeSkills.religion",
"fullDocument.loveLife.children": "$fullDocument._loveLife.children",
"fullDocument.loveLife.relationType": "$fullDocument._loveLife.relationType",
"fullDocument.searchCriteria": "$fullDocument._searchCriteria",
"fullDocument.blocked" : 1,
"fullDocument.capping" : 1,
"fullDocument.fillingScore" : 1,
"fullDocument.viewed" : 1,
"fullDocument.likes" : 1,
"fullDocument.matches" : 1,
"fullDocument.blacklisted" : 1,
"fullDocument.uploadsList._id" : 1,
"fullDocument.uploadsList.status" : 1,
"fullDocument.uploadsList.url" : 1,
"fullDocument.uploadsList.position" : 1,
"fullDocument.uploadsList.imageSet" : 1,
"fullDocument.location" : 1,
"fullDocument.searchZone" : 1,
"fullDocument.locationPoint" : "$fullDocument.location.coordinates",
"fullDocument.selfieDateUpload" : 1,
"ns": 1
}
}
]
} else {
return [
{
$project: {
_id: 1,
"created": 1,
"lastVisit": 1,
"verified": 1,
"device.locale": "$device.locale",
"device.country": "$device.country",
"device.tz": "$device.tz",
"device.latLonCountry": "$device.latLonCountry",
"details.firstname": "$_details.firstname",
"details.gender": "$_details.gender",
"details.category": "$_details.category",
"details.dob": "$_details.dob",
"details.lookingFor": "$_details.lookingFor",
"details.height": "$_details.height",
"details.weight": "$_details.weight",
"details.cigarette": "$_details.cigarette",
"details.categorizedBy": "$_details.categorizedBy",
"details.origin": "$_details.origin",
"details.city": "$_details.city",
"details.country": "$_details.country",
"lifeSkills.educationLevel": "$_lifeSkills.educationLevel",
"lifeSkills.pets": "$_lifeSkills.pets",
"lifeSkills.religion": "$_lifeSkills.religion",
"loveLife.children": "$_loveLife.children",
"loveLife.relationType": "$_loveLife.relationType",
"searchCriteria": "$_searchCriteria",
"blocked" : 1,
"capping" : 1,
"fillingScore" : 1,
"viewed" : 1,
"likes" : 1,
"matches" : 1,
"blacklisted" : 1,
"uploadsList._id" : 1,
"uploadsList.status" : 1,
"uploadsList.url" : 1,
"uploadsList.position" : 1,
"uploadsList.imageSet" : 1,
"location" : 1,
"searchZone" : 1,
"selfieDateUpload" : 1,
"locationPoint" : "$location.coordinates"
}
}
]
}
}
"""
What could be the issue? And what action should I take from here please?
By uncommenting the #direct-read-namespaces = ["wtlive.myuser"] line, monstache can now do the initial sync, and everything is going well.
I'll comment it out again and restart the monstache service after the initial sync, to avoid re-syncing from scratch.
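For reference, the fix amounts to enabling direct reads alongside the change stream in the TOML config (only the relevant keys shown): direct-read-namespaces copies the existing collection contents at startup, while change-stream-namespaces only captures new events.

```toml
# Initial sync: read the current collection contents directly at startup
direct-read-namespaces = ["wtlive.myuser"]

# Ongoing sync: tail new events from the change stream
change-stream-namespaces = ["wtlive.myuser"]
```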

Older oplog entries are not getting truncated

I have a mongo instance running with oplogMinRetentionHours set to 24 hours and the max oplog size set to 50 GB. But despite these config settings, oplog entries seem to be retained indefinitely: the oplog has entries older than 24 hours, and its size has reached 1.4 TB (0.34 TB on disk).
db.runCommand( { serverStatus: 1 } ).oplogTruncation.oplogMinRetentionHours
24 hrs
db.getReplicationInfo()
{
"logSizeMB" : 51200,
"usedMB" : 1464142.51,
"timeDiff" : 3601538,
"timeDiffHours" : 1000.43,
"tFirst" : "Fri Mar 19 2021 14:15:49 GMT+0000 (Greenwich Mean Time)",
"tLast" : "Fri Apr 30 2021 06:41:27 GMT+0000 (Greenwich Mean Time)",
"now" : "Fri Apr 30 2021 06:41:28 GMT+0000 (Greenwich Mean Time)"
}
MongoDB server version: 4.4.0
OS: Windows Server 2016 DataCenter 64bit
What I have noticed is that even a super user with the root role is not able to access replset.oplogTruncateAfterPoint; I am not sure if this is by design.
mongod.log
{"t":{"$date":"2021-04-30T06:35:51.308+00:00"},"s":"I", "c":"ACCESS",
"id":20436, "ctx":"conn8","msg":"Checking authorization
failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not
authorized on local to execute command { aggregate:
"replset.oplogTruncateAfterPoint", pipeline: [ { $indexStats: {} }
], cursor: { batchSize: 1000.0 }, $clusterTime: { clusterTime:
Timestamp(1619764547, 1), signature: { hash: BinData(0,
180A28389B6BBA22ACEB5D3517029CFF8D31D3D8), keyId: 6935907196995633156
} }, $db: "local" }"}}}
I am not sure why mongo would not delete older entries from the oplog.
MongoDB oplog truncation seems to be triggered by inserts, so the oplog only gets truncated as and when inserts happen.
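A minimal sketch of the documented retention rule (an illustrative model, not actual server code): with oplogMinRetentionHours set, the oldest entries are only eligible for removal when the oplog is both over its size cap and older than the retention window, and the truncation pass itself is triggered by incoming writes.

```javascript
// Illustrative model of the oplog retention rule (not server code):
// a batch of the oldest entries may be truncated only when BOTH hold.
function canTruncate({ usedMB, maxMB, oldestAgeHours, minRetentionHours }) {
  const overSizeCap = usedMB > maxMB;                       // oplog past its configured max size
  const pastRetention = oldestAgeHours > minRetentionHours; // entries past the retention window
  return overSizeCap && pastRetention;
}

// The numbers from the question: far over the 50 GB cap and far past 24 h,
// so the entries are eligible -- but with no incoming inserts, the pass
// that would actually remove them never fires.
console.log(canTruncate({
  usedMB: 1464142, maxMB: 51200,
  oldestAgeHours: 1000, minRetentionHours: 24,
})); // true
```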

Mongod Replica set aborting after invariant() failure due to Stable timestamp Timestamp does not equal appliedThrough timestamp

I am a newbie to MongoDB. I was doing a POC on consuming documents using the Java client.
I am using version 4.2.5.
I have 3 instances of mongod running locally with a replica set, as below:
mongod --port 27017 --dbpath /data/d1/ --replSet rs0 --bind_ip localhost
mongod --port 27018 --dbpath /data/d2/ --replSet rs0 --bind_ip localhost
mongod --port 27019 --dbpath /data/d3/ --replSet rs0 --bind_ip localhost
After a certain time, one or two of the instances abort, and when I try to start them again, I see the same error. I am not sure what causes it.
Any help would be appreciated.
Error:
2020-05-25T19:37:47.126+0530 I REPL [initandlisten] Rollback ID is 1
2020-05-25T19:37:47.128+0530 F - [initandlisten] Invariant failure !stableTimestamp || stableTimestamp->isNull() || appliedThrough.isNull() || *stableTimestamp == appliedThrough.getTimestamp() Stable timestamp Timestamp(1590410112, 1) does not equal appliedThrough timestamp { ts: Timestamp(1590410172, 1), t: 5 } src/mongo/db/repl/replication_recovery.cpp 412
2020-05-25T19:37:47.128+0530 F - [initandlisten]
***aborting after invariant() failure
2020-05-25T19:37:47.137+0530 F - [initandlisten] Got signal: 6 (Abort trap: 6).
0x109e10cc6 0x109e1054d 0x7fff5d3c9b5d 0xa00 0x7fff5d2836a6 0x109e04d4a 0x1083597af 0x1083722ba 0x108376eb9 0x108077c6c 0x108071744 0x108070999 0x7fff5d1de3d5 0x9
----- BEGIN BACKTRACE -----
"backtrace":[{"b":"10806F000","o":"1DA1CC6","s":"_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE"},{"b":"10806F000","o":"1DA154D","s":"_ZN5mongo12_GLOBAL__N_110abruptQuitEi"},{"b":"7FFF5D3C5000","o":"4B5D","s":"_sigtramp"},{"b":"0","o":"A00"},
...
...
...
...
{ "path" : "/System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement", "machType" : 6, "b" : "7FFF41F95000", "vmaddr" : "7FFF3C6CD000", "buildId" : "2A396FC07B7930889A82FB93C1181A57" }, { "path" : "/usr/lib/libxslt.1.dylib", "machType" : 6, "b" : "7FFF5C842000", "vmaddr" : "7FFF56F7A000", "buildId" : "EC50E503AEEE3F50956F55E4AF4584D9" }, { "path" : "/System/Library/PrivateFrameworks/AppleSRP.framework/Versions/A/AppleSRP", "machType" : 6, "b" : "7FFF4177E000", "vmaddr" : "7FFF3BEB6000", "buildId" : "EDD16B2E4F353E13B389CF77B3CAD4EB" } ] }}
mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x36) [0x109e10cc6]
mongod(_ZN5mongo12_GLOBAL__N_110abruptQuitEi+0xBD) [0x109e1054d]
libsystem_platform.dylib(_sigtramp+0x1D) [0x7fff5d3c9b5d]
??? [0xa00]
libsystem_c.dylib(abort+0x7F) [0x7fff5d2836a6]
mongod(_ZN5mongo22invariantFailedWithMsgEPKcRKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEES1_j+0x33A) [0x109e04d4a]
mongod(_ZN5mongo4repl23ReplicationRecoveryImpl16recoverFromOplogEPNS_16OperationContextEN5boost8optionalINS_9TimestampEEE+0x43F) [0x1083597af]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl21_startLoadLocalConfigEPNS_16OperationContextE+0x3AA) [0x1083722ba]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl7startupEPNS_16OperationContextE+0xE9) [0x108376eb9]
mongod(_ZN5mongo12_GLOBAL__N_114_initAndListenEi+0x28FC) [0x108077c6c]
mongod(_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPcS2_+0xDA4) [0x108071744]
mongod(main+0x9) [0x108070999]
libdyld.dylib(start+0x1) [0x7fff5d1de3d5]
??? [0x9]
----- END BACKTRACE -----
Abort trap: 6
There appears to be a ticket for this issue experienced by another user. You may consider engaging with MongoDB developers in that ticket to provide the requested information.

Insert json file into mongodb

I am new to MongoDB. After installing MongoDB on Windows, I am trying to insert a simple JSON file using the following command:
C:\>mongodb\bin\mongoimport --db test --collection docs < example2.json
I am getting the following error:
connected to: 127.0.0.1
Fri Oct 18 09:05:43.749 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Field name expected: offset:43
Fri Oct 18 09:05:43.750
Fri Oct 18 09:05:43.750 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Expecting '{': offset:0
Fri Oct 18 09:05:43.751
Fri Oct 18 09:05:43.751 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Field name expected: offset:42
Fri Oct 18 09:05:43.751
Fri Oct 18 09:05:43.751 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Expecting '{': offset:0
Fri Oct 18 09:05:43.751
Fri Oct 18 09:05:43.752 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Field name expected: offset:44
Fri Oct 18 09:05:43.752
Fri Oct 18 09:05:43.752 exception:BSON representation of supplied JSON is too large: code FailedToParse: FailedToParse: Expecting '{': offset:0
Fri Oct 18 09:05:43.752
Fri Oct 18 09:05:43.752 check 0 0
Fri Oct 18 09:05:43.752 imported 0 objects
Fri Oct 18 09:05:43.752 ERROR: encountered 6 error(s)
example2.json
{"FirstName": "Bruce", "LastName": "Wayne",
"Email": "bwayne@Wayneenterprises.com"}
{"FirstName": "Lucius", "LastName": "Fox",
"Email": "lfox@Wayneenterprises.com"}
{"FirstName": "Dick", "LastName": "Grayson",
"Email": "dgrayson@Wayneenterprises.com"}
What do I need to do to import a new JSON file into MongoDB?
Use
mongoimport --jsonArray --db test --collection docs --file example2.json
It's probably failing because of the newline characters.
The command below worked for me
mongoimport --db test --collection docs --file example2.json
once I removed the extra newline character before the Email attribute in each of the documents.
example2.json
{"FirstName": "Bruce", "LastName": "Wayne", "Email": "bwayne@Wayneenterprises.com"}
{"FirstName": "Lucius", "LastName": "Fox", "Email": "lfox@Wayneenterprises.com"}
{"FirstName": "Dick", "LastName": "Grayson", "Email": "dgrayson@Wayneenterprises.com"}
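The difference between the two file shapes can be checked with plain JavaScript: mongoimport's default mode expects one complete document per line, while --jsonArray expects the whole file to parse as a single array. A sketch with inline sample data:

```javascript
// Newline-delimited JSON (NDJSON): one complete document per line,
// the shape mongoimport expects by default.
const ndjson = '{"FirstName": "Bruce"}\n{"FirstName": "Lucius"}';
const docsFromNdjson = ndjson.split('\n').map(JSON.parse);
console.log(docsFromNdjson.length); // 2

// JSON array: the whole file is one parseable value, the shape
// that --jsonArray expects.
const docsFromArray = JSON.parse('[{"FirstName": "Bruce"}, {"FirstName": "Lucius"}]');
console.log(Array.isArray(docsFromArray)); // true

// A document split ACROSS lines (as in the original example2.json) is
// neither: each individual line fails to parse on its own, which is
// what produced the FailedToParse errors above.
```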
This worked for me (from the mongo shell; note that mongo shell comments use //, not #):
// read the file as a string:
var file = cat('./new.json');
// switch to your db:
use testdb
// convert the string to JSON:
var o = JSON.parse(file);
// insert into your collection:
db.forms.insert(o)
Use the command below when importing a JSON file:
C:\>mongodb\bin\mongoimport --jsonArray -d test -c docs --file example2.json
The following two ways work well:
C:\>mongodb\bin\mongoimport --jsonArray -d test -c docs --file example2.json
C:\>mongodb\bin\mongoimport --jsonArray -d test -c docs < example2.json
If the collections are under a specific user, you can use -u, -p and --authenticationDatabase.
This solution is applicable to Windows machines.
MongoDB needs a data directory to store data in. The default path is C:\data\db. If you don't have the data directory, create one in your C: drive, unless a different volume, e.g. H:, is the root of your machine.
Place the .json file you want to import inside C:\data\db\.
Before running the command, copy mongoimport.exe from C:\Program Files\MongoDB\Tools\100\bin (the default path for mongoimport.exe) to C:\Program Files\MongoDB\Server\[your_server_version]\bin.
Open the command prompt from within C:\data\db\ and type the following command, supplying the specific databaseName, collectionName and fileName.json you wish to import:
mongoimport --db databaseName --collection collectionName --file fileName.json --type json --batchSize 1
Here, batchSize can be any integer you wish.
mongoimport --jsonArray -d DatabaseN -c collectionName /filePath/filename.json
Open a command prompt separately and check:
C:\mongodb\bin\mongoimport --db db_name --collection collection_name < filename.json
In MS Windows, the mongoimport command has to be run in a normal Windows command prompt, not from the mongodb command prompt.
This happened to me a couple of weeks back: the version of mongoimport was too old. Once I updated to the latest version, it ran successfully and imported all the documents.
Reference: http://docs.mongodb.org/master/tutorial/install-mongodb-on-ubuntu/?_ga=1.11365492.1588529687.1434379875
To insert JSON array data from a file (at a particular location on your system/PC) into MongoDB using the mongo shell, the command below should be executed as a single line:
var file = cat('I:/data/db/card_type_authorization.json'); var o = JSON.parse(file); db.CARD_TYPE_AUTHORIZATION.insert(o);
JSON File: card_type_authorization.json
[{
"code": "visa",
"position": 1,
"description": "Visa",
"isVertualCard": false,
"comments": ""
},{
"code": "mastercard",
"position": 2,
"description": "Mastercard",
"isVertualCard": false,
"comments": ""
}]
This works with JS and Node.
Preconditions:
Node
Mongo - either installed locally or via Atlas
server.js:
var MongoClient = require('mongodb').MongoClient;
var fs = require('fs');

function insert(coll) {
  // 'uri' is a placeholder for your connection string;
  // dbWeb must hold your database name
  MongoClient.connect('uri', (err, client) => {
    var myobj = fs.readFileSync('shop.json').toString();
    myobj = JSON.parse(myobj);
    client.db(dbWeb).collection(coll).insertMany(myobj, (err, res) => {
      client.close();
    });
  });
}
module.exports = { insert };
shop.json:
[
  {
    "doc": "jacke_bb",
    "link": "http://ebay.us/NDMJn9?cmpnId=5338273189"
  },
  {
    "doc": "schals",
    "link": "https://www.ebay-kleinanzeigen.de/s-anzeige/4-leichte-schals-fuer-den-sommer/2082511689-156-7597"
  }
]
As one can see, the JSON starts with [ and ends with ], and insertMany is used. This leads to a correct insertion of the array's documents into the collection.

MongoDB sharding problems

Our mongodb server is deployed with 2 shards, each with 1 master server and 2 slave servers.
The four slave servers run mongo config as a proxy, and two of the slave servers run arbiters.
But the mongodb cluster can't be used now.
I can connect to 192.168.0.1:8000 (mongos) and run queries like 'use database' or 'show dbs', but I can't run queries in a chosen database, such as 'db.foo.count()' or 'db.foo.findOne()'.
Here is the error log:
mongos> db.dev.count()
Fri Aug 16 12:55:36 uncaught exception: count failed: {
"assertion" : "DBClientBase::findN: transport error: 10.81.4.72:7100 query: { setShardVersion: \"\", init: true, configdb: \"10.81.4.72:7300,10.42.50.26:7300,10.81.51.235:7300\", serverID: ObjectId('520db0a51fa00999772612b9'), authoritative: true }",
"assertionCode" : 10276,
"errmsg" : "db assertion failure",
"ok" : 0
}
Fri Aug 16 11:23:29 [conn8431] DBClientCursor::init call() failed
Fri Aug 16 11:23:29 [conn8430] Socket recv() errno:104 Connection reset by peer 10.81.4.72:7100
Fri Aug 16 11:23:29 [conn8430] SocketException: remote: 10.81.4.72:7100 error: 9001 socket exception [1] server [10.81.4.72:7100]
Fri Aug 16 11:23:29 [conn8430] DBClientCursor::init call() failed
Fri Aug 16 11:23:29 [conn8430] DBException in process: could not initialize cursor across all shards because : DBClientBase::findN: transport error: 10.81.4.72:7100 query: { setShardVersion: "", init: true, configdb: "10.81.4.72:7300,10.42.50.26:7300,10.81.51.235:7300", serverID: ObjectId('520d99c972581e6a124d0561'), authoritative: true } # s01/10.36.31.36:7100,10.42.50.24:7100,10.81.4.72:7100
I can only start one mongos; queries won't execute if more than one mongos runs at the same time. Error log:
mongos> db.dev.count() Fri Aug 16 15:12:29 uncaught exception: count failed: { "assertion" : "DBClientBase::findN: transport error: 10.81.4.72:7100 query: { setShardVersion: \"\", init: true, configdb: \"10.81.4.72:7300,10.42.50.26:7300,10.81.51.235:7300\", serverID: ObjectId('520dd04967557902f73a9fba'), authoritative: true }", "assertionCode" : 10276, "errmsg" : "db assertion failure", "ok" : 0 }
Could you please clarify whether your set-up was working before, or whether you are just setting it up now?
To repair your MongoDB, you might want to follow this link:
http://docs.mongodb.org/manual/tutorial/recover-data-following-unexpected-shutdown/
References
MongoDB Documentation : Deploying a Shard-Cluster
MongoDB Documentation : Add Shards to an existing cluster
Older, outdated(!) info:
YouTube Video on Setting-up Sharding for MongoDB
Corresponding Blog on blog.serverdensity.com