ERROR: child process failed, exited with error number 51 - MongoDB

I am getting this error while restarting MongoDB. I am using Mongo 3.2.4 and doing this setup on a new machine.
Starting mongod... about to fork child process, waiting until server is ready for connections.
forked process: 19438
ERROR: child process failed, exited with error number 51
mongod(_ZN5mongo19MmapV1ExtentManager4initEPNS_16OperationContextE+0x4A8) [0x1040278]
mongod(_ZN5mongo26MMAPV1DatabaseCatalogEntryC1EPNS_16OperationContextENS_10StringDataES3_bb+0x187) [0x1036dc7]
mongod(_ZN5mongo12MMAPV1Engine23getDatabaseCatalogEntryEPNS_16OperationContextENS_10StringDataE+0x14E) [0x103a1de]
mongod(_ZN5mongo14DatabaseHolder6openDbEPNS_16OperationContextENS_10StringDataEPb+0x133) [0xac92a3]
----- END BACKTRACE -----

For me this error occurred due to incorrect ownership of some files in my data directory. I fixed it using the following command:
sudo chown -R mongodb: /path/to/db/directory
Where mongodb was the database user in my case.
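To double-check before and after, you can list the ownership of the data directory (the path is a placeholder for your dbpath), then restart the service once it looks right:
$ ls -ld /path/to/db/directory
$ sudo service mongod restart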

This is resolved by inserting the following lines into /etc/security/limits.conf:
mongodb soft nofile 65535
mongodb hard nofile 90000
mongodb soft nproc 65535
mongodb hard nproc 90000
The lines must name the user account used to run the Mongo service; generally, it is the mongodb user.
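To confirm the limits actually apply to the running process, you can read them from procfs after restarting the service; a minimal sketch, assuming a single mongod process:
$ cat /proc/$(pgrep -x mongod)/limits | grep -iE 'open files|processes'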

In my case the problem was that I hadn't created the appropriate folders specified in the config files.
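If you hit the same thing, a sketch of the fix (the config path and directories below are assumptions; check your own config for the actual dbpath/logpath values):
$ grep -iE 'dbpath|logpath' /etc/mongod.conf
$ sudo mkdir -p /var/lib/mongodb /var/log/mongodb
$ sudo chown -R mongodb: /var/lib/mongodb /var/log/mongodb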

Related

Failed to start mongod: Could not find appropriate mongod in `/opt/mongodb/mms/mongodb-releases/`

I am getting the following error when I try to activate a backup for a mongo instance.
I chose local and copied all rpms to the folder /opt/mongodb/mms/mongodb-releases.
error:
Failed to start mongod
com.xgen.svc.brs.util.GenericMongoManager$MongodVersionException: Could not find appropriate mongod in `/opt/mongodb/mms/mongodb-releases/`, versions available to MMS: . Expecting version 3.4.4 or greater.
com.xgen.svc.brs.util.GenericMongoManager$Purpose.calculateBinaryPath(GenericMongoManager.java:166)
com.xgen.svc.brs.util.GenericMongoManager$Purpose.<init>(GenericMongoManager.java:125)
com.xgen.svc.brs.util.MongoManager$MongoDPurpose.<init>(MongoManager.java:331)
com.xgen.svc.brs.util.MongoManager$HeadPurpose.<init>(MongoManager.java:477)
com.xgen.svc.brs.job.ReplicaSetJob.startMongo(ReplicaSetJob.java:103)
com.xgen.svc.brs.job.ReplicaSetJob.startMongo(ReplicaSetJob.java:80)
com.xgen.svc.brs.job.IncrementalSyncJob.doWork(IncrementalSyncJob.java:82)
com.xgen.svc.brs.grid.Daemon.iterate(Daemon.java:116)
com.xgen.svc.brs.grid.Daemon.run(Daemon.java:305)
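Note the empty list in the message ("versions available to MMS: ."). One likely cause, hedged since the exact setup isn't shown: in local mode MMS scans that directory for complete MongoDB release archives (.tgz on Linux), so rpm packages are not recognized. A sketch of providing one (the version and platform in the URL are assumptions):
$ cd /opt/mongodb/mms/mongodb-releases
$ curl -LO https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.4.4.tgz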

Migration From TokuMX 1.5 To Percona Server For MongoDB 3.11

Migrating Data from TokuMX to Percona Server for MongoDB
Step 1 :
This guide describes how to upgrade an existing Percona TokuMX instance to Percona Server for MongoDB. The following JavaScript files are required to perform the upgrade:
• allDbStats.js
• tokumx_dump_indexes.js
• psmdb_restore_indexes.js
You can download those files from GitHub.
Step 2 :
Run the allDbStats.js script to record database state before migration:
$ mongo ./allDbStats.js > ~/allDbStats.before.out
Step 3 :
Perform a dump of the database:
$ mongodump --out /your/dump/path
Step 4 :
Perform a dump of the indexes:
$ ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json
Step 5 :
Restore the collections without indexes using the “--noIndexRestore” switch:
$ mongorestore --noIndexRestore /your/dump/path
Step 6 :
Restore the indexes (this may take a while). This step will remove clustering options from the collections before inserting.
$ ./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json'"
Step 7 :
Run the allDbStats.js script to record database state after migration:
$ mongo ./allDbStats.js > ~/allDbStats.after.out
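The before/after outputs exist so you can verify the migration; a simple sanity check is to diff the two files and confirm that the collection and document counts match:
$ diff ~/allDbStats.before.out ~/allDbStats.after.out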
This is the guide I found for the migration from TokuMX to Percona Server for MongoDB. At step 6, when I try to restore the indexes, I get the error below:
/mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY SyntaxError: Unexpected identifier
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY Error: error loading js file: /mnt/tokumx-bkup/tokumxIndexes.json
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78
failed to load: /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js
Any help will be welcome.
Thanks
Check the tokumxIndexes.json file. When tokumx_dump_indexes.js is run, the mongo shell parameter --quiet must be used, or the resulting JSON will contain the shell preamble at the beginning.
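In other words, re-run the dump along these lines (assuming the script is executed through the mongo shell, with paths as in the guide):
$ mongo --quiet ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json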
And check the file using something like http://jsonlint.com/
Also, if the preamble is present, delete these two lines from the tokumxIndexes.json file:
"MongoDB shell version: 3.0.11-1.6
connecting to: 127.0.0.1:27017/test"
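If you prefer to strip the preamble non-interactively, a sed sketch (assuming it is exactly the first two lines of the file):
$ sed -i '1,2d' /your/dump/path/tokumxIndexes.json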
Then run the script again:
$ ./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json'"
Now the script will start the index build process.

MongoDB: mongoimport loses connection when importing big files

I have some trouble importing a JSON file to a local MongoDB instance. The JSON was generated using mongoexport and looks like this. No arrays, no hardcore nesting:
{"_created":{"$date":"2015-10-20T12:46:25.000Z"},"_etag":"7fab35685eea8d8097656092961d3a9cfe46ffbc","_id":{"$oid":"562637a14e0c9836e0821a5e"},"_updated":{"$date":"2015-10-20T12:46:25.000Z"},"body":"base64 encoded string","sender":"mail#mail.com","type":"answer"}
{"_created":{"$date":"2015-10-20T12:46:25.000Z"},"_etag":"7fab35685eea8d8097656092961d3a9cfe46ffbc","_id":{"$oid":"562637a14e0c9836e0821a5e"},"_updated":{"$date":"2015-10-20T12:46:25.000Z"},"body":"base64 encoded string","sender":"mail#mail.com","type":"answer"}
If I import a 9MB file with ~300 rows, there is no problem:
[stekhn latest]$ mongoimport -d mietscraping -c mails mails-small.json
2015-11-02T10:03:11.353+0100 connected to: localhost
2015-11-02T10:03:11.372+0100 imported 240 documents
But if I try to import a 32MB file with ~1300 rows, the import fails:
[stekhn latest]$ mongoimport -d mietscraping -c mails mails.json
2015-11-02T10:05:25.228+0100 connected to: localhost
2015-11-02T10:05:25.735+0100 error inserting documents: lost connection to server
2015-11-02T10:05:25.735+0100 Failed: lost connection to server
2015-11-02T10:05:25.735+0100 imported 0 documents
Here is the log:
2015-11-02T11:53:04.146+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:45237 #21 (6 connections now open)
2015-11-02T11:53:04.532+0100 I - [conn21] Assertion: 10334:BSONObj size: 23592351 (0x167FD9F) is invalid. Size must be between 0 and 16793600(16MB) First element: insert: "mails"
2015-11-02T11:53:04.536+0100 I NETWORK [conn21] AssertionException handling request, closing client connection: 10334 BSONObj size: 23592351 (0x167FD9F) is invalid. Size must be between 0 and 16793600(16MB) First element: insert: "mails"
I've heard about the 16MB limit for BSON documents before, but since no row in my JSON file is bigger than 16MB, this shouldn't be a problem, right? When I do the exact same (32MB) import on my local computer, everything works fine.
Any ideas what could cause this weird behaviour?
I guess the problem is performance-related; either way, you can solve it as follows:
You can use the mongoimport option -j. If it doesn't work with 4, try incrementing it, i.e. 4, 8, 16, depending on the number of cores in your CPU.
mongoimport --help
-j, --numInsertionWorkers= number of insert operations to run
concurrently (defaults to 1)
mongoimport -d mietscraping -c mails -j 4 < mails.json
Or you can split the file and import all the parts.
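For the splitting approach, a sketch: since the export is one JSON document per line, splitting by lines is safe (the chunk size and file names are arbitrary):
$ split -l 500 mails.json mails-part-
$ for f in mails-part-*; do mongoimport -d mietscraping -c mails "$f"; done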
I hope this helps.
Looking a little more, this is a bug in some versions:
https://jira.mongodb.org/browse/TOOLS-939
Here is another solution: you can change the batchSize, which defaults to 10000. mongoimport sends each batch as a single insert command, and that command must itself fit within the 16MB BSON limit (this is exactly the assertion in the log above), so the import can fail even though no single row exceeds 16MB. Reduce the value and test:
mongoimport -d mietscraping -c mails < mails.json --batchSize 1
Quite old, but I struggled with the same issue.
If you want to import big files, especially to a remote instance with Compass or from a program, just add
&wtimeoutMS=0
to your connection string. This removes the timeout on write operations.
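For example (host, database, and the other options are placeholders; wtimeoutMS=0 is the relevant part):
mongodb://localhost:27017/mietscraping?w=majority&wtimeoutMS=0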

mongorestore failing because of DocTooLargeForCapped error

I'm trying to restore a collection like so:
$ mongorestore --verbose --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --port 1234 --noOptionsRestore
Here's the error output (timestamps removed):
using write concern: w='majority', j=false, fsync=false, wtimeout=0
checking for collection data in /path/to/MY_COLLECTION.bson
found metadata for collection at /path/to/MY_COLLECTION.metadata.json
reading metadata file from /path/to/MY_COLLECTION.metadata.json
skipping options restoration
restoring MY_DB.MY_COLLECTION from file /path/to/MY_COLLECTION.bson
file /path/to/MY_COLLECTION.bson is 241330 bytes
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
restoring indexes for collection MY_DB.MY_COLLECTION from metadata
Failed: restore error: MY_DB.MY_COLLECTION: error creating indexes for MY_DB.MY_COLLECTION: createIndex error: exception: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
The result of the restore is a database and collection with correct names but no documents.
OS: Ubuntu 14.04 running on Azure VM.
I just solved my own problem. See answer below.
The problem seemed to be that I was running mongod as the PRIMARY member of a replica set.
Once I commented out the following line in /etc/mongod.conf, it worked without problems:
replSet=REPL_SET_NAME --> #replSet=REPL_SET_NAME
I assume passing the correct replica set name to the mongorestore command (like in this question) could also work, but I haven't tried that yet.
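For reference, the same edit done non-interactively, followed by a restart (config path and set name assumed as above):
$ sudo sed -i 's/^replSet=REPL_SET_NAME/#replSet=REPL_SET_NAME/' /etc/mongod.conf
$ sudo service mongod restart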

mongo shell throwing error while running "show collections" command

My mongo shell starts without any error
>use mydb also works properly (here the db name is mydb)
but when I give the show collections command, it shows the following error:
>show collections
Wed Oct 15 17:38:30 uncaught exception: error: {
"$err" : "file /var/lib/mongodb/mydb.6 open/create failed in createPrivateMap (look in log for more information)",
"code" : 13636
}
Here is the error log:
17:38:22 [initandlisten] connection accepted from 127.0.0.1:53178 #1
17:38:30 [conn1] ERROR: mmap private failed with out of memory. You are using a 32-bit build and probably need to upgrade to 64
17:38:30 [conn1] Assertion: 13636:file /var/lib/mongodb/mydb.6 open/create failed in createPrivateMap (look in log for more information)
17:38:30 [conn1] assertion 13636 file /var/lib/mongodb/mydb.6 open/create failed in createPrivateMap (look in log for more information) ns:mydb.system.namespaces query:{}
17:39:01 [clientcursormon] mem (MB) res:2 virt:90 mapped:0
Based on a solution given for another Stack Overflow question, couldn't connect to server 127.0.0.1 shell/mongo.js, I tried the same steps in my case and the problem was solved for the time being. But the main issue is that whenever I shut down my machine and restart it, I get the same error again, and I have to repeat the same steps (as given in the link above) to make the mongo shell work, which ultimately leads to data loss within collections. Can anyone suggest what the reason could be? Is there some problem with my MongoDB installation? Please let me know if anyone had a similar issue and successfully resolved it. Thanks
I think there are two possible sources of the problem (a quick check for the second is sketched below):
your computer doesn't have enough RAM available
as the log message says, you are using a 32-bit build of MongoDB and should use a 64-bit build, because you have too much data to memory-map with 32-bit (> about 2.5 GB)
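You can confirm which build you are running from the shell; db.serverBuildInfo() reports the bitness:
> db.serverBuildInfo().bits
If it prints 32, install a 64-bit build of MongoDB.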