MongoDB was working beautifully for me for several months until I had an unexpected shutdown a week or two ago. Since then, I've been getting the error in the title that snowballs into an invalid argument, then a library panic, then some fatal assertions which cause MongoDB to crash.
Now, I've done my research: the normal answers are to run the repair function and to make sure SELinux isn't screwing up the process. Neither of those has worked. The error gets thrown during WiredTiger's checkpoint process, so ordinary reads/writes to the database aren't the trigger, and because checkpoints run regularly, it also guarantees that MongoDB won't stay up for more than a day.
To be clear: all the files in the database directory are owned by mongod:mongod and have permissions set to 600 (the default; I also tried 755 to see if that fixed it, and it didn't). I'm running MongoDB as a service on a CentOS 7 box, and the service file specifies that it should run as the mongod user. The mongod.conf file points the database at a mounted filesystem, and it was happy with that until the unexpected shutdown. I'm running MongoDB version 4.0.1, so WiredTiger really doesn't like it if I disable journaling either (disregarding the fact that I shouldn't disable it in the first place).
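For completeness, this is roughly how I checked the ownership and SELinux labels (a sketch; /path/to/dbpath stands in for my mounted filesystem):
ls -lZ /path/to/dbpath                          # show owner, permissions, and SELinux context
sudo restorecon -R -v /path/to/dbpath           # re-apply the default SELinux context
sudo chown -R mongod:mongod /path/to/dbpath     # make sure every file belongs to mongod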
I feel like I've exhausted all my options and that the only thing left to do is back up my data and reinstall MongoDB. Are there any options I've missed?
After creating a backup of my data via mongodump, shutting down mongo, removing the entire database with rm -rf 'path-to-database', restarting mongod (without the replication config), and restoring the data with mongorestore, MongoDB still crashes. This time, however, it's with an Invariant failure after the open: operation not permitted. The only conclusion I can draw is that the data itself has become corrupted in some way. Thankfully, this isn't "mission critical" data, so to speak, and I can easily obtain new data.
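For reference, the backup/restore sequence looked roughly like this (a sketch; paths are placeholders and the service name assumes the stock CentOS 7 package):
mongodump --out /backup/dump        # dump every database to BSON files
sudo systemctl stop mongod          # shut the server down cleanly
rm -rf /path-to-database/*          # wipe the damaged dbPath
sudo systemctl start mongod         # start again with an empty data directory
mongorestore /backup/dump           # reload the dump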
Unfortunately, this doesn't answer my original question of "what other options do I have?". However, I'm still posting this in case others run into this same kind of issue.
EDIT: the Invariant failure was caused by me forgetting to re-initialize my replica set. After fixing that, it runs cleanly. Because of this, I no longer believe it was a data corruption issue, but a checkpoint corruption issue.
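For anyone wondering, re-initializing was nothing more than running rs.initiate() against the restarted node (a sketch, assuming a single-node set on the default port):
mongo --port 27017 --eval 'rs.initiate()'
mongo --port 27017 --eval 'printjson(rs.status())'   # confirm the member comes up as PRIMARY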
EDIT 2: The issue arose again after about a week, and after another week of trying various debugging methods, I tried simply moving the mongo process to another server. So far, that's been working. The previous server was acting up (I couldn't even run top at one point - another process had a lock on a library file it needed), so here's to hoping that the current server doesn't follow suit.
I use MongoDB 4.2 on my local machine (Windows 10). I have not changed any configuration, so the default behavior of only accepting local connections should be in place. (I only need to access it locally.)
I was running a script that reads data from my MongoDB; there are no writes to the db in this script. When all the numbers were crunched, I noticed weird results and saw that my database was suddenly gone. I checked my dbpath and the data was gone from there too! Could it be a hack, or was it MongoDB that dropped both the database and the raw data in the dbpath?
I've seen similar questions on this forum, mostly resolved by the author forgetting to point to the correct dbpath, which is not the case here. I've checked the log, but it seems very limited (I restarted mongod and could only see logging from after the restart).
MongoDB does not delete all of the files in its data directory.
Most likely either you are checking in the wrong place or something external to MongoDB deleted its files.
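One quick way to rule out the first case is to ask the running mongod which dbPath it is actually using (a sketch using the mongo shell; it works the same on Windows):
mongo --quiet --eval 'printjson(db.adminCommand({ getCmdLineOpts: 1 }))'
The dbPath reported under parsed in the output is the directory mongod is really writing to; compare it with the one you have been inspecting.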
Our DB of roughly 400 GB keeps stopping on one of our servers.
From the logs:
2015-07-07T09:09:51.072+0200 I STORAGE [conn10] _getOpenFile() invalid file index requested 8388701
2015-07-07T09:09:51.072+0200 I - [conn10] Invariant failure false src/mongo/db/storage/mmap_v1/mmap_v1_extent_manager.cpp 201
2015-07-07T09:09:51.082+0200 I CONTROL [conn10]
Any idea in what area I should start looking? A storage issue?
I am just answering this question in case some people make the same non-technical mistake again:
I tried to scp all the files in the /data/db directory to the server. As there are many files (dbname.1 to dbname.55, about 100GB), the transfer was interrupted in the middle (the last successful file was dbname.22), so I restarted it and uploaded dbname.23 to dbname.55. When I ran queries in the mongo client, it worked in some cases and failed in others, showing the same error message as in the question. I thought some file might have been broken in the transfer, but the md5 checks were all fine. Only after I spent a long time finishing all the md5 checks did I find the reason.
It turned out that scp transfers the files in lexicographic order, so it uploads dbname.21 to dbname.29 right after dbname.2; that means dbname.3 to dbname.9 had not been uploaded when the transfer was interrupted, and my numeric restart skipped them entirely. I am going to upload them, and this should solve the problem.
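As a side note (an assumption on my part, not part of the original transfer): a quick existence check before starting mongod would have caught the gap, and rsync can resume an interrupted copy without skipping files:
for i in $(seq 1 55); do [ -f /data/db/dbname.$i ] || echo "missing: dbname.$i"; done
rsync -av /local/data/db/ user@server:/data/db/   # re-sends only what is missing or changed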
I ran into a variant of this today as well. Mysteriously, one of my data files disappeared (or didn't make it in a migration from another server). None of the repair/recovery procedures would work; they all failed on the same error you reference. Luckily I have a separate mongod with a collection of the same name, so as a cheap hack I copied the (admittedly wrong) data file to the other server. I knew I wouldn't get any data back, but the repair tools (such as mongod --repair) were then able to work their magic. As expected, they recovered some data from the bad file I copied in, so I had to weed out some docs. Luckily it was the "mycollection.1" file, which is only 128MB.
I don't think this applies in your case, since the index of the missing data file your log mentions is ridiculously high. Your log is essentially saying it can't find /data/dbname/mycollection.8388701. You said your data set is only 400GB, so an index that high just doesn't make sense: you should have only roughly 200 data files, since most of them are 2GB each by default. What is the result of db.stats() (specifically the fileSize attribute)?
This mongolab blog entry helped me understand the data file structure.
My advice for where you should start looking:
Run the db.stats() command to get an idea of how big your data on disk actually is (see the sketch after this list).
Does it make sense for your server to be looking for a data file with a crazy high index? If not, the issue isn't really with storage, but with the extents and the metadata of your collection/database.
Do your repair tools work? If you have at least as much free disk space as the size of your data set (on disk), try the mongod --repair or db.repairDatabase() tools to start a repair. I'm assuming it won't work, since my repair attempts crashed with the same invalid file index requested error.
Try copying a "bad" file like I did that roughly matches what the missing file would look like (keeping in mind how the file sizes of the data files aren't all the same, do your best to match it up and try a repair). If this works, your data files will be cleaned up (but it does take a lot of disk space).
Hope that helps point you in the right direction.
In my case this happened in a development setting with MongoDB 3.6.20 on macOS 10.14.6. Another program restarted the Mac and closed any open terminals, including the one running the mongod process. After the OS restart, I could not restart mongod because of the Invariant failure. The error also mentioned a bad lockfile.
I was able to solve the issue with the following steps, yet I am not exactly sure which did the job:
remove the corrupted lock file: rm -rf data/db/mongod.lock
direct outcome: mongod still failed due to the Invariant failure, but at least there was no mention of the lockfile anymore.
run mongod --repair
direct outcome: the repair still failed due to the Invariant failure. The error output mentioned SocketException: Address already in use (an alternative to rebooting is sketched after these steps).
restart the machine again to free the socket.
direct outcome: mongod starts and runs without problems. Yay.
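As a note on the last step (an assumption on my part, I did not try it): instead of rebooting, the process still holding the socket can usually be found and stopped directly, assuming the default port 27017:
lsof -i :27017               # shows the PID bound to the port
kill $(lsof -t -i :27017)    # stop the stale listener, then retry mongod --repair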
The first successful mongod run after the issue gave the following output:
[ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost.
Thus, it runs smoothly again. Maybe I was fortunate. I hope the same approach helps some of you.
I am currently experimenting with MongoDB replica set mechanism.
I already have a working standalone Mongo server with a main database of about 20GB of data.
I decided to convert this mongo server to a primary replica set server, then added a 2nd machine with a similar configuration (but a newer mongo version), as a secondary replica set server.
This works fine, all data is replicated to the secondary as expected.
But I would like to perform some alteration operations on the data (my data model has changed and I need to, for example, rename some properties or convert references to a simple ObjectId, things like that). At the same time, I would like to update the first server, which runs an old version (2.4), to the latest version available (2.6).
So I decided to follow the instructions on the MongoDB website to perform maintenance on replica set members.
shut down the secondary server. (ok)
restart server as standalone on another port (both servers usually run on 27017)
mongod --dbpath /my/database/path --port 37017
And then, the server never restarts correctly and I get this:
2014-10-03T08:20:58.716+0200 [initandlisten] opening db: myawesomedb
2014-10-03T08:20:58.735+0200 [initandlisten] myawesomedb Assertion failure _name == nsToDatabaseSubstring( ns ) src/mongo/db/catalog/database.cpp 472
2014-10-03T08:20:58.740+0200 [initandlisten] myawesomedb 0x11e6111 0x1187e49 0x116c15e 0x8c2208 0x765f0e 0x76ab3f 0x76c62f 0x76cedb 0x76d475 0x76d699 0x7fd958c3eec5 0x764329
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11e6111]
/usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x1187e49]
/usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0x17e) [0x116c15e]
/usr/bin/mongod(_ZN5mongo8Database13getCollectionERKNS_10StringDataE+0x288) [0x8c2208]
/usr/bin/mongod(_ZN5mongo17checkForIdIndexesEPNS_8DatabaseE+0x19e) [0x765f0e]
/usr/bin/mongod() [0x76ab3f]
/usr/bin/mongod(_ZN5mongo14_initAndListenEi+0x5df) [0x76c62f]
/usr/bin/mongod(_ZN5mongo13initAndListenEi+0x1b) [0x76cedb]
/usr/bin/mongod() [0x76d475]
/usr/bin/mongod(main+0x9) [0x76d699]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fd958c3eec5]
/usr/bin/mongod() [0x764329]
2014-10-03T08:20:58.756+0200 [initandlisten] exception in initAndListen: 0 assertion src/mongo/db/catalog/database.cpp:472, terminating
2014-10-03T08:20:58.757+0200 [initandlisten] dbexit:
What am I doing wrong?
Note that at this time, the first server is still running as primary member.
Thanks in advance!
I believe you are hitting a bug in VMware here (can you confirm you are using VMware VMs? confirmed) - I have seen it confirmed on Ubuntu and Fedora so far. The bug sometimes causes pieces of previous data to not be zeroed out when the MongoDB namespace files are created. That previous data essentially manifests as corruption in the namespace files and leads to the assertion you saw.
To work around the issue, a fix will be released in MongoDB versions 2.4.12 and 2.6.5+ as part of SERVER-15369. The OS/kernel-level fix will eventually percolate down from the kernel bug and the Ubuntu patch, but that may take some time to become available as an official update (hence the need for the workaround in MongoDB itself in the interim).
The issue only becomes apparent when you upgrade to 2.6, because of additional checking added in that version that was not present in 2.4; however, the corruption is still present on 2.4, just not reported.
If you still have your primary running, and it does not have the corruption, I would recommend syncing a secondary that is not on a VMware VM and/or taking a backup of your files as soon as possible for safety - there is no automatic way to fix this corruption right now.
You can also look at using version 2.6.5 once it is released (2.6.5 rc4 is available as of writing this which includes the fix). You will still need to resync with that version off your good source to create a working secondary, but at least there will then be no corruption of the ns files.
Updates:
Version 2.6.5 which includes the fix mentioned was released on October 9th
Version 2.4.12 which includes the fix was released on October 16th
Official MongoDB Advisory: https://groups.google.com/forum/#!topic/mongodb-announce/gPjazaAePoo
I am using MongoDB 2.4.8 on a 64-bit machine with 3 servers as a replica set, for which I have currently disabled journaling on my development box.
Durability is not so important for our application, which is the reason I have disabled the journaling option.
I see only one advantage of journaling: in case of an unclean shutdown, we don't have to issue a repair command, as journaling will take care of it.
To produce this unclean shutdown, I killed the mongo replica process using kill -9 <mongo process id>. I then just removed the mongo lock files and restarted the mongo primary, secondary and arbiter servers, and everything started fine.
My question is: when should we actually issue the repair command (since removing the locks and restarting works)?
Please excuse me if the question is too dumb; I wanted to know the risk of disabling journaling in production.
The repairDatabase command checks your whole database for corrupted data and discards that data so the rest becomes usable again.
This can become necessary after an unclean shutdown. In your case the shutdown didn't appear to corrupt any data (or maybe it did, but it hasn't become apparent yet because the data in question hasn't been accessed). But that doesn't mean this will always be the case. Was your database actually doing anything at that moment? When the database is idle or only performing read operations, there is usually not much to worry about. But when it is in the middle of a large write operation, a sudden shutdown without journaling can be much more troublesome.
Another scenario where a database could be corrupted and repairDatabase could help is a physical malfunction of the storage medium or a corruption of the underlying filesystem.
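For reference, a repair can be started either from the command line or from the mongo shell (a sketch; the dbpath is whatever your deployment uses):
mongod --dbpath /data/db --repair            # offline repair of all databases
mongo mydb --eval 'db.repairDatabase()'      # online repair of a single database on a running server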
Important note regarding replica-sets: When you have a replica-set, and only one node is corrupted, then you should rather remove that node and rebuild it from the other members of the replica-set. RepairDatabase will destroy any corrupted data. Restoring from a replica-set will not.
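Rebuilding a corrupted member is essentially a wipe-and-restart (a sketch, assuming a systemd-managed package install with dbPath /var/lib/mongo; the member then performs a full initial sync from the other nodes):
sudo systemctl stop mongod
sudo rm -rf /var/lib/mongo/*      # only on the corrupted member!
sudo systemctl start mongod       # it rejoins the replica set and resyncs its data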
I installed MongoDB both on Win 7 and on Mac OS X, and both places, I got mongod (the server) and mongo (the client).
But in both places, running mongod fails if I double-click the file, and the error message disappeared too quickly for me to see anything. (It was better on the Mac because Terminal didn't exit automatically and showed the error message.)
It turned out it was because /data/db didn't exist, and the QuickStart guide says: By default MongoDB will store data in /data/db, but it won't automatically create that directory.
My big question is this: MongoDB clearly wants a lot of people using it (as do many other products), so why would it not automatically create the folder for you if it didn't exist? Creating it would do little harm... especially since you could state so in the user agreement. The question is why. I can think of one strange reason, but it may be too strange to list here...
One good reason would be that you do not want it in /data/db. In that case, you want it to fail with an error when you forget to specify the correct directory on the command line. The same goes for misspelled directory names. If MongoDB just created a new directory and started serving from there, that would not be very helpful. It would be quite confusing, because databases and collections are auto-created, so there would not even be errors when you try to access them.
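In practice the fix is a one-time step: create the directory yourself (or pass an explicit path) before starting the server. A sketch for macOS/Linux (on Windows the equivalent is md \data\db):
sudo mkdir -p /data/db
sudo chown $(whoami) /data/db     # make it writable by the user that will run mongod
mongod --dbpath /data/db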