How to know when my MongoDB database overhead reaches its limit?

I installed a MongoDB database on my server. My server is 32-bit and I can't change that soon.
When you use MongoDB on a 32-bit architecture you have a limit of about 2.5 GB of data, as mentioned in this MongoDB blog post.
The thing is that I have several databases. So how can I know whether or not I am close to this limit?

Mongo will start throwing assertions when you hit the limit. If you look at the logs, it will have a bunch of error messages (helpfully) telling you to switch to a 64-bit machine.
Using the mongo shell, you can say:
> use admin
> db.runCommand({listDatabases : 1})
to get a general idea of how close to the limit you are. Sizes are in bytes.
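If you want a single number rather than eyeballing the per-database sizes, the same command also reports a totalSize field (a quick sketch; run it in the mongo shell):
> use admin
> var res = db.runCommand({listDatabases : 1})
> res.totalSize                          // bytes used by all databases combined
> res.totalSize / (1024 * 1024 * 1024)   // the same, roughly in GB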

Related

One database in MongoDB is not responding

I have several databases under one mongodb instance (v4.0.26) on an Ubuntu 18.04 server.
One of the databases has suddenly started behaving inconsistently in terms of connectivity.
I have checked the resource consumption on the server: 7 GB of RAM free and 34 GB of storage available.
In the mongo shell (on the server), when I connect to that particular database and run db.getCollectionNames(), it just hangs. This behavior is not consistent either, but every other database in the instance works without any problem.
I suspect there could be a corrupt document in one of the collections that is causing this. Looking for guidance on debugging this issue.
P.S.: Losing the data in the db might cost me my job.
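A first diagnostic step (a sketch, not a fix, assuming you can still reach the admin database from a second shell while the call is hanging): inspect in-progress operations to see what the hung command is blocked on.
> use admin
> // list operations that have been running for more than 5 seconds
> db.currentOp({"secs_running": {$gte: 5}})
> // a stuck operation can then be terminated by its opid
> db.killOp(opid)   // opid is a placeholder taken from the currentOp output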

mongodb tables disappeared somehow

I am using mongodb 3.2.11 on Ubuntu Zesty 17.04 and I am connecting to mongodb from Node.js 4.6 over HTTPS; the database server is bound to the loopback address (127.0.0.1) and I have created a user besides admin for read/write access to the database.
Nevertheless, most of my tables were somehow dropped; only the users (empty) and sessions tables were left.
I grepped my logs for "drop" with grep -r "drop" and got no results. Although I am using fairly recent versions of the software and have taken some security measures, they don't seem to be enough. At this point I don't need to recover the data, but I would like to know what else I should be looking at.
Try running "show collections" in the mongo shell and see whether the collections are listed after doing "use dbnamehere".
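For example (a minimal check; dbnamehere is a placeholder for the affected database):
> use dbnamehere
> show collections
> // db.stats() shows whether the data files still hold any data at all
> db.stats()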

Incredibly low GridFS performance using MongoDB 3.0.0 and Mongofiles

I have a MongoDB database with a GridFS collection containing hundreds of thousands of files (345,073, to be precise, and about 100 GB in volume).
On MongoDB 2.6.8 it takes a fraction of a second to list the files using the native mongofiles and connecting to mongod. This is the command I use:
mongofiles --db files list
I just brewed and linked MongoDB 3.0.0, and suddenly the same operation takes more than five minutes to complete, if it completes at all. I have to kill the query most of the time, as it drives two of my CPU cores to 100%. The log file does not show anything irregular. I rebuilt the indexes to no avail. I also tried the same thing with my other GridFS collections in other databases, each with millions of files, and I encounter the same issue.
Then I uninstalled 3.0.0 and relinked 2.6.8, and everything is back to normal (using the exact same data files).
I am running MongoDB on Yosemite, and I reckon the problem might be platform-specific. But is there anything I have omitted and should take into consideration? Or have I really discovered a bug that I must report?
Having the same problem here; for me, running mongofiles 2.6 from a Docker image fixed the problem. It seems they broke something in the rewrite.
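A sketch of that workaround, assuming the official mongo:2.6 Docker image and a mongod reachable from the container (host networking as shown works on Linux; adjust the connection flags on other platforms):
# run the 2.6 mongofiles client from the official image against the
# same database, without touching the locally installed 3.0.0 binaries
docker run --rm --net=host mongo:2.6 mongofiles --db files list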

MongoDB 2 GB limit - can't compact database

I have been adding files to GridFS in my 32-bit Mongo database. It eventually failed when the total size of all the Mongo files hit 2 GB. So I then deleted the files in GridFS. I've tried running the repairDatabase() command, but it fails, saying "mongo requires 64 bit for larger datasets". I get the same error trying to run the compact command against GridFS.
So I've hit the 2 GB limit, but it won't let me compact or repair because it doesn't have space. Talk about a Catch-22!
What do I do?
Edit
This is an immediate problem I have - how do I compact the database right now?
I think the only recourse is to upgrade to a 64-bit OS.
I had the same problem with my database and solved it this way. First I created a 64-bit Amazon EC2 instance and moved the database files over from the 32-bit instance via a plain copy. Then I did all the needed cleanup in the database on the 64-bit instance and made a dump with mongodump. I moved this dump back to the 32-bit instance and restored the database from it.
If you need to restore a database with the same name that you had before, you can just rename your old db files in the dbpath (the files have the database name in their names).
And of course, you should still upgrade to 64-bit later. MongoDB on a 32-bit OS is poorly supported.
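A sketch of the dump-and-restore half of that procedure (mydb and the paths are placeholders; the dump runs on the 64-bit instance after cleanup, the restore back on the 32-bit one):
# on the 64-bit instance, after deleting the unneeded GridFS files
mongodump --db mydb --out /tmp/dump
# copy /tmp/dump/mydb to the 32-bit instance, then restore it
mongorestore --db mydb /tmp/dump/mydb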
A shot in the dark here... you could try opening a slave off the master (on 64-bit) and see if you can force a replication over to the slave, essentially backing up your data. I have no idea whether this would actually work, as it's pretty clear that 32-bit has a 2 GB limit (all their docs warn about this :( ), but I thought I'd at least post a somewhat potentially creative solution.
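For reference, a sketch of what that would look like with the legacy master/slave replication flags of that era (host names and dbpaths are placeholders; this mechanism was later deprecated in favor of replica sets, and whether the 64-bit slave can complete an initial sync from a full 32-bit master is exactly the open question above):
# on the existing 32-bit server
mongod --master --dbpath /data/db
# on the 64-bit machine, pull the data over
mongod --slave --source old-32bit-host:27017 --dbpath /data/db-slave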

Is the MongoDB 32-bit limitation for a single database?

I am using mongodb v2.0. I have read about the 32-bit mongodb limitation of 2 GB, and it is this 2 GB limitation that baffles me. Let me explain our scenario:
When a database reaches 2 GB, is it possible to use a different database name in the same instance, so that each database gets 2 GB of its own? Or can we run different instances of mongodb listening on different ports? If that is possible, can we keep creating new databases until each one reaches 2 GB in size? In other words, can we use multiple databases of 2 GB each with 32-bit mongodb on 32-bit machines?
Thanks,
sampath
The 2 GB is the storage limit for the whole mongodb server (all databases in a single instance combined), not per database. See the FAQ: http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F
But maybe this is your solution: Database over 2GB in MongoDB
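Since the limit applies per mongod process, the multiple-instances idea from the question does work in principle: each instance gets its own port and dbpath, and each is then subject to its own ~2 GB cap (a sketch; ports and paths are placeholders):
mongod --port 27017 --dbpath /data/db1
mongod --port 27018 --dbpath /data/db2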