I have a problem setting the maximum incoming connections for my MongoDB.
I ran ulimit -n 1000000 and restarted mongod; the last ping in my MMS dashboard shows:
"maxIncomingConnections": 1000000,
however:
"connections": {
"current": 701,
"totalCreated": 712,
"available": 118
},
As you can see, current + available is 819, which is the default taken from the system limit (80% of 1024).
Any ideas?
I don't know which user you ran the ulimit command as, but keep in mind that it is only valid for the current user in the current shell session.
A better approach is to set the open file limit in /etc/security/limits.conf like this:
# The max is 64k anyway, and there is a hard limit
# of 20k connections in MongoDB anyway
# 40k open files should be more than enough
# unless you have _very_ large disks and a _huge_ number of datafiles
mongodb soft nofile 40000
mongodb hard nofile 64000
# Make sure we don't get throttled CPU-wise
mongodb soft cpu unlimited
mongodb hard cpu unlimited
# This is kind of useless, since the maximum size of
# a file created by MongoDB is 2GB for now,
# but it is safe and the docs say to do so
mongodb soft fsize -1
mongodb hard fsize -1
# We don't want our resident set size to be limited...
mongodb soft rss -1
mongodb hard rss -1
# ... nor the address space
mongodb soft as -1
mongodb hard as -1
# Last but not least, we want the number of processes at a reasonable limit
mongodb soft nproc 32000
mongodb hard nproc 32000
However, this is only a fallback in case you start MongoDB manually, since the upstart script should set the corresponding limits. After adding these values, a reboot is needed, iirc. The number of available connections should then increase.
Note: Keep in mind that each connection gets about 1 MB of stack allocated on the server, which then cannot be used for holding indices and data in RAM.
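To verify which limits are actually applied to the running mongod process, a quick check (assuming a single mongod is running):

cat /proc/$(pidof mongod)/limits | grep -i 'open files'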
Scenario:
I had a standalone MongoDB server v3.4.x with several DBs and collections. As the plan was to upgrade to the latest 4.2.x, I created a mongodump of all DBs.
I then created a sharded cluster consisting of a config server (replica set), a shard-1 server (replica set) and a shard-2 server (replica set) [MongoDB v4.2.x].
Issue:
Now when I try to restore the dump, it only partially restores every time. If I try to restore a single DB, it fails with the same error. However, restoring a specific collection of a specific DB always works fine. The problem is that there are many collections across many DBs; I cannot do it for all of them individually, and every time it fails at a different progress percentage/collection/DB.
Error:
2020-02-07T19:07:03.822+0000 [#####################...] myproduct_new.chats 68.1MB/74.8MB (91.0%)
2020-02-07T19:07:03.851+0000 [##########              ] myproduct_new.metaCrashes 216MB/502MB (42.9%)
2020-02-07T19:07:03.876+0000 [##################      ] myproduct_new.feeds 152MB/196MB (77.4%)
panic: close of closed channel

goroutine 25 [running]:
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreCollectionToDB(0xc0001a0000, 0xc000234540, 0xc, 0xc00023454d, 900, 0x7fa5503e21f0, 0xc00020b890, 0x1f66e326, 0x0, ...)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:503 +0x49b
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntent(0xc0001a0000, 0xc00022f9e0, 0x0, 0x0, 0x0, 0x0)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:311 +0xbe9
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntents.func1(0xc0001a0000, 0xc000146420, 0x3)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:126 +0x1cb
created by github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntents
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:109 +0x12d
ubuntu@ip-00-xxx-xxx-00:/usr/local/backups/Dev_backup_07-02-2020$
Question:
I am connecting to mongos and trying to restore. Sharding is not yet enabled for any DB. Can anyone shed some light on what's going wrong, or on how to restore the dump?
I had the same problem, and then found out that it was caused by my MongoDB replica set.
Check the rs.status() of your database.
If you get the message
Our replica set config is invalid or we are not a member of it
try this answer
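As a minimal sketch of that check from the mongo shell (the host name is a placeholder):

mongo --host <replica-set-member> --eval "rs.status()"

A healthy member reports ok: 1 and lists its members; the error message quoted above means the replica set configuration has to be fixed before the restore will work.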
We faced exactly the same issue with the same spec when trying to restore from mongodump. There is no single definite reason, but it is best to check the factors below.
Check your dump size (BSON) vs. the allocated free disk space on the cluster. The dump size can be 2x to 3x the size of your core Mongo data folder (which is compressed on top of the BSON).
Check the oplog size configured during cluster creation. For a first-time migration, allocate 10-15% of your free disk space as the oplog size; you can change this after the migration. This lets secondaries lag a bit longer and catch up on sync faster from the oplog. For example: 3.5 GB allocated for the oplog out of a 75 GB disk, with 45 GB of total (compressed) data. In real-world usage (post migration), size the oplog to hold 1-2 hours of write volume.
Your total disk space requirement is then roughly: dump folder size + oplog + 6 GB (default Mongo installation + system extras).
Note: If you cannot afford to allocate the full dump folder size, you have to run the restore in batches (by DB or collection, using the nsInclude option; see the sketch at the end of this answer), giving Mongo time to compress after importing the BSON. This should only take minutes.
After the restore, Mongo will shrink the data and the disk usage will end up more or less matching your standalone data folder size.
If this is not planned for and your disk space is under-provisioned during the migration, Mongo will try to increase disk space while the migration is running. It cannot do this while the primary is in use, so it increases it on a secondary and steps the current primary down to secondary, which can cause the error above. You can also check the hardware/status vitals to see whether the servers changed state from primary to secondary.
Also try NOT to enable server auto-scaling while creating the cluster. I don't have a definite rationale, but you don't want any background actions upgrading the hardware (say M30 to M40 because the CPU is busy) in the middle of a migration (it happened to me).
Finally, as good practice, restore large databases, especially those with a single large (non-sharded) collection > 4 GB, separately. I had 40+ DBs, 20% of them > 15 GB in dump BSON size, each with 1 or 2 big collections of > 4 GB and millions of documents. Once I separated them, it gave Mongo breathing space to bulk insert and compress, at the cost of a few minutes of elapsed time. Mongo restore happens at the collection level.
So instead of taking 40-50 minutes to restore, it took 90-120 minutes after some mock runs and ordering.
If you have time to plan, try this out. Please share any other learnings.
Key takeaways: check your dump folder size and large collection sizes.
Rate of disk writes, oplog lag, RAM, CPU, and IO are good KPIs to watch during the dump/restore.
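As a sketch of the batching mentioned in the note above, you can restore one database or one collection at a time with mongorestore's --nsInclude option; the host and dump path are placeholders, and the namespaces are taken from the error output above:

# Restore a single database from the dump
mongorestore --host <mongos-host>:27017 --nsInclude="myproduct_new.*" /path/to/dump
# Restore a single collection
mongorestore --host <mongos-host>:27017 --nsInclude="myproduct_new.chats" /path/to/dump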
I'm trying to move a MongoDB database with a little over 100 million documents from a server in AWS to a server in GCP. mongodump worked, but mongorestore keeps breaking with an error:
error running create command: 24: Too many open files
How can this be done?
I don't want to transfer it by writing a script on the AWS server that fetches each document and pushes it to an API endpoint on the GCP server, because that would take too long.
Edit (adding more details)
I already tried setting ulimit -n to unlimited. It doesn't work, as GCP has a hardcoded limit that cannot be modified.
Looks like you are hitting the ulimit for your user. This is likely a function of some or all of the following:
Your user having the default ulimit (probably 256 or 1024 depending on the OS)
The size of your DB; MongoDB's use of memory-mapped files can result in a large number of open files during the restore process
The way in which you are running mongorestore, which can increase the concurrency and thereby the number of file handles that are open at the same time
You can address the number of open files allowed for your user by invoking ulimit -n <some number> to increase the limit for your current shell. The number you choose cannot exceed the hard limit configured on your host. You can also change the ulimit permanently; more details here. This is the root-cause fix, but it is possible that your ability to change the ulimit is constrained by AWS, so you might want to look at reducing the concurrency of your mongorestore process by tweaking the following settings:
--numParallelCollections int
Default: 4
Number of collections mongorestore should restore in parallel.
--numInsertionWorkersPerCollection int
Default: 1
Specifies the number of insertion workers to run concurrently per collection.
If you have chosen values for these other than 1 then you could reduce the concurrency (and hence the number of concurrently open file handles) by setting them as follows:
--numParallelCollections=1 --numInsertionWorkersPerCollection=1
Naturally, this will increase the run time of the restore process, but it might allow you to sneak under the currently configured ulimit. Just to reiterate, though: the root-cause fix is to increase the ulimit.
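Putting this together, a sketch of a restore invocation with minimal concurrency; the host, port and dump path are placeholders:

# Raise the soft open-files limit for this shell (cannot exceed the hard limit)
ulimit -n 64000
# Restore one collection at a time with a single insertion worker
mongorestore --host localhost:27017 \
    --numParallelCollections=1 \
    --numInsertionWorkersPerCollection=1 \
    /path/to/dump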
What's the command in mongo to check the current journal space used, or the limit that is set?
I am using Mongo version 3.2.8 and the WiredTiger storage engine.
What is the Journal space used by Mongo?
MongoDB uses a journal file size limit of 100 MB: WiredTiger creates a new journal file approximately every 100 MB of data, and when it creates a new journal file it syncs the previous one.
Source: Journaling
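To see how much disk space the journal currently uses, you can simply check the journal directory on disk (a sketch assuming the default dbPath of /var/lib/mongodb):

du -sh /var/lib/mongodb/journal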
How can I reduce the Journal file size?
Yes - there is a way to minimize the default size of the journal
files, subject to a couple of caveats. From the MongoDB configuration
documentation:
To reduce the impact of the journaling on disk usage, you can leave journal enabled, and set smallfiles to true to reduce the size of the
data and journal files.
Here is the smallfiles config information:
Set to true to modify MongoDB to use a smaller default data file size. Specifically, smallfiles reduces the initial size for data files
and limits them to 512 megabytes. The smallfiles setting also reduces
the size of each journal file from 1 gigabyte to 128 megabytes.
Use the smallfiles setting if you have a large number of databases that each hold a small quantity of data. The smallfiles setting can
lead mongod to create many files, which may affect performance for
larger databases.
Source: Run a MongoDB configuration server without 3GB of journal files (answer by platforms)
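For reference, a minimal sketch of how smallfiles would look in a YAML-style mongod.conf; note that this option belongs to the MMAPv1 storage engine:

storage:
  mmapv1:
    smallFiles: true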
I am using Postgres for my product. While doing a batch insert using Slick 3, I am getting the error message:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
My batch insert operation involves many thousands of records.
The max connections for my Postgres is 100.
How do I increase the max connections?
Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.
Considerations
max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
Before increasing your connection count you might need to scale up your deployment. But before that, you should consider whether you really need an increased connection limit.
Each PostgreSQL connection consumes RAM for managing the connection or the client using it. The more connections you have, the more RAM you will be using that could instead be used to run the database.
A well-written app typically doesn't need a large number of connections. If you have an app that does need a large number of connections, then consider using a tool such as pg_bouncer (PgBouncer), which can pool connections for you. As each connection consumes RAM, you should be looking to minimize their use.
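For illustration only, a minimal PgBouncer configuration could look like the sketch below; the database name, paths and pool sizes are made-up examples, not recommendations:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20

The application then connects to port 6432 instead of 5432, and PgBouncer multiplexes those client connections onto a much smaller pool of real PostgreSQL connections.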
How to increase max connections
1. Increase max_connection and shared_buffers
in /var/lib/pgsql/{version_number}/data/postgresql.conf
change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.
If you have a system with 1GB or more of RAM, a reasonable starting
value for shared_buffers is 1/4 of the memory in your system.
it's unlikely you'll find using more than 40% of RAM to work better
than a smaller amount (like 25%)
Be aware that if your system or PostgreSQL build is 32-bit, it might
not be practical to set shared_buffers above 2 ~ 2.5GB.
Note that on Windows, large values for shared_buffers aren't as
effective, and you may find better results keeping it relatively low
and using the OS cache more instead. On Windows the useful range is
64MB to 512MB.
2. Change kernel.shmmax
You need to increase the kernel max segment size so that it is slightly larger than shared_buffers.
In /etc/sysctl.conf, set the parameter as shown below. It will take effect at the next reboot (the following line sets the kernel max to 96 MB):
kernel.shmmax=100663296
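To apply the new value without waiting for a reboot (a sketch; requires root):

sudo sysctl -p          # reload /etc/sysctl.conf
sysctl kernel.shmmax    # verify the new value

Then restart PostgreSQL so the new shared_buffers and max_connections values take effect.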
References
Postgres Max Connections And Shared Buffers
Tuning Your PostgreSQL Server
Adding to Winnie's great answer,
If you are not able to find the postgresql.conf file location in your setup, you can always ask Postgres itself:
SHOW config_file;
For me, changing max_connections alone did the trick.
EDIT: From @gies0r: in Ubuntu 18.04 it is at
/etc/postgresql/11/main/postgresql.conf
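After editing the file and restarting PostgreSQL, you can confirm the value actually in effect with a quick check from the shell (assuming the usual postgres system user):

sudo -u postgres psql -c "SHOW max_connections;"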
If your postgres instance is hosted by Amazon RDS, Amazon configures the max connections for you based on the amount of memory available.
Their documentation says you get 112 connections per 1 GB of memory (with a limit of 5000 connections no matter how much memory you have), but we found we started getting error messages closer to 80 connections in an instance with only 1 GB of memory. Increasing to 2 GB let us use 110 connections without a problem (and probably more, but that's the most we've tried so far.) We were able to increase the memory of an existing instance from 1 GB to 2 GB in just a few minutes pretty easily.
Here's the link to the relevant Amazon documentation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.MaxConnections
Change the max_connections variable in the postgresql.conf file, located in /var/lib/pgsql/data or /usr/local/pgsql/data/.
Locate the postgresql.conf file with the command below:
locate postgresql.conf
Edit the postgresql.conf file with the command below:
sudo nano /etc/postgresql/14/main/postgresql.conf
Change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
I am using MongoDB v1.8.1. My server has 4 GB of memory, but MongoDB is using more than 3 GB. Is there a memory limitation option in MongoDB?
If you are running MongoDB 3.2 or a later version, you can limit the WiredTiger cache as shown below.
In /etc/mongod.conf add the wiredTiger part
...
# Where and how to store data.
storage:
dbPath: /var/lib/mongodb
journal:
enabled: true
wiredTiger:
engineConfig:
cacheSizeGB: 1
...
This will limit the cache size to 1 GB; more info in the docs.
This solved the issue for me, running Ubuntu 16.04 and MongoDB 3.2.
PS: After changing the config, restart the mongo daemon:
$ sudo service mongod restart
# check the status
$ sudo service mongod status
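To confirm the new limit is in effect, you can ask the running server itself (a sketch using the mongo shell):

mongo --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'
# should report 1073741824 (1 GB) for the config above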
Starting in 3.2, MongoDB uses WiredTiger as the default storage engine. Previous versions used MMAPv1 as the default storage engine.
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.
In MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:
60% of RAM minus 1 GB, or
1 GB.
For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting (For MongoDB 3.0, the WiredTiger internal cache uses either 1 GB or half of the installed physical RAM, whichever is larger).
For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.
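For example, on a server with 16 GB of RAM the 3.2 default works out to max(0.6 × 16 GB − 1 GB, 1 GB) ≈ 8.6 GB, whereas the 3.0 default would have been max(16 GB / 2, 1 GB) = 8 GB.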
To limit the WiredTiger cache, add the following line to the config file:
wiredTigerCacheSizeGB = 1
This question has been asked a couple of times ...
See this related question/answer (quoted below) ... how to release the caching which is used by Mongodb?
MongoDB will (at least seem) to use up a lot of available memory, but it actually leaves it up to the OS's VMM to tell it to release the memory (see Caching in the MongoDB docs.)
You should be able to release any and all memory by restarting MongoDB.
However, to some extent MongoDB isn't really "using" the memory.
For example from the MongoDB docs Checking Server Memory Usage ...
Depending on the platform you may see
the mapped files as memory in the
process, but this is not strictly
correct. Unix top may show way more
memory for mongod than is really
appropriate. The Operating System (the
virtual memory manager specifically,
depending on OS) manages the memory
where the "Memory Mapped Files"
reside. This number is usually shown
in a program like "free -lmt".
It is called "cached" memory.
MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release; you will find some more information in these two questions:
MongoDB limit memory
MongoDB index/RAM relationship
Mongod start with memory limit (You can't.)
You can limit mongod process usage using cgroups on Linux.
Using cgroups, our task can be accomplished in a few easy steps.
Create control group:
cgcreate -g memory:DBLimitedGroup
(make sure the cgroups binaries are installed on your system; consult your favorite Linux distribution's manual for how to do that)
Specify how much memory will be available for this group:
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
This command limits memory to 16 GB (conveniently, this limits the memory used for both malloc allocations and the OS cache).
Now it is a good idea to drop pages that are already in the cache:
sync; echo 3 > /proc/sys/vm/drop_caches
And finally assign a server to created control group:
cgclassify -g memory:DBLimitedGroup `pidof mongod`
This will assign the running mongod process to a group limited to only 16 GB of memory.
source: Using Cgroups to Limit MySQL and MongoDB memory usage
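To double-check that the limit and the process assignment took effect (assuming cgroups v1 and the group name used above):

cat /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes   # should print 17179869184 (16G)
cat /sys/fs/cgroup/memory/DBLimitedGroup/tasks                   # should include the mongod PID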
I don't think you can configure how much memory MongoDB uses, but that's OK (read below).
To quote from the official source:
Virtual memory size and resident size will appear to be very large for the mongod process. This is benign: virtual memory space will be just larger than the size of the datafiles open and mapped; resident size will vary depending on the amount of memory not used by other processes on the machine.
In other words, Mongo will let other programs use memory if they ask for it.
mongod --wiredTigerCacheSizeGB 2
Adding to the top voted answer, in case you are on a low memory machine and want to configure the wiredTigerCache in MBs instead of whole number GBs, use this -
storage:
wiredTiger:
engineConfig:
configString : cache_size=345M
Source - https://jira.mongodb.org/browse/SERVER-22274
For Windows it seems possible to control the amount of memory MongoDB uses; see this tutorial at Captain Codeman:
Limit MongoDB memory use on Windows without Virtualization
Not really; there are a couple of tricks to limit memory. On Windows, for example, you can use the Windows System Resource Manager (WSRM), but generally Mongo works best on a dedicated server where it is free to use memory without much contention with other systems.
Although the operating system will try to allocate memory to other processes as they need it, in practice this can lead to performance issues if other systems have high memory requirements too.
If you really need to limit memory, and only have a single server, then your best bet is virtualization.
This can be done with cgroups, by combining knowledge from these two articles:
https://www.percona.com/blog/2015/07/01/using-cgroups-to-limit-mysql-and-mongodb-memory-usage/
http://frank2.net/cgroups-ubuntu-14-04/
You can find here a small shell script which will create config and init files for Ubuntu 14.04:
http://brainsuckerna.blogspot.com.by/2016/05/limiting-mongodb-memory-usage-with.html
Just like that:
sudo bash -c 'curl -o- http://brains.by/misc/mongodb_memory_limit_ubuntu1404.sh | bash'
There is no reason to limit the MongoDB cache, as by default the mongod process will take half of the memory on the machine and no more. The default storage engine is WiredTiger. "With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache."
You are probably looking at top and assuming that Mongo is using all the memory on your machine. That is virtual memory. Use free -m:
total used free shared buff/cache available
Mem: 7982 1487 5601 8 893 6204
Swap: 0 0 0
Only when the available metric goes to zero is your computer swapping memory out to disk. In that case your database is too large for your machine. Add another mongodb instance to your cluster.
Use these two commands in the mongo shell to get information about how much virtual and physical memory MongoDB is using:
var mem = db.serverStatus().tcmalloc;
mem.tcmalloc.formattedString
------------------------------------------------
MALLOC: 360509952 ( 343.8 MiB) Bytes in use by application
MALLOC: + 477704192 ( 455.6 MiB) Bytes in page heap freelist
MALLOC: + 33152680 ( 31.6 MiB) Bytes in central cache freelist
MALLOC: + 2684032 ( 2.6 MiB) Bytes in transfer cache freelist
MALLOC: + 3508952 ( 3.3 MiB) Bytes in thread cache freelists
MALLOC: + 6349056 ( 6.1 MiB) Bytes in malloc metadata
MALLOC: ------------
MALLOC: = 883908864 ( 843.0 MiB) Actual memory used (physical + swap)
MALLOC: + 33611776 ( 32.1 MiB) Bytes released to OS (aka unmapped)
MALLOC: ------------
MALLOC: = 917520640 ( 875.0 MiB) Virtual address space used
MALLOC:
MALLOC: 26695 Spans in use
MALLOC: 22 Thread heaps in use
MALLOC: 4096 Tcmalloc page size
One thing you can limit is the amount of memory MongoDB uses while building indexes. This is set using the maxIndexBuildMemoryUsageMegabytes setting. An example of how it's set is below:
mongo --eval "db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 70000 } )"
This worked for me on an AWS instance, at least to clear the cached memory Mongo was using. After this you can see the effect of your settings.
ubuntu@hongse:~$ free -m
total used free shared buffers cached
Mem: 3952 3667 284 0 617 514
-/+ buffers/cache: 2535 1416
Swap: 0 0 0
ubuntu@hongse:~$ sudo su
root@hongse:/home/ubuntu# sync; echo 3 > /proc/sys/vm/drop_caches
root@hongse:/home/ubuntu# free -m
total used free shared buffers cached
Mem: 3952 2269 1682 0 1 42
-/+ buffers/cache: 2225 1726
Swap: 0 0 0
If you're using Docker: reading the Docker image documentation (in the Setting WiredTiger cache size limits section), I found out that the image by default lets MongoDB consume all available memory regardless of any memory limits you may have imposed on the container, so you have to limit the RAM usage directly in the DB configuration.
Create your mongod.conf file:
# Limits cache storage
storage:
wiredTiger:
engineConfig:
cacheSizeGB: 1 # Set the size you want
Now you can assign that config file to the container: docker run --name mongo-container -v /path/to/mongod.conf:/etc/mongo/mongod.conf -d mongo --config /etc/mongo/mongod.conf
Alternatively you could use a docker-compose.yml file:
version: '3'
services:
mongo:
image: mongo:4.2
# Sets the config file
command: --config /etc/mongo/mongod.conf
volumes:
- ./config/mongo/mongod.conf:/etc/mongo/mongod.conf
# Others settings...