How to increase connections in MongoDB

I start MongoDB by running a script startmongo.sh, which starts all of the following in order:
./mongodb1.sh
./mongodb2.sh
./mongod3_arbiter.sh
Each of mongodb1.sh, mongodb2.sh, and mongodb3_arbiter.sh contains the corresponding command:
mongod --config mongod1.conf
mongod --config mongod2.conf
mongod --config mongod3_arbiter.conf
I want to increase the connection limit to 10000, so I wanted to apply ulimit -n 10000.
My question is: do I need to specify this setting in all of the above conf files?
Right now each conf file consists of:
replSet = test
fork = true
port = 27017
dbpath = /mongologs/mongodb3
logpath = /mongologs/mongo/mongodb3
rest = true
Please let me know, thanks in advance.

Hi friend, I think you need to set maxPoolSize.
Please have a look at the MongoDB docs.
There it is described as:
uri.maxPoolSize
The maximum number of connections in the connection pool. The default value is 100.
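A minimal sketch of setting it in the driver's connection string (the host and value here are assumptions, not recommendations):
mongodb://localhost:27017/?maxPoolSize=200
Note that maxPoolSize is a client-side driver setting: it caps connections per client instance and does not raise the server's own limits.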

These are set in /etc/limits.conf (Debian based) or /etc/security/limits.conf (Red Hat based), depending on which Linux distribution you have.
You are looking for the nofile attribute.
<domain> <type> <item> <value>
* soft nofile 10000
* hard nofile 10000
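Note that ulimit -n is a per-process limit inherited from the shell that launches mongod, not a mongod.conf option, so another approach is to raise it once in the wrapper script instead of per conf file. A sketch using the scripts from the question:
ulimit -n 10000   # raise the open-file limit for this shell and its children
./mongodb1.sh
./mongodb2.sh
./mongod3_arbiter.sh
The limits.conf entries above make the higher limit the default for new login sessions.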

Related

Are these parameters correct for pgbouncer.ini and postgresql.conf?

I have a pgbouncer.ini file with the below configuration:
[databases]
test_db = host=localhost port=5432 dbname=test_db
[pgbouncer]
logfile = /var/log/postgresql/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
listen_addr = 0.0.0.0
listen_port = 5433
unix_socket_dir = /var/run/postgresql
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
admin_users = postgres
#pool_mode = transaction
pool_mode = session
server_reset_query = RESET ALL;
ignore_startup_parameters = extra_float_digits
max_client_conn = 25000
autodb_idle_timeout = 3600
default_pool_size = 250
max_db_connections = 250
max_user_connections = 250
and I have this in my postgresql.conf file:
max_connections = 2000
Does the max_connections = 2000 in my postgresql.conf hurt performance, or does it not matter because the connections are already handled by PgBouncer?
One more question: in the PgBouncer configuration, is listen_addr = 0.0.0.0 correct, or should it be listen_addr = *?
Is it better to set default_pool_size on PgBouncer equal to the number of CPU cores available on this server?
Should default_pool_size, max_db_connections, and max_user_connections all be set to the same value?
So the idea of using pgbouncer is to pool connections when you can't afford to have a higher number of max_connections in PG itself.
NOTE: Please DO NOT set max_connections to a number like 2000 just like that.
Let's start with an example: if you have a connection limit of 20 and your app or organization wants 1000 connections at a given time, that is where the pooler comes into the picture; in this specific case you want those 20 connections to serve the 1000 coming in from the application.
To understand how this actually works, let's take a step back and look at what happens when you do not have a connection pooler and rely only on the PG max_connections setting, which in our case is 20.
When a connection comes in from a client/application, the main PostgreSQL process (the postmaster) forks a child process for it. Each new connection therefore gets its own child process under the main postgres process, like so:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24379 postgres 20 0 346m 148m 122m R 61.7 7.4 0:46.36 postgres: sysbench sysbench ::1(40120)
24381 postgres 20 0 346m 143m 119m R 62.7 7.1 0:46.14 postgres: sysbench sysbench ::1(40124)
24380 postgres 20 0 338m 137m 121m R 57.7 6.8 0:46.04 postgres: sysbench sysbench ::1(40122)
24382 postgres 20 0 338m 129m 115m R 57.4 6.5 0:46.09 postgres: sysbench sysbench ::1(40126)
So once a connection request is sent, it is received by the postmaster process, which creates a child process at the OS level under the main parent process. That connection then lives indefinitely unless it is closed by the application or you have a timeout set for idle connections in PostgreSQL.
Here is where managing the connections with a given amount of compute becomes very costly once they exceed a certain limit: serving n connections has a compute cost, and beyond some point the OS cannot handle that many connections and contention appears at every level (memory, CPU, I/O).
What if you could reuse the already-spawned child processes (backends) when they are not doing any work? You would save the time and cost of creating new backends. This is where a pool of connections that are kept open to serve different client requests comes in; that is what pooling means.
So now you have only n server connections available, but the pooler can manage n+i client connections and serve their requests.
This is where PgBouncer helps to reuse connections. It can be configured with three types of pooling: session pooling, statement pooling, and transaction pooling. PgBouncer returns a server connection to the pool once the statement or transaction is done; only with session pooling does it hold the connection until the client disconnects.
So basically: lower the number of connections at the PG conf file level and tune the settings in pgbouncer.ini.
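To make that concrete, a minimal sketch of the idea; the numbers are illustrative assumptions, not recommendations for your workload:
# postgresql.conf: keep the server-side limit modest
max_connections = 100
; pgbouncer.ini: accept many clients, map them onto a small pool of server connections
[pgbouncer]
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000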
To answer the second part:
One more question: in the PgBouncer configuration, is listen_addr = 0.0.0.0 correct, or should it be listen_addr = *?
It depends on whether you have a standalone deployment, a shared server, etc.
Basically, if PgBouncer runs on the server itself and you want to allow incoming connections from everywhere, use "*"; if you want to allow only local connections, use "127.0.0.1".
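A tiny sketch of the local-only variant, keeping the port from your config:
listen_addr = 127.0.0.1
listen_port = 5433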
For the rest of your questions check this link: pgbouncer docs
I have tried to share a little of what I know; feel free to ask if anything was unclear, or correct me if anything was stated incorrectly.

How to restore MongoDB data after my MongoDB container has been removed

I run MongoDB in a Docker container, and I have a backup of the data files.
But today I carelessly removed my MongoDB container.
I tried to run another container and put the data files into it, but it did not work.
How can I restore my data from the data files?
The database files are all I have now:
The container I use is tutum/mongodb. My docker-compose.yml file is:
mongo_db:
  image: tutum/mongodb
  privileged: true
  restart: always
  ports:
    - 27016:27017
    - 28016:28017
  volumes:
    - /var/mongodb:/data/db
  environment:
    - MONGODB_PASS=xxxxxx
    - AUTH=yes
Now I want to restore my data from the directory /var/mongodb into my new container.
I put the files (except mongod.lock) into my new container, but mongod can't start.
Here is the screenshot:
The mongod.conf is:
# Where to store the data.
# Note: if you run mongodb as a non-root user (recommended) you may
# need to create and set permissions for this directory manually,
# e.g., if the parent directory isn't mutable by the mongodb user.
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongod.log
logappend=true
#port = 27017
# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip = 127.0.0.1
# Disables write-ahead journaling
# nojournal = true
# Enables periodic logging of CPU utilization and I/O wait
#cpu = true
# Turn on/off security. Off is currently the default
#noauth = true
#auth = true
# Verbose logging output.
#verbose = true
# Inspect all client data for validity on receipt (useful for
# developing drivers)
#objcheck = true
# Enable db quota management
#quota = true
# Set oplogging level where n is
# 0=off (default)
# 1=W
# 2=R
# 3=both
# 7=W+some reads
#diaglog = 0
# Ignore query hints
#nohints = true
# Enable the HTTP interface (Defaults to port 28017).
#httpinterface = true
# Turns off server-side scripting. This will result in greatly limited
# functionality
#noscripting = true
# Turns off table scans. Any query that would do a table scan fails.
#notablescan = true
# Disable data file preallocation.
#noprealloc = true
# Specify .ns file size for new databases.
# nssize = <size>
# Replication Options
# in replicated mongo databases, specify the replica set name here
#replSet=setname
# maximum size in megabytes for replication operation log
#oplogSize=1024
# path to a key file storing authentication info for connections
# between replica set members
#keyFile=/path/to/keyfile
The container owner sets STORAGE_ENGINE in the environment when the container starts running.
The environment of the container is:
# mongod.conf
STORAGE_ENGINE=wiredTiger
HOSTNAME=bb544551ec2b
MONGODB_PASS=xxxxxx
LS_COLORS=
AUTH=yes
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/etc
SHLVL=1
HOME=/root
LESSOPEN=| /usr/bin/lesspipe %s
JOURNALING=yes
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
OLDPWD=/
The logs under tutum/mongodb:3.0:
It depends on how you performed your backup.
1 - You took a filesystem snapshot:
=> you can untar your snapshot into your data folder (check in mongod.conf where your data folder is located).
2 - You used the mongodump command:
=> you need to use the mongorestore command.
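A sketch of the second case, assuming the dump was made with mongodump into a local ./dump directory and that the new container publishes port 27016 with an admin user, as in your compose file (the user name and auth database are assumptions; adjust to your setup):
mongorestore --host localhost --port 27016 -u admin -p xxxxxx --authenticationDatabase admin ./dump
For the first case, your compose file mounts the container's /data/db from /var/mongodb on the host, so the snapshot files would need to be unpacked into /var/mongodb (with ownership the mongod process can use) before starting the container.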
I am struggling with MongoDB and the lack of persistent storage across reboots myself. Reading your post here makes me wonder what this means:
From your mongod.conf:
Disables write-ahead journaling
nojournal = true
and from your environment variable:
JOURNALING=yes
Perhaps your env should be:
NOJOURNAL=false
or
NOJOURNAL=no
(I'm also struggling with the markdown here.)
I personally was finally able to persist data using this compose section, and by setting an env variable NODE_ENV: production (in a different service which depended on the MongoDB service):
image: mongo:4.4
restart: always
environment:
  MONGO_INITDB_ROOT_USERNAME: something
  MONGO_INITDB_ROOT_PASSWORD: otherthanthis
volumes:
  - ./mongodata:/data/db:cached

ERROR: child process failed, exited with error number 51 MongoDB

Getting this error while restarting MongoDB. I am using Mongo 3.2.4 and doing this setup on a new machine.
Starting mongod... about to fork child process, waiting until server is ready for connections.
forked process: 19438
ERROR: child process failed, exited with error number 51
mongod(_ZN5mongo19MmapV1ExtentManager4initEPNS_16OperationContextE+0x4A8) [0x1040278]
mongod(_ZN5mongo26MMAPV1DatabaseCatalogEntryC1EPNS_16OperationContextENS_10StringDataES3_bb+0x187) [0x1036dc7]
mongod(_ZN5mongo12MMAPV1Engine23getDatabaseCatalogEntryEPNS_16OperationContextENS_10StringDataE+0x14E) [0x103a1de]
mongod(_ZN5mongo14DatabaseHolder6openDbEPNS_16OperationContextENS_10StringDataEPb+0x133) [0xac92a3]
----- END BACKTRACE -----
For me this error occurred due to incorrect ownership of some files in my data directory. I fixed it using the following command:
sudo chown -R mongodb: /path/to/db/directory
Where mongodb was the database user in my case.
This is resolved by inserting the following lines into /etc/security/limits.conf:
mongodb soft nofile 65535
mongodb hard nofile 90000
mongodb soft nproc 65535
mongodb hard nproc 90000
We need to add the user account used to run the Mongo service. Generally, it is the mongodb user.
In my case the problem was that I hadn't created the appropriate folders specified in the config files.
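A sketch of that fix with hypothetical placeholder paths; use the dbpath and logpath from your own conf files, and the user that runs mongod:
sudo mkdir -p /path/to/dbpath /path/to/logdir
sudo chown -R mongodb: /path/to/dbpath /path/to/logdir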

How to find the optimal value for mongo.options.connectionsPerHost

Currently I am using Grails and I am running several servers connecting to a single mongo server.
options {
    autoConnectRetry = true
    connectTimeout = 3000
    connectionsPerHost = 100
    socketTimeout = 60000
    threadsAllowedToBlockForConnectionMultiplier = 10
    maxAutoConnectRetryTime = 5
    maxWaitTime = 120000
}
Unfortunately, when I run 50 servers, the total number of connections goes up by 5k. After a bit of research I found that this is a simple setting in DataSource.groovy.
I am sure that my programs do not need 100 Mongo connections, but I am unsure what value I should set this to.
I have two questions.
First, how do I determine the optimal value for connectionsPerHost?
Second, are all these 100 connections created up front and then pooled?

errmsg" : "No host described in new configuration 1 for replica set rs0 maps to this node", Why I am getting this message?

I am getting this message every time I do rs.initiate():
No host described in new configuration 1 for replica set rs0 maps to this node
This is how my /etc/hosts file looks on both replica set servers.
Server 1 and server 2 "hosts" file
127.0.0.1 localhost
aa.bb.cc.dd DataMongo1
ee.ff.gg.hh DataMongo2
Server 1-mongod.conf file
bind-ip aa.bb.cc.dd
Server 2 -mongod.conf file
bind-ip ee.ff.gg.hh
I changed the server1 hostname to DataMongo1 and server2 to DataMongo2:
$hostname DataMongo1
Port 27071 is uncommented on both servers
ReplicaSet config file:
cfg = {
  _id: "rs0",
  members: [{_id: 0, host: "DataMongo1:27071"}, {_id: 1, host: "DataMongo2:27071"}]
}
Please help me with this issue.
I just ran into this issue, and in my case the symptoms were that everything worked correctly, until I rebooted the server.
Then I would get the following error: NodeNotFound: No host described in new configuration $id for replica set $name maps to this node
Just restarting the mongodb daemon fixed it, so it couldn't be a replica set configuration issue.
After checking the logs a bit more in detail, I noticed the following error message: NETWORK [replexec-0] getaddrinfo("$name.emilburzo.com") failed: Temporary failure in name resolution -> bingo
It was trying to resolve the hostname before the network was fully up, and thus the replica set member didn't know its own identity.
Adding the server's FQDN hostname to /etc/hosts fixed it, e.g.:
127.0.1.1 shortname shortname.fqdn.com
Looks like the port is wrong. The default port of MongoDB is 27017, not 27071.
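A sketch of the corrected replica set config, assuming both mongod instances actually listen on the default port 27017:
cfg = {
  _id: "rs0",
  members: [{_id: 0, host: "DataMongo1:27017"}, {_id: 1, host: "DataMongo2:27017"}]
}
rs.initiate(cfg)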
If you are using the mongo.conf file, then initially comment out the "replication" section like below:
...
#operationProfiling:
# replication:
# replSetName: rs0-here
#sharding:
...
Now run mongod and configure the replica set in the mongo shell like below:
rs.initiate({_id: "rs0-here", version: 1, members: [{ _id: 0, host : "your_host:27017" }]})
You can also create users and databases here and then exit. Then uncomment the following section in mongo.conf:
replication:
  replSetName: rs0-here
Run mongod again and this issue should go away.