MongoDB: Too many connections created from mongos to shards while building index

We've set up a sharded cluster with a replica set for each shard in our production ENV. Last night we encountered an issue while connecting to the cluster and issuing an index build command through the mongo shell, by connecting to the mongos instance, not directly to a specific shard.
The issue is: once the index build starts, the number of connections created from mongos to this shard increases rapidly, and 'too many connections' errors show up in the primary shard's log file very soon.
Below is a summary from the primary shard's log.
At the very beginning of the index build:
Then very soon, the connection count reached 10000:
Connection limit exceeded
From the three mongos' logs, all the connections are initiated by mongos. We googled and found a related issue: https://jira.mongodb.org/browse/SERVER-28822
But it does not describe the trigger conditions. At the same time, I tried to reproduce the issue in our test ENV, but it did not occur again. So, please help.
Here is the configuration for mongos:
mongos' configuration
And here is the configuration for the primary shard:
primary shard's configuration

Found the answer.
It was because the index creation issued by the mongorestore command was a foreground build, not a background one. I had misunderstood how mongorestore creates indexes and had not checked the metadata file for the collection schema.
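In case others hit the same thing: on MongoDB versions before 4.2 the build mode can be requested explicitly when you create the index yourself. A minimal sketch, with a hypothetical collection and field name:
db.orders.createIndex({ customerId: 1 }, { background: true })   // "orders" and "customerId" are placeholders
Alternatively, mongorestore has a --noIndexRestore option that skips the index builds recorded in the dump metadata, so the indexes can be created manually afterwards.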

Related

How to make a MongoDB replica set member the master

Recently, because of an unknown issue, our MongoDB hosted on a GCP Compute Engine VM stopped, and we were unable to restart it because it throws "MongoDB.service is not found". So we reinstalled MongoDB after taking a backup of all the .wt files in the DB path. Once we had reinstalled MongoDB we copied the files back, but we can't see the data in the DB. We tried the --repair flag but still no use.
Is there a way we can get this working?
The other thing is that we've taken a VM snapshot from the day before the crash. There we can see the data in MongoDB, but only if we run the method rs.slaveOk(). I think we can't use that DB as the primary DB. Is there a way that we can use this as a primary DB?
I'm relatively new to the concepts of replica sets and master/slave; any suggestions and questions are welcome.
Thanks
If you can see the data from the snapshot with rs.slaveOk(), it is easy to recover: you can reconfigure the member as a standalone PRIMARY with these steps:
Get the current config:
cfg = rs.conf()
printjson(cfg)
Set in the temp variable cfg only the available member (in the example it is the first in the cfg, with _id: 0):
cfg.members = [cfg.members[0]]
Reconfigure the replicaSet only with the available member:
rs.reconfig(cfg, {force : true})
(Don't forget to add the {force:true} option since it is the only way to reconfigure from SECONDARY)
If all is good with this member and it is successfully elected as PRIMARY, you can add other new members with rs.add() ...
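For example (the hostname below is hypothetical), adding a replacement member once this node is PRIMARY again:
rs.add("new-member.example.net:27017")
rs.status()   // verify the new member shows up and starts syncing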

MongoDB cluster - CSRS failing to start

I'm trying to configure a mongo sharded cluster with Docker in a local environment, following this guide, and got stuck on the first step. The problem is that the configuration server replica set could not start correctly.
I've tried to disable enableMajorityReadConcern but didn't succeed. It looks like config servers are obliged to have this set to true.
These log lines look strange to me:
2018-11-14T16:55:38.669+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2018-11-14T16:55:38.669+0000 I CONTROL [LogicalSessionCacheRefresh] Failed to create config.system.sessions: Cannot create config.system.sessions until there are shards, will try again at the next refresh interval
2018-11-14T16:55:38.669+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Cannot create config.system.sessions until there are shards
I've also tried the 3-container-instance scenario - all 3 display the same log output as the single-container scenario. Any ideas are welcome!
Solved the issue. When the CSRS comes up for the first time, this is expected behavior. The rs.initiate() command (step #3 from the deployment guide) must be executed via the mongo shell to make the CSRS fully configured and up. The command can be executed on any single node of the CSRS.
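For reference, a minimal initiation sketch for a 3-member CSRS; the replica set name and hostnames below are assumptions and must match the --replSet and --configsvr options the containers were started with:
rs.initiate({
  _id: "csrs",                 // hypothetical replica set name
  configsvr: true,             // required for a config server replica set
  members: [
    { _id: 0, host: "cfg1:27019" },   // hypothetical container hostnames
    { _id: 1, host: "cfg2:27019" },
    { _id: 2, host: "cfg3:27019" }
  ]
})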

Start MongoDB replication automatically

Is there any way to start MongoDB replication automatically when the mongod service starts? I don't want to have to enter the shell and turn replication on manually.
Thanks!
You can create a mongod service which starts automatically when the server starts.
First you need to create a configuration file (mongodb.conf) which will include configuration settings such as the replica set name. Then create a service and install it using the following command:
mongod -f c:\mongod.conf --install
Then start the service using
net start mongodb
Read about the configuration file here, and how to install mongo as a service here.
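For reference, a rough sketch of what such a configuration file can look like; the paths and the replica set name rs0 are assumptions, adjust them for your system:
systemLog:
  destination: file
  path: c:\data\log\mongod.log
storage:
  dbPath: c:\data\db
net:
  port: 27017
replication:
  replSetName: rs0
Note that after the service is running on each member, the replica set still has to be initiated once from a mongo shell with rs.initiate().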
When you create a valid replica set in MongoDB, your data will be replicated asynchronously from the primary member to the secondary members of the replica set.
Having said that, you're not required to make any extra manual effort to get data replication done.
When you run rs.slaveOk() on a secondary, that allows you to query data from the secondary members of the replica set.
It's a provision: it allows you to read from a secondary, provided that you can tolerate possibly eventually consistent data. The replication does not happen because you run rs.slaveOk() on a secondary; it is already running.
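For illustration, in the legacy mongo shell connected to a secondary (the database and collection names are hypothetical):
rs.slaveOk()                                    // allow reads on this secondary for the current session
db.getSiblingDB("test").users.find().limit(1)   // now secondary reads succeed instead of erroring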
I'm not sure I understand. Your question was about service start. For my part, I installed mongo on Ubuntu and the service was not started in replica set mode.
Finally, I disabled the first one and created another service with the option --replSet myReplicat.
When you have only 2 servers, there is a problem with majority votes. In my case, I had 2 secondaries after I stopped the primary and it was difficult to come back to 1 primary and 1 secondary.
Effectively, the replication is always active. By default, all connections go to the primary. If you want to read from a secondary, you first enter the command rs.slaveOk(). This command is active at the session level. If you reconnect, you have to run it again. It is not possible to set it on the server side.

Can't create user

I was using MongoDB version 2.6.6 on Google Compute Engine, deployed with the click-to-deploy method.
rs0:SECONDARY> db.createUser({user:"admin", pwd:"secret_password", roles:[{role:"root", db:"admin"}]})
2015-07-13T15:02:28.434+0000 Error: couldn't add user: not master at src/mongo/shell/db.js:1004
rs0:SECONDARY> use admin
switched to db admin
rs0:SECONDARY> db.createUser({user:"admin", pwd:"secret_password", roles:["root"]})
2015-07-13T15:13:28.591+0000 Error: couldn't add user: not master at src/mongo/shell/db.js:1004
I had a similar problem with mongo 3.2:
Error: couldn't add user: not master
when trying to create a new user with the root role.
I was using only a local copy of mongo.
In my mongod.conf file I had the following uncommented:
replication:
  replSetName: <node name>
Commenting that out and restarting fixed the problem. I guess mongo thought it was part of a replica set and was confused as to who the master was.
Edit:
I've also found that if you ARE trying to set up a replica set, and you get the above error, then run:
rs.initiate()
This will start the replica set and set the current node as PRIMARY.
Exit, and then log back in and you should see:
PRIMARY>
Now create users as needed.
I ran into this error when scripting replica set creation.
The solution was to add a delay between rs.initiate() and db.createUser().
Replica set creation is seemingly done in the background and it takes time for the primary node to actually become primary. In interactive use this doesn't cause a problem because there is a delay while typing the next command, but when scripting the interactions the delay may need to be forced.
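A minimal sketch of forcing that delay in a script, using the legacy shell helpers (the credentials are placeholders taken from the question):
rs.initiate()
while (!db.isMaster().ismaster) { sleep(1000) }   // poll until this node has actually become PRIMARY
db.getSiblingDB("admin").createUser({ user: "admin", pwd: "secret_password", roles: ["root"] })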
MongoDB will be deployed in a cluster of Compute Engine instances (also known as a MongoDB replica set). Each instance will use a boot disk and separate disk for database files.
Primary and master nodes are the nodes that can accept writes. MongoDB’s replication is “single-master:” only one node can accept write operations at a time.
Secondary and slave nodes are read-only nodes that replicate from the primary.
Your error message looks like you are trying to add the user on the secondary. Try adding the user on the primary.
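To confirm which node you are connected to before creating the user (legacy shell helpers of that era):
db.isMaster().ismaster   // true only on the primary
rs.status()              // the primary member is the one with stateStr: "PRIMARY"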
I ran into this issue when I thought I was running mongo 3.4 but it was mongo 3.6. Uninstalling 3.6 and installing 3.4 fixed my issue.

Remove inaccessible Mongo shard

I have a MongoDB sharded setup with 3 shards: shard0000, shard0001 and shard0002. The machine that runs shard0002 is down now, which causes all my queries to fail. I'd like to temporarily remove shard0002 from my setup and keep working with the first two shards. That should be doable assuming I only use unsharded collections that reside in the first two shards, right?
What I tried first is db.runCommand({removeshard: 'IP:PORT'}), which obviously doesn't help, because it just puts the shard in draining mode, which will never end (since it's down). Then I tried connecting to my config server, ran db.shards.remove({_id: 'shard0002'}) on the config DB, and then restarted mongos so it reloads the config. Now whenever I try to do anything I get "can't find shard for: shard0002".
Is there any way to just let Mongo know that I don't care about that shard for now, and then re-enable it later when it becomes available?
I had a different problem, and I manually removed the shard with:
use config
db.shards.remove({"_id":"shard0002"});
Manually modify the shard entry in the config DB, then run removeshard.
I tried several options to do this in version 4.2.
In the end I settled on these commands, to be executed on the Config Server:
use config
db.databases.updateMany( {primary: "shard0002"}, {$set: {primary: "shard0000"} })
db.shards.deleteOne({_id : "shard0002" })
db.chunks.updateMany( {shard : "shard0002"}, {$set: {shard: "shard0000"} })
while ( db.chunks.updateMany( {"history.shard" : "shard0002"}, {$set: {"history.$.shard": "shard0000"} }).modifiedCount > 0 ) { print("Updated") }
It works to a certain extent, i.e. CRUD operations work. However, when you run getShardDistribution() you get an error: Collection 'db.collection' is not sharded.
Finally I see only one reliable & secure solution:
Shut down all mongod and mongos in your sharded cluster
Start the available shards as standalone services (see Perform Maintenance on Replica Set Members)
Take a backup of the available shards with mongodump.
Drop the data folders from all hosts.
Build your cluster newly from scratch: start up all mongod and mongos processes.
Load the data into the new cluster with mongorestore (see the sketch at the end of this answer).
Perhaps for a large cluster you have to shuffle around a bit, like this:
Deploy config servers and a mongos server, with one empty shard
Start one old shard as a standalone
Take a backup from this old shard
Tear down this old shard
Build a fresh, empty new shard
Add the new shard to your new cluster
Restore the data into the new cluster
The backup can be dropped and the shard can be reused in the new cluster
Repeat the above for each shard you have in your cluster (the broken shard can most likely be skipped)
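A rough sketch of the mongodump / mongorestore steps above; the hostnames, port, and backup path are assumptions and must be adapted to your cluster:
mongodump --host shard0000-host:27018 --out /backup/shard0000
mongorestore --host mongos-host:27017 /backup/shard0000
The dump is taken from the shard while it runs as a standalone, and the restore is run against a mongos of the rebuilt cluster so the data gets distributed by the new cluster itself.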