MongoDB as a Windows service and setting up a replica set

I have installed MongoDB and it's set up as a Windows service. When I try to set up a replica set, I get the error "Only one usage of each socket address (protocol/network address/port) is normally permitted. for socket: 0.0.0.0:27017".
So I stopped the Windows service and set up the replica set. The replica set is working fine now, but I don't see the Windows service up and running. Does that mean I can't run a replica set and the MongoDB service at the same time?

You can run a replica set and MongoDB Windows services at the same time. Since you have already set up a replica set, you know that each replica set member needs its own data directory and log file. If you run all replica set members on a single machine, each member must also be assigned a different port number. The sample below is for development or functional testing only: running all replica set members on a single machine is a single point of failure, in addition to being a serious performance bottleneck.
Create a configuration file for each replica set member, specifying the data directory, log file, port number, and replica set name. For example, I have a replica set of three members: one primary, MongoDB, running on port 27017, and two secondaries, MongoDB1 on port 37017 and MongoDB2 on port 47017. The replica set name is rs1.
Here is the configuration file for the primary instance, MongoDB.
# mongod.cfg
# data directory
dbpath=C:\data\db
# log file
logpath=C:\mongodb-win32-i386-2.4.4\log\mongo.log
logappend=true
#port number
port=27017
#replica set name
replSet=rs1
Here is the configuration file for one of the secondaries, MongoDB2 on port 47017.
# mongod2.cfg
# data directory
dbpath=C:\data\db2
# log file
logpath=C:\mongodb-win32-i386-2.4.4\log2\mongo.log
logappend=true
# port number
port=47017
# replica set name
replSet=rs1
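The configuration file for the other secondary, MongoDB1 on port 37017, would look analogous. A minimal sketch, assuming a hypothetical data directory C:\data\db1 and log directory log1 (adjust the paths to your layout):
# mongod1.cfg
# data directory
dbpath=C:\data\db1
# log file
logpath=C:\mongodb-win32-i386-2.4.4\log1\mongo.log
logappend=true
# port number
port=37017
# replica set name
replSet=rs1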
The following link provides a complete list of configuration file options:
http://docs.mongodb.org/manual/reference/configuration-options/
Add all three MongoDB instances as Windows services. Since I did not specify a service name or service display name for the first instance, it will use the default service/service display name MongoDB:
C:\mongodb-2.4.4\bin>mongod --config C:\mongodb-2.4.4\mongod.cfg --install
Install the other two MongoDB instances with explicit service names and service display names:
C:\mongodb-2.4.4\bin>mongod --config C:\mongodb-2.4.4\mongod1.cfg --serviceName MongoDB1 --serviceDisplayName MongoDB1 --install
C:\mongodb-2.4.4\bin>mongod --config C:\mongodb-2.4.4\mongod2.cfg --serviceName MongoDB2 --serviceDisplayName MongoDB2 --install
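If you ever need to undo a service registration (for example the conflicting default instance from the question), mongod can also remove a service it installed. A sketch, assuming the service names used above:
C:\mongodb-2.4.4\bin>mongod --serviceName MongoDB1 --remove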
Start all three instances of MongoDB:
C:\mongodb-2.4.4\bin>net start mongodb
The Mongo DB service is starting.
The Mongo DB service was started successfully.
C:\mongodb-2.4.4\bin>net start mongodb1
The MongoDB1 service is starting.
The MongoDB1 service was started successfully.
C:\mongodb-2.4.4\bin>net start mongodb2
The MongoDB2 service is starting.
The MongoDB2 service was started successfully.
Verify the status of all three Windows services using the sc command with the query option:
C:\mongodb-2.4.4\bin>sc query mongodb
C:\mongodb-2.4.4\bin>sc query mongodb1
C:\mongodb-2.4.4\bin>sc query mongodb2
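Each query should report a STATE line of RUNNING once the service is up. To check just that line you can filter the output with findstr, a standard Windows tool (the exact column layout of sc output may vary by Windows version, so treat this as a sketch):
C:\mongodb-2.4.4\bin>sc query MongoDB1 | findstr STATE
        STATE              : 4  RUNNING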
Configure the replica set from the MongoDB shell. In the following example, the MongoDB instance on port 27017 will become the primary replica set member.
C:\mongodb-2.4.4\bin>mongo --port 27017
Set the replica set configuration from the MongoDB shell:
> config = { _id: "rs1", members:[
... { _id : 0, host : "localhost:27017"},
... { _id : 1, host : "localhost:37017"},
... { _id : 2, host : "localhost:47017"}
... ] }
From the MongoDB shell, initialize the replica set and verify its status:
> rs.initiate(config)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2013-07-02T18:40:27Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "localhost:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 651,
            "optime" : {
                "t" : 1372790393,
                "i" : 1
            },
            "optimeDate" : ISODate("2013-07-02T18:39:53Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "localhost:37017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 31,
            "optime" : {
                "t" : 1372790393,
                "i" : 1
            },
            "optimeDate" : ISODate("2013-07-02T18:39:53Z"),
            "lastHeartbeat" : ISODate("2013-07-02T18:40:26Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : 0,
            "syncingTo" : "localhost:27017"
        },
        {
            "_id" : 2,
            "name" : "localhost:47017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 31,
            "optime" : {
                "t" : 1372790393,
                "i" : 1
            },
            "optimeDate" : ISODate("2013-07-02T18:39:53Z"),
            "lastHeartbeat" : ISODate("2013-07-02T18:40:26Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : 0,
            "syncingTo" : "localhost:27017"
        }
    ],
    "ok" : 1
}
rs1:PRIMARY>
You can also check the status of the secondaries. Here I am connecting to one of the secondaries on port 37017.
C:\mongodb-2.4.4\bin>mongo --port 37017
The MongoDB shell prompt will reflect the secondary status:
rs1:SECONDARY>
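By default the shell refuses reads on a secondary, so if you want to run a test query against this member you first have to allow secondary reads for the session; in the 2.4-era shell that call is rs.slaveOk() (the test collection below is hypothetical):
rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.test.find()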
A tutorial on deploying a replica set can be found here:
https://docs.mongodb.com/manual/tutorial/deploy-replica-set/

This does not cover the Windows service part, but it does cover setting up a sharded replica set.
Try this blog post to set up a MongoDB cluster on your local machine; it has all the configuration files along with a Windows script to create the required folder structure:
https://medium.com/@arun2pratap/set-up-mongodb-sharded-cluster-for-windows-with-just-a-double-click-6eedbb7b79e

Related

Error initialising MongoDB replica set in Docker container

I have created a MongoDB docker container, with a replica set, using this command:
docker run -d --name mongo -v /data/db:/data/db mongo --replSet name
The container starts running.
I then try to initiate the replica set, using this command:
rs.initiate()
{
    "info2" : "no configuration specified. Using a default configuration for the set",
    "me" : "fa07bcdd8591:27017",
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23
}
But it gives the error message "already initialized".
When I check the health of the replica set with the rs.status() command
rs.status()
{
    "state" : 10,
    "stateStr" : "REMOVED",
    "uptime" : 102,
    "optime" : {
        "ts" : Timestamp(1472680081, 1),
        "t" : NumberLong(77)
    },
    "optimeDate" : ISODate("2016-08-31T21:48:01Z"),
    "ok" : 0,
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "code" : 93
}
It says the replica set config is invalid.
It looks like you didn't specify any replica set configuration. Here is an example of initiating a replica set with an explicit configuration:
rs.initiate(
    {
        _id: "myReplSet",
        version: 1,
        members: [
            { _id: 0, host : "mongodb0.example.net:27017" },
            { _id: 1, host : "mongodb1.example.net:27017" },
            { _id: 2, host : "mongodb2.example.net:27017" }
        ]
    }
)
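Because your node reports "already initialized", calling rs.initiate() again will keep failing; in that situation the usual way out is to force the corrected configuration in with rs.reconfig() instead. A minimal sketch, reusing the set name from your docker run command and the host name reported in your own output ("me" : "fa07bcdd8591:27017"); adjust the host to an address the other members can actually resolve:
> cfg = { _id: "name", version: 1, members: [ { _id: 0, host: "fa07bcdd8591:27017" } ] }
> rs.reconfig(cfg, { force: true })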

replica set in MongoDB using docker, primary has error and stops being primary when another member is added to the set

I have two docker containers running a mongo instance each, they were initialized like this:
docker run --name mongodb-shard-1-node-1 -d -v mongodb-shard-1-node-1:/data/db -p 27031:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I run docker inspect mongodb-shard-1-node-1, it shows the IP 172.17.0.2.
docker run --name mongodb-shard-1-node-2 -d -v mongodb-shard-1-node-2:/data/db -p 27020:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I run docker inspect mongodb-shard-1-node-2, it shows the IP 172.17.0.4.
So I proceed to access mongodb-shard-1-node-1 by using docker exec -it mongodb-shard-1-node-1 mongo, and I initialize it as the primary member like this:
rs.initiate()
{
    "info2" : "no configuration specified. Using a default configuration for the set",
    "me" : "ee3c41ef76b2:27017",
    "ok" : 1
}
Then I proceed to add mongodb-shard-1-node-2 so that it works as a secondary member; at first it looks like it worked:
rs0:PRIMARY> rs.add("172.17.0.4:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-05-20T01:04:02.095Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "ee3c41ef76b2:27017",
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 27,
            "optime" : {
                "ts" : Timestamp(1463706237, 1),
                "t" : NumberLong(1)
            },
            "infoMessage" : "could not find member to sync from",
            "electionDate" : ISODate("2016-05-20T01:03:43Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "172.17.0.4:27017",
            "state" : 0,
            "stateStr" : "STARTUP",
            "uptime" : 4,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "lastHeartbeat" : ISODate("2016-05-20T01:04:01.187Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}
But right away it fails for some reason, and I have no idea why. Here's what I get:
rs0:PRIMARY> rs.status()
2016-05-20T01:04:18.007+0000 E QUERY [thread1] Error: error doing query:
failed: network error while attempting to run command 'replSetGetStatus' on host '127.0.0.1:27017' :
DB.prototype.runCommand#src/mongo/shell/db.js:135:1
DB.prototype.adminCommand#src/mongo/shell/db.js:153:16
rs.status#src/mongo/shell/utils.js:1090:12
#(shell):1:1
2016-05-20T01:04:18.012+0000 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-05-20T01:04:18.018+0000 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok
rs0:SECONDARY
What is wrong? How do I fix it?
Edit: just to clarify, I had already tested the connections between the containers by following the part of the documentation called Test Connections Between all Members.
Had my question answered here:
https://dba.stackexchange.com/a/139145/91866
I'm gonna quote the whole answer:
Your primary is trying to auto-configure itself as ee3c41ef76b2:27017, and that then resolves to the loopback (127.0.0.1), which is then not responding on the container as it expects. Depending on what the second container does to resolve ee3c41ef76b2, and especially if it does not resolve to 172.17.0.2, it will probably not be able to talk to the primary either.
Assuming you are correct about the connectivity (and you have verified that the instances are listening on the IP and not just the loopback), then you need to override the automatic detection and be explicit when you are calling rs.initiate(), something like this:
rs.initiate(
    {
        _id: "rs0",
        version: 1,
        members: [
            { _id: 0, host : "172.17.0.2:27017" },
            { _id: 1, host : "172.17.0.4:27017" }
        ]
    }
)
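Before re-initiating, it is worth confirming that each container can actually reach the other on the published IP. A quick check from the first container, using the server's ping command (the addresses are the ones from docker inspect above):
docker exec -it mongodb-shard-1-node-1 mongo --host 172.17.0.4 --port 27017 --eval "db.runCommand({ ping: 1 })"
A reply containing { "ok" : 1 } means the second node is reachable from the first.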

Replica Set Error Code 76

In reference to the MongoDB DBA course: trying to create a replica set as shown by the instructor on El Capitan (single machine only), I get the following error. I have three members.
(MongoDB was installed using Homebrew.)
Step I: Set up the config.
cfg ={ _id :"abc", members:[{_id:0, host:"localhost:27001"}, {_id:1, host:"localhost:27002"}, {_id:2, host:"localhost:27003"}] }
{
    "_id" : "abc",
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27001"
        },
        {
            "_id" : 1,
            "host" : "localhost:27002"
        },
        {
            "_id" : 2,
            "host" : "localhost:27003"
        }
    ]
}
Step II: Initialize the config.
rs.reconfig(cfg)
2015-10-05T11:34:27.082-0400 E QUERY Error: Could not retrieve replica set config: { "ok" : 0, "errmsg" : "not running with --replSet", "code" : 76 }
at Function.rs.conf (src/mongo/shell/utils.js:1017:11)
at Function.rs.reconfig (src/mongo/shell/utils.js:969:22)
at (shell):1:4 at src/mongo/shell/utils.js:1017
Make sure you have replSetName configured in /etc/mongod.conf:
replication:
    replSetName: "somename"
Then restart your mongod.
sudo service mongod stop
sudo service mongod start
sudo service mongod restart
You are not running the replica set with the replica set name. The solution is to set a replica set name in the mongod config file using the parameter replSet, e.g. replSet=test_replica. Once the changes are made in the config file, restart the server.
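Two notes on the environment: the service commands above are for Linux; on El Capitan with a Homebrew install the equivalent would be brew services restart mongodb, or you can launch each member by hand. A sketch of starting the three course members with the replica set name passed on the command line (the dbpath directories are assumptions, create them first):
mongod --dbpath ~/data/rs1 --port 27001 --replSet abc --fork --logpath ~/data/rs1/mongod.log
mongod --dbpath ~/data/rs2 --port 27002 --replSet abc --fork --logpath ~/data/rs2/mongod.log
mongod --dbpath ~/data/rs3 --port 27003 --replSet abc --fork --logpath ~/data/rs3/mongod.log
Also note that rs.reconfig(cfg) in Step II only works against an already-initiated set; for the first configuration it should be rs.initiate(cfg).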

mongo replica set member is fatal

I'm trying to configure a replica set of 3 members on 3 different Linux machines. I'm running mongod with replSet set in the config file. I mistakenly ran rs.initiate() on 2 of the machines, and now I have found the primary and tried to add the other instances, but it says that the config file is not from the same version.
How can I remove the replica set and start everything back from scratch?
If the following is true:
1. This is a brand-new deployment.
2. There is no data you need to keep.
You can do the following:
1. Shut down all 3 mongods.
2. Remove all files and directories from the "dbpath" partition on all 3 machines.
3. Restart all 3 mongods.
4. Connect to one of the mongods and submit the following command:
config = { "_id": "rs0", "members" : [
{ "_id" : 0, "host" : "##Your DNS NAME:PORTNUMBER#" },
{ "_id" : 1, "host" : "##Your DNS NAME:PORTNUMBER#" },
{ "_id" : 2, "host" : "##Your DNS NAME:PORTNUMBER#" } ]
}
rs.initiate(config)
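After rs.initiate(config) it can take a minute for the members to settle. A quick sketch for watching the states from the shell, using plain JavaScript over the rs.status() document:
rs.status().members.map(function(m) { return m.name + " : " + m.stateStr })
One member should eventually report PRIMARY and the other two SECONDARY.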

mongodb replica set with multiple primaries and pingMS=0

I am trying to set a replica set with two nodes: Node0 and Node1. From the Node0 I initialized a replica set named "rs0" and added Node1 to it. The problem is that it is added as a primary node instead of a secondary node and the final result is a replica set with two primary nodes.
This is the result of executing the rs.status() command from Node0
"set" : "rs0",
"date" : ISODate("2012-10-23T21:03:37Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "Node0:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 61185,
"optime" : Timestamp(1350967947000, 1),
"optimeDate" : ISODate("2012-10-23T04:52:27Z"),
"self" : true
},
{
"_id" : 1,
"name" : "Node1:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 58270,
"optime" : Timestamp(1350956423000, 1),
"optimeDate" : ISODate("2012-10-23T01:40:23Z"),
"lastHeartbeat" : ISODate("2012-10-23T21:03:37Z"),
"pingMs" : 0
}
],
If I execute the same command from Node1, the only node listed is itself. Note that pingMs is 0. Trying to add a third node or an arbiter gives similar results: each one is added as primary, and pingMs is always 0.
You mentioned that you ran rs.initiate() on both servers. This should only be done on one.
I suggest you start from scratch by deleting the dbpath directory for each node (back up the data first if the db was not empty). Then start all mongod processes, log into one of them, and call
rs.initiate()
rs.add(<other node 1>)
The other node gets the replica set configuration through the first one automatically. Repeat rs.add() for each additional node you want to add.
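As a concrete sketch for the two nodes in this question (Node1 must be resolvable from Node0), the whole sequence on Node0 would look like this:
> rs.initiate()
> rs.add("Node1:27017")
> rs.status()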
I ran into the same situation, having wrongly run rs.initiate() on two instances. I solved this by shutting down the should-be second instance, removing its data directory, and relaunching the instance. Upon restart, it is properly detected as a member of the replica set, is properly synchronized, and most importantly there is only one primary.
This operation should not be dangerous since, to my knowledge, a replica set replicates all the data across the nodes. To be safe, you could move the data directory aside after shutting down the secondary node, so that you keep a backup in case anything goes wrong.
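A sketch of that recovery on the should-be secondary, assuming a Linux host, a dbpath of /data/db, and the set name rs0 from this question (all paths here are placeholders):
mongod --shutdown --dbpath /data/db
mv /data/db /data/db.bak    # keep a backup, as suggested above
mkdir /data/db
mongod --replSet rs0 --dbpath /data/db --fork --logpath /data/db.log
On restart, the node performs an initial sync from the primary and rejoins as a secondary.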