Standalone replica sets mongodb

I need to create a standalone replica set in MongoDB. I followed the steps here: http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
Everything worked as expected, but I was wondering how I could configure this in the mongodb.conf file so that I don't have to repeat these steps manually every time. Is something like this possible via the conf file? I know there is a replSet parameter you can set in the conf file, but I wasn't sure how to specify which ports to use for the different replica set members. Thanks!

Most of the command-line parameters you specify are settable in the configuration file - you can see how here: http://docs.mongodb.org/manual/reference/configuration-options/
Specifically, notice that you can set port, replSet, and dbPath from the configuration file.
There is also a good article on Replica set configuration here: http://docs.mongodb.org/manual/reference/replica-configuration/
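As a rough sketch, a legacy-style config file for one member might look like the following (the port, set name rs0, and dbpath are placeholders; on MongoDB 2.6+ the YAML equivalents are net.port, replication.replSetName, and storage.dbPath):

port = 27018
replSet = rs0
dbpath = /srv/mongodb/db1

You would create one such file per mongod instance, each with its own port and dbpath. Note that the config file only covers startup options: the one-time rs.initiate() and rs.add() calls from the tutorial still have to be run from the shell the first time, but not on subsequent restarts.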

Related

Monitor mongodb replica sets

I have a sharded cluster environment with two shard replica sets and one config server replica set. When I run db.stats(), it always shows data for the shard replica sets - why?
I need to collect data for the config server replica set; as mentioned above, on 3.2+ you can deploy the config servers as a replica set, and that is what I did.
Please help me find the exact command, or tell me if I am doing something wrong, because I need to monitor the config server replica set data.
If you want to see the configuration of a replica set, you can find it with rs.conf() or rs.config().
You can find out more about replication commands here
But if you want to check the data inside the replica set, you have to log in to a replica node and run rs.slaveOk() before running any queries.
PS. Sorry if I got your question wrong...
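For example, connect to one of the config server replica set members with mongo --host cfgsvr1.example.net --port 27019 (hostname and port are placeholders) and then, in the shell:

rs.slaveOk()   // allow reads if this member happens to be a secondary
rs.conf()      // show the config server replica set configuration
use config
db.stats()     // stats for the config database that the config servers hold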

MongoDB error not master and slaveOk=false

I am using MongoDB with LoopBack in my application, with a LoopBack connector to MongoDB. My application was working fine, but now it throws the error
not master and slaveOk=false.
Try running rs.slaveOk() in a MongoDB shell.
You are attempting to connect to a secondary replica, whereas previously your app (connection) was most likely connecting to the primary, hence the error. If you use rs.secondaryOk() (slaveOk is deprecated now) you will possibly solve the connection problem, but it might not be what you want.
To make sure you are doing the right thing, consider whether you really want to connect to the secondary replica instead of the primary. Usually, it's not what you want.
If you have permissions to amend the replica configuration
I suggest connecting with MongoDB Compass and executing rs.status() first to see the existing state and configuration of the cluster. Then, verify which replica is the primary.
If necessary, adjust priorities in the replica set configuration to assign primary status to the right replica (see the shell sketch below); the member with the highest priority is the one elected primary. This article shows how to do it right.
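A minimal shell sketch of such a priority change, assuming the member you want as primary sits at index 0 of the members array:

cfg = rs.conf()
cfg.members[0].priority = 2   // the highest-priority member is preferred in elections
rs.reconfig(cfg)              // run this on the current primary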
If you aren't able to change the replica configuration
Try a few things:
make sure your hostname points to the primary replica
if it is a local environment issue - make sure you added your local replica hostnames to the /etc/hosts pointing to 127.0.0.1
experiment with directConnection=true
experiment with multiple replica hosts and ?replicaSet=<name> in the connection string (examples below) - read this article (switch tabs to replica)
The best bet is that your database configuration has changed and your connection string no longer reflects it correctly. Usually, slight adjustments in the connection string are all that's needed, or simply checking which instance you actually want to connect to.
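For illustration, the two connection-string shapes mentioned above look roughly like this (hostnames, port, database name, and the set name rs0 are placeholders):

mongodb://db1.example.net:27017,db2.example.net:27017,db3.example.net:27017/mydb?replicaSet=rs0
mongodb://localhost:27017/mydb?directConnection=true

The first form lets the driver discover the topology and route writes to whichever member is currently primary; the second forces a direct connection to a single host, which is mostly useful for local debugging.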

Is zookeeper reconfig expected to update the zoo.cfg.dynamic file?

I'm setting up a distributed ZooKeeper cluster based on version 3.5.2. Specifically, I'm using the reconfig command to dynamically update the configuration whenever the cluster changes (e.g. one of the nodes goes down).
What I observe is that the zoo.cfg.dynamic file is not getting updated even when the reconfig (add/remove) command executes correctly. Is this the expected behavior? Basically, I'm looking for guidance on whether we should manage the zoo.cfg.dynamic file through a separate script (updating it in lock-step with the reconfig command), or whether we can rely on the reconfig command to do this for us. My preference/expectation is the latter.
Following is the sample command:
reconfig -remove 6 -add server.5=125.23.63.23:1234:1235;1236
From the reconfig documentation:
Dynamic configuration parameters are stored in a separate file on the server (which we call the dynamic configuration file). This file is linked from the static config file using the new dynamicConfigFile keyword.
So I could practically start with any file name to host the ensemble list, and just ensure that the dynamicConfigFile keyword points to that file.
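For instance, the static zoo.cfg might look roughly like this (paths are placeholders; reconfigEnabled is only required on 3.5.3+, where dynamic reconfiguration is disabled by default):

dataDir=/var/lib/zookeeper
initLimit=10
syncLimit=5
standaloneEnabled=false
reconfigEnabled=true
dynamicConfigFile=/etc/zookeeper/conf/zoo.cfg.dynamic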
Now when the reconfig command is run, a new dynamic-config file (e.g. zoo.cfg.dynamic.00000112) is generated containing the transformed list of servers, in the form below (as an example):
server.1=125.23.63.23:2780:2783:participant;2791
server.2=125.23.63.24:2781:2784:participant;2792
server.3=125.23.63.25:2782:2785:participant;2793
The zoo.cfg file is then auto-updated so that the dynamicConfigFile keyword points to the new config file (zoo.cfg.dynamic.00000112). The previous dynamic-config file remains in the config directory at runtime, but it is no longer referred to by the main config.
So overall, there is no need to update any file in lock-step with the reconfig command, i.e. the reconfig command takes care of it all. The only overhead worth resolving up front is a periodic purge of the old dynamic-config files.

MongoDB 3.4 - How to add config servers to a mongos

I'm editing the config file of a mongos, and I have a replica set of n config servers. To balance the load among all of them, do I have to list all the config servers in the config file, or is it enough to add the primary?
sharding:
  configDB: <configReplSetName>/cfg1.example.net:27017,cfg2.example.net:27017,...
The documentation is not very clear about this. Thank you.
Edit - Possible solution: I've found this doc, from which I understand that you should add all your config servers to your mongos config file. Otherwise, if the primary config server cannot be accessed, the cluster won't be able to operate. This makes sense, since in the MongoDB architecture only mongos can act as a load balancer. If someone can confirm this, I will upvote their answer.
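If that interpretation is correct, the mongos config would look roughly like this (hostnames and the set name configReplSet are placeholders), listing every config server member so that mongos can still reach the config replica set even if its primary is unavailable:

sharding:
  configDB: configReplSet/cfg1.example.net:27017,cfg2.example.net:27017,cfg3.example.net:27017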

Starting MongoDB as a service with auth

I created a MongoDB replica set (using 3.2), and on each server I set up MongoDB as a service:
"C:\Program Files\MongoDB\Server\3.2\bin\mongod.exe" --config "C:\program files\mongodb\server\3.2\mongod.cfg" --service
So far so good. I recently set up users and now need to configure MongoDB so that credentials have to be supplied. From what I've read, I would start mongod with the --auth parameter. However, since the service is already created, is there an equivalent in the config file? Based on the Configuration File Options documentation, I have tried setting security.authorization to enabled
security:
authorization: enabled
But when I restarted the service on both servers, it appears that neither could talk to the other. I also tried
setParameter:
auth
but MongoDB wouldn't start up with that configuration.
What's the right way to do this?
Since you are using a replica set, merely setting security.authorization is not sufficient, as you also need to allow the cluster members to authenticate to each other, which is referred to as "Internal Authentication" in the docs.
The easiest way to do this is to use a keyfile, which is essentially a shared secret/password among the cluster members. After you've created your keyfile and copied it to all of the replica set members, you'll need to specify its location in your config via the security.keyFile setting or by using --keyFile.
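A sketch of the relevant config section, assuming the keyfile has been copied to C:\mongodb\keyfile on every member (the path is a placeholder):

security:
  authorization: enabled
  keyFile: C:\mongodb\keyfile

A keyfile can be generated once with, for example, openssl rand -base64 756 > keyfile and then copied to each server; every member must use the same file contents, and each mongod service needs to be restarted for the change to take effect.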
For reference, you may also want to read Enforce Keyfile Access Control on Existing Replica Set for more detail on these steps.