When adding a deployment in MongoDB Ops Manager, it does not correctly pick up the arbiter. The replica set consists of a primary, a secondary and an arbiter. I have installed the automation agent on all 3 members, and the monitoring and backup agents on the primary and secondary only.
Within the Deployment page, I first click on the Servers button and everything looks correct: agents on all 3 servers are present (green circle), the server names are all shown as the correct hostname (FQDN), and the agent versions are consistent.
After adding the deployment, the primary and secondary nodes are picked up correctly, but the arbiter is not. Rather, it picks up the arbiter host, but by IP address. As such, it shows no agents at all.
From the primary and secondary members I can ping the arbiter and also connect to the arbiter using mongo --host --port.
I can't quite figure out what is wrong here and why I see all of the correct hosts in the servers section, but the deployment fails to correctly pick up the arbiter.
The problem was the replica set configuration: the hostnames in the rs.conf() members did not match the case of the name returned by hostname -f.
To fix this, I updated the replica set config. For example, assume rs.conf() shows Mongo-Arbiter:27017 for members[2], while on the arbiter:
hostname -f:
mongo-arbiter
In the mongo shell:
cfg = rs.conf()
cfg.members[2].host="mongo-arbiter:27017"
rs.reconfig(cfg)
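A quick check to confirm the reconfiguration took effect (a minimal sketch, assuming the arbiter is still members[2]):
rs.conf().members[2].host   // should now return "mongo-arbiter:27017", matching hostname -f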
After ensuring all members in rs.conf matched their respective hostname, I could add the deployment to Ops Manager.
Related
I have some questions about the mongo replica
If I set up 1 primary and 2 secondaries for replication, I have 3 endpoints to 3 different DBs, and my apps can only write to the primary DB. What if my primary suddenly shuts down and a secondary takes over as the primary? How do I automatically change the endpoint in my apps? Should I use mongos (the mongo router)? But that needs sharding, if I remember correctly.
Thank you.
All nodes in a replica set work together to have identical data. Secondary nodes may lag behind the primary, but you don't get "3 different DB". There is only one database of which copies exist on each node.
All MongoDB drivers know to monitor replica set members and discover which one is the primary automatically. Some drivers need to be configured to do so by providing the replica set name; others do it automatically by default when they connect to a replica set node. Look up "connecting to replica set" in your driver documentation.
In a proper connection string you will provide all three RS members, e.g.
mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl
The client will detect the PRIMARY and will use it. I guess most drivers will re-connect automatically if the PRIMARY node changes.
Most drivers will detect the PRIMARY automatically if you provide the ReplicaSet name, i.e.
mongodb://mongodb0.example.com:27017/?replicaSet=myRepl
would connect to the PRIMARY even if it is not mongodb0.example.com. However, if mongodb0.example.com is down, then you don't connect at all. So it is beneficial to provide all replica set members in the connection string.
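Once connected, a quick way to confirm which member the driver or shell is actually using (a minimal sketch; these are fields of the isMaster command output):
db.isMaster().primary   // host:port of the current primary, regardless of which seed was used
db.isMaster().hosts     // all data-bearing members discovered from the replica set config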
See Connection String URI Format
mongos is needed only to connect to a Sharded Cluster.
I am using MongoDB with LoopBack in my application, with a LoopBack connector to MongoDB. My application was working fine but now it throws an error:
not master and slaveOk=false.
Try running rs.slaveOk() in a MongoDB shell.
You are attempting to read from a secondary replica, whereas previously your app (connection) was likely connecting to the primary, hence the error. If you use rs.secondaryOk() (slaveOk is deprecated now) you will possibly solve the connection problem, but it might not be what you want.
To make sure you are doing the right thing, think if you want to connect to the secondary replica instead of primary. Usually, it's not what you want.
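If reading from a secondary really is intended, two equivalent ways to allow it (a sketch; the hostnames and set name in the URI are placeholders):
rs.secondaryOk()   // per shell session: allow reads on a secondary (replaces the deprecated rs.slaveOk())
or, in the application's connection string:
mongodb://host1:27017,host2:27017/?replicaSet=rs0&readPreference=secondaryPreferred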
If you have permissions to amend the replica configuration
I suggest connecting using MongoDB Compass and executing rs.status() first to see the existing state and configuration of the cluster. Then verify which replica is the primary.
If necessary, adjust priorities in the replica set configuration to assign primary status to the right replica; the member with the highest priority number is favored as primary. This article shows how to do it right.
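A minimal sketch of such a priority change from the mongo shell, assuming the member you want as primary is at index 0:
cfg = rs.conf()
cfg.members[0].priority = 2   // higher than the default priority of 1 on the other members
rs.reconfig(cfg)              // may trigger an election, so expect a brief interruption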
If you aren't able to change the replica configuration
Try a few things:
make sure your hostname points to the primary replica
if it is a local environment issue - make sure you added your local replica hostnames to the /etc/hosts pointing to 127.0.0.1
experiment with directConnection=true
experiment with multiple replica hosts and ?replicaSet=<name> - read this article (switch tabs to replica); both options are sketched after this list
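For illustration, two hedged connection-string variants (the hostnames and the rs0 set name are placeholders, not taken from the question):
mongodb://db0.example.net:27017/?directConnection=true
forces a direct connection to that single node, bypassing replica set discovery, while
mongodb://db0.example.net:27017,db1.example.net:27017,db2.example.net:27017/?replicaSet=rs0
lets the driver discover the topology and route operations to the current primary.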
The best bet is that your database configuration has changed and your connection string no longer reflects it correctly. Usually slight adjustments to the connection string are needed, or just checking which instance you actually want to connect to.
I have a replica set with 3 nodes: a server titled dev-6, which is running mongo 3.0.6, and dev 5, which has two mongod instances on it running 3.2. I'd like dev 6 to be the primary, so I've added the other 2 nodes to its initiated replica set; once I do that it becomes primary and the other 2 nodes begin to sync to it. Here is a screenshot of how my setup looks when I bring down dev 6 and then bring it back up.
As intended, dev 6 is secondary, and so is dev 5:27018. What I'm wondering about, though, is why dev 5 is saying there's no one to sync with, but dev 5:27019 is saying that it's syncing with dev 5:27018.
I'm now going to follow the mongo instructions to make dev 6 the primary; here is the result.
Dev 6 is the primary, but what I'm trying to understand is why the other dev 5 instances are not connecting with dev 6. Before any conclusions are jumped to: I am able to ping dev 5 from dev 6 and vice versa, and the /etc/hosts files contain the IP addresses for one another.
EDIT: I'm basing the claim that the replica set is unable to connect on the following message: "lastHeartbeatMessage" : "could not find member to sync from". This seems to be fixed if I run rs.reconfig() with the current config, or if I add or remove a replica set member.
Your replica set seems to be healthy in both cases. All secondaries have applied the last operation from the primary's operation log (optime/optimeDate are the same), and lastHeartbeat is only slightly behind the dev 6 time. In regard to the lastHeartbeatMessage, refer to this jira issue, which says:
When a secondary chooses a source to sync from, it will choose a node whose oplog is newer (not merely equal) than its own. After startup, when all nodes have the same data, the oplogs are identical, so a secondary cannot choose a sync source. Once a write operation happens, the primary will have a newer oplog, the secondary can successfully choose a target to sync from, and the error message will disappear.
The error "could not find member to sync from" I usually associate with replica set members not being able to talk to one another. Either because of firewall or credential issues.
I know that you can ping the servers, but have you tried connecting to the primary mongo instance from one of the secondaries using the mongo client?
mongo vpc-dev-app-06:27017
with appropriate user credentials if necessary.
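Once connected, a quick way to see what each member reports about its sync state (a sketch; these fields appear in rs.status() output):
rs.status().members.forEach(function (m) {
    // print each member's name, state, and last heartbeat message (empty for the local member)
    print(m.name, m.stateStr, m.lastHeartbeatMessage || "");
});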
Has anything possibly changed in the mongod.conf as part of the upgrade?
I am new to MongoDB replica sets.
According to the Replica Set Ref, this should be the connection string in my application to connect to MongoDB:
mongodb://db1.example.net,db2.example.net,db3.example.net:2500/?replicaSet=test
Suppose this is a production replica set (i.e. I cannot change application code or stop all the mongo servers), and I want to add another mongodb instance, db4.example.net, to the test replica set. How will I do that?
How will my application know about the new db4.example.net?
If you are looking for a real-world scenario:
When an existing server goes down due to hardware failure etc., it is natural to add another db server to the replica set to preserve the redundancy. But how do you do that?
The list of replica set hosts in your connection string is a "seed list", and does not have to include all of the members of your replica set.
The MongoDB client driver used by your application will iterate through the seed list until it can successfully connect to a host, and use that host to request the current replica set configuration which will list all current members of the replica set. Per the documentation, it is recommended to include at least two hosts in the connect string so that your driver can still connect in the event the first host happens to be down.
Any changes in replica set configuration (i.e. adding/removing members or election of a new primary) are automatically discovered by your client driver so you should not have to make any changes in the application configuration to add a new member to your replica set.
A change in replica set configuration may trigger an election for a new primary, so your application code should expect to handle transient errors for a few seconds during reconfiguration.
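One hedged mitigation, assuming a driver recent enough to support retryable writes: enabling them in the connection string lets the driver retry a write once after a transient failover, e.g.
mongodb://db1.example.net,db2.example.net,db3.example.net:2500/?replicaSet=test&retryWrites=true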
Some helpful mongo shell commands:
rs.conf() - display the current replication configuration
db.isMaster().primary - display the current primary
You should notice a version number in the configuration document returned by rs.conf(). This version is incremented on every configuration change so drivers and replica set nodes can check if they have a stale version of the config.
How will my application know about the new db4.example.net?
Just rs.add("db4.example.net") and your application should discover this host automatically.
In your scenario, if you are replacing an entirely dead host you would likely also want to rs.remove() the original host (after adding the replacement) to maintain the voting majority for your replica set.
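A minimal sketch of that replacement sequence, run against the current primary (reusing the db3/db4 hostnames from this example):
rs.add("db4.example.net")           // bring the replacement member into the set
rs.remove("db3.example.net:2500")   // then remove the dead member, using the exact host string shown in rs.conf()
rs.conf().version                   // the config version increments on every reconfiguration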
Alternatively, rather than adding a host with a new name you could replace the dead host with a new server using the same hostname as previously configured. For example, if db3.example.net died, you could replace it with a new db3.example.net and follow the steps to Resync a replica set member.
A way to provide abstraction for your database is to set up a sharded cluster. In that case, the access point between your application and the database is the mongodb routers. What happens behind them is outside of the visibility of the application. You can add shards, remove shards, turn shards into replica sets and change those replica sets all you want. The application keeps talking with the routers, and the routers know which servers they need to forward requests to. You can change the cluster configuration at runtime by connecting to the routers with the mongo shell.
When you have questions about how to set up and administrate MongoDB clusters, please ask on http://dba.stackexchange.com.
But note that in the scenario you described, that wouldn't even be necessary. When one of your database servers has a hardware failure and your system administrators want to replace it without application downtime, they can just assign the same IP and hostname to the new server so the application doesn't even notice that it's a replacement.
When you want to know details about how to do this, you will find help on http://serverfault.com
A MongoDB instance can have different roles:
Config server
Router (mongos)
Data server
Arbiter server (for replica sets)
I know that db.serverStatus() can be used to see if an instance is a router: the process value is mongos.
But for config servers, arbiters and data nodes the process value is mongod.
Is there a simple way of distinguishing between these instance types?
I want to bring attention to one particularly important issue with this question: sharding is a horizontal dimension (several replica sets across which data is distributed), while a replica set is a high-availability solution composed of different mongod nodes!
So what you are actually trying to figure out is:
Replica set node roles
Shard cluster members
In the case of a replica set, what you might be interested in knowing is each node's role. You can easily get this information without needing to connect to all the nodes of the replica set; just run the command:
db.isMaster()
with this you will get the node members and roles of each member.
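For illustration, a few fields from that output that identify a node's role (a sketch; the exact fields can vary by server version):
db.isMaster().ismaster      // true when connected to the primary (also true on a mongos)
db.isMaster().secondary     // true when connected to a secondary
db.isMaster().arbiterOnly   // true when connected to an arbiter
db.isMaster().msg           // "isdbgrid" when connected to a mongos router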
For shard cluster members, first of all, you should never try to connect directly to the config servers. These are there to manage the distribution of chunks, chunk splits and other configuration data relevant only to the sharded cluster functionality. Avoid using those IPs to connect from your application.
So if you want a clear view of which members compose your sharded cluster, how many shards you have, etc., you need to run the command:
db.printShardStatus()
or
sh.status()
Please review the documentation here
Cheers,
N.