I have a sharded cluster environment with two shard replica sets and one config server replica set. When I run db.stats(), why does it always show data for the shard replica sets?
I need to collect config replica set data: the docs mention that from 3.2+ you can deploy the config servers as a replica set, and I did the same.
Please help me find the exact command, or tell me if I am doing something wrong, because I need to monitor the config replica set's data.
If you want to see the configuration of a replica set, you can retrieve it with rs.conf() or rs.config().
You can find out more about replication commands here.
But if you want to check the data inside a replica set member, you have to log in to that node and run rs.slaveOk() before doing any queries.
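For example, to read data directly from a config server replica set member, a minimal sketch (hostname and port are assumptions; config servers listen on 27019 by default):

mongo --host cfgsvr1.example.net --port 27019
rs.conf()                      // the replica set configuration document
rs.slaveOk()                   // allow reads on a secondary (rs.secondaryOk() in newer shells)
use config
db.chunks.find().limit(5)      // sample the sharding metadata held by the config servers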
PS: Sorry if I got your question wrong...
I have 3 questions regarding a replica set in MongoDB on Windows:
1. I currently have a standalone running with data on it. If I create a replica set (adding 2 secondaries), will I have downtime, or can I create the replica set and add the 2 secondaries while the standalone (now the primary) is still running?
2. Will the 2 secondaries copy all the data from the primary, including data that was written to the standalone before it became part of the replica set?
3. Once there is an election, a secondary becomes the primary, but then the primary is on a different IP + port. Does this mean I have to redirect my writes to the new primary myself, or does MongoDB do it automatically? Or do I need to use a virtual IP?
Have a look at Convert a Standalone to a Replica Set.
You need to change the configuration file and restart mongod, so you will have a short downtime of your MongoDB.
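For reference, making a standalone part of a replica set only requires naming the set in mongod.conf before the restart; a minimal sketch (the set name "rs0" is an assumption):

replication:
  replSetName: "rs0"

(Equivalently, start mongod with --replSet rs0 on the command line.)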
Yes. Whenever you add a new member to a replica set, MongoDB performs an initial sync to the new secondary, copying all existing data, including data written while the node was still a standalone.
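Once the node is restarted with that setting, a minimal sketch of initiating the set and adding the secondaries (hostnames are assumptions):

rs.initiate()                         // the former standalone becomes the primary, keeping its data
rs.add("host2.example.net:27017")     // each added member performs an initial sync
rs.add("host3.example.net:27017")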
You would need to change your connection; see Connect to a MongoDB Replica Set. The connection string contains all replica set members, and the client will connect (by default) to the primary.
Actually, you don't have to put all replica set members in your connection string; the client will discover them automatically. However, if you list only one member and that member happens to be down, you will have no connection.
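A minimal sketch of such a connection string, listing several members along with the replica set name (hosts, database, and the set name "rs0" are assumptions):

mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0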
I am using MongoDB with LoopBack in my application, with a LoopBack connector to MongoDB. My application was working fine, but now it throws an error:
not master and slaveOk=false.
Try running rs.slaveOk() in a MongoDB shell.
You are attempting to connect to a secondary replica, whereas previously your app (connection) was likely set to connect to the primary, hence the error. If you use rs.secondaryOk() (slaveOk is deprecated now), you may get past the connection problem, but it might not be what you want.
To make sure you are doing the right thing, consider whether you actually want to connect to the secondary replica instead of the primary. Usually, it's not what you want.
If you have permissions to amend the replica configuration
I suggest connecting using MongoDB Compass and executing rs.status() first to see the existing state and configuration of the cluster. Then verify which replica is the primary.
If necessary, adjust priorities in the replica set configuration to assign primary status to the right replica; the member with the highest priority number becomes the primary. This article shows how to do it right.
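A minimal sketch of raising one member's priority from the shell (the member index 0 is an assumption; adapt it to your configuration):

cfg = rs.conf()
cfg.members[0].priority = 2    // the highest-priority electable member becomes primary
rs.reconfig(cfg)               // may trigger an election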
If you aren't able to change the replica configuration
Try a few things:
make sure your hostname points to the primary replica
if it is a local environment issue, make sure you added your local replica hostnames to /etc/hosts, pointing to 127.0.0.1
experiment with directConnection=true
experiment with multiple replica hosts and ?replicaSet=<name> - read this article (switch tabs to replica); minimal sketches of both this and directConnection=true follow below
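Minimal sketches of those last two connection strings (hosts and the set name "rs0" are assumptions):

mongodb://localhost:27017/?directConnection=true
mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0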
Most likely, your database configuration has changed and your connection string no longer reflects it correctly. Usually slight adjustments to the connection string are needed, or simply checking which instance you want to connect to.
I built a MongoDB sharding environment and want to test the performance of data migration.
I inserted one billion rows into a collection in Replica Set A.
I then added another shard, Replica Set B.
MongoDB started to balance chunks between the shards.
After balancing finished, I found that I couldn't look up some data.
That data had been moved to Replica Set B, and only after restarting all mongos router services was I able to query it.
Is this a normal and inevitable procedure, or is there a way to reload the whole system (through a mongo shell command or anything else)?
Thank you !!!
I found a command that seems to reload the router config:
db.adminCommand({"flushRouterConfig":1});
2017-05-18: After testing, it works!
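Worth noting: flushRouterConfig only clears the metadata cache of the mongos you run it against, so with several routers you would repeat it on each one:

// run against the admin database of every mongos instance
db.adminCommand({ flushRouterConfig: 1 })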
Here is a newbie trying to play around with MongoDB. I am trying to demonstrate scaling in my class, meaning I need to show that I have 2 instances of MongoDB up and running, replicate them, and set one as master and the other as secondary.
Can any of you suggest a simple way to demonstrate that if the primary/master fails, the slave/secondary takes over as the new primary?
Please keep it as simple as possible, as I am teaching fairly new beginners to MongoDB.
MongoDB replica sets are not master/slave. In order to achieve automatic failover you need to have a majority of nodes in the replica set able to elect a new primary. The minimum number of nodes in your replica set should be 3, which can either be 3 data-bearing nodes or 2 data-bearing nodes and an arbiter, which is a node that votes in elections.
A demo using replication alone is more about failover and redundancy than scaling (better demo'd with sharding).
If you want a very simple (and non-production) way to stand up a replica set or sharded cluster in a development environment, I would suggest using the mlaunch script which is part of mtools.
For example, to create a 3-node replica set with an arbiter:
mlaunch --replicaset --nodes 2 --arbiter
To create a sharded cluster with 3 shards backed by a replica set (plus mongos and config server):
mlaunch --replicaset --sharded 3
As mentioned in the other comments here, the free MMS Monitoring service is a good way to visualise activity in your MongoDB deployment, and you can use db.shutdownServer() to shut down specific nodes and see the outcome.
The easiest way would be to set up the MongoDB monitoring service, stop the mongod process on one node, and watch the other take over. But use replica sets rather than master/slave replication, as they are the recommended approach.
Actually, it is pretty easy:
Set up a replica set with 2 "normal" mongods and an arbiter
Connect to both of the normal mongods using mongo
Show the output of rs.status(). (Note the "self" field.)
Shut down the current primary
Show the output of rs.status() again and again, until the former secondary is elected primary (a command sketch of the whole demo follows below)
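A minimal command sketch of that demo, assuming localhost, ports 27017-27019, and a set named "demo" (all names and paths are assumptions; run each mongod in its own terminal, or add --fork --logpath):

mongod --replSet demo --port 27017 --dbpath /data/n1
mongod --replSet demo --port 27018 --dbpath /data/n2
mongod --replSet demo --port 27019 --dbpath /data/arb
mongo --port 27017
rs.initiate({_id: "demo", members: [
    {_id: 0, host: "localhost:27017"},
    {_id: 1, host: "localhost:27018"},
    {_id: 2, host: "localhost:27019", arbiterOnly: true}
]})
rs.status()   // note the "self" field; then run db.shutdownServer() from the admin database on the primary and watch rs.status() on the other node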
Another option would be to write a simple Java app which utilizes the driver: put it in an infinite loop which writes one entry every second and prints the number of objects in the database. Catch exceptions and report that a problem occurred. Start the replica set, then start your application. Shut down the primary while the program is running; during the election, exceptions may be thrown. As soon as the former secondary is elected primary, the document count should start to rise again.
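The same idea can be sketched directly in the mongo shell instead of a Java app (database and collection names are assumptions):

// insert one document per second and print the count; exceptions surface during failover
while (true) {
    try {
        db.getSiblingDB("demo").ticks.insert({ ts: new Date() });
        print(db.getSiblingDB("demo").ticks.count());
    } catch (e) {
        print("problem occurred: " + e);
    }
    sleep(1000);
}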
I've been working with Mongo for a few weeks, building my environment in dev. I started with a single node, then moved to a sharded cluster, and now want to move to a replicated sharded cluster. From what I've read, a replicated sharded cluster is the best of the best: scalability, durability, performance increases, etc.
I've read most of the (very good) tutorials in their help. It seems their lessons advise going from single node, to replica set, to sharded replica set, which, unfortunately, is the opposite of the way I did it. I can't seem to find anything on upgrading a sharded cluster to a replicated sharded cluster.
Here are the 5 hosts that I have:
APPSERVER
CONFIGSERVER
SHARD1
SHARD2
SHARD3
I started each of the shard servers with:
mongod --shardsvr
Then I started the config server with:
mongod --configsvr
Then I started the mongos process on the APPSERVER with:
mongos --configdb CONFIGSERVER
Then in mongos, I added the shards, enabled sharding on my database, and defined a shardkey for a collection:
sh.addShard("SHARD1:27018"); // again for SHARD2 and SHARD3
sh.enableSharding("beacon");
sh.shardCollection("beacon.alpha2", {"ip":1});
I want each of the shards replicated on each of the other two (right?). Do I need to bring down the mongod processes on the shards and restart them with different command-line parameters? What commands do I need to issue in my mongos shell to get it to replicate? Should I export all my data, take everything down, restart, and reimport? Again, I see a lot of tutorials on how to create a replica set, but I don't really see anything on how to build a replica set given a sharded system to start with.
Thanks!
For each shard, you will need to restart the current member and start both it and two new members (or 1 new member and an arbiter) with the --replSet command line option. You could add more members than that, but 3 is the lowest workable set. Then from inside what will become the new primary (your current SHARD1, for example) you could do the following:
rs.initiate();
rs.add("newmember1:port")
rs.add("newmember2:port")
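For the restart mentioned above, a minimal sketch of the command line for one shard member (set name, port, and dbpath are assumptions):

mongod --shardsvr --replSet shard1 --port 27018 --dbpath /data/shard1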
You would then need to check and make sure that the sh.status() output has been updated to reflect the new members of the replica set. In 2.2 this has become slightly easier, as it should be automatic; for prior versions it was necessary to manually save the shard information in the config database, which is reflected in the documentation under sharded clusters. If it has been added automatically, you will see the replica set listed in the sh.status() output, similar to the following:
{ "_id" : "shard1", "host" : "shard1/SHARD1:port,newmember1:port,newmember2:port" }
If this does not happen automatically, you will need to follow the procedure outlined in the documentation link above, namely from inside mongos:
db.getSiblingDB("config").shards.save({_id:"<name>", host:"<rsName>/member1,member2,..."})
Following the above example, it would look like:
db.getSiblingDB("config").shards.save({_id:"shard1", host:"shard1/SHARD1:port,newmember1:port,newmember2:port"})
You would need to do this procedure for each shard, and you should do them one at a time so that you have all of SHARD1 as a replica set before moving on to SHARD2. You will also need to be aware that each replica set will become read-only while the initial election takes place, so at the very least you should schedule this in a downtime or maintenance window. Ideally test first in a staging environment.