I'm running MongoDB 2.4.5 and recently I've started digging into replica sets to get some redundancy.
I started the same mongod instance with the --replSet parameter and also added an arbiter to the running replica set. Writes to mongo then slowed down significantly (from 15ms to 30-60ms, sometimes even around 300ms). As soon as I restarted it in non-replica-set mode, performance went back to normal.
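For context, the setup described above amounts to roughly the following; the set name, dbpath and arbiter host are placeholders, not taken from the original post:
mongod --replSet rs0 --dbpath /data/db --port 27017
Then, from a mongo shell connected to that node:
rs.initiate()
rs.addArb("arbiter-host:27017")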
I have a standalone deployment of MongoDB which runs on MongoDB 5.0. Due to an issue (since fixed), the mongod service crashed. The problem is that the mongod service takes forever to restart because it rebuilds all the indexes from scratch. For now the daemon is running with no issues, but waiting hours just to restart the mongo daemon after the next crash is not worth it.
Prior to MongoDB 4.4 there was a mongod.conf setting, storage.indexBuildRetry, which, when set to false, skipped rebuilding interrupted indexes on startup. Is there an equivalent way to avoid this in 5.0? Thanks.
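For reference, the pre-4.4 option referred to above looked roughly like this in mongod.conf; the dbPath is only an example, and the option was removed in 4.4, so this is shown purely to illustrate what the question means:
storage:
  dbPath: /var/lib/mongodb
  indexBuildRetry: false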
If a node in a replica set is cleanly shut down or rolls back during an index build, the index build progress is now saved to disk. When the server restarts, index creation resumes from the saved position.
https://docs.mongodb.com/manual/core/index-creation/#std-label-index-operations-build-failure
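If the concern is mainly visibility, you can also watch an in-flight index build from a second shell while the server is rebuilding; a minimal sketch, with the caveat that the exact fields reported vary by version:
// list current operations and keep only the ones that look like index builds
db.currentOp(true).inprog.filter(function (op) {
    return op.msg && op.msg.indexOf("Index Build") !== -1
})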
I have upgraded a MongoDB replica set, which consists of 3 members, from 4.0.11 to 4.2.5. After upgrading, startup takes about 5 minutes; before upgrading it was instant. It is related to oplog size, because I tested dropping the oplog on the new mongo 4.2 and startup was instant.
The max oplog size was 25GB; I decreased it to 5GB and startup is still slow. MongoDB is on AWS with standard EBS disks, but it worked well until this upgrade.
Do you have any idea what could cause the slow startup?
I tried changing the following WiredTiger eviction parameters from their defaults:
storage:
  wiredTiger:
    engineConfig:
      configString: "eviction_dirty_target=60,eviction_dirty_trigger=80,eviction=(threads_min=4,threads_max=4)"
Now mongo starts immediately. Is it safe to set eviction_dirty_target and eviction_dirty_trigger like this? The defaults are eviction_dirty_target = 5% and eviction_dirty_trigger = 20%. Thanks.
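One way to judge whether those thresholds actually matter for your workload is to watch the dirty portion of the WiredTiger cache from the shell; a minimal sketch, assuming the statistic names reported by recent versions (check your own serverStatus output):
// rough check of how much of the WiredTiger cache is currently dirty
var cache = db.serverStatus().wiredTiger.cache
var dirtyPct = 100 * cache["tracked dirty bytes in the cache"] / cache["maximum bytes configured"]
print("dirty cache: " + dirtyPct.toFixed(1) + "% (default eviction_dirty_target is 5%)")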
Examine the server logs; they'll say what the database is doing.
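For example, filtering the startup section of the log usually shows which phase is taking the time; a rough sketch assuming the default log path on Linux:
grep -iE "wiredtiger|recovery|oplog|index" /var/log/mongodb/mongod.log | tail -n 100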
I have upgraded a MongoDB sharded cluster with two replica sets from 3.2 to 3.4. The current storage engine is MMAPv1. After successfully upgrading all the secondaries, primaries, config server and mongos to 3.4, I run the config server with the following command:
sudo mongod --configsvr
I keep getting the following error:
SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: 115 could not get updated shard list from config server due to Current storage engine does not support majority readConcerns; will retry after 30s
I am also unable to connect mongos to the config server. When I try to connect using the following command:
sudo mongos --configdb [ip-of-my-config-server]:27019
it gives me the following error:
BadValue: configdb supports only replica set connection string
I suppose mongos is unable to connect to the config server due to the majority readConcern error on the config server.
The MongoDB manual says:
"When reading from the replica set config servers, MongoDB 3.4 uses a Read Concern level of "majority"."
And to use a read concern level of "majority", WiredTiger must be used as the storage engine.
So it seems I have to switch to the WiredTiger storage engine to make this work. But when I was about to switch a secondary replica set member to WiredTiger, the manual said:
"This procedure completely removes a secondary replica set member’s data"
So I am stuck halfway. The situation is:
The config server is giving an error regarding majority readConcern.
I have to switch to WiredTiger to get rid of it.
Switching to WiredTiger will remove data from secondary members.
Data will not be replicated back to the secondary members during the switch to WiredTiger because of the config server error, and eventually I will end up losing all the data (please correct me if I am wrong).
My questions are:
Can I make MongoDB 3.4 use a read concern level of "local" when reading from the replica set config servers?
How can I switch to WiredTiger engine without losing data in my scenario?
You could migrate each node in the replica set as if it was a standalone, by using mongodump to back up the data, restarting with WiredTiger and a blank data directory, then using mongorestore to populate the new database.
This isn't normally recommended for replica set nodes, but only because it's just easier to wipe the data on a secondary and let it resync from the other nodes. Doing it this way will work just fine, but involves a little more fiddly work for you with the dump and restore tools.
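A rough sketch of that per-node dump and restore, assuming example paths and the default port (carry over your existing options, such as --replSet, when restarting the node):
mongodump --port 27017 --out /backup/node1
Then shut down mongod, move the old data directory aside, and restart with WiredTiger and an empty data directory:
mongod --storageEngine wiredTiger --dbpath /data/db-wt --port 27017
mongorestore --port 27017 /backup/node1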
I was using MongoDB 2.6.6 on Google Compute Engine, deployed with the click-to-deploy method.
rs0:SECONDARY> db.createUser({user:"admin", pwd:"secret_password", roles:[{role:"root", db:"admin"}]})
2015-07-13T15:02:28.434+0000 Error: couldn't add user: not master at src/mongo/shell/db.js:1004
rs0:SECONDARY> use admin
switched to db admin
rs0:SECONDARY> db.createUser({user:"admin", pwd:"secret_password", roles:["root"]})
2015-07-13T15:13:28.591+0000 Error: couldn't add user: not master at src/mongo/shell/db.js:1004
I had a similar problem with mongo 3.2:
Error: couldn't add user: not master :
when trying to create a new user with the root role.
I was using only a local copy of mongo.
In my mongod.conf file I had the following uncommented:
replication:
  replSetName: <node name>
Commenting that out and restarting fixed the problem. I guess mongo thought it was part of a replica set and was confused about who the master was.
Edit:
I've also found that if you ARE trying to set up a replica set and you get the above error, then run:
rs.initiate()
This will initiate the replica set and make the current node PRIMARY.
Exit, and then log back in and you should see:
PRIMARY>
Now create users as needed.
I ran into this error when scripting replica set creation.
The solution was to add a delay between rs.initiate() and db.createUser().
Replica set creation is seemingly done in the background, and it takes time for the node to actually become primary. In interactive use this doesn't cause a problem because there is a natural delay while typing the next command, but when scripting the interactions the delay may need to be forced.
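One way to force that delay without guessing a fixed sleep is to poll until the node reports itself as primary; a minimal sketch, with the user name and password as placeholders:
rs.initiate()
// poll until this node has actually become primary
while (!db.isMaster().ismaster) { sleep(1000) }
db.getSiblingDB("admin").createUser({user: "admin", pwd: "secret_password", roles: ["root"]})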
MongoDB will be deployed in a cluster of Compute Engine instances (also known as a MongoDB replica set). Each instance will use a boot disk and a separate disk for database files.
Primary and master nodes are the nodes that can accept writes. MongoDB's replication is "single-master": only one node can accept write operations at a time.
Secondary and slave nodes are read-only nodes that replicate from the primary.
Your error message suggests you are trying to add the user on a secondary. Add the user on the primary instead.
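If you're not sure which node that is, any connected member can tell you; a quick sketch:
// db.isMaster() reports the current primary's host:port
db.isMaster().primary
// or list the member whose state is PRIMARY
rs.status().members.filter(function (m) { return m.stateStr === "PRIMARY" })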
I ran into this issue when I thought I was running mongo 3.4 but it was mongo 3.6. Uninstalling 3.6 and installing 3.4 fixed my issue.
I have a 5GB database I want to compact and repair. Unfortunately, I have an active application running on that database.
I'm wondering if running a mongod --repair task with MongoDB 1.8 will block all the other write operations on the database.
I don't want to shut down the entire application for hours...
You may want to take a look at the --journal option. It keeps a write-ahead journal of recent operations, and recovering from the journal can take much less time than a repair.
http://www.mongodb.org/display/DOCS/Durability+and+Repair
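For reference, journaling is turned on with a startup flag (it was not enabled by default in 1.8); the dbpath below is just an example:
mongod --journal --dbpath /data/db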
Yes, repairDatabase is a blocking operation, which means you'll need to do it during a scheduled maintenance window.
Alternatively, if you are using a replica set, it's possible to repair with no downtime by taking one member out of the replica set, repairing it, re-adding it to the replica set, and repeating until all members are repaired. See the note in yellow at the end of this section for more info and caveats.
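A rough outline of that rolling repair, assuming a standard installation; the port, dbpath and replica set name below are placeholders:
On each secondary in turn, shut it down, repair it offline, and restart it:
mongo --port 27017 --eval "db.getSiblingDB('admin').shutdownServer()"
mongod --repair --dbpath /data/db
mongod --dbpath /data/db --replSet rs0
Once all secondaries are repaired and have caught up, step down the primary with rs.stepDown() and repeat the same steps on it.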