I defined a replica set on MongoDB v3.6.8 (Ubuntu 20.04) with 2 nodes.
At the beginning, I had the first node as master and the second as secondary.
Due to a problem modifying the Unix service (cf. the --replSet option), at one point the master was stopped while the secondary stayed up.
Now I have 2 secondary nodes; any command to add a third node or to remove the 2nd one freezes.
Any clue how to start the first node as master?
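In case it is useful, here is a minimal sketch of the usual way out of a "two secondaries, no primary" state: force a reconfiguration from the node that should become master, keeping only that member. Hostname, port and the member index 0 are assumptions about your setup, and a forced reconfig can discard writes the other node has not replicated.

# run the mongo shell against the node you want as master, e.g.:
mongo --host node1 --port 27017 --eval '
  var cfg = rs.conf();
  cfg.members = [cfg.members[0]];      // keep only the member that should stay
  rs.reconfig(cfg, { force: true });   // force is needed because there is no primary
  printjson(rs.status());              // this node should report PRIMARY shortly after
'

After that, the other node can be re-added from the new primary with rs.add().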
Redis in our production environment runs in cluster mode, with 6 nodes: 3 master nodes and 3 slave nodes. When nodes are switched over due to network issues or other reasons, the Redis client cannot automatically refresh these nodes and reports the exception
MOVED 5649 192.168.1.1:6379
I am using vertx-redis-client version 4.2.1.
My configuration looks like this:
RedisOptions options = new RedisOptions();
options.setType(RedisClientType.CLUSTER)
       .setRole(RedisRole.MASTER)
       .setUseReplicas(RedisReplicas.NEVER);
Every time I encounter this problem, I need to restart my application service to recover. Is there any way to make vertx-redis-client refresh the cluster nodes automatically?
Thank you ~
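Regarding the vertx-redis-client question above: while debugging, it can help to check what topology the cluster itself is advertising after the failover. The host and port below are taken from the MOVED error and may need adjusting for your environment.

# which node currently serves each slot range (slot 5649 included)
redis-cli -h 192.168.1.1 -p 6379 cluster slots
# full node table, with master/slave roles after the switch
redis-cli -h 192.168.1.1 -p 6379 cluster nodes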
I am using PostgreSQL 13 to set up logical replication. When the original active node (A) becomes the secondary, the prior secondary (B), which is now the active node, needs to sync its data back to node (A).
To summarize the issue:
Node A is active and fails at some point in time. Node B takes over, is now active, and accepts I/O from the application. When Node A recovers from the failure and is ready to become active again, it needs to get the data that may have been added while it was down. To get this data, Node A creates a subscription to Node B, which is now acting as the publisher. The issue is that this subscription on Node A fails, because Node A already has some data from before it went down, and this data results in conflicts.
So what are my options here?
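A hedged sketch of two options commonly used for the PostgreSQL situation above; host names, database, publication/subscription and table names are assumptions:

# Option 1: clear the conflicting tables on node A, then let the initial copy repopulate them
psql -h node-a -d appdb -c "TRUNCATE my_table;"
psql -h node-a -d appdb -c "CREATE SUBSCRIPTION sub_from_b CONNECTION 'host=node-b dbname=appdb user=replicator' PUBLICATION pub_on_b WITH (copy_data = true);"

# Option 2: if node A's existing rows are known to already match B, skip the initial copy
# and only stream changes from now on (any rows missed while A was down must be reconciled by hand)
psql -h node-a -d appdb -c "CREATE SUBSCRIPTION sub_from_b CONNECTION 'host=node-b dbname=appdb user=replicator' PUBLICATION pub_on_b WITH (copy_data = false);"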
I have a 3-node replica set in MongoDB. Is there any possibility to configure it to automatically run a shell script when a failover happens or one of the nodes goes down?
I believe this is not the database's role, but maybe there are some MongoDB plugins/tools made for that purpose, like Sentinel for Redis or MHA Manager for MySQL, which report which node failed, which node became the new master, and so on.
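In the absence of such a tool, one rough approach is a small polling script like the sketch below; host/port, the poll interval and the hook script path are assumptions, and the hook script itself is hypothetical.

#!/bin/bash
# watch which member is primary and run a hook script whenever that changes
LAST=""
while true; do
  CURRENT=$(mongo --quiet --host node1:27017 --eval 'print(db.isMaster().primary || "NONE")')
  if [ -n "$LAST" ] && [ "$CURRENT" != "$LAST" ]; then
    /path/to/failover-hook.sh "$LAST" "$CURRENT"   # hypothetical hook script
  fi
  LAST="$CURRENT"
  sleep 10
done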
MongoDB is always showing me this error message when I insert any data in my collection.
I am trying to configure Elasticsearch with MongoDB; that is when I set up my replica set. I try to add something, but get no results.
Mongo Shell always shows me the same message:
WriteResult({"writeError":{"code":undefined,"errmsg":"not master"}})
This happens when you do not have a 3-node replica set: you start the replica set in a master-slave configuration, and afterwards your master or your secondary goes down. Since there is no third node or arbiter to elect a new primary, the primary steps down from being master and goes into pure read-only mode. The only way to bring the replica set up again is to create a new server with the same repl-set name and add the current stepped-down master to it as a secondary.
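For illustration, a minimal sketch of initiating a replica set that can survive losing one member, with an arbiter standing in for a third data-bearing node; hostnames and the set name "rs0" are assumptions:

# run once from the mongo shell, against one of the data-bearing members
mongo --host node1 --port 27017 --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "node1:27017" },
      { _id: 1, host: "node2:27017" },
      { _id: 2, host: "node3:27017", arbiterOnly: true }  // votes in elections, holds no data
    ]
  });
'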
I'd like to programmatically set up a ZooKeeper cluster. My goal is to use machines with CoreOS and dynamically deploy three nodes as Docker containers, then set them up as one ZooKeeper cluster.
Apart from the common setup in the manual (/zookeeperReconfig.html), which shows how to add further nodes to an existing three-node cluster, I found a conversation that describes how to do it from the beginning, when there are no running nodes in an existing cluster. Unfortunately, this set of steps does not work for me. I'm talking about http://mail-archives.apache.org/mod_mbox/zookeeper-user/201408.mbox/%3CCAPSuxQipZFH2582bEMid2tCVBFk%3DC31hwCYjbFgSwChBF%2BZQVQ%40mail.gmail.com%3E
Here is a list of steps I did:
Run the first node with standaloneEnabled=false and only one entry in zoo.cfg.dynamic:
server.1=localhost:2381:2281:participant;0.0.0.0:2181
Run the second node with the following dynamic config:
server.1=localhost:2381:2281:participant;0.0.0.0:2181
server.2=localhost:2382:2282:observer;0.0.0.0:2182
Note that there is no difference in the resulting behavior if I change "observer" to "participant" for the second node.
Now I have two running instances. I can use ./zkCli.sh to log into the first node. When I try to add the second node using the following command:
reconfig -add server.2=localhost:2382:2282:participant;0.0.0.0:2182
... it fails with:
KeeperErrorCode = NewConfigNoQuorum for
However, after some research I found a solution. But it's tricky, and I don't think it's the only correct solution.
What is working for me? I can run the command from step #3 on the first node again, but now with "observer" instead of "participant". After this command, even the first node knows about the second node. When I type 'config' into the zkCli console, it seems to be working. The next step is to log into the second node using zkCli and then execute these commands:
reconfig -remove 2 <- next step is not working w/o this
reconfig -add server.2=localhost:2382:2282:participant;0.0.0.0:2182
Well, now I have a working cluster of two nodes. Finally, it's interesting that I can now add a third node using the regular scenario I mentioned in the first paragraph.
Does someone have an idea of what I'm doing wrong?
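For completeness, here is a sketch of the static configuration I would expect on each node for this kind of dynamic setup; paths and timing values are assumptions, and reconfigEnabled only exists, and is only required, from ZooKeeper 3.5.3 onwards.

# static part, conf/zoo.cfg on every node (no clientPort here; it lives in the dynamic file)
cat > conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
standaloneEnabled=false
reconfigEnabled=true
dynamicConfigFile=conf/zoo.cfg.dynamic
EOF

# dynamic part for the first node only, matching the first step in the list above
cat > conf/zoo.cfg.dynamic <<'EOF'
server.1=localhost:2381:2281:participant;0.0.0.0:2181
EOF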