How does a seed node work in a Cassandra cluster (NoSQL)?

My understanding is:
The seed node maintains the list of all nodes in the cluster.
Let's say we have to add a new node to the cluster: we enter the new node's name in the seed list on the seed server, and then the new node becomes part of the ring.
I am assuming we don't have to mention anything about the seed server on the peer nodes.
Correct me if my understanding is incorrect.
I also read somewhere that a failure of the "seed node" doesn't cause any problem. Let's say the seed node crashes: how is the ring information maintained then?

I want to clarify this, because that quote from the docs is old and was never entirely precise.
Even after bootstrapping, seed nodes still play a role in gossip.
There is no additional impact if a seed node goes down, though if you need to replace a seed node you should follow the guide in the docs.
Details:
In addition to helping new nodes bootstrap, seed nodes are also used to prevent split brain in your cluster. A node finds out about other nodes when it handshakes with a node that already has information about other nodes from recent gossip operations.
Gossip.run() happens every second. In a single gossip run a node will handshake with:
1. one random live node,
2. one random dead node (if any), based on some probability, and
3. one random seed node, if the node chosen in step 1 wasn't a seed (also based on some probability).
The more seed nodes you list, the more of your handshakes land on seeds: the probabilistic frequency of handshakes with the seed list increases as the proportion of seed nodes in the cluster increases.
However, as noted above, step 3 only happens if step 1 did not land on a seed node. So the probability of having to do step 3 is roughly (1 - s/n) * (s/n), where s is the number of seeds and n the total number of nodes: it increases as you add seeds, maxes out when half your nodes are seeds (a .25 chance), and then decreases again.
It is recommended to keep 3 seed nodes per DC. Do not add all your nodes as seed nodes.
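For reference, a seed list of that size is configured in cassandra.yaml roughly like this (the addresses are placeholders); every node, seed or not, carries the same list:

    seed_provider:
        # the seed provider that ships with Cassandra; the IPs below are illustrative
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.1.10,10.0.1.11,10.0.1.12"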

It is the other way round: in the configuration of your new node you point to another, already existing node as the seed provider. The seed provider is the initial contact point for a new node joining a cluster. After the node has joined the cluster, it remembers the topology and does not require the seed provider any more.
From the Cassandra docs:
Note: The seed node designation has no purpose other than bootstrapping the gossip process for new nodes joining the cluster. Seed nodes are not a single point of failure, nor do they have any other special purpose in cluster operations beyond the bootstrapping of nodes.

Akka cluster: configuring seed nodes

I have a simple question:
Does it make sense to configure all the nodes of an Akka cluster as seed nodes?
Example:
cluster {
  seed-nodes = [
    "akka://application@127.0.0.1:2551",
    "akka://application@127.0.0.1:2552",
    "akka://application@127.0.0.1:2553",
    "akka://application@127.0.0.1:2554",
    "akka://application@127.0.0.1:2555",
    "akka://application@127.0.0.1:2556",
    "akka://application@127.0.0.1:2557",
    "akka://application@127.0.0.1:2558",
    "akka://application@127.0.0.1:2559",
    "akka://application@127.0.0.1:2560",
    "akka://application@127.0.0.1:2561",
    "akka://application@127.0.0.1:2562"]
  downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  split-brain-resolver {
    active-strategy = static-quorum
    static-quorum {
      quorum-size = 7
    }
  }
}
Are there any disadvantages to this configuration?
I guess the answer has to be "it depends".
Seed nodes are one mechanism that enables new nodes to join an Akka cluster.
For your example to work you have to run all the nodes on the same host. I am guessing you're passing some JVM argument like -Dakka.remote.artery.canonical.port=2*** to bind each node to a different port. That's fine, it will work. A new node starting up will try to join the cluster by contacting the seed nodes, starting from the first, until one of them responds.
In practice you probably want the cluster nodes running on different machines, and that's when a static configuration like the one in your example can become a bit of a pain. You'd need to know all the IP addresses beforehand and guarantee that they will not change over time. That is perhaps possible in a network with statically assigned IPs, but it is nearly impossible with dynamically assigned IPs or in environments like Kubernetes. This is why other methods of cluster joining are implemented (https://doc.akka.io/docs/akka/current/discovery/index.html).
So the disadvantage I see here is how limited this configuration is in any real-life scenario. As long as you're doing this to learn and experiment with Akka Cluster, it's all fine, though you could also argue that in that case a list of 12 seed nodes does not give you much of an advantage over, say, 2 seed nodes, as long as you can keep those 2 up and running for the duration of your experiment so that all the nodes can join the cluster.
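As a rough sketch (same actor system name and ports as in your example, quorum size unchanged), a trimmed-down configuration for such an experiment could look like this, with only the first two nodes acting as seeds and every node, seed or not, using the same list:

    cluster {
      # two well-known contact points are enough for an experiment
      seed-nodes = [
        "akka://application@127.0.0.1:2551",
        "akka://application@127.0.0.1:2552"]
      downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
      split-brain-resolver {
        active-strategy = static-quorum
        static-quorum {
          # the quorum is counted over cluster members, not over seed nodes
          quorum-size = 7
        }
      }
    }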

Kubernetes: why would you need more than 2 nodes?

Given a K8s cluster (a managed cluster, for example AKS) with 2 worker nodes, I've read that if one node fails, all the pods will be restarted on the second node.
Why would you need more than 2 worker nodes per cluster in this scenario? You can always select the number of nodes you want, and the more you select, the more expensive it is.
It depends on the solution you are deploying in the Kubernetes cluster and the kind of high availability you want to achieve.
If you want to work in an active-standby mode, where the pods are moved to the other node if one node fails, two nodes work fine (as long as the single surviving node has the capacity to run all the pods).
Some databases / stateful applications, however, need a minimum of three replicas, so that a quorum (majority = floor(N/2) + 1) survives a network partition and a mismatch/conflict in data can be resolved by siding with the content held by two of the three replicas.
For instance, etcd needs 3 replicas.
If whatever you are building needs only two nodes, then you don't need more than 2. If you are building something bigger, where the amount of compute and memory needed is much larger, then instead of opting for expensive nodes with huge CPU and RAM you can join more and more lower-priced nodes to the cluster. This is called horizontal scaling.
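As a hedged sketch of the three-replica case (the Deployment name and image are made up), this is roughly how you would tell Kubernetes to keep each replica on a different worker node, which only succeeds if the cluster has at least three of them:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-db              # hypothetical workload name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-db
      template:
        metadata:
          labels:
            app: example-db
        spec:
          affinity:
            podAntiAffinity:
              # hard rule: never co-locate two replicas on the same worker node,
              # so the third replica stays Pending unless the cluster has >= 3 nodes
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: example-db
                  topologyKey: kubernetes.io/hostname
          containers:
            - name: db
              image: example/db:latest   # placeholder image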

Standard availability for a MongoDB replica set cluster of 3 nodes

I have set up a MongoDB replica set with one primary, one secondary and one arbiter node, with MongoDB installed on three independent AWS instances. I need to document the overall availability of the replica set cluster formed as per the aforementioned configuration, but I don't have any reliable/standard data to establish it.
Is there any standard data that can be referred to in order to establish the availability of the overall cluster / of an individual node in the above case?
Your configuration will guarantee continued availability even after one node goes down. However, availability after that depends on how quickly you can replace the downed node, and that is up to your monitoring and maintenance abilities.
If you do not notice for a while that a node is down, or if your procedure for replacing the node takes a long time (you may need to commission a new VM, install MongoDB, reconfigure the replica set, and allow time for the new node to sync), then another node may go down and leave you with no availability.
So your actual availability depends on the answers to four questions:
Which replica set configuration do you use? That determines how many nodes need to go down before the replica set stops being available.
How likely is it that any single node will go down or lose its connection to the rest?
How good is your monitoring, so you notice there is a problem?
How fast are your processes for repairing the problem?
The answer to the first one is straightforward; you have decided on the minimum of two data-bearing nodes and one arbiter.
The answer to the second one is not quite straightforward; it depends on the reliability of each node, and the connections between them, and whether two or more are likely to go down together (perhaps if they are in the same availability zone).
The third and fourth, we can't help you with; you'll have to assess those for yourself.
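Purely as an illustration of how those pieces combine (the 99% figure is made up and failures are assumed independent): the replica set keeps a primary as long as at least 2 of its 3 members are up, so

    P(a given member is down)         = 0.01   (assumed)
    P(at least 2 of 3 members down)   = 3 * (0.01)^2 * (0.99) + (0.01)^3 ≈ 0.0003
    availability of the replica set   ≈ 1 - 0.0003 = 99.97%

before accounting for repair time or correlated failures; your real number depends on the answers to questions 2-4 above.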

How MongoDB determines the majority if a node fails during a write operation

Suppose I have set w=majority as the write concern and a node fails during a write operation.
Will the majority be changed according to the currently alive nodes?
I.e., suppose there are 3 nodes, so the majority is 2. If a node fails during a write operation, will the majority be decreased, or will it remain the same and wait for the node to come up?
The majority of a replica set is determined by the replica set configuration, not by its current running state.
In other words, if you have a three-node replica set configured, then the majority is always two. If one node is offline, two is still the majority. If two nodes are offline, two is still the majority, and it cannot be satisfied until one of the offline nodes comes back online.
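For illustration (the collection name and timeout are arbitrary), a write issued like this against a 3-member set always waits for acknowledgement from 2 members, however many happen to be up; if a second acknowledgement never arrives it errors out after the timeout rather than lowering the bar:

    db.orders.insertOne(
        { item: "example" },
        { writeConcern: { w: "majority", wtimeout: 5000 } }   // wait up to 5 s for a majority (2 of 3)
    )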

MongoDB - All nodes secondary

All of the nodes in our cluster are "secondary" and no node is stepping up as "primary".
How do I force a node to become primary?
===SOLUTION===
We had 4 nodes in our replica set, when we are supposed to have an odd number of voting nodes.
Remove a node so you have an odd number of nodes:
1. Run rs.config() and copy the current configuration.
2. Edit the list of servers in notepad/textpad, removing one of the servers.
3. config = POST_MODIFIED_LIST_HERE
4. rs.reconfig(config, {force: true})
5. Stop the 'mongod' service on all nodes and bring them back up.
Done.
If this doesn't fix it, try setting a higher priority on one of the nodes.
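A minimal sketch of that (member index 0 chosen arbitrarily), using the usual reconfig pattern in the mongo shell:

    cfg = rs.conf()
    cfg.members[0].priority = 2   // make this member the preferred primary
    rs.reconfig(cfg)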
You can use the following instructions available at MongoDB's website:
http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary
If you have an even number of nodes, one answer is to remove one. Another is to add an arbiter, which doesn't hold a copy of the data but participates in the replica set purely to vote and break ties. That way you get an odd number of votes and guaranteed elections, while keeping the availability/capacity of four data-bearing nodes.
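Roughly, with placeholder hostnames, the two options look like this in the mongo shell:

    rs.remove("node4.example.net:27017")      // option 1: drop back to an odd number of voting members
    rs.addArb("arbiter.example.net:27017")    // option 2: keep all four data nodes and add a vote-only tie-breaker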