I have an application where I cannot know the seed nodes ahead of time to put into the application configuration. Therefore, the application starts on one node and when it's started on the other nodes, they use Cluster.join to join the cluster on the first node. The problem is that the join never completes and the cluster never starts. What is the problem?
The problem is that there is no cluster yet to join. Simply instantiating a cluster object on the first node does not initiate the cluster. There is a small note in the documentation that may be easily missed:
Joining can also be performed programmatically with Cluster(system).join. Note that you can only join to an existing cluster member, which means that for bootstrapping some node must join itself.
So, the first node should join itself to initiate the cluster. This causes the creation of a "leader" that is responsible for adding and removing nodes from the cluster.
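A minimal sketch of this (classic Cluster Java API; it assumes the usual cluster provider is configured in application.conf, and the --first flag and address argument are just placeholders for however the application decides which node starts first and discovers the others):

import akka.actor.ActorSystem;
import akka.actor.Address;
import akka.actor.AddressFromURIString;
import akka.cluster.Cluster;

public class Bootstrap {
  public static void main(String[] args) {
    // Assumes akka.actor.provider = cluster in the configuration.
    ActorSystem system = ActorSystem.create("my-cluster");
    Cluster cluster = Cluster.get(system);

    if (args.length > 0 && args[0].equals("--first")) {
      // Bootstrapping: the first node joins itself, which forms the cluster
      // and allows a leader to be determined.
      cluster.join(cluster.selfAddress());
    } else {
      // Later nodes join any existing member, e.g. the first node's address
      // discovered at runtime.
      Address seed = AddressFromURIString.parse(args[0]);
      cluster.join(seed);
    }
  }
}

Note that the actor system name ("my-cluster" here) must be the same on every node, and the address passed to join must use that name.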
How to stop the ENTIRE cluster with sharding (spanning multiple machines - nodes) from one actor?
I know I can stop the actor system on 'this' node with context.system.terminate().
I know I can stop the local Sharding Region.
I found .prepareForFullClusterShutdown() but it doesn't actually stop the nodes.
I suppose there is no single command for this, but there must be some way to do it.
There's no out-of-the-box way to do this that I'm aware of: the overall expectation is that there's an external control plane (e.g. Kubernetes) which manages this.
However, one could have an actor on every node of the cluster that listens for membership events and also subscribes to a pub-sub topic. This actor tracks the current cluster membership and, when told to begin a cluster shutdown, publishes a ShutdownCluster message (say) to the topic and tracks which nodes leave. Since distributed pub-sub is at-most-once, if after some length of time there are still nodes besides this one that haven't left, it publishes the message again. Eventually, once all other nodes in the cluster have left, this actor shuts down its own node. When other nodes see a ShutdownCluster message, they immediately shut themselves down.
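A rough sketch of that coordinator idea (Akka classic Java API with Distributed PubSub; the ShutdownCluster message, the "cluster-shutdown" topic name, and the retry interval are all made up for illustration, not part of any Akka API):

import akka.actor.AbstractActorWithTimers;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent;
import akka.cluster.Member;
import akka.cluster.pubsub.DistributedPubSub;
import akka.cluster.pubsub.DistributedPubSubMediator;
import java.io.Serializable;
import java.time.Duration;

public class ClusterShutdownCoordinator extends AbstractActorWithTimers {
  // Local command telling this node to initiate the cluster-wide shutdown.
  public static final class BeginClusterShutdown implements Serializable {}
  // Message published on the topic; other nodes leave as soon as they see it.
  public static final class ShutdownCluster implements Serializable {}
  // Internal timer message for re-publishing.
  private static final class Retry {}

  private final Cluster cluster = Cluster.get(getContext().getSystem());
  private final ActorRef mediator = DistributedPubSub.get(getContext().getSystem()).mediator();
  private boolean initiator = false;

  public static Props props() {
    return Props.create(ClusterShutdownCoordinator.class, ClusterShutdownCoordinator::new);
  }

  @Override
  public void preStart() {
    // Track membership so the initiator knows when everyone else has left.
    cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(), ClusterEvent.MemberRemoved.class);
    mediator.tell(new DistributedPubSubMediator.Subscribe("cluster-shutdown", getSelf()), getSelf());
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(BeginClusterShutdown.class, cmd -> {
          initiator = true;
          publishAndScheduleRetry();
        })
        .match(Retry.class, r -> {
          // Pub-sub is at-most-once, so keep re-publishing until everyone has left.
          if (initiator && otherNodesRemain()) {
            publishAndScheduleRetry();
          }
        })
        .match(ShutdownCluster.class, msg -> {
          // Non-initiating nodes shut themselves down immediately; leaving the
          // cluster runs CoordinatedShutdown, which terminates the system by default.
          // The initiator ignores the echo of its own publish.
          if (!initiator) {
            cluster.leave(cluster.selfAddress());
          }
        })
        .match(ClusterEvent.MemberRemoved.class, removed -> {
          if (initiator && !otherNodesRemain()) {
            // All other nodes are gone; finally take this node down too.
            cluster.leave(cluster.selfAddress());
          }
        })
        .build();
  }

  private void publishAndScheduleRetry() {
    mediator.tell(new DistributedPubSubMediator.Publish("cluster-shutdown", new ShutdownCluster()), getSelf());
    getTimers().startSingleTimer("retry", new Retry(), Duration.ofSeconds(30));
  }

  private boolean otherNodesRemain() {
    for (Member m : cluster.state().getMembers()) {
      if (!m.address().equals(cluster.selfAddress())) {
        return true;
      }
    }
    return false;
  }
}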
Of course, this sort of scheme will probably not play nicely with any form of external orchestration (whether it's a container scheduler like Kubernetes, Mesos, or Nomad, or even something simple like Monit which notices that the service isn't running and restarts it).
I've been experimenting with Vert.x high availability features to test horizontal scalability and resiliency. I have a cluster of several nodes based on Hazelcast. I'm creating verticles on any node via an HTTP API. Verticles have the HA flag set when they are created.
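For reference, this is roughly how I create them (verticle class name simplified, Vert.x 3.x-style callback API):

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class HaDeployer {
  public static void main(String[] args) {
    // Clustered Vert.x with the Hazelcast cluster manager on the classpath.
    Vertx.clusteredVertx(new VertxOptions().setHAEnabled(true), res -> {
      if (res.succeeded()) {
        // The HA flag makes the verticle eligible for failover to another
        // node if this node dies.
        res.result().deployVerticle("com.example.MyVerticle",
            new DeploymentOptions().setHa(true));
      } else {
        res.cause().printStackTrace();
      }
    });
  }
}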
Testing scalability
If I have n nodes N1..Nn loaded with HA verticles and I add one additional node, no verticle is migrated from the existing nodes onto the new one so that the load would be balanced. Is there a way to tell Vert.x to do so, or not? I believe it's not so simple...
Testing resilience
If I have n nodes N1..Nn loaded with HA verticles and I kill one of the nodes, all the verticles from that node are migrated, but they all land on a single one of the remaining nodes, which is not always the least loaded. That destination node may become overloaded, putting the whole cluster at risk of freezing or crashing. Same question as before: is there a way to force Vert.x to balance the restarted verticles across all nodes, or at least onto the least loaded node?
Your observations are correct; there is no way:
to distribute verticles from a failed node over the rest of the nodes
to prevent starting verticles in a node that is already loaded
Improving the HA features is not on the Vert.x roadmap.
If, as it seems, you need more than basic failover, I would recommend using specialized infrastructure tools that can leverage information from monitoring systems and start/stop nodes as needed.
I am trying to configure an Active/Passive cluster with two nodes (using OpenShift). The second, passive node should be a hot standby; in other words, it is up and running but doing nothing until the first node dies. Then the passive node becomes active and a new passive node is started.
I have read the High Availability documentation; however, it only seems to cover the theory. Furthermore, it seems like overkill (I am thinking there might be an easier way to meet my goal).
Where would I start?
What you are asking for goes against the usual practice for how Kubernetes/OpenShift is used. You wouldn't have hot standby nodes; you would always use all nodes in the cluster. You would then allow for enough spare capacity in your cluster that losing a node doesn't cause a problem, as the other nodes have enough capacity to run the applications. In this scenario the Kubernetes scheduler automatically restarts any applications that were on a failed node on the other nodes in the cluster, without you needing to perform any explicit failover steps.
So don't try to do anything special: set up your cluster with the two nodes and distribute applications across both. If you need to be able to run with only a single node, make sure it has enough capacity to run everything. If over time you add more applications and one node is no longer enough, add a third node, with all three used in the normal case. You can then again tolerate the failure of a single node.
I am creating nodes using the Apache Curator library on Windows with CreateMode.PERSISTENT. After a full cluster restart, I notice that sometimes the nodes are present and at other times they are not. Does ZooKeeper guarantee that nodes created this way are persisted across restarts?
If you create a node with CreateMode.PERSISTENT and the method successfully returns, then you are guaranteed that a quorum of ZooKeeper servers (n/2 + 1) has received and persisted the znode. Restarting the cluster will not affect this. If you're seeing the node disappear then maybe a) you are using an async API to create the node and are not getting a success response; b) you're somehow wiping the data directory when you restart your cluster; c) ??
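For comparison, a synchronous create along these lines should survive a full restart once it returns (the connection string and path here are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class PersistentNodeExample {
  public static void main(String[] args) throws Exception {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();
    client.blockUntilConnected();

    // When this returns without throwing, a quorum of servers has persisted
    // the znode, so a full cluster restart will not lose it.
    String created = client.create()
        .creatingParentsIfNeeded()
        .withMode(CreateMode.PERSISTENT)
        .forPath("/my-app/config-node", "hello".getBytes());

    System.out.println("Created " + created);
    client.close();
  }
}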
I wonder about the best strategy with regard to ZooKeeper and SolrCloud clusters. Should one ZooKeeper cluster be dedicated to each SolrCloud cluster, or can multiple SolrCloud clusters share one ZooKeeper cluster? I guess the former must be a very safe approach, but I am wondering if the second option is fine as well.
As far as I know, SolrCloud uses ZooKeeper to share cluster state (which nodes are up or down) and to load shared core configurations (solrconfig.xml, schema.xml, etc.) on boot. If you have clients based on SolrJ's CloudSolrServer implementation, then they will mostly perform reads of the cluster state.
In this respect, I think it should be fine to share the same ZK ensemble. Many reads and few writes is exactly what ZK is designed for.
SolrCloud puts very little load on a ZooKeeper cluster, so if it's purely a performance consideration then there's no problem. It would probably be a waste of resources to have one ZK cluster per SolrCloud if they're all on a local network. Just make sure each SolrCloud's configuration lives under a separate ZooKeeper path (chroot). For example, using -zkHost host:port/path1 for one SolrCloud and replacing "path1" with "path2" for the second one will keep the Solr files in separate paths within ZooKeeper so they don't conflict.
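As a sketch of what that looks like from the client side (newer SolrJ CloudSolrClient API; the hosts and chroot paths are placeholders):

import java.util.Arrays;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class SeparateChroots {
  public static void main(String[] args) throws Exception {
    // First SolrCloud, chrooted under /path1 in the shared ensemble.
    CloudSolrClient cloud1 = new CloudSolrClient.Builder(
        Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"), Optional.of("/path1")).build();

    // Second SolrCloud, same ZooKeeper ensemble, chrooted under /path2.
    CloudSolrClient cloud2 = new CloudSolrClient.Builder(
        Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"), Optional.of("/path2")).build();

    // ... issue queries/updates against each cluster independently ...

    cloud1.close();
    cloud2.close();
  }
}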
Note that the ZK cluster should be well configured and robust, because if it goes down then none of the SolrClouds will be able to respond to changes in node availability or state (e.g. if a SolrCloud leader is lost or unreachable, or if a node enters the recovering state).