I have one RabbitMQ cluster consisting of 3 nodes (each node runs RabbitMQ 3.3.5 on Erlang R14B0). Let's call this Group1. Each node has a number of queues.
I have another cluster of 3 nodes running RabbitMQ 3.6.6 and Erlang R16B03-1. Let's call this Group2.
I need to replicate the queue structure of Group1 (node 1, node 2, node 3) exactly on Group2 (node 1, node 2, node 3).
Is there an easy way to do this without having to add each queue explicitly? Please suggest some measures.
Thank you.
Regards,
Shweta
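One approach that may fit here (an assumption on my part, not something stated in the thread) is the management plugin's definitions export/import: GET /api/definitions from a Group1 node and POST the resulting JSON to a Group2 node. This recreates queues, exchanges, bindings, vhosts and policies without declaring each queue by hand, but it does not copy messages. A minimal Java sketch with placeholder hostnames and credentials:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CopyDefinitions {
    // Placeholder hostnames and credentials; adjust to your environment.
    static final String GROUP1 = "http://group1-node1:15672/api/definitions";
    static final String GROUP2 = "http://group2-node1:15672/api/definitions";
    static final String AUTH = "Basic "
            + Base64.getEncoder().encodeToString("guest:guest".getBytes());

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Export the broker definitions (queues, exchanges, bindings, vhosts,
        // policies, ...) from a Group1 node via the management plugin's HTTP API.
        HttpResponse<String> export = http.send(
                HttpRequest.newBuilder(URI.create(GROUP1))
                        .header("Authorization", AUTH)
                        .GET()
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Import the same definitions into a Group2 node. This declares the queue
        // structure on the target cluster; it does not move any messages.
        http.send(
                HttpRequest.newBuilder(URI.create(GROUP2))
                        .header("Authorization", AUTH)
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(export.body()))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
    }
}
```

The same export/import can be done from the management UI or with rabbitmqadmin; given the version gap between 3.3.5 and 3.6.6, it is worth spot-checking the imported definitions afterwards.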
I've been experimenting with Vert.x high availability features to test horizontal scalability and resiliency. I have a cluster of several nodes based on Hazelcast. I'm creating verticles on any nodes via an HTTP API. Verticles have the HA flag set when they are created.
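For reference, a minimal sketch of how such an HA deployment typically looks in Vert.x 3.x, assuming Hazelcast clustering is on the classpath (the verticle class name is a placeholder):

```java
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class HaDeployExample {
    public static void main(String[] args) {
        // Clustered Vert.x instance with HA enabled; Hazelcast is used as the
        // cluster manager when vertx-hazelcast is on the classpath.
        VertxOptions options = new VertxOptions().setHAEnabled(true);

        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // Deploy a verticle with the HA flag so it is redeployed on another
                // node if this node fails ("com.example.MyVerticle" is hypothetical).
                vertx.deployVerticle("com.example.MyVerticle",
                        new DeploymentOptions().setHa(true),
                        deployed -> System.out.println("Deployment id: " + deployed.result()));
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
```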
Testing scalability
If I have n nodes loaded with HA verticles and I add one additional node, no verticle is migrated from the existing nodes onto the new one so that the load would be balanced. Is there a way to tell Vert.x to do so, or not? I suspect it's not that simple...
Testing resilience
If I have n nodes loaded with HA verticles and I kill one of the nodes, all the verticles from that node are migrated, but they all land on a single one of the remaining nodes, which is not always the least loaded. That destination node may become overloaded, and the whole cluster would be at risk of freezing or crashing. Same question as before: is there a way to force Vert.x to spread the restarted verticles over all nodes, or at least onto the least loaded node?
Your observations are correct, there is no way:
to distribute verticles from a failed node over the rest of the nodes
to prevent starting verticles in a node that is already loaded
Improving the HA features is not on the Vert.x roadmap.
If, as it seems, you need more than basic failover, I would recommend using specialized infrastructure tools that can leverage information from monitoring systems and start/stop new nodes as needed.
I am learning Kubernetes by following the official documentation, and in the Creating Highly Available clusters with kubeadm part it is recommended to use 3 masters and 3 workers as the minimum required to set up an HA cluster.
This recommendation is given with no explanation of the reasons behind it. In other words, why is a configuration with 2 masters and 2 workers not OK for HA?
You want an uneven number of master-eligible nodes so you can form a proper quorum (two out of three, three out of five). The total number of nodes doesn't actually matter. Smaller installations often make the same nodes master-eligible and data-holding at the same time, so in that case you'd prefer an uneven number of nodes. Once you move to a setup with dedicated master-eligible nodes, you're freed from that restriction. You could also run 4 nodes with a quorum of 3, but that will make the cluster unavailable if any two nodes die. The worst setup is 2 nodes, since you can only safely run with a quorum of 2, so if a node dies you're unavailable.
(This was an answer from here which I think is a good explanation)
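To make the quorum arithmetic concrete, here is a small illustrative calculation (majority = n/2 + 1 with integer division), showing why 3 and 4 nodes tolerate the same number of failures:

```java
public class QuorumMath {
    public static void main(String[] args) {
        for (int nodes = 2; nodes <= 6; nodes++) {
            int quorum = nodes / 2 + 1;      // smallest majority of the cluster
            int tolerated = nodes - quorum;  // failures the cluster can survive
            System.out.printf("%d nodes -> quorum %d, tolerates %d failure(s)%n",
                    nodes, quorum, tolerated);
        }
    }
}
```

Output: 2 nodes tolerate 0 failures, 3 tolerate 1, 4 still tolerate only 1, and 5 tolerate 2, which is why odd cluster sizes are preferred.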
This is exactly why:
https://etcd.io/docs/v3.3/faq/#why-an-odd-number-of-cluster-members
Then check the concept of quorum; you can find plenty of information, especially in the Pacemaker/Corosync documentation.
I want to build a 3-node symmetric cluster (to avoid split brain) with high availability using replication. In addition, I would like to be able to load balance messages between the nodes.
How should this be achieved?
Option 1: 1 master with 2 slaves
Option 2: 3 colocated master/slave nodes
Option 1 isn't really an option, as the slaves will not participate in the voting process, which means split brain will not be mitigated. The only option you have left (of the 2 you listed, of course) is to use 3 colocated master/slaves.
According to https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_zkMulitServerSetup
Cross Machine Requirements
For the ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines. Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority. For this reason, ZooKeeper deployments are usually made up of an odd number of machines.

To achieve the highest probability of tolerating a failure you should try to make machine failures independent. For example, if most of the machines share the same switch, failure of that switch could cause a correlated failure and bring down the service. The same holds true of shared power circuits, cooling systems, etc.
My question is:
What should we do after we identify a node failure within the ZooKeeper cluster to make the cluster 2F+1 again? Do we need to restart all the ZooKeeper nodes? Also, the clients connect to the ZooKeeper cluster; suppose we use DNS names and the recovered node uses the same DNS name.
For example:
10.51.22.89 zookeeper1
10.51.22.126 zookeeper2
10.51.23.216 zookeeper3
If 10.51.22.89 dies and we bring up 10.51.22.90 as zookeeper1, will all the nodes recognize this change?
If you connect 10.51.22.90 as zookeeper1 (with the same myid file and configuration that 10.51.22.89 had before) and its data dir is empty, the process will connect to the current leader (zookeeper2 or zookeeper3) and copy a snapshot of the data. After successful initialization the node will inform the rest of the cluster nodes and you have 2F+1 again.
Try this yourself while running tail -f on the log files. It won't hurt the cluster and you will learn a lot about ZooKeeper internals ;-)
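In addition to tailing the logs, one quick way to watch the replacement node join is to ask each server for its mode using ZooKeeper's four-letter srvr command. A minimal sketch, reusing the hostnames from the example above and assuming the default client port 2181 (on newer ZooKeeper versions srvr may need to be allowed via 4lw.commands.whitelist):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class ZkEnsembleCheck {
    // Hostnames from the /etc/hosts example above; 2181 is the default client port.
    static final String[] SERVERS = {"zookeeper1", "zookeeper2", "zookeeper3"};

    public static void main(String[] args) {
        for (String host : SERVERS) {
            try (Socket s = new Socket(host, 2181)) {
                // "srvr" is a ZooKeeper four-letter admin command; its reply includes
                // a "Mode:" line saying whether this server is a leader or a follower.
                s.getOutputStream().write("srvr".getBytes());
                s.getOutputStream().flush();
                BufferedReader in =
                        new BufferedReader(new InputStreamReader(s.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("Mode:")) {
                        System.out.println(host + " -> " + line);
                    }
                }
            } catch (IOException e) {
                System.out.println(host + " -> unreachable (" + e.getMessage() + ")");
            }
        }
    }
}
```

Once the replacement zookeeper1 has finished syncing from the leader, it should report itself as a follower.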
I wonder about the best strategy with regard to ZooKeeper and SolrCloud clusters. Should one ZooKeeper cluster be dedicated per SolrCloud cluster, or can multiple SolrCloud clusters share one ZooKeeper cluster? I guess the former must be a very safe approach, but I am wondering if the second option is fine as well.
As far as I know, SolrCloud uses ZooKeeper to share cluster state (up/down nodes) and to load the cores' shared configuration (solrconfig.xml, schema.xml, etc.) on boot. If you have clients based on SolrJ's CloudSolrServer implementation, then they will mostly perform reads of the cluster state.
In this respect, I think it should be fine to share the same ZK ensemble. Many reads and few writes, this is exactly what ZK is designed for.
SolrCloud puts very little load on a ZooKeeper cluster, so if it's purely a performance consideration then there's no problem. It would probably be a waste of resources to have one ZK cluster per SolrCloud if they're all on a local network. Just make sure the ZooKeeper configurations live under separate ZooKeeper paths (chroots). For example, using -zkHost host:port/path1 for one SolrCloud and replacing "path1" with "path2" for the second one will put the Solr files in separate paths within ZooKeeper, ensuring they don't conflict.
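For illustration, a sketch of two SolrJ clients pointed at the same (hypothetical) three-node ensemble but at different chroot paths, using the CloudSolrServer class mentioned above; the host names, ports and collection name are placeholders:

```java
import org.apache.solr.client.solrj.impl.CloudSolrServer;

public class ChrootedClients {
    public static void main(String[] args) throws Exception {
        // One shared ZooKeeper ensemble; each SolrCloud keeps its files under its own chroot.
        String ensemble = "zk1:2181,zk2:2181,zk3:2181";

        CloudSolrServer cluster1 = new CloudSolrServer(ensemble + "/path1");
        CloudSolrServer cluster2 = new CloudSolrServer(ensemble + "/path2");

        cluster1.setDefaultCollection("collection1");
        cluster2.setDefaultCollection("collection1");

        // Each client reads cluster state only from its own chroot, so the two
        // SolrClouds never see each other's nodes or configuration.
        cluster1.connect();
        cluster2.connect();
    }
}
```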
Note that the ZK cluster should be well configured and robust, because if it goes down, none of the SolrClouds will be able to respond to changes in node availability or state (e.g. if a SolrCloud leader is lost or unreachable, or if a node enters the recovering state).