I am designing an IoT system built on single-board computers such as the Raspberry Pi.
In particular, I am designing an application messaging platform that supports pub/sub, ESB-style integration, and so on.
To keep it easy and simple, I am considering using RabbitMQ.
Furthermore, I want to build a RabbitMQ cluster across those nodes to avoid a single point of failure.
However, those devices will sometimes be turned off.
I think this means a node temporarily leaves the cluster.
I expect a RabbitMQ cluster tolerates this situation to a certain degree, but I cannot judge how much churn it can accept or what problems occur.
To the RabbitMQ cluster experts:
Could you tell me any concerns about this setup, and cases I should watch out for?
Do you think it would work in production?
Please tell me about any cases similar to my scenario.
I really look forward to your replies.
Even small details would be helpful.
TL;DR: RabbitMQ doesn't work well in this scenario. You're better off using something else.
RabbitMQ is intended to work with stable nodes. It uses the Raft algorithm for distributed consensus to elect a leader (see http://thesecretlivesofdata.com/raft). As that walkthrough shows, electing a leader involves several steps. If the network is partitioned or the leader fails, a new leader must be elected, and if this happens frequently the entire cluster becomes unstable.
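For context, the Raft-backed piece in current RabbitMQ releases is the quorum queue. Below is a minimal sketch with the Java client (host and queue names are made up) of how one is declared; each quorum queue is a Raft group replicated across cluster nodes, which is exactly the machinery that struggles when members keep disappearing.

```java
import java.util.HashMap;
import java.util.Map;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class QuorumQueueExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("pi-node-1.local");                  // hypothetical cluster member

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // "x-queue-type=quorum" makes this a Raft-replicated queue: a majority of
            // its member nodes must be reachable for it to elect a leader and accept writes.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-queue-type", "quorum");
            channel.queueDeclare("telemetry", true, false, false, queueArgs);
        }
    }
}
```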
You may want to have a look at other technologies like https://deepstream.io.
Related
We are choosing the best option for implementing leader election to achieve high availability. Our goal is to have only a single instance active at any given time. We are using Spring Boot to develop the application, which is deployed on Tomcat by default. It would be great to hear your opinion on the following questions:
Does ZooKeeper provide better CP (consistency and partition tolerance) than Consul?
What is your view on maintenance and complexity?
ZooKeeper is based on ZAB and Consul is based on Raft. At a high level, both are very similar atomic broadcast algorithms. So, as far as the "Consistency" of CAP is concerned (which is actually linearizability, a very strong form of consistency), both provide similar guarantees. Both have linearizable writes to a quorum (majority). The other nodes (not in the quorum) may lag behind on updates by default, resulting in stale reads. It is done this way because full linearizability makes things slow, and many applications are fine with slightly stale reads. However, if that is not acceptable in a particular use case, it is always possible to issue a sync call before a read in ZooKeeper, or to use consistent mode in Consul, to achieve full linearizability.
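To make the ZooKeeper side concrete, here is a minimal sketch using Apache Curator (the connection string and znode path are placeholders, not anything from the question): issuing a sync before a read to avoid the default stale-read behaviour.

```java
import java.util.concurrent.CountDownLatch;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LinearizableRead {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",              // hypothetical ensemble
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // ZooKeeper's sync is asynchronous, so wait for its callback before reading.
        CountDownLatch synced = new CountDownLatch(1);
        client.sync().inBackground((c, event) -> synced.countDown()).forPath("/app/config");
        synced.await();

        // This read now reflects everything the quorum had committed when the sync
        // completed, instead of whatever a possibly-lagging follower happened to have.
        byte[] data = client.getData().forPath("/app/config");
        System.out.println(new String(data));

        client.close();
    }
}
```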
For service discovery, however, Consul seems to provide higher level constructs that are not out-of-the-box in ZooKeeper.
In terms of leader election use case, both can be used.
But given that ZooKeeper is used by many top-level Apache projects and is also older than Raft (and therefore Consul), I expect it to have better community support and documentation. Also, the Apache documentation describing various recipes is great.
Finally, if you go with ZooKeeper, you may also want to use Apache Curator which provides higher level APIs on top of ZooKeeper.
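For the single-active-instance goal, a minimal sketch of Curator's LeaderLatch recipe might look like this (the connection string, latch path, and instance id are placeholders):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class SingleActiveInstance {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",                // hypothetical ensemble
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Every instance competes for the same latch path; exactly one wins.
        LeaderLatch latch = new LeaderLatch(client, "/myapp/leader", "instance-1");
        latch.start();

        latch.await();   // blocks until this instance becomes the leader
        System.out.println("Elected leader, activating this instance");
        // ... run the single-active work here; if this JVM dies, its ZooKeeper
        // session expires and another instance's await() returns ...
    }
}
```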
Background
I inherited a Kafka/Zookeeper installation. I have passing knowledge of them: I know the general architecture, how clients work, about topics, etc., and have been involved in programming Java clients.
But the installation is somewhat dubious. There are three instances each of Kafka and Zookeeper (in their own Docker containers). Supposedly they should work, but what I am seeing is that all processes spout an immense amount of log output with loads and loads of (diverse) warnings and errors. I have the impression that some of these are quite normal (or are being self-healed all the time), and am having a very hard time figuring out whether everything works as intended and is set up correctly.
Some of these are - according to Google - related to unclean shutdowns of the brokers; corrupted individual topics and such. As this is a test environment, I can easily delete such files.
I know about some commands which help me check topics etc. (basic stuff, like listing them, displaying their individual configuration etc.).
However...
Question
Is there an online resource or documentation which can be used as a systematic walkthrough to check whether everything is basically set up OK; for example to clear up these questions:
Do the three Zookeepers and the three Kafka instances correctly talk to each other for high-availability purposes? Do they have a correct "leader" etc.?
Are the servers generally "healthy", i.e., easily able to accept connections etc.?
How are the topics working (what's in there, how many messages, etc.)?
I am aware that one may very quickly dismiss this question as too generic; I am not asking you to solve my problems. I am looking for a resource to systematically walk through such an installation - it may or may not cover the examples I have given, but it definitely should give a systematic way to find out if things are fundamentally wrong.
Rather than looking solely at logs, you might want to familiarize yourself with JMX metrics and how you can gather them across the cluster.
If you want to actually collect and analyze logs, you'll likely need to separately use something like Elasticsearch.
You won't see "how many messages" in a topic, and you'll need even more monitoring to know if a port is actually open and the Kafka process is running, the disks are filling up, etc.
My point here is that Kafka needs to be fed and watered: if you plan to productionize it, you can't just set up a small cluster and forget about it. Even if you think it's set up correctly at the beginning, increasing the load on it will eventually push it into a bad state.
For a full look at your cluster health in a dev environment, Confluent Control Center can assist with that (it is available as a limited trial).
To solve the "what's in there" problem, I suggest you set up a Schema Registry and convince Kafka producers to use it.
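To complement the monitoring suggestions above, here is a minimal sketch using Kafka's AdminClient (the bootstrap servers are placeholders) that programmatically answers the "do the brokers see each other and have a controller/leader" and "which topics exist" questions:

```java
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "kafka1:9092,kafka2:9092,kafka3:9092");     // hypothetical brokers

        try (AdminClient admin = AdminClient.create(props)) {
            // Broker membership and the current controller (the "leader" broker).
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Brokers:    " + cluster.nodes().get());
            System.out.println("Controller: " + cluster.controller().get());

            // Topic list; per-partition leaders can be inspected via describeTopics().
            admin.listTopics().names().get()
                 .forEach(topic -> System.out.println("Topic: " + topic));
        }
    }
}
```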
This Packt tutorial/training by Stéphane Maarek is a wonderful resource for setting up Kafka in cluster mode. However, he did it in the AWS cloud on Ubuntu VMs.
I have followed the same steps and installed it on Vagrant VMs running CentOS. You can find the code here.
The VM has Yahoo Kafka Manager to monitor Kafka's internal details: the list of available brokers, their health, partitions, leaders, etc.
Kafka Manager can help you with high-level monitoring.
Please provide your comments.
I am looking for advice for the following problem:
I am working with other people on a microservices architecture where the microservices are distributed on different machines. Resources on the machines are very limited.
Currently, communication runs through a message broker.
In my use case, one microservice occasionally needs to run some heavy computation. I would like to perform the computation on a machine with low CPU usage and enough available memory space.
My first idea is that every machine runs a microservice which publishes its CPU usage and available memory to the message broker. Each microservice that needs to distribute its workload looks for the fittest machines and installs "worker" microservices on the fly. Results are published to the message broker. Since resources are limited, worker microservices are uninstalled when they are no longer needed.
I haven't found a similar use case yet. Do you guys know a better existing solution?
I am quite new to the topic of microservices and distributed computing, so I would appreciate some advice and help.
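Not a full answer, but to make the "publish CPU and memory to the broker" idea concrete, here is a minimal sketch assuming the broker is RabbitMQ (the host, exchange name, and message format are all made up):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ResourceReporter {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("broker.local");                     // hypothetical broker host

        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        Runtime rt = Runtime.getRuntime();

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.exchangeDeclare("machine-metrics", "fanout", true);

            while (true) {
                // Coarse signals only: 1-minute system load average and free JVM heap.
                String payload = String.format(
                        "{\"host\":\"%s\",\"load\":%.2f,\"freeMemBytes\":%d}",
                        java.net.InetAddress.getLocalHost().getHostName(),
                        os.getSystemLoadAverage(),
                        rt.freeMemory());
                channel.basicPublish("machine-metrics", "", null,
                        payload.getBytes(StandardCharsets.UTF_8));
                Thread.sleep(10_000);                        // report every 10 seconds
            }
        }
    }
}
```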
I'm trying to wrap my head around Zookeeper and what it does. To this point, my experience with Zookeeper has been through other libraries that require Zookeeper (Solr and Kafka), so my basic understanding is the very vague "you better use Zookeeper to keep your configuration straight".
So help me think through a simple example problem. Let's say that I build my own service that does "stuff". There are two things that I want to protect:
I want to have as little downtime as possible (gotta keep doing stuff).
I can not have more than one server doing stuff because bad things would happen.
So, how would I set this up in Zookeeper? Is Zookeeper responsible for starting another stuff server if one goes down? Or do I subscribe to a Zookeeper "stuff doer status" callback? If I erroneously start up two stuff servers, how does Zookeeper help me keep bad things from happening?
Zookeeper is a distributed lock manager. These systems provide features like coordinator election (aka "master election" or "leader election") for a distributed system, as well as consistent, distributed access to small amounts of critical information which is frequently used for configuration (i.e., don't treat it like a database or a general file system).
Note that Zookeeper does not manage your service, but you can use Zookeeper to keep a hot standby (or several) such that in case of one master failing, another one will take over, so you would run N replicas of your servers, such that one of the working instances can take over immediately if the current leader goes down or becomes unavailable for any reason.
Using master election, you can choose to have two (or more) servers, but only one of them will be able to take the master lock, so only that one will be able to take action. As soon as it goes away, it will lose its claim to the lock, and your hot standby will pick up the lock and start doing work that you need it to do. Look at Zookeeper recipes for code samples. However, properly handing off work, checkpointing, and general service resilience is still up to you to design and implement.
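To make the master-lock idea concrete, here is a bare-bones sketch using the plain ZooKeeper API rather than the Curator recipes (the znode path, session timeout, and server id are made up), so the mechanism stays visible:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class StuffMaster implements Watcher {
    private final ZooKeeper zk;

    StuffMaster(String connectString) throws Exception {
        // Hypothetical ensemble address, e.g. "zk1:2181,zk2:2181,zk3:2181".
        zk = new ZooKeeper(connectString, 15000, this);
    }

    void tryToBecomeMaster() throws Exception {
        try {
            // An ephemeral znode vanishes when this process's session ends, so the
            // standby's watch fires and it can take over.
            zk.create("/stuff-master", "server-1".getBytes(),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            System.out.println("I am the stuff doer");
            // ... start doing stuff ...
        } catch (KeeperException.NodeExistsException e) {
            // Someone else holds the lock: watch it and stand by.
            if (zk.exists("/stuff-master", true) == null) {
                tryToBecomeMaster();   // the holder vanished in between; try again
            } else {
                System.out.println("Standing by");
            }
        }
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
            try {
                tryToBecomeMaster();   // the master went away; compete for the lock again
            } catch (Exception ignored) {
            }
        }
    }
}
```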
That said, Zookeeper and similar systems provide a solid foundation to enable you to build robust distributed systems.
Other systems similar to Zookeeper include (alphabetically):
Chubby
doozerd
etcd
Several of these have detailed comparisons written up on their respective websites to show how they differ from the others in the list.
We're developing a server system in Scala + Akka for a game that will serve clients on Android, iPhone, and Second Life. Parts of this server need to be highly available, running on multiple machines. If one of those servers dies (from, say, a hardware failure), the system needs to keep running. I think I want the clients to have a list of machines they will try to connect to, similar to how Cassandra works.
The multi-node examples I've seen so far with Akka seem to be centered on scalability rather than high availability (at least with regard to hardware), and they seem to always have a single point of failure. For example, there are load balancers, but if I need to reboot one of the machines hosting a load balancer, my system will suffer some downtime.
Are there any examples that show this type of hardware fault tolerance for Akka? Or, do you have any thoughts on good ways to make this happen?
So far, the best answer I've been able to come up with is to study the Erlang OTP docs, meditate on them, and try to figure out how to put my system together using the building blocks available in Akka.
But if there are resources, examples, or ideas on how to share state between multiple machines in a way that if one of them goes down things keep running, I'd sure appreciate them, because I'm concerned I might be re-inventing the wheel here. Maybe there is a multi-node STM container that automatically keeps the shared state in sync across multiple nodes? Or maybe this is so easy to make that the documentation doesn't bother showing examples of how to do it, or perhaps I haven't been thorough enough in my research and experimentation yet. Any thoughts or ideas will be appreciated.
HA and load management are a very important aspect of scalability and are available as part of the AkkaSource commercial offering.
If you're listing multiple potential hosts in your clients already, then those can effectively become load balancers.
You could offer a host-suggestion service that recommends to the client which machine it should connect to (based on current load, or whatever); the client can then pin to that host until the connection fails.
If the host-suggestion service is not available, the client can simply pick a random host from its internal list, trying them until one connects.
Ideally on first time start up, the client will connect to the host suggestion service and not only get directed to an appropriate host, but a list of other potential hosts as well. This list can routinely be updated every time the client connects.
If the host-suggestion service is down on the client's first attempt (unlikely, but...), you can pre-deploy a list of hosts with the client install so it can start randomly selecting hosts from the very beginning if it has to.
Make sure that your list of hosts contains actual host names, not IPs; that gives you more flexibility long term (i.e., you'll "always have" host1.example.com, host2.example.com, etc. even if you move infrastructure and change IPs).
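A minimal sketch of that client-side fallback (host names, port, and timeout are made up): shuffle the pre-deployed list and try each host until one accepts a connection.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class HostPicker {

    static Socket connectToAnyHost(List<String> hosts, int port, int timeoutMs) {
        List<String> shuffled = new ArrayList<>(hosts);
        Collections.shuffle(shuffled);                     // random pick, as described above
        for (String host : shuffled) {
            try {
                Socket socket = new Socket();
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                return socket;                             // first host that answers wins
            } catch (IOException e) {
                // Host is down or unreachable; fall through to the next candidate.
            }
        }
        throw new IllegalStateException("No reachable host in " + hosts);
    }

    public static void main(String[] args) throws Exception {
        // Pre-deployed host names (not IPs), as recommended above.
        List<String> hosts = Arrays.asList("host1.example.com", "host2.example.com");
        try (Socket socket = connectToAnyHost(hosts, 2552, 3000)) {
            System.out.println("Connected to " + socket.getInetAddress().getHostName());
        }
    }
}
```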
You could take a look at how RedDwarf and its fork DimDwarf are built. They are both horizontally scalable, crash-only game app servers, and DimDwarf is partly written in Scala (new messaging functionality). Their approach and architecture should match your needs quite well :)
2 cents..
"how to share state between multiple machines in a way that if one of them goes down things keep running"
Don't share state between machines; instead, partition state across machines. I don't know your domain, so I don't know if this will work. But essentially, if you assign certain aggregates (in DDD terms) to certain nodes, you can keep those aggregates in memory (actor, agent, etc.) while they are being used. In order to do this you will need something like ZooKeeper to coordinate which nodes handle which aggregates. In the event of a failure, you can bring the aggregate up on a different node.
Furthermore, if you use an event-sourcing model to build your aggregates, it becomes almost trivial to keep real-time copies (slaves) of your aggregates on other nodes, by having those nodes listen for events and maintain their own copies.
By using Akka, we get remoting between nodes almost for free. This means that whichever node handles a request that needs to interact with an Aggregate/Entity on another node can do so with RemoteActors.
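A minimal sketch of that remoting hop, using the Akka classic Java API (the system name, host, port, actor path, and message are all made up, and the akka:// address form assumes remoting is configured in application.conf):

```java
import akka.actor.ActorRef;
import akka.actor.ActorSelection;
import akka.actor.ActorSystem;

public class RemoteAggregateClient {
    public static void main(String[] args) {
        // Assumes remoting is enabled with a canonical host/port in application.conf.
        ActorSystem system = ActorSystem.create("GameSystem");

        // The owning node for this aggregate would come from the ZooKeeper mapping
        // described above; here it is hard-coded for illustration.
        ActorSelection aggregate = system.actorSelection(
                "akka://GameSystem@host2.example.com:2552/user/aggregate-42");

        // Fire-and-forget a command; it is processed on the node that holds the
        // aggregate's in-memory state.
        aggregate.tell("ScorePoints(10)", ActorRef.noSender());
    }
}
```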
What I have outlined here is very general but gives an approach to distributed fault-tolerance with Akka and ZooKeeper. It may or may not help. I hope it does.
All the best,
Andy