Kafka on multiple EC2 instances

I am new to Kafka and trying to do a project. I wanted to do it the way it would be done in a real-life setup, but I am a bit confused. While searching through the internet I found that if I want to have 3 brokers and 3 ZooKeepers, to provide replication factor = 2 and a quorum, I need 6 EC2 instances. I have been looking through YouTube for examples, but as far as I can see all of them run multiple brokers on a single machine. From my understanding it's better to keep the ZooKeepers and all brokers separately, one per VM, so if one goes down I still have all of the rest. Can you confirm that?
Also, I am wondering how to set partitioning. Does it need to be decided when the topic is created, or can I change it later when I need to scale?
Thanks in advance.
I have been looking for information on YouTube and Google.

My suggestion would be to use MSK Serverless and forget how many machines exist.
Kafka 3.3.1 doesn't need ZooKeeper anymore (KRaft mode). ZooKeeper doesn't need to run on separate machines either (although that is recommended). You can also run multiple brokers on one server... So I'm not entirely sure why you would need 6 instances for a replication factor of 2.
Regarding partitions, yes, create them ahead of time (over-provision, if necessary), since you cannot easily move data across partitions when you do scale.
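To illustrate the "create them ahead of time" point, here is a minimal sketch using the kafka-python admin client; the broker address, topic name and partition count are placeholder assumptions, not values from the question:

# Minimal sketch with kafka-python; "localhost:9092" and "events" are made-up placeholders.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Over-provision partitions up front: you can add partitions later,
# but existing data is never rebalanced across the new ones.
topic = NewTopic(name="events", num_partitions=12, replication_factor=2)
admin.create_topics(new_topics=[topic])
admin.close()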

Related

Apache Kafka KRaft mode topology best practices

My question concerns the recommended topology of Kafka brokers and controllers in KRaft mode.
Now, according to best practices with zookeeper we are supposed to create:
{3,5,7} Zookeeper nodes
{3,5,7} Kafka broker nodes
This is a well-known structure that is recommended in every book and online course. But one of the drawbacks of this model is that we need at least 6 machines/nodes, which is a lot.
Now, I'm afraid that in KRaft mode things might be different. The alternatives I see are the following:
Three nodes where each node runs both a controller and a broker. I'm not sure it's a good option for production, because once a single node is down (controller + broker), our system becomes fragile and we cannot afford losing another node. Plus, I think it can complicate updating a node in production if another one crashes.
Six nodes: three separate controllers and three separate brokers - This is a good solution; it handles some of the issues mentioned in (1) better, but I think we can find something better.
Five nodes where each node is both a controller and a broker - I know that five nodes are usually reserved for heavy-load systems, but I think this is much better than model (2). Why should we use six machines when we can use five and have a much more reliable and available system? In other words, we can use a better and cheaper solution.
Hybrid - some standalone controllers and brokers, and some mixed controller-and-broker nodes - I'm not sure whether this model has any benefits.
The only thing that worries me about model (3) is that I've not seen it anywhere else, so I'm not completely sure about it. Looking for your opinion and advice.
I might be a little late here, but your proposed third option is not really cheaper than the second if all five machines run as brokers. Brokers have much higher hardware requirements than controllers, so I would still go for the conservative second option.
Controller nodes need as little as maybe 8 GB of RAM and some storage/CPU, so you only need three machines (the brokers) that do the heavy lifting with lots of RAM and storage (and some CPU).

Can a Kafka-Cluster be cut in half?

Scenario: you have a Kafka cluster spread across different DCs, but they are configured as one cluster. So there is no mirroring through MirrorMaker or anything like that. The DCs are not very far from each other, but they are physically separated.
Now what do you have to do to ensure that the cluster is failsafe on BOTH SIDES if the connection between those two DCs goes down? So on BOTH sides the producers and consumers should still work.
I would guess: you need multiple Zookeepers on both sides and multiple Kafka-Nodes.
But is that enough? Does the cluster heal itself after getting reconnected?
Thanks in advance.
I'm assuming your datacenters that "are not very far from each other" are basically Availability Zones (AZs).
It's pretty common to spread a cluster over multiple AZs. However, it's usually not desired, or even possible, that each "slice" can live on its own.
The immediate issue is ZooKeeper, which by design prevents split-brain scenarios. So if the ZK ensemble is split, only one "slice" (at best) will carry on working, and the brokers on the side without a working ZK quorum will be non-functional.
Then let's say it was possible to have both sides keep working. What happens when you join both sides again?
Data is likely to have diverged as clients wrote data to each side separately. Now you could have the same partition with different messages for the same offset and no way to resolve the conflict as both options are "valid".
I hope this shows why this is not a workable solution. In practice, if an AZ goes offline, it is non-functional until it is brought back online.
Clients that were connected to the offline AZ should reconnect to the other AZ (using multiple bootstrap servers), and clients that were running in the failed AZ should be reprovisioned into another one.
If configured correctly, Kafka can survive an AZ outage (even though in practice, it's best to have 3 AZs) and keep all resources available. Also in this scenario, the cluster will automatically return to a good state when the failed AZ returns.
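If it helps, here is a hedged sketch (using the kafka-python client; the broker hostnames are made up) of pointing a client at brokers in more than one AZ so it can fail over when one AZ goes offline:

# Sketch only: give the client brokers from several AZs; hostnames are assumptions.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=[
        "broker-az1.example.com:9092",
        "broker-az2.example.com:9092",
        "broker-az3.example.com:9092",
    ],
    acks="all",  # wait for all in-sync replicas so an AZ loss does not drop acked data
)
producer.send("events", b"payload")
producer.flush()

Consumers take the same bootstrap_servers list, so they too reconnect to a surviving AZ.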

Why does ZooKeeper need a majority to run?

I've been wondering why ZooKeeper needs a majority of the machines in the ensemble to work at all. Let's say we have a very simple ensemble of 3 machines - A, B, C.
When A fails, a new leader is elected - fine, everything works. When another one dies, let's say B, the service becomes unavailable. Does that make sense? Why can't machine C handle everything alone until A and B are up again?
After all, one machine is enough to do all the work (for example, a single-machine ensemble works fine)...
Is there any particular reason why ZooKeeper is designed in this way? Is there a way to configure ZooKeeper so that, for example, the ensemble is available as long as at least one of the N nodes is up?
Edit:
Maybe there is a way to apply a custom leader-election algorithm? Or to define the quorum size?
Thanks in advance.
ZooKeeper is intended to distribute things reliably. If the network becomes segmented, you don't want the two halves operating independently and potentially getting out of sync, because when the failure is resolved neither half will know what to do. If you have it refuse to operate when it has less than a majority, then you can be assured that when a failure is resolved, everything will come right back up without further intervention.
The reason to require a majority vote is to avoid a problem called "split-brain".
Basically, in a network failure you don't want the two parts of the system to continue as usual. You want one to continue and the other to understand that it is no longer part of the cluster.
There are two main ways to achieve that. One is to hold a shared resource, for instance a shared disk where the leader holds a lock: if you can see the lock you are part of the cluster, and if you can't, you're out. If you are holding the lock you're the leader, and if you aren't, you're not. The problem with this approach is that you need that shared resource.
The other way to prevent a split-brain is majority count: if you get enough votes you are the leader. This still works with two nodes out of a three-node ensemble, where the leader says it is the leader and the other node, acting as a "witness", agrees. This method is preferable as it works in a shared-nothing architecture, and indeed that is what ZooKeeper uses.
As Michael mentioned, a node cannot know whether the reason it doesn't see the other nodes in the cluster is that those nodes are down or that there's a network problem - the safe bet is to say there's no quorum.
Let’s look at an example that shows how things can go wrong if the quorum (majority of running servers) is too small.
Say we have five servers and a quorum can be any set of two servers. Now say that servers s1 and s2 acknowledge that they have replicated a request to create a znode /z. The service returns to the client saying that the znode has been created. Now suppose servers s1 and s2 are partitioned away from the other servers and from clients for an arbitrarily long time, before they have a chance to replicate the new znode to the other servers. The service in this state is able to make progress because there are three servers available and it really needs only two according to our assumptions, but these three servers have never seen the new znode /z. Consequently, the request to create /z is not durable.
This is an example of the split-brain scenario. To avoid this problem, in this example the size of the quorum must be at least three, which is a majority out of the five servers in the ensemble. To make progress, the ensemble needs at least three servers available. To confirm that a request to update the state has completed successfully, this ensemble also requires that at least three servers acknowledge that they have replicated it.
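A tiny illustration of the arithmetic behind this (not ZooKeeper code, just the rule): for an ensemble of n servers the majority quorum is floor(n/2) + 1, and the ensemble tolerates n minus that many failures.

# Majority quorum sizes for typical ensemble sizes.
for n in (1, 3, 5, 7):
    quorum = n // 2 + 1
    print(f"ensemble={n}  quorum={quorum}  tolerated failures={n - quorum}")
# With 5 servers the quorum must be 3; if it were only 2, two disjoint pairs
# could each make progress independently - exactly the /z example above.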

Maximum servers in a ZooKeeper ensemble cluster?

Use case: 100 servers in a pool; I want to start a ZooKeeper service on each server, and the server applications (ZooKeeper clients) will use the ZooKeeper cluster for reads/writes. Then there is no single point of failure.
Is this solution possible for this use case? What about the performance?
What if there are 1000 Servers in the pool?
If you are simply trying to avoid a single point of failure, then you only need 3 servers. In a 3-node ensemble, a single failure can be tolerated, with the remaining 2 nodes forming the quorum. The more servers you have, the worse write performance will be, and 100 servers is the extreme of this, if ZK can even handle it.
However, having that many clients is no problem at all. ZooKeeper has active deployments with many more than 1000 clients. If you find that you need more servers to handle your read load, you can always add Observers. I highly recommend you join the mailing list. It is an excellent way to quickly have your questions answered, and likely in much more detail than anyone will give you on SO.
Maybe ZooKeeper is not the right tool?
Hazelcast does what you want, I think. You can have hundreds of peers, and if the master is lost a new one is elected from among the peers.
You don't need to use all of Hazelcast. You can just use the maps, or just the worker pools, or just the synchronisation primitives, or just the messaging, etc.
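As a hedged sketch of the "just use the maps" idea, here is what a distributed map looks like from the hazelcast-python-client package (my assumption; the answer only mentions Hazelcast in general, and the cluster address is a placeholder):

# Sketch only: connect to a Hazelcast cluster and use one distributed map.
import hazelcast

client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])
counts = client.get_map("counts").blocking()  # replicated map; survives a lost master
counts.put("requests", 1)
print(counts.get("requests"))
client.shutdown()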

Zookeeper Newbie - Which nodes do I read from and write to? Should I load balance?

I am a ZooKeeper newbie. I have three nodes in three separate data centers. I need to read and write data through the Python pykeeper API. So...
1) Which node do I read from and write to? Does it matter? Round robin? Write to the master, read from the slaves?
2) How do I know which server was elected as master? Do I care? That I have yet to figure out.
3) For now I am using the following to connect to zookeeper.
import zc.zk
from random import choice
zk_servers = ['111.111.111.111:2181','111.111.111.222:2181','111.111.111.333:2181']
zk = zc.zk.ZooKeeper(choice(zk_servers))
This raises the question: what if a ZK node fails? Should I place the nodes behind HAProxy to load-balance the requests?
Any advice on best practices for reading and writing to ZK nodes is much appreciated.
Thanks
The general model is that you supply your clients with the list of server nodes and they connect to the cluster as a whole. ZooKeeper shuffles the list of server addresses and then connects to one. You don't pick particular servers for individual tasks... part of the point of ZooKeeper is that it scales horizontally by adding more nodes, each of which responds to reads and writes based on what data is being requested and where the cluster has put it.
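As a hedged sketch of that model (using kazoo, a common Python ZooKeeper client, rather than zc.zk; the host addresses are the same placeholders as in the question), you hand the whole ensemble to the client and let it handle server selection and failover, so no HAProxy is needed:

# Sketch only: kazoo picks a server from the list and reconnects to another if it fails.
from kazoo.client import KazooClient

zk = KazooClient(hosts="111.111.111.111:2181,111.111.111.222:2181,111.111.111.333:2181")
zk.start()
zk.ensure_path("/app/config")
zk.set("/app/config", b"value")      # writes are forwarded to the leader for you
data, stat = zk.get("/app/config")   # reads can be served by the connected server
zk.stop()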