Is a Kafka Connect worker a machine/server or just a cpu core? - apache-kafka

In the docs, Kafka Connect workers are described as processes, so in my understanding they are CPU cores.
But in the same docs they are meant to provide automatic fault tolerance (in their distributed mode), so in my understanding they are different machines, since fault tolerance at the process level is meaningless IMO.
Could somebody enlighten me, please?

A Kafka Connect worker is a JVM process.
You can run multiple Kafka Connect workers in distributed mode, configured as a cluster, and if one worker dies its work (tasks) is redistributed amongst the remaining workers.
Typically you would deploy one Kafka Connect worker per machine. Running multiple Kafka Connect workers in distributed mode on one machine is not something that would generally make sense IMO.
I have not tested it but I don't believe that a Kafka Connect worker is tied to one CPU.
For more explanation see here: https://youtu.be/oNK3lB8Z-ZA?t=1337 (slides: https://rmoff.dev/bbuzz19-kafka-connect)
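To make the process-vs-machine distinction concrete, here is a minimal sketch of a distributed worker config (`connect-distributed.properties`); the broker addresses, group id, and topic names are assumptions. Any worker started with the same `group.id` and the same internal topics joins the same Connect cluster, regardless of which machine (or how many machines) the workers run on:

```properties
# Hypothetical minimal distributed-worker config (connect-distributed.properties).
# Every worker process sharing these values forms one Connect cluster.
bootstrap.servers=kafka1:9092,kafka2:9092   # assumed broker addresses
group.id=connect-cluster-a                  # same value on every worker in the cluster

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Internal topics where the cluster stores its shared state
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
```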

Related

How does the distribution mechanism work when Kafka runs locally?

How does the distribution mechanism work when Kafka runs locally? Please mention the disadvantages too.
If you only run one broker locally, you have a single point of failure and no processing is truly distributed
If you have multiple brokers on the same machine, and you mount different volumes for each broker process's logs, you'd end up with distributed storage + fault tolerance, but still no distributed processing
In either case, you can create as many topics as you want with many partitions, but you can only set the replication factor of a topic to at most the number of active brokers
Multiple consumer processes are also able to run fine on a single machine, but you'd get more throughput by separating brokers and consumers across several physical machines (more cpu available, and different network interfaces)
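As a sketch of the "multiple brokers on one machine" case above, each broker process needs its own id, listener port, and log directory; the ports and paths here are assumptions:

```properties
# server-0.properties (hypothetical first broker on the machine)
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/mnt/disk0/kafka-logs

# server-1.properties (hypothetical second broker, different port and volume)
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/mnt/disk1/kafka-logs
```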

How do Kafka Connect workers manage resource limits (memory/cores) to distribute tasks?

In Kubernetes, you explicitly specify the resource limits for a container. In launching a Kafka connector, you request max tasks but how does the connect worker cluster know how to distribute the load? Does it consider the tasks as equal? Does it use internal metrics?
The Apache Kafka docs and the Confluent docs do not explicitly say, except that Confluent advises the following, which would indicate that Connect workers do not do resource management:
The resource limit depends heavily on the types of connectors being run by the workers, but in most cases users should be aware of CPU and memory bounds when running workers concurrently on a single machine.
https://docs.confluent.io/3.1.2/connect/userguide.html#connect-standalone-v-distributed
Also the cluster deployment appears to require an external resource manager to handle failover of workers.
Kafka Connect workers can be deployed in a number of ways, each with their own benefits. Workers lend themselves well to being run in containers in managed environments such as YARN, Mesos, or Docker Swarm as all state is stored in Kafka, making the local processes themselves stateless. We provide Docker images and documentation for getting started with those images is here. By design, Kafka Connect does not automatically handle restarting or scaling workers which means your existing clustering solutions can continue to be used transparently.
how does the connect worker cluster know how to distribute the load
Each connector can opt to partition its work into tasks (for example, ingesting multiple tables from one database could be done in parallel and so one table would be done by one task), up to the tasks.max limit configured.
Kafka Connect balances these tasks across the available workers such that they are evenly distributed (based on the number of tasks).
The rebalancing protocol changed in release 2.3 of Apache Kafka as part of KIP-415; there are details in the KIP itself and here. In a nutshell, with incremental cooperative rebalancing Kafka Connect spreads the tasks equally, starting from the least loaded workers and eventually including more workers as the load evens out.
Also the cluster deployment appears to require an external resource manager to handle failover of workers.
To be clear - the failover of tasks is done automatically by Kafka Connect, and as you say, the failover of workers would be managed externally.

Running a single kafka s3 sink connector in standalone vs distributed mode

I have a kafka topic "mytopic" with 10 partitions and want to use S3 sink connector to sink records to an S3 bucket. For scaling purposes it should be running on multiple nodes to write partitions data in parallel to the same S3 bucket.
In Kafka connect user guide and actually many other blogs/tutorials it's recommended to run workers in distributed mode instead of standalone to achieve better scalability and fault tolerance:
... distributed mode is more flexible in terms of scalability and offers the added advantage of a highly available service to minimize downtime.
I want to figure out which mode to choose for my use case: having one logical connector running on multiple nodes in parallel. My understanding is following:
If I run in distributed mode, I will end up having only 1 worker processing all the partitions, since it's considered one connector task.
Instead I should run in standalone mode in multiple nodes. In that case I will have a consumer group and achieve parallel processing of partitions.
In above described standalone scenario I will actually have fault tolerance: if one instance dies, the consumer group will rebalance and other standalone workers will handle the freed partitions.
Is my understanding correct or am I missing something?
Unfortunately I couldn't find much information on this topic other than this google groups discussion, where the author came to the same conclusion as I did.
In theory, that might work, but you'll end up SSH-ing to multiple machines, having basically the same config files, and just using the connect-standalone command instead of connect-distributed.
You're missing the part about Connect server task rebalancing, though, which communicates over the Connect servers' REST ports.
The underlying task code is all the same, only the entrypoint and offset storage are different. So, why not just use distributed if you have multiple machines?
You don't need to run multiple instances of standalone processes; in distributed mode the Kafka Connect workers take care of distributing the tasks, rebalancing, and offset management. You just need to specify the same group id ...
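For the 10-partition topic in the question, a distributed-mode submission might look like the following JSON, POSTed to any worker's REST endpoint. The connector name, bucket, and region are assumptions; the config keys are those of the Confluent S3 sink connector. With `tasks.max` set to 10, Connect can spread up to ten tasks (one per partition) across however many workers are in the cluster:

```json
{
  "name": "mytopic-s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "10",
    "topics": "mytopic",
    "s3.bucket.name": "my-example-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000"
  }
}
```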

Kafka cluster with single broker

I'm looking to start using Kafka for a system and I'm trying to cover all use cases.
Normally it would be run as a cluster of brokers on virtual servers (replication factor 3-5), but some customers don't care about resilience, and a broker failure requiring a manual reboot of the whole system is fine with them; they just care about hardware costs.
So my question is, are there any issues with using Kafka as a single broker system for small installations with low throughput?
Cheers
It's absolutely OK to use a single Kafka broker. Note, however, that with a single broker you won't have a highly available service, meaning that when the broker fails you will have downtime.
Your replication-factor will be limited to 1 and therefore all of the partitions of a topic will be stored on the same node.
For a proof-of-concept or non-critical dev work, a single node cluster works just fine. However having a cluster has multiple benefits. It's okay to go with a single node cluster if the following are not important/relevant for you.
scalability [spreads load across multiple brokers to maintain certain throughput]
fail-over [guards against data loss in case one/more node(s) go down]
availability [system remains reachable and functioning even if one/more node(s) go down]
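One practical gotcha with the single-broker setup described above: some of Kafka's internal topics default to a replication factor of 3, so a single-node installation typically needs overrides along these lines (a sketch; adjust to your version's defaults):

```properties
# server.properties overrides for a single-broker installation (sketch)
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
default.replication.factor=1
```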

multiple connectors in kafka to different topics are going to the same node

I have created two kafka connectors in kafka-connect which use the same Connector class but have different topics they listen to.
When I launch the process on my node, both the connectors end up creating tasks on this process. However, I would like one node to only handle one connector/topic. How can I limit a topic/connector to a single node? I don't see any configuration in connect-distributed.properties where a process could specify which connector to use.
Thanks
Kafka Connect in distributed mode can run as a cluster of one or more workers. Each worker can run multiple tasks. Depending on how many connectors and workers you are running, you will have tasks running on the same worker. This is deliberate - the idea is that Kafka Connect will manage your tasks and workload for you, across the available workers.
If you want to isolate your processing you can run Kafka Connect as separate Connect clusters, either on the same machine (make sure to use different REST ports), or separate machines.
For more info, see architecture and config for steps to configure separate clusters. Note that a cluster can actually be a single worker, but then you don't have any redundancy in the event of failure.
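A sketch of the "separate Connect clusters on the same machine" option: give each worker its own `group.id`, REST port, and set of internal topics (the names and ports here are assumptions). A connector is then submitted to the REST port of the cluster that should own it:

```properties
# worker-a.properties (sketch: cluster A)
group.id=connect-cluster-a
rest.port=8083
config.storage.topic=connect-a-configs
offset.storage.topic=connect-a-offsets
status.storage.topic=connect-a-status

# worker-b.properties (sketch: cluster B, different port and topics)
group.id=connect-cluster-b
rest.port=8084
config.storage.topic=connect-b-configs
offset.storage.topic=connect-b-offsets
status.storage.topic=connect-b-status
```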