In the context of Kubernetes or elsewhere, does it make sense to have one KSQL Server per application? When I read the capacity planning guide for KSQL Server, it seems the basic settings assume running multiple queries on one server.
However, I feel that for better control over scaling up and down with Kubernetes, it would make more sense to fix the number of threads per query and launch a server configured in Kubernetes with, say, 1 CPU, on which only one application would run. However, I am not sure how heavy a KSQL Server is, and whether that actually makes sense.
Any recommendations?
First of all, what you have mentioned is clearly doable. You can run KSQL Server with Docker, so you could have a container orchestrator such as Kubernetes or Swarm maintaining and scheduling those KSQL Server instances.
Here is how this would play out:
- Each KSQL Server instance will join a group of other instances that share the same KSQL_SERVICE_ID and use the same Kafka cluster, defined by KSQL_KSQL_STREAMS_BOOTSTRAP_SERVERS.
- You can create several KSQL Server clusters, e.g. for different applications; just use a different KSQL_SERVICE_ID for each while using the same Kafka cluster.
As a result, you now have:
- Multiple containerized KSQL Server instances managed by a container orchestrator such as Kubernetes.
- All of the KSQL instances connected to the same Kafka cluster (you can also have different Kafka clusters for different KSQL_SERVICE_IDs).
- KSQL Server instances grouped into different applications (different KSQL_SERVICE_IDs) in order to achieve separation of concerns, so that scalability, security and availability can be better maintained.
Regarding the coexistence of several KSQL Server instances (maybe with different KSQL_SERVICE_IDs) on the same server, you should know that the available machine resources can be monopolized by a greedy instance, causing problems for the less greedy instances. With Kubernetes you can set resource limits on your Pods to avoid this, but a greedy instance will then be throttled and slowed down.
Confluent's advice regarding multi-tenancy:
We recommend against using KSQL in a multi-tenant fashion. For example, if you have two KSQL applications running on the same node, and one is greedy, you're likely to encounter resource issues related to multi-tenancy. We recommend using a single pool of KSQL Server instances per use case. You should deploy separate applications onto separate KSQL nodes, because it becomes easier to reason about scaling and resource utilization. Also, deploying per use case makes it easier to reason about failovers and replication.
A possible drawback is the overhead of running multiple KSQL Server instances (each with a Java application footprint) in the same pool while having no work for them to do (i.e. no schedulable tasks due to a lack of partitions on your topic(s)), or simply because you have very little workload. You might be doing the same job with fewer instances, avoiding idle or nearly idle instances.
Of course, stuffing all stream processing, possibly for completely different use cases or projects, onto a single KSQL Server or pool of KSQL Servers may bring its own internal concurrency issues, development-cycle complexity, management overhead, etc.
I guess something in the middle will work fine: use a pool of KSQL Server instances for a single project or use case, which in turn might translate to a pipeline consisting of a topology of several sources, processors and sinks, implemented by a number of KSQL queries.
Also, don't forget about the scaling mechanisms of Kafka, Kafka Streams and KSQL (built on top of Kafka Streams) discussed in the previous question you've posted.
All of these mechanisms are described here:
https://docs.confluent.io/current/ksql/docs/capacity-planning.html
https://docs.confluent.io/current/ksql/docs/concepts/ksql-architecture.html
https://docs.confluent.io/current/ksql/docs/installation/install-ksql-with-docker.html
Related
I'm new to Kafka.
Kafka is supposed to be used as a distributed service. But the tutorials and blog posts I found online never mention whether there should be one or several ZooKeeper nodes.
The tutorials just spin up one ZooKeeper instance and then multiple Kafka brokers.
Is that how it is supposed to be done?
ZooKeeper is a centralized coordination service for distributed systems, used by clusters for their maintenance. It achieves distributed synchronization via metadata such as configuration information, naming, and so on.
In typical architectures, a Kafka cluster is served by 3 ZooKeeper nodes; for very large deployments this can be ramped up to 5 ZooKeeper nodes, but that in turn adds load, because all nodes must stay in sync and all metadata-related activities are handled by ZooKeeper.
Also note that, as an improvement, newer Kafka releases reduce the dependency on ZooKeeper in order to improve metadata scalability, reduce the complexity of maintaining metadata in an external component, and speed up recovery from unexpected shutdowns. With the new approach, controller failover is almost instantaneous. This is achieved by Kafka Raft Metadata mode, termed 'KRaft', which runs Kafka without ZooKeeper by merging all the responsibilities previously handled by ZooKeeper into a service inside the Kafka cluster itself, operating on the event-based mechanism used by the KRaft protocol.
Tutorials generally keep things nice and simple, so one ZooKeeper (often one Kafka broker too). Useful for getting started; useless for any kind of resilience :)
In practice, you are going to need three ZooKeeper nodes minimum.
If it helps, here is an enterprise reference architecture whitepaper for the deployment of Apache Kafka
Disclaimer: I work for Confluent, who publish the above whitepaper.
In Kubernetes, you explicitly specify the resource limits for a container. When launching a Kafka connector, you request max tasks, but how does the Connect worker cluster know how to distribute the load? Does it consider the tasks equal? Does it use internal metrics?
The Apache Kafka docs and the Confluent docs do not explicitly say, except that Confluent advises the following, which would indicate Connect workers do not have resource management:
The resource limit depends heavily on the types of connectors being run by the workers, but in most cases users should be aware of CPU and memory bounds when running workers concurrently on a single machine.
https://docs.confluent.io/3.1.2/connect/userguide.html#connect-standalone-v-distributed
Also the cluster deployment appears to require an external resource manager to handle failover of workers.
Kafka Connect workers can be deployed in a number of ways, each with their own benefits. Workers lend themselves well to being run in containers in managed environments such as YARN, Mesos, or Docker Swarm as all state is stored in Kafka, making the local processes themselves stateless. We provide Docker images and documentation for getting started with those images is here. By design, Kafka Connect does not automatically handle restarting or scaling workers which means your existing clustering solutions can continue to be used transparently.
how does the connect worker cluster know how to distribute the load
Each connector can opt to partition its work into tasks (for example, ingesting multiple tables from one database could be done in parallel and so one table would be done by one task), up to the tasks.max limit configured.
Kafka Connect balances these tasks across the available workers such that they are evenly distributed (based on the number of tasks).
The rebalancing protocol changed in release 2.3 of Apache Kafka as part of KIP-415; there are details in the KIP and here. In a nutshell, with incremental cooperative rebalancing, Kafka Connect spreads the tasks equally, starting from the least-loaded workers and eventually including more workers as the load evens out.
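To make the task model concrete, here is a rough, hypothetical sketch (the connector name and the tables config key are invented for illustration, not taken from a real connector) of how a source connector typically carves its work into task configs inside taskConfigs(); Kafka Connect then assigns those task configs across the workers:

```java
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical connector that ingests a list of tables; each table goes to one task,
// capped by the tasks.max value that the framework passes in as maxTasks.
public class TablesSourceConnector extends SourceConnector {

    private List<String> tables;

    @Override
    public void start(Map<String, String> props) {
        // "tables" is a made-up config key for this sketch, e.g. "orders,customers,payments"
        tables = Arrays.asList(props.get("tables").split(","));
    }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        // One task per table, but never more than tasks.max; Connect spreads these
        // task configs evenly across the available workers.
        int numTasks = Math.min(maxTasks, tables.size());
        List<Map<String, String>> taskConfigs = new ArrayList<>();
        for (int i = 0; i < numTasks; i++) {
            taskConfigs.add(new HashMap<>());
        }
        for (int i = 0; i < tables.size(); i++) {
            taskConfigs.get(i % numTasks)
                       .merge("tables", tables.get(i), (a, b) -> a + "," + b);
        }
        return taskConfigs;
    }

    @Override
    public Class<? extends Task> taskClass() {
        return TablesSourceTask.class;
    }

    @Override
    public void stop() { }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("tables", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Comma-separated list of tables to ingest");
    }

    @Override
    public String version() {
        return "0.0.1";
    }

    // Minimal stub so the sketch compiles; a real task would poll the tables named
    // in its config and emit SourceRecords.
    public static class TablesSourceTask extends SourceTask {
        @Override public String version() { return "0.0.1"; }
        @Override public void start(Map<String, String> props) { }
        @Override public List<SourceRecord> poll() { return null; }
        @Override public void stop() { }
    }
}
```

Whether the load ends up "equal" therefore depends only on how evenly the connector cuts its work into tasks; Connect itself just counts tasks per worker, it does not look at CPU or memory metrics.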
Also the cluster deployment appears to require an external resource manager to handle failover of workers.
To be clear - the failover of tasks is done automatically by Kafka Connect, and as you say, the failover of workers would be managed externally.
I'm planning to build a Kafka Cluster using two servers, and host Zookeeper on these two servers as well.
The question is: since Kafka requires ZooKeeper to run, what is the best ZooKeeper cluster layout for implementing a Kafka cluster on two servers?
For example, I'm currently running one ZooKeeper and one Kafka broker on each of the two servers, and in the Kafka configuration the brokers point to both ZooKeepers.
Is there a better way to do this?
First of all, you don't have to set up ZooKeeper and Kafka on the same servers. One of the roles of ZooKeeper is electing the controller (the broker responsible for maintaining the leader/follower relationship for all the partitions). For the election, a majority of ZooKeeper nodes must be alive. In your case, if even one ZooKeeper instance is down, you cannot elect a controller, so there is no difference between having one ZooKeeper or two. That's why it is recommended to have at least 3 nodes in a ZooKeeper cluster; that way you can handle the failure of one ZooKeeper node.
In addition to this, it is highly recommended to have at least three brokers in your Kafka cluster to maintain both consistency and high availability (link1, link2).
UPDATE:
As long as you are limited to only two servers, you can consider sacrificing some availability by setting min.insync.replicas=2 on the brokers and creating topics with replication.factor=2. If availability is more important to you than avoiding data loss, you can instead keep the min.insync.replicas=1 (default) broker config, again with topic replication.factor=2. In this situation, these are your options, IMHO. (Having one or two ZooKeepers doesn't matter much, as I mentioned above.)
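If it helps, here is a minimal sketch of the first option using the Java AdminClient (broker addresses, topic name and partition count are placeholders):

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

// Sketch: create a topic on a 2-broker cluster with replication.factor=2 and a
// topic-level min.insync.replicas=2 (the "consistency over availability" option).
public class CreateTwoReplicaTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "server1:9092,server2:9092");

        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("my-topic", 6, (short) 2)   // 6 partitions, RF = 2
                    .configs(Map.of("min.insync.replicas", "2"));      // writes need both replicas
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```

Keep in mind that with min.insync.replicas=2 on a two-broker cluster, producers using acks=all will start failing as soon as one broker is down; that is exactly the availability you are trading away.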
I am often faced with the same problem as you, #frisky5, where I would like to achieve a "suboptimal" HA system using only 2 nodes, and thus workarounds are always needed with cloud-native frameworks that rely on the assumption that clusters will have lots of nodes available.
That ain't always the case in real life, is it ;) ?
That being said, I see you essentially having 2 options:
Externalize the ZooKeeper configuration on a replicated storage system across the 2 nodes (e.g. DRBD).
Replicate the Kafka data volumes entirely onto the second node and use 2 one-node Kafka clusters that you switch on and off depending on which node is the current master.
I would go for the first option. In that case you would have 2 Kafka servers and one ZooKeeper server whose IP needs to be static (a virtual IP). When the ZooKeeper node goes down, it is restarted on the second node with the same VIP, but it needs to access the synchronized data folder.
I am not too familiar with ZooKeeper's internals and I can't tell you whether it will run into conflicts when starting up on a data store that "wasn't its own", but I would guess it makes sense for you to test this with a simple rsync setup.
Another way to achieve consensus, if you are using a k3s-based Kubernetes cluster, would be to rely on the internal k8s distributed consensus mechanics to "tell Kafka" which node is the leader. This works for the postgres operator by Crunchy Data because Patroni is cool ( https://patroni.readthedocs.io/en/latest/kubernetes.html ), but I am not sure whether Kafka/ZooKeeper are flexible enough to talk to a REST API to set their locks ...
Once you have achieved this intermediate step, you can use a PostgreSQL DB as the external source of truth for k3s, and then it is as simple as syncing the Postgres data folder between the machines (easily done with rsync). The beauty of this approach is that it is far more generic and could be used for other systems too.
Let me know what you think about these two approaches and whether you manage to set up a test environment. If you do it on GitHub, I can help you out with the implementation.
I have a Kafka topic "mytopic" with 10 partitions and want to use the S3 sink connector to sink records to an S3 bucket. For scaling purposes it should run on multiple nodes to write partition data in parallel to the same S3 bucket.
In the Kafka Connect user guide, and in many other blogs/tutorials, it's recommended to run workers in distributed mode instead of standalone to achieve better scalability and fault tolerance:
... distributed mode is more flexible in terms of scalability and offers the added advantage of a highly available service to minimize downtime.
I want to figure out which mode to choose for my use case: having one logical connector running on multiple nodes in parallel. My understanding is the following:
If I run in distributed mode, I will end up having only 1 worker processing all the partitions, since it's considered one connector task.
Instead, I should run in standalone mode on multiple nodes. In that case I will have a consumer group and achieve parallel processing of partitions.
In the standalone scenario described above I will actually have fault tolerance: if one instance dies, the consumer group will rebalance and the other standalone workers will handle the freed partitions.
Is my understanding correct or am I missing something?
Unfortunately I couldn't find much information on this topic other than this google groups discussion, where the author came to the same conclusion as I did.
In theory, that might work, but you'll end up SSH-ing to multiple machines with basically the same config files, just using the connect-standalone command instead of connect-distributed.
You're missing the part about Connect server task rebalancing, though, which communicates over the Connect server REST ports.
The underlying task code is all the same; only the entrypoint and offset storage are different. So why not just use distributed mode if you have multiple machines?
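For example, once the distributed workers are up, the S3 sink from the question would be submitted only once, to any worker, over the Connect REST API. A rough sketch in Java (the worker URL, bucket and region are placeholders, and the exact config keys depend on the connector version you run):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: register one logical S3 sink connector with a distributed Connect cluster.
// With tasks.max=10 and a 10-partition topic, Connect can spread up to 10 tasks
// across however many workers share the same group.id.
public class SubmitS3SinkConnector {
    public static void main(String[] args) throws Exception {
        String connectorJson = """
            {
              "name": "mytopic-s3-sink",
              "config": {
                "connector.class": "io.confluent.connect.s3.S3SinkConnector",
                "tasks.max": "10",
                "topics": "mytopic",
                "s3.bucket.name": "my-bucket",
                "s3.region": "eu-west-1",
                "storage.class": "io.confluent.connect.s3.storage.S3Storage",
                "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
                "flush.size": "1000"
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect-worker:8083/connectors"))  // any worker in the cluster
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```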
You don't need to run multiple instances of standalone processes; in distributed mode the Kafka Connect workers take care of distributing the tasks, rebalancing, and offset management. You just need to specify the same group id ...
I have a setup where I'm pushing events to Kafka and then running a Kafka Streams application on the same cluster. Is it fair to say that the only way to scale the Kafka Streams application is to scale the Kafka cluster itself by adding nodes or increasing partitions?
In that case, how do I ensure that my consumers will not bring down the cluster and that the critical pipelines are always "on"? Is there any concept of topology priority which could avoid possible downtime? I want to be able to expose the streams for anyone to build applications on without compromising the core pipelines. If the solution is to set up another Kafka cluster, does it make more sense to use Apache Storm instead for all the ad hoc queries? (I understand that a lot of consumers could still cause issues with the Kafka cluster, but at least the topology processing is isolated now.)
It is not recommended to run your Streams application on the same servers as your brokers (even if this is technically possible). Kafka's Streams API offers an application-based approach -- not a cluster-based approach -- because it's a library and not a framework.
It is not required to scale your Kafka cluster to scale your Streams application. In general, the parallelism of a Streams application is limited by the number of partitions of your app's input topics. It is recommended to over-partition your topic (the overhead for this is rather small) to guard against scaling limitations.
Thus, it is even simpler to "let anyone build applications", as everyone owns their own application. There is no need to submit apps to a cluster; they can be executed anywhere you like (thus, each team can deploy their Streams application the same way they deploy any other application they have). This gives you many deployment options, from a WAR file, over YARN/Mesos, to containers (like Kubernetes). Whatever works best for you.
Even though frameworks like Flink, Storm, or Samza offer cluster management, you can only use the tools that are integrated with those frameworks (for example, Samza requires YARN -- no other options available). Let's say you already have a Mesos setup; you can reuse it for your Kafka Streams applications -- there is no need for a dedicated "Kafka Streams cluster" (because there is no such thing).
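To illustrate how little a "deployable Streams application" actually is, here is a minimal sketch (topic names, broker address and application.id are placeholders). You scale it out simply by starting more copies of the same program with the same application.id, up to the number of input partitions:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

// Minimal Streams app: a plain Java program, no cluster to submit to. All instances
// started with the same application.id form one logical app, and the library spreads
// the tasks (one per input partition) across them automatically.
public class MyStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("input-topic");
        events.filter((key, value) -> value != null && !value.isEmpty())
              .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

If an instance dies, the tasks it owned are simply reassigned to the remaining instances, as the documentation excerpt below describes.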
An application's processor topology is scaled by breaking it into multiple tasks.
More specifically, Kafka Streams creates a fixed number of tasks based on the input stream partitions for the application, with each task assigned a list of partitions from the input streams (i.e., Kafka topics).
The assignment of partitions to tasks never changes so that each task is a fixed unit of parallelism of the application. Tasks can then instantiate their own processor topology based on the assigned partitions; they also maintain a buffer for each of their assigned partitions and process messages one at a time from these record buffers.
As a result stream tasks can be processed independently and in parallel without manual intervention.
It is important to understand that Kafka Streams is not a resource manager, but a library that "runs" anywhere its stream processing application runs. Multiple instances of the application are executed either on the same machine, or spread across multiple machines, and tasks can be distributed automatically by the library to those running application instances.
The assignment of partitions to tasks never changes; if an application instance fails, all its assigned tasks will be restarted on other instances and continue to consume from the same stream partitions.
The processing of the stream happens on the machines where the application is running.
I recommend you have a look at this guide; it can help you better understand the way Kafka Streams works.