Looking for UI tools to connect to a Strimzi Kafka cluster to get visibility into Kafka topics, read messages within topics, see broker and partition details, and connect with or without SSL/SASL. I have already tried Kafka Tool and am facing issues with it, hence looking for an alternative. Kindly suggest some UI tools for this (similar to Confluent Control Center / Kafka Tool) which are either free or low cost.
There are no UI tools for now, but a new issue was opened about it. I would follow it: https://github.com/strimzi/strimzi-kafka-operator/issues/3287
There is a project in an early stage of development: https://github.com/strimzi/strimzi-ui
Strimzi UI provides a way for managing Strimzi and Kafka clusters (+
other components) deployed by it using a graphical user interface.
But unfortunately, at the time of writing:
This UI is currently not in a state where it can be used. It is still early on in its development, but we hope to have something usable very soon!
So, keep an eye on it.
Because of budget issues, I am personally using the Confluent Cloud Basic edition, which is free.
Is there any way to set up the number of Kafka brokers on my own?
I could not find anything related in the Confluent Cloud web UI settings.
Does only the Dedicated edition support such settings? I cannot afford that much right now.
Or is it possible to configure cluster settings (like the number of brokers, etc.)
from my local terminal via the CLI?
Confluent Cloud provides a "serverless" experience, and you cannot configure the number of brokers.
I'm currently using Gatling and a home-grown solution, but I was wondering if Kafka offers anything?
Yes, Kafka includes both kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.
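For reference, a minimal sketch of how these scripts are typically invoked (the broker address, topic name, and record counts are placeholders; older Kafka releases use --broker-list instead of --bootstrap-server for the consumer test):

```
# Produce 1M records of 1 KB each with no throttling (-1) and report throughput/latency
bin/kafka-producer-perf-test.sh \
  --topic perf-test \
  --num-records 1000000 \
  --record-size 1024 \
  --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092

# Consume the same number of records back and report consumer throughput
bin/kafka-consumer-perf-test.sh \
  --bootstrap-server localhost:9092 \
  --topic perf-test \
  --messages 1000000
```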
I'm about to start deploying a couple of Kafka clusters to production in two different DCs. My main use is replication using MirrorMaker: continuously streaming/replicating Elasticsearch and Postgres between DCs in order to have a (near) real-time backup and failover.
What I can't get my head around is this simple question: should I use Confluent or Apache Kafka?
I can see that Confluent adds many niceties, but what I don't get is: why would someone pick plain Apache Kafka then? I've seen this answer and it seems clear: "pick Confluent, it has way more stuff".
As answered in the linked post, you can add whatever external processes you want to Apache Kafka.
Note: you are not picking either/or; you are always picking Apache Kafka. Confluent Platform adds on top of it, much as Cloudera's Data Platform (an alternative worth considering) does.
If you want to connect Elasticsearch and Postgres (via JDBC), both of those connectors are licensed under the Confluent Community License, so that licensing could be one potential reason for not using Confluent products.
Another reason: do you need the "more stuff"? Are you able to get support elsewhere? For example, AWS support on MSK, IBM Streams, and Azure EventHub are not using Confluent Platform (because it's against the above license).
MirrorMaker and MirrorMaker2 are both under the Apache License, so they have no such usage / redistribution restrictions.
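Since the replication use case itself only needs MirrorMaker 2 from plain Apache Kafka, here is a rough sketch of what that can look like (the cluster aliases, bootstrap addresses, and topic pattern are made-up placeholders; connect-mirror-maker.sh ships with Apache Kafka 2.4+):

```
# mm2.properties - hypothetical two-DC replication setup
cat > mm2.properties <<'EOF'
clusters = dc1, dc2
dc1.bootstrap.servers = kafka-dc1:9092
dc2.bootstrap.servers = kafka-dc2:9092

# replicate all topics from dc1 to dc2
dc1->dc2.enabled = true
dc1->dc2.topics = .*
EOF

# start MirrorMaker 2
bin/connect-mirror-maker.sh mm2.properties
```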
Should I use Confluent or Apache Kafka?
When deciding between deploying vanilla Apache Kafka or a commercially supported product, you should think about the O&M (operations and maintenance) timeline and what you gain and lose. Whatever you choose will be very difficult to replace once it's in production.
I'll also agree with @OneCricketeer's answer.
Do you need the "more stuff"?
I work as a professional services consultant with some Apache products. My advice is to keep your stack (whatever it is) as simple as possible. So if you don't need the additional tools and functionality of Confluent, don't use them. That's how they make the product "sticky" (i.e. vendor lock-in).
Vanilla Apache Kafka
Pro: No vendor lock-in or dependencies
Pro: Faster updates and feature development
Con: No nice dashboards
Con: Harder to secure
Confluent
Pro: Commercial support and professional services available
Pro: More stable, with fast and easy security patches
Pro: Nice dashboard and management tools
Pro: Easier to properly secure
Con: Expensive
Con: Expect vendor lock-in and frequent up-sells
My Opinion
If you have money to spare and this will be a critical piece of infrastructure, I'd recommend buying through Confluent. If you try to avoid paying them, you'll have to hire someone (expensive) who knows it anyway, and you'll have to deal with the patching nightmare of open-source projects.
If this is something you just want to kick the tires on, can allow downtime for, or think you'll replace in two years, I'd just use Apache Kafka with one of the open-source dashboards.
I already saw a similar question on SO, but it doesn't clearly answer my doubts.
We have different Kafka clusters and a lot of operational habits around them. We have our own way to start/stop the clusters, lots of operational scripts that help maintain them, etc.
Now we would like to use Kafka Connect connectors for new needs, but from what I've seen, Kafka Connect is extremely coupled to Confluent Hub.
It's like I can't even use the connectors without having to install a fully operational Confluent Hub.
This makes it very difficult for us to use Kafka Connect connectors. I understand that Confluent Hub might be a framework that helps run those connectors, but it's like we can't even use a separate Kafka cluster (one not operated through Confluent Hub).
But maybe I'm missing something.
Do you know if there is any way to properly use Kafka connectors on an already existing Kafka cluster (completely independent from Confluent Hub)?
EDIT:
It's more a question about the tightly coupled behaviour between Confluent Hub and Kafka Connect. All the features that come with Kafka Connect (distributed workers to handle different failover scenarios, etc.) seem unusable without Confluent Hub, hence a "need" to have a Kafka cluster running exclusively via Confluent Hub, which is not an easy task when you already have an existing big Kafka cluster with lots of ops habits around it.
Kafka Connect is part of Apache Kafka. It's a pluggable framework for streaming integration between systems in and out of Kafka.
To use Kafka Connect you need connectors for the specific technology with which you want to integrate. For example, S3 sink, Elasticsearch sink, JDBC source or sink, and so on.
The connector API is part of Apache Kafka, and available for anyone who wants to develop a connector.
Connectors are written by various people and organisations, and are available in various different ways. How you obtain a connector depends on which connector you want, how it's licensed, and how the author has made it available for distribution. It could be that you go to GitHub, clone the repo, and build the JAR. It could be that you can download the JAR directly.
All that Confluent Hub does is make lots of these connectors available for you in one place, easily searchable, and with an optional CLI tool that will install them for you.
Do you have to use Confluent Hub? No, not at all. Might it make your life easier in locating connectors that you want to use, and make it easier to install them? Hopefully :)
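As a rough sketch of what using Kafka Connect against an existing cluster, without Confluent Hub, can look like (the paths, connector JAR, and connector class below are made-up placeholders; the worker script, the plugin.path setting, and the REST API on port 8083 are part of Apache Kafka):

```
# 1. Put the connector JAR(s) you built or downloaded into a plugin directory
mkdir -p /opt/connect-plugins/my-connector
cp my-connector-*.jar /opt/connect-plugins/my-connector/

# 2. Point the Connect worker at your existing cluster and the plugin directory,
#    e.g. in config/connect-distributed.properties:
#      bootstrap.servers=my-existing-broker:9092
#      plugin.path=/opt/connect-plugins

# 3. Start a distributed Connect worker (ships with Apache Kafka)
bin/connect-distributed.sh config/connect-distributed.properties

# 4. Create a connector instance through the Connect REST API
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
  -d '{"name": "my-connector",
       "config": {"connector.class": "com.example.MyConnector", "tasks.max": "1"}}'
```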
Disclaimer: I work for Confluent.
Currently, at my company, we are migrating from Kafka 0.8 to 0.11; the broker migration steps are clearly stated in the Kafka documentation here.
What I am stuck on is upgrading the Kafka clients (producers, consumers, Spark Streaming). I can't find any documentation or articles clearly listing the required changes or steps to follow to upgrade the clients; all I found is the Javadoc for the Producer client.
What I have done so far is change the Kafka client version in my Gradle build to kafka-clients-0.11.0.0, and from a compilation point of view everything went fine with no code changes at all.
What I'm seeking help with is: are there any expected problems I should take care of, or any pointers for client changes other than the kafka-clients version?
I went through a lot of experimentation to get this done.
For the consumers and producers, I just used the Kafka 0.11.0 consumer and producer clients.
The tricky part was replacing Spark Streaming: the latest Spark Streaming version only supports up to Kafka 0.10.x, which doesn't contain any updates related to the new broker.
What I recommend here: if you are about to write an application from scratch and your main goal is real-time streaming, go for the Kafka Streams API, it is just AWESOME! If you already have a Spark Streaming app (which was my case), you should judge which is more important: staying stuck on Kafka broker version 0.10.x with Spark Streaming (whose Kafka integration was experimental, by the way), or moving to the new broker.
The benefits of having the streaming inside Kafka rather than Spark are the following:
Kafka Streams is a normal JAR that can be embedded in any Java application, so you don't have to care that much about deployment and environment.
Auto-scaling is easy when using Kafka Streams with any scale set provided by a cloud service provider, unlike scaling an HDP cluster.
Monitoring with something like Prometheus would be much easier.