We are using Apache Kafka version 2.7. Before that we used the Confluent Platform distribution of Kafka, where we could access Control Center to view statistics such as incoming/outgoing data rate, disk space, and similar metrics.
Now that we are on plain Apache Kafka, we need to view the same kind of statistics in Control Center.
Is it possible?
If so, can anyone tell me how to set it up?
You'd still need to download the Confluent Platform, then find and copy all the relevant JAR files for the Confluent interceptors and metric reporters into the Kafka libs directory on each broker (and into other clients such as Connect), then reconfigure the relevant properties and restart each system. You might also be able to grab the JAR(s) directly from Maven.
Note that Apache Kafka brokers will not pick up any broker configuration that starts with the confluent. prefix, so you'll also need to manually create some topics such as _confluent-metrics.
https://docs.confluent.io/platform/current/kafka/metrics-reporter.html
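For example, once the reporter JARs are on the classpath, a broker configuration along these lines enables the metrics reporter (a sketch based on the page above; the bootstrap address and replica count are placeholders you must adapt to your cluster):

    # server.properties on each broker
    metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
    confluent.metrics.reporter.bootstrap.servers=localhost:9092
    confluent.metrics.reporter.topic.replicas=3

And because a plain Apache broker ignores the confluent.-prefixed settings, create the metrics topic yourself (12 partitions is the documented default, but treat that as an assumption to verify for your version):

    bin/kafka-topics.sh --create --topic _confluent-metrics \
      --partitions 12 --replication-factor 3 \
      --bootstrap-server localhost:9092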
I'm new to Camel and Kafka. I have been reading some documentation on Camel and have come across this repository with links to a number of connectors. These seem to be connectors to primary sources of storage. What if I need to connect from .NET, pull the data, and process it before committing it to my database? I feel like I'm missing the point somehow, as I don't see any kind of C# connectors.
The Camel Kafka Connector project aims to provide a set of Kafka connectors that are ready to use by simply configuring them. These connectors are based on the corresponding Camel components, which in turn focus on talking to an "external" system, protocol, or SaaS.
So if I understood your use case correctly, you probably need a Kafka client in C# (like https://github.com/confluentinc/confluent-kafka-dotnet) so that in your application logic you can connect to Kafka, grab events, apply whatever logic is needed, and place the results in the application database.
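A minimal consume-process-store loop with confluent-kafka-dotnet would look roughly like this (a sketch; the broker address, group id, and topic name are placeholders, and SaveToDatabase is a hypothetical stand-in for your own persistence code):

    using System;
    using System.Threading;
    using Confluent.Kafka;

    class Program
    {
        static void Main()
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",  // placeholder broker address
                GroupId = "my-dotnet-consumer",       // placeholder consumer group
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
            consumer.Subscribe("my-topic");           // placeholder topic

            var cts = new CancellationTokenSource();
            Console.CancelKeyPress += (_, e) => { e.Cancel = true; cts.Cancel(); };

            try
            {
                while (true)
                {
                    var result = consumer.Consume(cts.Token);
                    // apply your processing logic, then persist the outcome
                    SaveToDatabase(result.Message.Value);
                }
            }
            catch (OperationCanceledException) { /* shutting down */ }
            finally { consumer.Close(); }
        }

        static void SaveToDatabase(string value)
        {
            // hypothetical: replace with your actual database write
            Console.WriteLine($"Would store: {value}");
        }
    }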
I already saw a similar question on SO, but it does not clearly answer my doubts.
We have several Kafka clusters and a lot of operational habits around them: our own way to start/stop a cluster, lots of ops scripts that help maintain it, and so on.
Now we would like to use Kafka Connect connectors for new needs, but from what I have seen, Kafka Connect is tightly coupled to Confluent Hub.
It's as if I can't even use the connectors without installing a fully operational Confluent Hub.
This makes it very difficult for us to use Kafka Connect connectors. I understand that Confluent Hub might be a framework that helps run those connectors, but it seems we can't use them with a standalone Kafka cluster (one not operated through Confluent Hub).
But maybe I'm missing something...
Do you know if there is any way to properly use Kafka connectors on an already existing Kafka cluster, completely independent of Confluent Hub?
EDIT:
This is really a question about the apparently tight coupling between Confluent Hub and Kafka Connect. All the features that come with Kafka Connect (distributed workers to handle different failover scenarios, etc.) appear to be unusable without Confluent Hub, hence a "need" to run the Kafka cluster exclusively via Confluent Hub, which is not an easy task when you already have a big existing Kafka cluster with lots of ops habits built around it.
Kafka Connect is part of Apache Kafka. It's a pluggable framework for streaming integration into and out of Kafka.
To use Kafka Connect you need connectors for the specific technology with which you want to integrate: for example, an S3 sink, an Elasticsearch sink, a JDBC source or sink, and so on.
The connector API is part of Apache Kafka, and available for anyone who wants to develop a connector.
Connectors are written by various people and organisations, and are made available in various ways. How you obtain a connector depends on which connector you want, how it's licensed, and how the author has made it available for distribution. It could be that you go to GitHub, clone the repo, and build the JAR. It could be that you can download the JAR directly.
All that Confluent Hub does is make lots of these connectors available for you in one place, easily searchable, and with an optional CLI tool that will install them for you.
Do you have to use Confluent Hub? No, not at all. Might it make your life easier in locating connectors that you want to use, and make it easier to install them? Hopefully :)
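To make the decoupling concrete: with a connector JAR obtained by any of those routes, a plain Apache Kafka Connect worker can load it with no Confluent Hub involved (a sketch; the plugin directory and topic names are placeholders):

    # connect-distributed.properties (a sample ships with Apache Kafka)
    bootstrap.servers=localhost:9092
    group.id=connect-cluster
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    offset.storage.topic=connect-offsets
    config.storage.topic=connect-configs
    status.storage.topic=connect-status
    plugin.path=/opt/connect-plugins

    # start the worker with the script from the Kafka distribution
    bin/connect-distributed.sh config/connect-distributed.properties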
Disclaimer: I work for Confluent.
I have an existing Kafka cluster. I want to install the Kafka REST Proxy:
https://github.com/confluentinc/kafka-rest
If I install Confluent, does that come with Kafka? I am afraid that if I install it on my master Kafka node, Confluent will override all my settings and mess up my Kafka cluster.
How do you install the Kafka REST Proxy when you have an existing Kafka cluster?
This is not made clear on their website. I have CentOS and was going to try:
sudo yum install confluent-platform-oss-2.11
Any help would be great....
Download the Confluent Platform tarball and extract it (or, preferably, use APT/YUM), then configure and run only the REST Proxy via kafka-rest-start.
I wouldn't recommend using APT/YUM to install the entire Confluent Platform if you already have an existing Kafka. You might be able to install only kafka-rest with it, though.
Alternatively, back up your existing Kafka and Zookeeper property files, then place the Confluent Platform on top of the existing installation, keeping the original files. If your Kafka is an old release, take this as a good opportunity to schedule an upgrade. Downloading Confluent isn't going to overwrite anything from the upstream Apache projects for the corresponding release; if anything, it's an extension.
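Once extracted, pointing the REST Proxy at your existing cluster is a small edit to its properties file (a sketch; the listener port and broker address are placeholders, and note that key names vary across Confluent versions, with very old releases using zookeeper.connect instead of bootstrap.servers):

    # etc/kafka-rest/kafka-rest.properties
    id=kafka-rest-server-1
    listeners=http://0.0.0.0:8082
    bootstrap.servers=PLAINTEXT://your-existing-broker:9092

    # then start only the proxy
    bin/kafka-rest-start etc/kafka-rest/kafka-rest.properties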
Currently, at my company we are migrating from Kafka 0.8 to 0.11. The broker migration steps are clearly stated in the Kafka documentation here.
What I am stuck on is upgrading the Kafka clients (producers, consumers, spark-streaming). I can't find any documentation or articles clearly listing the required changes or steps to upgrade the clients; all I found is the javadoc for the producer client.
What I have done so far is change the Kafka client version in my Gradle build to kafka-clients-0.11.0.0, and everything went fine from a compilation point of view, with no code changes at all.
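For reference, that version bump is a one-line change in the dependencies block (Gradle Groovy DSL of that era; newer Gradle uses implementation instead of compile):

    dependencies {
        compile 'org.apache.kafka:kafka-clients:0.11.0.0'
    }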
What I need help with is: are there any problems I should expect and take care of, or any pointers for client changes other than the kafka-clients version?
I went through lots of experiments to get this done.
For the consumers and producers, I just used the Kafka 0.11.0 consumers and producers.
The tricky part was replacing spark-streaming: the latest spark-streaming version only supports up to Kafka 0.10.x, which doesn't contain any of the updates related to the new broker.
What I recommend here: if you are about to write an application from scratch and your main goal is real-time streaming, go for the Kafka Streams API, it is just AWESOME! If you already have a Spark Streaming app (which was my case), you have to judge which matters more: staying stuck on the Kafka 0.10.x broker to keep Spark Streaming (whose Kafka integration was experimental, by the way), or moving off it.
The benefits of having the streaming inside Kafka rather than Spark are the following (see the sketch after this list):
Kafka Streams is a normal JAR that can be embedded in any Java application, so you don't have to care much about deployment and environment.
Auto-scaling is very easy with Kafka Streams using any scale set provided by a cloud service provider, unlike scaling an HDP cluster.
Monitoring with something like Prometheus is much easier.
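To illustrate the "normal JAR" point, here is a minimal Streams topology sketch (using the current StreamsBuilder API; the 0.11 release shipped the older KStreamBuilder class, and the topic names and broker address here are made up):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class UppercaseApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");    // also the consumer group id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> source = builder.stream("input-topic");      // placeholder topic
            source.mapValues(value -> value.toUpperCase())                       // your processing logic
                  .to("output-topic");                                           // placeholder topic

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }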
I'm new to the Confluent world. I know how to start Kafka and Zookeeper from Confluent, but that's not what I need.
I already have 3 Kafka nodes and 2 Zookeepers installed via Ambari. I then downloaded version 3.0.0 of Confluent, and now I want to connect Confluent to the already running Kafka and Zookeeper. I don't want to start the new Kafka or Zookeeper servers that Confluent provides.
Does anyone have an idea how to accomplish that, i.e. what to actually run from Confluent and what to change?
So far I have only been changing files in ./etc/kafka or ./etc/zookeeper inside the Confluent directory. Thank you!
clarify some basics about Confluent and how to manage communication between Confluent and Kafka
First things first: there is no single application called "Confluent" that can be started all on its own.
There's nothing to configure for Kafka or Zookeeper. The Confluent Platform doesn't add anything on top of the existing Apache Kafka you have (presumably via Hortonworks or Cloudera).
In fact, those companies apply patches to Kafka that make it slightly different from the base Apache version you would get from Confluent.
That being said, if you read through each of the extra services that Confluent provides, you'll notice either a Zookeeper or a bootstrap server configuration option. Fill out those fields, start the respective services, and you're good to go.
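For example, pointing the Schema Registry at an existing cluster is just an edit to its properties file before starting that one service (a sketch; the host name is a placeholder, and in the old 3.0.0 release mentioned above the registry is configured against Zookeeper via kafkastore.connection.url, while newer releases can use a bootstrap server instead):

    # etc/schema-registry/schema-registry.properties
    kafkastore.connection.url=your-existing-zookeeper:2181

    # then start only that service
    bin/schema-registry-start etc/schema-registry/schema-registry.properties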
what to actually run from Confluent
Look in the bin directory; you can find all the start scripts there. From the comments, it looks like you're trying to use Connect Distributed (which should already be installed with any recent Kafka installation; it's not Confluent-specific) and the Schema Registry. You'll have to be more specific about the errors you get, but the config files are all under the etc path.
Unless you're using KSQL, the REST Proxy, or Control Center, there's not much to run because, as mentioned, Kafka Connect is included with the base Apache Kafka project, and Hortonworks maintains its own Schema Registry project.
2 zookeepers installed by Ambari
This is a highly non-recommended setup: Zookeeper needs a majority quorum, so a 2-node ensemble cannot survive the loss of either node. Please install an odd number of Zookeepers, preferably 3 or 5.