How to run a Kafka producer in Eclipse?

I am new to Kafka.
I have downloaded Kafka 2.9.2-0.8.1.1 and started ZooKeeper, a broker, a producer, and a consumer from the command prompt. I successfully created a topic and sent a message.
Now I want to run the producer from Eclipse, but I don't know how to do that. I found links like http://vulab.com/blog/?p=611, but I am still not able to run it. Is the process described in that link correct? Do I really need to create a Maven project in Eclipse, or is there another way to do it?

You need to use the Kafka Java API to create a Kafka producer (if you intend to use Java). Using Maven is highly recommended, as it will manage all the dependencies for you, but you are free to bypass Maven if you are prepared to manage all the required JARs yourself.
The Kafka wiki is another good place to look.
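For reference, here is a minimal sketch of a producer against the 0.8.x producer API that ships with the version in the question (the broker address and topic name are placeholders for your own setup):

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // address of a running broker; adjust to your environment
        props.put("metadata.broker.list", "localhost:9092");
        // encode message payloads as strings
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        // the topic must already exist (or topic auto-creation must be enabled)
        producer.send(new KeyedMessage<String, String>("test-topic", "hello from Eclipse"));
        producer.close();
    }
}
```

With Maven, the matching dependency would be the kafka_2.9.2 artifact at version 0.8.1.1 (the artifact name includes the Scala version). In a plain Eclipse project, you would instead add the JARs from the libs directory of the Kafka download to the project's build path.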

Related

How to connect Apache Kafka metrics to Confluent Control Center?

We are using Apache Kafka 2.7. Before that we used the Confluent Platform distribution of Kafka, where we could access Control Center to view statistics such as incoming/outgoing data rates and disk usage.
Now that we are on Apache Kafka, we need to view the same kind of statistics in Control Center. Is that possible, and if so, how?
You'd still need to download Confluent Platform, then find and copy all the relevant JAR files for the Confluent interceptors and metrics reporters into the Kafka libs directory on each broker (and into other clients, such as Connect), then reconfigure the relevant properties and restart each system. You might also be able to grab the JAR(s) directly from Maven.
Note that Apache Kafka brokers will not pick up any broker configuration that starts with the confluent. prefix, so you'll also need to manually create some topics, such as _confluent-metrics.
https://docs.confluent.io/platform/current/kafka/metrics-reporter.html
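As a sketch of the reconfiguration step, the broker-side settings for the Confluent Metrics Reporter look roughly like this (class and property names follow the linked documentation; the bootstrap address is a placeholder):

```properties
# server.properties on each broker
metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=localhost:9092
# topic the reporter publishes to; on Apache Kafka brokers, create it manually
confluent.metrics.reporter.topic=_confluent-metrics
```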

Integration tests for a Spring Kafka producer and consumer

I want to write tests for a Spring Kafka producer and consumer. I have tried multiple approaches:
the @EmbeddedKafka annotation
EmbeddedKafkaRule
EmbeddedKafkaBroker
etc...
Every time I get one error or another, and none of the examples posted on GitHub seem to run at all. I have also checked the Spring Kafka versions for compatibility.
Can someone share a recently written example code base that has actually been seen running successfully?
There are hundreds of tests in the framework itself.
This is probably the most extensive one...
https://github.com/spring-projects/spring-kafka/blob/1b9a9451feea7cca16903f1c990c74c6be9b8ffb/spring-kafka/src/test/java/org/springframework/kafka/annotation/EnableKafkaIntegrationTests.java#L164-L176
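As a starting point, here is a minimal sketch of an @EmbeddedKafka test; it assumes a Spring Boot application with spring-kafka-test on the classpath, and the topic name is a placeholder:

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.context.EmbeddedKafka;

// Starts an in-process broker and points spring.kafka.bootstrap-servers at it
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "test-topic",
        bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class KafkaIntegrationTest {

    @Autowired
    private KafkaTemplate<String, String> template;

    @Test
    void producerSendsToEmbeddedBroker() throws Exception {
        // blocks until the embedded broker acknowledges the record
        template.send("test-topic", "key", "value").get();
    }
}
```

The usual pitfall is a version mismatch: spring-kafka and spring-kafka-test must be the same version, and ideally the pair managed by your Spring Boot release.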

Kafka - Confluent Hub - using only part of it

I already saw a similar question on SO, but it did not clearly answer my doubts.
We have several Kafka clusters and a lot of operational habits around them: our own way to start/stop a cluster, lots of operations scripts that help maintain the clusters, and so on.
Now we would like to use Kafka Connect connectors for new needs, but from what I have seen, Kafka Connect appears extremely coupled to Confluent Hub.
It looks as though I can't even use the connectors without installing a fully operational Confluent Hub.
This makes it very difficult for us to use Kafka Connect connectors. I understand that Confluent Hub might be a framework that helps run those connectors, but it seems we can't even use a standalone Kafka cluster (one not operated through Confluent Hub).
But maybe I am missing something.
Do you know if there is any way to properly use Kafka connectors on an already existing Kafka cluster, completely independent of Confluent Hub?
EDIT:
This is really a question about the apparently tight coupling between Confluent Hub and Kafka Connect. All the features that come with Kafka Connect (distributed workers to handle different failover scenarios, etc.) seem unusable without Confluent Hub, hence a "need" to run the Kafka cluster exclusively via Confluent Hub, which is not an easy task when you already have a big existing Kafka cluster with lots of ops habits around it.
Kafka Connect is part of Apache Kafka. It's a pluggable framework for streaming integration between systems in and out of Kafka.
To use Kafka Connect you need connectors for the specific technology with which you want to integrate. For example, S3 sink, Elasticsearch sink, JDBC source or sink, and so on.
The connector API is part of Apache Kafka, and available for anyone who wants to develop a connector.
Connectors are written by various people and organisations and are made available in various different ways. How you obtain a connector depends on which connector you want, how it's licensed, and how the author has made it available for distribution. It could be that you go to GitHub, clone the repo, and build the JAR; it could be that you can download the JAR directly.
All that Confluent Hub does is make lots of these connectors available for you in one place, easily searchable, and with an optional CLI tool that will install them for you.
Do you have to use Confluent Hub? No, not at all. Might it make your life easier in locating connectors that you want to use, and make it easier to install them? Hopefully :)
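For instance, to run a connector on a plain Apache Kafka cluster, you only need the connector JARs on the worker's plugin path. Here is a sketch of the distributed worker configuration that ships with Apache Kafka (broker address, topic names, and the plugin directory are placeholders):

```properties
# connect-distributed.properties (a sample ships in the config/ directory of Apache Kafka)
bootstrap.servers=broker1:9092
group.id=my-connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
# drop connector JARs here, whether built from source, downloaded, or fetched via Confluent Hub
plugin.path=/opt/connect-plugins
```

The worker is then started with the bin/connect-distributed.sh script from the Kafka distribution; Confluent Hub is not involved at any point.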
Disclaimer: I work for Confluent.

Upgrading Kafka client from 0.8.2.0 to 0.11.0.0

At my company we are currently migrating from Kafka 0.8 to 0.11; the broker migration steps are clearly stated in the Kafka documentation here.
Where I am stuck is upgrading the Kafka clients (producers, consumers, Spark Streaming). I can't find any documentation or articles clearly listing the required changes or steps to upgrade the clients; all I found is the Javadoc for the producer client.
What I have done so far is change the Kafka client version in my Gradle build to kafka-clients-0.11.0.0, and everything compiled fine with no code changes at all.
What I am seeking help with is: are there any expected problems I should take care of, or any pointers for client changes other than bumping the kafka-clients version?
I went through a lot of experimentation to get this done.
For the consumers and producers, I simply used the 0.11.0 Kafka consumer and producer clients.
The tricky part was replacing Spark Streaming: its latest version only supports Kafka up to 0.10.x, which doesn't contain any of the updates related to the new broker.
My recommendation: if you are about to write an application from scratch and your main goal is real-time streaming, go for the Kafka Streams API; it is just awesome. If you already have a Spark Streaming app (which was my case), you have to judge which matters more: staying stuck on broker version 0.10.x for the sake of Spark Streaming, whose Kafka integration was experimental at the time, or moving on.
The benefits of doing the streaming inside Kafka rather than Spark are the following (see the sketch after this list):
Kafka Streams is a normal JAR that can be embedded in any Java application, so you don't have to worry much about deployment and environment.
Auto-scaling is easy with Kafka Streams using any scale set provided by a cloud service provider, unlike scaling an HDP cluster.
Monitoring with something like Prometheus is much easier.
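To illustrate the "normal JAR" point, here is a minimal sketch of a Kafka Streams application using the current Streams DSL (on 0.11 the older KStreamBuilder class is used instead of StreamsBuilder; topic names and the application id are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // read from one topic, transform each value, write to another topic
        builder.<String, String>stream("input-topic")
               .mapValues(v -> v.toUpperCase())
               .to("output-topic");

        // runs entirely inside this JVM; scale out by starting more instances
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```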

How to update a running Kafka connector

I have Kafka Connect running in a Marathon container. If I want to update a connector plugin (JAR), I have to upload the new one and then restart the Connect task.
Is it possible to do that without a restart/downtime?
The updated JAR for the connector plugin needs to be added to the classpath, and the worker's classloader then needs to pick it up. The best way to do this currently is to take an outage, as described here.
Depending on your connector, you might be able to do rolling upgrades, but the generic answer is that if you need to upgrade the connector plugin, you currently have to take an outage.
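If you do attempt a rolling upgrade, one way to reduce the impact is to pause the connector through the Connect REST API before bouncing each worker. A hypothetical sketch in Java (the connector name and worker host are placeholders; the pause/resume endpoints are part of the standard Connect REST API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PauseConnector {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // PUT /connectors/{name}/pause stops the connector's tasks until resumed
        HttpRequest pause = HttpRequest.newBuilder()
                .uri(URI.create("http://connect-host:8083/connectors/my-connector/pause"))
                .PUT(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> resp = client.send(pause, HttpResponse.BodyHandlers.ofString());
        System.out.println("Pause returned HTTP " + resp.statusCode());
        // ...swap the plugin JAR and restart the worker, then PUT .../resume
    }
}
```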