Updating the Kafka dependency in Camus causes messages not to be read by EtlRecordReader - apache-kafka

Camus has been used in my project for a long time and has never been updated. It uses Kafka version 0.8.2.2, and I want to find a workaround to use Kafka 1.0.0.
So I cloned the repository and updated the dependency. When I do that, the Message constructor here requires additional parameters.
As shown in the GitHub link above, the code compiles, but messages are not read from Kafka because of the condition here.
Is it possible to update the Kafka dependency, along with the appropriate constructors of kafka.message.Message, and make it work?
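For context, a minimal sketch of the constructor change being described, assuming the 0.10+ Scala API of kafka.message.Message (the constants NoTimestamp and CurrentMagicValue come from the Kafka core source and may differ in your exact version):

import kafka.message.Message

// Hedged sketch, not Camus's actual code: under 0.8.2.2 a message could be
// built as new Message(payload, key); from 0.10 onward the Scala class also
// expects a timestamp and a magic (message-format-version) byte.
object MessageCompat {
  def build(payload: Array[Byte], key: Array[Byte]): Message =
    new Message(payload, key, Message.NoTimestamp, Message.CurrentMagicValue)
}

Note that even once this compiles, the magic value controls the record format, which is likely what trips the read-side condition linked above.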

Related

Sending data before execution of scenarios

I am working on a Scala application. I have some JSON files in the resources folder of my project. I want to load all of them as strings and send them to a Kafka topic. I already have the Kafka producer code, but I don't know how to load all the files and send them. I am using the following code:
Source.fromResource(path_of_file).mkString
With this I can only send the one file whose path I pass in. How can I write generic code to load the files and send them one by one? I need to do this in the BeforeAll hook of my Cucumber tests; in short, I just want to send these files before any scenario begins to execute.
Which sbt version are you using? Please note that sbt 1.2.8 has a bug in listing directories. Otherwise the following should do that:
new File(getClass.getResource("/my-dir").getFile).listFiles().foreach(println)
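Building on that, here is a minimal sketch of the whole loop; the /json resource directory, the topic name my-topic, and the broker address are assumptions to adapt:

import java.io.File
import java.util.Properties
import scala.io.Source
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object SeedTopic {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // adjust to your broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)

    // list every .json file under the resources /json directory and send each one
    val dir = new File(getClass.getResource("/json").getFile)
    dir.listFiles().filter(_.getName.endsWith(".json")).foreach { file =>
      val source = Source.fromFile(file)
      val payload = try source.mkString finally source.close()
      producer.send(new ProducerRecord[String, String]("my-topic", file.getName, payload))
    }
    producer.flush()
    producer.close()
  }
}

Calling something equivalent from Cucumber's BeforeAll hook seeds the topic before the first scenario runs.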

New download of Kafka already contains the old topic

I was working with a Kafka download pack and was following the Kafka getting started guide. Thus, I created a sample topic called test.
Then I wanted to try setting some access control lists using the kafka-acls.sh script, but for some reason that script was not inside the bin directory of my Kafka pack.
So I downloaded a fresh Kafka pack from the website to check, and there the script was available. (I don't know why or how it was missing from the earlier pack.)
However, when I started Kafka from the new pack and tried to create the same topic test, I got an error saying that the topic already exists.
I am trying to figure out how this is possible even with a freshly downloaded instance. Does Kafka save topics in some common directory?
Found the reason. I figured that if topics persist even across different bundles of Kafka, they must be stored somewhere on disk outside the bundle itself.
A little searching showed that ZooKeeper stores its data in the directory pointed to by dataDir in the zookeeper.properties file, which defaults to /tmp/zookeeper.
Once I deleted this folder and started a fresh Kafka pack, all the previous topics were gone and it behaved like a genuinely new pack.
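For reference, the setting in question looks like this in config/zookeeper.properties (default value shown); pointing dataDir at a per-installation path avoids the surprise:

# config/zookeeper.properties
# Directory where ZooKeeper keeps its snapshots. The default below is shared
# by every Kafka bundle on the machine, so topics appear to survive across
# fresh downloads.
dataDir=/tmp/zookeeper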

SpringXD/Spring Integration: Using different versions of spring-integration-kafka for producer and consumer

I have the following configuration:
Spring-integration-kafka 1.3.1.RELEASE
I have a custom kafka-sink and a custom kafka-source
The configuration I want to have:
I'd like to keep using spring-integration-kafka 1.3.1.RELEASE with my custom kafka-sink.
I'm changing my kafka-source logic to use spring-integration-kafka 2.1.0.RELEASE. I noticed that the way to implement a consumer/producer is quite different from prior versions of spring-integration-kafka.
My question is: could I face some compatibility issues?
I'm using Rabbit as the message bus.
You should be OK then; it would probably work with the newer Kafka jars in the source's /lib directory, since each module is loaded in its own classloader, so there should be no clashes with the xd/lib jars.
However, you might have to remove the old Kafka jars from the xd/lib directory (which is why I asked about the message bus).

Explain Maven artifact differences: kafka-clients, kafka_2.11-<kafka-version>, scalatest-embedded-kafka_2.11.

Please explain the differences between these Maven artifacts and when to use each: kafka-clients, kafka_2.11-<kafka-version>, scalatest-embedded-kafka_2.11. Is any of them specifically intended for writing unit tests?
In my repo we have been using kafka_2.9.2-0.8.1.1; we are now planning to move to Kafka broker 0.9.0.1, so I tried kafka_2.11-0.9.0.1 and also kafka_2.10-0.9.0.1.
When the unit tests run, kafkaTestServer (KafkaServerStartable) intermittently hangs with kafka_2.10 and kafka_2.11, but with kafka_2.9.2-0.8.1.1 we never had a hang issue.
When it does proceed, it fails with a KafkaConfig init error or a ScalaObject-not-found error.
I'm kind of confused about these artifacts. Can anyone explain them to me?
The names encode the Scala version as well as the Kafka version: the suffix after the dash is the Kafka version, and the part after the underscore is the Scala version the binaries were compiled with. For example, kafka_2.9.2-0.8.1.1 is Kafka 0.8.1.1 compiled with Scala 2.9.2.
Thus, when you write code, you want the artifact compiled with the same Scala version you are building against. I assume the hangs and errors are due to a Scala version mismatch.
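For example, in sbt you would keep the artifact suffix in sync with scalaVersion, or let sbt append it via %% (a sketch; the coordinates are the real org.apache.kafka ones, the versions are just those from the question):

// build.sbt — the _2.11 suffix must match scalaVersion
scalaVersion := "2.11.8"

// explicit suffix:
libraryDependencies += "org.apache.kafka" % "kafka_2.11" % "0.9.0.1"

// or let sbt append the matching suffix automatically:
libraryDependencies += "org.apache.kafka" %% "kafka" % "0.9.0.1"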

Error: Could not find or load main class config.zookeeper.properties

I am trying to execute a sample producer-consumer application using Apache Kafka. I downloaded it from https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka-0.10.0.0-src.tgz. Then I started following the steps given in http://www.javaworld.com/article/3060078/big-data/big-data-messaging-with-kafka-part-1.html.
When I try to run bin/zookeeper-server-start.sh config/zookeeper.properties, I get Error: Could not find or load main class config.zookeeper.properties. I googled the issue but didn't find any useful information. Can anyone help me continue?
You've downloaded the source package. Download the binary package of Kafka instead and test with that.
You have to download the binary version from the official Kafka web site.
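For instance, for 0.10.0.0 the package names differ like this (a sketch; pick whichever Scala build suits you):

# source package (what was downloaded):  kafka-0.10.0.0-src.tgz
# binary package (what the guide needs): kafka_2.11-0.10.0.0.tgz
tar -xzf kafka_2.11-0.10.0.0.tgz
cd kafka_2.11-0.10.0.0
bin/zookeeper-server-start.sh config/zookeeper.properties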
Assuming you have the correct binary version, check that you do not already have CLASSPATH defined in your environment. If you do, and the defined CLASSPATH contains a space (e.g. C:\Program Files\<...>), then neither ZooKeeper nor Kafka will start.
To solve this, either delete your existing CLASSPATH or modify the startup script that builds the ZooKeeper and Kafka CLASSPATH values so that your CLASSPATH entry is wrapped in double quotes before the path is built.
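The change would look roughly like this (an illustrative sketch, not a verbatim line; the exact line in the Windows startup scripts varies by version):

rem before: the classpath is passed unquoted, so a space in C:\Program Files breaks it
rem   ... -cp %CLASSPATH% ...
rem after: wrap the value in double quotes
rem   ... -cp "%CLASSPATH%" ...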