I’m trying to use some Kafka API with Kafka 1.0 and I find myself baffled at a fundamental level.
There’s a class AdminClient that’s been around since 0.10 or so. It has a method describeTopics I want to try. So I set up an Eclipse project and construct an AdminClient and point it to my Kafka cluster and all is well.
Until I examine the AdminClient class a little more closely and find it is a Scala module, quite different from the published Java API. Among other things, describeTopics is nowhere to be found.
So I downloaded kafka-1.0.0-src.tgz and poked around. I found core/src/main/scala/kafka/admin/AdminClient.scala, which matches what I saw in Eclipse. Then I found clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java, which matches the API doc.
I have the feeling there’s something I’m missing. How can I get to the API I see in the documentation?
It’s right here: https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java, which is in the kafka-clients artifact: http://search.maven.org/#artifactdetails%7Corg.apache.kafka%7Ckafka-clients%7C1.0.0%7Cjar
The AdminClient you are seeing comes from the core Kafka artifact. The new AdminClient API is available in the Kafka clients library (alongside the consumer and producer), so you should use the following dependency in your pom.xml:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>
The Scala implementation is the server side of the admin functionality, which runs in the Kafka brokers (themselves written in Scala). The Java implementation is the client side, and it matches the docs because it is the client API that gets documented.
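With that dependency in place, a minimal sketch of describeTopics looks like the following (the broker address and topic name are placeholders to replace with your own):

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeTopicsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        // try-with-resources closes the client and its background threads
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, TopicDescription> topics =
                    admin.describeTopics(Collections.singletonList("my-topic")).all().get();
            topics.forEach((name, description) -> System.out.println(name + " -> " + description));
        }
    }
}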
Related
I need to write integration tests and mock a reactive Kafka consumer. I see there are ways to do it with the blocking Kafka clients, like using @EmbeddedKafka, but I was not able to find information about the reactive ones.
As linked in the comments, the reactor-kafka project itself uses Testcontainers for its integration tests. If you don't run tests in an environment with Docker, then spring-kafka-test's EmbeddedKafka or the junit-kafka project should still work with Reactor-based Kafka clients, since you really only need the bootstrap.servers property to point at any valid broker.
Regarding mocks, you don't need an actual broker; that's the point of mocking. Their source code does include mock classes, too.
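For the integration-test route, here is a minimal sketch with Testcontainers and reactor-kafka (the image tag, group id, and topic name are assumptions to adapt):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

public class ReactiveKafkaIT {
    public static void main(String[] args) {
        // Throwaway broker in Docker; the image tag is an assumption
        KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.0.1"));
        kafka.start();

        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "it-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ReceiverOptions<String, String> options = ReceiverOptions.<String, String>create(props)
                .subscription(Collections.singleton("test-topic")); // placeholder topic

        // The resulting Flux can be driven with StepVerifier or asserted on directly
        Flux<ReceiverRecord<String, String>> records = KafkaReceiver.create(options).receive();
        records.subscribe(r -> System.out.println(r.value()));
    }
}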
As we all know, Kafka still ships log4j 1.x jar files, even though Log4j 1.x reached end of life in 2015 and is no longer supported. That leaves Kafka carrying a known, unpatched vulnerability.
Is there any way to replace log4j 1.x in current Kafka (Docker images), or is there development work going on to replace it? If so, is there any ETA from the Kafka team?
Yes, there is active work to replace log4j, if it hasn't landed already. Please search the Kafka JIRA.
Also refer to the Kafka CVE list: https://kafka.apache.org/cve-list
And Confluent's own announcements: https://support.confluent.io/hc/en-us/articles/4412615410580-December-2021-Log4j-Vulnerabilities-Advisory
For out-of-support versions of Kafka that you're not willing to upgrade, you can try replacing log4j directly with reload4j (I've not tried it), but of course this will not handle transitive dependencies very accurately.
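As a sketch of that swap in a pom.xml (the Kafka artifact and both version numbers here are assumptions; match them to whatever you actually depend on):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.13</artifactId>
    <version>2.8.1</version>
    <exclusions>
        <!-- keep log4j 1.x off the classpath -->
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- drop-in replacement exposing the same log4j 1.x API -->
<dependency>
    <groupId>ch.qos.reload4j</groupId>
    <artifactId>reload4j</artifactId>
    <version>1.2.19</version>
</dependency>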
I tried the Kafka Connect transform predicate examples with the Debezium connector for MS SQL and hit an issue with the Kafka Connect documentation. The examples in both sets of docs mention the wrong class, org.apache.kafka.connect.predicates.TopicNameMatches, instead of the correct org.apache.kafka.connect.transforms.predicates.TopicNameMatches:
http://kafka.apache.org/documentation.html#connect_predicates
https://docs.confluent.io/platform/current/connect/transforms/regexrouter.html#predicate-examples
predicates=IsFoo
predicates.IsFoo.type=org.apache.kafka.connect.predicates.TopicNameMatches
predicates.IsFoo.pattern=foo
while in both distributions the package is the same:
package org.apache.kafka.connect.transforms.predicates;
https://github.com/a0x8o/kafka/blob/master/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/predicates/TopicNameMatches.java
https://github.com/confluentinc/kafka/blob/master/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/predicates/TopicNameMatches.java
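With the correct package name, the example would read:

predicates=IsFoo
predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsFoo.pattern=foo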
Should a KIP for documentation improvement then be filed for both?
You are correct: it really is a mistake.
For the Apache Kafka docs I already made a fix, but I don't know why it hasn't been applied yet (I asked about it in the PR).
Update: the fix will be applied in release 2.8.
Currently, at my company we are migrating from Kafka 0.8 to 0.11; the broker migration steps are clearly stated in the Kafka documentation here.
What I am stuck on is upgrading the Kafka clients (producers, consumers, spark-streaming). I can't find any documentation or articles that clearly list the required changes or steps to upgrade the clients; all I found is the Producer Client Javadoc.
What I did so far is change the Kafka client version in my Gradle build to kafka-clients-0.11.0.0, and from the compilation point of view everything went fine, with no code changes at all.
What I seek help with is: are there any expected problems I should take care of, or any pointers for client changes other than the kafka-clients version?
I went through lots of experiments to get this done.
For the consumers and producers, I just used the Kafka 0.11.0 consumers and producers.
The tricky part was replacing spark-streaming: the latest spark-streaming version only supports up to Kafka 0.10.x, which doesn't contain any of the updates related to the new brokers.
What I recommend here: if you are about to write an application from scratch and your main goal is realtime streaming, go for the Kafka Streams API, it is just AWESOME! If you already have a Spark Streaming app (which was my case), you have to judge which matters more: upgrading the brokers, or staying stuck on Kafka broker 0.10.x with Spark Streaming, which was experimental at the time, btw.
The benefits of having the streaming inside Kafka rather than Spark are the following (see the sketch after this list):
Kafka Streams is a normal jar that can be embedded in any Java application, so you don't have to care much about deployment and environment.
Auto-scaling is very easy with Kafka Streams using any scale set provided by a cloud service provider, unlike scaling an HDP cluster.
Monitoring with something like Prometheus is much easier.
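To illustrate the "normal jar" point, here is a minimal Kafka Streams sketch against the 1.0 API (broker address and topic names are placeholders):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PassThroughApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pass-through-app"); // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(String::toUpperCase).to("output-topic"); // trivial transformation

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}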
I am new to Kafka.
I have downloaded kafka_2.9.2-0.8.1.1. I have started ZooKeeper, a broker, a producer, and a consumer from the command prompt. I successfully created a topic and sent a message.
Now I want to run the producer from Eclipse, but I don't know how to do that. I found some links like http://vulab.com/blog/?p=611 but I am still not able to run it. Is the process mentioned in the link correct? Do I really need to create a Maven project in Eclipse? Or is there any other way to do that?
You need to use the Kafka Java API for creating a Kafka producer (if you intend to use Java). Using Maven is highly recommended, as it will manage all the dependencies for you, but you are free to bypass Maven if you are ready to manage all the required JARs yourself.
The Kafka wiki is another good place to look.
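For reference, a minimal producer sketch against the old 0.8.1 Java API (broker address and topic are placeholders; from 0.8.2 onwards you would use org.apache.kafka.clients.producer.KafkaProducer instead):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder broker list
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1"); // wait for the leader's ack

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test", "hello from Eclipse"));
        producer.close();
    }
}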