java.lang.IllegalArgumentException kafka console consumer

We are using Kafka 2.10-0.9.0.2.4.2.0-258 in our environments. We are getting the below exception with the kafka console consumer on a few topics. I am aware that messages coming into these topics are sometimes quite big, but they do not exceed message.max.bytes.
./kafka-console-consumer.sh --zookeeper xxx:2181,xxx:2181,xxx:2181 --topic test-topic
{metadata.broker.list=xxx:9092,xxx:9092,xxx:9092, request.timeout.ms=30000, client.id=console-consumer-76015, security.protocol=PLAINTEXT}
[2016-08-28 21:27:54,795] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at kafka.message.Message.sliceDelimited(Message.scala:237)
at kafka.message.Message.key(Message.scala:224)
at kafka.message.MessageAndMetadata.key(MessageAndMetadata.scala:30)
at kafka.consumer.OldConsumer.receive(BaseConsumer.scala:84)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:109)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:69)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:47)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages
I decreased replica.fetch.max.bytes to match message.max.bytes and also set num.replica.fetchers to 2, as suggested in the link below, but that did not resolve the issue.
https://issues.apache.org/jira/browse/KAFKA-1196
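For reference, here is a sketch of the relevant broker settings in server.properties (the values are illustrative placeholders, not our actual cluster values):
# Largest message the broker will accept:
message.max.bytes=10485760
# Kept at least as large as message.max.bytes so replica fetchers can copy large messages (see KAFKA-1196):
replica.fetch.max.bytes=10485760
# Number of replica fetcher threads per source broker:
num.replica.fetchers=2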
Any idea what else I should do to make it work?
Any help would be appreciated.
Thanks in advance.

I was having the exact same issue. The root cause was an incompatibility between the kafka jar that your kafka installation uses and the one that you used to develop and run your producer. You can find which version of the kafka jars your installation is using in /usr/hdp/current/kafka-broker/libs.
In my case, my kafka installation was using kafka_2.10-0.9.0.2.4.2.0-258.jar, but the kafka jar that I bundled with my producer was 0.10.0.1. Once I switched to 0.9.0.2.4.2.0-258, it worked.
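As a quick check (a sketch assuming an HDP layout; the path will differ on other distributions), you can list the broker's Kafka jars from a shell:
ls /usr/hdp/current/kafka-broker/libs | grep -i kafka
# e.g. kafka_2.10-0.9.0.2.4.2.0-258.jar -> Scala 2.10, Kafka 0.9.0, HDP build 2.4.2.0-258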
If your cluster is HDP and you are using Maven to build your producer, you can find all the jar dependencies here: http://repo.hortonworks.com/content/repositories/releases/
For Maven, here is what you have to use:
Repository:
<repositories>
    <repository>
        <id>org.hortonworks</id>
        <url>http://repo.hortonworks.com/content/repositories/releases/</url>
    </repository>
</repositories>
Dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.9.0.2.4.2.0-258</version>
    <scope>compile</scope>
    <exclusions>
        <exclusion>
            <artifactId>jmxri</artifactId>
            <groupId>com.sun.jmx</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jms</artifactId>
            <groupId>javax.jms</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jmxtools</artifactId>
            <groupId>com.sun.jdmk</groupId>
        </exclusion>
    </exclusions>
</dependency>

I had the same issue (error/exception) when running the consumer, though the scenario was slightly different.
I was initially working with Kafka version 2.11-0.10.2.0 (OLD) and then had to change to 2.10-0.9.0.1 (NEW).
So when I downloaded the NEW setup and simply started the zookeeper, broker, producer, and finally the consumer, I got the above error. I was using the default producer and consumer scripts shipped in the download itself and testing the quickstart guide.
I got the same error as above, and I could not figure out how the producer inside the freshly downloaded pack could be using a different jar, as explained in the answer above.
Then I realized that there must be a common place referenced by all the Kafka instances, which I found to be the /tmp/kafka-logs and /tmp/zookeeper folders.
Once I deleted them and restarted my NEW Kafka download, I was able to get rid of the above exception and things worked smoothly.
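Concretely, the cleanup came down to the following (a sketch assuming the default quickstart paths; stop the broker and ZooKeeper first, and adjust the paths if you changed log.dirs or dataDir):
# Remove the stale broker and ZooKeeper data left by the OLD version
rm -rf /tmp/kafka-logs
rm -rf /tmp/zookeeper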
Hope this helps someone else facing the same error in a different scenario.
Shabir

Related

Kafka MQTT connector error in standalone mode

I have an MQTT broker and a Kafka broker running on Ubuntu. I can publish messages to Kafka console consumers through producers. However, when I try to publish a message to Kafka through the connector from this repository, https://github.com/SINTEF-9012/kafka-mqtt-source-connector, in standalone mode, it throws the following error:
.
These are the configurations for the connect-standalone.properties and source connector.properties files:
Please help me in connecting mosquitto to Kafka.
The error indicates that this dependency declared in the POM is not part of the JAR that you've put in the plugin path:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.10.0</version>
</dependency>
Looking at the logs, it seems to have used the SNAPSHOT jar file, not the with-dependencies one.
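One common way to produce such a self-contained jar is the maven-shade-plugin. A minimal sketch (the plugin version and placement are illustrative assumptions, not taken from the connector's actual POM):
<build>
    <plugins>
        <!-- Bundle all runtime dependencies, including log4j-api, into one jar -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
Putting the resulting shaded jar on the plugin path, rather than the plain SNAPSHOT jar, should make the missing class available.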

Failing to AVRO serialise on only one consuming microservice

I have five Kafka consuming services and one Kafka producing service. I have just rolled out a new Avro schema in a consumer library in each of the Java consuming microservices. I have not made the producer-side changes yet. However, one of the consuming services is failing to deserialize anything; the other four are working fine.
I get this exception
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.sun.ws.rs.ext.RuntimeDelegateImpl
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:122)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
at javax.ws.rs.core.UriBuilder.newInstance(UriBuilder.java:69)
at javax.ws.rs.core.UriBuilder.fromPath(UriBuilder.java:111)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:656)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:642)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:217)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaBySubjectAndId(CachedSchemaRegistryClient.java:291)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaById(CachedSchemaRegistryClient.java:276)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer$DeserializationContext.schemaFromRegistry(AbstractKafkaAvroDeserializer.java:273)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:97)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:87)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:62)
Some things that changed on both the producer and consumer sides are the versions of:
kafka-avro-serializer, up to 6.0.0
kafka-clients, up to 2.0.0
As a result, the record that arrives at this consumer is null, which in our configuration blocks our queue, and we do not advance the manual offset.
Hope you can help.
You need to include the jersey-client library.
You can add the .jar manually, or use your build tool:
Maven
<dependency>
    <groupId>org.glassfish.jersey.core</groupId>
    <artifactId>jersey-client</artifactId>
    <version>3.0.0</version>
</dependency>
Gradle
compile group: 'org.glassfish.jersey.core', name: 'jersey-client', version: '3.0.0'
SBT
libraryDependencies += "org.glassfish.jersey.core" % "jersey-client" % "3.0.0"

Kafka client upgrade from 0.8.2.1 to 2.1.1

I am planning to upgrade the Kafka client from 0.8.2.1 to 2.1.1 and need help with how to go about it.
My existing code is as shown here: https://stackoverflow.com/a/37384777.
kafka.consumer.KafkaStream is deprecated in 2.1.1, so I need direction on how to handle this.
I updated my pom to:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
The 0.8 client was written in Scala. The clients were rewritten in Java as of 0.9, and the result is a completely new API, so a rewrite of your code is needed.
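As a starting point, here is a minimal sketch of a poll loop against the new consumer API in kafka-clients 2.1.1 (the bootstrap servers, group id, and topic name are placeholders):
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                // poll() drives the consumer; it replaces the blocking
                // iterator of the old kafka.consumer.KafkaStream
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}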

How to send trace ID through kafka

Microservice1 -> kafka -> Microservice2
How do I pass the trace ID when transferring data?
I'm using Spring Sleuth to create the trace ID, and I'm using compile('org.springframework.kafka:spring-kafka:2.1.2.RELEASE').
Please read the docs https://cloud.spring.io/spring-cloud-static/Finchley.SR2/single/spring-cloud.html#_sleuth_with_zipkin_over_rabbitmq_or_kafka
48.3.3 Sleuth with Zipkin over RabbitMQ or Kafka
If you want to use RabbitMQ or Kafka instead of HTTP, add the spring-rabbit or spring-kafka dependency. The default destination name is zipkin.
If using Kafka, you must set the spring.zipkin.sender.type property accordingly:
spring.zipkin.sender.type: kafka
Caution: spring-cloud-sleuth-stream is deprecated and incompatible with these destinations.
If you want Sleuth over RabbitMQ, add the spring-cloud-starter-zipkin and spring-rabbit dependencies.
The following examples show how to do so for Maven and Gradle:
Maven:
<dependencyManagement> <!-- 1 -->
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${release.train.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependency> <!-- 2 -->
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
<dependency> <!-- 3 -->
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
</dependency>
1. We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2. Add the dependency to spring-cloud-starter-zipkin. That way, all nested dependencies get downloaded.
3. To automatically configure RabbitMQ, add the spring-rabbit dependency.
Gradle:
dependencyManagement { // 1
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${releaseTrainVersion}"
    }
}
dependencies {
    compile "org.springframework.cloud:spring-cloud-starter-zipkin" // 2
    compile "org.springframework.amqp:spring-rabbit" // 3
}
1. We recommend that you add the dependency management through the Spring BOM so that you need not manage versions yourself.
2. Add the dependency to spring-cloud-starter-zipkin. That way, all nested dependencies get downloaded.
3. To automatically configure RabbitMQ, add the spring-rabbit dependency.
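For the Kafka setup asked about here, the switch then comes down to application configuration. A minimal sketch (assuming spring-kafka and spring-cloud-starter-zipkin are on the classpath; the broker address is a placeholder):
# application.properties (sketch)
# Report spans to Zipkin over Kafka instead of HTTP (topic defaults to "zipkin"):
spring.zipkin.sender.type=kafka
# Placeholder broker address used by spring-kafka:
spring.kafka.bootstrap-servers=localhost:9092
With Sleuth on the classpath, the trace ID should then also be propagated automatically in the Kafka record headers for messages produced and consumed through spring-kafka, so Microservice2 continues the same trace.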

What's the difference between the two maven modules in Kafka

I noticed that since Kafka 0.8.2.0, Kafka has shipped with a new maven module:
http://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.0</version>
</dependency>
But, it still ships with the older maven module
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.8.2.0</version>
</dependency>
What's the difference or relationship between these two modules? I noticed that the SimpleConsumer I have used before is in the kafka_2.11 module but not in kafka-clients. Does that mean that if I want to use SimpleConsumer, I still have to include the kafka_2.11 module?
SimpleConsumer was an old implementation of the consumer in Kafka. It is now deprecated in favor of the new Consumer API. In Kafka 0.8.1, the team started to re-implement the Producer/Consumer APIs, and the new implementations went into the kafka-clients maven artifact. You can trace the changes between versions: 0.8.1, 0.9.0, 1.0.0, ...
You need to use the new Consumer API if you're using Kafka >= 0.10.