I have an MQTT broker and a Kafka broker running on Ubuntu. I can publish messages to Kafka console consumers through producers. However, when I try to publish a message to Kafka through the connector from this repository https://github.com/SINTEF-9012/kafka-mqtt-source-connector in standalone mode, it throws the following error:
.
These are the configurations for the
connect-standalone.properties file:
source connector.properties file:
Please help me connect Mosquitto to Kafka.
The error indicates that this dependency, declared in the POM, is not part of the JAR that you've put on the plugin path:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.10.0</version>
</dependency>
Looking at the logs, Connect seems to have used the plain SNAPSHOT jar file, not the with-dependencies one.
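If you want to double-check which jar you actually deployed, here is a minimal sketch that inspects the jar on the plugin path for the log4j-api classes; the jar path below is illustrative, not the exact file name from your build:

import java.util.jar.JarFile;

public class PluginJarCheck {
    public static void main(String[] args) throws Exception {
        // Point this at the jar you copied into plugin.path (path is illustrative).
        String jarPath = "/opt/connect-plugins/kafka-mqtt-source-connector-with-dependencies.jar";
        try (JarFile jar = new JarFile(jarPath)) {
            boolean bundled = jar.getEntry("org/apache/logging/log4j/LogManager.class") != null;
            System.out.println(bundled
                    ? "log4j-api classes are bundled in " + jarPath
                    : "log4j-api classes are MISSING from " + jarPath);
        }
    }
}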
I have five Kafka consuming services and one Kafka producing service. I have just rolled out a new Avro schema in a consumer library in each of the Java consuming microservices. I have not made the producer-side changes yet. One of the consuming services is now failing to deserialize anything, while the other four are working fine.
I get this exception:
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.sun.ws.rs.ext.RuntimeDelegateImpl
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:122)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
at javax.ws.rs.core.UriBuilder.newInstance(UriBuilder.java:69)
at javax.ws.rs.core.UriBuilder.fromPath(UriBuilder.java:111)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:656)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:642)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:217)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaBySubjectAndId(CachedSchemaRegistryClient.java:291)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaById(CachedSchemaRegistryClient.java:276)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer$DeserializationContext.schemaFromRegistry(AbstractKafkaAvroDeserializer.java:273)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:97)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:87)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:62)
Things that changed on both the producer and consumer sides are the versions of:
kafka-avro-serializer up to 6.0.0
kafka-clients up to 2.0.0
As a result, the record that arrives at this consumer is null, which in our configuration blocks our queue because we don't advance the manual offset.
Hope you can help.
You need to include the jersey-client library.
You can add the .jar manually or pull it in with your build tool (a quick classpath check is sketched after the snippets below):
Maven
<dependency>
    <groupId>org.glassfish.jersey.core</groupId>
    <artifactId>jersey-client</artifactId>
    <version>3.0.0</version>
</dependency>
Gradle
compile group: 'org.glassfish.jersey.core', name: 'jersey-client', version: '3.0.0'
SBT
libraryDependencies += "org.glassfish.jersey.core" % "jersey-client" % "3.0.0"
...
(...)
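To verify the fix, here is a minimal sketch, assuming the javax.ws.rs API and a matching JAX-RS implementation such as jersey-client are on the classpath; it exercises the same UriBuilder lookup that fails in the stack trace above (the path template is illustrative):

import javax.ws.rs.core.UriBuilder;

public class JaxRsDelegateCheck {
    public static void main(String[] args) {
        // UriBuilder.fromPath() resolves javax.ws.rs.ext.RuntimeDelegate internally,
        // i.e. the same lookup that throws ClassNotFoundException above when no
        // JAX-RS implementation is present on the classpath.
        UriBuilder builder = UriBuilder.fromPath("/schemas/ids/{id}");
        System.out.println(builder.build(1));
    }
}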
I noticed that since Kafka 0.8.2.0, Kafka has shipped with a new maven module:
http://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.0</version>
</dependency>
But it still ships with the older Maven module:
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.8.2.0</version>
</dependency>
What's the difference or relationship between these two modules? I noticed that the SimpleConsumer I have used before is in the kafka_2.11 module, but not in kafka-clients. Does that mean that if I want to use SimpleConsumer, I still have to include the kafka_2.11 module?
SimpleConsumer was an old implementation of the consumer in Kafka. It's now deprecated in favor of the new Consumer API. In Kafka 0.8.1, the team started to re-implement the Producer/Consumer APIs, and that work went into the kafka-clients Maven artifact. You can trace the changes between versions: 0.8.1, 0.9.0, 1.0.0, ...
You need to use the new Consumer API if you're using Kafka >= 0.10.
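For reference, here is a minimal sketch of the new Consumer API from the kafka-clients artifact; the broker address, group id, and topic name are illustrative:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // poll(Duration) needs kafka-clients 2.0+; older clients use poll(long).
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}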
We are using Kafka 2.10-0.9.0.2.4.2.0-258 in our environments. We are getting the exception below with the Kafka console consumer on a few topics. I am aware that the messages coming into these topics are sometimes big, but they do not exceed message.max.bytes.
./kafka-console-consumer.sh --zookeeper xxx:2181,xxx:2181,xxx:2181 --topic test-topic
{metadata.broker.list=xxx:9092,xxx:9092,xxx:9092, request.timeout.ms=30000, client.id=console-consumer-76015, security.protocol=PLAINTEXT}
[2016-08-28 21:27:54,795] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at kafka.message.Message.sliceDelimited(Message.scala:237)
at kafka.message.Message.key(Message.scala:224)
at kafka.message.MessageAndMetadata.key(MessageAndMetadata.scala:30)
at kafka.consumer.OldConsumer.receive(BaseConsumer.scala:84)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:109)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:69)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:47)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages
I decreased replica.fetch.max.bytes to match message.max.bytes and also set num.replica.fetchers to 2, as suggested in the link below, but that did not resolve the issue.
https://issues.apache.org/jira/browse/KAFKA-1196
Any idea what else I should do to make it work?
Any help would be appreciated.
Thanks in advance.
I was having the exact same issue. The root cause was an incompatibility between the Kafka jar that your Kafka installation uses and the one that you used to develop and run your producer. You can find which version of the Kafka jars your installation is using in /usr/hdp/current/kafka-broker/libs.
In my case, my Kafka installation was using kafka_2.10-0.9.0.2.4.2.0-258.jar, but the Kafka jar that I bundled with my producer was 0.10.0.1. Once I switched to 0.9.0.2.4.2.0-258, it worked.
If your cluster is HDP and you are using Maven to build your producer, you can find all jar dependencies here: http://repo.hortonworks.com/content/repositories/releases/
For Maven, here is what you have to use:
Repository:
<repositories>
    <repository>
        <id>org.hortonworks</id>
        <url>http://repo.hortonworks.com/content/repositories/releases/</url>
    </repository>
</repositories>
Dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.9.0.2.4.2.0-258</version>
    <scope>compile</scope>
    <exclusions>
        <exclusion>
            <artifactId>jmxri</artifactId>
            <groupId>com.sun.jmx</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jms</artifactId>
            <groupId>javax.jms</groupId>
        </exclusion>
        <exclusion>
            <artifactId>jmxtools</artifactId>
            <groupId>com.sun.jdmk</groupId>
        </exclusion>
    </exclusions>
</dependency>
I had the same issue (error/exception) when running the consumer. However, the scenario was slightly different.
I was initially working with Kafka version 2.11-0.10.2.0 (OLD) and then had to change to 2.10-0.9.0.1 (NEW).
So when I downloaded the NEW setup and just started the ZooKeeper, broker, producer and finally the consumer, I got the above error. I was using the default producer and consumer scripts shipped in the download itself and testing the quickstart guide.
I got the same error as above, and I could not figure out how the producer inside the freshly downloaded pack could be using a different jar as explained in the answer above.
So I realized that there must be a common place referenced by all the Kafka instances, which I found to be the /tmp/kafka-logs and /tmp/zookeeper folders.
Once I deleted them and restarted my NEW Kafka instance, I got rid of the above exception and things worked smoothly.
Hope this helps someone else facing the same error in a different scenario.
Shabir
When trying to save data to Cassandra (in Scala), I get the following exception:
java.lang.ClassCastException:
com.datastax.driver.core.DefaultResultSetFuture cannot be cast to
com.google.common.util.concurrent.ListenableFuture
Please note that I do not get this error every time, but it comes up randomly once in a while which makes it more dangerous in production.
I am using YARN and I have shaded com.google.** to avoid the Guava symbol clash.
Here's the code snippet:
rdd.saveToCassandra(keyspace,"movie_attributes", SomeColumns("movie_id","movie_title","genre"))
Any help would be much appreciated.
UPDATE
Adding details from the pom file as requested:
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.5.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.5.0</version>
</dependency>
Shading Guava:
<relocation> <!-- Conflicts between Cassandra Java driver and YARN -->
    <pattern>com.google</pattern>
    <shadedPattern>oryx.com.google</shadedPattern>
    <includes>
        <include>com.google.common.**</include>
    </includes>
</relocation>
Spark version: 1.5.2
Cassandra version: 2.2.3
Almost everyone who works on C* and Spark has seen this type of error. The root cause is explained here.
The C* driver depends on a relatively new version of Guava, while Spark depends on an older Guava. To solve this before connector 1.6.2, you needed to explicitly embed the C* driver and Guava with your application.
Since 1.6.2 and 2.0.0-M3, the connector by default ships with the correct C* driver and Guava shaded, so you should be OK with just the connector artifact included in your project.
Things get tricky if your Spark application uses other libraries that depend on the C* driver. Then you will have to manually include the un-shaded version of the connector, the correct C* driver and shaded Guava, and deploy a fat jar; you essentially make your own connector package. In this case, you can't use --packages to launch your Spark application anymore.
tl;dr
Use connector 1.6.2/2.0.0-M3 or above; 99% of the time you should be OK.
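If you want to confirm which Guava actually wins on the classpath at runtime, here is a small diagnostic sketch; run it in the same JVM/classloader as the failing code, for example from your Spark driver:

import com.google.common.util.concurrent.ListenableFuture;
import java.security.CodeSource;

public class GuavaLocationCheck {
    public static void main(String[] args) {
        // Prints the jar that ListenableFuture is loaded from, which shows whether
        // the shaded or the unshaded Guava is the one your driver classes see.
        CodeSource source = ListenableFuture.class.getProtectionDomain().getCodeSource();
        System.out.println(source != null ? source.getLocation() : "unknown (bootstrap classpath)");
    }
}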
In my Java web app I'm sending messages to Kafka.
I would like to compress my messages before sending them, so I'm setting this in my producer properties:
props.put("compression.codec", "2");
As I understand it, "2" stands for snappy, but when sending a message I get:
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
at org.xerial.snappy.SnappyNative.maxCompressedLength(Native Method)
at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:316)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)
at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:66)
at kafka.message.SnappyCompression.<init>(CompressionUtils.scala:61)
at kafka.message.CompressionFactory$.apply(CompressionUtils.scala:82)
at kafka.message.CompressionUtils$.compress(CompressionUtils.scala:109)
at kafka.message.MessageSet$.createByteBuffer(MessageSet.scala:71)
at kafka.message.ByteBufferMessageSet.<init>(ByteBufferMessageSet.scala:44)
at kafka.producer.async.DefaultEventHandler$$anonfun$3.apply(DefaultEventHandler.scala:94)
at kafka.producer.async.DefaultEventHandler$$anonfun$3.apply(DefaultEventHandler.scala:82)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
at scala.collection.Iterator$class.foreach(Iterator.scala:772)
at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
at scala.collection.mutable.HashMap.map(HashMap.scala:45)
at kafka.producer.async.DefaultEventHandler.serialize(DefaultEventHandler.scala:82)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:44)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:116)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:95)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:71)
at scala.collection.immutable.Stream.foreach(Stream.scala:526)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:70)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:41)
To resolve it I tried adding the snappy dependency to my pom:
<dependency>
    <groupId>org.xerial.snappy</groupId>
    <artifactId>snappy-java</artifactId>
    <version>${snappy-version}</version>
    <scope>provided</scope>
</dependency>
and adding the jar to my Jetty server under /lib/ext, but I'm still getting the error.
If I set "0" instead of "2" in the "compression.codec" property I do not get the exception, as expected.
What should I do in order to be able to use snappy compression?
This is my snappy version (should I use a different one?):
1.1.0.1
I'm deploying my app on Jetty 8.1.9, which runs on Ubuntu 12.10.
<dependency>
    <groupId>org.xerial.snappy</groupId>
    <artifactId>snappy-java</artifactId>
    <version>1.1.1.3</version>
</dependency>
I had the same issue and the dependency above solved my problem. The jar contains native libraries for all operating systems. Below is my development environment, and a producer sketch with snappy enabled follows it:
JDK version: 1.7.0_76
Kafka version: 2.10-0.8.2.1
Zookeeper version: 3.4.6
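For reference, here is a minimal sketch of the old (0.8-era) producer API with snappy enabled once the snappy-java jar with the native libraries is on the classpath; the broker address and topic are illustrative, and "snappy" has the same effect as the numeric codec "2":

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SnappyProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("compression.codec", "snappy"); // same effect as "2"

        // The old producer compresses message sets on the client, which is where
        // the missing native snappy library triggers the UnsatisfiedLinkError above.
        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test-topic", "hello with snappy"));
        producer.close();
    }
}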