Exception in thread "main" java.lang.NoSuchMethodError: scala.Tuple2._1$mcI$sp()I - producer

// Imports for the old Scala-based producer API (Kafka 0.7.x era)
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("zk.connect", "localhost:2181");
props.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);
ProducerData<String, String> data = new ProducerData<String, String>("test-topic", "test-message");
producer.send(data);
When I try to execute this code I get the exception Exception in thread "main" java.lang.NoSuchMethodError: scala.Tuple2._1$mcI$sp()I. I have added all the Scala-related jar files. Please suggest what might be wrong.

I faced the same issue. Check your classpath to see whether sbt-launch.jar precedes scala-library.jar. Both jars contain a scala.Tuple2 class; the one from scala-library is the correct one.
Placing scala-library.jar higher in the classpath solved the issue.
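To confirm which jar is actually supplying the class at runtime, you can ask the JVM directly. A minimal diagnostic sketch (the class name WhichJar is just an illustration, not part of any library):

// Prints the jar that the running JVM loaded scala.Tuple2 from, which
// shows whether sbt-launch.jar is shadowing scala-library.jar.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class<?> tuple2 = Class.forName("scala.Tuple2");
        System.out.println(tuple2.getProtectionDomain().getCodeSource().getLocation());
    }
}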
Thanks,
Hussain

Problem reading AVRO messages from a Kafka Topic using Structured Spark Streaming (Spark Version 2.3.1.3.0.1.0-187/ Scala version 2.11.8)

I am invoking spark-shell like this:
spark-shell --jars kafka-clients-0.10.2.1.jar,spark-sql-kafka-0-10_2.11-2.3.0.cloudera1.jar,spark-streaming-kafka-0-10_2.11-2.3.0.jar,spark-avro_2.11-2.4.0.jar,avro-1.9.1.jar
After that, I read from a Kafka topic using readStream():
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka-1.x:9093,kafka-2.x:9093,kafka-0.x:9093")
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.ssl.protocol", "TLSv1.2")
  .option("kafka.sasl.mechanism", "PLAIN")
  .option("kafka.sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"XXXXXXXXXX\";")
  .option("subscribe", "test-topic")
  .option("startingOffsets", "latest")
  .load()
Then I read the AVRO schema file:
val jsonFormatSchema = new String(Files.readAllBytes(Paths.get("/root/avro_schema.json")))
Then I build the DataFrame that matches the AVRO schema:
val DataLineageDF = df.select(from_avro(col("value"),jsonFormatSchema).as("DataLineage")).select("DataLineage.*")
This throws an error:
java.lang.NoSuchMethodError: org.apache.avro.Schema.getLogicalType()Lorg/apache/avro/LogicalType;
I could fix this problem by replacing the jar spark-avro_2.11-2.4.0.jar with spark-avro_2.11-2.4.0-palantir.31.jar.
Issue:
DataLineageDF.writeStream.format("console").outputMode("append").trigger(Trigger.ProcessingTime("10 seconds")).start
fails with this error:
Exception in thread "stream execution thread for [id = ad836d19-0f29-499a-adea-57c6d9c630b2, runId = 489b1123-a2b2-48ea-9d24-e6744e0959b0]" java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.boxedType(Lorg/apache/spark/sql/types/DataType;)Ljava/lang/String;
which seems to be related to incompatible jars. If anyone has any idea what's going wrong, please comment.

Exception on startup: NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()

I tried to update the Spring Kafka version but got an exception.
Spring Kafka version 2.3.4.RELEASE
Spring Boot version 2.2.2.RELEASE
Kafka-clients version 2.3.1
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.kafka.core.KafkaTemplate]: No default constructor found; nested exception is java.lang.NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:83)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1312)
... 101 more
Caused by: java.lang.NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:78)
... 102 more
You need to show your code and configuration, and the full stack trace (you should never edit or truncate stack traces here). The error seems quite clear:
Caused by: java.lang.NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()
There is no no-arg constructor; it needs a producer factory. We need to see the code and configuration to figure out what is trying to create a template with no producer factory.
Normally, Spring Boot will automatically configure a KafkaTemplate for you.
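For reference, a minimal sketch of wiring a KafkaTemplate to a producer factory explicitly (the bootstrap server and String serializers here are assumptions for illustration):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        // KafkaTemplate has no no-arg constructor; it must be given a ProducerFactory.
        return new KafkaTemplate<>(producerFactory());
    }
}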
Thank you! The problem was in the tests: I had declared the wrong generic type for KafkaTemplate. I used KafkaTemplate<String, Bytes> instead of the KafkaTemplate<String, Message> that I use in the application code, so the test Spring context could not find a matching bean to autowire.
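A sketch of the test-side mismatch described in that comment (Message stands in for the application's own payload type; the stub class here only makes the example self-contained):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;

// Stub standing in for the application's payload type.
class Message {}

class ProducerServiceTest {
    // The test originally declared KafkaTemplate<String, Bytes>; since the
    // context only defines a KafkaTemplate<String, Message> bean, no matching
    // bean was found and instantiation fell back to the missing no-arg constructor.
    @Autowired
    private KafkaTemplate<String, Message> template; // generic type now matches the bean
}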

Class org.apache.kafka.abstracts.serialization.StringDeserializer could not be found

I am working with Kafka and getting the message in the description. I am setting the properties for the deserializer in my consumer class.
props.put("key.deserializer", "org.apache.kafka.abstracts.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.abstracts.serialization.StringDeserializer");
producer = new KafkaProducer<>(props);
Still, at runtime I'm getting an error that the deserializer could not be found. We recently upgraded to 10.1.1 from 10.0.1; is there a change in there that I am missing?
Kafka's String deserializer is
org.apache.kafka.common.serialization.StringDeserializer
(https://github.com/apache/kafka/blob/0.10.0/clients/src/main/java/org/apache/kafka/common/serialization/StringDeserializer.java). Note the package is common.serialization, not abstracts.serialization.
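One way to rule out package-name typos entirely is to reference the class itself when building the config; a small sketch:

import java.util.Properties;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
// Referencing the class directly means a wrong package name fails at
// compile time instead of with a runtime class-not-found error.
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());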

Kafka Streams failed to rebalance on startup

I'm trying to run a Kafka Streams application and I'm running into the following exception when the stream starts.
Here are the configurations that I'm using on Kafka 0.10.1:
final Map<String, String> properties = new HashMap<>();
properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "some app id");
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
properties.put(StreamsConfig.CLIENT_ID_CONFIG, "some app id");
properties.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "localhost:2181");
properties.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, "org.apache.kafka.common.serialization.Serdes$StringSerde");
properties.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, "org.apache.kafka.common.serialization.Serdes$StringSerde");
The exception that I'm getting:
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: stream-thread [StreamThread-1] Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:410)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
Caused by: java.lang.NullPointerException
at java.io.File.<init>(File.java:360)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:157)
at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:163)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
... 1 more
It seems like I need to set state.dir, but that doesn't seem to work. On Kafka 0.10.1 it already defaults to /tmp/kafka-streams. Does anyone have any ideas?
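For reference, this is how the state directory can be pinned explicitly alongside the configuration above; a minimal sketch (the path is an assumption and must be writable by the process):

// Pin Kafka Streams to an explicit, writable state directory instead of
// relying on the /tmp/kafka-streams default.
properties.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");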

Flink reading data from Kafka

I wrote a simple example:
// Imports assuming the old Flink 0.10.x connector API that provides FlinkKafkaConsumer082
import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

val env = StreamExecutionEnvironment.getExecutionEnvironment
val properties = new Properties()
properties.setProperty("bootstrap.servers", "xxxxxx")
properties.setProperty("zookeeper.connect", "xxxxxx")
properties.setProperty("group.id", "caffrey")
val stream = env
  .addSource(new FlinkKafkaConsumer082[String]("topic", new SimpleStringSchema(), properties))
  .print()
env.execute("Flink Kafka Example")
When I run this code I get an error like this:
[error] Class org.apache.flink.streaming.api.checkpoint.CheckpointNotifier not found - continuing with a stub.
I googled this error and found that CheckpointNotifier is an interface. I really don't understand what I did wrong.
Since CheckpointNotifier is a class from an older Flink version, I suspect that you are mixing different Flink dependencies in your pom file.
Make sure all Flink dependencies have the same version.