Spark + Kafka Integration error. NoClassDefFoundError: org/apache/spark/sql/internal/connector/SimpleTableProvider - scala

I am using Kafka 2.5.0 and Spark 3.0.0. I'm trying to import some data from Kafka into Spark. The following code snippet gives me an error:
spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "topic1").load()
The error I get says
java.lang.NoClassDefFoundError: org/apache/spark/sql/internal/connector/SimpleTableProvider

This error is mainly due to a Spark/Kafka dependency conflict. The org.apache.spark.sql.internal.connector.SimpleTableProvider class only exists in Spark 3.x, so the error usually means the spark-sql-kafka-0-10 connector on the classpath was built for a different Spark or Scala version than the one actually running.
Check the Scala version the artifact supports in the Maven repository if you have not already.
If the error still occurs, share more details such as the groupId, artifactId and version you are using.
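For reference, a minimal sbt setup that keeps those versions aligned might look like the sketch below (the Scala patch version and the Provided scope are assumptions based on a stock Spark 3.0.0 build for Scala 2.12; adjust to match your cluster):

// build.sbt (sketch): the spark-sql-kafka connector version must match the
// Spark version, and the _2.12 suffix must match the Scala build of Spark 3.0.0
scalaVersion := "2.12.10"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % "3.0.0" % Provided,
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "3.0.0"
)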

Related

java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V - Flink on EMR

I am trying to run a Flink (v 1.13.1) application on EMR (v 5.34.0).
My Flink application uses Scallop (v 4.1.0) to parse the arguments passed.
The Scala version used for the Flink application is 2.12.7.
I keep getting the error below when I submit the Flink application to the cluster. Any clue or help is highly appreciated.
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at org.rogach.scallop.Scallop.<init>(Scallop.scala:63)
at org.rogach.scallop.Scallop$.apply(Scallop.scala:13)
I resolved the issue by downgrading Scala to 2.11. The Flink 1.13.1 Scala shell REPL on EMR reported Scala version 2.11.12, so I downgraded to that version of Scala and the problem disappeared.
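For illustration, a build consistent with that fix might look like the sketch below (sbt syntax; the flink-streaming-scala artifact, the Provided scope, and the assumption that Scallop 4.1.0 publishes a 2.11 build are mine, the essential point is that everything resolves against Scala 2.11):

// build.sbt (sketch): align the project and all cross-built dependencies on Scala 2.11
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  // Flink artifacts built for Scala 2.11, matching the EMR/Flink 1.13.1 runtime
  "org.apache.flink" %% "flink-streaming-scala" % "1.13.1" % Provided,
  // Scallop cross-built for 2.11, so no reference to scala.Product.$init$ (a 2.12+ trait method)
  "org.rogach" %% "scallop" % "4.1.0"
)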

spark, kafka integration issue: object kafka is not a member of org.apache.spark.streaming

I am receiving an error while building my Spark application (Scala) in the IntelliJ IDE.
It is a simple application that uses a Kafka stream for further processing. I have added all the jars and the IDE does not show any unresolved imports or code statements.
However, when I try to build the artifact, I get two errors stating that
Error:(13, 35) object kafka is not a member of package
org.apache.spark.streaming
import org.apache.spark.streaming.kafka.KafkaUtils
Error:(35, 60) not found: value KafkaUtils
val messages: ReceiverInputDStream[(String, String)] = KafkaUtils.createStream(streamingContext,zkQuorum,"myGroup",topics)
I have seen similar questions, but most people complain about this issue while submitting to Spark. However, I am one step before that and am merely building the jar file that would ultimately be submitted to Spark. On top of that, I am using the IntelliJ IDE and am fairly new to Spark and Scala; I'm lost here.
Below is a screenshot of the IntelliJ error.
Thanks
Omer
The reason is that you need to add the spark-streaming-kafka jar that matches your Kafka and Scala versions (spark-streaming-kafka-<kafka.version>_<scala.version>) to your pom.xml, as well as to your Spark lib directory.
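As a rough sketch of what that dependency looks like (sbt coordinates shown for brevity; the 1.6.3 version and the note about the 0-8/0-10 variants are assumptions, substitute the versions that match your cluster):

// build.sbt (sketch): for Spark 1.x the artifact is spark-streaming-kafka_<scala.version>;
// for Spark 2.x use spark-streaming-kafka-0-8 or spark-streaming-kafka-0-10 instead
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" % "1.6.3"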

Spark Kafka - Issue while running from Eclipse IDE

I am experimenting with the Spark Kafka integration, and I want to test the code from my Eclipse IDE. However, I get the error below:
java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
at kafka.utils.Pool.<init>(Pool.scala:28)
at kafka.consumer.FetchRequestAndResponseStatsRegistry$.<init>(FetchRequestAndResponseStats.scala:60)
at kafka.consumer.FetchRequestAndResponseStatsRegistry$.<clinit>(FetchRequestAndResponseStats.scala)
at kafka.consumer.SimpleConsumer.<init>(SimpleConsumer.scala:39)
at org.apache.spark.streaming.kafka.KafkaCluster.connect(KafkaCluster.scala:52)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:345)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:342)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.streaming.kafka.KafkaCluster.org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers(KafkaCluster.scala:342)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:125)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:112)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:403)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:532)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at com.capiot.platform.spark.SparkTelemetryReceiverFromKafkaStream.executeStreamingCalculations(SparkTelemetryReceiverFromKafkaStream.java:248)
at com.capiot.platform.spark.SparkTelemetryReceiverFromKafkaStream.main(SparkTelemetryReceiverFromKafkaStream.java:84)
UPDATE:
The versions that I am using are:
scala - 2.11
spark-streaming-kafka - 1.4.1
spark - 1.4.1
Can anyone help resolve this issue? Thanks in advance.
You have the wrong version of Scala. You need 2.10.x per
https://spark.apache.org/docs/1.4.1/
"For the Scala API, Spark 1.4.1 uses Scala 2.10."
This might be too late to help the OP, but when using Kafka streaming with Spark, you need to make sure that you use the right jar file.
For example, in my case I have Scala 2.11 (the minimum required for Spark 2.0, which I'm using), and given that the Spark Kafka integration requires version 2.0.0, I have to use the artifact spark-streaming-kafka-0-8-assembly_2.11-2.0.0-preview.jar.
Notice that my Scala version and the artifact version can be seen in the 2.11-2.0.0 suffix.
Hope this helps someone.

NoSuchMethodError while running Spark Streaming job on HDP 2.2

I am trying to run a simple streaming job on the HDP 2.2 Sandbox but am facing a java.lang.NoSuchMethodError. I am able to run the SparkPi example on this machine without an issue.
Following are the versions I am using-
<kafka.version>0.8.2.0</kafka.version>
<twitter4j.version>4.0.2</twitter4j.version>
<spark-version>1.2.1</spark-version>
<scala.version>2.11</scala.version>
Code Snippet -
val sparkConf = new SparkConf().setAppName("TweetSenseKafkaConsumer").setMaster("yarn-cluster");
val ssc = new StreamingContext(sparkConf, Durations.seconds(5));
Error text from Node Manager UI-
Exception in thread "Driver" scala.MatchError:
java.lang.NoSuchMethodError:
scala.Predef$.$conforms()Lscala/Predef$$less$colon$less; (of class
java.lang.NoSuchMethodError) at
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:432)
15/02/12 15:07:23 INFO yarn.ApplicationMaster: Waiting for spark
context initialization ... 1 15/02/12 15:07:33 INFO
yarn.ApplicationMaster: Waiting for spark context initialization ... 2
Job is accepted in YARN but it never goes into RUNNING status.
I suspect it is due to Scala version differences. I tried changing the POM configuration but am still not able to fix the error.
Thank you for your help in advance.
Earlier I specified a dependency on spark-streaming_2.10 (Spark compiled with Scala 2.10) but did not specify a dependency on the Scala compiler itself. It seems Maven automatically pulled in 2.11 (maybe due to some other dependency). While trying to debug this issue, I had added a dependency on the Scala 2.11 compiler. After Paul's comment I changed that Scala dependency version to 2.10 and it is working.
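In sbt terms, the combination that ended up working looks roughly like this (a sketch built from the versions in the question; the 2.10.4 patch version and Provided scope are assumptions, the essential point is that the Scala compiler and every _2.10 Spark artifact agree):

// build.sbt (sketch): pin Scala 2.10 so Maven/sbt cannot pull in 2.11 transitively
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming"       % "1.2.1" % Provided,
  "org.apache.spark" %% "spark-streaming-kafka" % "1.2.1",
  "org.twitter4j"    %  "twitter4j-stream"      % "4.0.2"
)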

How to fix akka version compatibility issues?

I was thinking of using Spark and Redis together with SBT.
It runs fine if I comment out the Spark dependency; if I include the Spark dependency I get:
Exception in thread "main" java.lang.NoSuchMethodError: akka.actor.ActorSystem.dispatcher()Lscala/concurrent/ExecutionContextExecutor;
at redis.RedisClientActorLike.<init>(Redis.scala:31)
at redis.RedisClient.<init>(Redis.scala:69)
I have no issues when I do not include rediscala. When I do include rediscala, I get weird errors about Akka.
How do I get around this?
It appears that those versions of Spark and rediscala are using incompatible versions of Akka. Spark 1.1.0 uses Akka 2.2.3, and rediscala 1.3.1 uses Akka 2.3.4. There are changes between Akka 2.2.x and 2.3.x that cause issues, and your project currently has both as transitive dependencies.
You either need to downgrade rediscala to 1.2 (which uses Akka 2.2.x), or upgrade Spark to 1.2-snapshot (which uses Akka 2.3.x).
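For example, the downgrade option would look roughly like this in build.sbt (a sketch; the com.etaty.rediscala groupId used by those older releases is an assumption, check the artifact page for the exact coordinates):

// build.sbt (sketch): rediscala 1.2 still depends on Akka 2.2.x, matching Spark 1.1.0
libraryDependencies ++= Seq(
  "org.apache.spark"    %% "spark-core" % "1.1.0",
  "com.etaty.rediscala" %% "rediscala"  % "1.2"
)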