Need some help, please.
I am using IntelliJ with SBT to build my apps.
I'm working on an app that reads a Kafka topic in Spark Streaming in order to do some ETL work on it. Unfortunately, I can't read from Kafka.
The KafkaUtils.createDirectStream call isn't resolving and keeps giving me a "cannot resolve symbol" error. I have done my research and it appears I have the correct dependencies.
Here is my build.sbt:
name := "ASUIStreaming"
version := "0.1"
scalacOptions += "-target:jvm-1.8"
scalaVersion := "2.11.11"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.1.0"
libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % "2.1.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0"
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.8.2.1"
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.4"
Any suggestions? I should also mention I don't have admin access on the laptop since this is a work computer, and I am using a portable JDK and IntelliJ installation. However, my colleagues at work are in the same situation and it works fine for them.
Thanks in advance!
Here is the main Spark Streaming code snippet I'm using.
Note: I've masked some of the confidential work data, such as IPs and topic names.
import org.apache.kafka.clients.consumer.ConsumerRecord
import kafka.serializer.StringDecoder
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.kafka.clients.consumer._
import org.apache.kafka.common.serialization.StringDeserializer
import scala.util.parsing.json._
import org.apache.spark.streaming.kafka._
object ASUISpeedKafka extends App
{
// Create a new Spark Context
val conf = new SparkConf().setAppName("ASUISpeedKafka").setMaster("local[*]")
val sc = new SparkContext(conf)
val ssc = new StreamingContext(sc, Seconds(2))
//Identify the Kafka Topic and provide the parameters and Topic details
val kafkaTopic = "TOPIC1"
val topicsSet = kafkaTopic.split(",").toSet
val kafkaParams = Map[String, String]
(
"metadata.broker.list" -> "IP1:PORT, IP2:PORT2",
"auto.offset.reset" -> "smallest"
)
val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder]
(
ssc, kafkaParams, topicsSet
)
}
I was able to resolve the issue. After re-creating the project and adding all the dependencies again, I found out that in IntelliJ certain code has to be on the same line, otherwise it won't compile: Scala's semicolon inference treats an opening parenthesis at the start of a new line as the beginning of a new statement.
In this case, putting the kafkaParams argument list on the same line as Map[String, String] (instead of starting it on the next line) solved the issue!
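For reference, a minimal sketch of the style that compiles, using a plain Map (the same same-line rule applies to the createDirectStream call; the broker addresses are placeholders from the question):

```scala
object SameLineArgs extends App {
  // The argument list opens on the SAME line as Map[String, String].
  // If the '(' started on the next line, Scala's semicolon inference
  // would treat it as a separate statement and this val would not compile.
  val kafkaParams = Map[String, String](
    "metadata.broker.list" -> "IP1:PORT,IP2:PORT2",
    "auto.offset.reset" -> "smallest"
  )
  assert(kafkaParams("auto.offset.reset") == "smallest")
  println(kafkaParams.size)
}
```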
Related
I am doing spark+Scala+SBT project setup in IntelliJ.
Scala Version: 2.12.8
SBT Version: 1.4.2
Java Version: 1.8
Build.sbt file:
name := "Spark_Scala_Sbt"
version := "0.1"
scalaVersion := "2.12.8"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.3.3",
"org.apache.spark" %% "spark-sql" % "2.3.3"
)
Scala file:
import org.apache.spark.sql.SparkSession
object FirstSparkApplication extends App {
val spark = SparkSession.builder
.master("local[*]")
.appName("Sample App")
.getOrCreate()
val data = spark.sparkContext.parallelize(
Seq("I like Spark", "Spark is awesome", "My first Spark job is working now and is counting down these words")
)
val filtered = data.filter(line => line.contains("awesome"))
filtered.collect().foreach(print)
}
But it's showing the error messages below:
1. Cannot resolve symbol apache.
2. Cannot resolve symbol SparkSession
3. Cannot resolve symbol sparkContext
4. Cannot resolve symbol filter.
5. Cannot resolve symbol collect.
6. Cannot resolve symbol contains.
What should I change here?
I have some basic Spark-Kafka code and am trying to run the following:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import java.util.regex.Pattern
import java.util.regex.Matcher
import org.apache.spark.streaming.kafka._
import kafka.serializer.StringDecoder
import Utilities._
object WordCount {
def main(args: Array[String]): Unit = {
val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(1))
setupLogging()
// Construct a regular expression (regex) to extract fields from raw Apache log lines
val pattern = apacheLogPattern()
// hostname:port for Kafka brokers, not Zookeeper
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
// List of topics you want to listen for from Kafka
val topics = List("testLogs").toSet
// Create our Kafka stream, which will contain (topic,message) pairs. We tack a
// map(_._2) at the end in order to only get the messages, which contain individual
// lines of data.
val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
ssc, kafkaParams, topics).map(_._2)
// Extract the request field from each log line
val requests = lines.map(x => {val matcher:Matcher = pattern.matcher(x); if (matcher.matches()) matcher.group(5)})
// Extract the URL from the request
val urls = requests.map(x => {val arr = x.toString().split(" "); if (arr.size == 3) arr(1) else "[error]"})
// Reduce by URL over a 5-minute window sliding every second
val urlCounts = urls.map(x => (x, 1)).reduceByKeyAndWindow(_ + _, _ - _, Seconds(300), Seconds(1))
// Sort and print the results
val sortedResults = urlCounts.transform(rdd => rdd.sortBy(x => x._2, false))
sortedResults.print()
// Kick it off
ssc.checkpoint("/home/")
ssc.start()
ssc.awaitTermination()
}
}
I am using the IntelliJ IDE and created the Scala project using sbt. The details of the build.sbt file are as follows:
name := "Sample"
version := "1.0"
organization := "com.sundogsoftware"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "1.4.1",
"org.apache.spark" %% "spark-streaming-kafka" % "1.4.1",
"org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)
However, when I try to build the code, it produces the following errors:
Error:scalac: missing or invalid dependency detected while loading class file 'StreamingContext.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.)
A full rebuild may help if 'StreamingContext.class' was compiled against an incompatible version of org.apache.spark.
Error:scalac: missing or invalid dependency detected while loading class file 'DStream.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.)
A full rebuild may help if 'DStream.class' was compiled against an incompatible version of org.apache.spark.
When using different Spark libraries together, their versions should always match.
The version of the Kafka connector matters as well: for Spark 2.x it should be, for example, spark-streaming-kafka-0-10.
...
scalaVersion := "2.11.8"
val sparkVersion = "2.2.0"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % sparkVersion % "provided",
"org.apache.spark" %% "spark-streaming" % sparkVersion,
"org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion,
"org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)
This is a useful site if you need to check the exact dependencies you should use:
https://search.maven.org/
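As a side note on the sbt convention (a sketch, not specific to this project): %% appends your Scala binary version to the artifact name, so combining %% with an artifact that already carries a _2.11 suffix would look for a non-existent ..._2.11_2.11 artifact. With scalaVersion set to 2.11.x, these two lines resolve the same artifact:

```scala
// %% appends the Scala binary version (_2.11) to the artifact name
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.2.0"
// Equivalent: suffix written out explicitly, with a single %
libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.2.0"
```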
I am new to Spark and I am trying this example:
import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.{Vectors,Vector}
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.streaming.{Seconds, StreamingContext}
object App {
def main(args: Array[String]) {
if (args.length != 5) {
System.err.println(
"Usage: StreamingKMeansExample " +
"<trainingDir> <testDir> <batchDuration> <numClusters> <numDimensions>")
System.exit(1)
}
// $example on$
val conf = new SparkConf().setAppName("StreamingKMeansExample")
val ssc = new StreamingContext(conf, Seconds(args(2).toLong))
val trainingData = ssc.textFileStream(args(0)).map(Vectors.parse)
val testData = ssc.textFileStream(args(1)).map(LabeledPoint.parse)
val model = new StreamingKMeans()
.setK(args(3).toInt)
.setDecayFactor(1.0)
.setRandomCenters(args(4).toInt, 0.0)
model.trainOn(trainingData)
model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()
ssc.start()
ssc.awaitTermination()
// $example off$
}
}
but it cannot resolve LabeledPoint.parse; only the apply and unapply methods are available, not parse.
It's probably the versions I am using. This is my build.sbt:
name := "myApp"
version := "0.1"
scalaVersion := "2.11.0"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "2.2.0",
"org.apache.spark" %% "spark-mllib" % "2.3.1"
)
EDIT: I made a custom LabeledPoint class, since nothing else worked; that solved the compile problem. But when I run it, the predicted values are always zero.
the input txt for train is
[36.72, 67.44]
[92.20, 11.81]
[90.85, 48.07]
.....
and the test txt is
(2, [9.26,68.19])
(1, [3.27,9.14])
(9, [66.66,13.85])
....
So why are the result values (2,0), (1,0), (9,0)? Is there a problem with LabeledPoint?
I'm trying to build and run a Scala/Spark project in IntelliJ IDEA.
I have added org.apache.spark:spark-sql_2.11:2.0.0 in global libraries and my build.sbt looks like below.
name := "test"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.0.0"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.0.0"
I still get an error that says
unknown artifact. unable to resolve or indexed
under spark-sql.
When tried to build the project the error was
Error:(19, 26) not found: type sqlContext, val sqlContext = new sqlContext(sc)
I have no idea what the problem could be. How to create a Spark/Scala project in IntelliJ IDEA?
Update:
Following the suggestions, I updated the code to use SparkSession, but it's still unable to read a CSV file. What am I doing wrong here? Thank you!
val spark = SparkSession
.builder()
.appName("Spark example")
.config("spark.some.config.option", "some value")
.getOrCreate()
import spark.implicits._
val testdf = spark.read.csv("/Users/H/Desktop/S_CR_IP_H.dat")
testdf.show() //it doesn't show anything
//pdf.select("DATE_KEY").show()
The SQL context class name needs upper-case letters, as below:
val sqlContext = new SQLContext(sc)
SQLContext is deprecated in newer versions of Spark, so I would suggest you use SparkSession:
val spark = SparkSession.builder().appName("testings").getOrCreate
val sqlContext = spark.sqlContext
If you want to set the master through your code instead of from the spark-submit command, you can set .master as well (and configs too):
val spark = SparkSession.builder().appName("testings").master("local").config("configuration key", "configuration value").getOrCreate
val sqlContext = spark.sqlContext
Update
Looking at your sample data
DATE|PID|TYPE
8/03/2017|10199786|O
and testing your code
val testdf = spark.read.csv("/Users/H/Desktop/S_CR_IP_H.dat")
testdf.show()
I had output as
+--------------------+
| _c0|
+--------------------+
| DATE|PID|TYPE|
|8/03/2017|10199786|O|
+--------------------+
Now, adding .option for the delimiter and header as
val testdf2 = spark.read.option("delimiter", "|").option("header", true).csv("/Users/H/Desktop/S_CR_IP_H.dat")
testdf2.show()
Output was
+---------+--------+----+
| DATE| PID|TYPE|
+---------+--------+----+
|8/03/2017|10199786| O|
+---------+--------+----+
Note: I have used .master("local") for SparkSession object
(That should really be part of the Spark official documentation)
Replace the following from your configuration in build.sbt:
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.0.0"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.0.0"
with the following:
// the latest Scala version that is compatible with Spark
scalaVersion := "2.11.11"
// Few changes here
// 1. Use double %% so you don't have to worry about Scala version
// 2. I doubt you need spark-core dependency
// 3. Use the latest Spark version
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0"
Don't worry about IntelliJ IDEA telling you the following:
unknown artifact. unable to resolve or indexed
It's just something you have to live with and the only solution I could find is to...accept the annoyance.
val sqlContext = new sqlContext(sc)
The real type is SQLContext, but as the scaladoc says:
As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.
Please use SparkSession instead.
The entry point to programming Spark with the Dataset and DataFrame API.
See the Spark official documentation to read on SparkSession and other goodies. Start from Getting Started. Have fun!
I am following a Spark example from here http://spark.apache.org/docs/latest/sql-programming-guide.html.
val people = sc.textFile("../spark-training/simple-app/examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt))
people.registerTempTable("people")
I get the error that registerTempTable is not recognized.
After looking at some Github projects, it seems to me that I have the necessary imports:
import org.apache.spark.{SparkConf, SparkContext}
val conf = new SparkConf().setAppName("Select people")
val sc = new SparkContext(conf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
And build.sbt:
name := "exercises"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.0.0"
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.6.1"
What am I missing?
In your code, people is an RDD. registerTempTable is a DataFrame API, not an RDD API. Your code drops the `toDF()` bit from the end of the example. Your first line should be as below:
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
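Put together, a minimal self-contained sketch of what the example expects (assuming the Spark 1.6-style SQLContext from the question, a local master, and a hypothetical Person case class matching the file's two columns):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical case class matching people.txt's "name,age" lines;
// it must be defined outside the method body for toDF() to work.
case class Person(name: String, age: Int)

object SelectPeople extends App {
  val conf = new SparkConf().setAppName("Select people").setMaster("local[*]")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc)
  import sqlContext.implicits._ // brings toDF() into scope

  // toDF() turns the RDD[Person] into a DataFrame, which has registerTempTable
  val people = sc.textFile("examples/src/main/resources/people.txt")
    .map(_.split(","))
    .map(p => Person(p(0), p(1).trim.toInt))
    .toDF()
  people.registerTempTable("people")
  sqlContext.sql("SELECT name FROM people WHERE age >= 13").show()
}
```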