This word count works as expected:
System.setProperty("hadoop.home.dir", "H:\\winutils");
val sparkConf = new SparkConf().setAppName("GroupBy Test").setMaster("local[1]")
val sc = new SparkContext(sparkConf)
def main(args: Array[String]) {
  val text_file = sc.textFile("h:\\data\\small.txt")
  val counts = text_file.flatMap(line => line.split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)
  counts.foreach(println)
}
All output messages are prefixed with [error], for example:
[error] 16/03/17 12:13:58 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on por
[error] 16/03/17 12:13:58 INFO NettyBlockTransferService: Server created on 55715
[error] 16/03/17 12:13:58 INFO BlockManagerMaster: Trying to register BlockManager
[error] 16/03/17 12:13:58 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55715 with 1140.4 MB RAM, BlockManage
[error] 16/03/17 12:13:58 INFO BlockManagerMaster: Registered BlockManager
I can prevent these messages from being displayed using:
import org.apache.log4j.Logger
import org.apache.log4j.Level
Logger.getLogger("org").setLevel(Level.OFF)
Logger.getLogger("akka").setLevel(Level.OFF)
But this does not fix the issue. [error] should not be displayed at all, because these are INFO messages, not errors:
[error] 16/03/17 12:13:58 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on por
[error] 16/03/17 12:13:58 INFO NettyBlockTransferService: Server created on 55715
[error] 16/03/17 12:13:58 INFO BlockManagerMaster: Trying to register BlockManager
[error] 16/03/17 12:13:58 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55715 with 1140.4 MB RAM, BlockManage
[error] 16/03/17 12:13:58 INFO BlockManagerMaster: Registered BlockManager
Update:
Why are [error] messages being displayed when they are not errors?
Those are not Spark labels but sbt ones. In the default log4j config file of Spark you can find:
log4j.appender.console.target=System.err
So by default it will print to stderr in the console.
You are probably setting fork to true somewhere in your run configuration. When you do, everything the forked process prints to stderr is prefixed with [error] by sbt.
You should be able to control it with the OutputStrategy.
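For example, a minimal build.sbt sketch (using sbt's standard fork, outputStrategy and StdoutOutput settings) that keeps the fork but forwards the forked process's output without the [error] prefix:
fork in run := true
// send the forked JVM's stdout/stderr straight through instead of logging stderr at the error level
outputStrategy := Some(StdoutOutput)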
Related
I have a Spark application that I want to run using sbt. If I run an application using only Scala code, it works. But when I try to import Spark functionality and run Spark code, it won't work. This is my Spark script:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark._
object hi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("hi").setMaster("local[2]")
    // Create a Scala Spark Context.
    val sc = new SparkContext(conf)
    // Load our input data.
    val file1 = sc.textFile("geotweets.tsv")
    val a2 = file1.map(_.split("\t")).map(rec => rec(1)).take(10).foreach(println)
  }
}
And my build.sbt looks like this:
name := "Spark-test"
version := "1.0"
scalaVersion := "2.10.2"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.10" % "1.0.2"
)
But when I run this application in sbt I get this error message:
[info] Compiling 1 Scala source to C:\Users\kolbj\OneDrive - NTNU\Emner\BigData\SBT-Phase2\target\scala-2.10\classes ...
[info] Done compiling.
[info] Packaging C:\Users\kolbj\OneDrive - NTNU\Emner\BigData\SBT-Phase2\target\scala-2.10\faen_2.10-1.0.jar ...
[info] Done packaging.
[info] Running hi
18/04/21 15:20:37 INFO spark.SecurityManager: Changing view acls to: kolbj
18/04/21 15:20:37 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(kolbj)
18/04/21 15:20:38 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/04/21 15:20:38 INFO Remoting: Starting remoting
18/04/21 15:20:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark#LAPTOP-9N8CNCEL:51096]
18/04/21 15:20:38 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark#LAPTOP-9N8CNCEL:51096]
18/04/21 15:20:38 INFO spark.SparkEnv: Registering MapOutputTracker
18/04/21 15:20:38 INFO spark.SparkEnv: Registering BlockManagerMaster
18/04/21 15:20:38 INFO storage.DiskBlockManager: Created local directory at C:\Users\kolbj\AppData\Local\Temp\spark-local-20180421152038-b562
18/04/21 15:20:38 INFO storage.MemoryStore: MemoryStore started with capacity 273.3 MB.
18/04/21 15:20:38 INFO network.ConnectionManager: Bound socket to port 51099 with id = ConnectionManagerId(LAPTOP-9N8CNCEL,51099)
18/04/21 15:20:38 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/04/21 15:20:38 INFO storage.BlockManagerInfo: Registering block manager LAPTOP-9N8CNCEL:51099 with 273.3 MB RAM
18/04/21 15:20:38 INFO storage.BlockManagerMaster: Registered BlockManager
18/04/21 15:20:38 INFO spark.HttpServer: Starting HTTP Server
18/04/21 15:20:38 INFO server.Server: jetty-8.1.14.v20131031
18/04/21 15:20:38 INFO server.AbstractConnector: Started SocketConnector#0.0.0.0:51100
18/04/21 15:20:38 INFO broadcast.HttpBroadcast: Broadcast server started at http://192.168.56.1:51100
18/04/21 15:20:38 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\kolbj\AppData\Local\Temp\spark-17906dea-b751-4fca-9c8c-bca10d06246a
18/04/21 15:20:38 INFO spark.HttpServer: Starting HTTP Server
18/04/21 15:20:38 INFO server.Server: jetty-8.1.14.v20131031
18/04/21 15:20:38 INFO server.AbstractConnector: Started SocketConnector#0.0.0.0:51101
18/04/21 15:20:38 INFO server.Server: jetty-8.1.14.v20131031
18/04/21 15:20:38 INFO server.AbstractConnector: Started SelectChannelConnector#0.0.0.0:4040
18/04/21 15:20:38 INFO ui.SparkUI: Started SparkUI at http://LAPTOP-9N8CNCEL:4040
18/04/21 15:20:39 INFO storage.MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=286575820
18/04/21 15:20:39 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 32.0 KB, free 273.3 MB)
18/04/21 15:20:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/04/21 15:20:39 WARN snappy.LoadSnappy: Snappy native library not loaded
[error] (run-main-0) org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/kolbj/OneDrive - NTNU/Emner/BigData/SBT-Phase2/geotweets.tsv
[error] org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/kolbj/OneDrive - NTNU/Emner/BigData/SBT-Phase2/geotweets.tsv
[error] at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
[error] at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
[error] at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:175)
18/04/21 15:20:39 ERROR spark.ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:117)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply(ContextCleaner.scala:115)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply(ContextCleaner.scala:115)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1160)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:114)
at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:65)
18/04/21 15:20:39 INFO network.ConnectionManager: Selector thread was interrupted!
18/04/21 15:20:39 ERROR util.Utils: Uncaught exception in thread SparkListenerBus
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:48)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply(LiveListenerBus.scala:47)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply(LiveListenerBus.scala:47)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1160)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:46)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.RDD.take(RDD.scala:983)
[error] at hi$.main(hw.scala:15)
[error] at hi.main(hw.scala)
[error] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[error] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[error] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[error] at java.lang.reflect.Method.invoke(Method.java:498)
[error] at sbt.Run.invokeMain(Run.scala:93)
[error] at sbt.Run.run0(Run.scala:87)
[error] at sbt.Run.execute$1(Run.scala:65)
[error] at sbt.Run.$anonfun$run$4(Run.scala:77)
[error] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
[error] at sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:10)
[error] at sbt.TrapExit$App.run(TrapExit.scala:252)
[error] at java.lang.Thread.run(Thread.java:748)
[error] java.lang.RuntimeException: Nonzero exit code: 1
[error] at sbt.Run$.executeTrapExit(Run.scala:124)
[error] at sbt.Run.run(Run.scala:77)
[error] at sbt.Defaults$.$anonfun$bgRunTask$5(Defaults.scala:1172)
[error] at sbt.Defaults$.$anonfun$bgRunTask$5$adapted(Defaults.scala:1167)
[error] at sbt.internal.BackgroundThreadPool.$anonfun$run$1(DefaultBackgroundJobService.scala:366)
[error] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
[error] at scala.util.Try$.apply(Try.scala:209)
[error] at sbt.internal.BackgroundThreadPool$BackgroundRunnable.run(DefaultBackgroundJobService.scala:289)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
sbt:FAEN> [error] (Compile / run) Nonzero exit code: 1
[error] Total time: 16 s, completed 21.apr.2018 15:20:39
18/04/21 15:20:42 INFO storage.BlockManager: Removing broadcast 0
18/04/21 15:20:42 INFO spark.ContextCleaner: Cleaned broadcast 0
18/04/21 15:20:42 INFO storage.BlockManager: Removing block broadcast_0
18/04/21 15:20:42 INFO storage.MemoryStore: Block broadcast_0 of size 32816 dropped from memory (free 286575820)
I know that the Spark code works fine in the Spark REPL. Also, this Spark code needs to retrieve a TSV file using this line:
val file1 = sc.textFile("geotweets.tsv")
So my second question is where should this file be placed?
My project repository is like this:
SBT-phase2(project name)
\build.sbt
\src\main\scala\hw.scala
\src\main\scala\geotweets.tsv
Does anyone know how to fix this? :)
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/kolbj/OneDrive - NTNU/Emner/BigData/SBT-Phase2/geotweets.tsv
The path of the file you provided is wrong, so fix that. It is best to provide an absolute path.
You can use java.io.File's getCanonicalPath API, for example:
val file1 = sc.textFile(new java.io.File(".").getCanonicalPath + "\\src\\main\\scala\\geotweets.tsv")
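Alternatively, the stack trace shows the relative path being resolved against the project root (file:/C:/Users/kolbj/.../SBT-Phase2/geotweets.tsv), so you could either move the file to the project root and keep sc.textFile("geotweets.tsv"), or reference its current location relative to the project root. A minimal sketch (forward slashes also work on Windows):
// the file currently sits under src\main\scala, so point Spark there from the project root
val file1 = sc.textFile("src/main/scala/geotweets.tsv")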
Spark 2.1.1 built for Hadoop 2.7.3
Scala 2.11.11
Cluster has 3 Linux RHEL 7.3 Azure VM's, running Spark Standalone Deploy Mode (no YARN or Mesos, yet)
I have created a very simple SparkStreaming job using IntelliJ, written in Scala. I'm using Maven and building the job into a fat/uber jar that contains all dependencies.
When I run the job locally it works fine. If I copy the jar to the cluster and run it with a master of local[2] it also works fine. However, if I submit the job to the cluster master, it does not seem to schedule any work beyond the first task. The job starts up, grabs however many events are in the Azure Event Hub, processes them successfully, then never does any more work. It does not matter whether I submit the job to the master as just an application or in supervised cluster mode; both behave the same way.
I've looked through all the logs I know of (master, driver (where applicable), and executor) and I am not seeing any errors or warnings that seem actionable. I've altered the log level, shown below, to show ALL/INFO/DEBUG and sifted through those logs without finding anything that seems relevant.
It may be worth noting that I had previously created several jobs, written in Java, that connect to Kafka instead of the Azure Event Hub, and those jobs run in supervised cluster mode without issue on this same cluster. This leads me to believe that the cluster configuration isn't the issue; it's either something in my code (below) or the Azure Event Hub.
Any thoughts on where I might check next to isolate this issue? Here is the code for my simple job.
Thanks in advance.
Note: conf.{name} indicates values I'm loading from a config file. I've tested loading and hard-coding them, both with the same result.
package streamingJob
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.eventhubs.EventHubsUtils
import org.joda.time.DateTime
object TestJob {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
    sparkConf.setAppName("TestJob")
    // Uncomment to run locally
    //sparkConf.setMaster("local[2]")
    val sparkContext = new SparkContext(sparkConf)
    sparkContext.setLogLevel("ERROR")
    val streamingContext: StreamingContext = new StreamingContext(sparkContext, Seconds(1))
    val readerParams = Map[String, String](
      "eventhubs.policyname" -> conf.policyname,
      "eventhubs.policykey" -> conf.policykey,
      "eventhubs.namespace" -> conf.namespace,
      "eventhubs.name" -> conf.name,
      "eventhubs.partition.count" -> conf.partitionCount,
      "eventhubs.consumergroup" -> conf.consumergroup
    )
    val eventData = EventHubsUtils.createDirectStreams(
      streamingContext,
      conf.namespace,
      conf.progressdir,
      Map("name" -> readerParams))
    eventData.foreachRDD(r => {
      r.foreachPartition { p =>
        p.foreach(d => {
          println(DateTime.now() + ": " + d)
        }) // end of EventData
      } // foreachPartition
    }) // foreachRDD
    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
Here is a set of logs from when I run this as an application, not cluster/supervised.
/spark/bin/spark-submit --class streamingJob.TestJob --master spark://{ip}:7077 --total-executor-cores 1 /spark/job-files/fatjar.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/06 17:52:04 INFO SparkContext: Running Spark version 2.1.1
17/11/06 17:52:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/06 17:52:05 INFO SecurityManager: Changing view acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing view acls groups to:
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls groups to:
17/11/06 17:52:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/11/06 17:52:06 INFO Utils: Successfully started service 'sparkDriver' on port 44384.
17/11/06 17:52:06 INFO SparkEnv: Registering MapOutputTracker
17/11/06 17:52:06 INFO SparkEnv: Registering BlockManagerMaster
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/06 17:52:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b5e2c0f3-2500-42c6-b057-cf5d368580ab
17/11/06 17:52:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/11/06 17:52:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/06 17:52:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/06 17:52:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{ip}:4040
17/11/06 17:52:06 INFO SparkContext: Added JAR file:/spark/job-files/fatjar.jar at spark://{ip}:44384/jars/fatjar.jar with timestamp 1509990726989
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://{ip}:7077...
17/11/06 17:52:07 INFO TransportClientFactory: Successfully created connection to /{ip}:7077 after 72 ms (0 ms spent in bootstraps)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20171106175207-0000
17/11/06 17:52:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44624.
17/11/06 17:52:07 INFO NettyBlockTransferService: Server created on {ip}:44624
17/11/06 17:52:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20171106175207-0000/0 on worker-20171106173151-{ip}-46086 ({ip}:46086) with 1 cores
17/11/06 17:52:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Granted executor ID app-20171106175207-0000/0 on hostPort {ip}:46086 with 1 cores, 1024.0 MB RAM
17/11/06 17:52:07 INFO BlockManagerMasterEndpoint: Registering block manager {ip}:44624 with 366.3 MB RAM, BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20171106175207-0000/0 is now RUNNING
17/11/06 17:52:08 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
I have built a simple Spark app using sbt. Here's my code:
import org.apache.spark.sql.SparkSession
object HelloWorld {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("BigApple").getOrCreate()
    import spark.implicits._
    val ds = Seq(1, 2, 3).toDS()
    ds.map(_ + 1).foreach(x => println(x))
  }
}
Following is my build.sbt
name := """sbt-sample-app"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"
Now when I try to do sbt run, it gives me the following error:
$ sbt run
[info] Loading global plugins from /home/user/.sbt/0.13/plugins
[info] Loading project definition from /home/user/Projects/sample-app/project
[info] Set current project to sbt-sample-app (in build file:/home/user/Projects/sample-app/)
[info] Running HelloWorld
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/06/01 10:09:10 INFO SparkContext: Running Spark version 2.1.1
17/06/01 10:09:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/01 10:09:11 WARN Utils: Your hostname, user-Vostro-15-3568 resolves to a loopback address: 127.0.1.1; using 127.0.0.1 instead (on interface enp3s0)
17/06/01 10:09:11 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/06/01 10:09:11 INFO SecurityManager: Changing view acls to: user
17/06/01 10:09:11 INFO SecurityManager: Changing modify acls to: user
17/06/01 10:09:11 INFO SecurityManager: Changing view acls groups to:
17/06/01 10:09:11 INFO SecurityManager: Changing modify acls groups to:
17/06/01 10:09:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
17/06/01 10:09:12 INFO Utils: Successfully started service 'sparkDriver' on port 39662.
17/06/01 10:09:12 INFO SparkEnv: Registering MapOutputTracker
17/06/01 10:09:12 INFO SparkEnv: Registering BlockManagerMaster
17/06/01 10:09:12 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/01 10:09:12 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/01 10:09:12 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-c6db1535-6a00-4760-93dc-968722e3d596
17/06/01 10:09:12 INFO MemoryStore: MemoryStore started with capacity 408.9 MB
17/06/01 10:09:13 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/01 10:09:13 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/01 10:09:13 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://127.0.0.1:4040
17/06/01 10:09:13 INFO Executor: Starting executor ID driver on host localhost
17/06/01 10:09:13 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34488.
17/06/01 10:09:13 INFO NettyBlockTransferService: Server created on 127.0.0.1:34488
17/06/01 10:09:13 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/01 10:09:13 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:13 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:34488 with 408.9 MB RAM, BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:13 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:13 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:14 INFO SharedState: Warehouse path is 'file:/home/user/Projects/sample-app/spark-warehouse'.
[error] (run-main-0) scala.ScalaReflectionException: class scala.Option in JavaMirror with ClasspathFilter(
[error] parent = URLClassLoader with NativeCopyLoader with RawResources(
[error] urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ...,/home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
[error] parent = java.net.URLClassLoader#7c4113ce,
[error] resourceMap = Set(app.class.path, boot.class.path),
[error] nativeTemp = /tmp/sbt_c2afce
[error] )
[error] root = sun.misc.Launcher$AppClassLoader#677327b6
[error] cp = Set(/home/user/.ivy2/cache/org.glassfish.jersey.core/jersey-common/jars/jersey-common-2.22.2.jar, ..., /home/user/.ivy2/cache/net.razorvine/pyrolite/jars/pyrolite-4.13.jar)
[error] ) of type class sbt.classpath.ClasspathFilter with classpath [<unknown>] and parent being URLClassLoader with NativeCopyLoader with RawResources(
[error] urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ..., /home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
[error] parent = java.net.URLClassLoader#7c4113ce,
[error] resourceMap = Set(app.class.path, boot.class.path),
[error] nativeTemp = /tmp/sbt_c2afce
[error] ) of type class sbt.classpath.ClasspathUtilities$$anon$1 with classpath [file:/home/user/Projects/sample-app/target/scala-2.11/classes/,...openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/classes] not found.
scala.ScalaReflectionException: class scala.Option in JavaMirror with ClasspathFilter(
parent = URLClassLoader with NativeCopyLoader with RawResources(
urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ..., /home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
parent = java.net.URLClassLoader#7c4113ce,
resourceMap = Set(app.class.path, boot.class.path),
nativeTemp = /tmp/sbt_c2afce
)
root = sun.misc.Launcher$AppClassLoader#677327b6
cp = Set(/home/user/.ivy2/cache/org.glassfish.jersey.core/jersey-common/jars/jersey-common-2.22.2.jar, ..., /home/user/.ivy2/cache/net.razorvine/pyrolite/jars/pyrolite-4.13.jar)
) of type class sbt.classpath.ClasspathFilter with classpath [<unknown>] and parent being URLClassLoader with NativeCopyLoader with RawResources(
urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ..., /home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
parent = java.net.URLClassLoader#7c4113ce,
resourceMap = Set(app.class.path, boot.class.path),
nativeTemp = /tmp/sbt_c2afce
) of type class sbt.classpath.ClasspathUtilities$$anon$1 with classpath [file:/home/user/Projects/sample-app/target/scala-2.11/classes/,.../jre/lib/charsets.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/classes] not found.
at scala.reflect.internal.Mirrors$RootsBase.staticClass(Mirrors.scala:123)
at scala.reflect.internal.Mirrors$RootsBase.staticClass(Mirrors.scala:22)
at org.apache.spark.sql.catalyst.ScalaReflection$$typecreator42$1.apply(ScalaReflection.scala:614)
at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe$lzycompute(TypeTags.scala:232)
at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe(TypeTags.scala:232)
at org.apache.spark.sql.catalyst.ScalaReflection$class.localTypeOf(ScalaReflection.scala:782)
at org.apache.spark.sql.catalyst.ScalaReflection$.localTypeOf(ScalaReflection.scala:39)
at org.apache.spark.sql.catalyst.ScalaReflection$.optionOfProductType(ScalaReflection.scala:614)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:51)
at org.apache.spark.sql.Encoders$.scalaInt(Encoders.scala:281)
at org.apache.spark.sql.SQLImplicits.newIntEncoder(SQLImplicits.scala:54)
at HelloWorld$.main(HelloWorld.scala:9)
at HelloWorld.main(HelloWorld.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
[trace] Stack trace suppressed: run last compile:run for the full output.
17/06/01 10:09:15 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:181)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:178)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:73)
17/06/01 10:09:15 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
17/06/01 10:09:15 ERROR Utils: throw uncaught fatal error in thread SparkListenerBus
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
17/06/01 10:09:15 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 7 s, completed 1 Jun, 2017 10:09:15 AM
But when I add fork in run := true to build.sbt, the app runs fine.
New build.sbt:
name := """sbt-sample-app"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"
fork in run := true
Here's the output:
$ sbt run
[info] Loading global plugins from /home/user/.sbt/0.13/plugins
[info] Loading project definition from /home/user/Projects/sample-app/project
[info] Set current project to sbt-sample-app (in build file:/home/user/Projects/sample-app/)
[success] Total time: 0 s, completed 1 Jun, 2017 10:15:43 AM
[info] Updating {file:/home/user/Projects/sample-app/}sample-app...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.
[warn] Scala version was updated by one of library dependencies:
[warn] * org.scala-lang:scala-library:(2.11.7, 2.11.0) -> 2.11.8
[warn] To force scalaVersion, add the following:
[warn] ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) }
[warn] Run 'evicted' to see detailed eviction warnings
[info] Compiling 1 Scala source to /home/user/Projects/sample-app/target/scala-2.11/classes...
[info] Running HelloWorld
[error] Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
[error] 17/06/01 10:16:13 INFO SparkContext: Running Spark version 2.1.1
[error] 17/06/01 10:16:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[error] 17/06/01 10:16:14 WARN Utils: Your hostname, user-Vostro-15-3568 resolves to a loopback address: 127.0.1.1; using 127.0.0.1 instead (on interface enp3s0)
[error] 17/06/01 10:16:14 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing view acls to: user
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing modify acls to: user
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing view acls groups to:
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing modify acls groups to:
[error] 17/06/01 10:16:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
[error] 17/06/01 10:16:14 INFO Utils: Successfully started service 'sparkDriver' on port 37747.
[error] 17/06/01 10:16:14 INFO SparkEnv: Registering MapOutputTracker
[error] 17/06/01 10:16:14 INFO SparkEnv: Registering BlockManagerMaster
[error] 17/06/01 10:16:14 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
[error] 17/06/01 10:16:14 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
[error] 17/06/01 10:16:14 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-edf40c39-a13e-4930-8e9a-64135bfa9770
[error] 17/06/01 10:16:14 INFO MemoryStore: MemoryStore started with capacity 1405.2 MB
[error] 17/06/01 10:16:14 INFO SparkEnv: Registering OutputCommitCoordinator
[error] 17/06/01 10:16:14 INFO Utils: Successfully started service 'SparkUI' on port 4040.
[error] 17/06/01 10:16:15 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://127.0.0.1:4040
[error] 17/06/01 10:16:15 INFO Executor: Starting executor ID driver on host localhost
[error] 17/06/01 10:16:15 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39113.
[error] 17/06/01 10:16:15 INFO NettyBlockTransferService: Server created on 127.0.0.1:39113
[error] 17/06/01 10:16:15 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
[error] 17/06/01 10:16:15 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:39113 with 1405.2 MB RAM, BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO SharedState: Warehouse path is 'file:/home/user/Projects/sample-app/spark-warehouse/'.
[error] 17/06/01 10:16:18 INFO CodeGenerator: Code generated in 395.134683 ms
[error] 17/06/01 10:16:19 INFO CodeGenerator: Code generated in 9.077969 ms
[error] 17/06/01 10:16:19 INFO CodeGenerator: Code generated in 23.652705 ms
[error] 17/06/01 10:16:19 INFO SparkContext: Starting job: foreach at HelloWorld.scala:10
[error] 17/06/01 10:16:19 INFO DAGScheduler: Got job 0 (foreach at HelloWorld.scala:10) with 1 output partitions
[error] 17/06/01 10:16:19 INFO DAGScheduler: Final stage: ResultStage 0 (foreach at HelloWorld.scala:10)
[error] 17/06/01 10:16:19 INFO DAGScheduler: Parents of final stage: List()
[error] 17/06/01 10:16:19 INFO DAGScheduler: Missing parents: List()
[error] 17/06/01 10:16:19 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at foreach at HelloWorld.scala:10), which has no missing parents
[error] 17/06/01 10:16:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 6.3 KB, free 1405.2 MB)
[error] 17/06/01 10:16:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.3 KB, free 1405.2 MB)
[error] 17/06/01 10:16:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 127.0.0.1:39113 (size: 3.3 KB, free: 1405.2 MB)
[error] 17/06/01 10:16:20 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
[error] 17/06/01 10:16:20 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at foreach at HelloWorld.scala:10)
[error] 17/06/01 10:16:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
[error] 17/06/01 10:16:20 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 6227 bytes)
[error] 17/06/01 10:16:20 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
[info] 2
[info] 3
[info] 4
[error] 17/06/01 10:16:20 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1231 bytes result sent to driver
[error] 17/06/01 10:16:20 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 152 ms on localhost (executor driver) (1/1)
[error] 17/06/01 10:16:20 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
[error] 17/06/01 10:16:20 INFO DAGScheduler: ResultStage 0 (foreach at HelloWorld.scala:10) finished in 0.181 s
[error] 17/06/01 10:16:20 INFO DAGScheduler: Job 0 finished: foreach at HelloWorld.scala:10, took 0.596960 s
[error] 17/06/01 10:16:20 INFO SparkContext: Invoking stop() from shutdown hook
[error] 17/06/01 10:16:20 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
[error] 17/06/01 10:16:20 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
[error] 17/06/01 10:16:20 INFO MemoryStore: MemoryStore cleared
[error] 17/06/01 10:16:20 INFO BlockManager: BlockManager stopped
[error] 17/06/01 10:16:20 INFO BlockManagerMaster: BlockManagerMaster stopped
[error] 17/06/01 10:16:20 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
[error] 17/06/01 10:16:20 INFO SparkContext: Successfully stopped SparkContext
[error] 17/06/01 10:16:20 INFO ShutdownHookManager: Shutdown hook called
[error] 17/06/01 10:16:20 INFO ShutdownHookManager: Deleting directory /tmp/spark-77d00e78-9f76-4ab2-bc40-0b99940661ac
[success] Total time: 37 s, completed 1 Jun, 2017 10:16:20 AM
Can anyone help me understand the reason behind this?
Excerpt from "Getting Started with SBT for Scala" By Shiti Saxena
Why do we need to fork JVM?
When a user runs code using run or console commands, the code is run on the same virtual machine as SBT. In some cases, running of code may cause SBT to crash, such as a System.exit call or unterminated threads (for example, when running tests on code while simultaneously working on the code).
If a test causes the JVM to shut down, you would need to restart SBT. In order to avoid such scenarios, forking the JVM is important.
You do not need to fork the JVM to run your code if it follows the constraints listed below; otherwise it must be run in a forked JVM:
No threads are created or the program ends when user-created threads terminate on their own
System.exit is used to end the program and user-created threads terminate when interrupted
No deserialization is done or deserialization code ensures that the right class loader is used
From the sbt documentation on forking:
By default, the run task runs in the same JVM as sbt. Forking is required under certain circumstances, however. Or, you might want to fork Java processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being used for the build and the working directory and JVM options of the current process. This page discusses how to enable and configure forking for both run and test tasks. Each kind of task may be configured separately by scoping the relevant keys as explained below.
To enable forking for run, simply use:
fork in run := true
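If the forked JVM needs its own settings as well (heap size, stdin handling, and so on), they can be scoped to run; a minimal sketch with illustrative values:
fork in run := true
// give the forked JVM its own heap; the size here is only an example
javaOptions in run ++= Seq("-Xmx2G")
// forward sbt's stdin to the forked process so console input still reaches the app
connectInput in run := true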
I couldn't find exactly why, but this is their build file and recommendation:
https://github.com/deanwampler/spark-scala-tutorial/blob/master/project/Build.scala
Hope someone can give a better answer.
Edited Code:
import org.apache.spark.sql.SparkSession
object HelloWorld {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("BigApple").getOrCreate()
    import spark.implicits._
    val ds = Seq(1, 2, 3).toDS()
    ds.map(_ + 1).foreach(x => println(x))
  }
}
build.sbt
name := """untitled"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"
I am new to Spark and I am trying to submit the "quick-start" job from my app. I am trying to emulate standalone mode by starting a master and a worker on my localhost.
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "/opt/spark-2.0.0-bin-hadoop2.7/README.md"
    val conf = new SparkConf().setAppName("SimpleApp")
    conf.setMaster("spark://10.49.30.77:7077")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, lines with b: %s".format(numAs, numBs))
  }
}
I run my Spark app from my IDE (IntelliJ).
Looking at the logs on the worker node, it seems Spark cannot find the job's class.
16/09/15 17:50:58 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1912.0 B, free 366.3 MB)
16/09/15 17:50:58 INFO TorrentBroadcast: Reading broadcast variable 1 took 137 ms
16/09/15 17:50:58 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.1 KB, free 366.3 MB)
16/09/15 17:50:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ClassNotFoundException: SimpleApp$$anonfun$1
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
1. Does this mean the job's resources (classes) are not transmitted to the worker node?
2. For standalone mode, must I submit jobs using the "spark-submit" CLI? If so, how can I submit Spark jobs from an app (for example a webapp)?
3. Also, an unrelated question: I see in the logs that the driver program starts a server (port 4040). What is the purpose of this? The driver program being the client, why does it start this service?
16/09/15 17:50:52 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/15 17:50:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/09/15 17:50:53 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.49.30.77:4040
You should either set the paths of your resources in SparkConf using the setJars method, or provide them to the spark-submit command with the --jars option when running from the CLI.
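For example, a minimal sketch of the SparkConf approach when launching from the IDE (the jar path below is illustrative; build the jar first, e.g. with sbt package or your Maven build):
val conf = new SparkConf()
  .setAppName("SimpleApp")
  .setMaster("spark://10.49.30.77:7077")
  // ship the application jar to the executors; the path is an example, use your build's output
  .setJars(Seq("target/scala-2.11/simpleapp_2.11-1.0.jar"))
val sc = new SparkContext(conf)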
I have a problem with the Cloudera VM and Spark. First of all, I'm completely new to Spark, and my boss asked me to run Spark with Scala in a virtual machine for some tests.
I downloaded the virtual machine into a VirtualBox environment, opened Eclipse and created a new Maven project.
Obviously, this was after I had started the Cloudera environment and all its services, like Spark, YARN, Hive and so on.
All services work fine, and all checks in the Cloudera services are green. I did some tests with Impala and it works perfectly.
With the Eclipse and Scala-Maven environment, things get worse. This is my very simple Scala code:
package org.test.spark
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
object TestSelectAlgorithm {
  def main(args: Array[String]) = {
    val conf = new SparkConf()
      .setAppName("TestSelectAlgorithm")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    val df = sqlContext.sql("SELECT * FROM products").show()
  }
}
The test is very simple, and the table "products" exists: if I copy and paste the same query into Impala, it works fine!
In the Eclipse environment, however, I have a problem:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/06/30 05:43:17 INFO SparkContext: Running Spark version 1.6.0
16/06/30 05:43:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/30 05:43:18 WARN Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 10.0.2.15 instead (on interface eth0)
16/06/30 05:43:18 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/06/30 05:43:18 INFO SecurityManager: Changing view acls to: cloudera
16/06/30 05:43:18 INFO SecurityManager: Changing modify acls to: cloudera
16/06/30 05:43:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cloudera); users with modify permissions: Set(cloudera)
16/06/30 05:43:19 INFO Utils: Successfully started service 'sparkDriver' on port 53730.
16/06/30 05:43:19 INFO Slf4jLogger: Slf4jLogger started
16/06/30 05:43:19 INFO Remoting: Starting remoting
16/06/30 05:43:19 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#10.0.2.15:39288]
16/06/30 05:43:19 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39288.
16/06/30 05:43:19 INFO SparkEnv: Registering MapOutputTracker
16/06/30 05:43:19 INFO SparkEnv: Registering BlockManagerMaster
16/06/30 05:43:19 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7d685fc0-ea88-423a-9335-42ca12db85da
16/06/30 05:43:19 INFO MemoryStore: MemoryStore started with capacity 1619.3 MB
16/06/30 05:43:20 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/30 05:43:20 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/30 05:43:20 INFO SparkUI: Started SparkUI at http://10.0.2.15:4040
16/06/30 05:43:20 INFO Executor: Starting executor ID driver on host localhost
16/06/30 05:43:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57294.
16/06/30 05:43:20 INFO NettyBlockTransferService: Server created on 57294
16/06/30 05:43:20 INFO BlockManagerMaster: Trying to register BlockManager
16/06/30 05:43:20 INFO BlockManagerMasterEndpoint: Registering block manager localhost:57294 with 1619.3 MB RAM, BlockManagerId(driver, localhost, 57294)
16/06/30 05:43:20 INFO BlockManagerMaster: Registered BlockManager
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table not found: products;
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:306)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$9.applyOrElse(Analyzer.scala:315)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$9.applyOrElse(Analyzer.scala:310)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:57)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:56)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:54)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:54)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:265)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:305)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:54)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:310)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:300)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:83)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:80)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:80)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:72)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:72)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:36)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:36)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at org.test.spark.TestSelectAlgorithm$.main(TestSelectAlgorithm.scala:18)
at org.test.spark.TestSelectAlgorithm.main(TestSelectAlgorithm.scala)
16/06/30 05:43:22 INFO SparkContext: Invoking stop() from shutdown hook
16/06/30 05:43:22 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
16/06/30 05:43:22 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/30 05:43:22 INFO MemoryStore: MemoryStore cleared
16/06/30 05:43:22 INFO BlockManager: BlockManager stopped
16/06/30 05:43:22 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/30 05:43:22 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/30 05:43:22 INFO SparkContext: Successfully stopped SparkContext
16/06/30 05:43:22 INFO ShutdownHookManager: Shutdown hook called
16/06/30 05:43:22 INFO ShutdownHookManager: Deleting directory /tmp/spark-29d381e9-b5e7-485c-92f2-55dc57ca7d25
The main error (for me) is:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table not found: products;
I searched on other sites and in the documentation, and I found that the problem is connected with Hive tables... but I don't use a Hive table, I use Spark SQL...
Can anyone help me, please?
Thank you for any reply.
In Spark there is no direct support for Impala the way there is for Hive, so you have to load the file yourself. If it is CSV you can use spark-csv:
val df = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true")
.option("inferSchema", "true")
.load("your .csv file location")
import sqlContext.implicits._
import sqlContext._
df.registerTempTable("products")
sqlContext.sql("select * from products").show()
pom dependency for spark-csv
<!-- https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10 -->
<dependency>
<groupId>com.databricks</groupId>
<artifactId>spark-csv_2.10</artifactId>
<version>1.4.0</version>
</dependency>
For Avro there is spark-avro:
val sqlContext = new SQLContext(sc)
import com.databricks.spark.avro._ // provides the .avro read method
import sqlContext.implicits._
import sqlContext._
val df = sqlContext.read.avro("your .avro file location")
df.registerTempTable("products")
val result = sqlContext.sql("select * from products")
result.show()
result.write
  .format("com.databricks.spark.avro")
  .save("your output location")
pom dependency for spark-avro
<!-- http://mvnrepository.com/artifact/com.databricks/spark-avro_2.10 -->
<dependency>
<groupId>com.databricks</groupId>
<artifactId>spark-avro_2.10</artifactId>
<version>2.0.1</version>
</dependency>
And for Parquet, Spark has built-in support:
val sqlContext = new SQLContext(sc)
val parquetFile = sqlContext.read.parquet("your parquet file location")
parquetFile.registerTempTable("products")
sqlContext.sql("select * from products").show()
Can you check whether the /user/cloudera/.sparkStaging/stagingArea location exists and whether it contains an .avro file? Also, please replace "your output location" with an actual directory location.
Please check the spark-avro GitHub page for more details: https://github.com/databricks/spark-avro