ClassNotFoundException on anonfun when deploying Scala code to Spark - scala

I'm new to Apache Spark and I'm trying to deploy a simple piece of Scala code to Spark.
Note: I am trying to connect to an existing running cluster, which I configure via my Java parameters to be: spark.master=spark://MyHostName:7077
Environment
Spark 1.5.1 built with Scala 2.10
Spark runs in standalone mode on my local machine
OS: OS X El Capitan
JVM: JDK 1.8.0_60
IDE: IntelliJ IDEA Community 14.1.5
Scala version: 2.10.4
sbt: 0.13.8
Code
import org.apache.spark.{SparkConf, SparkContext}

object HelloSpark {
  def main(args: Array[String]) {
    val logFile = "/README.md"
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    println("%s done!".format(numAs))
  }
}
build.sbt
name := "data-streamer210"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.5.1",
  "org.apache.spark" % "spark-streaming_2.10" % "1.5.1",
  "org.apache.spark" % "spark-mllib_2.10" % "1.5.1",
  "org.apache.spark" % "spark-bagel_2.10" % "1.5.1",
  "org.apache.spark" % "spark-streaming-twitter_2.10" % "1.5.1"
)
Error
15/10/19 19:40:09 INFO SparkContext: Starting job: count at HelloSpark.scala:14
15/10/19 19:40:09 INFO DAGScheduler: Got job 0 (count at HelloSpark.scala:14) with 2 output partitions
15/10/19 19:40:09 INFO DAGScheduler: Final stage: ResultStage 0(count at HelloSpark.scala:14)
15/10/19 19:40:09 INFO DAGScheduler: Parents of final stage: List()
15/10/19 19:40:09 INFO DAGScheduler: Missing parents: List()
15/10/19 19:40:09 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at filter at HelloSpark.scala:14), which has no missing parents
15/10/19 19:40:09 INFO MemoryStore: ensureFreeSpace(3192) called with curMem=120313, maxMem=2061647216
15/10/19 19:40:09 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.1 KB, free 1966.0 MB)
15/10/19 19:40:09 INFO MemoryStore: ensureFreeSpace(1892) called with curMem=123505, maxMem=2061647216
15/10/19 19:40:09 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1892.0 B, free 1966.0 MB)
15/10/19 19:40:09 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 127.0.0.1:50941 (size: 1892.0 B, free: 1966.1 MB)
15/10/19 19:40:09 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
15/10/19 19:40:09 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at filter at HelloSpark.scala:14)
15/10/19 19:40:09 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/10/19 19:40:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@127.0.0.1:50951/user/Executor#-147774947]) with ID 0
15/10/19 19:40:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:10 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@127.0.0.1:50952/user/Executor#1450479604]) with ID 2
15/10/19 19:40:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@127.0.0.1:50957/user/Executor#1447408721]) with ID 1
15/10/19 19:40:10 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@127.0.0.1:50955/user/Executor#1397136754]) with ID 3
15/10/19 19:40:10 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:50963 with 530.0 MB RAM, BlockManagerId(0, 127.0.0.1, 50963)
15/10/19 19:40:10 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:50964 with 530.0 MB RAM, BlockManagerId(2, 127.0.0.1, 50964)
15/10/19 19:40:10 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:50965 with 530.0 MB RAM, BlockManagerId(1, 127.0.0.1, 50965)
15/10/19 19:40:10 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:50966 with 530.0 MB RAM, BlockManagerId(3, 127.0.0.1, 50966)
15/10/19 19:40:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 127.0.0.1:50963 (size: 1892.0 B, free: 530.0 MB)
15/10/19 19:40:11 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 127.0.0.1): java.lang.ClassNotFoundException: HelloSpark$$anonfun$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/10/19 19:40:11 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor 127.0.0.1: java.lang.ClassNotFoundException (HelloSpark$$anonfun$1) [duplicate 1]
15/10/19 19:40:11 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 2, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:11 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 3, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 127.0.0.1:50966 (size: 1892.0 B, free: 530.0 MB)
15/10/19 19:40:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 127.0.0.1:50964 (size: 1892.0 B, free: 530.0 MB)
15/10/19 19:40:11 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 3) on executor 127.0.0.1: java.lang.ClassNotFoundException (HelloSpark$$anonfun$1) [duplicate 2]
15/10/19 19:40:11 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 4, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:11 INFO TaskSetManager: Lost task 1.2 in stage 0.0 (TID 4) on executor 127.0.0.1: java.lang.ClassNotFoundException (HelloSpark$$anonfun$1) [duplicate 3]
15/10/19 19:40:11 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 2) on executor 127.0.0.1: java.lang.ClassNotFoundException (HelloSpark$$anonfun$1) [duplicate 4]
15/10/19 19:40:11 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 5, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:11 INFO TaskSetManager: Starting task 1.3 in stage 0.0 (TID 6, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:11 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 5) on executor 127.0.0.1: java.lang.ClassNotFoundException (HelloSpark$$anonfun$1) [duplicate 5]
15/10/19 19:40:11 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 7, 127.0.0.1, PROCESS_LOCAL, 2160 bytes)
15/10/19 19:40:11 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 7) on executor 127.0.0.1: java.lang.ClassNotFoundException (HelloSpark$$anonfun$1) [duplicate 6]
15/10/19 19:40:11 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
15/10/19 19:40:11 INFO TaskSchedulerImpl: Cancelling stage 0
15/10/19 19:40:11 INFO TaskSchedulerImpl: Stage 0 was cancelled
15/10/19 19:40:11 INFO DAGScheduler: ResultStage 0 (count at HelloSpark.scala:14) failed in 2.613 s
15/10/19 19:40:11 INFO DAGScheduler: Job 0 failed: count at HelloSpark.scala:14, took 2.716305 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 7, 127.0.0.1): java.lang.ClassNotFoundException: HelloSpark$$anonfun$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919)
at org.apache.spark.rdd.RDD.count(RDD.scala:1121)
at HelloSpark$.main(HelloSpark.scala:14)
at HelloSpark.main(HelloSpark.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.ClassNotFoundException: HelloSpark$$anonfun$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/10/19 19:40:11 INFO SparkContext: Invoking stop() from shutdown hook
15/10/19 19:40:11 WARN TaskSetManager: Lost task 1.3 in stage 0.0 (TID 6, 127.0.0.1): org.apache.spark.TaskKilledException
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:204)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/10/19 19:40:11 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/10/19 19:40:11 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
15/10/19 19:40:11 INFO DAGScheduler: Stopping DAGScheduler
15/10/19 19:40:11 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/10/19 19:40:11 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/10/19 19:40:11 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/10/19 19:40:11 INFO MemoryStore: MemoryStore cleared
15/10/19 19:40:11 INFO BlockManager: BlockManager stopped
15/10/19 19:40:11 INFO BlockManagerMaster: BlockManagerMaster stopped
15/10/19 19:40:11 INFO SparkContext: Successfully stopped SparkContext
15/10/19 19:40:11 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/10/19 19:40:11 INFO ShutdownHookManager: Shutdown hook called
15/10/19 19:40:11 INFO ShutdownHookManager: Deleting directory /private/var/folders/q9/m_d81ms107n09tj8k5wbzfb40000gp/T/spark-53ce9474-5488-4d50-bfb6-c58ddeed7640
Process finished with exit code 1

When you run Spark from IntelliJ you can either connect to a "local" Spark JVM or to a remote cluster.
If you set your master to be local (e.g., setMaster("local[*]")), then any code in your local scope/project is available to the temporary, local (single-JVM) cluster you just created. Everything runs locally and exits when your test ends (if you are running a unit test), or when you quit the app if you are running it as an app inside IntelliJ.
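For example, a purely local run from inside the IDE needs only a local master (a sketch of the question's conf with the master added):

val conf = new SparkConf()
  .setAppName("Simple Application")
  .setMaster("local[*]") // a single local JVM using all cores; nothing to deploy
val sc = new SparkContext(conf)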
However, if you set the master to point to a remote cluster (say setMaster("spark://localhost:7077")), you need to make sure that the cluster has access to your new code (in your case, to the closure you are passing to filter).
When I want to execute a new piece of code on a running Spark cluster, I usually package my app as an uber jar (see sbt-assembly) and pass that jar to spark-submit.
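A minimal sketch of that workflow, using the names from your build.sbt (the sbt-assembly version is an assumption; pick one compatible with sbt 0.13.8). First add the plugin:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")

then build the uber jar and submit it to the standalone master:

sbt assembly
$SPARK_HOME/bin/spark-submit \
  --class HelloSpark \
  --master spark://MyHostName:7077 \
  target/scala-2.10/data-streamer210-assembly-1.0.jar

Alternatively, if you keep launching from the IDE against the cluster, SparkConf.setJars lets you ship the assembled jar yourself:

val conf = new SparkConf()
  .setAppName("Simple Application")
  .setJars(Seq("target/scala-2.10/data-streamer210-assembly-1.0.jar"))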

There's also an interesting interaction if you call setMaster in your code, even when it points at the right master. For example, I had code like this:
val conf = new SparkConf().setAppName("Simple Application").setMaster("spark://greine:7077")
that I submitted like this:
bin/spark-submit --class SimpleApp --master yarn --deploy-mode cluster /Users/james/Projects/sparkHelloWorld/target/scala-2.11/sparkHelloWorld-assembly-1.0.jar
I believe the jar (sparkHelloWorld-assembly-1.0.jar) was built correctly and contained all the required class files, yet I still got an error:
17/04/08 09:19:08 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 5, 10.178.252.14, executor 1): java.lang.ClassNotFoundException: SimpleApp$$anonfun$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Once I removed the call to setMaster("spark://greine:7077"), it ran and completed correctly with the same spark-submit command.
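In other words, leave the master to spark-submit and keep only the application name in code:

// the master now comes from spark-submit (--master yarn above), not from the code
val conf = new SparkConf().setAppName("Simple Application")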

Related

Getting NullPointerException in Scala/Spark code

I am reading a CSV file in Spark using Scala, and in the CSV file I get a null in line 10 (let's say). My code throws a NullPointerException at that line, so it does not print the following records.
Below is my code:
import org.apache.spark.sql.SparkSession
import java.lang.Long

object HighestGDP {
  def main(args: Array[String]) {
    val spark = SparkSession.builder().appName("GDP").master("local").getOrCreate()
    val data = spark.read.csv("D:\\BGH\\Spark\\World_Bank_Indicators.csv").rdd
    val result = data.filter(line => line.getString(1).substring(4, 8).equals("2009") ||
      line.getString(1).substring(4, 8).equals("2010"))
    result.foreach(println)
    var gdp2009 = result.filter(rec => rec.getString(1).substring(4, 8).equals("2009"))
      .map { line =>
        var GDP = 0L
        if (line.getString(19).equals(null))
          GDP = 0L
        else
          GDP = line.getString(19).replaceAll(",", "").toLong
        (line.getString(0), GDP)
      }
    gdp2009.foreach(println)
    result.foreach(println)
  }
}
So is there any way I can set the value to 0 where the value is null? I tried with if/else but it's still not working.
ERROR:
18/03/06 22:56:01 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1208 bytes result sent to driver
18/03/06 22:56:01 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 297 ms on localhost (1/1)
18/03/06 22:56:01 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
18/03/06 22:56:01 INFO DAGScheduler: ResultStage 1 (foreach at HighestGDP.scala:12) finished in 0.297 s
18/03/06 22:56:01 INFO DAGScheduler: Job 1 finished: foreach at HighestGDP.scala:12, took 0.346954 s
18/03/06 22:56:01 INFO SparkContext: Starting job: foreach at HighestGDP.scala:21
18/03/06 22:56:01 INFO DAGScheduler: Got job 2 (foreach at HighestGDP.scala:21) with 1 output partitions
18/03/06 22:56:01 INFO DAGScheduler: Final stage: ResultStage 2 (foreach at HighestGDP.scala:21)
18/03/06 22:56:01 INFO DAGScheduler: Parents of final stage: List()
18/03/06 22:56:01 INFO DAGScheduler: Missing parents: List()
18/03/06 22:56:01 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[12] at map at HighestGDP.scala:14), which has no missing parents
18/03/06 22:56:01 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 15.3 KB, free 355.2 MB)
18/03/06 22:56:01 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 7.1 KB, free 355.2 MB)
18/03/06 22:56:01 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on 13.133.209.137:57085 (size: 7.1 KB, free: 355.5 MB)
18/03/06 22:56:01 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1012
18/03/06 22:56:01 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[12] at map at HighestGDP.scala:14)
18/03/06 22:56:01 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
18/03/06 22:56:01 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, partition 0, PROCESS_LOCAL, 5918 bytes)
18/03/06 22:56:01 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
18/03/06 22:56:01 INFO FileScanRDD: Reading File path: file:///D:/BGH/Spark/World_Bank_Indicators.csv, range: 0-260587, partition values: [empty row]
(Afghanistan,425)
(Albania,3796)
(Algeria,3952)
18/03/06 22:56:01 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
java.lang.NullPointerException
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:15)
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:14)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
18/03/06 22:56:01 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.lang.NullPointerException
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:15)
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:14)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
18/03/06 22:56:01 ERROR TaskSetManager: Task 0 in stage 2.0 failed 1 times; aborting job
18/03/06 22:56:01 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
18/03/06 22:56:01 INFO TaskSchedulerImpl: Cancelling stage 2
18/03/06 22:56:01 INFO DAGScheduler: ResultStage 2 (foreach at HighestGDP.scala:21) failed in 0.046 s
18/03/06 22:56:01 INFO DAGScheduler: Job 2 failed: foreach at HighestGDP.scala:21, took 0.046961 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.lang.NullPointerException
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:15)
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:14)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1899)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1913)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:894)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:892)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:892)
at HighestGDP$.main(HighestGDP.scala:21)
at HighestGDP.main(HighestGDP.scala)
Caused by: java.lang.NullPointerException
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:15)
at HighestGDP$$anonfun$3.apply(HighestGDP.scala:14)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:894)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
18/03/06 22:56:01 INFO SparkContext: Invoking stop() from shutdown hook
18/03/06 22:56:01 INFO SparkUI: Stopped Spark web UI at http://13.133.209.137:4040
18/03/06 22:56:01 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/03/06 22:56:01 INFO MemoryStore: MemoryStore cleared
18/03/06 22:56:01 INFO BlockManager: BlockManager stopped
18/03/06 22:56:01 INFO BlockManagerMaster: BlockManagerMaster stopped
18/03/06 22:56:01 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/03/06 22:56:01 INFO SparkContext: Successfully stopped SparkContext
18/03/06 22:56:01 INFO ShutdownHookManager: Shutdown hook called
18/03/06 22:56:01 INFO ShutdownHookManager: Deleting directory C:\Users\kumar.harsh\AppData\Local\Temp\spark-65330823-f67a-4a9d-acaf-42478e3b7109
I guess the problem is line.getString(19).equals(null). If line.getString(19) returns null, you cannot call the equals method on it (that itself results in a NullPointerException). Instead of this check you should use line.getString(19) == null.
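For example, the map from the question could be rewritten along these lines (a sketch reusing the question's field indices):

val gdp2009 = result
  .filter(rec => rec.getString(1).substring(4, 8) == "2009")
  .map { line =>
    // getString returns null for a missing value; compare with == null
    // instead of calling .equals on a possibly-null reference
    val raw = line.getString(19)
    val gdp = if (raw == null) 0L else raw.replaceAll(",", "").toLong
    (line.getString(0), gdp)
  }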
One more hint: try to avoid hard-coding the Spark master in your code; that will cause problems later on. See the discussion on: Spark job with explicit setMaster("local"), passed to spark-submit with YARN.
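For example, drop the .master("local") call from the builder and choose the master at launch time instead (the jar path below is hypothetical):

val spark = SparkSession.builder().appName("GDP").getOrCreate()

and launch with:

spark-submit --class HighestGDP --master "local[*]" target/scala-2.11/highest-gdp.jar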

Spark Exits with exception

This is the stack trace that I am getting while running the application:
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 233 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 WARN TaskSetManager: Lost task 1.0 in stage 11.0 (TID 217, 10.178.149.243): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.0 in stage 11.0 (TID 225) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 1]
16/11/03 11:25:45 INFO TaskSetManager: Starting task 14.1 in stage 11.0 (TID 234, 10.178.149.243, partition 14, NODE_LOCAL, 8828 bytes)
16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.0 in stage 11.0 (TID 232) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 2]
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 234 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Starting task 22.1 in stage 11.0 (TID 235, 10.178.149.243, partition 22, NODE_LOCAL, 9066 bytes)
16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.0 in stage 11.0 (TID 233) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 3]
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 235 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Starting task 24.1 in stage 11.0 (TID 236, 10.178.149.243, partition 24, NODE_LOCAL, 9185 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 236 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.1 in stage 11.0 (TID 235) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 4]
16/11/03 11:25:45 INFO TaskSetManager: Starting task 22.2 in stage 11.0 (TID 237, 10.178.149.243, partition 22, NODE_LOCAL, 9066 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 237 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.1 in stage 11.0 (TID 234) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 5]
16/11/03 11:25:45 INFO TaskSetManager: Starting task 14.2 in stage 11.0 (TID 238, 10.178.149.243, partition 14, NODE_LOCAL, 8828 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 238 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.1 in stage 11.0 (TID 236) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 6]
16/11/03 11:25:45 INFO TaskSetManager: Starting task 24.2 in stage 11.0 (TID 239, 10.178.149.243, partition 24, NODE_LOCAL, 9185 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 239 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.2 in stage 11.0 (TID 237) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 7]
16/11/03 11:25:45 INFO TaskSetManager: Starting task 22.3 in stage 11.0 (TID 240, 10.178.149.243, partition 22, NODE_LOCAL, 9066 bytes)
16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.2 in stage 11.0 (TID 238) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 8]
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 240 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Starting task 14.3 in stage 11.0 (TID 241, 10.178.149.243, partition 14, NODE_LOCAL, 8828 bytes)
16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.2 in stage 11.0 (TID 239) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 9]
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 241 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Starting task 24.3 in stage 11.0 (TID 242, 10.178.149.243, partition 24, NODE_LOCAL, 9185 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 242 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.3 in stage 11.0 (TID 240) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 10]
16/11/03 11:25:45 ERROR TaskSetManager: Task 22 in stage 11.0 failed 4 times; aborting job
16/11/03 11:25:45 INFO TaskSetManager: Starting task 0.0 in stage 12.0 (TID 243, 10.178.149.243, partition 0, NODE_LOCAL, 10016 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 243 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.3 in stage 11.0 (TID 241) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 11]
16/11/03 11:25:45 INFO TaskSchedulerImpl: Cancelling stage 12
16/11/03 11:25:45 INFO TaskSchedulerImpl: Stage 12 was cancelled
16/11/03 11:25:45 INFO TaskSetManager: Starting task 0.0 in stage 14.0 (TID 244, 10.178.149.243, partition 0, NODE_LOCAL, 7638 bytes)
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 244 on executor id: 4 hostname: 10.178.149.243.
16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.3 in stage 11.0 (TID 242) on executor 10.178.149.243: java.util.NoSuchElementException (None.get) [duplicate 12]
16/11/03 11:25:45 INFO DAGScheduler: ShuffleMapStage 12 (show at RNFBackTagger.scala:97) failed in 0.112 s
16/11/03 11:25:45 INFO TaskSchedulerImpl: Cancelling stage 14
16/11/03 11:25:45 INFO TaskSchedulerImpl: Stage 14 was cancelled
16/11/03 11:25:45 INFO DAGScheduler: ShuffleMapStage 14 (show at RNFBackTagger.scala:97) failed in 0.104 s
16/11/03 11:25:45 INFO TaskSchedulerImpl: Cancelling stage 11
16/11/03 11:25:45 INFO TaskSchedulerImpl: Stage 11 was cancelled
16/11/03 11:25:45 INFO DAGScheduler: ShuffleMapStage 11 (show at RNFBackTagger.scala:97) failed in 0.126 s
16/11/03 11:25:45 WARN TaskSetManager: Lost task 0.0 in stage 12.0 (TID 243, 10.178.149.243): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 INFO DAGScheduler: Job 7 failed: show at RNFBackTagger.scala:97, took 0.141681 s
16/11/03 11:25:45 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose tasks have all completed, from pool
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 22 in stage 11.0 failed 4 times, most recent failure: Lost task 22.3 in stage 11.0 (TID 240, 10.178.149.243): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
at org.apache.spark.sql.Dataset.show(Dataset.scala:526)
at com.knoldus.xml.RNFBackTagger$.main(RNFBackTagger.scala:97)
at com.knoldus.xml.RNFBackTagger.main(RNFBackTagger.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 WARN JobProgressListener: Task start for unknown stage 12
16/11/03 11:25:45 WARN TaskSetManager: Lost task 0.0 in stage 14.0 (TID 244, 10.178.149.243): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 INFO TaskSchedulerImpl: Removed TaskSet 14.0, whose tasks have all completed, from pool
16/11/03 11:25:45 INFO SparkContext: Invoking stop() from shutdown hook
16/11/03 11:25:45 WARN JobProgressListener: Task start for unknown stage 14
16/11/03 11:25:45 INFO SerialShutdownHooks: Successfully executed shutdown hook: Clearing session cache for C* connector
16/11/03 11:25:45 INFO TaskSetManager: Finished task 5.0 in stage 11.0 (TID 219) in 137 ms on 10.178.149.22 (1/35)
16/11/03 11:25:45 INFO SparkUI: Stopped Spark web UI at http://10.178.149.133:4040
16/11/03 11:25:45 INFO StandaloneSchedulerBackend: Shutting down all executors
16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/11/03 11:25:45 INFO MemoryStore: MemoryStore cleared
16/11/03 11:25:45 INFO BlockManager: BlockManager stopped
16/11/03 11:25:45 INFO BlockManagerMaster: BlockManagerMaster stopped
16/11/03 11:25:45 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:150)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:150)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/11/03 11:25:45 INFO SparkContext: Successfully stopped SparkContext
16/11/03 11:25:45 INFO ShutdownHookManager: Shutdown hook called
16/11/03 11:25:45 INFO ShutdownHookManager: Deleting directory /tmp/spark-c52a6da9-5702-4128-9950-805d5f9dd75e
Earlier I was not able to pinpoint the problem.
Then I tried removing unnecessary code, piece by piece.
That narrowed the problem down to this:
val groupedDF = selectedDF.groupBy("id").agg(collect_list("name"))
groupedDF.show
If I show selectedDF instead, it displays the correct result.
The Spark version I am using is 2.0.0. Please help me out and let me know what the problem is.
Link to the code:
https://gist.github.com/shiv4nsh/0c3f62e3afd95634a6061b405c774582
The show on line 19 prints fine, and the show on line 28 throws this exception.
Server configuration: Spark 2.0 running on an 8-core worker with 10 GB of memory, on CentOS.
Script for launching the application:
./bin/spark-submit --class com.knoldus.Application /root/code/newCode/project1/target/deployable.jar
Any help is appreciated!
Note: the code works fine in local mode. This error is thrown when I try to run it on a cluster.
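For reference, a minimal sketch of the failing pattern, stripped of everything else (the sample data here is made up; the real code is in the gist above):
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.collect_list

object GroupByRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("GroupByRepro").getOrCreate()
    import spark.implicits._

    // Stand-in for the selectedDF built in the gist
    val selectedDF = Seq((1, "a"), (1, "b"), (2, "c")).toDF("id", "name")

    selectedDF.show()  // works everywhere
    val groupedDF = selectedDF.groupBy("id").agg(collect_list("name"))
    groupedDF.show()   // this is the call that fails on the cluster

    spark.stop()
  }
}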
I had a similar issue, and it turned out to be because my application was creating a new SparkContext every time it tried to load certain classes in the executors. It's very likely the same problem in your case if the code the executors need in order to run certain steps lives in the same 'logical context' as the code that instantiates the SparkContext.
You need to make sure that your SparkContext is instantiated at most once, simply by restructuring your code.
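A minimal sketch of that restructuring, assuming a plain Scala application (the names here are illustrative, not from the question's code):
import org.apache.spark.{SparkConf, SparkContext}

object Contexts {
  // Initialized once, on first access, for the lifetime of the driver
  lazy val sc: SparkContext = {
    val conf = new SparkConf().setAppName("MyApp")
    new SparkContext(conf)
  }
}

object Job {
  def main(args: Array[String]): Unit = {
    // Every part of the driver shares the same context instead of
    // constructing a fresh SparkContext wherever one happens to be needed
    val count = Contexts.sc.parallelize(1 to 100).count()
    println(count)
    Contexts.sc.stop()
  }
}
On Spark 1.4+ you can get the same effect with SparkContext.getOrCreate(conf), which reuses an existing context instead of building a second one.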
I had a similar problem, too. It turned out I was inadvertently calling the SparkContext inside a UDAF (which runs inside the executors).
More details here: How to collect a single row dataframe and use fields as constants
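For illustration, a hedged sketch of that anti-pattern and the usual way around it (the lookup table and all names here are made up):
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object UdfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("UdfSketch").getOrCreate()
    import spark.implicits._

    val df = Seq(1, 2, 3).toDF("id")

    // Anti-pattern: a UDF/UDAF body that touches the driver-side context,
    // e.g. udf((id: Int) => spark.sparkContext.textFile("/ref").count()),
    // runs on executors where no usable SparkContext exists and blows up.

    // Instead, materialize the reference data on the driver and broadcast it:
    val lookup = spark.sparkContext.broadcast(Map(1 -> "one", 2 -> "two"))
    val toName = udf((id: Int) => lookup.value.getOrElse(id, "unknown"))

    df.withColumn("name", toName($"id")).show()
    spark.stop()
  }
}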

How to run spark-master with Eclipse, what am I doing wrong?

What I am trying to accomplish is the following:
Have Eclipse run the Spark code
Have the master set as "spark://spark-master:7077"
The spark-master is started on a virtual machine by going to the sbin directory and executing:
sh start-all.sh
and then
/bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
This is what the UI displays (screenshot omitted).
The version I have on a virtual machine is: Spark 1.3.1, Hadoop 2.6
In Eclipse (with Maven), I have installed spark-core_2.10, Spark 1.3.1.
When I set the master to "local", there are no errors.
When I try to run the simple Pi example with the master set to "spark://spark-master:7077", I get this error:
15/04/21 16:49:20 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, spark-master): java.lang.ClassNotFoundException: mavenj.testing123$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:65)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/04/21 16:49:20 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 2, spark-master, PROCESS_LOCAL, 1001329 bytes)
15/04/21 16:49:20 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) on executor spark-master: java.lang.ClassNotFoundException (mavenj.testing123$1) [duplicate 1]
15/04/21 16:49:21 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 3, spark-master, PROCESS_LOCAL, 1001329 bytes)
15/04/21 16:49:21 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 2) on executor spark-master: java.lang.ClassNotFoundException (mavenj.testing123$1) [duplicate 2]
15/04/21 16:49:21 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 4, spark-master, PROCESS_LOCAL, 1001329 bytes)
15/04/21 16:49:21 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 3) on executor spark-master: java.lang.ClassNotFoundException (mavenj.testing123$1) [duplicate 3]
15/04/21 16:49:21 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 5, spark-master, PROCESS_LOCAL, 1001329 bytes)
15/04/21 16:49:21 INFO TaskSetManager: Lost task 1.2 in stage 0.0 (TID 5) on executor spark-master: java.lang.ClassNotFoundException (mavenj.testing123$1) [duplicate 4]
15/04/21 16:49:21 INFO TaskSetManager: Starting task 1.3 in stage 0.0 (TID 6, spark-master, PROCESS_LOCAL, 1001329 bytes)
15/04/21 16:49:22 INFO TaskSetManager: Lost task 1.3 in stage 0.0 (TID 6) on executor spark-master: java.lang.ClassNotFoundException (mavenj.testing123$1) [duplicate 5]
15/04/21 16:49:22 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job
15/04/21 16:49:22 INFO TaskSchedulerImpl: Cancelling stage 0
15/04/21 16:49:22 INFO TaskSchedulerImpl: Stage 0 was cancelled
15/04/21 16:49:22 INFO DAGScheduler: Stage 0 (reduce at testing123.java:35) failed in 9.762 s
15/04/21 16:49:22 INFO DAGScheduler: Job 0 failed: reduce at testing123.java:35, took 9.907884 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, spark-master): java.lang.ClassNotFoundException: mavenj.testing123$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:65)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
To answer my own question (somehow I always find the answer whenever I ask on StackOverflow): to make it work, all the worker code (if I can call it that) needs to be packaged into a JAR first, together with all the other relevant JARs. After the SparkContext is initialized, it is just a matter of registering each JAR's path with:
sc.addJar("PATH TO JAR");
P.S. I have tested this with several versions and it worked; the latest version I tested with was 1.3.1.
EDIT: Also make sure that no ports are in conflict.
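A hedged sketch of how that looks from the driver side (the JAR paths are placeholders):
import org.apache.spark.{SparkConf, SparkContext}

object PiFromEclipse {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("PiFromEclipse")
      .setMaster("spark://spark-master:7077")
      // Ship the application classes to the workers; without this they
      // cannot resolve classes such as mavenj.testing123$1
      .setJars(Seq("/path/to/my-app.jar"))

    val sc = new SparkContext(conf)
    // Extra dependencies can also be registered after the fact:
    // sc.addJar("/path/to/some-dependency.jar")

    val n = 100000
    val count = sc.parallelize(1 to n).filter { _ =>
      val x = math.random * 2 - 1
      val y = math.random * 2 - 1
      x * x + y * y < 1
    }.count()
    println(s"Pi is roughly ${4.0 * count / n}")
    sc.stop()
  }
}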

Spark MLlib libsvm issues with data

I'm trying the demo from http://spark.apache.org/docs/1.2.1/mllib-linear-methods.html, using the Scala version of the example.
The demo worked fine as-is, but when I changed the data and the training step, it failed with:
15/05/05 16:32:02 INFO TaskSetManager: Starting task 0.0 in stage 12.0 (TID 21, localhost, PROCESS_LOCAL, 1447 bytes)
15/05/05 16:32:02 INFO TaskSetManager: Starting task 1.0 in stage 12.0 (TID 22, localhost, PROCESS_LOCAL, 1447 bytes)
15/05/05 16:32:02 INFO Executor: Running task 0.0 in stage 12.0 (TID 21)
15/05/05 16:32:02 INFO Executor: Running task 1.0 in stage 12.0 (TID 22)
15/05/05 16:32:02 INFO BlockManager: Found block rdd_7_1 locally
15/05/05 16:32:02 ERROR Executor: Exception in task 1.0 in stage 12.0 (TID 22)
java.lang.ArrayIndexOutOfBoundsException: -1
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:136)
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:106)
at org.apache.spark.mllib.optimization.HingeGradient.compute(Gradient.scala:313)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:192)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:190)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/05/05 16:32:02 INFO BlockManager: Found block rdd_7_0 locally
15/05/05 16:32:02 ERROR Executor: Exception in task 0.0 in stage 12.0 (TID 21)
java.lang.ArrayIndexOutOfBoundsException: -1
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:136)
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:106)
at org.apache.spark.mllib.optimization.HingeGradient.compute(Gradient.scala:313)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:192)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:190)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/05/05 16:32:02 WARN TaskSetManager: Lost task 1.0 in stage 12.0 (TID 22, localhost): java.lang.ArrayIndexOutOfBoundsException: -1
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:136)
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:106)
at org.apache.spark.mllib.optimization.HingeGradient.compute(Gradient.scala:313)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:192)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:190)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/05/05 16:32:02 ERROR TaskSetManager: Task 1 in stage 12.0 failed 1 times; aborting job
15/05/05 16:32:02 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose tasks have all completed, from pool
15/05/05 16:32:02 INFO TaskSetManager: Lost task 0.0 in stage 12.0 (TID 21) on executor localhost: java.lang.ArrayIndexOutOfBoundsException (-1) [duplicate 1]
15/05/05 16:32:02 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose tasks have all completed, from pool
15/05/05 16:32:02 INFO TaskSchedulerImpl: Cancelling stage 12
15/05/05 16:32:02 INFO DAGScheduler: Job 12 failed: treeAggregate at GradientDescent.scala:189, took 0.032101 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 12.0 failed 1 times, most recent failure: Lost task 1.0 in stage 12.0 (TID 22, localhost): java.lang.ArrayIndexOutOfBoundsException: -1
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:136)
at org.apache.spark.mllib.linalg.BLAS$.dot(BLAS.scala:106)
at org.apache.spark.mllib.optimization.HingeGradient.compute(Gradient.scala:313)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:192)
at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:190)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$28.apply(RDD.scala:988)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$29.apply(RDD.scala:989)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1191)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Here is my test data file: https://github.com/hermitD/temp
I've used it to train with libsvm-tools under Linux and it works, and checking the format with the libsvm Python tool shows it is OK. I just don't know why it errors here.
After doing some tests I finally solved it, and I'm writing it up here for other people who hit this question.
Here's an example of the data format error I faced:
0 0:0 1:0 2:1
1 1:1 3:2
The zero-based feature indices (the 0:0 here, and the 1:0/1:1 entries alongside it) are the reason for the ArrayIndexOutOfBoundsException. If you face the same problem, delete them from your data or re-index it.
Since the same data worked with libsvm-tools, I guess Spark MLlib just implements the format a bit differently (it expects feature indices to start at 1).
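A hedged sketch of re-indexing such a file with Spark itself before training (the paths are placeholders): it shifts every feature index up by one so the smallest index becomes 1, which is what MLlib's libSVM loader expects.
import org.apache.spark.{SparkConf, SparkContext}

object FixLibSVMIndices {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FixLibSVMIndices"))

    val fixed = sc.textFile("/path/to/example.libsvm").map { line =>
      val parts = line.trim.split("\\s+")
      // parts(0) is the label; the rest are index:value pairs
      val shifted = parts.tail.map { pair =>
        val Array(idx, value) = pair.split(":")
        s"${idx.toInt + 1}:$value"
      }
      (parts.head +: shifted).mkString(" ")
    }

    fixed.saveAsTextFile("/path/to/fixed.libsvm")
    sc.stop()
  }
}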
I had the same problem with the libSVM format and MLlib. In my case, the first feature was indexed as 0 instead of 1. XGBoost had no problem with that, but both Weka and Spark MLlib failed with the same ArrayIndexOutOfBoundsException: -1.
The solution in this case is to add 1 to each feature index so that they start at 1 instead of 0. The easiest way to do it in Python is:
from sklearn.datasets import load_svmlight_file, dump_svmlight_file

# Load the zero-based file, shift every feature index up by one,
# and write the result back out in one-based form
X, y = load_svmlight_file('example.libsvm')
X.indices = X.indices + 1
dump_svmlight_file(X, y, 'fixed.libsvm')

Unable to run Spark with Mesos

I set up Spark 0.9.1 to run on Mesos 0.13.0 using the steps mentioned here. The Mesos UI shows two workers registered. I want to run these commands in the Spark shell:
scala> val data = 1 to 10000
data: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170...

scala> val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:14
Now when I run the collect method, the following error occurs:
scala> distData.filter(_ < 10).collect()
14/06/03 19:54:55 INFO SparkContext: Starting job: collect at <console>:17
14/06/03 19:54:55 INFO DAGScheduler: Got job 0 (collect at <console>:17) with 8 output partitions (allowLocal=false)
14/06/03 19:54:55 INFO DAGScheduler: Final stage: Stage 0 (collect at <console>:17)
14/06/03 19:54:55 INFO DAGScheduler: Parents of final stage: List()
14/06/03 19:54:55 INFO DAGScheduler: Missing parents: List()
14/06/03 19:54:55 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[1] at filter at <console>:17), which has no missing parents
14/06/03 19:54:55 INFO DAGScheduler: Submitting 8 missing tasks from Stage 0 (FilteredRDD[1] at filter at <console>:17)
14/06/03 19:54:55 INFO TaskSchedulerImpl: Adding task set 0.0 with 8 tasks
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:0 as 1338 bytes in 8 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:1 as TID 1 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:1 as 1338 bytes in 0 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:2 as TID 2 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:2 as 1338 bytes in 0 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:3 as TID 3 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:3 as 1338 bytes in 1 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:4 as TID 4 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:4 as 1338 bytes in 0 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:5 as TID 5 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:5 as 1338 bytes in 0 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:6 as TID 6 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:6 as 1338 bytes in 0 ms
14/06/03 19:54:55 INFO TaskSetManager: Starting task 0.0:7 as TID 7 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:55 INFO TaskSetManager: Serialized task 0.0:7 as 1338 bytes in 0 ms
14/06/03 19:54:56 INFO TaskSetManager: Re-queueing tasks for 201406031732-3213994176-5050-6320-10 from TaskSet 0.0
14/06/03 19:54:56 WARN TaskSetManager: Lost TID 5 (task 0.0:5)
14/06/03 19:54:56 WARN TaskSetManager: Lost TID 7 (task 0.0:7)
14/06/03 19:54:56 WARN TaskSetManager: Lost TID 1 (task 0.0:1)
14/06/03 19:54:56 WARN TaskSetManager: Lost TID 3 (task 0.0:3)
14/06/03 19:54:56 INFO DAGScheduler: Executor lost: 201406031732-3213994176-5050-6320-10 (epoch 0)
14/06/03 19:54:56 INFO BlockManagerMasterActor: Trying to remove executor 201406031732-3213994176-5050-6320-10 from BlockManagerMaster.
14/06/03 19:54:56 INFO BlockManagerMaster: Removed 201406031732-3213994176-5050-6320-10 successfully in removeExecutor
14/06/03 19:54:56 INFO TaskSetManager: Starting task 0.0:3 as TID 8 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:56 INFO TaskSetManager: Serialized task 0.0:3 as 1338 bytes in 0 ms
14/06/03 19:54:56 INFO DAGScheduler: Host gained which was in lost list earlier: host-DSRV04.host
14/06/03 19:54:56 INFO TaskSetManager: Starting task 0.0:1 as TID 9 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:56 INFO TaskSetManager: Serialized task 0.0:1 as 1338 bytes in 0 ms
14/06/03 19:54:56 INFO TaskSetManager: Starting task 0.0:7 as TID 10 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:56 INFO TaskSetManager: Serialized task 0.0:7 as 1338 bytes in 0 ms
14/06/03 19:54:56 INFO TaskSetManager: Starting task 0.0:5 as TID 11 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:56 INFO TaskSetManager: Serialized task 0.0:5 as 1338 bytes in 0 ms
14/06/03 19:54:57 INFO TaskSetManager: Re-queueing tasks for 201406031732-3213994176-5050-6320-11 from TaskSet 0.0
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 8 (task 0.0:3)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 2 (task 0.0:2)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 4 (task 0.0:4)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 10 (task 0.0:7)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 6 (task 0.0:6)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 0 (task 0.0:0)
14/06/03 19:54:57 INFO DAGScheduler: Executor lost: 201406031732-3213994176-5050-6320-11 (epoch 1)
14/06/03 19:54:57 INFO BlockManagerMasterActor: Trying to remove executor 201406031732-3213994176-5050-6320-11 from BlockManagerMaster.
14/06/03 19:54:57 INFO BlockManagerMaster: Removed 201406031732-3213994176-5050-6320-11 successfully in removeExecutor
14/06/03 19:54:57 INFO DAGScheduler: Host gained which was in lost list earlier: host-DSRV05.host
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:0 as TID 12 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:0 as 1338 bytes in 1 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:6 as TID 13 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:6 as 1338 bytes in 0 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:7 as TID 14 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:7 as 1338 bytes in 1 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:4 as TID 15 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:4 as 1338 bytes in 0 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:2 as TID 16 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:2 as 1338 bytes in 0 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:3 as TID 17 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:3 as 1338 bytes in 1 ms
14/06/03 19:54:57 INFO TaskSetManager: Re-queueing tasks for 201406031732-3213994176-5050-6320-11 from TaskSet 0.0
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 14 (task 0.0:7)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 16 (task 0.0:2)
14/06/03 19:54:57 WARN TaskSetManager: Lost TID 12 (task 0.0:0)
14/06/03 19:54:57 INFO DAGScheduler: Executor lost: 201406031732-3213994176-5050-6320-11 (epoch 2)
14/06/03 19:54:57 INFO BlockManagerMasterActor: Trying to remove executor 201406031732-3213994176-5050-6320-11 from BlockManagerMaster.
14/06/03 19:54:57 INFO BlockManagerMaster: Removed 201406031732-3213994176-5050-6320-11 successfully in removeExecutor
14/06/03 19:54:57 INFO DAGScheduler: Host gained which was in lost list earlier: host-DSRV05.host
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:0 as TID 18 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:0 as 1338 bytes in 0 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:2 as TID 19 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:2 as 1338 bytes in 0 ms
14/06/03 19:54:57 INFO TaskSetManager: Starting task 0.0:7 as TID 20 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:57 INFO TaskSetManager: Serialized task 0.0:7 as 1338 bytes in 0 ms
14/06/03 19:54:58 INFO TaskSetManager: Re-queueing tasks for 201406031732-3213994176-5050-6320-10 from TaskSet 0.0
14/06/03 19:54:58 WARN TaskSetManager: Lost TID 17 (task 0.0:3)
14/06/03 19:54:58 WARN TaskSetManager: Lost TID 11 (task 0.0:5)
14/06/03 19:54:58 WARN TaskSetManager: Lost TID 13 (task 0.0:6)
14/06/03 19:54:58 WARN TaskSetManager: Lost TID 9 (task 0.0:1)
14/06/03 19:54:58 WARN TaskSetManager: Lost TID 15 (task 0.0:4)
14/06/03 19:54:58 INFO DAGScheduler: Executor lost: 201406031732-3213994176-5050-6320-10 (epoch 3)
14/06/03 19:54:58 INFO BlockManagerMasterActor: Trying to remove executor 201406031732-3213994176-5050-6320-10 from BlockManagerMaster.
14/06/03 19:54:58 INFO BlockManagerMaster: Removed 201406031732-3213994176-5050-6320-10 successfully in removeExecutor
14/06/03 19:54:58 INFO DAGScheduler: Host gained which was in lost list earlier: host-DSRV04.host
14/06/03 19:54:58 INFO TaskSetManager: Starting task 0.0:4 as TID 21 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:58 INFO TaskSetManager: Serialized task 0.0:4 as 1338 bytes in 0 ms
14/06/03 19:54:58 INFO TaskSetManager: Starting task 0.0:1 as TID 22 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:58 INFO TaskSetManager: Serialized task 0.0:1 as 1338 bytes in 0 ms
14/06/03 19:54:58 INFO TaskSetManager: Starting task 0.0:6 as TID 23 on executor 201406031732-3213994176-5050-6320-11: host-DSRV05.host (PROCESS_LOCAL)
14/06/03 19:54:58 INFO TaskSetManager: Serialized task 0.0:6 as 1338 bytes in 0 ms
14/06/03 19:54:58 INFO TaskSetManager: Starting task 0.0:5 as TID 24 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:58 INFO TaskSetManager: Serialized task 0.0:5 as 1338 bytes in 1 ms
14/06/03 19:54:58 INFO TaskSetManager: Starting task 0.0:3 as TID 25 on executor 201406031732-3213994176-5050-6320-10: host-DSRV04.host (PROCESS_LOCAL)
14/06/03 19:54:58 INFO TaskSetManager: Serialized task 0.0:3 as 1338 bytes in 0 ms
14/06/03 19:54:59 INFO TaskSetManager: Re-queueing tasks for 201406031732-3213994176-5050-6320-11 from TaskSet 0.0
14/06/03 19:54:59 WARN TaskSetManager: Lost TID 23 (task 0.0:6)
14/06/03 19:54:59 WARN TaskSetManager: Lost TID 20 (task 0.0:7)
14/06/03 19:54:59 ERROR TaskSetManager: Task 0.0:7 failed 4 times; aborting job
14/06/03 19:54:59 INFO DAGScheduler: Failed to run collect at <console>:17
14/06/03 19:54:59 INFO DAGScheduler: Executor lost: 201406031732-3213994176-5050-6320-11 (epoch 4)
14/06/03 19:54:59 INFO BlockManagerMasterActor: Trying to remove executor 201406031732-3213994176-5050-6320-11 from BlockManagerMaster.
14/06/03 19:54:59 INFO BlockManagerMaster: Removed 201406031732-3213994176-5050-6320-11 successfully in removeExecutor
14/06/03 19:54:59 INFO DAGScheduler: Host gained which was in lost list earlier: host-DSRV05.host
org.apache.spark.SparkException: Job aborted: Task 0.0:7 failed 4 times (most recent failure: unknown)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
scala> 14/06/03 19:55:00 INFO TaskSetManager: Re-queueing tasks for 201406031732-3213994176-5050-6320-10 from TaskSet 0.0
14/06/03 19:55:00 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/06/03 19:55:00 INFO DAGScheduler: Executor lost: 201406031732-3213994176-5050-6320-10 (epoch 5)
14/06/03 19:55:00 INFO BlockManagerMasterActor: Trying to remove executor 201406031732-3213994176-5050-6320-10 from BlockManagerMaster.
14/06/03 19:55:00 INFO BlockManagerMaster: Removed 201406031732-3213994176-5050-6320-10 successfully in removeExecutor
14/06/03 19:55:00 INFO DAGScheduler: Host gained which was in lost list earlier: host-DSRV04.host
I've checked my configuration of spark many times and it looks fine to me. Any ideas what might have gone wrong?
As it turns out, my tar file wasn't created properly.
I recreated it and it's working fine now.
Sorry for the trouble.
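For anyone who hits the same symptom: on Mesos the slaves download and unpack the Spark tarball to launch executors, so a corrupt archive kills every executor right after it registers, which matches the repeated task losses above. A sketch of the relevant setting (the URIs and hostnames are placeholders):
import org.apache.spark.{SparkConf, SparkContext}

object MesosJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("MesosJob")
      .setMaster("mesos://mesos-master:5050") // placeholder master URL
      // The archive every Mesos slave fetches to start an executor;
      // it must be a valid, complete tarball of the Spark distribution
      .set("spark.executor.uri", "hdfs://namenode/dist/spark-0.9.1.tar.gz")

    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10000).filter(_ < 10).collect().mkString(", "))
    sc.stop()
  }
}
Checking the archive with tar -tzf spark-0.9.1.tar.gz before uploading it is a quick way to catch this kind of problem.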