Sample Spark CSV and JSON program not running on Windows [duplicate] - scala

This question already has answers here:
Failed to locate the winutils binary in the hadoop binary path
(17 answers)
Closed 6 years ago.
I am running Spark on a Windows 10 machine and am trying to run the Spark program below:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql._
import org.apache.spark.sql.SQLImplicits
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.TypedColumn
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoders
import com.databricks.spark.csv
object json1 {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new org.apache.spark.SparkContext(conf)
    val sqlc = new org.apache.spark.sql.SQLContext(sc)
    val NyseDF = sqlc.load("com.databricks.spark.csv", Map("path" -> args(0), "header" -> "true"))
    NyseDF.registerTempTable("NYSE")
    NyseDF.printSchema()
  }
}
When I run the program through Run Application mode in Eclipse, passing the argument
src/test/resources/demo.text
it fails with the error below.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/10 11:02:18 INFO SparkContext: Running Spark version 1.6.0
16/10/10 11:02:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/10 11:02:18 INFO SecurityManager: Changing view acls to: subho
16/10/10 11:02:18 INFO SecurityManager: Changing modify acls to: subho
16/10/10 11:02:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(subho); users with modify permissions: Set(subho)
16/10/10 11:02:19 INFO Utils: Successfully started service 'sparkDriver' on port 61108.
16/10/10 11:02:20 INFO Slf4jLogger: Slf4jLogger started
16/10/10 11:02:20 INFO Remoting: Starting remoting
16/10/10 11:02:20 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.1.116:61121]
16/10/10 11:02:20 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 61121.
16/10/10 11:02:20 INFO SparkEnv: Registering MapOutputTracker
16/10/10 11:02:20 INFO SparkEnv: Registering BlockManagerMaster
16/10/10 11:02:21 INFO DiskBlockManager: Created local directory at C:\Users\subho\AppData\Local\Temp\blockmgr-69afda02-ccd1-41d1-aa25-830ba366a75c
16/10/10 11:02:21 INFO MemoryStore: MemoryStore started with capacity 1128.4 MB
16/10/10 11:02:21 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/10 11:02:21 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/10 11:02:21 INFO SparkUI: Started SparkUI at http://192.168.1.116:4040
16/10/10 11:02:21 INFO Executor: Starting executor ID driver on host localhost
16/10/10 11:02:21 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 61132.
16/10/10 11:02:21 INFO NettyBlockTransferService: Server created on 61132
16/10/10 11:02:21 INFO BlockManagerMaster: Trying to register BlockManager
16/10/10 11:02:21 INFO BlockManagerMasterEndpoint: Registering block manager localhost:61132 with 1128.4 MB RAM, BlockManagerId(driver, localhost, 61132)
16/10/10 11:02:21 INFO BlockManagerMaster: Registered BlockManager
16/10/10 11:02:23 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.7 KB, free 107.7 KB)
16/10/10 11:02:23 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 9.8 KB, free 117.5 KB)
16/10/10 11:02:23 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:61132 (size: 9.8 KB, free: 1128.4 MB)
16/10/10 11:02:23 INFO SparkContext: Created broadcast 0 from textFile at TextFile.scala:30
16/10/10 11:02:23 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:146)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
at json1$.main(json1.scala:22)
at json1.main(json1.scala)
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
at json1$.main(json1.scala:22)
at json1.main(json1.scala)
16/10/10 11:02:23 INFO SparkContext: Invoking stop() from shutdown hook
16/10/10 11:02:23 INFO SparkUI: Stopped Spark web UI at http://192.168.1.116:4040
16/10/10 11:02:23 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/10 11:02:23 INFO MemoryStore: MemoryStore cleared
16/10/10 11:02:23 INFO BlockManager: BlockManager stopped
16/10/10 11:02:24 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/10 11:02:24 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/10 11:02:24 INFO SparkContext: Successfully stopped SparkContext
16/10/10 11:02:24 INFO ShutdownHookManager: Shutdown hook called
16/10/10 11:02:24 INFO ShutdownHookManager: Deleting directory C:\Users\subho\AppData\Local\Temp\spark-7f53ea20-a38c-46d5-8476-a1ae040736ac
Below is the main error message:
Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text
The file does exist at that location.
When I ran the program below, it ran successfully:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql._
import org.apache.spark.sql.SQLImplicits
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.TypedColumn
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoders
import com.databricks.spark.csv
object json1 {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new org.apache.spark.SparkContext(conf)
    val sqlc = new org.apache.spark.sql.SQLContext(sc)
    /* val NyseDF = sqlc.load("com.databricks.spark.csv", Map("path" -> args(0), "header" -> "true"))
    NyseDF.registerTempTable("NYSE")
    NyseDF.printSchema()
    print(sqlc.sql("select distinct(symbol) from NYSE").collect().toList) */
    val PersonDF = sqlc.jsonFile("src/test/resources/Person.json")
    // PersonDF.printSchema()
    PersonDF.registerTempTable("Person")
    sqlc.sql("select * from Person where age < 60").collect().foreach(print)
  }
}
Below is the log file.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/10 11:54:12 INFO SparkContext: Running Spark version 1.6.0
16/10/10 11:54:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/10 11:54:13 INFO SecurityManager: Changing view acls to: subho
16/10/10 11:54:13 INFO SecurityManager: Changing modify acls to: subho
16/10/10 11:54:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(subho); users with modify permissions: Set(subho)
16/10/10 11:54:14 INFO Utils: Successfully started service 'sparkDriver' on port 51113.
16/10/10 11:54:14 INFO Slf4jLogger: Slf4jLogger started
16/10/10 11:54:14 INFO Remoting: Starting remoting
16/10/10 11:54:15 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.1.116:51126]
16/10/10 11:54:15 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 51126.
16/10/10 11:54:15 INFO SparkEnv: Registering MapOutputTracker
16/10/10 11:54:15 INFO SparkEnv: Registering BlockManagerMaster
16/10/10 11:54:15 INFO DiskBlockManager: Created local directory at C:\Users\subho\AppData\Local\Temp\blockmgr-a52a5d5a-075b-4859-8434-935fdaba8538
16/10/10 11:54:15 INFO MemoryStore: MemoryStore started with capacity 1128.4 MB
16/10/10 11:54:15 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/10 11:54:15 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/10 11:54:15 INFO SparkUI: Started SparkUI at http://192.168.1.116:4040
16/10/10 11:54:15 INFO Executor: Starting executor ID driver on host localhost
16/10/10 11:54:15 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51137.
16/10/10 11:54:15 INFO NettyBlockTransferService: Server created on 51137
16/10/10 11:54:15 INFO BlockManagerMaster: Trying to register BlockManager
16/10/10 11:54:15 INFO BlockManagerMasterEndpoint: Registering block manager localhost:51137 with 1128.4 MB RAM, BlockManagerId(driver, localhost, 51137)
16/10/10 11:54:15 INFO BlockManagerMaster: Registered BlockManager
16/10/10 11:54:17 INFO JSONRelation: Listing file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/Person.json on driver
16/10/10 11:54:17 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:447)
at org.apache.spark.sql.execution.datasources.json.JSONRelation.org$apache$spark$sql$execution$datasources$json$JSONRelation$$createBaseRdd(JSONRelation.scala:98)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4$$anonfun$apply$1.apply(JSONRelation.scala:115)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4$$anonfun$apply$1.apply(JSONRelation.scala:115)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:115)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:109)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema$lzycompute(JSONRelation.scala:109)
at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema(JSONRelation.scala:108)
at org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:636)
at org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:635)
at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:1011)
at json1$.main(json1.scala:28)
at json1.main(json1.scala)
16/10/10 11:54:18 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 128.0 KB, free 128.0 KB)
16/10/10 11:54:18 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.1 KB, free 142.1 KB)
16/10/10 11:54:18 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:51137 (size: 14.1 KB, free: 1128.4 MB)
16/10/10 11:54:18 INFO SparkContext: Created broadcast 0 from jsonFile at json1.scala:28
16/10/10 11:54:18 INFO FileInputFormat: Total input paths to process : 1
16/10/10 11:54:18 INFO SparkContext: Starting job: jsonFile at json1.scala:28
16/10/10 11:54:18 INFO DAGScheduler: Got job 0 (jsonFile at json1.scala:28) with 2 output partitions
16/10/10 11:54:18 INFO DAGScheduler: Final stage: ResultStage 0 (jsonFile at json1.scala:28)
16/10/10 11:54:18 INFO DAGScheduler: Parents of final stage: List()
16/10/10 11:54:18 INFO DAGScheduler: Missing parents: List()
16/10/10 11:54:18 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at jsonFile at json1.scala:28), which has no missing parents
16/10/10 11:54:18 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.2 KB, free 146.3 KB)
16/10/10 11:54:18 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.4 KB, free 148.6 KB)
16/10/10 11:54:18 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:51137 (size: 2.4 KB, free: 1128.4 MB)
16/10/10 11:54:18 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/10/10 11:54:18 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at jsonFile at json1.scala:28)
16/10/10 11:54:18 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/10/10 11:54:18 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2113 bytes)
16/10/10 11:54:18 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2113 bytes)
16/10/10 11:54:18 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/10/10 11:54:18 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/10/10 11:54:18 INFO HadoopRDD: Input split: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/Person.json:0+92
16/10/10 11:54:18 INFO HadoopRDD: Input split: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/Person.json:92+93
16/10/10 11:54:18 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/10/10 11:54:18 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/10/10 11:54:18 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/10/10 11:54:18 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/10/10 11:54:18 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/10/10 11:54:18 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/10/10 11:54:19 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2886 bytes result sent to driver
16/10/10 11:54:19 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2886 bytes result sent to driver
16/10/10 11:54:19 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1287 ms on localhost (1/2)
16/10/10 11:54:19 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1264 ms on localhost (2/2)
16/10/10 11:54:19 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/10/10 11:54:19 INFO DAGScheduler: ResultStage 0 (jsonFile at json1.scala:28) finished in 1.314 s
16/10/10 11:54:19 INFO DAGScheduler: Job 0 finished: jsonFile at json1.scala:28, took 1.413653 s
16/10/10 11:54:20 INFO BlockManagerInfo: Removed broadcast_1_piece0 on localhost:51137 in memory (size: 2.4 KB, free: 1128.4 MB)
16/10/10 11:54:20 INFO ContextCleaner: Cleaned accumulator 1
16/10/10 11:54:20 INFO BlockManagerInfo: Removed broadcast_0_piece0 on localhost:51137 in memory (size: 14.1 KB, free: 1128.4 MB)
16/10/10 11:54:21 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 59.6 KB, free 59.6 KB)
16/10/10 11:54:21 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 13.8 KB, free 73.3 KB)
16/10/10 11:54:21 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:51137 (size: 13.8 KB, free: 1128.4 MB)
16/10/10 11:54:21 INFO SparkContext: Created broadcast 2 from collect at json1.scala:34
16/10/10 11:54:21 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 128.0 KB, free 201.3 KB)
16/10/10 11:54:21 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 14.1 KB, free 215.4 KB)
16/10/10 11:54:21 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:51137 (size: 14.1 KB, free: 1128.3 MB)
16/10/10 11:54:21 INFO SparkContext: Created broadcast 3 from collect at json1.scala:34
16/10/10 11:54:21 INFO FileInputFormat: Total input paths to process : 1
16/10/10 11:54:21 INFO SparkContext: Starting job: collect at json1.scala:34
16/10/10 11:54:21 INFO DAGScheduler: Got job 1 (collect at json1.scala:34) with 2 output partitions
16/10/10 11:54:21 INFO DAGScheduler: Final stage: ResultStage 1 (collect at json1.scala:34)
16/10/10 11:54:21 INFO DAGScheduler: Parents of final stage: List()
16/10/10 11:54:21 INFO DAGScheduler: Missing parents: List()
16/10/10 11:54:21 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[9] at collect at json1.scala:34), which has no missing parents
16/10/10 11:54:21 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 7.6 KB, free 223.0 KB)
16/10/10 11:54:21 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 4.1 KB, free 227.1 KB)
16/10/10 11:54:21 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:51137 (size: 4.1 KB, free: 1128.3 MB)
16/10/10 11:54:21 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1006
16/10/10 11:54:21 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[9] at collect at json1.scala:34)
16/10/10 11:54:21 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
16/10/10 11:54:21 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2113 bytes)
16/10/10 11:54:21 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, partition 1,PROCESS_LOCAL, 2113 bytes)
16/10/10 11:54:21 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
16/10/10 11:54:21 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
16/10/10 11:54:21 INFO HadoopRDD: Input split: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/Person.json:92+93
16/10/10 11:54:21 INFO HadoopRDD: Input split: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/Person.json:0+92
16/10/10 11:54:22 INFO BlockManagerInfo: Removed broadcast_2_piece0 on localhost:51137 in memory (size: 13.8 KB, free: 1128.4 MB)
16/10/10 11:54:22 INFO GenerateUnsafeProjection: Code generated in 548.352258 ms
16/10/10 11:54:22 INFO GeneratePredicate: Code generated in 5.245214 ms
16/10/10 11:54:22 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 2283 bytes result sent to driver
16/10/10 11:54:22 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 2536 bytes result sent to driver
16/10/10 11:54:22 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 755 ms on localhost (1/2)
16/10/10 11:54:22 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 759 ms on localhost (2/2)
16/10/10 11:54:22 INFO DAGScheduler: ResultStage 1 (collect at json1.scala:34) finished in 0.760 s
16/10/10 11:54:22 INFO DAGScheduler: Job 1 finished: collect at json1.scala:34, took 0.779652 s
16/10/10 11:54:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
[53,Barack,Obama]16/10/10 11:54:22 INFO SparkContext: Invoking stop() from shutdown hook
16/10/10 11:54:22 INFO SparkUI: Stopped Spark web UI at http://192.168.1.116:4040
16/10/10 11:54:22 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/10 11:54:22 INFO MemoryStore: MemoryStore cleared
16/10/10 11:54:22 INFO BlockManager: BlockManager stopped
16/10/10 11:54:22 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/10 11:54:22 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/10 11:54:22 INFO SparkContext: Successfully stopped SparkContext
16/10/10 11:54:22 INFO ShutdownHookManager: Shutdown hook called
16/10/10 11:54:22 INFO ShutdownHookManager: Deleting directory C:\Users\subho\AppData\Local\Temp\spark-6cab6329-83f1-4af4-b64c-c869550405a4
16/10/10 11:54:22 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
Thanks and Regards,

The important section of the stacktrace is here:
ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
One possibility is to download winutils.exe (e.g. from here), put it in a folder called bin (as a subdirectory of your home directory, e.g. C:\Users\XXX\bin\winutils.exe) and then add this line at the beginning of your code:
System.setProperty("hadoop.home.dir", raw"C:\Users\XXX")
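For example, a minimal sketch of that fix applied to the program from the question (C:\Users\XXX is a placeholder for your own home directory, and winutils.exe must sit in a bin subfolder beneath it):

object json1 {
  def main(args: Array[String]) {
    // Set this before any Hadoop class is loaded; Hadoop resolves
    // %HADOOP_HOME%\bin\winutils.exe from this property.
    System.setProperty("hadoop.home.dir", raw"C:\Users\XXX")
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local[2]")
    val sc = new org.apache.spark.SparkContext(conf)
    // ... rest of the program unchanged
  }
}

Setting the HADOOP_HOME environment variable to the same directory achieves the same thing.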

Related

Kryo setWarnUnregisteredClasses to true showing nothing in spark config

val conf = new SparkConf()
.setAppName("example")
.setMaster("local[*]")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.set("setWarnUnregisteredClasses","true")
When registrationRequired is set to true, it throws an exception saying class Person is not registered, and likewise that "org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage" is not registered.
So registrationRequired is false by default, and with setWarnUnregisteredClasses set to true a warning message should be shown for every unregistered class encountered, as described in the documentation (https://github.com/EsotericSoftware/kryo#serializer-framework). But nothing about serialization is shown in the logs.
What I am trying to do is get a list of all unregistered classes into my logs by setting the property .set("setWarnUnregisteredClasses", "true").
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/12/10 15:56:09 WARN Utils: Your hostname, knoldus-Vostro-3546 resolves to a loopback address: 127.0.1.1; using 192.168.1.113 instead (on interface enp7s0)
19/12/10 15:56:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/12/10 15:56:10 INFO SparkContext: Running Spark version 2.4.4
19/12/10 15:56:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/10 15:56:12 INFO SparkContext: Submitted application: kyroExample
19/12/10 15:56:14 INFO SecurityManager: Changing view acls to: knoldus
19/12/10 15:56:14 INFO SecurityManager: Changing modify acls to: knoldus
19/12/10 15:56:14 INFO SecurityManager: Changing view acls groups to:
19/12/10 15:56:14 INFO SecurityManager: Changing modify acls groups to:
19/12/10 15:56:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(knoldus); groups with view permissions: Set(); users with modify permissions: Set(knoldus); groups with modify permissions: Set()
19/12/10 15:56:17 INFO Utils: Successfully started service 'sparkDriver' on port 36235.
19/12/10 15:56:17 INFO SparkEnv: Registering MapOutputTracker
19/12/10 15:56:18 INFO SparkEnv: Registering BlockManagerMaster
19/12/10 15:56:18 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/12/10 15:56:18 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/12/10 15:56:18 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-956a186e-cfbd-4ad2-b531-9f46bff96984
19/12/10 15:56:18 INFO MemoryStore: MemoryStore started with capacity 870.9 MB
19/12/10 15:56:18 INFO SparkEnv: Registering OutputCommitCoordinator
19/12/10 15:56:19 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/12/10 15:56:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.113:4040
19/12/10 15:56:19 INFO Executor: Starting executor ID driver on host localhost
19/12/10 15:56:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41737.
19/12/10 15:56:19 INFO NettyBlockTransferService: Server created on 192.168.1.113:41737
19/12/10 15:56:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/12/10 15:56:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.113:41737 with 870.9 MB RAM, BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:21 INFO SparkContext: Starting job: take at KyroExample.scala:28
19/12/10 15:56:21 INFO DAGScheduler: Got job 0 (take at KyroExample.scala:28) with 1 output partitions
19/12/10 15:56:21 INFO DAGScheduler: Final stage: ResultStage 0 (take at KyroExample.scala:28)
19/12/10 15:56:21 INFO DAGScheduler: Parents of final stage: List()
19/12/10 15:56:21 INFO DAGScheduler: Missing parents: List()
19/12/10 15:56:21 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at filter at KyroExample.scala:24), which has no missing parents
19/12/10 15:56:21 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.0 KB, free 870.9 MB)
19/12/10 15:56:22 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1730.0 B, free 870.9 MB)
19/12/10 15:56:22 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.113:41737 (size: 1730.0 B, free: 870.9 MB)
19/12/10 15:56:22 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161
19/12/10 15:56:22 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at filter at KyroExample.scala:24) (first 15 tasks are for partitions Vector(0))
19/12/10 15:56:22 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
19/12/10 15:56:22 WARN TaskSetManager: Stage 0 contains a task of very large size (243 KB). The maximum recommended task size is 100 KB.
19/12/10 15:56:22 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 249045 bytes)
19/12/10 15:56:22 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
19/12/10 15:56:23 INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 293.3 KB, free 870.6 MB)
19/12/10 15:56:23 INFO BlockManagerInfo: Added rdd_1_0 in memory on 192.168.1.113:41737 (size: 293.3 KB, free: 870.6 MB)
19/12/10 15:56:23 INFO Executor: 1 block locks were not released by TID = 0:
[rdd_1_0]
19/12/10 15:56:23 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1132 bytes result sent to driver
19/12/10 15:56:23 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 924 ms on localhost (executor driver) (1/1)
19/12/10 15:56:23 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/12/10 15:56:23 INFO DAGScheduler: ResultStage 0 (take at KyroExample.scala:28) finished in 1.733 s
19/12/10 15:56:23 INFO DAGScheduler: Job 0 finished: take at KyroExample.scala:28, took 1.895530 s
There are no "unregistered class encountered" messages in the logs. Why?
I had the same problem.
The issue is that setWarnUnregisteredClasses is a Kryo configuration that is currently (as of Spark 2.4.4, which I use) not exposed through Spark.
You have to set the specific configuration in Kryo.
The workaround I used was to have a custom KryoRegistrator.
Then I used it in this way:
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.setRegistrationRequired(false)
    kryo.setWarnUnregisteredClasses(true)
    // ... register classes here
  }
}
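For Spark to actually pick the registrator up, it also has to be wired in through the spark.kryo.registrator setting; for instance, following the conf from the question:

val conf = new SparkConf()
  .setAppName("example")
  .setMaster("local[*]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // point Spark at the custom registrator defined above
  .set("spark.kryo.registrator", classOf[MyKryoRegistrator].getName)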
You are using Kryo registration, so custom and other classes need to be registered with Kryo, and both classes should implement a serializable interface.
setWarnUnregisteredClasses will give warnings, while conf.set("spark.kryo.registrationRequired", "true") throws an exception for classes that are not registered.
You have to register Person and TaskCommitMessage, for example:
conf.registerKryoClasses(Array(classOf[Person]))
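TaskCommitMessage is Spark-internal and cannot be referenced with classOf from user code, so one way (a sketch, registering it reflectively by name) is:

conf.registerKryoClasses(Array(
  classOf[Person],
  // Spark-internal class, looked up reflectively by its fully qualified name
  Class.forName("org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage")
))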

MongoDB Spark Connector : mongo-spark cannot find collection

I am getting an error while trying to read data from a collection.
My MongoDB instance is hosted at 192.168.1.2, while my Spark instance is hosted at 192.168.1.1. The code is:
package org.sparkexample;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import org.bson.Document;

import com.mongodb.spark.MongoSpark;
import com.mongodb.spark.rdd.api.java.JavaMongoRDD;

public class WordCountTask {
    public static void main(String[] args) {
        System.out.println("arg : " + args[0]);
        //checkArgument(args.length > 1, "Please provide the path of input file as first parameter.");
        new WordCountTask().run(args[0]);
    }

    public void run(String inputFilePath) {
        SparkSession spark = SparkSession.builder()
                .master("spark://192.168.1.1:7077")
                .appName("MongoSparkConnectorIntro")
                .config("spark.mongodb.input.uri", "mongodb://192.168.1.2/local.Test")
                .config("spark.mongodb.output.uri", "mongodb://192.168.1.2/local.Test")
                .getOrCreate();

        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());
        JavaMongoRDD<Document> rdd = MongoSpark.load(jsc);

        System.out.println("******************************************");
        System.out.println("The count is : ");
        System.out.println(rdd.count());
        System.out.println(rdd.first().toJson());
        System.out.println("******************************************");
        jsc.close();
    }
}
The error (or rather, info) obtained is:
INFO MongoSamplePartitioner: Could not find collection (Test), using a single partition
Because of this, the .first() call errors out. However, the collection does exist and I am able to access it. Can anyone let me know what's going wrong?
The full log is:
; ui acls disabled; users with view permissions: Set(mklrjv); groups with view permissions: Set(); users with modify permissions: Set(mklrjv); groups with modify permissions: Set()
17/04/10 18:17:09 INFO Utils: Successfully started service 'sparkDriver' on port 34048.
17/04/10 18:17:09 INFO SparkEnv: Registering MapOutputTracker
17/04/10 18:17:09 INFO SparkEnv: Registering BlockManagerMaster
17/04/10 18:17:09 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/04/10 18:17:09 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/04/10 18:17:09 INFO DiskBlockManager: Created local directory at C:\Users\mrajeev\AppData\Local\Temp\blockmgr-17cba028-2757-4f48-88ea-f8c7b33ccba9
17/04/10 18:17:09 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/04/10 18:17:09 INFO SparkEnv: Registering OutputCommitCoordinator
17/04/10 18:17:09 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/04/10 18:17:09 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.1:4040
17/04/10 18:17:09 INFO SparkContext: Added JAR file:/C:/Projects/SparkJava/target/uber-first-example-1.0-SNAPSHOT.jar at spark://192.168.1.1:34048/jars/uber-first-example-1.0-SNAPSHOT.jar with timestamp 1491828429769
17/04/10 18:17:09 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://192.168.1.1:7077...
17/04/10 18:17:10 INFO TransportClientFactory: Successfully created connection to /192.168.1.1:7077 after 55 ms (0 ms spent in bootstraps)
17/04/10 18:17:10 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170410181710-0013
17/04/10 18:17:10 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20170410181710-0013/0 on worker-20170410150028-192.168.1.1-33151 (192.168.1.1:33151) with 4 cores
17/04/10 18:17:10 INFO StandaloneSchedulerBackend: Granted executor ID app-20170410181710-0013/0 on hostPort 192.168.1.1:33151 with 4 cores, 1024.0 MB RAM
17/04/10 18:17:10 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34070.
17/04/10 18:17:10 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20170410181710-0013/0 is now RUNNING
17/04/10 18:17:10 INFO NettyBlockTransferService: Server created on 10.78.130.134:34070
17/04/10 18:17:10 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/04/10 18:17:10 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.1, 34070, None)
17/04/10 18:17:10 INFO BlockManagerMasterEndpoint: Registering block manager 10.78.130.134:34070 with 366.3 MB RAM, BlockManagerId(driver, 192.168.1.1, 34070, None)
17/04/10 18:17:10 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.1, 34070, None)
17/04/10 18:17:10 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.1, 34070, None)
17/04/10 18:17:11 INFO EventLoggingListener: Logging events to file:/C:/tmp/spark-events/app-20170410181710-0013
17/04/10 18:17:11 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/04/10 18:17:11 INFO SharedState: Warehouse path is 'file:/C:/Projects/SparkJava/spark-warehouse/'.
17/04/10 18:17:12 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
17/04/10 18:17:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 216.0 B, free 366.3 MB)
17/04/10 18:17:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 402.0 B, free 366.3 MB)
17/04/10 18:17:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.78.130.134:34070 (size: 402.0 B, free: 366.3 MB)
17/04/10 18:17:12 INFO SparkContext: Created broadcast 0 from broadcast at MongoSpark.scala:499
******************************************
The count is :
17/04/10 18:17:13 INFO cluster: Cluster created with settings {hosts=[10.78.130.149:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
17/04/10 18:17:13 INFO cluster: Cluster description not yet available. Waiting for 30000 ms before timing out
17/04/10 18:17:13 INFO connection: Opened connection [connectionId{localValue:1, serverValue:107}] to 192.168.1.2:27017
17/04/10 18:17:13 INFO cluster: Monitor thread successfully connected to server with description ServerDescription{address=192.168.1.2:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 2]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=1176218}
17/04/10 18:17:13 INFO MongoClientCache: Creating MongoClient: [192.168.1.2:27017]
17/04/10 18:17:13 INFO connection: Opened connection [connectionId{localValue:2, serverValue:108}] to 192.168.1.2:27017
17/04/10 18:17:13 INFO MongoSamplePartitioner: Could not find collection (Test), using a single partition
17/04/10 18:17:13 INFO SparkContext: Starting job: count at WordCountTask.java:31
17/04/10 18:17:13 INFO DAGScheduler: Got job 0 (count at WordCountTask.java:31) with 1 output partitions
17/04/10 18:17:13 INFO DAGScheduler: Final stage: ResultStage 0 (count at WordCountTask.java:31)
17/04/10 18:17:13 INFO DAGScheduler: Parents of final stage: List()
17/04/10 18:17:13 INFO DAGScheduler: Missing parents: List()
17/04/10 18:17:13 INFO DAGScheduler: Submitting ResultStage 0 (MongoRDD[0] at RDD at MongoRDD.scala:52), which has no missing parents
17/04/10 18:17:13 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.0 KB, free 366.3 MB)
17/04/10 18:17:13 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1855.0 B, free 366.3 MB)
17/04/10 18:17:13 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.78.130.134:34070 (size: 1855.0 B, free: 366.3 MB)
17/04/10 18:17:13 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:996
17/04/10 18:17:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MongoRDD[0] at RDD at MongoRDD.scala:52)
17/04/10 18:17:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/04/10 18:17:15 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.1.1:34090) with ID 0
17/04/10 18:17:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 10.78.130.134, executor 0, partition 0, ANY, 6112 bytes)
17/04/10 18:17:15 INFO BlockManagerMasterEndpoint: Registering block manager 10.78.130.134:34108 with 366.3 MB RAM, BlockManagerId(0, 192.168.1.1, 34108, None)
17/04/10 18:17:18 INFO MongoClientCache: Closing MongoClient: [192.168.1.2:27017]
17/04/10 18:17:18 INFO connection: Closed connection [connectionId{localValue:2, serverValue:108}] to 192.168.1.2:27017 because the pool has been closed.
17/04/10 18:17:47 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.78.130.134:34108 (size: 1855.0 B, free: 366.3 MB)
17/04/10 18:17:48 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.78.130.134:34108 (size: 402.0 B, free: 366.3 MB)
17/04/10 18:17:49 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 34054 ms on 192.168.1.1 (executor 0) (1/1)
17/04/10 18:17:49 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/04/10 18:17:49 INFO DAGScheduler: ResultStage 0 (count at WordCountTask.java:31) finished in 35.409 s
17/04/10 18:17:49 INFO DAGScheduler: Job 0 finished: count at WordCountTask.java:31, took 35.653876 s
0
17/04/10 18:17:49 INFO SparkContext: Starting job: first at WordCountTask.java:32
17/04/10 18:17:49 INFO DAGScheduler: Got job 1 (first at WordCountTask.java:32) with 1 output partitions
17/04/10 18:17:49 INFO DAGScheduler: Final stage: ResultStage 1 (first at WordCountTask.java:32)
17/04/10 18:17:49 INFO DAGScheduler: Parents of final stage: List()
17/04/10 18:17:49 INFO DAGScheduler: Missing parents: List()
17/04/10 18:17:49 INFO DAGScheduler: Submitting ResultStage 1 (MongoRDD[0] at RDD at MongoRDD.scala:52), which has no missing parents
17/04/10 18:17:49 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.2 KB, free 366.3 MB)
17/04/10 18:17:49 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1926.0 B, free 366.3 MB)
17/04/10 18:17:49 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.78.130.134:34070 (size: 1926.0 B, free: 366.3 MB)
17/04/10 18:17:49 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:996
17/04/10 18:17:49 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MongoRDD[0] at RDD at MongoRDD.scala:52)
17/04/10 18:17:49 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/04/10 18:17:49 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 10.78.130.134, executor 0, partition 0, ANY, 6194 bytes)
17/04/10 18:17:49 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.78.130.134:34108 (size: 1926.0 B, free: 366.3 MB)
17/04/10 18:17:49 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 56 ms on 192.168.1.1 (executor 0) (1/1)
17/04/10 18:17:49 INFO DAGScheduler: ResultStage 1 (first at WordCountTask.java:32) finished in 0.057 s
17/04/10 18:17:49 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/04/10 18:17:49 INFO DAGScheduler: Job 1 finished: first at WordCountTask.java:32, took 0.076634 s
Exception in thread "main" java.lang.UnsupportedOperationException: empty collection
at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1369)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.first(RDD.scala:1366)
at org.apache.spark.api.java.JavaRDDLike$class.first(JavaRDDLike.scala:538)
at org.apache.spark.api.java.AbstractJavaRDDLike.first(JavaRDDLike.scala:45)
at org.sparkexample.WordCountTask.run(WordCountTask.java:32)
at org.sparkexample.WordCountTask.main(WordCountTask.java:14)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/04/10 18:17:49 INFO SparkContext: Invoking stop() from shutdown hook
17/04/10 18:17:49 INFO SparkUI: Stopped Spark web UI at http://192.168.1.1:4040
17/04/10 18:17:49 INFO StandaloneSchedulerBackend: Shutting down all executors
17/04/10 18:17:49 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
17/04/10 18:17:49 WARN TransportChannelHandler: Exception in connection from /10.78.130.134:34132
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:221)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:899)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:275)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
17/04/10 18:17:49 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/04/10 18:17:49 WARN TransportChannelHandler: Exception in connection from /10.78.130.134:34113
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:221)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:899)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:275)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
17/04/10 18:17:49 WARN TransportChannelHandler: Exception in connection from /10.78.130.134:34090
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:221)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:899)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:275)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
17/04/10 18:17:49 INFO MemoryStore: MemoryStore cleared
17/04/10 18:17:49 INFO BlockManager: BlockManager stopped
17/04/10 18:17:49 INFO BlockManagerMaster: BlockManagerMaster stopped
17/04/10 18:17:49 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/04/10 18:17:49 INFO SparkContext: Successfully stopped SparkContext
17/04/10 18:17:49 INFO ShutdownHookManager: Shutdown hook called
17/04/10 18:17:49 INFO ShutdownHookManager: Deleting directory C:\Users\mklrjv\AppData\Local\Temp\spark-3213c1b3-9a85-42b0-ba04-6e0e46a90d98
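One thing worth trying, sketched here with the connector's Scala API under the assumption that sc is a SparkContext carrying the same spark.mongodb.input.uri: name the database and collection explicitly through a ReadConfig instead of relying on URI parsing (mongodb://192.168.1.2/local.Test is parsed as database local, collection Test):

import com.mongodb.spark.MongoSpark
import com.mongodb.spark.config.ReadConfig

// Override database and collection explicitly, inheriting the remaining
// settings (such as the connection URI) from the SparkContext configuration.
val readConfig = ReadConfig(Map("database" -> "local", "collection" -> "Test"), Some(ReadConfig(sc)))
val rdd = MongoSpark.load(sc, readConfig)
println(rdd.count())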

Apache Spark: using spark-submit to transfer files from windows to cluster

Here's what I'm trying to do:
- using spark-submit to submit a packaged / compiled (using sbt 0.13.12) Scala program to my virtualized "cluster" running HDP 2.4 (Spark 1.6.0, Scala 2.10.5) using VirtualBox
- using the --files option to copy a text file "foo.txt" (which is located in the project root) from the "submitting" Windows machine (which is also running Spark 1.6.0 and Scala 2.10.5) to the working directories of the executors (as described by spark-submit -h)
- passing the text file as the first argument to my application
- finally: reading in the file and counting the lines
The command for submitting is
spark-submit ^
--class boern.spark.SparkMeApp ^
--master "spark://127.0.0.1:7077" ^
--files "foo.txt" ^
target/scala-2.11/sparkme-project_2.11-1.0.jar foo.txt
The interesting part of the code is:
val fileName = args(0)
println(s"argument 0 is $fileName")
val lines = sc.textFile(fileName).cache
val c = lines.count /** line 37 */
The error (short version) I'm getting is:
INFO DAGScheduler: Job 0 failed: count at SparkMeApp.scala:37, Exception, Job aborted: java.io.FileNotFoundException: File file:/E:/myProject/foo.txt does not exist
After two days of a combination of "brute-forcing" and reading documentation I am still lost... Am I wrong that sc.textFile(fileName).cache is executed on the workers, and everything not preceded by sc on the master? Is using SparkFiles the way to go?
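If SparkFiles is the route, a minimal sketch of how it is typically used, assuming client deploy mode (where, as the "Copying E:\myProject\sbtmanual\foo.txt to ..." line in the stack trace below shows, the driver also receives a local copy of files shipped with --files):

import org.apache.spark.SparkFiles
import scala.io.Source

// SparkFiles.get resolves the node-local copy of a file distributed with
// --files; the returned absolute path differs between driver and executors.
val localPath = SparkFiles.get("foo.txt")
val lineCount = Source.fromFile(localPath).getLines().size
println(s"foo.txt has $lineCount lines")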
Stacktrace
E:\myProject\>spark-submit --verbose --class boern.spark.SparkMeApp --master "spark://127.0.0.1:7077" --files "foo.txt" target/scala-2.11/sparkme-project_2.11-1.0.jar foo.txt
Using properties file: null
Parsed arguments:
master spark://127.0.0.1:7077
deployMode null
executorMemory null
executorCores null
totalExecutorCores null
propertiesFile null
driverMemory null
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions null
supervise false
queue null
numExecutors null
files file:/E:/myProject/foo.txt
pyFiles null
archives null
mainClass boern.spark.SparkMeApp
primaryResource file:/E:/myProject/target/scala-2.11/sparkme-project_2.11-1.0.jar
name boern.spark.SparkMeApp
childArgs [foo.txt]
jars null
packages null
packagesExclusions null
repositories null
verbose true
Spark properties used, including those specified through
--conf and those from the properties file null:
Main class:
boern.spark.SparkMeApp
Arguments:
foo.txt
System properties:
SPARK_SUBMIT -> true
spark.files -> file:/E:/myProject/foo.txt
spark.app.name -> boern.spark.SparkMeApp
spark.jars -> file:/E:/myProject/target/scala-2.11/sparkme-project_2.11-1.0.jar
spark.submit.deployMode -> client
spark.master -> spark://127.0.0.1:7077
Classpath elements:
file:/E:/myProject/target/scala-2.11/sparkme-project_2.11-1.0.jar
Working directory is E:\myProject\sbtmanual
Files:
\CONF.ENI
\mw.csv
\mw_out.csv
\pagefile.sys
\temp.rds
args:
foo.txt
config set.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/09/15 14:36:21 INFO SparkContext: Running Spark version 1.6.0
16/09/15 14:36:22 INFO SecurityManager: Changing view acls to: Boern
16/09/15 14:36:22 INFO SecurityManager: Changing modify acls to: Boern
16/09/15 14:36:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Boern); users with modify permissions: Set(Boern)
16/09/15 14:36:22 INFO Utils: Successfully started service 'sparkDriver' on port 59716.
16/09/15 14:36:23 INFO Slf4jLogger: Slf4jLogger started
16/09/15 14:36:23 INFO Remoting: Starting remoting
16/09/15 14:36:23 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.56.1:59729]
16/09/15 14:36:23 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 59729.
16/09/15 14:36:23 INFO SparkEnv: Registering MapOutputTracker
16/09/15 14:36:23 INFO SparkEnv: Registering BlockManagerMaster
16/09/15 14:36:23 INFO DiskBlockManager: Created local directory at C:\Users\Boern\AppData\Local\Temp\blockmgr-c7ee2dab-ea00-4ae5-9f06-c6ab74f135e5
16/09/15 14:36:23 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/09/15 14:36:23 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/15 14:36:23 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/09/15 14:36:23 INFO Utils: Successfully started service 'SparkUI' on port 4041.
16/09/15 14:36:23 INFO SparkUI: Started SparkUI at http://192.168.56.1:4041
16/09/15 14:36:23 INFO HttpFileServer: HTTP File server directory is C:\Users\Boern\AppData\Local\Temp\spark-2736b20a-fc90-40e8-a7ad-2d8cac8001f2\httpd-14abb177-9801-403c-9df9-84afb2e87d70
16/09/15 14:36:23 INFO HttpServer: Starting HTTP Server
16/09/15 14:36:23 INFO Utils: Successfully started service 'HTTP file server' on port 59746.
16/09/15 14:36:23 INFO SparkContext: Added JAR file:/E:/myProject/target/scala-2.11/sparkme-project_2.11-1.0.jar at http://192.168.56.1:59746/jars/sparkme-project_2.11-1.0.jar with timestamp 1473942983631
16/09/15 14:36:23 INFO Utils: Copying E:\myProject\sbtmanual\foo.txt to C:\Users\Boern\AppData\Local\Temp\spark-2736b20a-fc90-40e8-a7ad-2d8cac8001f2\userFiles-7849db02-01ff-40ea-9250-62b87d854f4c\foo.txt
16/09/15 14:36:23 INFO SparkContext: Added file file:/E:/myProject/foo.txt at http://192.168.56.1:59746/files/foo.txt with timestamp 1473942983695
16/09/15 14:36:23 INFO AppClient$ClientEndpoint: Connecting to master spark://127.0.0.1:7077...
16/09/15 14:36:34 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160915123633-0015
16/09/15 14:36:34 INFO AppClient$ClientEndpoint: Executor added: app-20160915123633-0015/0 on worker-20160915105800-10.0.2.15-44537 (10.0.2.15:44537) with 4 cores
16/09/15 14:36:34 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160915123633-0015/0 on hostPort 10.0.2.15:44537 with 4 cores, 1024.0 MB RAM
16/09/15 14:36:34 INFO AppClient$ClientEndpoint: Executor updated: app-20160915123633-0015/0 is now RUNNING
16/09/15 14:36:34 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59781.
16/09/15 14:36:34 INFO NettyBlockTransferService: Server created on 59781
16/09/15 14:36:34 INFO BlockManagerMaster: Trying to register BlockManager
16/09/15 14:36:34 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.56.1:59781 with 511.1 MB RAM, BlockManagerId(driver, 192.168.56.1, 59781)
16/09/15 14:36:34 INFO BlockManagerMaster: Registered BlockManager
16/09/15 14:36:34 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
sc set.
argument 0 is foo.txt
16/09/15 14:36:34 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 208.5 KB, free 208.5 KB)
16/09/15 14:36:34 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.3 KB, free 227.8 KB)
16/09/15 14:36:34 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.1:59781 (size: 19.3 KB, free: 511.1 MB)
16/09/15 14:36:34 INFO SparkContext: Created broadcast 0 from textFile at SparkMeApp.scala:39
16/09/15 14:36:34 INFO FileInputFormat: Total input paths to process : 1
16/09/15 14:36:34 INFO SparkContext: Starting job: count at SparkMeApp.scala:41
16/09/15 14:36:34 INFO DAGScheduler: Got job 0 (count at SparkMeApp.scala:41) with 2 output partitions
16/09/15 14:36:34 INFO DAGScheduler: Final stage: ResultStage 0 (count at SparkMeApp.scala:41)
16/09/15 14:36:34 INFO DAGScheduler: Parents of final stage: List()
16/09/15 14:36:34 INFO DAGScheduler: Missing parents: List()
16/09/15 14:36:34 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at textFile at SparkMeApp.scala:39), which has no missing parents
16/09/15 14:36:34 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.9 KB, free 230.7 KB)
16/09/15 14:36:34 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1752.0 B, free 232.4 KB)
16/09/15 14:36:34 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.56.1:59781 (size: 1752.0 B, free: 511.1 MB)
16/09/15 14:36:34 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/09/15 14:36:34 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at textFile at SparkMeApp.scala:39)
16/09/15 14:36:34 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/09/15 14:36:37 INFO SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (BoernsPC:59783) with ID 0
16/09/15 14:36:37 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, BoernsPC, partition 0,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:37 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, BoernsPC, partition 1,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:47 INFO BlockManagerMasterEndpoint: Registering block manager BoernsPC:48448 with 511.5 MB RAM, BlockManagerId(0, BoernsPC, 48448)
16/09/15 14:36:48 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on BoernsPC:48448 (size: 1752.0 B, free: 511.5 MB)
16/09/15 14:36:48 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on BoernsPC:48448 (size: 19.3 KB, free: 511.5 MB)
16/09/15 14:36:49 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, BoernsPC): java.io.FileNotFoundException: File file:/E:/myProject/foo.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/09/15 14:36:49 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 1]
16/09/15 14:36:49 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 2, BoernsPC, partition 0,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:49 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 3, BoernsPC, partition 1,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:49 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 3) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 2]
16/09/15 14:36:49 INFO TaskSetManager: Starting task 1.2 in stage 0.0 (TID 4, BoernsPC, partition 1,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:49 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 2) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 3]
16/09/15 14:36:49 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 5, BoernsPC, partition 0,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:49 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 5) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 4]
16/09/15 14:36:49 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 6, BoernsPC, partition 0,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:49 INFO TaskSetManager: Lost task 1.2 in stage 0.0 (TID 4) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 5]
16/09/15 14:36:49 INFO TaskSetManager: Starting task 1.3 in stage 0.0 (TID 7, BoernsPC, partition 1,PROCESS_LOCAL, 2286 bytes)
16/09/15 14:36:49 INFO TaskSetManager: Lost task 1.3 in stage 0.0 (TID 7) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 6]
16/09/15 14:36:49 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job
16/09/15 14:36:49 INFO TaskSchedulerImpl: Cancelling stage 0
16/09/15 14:36:49 INFO TaskSchedulerImpl: Stage 0 was cancelled
16/09/15 14:36:49 INFO DAGScheduler: ResultStage 0 (count at SparkMeApp.scala:41) failed in 14,616 s
16/09/15 14:36:49 INFO DAGScheduler: Job 0 failed: count at SparkMeApp.scala:41, took 14,694943 s
16/09/15 14:36:49 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 6) on executor BoernsPC: java.io.FileNotFoundException (File file:/E:/myProject/foo.txt does not exist) [duplicate 7]
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 7, BoernsPC): java.io.FileNotFoundException: File file:/E:/myProject/foo.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
at boern.spark.SparkMeApp$.main(SparkMeApp.scala:41)
at boern.spark.SparkMeApp.main(SparkMeApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.FileNotFoundException: File file:/E:/myProject/foo.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/09/15 14:36:49 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/09/15 14:36:49 INFO SparkContext: Invoking stop() from shutdown hook
16/09/15 14:36:49 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4041
16/09/15 14:36:49 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/09/15 14:36:49 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/09/15 14:36:49 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/15 14:36:49 INFO MemoryStore: MemoryStore cleared
16/09/15 14:36:49 INFO BlockManager: BlockManager stopped
16/09/15 14:36:49 INFO BlockManagerMaster: BlockManagerMaster stopped
16/09/15 14:36:49 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/09/15 14:36:49 INFO SparkContext: Successfully stopped SparkContext
16/09/15 14:36:49 INFO ShutdownHookManager: Shutdown hook called
16/09/15 14:36:49 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/09/15 14:36:49 INFO ShutdownHookManager: Deleting directory C:\Users\Boern\AppData\Local\Temp\spark-2736b20a-fc90-40e8-a7ad-2d8cac8001f2
16/09/15 14:36:49 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/09/15 14:36:49 INFO ShutdownHookManager: Deleting directory C:\Users\Boern\AppData\Local\Temp\spark-2736b20a-fc90-40e8-a7ad-2d8cac8001f2\httpd-14abb177-9801-403c-9df9-84afb2e87d70

SparkUI is stopping after execution of code in IntelliJ IDEA

I am trying to run this simple Spark job in Scala using IntelliJ IDEA. However, the Spark UI shuts down completely as soon as the job finishes executing. Is there something I am missing, or am I looking in the wrong place? Scala version: 2.10.4, Spark: 1.6.0
import org.apache.spark.{SparkConf, SparkContext}
object SimpleApp {
def main(args: Array[String]) {
val logFile = "C:/spark-1.6.0-bin-hadoop2.6/spark-1.6.0-bin-hadoop2.6/README.md" // Should be some file on your system
val conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]")
val sc = new SparkContext(conf)
val logData = sc.textFile(logFile, 2).cache()
val numAs = logData.filter(line => line.contains("a")).count()
val numBs = logData.filter(line => line.contains("b")).count()
println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
}
}
16/02/24 01:24:39 INFO SparkContext: Running Spark version 1.6.0
16/02/24 01:24:40 INFO SecurityManager: Changing view acls to: Sivaram Konanki
16/02/24 01:24:40 INFO SecurityManager: Changing modify acls to: Sivaram Konanki
16/02/24 01:24:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Sivaram Konanki); users with modify permissions: Set(Sivaram Konanki)
16/02/24 01:24:41 INFO Utils: Successfully started service 'sparkDriver' on port 54881.
16/02/24 01:24:41 INFO Slf4jLogger: Slf4jLogger started
16/02/24 01:24:42 INFO Remoting: Starting remoting
16/02/24 01:24:42 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#192.168.1.15:54894]
16/02/24 01:24:42 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54894.
16/02/24 01:24:42 INFO SparkEnv: Registering MapOutputTracker
16/02/24 01:24:42 INFO SparkEnv: Registering BlockManagerMaster
16/02/24 01:24:42 INFO DiskBlockManager: Created local directory at C:\Users\Sivaram Konanki\AppData\Local\Temp\blockmgr-dad99e77-f3a6-4a1d-88d8-3b030be0bd0a
16/02/24 01:24:42 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
16/02/24 01:24:42 INFO SparkEnv: Registering OutputCommitCoordinator
16/02/24 01:24:42 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/02/24 01:24:42 INFO SparkUI: Started SparkUI at http://192.168.1.15:4040
16/02/24 01:24:42 INFO Executor: Starting executor ID driver on host localhost
16/02/24 01:24:43 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54913.
16/02/24 01:24:43 INFO NettyBlockTransferService: Server created on 54913
16/02/24 01:24:43 INFO BlockManagerMaster: Trying to register BlockManager
16/02/24 01:24:43 INFO BlockManagerMasterEndpoint: Registering block manager localhost:54913 with 2.4 GB RAM, BlockManagerId(driver, localhost, 54913)
16/02/24 01:24:43 INFO BlockManagerMaster: Registered BlockManager
16/02/24 01:24:44 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.4 KB, free 127.4 KB)
16/02/24 01:24:44 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 141.3 KB)
16/02/24 01:24:44 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:54913 (size: 13.9 KB, free: 2.4 GB)
16/02/24 01:24:44 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.scala:11
16/02/24 01:24:45 WARN : Your hostname, OSG-E5450-42 resolves to a loopback/non-reachable address: fe80:0:0:0:d9ff:4f93:5643:703d%wlan3, but we couldn't find any external IP address!
16/02/24 01:24:46 INFO FileInputFormat: Total input paths to process : 1
16/02/24 01:24:46 INFO SparkContext: Starting job: count at SimpleApp.scala:12
16/02/24 01:24:46 INFO DAGScheduler: Got job 0 (count at SimpleApp.scala:12) with 2 output partitions
16/02/24 01:24:46 INFO DAGScheduler: Final stage: ResultStage 0 (count at SimpleApp.scala:12)
16/02/24 01:24:46 INFO DAGScheduler: Parents of final stage: List()
16/02/24 01:24:46 INFO DAGScheduler: Missing parents: List()
16/02/24 01:24:46 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at filter at SimpleApp.scala:12), which has no missing parents
16/02/24 01:24:46 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.1 KB, free 144.5 KB)
16/02/24 01:24:46 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1886.0 B, free 146.3 KB)
16/02/24 01:24:46 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:54913 (size: 1886.0 B, free: 2.4 GB)
16/02/24 01:24:46 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/02/24 01:24:46 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at filter at SimpleApp.scala:12)
16/02/24 01:24:46 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/02/24 01:24:46 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2172 bytes)
16/02/24 01:24:46 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2172 bytes)
16/02/24 01:24:46 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/02/24 01:24:46 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/02/24 01:24:46 INFO CacheManager: Partition rdd_1_1 not found, computing it
16/02/24 01:24:46 INFO CacheManager: Partition rdd_1_0 not found, computing it
16/02/24 01:24:46 INFO HadoopRDD: Input split: file:/C:/spark-1.6.0-bin-hadoop2.6/spark-1.6.0-bin-hadoop2.6/README.md:1679+1680
16/02/24 01:24:46 INFO HadoopRDD: Input split: file:/C:/spark-1.6.0-bin-hadoop2.6/spark-1.6.0-bin-hadoop2.6/README.md:0+1679
16/02/24 01:24:46 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/02/24 01:24:46 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/02/24 01:24:46 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/02/24 01:24:46 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/02/24 01:24:46 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/02/24 01:24:46 INFO MemoryStore: Block rdd_1_1 stored as values in memory (estimated size 4.7 KB, free 151.0 KB)
16/02/24 01:24:46 INFO BlockManagerInfo: Added rdd_1_1 in memory on localhost:54913 (size: 4.7 KB, free: 2.4 GB)
16/02/24 01:24:46 INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 5.4 KB, free 156.5 KB)
16/02/24 01:24:46 INFO BlockManagerInfo: Added rdd_1_0 in memory on localhost:54913 (size: 5.4 KB, free: 2.4 GB)
16/02/24 01:24:46 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2662 bytes result sent to driver
16/02/24 01:24:46 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2662 bytes result sent to driver
16/02/24 01:24:46 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 170 ms on localhost (1/2)
16/02/24 01:24:46 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 143 ms on localhost (2/2)
16/02/24 01:24:46 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/02/24 01:24:46 INFO DAGScheduler: ResultStage 0 (count at SimpleApp.scala:12) finished in 0.187 s
16/02/24 01:24:46 INFO DAGScheduler: Job 0 finished: count at SimpleApp.scala:12, took 0.303861 s
16/02/24 01:24:46 INFO SparkContext: Starting job: count at SimpleApp.scala:13
16/02/24 01:24:46 INFO DAGScheduler: Got job 1 (count at SimpleApp.scala:13) with 2 output partitions
16/02/24 01:24:46 INFO DAGScheduler: Final stage: ResultStage 1 (count at SimpleApp.scala:13)
16/02/24 01:24:46 INFO DAGScheduler: Parents of final stage: List()
16/02/24 01:24:46 INFO DAGScheduler: Missing parents: List()
16/02/24 01:24:46 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[3] at filter at SimpleApp.scala:13), which has no missing parents
16/02/24 01:24:46 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.1 KB, free 159.6 KB)
16/02/24 01:24:46 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1888.0 B, free 161.5 KB)
16/02/24 01:24:46 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:54913 (size: 1888.0 B, free: 2.4 GB)
16/02/24 01:24:46 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
16/02/24 01:24:46 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[3] at filter at SimpleApp.scala:13)
16/02/24 01:24:46 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
16/02/24 01:24:46 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2172 bytes)
16/02/24 01:24:46 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, partition 1,PROCESS_LOCAL, 2172 bytes)
16/02/24 01:24:46 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
16/02/24 01:24:46 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
16/02/24 01:24:46 INFO BlockManager: Found block rdd_1_0 locally
16/02/24 01:24:46 INFO BlockManager: Found block rdd_1_1 locally
16/02/24 01:24:46 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 2082 bytes result sent to driver
16/02/24 01:24:46 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 2082 bytes result sent to driver
16/02/24 01:24:46 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 34 ms on localhost (1/2)
16/02/24 01:24:46 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 37 ms on localhost (2/2)
Lines with a: 58, Lines with b: 26
16/02/24 01:24:46 INFO DAGScheduler: ResultStage 1 (count at SimpleApp.scala:13) finished in 0.040 s
16/02/24 01:24:46 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
16/02/24 01:24:46 INFO DAGScheduler: Job 1 finished: count at SimpleApp.scala:13, took 0.068350 s
16/02/24 01:24:46 INFO SparkContext: Invoking stop() from shutdown hook
16/02/24 01:24:46 INFO SparkUI: Stopped Spark web UI at http://192.168.1.15:4040
16/02/24 01:24:46 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/02/24 01:24:46 INFO MemoryStore: MemoryStore cleared
16/02/24 01:24:46 INFO BlockManager: BlockManager stopped
16/02/24 01:24:46 INFO BlockManagerMaster: BlockManagerMaster stopped
16/02/24 01:24:46 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/02/24 01:24:46 INFO SparkContext: Successfully stopped SparkContext
16/02/24 01:24:46 INFO ShutdownHookManager: Shutdown hook called
16/02/24 01:24:46 INFO ShutdownHookManager: Deleting directory C:\Users\Sivaram Konanki\AppData\Local\Temp\spark-861b5aef-6732-45e4-a4f4-6769370c555e
You can add a
Thread.sleep(1000000) // 1,000,000 ms = 1000 seconds; increase if you need more time
at the bottom of your Spark job; this will allow you to inspect the web UI in IDEs like IntelliJ while the job is still running.
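For clarity, a minimal sketch of where the sleep would go, based on the SimpleApp code from the question (the duration is arbitrary):
import org.apache.spark.{SparkConf, SparkContext}
object SimpleApp {
def main(args: Array[String]) {
val conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]")
val sc = new SparkContext(conf)
// ... run the counts and print the results as before ...
Thread.sleep(1000000) // block the driver so the UI at http://localhost:4040 stays reachable
sc.stop() // then shut the context down cleanly
}
}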
This is expected behavior. The Spark UI is served by the SparkContext, so it cannot stay up after the application has finished and the context has been destroyed.
In standalone mode the information is preserved by the cluster web UI; on Mesos or YARN you can use the history server; in local mode the only option I am aware of is to keep the application running.
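If keeping the application alive is not acceptable, one alternative (my suggestion, not something local mode offers out of the box) is to write an event log that the Spark history server can replay after the application exits. A minimal sketch of the relevant SparkConf settings, assuming Spark 1.6 and a log directory that already exists (the C:/tmp/spark-events path is hypothetical):
import org.apache.spark.{SparkConf, SparkContext}
val conf = new SparkConf()
.setAppName("Simple Application")
.setMaster("local[*]")
.set("spark.eventLog.enabled", "true") // record UI events to disk
.set("spark.eventLog.dir", "file:///C:/tmp/spark-events") // hypothetical directory; must exist before the app starts
val sc = new SparkContext(conf)
Then start the history server with spark.history.fs.logDirectory pointing at the same directory; the finished application's UI is served on port 18080 by default.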

spark import apache library (math)

I am trying to run a simple application with Spark.
This is my Scala file:
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.commons.math3.random.RandomDataGenerator
object SimpleApp {
def main(args: Array[String]) {
val logFile = "/home/donbeo/Applications/spark/spark-1.1.0/README.md" // Should be some file on your system
val conf = new SparkConf().setAppName("Simple Application")
val sc = new SparkContext(conf)
val logData = sc.textFile(logFile, 2).cache()
val numAs = logData.filter(line => line.contains("a")).count()
val numBs = logData.filter(line => line.contains("b")).count()
println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
println("A random number")
val randomData = new RandomDataGenerator()
println(randomData.nextLong(0, 100))
}
}
and this is my sbt build file:
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
libraryDependencies += "org.apache.commons" % "commons-math3" % "3.3"
When I try to run the code I get this error:
donbeo#donbeo-HP-EliteBook-Folio-9470m:~/Applications/spark/spark-1.1.0$ ./bin/spark-submit --class "SimpleApp" --master local[4] /home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-project_2.10-1.0.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/04 17:42:41 WARN Utils: Your hostname, donbeo-HP-EliteBook-Folio-9470m resolves to a loopback address: 127.0.1.1; using 192.168.1.45 instead (on interface wlan0)
15/02/04 17:42:41 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/02/04 17:42:41 INFO SecurityManager: Changing view acls to: donbeo,
15/02/04 17:42:41 INFO SecurityManager: Changing modify acls to: donbeo,
15/02/04 17:42:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(donbeo, ); users with modify permissions: Set(donbeo, )
15/02/04 17:42:42 INFO Slf4jLogger: Slf4jLogger started
15/02/04 17:42:42 INFO Remoting: Starting remoting
15/02/04 17:42:42 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#192.168.1.45:45935]
15/02/04 17:42:42 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver#192.168.1.45:45935]
15/02/04 17:42:42 INFO Utils: Successfully started service 'sparkDriver' on port 45935.
15/02/04 17:42:42 INFO SparkEnv: Registering MapOutputTracker
15/02/04 17:42:42 INFO SparkEnv: Registering BlockManagerMaster
15/02/04 17:42:42 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150204174242-bbb1
15/02/04 17:42:42 INFO Utils: Successfully started service 'Connection manager for block manager' on port 55674.
15/02/04 17:42:42 INFO ConnectionManager: Bound socket to port 55674 with id = ConnectionManagerId(192.168.1.45,55674)
15/02/04 17:42:42 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/02/04 17:42:42 INFO BlockManagerMaster: Trying to register BlockManager
15/02/04 17:42:42 INFO BlockManagerMasterActor: Registering block manager 192.168.1.45:55674 with 265.4 MB RAM
15/02/04 17:42:42 INFO BlockManagerMaster: Registered BlockManager
15/02/04 17:42:42 INFO HttpFileServer: HTTP File server directory is /tmp/spark-49443053-833e-4596-9073-d74075483d35
15/02/04 17:42:42 INFO HttpServer: Starting HTTP Server
15/02/04 17:42:42 INFO Utils: Successfully started service 'HTTP file server' on port 41309.
15/02/04 17:42:42 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/02/04 17:42:42 INFO SparkUI: Started SparkUI at http://192.168.1.45:4040
15/02/04 17:42:42 INFO SparkContext: Added JAR file:/home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-project_2.10-1.0.jar at http://192.168.1.45:41309/jars/simple-project_2.10-1.0.jar with timestamp 1423071762914
15/02/04 17:42:42 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver#192.168.1.45:45935/user/HeartbeatReceiver
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(32768) called with curMem=0, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 265.4 MB)
15/02/04 17:42:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/04 17:42:43 WARN LoadSnappy: Snappy native library not loaded
15/02/04 17:42:43 INFO FileInputFormat: Total input paths to process : 1
15/02/04 17:42:43 INFO SparkContext: Starting job: count at SimpleApp.scala:13
15/02/04 17:42:43 INFO DAGScheduler: Got job 0 (count at SimpleApp.scala:13) with 2 output partitions (allowLocal=false)
15/02/04 17:42:43 INFO DAGScheduler: Final stage: Stage 0(count at SimpleApp.scala:13)
15/02/04 17:42:43 INFO DAGScheduler: Parents of final stage: List()
15/02/04 17:42:43 INFO DAGScheduler: Missing parents: List()
15/02/04 17:42:43 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:13), which has no missing parents
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(2616) called with curMem=32768, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.6 KB, free 265.4 MB)
15/02/04 17:42:43 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:13)
15/02/04 17:42:43 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/02/04 17:42:43 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1283 bytes)
15/02/04 17:42:43 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1283 bytes)
15/02/04 17:42:43 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/02/04 17:42:43 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
15/02/04 17:42:43 INFO Executor: Fetching http://192.168.1.45:41309/jars/simple-project_2.10-1.0.jar with timestamp 1423071762914
15/02/04 17:42:43 INFO Utils: Fetching http://192.168.1.45:41309/jars/simple-project_2.10-1.0.jar to /tmp/fetchFileTemp3120003338190168194.tmp
15/02/04 17:42:43 INFO Executor: Adding file:/tmp/spark-ec5e14c2-9e58-4132-a4c9-2569d237a407/simple-project_2.10-1.0.jar to class loader
15/02/04 17:42:43 INFO CacheManager: Partition rdd_1_0 not found, computing it
15/02/04 17:42:43 INFO CacheManager: Partition rdd_1_1 not found, computing it
15/02/04 17:42:43 INFO HadoopRDD: Input split: file:/home/donbeo/Applications/spark/spark-1.1.0/README.md:0+2405
15/02/04 17:42:43 INFO HadoopRDD: Input split: file:/home/donbeo/Applications/spark/spark-1.1.0/README.md:2405+2406
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(7512) called with curMem=35384, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block rdd_1_1 stored as values in memory (estimated size 7.3 KB, free 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerInfo: Added rdd_1_1 in memory on 192.168.1.45:55674 (size: 7.3 KB, free: 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerMaster: Updated info of block rdd_1_1
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(8352) called with curMem=42896, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 8.2 KB, free 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerInfo: Added rdd_1_0 in memory on 192.168.1.45:55674 (size: 8.2 KB, free: 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerMaster: Updated info of block rdd_1_0
15/02/04 17:42:43 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2300 bytes result sent to driver
15/02/04 17:42:43 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2300 bytes result sent to driver
15/02/04 17:42:43 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 179 ms on localhost (1/2)
15/02/04 17:42:43 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 176 ms on localhost (2/2)
15/02/04 17:42:43 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/02/04 17:42:43 INFO DAGScheduler: Stage 0 (count at SimpleApp.scala:13) finished in 0.198 s
15/02/04 17:42:43 INFO SparkContext: Job finished: count at SimpleApp.scala:13, took 0.292364402 s
15/02/04 17:42:43 INFO SparkContext: Starting job: count at SimpleApp.scala:14
15/02/04 17:42:43 INFO DAGScheduler: Got job 1 (count at SimpleApp.scala:14) with 2 output partitions (allowLocal=false)
15/02/04 17:42:43 INFO DAGScheduler: Final stage: Stage 1(count at SimpleApp.scala:14)
15/02/04 17:42:43 INFO DAGScheduler: Parents of final stage: List()
15/02/04 17:42:43 INFO DAGScheduler: Missing parents: List()
15/02/04 17:42:43 INFO DAGScheduler: Submitting Stage 1 (FilteredRDD[3] at filter at SimpleApp.scala:14), which has no missing parents
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(2616) called with curMem=51248, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.6 KB, free 265.4 MB)
15/02/04 17:42:43 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (FilteredRDD[3] at filter at SimpleApp.scala:14)
15/02/04 17:42:43 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/02/04 17:42:43 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, ANY, 1283 bytes)
15/02/04 17:42:43 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, ANY, 1283 bytes)
15/02/04 17:42:43 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
15/02/04 17:42:43 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
15/02/04 17:42:43 INFO BlockManager: Found block rdd_1_1 locally
15/02/04 17:42:43 INFO BlockManager: Found block rdd_1_0 locally
15/02/04 17:42:43 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 1731 bytes result sent to driver
15/02/04 17:42:43 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 1731 bytes result sent to driver
15/02/04 17:42:43 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 14 ms on localhost (1/2)
15/02/04 17:42:43 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 17 ms on localhost (2/2)
15/02/04 17:42:43 INFO DAGScheduler: Stage 1 (count at SimpleApp.scala:14) finished in 0.017 s
15/02/04 17:42:43 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/02/04 17:42:43 INFO SparkContext: Job finished: count at SimpleApp.scala:14, took 0.034833058 s
Lines with a: 83, Lines with b: 38
A random number
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/math3/random/RandomDataGenerator
at SimpleApp$.main(SimpleApp.scala:20)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.math3.random.RandomDataGenerator
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 9 more
donbeo#donbeo-HP-EliteBook-Folio-9470m:~/Applications/spark/spark-1.1.0$
I think I am doing something wrong when I import the math3 library.
There is a detailed explanation of how I installed Spark and built the project in my earlier question: submit task to Spark
You need to specify the path of the commons-math3 jar; this can be done with the --jars option:
./bin/spark-submit --class "SimpleApp" \
--master local[4] \
--jars <specify-path-of-commons-math3-jar> \
/home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-project_2.10-1.0.jar
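If sbt already resolved the dependency for you, the jar is typically sitting in the local Ivy cache, so (assuming the default sbt layout; the exact path is an assumption) the option would look something like:
--jars /home/donbeo/.ivy2/cache/org.apache.commons/commons-math3/jars/commons-math3-3.3.jar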
Alternatively, you can build an assembly jar which contains all the dependencies.
EDIT:
How to build an assembly jar:
In build.sbt:
import AssemblyKeys._
import sbtassembly.Plugin._
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" % "provided"
libraryDependencies += "org.apache.commons" % "commons-math3" % "3.3"
// This statement includes the assembly plugin capabilities
assemblySettings
// Configure the jar name used by the assembly plug-in
jarName in assembly := "simple-app-assembly.jar"
// A special option to exclude Scala itself from our assembly jar, since Spark
// already bundles Scala.
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)
In project/assembly.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
Then make an assembly jar as follows:
sbt assembly
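With the settings above, sbt assembly writes the jar to target/scala-2.10/simple-app-assembly.jar (the path follows from the scalaVersion and jarName settings). Since all dependencies are bundled, it can then be submitted without --jars, e.g.:
./bin/spark-submit --class "SimpleApp" --master local[4] /home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-app-assembly.jar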