I am using boilerpipe to extract text from HTML, but there is an issue I have not been able to resolve. I have a list of 50k elements; I create an RDD of 1000 elements at a time, process them, and save the resulting RDD to HDFS. The error I have encountered is this:
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "/home/hadoopuser/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 883, in send_command
response = connection.send_command(command)
File "/home/hadoopuser/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1040, in send_command
"Error while receiving", e, proto.ERROR_ON_RECEIVE)
Py4JNetworkError: Error while receiving
Traceback (most recent call last):
File "/home/hadoopuser/CommonCrawl_Spark/CommonCrawl_Spark/all.py", line 265, in <module>
x = get_data(line[:-1],c)
File "/home/hadoopuser/CommonCrawl_Spark/CommonCrawl_Spark/all.py", line 208, in get_data
sc.parallelize(warcrecords).repartition(72).map(lambda s: classify(s)).saveAsTextFile(file_name)
File "/home/hadoopuser/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1552, in saveAsTextFile
File "/home/hadoopuser/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/home/hadoopuser/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 327, in get_return_value
py4j.protocol.Py4JError: An error occurred while calling o40.saveAsTextFile
17/09/19 18:11:10 INFO SparkContext: Invoking stop() from shutdown hook
17/09/19 18:11:10 INFO SparkUI: Stopped Spark web UI at http://192.168.0.255:4040
17/09/19 18:11:10 INFO DAGScheduler: Job 0 failed: saveAsTextFile at NativeMethodAccessorImpl.java:0, took 14.746797 s
17/09/19 18:11:10 INFO DAGScheduler: ResultStage 1 (saveAsTextFile at NativeMethodAccessorImpl.java:0) failed in 7.906 s due to Stage cancelled because SparkContext was shut down
17/09/19 18:11:10 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo#ec3ca3)
17/09/19 18:11:10 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(0,1505824870317,JobFailed(org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down))
17/09/19 18:11:10 INFO StandaloneSchedulerBackend: Shutting down all executors
17/09/19 18:11:10 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
17/09/19 18:11:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/09/19 18:11:10 INFO MemoryStore: MemoryStore cleared
17/09/19 18:11:10 INFO BlockManager: BlockManager stopped
17/09/19 18:11:10 INFO BlockManagerMaster: BlockManagerMaster stopped
17/09/19 18:11:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/09/19 18:11:10 INFO SparkContext: Successfully stopped SparkContext
17/09/19 18:11:10 INFO ShutdownHookManager: Shutdown hook called
17/09/19 18:11:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-35ea0cd4-4b78-408b-8c3a-9966c1f84763/pyspark-b73e541b-1182-4449-96bc-26eabca1803d
17/09/19 18:11:10 INFO ShutdownHookManager: Deleting directory /tmp/spark-35ea0cd4-4b78-408b-8c3a-9966c1f84763
The results for the first 1000 elements are saved to the HDFS file, but after that it throws the above error. What is the fix for this?
Removing this line from the code did the trick; I still don't know why:
from boilerpipe.extract import Extractor
I am doing a Pluralsight tutorial on Apache Spark, which is a simple word counter. I am on Windows 11 and I have IntelliJ IDEA 2022.3.1 (Ultimate Edition). Additionally, on my machine I have JDK 8, Apache Spark 3.3.1 pre-built for Hadoop 3.3 and later, and Hadoop 3.3.4. The code is written in Scala with SBT as the build tool, and I've included the code below. After packaging the project with sbt package, I run the command
spark-submit --class "main.WordCount" --master "local[*]" "C:\Users\user\Documents\Projects\WordCount\target\scala-2.11\word-count_2.11-0.1.jar"
I am receiving an exception
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat; (Full log below)
I have my dev tools (Java, Spark, Hadoop, etc.) under C:\DevTools\TOOL, and the Windows environment variables are set as follows:
JAVA_HOME -> C:\DevTools\TOOL\Java
SPARK_HOME -> C:\DevTools\TOOL\Spark
HADOOP_HOME -> C:\DevTools\TOOL\Hadoop
PATH -> %JAVA_HOME%\bin; %SPARK_HOME%\bin; %HADOOP_HOME%\bin
Lastly, I've downloaded various winutils.exe and hadoop.dll builds and put them in both the Spark bin folder and the Hadoop bin folder, but nothing seems to work. Does anyone have any suggestions as to how I can get this to execute successfully?
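For reference, a minimal diagnostic sketch (it assumes hadoop-common is on the classpath; NativeCodeLoader and NativeIO are Hadoop's own utility classes) that reports whether the JVM can actually see a working hadoop.dll:

import org.apache.hadoop.util.NativeCodeLoader
import org.apache.hadoop.io.nativeio.NativeIO

object NativeCheck {
  def main(args: Array[String]): Unit = {
    // Directories the JVM searches for native libraries such as hadoop.dll
    val libPath = System.getProperty("java.library.path")
    println(s"java.library.path = $libPath")
    // True only if the native Hadoop library was loaded at all
    println(s"native hadoop loaded = ${NativeCodeLoader.isNativeCodeLoaded}")
    // True only if the NativeIO bindings (the ones failing above) are usable
    println(s"NativeIO available = ${NativeIO.isAvailable}")
  }
}

If the library does not load, or loads from a build that does not match Hadoop 3.3.4, that mismatch is typically what produces an UnsatisfiedLinkError like the one above.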
build.sbt
name := "Word Count"
version := "0.1"
scalaVersion := "2.11.8"
val sparkVersion = "1.6.1"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % sparkVersion,
"org.apache.spark" %% "spark-streaming" % sparkVersion
)
WordCount.scala
package main

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]): Unit = {
    val configuration = new SparkConf().setAppName("Word Counter")
    val sparkContext = new SparkContext(configuration)

    val textFile = sparkContext.textFile("file:///DevTools/TOOL/Spark")
    val tokenizedFileData = textFile.flatMap(line => line.split(" "))
    val countPrep = tokenizedFileData.map(word => (word, 1))
    val counts = countPrep.reduceByKey((accumValue, newValue) => accumValue + newValue)
    val storedCounts = counts.sortBy(kvPair => kvPair._2, false)

    storedCounts.saveAsTextFile("file:///DevTools/TOOL/Spark/output")
  }
}
Full Log
PS C:\Users\user\Documents\Projects\WordCount> spark-submit --class "main.WordCount" --master "local[*]" "C:\Users\user\Documents\Projects\WordCount\target\scala-2.11\word-count_2.11-0.1.jar"
23/01/26 17:00:08 INFO SparkContext: Running Spark version 3.3.1
23/01/26 17:00:08 INFO ResourceUtils: ==============================================================
23/01/26 17:00:08 INFO ResourceUtils: No custom resources configured for spark.driver.
23/01/26 17:00:08 INFO ResourceUtils: ==============================================================
23/01/26 17:00:08 INFO SparkContext: Submitted application: Word Counter
23/01/26 17:00:08 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/01/26 17:00:08 INFO ResourceProfile: Limiting resource is cpu
23/01/26 17:00:08 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/01/26 17:00:08 INFO SecurityManager: Changing view acls to: user
23/01/26 17:00:08 INFO SecurityManager: Changing modify acls to: user
23/01/26 17:00:08 INFO SecurityManager: Changing view acls groups to:
23/01/26 17:00:08 INFO SecurityManager: Changing modify acls groups to:
23/01/26 17:00:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
23/01/26 17:00:09 INFO Utils: Successfully started service 'sparkDriver' on port 50249.
23/01/26 17:00:09 INFO SparkEnv: Registering MapOutputTracker
23/01/26 17:00:09 INFO SparkEnv: Registering BlockManagerMaster
23/01/26 17:00:09 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/01/26 17:00:09 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/01/26 17:00:09 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/01/26 17:00:09 INFO DiskBlockManager: Created local directory at C:\Users\user\AppData\Local\Temp\blockmgr-c7d05098-5b05-4121-b1b6-2e7445fc9240
23/01/26 17:00:09 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
23/01/26 17:00:09 INFO SparkEnv: Registering OutputCommitCoordinator
23/01/26 17:00:10 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/01/26 17:00:10 INFO SparkContext: Added JAR file:/C:/Users/user/Documents/Projects/WordCount/target/scala-2.11/word-count_2.11-0.1.jar at spark://localhost:50249/jars/word-count_2.11-0.1.jar with timestamp 1674770408345
23/01/26 17:00:10 INFO Executor: Starting executor ID driver on host localhost
23/01/26 17:00:10 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
23/01/26 17:00:10 INFO Executor: Fetching spark://localhost:50249/jars/word-count_2.11-0.1.jar with timestamp 1674770408345
23/01/26 17:00:10 INFO TransportClientFactory: Successfully created connection to localhost/192.168.1.221:50249 after 58 ms (0 ms spent in bootstraps)
23/01/26 17:00:10 INFO Utils: Fetching spark://localhost:50249/jars/word-count_2.11-0.1.jar to C:\Users\user\AppData\Local\Temp\spark-d7979eef-eac8-4a89-8ee0-246a821703d6\userFiles-8222f8d5-3999-47a7-b048-a9c37e66150a\fetchFileTemp8156211875497724521.tmp
23/01/26 17:00:11 INFO Executor: Adding file:/C:/Users/user/AppData/Local/Temp/spark-d7979eef-eac8-4a89-8ee0-246a821703d6/userFiles-8222f8d5-3999-47a7-b048-a9c37e66150a/word-count_2.11-0.1.jar to class loader
23/01/26 17:00:11 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50306.
23/01/26 17:00:11 INFO NettyBlockTransferService: Server created on localhost:50306
23/01/26 17:00:11 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/01/26 17:00:11 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50306 with 366.3 MiB RAM, BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 358.0 KiB, free 366.0 MiB)
23/01/26 17:00:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 32.3 KiB, free 365.9 MiB)
23/01/26 17:00:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50306 (size: 32.3 KiB, free: 366.3 MiB)
23/01/26 17:00:12 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:13
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:608)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:934)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:848)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:816)
at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:52)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2199)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2179)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:244)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:332)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:208)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4(Partitioner.scala:78)
at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4$adapted(Partitioner.scala:78)
at scala.collection.immutable.List.map(List.scala:293)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at org.apache.spark.rdd.PairRDDFunctions.$anonfun$reduceByKey$4(PairRDDFunctions.scala:323)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:323)
at main.WordCount$.main(WordCount.scala:16)
at main.WordCount.main(WordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/01/26 17:00:12 INFO SparkContext: Invoking stop() from shutdown hook
23/01/26 17:00:12 INFO SparkUI: Stopped Spark web UI at http://localhost:4040
23/01/26 17:00:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/01/26 17:00:12 INFO MemoryStore: MemoryStore cleared
23/01/26 17:00:12 INFO BlockManager: BlockManager stopped
23/01/26 17:00:12 INFO BlockManagerMaster: BlockManagerMaster stopped
23/01/26 17:00:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/01/26 17:00:12 INFO SparkContext: Successfully stopped SparkContext
23/01/26 17:00:12 INFO ShutdownHookManager: Shutdown hook called
23/01/26 17:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-d7979eef-eac8-4a89-8ee0-246a821703d6
23/01/26 17:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-26625e11-a7f1-41f7-b2b3-29f97ea9e75a
A Spark job (Scala/S3) worked fine for a few runs on a standalone cluster with spark-submit, but after a few runs it started giving the error below. There were no changes to the code; it makes a connection to spark-master, but the application is immediately killed with the reason “All masters are unresponsive! Giving up”.
22/03/20 05:33:39 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/03/20 05:33:39 INFO TransportClientFactory: Successfully created connection to spark-master/xx.x.x.xxx:7077 after 42 ms (0 ms spent in bootstraps)
22/03/20 05:33:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/03/20 05:34:19 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/03/20 05:34:39 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
22/03/20 05:34:39 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
22/03/20 05:34:39 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33139.
22/03/20 05:34:39 INFO NettyBlockTransferService: Server created on a1326e4ae4bb:33139
22/03/20 05:34:39 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/03/20 05:34:39 INFO SparkUI: Stopped Spark web UI at http://xxxxxxxxxxxxx:4040
22/03/20 05:34:39 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 INFO StandaloneSchedulerBackend: Shutting down all executors
22/03/20 05:34:39 INFO BlockManagerMasterEndpoint: Registering block manager a1326e4ae4bb:33139 with 1168.8 MiB RAM, BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
22/03/20 05:34:39 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
22/03/20 05:34:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/03/20 05:34:39 INFO MemoryStore: MemoryStore cleared
22/03/20 05:34:39 INFO BlockManager: BlockManager stopped
22/03/20 05:34:39 INFO BlockManagerMaster: BlockManagerMaster stopped
22/03/20 05:34:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/03/20 05:34:40 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:281)
I'm trying to use Word2Vec from mllib in order to apply k-means afterwards. I'm using Scala 2.10.5 and Spark 1.6.3. This is my code (after tokenization):
val word2Vec = new Word2Vec()
  .setMinCount(2)
  .setInputCol("FilteredFeauturesEntities")
  .setOutputCol("Word2VecFeatures")
  .setVectorSize(1000)

val model = word2Vec.fit(CleanedTokenizedDataFrame)
val word2VecDataFrame = model.transform(CleanedTokenizedDataFrame)
word2VecDataFrame.show()
I'm not getting a specific error, but my job never reaches the final lines.
This is the log output:
18/02/05 15:39:32 INFO TaskSetManager: Finished task 4.0 in stage 4.0 (TID 23) in 3143 ms on dhadlx122.haas.xxxxxx (2/9)
18/02/05 15:39:32 INFO TaskSetManager: Starting task 5.1 in stage 4.0 (TID 28, dhadlx121.haas.xxxxxx, partition 5,NODE_LOCAL, 2329 bytes)
18/02/05 15:39:32 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 20) in 3217 ms on dhadlx121.haas.xxxxxx (3/9)
18/02/05 15:39:32 INFO TaskSetManager: Finished task 1.0 in stage 4.0 (TID 22) in 3309 ms on dhadlx123.haas.xxxxxx (4/9)
18/02/05 15:39:32 INFO TaskSetManager: Finished task 2.0 in stage 4.0 (TID 21) in 3677 ms on dhadlx121.haas.xxxxxx (5/9)
18/02/05 15:39:33 INFO TaskSetManager: Finished task 6.0 in stage 4.0 (TID 25) in 3901 ms on dhadlx126.haas.xxxxxx (6/9)
18/02/05 15:39:33 INFO YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (dhadlx127.haas.xxxxxx:48384) with ID 6
18/02/05 15:39:33 INFO BlockManagerMasterEndpoint: Registering block manager dhadlx127.haas.xxxxxx:37909 with 5.3 GB RAM, BlockManagerId(6, dhadlx127.haas.xxxxxx, 37909)
18/02/05 15:39:33 INFO TaskSetManager: Lost task 5.1 in stage 4.0 (TID 28) on executor dhadlx121.haas.xxxxxx: java.lang.NullPointerException (null) [duplicate 1]
18/02/05 15:39:33 INFO TaskSetManager: Starting task 5.2 in stage 4.0 (TID 29, dhadlx128.haas.xxxxxx, partition 5,RACK_LOCAL, 2329 bytes)
18/02/05 15:39:33 INFO TaskSetManager: Finished task 7.0 in stage 4.0 (TID 27) in 2948 ms on dhadlx125.haas.xxxxxx (7/9)
18/02/05 15:39:34 INFO TaskSetManager: Lost task 5.2 in stage 4.0 (TID 29) on executor dhadlx128.haas.xxxxxx: java.lang.NullPointerException (null) [duplicate 2]
18/02/05 15:39:34 INFO TaskSetManager: Starting task 5.3 in stage 4.0 (TID 30, dhadlx127.haas.xxxxxx, partition 5,RACK_LOCAL, 2329 bytes)
18/02/05 15:39:35 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on dhadlx127.haas.xxxxxx:37909 (size: 26.4 KB, free: 5.3 GB)
18/02/05 15:39:35 INFO TaskSetManager: Finished task 3.0 in stage 4.0 (TID 19) in 6321 ms on dhadlx120.haas.xxxxxx (8/9)
18/02/05 15:39:36 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on dhadlx127.haas.xxxxxx:37909 (size: 58.9 KB, free: 5.3 GB)
18/02/05 15:39:40 INFO TaskSetManager: Lost task 5.3 in stage 4.0 (TID 30) on executor dhadlx127.haas.xxxxxx: java.lang.NullPointerException (null) [duplicate 3]
18/02/05 15:39:40 ERROR TaskSetManager: Task 5 in stage 4.0 failed 4 times; aborting job
18/02/05 15:39:40 INFO YarnScheduler: Removed TaskSet 4.0, whose tasks have all completed, from pool
18/02/05 15:39:40 INFO YarnScheduler: Cancelling stage 4
18/02/05 15:39:40 INFO DAGScheduler: ShuffleMapStage 4 (map at Word2Vec.scala:161) failed in 11.037 s
18/02/05 15:39:40 INFO DAGScheduler: Job 3 failed: collect at Word2Vec.scala:170, took 11.058049 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 4.0 failed 4 times, most recent failure: Lost task 5.3 in stage 4.0 (TID 30, dhadlx127.haas.xxxxxx): java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
at java.util.regex.Matcher.reset(Matcher.java:309)
at java.util.regex.Matcher.<init>(Matcher.java:229)
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at scala.util.matching.Regex.replaceAllIn(Regex.scala:385)
at SemanticAnalysis.App$$anonfun$extractPattern$1$1.apply(App.scala:63)
at SemanticAnalysis.App$$anonfun$extractPattern$1$1.apply(App.scala:63)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:189)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1433)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1421)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1420)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1420)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:801)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:622)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1831)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1844)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1857)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:934)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:323)
at org.apache.spark.rdd.RDD.collect(RDD.scala:933)
at org.apache.spark.mllib.feature.Word2Vec.learnVocab(Word2Vec.scala:170)
at org.apache.spark.mllib.feature.Word2Vec.fit(Word2Vec.scala:284)
at org.apache.spark.ml.feature.Word2Vec.fit(Word2Vec.scala:149)
at SemanticAnalysis.App$.main(App.scala:126)
at SemanticAnalysis.App.main(App.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
at java.util.regex.Matcher.reset(Matcher.java:309)
at java.util.regex.Matcher.<init>(Matcher.java:229)
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at scala.util.matching.Regex.replaceAllIn(Regex.scala:385)
at SemanticAnalysis.App$$anonfun$extractPattern$1$1.apply(App.scala:63)
at SemanticAnalysis.App$$anonfun$extractPattern$1$1.apply(App.scala:63)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:189)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
18/02/05 15:39:40 INFO SparkContext: Invoking stop() from shutdown hook
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
18/02/05 15:39:40 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
18/02/05 15:39:40 INFO SparkUI: Stopped Spark web UI at http://xxx.xx.xx.xxx:xxxx
18/02/05 15:39:40 INFO YarnClientSchedulerBackend: Interrupting monitor thread
18/02/05 15:39:40 INFO YarnClientSchedulerBackend: Shutting down all executors
18/02/05 15:39:40 INFO YarnClientSchedulerBackend: Asking each executor to shut down
18/02/05 15:39:40 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/02/05 15:39:40 INFO YarnClientSchedulerBackend: Stopped
18/02/05 15:39:40 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/02/05 15:39:40 INFO MemoryStore: MemoryStore cleared
18/02/05 15:39:40 INFO BlockManager: BlockManager stopped
18/02/05 15:39:40 INFO BlockManagerMaster: BlockManagerMaster stopped
18/02/05 15:39:40 INFO SparkContext: Successfully stopped SparkContext
18/02/05 15:39:40 INFO ShutdownHookManager: Shutdown hook called
18/02/05 15:39:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-e769e7c5-4336-45bd-97cd-e0731803f45f
18/02/05 15:39:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-f427cf4c-4236-4e57-a304-6be2a52932f3
18/02/05 15:39:40 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/02/05 15:39:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-f427cf4c-4236-4e57-a304-6be2a52932f3/httpd-0ab9e5ee-930e-4a48-be77-f5a6d2b01250
18/02/05 15:39:40 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/02/05 15:39:40 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/02/05 15:39:40 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
Moreover, the same code works for a small example in the same environment:
package BIGDATA

/**
 * @author ${user.name}
 */

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{ArrayType, StringType, StructField, StructType}
import org.apache.spark.ml.feature.StopWordsRemover
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.mllib.linalg.{VectorUDT, Vectors}

object App {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("SEMANTIC ANALYSIS - TEST")
    val sc = new SparkContext(conf)
    val hiveContext = new HiveContext(sc)
    import hiveContext.implicits._

    println("====================================================")
    println("READING DATA")
    println("====================================================")
    val pattern: scala.util.matching.Regex = "(([\\w\\.-]+@[\\w\\.-]+)|((X|A|x|a)\\d{6})|(MA\\d{7}\\w|MA\\d{7}|FR\\d{8}\\w)|(w+\\..*(\\.com|fr))|([|\\[\\]!\\(\\)?,;:#&*#_=\\/]*))".r
    def extractPattern(pattern: scala.util.matching.Regex) = udf(
      (title: String) => pattern.replaceAllIn(title, "")
    )

    val df = Seq(
      (8, "Hi I heard about Spark x163021. Now, let’s use trained model by loading it. We need to import KMeansModel in order to use it for loading the model from file."),
      (64, "I wish Java could use case classes. Above is a very naive example in which we use training dataset as input data too. In real world we will train a model, save it and later use it for predicting clusters of input data."),
      (-27, "Logistic regression models are neat. Here is how you can save a trained model and later load it for prediction.")
    ).toDF("number", "word").select($"number", $"word",
      extractPattern(pattern)($"word").alias("NewWord"))

    println("====================================================")
    println("FEATURE TRANSFORMERS")
    println("====================================================")

    val tokenizer = new Tokenizer()
      .setInputCol("NewWord")
      .setOutputCol("FeauturesEntities")
    val TokenizedDataFrame = tokenizer.transform(df)

    val remover = new StopWordsRemover()
      .setInputCol("FeauturesEntities")
      .setOutputCol("FilteredFeauturesEntities")
    val CleanedTokenizedDataFrame = remover.transform(TokenizedDataFrame)
    CleanedTokenizedDataFrame.show()

    println("====================================================")
    println("WORD2VEC : LEARN A MAPPING FROM WORDS TO VECTORS")
    println("====================================================")

    // Learn a mapping from words to Vectors.
    val word2Vec = new Word2Vec()
      .setMinCount(2)
      .setInputCol("FilteredFeauturesEntities")
      .setOutputCol("Word2VecFeatures")
      .setVectorSize(1000)
    val model = word2Vec.fit(CleanedTokenizedDataFrame)
    val word2VecDataFrame = model.transform(CleanedTokenizedDataFrame)
    word2VecDataFrame.show()
  }
}
What's wrong with the first example? Thanks!
Your code never reaches Word2Vec. It fails on the udf call because the word column contains nulls. For example,
val df = Seq((1, null), (2, "foo bar")).toDF("id", "word")
df.select(extractPattern(pattern)($"word").alias("NewWord")).show
will fail in the same way:
java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
at java.util.regex.Matcher.reset(Matcher.java:309)
at java.util.regex.Matcher.<init>(Matcher.java:229)
at java.util.regex.Pattern.matcher(Pattern.java:1093)
Clean your data with na.drop before you proceed, and in general prefer the built-in regexp_replace over a udf.
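A minimal sketch of that combination, reusing the df and pattern from the snippets above (the $ column syntax assumes the usual implicits import, as in the question):

import org.apache.spark.sql.functions.regexp_replace

// Drop rows whose "word" is null, then strip the pattern with the built-in
// regexp_replace instead of a Scala udf; the SQL function is null-safe, so
// it does not throw the NullPointerException the udf does.
val cleaned = df
  .na.drop(Seq("word"))
  .withColumn("NewWord", regexp_replace($"word", pattern.regex, ""))

cleaned.show()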
I just created a DC/OS cluster and am trying to run a simple Spark task that reads data from /mnt/mesos/sandbox.
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Simple Application")

    println("STARTING JOB!")

    val sc = new SparkContext(conf)
    val rdd = sc.textFile("file:///mnt/mesos/sandbox/foo")
    println(rdd.count)

    println("ENDING JOB!")
  }
}
And I'm deploying the app using
dcos spark run --submit-args='--conf spark.mesos.uris=https://dripit-spark.s3.amazonaws.com/foo --class SimpleApp https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' --verbose
Unfortunately, the task keeps failing with the following exception:
I0701 18:47:35.782994 30997 logging.cpp:188] INFO level logging started!
I0701 18:47:35.783197 30997 fetcher.cpp:424] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foobar-assembly-1.0.jar"}},{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foo"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2\/frameworks\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002\/executors\/driver-20160701184530-0001\/runs\/67b94f34-a9d3-4662-bedc-8578381e9305"}
I0701 18:47:35.784752 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784791 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:35.784818 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784835 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
W0701 18:47:36.057448 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar
I0701 18:47:36.057673 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
I0701 18:47:36.057696 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057714 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:36.057741 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057770 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
W0701 18:47:36.114565 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foo
I0701 18:47:36.114600 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
I0701 18:47:36.307576 31006 exec.cpp:143] Version: 0.28.1
I0701 18:47:36.310127 31022 exec.cpp:217] Executor registered on slave c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2
16/07/01 18:47:37 INFO SparkContext: Running Spark version 1.6.1
16/07/01 18:47:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/01 18:47:37 WARN SparkConf:
SPARK_JAVA_OPTS was detected (set to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with conf/spark-defaults.conf to set defaults for an application
- ./spark-submit with --driver-java-options to set -X options for a driver
- spark.executor.extraJavaOptions to set -X options for executors
- SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 INFO SecurityManager: Changing view acls to: root
16/07/01 18:47:37 INFO SecurityManager: Changing modify acls to: root
16/07/01 18:47:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 18:47:37 INFO Utils: Successfully started service 'sparkDriver' on port 47358.
16/07/01 18:47:38 INFO Slf4jLogger: Slf4jLogger started
16/07/01 18:47:38 INFO Remoting: Starting remoting
16/07/01 18:47:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#10.0.1.107:54467]
16/07/01 18:47:38 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54467.
16/07/01 18:47:38 INFO SparkEnv: Registering MapOutputTracker
16/07/01 18:47:38 INFO SparkEnv: Registering BlockManagerMaster
16/07/01 18:47:38 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-96092a9a-3164-4d65-8c0b-df5403abb056
16/07/01 18:47:38 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/07/01 18:47:38 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SelectChannelConnector#0.0.0.0:4040
16/07/01 18:47:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/01 18:47:38 INFO SparkUI: Started SparkUI at http://10.0.1.107:4040
16/07/01 18:47:38 INFO HttpFileServer: HTTP File server directory is /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:38 INFO HttpServer: Starting HTTP Server
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SocketConnector#0.0.0.0:49074
16/07/01 18:47:38 INFO Utils: Successfully started service 'HTTP file server' on port 49074.
16/07/01 18:47:38 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/foobar-assembly-1.0.jar at http://10.0.1.107:49074/jars/foobar-assembly-1.0.jar with timestamp 1467398858626
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#716: Client environment:host.name=ip-10-0-1-107.eu-west-1.compute.internal
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#723: Client environment:os.name=Linux
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#724: Client environment:os.arch=4.1.7-coreos-r1
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#725: Client environment:os.version=#2 SMP Thu Nov 5 02:10:23 UTC 2015
I0701 18:47:38.778355 103 sched.cpp:164] Version: 0.25.0
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#733: Client environment:user.name=(null)
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#741: Client environment:user.home=/root
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#753: Client environment:user.dir=/opt/spark/dist
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#zookeeper_init#786: Initiating client connection, host=master.mesos:2181 sessionTimeout=10000 watcher=0x7f74d587c600 sessionId=0 sessionPasswd=<null> context=0x7f7540003f70 flags=0
2016-07-01 18:47:38,786:6(0x7f74c6ec0700):ZOO_INFO#check_events#1703: initiated connection to server [10.0.7.83:2181]
2016-07-01 18:47:38,787:6(0x7f74c6ec0700):ZOO_INFO#check_events#1750: session establishment complete on server [10.0.7.83:2181], sessionId=0x155a57d07f60050, negotiated timeout=10000
I0701 18:47:38.788107 99 group.cpp:331] Group process (group(1)#10.0.1.107:35064) connected to ZooKeeper
I0701 18:47:38.788147 99 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0701 18:47:38.788162 99 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I0701 18:47:38.789402 99 detector.cpp:156] Detected a new leader: (id='1')
I0701 18:47:38.789512 99 group.cpp:674] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I0701 18:47:38.790228 99 detector.cpp:481] A new leading master (UPID=master#10.0.7.83:5050) is detected
I0701 18:47:38.790293 99 sched.cpp:262] New master detected at master#10.0.7.83:5050
I0701 18:47:38.790473 99 sched.cpp:272] No credentials provided. Attempting to register without authentication
I0701 18:47:38.792147 97 sched.cpp:641] Framework registered with c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO CoarseMesosSchedulerBackend: Registered as framework ID c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38752.
16/07/01 18:47:38 INFO NettyBlockTransferService: Server created on 38752
16/07/01 18:47:38 INFO BlockManagerMaster: Trying to register BlockManager
16/07/01 18:47:38 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.1.107:38752 with 511.1 MB RAM, BlockManagerId(driver, 10.0.1.107, 38752)
16/07/01 18:47:38 INFO BlockManagerMaster: Registered BlockManager
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 117.2 KB, free 117.2 KB)
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.6 KB, free 129.8 KB)
16/07/01 18:47:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.1.107:38752 (size: 12.6 KB, free: 511.1 MB)
16/07/01 18:47:39 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.scala:13
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 4 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now TASK_RUNNING
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:124)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:74)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:65)
at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:152)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:427)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
at SimpleApp$.main(SimpleApp.scala:15)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:786)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:123)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: namenode1.hdfs.mesos
... 48 more
16/07/01 18:47:39 INFO SparkContext: Invoking stop() from shutdown hook
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/01 18:47:40 INFO SparkUI: Stopped Spark web UI at http://10.0.1.107:4040
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Shutting down all executors
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Asking each executor to shut down
I0701 18:47:40.051103 111 sched.cpp:1771] Asked to stop the driver
I0701 18:47:40.051283 96 sched.cpp:1040] Stopping framework 'c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001'
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
16/07/01 18:47:40 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/01 18:47:40 INFO MemoryStore: MemoryStore cleared
16/07/01 18:47:40 INFO BlockManager: BlockManager stopped
16/07/01 18:47:40 INFO BlockManagerMaster: BlockManagerMaster stopped
16/07/01 18:47:40 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/01 18:47:40 INFO SparkContext: Successfully stopped SparkContext
16/07/01 18:47:40 INFO ShutdownHookManager: Shutdown hook called
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75
Why is Spark trying to connect to HDFS even though the file scheme is explicitly set to file://?
I thought that sc.textFile("file:///") didn't require an HDFS setup.
Spark always uses the Hadoop API to access a file, regardless of whether that file is local or in HDFS.
I think the problem is that your Spark is inheriting an invalid HDFS configuration and hitting this bug: https://issues.apache.org/jira/browse/SPARK-11227
You should try the workarounds from that ticket to see if one works for you:
Use an older Spark < 1.5.0
Disable HA in HDFS configuration.
Spark will still use HDFS to write the intermediate results of the stages (in your case, I would guess the partial counts).
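If disabling HA on those nodes is not practical, one sketch of the same idea (an assumption on my part, not verified on DC/OS) is to override the inherited filesystem default from the Spark side; properties prefixed with spark.hadoop. are copied into the Hadoop Configuration that the driver builds:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: default the Hadoop Configuration to the local filesystem so
// that JobConf.getWorkingDirectory no longer tries to resolve the
// unreachable HA namenodes (namenode1.hdfs.mesos).
val conf = new SparkConf()
  .setAppName("Simple Application")
  .set("spark.hadoop.fs.defaultFS", "file:///")

val sc = new SparkContext(conf)
println(sc.textFile("file:///mnt/mesos/sandbox/foo").count)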
I really need your help to understand what I'm doing wrong.
The intent of my experiment is to run a Spark job programmatically instead of using ./spark-shell or ./spark-submit (both of these work for me).
Environment:
I've created a Spark cluster with 1 master and 1 worker using the ./spark-ec2 script.
The cluster looks good; however, when I try to run the following code, packaged in a jar:
package com.paycasso

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "file:///root/spark/bin/README.md"
    val conf = new SparkConf()
    conf.setAppName("Simple App")
    // Ship the application jar to the executors and point at the standalone master
    conf.setJars(List("file:///root/spark/bin/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar"))
    conf.setMaster("spark://ec2-54-89-51-36.compute-1.amazonaws.com:7077")
    val sc = new SparkContext(conf)
    // Count lines containing "a" and "b" in the local README
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(_.contains("a")).count()
    val numBs = logData.filter(_.contains("b")).count()
    println(s"1. Lines with a: $numAs, Lines with b: $numBs")
  }
}
I get an exception:
[info] Running com.paycasso.SimpleApp
14/09/05 14:50:29 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/09/05 14:50:29 INFO SecurityManager: Changing view acls to: root
14/09/05 14:50:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
14/09/05 14:50:30 INFO Slf4jLogger: Slf4jLogger started
14/09/05 14:50:30 INFO Remoting: Starting remoting
14/09/05 14:50:30 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark#ip-10-224-14-90.ec2.internal:54683]
14/09/05 14:50:30 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark#ip-10-224-14-90.ec2.internal:54683]
14/09/05 14:50:30 INFO SparkEnv: Registering MapOutputTracker
14/09/05 14:50:30 INFO SparkEnv: Registering BlockManagerMaster
14/09/05 14:50:30 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140905145030-85cb
14/09/05 14:50:30 INFO MemoryStore: MemoryStore started with capacity 589.2 MB.
14/09/05 14:50:30 INFO ConnectionManager: Bound socket to port 47852 with id = ConnectionManagerId(ip-10-224-14-90.ec2.internal,47852)
14/09/05 14:50:30 INFO BlockManagerMaster: Trying to register BlockManager
14/09/05 14:50:30 INFO BlockManagerInfo: Registering block manager ip-10-224-14-90.ec2.internal:47852 with 589.2 MB RAM
14/09/05 14:50:30 INFO BlockManagerMaster: Registered BlockManager
14/09/05 14:50:30 INFO HttpServer: Starting HTTP Server
14/09/05 14:50:30 INFO HttpBroadcast: Broadcast server started at http://**.***.**.**:49211
14/09/05 14:50:30 INFO HttpFileServer: HTTP File server directory is /tmp/spark-e2748605-17ec-4524-983b-97aaf2f94b30
14/09/05 14:50:30 INFO HttpServer: Starting HTTP Server
14/09/05 14:50:31 INFO SparkUI: Started SparkUI at http://ip-10-224-14-90.ec2.internal:4040
14/09/05 14:50:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/05 14:50:32 INFO SparkContext: Added JAR file:///root/spark/bin/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar at http://**.***.**.**:46491/jars/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar with timestamp 1409928632274
14/09/05 14:50:32 INFO AppClient$ClientActor: Connecting to master spark://ec2-54-89-51-36.compute-1.amazonaws.com:7077...
14/09/05 14:50:32 INFO MemoryStore: ensureFreeSpace(163793) called with curMem=0, maxMem=617820979
14/09/05 14:50:32 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 160.0 KB, free 589.0 MB)
14/09/05 14:50:32 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140905145032-0005
14/09/05 14:50:32 INFO AppClient$ClientActor: Executor added: app-20140905145032-0005/0 on worker-20140905141732-ip-10-80-90-29.ec2.internal-57457 (ip-10-80-90-29.ec2.internal:57457) with 2 cores
14/09/05 14:50:32 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140905145032-0005/0 on hostPort ip-10-80-90-29.ec2.internal:57457 with 2 cores, 512.0 MB RAM
14/09/05 14:50:32 INFO AppClient$ClientActor: Executor updated: app-20140905145032-0005/0 is now RUNNING
14/09/05 14:50:33 INFO FileInputFormat: Total input paths to process : 1
14/09/05 14:50:33 INFO SparkContext: Starting job: count at SimpleApp.scala:26
14/09/05 14:50:33 INFO DAGScheduler: Got job 0 (count at SimpleApp.scala:26) with 1 output partitions (allowLocal=false)
14/09/05 14:50:33 INFO DAGScheduler: Final stage: Stage 0(count at SimpleApp.scala:26)
14/09/05 14:50:33 INFO DAGScheduler: Parents of final stage: List()
14/09/05 14:50:33 INFO DAGScheduler: Missing parents: List()
14/09/05 14:50:33 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:26), which has no missing parents
14/09/05 14:50:33 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:26)
14/09/05 14:50:33 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/09/05 14:50:36 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor#ip-10-80-90-29.ec2.internal:36966/user/Executor#2034537974] with ID 0
14/09/05 14:50:36 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on executor 0: ip-10-80-90-29.ec2.internal (PROCESS_LOCAL)
14/09/05 14:50:36 INFO TaskSetManager: Serialized task 0.0:0 as 1880 bytes in 8 ms
14/09/05 14:50:37 INFO BlockManagerInfo: Registering block manager ip-10-80-90-29.ec2.internal:59950 with 294.9 MB RAM
14/09/05 14:50:38 WARN TaskSetManager: Lost TID 0 (task 0.0:0)
14/09/05 14:50:38 WARN TaskSetManager: Loss was due to java.io.EOFException
java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
at org.apache.hadoop.io.UTF8.readChars(UTF8.java:216)
at org.apache.hadoop.io.UTF8.readString(UTF8.java:208)
at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:147)
at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
What I'm actually doing is calling "sbt run", so I assemble the Scala project and run it.
By the way, I run that project on the master host, so the driver is definitely visible to the worker host.
Any help is appreciated. It's very strange that such a simple example doesn't work on the cluster. Using ./spark-submit is not convenient, I believe.
Thanks in advance.
After wasting a lot of time, I found the problem. Even though I don't use Hadoop/HDFS in my application, the Hadoop client still matters. The problem was the hadoop-client version: it was different from the Hadoop version Spark was built for. Spark's Hadoop version was 1.2.1, but my application pulled in 2.4.
When I changed the hadoop-client version to 1.2.1 in my app, I was able to execute Spark code on the cluster.
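For reference, a minimal build.sbt sketch of that fix (the spark-core version shown is an assumption; the point is that hadoop-client must match the Hadoop version your Spark distribution was built against, 1.2.1 in this case):

// build.sbt (sketch): pin hadoop-client to the Hadoop version Spark was built for
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.0.2",    // assumed Spark version, adjust to your cluster
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"  // must match Spark's Hadoop build, not 2.4.x
)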