I have a Maven Scala application that submits a Spark job to a Spark standalone single-node cluster. When the job is submitted, the Spark application tries to access Cassandra, hosted on an Amazon EC2 instance, using spark-cassandra-connector. The connection is established, but no results are returned, and after some time the connector disconnects. It works fine if I run Spark in local mode.
I tried to create a simple application; my code looks like this:
import com.datastax.spark.connector._

val sc = SparkContextLoader.getSC

def runSparkJob(): Unit = {
  val table = sc.cassandraTable("prosolo_logs_zj", "logevents")
  println(table.collect().mkString("\n"))
}
SparkContext.scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkContextLoader {
  val sparkConf = new SparkConf()
  sparkConf.setMaster("spark://127.0.1.1:7077")
  sparkConf.set("spark.cores.max", "2")
  sparkConf.set("spark.executor.memory", "2g")
  sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  sparkConf.setAppName("Test application")
  sparkConf.set("spark.cassandra.connection.host", "xxx.xxx.xxx.xxx")
  sparkConf.set("spark.cassandra.connection.port", "9042")
  sparkConf.set("spark.ui.port", "4041")

  val oneJar = "/samplesparkmaven/target/samplesparkmaven-jar.jar"
  sparkConf.setJars(List(oneJar))

  @transient val sc = new SparkContext(sparkConf)

  def getSC: SparkContext = sc
}
Console output looks like:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/02/14 23:11:25 INFO SparkContext: Running Spark version 2.1.0
17/02/14 23:11:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/14 23:11:27 WARN Utils: Your hostname, zoran-Latitude-E5420 resolves to a loopback address: 127.0.1.1; using 192.168.2.68 instead (on interface wlp2s0)
17/02/14 23:11:27 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/02/14 23:11:27 INFO SecurityManager: Changing view acls to: zoran
17/02/14 23:11:27 INFO SecurityManager: Changing modify acls to: zoran
17/02/14 23:11:27 INFO SecurityManager: Changing view acls groups to:
17/02/14 23:11:27 INFO SecurityManager: Changing modify acls groups to:
17/02/14 23:11:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zoran); groups with view permissions: Set(); users with modify permissions: Set(zoran); groups with modify permissions: Set()
17/02/14 23:11:28 INFO Utils: Successfully started service 'sparkDriver' on port 33995.
17/02/14 23:11:28 INFO SparkEnv: Registering MapOutputTracker
17/02/14 23:11:28 INFO SparkEnv: Registering BlockManagerMaster
17/02/14 23:11:28 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/02/14 23:11:28 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/02/14 23:11:28 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7b25a4cc-cb37-4332-a59b-e36fa45511cd
17/02/14 23:11:28 INFO MemoryStore: MemoryStore started with capacity 870.9 MB
17/02/14 23:11:28 INFO SparkEnv: Registering OutputCommitCoordinator
17/02/14 23:11:28 INFO Utils: Successfully started service 'SparkUI' on port 4041.
17/02/14 23:11:28 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.2.68:4041
17/02/14 23:11:28 INFO SparkContext: Added JAR /samplesparkmaven/target/samplesparkmaven-jar.jar at spark://192.168.2.68:33995/jars/samplesparkmaven-jar.jar with timestamp 1487142688817
17/02/14 23:11:28 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://127.0.1.1:7077...
17/02/14 23:11:28 INFO TransportClientFactory: Successfully created connection to /127.0.1.1:7077 after 62 ms (0 ms spent in bootstraps)
17/02/14 23:11:29 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170214231129-0016
17/02/14 23:11:29 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36901.
17/02/14 23:11:29 INFO NettyBlockTransferService: Server created on 192.168.2.68:36901
17/02/14 23:11:29 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/02/14 23:11:29 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.2.68:36901 with 870.9 MB RAM, BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/02/14 23:11:29 INFO NettyUtil: Found Netty's native epoll transport in the classpath, using it
17/02/14 23:11:31 INFO Cluster: New Cassandra host /xxx.xxx.xxx.xxx:9042 added
17/02/14 23:11:31 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
17/02/14 23:11:32 INFO SparkContext: Starting job: collect at SparkConnector.scala:28
17/02/14 23:11:32 INFO DAGScheduler: Got job 0 (collect at SparkConnector.scala:28) with 6 output partitions
17/02/14 23:11:32 INFO DAGScheduler: Final stage: ResultStage 0 (collect at SparkConnector.scala:28)
17/02/14 23:11:32 INFO DAGScheduler: Parents of final stage: List()
17/02/14 23:11:32 INFO DAGScheduler: Missing parents: List()
17/02/14 23:11:32 INFO DAGScheduler: Submitting ResultStage 0 (CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:18), which has no missing parents
17/02/14 23:11:32 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 8.4 KB, free 870.9 MB)
17/02/14 23:11:32 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.4 KB, free 870.9 MB)
17/02/14 23:11:32 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.2.68:36901 (size: 4.4 KB, free: 870.9 MB)
17/02/14 23:11:32 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/02/14 23:11:32 INFO DAGScheduler: Submitting 6 missing tasks from ResultStage 0 (CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:18)
17/02/14 23:11:32 INFO TaskSchedulerImpl: Adding task set 0.0 with 6 tasks
17/02/14 23:11:39 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
I'm using
scala 2.11.6
spark 2.1.0 (both for standalone spark and dependency in application)
spark-cassandra-connector 2.0.0-M3
Cassandra Java driver 3.0.0
Apache Cassandra 3.9
The version compatibility table for the Cassandra connector doesn't show any conflict between these, but I can't figure out what else might be the problem.
I've finally solved the problem. It turned out to be a problem with the path: I was using a local path to the jar but forgot to add "." at the beginning, so it was treated as an absolute path.
Unfortunately, there was no exception in the application indicating that the file doesn't exist at the provided path; the only exception I got was from the worker, which could not find the jar file in the Spark cluster.
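For illustration, a minimal sketch of the corrected setting, assuming the driver process is started from the root of the project checkout (only the leading "." changes relative to the code above):

// Relative path (note the leading "."), resolved against the driver's working directory.
val oneJar = "./samplesparkmaven/target/samplesparkmaven-jar.jar"
sparkConf.setJars(List(oneJar))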
I am trying to run a simple Spark Scala application example from Spark: The Definitive Guide. It reads a JSON file and does some work on it, but running it reports _corrupt_record: string (nullable = true). The JSON file has one JSON object per line. I was wondering what is wrong? Thanks.
Scala code:
package com.databricks.example
import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession
object DFUtils extends Serializable {
  @transient lazy val logger = Logger.getLogger(getClass.getName)

  def pointlessUDF(raw: String) = {
    raw
  }
}

object DataFrameExample extends Serializable {
  def main(args: Array[String]): Unit = {
    val pathToDataFolder = args(0)
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()
    // udf registration
    spark.udf.register("myUDF", DFUtils.pointlessUDF(_: String): String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    df.printSchema()
    // df.collect.foreach(println)
    // val x = df.select("value").foreach(x => println(x));
    // val manipulated = df.groupBy("grouping").sum().collect().foreach(x => println(x))
    // val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect().foreach(x => println(x))
  }
}
/tmp/test/data.json is
{"grouping":"group_1", value:5}
{"grouping":"group_1", value:6}
{"grouping":"group_3", value:7}
{"grouping":"group_2", value:3}
{"grouping":"group_4", value:2}
{"grouping":"group_1", value:1}
{"grouping":"group_2", value:2}
{"grouping":"group_3", value:3}
build.sbt is
$ cat build.sbt
name := "example"
organization := "com.databricks"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.8"
// scalaVersion := "2.13.1"
// Spark Information
// val sparkVersion = "2.2.0"
val sparkVersion = "2.4.5"
// allows us to include spark packages
resolvers += "bintray-spark-packages" at
"https://dl.bintray.com/spark-packages/maven/"
resolvers += "Typesafe Simple Repository" at
"http://repo.typesafe.com/typesafe/simple/maven-releases/"
resolvers += "MavenRepository" at
"https://mvnrepository.com/"
libraryDependencies ++= Seq(
// spark core
"org.apache.spark" %% "spark-core" % sparkVersion,
"org.apache.spark" %% "spark-sql" % sparkVersion,
)
Build and package with SBT:
$ sbt package
[info] Loading project definition from /tmp/test/bookexample/project
[info] Loading settings for project bookexample from build.sbt ...
[info] Set current project to example (in build file:/tmp/test/bookexample/)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[info] Compiling 1 Scala source to /tmp/test/bookexample/target/scala-2.11/classes ...
[success] Total time: 28 s, completed Mar 19, 2020, 8:35:50 AM
Run with spark-submit:
$ ~/programs/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit --class com.databricks.example.DataFrameExample --master local target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar /tmp/test/
20/03/19 08:37:58 WARN Utils: Your hostname, ocean resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
20/03/19 08:37:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/03/19 08:37:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/19 08:38:00 INFO SparkContext: Running Spark version 2.4.5
20/03/19 08:38:00 INFO SparkContext: Submitted application: Spark Example
20/03/19 08:38:00 INFO SecurityManager: Changing view acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing view acls groups to:
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls groups to:
20/03/19 08:38:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(t); groups with view permissions: Set(); users with modify permissions: Set(t); groups with modify permissions: Set()
20/03/19 08:38:01 INFO Utils: Successfully started service 'sparkDriver' on port 46163.
20/03/19 08:38:01 INFO SparkEnv: Registering MapOutputTracker
20/03/19 08:38:01 INFO SparkEnv: Registering BlockManagerMaster
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/19 08:38:01 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-42f9b92d-1420-4e04-aaf6-acb635a27907
20/03/19 08:38:01 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/03/19 08:38:02 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/19 08:38:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/19 08:38:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.122.1:4040
20/03/19 08:38:02 INFO SparkContext: Added JAR file:/tmp/test/bookexample/target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar at spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:03 INFO Executor: Starting executor ID driver on host localhost
20/03/19 08:38:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35287.
20/03/19 08:38:03 INFO NettyBlockTransferService: Server created on 192.168.122.1:35287
20/03/19 08:38:03 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/19 08:38:03 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.122.1:35287 with 366.3 MB RAM, BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:04 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/user/hive/warehouse').
20/03/19 08:38:04 INFO SharedState: Warehouse path is '/user/hive/warehouse'.
20/03/19 08:38:05 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 97 ms to list leaf files for 1 paths.
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 3 ms to list leaf files for 1 paths.
20/03/19 08:38:12 INFO FileSourceStrategy: Pruning directories with:
20/03/19 08:38:12 INFO FileSourceStrategy: Post-Scan Filters:
20/03/19 08:38:12 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
20/03/19 08:38:12 INFO FileSourceScanExec: Pushed Filters:
20/03/19 08:38:14 INFO CodeGenerator: Code generated in 691.376591 ms
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 285.2 KB, free 366.0 MB)
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.3 KB, free 366.0 MB)
20/03/19 08:38:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.122.1:35287 (size: 23.3 KB, free: 366.3 MB)
20/03/19 08:38:14 INFO SparkContext: Created broadcast 0 from json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194560 bytes, open cost is considered as scanning 4194304 bytes.
20/03/19 08:38:14 INFO SparkContext: Starting job: json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO DAGScheduler: Got job 0 (json at DataFrameExample.scala:31) with 1 output partitions
20/03/19 08:38:14 INFO DAGScheduler: Final stage: ResultStage 0 (json at DataFrameExample.scala:31)
20/03/19 08:38:14 INFO DAGScheduler: Parents of final stage: List()
20/03/19 08:38:14 INFO DAGScheduler: Missing parents: List()
20/03/19 08:38:15 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31), which has no missing parents
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.3 KB, free 366.0 MB)
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 7.4 KB, free 366.0 MB)
20/03/19 08:38:15 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.122.1:35287 (size: 7.4 KB, free: 366.3 MB)
20/03/19 08:38:15 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1163
20/03/19 08:38:15 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31) (first 15 tasks are for partitions Vector(0))
20/03/19 08:38:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/03/19 08:38:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8242 bytes)
20/03/19 08:38:15 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/03/19 08:38:15 INFO Executor: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:15 INFO TransportClientFactory: Successfully created connection to /192.168.122.1:46163 after 145 ms (0 ms spent in bootstraps)
20/03/19 08:38:15 INFO Utils: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar to /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/fetchFileTemp5270349024712252124.tmp
20/03/19 08:38:16 INFO Executor: Adding file:/tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/example_2.11-0.1-SNAPSHOT.jar to class loader
20/03/19 08:38:16 INFO FileScanRDD: Reading File path: file:///tmp/test/data.json, range: 0-256, partition values: [empty row]
20/03/19 08:38:16 INFO CodeGenerator: Code generated in 88.903645 ms
20/03/19 08:38:16 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1893 bytes result sent to driver
20/03/19 08:38:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1198 ms on localhost (executor driver) (1/1)
20/03/19 08:38:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/03/19 08:38:16 INFO DAGScheduler: ResultStage 0 (json at DataFrameExample.scala:31) finished in 1.639 s
20/03/19 08:38:16 INFO DAGScheduler: Job 0 finished: json at DataFrameExample.scala:31, took 1.893394 s
root
|-- _corrupt_record: string (nullable = true)
20/03/19 08:38:16 INFO SparkContext: Invoking stop() from shutdown hook
20/03/19 08:38:16 INFO SparkUI: Stopped Spark web UI at http://192.168.122.1:4040
20/03/19 08:38:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/03/19 08:38:17 INFO MemoryStore: MemoryStore cleared
20/03/19 08:38:17 INFO BlockManager: BlockManager stopped
20/03/19 08:38:17 INFO BlockManagerMaster: BlockManagerMaster stopped
20/03/19 08:38:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/03/19 08:38:17 INFO SparkContext: Successfully stopped SparkContext
20/03/19 08:38:17 INFO ShutdownHookManager: Shutdown hook called
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-7d1fcc2e-af36-4dc4-ab6b-49b901e890ba
The original code from the book is
object DataFrameExample extends Serializable {
  def main(args: Array[String]) = {
    val pathToDataFolder = args(0)

    // start up the SparkSession
    // along with explicitly setting a given config
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()

    // udf registration
    spark.udf.register("myUDF", someUDF(_: String): String)

    val df = spark.read.json(pathToDataFolder + "data.json")

    val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect()
      .foreach(x => println(x))
  }
}
There is no issue with the code. The issue is with your data: it is not valid JSON. If you look closely, the double quote (") is missing around the value key in your data, which is why Spark reports _corrupt_record: string.
Change the data as below and run the same code:
{"grouping":"group_1", "value":5}
{"grouping":"group_1", "value":6}
{"grouping":"group_3", "value":7}
{"grouping":"group_2", "value":3}
{"grouping":"group_4", "value":2}
{"grouping":"group_1", "value":1}
{"grouping":"group_2", "value":2}
{"grouping":"group_3", "value":3}
val df = spark.read.json("/spath/files/1.json")
df.show()
+--------+-----+
|grouping|value|
+--------+-----+
| group_1| 5|
| group_1| 6|
| group_3| 7|
| group_2| 3|
| group_4| 2|
| group_1| 1|
| group_2| 2|
| group_3| 3|
+--------+-----+
As pointed out by others in this thread, the problem is that your input is not valid JSON. However, the JSON library used by Spark, and by extension Spark itself, supports such cases:
val df = spark
  .read
  .option("allowUnquotedFieldNames", "true")
  .json(pathToDataFolder + "data.json")
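To verify, a quick check after reading with that option (a sketch; df here is the DataFrame produced by the snippet above):

// With unquoted field names tolerated, the inferred schema should contain
// grouping and value rather than a single _corrupt_record column.
df.printSchema()
df.show()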
val conf = new SparkConf()
  .setAppName("example")
  .setMaster("local[*]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("setWarnUnregisteredClasses", "true")
When registrationRequired is set to true, it throws an exception saying that class Person is not registered, and also that "org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage" is not registered.
So by default it is false, and setting setWarnUnregisteredClasses to true should log a warning message for each unregistered class encountered, as described in the documentation: https://github.com/EsotericSoftware/kryo#serializer-framework. But nothing is shown in the logs regarding serialization.
What I am trying to do is get a list of all unregistered classes into my logs by setting the property .set("setWarnUnregisteredClasses","true").
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/12/10 15:56:09 WARN Utils: Your hostname, knoldus-Vostro-3546 resolves to a loopback address: 127.0.1.1; using 192.168.1.113 instead (on interface enp7s0)
19/12/10 15:56:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/12/10 15:56:10 INFO SparkContext: Running Spark version 2.4.4
19/12/10 15:56:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/10 15:56:12 INFO SparkContext: Submitted application: kyroExample
19/12/10 15:56:14 INFO SecurityManager: Changing view acls to: knoldus
19/12/10 15:56:14 INFO SecurityManager: Changing modify acls to: knoldus
19/12/10 15:56:14 INFO SecurityManager: Changing view acls groups to:
19/12/10 15:56:14 INFO SecurityManager: Changing modify acls groups to:
19/12/10 15:56:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(knoldus); groups with view permissions: Set(); users with modify permissions: Set(knoldus); groups with modify permissions: Set()
19/12/10 15:56:17 INFO Utils: Successfully started service 'sparkDriver' on port 36235.
19/12/10 15:56:17 INFO SparkEnv: Registering MapOutputTracker
19/12/10 15:56:18 INFO SparkEnv: Registering BlockManagerMaster
19/12/10 15:56:18 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/12/10 15:56:18 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/12/10 15:56:18 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-956a186e-cfbd-4ad2-b531-9f46bff96984
19/12/10 15:56:18 INFO MemoryStore: MemoryStore started with capacity 870.9 MB
19/12/10 15:56:18 INFO SparkEnv: Registering OutputCommitCoordinator
19/12/10 15:56:19 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/12/10 15:56:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.113:4040
19/12/10 15:56:19 INFO Executor: Starting executor ID driver on host localhost
19/12/10 15:56:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41737.
19/12/10 15:56:19 INFO NettyBlockTransferService: Server created on 192.168.1.113:41737
19/12/10 15:56:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/12/10 15:56:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.113:41737 with 870.9 MB RAM, BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:21 INFO SparkContext: Starting job: take at KyroExample.scala:28
19/12/10 15:56:21 INFO DAGScheduler: Got job 0 (take at KyroExample.scala:28) with 1 output partitions
19/12/10 15:56:21 INFO DAGScheduler: Final stage: ResultStage 0 (take at KyroExample.scala:28)
19/12/10 15:56:21 INFO DAGScheduler: Parents of final stage: List()
19/12/10 15:56:21 INFO DAGScheduler: Missing parents: List()
19/12/10 15:56:21 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at filter at KyroExample.scala:24), which has no missing parents
19/12/10 15:56:21 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.0 KB, free 870.9 MB)
19/12/10 15:56:22 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1730.0 B, free 870.9 MB)
19/12/10 15:56:22 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.113:41737 (size: 1730.0 B, free: 870.9 MB)
19/12/10 15:56:22 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161
19/12/10 15:56:22 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at filter at KyroExample.scala:24) (first 15 tasks are for partitions Vector(0))
19/12/10 15:56:22 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
19/12/10 15:56:22 WARN TaskSetManager: Stage 0 contains a task of very large size (243 KB). The maximum recommended task size is 100 KB.
19/12/10 15:56:22 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 249045 bytes)
19/12/10 15:56:22 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
19/12/10 15:56:23 INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 293.3 KB, free 870.6 MB)
19/12/10 15:56:23 INFO BlockManagerInfo: Added rdd_1_0 in memory on 192.168.1.113:41737 (size: 293.3 KB, free: 870.6 MB)
19/12/10 15:56:23 INFO Executor: 1 block locks were not released by TID = 0:
[rdd_1_0]
19/12/10 15:56:23 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1132 bytes result sent to driver
19/12/10 15:56:23 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 924 ms on localhost (executor driver) (1/1)
19/12/10 15:56:23 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/12/10 15:56:23 INFO DAGScheduler: ResultStage 0 (take at KyroExample.scala:28) finished in 1.733 s
19/12/10 15:56:23 INFO DAGScheduler: Job 0 finished: take at KyroExample.scala:28, took 1.895530 s
There are no "unregistered class encountered" warnings in the logs. Why?
I had the same problem.
The issue is that setWarnUnregisteredClasses is a Kryo setting that is currently (I use Spark 2.4.4) not exposed through Spark configuration.
You have to set the specific configuration in Kryo.
The workaround I used was to have a custom KryoRegistrator.
Then I used it in this way:
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.setRegistrationRequired(false)
    kryo.setWarnUnregisteredClasses(true)
    ...
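For context, a minimal sketch of how such a registrator is typically wired into the SparkConf; the class name MyKryoRegistrator comes from the snippet above, the rest is an assumption rather than the original answer's code:

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Point Spark at the custom registrator so the Kryo flags set above are actually applied.
  .set("spark.kryo.registrator", classOf[MyKryoRegistrator].getName)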
You are using Kryo registration, so custom and other classes need to be registered with Kryo, and both classes should implement the Serializable interface.
setWarnUnregisteredClasses will give warnings, while conf.set("spark.kryo.registrationRequired", "true") throws an exception for classes that are not registered.
You have to register Person and TaskCommitMessage, like:
conf.registerKryoClasses(Array(classOf[Person]))
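Since TaskCommitMessage is an internal Spark class, one way to register it is by its binary name taken from the exception above (a sketch; it assumes reflective lookup is acceptable in your build):

conf.registerKryoClasses(Array(
  classOf[Person],
  Class.forName("org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage")
))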
I get "Disconnected from the target VM, address: '127.0.0.1:39989', transport: 'socket'" in IntelliJ IDEA CE and I can't debug my program. Any suggestions?
Connected to the target VM, address: '127.0.0.1:39989', transport: 'socket'
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/12/29 17:29:47 INFO SparkContext: Running Spark version 2.1.2
17/12/29 17:29:49 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/29 17:29:49 WARN Utils: Your hostname, ashfaq-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
17/12/29 17:29:49 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/12/29 17:29:49 INFO SecurityManager: Changing view acls to: ashfaq
17/12/29 17:29:49 INFO SecurityManager: Changing modify acls to: ashfaq
17/12/29 17:29:49 INFO SecurityManager: Changing view acls groups to:
17/12/29 17:29:49 INFO SecurityManager: Changing modify acls groups to:
17/12/29 17:29:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ashfaq); groups with view permissions: Set(); users with modify permissions: Set(ashfaq); groups with modify permissions: Set()
17/12/29 17:29:51 INFO Utils: Successfully started service 'sparkDriver' on port 46133.
17/12/29 17:29:51 INFO SparkEnv: Registering MapOutputTracker
17/12/29 17:29:51 INFO SparkEnv: Registering BlockManagerMaster
17/12/29 17:29:51 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/12/29 17:29:51 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/12/29 17:29:51 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b3b48105-28be-4781-a395-c7e83cc72e8c
17/12/29 17:29:51 INFO MemoryStore: MemoryStore started with capacity 393.1 MB
17/12/29 17:29:51 INFO SparkEnv: Registering OutputCommitCoordinator
17/12/29 17:29:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/12/29 17:29:53 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
17/12/29 17:29:53 INFO Executor: Starting executor ID driver on host localhost
17/12/29 17:29:54 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33583.
17/12/29 17:29:54 INFO NettyBlockTransferService: Server created on 10.0.2.15:33583
17/12/29 17:29:54 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/12/29 17:29:54 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:54 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:33583 with 393.1 MB RAM, BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:54 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:54 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 236.5 KB, free 392.8 MB)
17/12/29 17:29:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.9 KB, free 392.8 MB)
17/12/29 17:29:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:33583 (size: 22.9 KB, free: 393.1 MB)
17/12/29 17:29:59 INFO SparkContext: Created broadcast 0 from textFile at scalaApp.scala:13
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/ashfaq/Desktop/saclaAPP/data/UserPurchaseHistory.csv
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1968)
at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
at ScalaApp$.main(scalaApp.scala:18)
at ScalaApp.main(scalaApp.scala)
17/12/29 17:29:59 INFO SparkContext: Invoking stop() from shutdown hook
17/12/29 17:29:59 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
17/12/29 17:29:59 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.0.2.15:33583 in memory (size: 22.9 KB, free: 393.1 MB)
17/12/29 17:29:59 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/12/29 17:30:00 INFO MemoryStore: MemoryStore cleared
17/12/29 17:30:00 INFO BlockManager: BlockManager stopped
17/12/29 17:30:00 INFO BlockManagerMaster: BlockManagerMaster stopped
17/12/29 17:30:00 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/12/29 17:30:00 INFO SparkContext: Successfully stopped SparkContext
17/12/29 17:30:00 INFO ShutdownHookManager: Shutdown hook called
Disconnected from the target VM, address: '127.0.0.1:39989', transport: 'socket'
17/12/29 17:30:00 INFO ShutdownHookManager: Deleting directory /tmp/spark-58667739-7c15-4665-8ede-fde9c3ff1d83
Process finished with exit code 1
It looks like you are trying to open a file which doesn't exist. The first line of the error message says so:
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/ashfaq/Desktop/saclaAPP/data/UserPurchaseHistory.csv
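A quick way to fail fast on a bad path before handing it to Spark (a sketch; sc is assumed to be the application's existing SparkContext, and the path is the one from the error message):

import java.nio.file.{Files, Paths}

val path = "/home/ashfaq/Desktop/saclaAPP/data/UserPurchaseHistory.csv"
// Fail immediately with a clear message if the file is missing,
// instead of a Hadoop InvalidInputException later at count().
require(Files.exists(Paths.get(path)), s"Input file not found: $path")
val data = sc.textFile(path)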
Spark 2.1.1 built for Hadoop 2.7.3
Scala 2.11.11
Cluster has 3 Linux RHEL 7.3 Azure VM's, running Spark Standalone Deploy Mode (no YARN or Mesos, yet)
I have created a very simple SparkStreaming job using IntelliJ, written in Scala. I'm using Maven and building the job into a fat/uber jar that contains all dependencies.
When I run the job locally it works fine. If I copy the jar to the cluster and run it with a master of local[2], it also works fine. However, if I submit the job to the cluster master, it's as if it does not want to schedule any work beyond the first task: the job starts up, grabs however many events are in the Azure Event Hub, processes them successfully, then never does any more work. It does not matter whether I submit the job to the master as just an application or in supervised cluster mode; both do the same thing.
I've looked through all the logs I know of (master, driver (where applicable), and executor) and I am not seeing any errors or warnings that seem actionable. I've altered the log level, shown below, to show ALL/INFO/DEBUG and sifted through those logs without finding anything that seems relevant.
It may be worth noting that I had previously created several jobs that connect to Kafka, instead of the Azure Event Hub, using Java, and those jobs run in supervised cluster mode without an issue on this same cluster. This leads me to believe that the cluster configuration isn't the issue; it's either something with my code (below) or the Azure Event Hub.
Any thoughts on where I might check next to isolate this issue? Here is the code for my simple job.
Thanks in advance.
Note: conf.{name} indicates values I'm loading from a config file. I've tested loading and hard-coding them, both with the same result.
package streamingJob

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.eventhubs.EventHubsUtils
import org.joda.time.DateTime

object TestJob {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
    sparkConf.setAppName("TestJob")
    // Uncomment to run locally
    //sparkConf.setMaster("local[2]")

    val sparkContext = new SparkContext(sparkConf)
    sparkContext.setLogLevel("ERROR")

    val streamingContext: StreamingContext = new StreamingContext(sparkContext, Seconds(1))

    val readerParams = Map[String, String](
      "eventhubs.policyname" -> conf.policyname,
      "eventhubs.policykey" -> conf.policykey,
      "eventhubs.namespace" -> conf.namespace,
      "eventhubs.name" -> conf.name,
      "eventhubs.partition.count" -> conf.partitionCount,
      "eventhubs.consumergroup" -> conf.consumergroup
    )

    val eventData = EventHubsUtils.createDirectStreams(
      streamingContext,
      conf.namespace,
      conf.progressdir,
      Map("name" -> readerParams))

    eventData.foreachRDD(r => {
      r.foreachPartition { p => {
        p.foreach(d => {
          println(DateTime.now() + ": " + d)
        }) // end of EventData
      }} // foreachPartition
    }) // foreachRDD

    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
Here is a set of logs from when I run this as an application, not cluster/supervised.
/spark/bin/spark-submit --class streamingJob.TestJob --master spark://{ip}:7077 --total-executor-cores 1 /spark/job-files/fatjar.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/06 17:52:04 INFO SparkContext: Running Spark version 2.1.1
17/11/06 17:52:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/06 17:52:05 INFO SecurityManager: Changing view acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing view acls groups to:
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls groups to:
17/11/06 17:52:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/11/06 17:52:06 INFO Utils: Successfully started service 'sparkDriver' on port 44384.
17/11/06 17:52:06 INFO SparkEnv: Registering MapOutputTracker
17/11/06 17:52:06 INFO SparkEnv: Registering BlockManagerMaster
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/06 17:52:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b5e2c0f3-2500-42c6-b057-cf5d368580ab
17/11/06 17:52:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/11/06 17:52:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/06 17:52:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/06 17:52:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{ip}:4040
17/11/06 17:52:06 INFO SparkContext: Added JAR file:/spark/job-files/fatjar.jar at spark://{ip}:44384/jars/fatjar.jar with timestamp 1509990726989
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://{ip}:7077...
17/11/06 17:52:07 INFO TransportClientFactory: Successfully created connection to /{ip}:7077 after 72 ms (0 ms spent in bootstraps)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20171106175207-0000
17/11/06 17:52:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44624.
17/11/06 17:52:07 INFO NettyBlockTransferService: Server created on {ip}:44624
17/11/06 17:52:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20171106175207-0000/0 on worker-20171106173151-{ip}-46086 ({ip}:46086) with 1 cores
17/11/06 17:52:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Granted executor ID app-20171106175207-0000/0 on hostPort {ip}:46086 with 1 cores, 1024.0 MB RAM
17/11/06 17:52:07 INFO BlockManagerMasterEndpoint: Registering block manager {ip}:44624 with 366.3 MB RAM, BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20171106175207-0000/0 is now RUNNING
17/11/06 17:52:08 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
I really need your help to understand what I'm doing wrong.
The intent of my experiment is to run a Spark job programmatically instead of using ./spark-shell or ./spark-submit (both of these work for me).
Environment:
I've created a Spark cluster with 1 master and 1 worker using the ./spark-ec2 script.
The cluster looks good; however, when I try to run the following code packaged in a jar:
val logFile = "file:///root/spark/bin/README.md"
val conf = new SparkConf()
conf.setAppName("Simple App")
conf.setJars(List("file:///root/spark/bin/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar"))
conf.setMaster("spark://ec2-54-89-51-36.compute-1.amazonaws.com:7077")
val sc = new SparkContext(conf)
val logData = sc.textFile(logFile, 2).cache()
val numAs = logData.filter(_.contains("a")).count()
val numBs = logData.filter(_.contains("b")).count()
println(s"1. Lines with a: $numAs, Lines with b: $numBs")
I get an exception:
[info] Running com.paycasso.SimpleApp
14/09/05 14:50:29 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/09/05 14:50:29 INFO SecurityManager: Changing view acls to: root
14/09/05 14:50:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
14/09/05 14:50:30 INFO Slf4jLogger: Slf4jLogger started
14/09/05 14:50:30 INFO Remoting: Starting remoting
14/09/05 14:50:30 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@ip-10-224-14-90.ec2.internal:54683]
14/09/05 14:50:30 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@ip-10-224-14-90.ec2.internal:54683]
14/09/05 14:50:30 INFO SparkEnv: Registering MapOutputTracker
14/09/05 14:50:30 INFO SparkEnv: Registering BlockManagerMaster
14/09/05 14:50:30 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140905145030-85cb
14/09/05 14:50:30 INFO MemoryStore: MemoryStore started with capacity 589.2 MB.
14/09/05 14:50:30 INFO ConnectionManager: Bound socket to port 47852 with id = ConnectionManagerId(ip-10-224-14-90.ec2.internal,47852)
14/09/05 14:50:30 INFO BlockManagerMaster: Trying to register BlockManager
14/09/05 14:50:30 INFO BlockManagerInfo: Registering block manager ip-10-224-14-90.ec2.internal:47852 with 589.2 MB RAM
14/09/05 14:50:30 INFO BlockManagerMaster: Registered BlockManager
14/09/05 14:50:30 INFO HttpServer: Starting HTTP Server
14/09/05 14:50:30 INFO HttpBroadcast: Broadcast server started at http://**.***.**.**:49211
14/09/05 14:50:30 INFO HttpFileServer: HTTP File server directory is /tmp/spark-e2748605-17ec-4524-983b-97aaf2f94b30
14/09/05 14:50:30 INFO HttpServer: Starting HTTP Server
14/09/05 14:50:31 INFO SparkUI: Started SparkUI at http://ip-10-224-14-90.ec2.internal:4040
14/09/05 14:50:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/05 14:50:32 INFO SparkContext: Added JAR file:///root/spark/bin/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar at http://**.***.**.**:46491/jars/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar with timestamp 1409928632274
14/09/05 14:50:32 INFO AppClient$ClientActor: Connecting to master spark://ec2-54-89-51-36.compute-1.amazonaws.com:7077...
14/09/05 14:50:32 INFO MemoryStore: ensureFreeSpace(163793) called with curMem=0, maxMem=617820979
14/09/05 14:50:32 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 160.0 KB, free 589.0 MB)
14/09/05 14:50:32 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140905145032-0005
14/09/05 14:50:32 INFO AppClient$ClientActor: Executor added: app-20140905145032-0005/0 on worker-20140905141732-ip-10-80-90-29.ec2.internal-57457 (ip-10-80-90-29.ec2.internal:57457) with 2 cores
14/09/05 14:50:32 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140905145032-0005/0 on hostPort ip-10-80-90-29.ec2.internal:57457 with 2 cores, 512.0 MB RAM
14/09/05 14:50:32 INFO AppClient$ClientActor: Executor updated: app-20140905145032-0005/0 is now RUNNING
14/09/05 14:50:33 INFO FileInputFormat: Total input paths to process : 1
14/09/05 14:50:33 INFO SparkContext: Starting job: count at SimpleApp.scala:26
14/09/05 14:50:33 INFO DAGScheduler: Got job 0 (count at SimpleApp.scala:26) with 1 output partitions (allowLocal=false)
14/09/05 14:50:33 INFO DAGScheduler: Final stage: Stage 0(count at SimpleApp.scala:26)
14/09/05 14:50:33 INFO DAGScheduler: Parents of final stage: List()
14/09/05 14:50:33 INFO DAGScheduler: Missing parents: List()
14/09/05 14:50:33 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:26), which has no missing parents
14/09/05 14:50:33 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:26)
14/09/05 14:50:33 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/09/05 14:50:36 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@ip-10-80-90-29.ec2.internal:36966/user/Executor#2034537974] with ID 0
14/09/05 14:50:36 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on executor 0: ip-10-80-90-29.ec2.internal (PROCESS_LOCAL)
14/09/05 14:50:36 INFO TaskSetManager: Serialized task 0.0:0 as 1880 bytes in 8 ms
14/09/05 14:50:37 INFO BlockManagerInfo: Registering block manager ip-10-80-90-29.ec2.internal:59950 with 294.9 MB RAM
14/09/05 14:50:38 WARN TaskSetManager: Lost TID 0 (task 0.0:0)
14/09/05 14:50:38 WARN TaskSetManager: Loss was due to java.io.EOFException
java.io.EOFException
at java.io.ObjectInputStream$BlockDataInputStream.readFully(ObjectInputStream.java:2744)
at java.io.ObjectInputStream.readFully(ObjectInputStream.java:1032)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
at org.apache.hadoop.io.UTF8.readChars(UTF8.java:216)
at org.apache.hadoop.io.UTF8.readString(UTF8.java:208)
at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:87)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:237)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:147)
at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
What I'm actually doing is calling "sbt run", so I assemble the Scala project and run it.
By the way, I run that project on the master host, so the driver is definitely visible to the worker host.
Any help is appreciated. It's very strange that such a simple example doesn't work on the cluster. Using ./spark-submit is not convenient, I believe.
Thanks in advance.
After wasting a lot of time, I found the problem. Even though I don't use Hadoop/HDFS in my application, the Hadoop client matters. The problem was the hadoop-client version: it was different from the Hadoop version Spark was built for. Spark's Hadoop version was 1.2.1, but in my application it was 2.4.
When I changed the hadoop-client version to 1.2.1 in my app, I was able to execute Spark code on the cluster.
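For illustration, the kind of dependency alignment that fixes this in an sbt build (a sketch; the spark-core version shown is an assumption, the point being that hadoop-client must match the Hadoop version the cluster's Spark was built against):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.0.2",
  // Must match the Hadoop version Spark was built for (1.2.1 here), not 2.4.x.
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"
)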