` _corrupt_record: string (nullable = true)` with a simple Spark Scala application [closed] - scala

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 years ago.
I am trying to run a simple/dumb Spark Scala application example from Spark: The Definitive Guide. It reads a JSON file and does some work on it, but running it only reports _corrupt_record: string (nullable = true) as the schema. The JSON file has one JSON object per line. I was wondering what is wrong? Thanks.
Scala code:
package com.databricks.example

import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession

object DFUtils extends Serializable {
  @transient lazy val logger = Logger.getLogger(getClass.getName)

  def pointlessUDF(raw: String) = {
    raw
  }
}

object DataFrameExample extends Serializable {
  def main(args: Array[String]): Unit = {
    val pathToDataFolder = args(0)
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()
    // udf registration
    spark.udf.register("myUDF", DFUtils.pointlessUDF(_: String): String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    df.printSchema()
    // df.collect.foreach(println)
    // val x = df.select("value").foreach(x => println(x));
    // val manipulated = df.groupBy("grouping").sum().collect().foreach(x => println(x))
    // val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect().foreach(x => println(x))
  }
}
/tmp/test/data.json is
{"grouping":"group_1", value:5}
{"grouping":"group_1", value:6}
{"grouping":"group_3", value:7}
{"grouping":"group_2", value:3}
{"grouping":"group_4", value:2}
{"grouping":"group_1", value:1}
{"grouping":"group_2", value:2}
{"grouping":"group_3", value:3}
build.sbt is
$ cat build.sbt
name := "example"
organization := "com.databricks"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.8"
// scalaVersion := "2.13.1"
// Spark Information
// val sparkVersion = "2.2.0"
val sparkVersion = "2.4.5"
// allows us to include spark packages
resolvers += "bintray-spark-packages" at
"https://dl.bintray.com/spark-packages/maven/"
resolvers += "Typesafe Simple Repository" at
"http://repo.typesafe.com/typesafe/simple/maven-releases/"
resolvers += "MavenRepository" at
"https://mvnrepository.com/"
libraryDependencies ++= Seq(
  // spark core
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion
)
Build and package with SBT:
$ sbt package
[info] Loading project definition from /tmp/test/bookexample/project
[info] Loading settings for project bookexample from build.sbt ...
[info] Set current project to example (in build file:/tmp/test/bookexample/)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[info] Compiling 1 Scala source to /tmp/test/bookexample/target/scala-2.11/classes ...
[success] Total time: 28 s, completed Mar 19, 2020, 8:35:50 AM
Run with spark-submit:
$ ~/programs/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit --class com.databricks.example.DataFrameExample --master local target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar /tmp/test/
20/03/19 08:37:58 WARN Utils: Your hostname, ocean resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
20/03/19 08:37:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/03/19 08:37:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/19 08:38:00 INFO SparkContext: Running Spark version 2.4.5
20/03/19 08:38:00 INFO SparkContext: Submitted application: Spark Example
20/03/19 08:38:00 INFO SecurityManager: Changing view acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing view acls groups to:
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls groups to:
20/03/19 08:38:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(t); groups with view permissions: Set(); users with modify permissions: Set(t); groups with modify permissions: Set()
20/03/19 08:38:01 INFO Utils: Successfully started service 'sparkDriver' on port 46163.
20/03/19 08:38:01 INFO SparkEnv: Registering MapOutputTracker
20/03/19 08:38:01 INFO SparkEnv: Registering BlockManagerMaster
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/19 08:38:01 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-42f9b92d-1420-4e04-aaf6-acb635a27907
20/03/19 08:38:01 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/03/19 08:38:02 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/19 08:38:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/19 08:38:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.122.1:4040
20/03/19 08:38:02 INFO SparkContext: Added JAR file:/tmp/test/bookexample/target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar at spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:03 INFO Executor: Starting executor ID driver on host localhost
20/03/19 08:38:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35287.
20/03/19 08:38:03 INFO NettyBlockTransferService: Server created on 192.168.122.1:35287
20/03/19 08:38:03 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/19 08:38:03 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.122.1:35287 with 366.3 MB RAM, BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:04 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/user/hive/warehouse').
20/03/19 08:38:04 INFO SharedState: Warehouse path is '/user/hive/warehouse'.
20/03/19 08:38:05 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 97 ms to list leaf files for 1 paths.
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 3 ms to list leaf files for 1 paths.
20/03/19 08:38:12 INFO FileSourceStrategy: Pruning directories with:
20/03/19 08:38:12 INFO FileSourceStrategy: Post-Scan Filters:
20/03/19 08:38:12 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
20/03/19 08:38:12 INFO FileSourceScanExec: Pushed Filters:
20/03/19 08:38:14 INFO CodeGenerator: Code generated in 691.376591 ms
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 285.2 KB, free 366.0 MB)
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.3 KB, free 366.0 MB)
20/03/19 08:38:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.122.1:35287 (size: 23.3 KB, free: 366.3 MB)
20/03/19 08:38:14 INFO SparkContext: Created broadcast 0 from json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194560 bytes, open cost is considered as scanning 4194304 bytes.
20/03/19 08:38:14 INFO SparkContext: Starting job: json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO DAGScheduler: Got job 0 (json at DataFrameExample.scala:31) with 1 output partitions
20/03/19 08:38:14 INFO DAGScheduler: Final stage: ResultStage 0 (json at DataFrameExample.scala:31)
20/03/19 08:38:14 INFO DAGScheduler: Parents of final stage: List()
20/03/19 08:38:14 INFO DAGScheduler: Missing parents: List()
20/03/19 08:38:15 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31), which has no missing parents
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.3 KB, free 366.0 MB)
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 7.4 KB, free 366.0 MB)
20/03/19 08:38:15 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.122.1:35287 (size: 7.4 KB, free: 366.3 MB)
20/03/19 08:38:15 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1163
20/03/19 08:38:15 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31) (first 15 tasks are for partitions Vector(0))
20/03/19 08:38:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/03/19 08:38:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8242 bytes)
20/03/19 08:38:15 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/03/19 08:38:15 INFO Executor: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:15 INFO TransportClientFactory: Successfully created connection to /192.168.122.1:46163 after 145 ms (0 ms spent in bootstraps)
20/03/19 08:38:15 INFO Utils: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar to /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/fetchFileTemp5270349024712252124.tmp
20/03/19 08:38:16 INFO Executor: Adding file:/tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/example_2.11-0.1-SNAPSHOT.jar to class loader
20/03/19 08:38:16 INFO FileScanRDD: Reading File path: file:///tmp/test/data.json, range: 0-256, partition values: [empty row]
20/03/19 08:38:16 INFO CodeGenerator: Code generated in 88.903645 ms
20/03/19 08:38:16 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1893 bytes result sent to driver
20/03/19 08:38:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1198 ms on localhost (executor driver) (1/1)
20/03/19 08:38:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/03/19 08:38:16 INFO DAGScheduler: ResultStage 0 (json at DataFrameExample.scala:31) finished in 1.639 s
20/03/19 08:38:16 INFO DAGScheduler: Job 0 finished: json at DataFrameExample.scala:31, took 1.893394 s
root
|-- _corrupt_record: string (nullable = true)
20/03/19 08:38:16 INFO SparkContext: Invoking stop() from shutdown hook
20/03/19 08:38:16 INFO SparkUI: Stopped Spark web UI at http://192.168.122.1:4040
20/03/19 08:38:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/03/19 08:38:17 INFO MemoryStore: MemoryStore cleared
20/03/19 08:38:17 INFO BlockManager: BlockManager stopped
20/03/19 08:38:17 INFO BlockManagerMaster: BlockManagerMaster stopped
20/03/19 08:38:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/03/19 08:38:17 INFO SparkContext: Successfully stopped SparkContext
20/03/19 08:38:17 INFO ShutdownHookManager: Shutdown hook called
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-7d1fcc2e-af36-4dc4-ab6b-49b901e890ba
The original code from the book is
object DataFrameExample extends Serializable {
  def main(args: Array[String]) = {
    val pathToDataFolder = args(0)
    // start up the SparkSession
    // along with explicitly setting a given config
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()
    // udf registration
    spark.udf.register("myUDF", someUDF(_: String): String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect()
      .foreach(x => println(x))
  }
}

There is no issue with the code. The issue is with your data: it is not valid JSON. If you look closely, the double quotes (") are missing around the value keys in your data, which is why Spark gives you _corrupt_record: string.
Change the data as below and run the same code:
{"grouping":"group_1", "value":5}
{"grouping":"group_1", "value":6}
{"grouping":"group_3", "value":7}
{"grouping":"group_2", "value":3}
{"grouping":"group_4", "value":2}
{"grouping":"group_1", "value":1}
{"grouping":"group_2", "value":2}
{"grouping":"group_3", "value":3}
val df = spark.read.json("/spath/files/1.json")
df.show()
+--------+-----+
|grouping|value|
+--------+-----+
| group_1| 5|
| group_1| 6|
| group_3| 7|
| group_2| 3|
| group_4| 2|
| group_1| 1|
| group_2| 2|
| group_3| 3|
+--------+-----+
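If you want to keep reading the original malformed file while you debug, another option is to supply an explicit schema that includes the corrupt-record column, so that parseable fields and the raw text of bad lines show up side by side. A minimal sketch, assuming the column names from the sample data and the default PERMISSIVE parse mode:
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

val schema = StructType(Seq(
  StructField("grouping", StringType, nullable = true),
  StructField("value", LongType, nullable = true),
  StructField("_corrupt_record", StringType, nullable = true)
))

val df = spark.read
  .schema(schema)
  .option("columnNameOfCorruptRecord", "_corrupt_record")
  .json(pathToDataFolder + "data.json")

// Rows that fail to parse keep their raw text in _corrupt_record,
// while well-formed rows populate grouping and value.
df.show(truncate = false)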

As pointed out by others in this thread, the problem is that your input is not valid JSON. However, the libraries used by Spark, and by extension Spark itself, support such cases:
val df = spark
  .read
  .option("allowUnquotedFieldNames", "true")
  .json(pathToDataFolder + "data.json")
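With that option in place, the rest of the book's example should run against the sample file; note that the column in the data is named grouping, so the UDF call has to use that name. A sketch of the follow-up aggregation, assuming the myUDF registration from the question:
import org.apache.spark.sql.functions.expr

val manipulated = df
  .groupBy(expr("myUDF(grouping)"))
  .sum()
  .collect()
manipulated.foreach(println)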

Related

Failing to execute spark-submit command on a sample word count project

I am doing a tutorial on Pluralsight for Apache Spark which is a simple word counter. I am on Windows 11 and I have IntelliJ IDEA 2022.3.1 (Ultimate Edition). Additionally, on my machine I have JDK 8, Apache Spark 3.3.1 pre-built for Hadoop 3.3 and later, and Hadoop 3.3.4. The code is written in Scala with SBT as the build tool, and I've included the code below. After packaging the file with sbt package I run the command
spark-submit --class "main.WordCount" --master "local[*]" "C:\Users\user\Documents\Projects\WordCount\target\scala-2.11\word-count_2.11-0.1.jar"
I am receiving an exception
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat; (Full log below)
I have my dev tools (Java, Spark, Hadoop, etc) under C:\DevTools\TOOL and the Windows Environment variables are set as follows:
JAVA_HOME -> C:\DevTools\TOOL\Java
SPARK_HOME -> C:\DevTools\TOOL\Spark
HADOOP_HOME -> C:\DevTools\TOOL\Hadoop
PATH -> %JAVA_HOME%\bin; %SPARK_HOME%\bin; %HADOOP_HOME%\bin
Lastly, I've downloaded various winutils.exe and hadoop.dll files and put them in the Spark bin folder and the Hadoop bin folder, but nothing seems to work. Does anyone have any suggestions as to how I can get this to execute successfully?
build.sbt
name := "Word Count"
version := "0.1"
scalaVersion := "2.11.8"
val sparkVersion = "1.6.1"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion
)
WordCount.scala
package main
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
object WordCount {
  def main(args: Array[String]): Unit = {
    val configuration = new SparkConf().setAppName("Word Counter")
    val sparkContext = new SparkContext(configuration)
    val textFile = sparkContext.textFile("file:///DevTools/TOOL/Spark")
    val tokenizedFileData = textFile.flatMap(line => line.split(" "))
    val countPrep = tokenizedFileData.map(word => (word, 1))
    val counts = countPrep.reduceByKey((accumValue, newValue) => accumValue + newValue)
    val storedCounts = counts.sortBy(kvPair => kvPair._2, false)
    storedCounts.saveAsTextFile("file:///DevTools/TOOL/Spark/output")
  }
}
Full Log
PS C:\Users\user\Documents\Projects\WordCount> spark-submit --class "main.WordCount" --master "local[*]" "C:\Users\user\Documents\Projects\WordCount\target\scala-2.11\word-count_2.11-0.1.jar"
23/01/26 17:00:08 INFO SparkContext: Running Spark version 3.3.1
23/01/26 17:00:08 INFO ResourceUtils: ==============================================================
23/01/26 17:00:08 INFO ResourceUtils: No custom resources configured for spark.driver.
23/01/26 17:00:08 INFO ResourceUtils: ==============================================================
23/01/26 17:00:08 INFO SparkContext: Submitted application: Word Counter
23/01/26 17:00:08 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/01/26 17:00:08 INFO ResourceProfile: Limiting resource is cpu
23/01/26 17:00:08 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/01/26 17:00:08 INFO SecurityManager: Changing view acls to: user
23/01/26 17:00:08 INFO SecurityManager: Changing modify acls to: user
23/01/26 17:00:08 INFO SecurityManager: Changing view acls groups to:
23/01/26 17:00:08 INFO SecurityManager: Changing modify acls groups to:
23/01/26 17:00:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
23/01/26 17:00:09 INFO Utils: Successfully started service 'sparkDriver' on port 50249.
23/01/26 17:00:09 INFO SparkEnv: Registering MapOutputTracker
23/01/26 17:00:09 INFO SparkEnv: Registering BlockManagerMaster
23/01/26 17:00:09 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/01/26 17:00:09 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/01/26 17:00:09 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/01/26 17:00:09 INFO DiskBlockManager: Created local directory at C:\Users\user\AppData\Local\Temp\blockmgr-c7d05098-5b05-4121-b1b6-2e7445fc9240
23/01/26 17:00:09 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
23/01/26 17:00:09 INFO SparkEnv: Registering OutputCommitCoordinator
23/01/26 17:00:10 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/01/26 17:00:10 INFO SparkContext: Added JAR file:/C:/Users/user/Documents/Projects/WordCount/target/scala-2.11/word-count_2.11-0.1.jar at spark://localhost:50249/jars/word-count_2.11-0.1.jar with timestamp 1674770408345
23/01/26 17:00:10 INFO Executor: Starting executor ID driver on host localhost
23/01/26 17:00:10 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
23/01/26 17:00:10 INFO Executor: Fetching spark://localhost:50249/jars/word-count_2.11-0.1.jar with timestamp 1674770408345
23/01/26 17:00:10 INFO TransportClientFactory: Successfully created connection to localhost/192.168.1.221:50249 after 58 ms (0 ms spent in bootstraps)
23/01/26 17:00:10 INFO Utils: Fetching spark://localhost:50249/jars/word-count_2.11-0.1.jar to C:\Users\user\AppData\Local\Temp\spark-d7979eef-eac8-4a89-8ee0-246a821703d6\userFiles-8222f8d5-3999-47a7-b048-a9c37e66150a\fetchFileTemp8156211875497724521.tmp
23/01/26 17:00:11 INFO Executor: Adding file:/C:/Users/user/AppData/Local/Temp/spark-d7979eef-eac8-4a89-8ee0-246a821703d6/userFiles-8222f8d5-3999-47a7-b048-a9c37e66150a/word-count_2.11-0.1.jar to class loader
23/01/26 17:00:11 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50306.
23/01/26 17:00:11 INFO NettyBlockTransferService: Server created on localhost:50306
23/01/26 17:00:11 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/01/26 17:00:11 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50306 with 366.3 MiB RAM, BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 358.0 KiB, free 366.0 MiB)
23/01/26 17:00:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 32.3 KiB, free 365.9 MiB)
23/01/26 17:00:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50306 (size: 32.3 KiB, free: 366.3 MiB)
23/01/26 17:00:12 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:13
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:608)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:934)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:848)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:816)
at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:52)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2199)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2179)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:244)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:332)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:208)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4(Partitioner.scala:78)
at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4$adapted(Partitioner.scala:78)
at scala.collection.immutable.List.map(List.scala:293)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at org.apache.spark.rdd.PairRDDFunctions.$anonfun$reduceByKey$4(PairRDDFunctions.scala:323)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:323)
at main.WordCount$.main(WordCount.scala:16)
at main.WordCount.main(WordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/01/26 17:00:12 INFO SparkContext: Invoking stop() from shutdown hook
23/01/26 17:00:12 INFO SparkUI: Stopped Spark web UI at http://localhost:4040
23/01/26 17:00:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/01/26 17:00:12 INFO MemoryStore: MemoryStore cleared
23/01/26 17:00:12 INFO BlockManager: BlockManager stopped
23/01/26 17:00:12 INFO BlockManagerMaster: BlockManagerMaster stopped
23/01/26 17:00:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/01/26 17:00:12 INFO SparkContext: Successfully stopped SparkContext
23/01/26 17:00:12 INFO ShutdownHookManager: Shutdown hook called
23/01/26 17:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-d7979eef-eac8-4a89-8ee0-246a821703d6
23/01/26 17:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-26625e11-a7f1-41f7-b2b3-29f97ea9e75a

How to raise log level to error in Spark?

I have tried to suppress logging with spark.sparkContext.setLogLevel("ERROR") in:
package com.databricks.example

import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession

object DFUtils extends Serializable {
  @transient lazy val logger = Logger.getLogger(getClass.getName)

  def pointlessUDF(raw: String) = {
    raw
  }
}

object DataFrameExample extends Serializable {
  def main(args: Array[String]): Unit = {
    val pathToDataFolder = args(0)
    // println(pathToDataFolder + "data.json")
    // start up the SparkSession
    // along with explicitly setting a given config
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()
    // suppress logs by raising the log level
    spark.sparkContext.setLogLevel("ERROR")
    // println(spark.range(1, 2000).count());
    // udf registration
    spark.udf.register("myUDF", DFUtils.pointlessUDF(_: String): String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    df.printSchema()
    // df.collect.foreach(println)
    // val x = df.select("value").foreach(x => println(x));
    val manipulated = df.groupBy("grouping").sum().collect().foreach(x => println(x))
    // val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect().foreach(x => println(x))
  }
}
Why do I still get INFO and WARN level logs? Have I successfully raised the log level to ERROR? Thanks.
$ ~/programs/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit --class com.databricks.example.DataFrameExample --master local target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar /tmp/test/
20/03/19 10:09:10 WARN Utils: Your hostname, ocean resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
20/03/19 10:09:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/03/19 10:09:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/19 10:09:12 INFO SparkContext: Running Spark version 2.4.5
20/03/19 10:09:12 INFO SparkContext: Submitted application: Spark Example
20/03/19 10:09:12 INFO SecurityManager: Changing view acls to: t
20/03/19 10:09:12 INFO SecurityManager: Changing modify acls to: t
20/03/19 10:09:12 INFO SecurityManager: Changing view acls groups to:
20/03/19 10:09:12 INFO SecurityManager: Changing modify acls groups to:
20/03/19 10:09:12 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(t); groups with view permissions: Set(); users with modify permissions: Set(t); groups with modify permissions: Set()
20/03/19 10:09:13 INFO Utils: Successfully started service 'sparkDriver' on port 35821.
20/03/19 10:09:13 INFO SparkEnv: Registering MapOutputTracker
20/03/19 10:09:13 INFO SparkEnv: Registering BlockManagerMaster
20/03/19 10:09:13 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/19 10:09:13 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/19 10:09:13 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-ce47f30a-ee1c-44a8-9f5b-204905ee3b2d
20/03/19 10:09:13 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/03/19 10:09:13 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/19 10:09:14 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/19 10:09:14 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.122.1:4040
20/03/19 10:09:14 INFO SparkContext: Added JAR file:/tmp/test/bookexample/target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar at spark://192.168.122.1:35821/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584626954295
20/03/19 10:09:14 INFO Executor: Starting executor ID driver on host localhost
20/03/19 10:09:14 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39215.
20/03/19 10:09:14 INFO NettyBlockTransferService: Server created on 192.168.122.1:39215
20/03/19 10:09:14 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/19 10:09:14 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.122.1, 39215, None)
20/03/19 10:09:14 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.122.1:39215 with 366.3 MB RAM, BlockManagerId(driver, 192.168.122.1, 39215, None)
20/03/19 10:09:14 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.122.1, 39215, None)
20/03/19 10:09:14 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.122.1, 39215, None)
root
|-- grouping: string (nullable = true)
|-- value: long (nullable = true)
[group_3,10]
[group_1,12]
[group_2,5]
[group_4,2]
You need to add a log4j.properties file to your resources folder. Otherwise Spark uses the default settings that ship with your Spark installation (on Linux usually under /etc/spark2/.../log4j-defaults.properties).
The location is also mentioned in your log file:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Make sure to set the rootCategory to ERROR, like in the following example:
# Set everything to be logged to the console
log4j.rootCategory=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
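For the sbt project from the question, this file would live at src/main/resources/log4j.properties so that it is packaged onto the application classpath. Alternatively, you can point spark-submit at a properties file without repackaging the jar; a sketch of that approach, where /tmp/test/log4j.properties is just a hypothetical location for the file above:
$ ~/programs/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit \
    --class com.databricks.example.DataFrameExample \
    --master local \
    --driver-java-options "-Dlog4j.configuration=file:/tmp/test/log4j.properties" \
    target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar /tmp/test/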

Kryo setWarnUnregisteredClasses to true showing nothing in spark config

val conf = new SparkConf()
  .setAppName("example")
  .setMaster("local[*]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("setWarnUnregisteredClasses", "true")
When registrationRequired is set to true, it throws an exception saying that class Person is not registered, and also that "org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage" is not registered.
By default it is false, so setting setWarnUnregisteredClasses to true should produce a warning message for every unregistered class encountered, as described in the documentation https://github.com/EsotericSoftware/kryo#serializer-framework. But nothing is shown in the logs regarding serialization.
What I am trying to do is get a list of all unregistered classes into my logs by setting the property .set("setWarnUnregisteredClasses","true")
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/12/10 15:56:09 WARN Utils: Your hostname, knoldus-Vostro-3546 resolves to a loopback address: 127.0.1.1; using 192.168.1.113 instead (on interface enp7s0)
19/12/10 15:56:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/12/10 15:56:10 INFO SparkContext: Running Spark version 2.4.4
19/12/10 15:56:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/10 15:56:12 INFO SparkContext: Submitted application: kyroExample
19/12/10 15:56:14 INFO SecurityManager: Changing view acls to: knoldus
19/12/10 15:56:14 INFO SecurityManager: Changing modify acls to: knoldus
19/12/10 15:56:14 INFO SecurityManager: Changing view acls groups to:
19/12/10 15:56:14 INFO SecurityManager: Changing modify acls groups to:
19/12/10 15:56:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(knoldus); groups with view permissions: Set(); users with modify permissions: Set(knoldus); groups with modify permissions: Set()
19/12/10 15:56:17 INFO Utils: Successfully started service 'sparkDriver' on port 36235.
19/12/10 15:56:17 INFO SparkEnv: Registering MapOutputTracker
19/12/10 15:56:18 INFO SparkEnv: Registering BlockManagerMaster
19/12/10 15:56:18 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/12/10 15:56:18 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/12/10 15:56:18 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-956a186e-cfbd-4ad2-b531-9f46bff96984
19/12/10 15:56:18 INFO MemoryStore: MemoryStore started with capacity 870.9 MB
19/12/10 15:56:18 INFO SparkEnv: Registering OutputCommitCoordinator
19/12/10 15:56:19 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/12/10 15:56:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.113:4040
19/12/10 15:56:19 INFO Executor: Starting executor ID driver on host localhost
19/12/10 15:56:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41737.
19/12/10 15:56:19 INFO NettyBlockTransferService: Server created on 192.168.1.113:41737
19/12/10 15:56:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/12/10 15:56:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.113:41737 with 870.9 MB RAM, BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.113, 41737, None)
19/12/10 15:56:21 INFO SparkContext: Starting job: take at KyroExample.scala:28
19/12/10 15:56:21 INFO DAGScheduler: Got job 0 (take at KyroExample.scala:28) with 1 output partitions
19/12/10 15:56:21 INFO DAGScheduler: Final stage: ResultStage 0 (take at KyroExample.scala:28)
19/12/10 15:56:21 INFO DAGScheduler: Parents of final stage: List()
19/12/10 15:56:21 INFO DAGScheduler: Missing parents: List()
19/12/10 15:56:21 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at filter at KyroExample.scala:24), which has no missing parents
19/12/10 15:56:21 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.0 KB, free 870.9 MB)
19/12/10 15:56:22 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1730.0 B, free 870.9 MB)
19/12/10 15:56:22 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.113:41737 (size: 1730.0 B, free: 870.9 MB)
19/12/10 15:56:22 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161
19/12/10 15:56:22 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at filter at KyroExample.scala:24) (first 15 tasks are for partitions Vector(0))
19/12/10 15:56:22 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
19/12/10 15:56:22 WARN TaskSetManager: Stage 0 contains a task of very large size (243 KB). The maximum recommended task size is 100 KB.
19/12/10 15:56:22 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 249045 bytes)
19/12/10 15:56:22 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
19/12/10 15:56:23 INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 293.3 KB, free 870.6 MB)
19/12/10 15:56:23 INFO BlockManagerInfo: Added rdd_1_0 in memory on 192.168.1.113:41737 (size: 293.3 KB, free: 870.6 MB)
19/12/10 15:56:23 INFO Executor: 1 block locks were not released by TID = 0:
[rdd_1_0]
19/12/10 15:56:23 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1132 bytes result sent to driver
19/12/10 15:56:23 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 924 ms on localhost (executor driver) (1/1)
19/12/10 15:56:23 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/12/10 15:56:23 INFO DAGScheduler: ResultStage 0 (take at KyroExample.scala:28) finished in 1.733 s
19/12/10 15:56:23 INFO DAGScheduler: Job 0 finished: take at KyroExample.scala:28, took 1.895530 s
There are no "unregistered class encountered" warnings in the logs. Why?
I had the same problem.
The issue is that setWarnUnregisteredClasses is a Kryo configuration that currently (I actually use Spark 2.4.4) is not exposed through Spark.
You have to set the specific configuration in Kryo.
The workaround I used was to have a custom KryoRegistrator.
Then I used it in this way:
class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.setRegistrationRequired(false)
    kryo.setWarnUnregisteredClasses(true)
    ...
You are using Kryo registration, so custom and other classes need to be registered with Kryo, and those classes should also implement the Serializable interface.
setWarnUnregisteredClasses will give warnings, while conf.set("spark.kryo.registrationRequired", "true") throws an exception for classes that are not registered.
You have to register Person and TaskCommitMessage, like:
conf.registerKryoClasses(Array(classOf[Person]))
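Putting the two answers together, here is a minimal sketch of a registrator that turns the warning on and registers a class, wired into the SparkConf through spark.kryo.registrator. The Person case class is a hypothetical stand-in for the domain class mentioned in the question:
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical stand-in for the Person class from the question.
case class Person(name: String, age: Int)

class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    // Ask Kryo to log a warning each time it serializes a class that has
    // not been registered (the flag Spark does not expose as a spark.* key).
    kryo.setWarnUnregisteredClasses(true)
    kryo.register(classOf[Person])
  }
}

val conf = new SparkConf()
  .setAppName("example")
  .setMaster("local[*]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[MyKryoRegistrator].getName)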

Spark cassandra connector doesn't work in Standalone Spark cluster

I have a Maven Scala application that submits a Spark job to a Spark standalone single-node cluster. When the job is submitted, the Spark application tries to access Cassandra, which is hosted on an Amazon EC2 instance, using spark-cassandra-connector. The connection is established, but results are not returned, and after some time the connector disconnects. It works fine if I'm running Spark in local mode.
I tried to create a simple application and my code looks like this:
val sc = SparkContextLoader.getSC

def runSparkJob(): Unit = {
  val table = sc.cassandraTable("prosolo_logs_zj", "logevents")
  println(table.collect().mkString("\n"))
}
SparkContext.scala
object SparkContextLoader {
  val sparkConf = new SparkConf()
  sparkConf.setMaster("spark://127.0.1.1:7077")
  sparkConf.set("spark.cores_max", "2")
  sparkConf.set("spark.executor.memory", "2g")
  sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  sparkConf.setAppName("Test application")
  sparkConf.set("spark.cassandra.connection.host", "xxx.xxx.xxx.xxx")
  sparkConf.set("spark.cassandra.connection.port", "9042")
  sparkConf.set("spark.ui.port", "4041")

  val oneJar = "/samplesparkmaven/target/samplesparkmaven-jar.jar"
  sparkConf.setJars(List(oneJar))

  @transient val sc = new SparkContext(sparkConf)
}
Console output looks like:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/02/14 23:11:25 INFO SparkContext: Running Spark version 2.1.0
17/02/14 23:11:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/14 23:11:27 WARN Utils: Your hostname, zoran-Latitude-E5420 resolves to a loopback address: 127.0.1.1; using 192.168.2.68 instead (on interface wlp2s0)
17/02/14 23:11:27 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/02/14 23:11:27 INFO SecurityManager: Changing view acls to: zoran
17/02/14 23:11:27 INFO SecurityManager: Changing modify acls to: zoran
17/02/14 23:11:27 INFO SecurityManager: Changing view acls groups to:
17/02/14 23:11:27 INFO SecurityManager: Changing modify acls groups to:
17/02/14 23:11:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zoran); groups with view permissions: Set(); users with modify permissions: Set(zoran); groups with modify permissions: Set()
17/02/14 23:11:28 INFO Utils: Successfully started service 'sparkDriver' on port 33995.
17/02/14 23:11:28 INFO SparkEnv: Registering MapOutputTracker
17/02/14 23:11:28 INFO SparkEnv: Registering BlockManagerMaster
17/02/14 23:11:28 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/02/14 23:11:28 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/02/14 23:11:28 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7b25a4cc-cb37-4332-a59b-e36fa45511cd
17/02/14 23:11:28 INFO MemoryStore: MemoryStore started with capacity 870.9 MB
17/02/14 23:11:28 INFO SparkEnv: Registering OutputCommitCoordinator
17/02/14 23:11:28 INFO Utils: Successfully started service 'SparkUI' on port 4041.
17/02/14 23:11:28 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.2.68:4041
17/02/14 23:11:28 INFO SparkContext: Added JAR /samplesparkmaven/target/samplesparkmaven-jar.jar at spark://192.168.2.68:33995/jars/samplesparkmaven-jar.jar with timestamp 1487142688817
17/02/14 23:11:28 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://127.0.1.1:7077...
17/02/14 23:11:28 INFO TransportClientFactory: Successfully created connection to /127.0.1.1:7077 after 62 ms (0 ms spent in bootstraps)
17/02/14 23:11:29 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170214231129-0016
17/02/14 23:11:29 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36901.
17/02/14 23:11:29 INFO NettyBlockTransferService: Server created on 192.168.2.68:36901
17/02/14 23:11:29 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/02/14 23:11:29 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.2.68:36901 with 870.9 MB RAM, BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.2.68, 36901, None)
17/02/14 23:11:29 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/02/14 23:11:29 INFO NettyUtil: Found Netty's native epoll transport in the classpath, using it
17/02/14 23:11:31 INFO Cluster: New Cassandra host /xxx.xxx.xxx.xxx:9042 added
17/02/14 23:11:31 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
17/02/14 23:11:32 INFO SparkContext: Starting job: collect at SparkConnector.scala:28
17/02/14 23:11:32 INFO DAGScheduler: Got job 0 (collect at SparkConnector.scala:28) with 6 output partitions
17/02/14 23:11:32 INFO DAGScheduler: Final stage: ResultStage 0 (collect at SparkConnector.scala:28)
17/02/14 23:11:32 INFO DAGScheduler: Parents of final stage: List()
17/02/14 23:11:32 INFO DAGScheduler: Missing parents: List()
17/02/14 23:11:32 INFO DAGScheduler: Submitting ResultStage 0 (CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:18), which has no missing parents
17/02/14 23:11:32 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 8.4 KB, free 870.9 MB)
17/02/14 23:11:32 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.4 KB, free 870.9 MB)
17/02/14 23:11:32 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.2.68:36901 (size: 4.4 KB, free: 870.9 MB)
17/02/14 23:11:32 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/02/14 23:11:32 INFO DAGScheduler: Submitting 6 missing tasks from ResultStage 0 (CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:18)
17/02/14 23:11:32 INFO TaskSchedulerImpl: Adding task set 0.0 with 6 tasks
17/02/14 23:11:39 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
I'm using
scala 2.11.6
spark 2.1.0 (both for standalone spark and dependency in application)
spark-cassandra-connector 2.0.0-M3
Cassandra Java driver 3.0.0
Apache Cassandra 3.9
The version compatibility table for the Cassandra connector doesn't show any problem with these versions, but I can't figure out anything else that might be causing the problem.
I've finally solved the problem. It turned out to be a problem with the path: I was using a local path to the jar, but forgot to add "." at the beginning, so it was treated as an absolute path.
Unfortunately, there was no exception in the application indicating that the file doesn't exist at the provided path; the only exception I had was from the worker, which could not find the jar file in the Spark cluster.
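So the only change needed was in how the jar path is passed to setJars. A minimal sketch of the corrected lines, using a path relative to the directory the driver is started from (the "." prefix mentioned above):
// Relative path; the previous leading "/" made Spark treat it as an
// absolute path that did not exist.
val oneJar = "./samplesparkmaven/target/samplesparkmaven-jar.jar"
sparkConf.setJars(List(oneJar))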

spark import apache library (math)

I am trying to run a simple application with Spark.
This is my Scala file:
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.commons.math3.random.RandomDataGenerator

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "/home/donbeo/Applications/spark/spark-1.1.0/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))

    println("A random number")
    val randomData = new RandomDataGenerator()
    println(randomData.nextLong(0, 100))
  }
}
and this is my sbt file
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
libraryDependencies += "org.apache.commons" % "commons-math3" % "3.3"
When I try to run the code I get this error
donbeo#donbeo-HP-EliteBook-Folio-9470m:~/Applications/spark/spark-1.1.0$ ./bin/spark-submit --class "SimpleApp" --master local[4] /home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-project_2.10-1.0.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/04 17:42:41 WARN Utils: Your hostname, donbeo-HP-EliteBook-Folio-9470m resolves to a loopback address: 127.0.1.1; using 192.168.1.45 instead (on interface wlan0)
15/02/04 17:42:41 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/02/04 17:42:41 INFO SecurityManager: Changing view acls to: donbeo,
15/02/04 17:42:41 INFO SecurityManager: Changing modify acls to: donbeo,
15/02/04 17:42:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(donbeo, ); users with modify permissions: Set(donbeo, )
15/02/04 17:42:42 INFO Slf4jLogger: Slf4jLogger started
15/02/04 17:42:42 INFO Remoting: Starting remoting
15/02/04 17:42:42 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#192.168.1.45:45935]
15/02/04 17:42:42 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver#192.168.1.45:45935]
15/02/04 17:42:42 INFO Utils: Successfully started service 'sparkDriver' on port 45935.
15/02/04 17:42:42 INFO SparkEnv: Registering MapOutputTracker
15/02/04 17:42:42 INFO SparkEnv: Registering BlockManagerMaster
15/02/04 17:42:42 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150204174242-bbb1
15/02/04 17:42:42 INFO Utils: Successfully started service 'Connection manager for block manager' on port 55674.
15/02/04 17:42:42 INFO ConnectionManager: Bound socket to port 55674 with id = ConnectionManagerId(192.168.1.45,55674)
15/02/04 17:42:42 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/02/04 17:42:42 INFO BlockManagerMaster: Trying to register BlockManager
15/02/04 17:42:42 INFO BlockManagerMasterActor: Registering block manager 192.168.1.45:55674 with 265.4 MB RAM
15/02/04 17:42:42 INFO BlockManagerMaster: Registered BlockManager
15/02/04 17:42:42 INFO HttpFileServer: HTTP File server directory is /tmp/spark-49443053-833e-4596-9073-d74075483d35
15/02/04 17:42:42 INFO HttpServer: Starting HTTP Server
15/02/04 17:42:42 INFO Utils: Successfully started service 'HTTP file server' on port 41309.
15/02/04 17:42:42 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/02/04 17:42:42 INFO SparkUI: Started SparkUI at http://192.168.1.45:4040
15/02/04 17:42:42 INFO SparkContext: Added JAR file:/home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-project_2.10-1.0.jar at http://192.168.1.45:41309/jars/simple-project_2.10-1.0.jar with timestamp 1423071762914
15/02/04 17:42:42 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver#192.168.1.45:45935/user/HeartbeatReceiver
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(32768) called with curMem=0, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 265.4 MB)
15/02/04 17:42:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/04 17:42:43 WARN LoadSnappy: Snappy native library not loaded
15/02/04 17:42:43 INFO FileInputFormat: Total input paths to process : 1
15/02/04 17:42:43 INFO SparkContext: Starting job: count at SimpleApp.scala:13
15/02/04 17:42:43 INFO DAGScheduler: Got job 0 (count at SimpleApp.scala:13) with 2 output partitions (allowLocal=false)
15/02/04 17:42:43 INFO DAGScheduler: Final stage: Stage 0(count at SimpleApp.scala:13)
15/02/04 17:42:43 INFO DAGScheduler: Parents of final stage: List()
15/02/04 17:42:43 INFO DAGScheduler: Missing parents: List()
15/02/04 17:42:43 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:13), which has no missing parents
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(2616) called with curMem=32768, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.6 KB, free 265.4 MB)
15/02/04 17:42:43 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (FilteredRDD[2] at filter at SimpleApp.scala:13)
15/02/04 17:42:43 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/02/04 17:42:43 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1283 bytes)
15/02/04 17:42:43 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1283 bytes)
15/02/04 17:42:43 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/02/04 17:42:43 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
15/02/04 17:42:43 INFO Executor: Fetching http://192.168.1.45:41309/jars/simple-project_2.10-1.0.jar with timestamp 1423071762914
15/02/04 17:42:43 INFO Utils: Fetching http://192.168.1.45:41309/jars/simple-project_2.10-1.0.jar to /tmp/fetchFileTemp3120003338190168194.tmp
15/02/04 17:42:43 INFO Executor: Adding file:/tmp/spark-ec5e14c2-9e58-4132-a4c9-2569d237a407/simple-project_2.10-1.0.jar to class loader
15/02/04 17:42:43 INFO CacheManager: Partition rdd_1_0 not found, computing it
15/02/04 17:42:43 INFO CacheManager: Partition rdd_1_1 not found, computing it
15/02/04 17:42:43 INFO HadoopRDD: Input split: file:/home/donbeo/Applications/spark/spark-1.1.0/README.md:0+2405
15/02/04 17:42:43 INFO HadoopRDD: Input split: file:/home/donbeo/Applications/spark/spark-1.1.0/README.md:2405+2406
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(7512) called with curMem=35384, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block rdd_1_1 stored as values in memory (estimated size 7.3 KB, free 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerInfo: Added rdd_1_1 in memory on 192.168.1.45:55674 (size: 7.3 KB, free: 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerMaster: Updated info of block rdd_1_1
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(8352) called with curMem=42896, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block rdd_1_0 stored as values in memory (estimated size 8.2 KB, free 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerInfo: Added rdd_1_0 in memory on 192.168.1.45:55674 (size: 8.2 KB, free: 265.4 MB)
15/02/04 17:42:43 INFO BlockManagerMaster: Updated info of block rdd_1_0
15/02/04 17:42:43 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2300 bytes result sent to driver
15/02/04 17:42:43 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2300 bytes result sent to driver
15/02/04 17:42:43 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 179 ms on localhost (1/2)
15/02/04 17:42:43 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 176 ms on localhost (2/2)
15/02/04 17:42:43 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/02/04 17:42:43 INFO DAGScheduler: Stage 0 (count at SimpleApp.scala:13) finished in 0.198 s
15/02/04 17:42:43 INFO SparkContext: Job finished: count at SimpleApp.scala:13, took 0.292364402 s
15/02/04 17:42:43 INFO SparkContext: Starting job: count at SimpleApp.scala:14
15/02/04 17:42:43 INFO DAGScheduler: Got job 1 (count at SimpleApp.scala:14) with 2 output partitions (allowLocal=false)
15/02/04 17:42:43 INFO DAGScheduler: Final stage: Stage 1(count at SimpleApp.scala:14)
15/02/04 17:42:43 INFO DAGScheduler: Parents of final stage: List()
15/02/04 17:42:43 INFO DAGScheduler: Missing parents: List()
15/02/04 17:42:43 INFO DAGScheduler: Submitting Stage 1 (FilteredRDD[3] at filter at SimpleApp.scala:14), which has no missing parents
15/02/04 17:42:43 INFO MemoryStore: ensureFreeSpace(2616) called with curMem=51248, maxMem=278302556
15/02/04 17:42:43 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.6 KB, free 265.4 MB)
15/02/04 17:42:43 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (FilteredRDD[3] at filter at SimpleApp.scala:14)
15/02/04 17:42:43 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/02/04 17:42:43 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, ANY, 1283 bytes)
15/02/04 17:42:43 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, ANY, 1283 bytes)
15/02/04 17:42:43 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
15/02/04 17:42:43 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
15/02/04 17:42:43 INFO BlockManager: Found block rdd_1_1 locally
15/02/04 17:42:43 INFO BlockManager: Found block rdd_1_0 locally
15/02/04 17:42:43 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 1731 bytes result sent to driver
15/02/04 17:42:43 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 1731 bytes result sent to driver
15/02/04 17:42:43 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 14 ms on localhost (1/2)
15/02/04 17:42:43 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 17 ms on localhost (2/2)
15/02/04 17:42:43 INFO DAGScheduler: Stage 1 (count at SimpleApp.scala:14) finished in 0.017 s
15/02/04 17:42:43 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/02/04 17:42:43 INFO SparkContext: Job finished: count at SimpleApp.scala:14, took 0.034833058 s
Lines with a: 83, Lines with b: 38
A random number
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/math3/random/RandomDataGenerator
at SimpleApp$.main(SimpleApp.scala:20)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.math3.random.RandomDataGenerator
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 9 more
donbeo#donbeo-HP-EliteBook-Folio-9470m:~/Applications/spark/spark-1.1.0$
I think I am doing something wrong when I import the math3 library.
Here is a detailed explanation of how I installed Spark, built the project, and submitted the task to Spark.
You need to specify the commons-math3 jar's path; this can be done using the --jars option:
./bin/spark-submit --class "SimpleApp" \
--master local[4] \
--jars <specify-path-of-commons-math3-jar> \
/home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-project_2.10-1.0.jar
Alternatively, you can build an assembly jar which contains all the dependencies.
EDIT:
How to build assembly jar:
in file build.sbt
import AssemblyKeys._
import sbtassembly.Plugin._
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" % "provided"
libraryDependencies += "org.apache.commons" % "commons-math3" % "3.3"
// This statement includes the assembly plugin capabilities
assemblySettings
// Configure the jar name used with the assembly plug-in
jarName in assembly := "simple-app-assembly.jar"
// A special option to exclude Scala itself from our assembly jar, since Spark
// already bundles Scala.
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)
in file project/assembly.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
Then make an assembly jar as follows:
sbt assembly
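Once the assembly jar is built, it can be submitted without --jars, since commons-math3 is bundled inside it. A sketch of the submit command, assuming the jar name and Scala version from the build.sbt above:
./bin/spark-submit --class "SimpleApp" \
  --master local[4] \
  /home/donbeo/Documents/scala_code/simpleApp/target/scala-2.10/simple-app-assembly.jar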