Why do we need to add "fork in run := true" when running a Spark SBT application? - scala

I have built a simple Spark app using sbt. Here's my code:
import org.apache.spark.sql.SparkSession
object HelloWorld {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("BigApple").getOrCreate()
    import spark.implicits._
    val ds = Seq(1, 2, 3).toDS()
    ds.map(_ + 1).foreach(x => println(x))
  }
}
Following is my build.sbt
name := """sbt-sample-app"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"
Now when I try to do sbt run, it gives me the following error:
$ sbt run
[info] Loading global plugins from /home/user/.sbt/0.13/plugins
[info] Loading project definition from /home/user/Projects/sample-app/project
[info] Set current project to sbt-sample-app (in build file:/home/user/Projects/sample-app/)
[info] Running HelloWorld
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/06/01 10:09:10 INFO SparkContext: Running Spark version 2.1.1
17/06/01 10:09:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/01 10:09:11 WARN Utils: Your hostname, user-Vostro-15-3568 resolves to a loopback address: 127.0.1.1; using 127.0.0.1 instead (on interface enp3s0)
17/06/01 10:09:11 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/06/01 10:09:11 INFO SecurityManager: Changing view acls to: user
17/06/01 10:09:11 INFO SecurityManager: Changing modify acls to: user
17/06/01 10:09:11 INFO SecurityManager: Changing view acls groups to:
17/06/01 10:09:11 INFO SecurityManager: Changing modify acls groups to:
17/06/01 10:09:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
17/06/01 10:09:12 INFO Utils: Successfully started service 'sparkDriver' on port 39662.
17/06/01 10:09:12 INFO SparkEnv: Registering MapOutputTracker
17/06/01 10:09:12 INFO SparkEnv: Registering BlockManagerMaster
17/06/01 10:09:12 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/01 10:09:12 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/01 10:09:12 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-c6db1535-6a00-4760-93dc-968722e3d596
17/06/01 10:09:12 INFO MemoryStore: MemoryStore started with capacity 408.9 MB
17/06/01 10:09:13 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/01 10:09:13 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/01 10:09:13 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://127.0.0.1:4040
17/06/01 10:09:13 INFO Executor: Starting executor ID driver on host localhost
17/06/01 10:09:13 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34488.
17/06/01 10:09:13 INFO NettyBlockTransferService: Server created on 127.0.0.1:34488
17/06/01 10:09:13 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/01 10:09:13 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:13 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:34488 with 408.9 MB RAM, BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:13 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:13 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 127.0.0.1, 34488, None)
17/06/01 10:09:14 INFO SharedState: Warehouse path is 'file:/home/user/Projects/sample-app/spark-warehouse'.
[error] (run-main-0) scala.ScalaReflectionException: class scala.Option in JavaMirror with ClasspathFilter(
[error] parent = URLClassLoader with NativeCopyLoader with RawResources(
[error] urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ...,/home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
[error] parent = java.net.URLClassLoader@7c4113ce,
[error] resourceMap = Set(app.class.path, boot.class.path),
[error] nativeTemp = /tmp/sbt_c2afce
[error] )
[error] root = sun.misc.Launcher$AppClassLoader@677327b6
[error] cp = Set(/home/user/.ivy2/cache/org.glassfish.jersey.core/jersey-common/jars/jersey-common-2.22.2.jar, ..., /home/user/.ivy2/cache/net.razorvine/pyrolite/jars/pyrolite-4.13.jar)
[error] ) of type class sbt.classpath.ClasspathFilter with classpath [<unknown>] and parent being URLClassLoader with NativeCopyLoader with RawResources(
[error] urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ..., /home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
[error] parent = java.net.URLClassLoader@7c4113ce,
[error] resourceMap = Set(app.class.path, boot.class.path),
[error] nativeTemp = /tmp/sbt_c2afce
[error] ) of type class sbt.classpath.ClasspathUtilities$$anon$1 with classpath [file:/home/user/Projects/sample-app/target/scala-2.11/classes/,...openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/classes] not found.
scala.ScalaReflectionException: class scala.Option in JavaMirror with ClasspathFilter(
parent = URLClassLoader with NativeCopyLoader with RawResources(
urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ..., /home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
parent = java.net.URLClassLoader@7c4113ce,
resourceMap = Set(app.class.path, boot.class.path),
nativeTemp = /tmp/sbt_c2afce
)
root = sun.misc.Launcher$AppClassLoader@677327b6
cp = Set(/home/user/.ivy2/cache/org.glassfish.jersey.core/jersey-common/jars/jersey-common-2.22.2.jar, ..., /home/user/.ivy2/cache/net.razorvine/pyrolite/jars/pyrolite-4.13.jar)
) of type class sbt.classpath.ClasspathFilter with classpath [<unknown>] and parent being URLClassLoader with NativeCopyLoader with RawResources(
urls = List(/home/user/Projects/sample-app/target/scala-2.11/classes, ..., /home/user/.ivy2/cache/org.apache.parquet/parquet-jackson/jars/parquet-jackson-1.8.1.jar),
parent = java.net.URLClassLoader@7c4113ce,
resourceMap = Set(app.class.path, boot.class.path),
nativeTemp = /tmp/sbt_c2afce
) of type class sbt.classpath.ClasspathUtilities$$anon$1 with classpath [file:/home/user/Projects/sample-app/target/scala-2.11/classes/,.../jre/lib/charsets.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/classes] not found.
at scala.reflect.internal.Mirrors$RootsBase.staticClass(Mirrors.scala:123)
at scala.reflect.internal.Mirrors$RootsBase.staticClass(Mirrors.scala:22)
at org.apache.spark.sql.catalyst.ScalaReflection$$typecreator42$1.apply(ScalaReflection.scala:614)
at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe$lzycompute(TypeTags.scala:232)
at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe(TypeTags.scala:232)
at org.apache.spark.sql.catalyst.ScalaReflection$class.localTypeOf(ScalaReflection.scala:782)
at org.apache.spark.sql.catalyst.ScalaReflection$.localTypeOf(ScalaReflection.scala:39)
at org.apache.spark.sql.catalyst.ScalaReflection$.optionOfProductType(ScalaReflection.scala:614)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:51)
at org.apache.spark.sql.Encoders$.scalaInt(Encoders.scala:281)
at org.apache.spark.sql.SQLImplicits.newIntEncoder(SQLImplicits.scala:54)
at HelloWorld$.main(HelloWorld.scala:9)
at HelloWorld.main(HelloWorld.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
[trace] Stack trace suppressed: run last compile:run for the full output.
17/06/01 10:09:15 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:181)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:178)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:73)
17/06/01 10:09:15 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
17/06/01 10:09:15 ERROR Utils: throw uncaught fatal error in thread SparkListenerBus
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
17/06/01 10:09:15 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 7 s, completed 1 Jun, 2017 10:09:15 AM
But when I add fork in run := true in build.sbt, the app runs fine.
New build.sbt:
name := """sbt-sample-app"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"
fork in run := true
Here's the output:
$ sbt run
[info] Loading global plugins from /home/user/.sbt/0.13/plugins
[info] Loading project definition from /home/user/Projects/sample-app/project
[info] Set current project to sbt-sample-app (in build file:/home/user/Projects/sample-app/)
[success] Total time: 0 s, completed 1 Jun, 2017 10:15:43 AM
[info] Updating {file:/home/user/Projects/sample-app/}sample-app...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.
[warn] Scala version was updated by one of library dependencies:
[warn] * org.scala-lang:scala-library:(2.11.7, 2.11.0) -> 2.11.8
[warn] To force scalaVersion, add the following:
[warn] ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) }
[warn] Run 'evicted' to see detailed eviction warnings
[info] Compiling 1 Scala source to /home/user/Projects/sample-app/target/scala-2.11/classes...
[info] Running HelloWorld
[error] Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
[error] 17/06/01 10:16:13 INFO SparkContext: Running Spark version 2.1.1
[error] 17/06/01 10:16:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[error] 17/06/01 10:16:14 WARN Utils: Your hostname, user-Vostro-15-3568 resolves to a loopback address: 127.0.1.1; using 127.0.0.1 instead (on interface enp3s0)
[error] 17/06/01 10:16:14 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing view acls to: user
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing modify acls to: user
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing view acls groups to:
[error] 17/06/01 10:16:14 INFO SecurityManager: Changing modify acls groups to:
[error] 17/06/01 10:16:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
[error] 17/06/01 10:16:14 INFO Utils: Successfully started service 'sparkDriver' on port 37747.
[error] 17/06/01 10:16:14 INFO SparkEnv: Registering MapOutputTracker
[error] 17/06/01 10:16:14 INFO SparkEnv: Registering BlockManagerMaster
[error] 17/06/01 10:16:14 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
[error] 17/06/01 10:16:14 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
[error] 17/06/01 10:16:14 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-edf40c39-a13e-4930-8e9a-64135bfa9770
[error] 17/06/01 10:16:14 INFO MemoryStore: MemoryStore started with capacity 1405.2 MB
[error] 17/06/01 10:16:14 INFO SparkEnv: Registering OutputCommitCoordinator
[error] 17/06/01 10:16:14 INFO Utils: Successfully started service 'SparkUI' on port 4040.
[error] 17/06/01 10:16:15 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://127.0.0.1:4040
[error] 17/06/01 10:16:15 INFO Executor: Starting executor ID driver on host localhost
[error] 17/06/01 10:16:15 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39113.
[error] 17/06/01 10:16:15 INFO NettyBlockTransferService: Server created on 127.0.0.1:39113
[error] 17/06/01 10:16:15 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
[error] 17/06/01 10:16:15 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:39113 with 1405.2 MB RAM, BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 127.0.0.1, 39113, None)
[error] 17/06/01 10:16:15 INFO SharedState: Warehouse path is 'file:/home/user/Projects/sample-app/spark-warehouse/'.
[error] 17/06/01 10:16:18 INFO CodeGenerator: Code generated in 395.134683 ms
[error] 17/06/01 10:16:19 INFO CodeGenerator: Code generated in 9.077969 ms
[error] 17/06/01 10:16:19 INFO CodeGenerator: Code generated in 23.652705 ms
[error] 17/06/01 10:16:19 INFO SparkContext: Starting job: foreach at HelloWorld.scala:10
[error] 17/06/01 10:16:19 INFO DAGScheduler: Got job 0 (foreach at HelloWorld.scala:10) with 1 output partitions
[error] 17/06/01 10:16:19 INFO DAGScheduler: Final stage: ResultStage 0 (foreach at HelloWorld.scala:10)
[error] 17/06/01 10:16:19 INFO DAGScheduler: Parents of final stage: List()
[error] 17/06/01 10:16:19 INFO DAGScheduler: Missing parents: List()
[error] 17/06/01 10:16:19 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at foreach at HelloWorld.scala:10), which has no missing parents
[error] 17/06/01 10:16:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 6.3 KB, free 1405.2 MB)
[error] 17/06/01 10:16:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.3 KB, free 1405.2 MB)
[error] 17/06/01 10:16:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 127.0.0.1:39113 (size: 3.3 KB, free: 1405.2 MB)
[error] 17/06/01 10:16:20 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
[error] 17/06/01 10:16:20 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at foreach at HelloWorld.scala:10)
[error] 17/06/01 10:16:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
[error] 17/06/01 10:16:20 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 6227 bytes)
[error] 17/06/01 10:16:20 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
[info] 2
[info] 3
[info] 4
[error] 17/06/01 10:16:20 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1231 bytes result sent to driver
[error] 17/06/01 10:16:20 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 152 ms on localhost (executor driver) (1/1)
[error] 17/06/01 10:16:20 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
[error] 17/06/01 10:16:20 INFO DAGScheduler: ResultStage 0 (foreach at HelloWorld.scala:10) finished in 0.181 s
[error] 17/06/01 10:16:20 INFO DAGScheduler: Job 0 finished: foreach at HelloWorld.scala:10, took 0.596960 s
[error] 17/06/01 10:16:20 INFO SparkContext: Invoking stop() from shutdown hook
[error] 17/06/01 10:16:20 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
[error] 17/06/01 10:16:20 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
[error] 17/06/01 10:16:20 INFO MemoryStore: MemoryStore cleared
[error] 17/06/01 10:16:20 INFO BlockManager: BlockManager stopped
[error] 17/06/01 10:16:20 INFO BlockManagerMaster: BlockManagerMaster stopped
[error] 17/06/01 10:16:20 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
[error] 17/06/01 10:16:20 INFO SparkContext: Successfully stopped SparkContext
[error] 17/06/01 10:16:20 INFO ShutdownHookManager: Shutdown hook called
[error] 17/06/01 10:16:20 INFO ShutdownHookManager: Deleting directory /tmp/spark-77d00e78-9f76-4ab2-bc40-0b99940661ac
[success] Total time: 37 s, completed 1 Jun, 2017 10:16:20 AM
Can anyone help me understand the reason behind it?

Excerpt from "Getting Started with SBT for Scala" By Shiti Saxena
Why do we need to fork JVM?
When a user runs code using run or console commands, the code is run on the same virtual machine as SBT. In some cases, running of code may cause SBT to crash, such as a System.exit call or unterminated threads (for example, when running tests on code while simultaneously working on the code).
If a test causes the JVM to shut down, you would need to restart SBT. In order to avoid such scenarios, forking the JVM is important.
You do not need to fork the JVM to run your code if the code satisfies all of the following constraints; otherwise, it must be run in a forked JVM:
No threads are created or the program ends when user-created threads terminate on their own
System.exit is used to end the program and user-created threads terminate when interrupted
No deserialization is done or deserialization code ensures that the right class loader is used
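To make the System.exit case above concrete, here is a minimal, hypothetical example: run without forking, the call below exits sbt's own JVM, killing the sbt session along with the application, which is exactly the scenario forking avoids.
object ExitsTheJvm {
  def main(args: Array[String]): Unit = {
    println("doing some work...")
    // Run with the default in-process `run`, this terminates sbt itself,
    // not just the application; with fork in run := true only the forked
    // JVM exits and the sbt session keeps running.
    System.exit(1)
  }
}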

From the doc given here
By default, the run task runs in the same JVM as sbt. Forking is required under certain circumstances, however. Or, you might want to fork Java processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being used for the build and the working directory and JVM options of the current process. This page discusses how to enable and configure forking for both run and test tasks. Each kind of task may be configured separately by scoping the relevant keys as explained below.
To enable fork in run, simply use:
fork in run := true
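For completeness, a sketch of what build.sbt could look like with forking enabled. The javaOptions and outputStrategy settings are optional, illustrative assumptions rather than requirements; outputStrategy is a common way to keep sbt from relabelling the forked process's stderr (where Spark writes its logs by default) as [error], which is why the successful forked run above shows INFO lines prefixed with [error].
name := """sbt-sample-app"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"
// Run the application in a separate JVM instead of inside sbt's own JVM.
fork in run := true
// Illustrative: give the forked driver JVM its own options.
javaOptions in run ++= Seq("-Xmx2g")
// Illustrative: forward the forked process's output directly to stdout.
outputStrategy := Some(StdoutOutput)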

I couldn't find exactly why, but this is their build file and recommendation:
https://github.com/deanwampler/spark-scala-tutorial/blob/master/project/Build.scala
Hope someone can give a better answer.
Edited Code:
import org.apache.spark.sql.SparkSession
object HelloWorld {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("BigApple").getOrCreate()
    import spark.implicits._
    val ds = Seq(1, 2, 3).toDS()
    ds.map(_ + 1).foreach(x => println(x))
  }
}
build.sbt
name := """untitled"""
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.1"

Related

Failing to execute spark-submit command on a sample word count project

I am doing a tutorial on Pluralsight for Apache Spark which is a simple word counter. I am on Windows 11 and I have IntelliJ IDEA 2022.3.1 (Ultimate Edition). Additionally, on my machine I have JDK 8, Apache Spark 3.3.1 pre-built for Hadoop 3.3 and later, and Hadoop 3.3.4. The code is written in Scala with SBT as the build tool, and I've included the code below. After packaging the file with sbt package I run the command
spark-submit --class "main.WordCount" --master "local[*]" "C:\Users\user\Documents\Projects\WordCount\target\scala-2.11\word-count_2.11-0.1.jar"
I am receiving an exception
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat; (Full log below)
I have my dev tools (Java, Spark, Hadoop, etc) under C:\DevTools\TOOL and the Windows Environment variables are set as follows:
JAVA_HOME -> C:\DevTools\TOOL\Java
SPARK_HOME -> C:\DevTools\TOOL\Spark
HADOOP_HOME -> C:\DevTools\TOOL\Hadoop
PATH -> %JAVA_HOME%\bin; %SPARK_HOME%\bin; %HADOOP_HOME%\bin
Lastly, I've downloaded various winutils.exe and hadoop.dll files and put them in the Spark bin folder and the Hadoop bin folder, but nothing seems to work. Does anyone have any suggestions as to how I can get this to execute successfully?
build.sbt
name := "Word Count"
version := "0.1"
scalaVersion := "2.11.8"
val sparkVersion = "1.6.1"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion
)
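One thing worth flagging (an observation, not necessarily the cause of the UnsatisfiedLinkError, which points at the Hadoop native libraries / winutils setup): the jar is built against Spark 1.6.1 / Scala 2.11, while the installed spark-submit is Spark 3.3.1, whose prebuilt binaries use Scala 2.12. A build.sbt aligned with the installed Spark might look like the following sketch (the exact versions are assumptions):
name := "Word Count"
version := "0.1"
// Spark 3.3.1 prebuilt distributions are compiled for Scala 2.12.
scalaVersion := "2.12.17"
val sparkVersion = "3.3.1"
libraryDependencies ++= Seq(
  // "provided" because spark-submit supplies the Spark classes at runtime.
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided"
)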
WordCount.scala
package main
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
object WordCount {
  def main(args: Array[String]): Unit = {
    val configuration = new SparkConf().setAppName("Word Counter")
    val sparkContext = new SparkContext(configuration)
    val textFile = sparkContext.textFile("file:///DevTools/TOOL/Spark")
    val tokenizedFileData = textFile.flatMap(line => line.split(" "))
    val countPrep = tokenizedFileData.map(word => (word, 1))
    val counts = countPrep.reduceByKey((accumValue, newValue) => accumValue + newValue)
    val storedCounts = counts.sortBy(kvPair => kvPair._2, false)
    storedCounts.saveAsTextFile("file:///DevTools/TOOL/Spark/output")
  }
}
Full Log
PS C:\Users\user\Documents\Projects\WordCount> spark-submit --class "main.WordCount" --master "local[*]" "C:\Users\user\Documents\Projects\WordCount\target\scala-2.11\word-count_2.11-0.1.jar"
23/01/26 17:00:08 INFO SparkContext: Running Spark version 3.3.1
23/01/26 17:00:08 INFO ResourceUtils: ==============================================================
23/01/26 17:00:08 INFO ResourceUtils: No custom resources configured for spark.driver.
23/01/26 17:00:08 INFO ResourceUtils: ==============================================================
23/01/26 17:00:08 INFO SparkContext: Submitted application: Word Counter
23/01/26 17:00:08 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/01/26 17:00:08 INFO ResourceProfile: Limiting resource is cpu
23/01/26 17:00:08 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/01/26 17:00:08 INFO SecurityManager: Changing view acls to: user
23/01/26 17:00:08 INFO SecurityManager: Changing modify acls to: user
23/01/26 17:00:08 INFO SecurityManager: Changing view acls groups to:
23/01/26 17:00:08 INFO SecurityManager: Changing modify acls groups to:
23/01/26 17:00:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); groups with view permissions: Set(); users with modify permissions: Set(user); groups with modify permissions: Set()
23/01/26 17:00:09 INFO Utils: Successfully started service 'sparkDriver' on port 50249.
23/01/26 17:00:09 INFO SparkEnv: Registering MapOutputTracker
23/01/26 17:00:09 INFO SparkEnv: Registering BlockManagerMaster
23/01/26 17:00:09 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/01/26 17:00:09 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/01/26 17:00:09 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/01/26 17:00:09 INFO DiskBlockManager: Created local directory at C:\Users\user\AppData\Local\Temp\blockmgr-c7d05098-5b05-4121-b1b6-2e7445fc9240
23/01/26 17:00:09 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
23/01/26 17:00:09 INFO SparkEnv: Registering OutputCommitCoordinator
23/01/26 17:00:10 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/01/26 17:00:10 INFO SparkContext: Added JAR file:/C:/Users/user/Documents/Projects/WordCount/target/scala-2.11/word-count_2.11-0.1.jar at spark://localhost:50249/jars/word-count_2.11-0.1.jar with timestamp 1674770408345
23/01/26 17:00:10 INFO Executor: Starting executor ID driver on host localhost
23/01/26 17:00:10 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
23/01/26 17:00:10 INFO Executor: Fetching spark://localhost:50249/jars/word-count_2.11-0.1.jar with timestamp 1674770408345
23/01/26 17:00:10 INFO TransportClientFactory: Successfully created connection to localhost/192.168.1.221:50249 after 58 ms (0 ms spent in bootstraps)
23/01/26 17:00:10 INFO Utils: Fetching spark://localhost:50249/jars/word-count_2.11-0.1.jar to C:\Users\user\AppData\Local\Temp\spark-d7979eef-eac8-4a89-8ee0-246a821703d6\userFiles-8222f8d5-3999-47a7-b048-a9c37e66150a\fetchFileTemp8156211875497724521.tmp
23/01/26 17:00:11 INFO Executor: Adding file:/C:/Users/user/AppData/Local/Temp/spark-d7979eef-eac8-4a89-8ee0-246a821703d6/userFiles-8222f8d5-3999-47a7-b048-a9c37e66150a/word-count_2.11-0.1.jar to class loader
23/01/26 17:00:11 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50306.
23/01/26 17:00:11 INFO NettyBlockTransferService: Server created on localhost:50306
23/01/26 17:00:11 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/01/26 17:00:11 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50306 with 366.3 MiB RAM, BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:11 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, localhost, 50306, None)
23/01/26 17:00:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 358.0 KiB, free 366.0 MiB)
23/01/26 17:00:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 32.3 KiB, free 365.9 MiB)
23/01/26 17:00:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50306 (size: 32.3 KiB, free: 366.3 MiB)
23/01/26 17:00:12 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:13
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:608)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:934)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:848)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:816)
at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:52)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2199)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2179)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:244)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:332)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:208)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:288)
at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4(Partitioner.scala:78)
at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4$adapted(Partitioner.scala:78)
at scala.collection.immutable.List.map(List.scala:293)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
at org.apache.spark.rdd.PairRDDFunctions.$anonfun$reduceByKey$4(PairRDDFunctions.scala:323)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:406)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:323)
at main.WordCount$.main(WordCount.scala:16)
at main.WordCount.main(WordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/01/26 17:00:12 INFO SparkContext: Invoking stop() from shutdown hook
23/01/26 17:00:12 INFO SparkUI: Stopped Spark web UI at http://localhost:4040
23/01/26 17:00:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/01/26 17:00:12 INFO MemoryStore: MemoryStore cleared
23/01/26 17:00:12 INFO BlockManager: BlockManager stopped
23/01/26 17:00:12 INFO BlockManagerMaster: BlockManagerMaster stopped
23/01/26 17:00:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/01/26 17:00:12 INFO SparkContext: Successfully stopped SparkContext
23/01/26 17:00:12 INFO ShutdownHookManager: Shutdown hook called
23/01/26 17:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-d7979eef-eac8-4a89-8ee0-246a821703d6
23/01/26 17:00:12 INFO ShutdownHookManager: Deleting directory C:\Users\user\AppData\Local\Temp\spark-26625e11-a7f1-41f7-b2b3-29f97ea9e75a

Exception in thread "main" java.lang.NullPointerException com.databricks.dbutils_v1.DBUtilsHolder$$anon$1.invoke

I would like to read a parquet file in Azure Blob, so I have mounted the data from Azure Blob locally with dbutils.fs.mount.
But I got the error Exception in thread "main" java.lang.NullPointerException.
Below is my log:
hello big data
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/06/10 23:20:10 INFO SparkContext: Running Spark version 2.1.0
20/06/10 23:20:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/06/10 23:20:11 INFO SecurityManager: Changing view acls to: Admin
20/06/10 23:20:11 INFO SecurityManager: Changing modify acls to: Admin
20/06/10 23:20:11 INFO SecurityManager: Changing view acls groups to:
20/06/10 23:20:11 INFO SecurityManager: Changing modify acls groups to:
20/06/10 23:20:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Admin); groups with view permissions: Set(); users with modify permissions: Set(Admin); groups with modify permissions: Set()
20/06/10 23:20:12 INFO Utils: Successfully started service 'sparkDriver' on port 4725.
20/06/10 23:20:12 INFO SparkEnv: Registering MapOutputTracker
20/06/10 23:20:13 INFO SparkEnv: Registering BlockManagerMaster
20/06/10 23:20:13 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/06/10 23:20:13 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/06/10 23:20:13 INFO DiskBlockManager: Created local directory at C:\Users\Admin\AppData\Local\Temp\blockmgr-c023c3b8-fd70-461a-ac69-24ce9c770efe
20/06/10 23:20:13 INFO MemoryStore: MemoryStore started with capacity 894.3 MB
20/06/10 23:20:13 INFO SparkEnv: Registering OutputCommitCoordinator
20/06/10 23:20:13 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/06/10 23:20:13 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.102:4040
20/06/10 23:20:13 INFO Executor: Starting executor ID driver on host localhost
20/06/10 23:20:13 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 4738.
20/06/10 23:20:13 INFO NettyBlockTransferService: Server created on 192.168.0.102:4738
20/06/10 23:20:13 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/06/10 23:20:13 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.102, 4738, None)
20/06/10 23:20:13 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.102:4738 with 894.3 MB RAM, BlockManagerId(driver, 192.168.0.102, 4738, None)
20/06/10 23:20:13 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.102, 4738, None)
20/06/10 23:20:13 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.102, 4738, None)
20/06/10 23:20:14 INFO SharedState: Warehouse path is 'file:/E:/sparkdemo/sparkdemo/spark-warehouse/'.
Exception in thread "main" java.lang.NullPointerException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.databricks.dbutils_v1.DBUtilsHolder$$anon$1.invoke(DBUtilsHolder.scala:17)
at com.sun.proxy.$Proxy7.fs(Unknown Source)
at Transform$.main(Transform.scala:19)
at Transform.main(Transform.scala)
20/06/10 23:20:14 INFO SparkContext: Invoking stop() from shutdown hook
20/06/10 23:20:14 INFO SparkUI: Stopped Spark web UI at http://192.168.0.102:4040
20/06/10 23:20:14 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/06/10 23:20:14 INFO MemoryStore: MemoryStore cleared
20/06/10 23:20:14 INFO BlockManager: BlockManager stopped
20/06/10 23:20:14 INFO BlockManagerMaster: BlockManagerMaster stopped
20/06/10 23:20:14 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/06/10 23:20:14 INFO SparkContext: Successfully stopped SparkContext
20/06/10 23:20:14 INFO ShutdownHookManager: Shutdown hook called
20/06/10 23:20:14 INFO ShutdownHookManager: Deleting directory C:\Users\Admin\AppData\Local\Temp\spark-cbdbcfe7-bc70-4d34-ad8e-5baed8308ae2
My code:
import com.databricks.dbutils_v1.DBUtilsHolder.dbutils
import org.apache.spark.sql.SparkSession
object Demo {
  def main(args: Array[String]): Unit = {
    println("hello big data")
    val containerName = "container1"
    val storageAccountName = "storageaccount1"
    val sas = "saskey"
    val url = "wasbs://" + containerName + "@" + storageAccountName + ".blob.core.windows.net/"
    var config = "fs.azure.sas." + containerName + "." + storageAccountName + ".blob.core.windows.net"
    // Spark session
    val spark: SparkSession = SparkSession.builder
      .appName("SparkDemo")
      .master("local[1]")
      .getOrCreate()
    // Mount data
    dbutils.fs.mount(
      source = url,
      mountPoint = "/mnt/container1",
      extraConfigs = Map(config -> sas))
    val parquetFileDF = spark.read.parquet("/mnt/container1/test1.parquet")
    parquetFileDF.show()
  }
}
My sbt file:
name := "sparkdemo1"
version := "0.1"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "com.databricks" % "dbutils-api_2.11" % "0.0.3",
  "org.apache.spark" % "spark-core_2.11" % "2.1.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.1.0"
)
Are you running this in a Databricks instance?
If not, that's the problem: dbutils is provided by the Databricks execution context.
In that case, as far as I know, you have three options:
Package your application into a jar file and run it using a Databricks job
Use databricks-connect
Try to emulate a mocked dbutils instance outside Databricks as shown here:
com.databricks.dbutils_v1.DBUtilsHolder.dbutils0.set(
  new com.databricks.dbutils_v1.DBUtilsV1 {
    ...
  }
)
Anyway, I'd say that options 1 and 2 are better than the third one. Also, by choosing one of those, you don't need to include the "dbutils-api_2.11" dependency, as it is provided by the Databricks cluster.
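As a side note, if the goal is only to read that parquet file from a plain local Spark application, you can skip dbutils and the mount entirely: set the same SAS key on the Hadoop configuration and read the wasbs:// path directly. A minimal sketch, assuming the hadoop-azure and azure-storage artifacts are on the classpath and the SAS token grants read access:
import org.apache.spark.sql.SparkSession
object DemoWithoutDbutils {
  def main(args: Array[String]): Unit = {
    val containerName = "container1"
    val storageAccountName = "storageaccount1"
    val sas = "saskey" // placeholder, as in the question
    val spark = SparkSession.builder()
      .appName("SparkDemo")
      .master("local[1]")
      .getOrCreate()
    // Same "fs.azure.sas.<container>.<account>.blob.core.windows.net" key the
    // question builds, but set on the Hadoop configuration instead of mounting.
    spark.sparkContext.hadoopConfiguration.set(
      s"fs.azure.sas.$containerName.$storageAccountName.blob.core.windows.net", sas)
    // Read the blob path directly; no /mnt mount point is involved.
    val parquetFileDF = spark.read.parquet(
      s"wasbs://$containerName@$storageAccountName.blob.core.windows.net/test1.parquet")
    parquetFileDF.show()
  }
}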

`_corrupt_record: string (nullable = true)` with a simple Spark Scala application [closed]

I am trying to run a simple Spark Scala application example from Spark: The Definitive Guide. It reads a JSON file and does some work on it. But running it reports _corrupt_record: string (nullable = true). The JSON file has one JSON object per line. I was wondering what is wrong? Thanks.
Scala code:
package com.databricks.example
import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession
object DFUtils extends Serializable {
  @transient lazy val logger = Logger.getLogger(getClass.getName)
  def pointlessUDF(raw: String) = {
    raw
  }
}
object DataFrameExample extends Serializable {
  def main(args: Array[String]): Unit = {
    val pathToDataFolder = args(0)
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()
    // udf registration
    spark.udf.register("myUDF", DFUtils.pointlessUDF(_: String): String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    df.printSchema()
    // df.collect.foreach(println)
    // val x = df.select("value").foreach(x => println(x));
    // val manipulated = df.groupBy("grouping").sum().collect().foreach(x => println(x))
    // val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect().foreach(x => println(x))
  }
}
/tmp/test/data.json is
{"grouping":"group_1", value:5}
{"grouping":"group_1", value:6}
{"grouping":"group_3", value:7}
{"grouping":"group_2", value:3}
{"grouping":"group_4", value:2}
{"grouping":"group_1", value:1}
{"grouping":"group_2", value:2}
{"grouping":"group_3", value:3}
build.sbt is
$ cat build.sbt
name := "example"
organization := "com.databricks"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.8"
// scalaVersion := "2.13.1"
// Spark Information
// val sparkVersion = "2.2.0"
val sparkVersion = "2.4.5"
// allows us to include spark packages
resolvers += "bintray-spark-packages" at
"https://dl.bintray.com/spark-packages/maven/"
resolvers += "Typesafe Simple Repository" at
"http://repo.typesafe.com/typesafe/simple/maven-releases/"
resolvers += "MavenRepository" at
"https://mvnrepository.com/"
libraryDependencies ++= Seq(
  // spark core
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion
)
Build and package with SBT:
$ sbt package
[info] Loading project definition from /tmp/test/bookexample/project
[info] Loading settings for project bookexample from build.sbt ...
[info] Set current project to example (in build file:/tmp/test/bookexample/)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[info] Compiling 1 Scala source to /tmp/test/bookexample/target/scala-2.11/classes ...
[success] Total time: 28 s, completed Mar 19, 2020, 8:35:50 AM
Run with spark-submit:
$ ~/programs/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit --class com.databricks.example.DataFrameExample --master local target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar /tmp/test/
20/03/19 08:37:58 WARN Utils: Your hostname, ocean resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
20/03/19 08:37:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/03/19 08:37:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/19 08:38:00 INFO SparkContext: Running Spark version 2.4.5
20/03/19 08:38:00 INFO SparkContext: Submitted application: Spark Example
20/03/19 08:38:00 INFO SecurityManager: Changing view acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing view acls groups to:
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls groups to:
20/03/19 08:38:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(t); groups with view permissions: Set(); users with modify permissions: Set(t); groups with modify permissions: Set()
20/03/19 08:38:01 INFO Utils: Successfully started service 'sparkDriver' on port 46163.
20/03/19 08:38:01 INFO SparkEnv: Registering MapOutputTracker
20/03/19 08:38:01 INFO SparkEnv: Registering BlockManagerMaster
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/19 08:38:01 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-42f9b92d-1420-4e04-aaf6-acb635a27907
20/03/19 08:38:01 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/03/19 08:38:02 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/19 08:38:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/19 08:38:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.122.1:4040
20/03/19 08:38:02 INFO SparkContext: Added JAR file:/tmp/test/bookexample/target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar at spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:03 INFO Executor: Starting executor ID driver on host localhost
20/03/19 08:38:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35287.
20/03/19 08:38:03 INFO NettyBlockTransferService: Server created on 192.168.122.1:35287
20/03/19 08:38:03 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/19 08:38:03 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.122.1:35287 with 366.3 MB RAM, BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:04 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/user/hive/warehouse').
20/03/19 08:38:04 INFO SharedState: Warehouse path is '/user/hive/warehouse'.
20/03/19 08:38:05 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 97 ms to list leaf files for 1 paths.
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 3 ms to list leaf files for 1 paths.
20/03/19 08:38:12 INFO FileSourceStrategy: Pruning directories with:
20/03/19 08:38:12 INFO FileSourceStrategy: Post-Scan Filters:
20/03/19 08:38:12 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
20/03/19 08:38:12 INFO FileSourceScanExec: Pushed Filters:
20/03/19 08:38:14 INFO CodeGenerator: Code generated in 691.376591 ms
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 285.2 KB, free 366.0 MB)
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.3 KB, free 366.0 MB)
20/03/19 08:38:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.122.1:35287 (size: 23.3 KB, free: 366.3 MB)
20/03/19 08:38:14 INFO SparkContext: Created broadcast 0 from json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194560 bytes, open cost is considered as scanning 4194304 bytes.
20/03/19 08:38:14 INFO SparkContext: Starting job: json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO DAGScheduler: Got job 0 (json at DataFrameExample.scala:31) with 1 output partitions
20/03/19 08:38:14 INFO DAGScheduler: Final stage: ResultStage 0 (json at DataFrameExample.scala:31)
20/03/19 08:38:14 INFO DAGScheduler: Parents of final stage: List()
20/03/19 08:38:14 INFO DAGScheduler: Missing parents: List()
20/03/19 08:38:15 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31), which has no missing parents
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.3 KB, free 366.0 MB)
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 7.4 KB, free 366.0 MB)
20/03/19 08:38:15 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.122.1:35287 (size: 7.4 KB, free: 366.3 MB)
20/03/19 08:38:15 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1163
20/03/19 08:38:15 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31) (first 15 tasks are for partitions Vector(0))
20/03/19 08:38:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/03/19 08:38:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8242 bytes)
20/03/19 08:38:15 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/03/19 08:38:15 INFO Executor: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:15 INFO TransportClientFactory: Successfully created connection to /192.168.122.1:46163 after 145 ms (0 ms spent in bootstraps)
20/03/19 08:38:15 INFO Utils: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar to /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/fetchFileTemp5270349024712252124.tmp
20/03/19 08:38:16 INFO Executor: Adding file:/tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/example_2.11-0.1-SNAPSHOT.jar to class loader
20/03/19 08:38:16 INFO FileScanRDD: Reading File path: file:///tmp/test/data.json, range: 0-256, partition values: [empty row]
20/03/19 08:38:16 INFO CodeGenerator: Code generated in 88.903645 ms
20/03/19 08:38:16 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1893 bytes result sent to driver
20/03/19 08:38:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1198 ms on localhost (executor driver) (1/1)
20/03/19 08:38:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/03/19 08:38:16 INFO DAGScheduler: ResultStage 0 (json at DataFrameExample.scala:31) finished in 1.639 s
20/03/19 08:38:16 INFO DAGScheduler: Job 0 finished: json at DataFrameExample.scala:31, took 1.893394 s
root
|-- _corrupt_record: string (nullable = true)
20/03/19 08:38:16 INFO SparkContext: Invoking stop() from shutdown hook
20/03/19 08:38:16 INFO SparkUI: Stopped Spark web UI at http://192.168.122.1:4040
20/03/19 08:38:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/03/19 08:38:17 INFO MemoryStore: MemoryStore cleared
20/03/19 08:38:17 INFO BlockManager: BlockManager stopped
20/03/19 08:38:17 INFO BlockManagerMaster: BlockManagerMaster stopped
20/03/19 08:38:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/03/19 08:38:17 INFO SparkContext: Successfully stopped SparkContext
20/03/19 08:38:17 INFO ShutdownHookManager: Shutdown hook called
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-7d1fcc2e-af36-4dc4-ab6b-49b901e890ba
The original code from the book is
object DataFrameExample extends Serializable {
  def main(args: Array[String]) = {
    val pathToDataFolder = args(0)
    // start up the SparkSession
    // along with explicitly setting a given config
    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()
    // udf registration
    spark.udf.register("myUDF", someUDF(_: String): String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect()
      .foreach(x => println(x))
  }
}
There is no issue with the code. The issue is with your data: it is not in valid JSON format. If you check, the double quote (") is missing around the value column in your data, so it is giving _corrupt_record: string.
Change the data as below and run the same code:
{"grouping":"group_1", "value":5}
{"grouping":"group_1", "value":6}
{"grouping":"group_3", "value":7}
{"grouping":"group_2", "value":3}
{"grouping":"group_4", "value":2}
{"grouping":"group_1", "value":1}
{"grouping":"group_2", "value":2}
{"grouping":"group_3", "value":3}
df = spark.read.json("/spath/files/1.json")
df.show()
+--------+-----+
|grouping|value|
+--------+-----+
| group_1| 5|
| group_1| 6|
| group_3| 7|
| group_2| 3|
| group_4| 2|
| group_1| 1|
| group_2| 2|
| group_3| 3|
+--------+-----+
As pointed out by others in this thread, the problem is that your input is not valid JSON. However, the libraries used by Spark, and by extension Spark itself, support such cases:
val df = spark
  .read
  .option("allowUnquotedFieldNames", "true")
  .json(pathToDataFolder + "data.json")
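As a complementary sketch (assuming you want to keep the unquoted field names in the file), you can also pass an explicit schema together with that option, so the columns come back typed instead of relying on inference:
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
val schema = StructType(Seq(
  StructField("grouping", StringType, nullable = true),
  StructField("value", LongType, nullable = true)
))
val df = spark.read
  .schema(schema)                              // skip schema inference
  .option("allowUnquotedFieldNames", "true")   // tolerate the unquoted value key
  .json(pathToDataFolder + "data.json")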

Create a Scala sbt project and use Spark functionality

I have a Spark application that I want to run using sbt. If I run an application using only Scala code, it works. But when I try to import Spark functionality and run Spark code, it won't work. This is my Spark script:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark._
object hi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("hi").setMaster("local[2]")
    // Create a Scala Spark Context.
    val sc = new SparkContext(conf)
    // Load our input data.
    val file1 = sc.textFile("geotweets.tsv")
    val a2 = file1.map(_.split("\t")).map(rec => rec(1)).take(10).foreach(println)
  }
}
And my build.sbt is like this
name := "Spark-test"
version := "1.0"
scalaVersion := "2.10.2"
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.0.2"
)
But when I run this application in sbt I get this error message:
[info] Compiling 1 Scala source to C:\Users\kolbj\OneDrive - NTNU\Emner\BigData\SBT-Phase2\target\scala-2.10\classes ...
[info] Done compiling.
[info] Packaging C:\Users\kolbj\OneDrive - NTNU\Emner\BigData\SBT-Phase2\target\scala-2.10\faen_2.10-1.0.jar ...
[info] Done packaging.
[info] Running hi
18/04/21 15:20:37 INFO spark.SecurityManager: Changing view acls to: kolbj
18/04/21 15:20:37 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(kolbj)
18/04/21 15:20:38 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/04/21 15:20:38 INFO Remoting: Starting remoting
18/04/21 15:20:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@LAPTOP-9N8CNCEL:51096]
18/04/21 15:20:38 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@LAPTOP-9N8CNCEL:51096]
18/04/21 15:20:38 INFO spark.SparkEnv: Registering MapOutputTracker
18/04/21 15:20:38 INFO spark.SparkEnv: Registering BlockManagerMaster
18/04/21 15:20:38 INFO storage.DiskBlockManager: Created local directory at C:\Users\kolbj\AppData\Local\Temp\spark-local-20180421152038-b562
18/04/21 15:20:38 INFO storage.MemoryStore: MemoryStore started with capacity 273.3 MB.
18/04/21 15:20:38 INFO network.ConnectionManager: Bound socket to port 51099 with id = ConnectionManagerId(LAPTOP-9N8CNCEL,51099)
18/04/21 15:20:38 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/04/21 15:20:38 INFO storage.BlockManagerInfo: Registering block manager LAPTOP-9N8CNCEL:51099 with 273.3 MB RAM
18/04/21 15:20:38 INFO storage.BlockManagerMaster: Registered BlockManager
18/04/21 15:20:38 INFO spark.HttpServer: Starting HTTP Server
18/04/21 15:20:38 INFO server.Server: jetty-8.1.14.v20131031
18/04/21 15:20:38 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:51100
18/04/21 15:20:38 INFO broadcast.HttpBroadcast: Broadcast server started at http://192.168.56.1:51100
18/04/21 15:20:38 INFO spark.HttpFileServer: HTTP File server directory is C:\Users\kolbj\AppData\Local\Temp\spark-17906dea-b751-4fca-9c8c-bca10d06246a
18/04/21 15:20:38 INFO spark.HttpServer: Starting HTTP Server
18/04/21 15:20:38 INFO server.Server: jetty-8.1.14.v20131031
18/04/21 15:20:38 INFO server.AbstractConnector: Started SocketConnector#0.0.0.0:51101
18/04/21 15:20:38 INFO server.Server: jetty-8.1.14.v20131031
18/04/21 15:20:38 INFO server.AbstractConnector: Started SelectChannelConnector#0.0.0.0:4040
18/04/21 15:20:38 INFO ui.SparkUI: Started SparkUI at http://LAPTOP-9N8CNCEL:4040
18/04/21 15:20:39 INFO storage.MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=286575820
18/04/21 15:20:39 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 32.0 KB, free 273.3 MB)
18/04/21 15:20:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/04/21 15:20:39 WARN snappy.LoadSnappy: Snappy native library not loaded
[error] (run-main-0) org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/kolbj/OneDrive - NTNU/Emner/BigData/SBT-Phase2/geotweets.tsv
[error] org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/kolbj/OneDrive - NTNU/Emner/BigData/SBT-Phase2/geotweets.tsv
[error] at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
[error] at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
[error] at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:175)
18/04/21 15:20:39 ERROR spark.ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:117)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply(ContextCleaner.scala:115)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply(ContextCleaner.scala:115)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1160)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:114)
at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:65)
18/04/21 15:20:39 INFO network.ConnectionManager: Selector thread was interrupted!
18/04/21 15:20:39 ERROR util.Utils: Uncaught exception in thread SparkListenerBus
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:48)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply(LiveListenerBus.scala:47)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply(LiveListenerBus.scala:47)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1160)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:46)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
[error] at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
[error] at scala.Option.getOrElse(Option.scala:120)
[error] at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
[error] at org.apache.spark.rdd.RDD.take(RDD.scala:983)
[error] at hi$.main(hw.scala:15)
[error] at hi.main(hw.scala)
[error] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[error] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[error] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[error] at java.lang.reflect.Method.invoke(Method.java:498)
[error] at sbt.Run.invokeMain(Run.scala:93)
[error] at sbt.Run.run0(Run.scala:87)
[error] at sbt.Run.execute$1(Run.scala:65)
[error] at sbt.Run.$anonfun$run$4(Run.scala:77)
[error] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
[error] at sbt.util.InterfaceUtil$$anon$1.get(InterfaceUtil.scala:10)
[error] at sbt.TrapExit$App.run(TrapExit.scala:252)
[error] at java.lang.Thread.run(Thread.java:748)
[error] java.lang.RuntimeException: Nonzero exit code: 1
[error] at sbt.Run$.executeTrapExit(Run.scala:124)
[error] at sbt.Run.run(Run.scala:77)
[error] at sbt.Defaults$.$anonfun$bgRunTask$5(Defaults.scala:1172)
[error] at sbt.Defaults$.$anonfun$bgRunTask$5$adapted(Defaults.scala:1167)
[error] at sbt.internal.BackgroundThreadPool.$anonfun$run$1(DefaultBackgroundJobService.scala:366)
[error] at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
[error] at scala.util.Try$.apply(Try.scala:209)
[error] at sbt.internal.BackgroundThreadPool$BackgroundRunnable.run(DefaultBackgroundJobService.scala:289)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
sbt:FAEN> [error] (Compile / run) Nonzero exit code: 1
[error] Total time: 16 s, completed 21.apr.2018 15:20:39
18/04/21 15:20:42 INFO storage.BlockManager: Removing broadcast 0
18/04/21 15:20:42 INFO spark.ContextCleaner: Cleaned broadcast 0
18/04/21 15:20:42 INFO storage.BlockManager: Removing block broadcast_0
18/04/21 15:20:42 INFO storage.MemoryStore: Block broadcast_0 of size 32816 dropped from memory (free 286575820)
I know that the Spark code works fine in the Spark REPL. Also, this Spark code needs to read a TSV file via this line:
val file1 = sc.textFile("geotweets.tsv")
So my second question is: where should this file be placed?
My project repository is like this:
SBT-phase2(project name)
\build.sbt
\src\main\scala\hw.scala
\src\main\scala\geotweets.tsv
Does anyone know how to fix this? :)
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/kolbj/OneDrive - NTNU/Emner/BigData/SBT-Phase2/geotweets.tsv
The path of the file you provided is wrong. A relative path such as "geotweets.tsv" is resolved against sbt's working directory, which is the project root (SBT-Phase2), not src/main/scala, so either place the file in the project root or provide an absolute path.
You can build an absolute path with java.io.File's getCanonicalPath API, for example:
val file1 = sc.textFile(new java.io.File(".").getCanonicalPath + "/src/main/scala/geotweets.tsv")
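If you want to keep the file under src/main/scala as in your tree, a platform-independent sketch using the standard java.nio.file API would be (the layout is taken from your project; adjust as needed):
import java.nio.file.Paths

// Resolve the TSV relative to the working directory (the project root under sbt)
// without hard-coding path separators.
val tsvPath = Paths.get("src", "main", "scala", "geotweets.tsv").toAbsolutePath.toString
val file1 = sc.textFile(tsvPath)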

Cannot run spark jobs locally using sbt, but works in IntelliJ

I've written a few simple Spark jobs and some tests for them. I've done everything in IntelliJ and it works great. Now, I'd like to make sure my code builds with sbt. Compiling is fine, but I get strange errors during running and testing.
I am using Scala version 2.11.8 and sbt version 0.13.8
My build.sbt file looks like this:
name := "test"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
libraryDependencies += "javax.mail" % "javax.mail-api" % "1.5.6"
libraryDependencies += "com.sun.mail" % "javax.mail" % "1.5.6"
libraryDependencies += "commons-cli" % "commons-cli" % "1.3.1"
libraryDependencies += "org.scalatest" % "scalatest_2.11" % "3.0.0" % "test"
libraryDependencies += "com.holdenkarau" % "spark-testing-base_2.11" % "2.0.0_0.4.4" % "test" intransitive()
I try to run my code using sbt "run-main com.test.email.processor.bin.Runner". Here is the output:
[info] Loading project definition from /Users/max/workplace/test/project
[info] Set current project to test (in build file:/Users/max/workplace/test/)
[info] Running com.test.email.processor.bin.Runner -j recipientCount -e /Users/max/workplace/data/test/enron_with_categories/*/*.txt
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/08/23 18:46:55 INFO SparkContext: Running Spark version 2.0.0
16/08/23 18:46:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/23 18:46:55 INFO SecurityManager: Changing view acls to: max
16/08/23 18:46:55 INFO SecurityManager: Changing modify acls to: max
16/08/23 18:46:55 INFO SecurityManager: Changing view acls groups to:
16/08/23 18:46:55 INFO SecurityManager: Changing modify acls groups to:
16/08/23 18:46:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(max); groups with view permissions: Set(); users with modify permissions: Set(max); groups with modify permissions: Set()
16/08/23 18:46:56 INFO Utils: Successfully started service 'sparkDriver' on port 61759.
16/08/23 18:46:56 INFO SparkEnv: Registering MapOutputTracker
16/08/23 18:46:56 INFO SparkEnv: Registering BlockManagerMaster
16/08/23 18:46:56 INFO DiskBlockManager: Created local directory at /private/var/folders/75/4dydy_6110v0gjv7bg265_g40000gn/T/blockmgr-9eb526c0-b7e5-444a-b186-d7f248c5dc62
16/08/23 18:46:56 INFO MemoryStore: MemoryStore started with capacity 408.9 MB
16/08/23 18:46:56 INFO SparkEnv: Registering OutputCommitCoordinator
16/08/23 18:46:56 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/08/23 18:46:56 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.11:4040
16/08/23 18:46:56 INFO Executor: Starting executor ID driver on host localhost
16/08/23 18:46:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 61760.
16/08/23 18:46:57 INFO NettyBlockTransferService: Server created on 192.168.1.11:61760
16/08/23 18:46:57 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.11, 61760)
16/08/23 18:46:57 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.11:61760 with 408.9 MB RAM, BlockManagerId(driver, 192.168.1.11, 61760)
16/08/23 18:46:57 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.11, 61760)
16/08/23 18:46:57 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 128.0 KB, free 408.8 MB)
16/08/23 18:46:57 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.6 KB, free 408.8 MB)
16/08/23 18:46:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.11:61760 (size: 14.6 KB, free: 408.9 MB)
16/08/23 18:46:57 INFO SparkContext: Created broadcast 0 from wholeTextFiles at RecipientCountJob.scala:22
16/08/23 18:46:58 WARN ClosureCleaner: Expected a closure; got com.test.email.processor.util.cleanEmail$
16/08/23 18:46:58 INFO FileInputFormat: Total input paths to process : 1702
16/08/23 18:46:58 INFO FileInputFormat: Total input paths to process : 1702
16/08/23 18:46:58 INFO CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
16/08/23 18:46:58 INFO SparkContext: Starting job: take at RecipientCountJob.scala:35
16/08/23 18:46:58 WARN DAGScheduler: Creating new stage failed due to exception - job: 0
java.lang.ClassNotFoundException: scala.Function0
at sbt.classpath.ClasspathFilter.loadClass(ClassLoaders.scala:63)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.twitter.chill.KryoBase$$anonfun$1.apply(KryoBase.scala:41)
at com.twitter.chill.KryoBase$$anonfun$1.apply(KryoBase.scala:41)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.Range.foreach(Range.scala:166)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at com.twitter.chill.KryoBase.<init>(KryoBase.scala:41)
at com.twitter.chill.EmptyScalaKryoInstantiator.newKryo(ScalaKryoInstantiator.scala:57)
at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:86)
at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:274)
at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:259)
at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:175)
at org.apache.spark.serializer.KryoSerializer.supportsRelocationOfSerializedObjects$lzycompute(KryoSerializer.scala:182)
at org.apache.spark.serializer.KryoSerializer.supportsRelocationOfSerializedObjects(KryoSerializer.scala:178)
at org.apache.spark.shuffle.sort.SortShuffleManager$.canUseSerializedShuffle(SortShuffleManager.scala:187)
at org.apache.spark.shuffle.sort.SortShuffleManager.registerShuffle(SortShuffleManager.scala:99)
at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:90)
at org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:91)
at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:235)
at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:233)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.dependencies(RDD.scala:233)
at org.apache.spark.scheduler.DAGScheduler.visit$2(DAGScheduler.scala:418)
at org.apache.spark.scheduler.DAGScheduler.getAncestorShuffleDependencies(DAGScheduler.scala:433)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getShuffleMapStage(DAGScheduler.scala:288)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$visit$1$1.apply(DAGScheduler.scala:394)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$visit$1$1.apply(DAGScheduler.scala:391)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.scheduler.DAGScheduler.visit$1(DAGScheduler.scala:391)
at org.apache.spark.scheduler.DAGScheduler.getParentStages(DAGScheduler.scala:403)
at org.apache.spark.scheduler.DAGScheduler.getParentStagesAndId(DAGScheduler.scala:304)
at org.apache.spark.scheduler.DAGScheduler.newResultStage(DAGScheduler.scala:339)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:849)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1626)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
16/08/23 18:46:58 INFO DAGScheduler: Job 0 failed: take at RecipientCountJob.scala:35, took 0.076653 s
[error] (run-main-0) java.lang.ClassNotFoundException: scala.Function0
java.lang.ClassNotFoundException: scala.Function0
[trace] Stack trace suppressed: run last compile:runMain for the full output.
16/08/23 18:46:58 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1229)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
16/08/23 18:46:58 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:67)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:66)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:66)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:65)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1229)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:64)
java.lang.RuntimeException: Nonzero exit code: 1
It would appear you are missing the scala-library dependency, as scala.Function0 comes from the standard Scala library.
You could try adding the Scala library explicitly, in the scopes where it is needed:
libraryDependencies += "org.scala-lang" % "scala-library" % scalaVersion.value
It seems, though, that the Scala library is not being added to the classpath of your run.
You might also want to add something like the following, so that the same classpath used for compilation is used to run the code in sbt:
fullClasspath in run := (fullClasspath in Compile).value
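Taken together, a sketch of the suggested build.sbt additions (sbt 0.13-style keys; whether you need both depends on your setup, and the commented fork setting is a separate, commonly used workaround rather than part of the suggestion above):
// Make the Scala standard library an explicit dependency.
libraryDependencies += "org.scala-lang" % "scala-library" % scalaVersion.value

// Run with the same classpath that was used for compilation.
fullClasspath in run := (fullClasspath in Compile).value

// Alternatively, running in a forked JVM avoids sbt's filtering classloader:
// fork in run := true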
Apparently, Spark could not be run via sbt in this setup. I ended up packaging the entire job into a jar using the assembly plugin and running it with java.
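For reference, a minimal sketch of that workaround, assuming the sbt-assembly plugin (the version shown is only an example):
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")
Then sbt assembly produces a fat jar (by default something like target/scala-2.11/test-assembly-1.0.jar for this build), which can be run with java -cp pointing at that jar and the main class com.test.email.processor.bin.Runner. In practice, Spark dependencies often require an assemblyMergeStrategy setting to resolve duplicate files in the jar.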