How to set Spark MemoryStore size when running in IntelliJ Scala Console? - scala

I am running Spark code as a script in the IntelliJ (CE 2017.1) Scala Console on Linux 64 (Fedora 25). I set up the SparkContext at the start:
import org.apache.spark.{SparkConf, SparkContext}
val conf = new SparkConf().
setAppName("RandomForest").
setMaster("local[*]").
set("spark.local.dir", "/spark-tmp").
set("spark.driver.memory", "4g").
set("spark.executor.memory", "4g")
val sc = new SparkContext(conf)
But the running SparkContext always starts with the same lines:
17/03/27 20:12:21 INFO SparkContext: Running Spark version 2.1.0
17/03/27 20:12:21 INFO MemoryStore: MemoryStore started with capacity 871.6 MB
17/03/27 20:12:21 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.65:38119 with 871.8 MB RAM, BlockManagerId(driver, 192.168.1.65, 38119, None)
And the Executors tab in the Spark web UI shows the same amount.
Exporting _JAVA_OPTIONS="-Xms2g -Xmx4g" from the terminal before start also had no effect here.

The only way to increase the Spark MemoryStore (and, in turn, the Storage Memory shown on the Executors tab of the web UI) was to add -Xms2g -Xmx4g to the VM options directly in the IntelliJ Scala Console settings before starting the console.
Now the info line prints:
17/03/27 20:12:21 INFO MemoryStore: MemoryStore started with capacity 2004.6 MB
17/03/27 20:12:21 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.65:41997 with 2004.6 MB RAM, BlockManagerId(driver, 192.168.1.65, 41997, None)
and the Spark web UI Executors tab Storage Memory shows 2.1 GB.
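For what it's worth, this behaviour is expected (a note of my own, not from the original post): in local mode the driver is the already-running console JVM, so a spark.driver.memory value set on SparkConf after that JVM is up cannot resize it, and the MemoryStore capacity is derived from the actual JVM heap. A quick sanity check in the console:
// The conf value is recorded, but it does not resize the running JVM:
println(sc.getConf.get("spark.driver.memory"))              // prints "4g"
// The heap that actually determines the MemoryStore capacity follows -Xms/-Xmx:
println(Runtime.getRuntime.maxMemory / (1024 * 1024) + " MB max heap")
This is why only the -Xms2g -Xmx4g VM options changed the reported capacity.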

Related

Cannot run spark submit in standalone spark cluster

I am working with the following docker-compose file to build a Spark standalone cluster:
---
# ----------------------------------------------------------------------------------------
# -- Docs: https://github.com/cluster-apps-on-docker/spark-standalone-cluster-on-docker --
# ----------------------------------------------------------------------------------------
version: "3.6"
volumes:
  shared-workspace:
    name: "hadoop-distributed-file-system"
    driver: local
services:
  jupyterlab:
    image: andreper/jupyterlab:3.0.0-spark-3.0.0
    container_name: jupyterlab
    ports:
      - 8888:8888
      - 4040:4040
    volumes:
      - shared-workspace:/opt/workspace
  spark-master:
    image: andreper/spark-master:3.0.0
    container_name: spark-master
    ports:
      - 8080:8080
      - 7077:7077
    volumes:
      - shared-workspace:/opt/workspace
  spark-worker-1:
    image: andreper/spark-worker:3.0.0
    container_name: spark-worker-1
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8081:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
  spark-worker-2:
    image: andreper/spark-worker:3.0.0
    container_name: spark-worker-2
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8082:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
I followed this guide: https://towardsdatascience.com/apache-spark-cluster-on-docker-ft-a-juyterlab-interface-418383c95445.
The GitHub repo can be found here: https://github.com/cluster-apps-on-docker/spark-standalone-cluster-on-docker
I can run the cluster, and I can run code inside the Jupyter container, connecting to the Spark master node without problems.
The problem starts when I want to run the Spark code with spark-submit. I really cannot understand how the cluster works. When I work inside the Jupyter container I can quickly see where the scripts I create are, but I can't find them in the spark-master container. If I check the docker-compose.yml, the volumes section indicates that the folder where the scripts are stored is:
volumes:
  - shared-workspace:/opt/workspace
But I cannot find this folder in any of the spark containers.
I run spark-submit from inside the Jupyter container, where I have all the scripts I am working with. But I have a doubt about the following command: spark-submit --master spark://spark-master:7077 <PATH to my python script>. Is the path of the Python script the path where the script lives in the Jupyter container, or in the spark-master container?
If I run spark-submit without specifying the master, it runs locally, and it works without problems inside the Jupyter container.
This is the python code I am executing:
from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from os.path import expanduser, join, abspath
warehouse_location = abspath('spark-warehouse')  # not shown in the original snippet; assumed to be defined roughly like this

sparkConf = SparkConf()
sparkConf.setMaster("spark://spark-master:7077")
sparkConf.setAppName("pyspark-4")
sparkConf.set("spark.executor.memory", "2g")
sparkConf.set("spark.driver.memory", "2g")
sparkConf.set("spark.executor.cores", "1")
sparkConf.set("spark.driver.cores", "1")
sparkConf.set("spark.dynamicAllocation.enabled", "false")
sparkConf.set("spark.shuffle.service.enabled", "false")
sparkConf.set("spark.sql.warehouse.dir", warehouse_location)
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext

df = spark.createDataFrame(
    [
        (1, "foo"),  # create your data here, be consistent in the types.
        (2, "bar"),
    ],
    ["id", "label"],  # add your column names here
)
print(df.show())
But when I specify the master with --master spark://spark-master:7077, together with the path where the script lives in the Jupyter container:
spark-submit --master spark://spark-master:7077 test.py
and these are the logs I receive:
21/06/06 21:32:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/06/06 21:32:08 INFO SparkContext: Running Spark version 3.0.0
21/06/06 21:32:09 INFO ResourceUtils: ==============================================================
21/06/06 21:32:09 INFO ResourceUtils: Resources for spark.driver:
21/06/06 21:32:09 INFO ResourceUtils: ==============================================================
21/06/06 21:32:09 INFO SparkContext: Submitted application: pyspark-4
21/06/06 21:32:09 INFO SecurityManager: Changing view acls to: root
21/06/06 21:32:09 INFO SecurityManager: Changing modify acls to: root
21/06/06 21:32:09 INFO SecurityManager: Changing view acls groups to:
21/06/06 21:32:09 INFO SecurityManager: Changing modify acls groups to:
21/06/06 21:32:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/06/06 21:32:12 INFO Utils: Successfully started service 'sparkDriver' on port 45627.
21/06/06 21:32:12 INFO SparkEnv: Registering MapOutputTracker
21/06/06 21:32:13 INFO SparkEnv: Registering BlockManagerMaster
21/06/06 21:32:13 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/06/06 21:32:13 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/06/06 21:32:13 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
21/06/06 21:32:13 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-5a81855c-3160-49a5-b9f9-9cdfe6e5ca62
21/06/06 21:32:14 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
21/06/06 21:32:14 INFO SparkEnv: Registering OutputCommitCoordinator
21/06/06 21:32:16 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/06/06 21:32:16 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://3b232f9ed93b:4040
21/06/06 21:32:19 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
21/06/06 21:32:20 INFO TransportClientFactory: Successfully created connection to spark-master/172.21.0.5:7077 after 284 ms (0 ms spent in bootstraps)
21/06/06 21:32:23 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20210606213223-0000
21/06/06 21:32:23 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46539.
21/06/06 21:32:23 INFO NettyBlockTransferService: Server created on 3b232f9ed93b:46539
21/06/06 21:32:23 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/06/06 21:32:23 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:23 INFO BlockManagerMasterEndpoint: Registering block manager 3b232f9ed93b:46539 with 366.3 MiB RAM, BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:23 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:23 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:25 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
21/06/06 21:32:29 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/opt/workspace/spark-warehouse').
21/06/06 21:32:29 INFO SharedState: Warehouse path is '/opt/workspace/spark-warehouse'.
I AM HERE??
21/06/06 21:33:09 INFO CodeGenerator: Code generated in 1925.0009 ms
21/06/06 21:33:09 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:0
21/06/06 21:33:09 INFO DAGScheduler: Got job 0 (showString at NativeMethodAccessorImpl.java:0) with 1 output partitions
21/06/06 21:33:09 INFO DAGScheduler: Final stage: ResultStage 0 (showString at NativeMethodAccessorImpl.java:0)
21/06/06 21:33:09 INFO DAGScheduler: Parents of final stage: List()
21/06/06 21:33:09 INFO DAGScheduler: Missing parents: List()
21/06/06 21:33:10 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0), which has no missing parents
21/06/06 21:33:10 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 11.3 KiB, free 366.3 MiB)
21/06/06 21:33:11 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 5.9 KiB, free 366.3 MiB)
21/06/06 21:33:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 3b232f9ed93b:46539 (size: 5.9 KiB, free: 366.3 MiB)
21/06/06 21:33:11 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1200
21/06/06 21:33:11 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
21/06/06 21:33:11 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
21/06/06 21:33:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:33:41 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:33:56 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:34:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:34:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
When I execute the same code inside a Jupyter notebook, it works without problems.
Is it because the path I have to indicate for the script is the path where the script lives on the spark-master node? Or am I confusing things here?
I use
docker pull bitnami/spark
https://hub.docker.com/r/bitnami/spark
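A note on the path question (my own, not from the original post): spark-submit defaults to client deploy mode, so the driver runs in the container where the command is executed, and the application file path is resolved there, i.e. inside the Jupyter container. Since shared-workspace is mounted at /opt/workspace in every service of the compose file, keeping the script under that mount makes the same path valid in all containers, for example:
spark-submit --master spark://spark-master:7077 /opt/workspace/test.py
The "Initial job has not accepted any resources" warnings are a separate issue worth checking: the script requests spark.executor.memory of 2g while the workers are started with SPARK_WORKER_MEMORY=512m.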

System memory 259522560 must be at least 4.718592E8. Please use a larger heap size

I have this error when I run my Spark scripts with version 1.6 of Spark.
My scripts are working with version 1.5.
Java version: 1.8
scala version: 2.11.7
I tried changing the system environment variable JAVA_OPTS=-Xms128m -Xmx512m many times, with different values of Xms and Xmx, but it didn't change anything...
I also tried to modify the memory settings of IntelliJ:
Help / Change Memory Settings...
File / Settings / Scala Compiler...
Nothing worked.
I have different users on the computer, and Java is set up at the root of the computer while IntelliJ is set up in the folder of one of the users. Can that have an impact?
Here are the logs of the error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/30 17:06:54 INFO SparkContext: Running Spark version 1.6.0
20/04/30 17:06:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/04/30 17:06:55 INFO SecurityManager: Changing view acls to:
20/04/30 17:06:55 INFO SecurityManager: Changing modify acls to:
20/04/30 17:06:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(); users with modify permissions: Set()
20/04/30 17:06:56 INFO Utils: Successfully started service 'sparkDriver' on port 57698.
20/04/30 17:06:57 INFO Slf4jLogger: Slf4jLogger started
20/04/30 17:06:57 INFO Remoting: Starting remoting
20/04/30 17:06:57 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#10.1.5.175:57711]
20/04/30 17:06:57 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 57711.
20/04/30 17:06:57 INFO SparkEnv: Registering MapOutputTracker
20/04/30 17:06:57 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 259522560 must be at least 4.718592E8. Please use a larger heap size.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:193)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:175)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:354)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
at batch.BatchJob$.main(BatchJob.scala:23)
at batch.BatchJob.main(BatchJob.scala)
20/04/30 17:06:57 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.IllegalArgumentException: System memory 259522560 must be at least 4.718592E8. Please use a larger heap size.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:193)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:175)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:354)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
at batch.BatchJob$.main(BatchJob.scala:23)
at batch.BatchJob.main(BatchJob.scala)
And the beginning of the code:
package batch

import java.lang.management.ManagementFactory
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.{SaveMode, SQLContext}

object BatchJob {
  def main(args: Array[String]): Unit = {
    // get spark configuration
    val conf = new SparkConf()
      .setAppName("Lambda with Spark")

    // Check if running from IDE
    if (ManagementFactory.getRuntimeMXBean.getInputArguments.toString.contains("IntelliJ IDEA")) {
      System.setProperty("hadoop.home.dir", "C:\\Libraries\\WinUtils") // required for winutils
      conf.setMaster("local[*]")
    }

    // setup spark context
    val sc = new SparkContext(conf)
    implicit val sqlContext = new SQLContext(sc)
    ...
I finally found a solution:
add -Xms2g -Xmx4g to the VM options directly in the IntelliJ Scala Console settings.
That's the only thing that worked for me.
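The numbers in the error actually line up (an explanation of my own, not from the original post): Spark 1.6's unified memory manager reserves 300 MB and requires the driver heap to be at least 1.5 × 300 MB = 450 MB = 4.718592E8 bytes, while the "system memory" it reports (259522560 bytes, roughly 247 MB) is the max heap the IDE-launched JVM actually got, so a larger -Xmx is indeed the fix. A small, hypothetical pre-flight check that could go at the top of main:
// Hypothetical guard, not in the original code: fail fast with a clearer message
// if the driver JVM heap is below Spark 1.6's minimum of 1.5 * 300 MB.
val minSystemMemory = (300L * 1024 * 1024 * 1.5).toLong   // 471859200 bytes = 4.718592E8
val maxHeap = Runtime.getRuntime.maxMemory                // e.g. 259522560 bytes, about 247 MB
require(maxHeap >= minSystemMemory,
  s"Driver heap is only $maxHeap bytes; raise -Xmx (e.g. -Xmx2g) in the IntelliJ run configuration or console VM options")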

Spark Streaming job wont schedule additional work

Spark 2.1.1 built for Hadoop 2.7.3
Scala 2.11.11
Cluster has 3 Linux RHEL 7.3 Azure VM's, running Spark Standalone Deploy Mode (no YARN or Mesos, yet)
I have created a very simple SparkStreaming job using IntelliJ, written in Scala. I'm using Maven and building the job into a fat/uber jar that contains all dependencies.
When I run the job locally it works fine. If I copy the jar to the cluster and run it with a master of local[2] it also works fine. However, if I submit the job to the cluster master, it acts as if it does not want to schedule any additional work beyond the first task. The job starts up, grabs however many events are in the Azure Event Hub, processes them successfully, and then never does any more work. It does not matter whether I submit the job to the master as a plain application or in supervised cluster mode; both do the same thing.
I've looked through all the logs I know of (master, driver (where applicable), and executor) and I am not seeing any errors or warnings that seem actionable. I've altered the log level, shown below, to show ALL/INFO/DEBUG and sifted through those logs without finding anything that seems relevant.
It may be worth noting that I had previously created several jobs that connect to Kafka, instead of the Azure Event Hub, using Java, and those jobs run in supervised cluster mode without an issue on this same cluster. This leads me to believe that the cluster configuration isn't the issue; it's either something with my code (below) or the Azure Event Hub.
Any thoughts on where I might check next to isolate this issue? Here is the code for my simple job.
Thanks in advance.
Note: conf.{name} indicates values I'm loading from a config file. I've tested loading and hard-coding them, both with the same result.
package streamingJob

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.eventhubs.EventHubsUtils
import org.joda.time.DateTime

object TestJob {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
    sparkConf.setAppName("TestJob")
    // Uncomment to run locally
    //sparkConf.setMaster("local[2]")

    val sparkContext = new SparkContext(sparkConf)
    sparkContext.setLogLevel("ERROR")

    val streamingContext: StreamingContext = new StreamingContext(sparkContext, Seconds(1))

    val readerParams = Map[String, String](
      "eventhubs.policyname" -> conf.policyname,
      "eventhubs.policykey" -> conf.policykey,
      "eventhubs.namespace" -> conf.namespace,
      "eventhubs.name" -> conf.name,
      "eventhubs.partition.count" -> conf.partitionCount,
      "eventhubs.consumergroup" -> conf.consumergroup
    )

    val eventData = EventHubsUtils.createDirectStreams(
      streamingContext,
      conf.namespace,
      conf.progressdir,
      Map("name" -> readerParams))

    eventData.foreachRDD(r => {
      r.foreachPartition { p => {
        p.foreach(d => {
          println(DateTime.now() + ": " + d)
        }) // end of EventData
      }} // foreachPartition
    }) // foreachRDD

    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
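One way to narrow down where things stall (a debugging sketch of my own, not part of the original job) is to register a StreamingListener before streamingContext.start(): if batches keep being submitted but never complete, the problem is on the resource/executor side rather than in the stream setup.
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted, StreamingListenerBatchSubmitted}

streamingContext.addStreamingListener(new StreamingListener {
  // Print a line for every batch the scheduler submits and completes.
  override def onBatchSubmitted(batchSubmitted: StreamingListenerBatchSubmitted): Unit =
    println(s"Batch submitted: ${batchSubmitted.batchInfo.batchTime}, records = ${batchSubmitted.batchInfo.numRecords}")
  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit =
    println(s"Batch completed: ${batchCompleted.batchInfo.batchTime}")
})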
Here is a set of logs from when I run this as an application, not cluster/supervised.
/spark/bin/spark-submit --class streamingJob.TestJob --master spark://{ip}:7077 --total-executor-cores 1 /spark/job-files/fatjar.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/06 17:52:04 INFO SparkContext: Running Spark version 2.1.1
17/11/06 17:52:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/06 17:52:05 INFO SecurityManager: Changing view acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing view acls groups to:
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls groups to:
17/11/06 17:52:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/11/06 17:52:06 INFO Utils: Successfully started service 'sparkDriver' on port 44384.
17/11/06 17:52:06 INFO SparkEnv: Registering MapOutputTracker
17/11/06 17:52:06 INFO SparkEnv: Registering BlockManagerMaster
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/06 17:52:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b5e2c0f3-2500-42c6-b057-cf5d368580ab
17/11/06 17:52:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/11/06 17:52:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/06 17:52:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/06 17:52:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{ip}:4040
17/11/06 17:52:06 INFO SparkContext: Added JAR file:/spark/job-files/fatjar.jar at spark://{ip}:44384/jars/fatjar.jar with timestamp 1509990726989
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://{ip}:7077...
17/11/06 17:52:07 INFO TransportClientFactory: Successfully created connection to /{ip}:7077 after 72 ms (0 ms spent in bootstraps)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20171106175207-0000
17/11/06 17:52:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44624.
17/11/06 17:52:07 INFO NettyBlockTransferService: Server created on {ip}:44624
17/11/06 17:52:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20171106175207-0000/0 on worker-20171106173151-{ip}-46086 ({ip}:46086) with 1 cores
17/11/06 17:52:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Granted executor ID app-20171106175207-0000/0 on hostPort {ip}:46086 with 1 cores, 1024.0 MB RAM
17/11/06 17:52:07 INFO BlockManagerMasterEndpoint: Registering block manager {ip}:44624 with 366.3 MB RAM, BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20171106175207-0000/0 is now RUNNING
17/11/06 17:52:08 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0

Spark stand alone mode: Submitting jobs programmatically

I am new to Spark and I am trying to submit the "quick-start" job from my app. I try to emulate standalone-mode by starting master and slave on my localhost.
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "/opt/spark-2.0.0-bin-hadoop2.7/README.md"
    val conf = new SparkConf().setAppName("SimpleApp")
    conf.setMaster("spark://10.49.30.77:7077")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, lines with b: %s".format(numAs, numBs))
  }
}
I run my Spark app in my IDE (IntelliJ).
Looking at the logs (the worker node logs), it seems Spark cannot find the job class.
16/09/15 17:50:58 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1912.0 B, free 366.3 MB)
16/09/15 17:50:58 INFO TorrentBroadcast: Reading broadcast variable 1 took 137 ms
16/09/15 17:50:58 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.1 KB, free 366.3 MB)
16/09/15 17:50:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ClassNotFoundException: SimpleApp$$anonfun$1
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
1. Does this mean the job resources (classes) are not transmitted to the worker node?
2. For standalone mode, must I submit jobs using the "spark-submit" CLI? If so, how do I submit Spark jobs from an app (for example a webapp)?
3. An unrelated question: I see in the logs that the driver program starts a server (port 4040). What is the purpose of this? The driver program being the client, why does it start this service?
16/09/15 17:50:52 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/15 17:50:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/09/15 17:50:53 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.49.30.77:4040
You should either set the resource paths in SparkConf using the setJars method, or provide the resources with the --jars option of the spark-submit command when running from the CLI.
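A minimal sketch of the setJars option (the jar path below is hypothetical; point it at whatever artifact your build produces, e.g. via sbt package or an assembly/fat jar):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("SimpleApp")
  .setMaster("spark://10.49.30.77:7077")
  // Ship the application's classes to the executors; without this (or --jars / spark-submit),
  // the workers cannot load SimpleApp$$anonfun$1 and fail with ClassNotFoundException.
  .setJars(Seq("target/scala-2.11/simpleapp_2.11-1.0.jar"))  // hypothetical path
val sc = new SparkContext(conf)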

Spark hangs during RDD reading

I have an Apache Spark master node. When I try to iterate through RDDs, Spark hangs.
Here is an example of my code:
val conf = new SparkConf()
.setAppName("Demo")
.setMaster("spark://localhost:7077")
.set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)
val records = sc.textFile("file:///Users/barbara/projects/spark/src/main/resources/videos.csv")
println("Start")
records.collect().foreach(println)
println("Finish")
Spark log says:
Start
16/04/05 17:32:23 INFO FileInputFormat: Total input paths to process : 1
16/04/05 17:32:23 INFO SparkContext: Starting job: collect at Application.scala:23
16/04/05 17:32:23 INFO DAGScheduler: Got job 0 (collect at Application.scala:23) with 2 output partitions
16/04/05 17:32:23 INFO DAGScheduler: Final stage: ResultStage 0 (collect at Application.scala:23)
16/04/05 17:32:23 INFO DAGScheduler: Parents of final stage: List()
16/04/05 17:32:23 INFO DAGScheduler: Missing parents: List()
16/04/05 17:32:23 INFO DAGScheduler: Submitting ResultStage 0 (file:///Users/barbara/projects/spark/src/main/resources/videos.csv MapPartitionsRDD[1] at textFile at Application.scala:19), which has no missing parents
16/04/05 17:32:23 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.0 KB, free 120.5 KB)
16/04/05 17:32:23 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1811.0 B, free 122.3 KB)
16/04/05 17:32:23 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.18.199.187:55983 (size: 1811.0 B, free: 2.4 GB)
16/04/05 17:32:23 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/04/05 17:32:23 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (file:///Users/barbara/projects/spark/src/main/resources/videos.csv MapPartitionsRDD[1] at textFile at Application.scala:19)
16/04/05 17:32:23 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
I see only a "Start" message. It seems Spark does nothing to read the RDD. Any ideas how to fix it?
UPD
The data I want to read:
123v4n312bv4nb12,Action,Comedy
2n4vhj2gvrh24gvr,Action,Drama
sjfu326gjrw6g374,Drama,Horror
If Spark hangs on such a small dataset, I would first check:
Am I trying to connect to a cluster that doesn't respond or doesn't exist? If I am trying to connect to a running cluster, I would first try to run the same code locally with setMaster("local[*]") (see the sketch after these points). If this works, I know that something is going on with the "master" I am trying to connect to.
Am I asking for more resources than the cluster has to offer? For example, if the cluster manages 2G and I ask for a 3GB executor, my application will never get scheduled and will sit in the job queue forever.
Specific to the comments above: if you started your cluster with sbin/start-master.sh you will NOT get a running cluster. At the very minimum you need a master and a worker (for standalone mode). You should use the start-all.sh script. I recommend a bit more homework, following a tutorial.
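A minimal local sanity check along those lines, reusing the code from the question but with a local master (my own sketch; if this prints the CSV contents, the problem is the standalone cluster, not the code):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("Demo")
  .setMaster("local[*]")   // run in-process instead of connecting to spark://localhost:7077
val sc = new SparkContext(conf)
val records = sc.textFile("file:///Users/barbara/projects/spark/src/main/resources/videos.csv")
records.collect().foreach(println)
sc.stop()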
Use this instead:
val bufferedSource = io.Source.fromFile("/path/filename.csv")
for (line <- bufferedSource.getLines) {
  println(line)
}
bufferedSource.close()