An HTTP server starts when launching a Spark jar on a machine: what is it? - scala

I want to use machine A to submit my Spark jobs to the cluster; A has no Spark environment, just Java. When I launch the jar, an HTTP server starts:
[steven@bj-230 ~]$ java -jar helloCluster.jar SimplyApp
log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/06/10 16:54:54 INFO SparkEnv: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/06/10 16:54:54 INFO SparkEnv: Registering BlockManagerMaster
14/06/10 16:54:54 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140610165454-4393
14/06/10 16:54:54 INFO MemoryStore: MemoryStore started with capacity 1055.1 MB.
14/06/10 16:54:54 INFO ConnectionManager: Bound socket to port 59981 with id = ConnectionManagerId(bj-230,59981)
14/06/10 16:54:54 INFO BlockManagerMaster: Trying to register BlockManager
14/06/10 16:54:54 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager bj-230:59981 with 1055.1 MB RAM
14/06/10 16:54:54 INFO BlockManagerMaster: Registered BlockManager
14/06/10 16:54:54 INFO HttpServer: Starting HTTP Server
14/06/10 16:54:54 INFO HttpBroadcast: Broadcast server started at http://10.10.10.230:59233
14/06/10 16:54:54 INFO SparkEnv: Registering MapOutputTracker
14/06/10 16:54:54 INFO HttpFileServer: HTTP File server directory is /tmp/spark-bfdd02f1-3c02-4233-854f-af89542b9acf
14/06/10 16:54:54 INFO HttpServer: Starting HTTP Server
14/06/10 16:54:54 INFO SparkUI: Started Spark Web UI at http://bj-230:4040
14/06/10 16:54:54 INFO SparkContext: Added JAR hdfs://master:8020/tmp/helloCluster.jar at hdfs://master:8020/tmp/helloCluster.jar with timestamp 1402390494838
14/06/10 16:54:54 INFO AppClient$ClientActor: Connecting to master spark://master:7077...
So, what is the purpose of this server? And if I am behind a NAT, is it possible to use machine A to submit my job to the remote cluster?
By the way, this execution failed. Error log:
14/06/10 16:55:05 INFO SparkDeploySchedulerBackend: Executor app-20140610165321-0005/7 removed: Command exited with code 1
14/06/10 16:55:05 ERROR AppClient$ClientActor: Master removed our application: FAILED; stopping client
14/06/10 16:55:05 WARN SparkDeploySchedulerBackend: Disconnected from Spark cluster! Waiting for reconnection...
14/06/10 16:55:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

The Spark driver starts a few HTTP endpoints:
It provides a web console that shows the job's progress. This HTTP endpoint listens on port 4040 by default and can be changed with the configuration option spark.ui.port. You connect to it with your browser at http://your_host:4040 and can follow the job there. It is only alive while the driver runs.
There is an additional HTTP endpoint that provides a file download service for the jars declared as dependencies. The workers contact the driver to download this list of dependencies. Its port is assigned randomly. Therefore, the driver must be on a network that is routable from the Spark workers.
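Because of that routability requirement, submitting from behind a NAT is unlikely to work unless the workers can reach the driver's address and ports (for example via port forwarding). A minimal sketch of pinning the relevant driver settings, using standard Spark configuration keys with placeholder values (later 1.x releases also expose options to pin the remaining file-server and broadcast ports):

import org.apache.spark.{SparkConf, SparkContext}

object SubmitFromBehindNat {
  def main(args: Array[String]): Unit = {
    // Sketch only: the keys are standard Spark options, the values are placeholders.
    val conf = new SparkConf()
      .setAppName("SimplyApp")
      .setMaster("spark://master:7077")
      .set("spark.ui.port", "4040")               // web console
      .set("spark.driver.host", "203.0.113.10")   // externally reachable address of machine A
      .set("spark.driver.port", "51000")          // fix the driver port instead of a random one
    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}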

Related

Cannot run spark submit in standalone spark cluster

I am working with the following docker-compose file to build a Spark standalone cluster:
---
# ----------------------------------------------------------------------------------------
# -- Docs: https://github.com/cluster-apps-on-docker/spark-standalone-cluster-on-docker --
# ----------------------------------------------------------------------------------------
version: "3.6"
volumes:
  shared-workspace:
    name: "hadoop-distributed-file-system"
    driver: local
services:
  jupyterlab:
    image: andreper/jupyterlab:3.0.0-spark-3.0.0
    container_name: jupyterlab
    ports:
      - 8888:8888
      - 4040:4040
    volumes:
      - shared-workspace:/opt/workspace
  spark-master:
    image: andreper/spark-master:3.0.0
    container_name: spark-master
    ports:
      - 8080:8080
      - 7077:7077
    volumes:
      - shared-workspace:/opt/workspace
  spark-worker-1:
    image: andreper/spark-worker:3.0.0
    container_name: spark-worker-1
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8081:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
  spark-worker-2:
    image: andreper/spark-worker:3.0.0
    container_name: spark-worker-2
    environment:
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=512m
    ports:
      - 8082:8081
    volumes:
      - shared-workspace:/opt/workspace
    depends_on:
      - spark-master
I followed this guide: https://towardsdatascience.com/apache-spark-cluster-on-docker-ft-a-juyterlab-interface-418383c95445.
The GitHub repo can be found here: https://github.com/cluster-apps-on-docker/spark-standalone-cluster-on-docker
I can run the cluster, and I can run code inside the Jupyter container, connecting to the Spark master node without problems.
The problem starts when I want to run the Spark code with spark-submit. I really cannot understand how the cluster works. When I run inside the Jupyter container, I can quickly see where the scripts I create are, but I can't find them in the spark-master container. If I check the docker-compose.yml, the volumes section indicates that the folder where the scripts are stored is:
volumes:
  - shared-workspace:/opt/workspace
But I cannot find this folder in any of the spark containers.
I run spark-submit from inside the Jupyter container. That container holds all the scripts I am working with, but I have a doubt when I write the following command: spark-submit --master spark://spark-master:7077 <PATH to my python script>. Should the path to the Python script be the path inside the Jupyter container or inside the spark-master container?
If I run spark-submit without specifying the master, it runs locally, and it runs without problems inside the Jupyter container.
This is the Python code I am executing:
from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from os.path import expanduser, join, abspath

# Default location for Spark SQL managed tables (resolved relative to the working directory).
warehouse_location = abspath("spark-warehouse")

sparkConf = SparkConf()
sparkConf.setMaster("spark://spark-master:7077")
sparkConf.setAppName("pyspark-4")
sparkConf.set("spark.executor.memory", "2g")
sparkConf.set("spark.driver.memory", "2g")
sparkConf.set("spark.executor.cores", "1")
sparkConf.set("spark.driver.cores", "1")
sparkConf.set("spark.dynamicAllocation.enabled", "false")
sparkConf.set("spark.shuffle.service.enabled", "false")
sparkConf.set("spark.sql.warehouse.dir", warehouse_location)

spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext

df = spark.createDataFrame(
    [
        (1, "foo"),  # create your data here, be consistent in the types.
        (2, "bar"),
    ],
    ["id", "label"],  # add your column names here
)
df.show()
But when I specify the master with --master spark://spark-master:7077 and give the path where the script lives in the Jupyter container:
spark-submit --master spark://spark-master:7077 test.py
these are the logs I receive:
21/06/06 21:32:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/06/06 21:32:08 INFO SparkContext: Running Spark version 3.0.0
21/06/06 21:32:09 INFO ResourceUtils: ==============================================================
21/06/06 21:32:09 INFO ResourceUtils: Resources for spark.driver:
21/06/06 21:32:09 INFO ResourceUtils: ==============================================================
21/06/06 21:32:09 INFO SparkContext: Submitted application: pyspark-4
21/06/06 21:32:09 INFO SecurityManager: Changing view acls to: root
21/06/06 21:32:09 INFO SecurityManager: Changing modify acls to: root
21/06/06 21:32:09 INFO SecurityManager: Changing view acls groups to:
21/06/06 21:32:09 INFO SecurityManager: Changing modify acls groups to:
21/06/06 21:32:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/06/06 21:32:12 INFO Utils: Successfully started service 'sparkDriver' on port 45627.
21/06/06 21:32:12 INFO SparkEnv: Registering MapOutputTracker
21/06/06 21:32:13 INFO SparkEnv: Registering BlockManagerMaster
21/06/06 21:32:13 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/06/06 21:32:13 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/06/06 21:32:13 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
21/06/06 21:32:13 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-5a81855c-3160-49a5-b9f9-9cdfe6e5ca62
21/06/06 21:32:14 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
21/06/06 21:32:14 INFO SparkEnv: Registering OutputCommitCoordinator
21/06/06 21:32:16 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/06/06 21:32:16 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://3b232f9ed93b:4040
21/06/06 21:32:19 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
21/06/06 21:32:20 INFO TransportClientFactory: Successfully created connection to spark-master/172.21.0.5:7077 after 284 ms (0 ms spent in bootstraps)
21/06/06 21:32:23 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20210606213223-0000
21/06/06 21:32:23 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46539.
21/06/06 21:32:23 INFO NettyBlockTransferService: Server created on 3b232f9ed93b:46539
21/06/06 21:32:23 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/06/06 21:32:23 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:23 INFO BlockManagerMasterEndpoint: Registering block manager 3b232f9ed93b:46539 with 366.3 MiB RAM, BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:23 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:23 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 3b232f9ed93b, 46539, None)
21/06/06 21:32:25 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
21/06/06 21:32:29 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/opt/workspace/spark-warehouse').
21/06/06 21:32:29 INFO SharedState: Warehouse path is '/opt/workspace/spark-warehouse'.
ESTOY AQUI¿¿
21/06/06 21:33:09 INFO CodeGenerator: Code generated in 1925.0009 ms
21/06/06 21:33:09 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:0
21/06/06 21:33:09 INFO DAGScheduler: Got job 0 (showString at NativeMethodAccessorImpl.java:0) with 1 output partitions
21/06/06 21:33:09 INFO DAGScheduler: Final stage: ResultStage 0 (showString at NativeMethodAccessorImpl.java:0)
21/06/06 21:33:09 INFO DAGScheduler: Parents of final stage: List()
21/06/06 21:33:09 INFO DAGScheduler: Missing parents: List()
21/06/06 21:33:10 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0), which has no missing parents
21/06/06 21:33:10 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 11.3 KiB, free 366.3 MiB)
21/06/06 21:33:11 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 5.9 KiB, free 366.3 MiB)
21/06/06 21:33:11 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 3b232f9ed93b:46539 (size: 5.9 KiB, free: 366.3 MiB)
21/06/06 21:33:11 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1200
21/06/06 21:33:11 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[6] at showString at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
21/06/06 21:33:11 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
21/06/06 21:33:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:33:41 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:33:56 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:34:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/06/06 21:34:26 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
When I execute the same code inside a Jupyter notebook, it works without problems.
Is it because the path I have to indicate for the script should be the path where the script lives on the spark-master node? Or am I confusing things here?
I use
docker pull bitnami/spark
https://hub.docker.com/r/bitnami/spark

Spark Streaming job won't schedule additional work

Spark 2.1.1 built for Hadoop 2.7.3
Scala 2.11.11
Cluster has 3 Linux RHEL 7.3 Azure VMs, running Spark Standalone Deploy Mode (no YARN or Mesos, yet)
I have created a very simple Spark Streaming job using IntelliJ, written in Scala. I'm using Maven and building the job into a fat/uber jar that contains all dependencies.
When I run the job locally it works fine. If I copy the jar to the cluster and run it with a master of local[2] it also works fine. However, if I submit the job to the cluster master, it's as if it does not want to schedule additional work beyond the first task. The job starts up, grabs however many events are in the Azure Event Hub, processes them successfully, then never does any more work. It does not matter whether I submit the job to the master as just an application or whether it's submitted using supervised cluster mode; both do the same thing.
I've looked through all the logs I know of (master, driver (where applicable), and executor) and I am not seeing any errors or warnings that seem actionable. I've altered the log level, shown below, to show ALL/INFO/DEBUG and sifted through those logs without finding anything that seems relevant.
It may be worth noting that I had previously created several jobs that connect to Kafka, instead of the Azure Event Hub, using Java, and those jobs run in supervised cluster mode without an issue on this same cluster. This leads me to believe that the cluster configuration isn't an issue; it's either something with my code (below) or the Azure Event Hub.
Any thoughts on where I might check next to isolate this issue? Here is the code for my simple job.
Thanks in advance.
Note: conf.{name} indicates values I'm loading from a config file. I've tested loading and hard-coding them, both with the same result.
package streamingJob

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.eventhubs.EventHubsUtils
import org.joda.time.DateTime

object TestJob {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
    sparkConf.setAppName("TestJob")
    // Uncomment to run locally
    //sparkConf.setMaster("local[2]")

    val sparkContext = new SparkContext(sparkConf)
    sparkContext.setLogLevel("ERROR")

    val streamingContext: StreamingContext = new StreamingContext(sparkContext, Seconds(1))

    val readerParams = Map[String, String](
      "eventhubs.policyname" -> conf.policyname,
      "eventhubs.policykey" -> conf.policykey,
      "eventhubs.namespace" -> conf.namespace,
      "eventhubs.name" -> conf.name,
      "eventhubs.partition.count" -> conf.partitionCount,
      "eventhubs.consumergroup" -> conf.consumergroup
    )

    val eventData = EventHubsUtils.createDirectStreams(
      streamingContext,
      conf.namespace,
      conf.progressdir,
      Map("name" -> readerParams))

    eventData.foreachRDD(r => {
      r.foreachPartition { p => {
        p.foreach(d => {
          println(DateTime.now() + ": " + d)
        }) // end of EventData
      }} // foreachPartition
    }) // foreachRDD

    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
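For reference, since the snippet above refers to a conf object loaded from a config file, a minimal stand-in might look like the sketch below. It is purely illustrative: the field names simply mirror the usages above, and every value is a placeholder rather than anything from the original post.

package streamingJob

// Hypothetical stand-in for the poster's config loader; all values are placeholders.
object conf {
  val policyname     = "my-policy"
  val policykey      = "my-policy-key"
  val namespace      = "my-eventhubs-namespace"
  val name           = "my-eventhub"
  val partitionCount = "4"          // kept as a String because the reader params map expects Strings
  val consumergroup  = "$Default"
  val progressdir    = "/tmp/eventhubs-progress"
}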
Here is a set of logs from when I run this as an application, not cluster/supervised.
/spark/bin/spark-submit --class streamingJob.TestJob --master spark://{ip}:7077 --total-executor-cores 1 /spark/job-files/fatjar.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/06 17:52:04 INFO SparkContext: Running Spark version 2.1.1
17/11/06 17:52:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/06 17:52:05 INFO SecurityManager: Changing view acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing view acls groups to:
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls groups to:
17/11/06 17:52:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/11/06 17:52:06 INFO Utils: Successfully started service 'sparkDriver' on port 44384.
17/11/06 17:52:06 INFO SparkEnv: Registering MapOutputTracker
17/11/06 17:52:06 INFO SparkEnv: Registering BlockManagerMaster
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/06 17:52:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b5e2c0f3-2500-42c6-b057-cf5d368580ab
17/11/06 17:52:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/11/06 17:52:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/06 17:52:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/06 17:52:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{ip}:4040
17/11/06 17:52:06 INFO SparkContext: Added JAR file:/spark/job-files/fatjar.jar at spark://{ip}:44384/jars/fatjar.jar with timestamp 1509990726989
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://{ip}:7077...
17/11/06 17:52:07 INFO TransportClientFactory: Successfully created connection to /{ip}:7077 after 72 ms (0 ms spent in bootstraps)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20171106175207-0000
17/11/06 17:52:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44624.
17/11/06 17:52:07 INFO NettyBlockTransferService: Server created on {ip}:44624
17/11/06 17:52:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20171106175207-0000/0 on worker-20171106173151-{ip}-46086 ({ip}:46086) with 1 cores
17/11/06 17:52:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Granted executor ID app-20171106175207-0000/0 on hostPort {ip}:46086 with 1 cores, 1024.0 MB RAM
17/11/06 17:52:07 INFO BlockManagerMasterEndpoint: Registering block manager {ip}:44624 with 366.3 MB RAM, BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20171106175207-0000/0 is now RUNNING
17/11/06 17:52:08 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0

Hortonworks, Eclipse and Kerberos Client (Authentication, HOW?)

Hello everybody, we have a kerberized HDP (Hortonworks) cluster. We can run Spark jobs from spark-submit (CLI) and Talend Big Data, but not from Eclipse.
We have a Windows client machine where Eclipse is installed and the MIT Kerberos client for Windows is configured (TGT configuration). The goal is to run a Spark job from Eclipse. The portion of the Java code related to Spark is operational and has been tested via the CLI. The relevant part of the job's code is below.
private void setConfigurationProperties()
{
    try {
        // sConfig is a SparkConf field of the enclosing class
        sConfig.setAppName("abcd-name");
        sConfig.setMaster("yarn-client");
        sConfig.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
        sConfig.set("spark.hadoop.yarn.resourcemanager.address", "rs.abcd.com:8032");
        sConfig.set("spark.hadoop.yarn.resourcemanager.scheduler.address", "rs.abcd.com:8030");
        sConfig.set("spark.hadoop.mapreduce.jobhistory.address", "rs.abcd.com:10020");
        sConfig.set("spark.hadoop.yarn.app.mapreduce.am.staging-dir", "/dir");
        sConfig.set("spark.executor.memory", "2g");
        sConfig.set("spark.executor.cores", "4");
        sConfig.set("spark.executor.instances", "24");
        sConfig.set("spark.yarn.am.cores", "24");
        sConfig.set("spark.yarn.am.memory", "16g");
        sConfig.set("spark.eventLog.enabled", "true");
        sConfig.set("spark.eventLog.dir", "hdfs:///spark-history");
        sConfig.set("spark.shuffle.memoryFraction", "0.4");
        sConfig.set("spark.hadoop." + "mapreduce.application.framework.path", "/hdp/apps/version/mapreduce/mapreduce.tar.gz#mr-framework");
        sConfig.set("spark.local.dir", "/tmp");
        sConfig.set("spark.hadoop.yarn.resourcemanager.principal", "rm/_HOST@ABCD.COM");
        sConfig.set("spark.hadoop.mapreduce.jobhistory.principal", "jhs/_HOST@ABCD.COM");
        sConfig.set("spark.hadoop.dfs.namenode.kerberos.principal", "nn/_HOST@ABCD.COM");
        sConfig.set("spark.hadoop.fs.defaultFS", "hdfs://hdfs.abcd.com:8020");
        sConfig.set("spark.hadoop.dfs.client.use.datanode.hostname", "true");
    } catch (Exception e) {
        // Placeholder handler; the catch block was not included in the posted snippet.
        e.printStackTrace();
    }
}
When we run the code the following error pops up:
17/04/05 23:37:06 INFO Remoting: Starting remoting
17/04/05 23:37:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#1.1.1.1:54356]
17/04/05 23:37:06 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54356.
17/04/05 23:37:06 INFO SparkEnv: Registering MapOutputTracker
17/04/05 23:37:06 INFO SparkEnv: Registering BlockManagerMaster
17/04/05 23:37:06 INFO DiskBlockManager: Created local directory at C:\tmp\blockmgr-baee2441-1977-4410-b52f-4275ff35d6c1
17/04/05 23:37:06 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
17/04/05 23:37:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/04/05 23:37:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/04/05 23:37:07 INFO SparkUI: Started SparkUI at http://1.1.1.1:4040
17/04/05 23:37:07 INFO RMProxy: Connecting to ResourceManager at rs.abcd.com/1.1.1.1:8032
17/04/05 23:37:07 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
17/04/05 23:37:07 INFO SparkUI: Stopped Spark web UI at http://1.1.1.1:4040
Please guide us on how to specify the Kerberos authentication method (instead of SIMPLE) in the Java code, or how to instruct the client to request Kerberos authentication, and what the overall process and the right approach should look like.
Thank you
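One common building block for this, shown here only as a hedged sketch rather than a verified fix for this cluster, is Hadoop's UserGroupInformation API: configure the client for Kerberos and log in from a keytab before the SparkContext is created. The example is written in Scala, but the same calls work from Java; the principal and keytab path are placeholders.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

object KerberosLoginSketch {
  def main(args: Array[String]): Unit = {
    val hadoopConf = new Configuration()
    // Ask the Hadoop client for Kerberos rather than SIMPLE authentication.
    hadoopConf.set("hadoop.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(hadoopConf)
    // Placeholder principal and keytab path; the keytab must exist on the client machine.
    UserGroupInformation.loginUserFromKeytab("user@ABCD.COM", "C:/keytabs/user.keytab")
    // ...then build the SparkConf / SparkContext as in the snippet above...
  }
}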

When reading text file from file system, Spark still tries to connect to HDFS

I just created a DC/OS cluster and am trying to run a simple Spark task that reads data from /mnt/mesos/sandbox.
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Simple Application")
    println("STARTING JOB!")
    val sc = new SparkContext(conf)
    val rdd = sc.textFile("file:///mnt/mesos/sandbox/foo")
    println(rdd.count)
    println("ENDING JOB!")
  }
}
And I'm deploying the app using
dcos spark run --submit-args='--conf spark.mesos.uris=https://dripit-spark.s3.amazonaws.com/foo --class SimpleApp https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' --verbose
Unfortunately, the task keeps failing with the following exception:
I0701 18:47:35.782994 30997 logging.cpp:188] INFO level logging started!
I0701 18:47:35.783197 30997 fetcher.cpp:424] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foobar-assembly-1.0.jar"}},{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foo"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2\/frameworks\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002\/executors\/driver-20160701184530-0001\/runs\/67b94f34-a9d3-4662-bedc-8578381e9305"}
I0701 18:47:35.784752 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784791 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:35.784818 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784835 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
W0701 18:47:36.057448 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar
I0701 18:47:36.057673 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
I0701 18:47:36.057696 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057714 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:36.057741 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057770 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
W0701 18:47:36.114565 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foo
I0701 18:47:36.114600 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
I0701 18:47:36.307576 31006 exec.cpp:143] Version: 0.28.1
I0701 18:47:36.310127 31022 exec.cpp:217] Executor registered on slave c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2
16/07/01 18:47:37 INFO SparkContext: Running Spark version 1.6.1
16/07/01 18:47:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/01 18:47:37 WARN SparkConf:
SPARK_JAVA_OPTS was detected (set to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with conf/spark-defaults.conf to set defaults for an application
- ./spark-submit with --driver-java-options to set -X options for a driver
- spark.executor.extraJavaOptions to set -X options for executors
- SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 INFO SecurityManager: Changing view acls to: root
16/07/01 18:47:37 INFO SecurityManager: Changing modify acls to: root
16/07/01 18:47:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 18:47:37 INFO Utils: Successfully started service 'sparkDriver' on port 47358.
16/07/01 18:47:38 INFO Slf4jLogger: Slf4jLogger started
16/07/01 18:47:38 INFO Remoting: Starting remoting
16/07/01 18:47:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem#10.0.1.107:54467]
16/07/01 18:47:38 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54467.
16/07/01 18:47:38 INFO SparkEnv: Registering MapOutputTracker
16/07/01 18:47:38 INFO SparkEnv: Registering BlockManagerMaster
16/07/01 18:47:38 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-96092a9a-3164-4d65-8c0b-df5403abb056
16/07/01 18:47:38 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/07/01 18:47:38 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SelectChannelConnector#0.0.0.0:4040
16/07/01 18:47:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/01 18:47:38 INFO SparkUI: Started SparkUI at http://10.0.1.107:4040
16/07/01 18:47:38 INFO HttpFileServer: HTTP File server directory is /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:38 INFO HttpServer: Starting HTTP Server
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SocketConnector#0.0.0.0:49074
16/07/01 18:47:38 INFO Utils: Successfully started service 'HTTP file server' on port 49074.
16/07/01 18:47:38 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/foobar-assembly-1.0.jar at http://10.0.1.107:49074/jars/foobar-assembly-1.0.jar with timestamp 1467398858626
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#716: Client environment:host.name=ip-10-0-1-107.eu-west-1.compute.internal
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#723: Client environment:os.name=Linux
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#724: Client environment:os.arch=4.1.7-coreos-r1
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#725: Client environment:os.version=#2 SMP Thu Nov 5 02:10:23 UTC 2015
I0701 18:47:38.778355 103 sched.cpp:164] Version: 0.25.0
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#733: Client environment:user.name=(null)
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#741: Client environment:user.home=/root
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#log_env#753: Client environment:user.dir=/opt/spark/dist
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO#zookeeper_init#786: Initiating client connection, host=master.mesos:2181 sessionTimeout=10000 watcher=0x7f74d587c600 sessionId=0 sessionPasswd=<null> context=0x7f7540003f70 flags=0
2016-07-01 18:47:38,786:6(0x7f74c6ec0700):ZOO_INFO#check_events#1703: initiated connection to server [10.0.7.83:2181]
2016-07-01 18:47:38,787:6(0x7f74c6ec0700):ZOO_INFO#check_events#1750: session establishment complete on server [10.0.7.83:2181], sessionId=0x155a57d07f60050, negotiated timeout=10000
I0701 18:47:38.788107 99 group.cpp:331] Group process (group(1)#10.0.1.107:35064) connected to ZooKeeper
I0701 18:47:38.788147 99 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0701 18:47:38.788162 99 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I0701 18:47:38.789402 99 detector.cpp:156] Detected a new leader: (id='1')
I0701 18:47:38.789512 99 group.cpp:674] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I0701 18:47:38.790228 99 detector.cpp:481] A new leading master (UPID=master#10.0.7.83:5050) is detected
I0701 18:47:38.790293 99 sched.cpp:262] New master detected at master#10.0.7.83:5050
I0701 18:47:38.790473 99 sched.cpp:272] No credentials provided. Attempting to register without authentication
I0701 18:47:38.792147 97 sched.cpp:641] Framework registered with c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO CoarseMesosSchedulerBackend: Registered as framework ID c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38752.
16/07/01 18:47:38 INFO NettyBlockTransferService: Server created on 38752
16/07/01 18:47:38 INFO BlockManagerMaster: Trying to register BlockManager
16/07/01 18:47:38 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.1.107:38752 with 511.1 MB RAM, BlockManagerId(driver, 10.0.1.107, 38752)
16/07/01 18:47:38 INFO BlockManagerMaster: Registered BlockManager
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 117.2 KB, free 117.2 KB)
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.6 KB, free 129.8 KB)
16/07/01 18:47:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.1.107:38752 (size: 12.6 KB, free: 511.1 MB)
16/07/01 18:47:39 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.scala:13
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 4 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now TASK_RUNNING
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:124)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:74)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:65)
at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:152)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:427)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
at SimpleApp$.main(SimpleApp.scala:15)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:786)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:123)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: namenode1.hdfs.mesos
... 48 more
16/07/01 18:47:39 INFO SparkContext: Invoking stop() from shutdown hook
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/01 18:47:40 INFO SparkUI: Stopped Spark web UI at http://10.0.1.107:4040
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Shutting down all executors
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Asking each executor to shut down
I0701 18:47:40.051103 111 sched.cpp:1771] Asked to stop the driver
I0701 18:47:40.051283 96 sched.cpp:1040] Stopping framework 'c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001'
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
16/07/01 18:47:40 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/01 18:47:40 INFO MemoryStore: MemoryStore cleared
16/07/01 18:47:40 INFO BlockManager: BlockManager stopped
16/07/01 18:47:40 INFO BlockManagerMaster: BlockManagerMaster stopped
16/07/01 18:47:40 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/01 18:47:40 INFO SparkContext: Successfully stopped SparkContext
16/07/01 18:47:40 INFO ShutdownHookManager: Shutdown hook called
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75
Why is Spark trying to connect to HDFS even though the file scheme is explicitly set to file://?
I thought that sc.textFile("file:///") doesn't require an HDFS setup.
Spark always uses the Hadoop API to access a file, regardless of whether that file is local or in HDFS.
I think the problem is that your Spark is inheriting an invalid HDFS configuration and hitting this bug: https://issues.apache.org/jira/browse/SPARK-11227
You should try some of the workarounds in that ticket to see if they work for you:
Use an older Spark < 1.5.0
Disable HA in the HDFS configuration.
Spark will still use HDFS to write the intermediate results of the stages (in your case, I guess the partial counts).
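As an illustrative workaround only (assuming the job genuinely never needs HDFS, which may not hold given the note above about intermediate results), you could also try overriding the inherited default filesystem on the driver's Hadoop configuration before reading the file. A sketch:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleAppLocalFs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Simple Application"))
    // Point the default filesystem at the local FS so the Hadoop input path code
    // does not try to resolve the (broken) HDFS HA configuration it inherited.
    sc.hadoopConfiguration.set("fs.defaultFS", "file:///")
    val rdd = sc.textFile("file:///mnt/mesos/sandbox/foo")
    println(rdd.count())
  }
}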

Unable to connect to Spark master

I start my DataStax cassandra instance with Spark:
dse cassandra -k
I then run this program (from within Eclipse):
import org.apache.spark.sql.SQLContext
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object Start {
  def main(args: Array[String]): Unit = {
    println("***** 1 *****")
    val sparkConf = new SparkConf().setAppName("Start").setMaster("spark://127.0.0.1:7077")
    println("***** 2 *****")
    val sparkContext = new SparkContext(sparkConf)
    println("***** 3 *****")
  }
}
And I get the following output
***** 1 *****
***** 2 *****
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/12/29 15:27:50 INFO SparkContext: Running Spark version 1.5.2
15/12/29 15:27:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/29 15:27:51 INFO SecurityManager: Changing view acls to: nayan
15/12/29 15:27:51 INFO SecurityManager: Changing modify acls to: nayan
15/12/29 15:27:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(nayan); users with modify permissions: Set(nayan)
15/12/29 15:27:52 INFO Slf4jLogger: Slf4jLogger started
15/12/29 15:27:52 INFO Remoting: Starting remoting
15/12/29 15:27:53 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver#10.0.1.88:55126]
15/12/29 15:27:53 INFO Utils: Successfully started service 'sparkDriver' on port 55126.
15/12/29 15:27:53 INFO SparkEnv: Registering MapOutputTracker
15/12/29 15:27:53 INFO SparkEnv: Registering BlockManagerMaster
15/12/29 15:27:53 INFO DiskBlockManager: Created local directory at /private/var/folders/pd/6rxlm2js10gg6xys5wm90qpm0000gn/T/blockmgr-21a96671-c33e-498c-83a4-bb5c57edbbfb
15/12/29 15:27:53 INFO MemoryStore: MemoryStore started with capacity 983.1 MB
15/12/29 15:27:53 INFO HttpFileServer: HTTP File server directory is /private/var/folders/pd/6rxlm2js10gg6xys5wm90qpm0000gn/T/spark-fce0a058-9264-4f2c-8220-c32d90f11bd8/httpd-2a0efcac-2426-49c5-982a-941cfbb48c88
15/12/29 15:27:53 INFO HttpServer: Starting HTTP Server
15/12/29 15:27:53 INFO Utils: Successfully started service 'HTTP file server' on port 55127.
15/12/29 15:27:53 INFO SparkEnv: Registering OutputCommitCoordinator
15/12/29 15:27:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/12/29 15:27:53 INFO SparkUI: Started SparkUI at http://10.0.1.88:4040
15/12/29 15:27:54 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/12/29 15:27:54 INFO AppClient$ClientEndpoint: Connecting to master spark://127.0.0.1:7077...
15/12/29 15:27:54 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster#127.0.0.1:7077] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
15/12/29 15:28:14 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask#1f22aef0 rejected from java.util.concurrent.ThreadPoolExecutor#176cb4af[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
So something is happening during the creation of the Spark context.
When I look in $DSE_HOME/logs/spark, it is empty. Not sure where else to look.
It turns out that the problem was the Spark library version AND the Scala version. DataStax was running Spark 1.4.1 and Scala 2.10.5, while my Eclipse project was using 1.5.2 and 2.11.7 respectively.
Note that BOTH the Spark library version and the Scala version appear to have to match. I tried other combinations, but it only worked when both matched.
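For example, if the project were built with sbt (the original poster used Eclipse, so this is only a sketch of the idea), matching the DataStax versions would look roughly like:

// build.sbt: keep both the Scala version and the Spark artifact version aligned with the cluster
scalaVersion := "2.10.5"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.4.1" % "provided"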
I am getting pretty familiar with this part of your posted error:
WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://...
It can have numerous causes, pretty much all related to misconfigured IPs. First I would do whatever zero323 says, then here's my two cents: I have solved my own problems recently by using IP addresses, not hostnames, and the only config I use in a simple standalone cluster is SPARK_MASTER_IP.
Setting SPARK_MASTER_IP in $SPARK_HOME/conf/spark-env.sh on your master should then lead the master web UI to show the IP address you set:
spark://your.ip.address.numbers:7077
And your SparkConf setup can refer to that.
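For instance, the setMaster line in the Start object above would then point at that address (the IP below is only a placeholder, borrowed from the posted log; use whatever the master web UI actually reports):

// Use the IP the master web UI shows, not a hostname or 127.0.0.1.
val sparkConf = new SparkConf().setAppName("Start").setMaster("spark://10.0.1.88:7077")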
Having said that, I am not familiar with your specific implementation but I notice in the error two occurrences containing:
/private/var/folders/pd/6rxlm2js10gg6xys5wm90qpm0000gn/T/
Have you looked there to see if there's a logs directory? Is that where $DSE_HOME points? Alternatively, connect to the driver where it creates its web UI:
INFO SparkUI: Started SparkUI at http://10.0.1.88:4040
and you should see a link to an error log there somewhere.
More on the IP vs. hostname thing, this very old bug is marked as Resolved but I have not figured out what they mean by Resolved, so I just tend toward IP addresses.