I apologize for the very long post, but I wanted to make the problem as clear as possible.
I have set up my cluster so that the master runs on a different machine than the workers. The workers are allocated on a fairly powerful machine, and there is no firewall between the two machines.
URL: spark://MASTER_IP:7077
Workers: 10
Cores: 10 Total, 0 Used
Memory: 40.0 GB Total, 0.0 B Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
Before launching the app, the worker log file shows the following (an example for one worker):
15/03/06 18:52:19 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/03/06 18:52:19 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:52:19 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:52:19 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:52:20 INFO Slf4jLogger: Slf4jLogger started
15/03/06 18:52:20 INFO Remoting: Starting remoting
15/03/06 18:52:20 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240]
15/03/06 18:52:20 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240]
15/03/06 18:52:20 INFO Utils: Successfully started service 'sparkWorker' on port 42240.
15/03/06 18:52:20 INFO Worker: Starting Spark worker WORKER_MACHINE_IP:42240 with 1 cores, 4.0 GB RAM
15/03/06 18:52:20 INFO Worker: Spark home: /home/szymon/spark
15/03/06 18:52:20 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/03/06 18:52:20 INFO WorkerWebUI: Started WorkerWebUI at http://WORKER_MACHINE_IP:8081
15/03/06 18:52:20 INFO Worker: Connecting to master spark://MASTER_IP:7077...
15/03/06 18:52:20 INFO Worker: Successfully registered with master spark://MASTER_IP:7077
I launch my application on the cluster (from the master machine):
./bin/spark-submit --class SimpleApp --master spark://MASTER_IP:7077 --executor-memory 3g --total-executor-cores 10 code/trial_2.11-0.9.jar
My app is then fetched by the workers; here is an example of the log output for one worker (on WORKER_MACHINE):
15/03/06 18:07:45 INFO ExecutorRunner: Launch command: "/usr/java/jdk1.8.0_31/bin/java" "-cp" "::/home/machine/spark/conf:/home/machine/spark/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar" "-Dspark.driver.port=56753" "-Dlog4j.configuration=file:////home/machine/spark/conf/log4j.properties" "-Dspark.driver.host=MASTER_IP" "-Xms3072M" "-Xmx3072M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://sparkDriver@MASTER_IP:56753/user/CoarseGrainedScheduler" "4" "WORKER_MACHINE_IP" "1" "app-20150306181450-0000" "akka.tcp://sparkWorker@WORKER_MACHINE_IP:45288/user/Worker"
The app tries to connect to localhost at address 127.0.0.1 instead of MASTER_IP (I believe).
How can this be fixed?
15/03/06 18:58:52 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to localhost/127.0.0.1:56545
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
The problem is caused by the createClient method in TransportClientFactory (found in spark-network-common_2.10-1.2.1-sources.jar): the String remoteHost it receives is set to localhost.
/**
 * Create a {@link TransportClient} connecting to the given remote host / port.
 *
 * We maintain an array of clients (size determined by spark.shuffle.io.numConnectionsPerPeer)
 * and randomly pick one to use. If no client was previously created in the randomly selected
 * spot, this function creates a new client and places it there.
 *
 * Prior to the creation of a new TransportClient, we will execute all
 * {@link TransportClientBootstrap}s that are registered with this factory.
 *
 * This blocks until a connection is successfully established and fully bootstrapped.
 *
 * Concurrency: This method is safe to call from multiple threads.
 */
public TransportClient createClient(String remoteHost, int remotePort) throws IOException {
  // Get connection from the connection pool first.
  // If it is not found or not active, create a new one.
  final InetSocketAddress address = new InetSocketAddress(remoteHost, remotePort);
  ...
  clientPool.clients[clientIndex] = createClient(address);
}
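To make the behaviour that javadoc describes concrete, here is a tiny illustrative sketch (my own code, not Spark's actual implementation; a String stands in for TransportClient and all names are invented):

import scala.collection.mutable.ArrayBuffer
import scala.util.Random

// Fixed-size pool, one slot per allowed connection to a peer
// (spark.shuffle.io.numConnectionsPerPeer in Spark's case).
final class ClientPool(numConnectionsPerPeer: Int, connect: () => String) {
  private val slots: ArrayBuffer[Option[String]] =
    ArrayBuffer.fill(numConnectionsPerPeer)(Option.empty[String])

  def createClient(): String = {
    val i = Random.nextInt(numConnectionsPerPeer) // pick a random slot
    slots(i).getOrElse {
      val c = connect()   // no client in this slot yet: create one
      slots(i) = Some(c)  // ...and cache it there for later calls
      c
    }
  }
}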
Here is the spark-env.sh file on the worker machine:
export SPARK_HOME=/home/szymon/spark
export SPARK_MASTER_IP=MASTER_IP
export SPARK_MASTER_WEBUI_PORT=8081
export SPARK_LOCAL_IP=WORKER_MACHINE_IP
export SPARK_DRIVER_HOST=WORKER_MACHINE_IP
export SPARK_LOCAL_DIRS=/home/szymon/spark/slaveData
export SPARK_WORKER_INSTANCES=10
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=4g
export SPARK_WORKER_DIR=/home/szymon/spark/work
And on the master
export SPARK_MASTER_IP=MASTER_IP
export SPARK_LOCAL_IP=MASTER_IP
export SPARK_MASTER_WEBUI_PORT=8081
export SPARK_JAVA_OPTS="-Dlog4j.configuration=file:////home/szymon/spark/conf/log4j.properties -Dspark.driver.host=MASTER_IP"
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=10"
This is the full log output with more details
15/03/06 18:58:50 INFO Worker: Asked to launch executor app-20150306190555-0000/0 for Simple Application
15/03/06 18:58:50 INFO ExecutorRunner: Launch command: "/usr/java/jdk1.8.0_31/bin/java" "-cp" "::/home/szymon/spark/conf:/home/szymon/spark/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar" "-Dspark.driver.port=49407" "-Dlog4j.configuration=file:////home/szymon/spark/conf/log4j.properties" "-Dspark.driver.host=MASTER_IP" "-Xms3072M" "-Xmx3072M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://sparkDriver@MASTER_IP:49407/user/CoarseGrainedScheduler" "0" "WORKER_MACHINE_IP" "1" "app-20150306190555-0000" "akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240/user/Worker"
15/03/06 18:58:50 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/03/06 18:58:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/03/06 18:58:51 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:58:51 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:58:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:58:51 INFO Slf4jLogger: Slf4jLogger started
15/03/06 18:58:51 INFO Remoting: Starting remoting
15/03/06 18:58:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@WORKER_MACHINE_IP:52038]
15/03/06 18:58:51 INFO Utils: Successfully started service 'driverPropsFetcher' on port 52038.
15/03/06 18:58:52 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/03/06 18:58:52 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:58:52 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/03/06 18:58:52 INFO Slf4jLogger: Slf4jLogger started
15/03/06 18:58:52 INFO Remoting: Starting remoting
15/03/06 18:58:52 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/03/06 18:58:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@WORKER_MACHINE_IP:37114]
15/03/06 18:58:52 INFO Utils: Successfully started service 'sparkExecutor' on port 37114.
15/03/06 18:58:52 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@MASTER_IP:49407/user/CoarseGrainedScheduler
15/03/06 18:58:52 INFO WorkerWatcher: Connecting to worker akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240/user/Worker
15/03/06 18:58:52 INFO WorkerWatcher: Successfully connected to akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240/user/Worker
15/03/06 18:58:52 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
15/03/06 18:58:52 INFO Executor: Starting executor ID 0 on host WORKER_MACHINE_IP
15/03/06 18:58:52 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:58:52 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver@MASTER_IP:49407/user/MapOutputTracker
15/03/06 18:58:52 INFO AkkaUtils: Connecting to BlockManagerMaster: akka.tcp://sparkDriver@MASTER_IP:49407/user/BlockManagerMaster
15/03/06 18:58:52 INFO DiskBlockManager: Created local directory at /home/szymon/spark/slaveData/spark-b09c3727-8559-4ab8-ab32-1f5ecf7aeaf2/spark-0c892a4d-c8b9-4144-a259-8077f5316b52/spark-89577a43-fb43-4a12-a305-34b267b01f8a/spark-7ad207c4-9d37-42eb-95e4-7b909b71c687
15/03/06 18:58:52 INFO MemoryStore: MemoryStore started with capacity 1589.8 MB
15/03/06 18:58:52 INFO NettyBlockTransferService: Server created on 51205
15/03/06 18:58:52 INFO BlockManagerMaster: Trying to register BlockManager
15/03/06 18:58:52 INFO BlockManagerMaster: Registered BlockManager
15/03/06 18:58:52 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@MASTER_IP:49407/user/HeartbeatReceiver
15/03/06 18:58:52 INFO CoarseGrainedExecutorBackend: Got assigned task 0
15/03/06 18:58:52 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/03/06 18:58:52 INFO Executor: Fetching http://MASTER_IP:57850/jars/trial_2.11-0.9.jar with timestamp 1425665154479
15/03/06 18:58:52 INFO Utils: Fetching http://MASTER_IP:57850/jars/trial_2.11-0.9.jar to /home/szymon/spark/slaveData/spark-b09c3727-8559-4ab8-ab32-1f5ecf7aeaf2/spark-0c892a4d-c8b9-4144-a259-8077f5316b52/spark-411cd372-224e-44c1-84ab-b0c3984a6361/fetchFileTemp7857926599487994869.tmp
15/03/06 18:58:52 INFO Utils: Copying /home/szymon/spark/slaveData/spark-b09c3727-8559-4ab8-ab32-1f5ecf7aeaf2/spark-0c892a4d-c8b9-4144-a259-8077f5316b52/spark-411cd372-224e-44c1-84ab-b0c3984a6361/-19284804851425665154479_cache to /home/szymon/spark/work/app-20150306190555-0000/0/./trial_2.11-0.9.jar
15/03/06 18:58:52 INFO Executor: Adding file:/home/szymon/spark/work/app-20150306190555-0000/0/./trial_2.11-0.9.jar to class loader
15/03/06 18:58:52 INFO TorrentBroadcast: Started reading broadcast variable 0
15/03/06 18:58:52 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to localhost/127.0.0.1:56545
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
at org.apache.spark.network.shuffle.RetryingBlockFetcher.start(RetryingBlockFetcher.java:120)
at org.apache.spark.network.netty.NettyBlockTransferService.fetchBlocks(NettyBlockTransferService.scala:87)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:89)
at org.apache.spark.storage.BlockManager$$anonfun$doGetRemote$2.apply(BlockManager.scala:595)
at org.apache.spark.storage.BlockManager$$anonfun$doGetRemote$2.apply(BlockManager.scala:593)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.storage.BlockManager.doGetRemote(BlockManager.scala:593)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:587)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.org$apache$spark$broadcast$TorrentBroadcast$$anonfun$$getRemote$1(TorrentBroadcast.scala:126)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1$$anonfun$1.apply(TorrentBroadcast.scala:136)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1$$anonfun$1.apply(TorrentBroadcast.scala:136)
at scala.Option.orElse(Option.scala:257)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply$mcVI$sp(TorrentBroadcast.scala:136)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:119)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:119)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$readBlocks(TorrentBroadcast.scala:119)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:174)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1090)
at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:164)
at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:87)
at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:58)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:56545
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:208)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:287)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
... 1 more
15/03/06 18:58:52 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
15/03/06 18:58:57 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks (after 1 retries)
java.io.IOException: Failed to connect to localhost/127.0.0.1:56545
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:56545
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:208)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:287)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
... 1 more
...
15/03/06 19:00:22 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
15/03/06 19:00:24 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@WORKER_MACHINE_IP:37114] -> [akka.tcp://sparkDriver@MASTER_IP:49407] disassociated! Shutting down.
15/03/06 19:00:24 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@MASTER_IP:49407] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/03/06 19:00:24 INFO Worker: Asked to kill executor app-20150306190555-0000/0
15/03/06 19:00:24 INFO ExecutorRunner: Runner thread for executor app-20150306190555-0000/0 interrupted
15/03/06 19:00:24 INFO ExecutorRunner: Killing process!
15/03/06 19:00:25 INFO Worker: Executor app-20150306190555-0000/0 finished with state KILLED exitStatus 1
15/03/06 19:00:25 INFO Worker: Cleaning up local directories for application app-20150306190555-0000
15/03/06 19:00:25 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@WORKER_MACHINE_IP:37114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/03/06 19:00:25 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkWorker/deadLetters] to Actor[akka://sparkWorker/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkWorker%40WORKER_MACHINE_IP%3A45806-2#1549100100] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
There is also a warning, which I believe is not related to this issue:
15/03/06 18:07:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
You can try to set conf.set("spark.driver.host", "<client-host>"); the client host is the host where you start spark-shell or your script.
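For instance, a minimal sketch of that setting in the driver application (my example, not from the original answer), assuming the driver runs on the master machine and MASTER_IP is an address the workers can route to:

import org.apache.spark.{SparkConf, SparkContext}

// spark.driver.host must be an address the executors can reach;
// if it resolves to localhost/127.0.0.1, block fetches fail exactly
// as in the log above.
val conf = new SparkConf()
  .setAppName("Simple Application")
  .setMaster("spark://MASTER_IP:7077")
  .set("spark.driver.host", "MASTER_IP")
val sc = new SparkContext(conf)

The same setting can also be passed at launch time with --conf spark.driver.host=MASTER_IP on spark-submit.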
Related
I am having a problem connecting to MongoDB using Spark Structured Streaming.
Here is my Python code:
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
# from lib.logger import Log4j

if __name__ == "__main__":
    spark = (SparkSession
             .builder
             .appName("Streaming from mongo db")
             .master("local[3]")
             .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_10.0.5:3.3.1')
             .config("spark.streaming.stopGracefullyOnShutdown", "true")
             # .config("spark.sql.shuffle.partitions", 3)
             .getOrCreate())

    read_from_mongo = (spark
                       .readStream
                       .format("mongodb")
                       .option("uri", "mongodb://admin:admin@localhost:27017")
                       .option("database", "first_db")
                       .option("collection", "first_collection")
                       .load()
                       .writeStream
                       .format("console")
                       .trigger(continuous="1 second")
                       .outputMode("append"))

    y = read_from_mongo.start()
I am running the script using spark-submit file_name.py.
The output I'm getting is the following:
22/11/16 16:04:21 INFO SparkContext: Running Spark version 3.3.1
22/11/16 16:04:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/16 16:04:21 INFO ResourceUtils: ==============================================================
22/11/16 16:04:21 INFO ResourceUtils: No custom resources configured for spark.driver.
22/11/16 16:04:21 INFO ResourceUtils: ==============================================================
22/11/16 16:04:21 INFO SparkContext: Submitted application: Streaming from mongo db
22/11/16 16:04:21 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
22/11/16 16:04:21 INFO ResourceProfile: Limiting resource is cpu
22/11/16 16:04:21 INFO ResourceProfileManager: Added ResourceProfile id: 0
22/11/16 16:04:21 INFO SecurityManager: Changing view acls to: mustaphaaminedebbih
22/11/16 16:04:21 INFO SecurityManager: Changing modify acls to: mustaphaaminedebbih
22/11/16 16:04:21 INFO SecurityManager: Changing view acls groups to:
22/11/16 16:04:21 INFO SecurityManager: Changing modify acls groups to:
22/11/16 16:04:21 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mustaphaaminedebbih); groups with view permissions: Set(); users with modify permissions: Set(mustaphaaminedebbih); groups with modify permissions: Set()
22/11/16 16:04:21 INFO Utils: Successfully started service 'sparkDriver' on port 52063.
22/11/16 16:04:21 INFO SparkEnv: Registering MapOutputTracker
22/11/16 16:04:21 INFO SparkEnv: Registering BlockManagerMaster
22/11/16 16:04:21 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/11/16 16:04:21 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/11/16 16:04:21 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
22/11/16 16:04:21 INFO DiskBlockManager: Created local directory at /private/var/folders/yt/jz2t42md7qx68kjlydk19x5w0000gn/T/blockmgr-53c25f51-35a7-44ab-bc1d-bb16a7ae050c
22/11/16 16:04:22 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
22/11/16 16:04:22 INFO SparkEnv: Registering OutputCommitCoordinator
22/11/16 16:04:22 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/11/16 16:04:22 INFO Executor: Starting executor ID driver on host 192.168.9.44
22/11/16 16:04:22 INFO Executor: Starting executor with user classpath (userClassPathFirst = false): ''
22/11/16 16:04:22 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52065.
22/11/16 16:04:22 INFO NettyBlockTransferService: Server created on 192.168.9.44:52065
22/11/16 16:04:22 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/11/16 16:04:22 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.9.44, 52065, None)
22/11/16 16:04:22 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.9.44:52065 with 434.4 MiB RAM, BlockManagerId(driver, 192.168.9.44, 52065, None)
22/11/16 16:04:22 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.9.44, 52065, None)
22/11/16 16:04:22 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.9.44, 52065, None)
22/11/16 16:04:22 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
22/11/16 16:04:22 INFO SharedState: Warehouse path is 'file:/Users/mustaphaaminedebbih/Desktop/Scrambling/Spark%20Streaming/Streaming%20From%20Mongodb/spark-warehouse'.
Traceback (most recent call last):
File "/Users/mustaphaaminedebbih/Desktop/Scrambling/Spark Streaming/Streaming From Mongodb/test.py", line 19, in
read_from_mongo = (spark
File "/Users/mustaphaaminedebbih/spark3/spark-3.3.1-bin-hadoop3/python/lib/pyspark.zip/pyspark/sql/streaming.py", line 469, in load
File "/Users/mustaphaaminedebbih/spark3/spark-3.3.1-bin-hadoop3/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in call
File "/Users/mustaphaaminedebbih/spark3/spark-3.3.1-bin-hadoop3/python/lib/pyspark.zip/pyspark/sql/utils.py", line 190, in deco
File "/Users/mustaphaaminedebbih/spark3/spark-3.3.1-bin-hadoop3/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o34.load.
java.lang.ClassNotFoundException:
Failed to find data source: mongodb. Please find packages at
https://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.errors.QueryExecutionErrors$.failedToFindDataSourceError(QueryExecutionErrors.scala:587)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:675)
at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:157)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:144)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.lang.ClassNotFoundException: mongodb.DefaultSource
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:435)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:589)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:661)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:661)
at scala.util.Failure.orElse(Try.scala:224)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:661)
... 14 more
22/11/16 16:04:23 INFO SparkContext: Invoking stop() from shutdown hook
22/11/16 16:04:23 INFO SparkUI: Stopped Spark web UI at http://192.168.9.44:4040
22/11/16 16:04:23 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/11/16 16:04:23 INFO MemoryStore: MemoryStore cleared
22/11/16 16:04:23 INFO BlockManager: BlockManager stopped
22/11/16 16:04:23 INFO BlockManagerMaster: BlockManagerMaster stopped
22/11/16 16:04:23 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/11/16 16:04:23 INFO SparkContext: Successfully stopped SparkContext
22/11/16 16:04:23 INFO ShutdownHookManager: Shutdown hook called
22/11/16 16:04:23 INFO ShutdownHookManager: Deleting directory /private/var/folders/yt/jz2t42md7qx68kjlydk19x5w0000gn/T/spark-2ac1c680-c1d8-46e1-bfae-40c82cee015f
22/11/16 16:04:23 INFO ShutdownHookManager: Deleting directory /private/var/folders/yt/jz2t42md7qx68kjlydk19x5w0000gn/T/spark-1816dfc9-2a95-4361-88d1-025c999f1514
22/11/16 16:04:23 INFO ShutdownHookManager: Deleting directory /private/var/folders/yt/jz2t42md7qx68kjlydk19x5w0000gn/T/spark-1816dfc9-2a95-4361-88d1-025c999f1514/pyspark-095a2954-0010-4da3-b658-2483dcc79afc
I have tried almost every possible solution, but nothing has worked.
My Spark job (Scala/S3) worked fine for a few runs in a standalone cluster with spark-submit, but after a few runs it started giving the error below. There were no changes to the code; it makes a connection to spark-master, but the application is immediately killed with the reason "All masters are unresponsive! Giving up".
22/03/20 05:33:39 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/03/20 05:33:39 INFO TransportClientFactory: Successfully created connection to spark-master/xx.x.x.xxx:7077 after 42 ms (0 ms spent in bootstraps)
22/03/20 05:33:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/03/20 05:34:19 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/03/20 05:34:39 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
22/03/20 05:34:39 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
22/03/20 05:34:39 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33139.
22/03/20 05:34:39 INFO NettyBlockTransferService: Server created on a1326e4ae4bb:33139
22/03/20 05:34:39 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/03/20 05:34:39 INFO SparkUI: Stopped Spark web UI at http://xxxxxxxxxxxxx:4040
22/03/20 05:34:39 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 INFO StandaloneSchedulerBackend: Shutting down all executors
22/03/20 05:34:39 INFO BlockManagerMasterEndpoint: Registering block manager a1326e4ae4bb:33139 with 1168.8 MiB RAM, BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
22/03/20 05:34:39 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, a1326e4ae4bb, 33139, None)
22/03/20 05:34:39 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
22/03/20 05:34:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/03/20 05:34:39 INFO MemoryStore: MemoryStore cleared
22/03/20 05:34:39 INFO BlockManager: BlockManager stopped
22/03/20 05:34:39 INFO BlockManagerMaster: BlockManagerMaster stopped
22/03/20 05:34:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/03/20 05:34:40 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:281)
I get "Disconnected from the target VM, address: '127.0.0.1:39989', transport: 'socket'" in IntelliJ IDEA CE. I can't debug my program. Any suggestions?
Connected to the target VM, address: '127.0.0.1:39989', transport: 'socket'
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/12/29 17:29:47 INFO SparkContext: Running Spark version 2.1.2
17/12/29 17:29:49 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/29 17:29:49 WARN Utils: Your hostname, ashfaq-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
17/12/29 17:29:49 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/12/29 17:29:49 INFO SecurityManager: Changing view acls to: ashfaq
17/12/29 17:29:49 INFO SecurityManager: Changing modify acls to: ashfaq
17/12/29 17:29:49 INFO SecurityManager: Changing view acls groups to:
17/12/29 17:29:49 INFO SecurityManager: Changing modify acls groups to:
17/12/29 17:29:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ashfaq); groups with view permissions: Set(); users with modify permissions: Set(ashfaq); groups with modify permissions: Set()
17/12/29 17:29:51 INFO Utils: Successfully started service 'sparkDriver' on port 46133.
17/12/29 17:29:51 INFO SparkEnv: Registering MapOutputTracker
17/12/29 17:29:51 INFO SparkEnv: Registering BlockManagerMaster
17/12/29 17:29:51 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/12/29 17:29:51 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/12/29 17:29:51 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b3b48105-28be-4781-a395-c7e83cc72e8c
17/12/29 17:29:51 INFO MemoryStore: MemoryStore started with capacity 393.1 MB
17/12/29 17:29:51 INFO SparkEnv: Registering OutputCommitCoordinator
17/12/29 17:29:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/12/29 17:29:53 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.2.15:4040
17/12/29 17:29:53 INFO Executor: Starting executor ID driver on host localhost
17/12/29 17:29:54 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33583.
17/12/29 17:29:54 INFO NettyBlockTransferService: Server created on 10.0.2.15:33583
17/12/29 17:29:54 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/12/29 17:29:54 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:54 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.2.15:33583 with 393.1 MB RAM, BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:54 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:54 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.2.15, 33583, None)
17/12/29 17:29:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 236.5 KB, free 392.8 MB)
17/12/29 17:29:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.9 KB, free 392.8 MB)
17/12/29 17:29:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.2.15:33583 (size: 22.9 KB, free: 393.1 MB)
17/12/29 17:29:59 INFO SparkContext: Created broadcast 0 from textFile at scalaApp.scala:13
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/ashfaq/Desktop/saclaAPP/data/UserPurchaseHistory.csv
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1968)
at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
at ScalaApp$.main(scalaApp.scala:18)
at ScalaApp.main(scalaApp.scala)
17/12/29 17:29:59 INFO SparkContext: Invoking stop() from shutdown hook
17/12/29 17:29:59 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
17/12/29 17:29:59 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.0.2.15:33583 in memory (size: 22.9 KB, free: 393.1 MB)
17/12/29 17:29:59 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/12/29 17:30:00 INFO MemoryStore: MemoryStore cleared
17/12/29 17:30:00 INFO BlockManager: BlockManager stopped
17/12/29 17:30:00 INFO BlockManagerMaster: BlockManagerMaster stopped
17/12/29 17:30:00 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/12/29 17:30:00 INFO SparkContext: Successfully stopped SparkContext
17/12/29 17:30:00 INFO ShutdownHookManager: Shutdown hook called
Disconnected from the target VM, address: '127.0.0.1:39989', transport: 'socket'
17/12/29 17:30:00 INFO ShutdownHookManager: Deleting directory /tmp/spark-58667739-7c15-4665-8ede-fde9c3ff1d83
Process finished with exit code 1
It looks like you are trying to open a file that doesn't exist. The first line of the error message says so:
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/ashfaq/Desktop/saclaAPP/data/UserPurchaseHistory.csv
I just created a DC/OS cluster and am trying to run a simple Spark task that reads data from /mnt/mesos/sandbox.
object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Simple Application")
    println("STARTING JOB!")
    val sc = new SparkContext(conf)
    val rdd = sc.textFile("file:///mnt/mesos/sandbox/foo")
    println(rdd.count)
    println("ENDING JOB!")
  }
}
And I'm deploying the app using
dcos spark run --submit-args='--conf spark.mesos.uris=https://dripit-spark.s3.amazonaws.com/foo --class SimpleApp https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' --verbose
Unfortunately, the task keeps failing with the following exception:
I0701 18:47:35.782994 30997 logging.cpp:188] INFO level logging started!
I0701 18:47:35.783197 30997 fetcher.cpp:424] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foobar-assembly-1.0.jar"}},{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foo"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2\/frameworks\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002\/executors\/driver-20160701184530-0001\/runs\/67b94f34-a9d3-4662-bedc-8578381e9305"}
I0701 18:47:35.784752 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784791 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:35.784818 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784835 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
W0701 18:47:36.057448 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar
I0701 18:47:36.057673 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
I0701 18:47:36.057696 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057714 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:36.057741 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057770 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
W0701 18:47:36.114565 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foo
I0701 18:47:36.114600 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
I0701 18:47:36.307576 31006 exec.cpp:143] Version: 0.28.1
I0701 18:47:36.310127 31022 exec.cpp:217] Executor registered on slave c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2
16/07/01 18:47:37 INFO SparkContext: Running Spark version 1.6.1
16/07/01 18:47:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/01 18:47:37 WARN SparkConf:
SPARK_JAVA_OPTS was detected (set to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with conf/spark-defaults.conf to set defaults for an application
- ./spark-submit with --driver-java-options to set -X options for a driver
- spark.executor.extraJavaOptions to set -X options for executors
- SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 INFO SecurityManager: Changing view acls to: root
16/07/01 18:47:37 INFO SecurityManager: Changing modify acls to: root
16/07/01 18:47:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 18:47:37 INFO Utils: Successfully started service 'sparkDriver' on port 47358.
16/07/01 18:47:38 INFO Slf4jLogger: Slf4jLogger started
16/07/01 18:47:38 INFO Remoting: Starting remoting
16/07/01 18:47:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.0.1.107:54467]
16/07/01 18:47:38 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54467.
16/07/01 18:47:38 INFO SparkEnv: Registering MapOutputTracker
16/07/01 18:47:38 INFO SparkEnv: Registering BlockManagerMaster
16/07/01 18:47:38 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-96092a9a-3164-4d65-8c0b-df5403abb056
16/07/01 18:47:38 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/07/01 18:47:38 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/07/01 18:47:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/01 18:47:38 INFO SparkUI: Started SparkUI at http://10.0.1.107:4040
16/07/01 18:47:38 INFO HttpFileServer: HTTP File server directory is /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:38 INFO HttpServer: Starting HTTP Server
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SocketConnector@0.0.0.0:49074
16/07/01 18:47:38 INFO Utils: Successfully started service 'HTTP file server' on port 49074.
16/07/01 18:47:38 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/foobar-assembly-1.0.jar at http://10.0.1.107:49074/jars/foobar-assembly-1.0.jar with timestamp 1467398858626
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@716: Client environment:host.name=ip-10-0-1-107.eu-west-1.compute.internal
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@724: Client environment:os.arch=4.1.7-coreos-r1
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@725: Client environment:os.version=#2 SMP Thu Nov 5 02:10:23 UTC 2015
I0701 18:47:38.778355 103 sched.cpp:164] Version: 0.25.0
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt/spark/dist
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master.mesos:2181 sessionTimeout=10000 watcher=0x7f74d587c600 sessionId=0 sessionPasswd=<null> context=0x7f7540003f70 flags=0
2016-07-01 18:47:38,786:6(0x7f74c6ec0700):ZOO_INFO@check_events@1703: initiated connection to server [10.0.7.83:2181]
2016-07-01 18:47:38,787:6(0x7f74c6ec0700):ZOO_INFO@check_events@1750: session establishment complete on server [10.0.7.83:2181], sessionId=0x155a57d07f60050, negotiated timeout=10000
I0701 18:47:38.788107 99 group.cpp:331] Group process (group(1)@10.0.1.107:35064) connected to ZooKeeper
I0701 18:47:38.788147 99 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0701 18:47:38.788162 99 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I0701 18:47:38.789402 99 detector.cpp:156] Detected a new leader: (id='1')
I0701 18:47:38.789512 99 group.cpp:674] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I0701 18:47:38.790228 99 detector.cpp:481] A new leading master (UPID=master@10.0.7.83:5050) is detected
I0701 18:47:38.790293 99 sched.cpp:262] New master detected at master@10.0.7.83:5050
I0701 18:47:38.790473 99 sched.cpp:272] No credentials provided. Attempting to register without authentication
I0701 18:47:38.792147 97 sched.cpp:641] Framework registered with c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO CoarseMesosSchedulerBackend: Registered as framework ID c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38752.
16/07/01 18:47:38 INFO NettyBlockTransferService: Server created on 38752
16/07/01 18:47:38 INFO BlockManagerMaster: Trying to register BlockManager
16/07/01 18:47:38 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.1.107:38752 with 511.1 MB RAM, BlockManagerId(driver, 10.0.1.107, 38752)
16/07/01 18:47:38 INFO BlockManagerMaster: Registered BlockManager
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 117.2 KB, free 117.2 KB)
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.6 KB, free 129.8 KB)
16/07/01 18:47:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.1.107:38752 (size: 12.6 KB, free: 511.1 MB)
16/07/01 18:47:39 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.scala:13
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 4 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now TASK_RUNNING
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn1. Check your hdfs-site.xml file to ensure namenodes are configured properly.
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn2. Check your hdfs-site.xml file to ensure namenodes are configured properly.
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:124)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:74)
at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:65)
at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:152)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:427)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
at SimpleApp$.main(SimpleApp.scala:15)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:786)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:123)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: namenode1.hdfs.mesos
... 48 more
16/07/01 18:47:39 INFO SparkContext: Invoking stop() from shutdown hook
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/01 18:47:40 INFO SparkUI: Stopped Spark web UI at http://10.0.1.107:4040
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Shutting down all executors
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Asking each executor to shut down
I0701 18:47:40.051103 111 sched.cpp:1771] Asked to stop the driver
I0701 18:47:40.051283 96 sched.cpp:1040] Stopping framework 'c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001'
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
16/07/01 18:47:40 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/01 18:47:40 INFO MemoryStore: MemoryStore cleared
16/07/01 18:47:40 INFO BlockManager: BlockManager stopped
16/07/01 18:47:40 INFO BlockManagerMaster: BlockManagerMaster stopped
16/07/01 18:47:40 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/01 18:47:40 INFO SparkContext: Successfully stopped SparkContext
16/07/01 18:47:40 INFO ShutdownHookManager: Shutdown hook called
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75
Why is Spark trying to connect to HDFS even though the file scheme is explicitly set to file://?
I thought that sc.textFile("file:///") doesn't require an HDFS setup.
Spark always use the Hadoop API to access a file, regardless of that file is local or in HDFS.
I think the problem is that your Spark installation is inheriting an invalid HDFS configuration and hitting this bug: https://issues.apache.org/jira/browse/SPARK-11227
You could try the workarounds from that ticket and see whether one of them works for you:
Use an older Spark release (< 1.5.0).
Disable HA in the HDFS configuration.
Either way, Spark will still use HDFS to write the intermediate results of the stages (in your case, I'd guess the partial counts).
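To make that inheritance concrete: since every read goes through Hadoop's FileSystem API, one way to test whether a stale HA configuration is the culprit is to override the default filesystem on the SparkConf itself. This is a minimal sketch, not a confirmed fix; the app name and /tmp/input.txt path are hypothetical, and it relies on the fact that Spark copies any spark.hadoop.* property into the Hadoop Configuration it builds:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("LocalFileOnly") // hypothetical app name
  // spark.hadoop.* keys are forwarded into the Hadoop Configuration,
  // overriding whatever fs.defaultFS would be inherited from HADOOP_CONF_DIR
  .set("spark.hadoop.fs.defaultFS", "file:///")
val sc = new SparkContext(conf)

// file:// resolves through Hadoop's LocalFileSystem, so no NameNode is contacted
val lines = sc.textFile("file:///tmp/input.txt")
println(lines.count())

If the job still tries to reach an HDFS nameservice with this set, the HA entries in the hdfs-site.xml on the classpath would be the next thing to look at.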
I'm running into a problem in which my job fails at a particular stage when it invokes a class.
Here's the line:
val stockDataFilteredRDD: RDD[stockPriceInfo] =
  lineMapToStockPriceInfoObjectRDD
    .map(new stockDataFilter(_).requirementsMet.get)
Here's what I see:
15/10/09 16:02:28 INFO Client: Application report from ResourceManager:
  application identifier: application_1438798768056_0254
  appId: 254
  clientToAMToken: null
  appDiagnostics:
  appMasterHost: ip-10-0-142-138.ec2.internal
  appQueue: root.add_twitter_user
  appMasterRpcPort: 0
  appStartTime: 1444421062565
  yarnAppState: RUNNING
  distributedFinalState: UNDEFINED
  appTrackingUrl: http://myIP.ip
15/10/09 16:02:29 INFO Client: Application report from ResourceManager:
  application identifier: application_1438798768056_0254
  appId: 254
  clientToAMToken: null
  appDiagnostics:
  appMasterHost: ip-10-0-142-138.ec2.internal
  appQueue: root.add_twitter_user
  appMasterRpcPort: 0
  appStartTime: 1444421062565
  yarnAppState: FINISHED
  distributedFinalState: FAILED
  appTrackingUrl: http://myIP.ip
  appUser: add_twitter_user
The class invoked:
class stockDataFilter(val s: stockPriceInfo) {
  val dateDelim = "-"
  val timeDelim = ":"
  val dateAndTime = s.dateTime
  val splitDateTime = dateAndTime.split("#")
  val dateStamp = splitDateTime(0)
  val time = splitDateTime(1)
  val splitDate = dateStamp.split(dateDelim)
  val splitTime = time.split(timeDelim)
  val year = splitDate(0); val month = splitDate(1); val day = splitDate(2)
  val hour = splitTime(0); val minute = splitTime(1); val second = splitTime(2)
  val openingBell: LocalTime = new LocalTime(9, 30)
  val closingBell: LocalTime = new LocalTime(16, 0)
  val currentTime: LocalTime = new LocalTime(hour.toInt, minute.toInt)
  // Non-trading sessions
  val newYearsDay: Date = new Date(year.toInt - 1900, 0, 1)
  val weekends: List[String] = List("Saturday", "Sunday")
  val date = new Date(year.toInt - 1900, month.toInt - 1, day.toInt)
  val currentDate = new LocalDate(year.toInt - 1990, month.toInt - 1, day.toInt)
  // The New York Stock Exchange operates from 9:30 a.m. to 4:00 p.m.
  def isWithinTradingSession: Boolean = {
    val isAfterOpen: Boolean = currentTime.isAfter(openingBell)
    val isBeforeClose: Boolean = currentTime.isBefore(closingBell)
    isAfterOpen && isBeforeClose
  } // returns true if it is within the trading session
  def requirementsMet: Option[stockPriceInfo] = isWithinTradingSession match {
    case true => Some(s)
    case false => None
  }
}
I am able to display (store in HDFS) anything before that, but once I add this line, it fails. I've looked at the logs; there are no obvious issues. There was also no compile-time or run-time exception. I've been stuck on this for days and am out of options. Your help would be appreciated. Regards
LOGS:
LogType: stderr
LogLength: 6638
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/spark-assembly-1.1.0-cdh5.2.1-hadoop2.5.0-cdh5.2.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data01/yarn/nm/usercache/add_twitter_user/filecache/62/twitteryahoofinanceanalytics.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/10/10 11:27:24 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/10/10 11:27:25 INFO spark.SecurityManager: Changing view acls to: yarn,add_twitter_user
15/10/10 11:27:25 INFO spark.SecurityManager: Changing modify acls to: yarn,add_twitter_user
15/10/10 11:27:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, add_twitter_user); users with modify permissions: Set(yarn, add_twitter_user)
15/10/10 11:27:25 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/10 11:27:25 INFO Remoting: Starting remoting
15/10/10 11:27:26 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher#IP]
15/10/10 11:27:26 INFO Remoting: Remoting now listens on addresses: [akka.tcp://driverPropsFetcher#IP]
15/10/10 11:27:26 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port port.
15/10/10 11:27:26 INFO spark.SecurityManager: Changing view acls to: yarn,add_twitter_user
15/10/10 11:27:26 INFO spark.SecurityManager: Changing modify acls to: yarn,add_twitter_user
15/10/10 11:27:26 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, add_twitter_user); users with modify permissions: Set(yarn, add_twitter_user)
15/10/10 11:27:26 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/10 11:27:26 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/10 11:27:26 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/10 11:27:26 INFO Remoting: Starting remoting
15/10/10 11:27:26 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor#myIP]
15/10/10 11:27:26 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkExecutor#myIP]
15/10/10 11:27:26 INFO util.Utils: Successfully started service 'sparkExecutor' on port 54841.
15/10/10 11:27:26 INFO Remoting: Remoting shut down
15/10/10 11:27:26 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/10 11:27:26 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver#IP:port/user/CoarseGrainedScheduler
15/10/10 11:27:26 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver
15/10/10 11:27:26 INFO spark.SecurityManager: Changing view acls to: yarn,add_twitter_user
15/10/10 11:27:26 INFO spark.SecurityManager: Changing modify acls to: yarn,add_twitter_user
15/10/10 11:27:26 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, add_twitter_user); users with modify permissions: Set(yarn, add_twitter_user)
15/10/10 11:27:26 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/10/10 11:27:26 INFO Remoting: Starting remoting
15/10/10 11:27:26 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor#myIP]
15/10/10 11:27:26 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkExecutor#myIP]
15/10/10 11:27:26 INFO util.Utils: Successfully started service 'sparkExecutor' on port 36011.
15/10/10 11:27:26 INFO util.AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver#IP:PORT/user/MapOutputTracker
15/10/10 11:27:26 INFO util.AkkaUtils: Connecting to BlockManagerMaster: akka.tcp://sparkDriver#IP/user/BlockManagerMaster
15/10/10 11:27:26 INFO storage.DiskBlockManager: Created local directory at /data01/yarn/nm/usercache/add_twitter_user/appcache/application_1438798768056_0263/spark-local-20151010112726-839d
15/10/10 11:27:26 INFO storage.DiskBlockManager: Created local directory at /data02/yarn/nm/usercache/add_twitter_user/appcache/application_1438798768056_0263/spark-local-20151010112726-4371
15/10/10 11:27:26 INFO util.Utils: Successfully started service 'Connection manager for block manager' on port port.
15/10/10 11:27:26 INFO network.ConnectionManager: Bound socket to port port with id = ConnectionManagerId(myIP,port)
15/10/10 11:27:26 INFO storage.MemoryStore: MemoryStore started with capacity 530.3 MB
15/10/10 11:27:26 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/10/10 11:27:26 INFO storage.BlockManagerMaster: Registered BlockManager
15/10/10 11:27:26 INFO util.AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver#IP/user/HeartbeatReceiver
15/10/10 11:27:27 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
15/10/10 11:27:27 INFO network.ConnectionManager: Selector thread was interrupted!
15/10/10 11:27:27 INFO network.ConnectionManager: ConnectionManager stopped
15/10/10 11:27:27 INFO storage.MemoryStore: MemoryStore cleared
15/10/10 11:27:27 INFO storage.BlockManager: BlockManager stopped
15/10/10 11:27:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/10 11:27:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/10 11:27:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/10/10 11:27:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/10/10 11:27:27 INFO Remoting: Remoting shut down
15/10/10 11:27:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/10/10 11:27:27 INFO Remoting: Remoting shut down
15/10/10 11:27:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
LogType: stdout
LogLength: 0
Log Contents:
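One thing worth ruling out in the snippet above (an observation on the posted code, not a confirmed diagnosis): requirementsMet returns an Option[stockPriceInfo], so calling .get on it throws a NoSuchElementException inside the task for every record that falls outside trading hours. A minimal sketch of the same transformation without the .get, reusing the RDD names from the question:

import org.apache.spark.rdd.RDD

val stockDataFilteredRDD: RDD[stockPriceInfo] =
  lineMapToStockPriceInfoObjectRDD
    // flatMap over an Option keeps Some values and drops Nones,
    // so out-of-session records are filtered out instead of crashing the task
    .flatMap(line => new stockDataFilter(line).requirementsMet)

Note also that Joda-Time's LocalDate constructor expects the calendar year and a 1-based month, so year.toInt - 1990 and month.toInt - 1 in the class above would produce wrong dates and, for January records (month 0), an IllegalFieldValueException.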