Spark ClassNotFoundException when run on yarn-cluster - scala

My code:
import org.apache.spark.{SparkConf, SparkContext}

object Run extends App {
  val conf = new SparkConf().setMaster("yarn-cluster").setAppName("t666")
  val sc = new SparkContext(conf)
  sc.addJar("hdfs://10.1.11.99:8020/user/spark/share/scalaj-http_2.10-2.3.0.jar")
  val b = scalaj.http.Base64.encodeString("刘")
  val a = Array[String](b)
  sc.parallelize(a).saveAsTextFile("hdfs://10.1.11.99:8020/testdata/t2/")
}
And my submit command is:
spark-submit --master yarn-cluster --class start.Run run.jar
The log on YARN shows:
16/11/04 13:50:01 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/11/04 13:50:01 INFO spark.SparkContext: Added JAR hdfs://10.1.11.99:8020/user/spark/share/scalaj-http_2.10-2.3.0.jar at hdfs://10.1.11.99:8020/user/spark/share/scalaj-http_2.10-2.3.0.jar with timestamp 1478238601256
16/11/04 13:50:01 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM#192.168.3.49:53976)
16/11/04 13:50:01 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.NoClassDefFoundError: scalaj/http/Base64
java.lang.NoClassDefFoundError: scalaj/http/Base64
at start.Run$delayedInit$body.apply(Run.scala:31)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at start.Run$.main(Run.scala:9)
at start.Run.main(Run.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
Caused by: java.lang.ClassNotFoundException: scalaj.http.Base64
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
16/11/04 13:50:01 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.NoClassDefFoundError: scalaj/http/Base64)
16/11/04 13:50:01 INFO client.RMProxy: Connecting to ResourceManager at slave3/192.168.3.48:8030
16/11/04 13:50:01 INFO yarn.YarnRMClient: Registering the ApplicationMaster
16/11/04 13:50:01 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
16/11/04 13:50:01 INFO spark.SparkContext: Invoking stop() from shutdown hook
The second line shows:
INFO spark.SparkContext: Added JAR hdfs://10.1.11.99:8020/user/spark/share/scalaj-http_2.10-2.3.0.jar at hdfs://10.1.11.99:8020/user/spark/share/scalaj-http_2.10-2.3.0.jar with timestamp 1478238601256
It seems the JAR was already added to my classpath, but I cannot explain this exception. Anyone's answer will help me a lot!

I believe SparkContext.addJar only adds the JAR to the classpath of the workers (executors), not the driver. Try adding the JAR using the --jars option in the spark-submit command:
spark-submit --master yarn \
--deploy-mode cluster \
--jars hdfs://10.1.11.99:8020/user/spark/share/scalaj-http_2.10-2.3.0.jar \
--class start.Run run.jar
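One more detail worth noting: in cluster mode the driver JVM resolves the user class (and everything it references, such as scalaj.http.Base64) before any sc.addJar call runs, so addJar can never cure a driver-side NoClassDefFoundError; the JAR has to be on the driver classpath up front, which is what --jars arranges. With --master and --deploy-mode given on the command line, the hard-coded master can also be dropped from the code (a minimal sketch; the "yarn-cluster" master string is deprecated in Spark 2.x in favor of --master yarn --deploy-mode cluster):
val conf = new SparkConf().setAppName("t666")
val sc = new SparkContext(conf) // master and deploy mode come from spark-submit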

Related

java.lang.NoClassDefFoundError: scala/Product$class Error while reading Redshift Table from EMR

Here is the code I am running on EMR:
from pyspark.sql import SparkSession, SQLContext

spark = SparkSession.builder.getOrCreate()
sql_context = SQLContext(spark)
sc = spark.sparkContext
url = "jdbc:redshift://redshift-cluster-endpoint.amazonaws.com:5439/db-name?user=<USER>&password=<PWD>"
df = sql_context.read \
    .format("com.databricks.spark.redshift") \
    .option("url", url) \
    .option("dbtable", "schema_name.table_name") \
    .option("tempdir", "s3://temp-bucket/temp_data_pyspark/") \
    .option("forward_spark_s3_credentials", "true") \
    .load()
print(df.head(10))
This is my spark-submit:
spark-submit \
--jars s3://aws-emr-resources-bucket-us-east-1/RedshiftJDBC41-1.2.12.1017.jar,s3://aws-emr-resources-bucket-us-east-1/minimal-json-0.9.4.jar,s3://aws-emr-resources-bucket-us-east-1/spark-avro_2.11-3.0.0.jar,s3://aws-emr-resources-bucket-us-east-1/spark-redshift_2.10-2.0.0.jar \
--packages com.databricks:spark-redshift_2.11:2.0.1 \
python-script.py
The error I am facing is:
Traceback (most recent call last):
File "/home/hadoop/python-script.py", line 18, in <module>
.option("forward_spark_s3_credentials", "true") \
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 210, in load
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o67.load.
: java.lang.NoClassDefFoundError: scala/Product$class
at com.databricks.spark.redshift.Parameters$MergedParameters.<init>(Parameters.scala:78)
at com.databricks.spark.redshift.Parameters$.mergeParameters(Parameters.scala:72)
at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:48)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:355)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 20 more
22/09/06 08:29:38 INFO SparkContext: Invoking stop() from shutdown hook
22/09/06 08:29:38 INFO AbstractConnector: Stopped Spark#2edd2970{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
22/09/06 08:29:38 INFO SparkUI: Stopped Spark web UI at http://ip-10-50-105-28.ec2.internal:4040
22/09/06 08:29:38 INFO YarnClientSchedulerBackend: Interrupting monitor thread
22/09/06 08:29:38 INFO YarnClientSchedulerBackend: Shutting down all executors
22/09/06 08:29:38 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
22/09/06 08:29:38 INFO YarnClientSchedulerBackend: YARN client scheduler backend Stopped
22/09/06 08:29:38 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/09/06 08:29:38 INFO MemoryStore: MemoryStore cleared
22/09/06 08:29:38 INFO BlockManager: BlockManager stopped
22/09/06 08:29:38 INFO BlockManagerMaster: BlockManagerMaster stopped
22/09/06 08:29:38 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/09/06 08:29:38 INFO SparkContext: Successfully stopped SparkContext
22/09/06 08:29:38 INFO ShutdownHookManager: Shutdown hook called
22/09/06 08:29:38 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-fefa4b92-1737-4087-ba47-e8a81a3aa2e0
22/09/06 08:29:38 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-a58798f3-021f-4a8a-88f6-5bd54d57b428/pyspark-442297be-93ef-494d-af78-57b88db7395e
22/09/06 08:29:38 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-a58798f3-021f-4a8a-88f6-5bd54d57b428
Upon searching for this error, I found that I must be mixing different Scala binary versions. Any help in identifying compatible versions of the above JARs is appreciated. Resources/links would be most beneficial.
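For what it is worth, scala/Product$class is a tell-tale sign of exactly the mismatch you suspect: the *$class trait-implementation classes existed up to Scala 2.11 but were removed in 2.12, and Spark 3.x on EMR 6.x is built against Scala 2.12, so the _2.10/_2.11 artifacts in the submit command cannot load there. A hedged sketch of an aligned submit, assuming the Scala 2.12 build of the community-maintained connector (io.github.spark-redshift-community) is acceptable; the version number is illustrative:
spark-submit \
--jars s3://aws-emr-resources-bucket-us-east-1/RedshiftJDBC41-1.2.12.1017.jar \
--packages io.github.spark-redshift-community:spark-redshift_2.12:4.2.0 \
python-script.py
(the data source format string would then change to match the connector, e.g. io.github.spark_redshift_community.spark.redshift)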

Connection ERROR while writing Dataframe (Pyspark 3.x on EMR 6.x) to RDS (MySQL)

I get "connection refused error" when I try to write the results of a Dataframe to an RDS (MySQL). I am using PySpark 3 on EMR cluster v6.x (1 master node, 1 slave node). The table does not exist yet. But the data base exist.
spark-submit --jars s3://{some s3 folder}/mysql-connector-java-8.0.25.jar s3://{some s3 folder}/pyspark_script.py
The part of the script that writes to MySQL is below (after testing, it is the only part of the script that throws the error). I have changed the names of my database, user, and password below.
df.write \
    .mode("overwrite") \
    .format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/{my database name}?useSSL=false") \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("dbtable", "mydb_table") \
    .option("user", "myuser") \
    .option("password", "mypassword") \
    .save()
This is the error I get; it is about a refused connection. I have already given the EMR role access to RDS and its data!
Traceback (most recent call last):
File "/mnt/tmp/spark-93919f38-ea4d-44d6-be7d-0416be972753/pyspark_script.py", line 57, in <module>
.option("password","assignment")\
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1107, in save
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o163.save.
: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
at org.apache.spark.sql.execution.datasources.jdbc.connection.BasicConnectionProvider.getConnection(BasicConnectionProvider.scala:49)
at org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider$.create(ConnectionProvider.scala:68)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:62)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:48)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:194)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:190)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:89)
at com.mysql.cj.NativeSession.connect(NativeSession.java:144)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:953)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:823)
... 45 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
... 48 more
21/12/19 11:40:04 INFO SparkContext: Invoking stop() from shutdown hook
21/12/19 11:40:04 INFO AbstractConnector: Stopped Spark#74d96709{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
21/12/19 11:40:04 INFO SparkUI: Stopped Spark web UI at http://{ip}.eu-central-1.compute.internal:4040
21/12/19 11:40:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/12/19 11:40:04 INFO MemoryStore: MemoryStore cleared
21/12/19 11:40:04 INFO BlockManager: BlockManager stopped
21/12/19 11:40:04 INFO BlockManagerMaster: BlockManagerMaster stopped
21/12/19 11:40:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/12/19 11:40:04 INFO SparkContext: Successfully stopped SparkContext
21/12/19 11:40:04 INFO ShutdownHookManager: Shutdown hook called
21/12/19 11:40:04 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-fd1b8e7c-7b4c-424d-a451-743a6e075fbd
21/12/19 11:40:04 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-93919f38-ea4d-44d6-be7d-0416be972753
21/12/19 11:40:04 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-fd1b8e7c-7b4c-424d-a451-743a6e075fbd/pyspark-40fbaaf5-2e34-44ba-875f-88308084546d
Try this thread: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
EMR with a localhost RDS? Seems odd; did you miss setting the endpoint correctly?
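To make that comment concrete: jdbc:mysql://localhost:3306/... tells the driver to connect to a MySQL server on the EMR node itself, where nothing is listening on port 3306, hence "Connection refused". The URL should carry the RDS endpoint instead; a sketch of the corrected option (the hostname is a hypothetical placeholder, and the RDS security group must also allow inbound 3306 from the EMR nodes):
.option("url", "jdbc:mysql://mydb.xxxxxxxx.eu-central-1.rds.amazonaws.com:3306/{my database name}?useSSL=false") \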

Spark Streaming Twitter "receiver-supervisor-future-0" java.lang.AbstractMethodError

I am using the Spark Streaming Twitter package to fetch popular hashtags. To be specific, I am trying to replicate this code: https://github.com/andrewrgoss/udemy-spark-scala/tree/master/twitter_streaming
I ran the following commands on the Windows command line:
sbt package
spark-submit --packages org.apache.bahir:spark-streaming-twitter_2.11:2.1.0 --class com.andrewrgoss.spark.PopularHashtags \target\scala-2.11\twitter_streaming_2.11-0.1.jar
This gives me the below error.
2018-04-20 03:16:30 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 172.24.20.2, 56082, None)
2018-04-20 03:16:30 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 172.24.20.2, 56082, None)
2018-04-20 03:16:30 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler#6b00ad9{/metrics/json,null,AVAILABLE,#Spark}
Exception in thread "receiver-supervisor-future-0" java.lang.AbstractMethodError
at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
at org.apache.spark.streaming.twitter.TwitterReceiver.initializeLogIfNecessary(TwitterInputDStream.scala:60)
at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
at org.apache.spark.streaming.twitter.TwitterReceiver.log(TwitterInputDStream.scala:60)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
at org.apache.spark.streaming.twitter.TwitterReceiver.logInfo(TwitterInputDStream.scala:60)
at org.apache.spark.streaming.twitter.TwitterReceiver.onStop(TwitterInputDStream.scala:106)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.stopReceiver(ReceiverSupervisor.scala:170)
at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:194)
at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
-------------------------------------------
Time: 1524208600000 ms
-------------------------------------------
-------------------------------------------
Time: 1524208610000 ms
-------------------------------------------
-------------------------------------------
I found the related thread Spark Program finding Popular HashTags from twiiter, but it has no solution. Can someone from the community explain the error and how to resolve it?
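An AbstractMethodError thrown from org.apache.spark.internal.Logging is usually a binary-compatibility problem rather than a code bug: Logging's internals changed between Spark minor versions, so a connector compiled against one Spark (Bahir 2.1.0 targets Spark 2.1) can break on another. A hedged fix, assuming the locally installed Spark is newer than 2.1: pick the Bahir release that matches it (the 2.2.0 version below is illustrative) and keep the Spark version in build.sbt in sync with the installed one:
spark-submit --packages org.apache.bahir:spark-streaming-twitter_2.11:2.2.0 --class com.andrewrgoss.spark.PopularHashtags \target\scala-2.11\twitter_streaming_2.11-0.1.jar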

Spark in cluster mode throws error if a SparkContext is not started

I have a Spark job that initializes the SparkContext only if it is really necessary:
val conf = new SparkConf()
val jobs: List[Job] = ??? // get some jobs
if (jobs.nonEmpty) {
  val sc = new SparkContext(conf)
  sc.parallelize(jobs).foreach(....)
} else {
  // do nothing
}
It worked fine on YARN when the deploy mode was 'client':
spark-submit --master yarn --deploy-mode client
Then I switched the deploy mode to 'cluster', and it started to crash whenever jobs.isEmpty:
spark-submit --master yarn --deploy-mode cluster
Below is the error text:
INFO yarn.Client: Application report for application_1509613523426_0017 (state: ACCEPTED)
17/11/02 11:37:17 INFO yarn.Client: Application report for application_1509613523426_0017 (state: FAILED)
17/11/02 11:37:17 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1509613523426_0017 failed 2 times due to AM Container for appattempt_1509613523426_0017_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://xxxxxx.com:8088/cluster/app/application_1509613523426_0017 Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://xxxxxxx/.sparkStaging/application_1509613523426_0017/__spark_libs__997458388067724499.zip
java.io.FileNotFoundException: File does not exist: hdfs://xxxxxxx/.sparkStaging/application_1509613523426_0017/__spark_libs__997458388067724499.zip
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: dev
start time: 1509622629354
final status: FAILED
tracking URL: http://xxxxxx.com:8088/cluster/app/application_1509613523426_0017
user: xxx
Exception in thread "main" org.apache.spark.SparkException: Application application_1509613523426_0017 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1104)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1150)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/11/02 11:37:17 INFO util.ShutdownHookManager: Shutdown hook called
17/11/02 11:37:17 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-a5b20def-0218-4b0c-b9f8-fdf8a1802e95
Is this a bug in YARN support, or am I missing something?
The SparkContext is responsible for communicating with the cluster manager. If the application is submitted to the cluster but a context is never created, YARN cannot determine the state of the application; this is why you get an error.
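A minimal sketch of the workaround, under the assumption that briefly starting a context even when there is no work is acceptable: create the SparkContext unconditionally so YARN can track the application, and gate only the job itself:
val conf = new SparkConf()
val jobs: List[Job] = ??? // get some jobs

val sc = new SparkContext(conf) // always created, so YARN can track the app
try {
  if (jobs.nonEmpty) {
    sc.parallelize(jobs).foreach { job => ??? /* run the job */ }
  }
} finally {
  sc.stop() // report a clean shutdown to YARN
}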

Spark on Yarn: Max number of executor failures reached

When I run a Spark job in cluster mode, I face the following issue:
16/05/25 12:42:55 INFO Client: Application report for application_1464166348026_0025 (state: RUNNING)
16/05/25 12:42:56 INFO Client: Application report for application_1464166348026_0025 (state: FINISHED)
16/05/25 12:42:56 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.255.8.181
ApplicationMaster RPC port: 0
queue: root.pimuser
start time: 1464172925289
final status: FAILED
tracking URL: http://test-hadoop-001.localdomain:8088/proxy/application_1464166348026_0025/history/application_1464166348026_0025/2
user: pimuser
Exception in thread "main" org.apache.spark.SparkException: Application application_1464166348026_0025 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:927)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:973)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/05/25 12:42:56 INFO ShutdownHookManager: Shutdown hook called
I am using the following command to run the job:
spark-submit --driver-java-options -XX:MaxPermSize=2048m --driver-memory 4g --deploy-mode cluster --master yarn --files cluster.xls --class com.app.test.Matching target/test-0.0.1-SNAPSHOT-jar-with-dependencies.jar
I also tried --master yarn-cluster, but I got the same error.
I am using Cloudera 5.5, Hadoop 2.6.0-cdh5.5.1, and Spark 1.5.
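For context on the title's message: the YARN ApplicationMaster fails the application once executor failures exceed spark.yarn.max.executor.failures (by default roughly twice the requested number of executors). Raising the threshold only buys time to debug; the executor logs (yarn logs -applicationId application_1464166348026_0025) should show why the executors keep dying. A hedged example of raising it while investigating:
spark-submit --conf spark.yarn.max.executor.failures=32 --driver-java-options -XX:MaxPermSize=2048m --driver-memory 4g --deploy-mode cluster --master yarn --files cluster.xls --class com.app.test.Matching target/test-0.0.1-SNAPSHOT-jar-with-dependencies.jar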