I am connecting to HBase using Spark. I have added all the dependencies, but I am still getting this exception. Kindly help me figure out which JAR I need to add to resolve this issue.
SPARK_MAJOR_VERSION is set to 2, using Spark2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/spark2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/spark2/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/spark2/jars/phoenix-4.7.0.2.6.5.0-292-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/spark2/jars/phoenix-4.7.0.2.6.5.0-292-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/09/17 05:34:36 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://sandbox-hdp.hortonworks.com:4041
Spark context available as 'sc' (master = local[*], app id = local-1537162476668).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.0.2.6.5.0-292
/_/
Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.
scala> :paste
// Entering paste mode (ctrl-D to finish)
import org.apache.spark.sql.{SQLContext, _}
import org.apache.spark.sql.execution.datasources.hbase._
import org.apache.spark.{SparkConf, SparkContext}
import spark.sqlContext.implicits._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{ConnectionFactory,HBaseAdmin,HTable,Put,Get}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.{HTableDescriptor,HColumnDescriptor}
def catalog = s"""{
|"table":{"namespace":"default", "name":"Contacts"},
|"rowkey":"key",
|"columns":{
|"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
|"officeAddress":{"cf":"Office", "col":"Address", "type":"string"},
|"officePhone":{"cf":"Office", "col":"Phone", "type":"string"},
|"personalName":{"cf":"Personal", "col":"Name", "type":"string"},
|"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"}
|}
|}""".stripMargin
def withCatalog(cat: String): DataFrame = {
spark.sqlContext
.read
.options(Map(HBaseTableCatalog.tableCatalog->cat))
.format("org.apache.spark.sql.execution.datasources.hbase")
.load()
}
val df = withCatalog(catalog)
df.registerTempTable("contacts")
val query = spark.sqlContext.sql("select personalName, officeAddress from contacts")
query.show()
// Exiting paste mode, now interpreting.
warning: there was one deprecation warning; re-run with -deprecation for details
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos$MasterService$BlockingInterface
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
Below are the JARs available in the Spark jars folder:
hbase-0.94.2.jar
hbase-annotations-1.2.0.jar
hbase-client-2.1.0.jar
hbase-common-2.1.0.jar
hbase-hadoop-compat-2.1.0.jar
hbase-hadoop2-compat-2.1.0.jar
hbase-it-1.1.2.2.6.5.0-292.jar
hbase-prefix-tree-1.1.2.2.6.5.0-292.jar
hbase-procedure-1.1.2.2.6.5.0-292.jar
hbase-protocol-2.1.0.jar
hbase-server-2.1.0.jar
hbase-spark-1.2.0-cdh5.8.3.jar
hbase-spark-1.1.2.2.6.5.0-292.jar
hbase-thrift-1.1.2.2.6.5.0-292.jar
hive-hbase-handler-0.12.0-cdh5.1.3.jar
hive-hbase-handler-3.1.0.jar
protobuf-java-3.5.1.jar
Kindly suggest which JAR I am missing from the jars folder in order to connect to HBase.
It seems you are missing the shc-core JAR, the connector implemented by Hortonworks for reading and writing DataFrames to HBase.
Since you are importing the package from the Hortonworks SHC connector:
import org.apache.spark.sql.execution.datasources.hbase._
you need to add that JAR to your Spark application.
Steps to get the shc-core connector JAR:
First clone the hortonworks-spark HBase connector (shc) GitHub repository, then check out the branch appropriate for the HBase and Hadoop versions in your environment, and build it using
mvn clean install -DskipTests
After executing the above you will have the JAR in ~/.m2/repository/com/hortonworks/shc/
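For reference, a minimal sketch of those build steps (the branch name below is only an example - pick the branch matching the HBase and Hadoop versions in your environment):
git clone https://github.com/hortonworks-spark/shc.git
cd shc
git checkout branch-2.3   # example branch, choose per your stack
mvn clean install -DskipTests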
Use this JAR in your Spark application.
You can either add it to your Spark jars folder or pass it to spark-submit/spark-shell with the --jars flag.
Then try to execute the code you were running.
I have followed the same steps and was able to read from HBase with the HBase table catalog.
Example:
spark-shell --jars shc-core-1.1.3-2.4-s_2.11.jar
SPARK_MAJOR_VERSION is set to 2, using Spark2
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
Spark context Web UI available at http://sandbox-hdp.hortonworks.com:4040
Spark context available as 'sc' (master = yarn, app id =
application_1592322799672_0007).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0.7.0.3.0-79
/_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_232)
Type in expressions to have them evaluated.
Type :help for more information.
scala> :paste
// Entering paste mode (ctrl-D to finish)
import org.apache.spark.sql.{SQLContext, _}
import org.apache.spark.sql.execution.datasources.hbase._
import org.apache.spark.{SparkConf, SparkContext}
import spark.sqlContext.implicits._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{ConnectionFactory,HBaseAdmin,HTable,Put,Get}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.{HTableDescriptor,HColumnDescriptor}
def catalog = s"""{
|"table":{"namespace":"default", "name":"Contacts"},
|"rowkey":"key",
|"columns":{
|"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
|"officeAddress":{"cf":"Office", "col":"Address", "type":"string"},
|"officePhone":{"cf":"Office", "col":"Phone", "type":"string"},
|"personalName":{"cf":"Personal", "col":"Name", "type":"string"},
|"personalPhone":{"cf":"Personal", "col":"Phone", "type":"string"}
|}
|}""".stripMargin
def withCatalog(cat: String): DataFrame = {
spark.sqlContext
.read
.options(Map(HBaseTableCatalog.tableCatalog->cat))
.format("org.apache.spark.sql.execution.datasources.hbase")
.load()
}
val df = withCatalog(catalog)
df.registerTempTable("contacts")
val query = spark.sqlContext.sql("select personalName, officeAddress from contacts")
query.show()
// Exiting paste mode, now interpreting.
warning: there was one deprecation warning; re-run with -deprecation for details
Hive Session ID = 5cc02976-98c4-447f-9ba0-e70c4a3c4ab1
+------------+-------------+
|personalName|officeAddress|
+------------+-------------+
|John Jackson| 40 Ellis St.|
|John Jackson| 40 Ellis St.|
+------------+-------------+
import org.apache.spark.sql.{SQLContext, _}
import org.apache.spark.sql.execution.datasources.hbase._
import org.apache.spark.{SparkConf, SparkContext}
import spark.sqlContext.implicits._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{ConnectionFactory, HBaseAdmin, HTable, Put, Get}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.{HTableDescriptor, HColumnDescriptor}
catalog: String
withCatalog: (cat: String)org.apache.spark.sql.DataFrame
df: org.apache.spark.sql.DataFrame = [rowkey: string, officeAddress: string ... 3 more fields]
query: org.apache.spark.sql.DataFrame = [personalName: string, officeAddress: string]
scala> query.show()
+------------+-------------+
|personalName|officeAddress|
+------------+-------------+
|John Jackson| 40 Ellis St.|
|John Jackson| 40 Ellis St.|
+------------+-------------+
scala>
Stack Versions :
HBase 2.2.0
Hadoop 3.1.1
Spark 2.4.0
Scala 2.11.12
Related
spark-shell throws NoSuchMethodException if I define a class in REPL and then call newInstance via reflection.
Spark context available as 'sc' (master = yarn, app id = application_1656488084960_0162).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.0.3
/_/
Using Scala version 2.12.10 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_141)
Type in expressions to have them evaluated.
Type :help for more information.
scala> class Demo {
| def demo(s: String): Unit = println(s)
| }
defined class Demo
scala> classOf[Demo].newInstance().demo("OK")
java.lang.InstantiationException: Demo
at java.lang.Class.newInstance(Class.java:427)
... 47 elided
Caused by: java.lang.NoSuchMethodException: Demo.<init>()
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.newInstance(Class.java:412)
... 47 more
But the same code works fine in native scala REPL:
Welcome to Scala 2.12.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131).
Type in expressions for evaluation. Or try :help.
scala> class Demo {
| def demo(s: String): Unit = println(s)
| }
defined class Demo
scala> classOf[Demo].newInstance().demo("OK")
OK
What's the difference between spark-shell REPL and native scala REPL?
I guess the Demo class might be treated as an inner class in the spark-shell REPL.
But then how do I solve the problem?
In the Scala 2.12.4 REPL the class is nested inside objects, so it has a zero-argument constructor accessible via .newInstance(). In the Spark 3.0.3 shell the class is nested inside classes, so there is no zero-argument constructor: the constructor of Demo takes an instance of the outer class and must be accessed via .getConstructors()(0).newInstance(...). Start the Scala REPL and the Spark shell with ./scala -Xprint:typer and ./spark-shell -Xprint:typer respectively and you'll see the difference.
So in Spark shell try
classOf[Demo].getDeclaredMethod("demo", classOf[String])
.invoke(
classOf[Demo].getConstructors()(0).newInstance($lineXX.$read.INSTANCE.$iw.$iw),
"OK"
)
//OK
//resYY: Object = null
(XX is the number of line where Demo is defined).
See details in Spark adds hidden parameter to constructor of a Scala class
Why is SparkHadoopUtil not accessible here, whereas it is accessible in a lower version of Spark, even though it is imported in both cases?
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.0.2
/_/
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_282)
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.apache.spark.deploy.SparkHadoopUtil
import org.apache.spark.deploy.SparkHadoopUtil
scala> import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.conf.Configuration
scala>
scala>
scala> val hadoopConf: Configuration = SparkHadoopUtil.get.conf
<console>:25: error: object SparkHadoopUtil in package deploy cannot be accessed in package org.apache.spark.deploy
val hadoopConf: Configuration = SparkHadoopUtil.get.conf
^
scala>
That's because the SparkHadoopUtil class has been changed to a private class in Spark 3. Here's the difference between the source of Spark 2.4 and Spark 3.0.
Spark 2.4:
@DeveloperApi
class SparkHadoopUtil extends Logging {
Spark 3.0:
private[spark] class SparkHadoopUtil extends Logging {
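If all you need is the Hadoop Configuration, a minimal sketch that avoids SparkHadoopUtil entirely is to go through the public SparkContext API, which works in both Spark 2 and Spark 3:
import org.apache.hadoop.conf.Configuration
// In spark-shell the session is already available as 'spark';
// SparkContext.hadoopConfiguration is public API, unlike SparkHadoopUtil in Spark 3.
val hadoopConf: Configuration = spark.sparkContext.hadoopConfiguration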
I've compiled a Spark Scala script to a JAR and I want to run it with spark-submit, but I'm getting this error:
2020-01-07 13:03:02,190 WARN util.Utils: Your hostname, nifi resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
2020-01-07 13:03:02,192 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
2020-01-07 13:03:03,109 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-01-07 13:03:03,826 WARN deploy.SparkSubmit$$anon$2: Failed to load hello.
java.lang.ClassNotFoundException: hello
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:806)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2020-01-07 13:03:03,857 INFO util.ShutdownHookManager: Shutdown hook called
2020-01-07 13:03:03,858 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-a8cc1ba6-3643-4646-82a3-4b44f4487105
This is my code:
import org.apache.spark.sql.SparkSession
import org.apache.spark.{SparkConf, SparkContext}
object hello {
def main(args: Array[String]): Unit = {
val conf = new SparkConf()
.setMaster("local")
.setAppName("quest9")
val sc = new SparkContext(conf)
val spark = SparkSession.builder().appName("quest9").master("local").getOrCreate()
import spark.implicits._
val zip_codes = spark.read.format("csv").option("header", "true").load("/home/hdfs/Documents/quest_9/doc/zip.csv")
val census = spark.read.format("csv").option("header", "true").load("/home/hdfs/Documents/quest_9/doc/census.csv")
census.createOrReplaceTempView("census")
zip_codes.createOrReplaceTempView("zip")
val query = census.as("census").join((zip_codes.where($"City" === "Inglewood").where($"County" === "Los Angeles").as("zip")),Seq("Zip_Code"),"inner").select($"census.Total_Males".as("male"),$"census.Total_Females".as("female")).distinct()
query.show()
val queryR = query.repartition(5)
queryR.write.parquet("/home/hdfs/Documents/population/census/IDE/census.parquet")
sc.stop()
}
}
I think my problem is that I'm using a Scala object instead of a class, but I'm not sure.
I run spark-submit like this:
spark-submit \
--class hello \
/home/hdfs/IdeaProjects/untitled/out/artifacts/quest_jar/quest.jar
Has anyone solved this error before?
I think you need to specify a package name for both spark-submit and your object.
For instance:
spark-submit \
--class com.my.package.hello \
/home/hdfs/IdeaProjects/untitled/out/artifacts/quest_jar/quest.jar
and
package com.my.package
import org.apache.spark.sql.SparkSession
import org.apache.spark.{SparkConf, SparkContext}
object hello {
...
}
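As a quick sanity check (a sketch, using the jar path from your spark-submit command), you can list what actually got packaged and confirm the fully qualified name to pass to --class:
jar tf /home/hdfs/IdeaProjects/untitled/out/artifacts/quest_jar/quest.jar | grep -i hello
# you should see something like com/my/package/hello.class once the package is added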
I want to try Spark SQL. At first I used bin/spark-shell, inserting this code:
val sqlcontext=new org.apache.spark.sql.SQLContext(sc)
val data=sc.textFile("hdfs://localhost:9000/cars.csv")
val mapr=data.map (p => p.split(','))
val MyMatchRDD=mapr.map(p =>MyMatch(p(0).toString(),p(1).toString(),p(2).toString(),
p(3).toString(),p(4).toString(),p(5).toString(),p(6).toString(),p(7).toString(),
p(8).toString()))
import sqlcontext.implicits._
val personDF=MyMatchRDD.toDF()
personDF.registerTempTable("Person")
val res = sqlcontext.sql("SELECT * FROM Person")
res.collect().foreach(println)
I didn't get any issues; everything worked.
But when I used the Scala IDE, I used the following in my pom file (Maven):
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-catalyst_2.10</artifactId>
<version>1.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.3.0</version>
</dependency>
and I used the same code:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.slf4j.Logger
import org.slf4j.LoggerFactory;
object SparkSQL {
case class MyMatch( col1: String, col2: String,col3: String, col4 :String ,col5: String,
col6: String,col7 :String ,col8: String,
col9: String)
def main(args:Array[String]) {
val sparkConf = new SparkConf().setAppName("HiveFromSpark").setMaster("local")
val sc = new SparkContext(sparkConf)
val sqlcontext=new org.apache.spark.sql.SQLContext(sc)
val data=sc.textFile("hdfs://localhost:9000/cars.csv")
val mapr=data.map (p => p.split(','))
val MyMatchRDD=mapr.map(p =>MyMatch(p(0).toString(),p(1).toString(),p(2).toString(),p(3).toString(),
p(4).toString(),p(5).toString(),p(6).toString(),p(7).toString(),
p(8).toString()) )
import sqlcontext.implicits._
val personDF=MyMatchRDD.toDF()
personDF.registerTempTable("Person")
val res = sqlcontext.sql("SELECT * FROM Person")
res.collect().foreach(println)
}
}
I got this issue:
Exception in thread "main"
scala.reflect.internal.MissingRequirementError: class
org.apache.spark.sql.catalyst.ScalaReflection in JavaMirror with
primordial classloader with boot classpath
[D:\scala-SDK-4.4.1-vfinal-2.11-win32.win32.x86_64\eclipse\plugins\org.scala-
Thanks in advance for your help
You are using the wrong Scala version - this Spark build is compiled with Scala 2.10. Check your runtime and compiler Scala versions.
Also, why are you using such old dependencies? Spark is at version 2.0.2 right now, with Scala 2.11.
Recommended actions:
(optional) Change <version>1.3.0</version> to <version>2.0.2</version>, as sketched below
In your Scala compiler, change the version to 2.11 (if you upgrade to Spark 2) or 2.10 (if you stay on Spark 1)
Make sure you have the proper Scala version installed on your machine - 2.11 in the case of Spark 2, 2.10 in the case of Spark 1. You can check the Scala version by typing scala -version in a console
Make sure your Scala IDE supports the Scala version that was chosen
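A sketch of what the Maven dependencies could look like after the upgrade (Spark 2.0.2 artifacts built for Scala 2.11; spark-catalyst is pulled in transitively by spark-sql, so it no longer needs to be declared):
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.0.2</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.0.2</version>
</dependency>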
I'm trying to run some examples in Apache Spark to learn more about it, but when I try to do it (in spark-shell) I'm receiving the error:
java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
Here's the full execution and the error trace. I hope you can help me.
pcitbu#pcitbumint /usr/spark/spark-2.0.1-bin-hadoop2.7 $ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/10/25 09:52:38 WARN SparkConf:
SPARK_WORKER_INSTANCES was detected (set to '3').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with --num-executors to specify the number of executors
- Or set SPARK_EXECUTOR_INSTANCES
- spark.executor.instances to configure the number of instances in the spark config.
16/10/25 09:52:38 WARN Utils: Your hostname, pcitbumint resolves to a loopback address: 127.0.1.1; using 192.168.0.119 instead (on interface ens33)
16/10/25 09:52:38 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/10/25 09:52:38 WARN SparkContext: Use an existing SparkContext, some configuration may not take effect.
Spark context Web UI available at http://192.168.0.119:4040
Spark context available as 'sc' (master = local[*], app id = local-1477381958561).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.0.1
/_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_101)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val file = sc.textFile("README.md")
file: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:24
scala> val counts = file.flatMap(line => line.split(" "))
counts: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at flatMap at <console>:26
scala> .map(word => (word, 1))
res0: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at map at <console>:29
scala> .reduceByKey(_ + _)
java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
at org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:66)
at org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:31)
at org.apache.hadoop.hdfs.DistributedFileSystem.<clinit>(DistributedFileSystem.java:116)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1440)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:563)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:291)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:992)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:992)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:146)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:328)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:328)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:327)
... 48 elided
Please check your Hadoop version, more particularly the version of hadoop-common-x.x.x.jar.
A couple of points:
Did you build this Spark source yourself? If yes, check the Hadoop version you built it with.
If you have a pre-built version, check the version of hadoop-common inside your install dir, i.e. /usr/spark/spark-2.0.1-bin-hadoop2.7, as in the sketch below.
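A minimal way to check (a sketch, using the install dir from your prompt; exact jar locations can differ between Spark builds):
# Hadoop client version installed on the machine
hadoop version
# hadoop-common jar bundled with the pre-built Spark
ls /usr/spark/spark-2.0.1-bin-hadoop2.7/jars/ | grep hadoop-common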