spark-shell dependencies, translate from sbt - scala

While checking how to use the Cassandra connector, the documentation instructs me to add this to the sbt file:
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M1"
In general, is there an obvious, straightforward way to translate this into the corresponding:
spark-shell --packages "field1":"field2"
I've tried:
spark-shell --packages "com.datastax.spark":"spark-cassandra-connector"
and a few other things but that doesn't work.

I believe the format is --packages "groupId:artifactId:version". If you have multiple packages, you can comma-separate them:
--packages "groupId1:artifactId1:version1,groupId2:artifactId2:version2"
In sbt
val appDependencies = Seq(
"com.datastax.spark" % "spark-cassandra-connector_2.10" % "1.6.0-M1"
)
and
val appDependencies = Seq(
"com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M1"
)
are identical.
If you use the %% syntax (after the groupId) in sbt, it automatically picks the artifact for your Scala version, so with Scala 2.10 it changes spark-cassandra-connector to spark-cassandra-connector_2.10. I'm not sure spark-shell does this for you, so you might need to ask for the Scala 2.10 version of your artifact explicitly, like this: --packages "com.datastax.spark:spark-cassandra-connector_2.10:1.6.0-M1"

The version should be specified.
spark-shell --packages "com.datastax.spark":"spark-cassandra-connector_2.11":"2.0.0-M3"
You can find version information from http://search.maven.org/#search%7Cga%7C1%7Cspark-cassandra-connector .

Follow the instructions as posted on the Spark Packages Website
To use the Spark-Shell
$SPARK_HOME/bin/spark-shell --packages datastax:spark-cassandra-connector:1.6.0-M1-s_2.10
There are also instructions for a variety of build systems
SBT
resolvers += "Spark Packages Repo" at "http://dl.bintray.com/spark-packages/maven"
libraryDependencies += "datastax" % "spark-cassandra-connector" % "1.6.0-M1-s_2.11"
And Maven
<dependencies>
<!-- list of dependencies -->
<dependency>
<groupId>datastax</groupId>
<artifactId>spark-cassandra-connector</artifactId>
<version>1.6.0-M1-s_2.11</version>
</dependency>
</dependencies>
<repositories>
<!-- list of other repositories -->
<repository>
<id>SparkPackagesRepo</id>
<url>http://dl.bintray.com/spark-packages/maven</url>
</repository>
</repositories>

Related

Spark ignoring package jars included in the configuration of my Spark Session

I keep running into this error: java.lang.ClassNotFoundException: Failed to find data source: iceberg. Please find packages at https://spark.apache.org/third-party-projects.html
I am trying to include the org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:1.1.0 package as part of my Spark code. The reason is that I want to be able to write unit tests locally. I have tried several things:
Include the package as part of my SparkSession builder:
val conf = new SparkConf()
conf.set("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:1.1.0")
val sparkSession: SparkSession =
SparkSession
.builder()
.appName(getClass.getSimpleName)
.config(conf = conf)
// ... the rest of my config
.master("local[*]").getOrCreate()
and it does not work; I get the same error. I also tried setting the configuration string directly in the SparkSession builder, and that didn't work either.
Downloading the jar myself. I really don't want to do this; I want it to be automated. But even then, when I point "spark.jars" at the downloaded jar, it cannot find it for some reason.
Can anybody help me figure this out?
You can create an uber/fat jar and put all your dependencies in that jar.
Let's say you want to use Iceberg in your Spark application.
Create a pom.xml file and add the dependency in the dependencies section.
<dependencies>
<dependency>
<groupId>org.apache.iceberg</groupId>
<artifactId>iceberg-spark-runtime-3.2_2.12</artifactId>
<version>1.1.0</version>
</dependency>
</dependencies>
Building it with a plugin such as maven-shade-plugin or maven-assembly-plugin will create a fat jar with that dependency baked into it.
You can deploy that jar via spark-submit and the dependent libraries will be picked up automatically.
It seems spark.jars.packages is only read when spark-shell starts up. That means it can be changed within the spark-shell session via SparkSession or SparkConf, but the change will not be processed or loaded.
For a self-contained Scala application, you would typically add the following dependencies in build.sbt (a minimal application skeleton is sketched after the snippet):
libraryDependencies ++= Seq(
"org.mongodb.spark" %% "mongo-spark-connector" % "10.0.5",
"org.apache.spark" %% "spark-core" % "3.0.2",
"org.apache.spark" %% "spark-sql" % "3.0.2"
)
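A minimal skeleton of such a self-contained application (a sketch; the app name is a placeholder, and the actual use of the MongoDB connector is left as a comment):
import org.apache.spark.sql.SparkSession

object SelfContainedApp {
  def main(args: Array[String]): Unit = {
    // Build a local SparkSession; the dependencies declared in build.sbt above
    // (mongo-spark-connector, spark-core, spark-sql) are on the classpath at runtime.
    val spark = SparkSession.builder()
      .appName("SelfContainedApp") // placeholder name
      .master("local[*]")
      .getOrCreate()

    // ... read/write through the connector here ...

    spark.stop()
  }
}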

Why do I have to add jars to class path to run the jar file when I already have the dependency added in my project?

I am working on a Scala project in IntelliJ with sbt as my build tool. I started working with sbt builds recently.
This is my project structure:
This is my build.sbt file:
name := "AnalyzeTables"
version := "0.1"
scalaVersion := "2.11.8"
// https://mvnrepository.com/artifact/org.postgresql/postgresql
libraryDependencies += "org.postgresql" % "postgresql" % "42.1.4"
// https://mvnrepository.com/artifact/commons-codec/commons-codec
libraryDependencies += "commons-codec" % "commons-codec" % "1.13"
// https://mvnrepository.com/artifact/org.apache.commons/commons-lang3
libraryDependencies += "org.apache.commons" % "commons-lang3" % "3.8.1"
// https://mvnrepository.com/artifact/log4j/log4j
//libraryDependencies += "log4j" % "log4j" % "1.2.17"
I have Class.forName("org.postgresql.Driver") in my code to connect to the database and run queries. Along with that, I have password decryption and a logger added in the code.
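The database access is essentially plain JDBC, along these lines (a sketch; the object name, connection URL and credentials are placeholders, not values from the original project):
import java.sql.DriverManager

object AnalyzeTables { // illustrative object name
  def main(args: Array[String]): Unit = {
    // Load the PostgreSQL driver declared in build.sbt above.
    Class.forName("org.postgresql.Driver")
    // Placeholder connection details for illustration only.
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/mydb", "user", "password")
    try {
      val rs = conn.createStatement().executeQuery("SELECT 1")
      while (rs.next()) println(rs.getInt(1))
    } finally {
      conn.close()
    }
  }
}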
I am running the jar in the below format:
scala <jarname> <argument I use in my code>
The problem here is that if I just run it the way I mentioned above, I see a ClassNotFoundException for the Postgres driver. So I add it to the classpath and run it as below.
scala -cp /path/postgresql-42.1.4.jar <jarname> <argument I use in my code>
Now I get an exception for the logger. So I add it to the classpath again and it becomes:
scala -cp /path/postgresql-42.1.4.jar:/path/log4j-1.2.17.jar <jarname> <argument I use in my code>
Now I get an exception for commons-codec, so I add that as well:
scala -cp /path/postgresql-42.1.4.jar:/path/log4j-1.2.17.jar:/path/commons-codec-1.13.jar <jarname> <argument I use in my code>
Now the jar runs fine and I can see the result.
So I have the dependencies added to the build.sbt file. I also did the below operation:
project structure -> Modules -> Dependencies -> + -> jars -> Add all the missing jars that are giving problems
If I remove all the -cp parameters and run the jar with just scala <jarname> <argument>, it goes back to the ClassNotFoundException for the Postgres jar.
So what is the point of adding dependencies to the build.sbt file and then adding them to the classpath again?
Is there any setting I am missing, or am I looking at it the wrong way?
Edit:
After the suggestions, I created a new project, copied all the code into it, and added the plugin addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5") in a new plugins.sbt file which I created in the project/ directory, as shown in the image below.
I can see the plugin in the sbt-plugins directory, but when I build the jar once again and export it, it is still an 11 KB jar instead of a fat jar.
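For reference, this is how the sbt-assembly plugin described above is typically wired up and invoked (a sketch; the main class and jar names are illustrative, and the output path depends on the Scala version):
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

// build.sbt additions (illustrative names)
mainClass in assembly := Some("AnalyzeTables")
assemblyJarName in assembly := "AnalyzeTables-assembly.jar"

// Then run the `assembly` task from the sbt shell (not `package`, which only
// builds the thin jar); the fat jar is written under target/scala-2.11/.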

Spark typed aggregation throws exception? [duplicate]

The common problems when building and deploying Spark applications are:
java.lang.ClassNotFoundException.
object x is not a member of package y compilation errors.
java.lang.NoSuchMethodError
How these can be resolved?
Apache Spark's classpath is built dynamically (to accommodate per-application user code), which makes it vulnerable to such issues. user7337271's answer is correct, but there are some more concerns, depending on the cluster manager ("master") you're using.
First, a Spark application consists of these components (each one is a separate JVM, therefore potentially contains different classes in its classpath):
Driver: that's your application creating a SparkSession (or SparkContext) and connecting to a cluster manager to perform the actual work
Cluster Manager: serves as an "entry point" to the cluster, in charge of allocating executors for each application. There are several different types supported in Spark: standalone, YARN and Mesos, which we'll describe below.
Executors: these are the processes on the cluster nodes, performing the actual work (running Spark tasks)
The relationship between them is described in this diagram from Apache Spark's cluster mode overview:
Now - which classes should reside in each of these components?
This can be answered by the following diagram:
Let's parse that slowly:
Spark Code: these are Spark's libraries. They should exist in ALL three components, as they include the glue that lets Spark perform the communication between them. By the way - the Spark authors made a design decision to include code for ALL components in ALL components (e.g. to include code that should only run in the Executor in the driver too) to simplify this - so Spark's "fat jar" (in versions up to 1.6) or "archive" (in 2.0, details below) contains the necessary code for all components and should be available in all of them.
Driver-Only Code: this is user code that does not include anything that should be used on Executors, i.e. code that isn't used in any transformations on the RDD / DataFrame / Dataset. This does not necessarily have to be separated from the distributed user code, but it can be.
Distributed Code: this is user code that is compiled with the driver code, but also has to be executed on the Executors - everything the actual transformations use must be included in these jar(s).
Now that we got that straight, how do we get the classes to load correctly in each component, and what rules should they follow?
Spark Code: as previous answers state, you must use the same Scala and Spark versions in all components.
1.1 In Standalone mode, there's a "pre-existing" Spark installation to which applications (drivers) can connect. That means that all drivers must use that same Spark version running on the master and executors.
1.2 In YARN / Mesos, each application can use a different Spark version, but all components of the same application must use the same one. That means that if you used version X to compile and package your driver application, you should provide the same version when starting the SparkSession (e.g. via spark.yarn.archive or spark.yarn.jars parameters when using YARN). The jars / archive you provide should include all Spark dependencies (including transitive dependencies), and it will be shipped by the cluster manager to each executor when the application starts.
Driver Code: that's entirely up to you - driver code can be shipped as a bunch of jars or as a "fat jar", as long as it includes all Spark dependencies plus all the user code.
Distributed Code: in addition to being present on the Driver, this code must be shipped to executors (again, along with all of its transitive dependencies). This is done using the spark.jars parameter.
To summarize, here's a suggested approach to building and deploying a Spark Application (in this case - using YARN):
Create a library with your distributed code, package it both as a "regular" jar (with a .pom file describing its dependencies) and as a "fat jar" (with all of its transitive dependencies included).
Create a driver application, with compile-dependencies on your distributed code library and on Apache Spark (with a specific version)
Package the driver application into a fat jar to be deployed to driver
Pass the right version of your distributed code as the value of the spark.jars parameter when starting the SparkSession (as sketched after this list)
Pass the location of an archive file (e.g. gzip) containing all the jars under lib/ folder of the downloaded Spark binaries as the value of spark.yarn.archive
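A sketch of what the last two steps can look like in code (the paths below are hypothetical HDFS locations, not values from the answer; in practice these are often passed via spark-submit --conf instead):
import org.apache.spark.sql.SparkSession

// Hypothetical jar and archive locations for illustration only.
val spark = SparkSession.builder()
  .appName("my-driver-app")
  .config("spark.jars", "hdfs:///libs/distributed-code-assembly.jar")
  .config("spark.yarn.archive", "hdfs:///libs/spark-archive.zip")
  .getOrCreate()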
When building and deploying Spark applications all dependencies require compatible versions.
Scala version. All packages have to use the same major (2.10, 2.11, 2.12) Scala version.
Consider the following (incorrect) build.sbt:
name := "Simple Project"
version := "1.0"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % "2.0.1",
"org.apache.spark" % "spark-streaming_2.10" % "2.0.1",
"org.apache.bahir" % "spark-streaming-twitter_2.11" % "2.0.1"
)
We use spark-streaming for Scala 2.10 while the remaining packages are for Scala 2.11. A valid file could be
name := "Simple Project"
version := "1.0"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % "2.0.1",
"org.apache.spark" % "spark-streaming_2.11" % "2.0.1",
"org.apache.bahir" % "spark-streaming-twitter_2.11" % "2.0.1"
)
but it is better to specify the Scala version globally and use %% (which appends the Scala version for you):
name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.0.1",
"org.apache.spark" %% "spark-streaming" % "2.0.1",
"org.apache.bahir" %% "spark-streaming-twitter" % "2.0.1"
)
Similarly in Maven:
<project>
<groupId>com.example</groupId>
<artifactId>simple-project</artifactId>
<modelVersion>4.0.0</modelVersion>
<name>Simple Project</name>
<packaging>jar</packaging>
<version>1.0</version>
<properties>
<spark.version>2.0.1</spark.version>
</properties>
<dependencies>
<dependency> <!-- Spark dependency -->
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.bahir</groupId>
<artifactId>spark-streaming-twitter_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
</dependencies>
</project>
Spark version. All packages have to use the same major Spark version (1.6, 2.0, 2.1, ...).
Consider the following (incorrect) build.sbt:
name := "Simple Project"
version := "1.0"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % "1.6.1",
"org.apache.spark" % "spark-streaming_2.10" % "2.0.1",
"org.apache.bahir" % "spark-streaming-twitter_2.11" % "2.0.1"
)
We use spark-core 1.6 while the remaining components are on Spark 2.0. A valid file could be
name := "Simple Project"
version := "1.0"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % "2.0.1",
"org.apache.spark" % "spark-streaming_2.10" % "2.0.1",
"org.apache.bahir" % "spark-streaming-twitter_2.11" % "2.0.1"
)
but it is better to use a variable
(this file is still incorrect because it mixes Scala versions, as discussed above):
name := "Simple Project"
version := "1.0"
val sparkVersion = "2.0.1"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % sparkVersion,
"org.apache.spark" % "spark-streaming_2.10" % sparkVersion,
"org.apache.bahir" % "spark-streaming-twitter_2.11" % sparkVersion
)
Similarly in Maven:
<project>
<groupId>com.example</groupId>
<artifactId>simple-project</artifactId>
<modelVersion>4.0.0</modelVersion>
<name>Simple Project</name>
<packaging>jar</packaging>
<version>1.0</version>
<properties>
<spark.version>2.0.1</spark.version>
<scala.version>2.11</scala.version>
</properties>
<dependencies>
<dependency> <!-- Spark dependency -->
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.bahir</groupId>
<artifactId>spark-streaming-twitter_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
</dependencies>
</project>
The Spark version used in the Spark dependencies has to match the Spark version of the Spark installation. For example, if you use 1.6.1 on the cluster, you have to use 1.6.1 to build the jars. Minor version mismatches are not always accepted.
The Scala version used to build the jar has to match the Scala version used to build the deployed Spark. By default (downloadable binaries and default builds):
Spark 1.x -> Scala 2.10
Spark 2.x -> Scala 2.11
Additional packages that are not included in the fat jar should still be accessible on the worker nodes. There are a number of options, including:
--jars argument for spark-submit - to distribute local jar files.
--packages argument for spark-submit - to fetch dependencies from Maven repository.
When submitting from a cluster node you should include the application jar in --jars.
In addition to the very extensive answer already given by user7337271: if the problem results from missing external dependencies, you can build a jar with your dependencies using e.g. the Maven assembly plugin.
In that case, make sure to mark all the core Spark dependencies as "provided" in your build system and, as already noted, make sure they match your runtime Spark version.
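In sbt, that typically looks like this (a sketch reusing the versions from the examples above; only the non-Spark dependency ends up in the fat jar):
val sparkVersion = "2.0.1"
libraryDependencies ++= Seq(
  // Core Spark artifacts are supplied by the cluster at runtime, hence "provided".
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided",
  // External dependencies stay in the default compile scope so they get packaged.
  "org.apache.bahir" %% "spark-streaming-twitter" % sparkVersion
)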
The dependency classes of your application should be specified in the application-jar option of your launch command.
More details can be found in the Spark documentation.
Taken from the documentation:
application-jar: Path to a bundled jar including your application and
all dependencies. The URL must be globally visible inside of your
cluster, for instance, an hdfs:// path or a file:// path that is
present on all nodes
I think this problem should be solved with an assembly plugin.
You need to build a fat jar.
For example, in sbt:
add a file $PROJECT_ROOT/project/assembly.sbt with the line addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.0")
in build.sbt, add your libraries: libraryDependencies ++= Seq("com.some.company" %% "some-lib" % "1.0.0")
in the sbt console, run "assembly", then deploy the assembly jar
If you need more information, go to https://github.com/sbt/sbt-assembly
Add all the jar files from spark-2.4.0-bin-hadoop2.7\spark-2.4.0-bin-hadoop2.7\jars to the project. spark-2.4.0-bin-hadoop2.7 can be downloaded from https://spark.apache.org/downloads.html
I have the following build.sbt
lazy val root = (project in file(".")).
settings(
name := "spark-samples",
version := "1.0",
scalaVersion := "2.11.12",
mainClass in Compile := Some("StreamingExample")
)
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.4.0",
"org.apache.spark" %% "spark-streaming" % "2.4.0",
"org.apache.spark" %% "spark-sql" % "2.4.0",
"com.couchbase.client" %% "spark-connector" % "2.2.0"
)
// META-INF discarding
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs @ _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
I've created a fat jar of my application using the sbt-assembly plugin, but when running it with spark-submit it fails with the error:
java.lang.NoClassDefFoundError: rx/Completable$OnSubscribe
at com.couchbase.spark.connection.CouchbaseConnection.streamClient(CouchbaseConnection.scala:154)
I can see that the class exists in my fat jar:
jar tf target/scala-2.11/spark-samples-assembly-1.0.jar | grep 'Completable$OnSubscribe'
rx/Completable$OnSubscribe.class
Not sure what I am missing here; any clues?

How to exclude commons-logging from a scala/sbt/slf4j project?

My scala/sbt project uses grizzled-slf4j and logback. A third-party dependency uses Apache Commons Logging.
With Java/Maven, I would use jcl-over-slf4j and logback-classic so that I can use logback as the unified logging backend.
I would also eliminate the commons-logging dependency that the third-party lib would let sbt pull in. I do the following in Maven (which is recommended by http://www.slf4j.org/faq.html#excludingJCL):
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.1.1</version>
<scope>provided</scope>
</dependency>
And the question is, how to do the same with sbt?
Heiko's approach will probably work, but it will lead to none of the dependencies of the 3rd-party lib being downloaded. If you only want to exclude a specific one, use exclude:
libraryDependencies += "foo" % "bar" % "0.7.0" exclude("org.baz", "bam")
or
... excludeAll( ExclusionRule(organization = "org.baz") ) // does not work with generated poms!
For sbt 0.13.8 and above, you can also try the project-level dependency exclusion:
excludeDependencies += "commons-logging" % "commons-logging"
I met the same problem before and solved it by adding the dependency like this:
libraryDependencies += "foo" % "bar" % "0.7.0" exclude("commons-logging","commons-logging")
or
libraryDependencies += "foo" % "bar" % "0.7.0" excludeAll(ExclusionRule(organization = "commons-logging"))
Add intransitive to your 3rd-party library dependency, e.g.
libraryDependencies += "foo" %% "bar" % "1.2.3" intransitive