Sbt (new version 1.0.4) assembly failure - Eclipse

I have been trying to build a fat jar for some time now. I have assembly.sbt in the project folder, and it looks like this:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
and my build.sbt looks like this:
name := "cool"
version := "0.1"
scalaVersion := "2.11.8"
resolvers += "Hortonworks Repository" at
"http://repo.hortonworks.com/content/repositories/releases/"
resolvers += "Hortonworks Jetty Maven Repository" at
"http://repo.hortonworks.com/content/repositories/jetty-hadoop/"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-streaming_2.10" % "1.6.1.2.4.2.0-258" %
"provided",
"org.apache.spark" % "spark-streaming-kafka-assembly_2.10" %
"1.6.1.2.4.2.0-258"
)
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
I get the error below:
assemblyMergeStrategy in assembly := {
^
C:\Users\sreer\Desktop\workspace\cool\build.sbt:14: error: not found:
value assembly
assemblyMergeStrategy in assembly := {
^
C:\Users\sreer\Desktop\workspace\cool\build.sbt:15: error: not found:
value PathList
case PathList("META-INF", xs # _*) => MergeStrategy.discard
^
C:\Users\sreer\Desktop\workspace\cool\build.sbt:15: error: star patterns
must correspond with varargs parameters
case PathList("META-INF", xs # _*) => MergeStrategy.discard
^
C:\Users\sreer\Desktop\workspace\cool\build.sbt:15: error: not found:
value MergeStrategy
case PathList("META-INF", xs # _*) => MergeStrategy.discard
^
C:\Users\sreer\Desktop\workspace\cool\build.sbt:16: error: not found:
value MergeStrategy
case x => MergeStrategy.first
^
[error] Type error in expression
I get this type error, and it seems sbt won't recognize keys like "assemblyMergeStrategy". I use the new sbt version 1.0.4 and the latest version of the Scala IDE for Eclipse.
I have tried changing the sbt version, still with no result, went through the whole document at https://github.com/sbt/sbt-assembly, and made sure there were no typos. Suggestions in other threads weren't of much help (mostly the questions are about older versions of sbt). If someone could guide me, that would be very helpful. Thanks.
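A layout that typically makes the assembly keys resolve on sbt 1.x (a sketch of the usual checklist, not a confirmed fix for this exact setup): both files below must sit under project/ in the build's base directory, and sbt must be restarted or reloaded afterwards.
// project/build.properties -- pins the launcher to the intended sbt version
sbt.version=1.0.4
// project/assembly.sbt -- the plugin goes on the build classpath, not in build.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
Running sbt assembly from a plain terminal, rather than through the Eclipse integration, also rules the IDE out as the cause.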

Related

Assembly scala project causes deduplicate errors

I'm trying to assemble my Scala project and can't get rid of some deduplicate errors.
Here is the problematic output:
> [error] 2 errors were encountered during merge [error] stack trace is
> suppressed; run 'last
> ProjectRef(uri("https://hyehezkel#fs-bitbucket.fsd.forescout.com/scm/~hyehezkel/classification_common.git#test_branch"),
> "global") / assembly' for the full output [error]
> (ProjectRef(uri("https://hyehezkel#fs-bitbucket.fsd.forescout.com/scm/~hyehezkel/classification_common.git#test_branch"),
> "global") / assembly) deduplicate: different file contents found in
> the following: [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-buffer\4.1.42.Final\netty-buffer-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-codec\4.1.42.Final\netty-codec-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-common\4.1.42.Final\netty-common-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-handler\4.1.42.Final\netty-handler-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-resolver\4.1.42.Final\netty-resolver-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-transport-native-epoll\4.1.42.Final\netty-transport-native-epoll-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-transport-native-unix-common\4.1.42.Final\netty-transport-native-unix-common-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\io\netty\netty-transport\4.1.42.Final\netty-transport-4.1.42.Final.jar:META-INF/io.netty.versions.properties
> [error] deduplicate: different file contents found in the following:
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\core\jackson-annotations\2.10.1\jackson-annotations-2.10.1.jar:module-info.class
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\core\jackson-core\2.10.1\jackson-core-2.10.1.jar:module-info.class
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\core\jackson-databind\2.10.1\jackson-databind-2.10.1.jar:module-info.class
> [error]
> C:\Users\hyehezkel\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\fasterxml\jackson\dataformat\jackson-dataformat-csv\2.10.0\jackson-dataformat-csv-2.10.0.jar:module-info.class
I have read the following article but didn't manage to solve it:
https://index.scala-lang.org/sbt/sbt-assembly/sbt-assembly/0.14.5?target=_2.12_1.0
This is my plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")
And this is my build.sbt:
import sbt.Keys.{dependencyOverrides, libraryDependencies, mappings}
import sbtassembly.AssemblyPlugin.assemblySettings._
name := "classification_endpoint_discovery"
version := "0.1"
organization in ThisBuild := "com.forescout"
scalaVersion in ThisBuild := "2.13.1"
updateOptions := updateOptions.value.withCachedResolution(true)
//classpathTypes += "maven-plugin"
exportJars := true
logLevel := Level.Info
logLevel in assembly := Level.Debug
lazy val commonProject = RootProject(uri("https://hyehezkel@fs-bitbucket.fsd.forescout.com/scm/~hyehezkel/classification_common.git#test_branch"))
lazy val global = project
.in(file("."))
.settings(settings)
.enablePlugins(AssemblyPlugin)
// .disablePlugins(AssemblyPlugin)
.aggregate(
commonProject,
`endpoint-discovery`
)
lazy val `endpoint-discovery` = project
.settings(
name := "endpoint-discovery",
settings,
assemblySettings,
assemblyJarName in assembly := "endpoint-discovery.jar",
assemblyJarName in assemblyPackageDependency := "endpoint-discovery-dep.jar",
libraryDependencies += dependencies.postgresql,
libraryDependencies += "com.lihaoyi" %% "ujson" % "0.7.5",
libraryDependencies += "com.lihaoyi" %% "requests" % "0.2.0",
libraryDependencies += dependencies.`deepLearning4j-core`,
libraryDependencies += dependencies.`deeplearning4j-nn`,
libraryDependencies += dependencies.`nd4j-native-platform`,
excludeDependencies += "commons-logging" % "commons-logging"
// dependencyOverrides += "org.slf4j" % "slf4j-api" % "1.7.5",
// dependencyOverrides += "org.slf4j" % "slf4j-simple" % "1.7.5",
)
.dependsOn(commonProject)
.enablePlugins(AssemblyPlugin)
lazy val dependencies =
new {
val deepLearning4jV = "1.0.0-beta4"
val postgresqlV = "9.1-901.jdbc4"
val `deepLearning4j-core` = "org.deeplearning4j" % "deeplearning4j-core" % deepLearning4jV
val `deeplearning4j-nn` = "org.deeplearning4j" % "deeplearning4j-nn" % deepLearning4jV
val `nd4j-native-platform` = "org.nd4j" % "nd4j-native-platform" % deepLearning4jV
val postgresql = "postgresql" % "postgresql" % postgresqlV
}
// SETTINGS
lazy val settings =
commonSettings
lazy val compilerOptions = Seq(
"-unchecked",
"-feature",
"-language:existentials",
"-language:higherKinds",
"-language:implicitConversions",
"-language:postfixOps",
"-deprecation",
"-encoding",
"utf8"
)
lazy val commonSettings = Seq(
scalacOptions ++= compilerOptions
)
lazy val assemblySettings = Seq(
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false, includeDependency = false),
assemblyMergeStrategy in assembly := {
case PathList("META-INF", "io.netty.versions.properties", xs # _*) => MergeStrategy.singleOrError
case "module-info.class" => MergeStrategy.singleOrError
case PathList("org", "xmlpull", xs # _*) => MergeStrategy.discard
case PathList("org", "nd4j", xs # _*) => MergeStrategy.first
case PathList("org", "bytedeco", xs # _*) => MergeStrategy.first
case PathList("org.bytedeco", xs # _*) => MergeStrategy.first
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case "XmlPullParser.class" => MergeStrategy.discard
case "Nd4jBase64.class" => MergeStrategy.discard
case "XmlPullParserException.class" => MergeStrategy.discard
// case n if n.startsWith("rootdoc.txt") => MergeStrategy.discard
// case n if n.startsWith("readme.html") => MergeStrategy.discard
// case n if n.startsWith("readme.txt") => MergeStrategy.discard
case n if n.startsWith("library.properties") => MergeStrategy.discard
case n if n.startsWith("license.html") => MergeStrategy.discard
case n if n.startsWith("about.html") => MergeStrategy.discard
// case _ => MergeStrategy.first
case x =>
val oldStrategy = (assemblyMergeStrategy in assembly).value
oldStrategy(x)
}
)
I have tried many merge strategies but nothing works.
What am I missing here?
Any advice?
For META-INF/io.netty.versions.properties you have:
case PathList("META-INF", "io.netty.versions.properties", xs @ _*) => MergeStrategy.singleOrError
which says it will error out if there is more than one file with this name.
Try MergeStrategy.first for these files instead.
module-info.class
These files are only relevant for the Java 9 module system. Usually, you can just discard them:
case "module-info.class" => MergeStrategy.discard

Why does Spark application fail with "ClassNotFoundException: Failed to find data source: jdbc" as uber-jar with sbt assembly?

I'm trying to assemble a Spark application using sbt 1.0.4 with sbt-assembly 0.14.6.
The Spark application works fine when launched from IntelliJ IDEA or spark-submit, but if I run the assembled uber-jar from the command line (cmd in Windows 10):
java -Xmx1024m -jar my-app.jar
I get the following exception:
Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: jdbc. Please find packages at http://spark.apache.org/third-party-projects.html
The Spark application looks as follows.
package spark.main
import java.util.Properties
import org.apache.spark.sql.SparkSession
object Main {
def main(args: Array[String]) {
val connectionProperties = new Properties()
connectionProperties.put("user","postgres")
connectionProperties.put("password","postgres")
connectionProperties.put("driver", "org.postgresql.Driver")
val testTable = "test_tbl"
val spark = SparkSession.builder()
.appName("Postgres Test")
.master("local[*]")
.config("spark.hadoop.fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
.config("spark.sql.warehouse.dir", System.getProperty("java.io.tmpdir") + "swd")
.getOrCreate()
val dfPg = spark.sqlContext.read.
jdbc("jdbc:postgresql://localhost/testdb",testTable,connectionProperties)
dfPg.show()
}
}
The following is build.sbt.
name := "apache-spark-scala"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.8"
mainClass in Compile := Some("spark.main.Main")
libraryDependencies ++= {
val sparkVer = "2.1.1"
val postgreVer = "42.0.0"
val cassandraConVer = "2.0.2"
val configVer = "1.3.1"
val logbackVer = "1.7.25"
val loggingVer = "3.7.2"
val commonsCodecVer = "1.10"
Seq(
"org.apache.spark" %% "spark-sql" % sparkVer,
"org.apache.spark" %% "spark-core" % sparkVer,
"com.datastax.spark" %% "spark-cassandra-connector" % cassandraConVer,
"org.postgresql" % "postgresql" % postgreVer,
"com.typesafe" % "config" % configVer,
"commons-codec" % "commons-codec" % commonsCodecVer,
"com.typesafe.scala-logging" %% "scala-logging" % loggingVer,
"org.slf4j" % "slf4j-api" % logbackVer
)
}
dependencyOverrides ++= Seq(
"io.netty" % "netty-all" % "4.0.42.Final",
"commons-net" % "commons-net" % "2.2",
"com.google.guava" % "guava" % "14.0.1"
)
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
Does anyone have any idea why?
[UPDATE]
Configuration taken from the official GitHub repository did the trick:
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) =>
xs map {_.toLowerCase} match {
case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) =>
MergeStrategy.discard
case ps @ (x :: xs) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") =>
MergeStrategy.discard
case "services" :: _ => MergeStrategy.filterDistinctLines
case _ => MergeStrategy.first
}
case _ => MergeStrategy.first
}
The question is almost a duplicate of Why does format("kafka") fail with "Failed to find data source: kafka." with uber-jar?, with the difference that the other OP used Apache Maven to create an uber-jar and here it's about sbt (the sbt-assembly plugin's configuration, to be precise).
The short name (aka alias) of a data source, e.g. jdbc or kafka, is only available if the corresponding META-INF/services/org.apache.spark.sql.sources.DataSourceRegister file registers a DataSourceRegister.
For the jdbc alias to work, Spark SQL uses META-INF/services/org.apache.spark.sql.sources.DataSourceRegister with the following entry (there are others):
org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider
That's what ties the jdbc alias to the data source.
And you've excluded it from the uber-jar with the following assemblyMergeStrategy:
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
Note the case PathList("META-INF", xs @ _*), which you simply MergeStrategy.discard. That's the root cause.
Just to check that the "infrastructure" is available and that you can use the jdbc data source by its fully-qualified name (not the alias), try this:
spark.read.
format("org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider").
load("jdbc:postgresql://localhost/testdb")
You will see other problems due to missing options like url, but...we're digressing.
A solution is to MergeStrategy.concat all META-INF/services/org.apache.spark.sql.sources.DataSourceRegister files (that would create an uber-jar with all data sources, including the jdbc data source).
case "META-INF/services/org.apache.spark.sql.sources.DataSourceRegister" => MergeStrategy.concat

SBT assembly fails

I am running a Spark job through IntelliJ. The job executes and gives me output. I need to take this job to a server as a jar file and run it there, but when I try to do sbt assembly it throws the error below:
[error] Not a valid command: assembly
[error] Not a valid project ID: assembly
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: assembly
[error] assembly
My sbt version is 0.13.8.
Below is my build.sbt file:
import sbt._, Keys._
name := "mobilewalla"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq("org.apache.spark" %% "spark-core" % "2.0.0",
"org.apache.spark" %% "spark-sql" % "2.0.0")
I added a file assembly.sbt under the project dir. It contains:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")
What am I missing here?
Add these lines to your build.sbt:
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
mainClass in assembly := Some("com.SparkMain")
resolvers += "spray repo" at "http://repo.spray.io"
assemblyJarName in assembly := "streaming-api.jar"
and include these lines in your plugins.sbt file:
addSbtPlugin("io.spray" % "sbt-revolver" % "0.7.2")
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")
To assemble multiple jars into one, you need to add the below plugin in plugins.sbt under the project directory.
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")
If you need to customize the assembled jar to trigger a specific main class, take this example assembly.sbt:
import sbtassembly.Plugin.AssemblyKeys._
Project.inConfig(Compile)(baseAssemblySettings)
mainClass in (Compile, assembly) := Some("<main application name with package path>")
jarName in (Compile, assembly) := s"${name.value}-${version.value}-dist.jar"
// below is the merge strategy controlling which files to exclude or include
mergeStrategy in (Compile, assembly) <<= (mergeStrategy in (Compile, assembly)) {
(old) => {
case PathList(ps @ _*) if ps.last endsWith ".html" => MergeStrategy.first
case "META-INF/MANIFEST.MF" => MergeStrategy.discard
case x => old(x)
}
}
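Note that this example uses the pre-0.12 plugin API (sbtassembly.Plugin, <<=). On sbt 0.13 with sbt-assembly 0.14.x, a roughly equivalent build.sbt fragment would be (a sketch, not verified against the question's project):
mainClass in assembly := Some("<main application name with package path>")
assemblyJarName in assembly := s"${name.value}-${version.value}-dist.jar"
assemblyMergeStrategy in assembly := {
  case PathList(ps @ _*) if ps.last endsWith ".html" => MergeStrategy.first
  case "META-INF/MANIFEST.MF" => MergeStrategy.discard
  case x =>
    // fall back to the plugin's default strategy for everything else
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}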

Dependency issue with Scalding and Hadoop with sbt-assembly

I'm trying to build a fat jar with sbt for a simple Hadoop job, in an attempt to run it on Amazon EMR. However, when I run sbt assembly I get the following error:
[error] (*:assembly) deduplicate: different file contents found in the following:
[error] /Users/trenthauck/.ivy2/cache/org.mortbay.jetty/jsp-2.1/jars/jsp-2.1-6.1.14.jar:org/apache/jasper/compiler/Node$ChildInfo.class
[error] /Users/trenthauck/.ivy2/cache/tomcat/jasper-compiler/jars/jasper-compiler-5.5.12.jar:org/apache/jasper/compiler/Node$ChildInfo.class
[error] Total time: 10 s, completed Sep 14, 2013 4:49:24 PM
I attempted to follow the suggestion at https://groups.google.com/forum/#!topic/simple-build-tool/tzkq5TioIqM but it didn't work.
My build.sbt looks like:
import AssemblyKeys._
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
{
case PathList("org", "apache", "jasper", xs # _*) => MergeStrategy.last
case x => old(x)
}
}
assemblySettings
name := "Scaling Play"
version := "SNAPSHOT-0.1"
scalaVersion := "2.10.1"
libraryDependencies ++= Seq(
"com.twitter" % "scalding-core_2.10" % "0.8.8",
"com.twitter" % "scalding-args_2.10" % "0.8.8",
"com.twitter" % "scalding-date_2.10" % "0.8.8",
"org.apache.hadoop" % "hadoop-core" % "1.0.0"
)
The order of the directives is important. You update the merge strategy, only to overwrite it again a line later when assemblySettings is applied. Defining assemblySettings first and then updating the merge strategy will solve it.
The updated build.sbt:
import AssemblyKeys._
assemblySettings
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
{
case PathList("org", "apache", "jasper", xs # _*) => MergeStrategy.last
case x => old(x)
}
}
…
After that, you will discover that there are a lot more conflicting classes and other files. In this case you will need the following merges:
case PathList("org", "apache", xs # _*) => MergeStrategy.last
case PathList("javax", "servlet", xs # _*) => MergeStrategy.last
case PathList("com", "esotericsoftware", xs # _*) => MergeStrategy.last
case PathList("project.clj") => MergeStrategy.last
case PathList("overview.html") => MergeStrategy.last
case x => old(x)
Note that using merge strategies for class files may cause problems due to incompatible versions of that specific class. If that is the case, your problem is larger, because the dependencies are incompatible with each other. You then have to resort to removing the dependency and finding or making a compatible version.
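If the conflicting classes really are incompatible, an alternative to merge strategies is to cut the duplicate at the dependency level. A sketch, assuming the transitively pulled-in jasper-compiler copy can be spared, replacing the hadoop-core entry with:
// drop the tomcat jasper-compiler artifact so only jsp-2.1's copy remains
"org.apache.hadoop" % "hadoop-core" % "1.0.0" exclude("tomcat", "jasper-compiler")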

assembly-merge-strategy issues using sbt-assembly

I am trying to convert a Scala project into a deployable fat jar using sbt-assembly. When I run my assembly task in sbt I am getting the following error:
Merging 'org/apache/commons/logging/impl/SimpleLog.class' with strategy 'deduplicate'
:assembly: deduplicate: different file contents found in the following:
[error] /Users/home/.ivy2/cache/commons-logging/commons-logging/jars/commons-logging-1.1.1.jar:org/apache/commons/logging/impl/SimpleLog.class
[error] /Users/home/.ivy2/cache/org.slf4j/jcl-over-slf4j/jars/jcl-over-slf4j-1.6.4.jar:org/apache/commons/logging/impl/SimpleLog.class
Now from the sbt-assembly documentation:
If multiple files share the same relative path (e.g. a resource named
application.conf in multiple dependency JARs), the default strategy is
to verify that all candidates have the same contents and error out
otherwise. This behavior can be configured on a per-path basis using
either one of the following built-in strategies or writing a custom one:
MergeStrategy.deduplicate is the default described above
MergeStrategy.first picks the first of the matching files in classpath order
MergeStrategy.last picks the last one
MergeStrategy.singleOrError bails out with an error message on conflict
MergeStrategy.concat simply concatenates all matching files and includes the result
MergeStrategy.filterDistinctLines also concatenates, but leaves out duplicates along the way
MergeStrategy.rename renames the files originating from jar files
MergeStrategy.discard simply discards matching files
Going by this I setup my build.sbt as follows:
import sbt._
import Keys._
import sbtassembly.Plugin._
import AssemblyKeys._
name := "my-project"
version := "0.1"
scalaVersion := "2.9.2"
crossScalaVersions := Seq("2.9.1","2.9.2")
//assemblySettings
seq(assemblySettings: _*)
resolvers ++= Seq(
"Typesafe Releases Repository" at "http://repo.typesafe.com/typesafe/releases/",
"Typesafe Snapshots Repository" at "http://repo.typesafe.com/typesafe/snapshots/",
"Sonatype Repository" at "http://oss.sonatype.org/content/repositories/releases/"
)
libraryDependencies ++= Seq(
"org.scalatest" %% "scalatest" % "1.6.1" % "test",
"org.clapper" %% "grizzled-slf4j" % "0.6.10",
"org.scalaz" % "scalaz-core_2.9.2" % "7.0.0-M7",
"net.databinder.dispatch" %% "dispatch-core" % "0.9.5"
)
scalacOptions += "-deprecation"
mainClass in assembly := Some("com.my.main.class")
test in assembly := {}
mergeStrategy in assembly := mergeStrategy.first
In the last line of the build.sbt, I have:
mergeStrategy in assembly := mergeStrategy.first
Now, when I run SBT, I get the following error:
error: value first is not a member of sbt.SettingKey[String => sbtassembly.Plugin.MergeStrategy]
mergeStrategy in assembly := mergeStrategy.first
Can somebody point out what I might be doing wrong here?
Thanks
As of the current version 0.11.2 (2014-03-25), the way to define the merge strategy is different.
This is documented here, the relevant part is:
NOTE:
mergeStrategy in assembly expects a function, you can't do
mergeStrategy in assembly := MergeStrategy.first
The new way is (copied from the same source):
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
{
case PathList("javax", "servlet", xs # _*) => MergeStrategy.first
case PathList(ps # _*) if ps.last endsWith ".html" => MergeStrategy.first
case "application.conf" => MergeStrategy.concat
case "unwanted.txt" => MergeStrategy.discard
case x => old(x)
}
}
This possibly applies to earlier versions as well; I don't know exactly when it changed.
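For later releases, where <<= was eventually deprecated and removed, the same delegation is usually written with := and .value (a sketch mirroring the block above):
assemblyMergeStrategy in assembly := {
  case PathList("javax", "servlet", xs @ _*) => MergeStrategy.first
  case PathList(ps @ _*) if ps.last endsWith ".html" => MergeStrategy.first
  case "application.conf" => MergeStrategy.concat
  case "unwanted.txt" => MergeStrategy.discard
  case x =>
    // delegate everything else to the previous (default) strategy
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}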
I think it should be MergeStrategy.first with a capital M, so mergeStrategy in assembly := MergeStrategy.first.
This is the proper way to merge most common Java/Scala projects.
It takes care of META-INF and classes.
The service registrations in META-INF are also taken care of.
assemblyMergeStrategy in assembly := {
case x if Assembly.isConfigFile(x) =>
MergeStrategy.concat
case PathList(ps @ _*) if Assembly.isReadme(ps.last) || Assembly.isLicenseFile(ps.last) =>
MergeStrategy.rename
case PathList("META-INF", xs # _*) =>
(xs map {_.toLowerCase}) match {
case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) =>
MergeStrategy.discard
case ps @ (x :: xs) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") =>
MergeStrategy.discard
case "plexus" :: xs =>
MergeStrategy.discard
case "services" :: xs =>
MergeStrategy.filterDistinctLines
case ("spring.schemas" :: Nil) | ("spring.handlers" :: Nil) =>
MergeStrategy.filterDistinctLines
case _ => MergeStrategy.first
}
case _ => MergeStrategy.first
}
I have just set up a little sbt project that needs to rewire some merge strategies, and found the answer a little outdated. Let me add my working code for these versions (as of 4-7-2015):
sbt 0.13.8
scala 2.11.6
assembly 0.13.0
mergeStrategy in assembly := {
case x if x.startsWith("META-INF") => MergeStrategy.discard // Bumf
case x if x.endsWith(".html") => MergeStrategy.discard // More bumf
case x if x.contains("slf4j-api") => MergeStrategy.last
case x if x.contains("org/cyberneko/html") => MergeStrategy.first
case PathList("com", "esotericsoftware", xs#_ *) => MergeStrategy.last // For Log$Logger.class
case x =>
val oldStrategy = (mergeStrategy in assembly).value
oldStrategy(x)
}
For the new sbt version (sbt 0.13.11), I was getting the error for slf4j; for the time being I took the easy way out. Please also check the answer at Scala SBT Assembly cannot merge due to de-duplication error in StaticLoggerBinder.class, where the sbt-dependency-graph tool is mentioned, which is pretty handy for doing this manually.
assemblyMergeStrategy in assembly <<= (assemblyMergeStrategy in assembly) {
(old) => {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
}
Quick update: mergeStrategy is deprecated. Use assemblyMergeStrategy. Apart from that, the earlier responses are still solid.
Add the following to build.sbt to add Kafka as a source or destination:
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
//To add Kafka as source
case "META-INF/services/org.apache.spark.sql.sources.DataSourceRegister" =>
MergeStrategy.concat
case x => MergeStrategy.first
}