I have managed to run Mahout rowsimilarity on flat files in the following format:
item-id tag1 tag-2 tag3
This has to be run via the CLI, and the output is again flat files. I want to change this so that it reads data from MongoDB (I'm open to using other DBs too) and then dumps the output to the DB, from where it can be picked up by our system.
I've been researching this for the past few days and found the following:
I will have to write Scala code that uses RowSimilarity
Pass it an IndexedDataset object to process the data
Convert the output to the required format (JSON/CSV)
What I haven't figured out yet is how to import data from the DB into an IndexedDataset. I've also read about the RDD format, but I still can't figure out how to convert JSON data into an RDD that can be used by the RowSimilarity code.
tl;dr: How do I convert MongoDB data so that it can be processed by Mahout/Spark rowsimilarity?
Edit 1: I have found some code that converts Mongo data to an RDD at this link: https://github.com/mongodb/mongo-hadoop/wiki/Spark-Usage#scala-example
Now I need help converting it to an IndexedDataset so that it can be passed to SimilarityAnalysis.rowSimilarityIDS.
tl;dr: How do I convert an RDD to an IndexedDataset?
Below is the answer:
import org.apache.hadoop.conf.Configuration
import org.apache.mahout.math.cf.SimilarityAnalysis
import org.apache.mahout.math.indexeddataset.Schema
import org.apache.mahout.sparkbindings
import org.apache.mahout.sparkbindings.indexeddataset.IndexedDatasetSpark
import org.apache.spark.rdd.RDD
import org.bson.BSONObject
import com.mongodb.hadoop.MongoInputFormat
object SparkExample extends App {
implicit val mc = sparkbindings.mahoutSparkContext(masterUrl = "local", appName = "RowSimilarity")
val mongoConfig = new Configuration()
mongoConfig.set("mongo.input.uri", "mongodb://hostname:27017/db.collection")
val documents: RDD[(Object, BSONObject)] = mc.newAPIHadoopRDD(
mongoConfig,
classOf[MongoInputFormat],
classOf[Object],
classOf[BSONObject]
)
val documents_Array: RDD[(String, Array[String])] = documents.map(
doc1 => (
doc1._2.get("product_id").toString(),
doc1._2.get("product_attribute_value").toString().replace("[ \"", "").replace("\"]", "").split("\" , \"").map(value => value.toLowerCase.replace(" ", "-").mkString(" "))
)
)
val new_doc: RDD[(String, String)] = documents_Array.flatMapValues(x => x)
val myIDs = IndexedDatasetSpark(new_doc)(mc)
val readWriteSchema = new Schema(
"rowKeyDelim" -> "\t",
"columnIdStrengthDelim" -> ":",
"omitScore" -> false,
"elementDelim" -> " "
)
SimilarityAnalysis.rowSimilarityIDS(myIDs).dfsWrite("hdfs://hadoop:9000/mongo-hadoop-rowsimilarity", readWriteSchema)(mc)
}
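Since the stated goal was to push results into a DB rather than leave them on HDFS, here is a rough follow-up sketch that reads the TSV output written above and inserts it into MongoDB using Casbah (already a dependency in build.sbt below). The host, database and collection names are placeholders, and the line format follows readWriteSchema: a row key, a tab, then space-separated item:strength pairs.
import com.mongodb.casbah.Imports._
import org.apache.spark.{SparkConf, SparkContext}
object WriteSimilaritiesToMongo extends App {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("WriteSimilarities"))
  // Each output line is "<rowKey>\t<colId>:<strength> <colId>:<strength> ..." per readWriteSchema above
  val lines = sc.textFile("hdfs://hadoop:9000/mongo-hadoop-rowsimilarity")
  lines.foreachPartition { part =>
    // Open one client per partition so nothing non-serializable is shipped from the driver
    val client = MongoClient("hostname", 27017)
    val coll = client("db")("row_similarities")
    part.foreach { line =>
      val fields = line.split("\t", 2)
      if (fields.length == 2)
        coll.insert(MongoDBObject("product_id" -> fields(0), "similar_products" -> fields(1)))
    }
    client.close()
  }
}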
build.sbt:
name := "scala-mongo"
version := "1.0"
scalaVersion := "2.10.6"
libraryDependencies += "org.mongodb" %% "casbah" % "3.1.1"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"
libraryDependencies += "org.mongodb.mongo-hadoop" % "mongo-hadoop-core" % "1.4.2"
libraryDependencies ++= Seq(
"org.apache.hadoop" % "hadoop-client" % "2.6.0" exclude("javax.servlet", "servlet-api") exclude ("com.sun.jmx", "jmxri") exclude ("com.sun.jdmk", "jmxtools") exclude ("javax.jms", "jms") exclude ("org.slf4j", "slf4j-log4j12") exclude("hsqldb","hsqldb"),
"org.scalatest" % "scalatest_2.10" % "1.9.2" % "test"
)
libraryDependencies += "org.apache.mahout" % "mahout-math-scala_2.10" % "0.11.2"
libraryDependencies += "org.apache.mahout" % "mahout-spark_2.10" % "0.11.2"
libraryDependencies += "org.apache.mahout" % "mahout-math" % "0.11.2"
libraryDependencies += "org.apache.mahout" % "mahout-hdfs" % "0.11.2"
resolvers += "typesafe repo" at " http://repo.typesafe.com/typesafe/releases/"
resolvers += Resolver.mavenLocal
I've used mongo-hadoop to get the data out of Mongo and work with it. Since my data contained an array, I had to use flatMapValues to flatten it and then pass it to IDS for proper output; a small illustration follows.
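As a minimal illustration (made-up ids and tags; sc here stands for the underlying SparkContext), flatMapValues turns each (id, Array(tags)) pair into one (id, tag) pair per tag, which is the (row, column) shape IndexedDatasetSpark expects:
val pairs = sc.parallelize(Seq(
  ("item1", Array("tag-a", "tag-b")),
  ("item2", Array("tag-b"))
))
val flat = pairs.flatMapValues(tags => tags)
// flat.collect() == Array(("item1", "tag-a"), ("item1", "tag-b"), ("item2", "tag-b"))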
PS: I posted the answer here and not on the linked question because this Q&A covers the full scope of getting the data and processing it.
Related
I have some basic Spark-Kafka code and am trying to run the following:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import java.util.regex.Pattern
import java.util.regex.Matcher
import org.apache.spark.streaming.kafka._
import kafka.serializer.StringDecoder
import Utilities._
object WordCount {
def main(args: Array[String]): Unit = {
val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(1))
setupLogging()
// Construct a regular expression (regex) to extract fields from raw Apache log lines
val pattern = apacheLogPattern()
// hostname:port for Kafka brokers, not Zookeeper
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
// List of topics you want to listen for from Kafka
val topics = List("testLogs").toSet
// Create our Kafka stream, which will contain (topic,message) pairs. We tack a
// map(_._2) at the end in order to only get the messages, which contain individual
// lines of data.
val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
ssc, kafkaParams, topics).map(_._2)
// Extract the request field from each log line
val requests = lines.map(x => {val matcher:Matcher = pattern.matcher(x); if (matcher.matches()) matcher.group(5)})
// Extract the URL from the request
val urls = requests.map(x => {val arr = x.toString().split(" "); if (arr.size == 3) arr(1) else "[error]"})
// Reduce by URL over a 5-minute window sliding every second
val urlCounts = urls.map(x => (x, 1)).reduceByKeyAndWindow(_ + _, _ - _, Seconds(300), Seconds(1))
// Sort and print the results
val sortedResults = urlCounts.transform(rdd => rdd.sortBy(x => x._2, false))
sortedResults.print()
// Kick it off
ssc.checkpoint("/home/")
ssc.start()
ssc.awaitTermination()
}
}
I am using the IntelliJ IDE and created the Scala project using sbt. The details of the build.sbt file are as follows:
name := "Sample"
version := "1.0"
organization := "com.sundogsoftware"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "1.4.1",
"org.apache.spark" %% "spark-streaming-kafka" % "1.4.1",
"org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)
However, when I try to build the code, I get the following error:
Error:scalac: missing or invalid dependency detected while loading class file 'StreamingContext.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.)
A full rebuild may help if 'StreamingContext.class' was compiled against an incompatible version of org.apache.spark.
Error:scalac: missing or invalid dependency detected while loading class file 'DStream.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.)
A full rebuild may help if 'DStream.class' was compiled against an incompatible version of org.apache.spark.
When using different Spark libraries together, the versions of all of them should always match.
The version of the Kafka connector you use also matters; it should be, for example, spark-streaming-kafka-0-10_2.11.
...
scalaVersion := "2.11.8"
val sparkVersion = "2.2.0"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % sparkVersion % "provided",
"org.apache.spark" %% "spark-streaming" % sparkVersion,
"org.apache.spark" %% "spark-streaming-kafka-0-10_2.11" % sparkVersion,
"org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)
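Note that the 0-10 connector also changes the streaming API: the stream carries ConsumerRecord objects rather than (topic, message) pairs. A rough sketch of the equivalent direct stream, reusing the ssc and topic from the question (the group id is a placeholder), might look like this:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "wordcount-example",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
// Take .value() to get just the message lines, as map(_._2) did with the old API
val lines = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Set("testLogs"), kafkaParams)
).map(_.value())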
This is a useful site if you need to check the exact dependencies you should use:
https://search.maven.org/
I am new to Spark and I am trying this example:
import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.{Vectors,Vector}
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.streaming.{Seconds, StreamingContext}
object App {
def main(args: Array[String]) {
if (args.length != 5) {
System.err.println(
"Usage: StreamingKMeansExample " +
"<trainingDir> <testDir> <batchDuration> <numClusters> <numDimensions>")
System.exit(1)
}
// $example on$
val conf = new SparkConf().setAppName("StreamingKMeansExample")
val ssc = new StreamingContext(conf, Seconds(args(2).toLong))
val trainingData = ssc.textFileStream(args(0)).map(Vectors.parse)
val testData = ssc.textFileStream(args(1)).map(LabeledPoint.parse)
val model = new StreamingKMeans()
.setK(args(3).toInt)
.setDecayFactor(1.0)
.setRandomCenters(args(4).toInt, 0.0)
model.trainOn(trainingData)
model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()
ssc.start()
ssc.awaitTermination()
// $example off$
}
}
but it cannot resolve LabeledPoint.parse; only the apply and unapply methods are available, not parse.
It's probably the version I am using. So this is my sbt file:
name := "myApp"
version := "0.1"
scalaVersion := "2.11.0"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "2.2.0",
"org.apache.spark" %% "spark-mllib" % "2.3.1"
)
EDIT: since nothing else worked, I made a custom LabeledPoint class, and that solved the compile problem. But when I tried to run it, the predicted values are always zero.
The input txt for training is:
[36.72, 67.44]
[92.20, 11.81]
[90.85, 48.07]
.....
and the test txt is:
(2, [9.26,68.19])
(1, [3.27,9.14])
(9, [66.66,13.85])
....
So why are the result values 2,0 1,0 9,0? Is there a problem with LabeledPoint?
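For reference, a minimal hand-rolled parser matching the two formats shown above (purely illustrative; an assumption about what a replacement for LabeledPoint.parse could look like, not the actual custom class used here):
import org.apache.spark.mllib.linalg.{Vector, Vectors}
// "[36.72, 67.44]" -> dense vector
def parseVector(line: String): Vector =
  Vectors.dense(line.trim.stripPrefix("[").stripSuffix("]").split(",").map(_.trim.toDouble))
// "(2, [9.26,68.19])" -> (label, features), the shape predictOnValues expects
def parseLabeled(line: String): (Double, Vector) = {
  val Array(label, rest) = line.trim.stripPrefix("(").stripSuffix(")").split(",", 2)
  (label.trim.toDouble, parseVector(rest))
}
// usage: ssc.textFileStream(trainingDir).map(parseVector) for training,
//        ssc.textFileStream(testDir).map(parseLabeled) for the prediction stream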
The code below causes Spark to become unresponsive:
System.setProperty("hadoop.home.dir", "H:\\winutils");
val sparkConf = new SparkConf().setAppName("GroupBy Test").setMaster("local[1]")
val sc = new SparkContext(sparkConf)
def main(args: Array[String]) {
val text_file = sc.textFile("h:\\data\\details.txt")
val counts = text_file
.flatMap(line => line.split(" "))
.map(word => (word, 1))
.reduceByKey(_ + _)
println(counts);
}
I'm setting hadoop.home.dir in order to avoid the error mentioned here: Failed to locate the winutils binary in the hadoop binary path
This is what my build.sbt file looks like:
lazy val root = (project in file(".")).
settings(
name := "hello",
version := "1.0",
scalaVersion := "2.11.0"
)
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % "1.6.0"
)
Should this Scala Spark code be compilable/runnable using the sbt configuration in the file?
I think the code is fine; it was taken verbatim from http://spark.apache.org/examples.html, but I am not sure whether the Hadoop winutils path is required.
Update: "The solution was to use fork := true in the main build.sbt"
Here is the reference: Spark: ClassNotFoundException when running hello world example in scala 2.11
This is the content of my build.sbt. Note that if your internet connection is slow, it might take some time.
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.6.1",
"org.apache.spark" %% "spark-mllib" % "1.6.1",
"org.apache.spark" %% "spark-sql" % "1.6.1",
"org.slf4j" % "slf4j-api" % "1.7.12"
)
run in Compile <<= Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run))
In main I added the following; it depends on where you placed the winutils folder.
System.setProperty("hadoop.home.dir", "c:\\winutil")
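Per the update in the question, the hang itself was resolved by forking the run into a separate JVM, which is a one-line addition to build.sbt:
// Run the application in a forked JVM instead of inside sbt's own JVM
fork := true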
After updating my Java Play project from 2.2 to 2.4, I followed the instructions on the Migration page, but I am getting an error saying the value PlayEbean was not found.
What am I doing wrong? As far as I can tell I only have to add that one line to the plugins.sbt file and it should work, right?
EDIT: I tried 2.4.2; the exact same problem occurred.
For clarity's sake: there is no build.sbt file, only a Build.scala file plus a BuildKeys.scala and a BuildPlugin.scala file, though those last two have no relation to this problem.
The files:
project/Build.scala:
import sbt._
import Keys._
import play.sbt.PlayImport._
import PlayKeys._
object BuildSettings {
val appVersion = "0.1"
val buildScalaVersion = "2.11.7"
val buildSettings = Seq (
version := appVersion,
scalaVersion := buildScalaVersion
)
}
object Resolvers {
val typeSafeRepo = "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
val localRepo = "Local Maven Repositor" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
val bintrayRepo = "scalaz-bintray" at "https://dl.bintray.com/scalaz/releases"
val sbtRepo = "Public SBT repo" at "https://dl.bintray.com/sbt/sbt-plugin-releases/"
val myResolvers = Seq (
typeSafeRepo,
localRepo,
bintrayRepo,
sbtRepo
)
}
object Dependencies {
val mindrot = "org.mindrot" % "jbcrypt" % "0.3m"
val libThrift = "org.apache.thrift" % "libthrift" % "0.9.2"
val commonsLang3 = "org.apache.commons" % "commons-lang3" % "3.4"
val commonsExec = "org.apache.commons" % "commons-exec" % "1.3"
val guava = "com.google.guava" % "guava" % "18.0"
val log4j = "org.apache.logging.log4j" % "log4j-core" % "2.3"
val jacksonDataType = "com.fasterxml.jackson.datatype" % "jackson-datatype-joda" % "2.5.3"
val jacksonDataformat = "com.fasterxml.jackson.dataformat" % "jackson-dataformat-xml" % "2.5.3"
val postgresql = "postgresql" % "postgresql" % "9.3-1103.jdbc41"
val myDeps = Seq(
// Part of play
javaCore,
javaJdbc,
javaWs,
cache,
// User defined
mindrot,
libThrift,
commonsLang3,
commonsExec,
guava,
log4j,
jacksonDataType,
jacksonDataformat,
postgresql
)
}
object ApplicationBuild extends Build {
import Resolvers._
import Dependencies._
import BuildSettings._
val appName = "sandbox"
val main = Project(
appName,
file("."),
settings = buildSettings ++ Seq (resolvers := myResolvers, libraryDependencies := myDeps)
)
.enablePlugins(play.PlayJava, PlayEbean)
.settings(jacoco.settings: _*)
.settings(parallelExecution in jacoco.Config := false)
.settings(javaOptions in Test ++= Seq("-Xmx512M"))
.settings(javaOptions in Test ++= Seq("-XX:MaxPermSize=512M"))
}
project/plugins.sbt:
// Use the Play sbt plugin for Play projects
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.1")
// The Typesafe repository
resolvers ++= Seq(
"Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/",
"Local Maven Repositor" at "file://"+Path.userHome.absolutePath+"/.m2/repository",
"scalaz-bintray" at "https://dl.bintray.com/scalaz/releases",
"Public SBT repo" at "https://dl.bintray.com/sbt/sbt-plugin-releases/"
)
libraryDependencies ++= Seq(
"com.puppycrawl.tools" % "checkstyle" % "6.8",
"com.typesafe.play" %% "play-java-ws" % "2.4.1",
"org.jacoco" % "org.jacoco.core" % "0.7.1.201405082137" artifacts(Artifact("org.jacoco.core", "jar", "jar")),
"org.jacoco" % "org.jacoco.report" % "0.7.1.201405082137" artifacts(Artifact("org.jacoco.report", "jar", "jar"))
)
// Plugin for code coverage
addSbtPlugin("de.johoop" % "jacoco4sbt" % "2.1.6")
// Play enhancer - this automatically generates getters/setters for public fields
// and rewrites accessors of these fields to use the getters/setters. Remove this
// plugin if you prefer not to have this feature, or disable on a per project
// basis using disablePlugins(PlayEnhancer) in your build.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-play-enhancer" % "1.1.0")
// Play Ebean support, to enable, uncomment this line, and enable in your build.sbt using
// enablePlugins(SbtEbean). Note, uncommenting this line will automatically bring in
// Play enhancer, regardless of whether the line above is commented out or not.
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "1.0.0")
I have tried adding javaEbean to the myDeps variable; the output remains the same.
Also, contrary to all the examples and tutorials, if I want to enable PlayJava, I have to do it via play.PlayJava. What is up with that?
For the error not found: value PlayEbean, you must import play.ebean.sbt.PlayEbean in Build.scala.
Then you will get a not-found error for jacoco, for which you must import de.johoop.jacoco4sbt.JacocoPlugin.jacoco.
After that comes a NoClassDefFoundError, for which you must upgrade sbt to 0.13.8 in project/build.properties.
Finally, the postgresql dependency is incorrect and doesn't resolve.
The sbt part should then work; in my case it fails later because I don't have Ebean entities in the project.
Patch version:
diff a/project/Build.scala b/project/Build.scala
--- a/project/Build.scala
+++ b/project/Build.scala
@@ -1,3 +1,5 @@
+import de.johoop.jacoco4sbt.JacocoPlugin.jacoco
+import play.ebean.sbt.PlayEbean
import play.sbt.PlayImport._
import sbt.Keys._
import sbt._
@@ -35,7 +37,7 @@
val log4j = "org.apache.logging.log4j" % "log4j-core" % "2.3"
val jacksonDataType = "com.fasterxml.jackson.datatype" % "jackson-datatype-joda" % "2.5.3"
val jacksonDataformat = "com.fasterxml.jackson.dataformat" % "jackson-dataformat-xml" % "2.5.3"
- val postgresql = "postgresql" % "postgresql" % "9.3-1103.jdbc41"
+ val postgresql = "org.postgresql" % "postgresql" % "9.3-1103-jdbc41"
val myDeps = Seq(
// Part of play
diff a/project/build.properties b/project/build.properties
--- a/project/build.properties
+++ b/project/build.properties
@@ -1,1 +1,1 @@
-sbt.version=0.13.5
+sbt.version=0.13.8
EDIT: How I ended up with this: the latest versions of the Scala plugin for IntelliJ IDEA allow better editing of sbt configs than before, but (for now) you need to get the sbt project to build once before importing it (i.e. by commenting out suspicious lines). Once the project is imported, you can use autocompletion, auto-import and other niceties. I hope this will be useful with crossScalaVersions. On that note, keep in mind that Play 2.4 is Java 8+ only and that Scala 2.10 doesn't fully support Java 8 (see the first section of the "Play 2.4 Migration Guide").
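For reference, cross-building is declared in the build definition with something like the following (the listed versions are only illustrative):
crossScalaVersions := Seq("2.10.5", "2.11.7")
// prefixing a task with "+" in the sbt shell (e.g. +compile, +test) then runs it for every listed Scala version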
To avoid version-related problems with Scala (2.9, 2.10, 2.11, …), we want to include all of the jar files necessary to use Scala in a Java application. To facilitate debugging and development, we also want to include the sources and javadocs of all such libraries.
I know this topic has been asked about many times before; however, I haven't found a solution that works for us (Scala 2.11 & sbt 0.13.5).
I managed to prototype an approximate solution with an sbt project configured as follows:
./build.sbt:
val packAllCommand = Command.command("packAll") {
state =>
"clean" :: "update" :: "updateClassifiers" ::
"pack" :: "dependencyGraph" :: "dependencyDot" ::
state
}
commands += packAllCommand
./project/plugins.sbt:
resolvers +=
"sonatype-releases" at "https://oss.sonatype.org/content/repositories/releases/"
addSbtPlugin("org.xerial.sbt" % "sbt-pack" % "0.6.1")
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.7.4")
./project/Build.scala
import sbt._
import Keys._
import net.virtualvoid.sbt.graph.Plugin.graphSettings
import xerial.sbt.Pack._
/**
* Goal:
*
* use sbt to package all the jars/sources/javadoc for scala & related libraries needed to use scala in a java application
* without requiring scala to be installed on the system.
*
* #author Nicolas.F.Rouquette#jpl.nasa.gov
*/
object BuildWithSourcesAndJavadocs extends Build {
object Versions {
val scala = "2.11.2"
val config = "1.2.1"
val scalaCheck = "1.11.5"
val scalaTest = "2.2.1"
val specs2 = "2.4"
val parboiled = "2.0.0"
}
lazy val scalaLibs: Project = Project(
"scalaLibs",
file( "scalaLibs" ),
settings = Defaults.coreDefaultSettings ++ Defaults.runnerSettings ++ Defaults.baseTasks ++ graphSettings ++ packSettings ++ Seq(
scalaVersion := Versions.scala,
packExpandedClasspath := true,
libraryDependencies ++= Seq(
"org.scala-lang" % "scala-library" % scalaVersion.value % "compile" withSources () withJavadoc (),
"org.scala-lang" % "scala-compiler" % scalaVersion.value % "compile" withSources () withJavadoc (),
"org.scala-lang" % "scala-reflect" % scalaVersion.value % "compile" withJavadoc () withJavadoc () ),
( mappings in pack ) := { extraPackFun.value } ) )
lazy val otherLibs: Project = Project(
"otherLibs",
file( "otherLibs" ),
settings = Defaults.coreDefaultSettings ++ Defaults.runnerSettings ++ Defaults.baseTasks ++ graphSettings ++ packSettings ++ Seq(
scalaVersion := Versions.scala,
packExpandedClasspath := true,
libraryDependencies ++= Seq(
"org.scala-lang" % "scala-library" % Versions.scala % "provided",
"org.scala-lang" % "scala-compiler" % Versions.scala % "provided",
"org.scala-lang" % "scala-reflect" % Versions.scala % "provided",
"com.typesafe" % "config" % Versions.config % "compile" withSources () withJavadoc (),
"org.scalacheck" %% "scalacheck" % Versions.scalaCheck % "compile" withSources () withJavadoc (),
"org.scalatest" %% "scalatest" % Versions.scalaTest % "compile" withSources () withJavadoc (),
"org.specs2" %% "specs2" % Versions.specs2 % "compile" withSources () withJavadoc (),
"org.parboiled" %% "parboiled" % Versions.parboiled % "compile" withSources () withJavadoc () ),
( mappings in pack ) := { extraPackFun.value } ) ).dependsOn( scalaLibs )
lazy val root: Project = Project( "root", file( "." ) ) aggregate ( scalaLibs, otherLibs )
val extraPackFun: Def.Initialize[Task[Seq[( File, String )]]] = Def.task[Seq[( File, String )]] {
def getFileIfExists( f: File, where: String ): Option[( File, String )] = if ( f.exists() ) Some( ( f, s"${where}/${f.getName()}" ) ) else None
val ivyHome: File = Classpaths.bootIvyHome( appConfiguration.value ) getOrElse sys.error( "Launcher did not provide the Ivy home directory." )
// this is a workaround; how should it be done properly in sbt?
// goal: process the list of library dependencies of the project.
// that is, we should be able to tell the classification of each library dependency module as shown in sbt:
//
// > show libraryDependencies
// [info] List(
// org.scala-lang:scala-library:2.11.2,
// org.scala-lang:scala-library:2.11.2:provided,
// org.scala-lang:scala-compiler:2.11.2:provided,
// org.scala-lang:scala-reflect:2.11.2:provided,
// com.typesafe:config:1.2.1:compile,
// org.scalacheck:scalacheck:1.11.5:compile,
// org.scalatest:scalatest:2.2.1:compile,
// org.specs2:specs2:2.4:compile,
// org.parboiled:parboiled:2.0.0:compile)
// but... libraryDependencies is a SettingKey (see ld below)
// I haven't figured out how to get the sequence of modules from it.
val ld: SettingKey[Seq[ModuleID]] = libraryDependencies
// workaround... I found this API that I managed to call...
// this overrides the classification of all jars -- i.e., it is as if all library dependencies had been classified as "compile".
// for now... it's a reasonable approximation of the goal...
val managed: Classpath = Classpaths.managedJars( Compile, classpathTypes.value, update.value )
val result: Seq[( File, String )] = managed flatMap { af: Attributed[File] =>
af.metadata.entries.toList flatMap { e: AttributeEntry[_] =>
e.value match {
case null => Seq()
case m: ModuleID => Seq() ++
getFileIfExists( new File( ivyHome, s"cache/${m.organization}/${m.name}/srcs/${m.name}-${m.revision}-sources.jar" ), "lib.srcs" ) ++
getFileIfExists( new File( ivyHome, s"cache/${m.organization}/${m.name}/docs/${m.name}-${m.revision}-javadoc.jar" ), "lib.javadoc" )
case _ => Seq()
}
}
}
result
}
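// Aside (untested sketch): the Seq[ModuleID] asked about in the comments above is available
// inside a Def.task as libraryDependencies.value, and the sources/javadoc jars could in
// principle be taken from the updateClassifiers report instead of hard-coded Ivy cache paths:
//
//   val modules: Seq[ModuleID] = libraryDependencies.value
//   val report = updateClassifiers.value
//   val srcs = report.select(artifact = artifactFilter(classifier = "sources"))
//   val docs = report.select(artifact = artifactFilter(classifier = "javadoc"))
//   srcs.map(f => f -> s"lib.srcs/${f.getName}") ++ docs.map(f => f -> s"lib.javadoc/${f.getName}")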
}
Thanks to the sbt-pack and sbt-dependency-graph plugins, the above produces what I need:
scalaLibs/target/dependencies-compile.dot
scalaLibs/target/pack/lib
scalaLibs/target/pack/lib.srcs
scalaLibs/target/pack/lib.javadoc
otherLibs/target/dependencies-compile.dot
otherLibs/target/pack/lib
otherLibs/target/pack/lib.srcs
otherLibs/target/pack/lib.javadoc
The dot files can be visualized with GraphViz; it helps explain why a particular library is included…
I would like to improve this approach in the following respects:
some libraries in scalaLibs are duplicated in otherLibs;
this approach ignores library dependency classification & overrides (not used here).
Suggestions?
Nicolas.